diff --git a/.github/CODE_OF_CONDUCT.md b/.github/CODE_OF_CONDUCT.md new file mode 100644 index 0000000000..0953611722 --- /dev/null +++ b/.github/CODE_OF_CONDUCT.md @@ -0,0 +1,44 @@ +# Contributor Code of Conduct + +As contributors and maintainers of this project, and in the interest of fostering an open +and welcoming community, we pledge to respect all people who contribute through reporting +issues, posting feature requests, updating documentation, submitting pull requests or +patches, and other activities. + +We are committed to making participation in this project a harassment-free experience for +everyone, regardless of level of experience, gender, gender identity and expression, +sexual orientation, disability, personal appearance, body size, race, ethnicity, age, +religion, or nationality. + +Examples of unacceptable behavior by participants include: + +* The use of sexualized language or imagery +* Personal attacks +* Trolling or insulting/derogatory comments +* Public or private harassment +* Publishing other's private information, such as physical or electronic addresses, + without explicit permission +* Other unethical or unprofessional conduct + +Project maintainers have the right and responsibility to remove, edit, or reject comments, +commits, code, wiki edits, issues, and other contributions that are not aligned to this +Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors +that they deem inappropriate, threatening, offensive, or harmful. + +By adopting this Code of Conduct, project maintainers commit themselves to fairly and +consistently applying these principles to every aspect of managing this project. Project +maintainers who do not follow or enforce the Code of Conduct may be permanently removed +from the project team. + +This Code of Conduct applies both within project spaces and in public spaces when an +individual is representing the project or its community. + +Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by +contacting a project maintainer at lettuce-redis-client-users@googlegroups.com. All complaints will +be reviewed and investigated and will result in a response that is deemed necessary and +appropriate to the circumstances. Maintainers are obligated to maintain confidentiality +with regard to the reporter of an incident. + +This Code of Conduct is adapted from the +[Contributor Covenant](https://contributor-covenant.org), version 1.3.0, available at +[contributor-covenant.org/version/1/3/0/](https://contributor-covenant.org/version/1/3/0/). diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md index e1b11de6e6..814158f34f 100644 --- a/.github/CONTRIBUTING.md +++ b/.github/CONTRIBUTING.md @@ -1,21 +1,82 @@ -# Contributing to lettuce +# Contributing to Lettuce -If you would like to contribute code you can do so through GitHub by forking the repository and sending a pull request. +Lettuce is released under the Apache 2.0 license. If you would like to contribute something, or simply want to hack on the code this document should help you get started. -When submitting code, please make every effort to follow existing conventions and style in order to keep the code as readable as possible. -Formatting settings are provided for Eclipse in https://github.com/mp911de/lettuce/blob/master/formatting.xml +## Code of Conduct + +This project adheres to the Contributor Covenant [code of +conduct](CODE_OF_CONDUCT.md). By participating, you are expected to uphold this code. 
Please report unacceptable behavior to lettuce-redis-client-users@googlegroups.com. + +## Using GitHub Issues + +We use GitHub issues to track bugs and enhancements. If you have a general usage question, please ask on [Stack Overflow](https://stackoverflow.com). +The Lettuce team and the broader community monitor the [`lettuce`](https://stackoverflow.com/tags/lettuce) tag. + +If you are reporting a bug, please help to speed up problem diagnosis by providing as much information as possible. +Ideally, that would include a small sample project that reproduces the problem. + +## Quickstart + +For the impatient, if you want to submit a quick pull request: + +* Don't create a pull request upfront. Create a feature request ticket first, so we can discuss your idea. +* Upon agreeing that the feature is a good fit for Lettuce, please: + * Make sure there is a ticket in GitHub issues. + * Make sure you use the code formatters provided here and have them applied to your changes. Don’t submit any formatting-related changes. + * Make sure you submit test cases (unit or integration tests) that back your changes. + * Try to reuse existing test sample code (domain classes). Try not to amend existing test cases but create new ones dedicated to the changes you’re making to the codebase. Try to test as locally as possible, but potentially also add integration tests. + +When submitting code, please make every effort to follow existing conventions and style in order to keep the code as readable as possible. Formatting changes create a lot of noise and reduce the likelihood of merging the pull request. +Formatting settings are provided for Eclipse in https://github.com/lettuce-io/lettuce-core/blob/master/formatting.xml ## Bugreports If you report a bug, please ensure to specify the following: -* lettuce version (e.g. 3.2.Final) -* Contextual information (what were you trying to do using lettuce) -* Simplest possible steps to reproduce - * JUnit tests to reproduce are great but not obligatory +* Check GitHub issues to see whether someone else has already filed a ticket. If not, feel free to create a new one. +* Comments on closed tickets are typically a bad idea, as they get little attention. Please create a new one. +* Lettuce version (e.g. 3.2.Final). +* Contextual information (what were you trying to do using Lettuce). +* Simplest possible steps to reproduce: + * Ideally, a [Minimal, Complete, and Verifiable example](https://stackoverflow.com/help/mcve). + * JUnit tests to reproduce are great but not obligatory. + +## Features + +If you want to request a feature, please ensure to specify the following: + +* What do you want to achieve? +* Contextual information (what were you trying to do using Lettuce). +* Ideally, but not required: describe how you would implement your idea. + +## Questions + +If you have a question, check one of the following places first, as GitHub issues are for bugs and feature requests.
Typically, forums, chats, and mailing lists are the best place to ask your question as you can expect to get an answer faster there: + +**Checkout the docs** + +* [Reference documentation](https://lettuce.io/docs/) +* [Wiki](https://github.com/lettuce-io/lettuce-core/wiki) +* [Javadoc](https://lettuce.io/core/release/api/) + +**Communication** + +* Google Group/Mailing List (General discussion, announcements and releases): [lettuce-redis-client-users](https://groups.google.com/d/forum/lettuce-redis-client-users) or [lettuce-redis-client-users@googlegroups.com](mailto:lettuce-redis-client-users@googlegroups.com) +* Stack Overflow (Questions): [Questions about Lettuce](https://stackoverflow.com/questions/tagged/lettuce) +* Gitter (General discussion): [![Join the chat at https://gitter.im/lettuce-io/Lobby](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/lettuce-io/Lobby?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) + +### Building from Source + +Lettuce source can be built from the command line using Maven on JDK 1.8 or above. + +The project can be built from the root directory using the standard Maven command: -## License +```bash + $ mvn clean test +``` -By contributing your code, you agree to license your contribution under the terms of [Apache License 2.0] (http://www.apache.org/licenses/LICENSE-2.0). +You can run a full build including integration tests using the `make` command: -All files are released with the Apache 2.0 license. \ No newline at end of file +```bash + $ make test +``` diff --git a/.github/ISSUE_TEMPLATE.md b/.github/ISSUE_TEMPLATE.md deleted file mode 100644 index 3ad4440d1a..0000000000 --- a/.github/ISSUE_TEMPLATE.md +++ /dev/null @@ -1,16 +0,0 @@ - -Make sure that: - -- [ ] You have read the [contribution guidelines](https://github.com/mp911de/lettuce/blob/master/.github/CONTRIBUTING.md). -- [ ] You specify the lettuce version and environment so it's obvious which version is affected -- [ ] You provide a reproducible test case (either descriptive of as JUnit test) if it's a bug or the expected behavior differs from the actual behavior. - - \ No newline at end of file diff --git a/.github/ISSUE_TEMPLATE/Bug_report.md b/.github/ISSUE_TEMPLATE/Bug_report.md new file mode 100644 index 0000000000..8a7be54c99 --- /dev/null +++ b/.github/ISSUE_TEMPLATE/Bug_report.md @@ -0,0 +1,52 @@ +--- +name: 🐛 Bug Report +about: If something isn't working as expected 🤔. +labels: 'type: bug' +--- + +## Bug Report + + + +#### Current Behavior + + + +
+Stack trace + +```java +// your stack trace here; +``` + +
+ +#### Input Code + + + +
+Input Code + +```java +// your code here; +``` + +
+ +#### Expected behavior/code + + + +#### Environment + +- Lettuce version(s): [e.g. 5.0.0.RELEASE, 4.2.2.Final] +- Redis version: [e.g. 4.0.9] + +#### Possible Solution + + + +#### Additional context + + diff --git a/.github/ISSUE_TEMPLATE/Feature_request.md b/.github/ISSUE_TEMPLATE/Feature_request.md new file mode 100644 index 0000000000..e8a75fd03c --- /dev/null +++ b/.github/ISSUE_TEMPLATE/Feature_request.md @@ -0,0 +1,26 @@ +--- +name: 🚀 Feature Request +about: I have a suggestion (and may want to implement it 🙂)! +labels: 'type: enhancement' +--- + +## Feature Request + + + +#### Is your feature request related to a problem? Please describe + +A clear and concise description of what the problem is. Ex. I have an issue when [...] + +#### Describe the solution you'd like + +A clear and concise description of what you want to happen. Add any considered drawbacks. + +#### Describe alternatives you've considered + +A clear and concise description of any alternative solutions or features you've considered. + +#### Teachability, Documentation, Adoption, Migration Strategy + +If you can, explain how users will be able to use this and possibly write out a version the docs. +Maybe a screenshot or design? diff --git a/.github/ISSUE_TEMPLATE/Help_us.md b/.github/ISSUE_TEMPLATE/Help_us.md new file mode 100644 index 0000000000..ec438fda1b --- /dev/null +++ b/.github/ISSUE_TEMPLATE/Help_us.md @@ -0,0 +1,15 @@ +--- +name: 🤝 Support us on Lettuce +about: If you would like to support our efforts in maintaining this community-driven project 🙌! + +--- + +Help support Lettuce! + +Lettuce has always been a community project, not really backed or owned by any single (or group) of companies. While some maintainers used to work at companies supporting open source no one was working on it full time and there certainly isn't a huge company or team anywhere doing all this work. + +--- + +As a group of volunteers you can help us in a few ways + +- Giving developer time on the project. (Message us on [Twitter](https://twitter.com/LettuceDriver) or [Gitter](https://gitter.im/lettuce-io/Lobby) for guidance). Companies should be paying their employees to contribute back to the open source projects they use everyday. diff --git a/.github/ISSUE_TEMPLATE/config.yml b/.github/ISSUE_TEMPLATE/config.yml new file mode 100644 index 0000000000..7f4b5ba46b --- /dev/null +++ b/.github/ISSUE_TEMPLATE/config.yml @@ -0,0 +1,4 @@ +contact_links: + - name: Questions + url: https://gitter.im/lettuce-io/Lobby + about: Please ask questions on Gitter or StackOverflow. diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md index 1d1ad1830a..be6b012783 100644 --- a/.github/PULL_REQUEST_TEMPLATE.md +++ b/.github/PULL_REQUEST_TEMPLATE.md @@ -3,10 +3,11 @@ Thank you for proposing a pull request. This template will guide you through the --> Make sure that: -- [ ] You have read the [contribution guidelines](https://github.com/mp911de/lettuce/blob/master/.github/CONTRIBUTING.md). -- [ ] You use the code formatters provided [here](https://github.com/mp911de/lettuce/blob/master/formatting.xml) and have them applied to your changes. Don’t submit any formatting related changes. +- [ ] You have read the [contribution guidelines](https://github.com/lettuce-io/lettuce-core/blob/master/.github/CONTRIBUTING.md). +- [ ] You have created a feature request first to discuss your contribution intent. Please reference the feature request ticket number in the pull request. 
+- [ ] You use the code formatters provided [here](https://github.com/lettuce-io/lettuce-core/blob/master/formatting.xml) and have them applied to your changes. Don’t submit any formatting related changes. - [ ] You submit test cases (unit or integration tests) that back your changes. \ No newline at end of file +--> diff --git a/.gitignore b/.gitignore index b1c53c7797..76866d2b7f 100644 --- a/.gitignore +++ b/.gitignore @@ -1,7 +1,6 @@ target/ *.rdb *.aof - - atlassian-ide-plugin.xml *.iml @@ -14,8 +13,4 @@ work/ .settings dependency-reduced-pom.xml .idea -.vagrant -Vagrantfile -CluTest.java - -CluTest.java +.flattened-pom.xml diff --git a/.travis.yml b/.travis.yml index a6f794fe99..3b53e9645a 100644 --- a/.travis.yml +++ b/.travis.yml @@ -1,26 +1,67 @@ +dist: xenial + +os: linux + language: java -jdk: - - oraclejdk8 -env: - matrix: - - PROFILE=netty-40 - - PROFILE=netty-41 -sudo: false -before_install: - - if [[ ! -f stunnel.tar.gz ]]; then wget -O stunnel.tar.gz ftp://ftp.stunnel.org/stunnel/archive/5.x/stunnel-5.33.tar.gz; fi - - if [[ ! -f ./stunnel-5.33/configure ]]; then tar -xzf stunnel.tar.gz; fi - - if [[ ! -f ./stunnel-5.33/src/stunnel ]]; then cd ./stunnel-5.33; ./configure; make; cd ..; fi - - export PATH="$PATH:$(pwd)/stunnel-5.33/src" -install: make prepare ssl-keys -script: make test-coveralls PROFILE=${PROFILE} + +jobs: + include: + - jdk: openjdk8 + env: + - JDK='OpenJDK 8' + - REDIS=unstable + - jdk: openjdk8 + env: + - JDK='OpenJDK 8' + - PROFILE=jmh + - jdk: openjdk8 + env: + - JDK='OpenJDK 8' + - REDIS=6.0 + - jdk: openjdk8 + env: + - JDK='OpenJDK 8' + - REDIS=5.0 + - jdk: openjdk8 + env: + - JDK='OpenJDK 8' + - REDIS=4.0 + - jdk: openjdk8 + env: + - JDK='OpenJDK 8' + - REDIS=3.2 + - env: JDK='OpenJDK 12' + before_install: wget https://github.com/sormuras/bach/raw/master/install-jdk.sh && . ./install-jdk.sh -f 12 + - env: JDK='OpenJDK 13' + before_install: wget https://github.com/sormuras/bach/raw/master/install-jdk.sh && . ./install-jdk.sh -f 13 + - env: JDK='OpenJDK 14' + before_install: wget https://github.com/sormuras/bach/raw/master/install-jdk.sh && . ./install-jdk.sh -f 14 + - env: JDK='OpenJDK EA' + before_install: wget https://github.com/sormuras/bach/raw/master/install-jdk.sh && . ./install-jdk.sh + allow_failures: + - env: JDK='OpenJDK EA' + +install: + - if [[ ! -f downloads/stunnel-5.56.tar.gz ]]; then wget -O downloads/stunnel-5.56.tar.gz ftp://ftp.stunnel.org/stunnel/archive/5.x/stunnel-5.56.tar.gz; fi + - if [[ ! -f ./stunnel-5.56/configure ]]; then tar -xzf downloads/stunnel-5.56.tar.gz; fi + - if [[ ! 
-f ./stunnel-5.56/src/stunnel ]]; then cd ./stunnel-5.56; ./configure; make; cd ..; fi + - export PATH="$PATH:$(pwd)/stunnel-5.56/src" + - make prepare ssl-keys + +script: make test-coverage PROFILE=${PROFILE:-ci} + +after_success: + - bash <(curl -s https://codecov.io/bash) + cache: directories: - '$HOME/.m2/repository' - - '$TRAVIS_BUILD_DIR/stunnel-5.33' + - 'downloads' + - 'stunnel-5.56' + notifications: webhooks: urls: - - https://webhooks.gitter.im/e/c34b69f37ca13d2f8642 + - https://webhooks.gitter.im/e/7c2a962829d225c47a31 on_success: change # options: [always|never|change] default: always on_failure: always # options: [always|never|change] default: always - on_start: false # default: false \ No newline at end of file diff --git a/Makefile b/Makefile index 3ddbc22034..dfe1ddcee9 100644 --- a/Makefile +++ b/Makefile @@ -1,10 +1,12 @@ +SHELL := /bin/bash PATH := ./work/redis-git/src:${PATH} ROOT_DIR := $(shell dirname $(realpath $(lastword $(MAKEFILE_LIST)))) STUNNEL_BIN := $(shell which stunnel) BREW_BIN := $(shell which brew) YUM_BIN := $(shell which yum) APT_BIN := $(shell which apt-get) -PROFILE ?= netty-41 +PROFILE ?= ci +REDIS ?= unstable define REDIS_CLUSTER_CONFIG1 c2043458aa5646cee429fdd5e3c18220dddf2ce5 127.0.0.1:7380 master - 1434887920102 1434887920002 0 connected 12000-16383 @@ -39,32 +41,32 @@ vars currentEpoch 3 lastVoteEpoch 0 endef define REDIS_CLUSTER_CONFIG8 -c2043458aa5646cee429fdd5e3c18220dddf2ce5 127.0.0.1:7580 master - 0 1434887920102 0 connected -1c541b6daf98719769e6aacf338a7d81f108a180 127.0.0.1:7581 master - 0 1434887920102 3 connected 10001-16384 -2c07344ffa94ede5ea57a2367f190af6144c1adb 127.0.0.1:7582 myself,master - 0 0 2 connected 0-10000 -27f88788f03a86296b7d860152f4ae24ee59c8c9 127.0.0.1:7579 master - 0 1434887920102 1 connected +c2043458aa5646cee429fdd5e3c18220dddf2ce5 127.0.0.1:7580 master - 1434887920102 1434887920002 0 connected 10001-16383 +27f88788f03a86296b7d860152f4ae24ee59c8c9 127.0.0.1:7579 myself,master - 0 0 1 connected 0-10000 +2c07344ffa94ede5ea57a2367f190af6144c1adb 127.0.0.1:7582 slave c2043458aa5646cee429fdd5e3c18220dddf2ce5 1434887920102 1434887920002 2 connected +1c541b6daf98719769e6aacf338a7d81f108a180 127.0.0.1:7581 slave 27f88788f03a86296b7d860152f4ae24ee59c8c9 1434887920102 1434887920002 3 connected vars currentEpoch 3 lastVoteEpoch 0 endef define REDIS_CLUSTER_CONFIG_SSL_1 -27f88788f03a86296b7d860152f4ae24ee59c8c9 127.0.0.1:7479@17479 myself,master - 0 1434887920102 1 connected 0-10000 -1c541b6daf98719769e6aacf338a7d81f108a180 127.0.0.1:7480@17480 slave 27f88788f03a86296b7d860152f4ae24ee59c8c9 0 1434887920102 3 connected -2c07344ffa94ede5ea57a2367f190af6144c1adb 127.0.0.1:7481@17481 master - 0 0 2 connected 10001-16384 -vars currentEpoch 3 lastVoteEpoch 0 +cf2354ef19ee813a962350b51438314aebce1fe2 127.0.0.1:7479@17479 myself,master - 0 1578163609000 0 connected 0-10000 +cac8e053dd6f85fab470be57d29dcbac2a4b85c4 127.0.0.1:7480@17480 slave cf2354ef19ee813a962350b51438314aebce1fe2 0 1578163609301 1 connected +6554e5b1b158dccd4b1d9ca294a3e46a2d3e556d 127.0.0.1:7481@17481 master - 0 1578163609301 2 connected 10001-16383 +vars currentEpoch 2 lastVoteEpoch 0 endef define REDIS_CLUSTER_CONFIG_SSL_2 -27f88788f03a86296b7d860152f4ae24ee59c8c9 127.0.0.1:7479@17479 master - 0 1434887920102 1 connected 0-10000 -1c541b6daf98719769e6aacf338a7d81f108a180 127.0.0.1:7480@17480 myself,slave 27f88788f03a86296b7d860152f4ae24ee59c8c9 0 1434887920102 3 connected -2c07344ffa94ede5ea57a2367f190af6144c1adb 127.0.0.1:7481@17481 master - 0 0 2 connected 
10001-16384 -vars currentEpoch 3 lastVoteEpoch 0 +cf2354ef19ee813a962350b51438314aebce1fe2 127.0.0.1:7479@17479 master - 0 1578163609245 0 connected 0-10000 +cac8e053dd6f85fab470be57d29dcbac2a4b85c4 127.0.0.1:7480@17480 myself,slave cf2354ef19ee813a962350b51438314aebce1fe2 0 1578163609000 1 connected +6554e5b1b158dccd4b1d9ca294a3e46a2d3e556d 127.0.0.1:7481@17481 master - 0 1578163609245 2 connected 10001-16383 +vars currentEpoch 2 lastVoteEpoch 0 endef define REDIS_CLUSTER_CONFIG_SSL_3 -27f88788f03a86296b7d860152f4ae24ee59c8c9 127.0.0.1:7479@17479 master - 0 1434887920102 1 connected 0-10000 -1c541b6daf98719769e6aacf338a7d81f108a180 127.0.0.1:7480@17480 slave 27f88788f03a86296b7d860152f4ae24ee59c8c9 0 1434887920102 3 connected -2c07344ffa94ede5ea57a2367f190af6144c1adb 127.0.0.1:7481@17481 myself,master - 0 0 2 connected 10001-16384 -vars currentEpoch 3 lastVoteEpoch 0 +cac8e053dd6f85fab470be57d29dcbac2a4b85c4 127.0.0.1:7480@17480 slave cf2354ef19ee813a962350b51438314aebce1fe2 0 1578163609279 1 connected +cf2354ef19ee813a962350b51438314aebce1fe2 127.0.0.1:7479@17479 master - 0 1578163609279 0 connected 0-10000 +6554e5b1b158dccd4b1d9ca294a3e46a2d3e556d 127.0.0.1:7481@17481 myself,master - 0 1578163609000 2 connected 10001-16383 +vars currentEpoch 2 lastVoteEpoch 0 endef @@ -120,8 +122,8 @@ work/sentinel-%.conf: @echo logfile $(shell pwd)/work/redis-sentinel-$*.log >> $@ @echo sentinel monitor mymaster 127.0.0.1 6482 1 >> $@ - @echo sentinel down-after-milliseconds mymaster 100 >> $@ - @echo sentinel failover-timeout mymaster 100 >> $@ + @echo sentinel down-after-milliseconds mymaster 200 >> $@ + @echo sentinel failover-timeout mymaster 200 >> $@ @echo sentinel parallel-syncs mymaster 1 >> $@ @echo unixsocket $(ROOT_DIR)/work/socket-$* >> $@ @echo unixsocketperm 777 >> $@ @@ -148,7 +150,7 @@ work/cluster-node-7385.conf: @echo appendonly no >> $@ @echo unixsocket $(ROOT_DIR)/work/socket-7385 >> $@ @echo cluster-enabled yes >> $@ - @echo cluster-node-timeout 50 >> $@ + @echo cluster-node-timeout 150 >> $@ @echo cluster-config-file $(shell pwd)/work/cluster-node-config-7385.conf >> $@ @echo requirepass foobared >> $@ @@ -163,7 +165,7 @@ work/cluster-node-7479.conf: @echo save \"\" >> $@ @echo appendonly no >> $@ @echo cluster-enabled yes >> $@ - @echo cluster-node-timeout 50 >> $@ + @echo cluster-node-timeout 150 >> $@ @echo cluster-config-file $(shell pwd)/work/cluster-node-config-7479.conf >> $@ @echo cluster-announce-port 7443 >> $@ @echo requirepass foobared >> $@ @@ -179,7 +181,7 @@ work/cluster-node-7480.conf: @echo save \"\" >> $@ @echo appendonly no >> $@ @echo cluster-enabled yes >> $@ - @echo cluster-node-timeout 50 >> $@ + @echo cluster-node-timeout 150 >> $@ @echo cluster-config-file $(shell pwd)/work/cluster-node-config-7480.conf >> $@ @echo cluster-announce-port 7444 >> $@ @echo requirepass foobared >> $@ @@ -195,7 +197,7 @@ work/cluster-node-7481.conf: @echo save \"\" >> $@ @echo appendonly no >> $@ @echo cluster-enabled yes >> $@ - @echo cluster-node-timeout 50 >> $@ + @echo cluster-node-timeout 150 >> $@ @echo cluster-config-file $(shell pwd)/work/cluster-node-config-7481.conf >> $@ @echo cluster-announce-port 7445 >> $@ @echo requirepass foobared >> $@ @@ -213,11 +215,11 @@ work/cluster-node-%.conf: @echo client-output-buffer-limit pubsub 256k 128k 5 >> $@ @echo unixsocket $(ROOT_DIR)/work/socket-$* >> $@ @echo cluster-enabled yes >> $@ - @echo cluster-node-timeout 50 >> $@ + @echo cluster-node-timeout 150 >> $@ @echo cluster-config-file $(shell 
pwd)/work/cluster-node-config-$*.conf >> $@ work/cluster-node-%.pid: work/cluster-node-%.conf work/redis-git/src/redis-server - work/redis-git/src/redis-server $< + work/redis-git/src/redis-server $< || true cluster-start: work/cluster-node-7379.pid work/cluster-node-7380.pid work/cluster-node-7381.pid work/cluster-node-7382.pid work/cluster-node-7383.pid work/cluster-node-7384.pid work/cluster-node-7385.pid work/cluster-node-7479.pid work/cluster-node-7480.pid work/cluster-node-7481.pid work/cluster-node-7582.pid @@ -228,10 +230,10 @@ cluster-start: work/cluster-node-7379.pid work/cluster-node-7380.pid work/cluste work/stunnel.conf: @mkdir -p $(@D) - @echo cert=$(ROOT_DIR)/work/cert.pem >> $@ - @echo key=$(ROOT_DIR)/work/key.pem >> $@ - @echo capath=$(ROOT_DIR)/work/cert.pem >> $@ - @echo cafile=$(ROOT_DIR)/work/cert.pem >> $@ + @echo cert=$(ROOT_DIR)/work/ca/certs/localhost.cert.pem >> $@ + @echo key=$(ROOT_DIR)/work/ca/private/localhost.decrypted.key.pem >> $@ + @echo capath=$(ROOT_DIR)/work/ca/certs/ca.cert.pem >> $@ + @echo cafile=$(ROOT_DIR)/work/ca/certs/ca.cert.pem >> $@ @echo delay=yes >> $@ @echo pid=$(ROOT_DIR)/work/stunnel.pid >> $@ @echo foreground = no >> $@ @@ -240,25 +242,56 @@ work/stunnel.conf: @echo accept = 127.0.0.1:6443 >> $@ @echo connect = 127.0.0.1:6479 >> $@ - @echo [stunnel-2] >> $@ + @echo [foo-host] >> $@ @echo accept = 127.0.0.1:6444 >> $@ @echo connect = 127.0.0.1:6479 >> $@ - @echo cert=$(ROOT_DIR)/work/localhost.pem >> $@ - @echo capath=$(ROOT_DIR)/work/localhost.pem >> $@ - @echo cafile=$(ROOT_DIR)/work/localhost.pem >> $@ - + @echo cert=$(ROOT_DIR)/work/ca/certs/foo-host.cert.pem >> $@ + @echo key=$(ROOT_DIR)/work/ca/private/foo-host.decrypted.key.pem >> $@ + @echo [ssl-cluster-node-1] >> $@ @echo accept = 127.0.0.1:7443 >> $@ @echo connect = 127.0.0.1:7479 >> $@ - + @echo [ssl-cluster-node-2] >> $@ @echo accept = 127.0.0.1:7444 >> $@ - @echo connect = 127.0.0.1:7480 >> $@ - + @echo connect = 127.0.0.1:7480 >> $@ + @echo [ssl-cluster-node-3] >> $@ @echo accept = 127.0.0.1:7445 >> $@ @echo connect = 127.0.0.1:7481 >> $@ - + + @echo [ssl-sentinel-1] >> $@ + @echo accept = 127.0.0.1:26822 >> $@ + @echo connect = 127.0.0.1:26379 >> $@ + + @echo [ssl-sentinel-2] >> $@ + @echo accept = 127.0.0.1:26823 >> $@ + @echo connect = 127.0.0.1:26380 >> $@ + + @echo [ssl-sentinel-3] >> $@ + @echo accept = 127.0.0.1:26824 >> $@ + @echo connect = 127.0.0.1:26381 >> $@ + + @echo [ssl-sentinel-master] >> $@ + @echo accept = 127.0.0.1:6925 >> $@ + @echo connect = 127.0.0.1:6482 >> $@ + + @echo [ssl-sentinel-slave] >> $@ + @echo accept = 127.0.0.1:6926 >> $@ + @echo connect = 127.0.0.1:6482 >> $@ + + @echo [stunnel-client-cert] >> $@ + @echo accept = 127.0.0.1:6445 >> $@ + @echo connect = 127.0.0.1:6479 >> $@ + @echo verify=2 >> $@ + + @echo [stunnel-master-slave-node-1] >> $@ + @echo accept = 127.0.0.1:8443 >> $@ + @echo connect = 127.0.0.1:6482 >> $@ + + @echo [stunnel-master-slave-node-2] >> $@ + @echo accept = 127.0.0.1:8444 >> $@ + @echo connect = 127.0.0.1:6483 >> $@ work/stunnel.pid: work/stunnel.conf ssl-keys which stunnel4 >/dev/null 2>&1 && stunnel4 $(ROOT_DIR)/work/stunnel.conf || stunnel $(ROOT_DIR)/work/stunnel.conf @@ -301,35 +334,25 @@ cleanup: stop # SSL Keys # - remove Java keystore as becomes stale ########## -work/key.pem work/cert.pem: - @mkdir -p $(@D) - openssl genrsa -out work/key.pem 4096 - openssl req -new -x509 -key work/key.pem -out work/cert.pem -days 365 -subj "/O=lettuce/ST=Some-State/C=DE/CN=lettuce-test" - openssl req -new -x509 -key 
work/key.pem -out work/localhost.pem -days 365 -subj "/O=lettuce/ST=Some-State/C=DE/CN=localhost" - chmod go-rwx work/key.pem - chmod go-rwx work/cert.pem - chmod go-rwx work/localhost.pem - - rm -f work/keystore.jks - - rm -f work/keystore-localhost.jks - work/keystore.jks: @mkdir -p $(@D) - $$JAVA_HOME/bin/keytool -importcert -keystore work/keystore.jks -file work/cert.pem -noprompt -storepass changeit - $$JAVA_HOME/bin/keytool -importcert -keystore work/keystore-localhost.jks -file work/localhost.pem -noprompt -storepass different + - rm -f work/*.jks + - rm -Rf work/ca + src/test/bash/create_certificates.sh -ssl-keys: work/key.pem work/cert.pem work/keystore.jks +ssl-keys: work/keystore.jks stop: pkill stunnel || true pkill redis-server && sleep 1 || true pkill redis-sentinel && sleep 1 || true -test-coveralls: start - mvn -B -DskipTests=false clean compile test jacoco:report coveralls:report -P$(PROFILE) +test-coverage: start + mvn -B -DskipITs=false clean compile verify jacoco:report -P$(PROFILE) $(MAKE) stop test: start - mvn -B -DskipTests=false clean compile test -P$(PROFILE) + mvn -B -DskipITs=false clean compile verify -P$(PROFILE) $(MAKE) stop prepare: stop @@ -365,8 +388,9 @@ endif endif work/redis-git/src/redis-cli work/redis-git/src/redis-server: - [ ! -e work/redis-git ] && git clone https://github.com/antirez/redis.git --branch unstable --single-branch work/redis-git && cd work/redis-git|| true - [ -e work/redis-git ] && cd work/redis-git && git fetch && git merge origin/master || true + [ -d "work/redis-git" ] && cd work/redis-git && git reset --hard || \ + git clone https://github.com/antirez/redis.git work/redis-git + cd work/redis-git && git checkout -q $(REDIS) && git pull origin $(REDIS) $(MAKE) -C work/redis-git clean $(MAKE) -C work/redis-git -j4 @@ -380,8 +404,3 @@ release: mvn release:perform -Psonatype-oss-release ls target/checkout/target/*-bin.zip | xargs gpg -b -a ls target/checkout/target/*-bin.tar.gz | xargs gpg -b -a - cd target/checkout && mvn site:site && mvn -o scm-publish:publish-scm -Dgithub.site.upload.skip=false - -apidocs: - mvn site:site - ./apidocs.sh diff --git a/README.md b/README.md index 0a10f778bf..5e6bf25472 100644 --- a/README.md +++ b/README.md @@ -1,41 +1,45 @@ -lettuce - Advanced Java Redis client + Lettuce - Advanced Java Redis client =============================== -[![Build Status](https://travis-ci.org/mp911de/lettuce.svg)](https://travis-ci.org/mp911de/lettuce) [![Coverage Status](https://img.shields.io/coveralls/mp911de/lettuce.svg)](https://coveralls.io/r/mp911de/lettuce) [![Maven Central](https://maven-badges.herokuapp.com/maven-central/biz.paluch.redis/lettuce/badge.svg)](https://maven-badges.herokuapp.com/maven-central/biz.paluch.redis/lettuce) +[![Build Status](https://travis-ci.org/lettuce-io/lettuce-core.svg)](https://travis-ci.org/lettuce-io/lettuce-core) [![codecov](https://codecov.io/gh/lettuce-io/lettuce-core/branch/master/graph/badge.svg)](https://codecov.io/gh/lettuce-io/lettuce-core) + [![Maven Central](https://maven-badges.herokuapp.com/maven-central/io.lettuce/lettuce-core/badge.svg)](https://maven-badges.herokuapp.com/maven-central/io.lettuce/lettuce-core) Lettuce is a scalable thread-safe Redis client for synchronous, asynchronous and reactive usage. Multiple threads may share one connection if they avoid blocking and transactional operations such as `BLPOP` and `MULTI`/`EXEC`. -lettuce is built with [netty](https://github.com/netty/netty). +Lettuce is built with [netty](https://github.com/netty/netty). 
Supports advanced Redis features such as Sentinel, Cluster, Pipelining, Auto-Reconnect and Redis data models. -This version of lettuce has been tested against the latest Redis source-build. +This version of Lettuce has been tested against the latest Redis source-build. -* lettuce 3.x works with Java 6, 7 and 8, lettuce 4.x requires Java 8 -* [synchronous](https://github.com/mp911de/lettuce/wiki/Basic-usage), [asynchronous](https://github.com/mp911de/lettuce/wiki/Asynchronous-API-%284.0%29) and [reactive](https://github.com/mp911de/lettuce/wiki/Reactive-API-%284.0%29) usage -* [Redis Sentinel](https://github.com/mp911de/lettuce/wiki/Redis-Sentinel) -* [Redis Cluster](https://github.com/mp911de/lettuce/wiki/Redis-Cluster) -* [SSL](https://github.com/mp911de/lettuce/wiki/SSL-Connections) and [Unix Domain Socket](https://github.com/mp911de/lettuce/wiki/Unix-Domain-Sockets) connections -* [Streaming API](https://github.com/mp911de/lettuce/wiki/Streaming-API) -* [CDI](https://github.com/mp911de/lettuce/wiki/CDI-Support) and [Spring](https://github.com/mp911de/lettuce/wiki/Spring-Support) integration -* [Codecs](https://github.com/mp911de/lettuce/wiki/Codecs) (for UTF8/bit/JSON etc. representation of your data) -* multiple [Command Interfaces](https://github.com/mp911de/lettuce/wiki/Command-Interfaces-%284.0%29) +* [synchronous](https://github.com/lettuce-io/lettuce-core/wiki/Basic-usage), [asynchronous](https://github.com/lettuce-io/lettuce-core/wiki/Asynchronous-API-%284.0%29) and [reactive](https://github.com/lettuce-io/lettuce-core/wiki/Reactive-API-%285.0%29) usage +* [Redis Sentinel](https://github.com/lettuce-io/lettuce-core/wiki/Redis-Sentinel) +* [Redis Cluster](https://github.com/lettuce-io/lettuce-core/wiki/Redis-Cluster) +* [SSL](https://github.com/lettuce-io/lettuce-core/wiki/SSL-Connections) and [Unix Domain Socket](https://github.com/lettuce-io/lettuce-core/wiki/Unix-Domain-Sockets) connections +* [Streaming API](https://github.com/lettuce-io/lettuce-core/wiki/Streaming-API) +* [CDI](https://github.com/lettuce-io/lettuce-core/wiki/CDI-Support) and [Spring](https://github.com/lettuce-io/lettuce-core/wiki/Spring-Support) integration +* [Codecs](https://github.com/lettuce-io/lettuce-core/wiki/Codecs) (for UTF8/bit/JSON etc. representation of your data) +* multiple [Command Interfaces](https://github.com/lettuce-io/lettuce-core/wiki/Command-Interfaces-%284.0%29) +* Compatible with Java 8 and 9 (implicit automatic module w/o descriptors) -See the [Wiki](https://github.com/mp911de/lettuce/wiki) for more docs. +See the [reference documentation](https://lettuce.io/docs/) and [Wiki](https://github.com/lettuce-io/lettuce-core/wiki) for more details. 
Communication --------------- -* Google Group: [lettuce-redis-client-users](https://groups.google.com/d/forum/lettuce-redis-client-users) or lettuce-redis-client-users@googlegroups.com -* [![Join the chat at https://gitter.im/mp911de/lettuce](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/mp911de/lettuce?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) -* [Github Issues](https://github.com/mp911de/lettuce/issues) +* Google Group/Mailing List (General discussion, announcements and releases): [lettuce-redis-client-users](https://groups.google.com/d/forum/lettuce-redis-client-users) or lettuce-redis-client-users@googlegroups.com +* Stack Overflow (Questions): [https://stackoverflow.com/questions/tagged/lettuce](https://stackoverflow.com/questions/tagged/lettuce) +* Gitter (General discussion): [![Join the chat at https://gitter.im/lettuce-io/Lobby](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/lettuce-io/Lobby?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) +* Twitter: [@LettuceDriver](https://twitter.com/LettuceDriver) +* [GitHub Issues](https://github.com/lettuce-io/lettuce-core/issues) (Bug reports, feature requests) Documentation --------------- -* [Wiki](https://github.com/mp911de/lettuce/wiki) +* [Reference documentation](https://lettuce.io/docs/) +* [Wiki](https://github.com/lettuce-io/lettuce-core/wiki) * [Javadoc](http://redis.paluch.biz/docs/api/releases/latest/) @@ -44,76 +48,47 @@ Binaries/Download Binaries and dependency information for Maven, Ivy, Gradle and others can be found at http://search.maven.org. -Releases of lettuce are available in the maven central repository. Take also a look at the [Download](https://github.com/mp911de/lettuce/wiki/Download) page in the [Wiki](https://github.com/mp911de/lettuce/wiki). +Releases of lettuce are available in the Maven Central repository. Take also a look at the [Releases](https://github.com/lettuce-io/lettuce-core/releases). Example for Maven: ```xml - biz.paluch.redis - lettuce + io.lettuce + lettuce-core x.y.z ``` -Shaded JAR-File (packaged dependencies and relocated to the `com.lambdaworks` package to prevent version conflicts) +If you'd rather like the latest snapshots of the upcoming major version, use our Maven snapshot repository and declare the appropriate dependency version. ```xml - biz.paluch.redis - lettuce - x.y.z - shaded - - - io.reactivex - rxjava - - - org.latencyutils - LatencyUtils - - - io.netty - netty-common - - - io.netty - netty-transport - - - io.netty - netty-handler - - - io.netty - netty-codec - - - com.google.guava - guava - - - io.netty - netty-transport-native-epoll - - - org.apache.commons - commons-pool2 - - + io.lettuce + lettuce-core + x.y.z.BUILD-SNAPSHOT -``` -or snapshots at https://oss.sonatype.org/content/repositories/snapshots/ + + + sonatype-snapshots + Sonatype Snapshot Repository + https://oss.sonatype.org/content/repositories/snapshots/ + + true + + + +``` Basic Usage ----------- ```java -RedisClient client = RedisClient.create("redis://localhost") -RedisStringsConnection connection = client.connect() -String value = connection.get("key") +RedisClient client = RedisClient.create("redis://localhost"); +StatefulRedisConnection connection = client.connect(); +RedisStringCommands sync = connection.sync(); +String value = sync.get("key"); ``` Each Redis command is implemented by one or more methods with names identical @@ -121,7 +96,7 @@ to the lowercase Redis command name. 
Complex commands with multiple modifiers that change the result type include the CamelCased modifier as part of the command name, e.g. zrangebyscore and zrangebyscoreWithScores. -See [Basic usage](https://github.com/mp911de/lettuce/wiki/Basic-usage) for further details. +See [Basic usage](https://github.com/lettuce-io/lettuce-core/wiki/Basic-usage) for further details. Asynchronous API ------------------------ @@ -138,7 +113,7 @@ set.get() == "OK" get.get() == "value" ``` -See [Asynchronous API](https://github.com/mp911de/lettuce/wiki/Asynchronous-API-%284.0%29) for further details. +See [Asynchronous API](https://github.com/lettuce-io/lettuce-core/wiki/Asynchronous-API-%284.0%29) for further details. Reactive API ------------------------ @@ -146,22 +121,22 @@ Reactive API ```java StatefulRedisConnection connection = client.connect(); RedisStringReactiveCommands reactive = connection.reactive(); -Observable set = reactive.set("key", "value") -Observable get = reactive.get("key") +Mono set = reactive.set("key", "value"); +Mono get = reactive.get("key"); set.subscribe(); -get.toBlocking().single() == "value" +get.block() == "value" ``` -See [Reactive API](https://github.com/mp911de/lettuce/wiki/Reactive-API-%284.0%29) for further details. +See [Reactive API](https://github.com/lettuce-io/lettuce-core/wiki/Reactive-API-%285.0%29) for further details. Pub/Sub ------- ```java RedisPubSubCommands connection = client.connectPubSub().sync(); -connection.addListener(new RedisPubSubListener() { ... }) +connection.getStatefulConnection().addListener(new RedisPubSubListener() { ... }) connection.subscribe("channel") ``` @@ -169,12 +144,12 @@ Building ----------- Lettuce is built with Apache Maven. The tests require multiple running Redis instances for different test cases which -are configured using a ```Makefile```. All tests run against Redis branch 3.0 +are configured using a ```Makefile```. Tests run by default against Redis `unstable`. To build: ``` -$ git clone https://github.com/mp911de/lettuce.git +$ git clone https://github.com/lettuce-io/lettuce-core.git $ cd lettuce/ $ make prepare ssl-keys $ make test @@ -189,17 +164,16 @@ $ make test Bugs and Feedback ----------- -For bugs, questions and discussions please use the [Github Issues](https://github.com/mp911de/lettuce/issues). +For bugs, questions and discussions please use the [GitHub Issues](https://github.com/lettuce-io/lettuce-core/issues). License ------- -* [Apache License 2.0] (http://www.apache.org/licenses/LICENSE-2.0) +* [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0) * Fork of https://github.com/wg/lettuce Contributing ------- Github is for social coding: if you want to write code, I encourage contributions through pull requests from forks of this repository. -Create Github tickets for bugs and new features and comment on the ones that you are interested in and take a look into [CONTRIBUTING.md](https://github.com/mp911de/lettuce/blob/master/.github/CONTRIBUTING.md) - +Create Github tickets for bugs and new features and comment on the ones that you are interested in and take a look into [CONTRIBUTING.md](https://github.com/lettuce-io/lettuce-core/blob/master/.github/CONTRIBUTING.md) diff --git a/RELEASE-NOTES.md b/RELEASE-NOTES.md index 8eb845b561..3f94f5ec51 100644 --- a/RELEASE-NOTES.md +++ b/RELEASE-NOTES.md @@ -1,290 +1,165 @@ -lettuce 4.2.0 RELEASE NOTES -=========================== +Lettuce 6.0.0 M1 RELEASE NOTES +============================== -That's the zero behind 4.2? 
Well, that's to not break OSGi support. Now let's -talk about the more interesting things. Lettuce 4.2.0 is a major release and -completes development of several notable enhancements. +The Lettuce team is delighted to announce the availability of the first Lettuce 6 milestone. -This release comes with SSL support, Publish/Subscribe and adaptive topology -refreshing for Redis Cluster. It features a major refactoring of the command -handling and several improvements for Cloud-based Redis services and -improvements to the Master/Slave API. +Lettuce 6 aligns with Redis 6 in terms of API and protocol changes. Both protocols, RESP and RESP3 are supported side-by-side defaulting to RESP. -The usage of Guava was reduced for the most parts. Only the `LoadingCache`, -`InetAddresses` and `HostAndPort` components are still in use. A future lettuce -5.0 version will eliminate the use of Guava completely. +Most notable changes that ship with this release are -**Important note for users of connection-pooling and latency tracking** +* RESP3 support +* ACL Authentication with username/password +* Asynchronous Cluster Topology Refresh +* API cleanups/Breaking Changes -Dependencies were streamlined with this release. Apache's `commons-pool2` and -`latencyutils` are now _optional_ dependencies. If you use connection-pooling or -latency tracking, please include these dependencies explicitly otherwise these -features will be disabled. +We're working towards the next milestone and looking at further Redis 6 features such as client-side caching how these can be incorporated into Lettuce. The release date of Lettuce 6 depends on Redis 6 availability. -Lettuce 4.2.0 was verified with several Cloud-based Redis services. It works -with different AWS ElastiCache usage patterns and is known to work with the -Azure Redis Cluster service supporting SSL and authentication (see below). +Thanks to all contributors who made Lettuce 6.0.0.M1 possible. +Lettuce requires a minimum of Java 8 to build and run and is compatible with Java 14. It is tested continuously against the latest Redis source-build. -lettuce 4.2.0 is fully binary compatible with the last release and can be used -as a drop-in replacement for 4.1.x. This update is strongly recommended for -lettuce 4.x users as it fixes some critical connection synchronization bugs. +If you need any support, meet Lettuce at -Thanks to all contributors that made lettuce 4.2.0 possible. - -lettuce 4.2.0 requires Java 8 and cannot be used with Java 6 or 7. - - -Redis Cluster Publish/Subscribe -------------------------------- -Redis Cluster -provides Publish/Subscribe features to broadcast messages across the cluster. -Using the standalone client allows using Publish/Subscribe with Redis Cluster -but comes with the limitation of high-availability/failover. - -If a node goes down, the connection is lost until the node is available again. -lettuce addresses this issue with Redis Cluster Publish/Subscribe and provides a -failover mechanism. - -Publish/Subscribe messages and subscriptions are operated on the default cluster -connection. The default connection is established with the node with the least -client connections to achieve a homogeneous connection distribution. It also -uses the cluster topology to failover if the currently connected node is down. 
+* Google Group (General discussion, announcements, and releases): https://groups.google.com/d/forum/lettuce-redis-client-users +or lettuce-redis-client-users@googlegroups.com +* Stack Overflow (Questions): https://stackoverflow.com/questions/tagged/lettuce +* Join the chat at https://gitter.im/lettuce-io/Lobby for general discussion +* GitHub Issues (Bug reports, feature requests): https://github.com/lettuce-io/lettuce-core/issues +* Documentation: https://lettuce.io/core/6.0.0.M1/reference/ +* Javadoc: https://lettuce.io/core/6.0.0.M1/api/ -Publishing a message using the regular cluster connection is still possible -(since 4.0). The regular cluster connection calculates a slot-hash from the -channel (which is the key in this case). Publishing always connects to the -master node which is responsible for the slot although `PUBLISH` is not affected -by the keyspace/slot-hash rule. +RESP3 Support +------------- -Read more: https://github.com/mp911de/lettuce/wiki/Pub-Sub-%284.0%29 +Redis 6 ships with support for a new protocol version. RESP3 brings support for additional data types to distinguish better between responses. The following response types were introduced with RESP3: +* Null: a single `null` value replacing RESP v2 `*-1` and `$-1` null values. +* Double: a floating-point number. +* Boolean: `true` or `false`. +* Blob error: binary-safe error code and message. +* Verbatim string: a binary-safe string that is typically used as user message without any escaping or filtering. +* Map: an ordered collection of key-value pairs. Keys and values can be any other RESP3 type. +* Set: an unordered collection of N other types. +* Attribute: Like the Map type, but the client should keep reading the reply ignoring the attribute type, and return it to the client as additional information. +* Push: Out-of-band data. +* Streamed strings: A large response using chunked transfer. +* Hello: Like the Map type, but is sent only when the connection between the client and the server is established, in order to welcome the client with different information like the name of the server, its version, and so forth. +* Big number: a large number non-representable by the Number type -Redis Cluster and SSL ---------------------- -Redis introduces an option to announce a specific IP address/port using -`cluster-announce-ip` and `cluster-announce-port`. This is useful for -Docker and NAT'ed setups. Furthermore, you can "hide" your Redis Cluster nodes -behind any other proxy like `stunnel`. A Redis Cluster node will announce the -specified port/IP which can map to `stunnel`, and you get an SSL-protected -Redis Cluster. Please note that `cluster-announce-ip` is not part of Redis 3.2 -but will be released in future versions. +Lettuce supports all response types except attributes. Push messages are only supported for Pub/Sub messages. -Redis Cluster SSL works pretty much the same as Redis Standalone with SSL. You -can configure SSL and other SSL/TLS options using `RedisURI`. 
+The protocol version can be changed through `ClientOptions` which disables protocol discovery: ```java -RedisURI redisURI = RedisURI.Builder.redis(host(), 7443) - .withSsl(true) - .withVerifyPeer(false) - .build(); - -RedisClusterClient redisClusterClient = RedisClusterClient.create(redisURI); -StatefulRedisClusterConnection connection = redisClusterClient.connect(); +ClientOptions options = ClientOptions.builder().protocolVersion(ProtocolVersion.RESP2).build(); ``` -You should disable the `verifyPeer` option if the SSL endpoints cannot provide a -valid certificate. When creating a `RedisClusterClient` using -`RedisClusterClientFactoryBean` the `verifyPeer` option is disabled by default. - -Lettuce was successfully tested with Azure Redis with SSL and authentication. - -Read more: https://github.com/mp911de/lettuce/wiki/Redis-Cluster-%284.0%29 +Future versions are going to discover the protocol version as part of the connection handshake and use the newest available protocol version. +ACL Authentication +------------------ -Redis Cluster Topology Discovery and Refreshing ----------------------------------------------- -The lettuce Redis Cluster Client -allows regular topology updates. lettuce 4.2.0 improves the existing topology -updates with adaptive refreshing and dynamic/static topology discovery. - -Adaptive refresh initiates topology view updates based on events happened during -Redis Cluster operations. Adaptive triggers lead to an immediate topology -refresh. Adaptive updates are rate-limited using a timeout since events can -happen on a large scale. Adaptive refresh triggers are disabled by default and -can be enabled selectively: - -* `MOVED_REDIRECT` -* `ASK_REDIRECT` -* `PERSISTENT_RECONNECTS` - -Dynamic/static topology discovery sources are the second change to topology -refresh. lettuce uses by default dynamic discovery. Dynamic discovery retrieves -the initial topology from the seed nodes and determines additional nodes to -request their topology view. That is to reduce split-brain views by choosing the -view which is shared by the majority of cluster nodes. - -Dynamic topology discovery also provides latency data and client count for each -node in the cluster. These details are useful for calculating the nearest node -or the least used node. - -Dynamic topology discovery can get expensive when running large Redis Clusters -as all nodes from the topology are queried for their view. Static topology -refresh sources limit the nodes to the initial seed node set. Limiting nodes is -friendly to large clusters but it will provide latency and client count only for -the seed nodes. - -Read more: https://github.com/mp911de/lettuce/wiki/Client-options#adaptive-cluster-topology-refresh - - -Redis Modules -------------- +Redis 6 supports authentication using username and password. Lettuce's `RedisURI` adapts to this change by allowing to specify a username: -Redis module support is a very young feature. lettuce provides a custom command -API to dispatch own commands. `StatefulConnection` already allows sending of -commands but requires wrapping of commands into the appropriate synchronization -wrapper (Future, Reactive, Fire+Forget). +`redis://username:password@host:port/database` -lettuce provides with 4.2.0 `dispatch(…)` methods on each API type to provide a -simpler interface. +Using RESP3 or PING on connect authenticates the connection during the handshake phase. 
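For illustration, a minimal sketch of handshake authentication against such a URI; the `appuser`/`secret` credentials and the localhost address are placeholders, not part of the release notes:

```java
// Placeholder ACL user, e.g. created server-side with: ACL SETUSER appuser on >secret ~* +@all
RedisURI uri = RedisURI.create("redis://appuser:secret@localhost:6379/0");
RedisClient client = RedisClient.create(uri);
StatefulRedisConnection<String, String> connection = client.connect(); // authenticated during the connection handshake
```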
Already connected connections may switch the user context by issuing an `AUTH` command with username and password: ```java -RedisCodec codec = new Utf8StringCodec(); - -String response = redis.dispatch(CommandType.SET, - new StatusOutput<>(codec), - new CommandArgs<>(codec) - .addKey(key) - .addValue(value)); +StatefulRedisConnection connection = client.connect(); +RedisCommands commands = connection.sync(); +commands.auth("username", "password"); ``` -Calls to `dispatch(…)` on the synchronous API are blocking calls, calls on the -asynchronous API return a `RedisFuture` and calls on the Reactive API return -an `Observable` which flat-maps collection responses. - -Using `dispatch(…)` allows to invoke arbitrary commands and works together -within transactions and Redis Cluster. Exposing this API also allows choosing a -different `RedisCodec` for particular operations. - -Read more: https://github.com/mp911de/lettuce/wiki/Custom-commands%2C-outputs-and-command-mechanics - - -CommandHandler refactoring --------------------------- -Command sending, buffering, encoding and receiving was refactored on a large scale. -Command encoding is performed in a separate handler and outside of `CommandHandler`. -It does not longer allocate an additional buffer to encode its arguments, but -arguments are directly written to the command buffer that is used to encode -single command/batch of commands. Fewer memory allocations help improving -performance and do not duplicate data. - -Synchronization and locking were reworked as well. The critical path used for -writing commands is no longer locked exclusively but uses a shared locking with -almost lock-free synchronization. - - -Improvements to Master/Slave connections ----------------------------------------- -lettuce introduced with 4.1 a Master/Slave API which is now more dynamic. It's -no longer required to connect to a Master node when using Master/Slave without -Sentinel as the Master/Slave API will discover the master by itself. Providing -one seed node enables dynamic lookup. The API is internally prepared for dynamic -updates which are used with Redis Sentinel. - -A Sentinel-managed Master/Slave setup discovers configuration changes based on -Sentinel events and updates its topology accordingly. +Asynchronous Cluster Topology Refresh +------------------------------------- -Another change is the broader support of AWS ElastiCache Master/Slave setups. -AWS ElastiCache allows various patterns for usage. One of them is the automatic -failover. AWS ElastiCache exposes a connection point hostname and updates the -DNS record to point to the current master node. Since the JVM has a built-in -cache it's not trivial to adjust the DNS lookup and caching to the special needs -which are valid only for AWS ElastiCache connections. lettuce exposes a DNS -Lookup API that defaults to the JVM lookup. Lettuce ships also with -`DirContextDnsResolver` that allows own DNS lookups using either the -system-configured DNS or external DNS servers. This implementation comes without -caching and is suitable for AWS ElastiCache. +Cluster Topology Refresh was in Lettuce 4 and 5 a blocking and fully synchronous task that required a worker thread. A side-effect of the topology refresh was that command timeouts could be delayed as the worker thread pool was used for timeout tasks and the topology refresh. 
Lettuce 6 ships with a fully non-blocking topology refresh mechanism which is basically a reimplementation of the previous refresh mechanism but using non-blocking components instead. -Another pattern is using AWS ElastiCache slaves. Before 4.2.0, a static setup -was required. Clients had to point to the appropriate node. The Master/Slave API -allows specifying a set of nodes which form a static Master/Slave setup. Lettuce -discovers the roles from the provided nodes and routes read/write commands -according to `ReadFrom` settings. +API cleanups/Breaking Changes +----------------------------- -Read more: https://github.com/mp911de/lettuce/wiki/Master-Slave +With this release, we took the opportunity to introduce a series of changes that put the API into a cleaner shape. - -If you need any support, meet lettuce at: - -* Google Group: https://groups.google.com/d/forum/lettuce-redis-client-users -* Gitter: https://gitter.im/mp911de/lettuce -* Github Issues: https://github.com/mp911de/lettuce/issues - - -Commands --------- -* Add support for CLUSTER BUMPEPOCH command #179 -* Support extended MIGRATE syntax #197 -* Add support for GEORADIUS STORE and STOREDIST options #199 -* Support SCAN in RedisCluster #201 -* Add support for BITFIELD command #206 -* Add zadd method accepting ScoredValue #210 -* Add support for SPOP key count #235 -* Add support for GEOHASH command #239 -* Add simple interface for custom command invocation for sync/async/reactive APIs #245 +* Script Commands: `eval`, `digest`, `scriptLoad` methods now only accept `String` and `byte[]` argument types. Previously `digest` and `scriptLoad` accepted the script contents as Codec value type which caused issues especially when marshalling values using JSON or Java Serialization. The script charset can be configured via `ClientOptions` (`ClientOptions.builder().scriptCharset(StandardCharsets.US_ASCII).build();`), defaulting to UTF-8. +* Connection: Removal of deprecated timeout methods accepting `TimeUnit`. Use methods accepting `Duration` instead. +* Async Commands: `RedisAsyncCommands.select(…)` and `.auth(…)` methods return now futures instead if being blocking methods. +* Asynchronous API Usage: Connection and Queue failures now no longer throw an exception but properly associate the failure with the Future handle. +* Master/Replica API: Move implementation classes from `io.lettuce.core.masterslave` to `io.lettuce.core.masterreplica` package. +* Internal: Removal of the internal `LettuceCharsets` utility class. +* Internal: Reduced visibility of several `protected` fields in `AbstractRedisClient` (`eventLoopGroups`, `genericWorkerPool`, `timer`, `clientResources`, `clientOptions`, `defaultTimeout`). +* Internal: Consolidation of Future synchronization utilities (`LettuceFutures`, `RefreshFutures`, `Futures`). 
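To illustrate the Master/Replica package move listed above, here is a minimal sketch of connecting through the relocated `io.lettuce.core.masterreplica` entry point; host, port, and key are placeholders, so treat this as a sketch rather than canonical usage:

```java
// MasterReplica and StatefulRedisMasterReplicaConnection now live in io.lettuce.core.masterreplica
RedisClient client = RedisClient.create();
StatefulRedisMasterReplicaConnection<String, String> connection = MasterReplica.connect(
        client, StringCodec.UTF8, RedisURI.create("redis://localhost:6379"));
connection.setReadFrom(ReadFrom.REPLICA_PREFERRED); // prefer replicas for reads when available
String value = connection.sync().get("key");
```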
Enhancements ------------ -* Cluster pub/sub and resilient subscriptions #138 (Thanks to @jpennell) -* Reactive API: Emit items during command processing #178 -* Allow configuration of max redirect count for cluster connections #191 -* Improve SCAN API #208 -* Support Redis Cluster with SSL #209 -* Improve CommandHandler locking #211 -* Improve command encoding of singular commands and command batches #212 (Thanks to @cwolfinger) -* Add log statement for resolved address #218 (Thanks to @mzapletal) -* Apply configured password/database number in MasterSlave connection #220 -* Improve command draining in flushCommands #228 (Thanks to @CodingFabian) -* Support dynamic master/slave connections #233 -* Expose DNS Resolver #236 -* Make latencyutils and commons-pool2 dependencies optional #237 -* Support adaptive cluster topology refreshing and static refresh sources #240 (Thanks to @RahulBabbar) -* Add static builder() methods to builders enhancement #248 -* Add factory for reconnection delay enhancement #250 -* Add integer cache for CommandArgs enhancement #251 +* Use channel thread to enqueue commands #617 +* Redesign connection activation #697 +* Add support for RESP3 #964 +* Consolidate Future utils #1039 +* Make RedisAsyncCommands.select() and auth() async #1118 (Thanks to @ikkyuland) +* Allow client to pick a specific TLS version and introduce PEM-based configuration #1167 (Thanks to @amohtashami12307) +* Optimization of BITFIELD args generation #1175 (Thanks to @ianpojman) +* Add mutate() to SocketOptions #1193 +* Add CLIENT ID command #1197 +* Lettuce not able to reconnect automatically to SSL+authenticated ElastiCache node #1201 (Thanks to @chadlwilson) +* Add support for AUTH with user + password introduced in Redis 6 #1202 (Thanks to @tgrall) +* HMSET deprecated in version 4.0.0 #1217 (Thanks to @hodur) +* Allow selection of Heap or Direct buffers for CommandHandler.buffer #1223 (Thanks to @dantheperson) +* Support JUSTID flag of XCLAIM command #1233 (Thanks to @christophstrobl) +* Add support for KEEPTTL with SET #1234 +* Add support for RxJava 3 #1235 +* Retrieve username from URI when RedisURI is built from URL #1242 (Thanks to @gkorland) +* Introduce ThreadFactoryProvider to DefaultEventLoopGroupProvider for easier customization #1243 (Thanks to @apilling6317) Fixes ----- -* pfmerge invokes PFADD instead of PFMERGE #158 (Thanks to @christophstrobl) -* Fix NPE in when command output is null #187 (Thanks to @rovarghe) -* Set interrupted bit after catching InterruptedException #192 -* Lettuce fails sometimes at shutdown: DefaultClientResources.shutdown #194 -* Extensive calls to PooledClusterConnectionProvider.closeStaleConnections #195 -* Shared resources are closed altough still users are present #196 -* Lettuce 4.1 does not repackage new dependencies #198 (Thanks to @ CodingFabian) -* Fix NPE in CommandHandler.write (Thanks to @cinnom) #213 -* Gracefully shutdown DefaultCommandLatencyCollector.PAUSE_DETECTOR #223 (Thanks to @sf-git and @johnou) -* RedisClient.connect(RedisURI) fails for unix socket based URIs #229 (Thanks to @nivekastoreth) -* HdrHistogram and LatencyUtils are not included in binary distribution #231 -* Cache update in Partitions is not thread-safe #234 -* GEORADIUSBYMEMBER, GEORADIUS and GEOPOS run into NPE when using Redis Transactions #241 -* LettuceFutures.awaitAll throws RedisCommandInterruptedException when awaiting failed commands #242 -* Fix command sequence on connection activation #253 (Thanks to @long-xuan-nguyen) -* Cluster topology refresh: Failed 
connections are not closed bug #255 -* Cluster topology refresh tries to connect twice for failed connection attempts #256 -* Connection lifecycle state DISCONNECTED is considered a connected sate #257 -* Writing commands while a disconnect is in progress leads to a race-condition #258 -* Canceled commands lead to connection desynchronization #262 (Thanks to @long-xuan-nguyen) +* Commands Timeout ignored/not working during refresh #1107 (Thanks to @pendula95) +* StackOverflowError in RedisPublisher #1140 (Thanks to @csunwold) +* Incorrect access on io.lettuce.core.ReadFrom.isOrderSensitive() #1145 (Thanks to @orclev) +* Consider ReadFrom.isOrderSensitive() in cluster scan command #1146 +* Improve log message for nodes that cannot be reached during reconnect/topology refresh #1152 (Thanks to @drewcsillag) +* BoundedAsyncPool doesn't work with a negative maxTotal #1181 (Thanks to @sguillope) +* TLS setup fails to a master reported by sentinel #1209 (Thanks to @ae6rt) +* Lettuce metrics creates lots of long arrays, and gives out of memory error. #1210 (Thanks to @omjego) +* CommandSegments.StringCommandType does not implement hashCode()/equals() #1211 +* Unclear documentation about quiet time for RedisClient#shutdown #1212 (Thanks to @LychakGalina) +* StreamReadOutput in Lettuce 6 creates body entries containing the stream id #1216 +* Write race condition while migrating/importing a slot #1218 (Thanks to @phyok) +* randomkey return V not K #1240 (Thanks to @hosunrise) +* ConcurrentModificationException iterating over partitions #1252 (Thanks to @johnny-costanzo) +* Replayed activation commands may fail because of their execution sequence #1255 (Thanks to @robertvazan) +* Fix infinite command timeout #1260 +* Connection leak using pingBeforeActivateConnection when PING fails #1262 (Thanks to @johnny-costanzo) +* Lettuce blocked when connecting to Redis #1269 (Thanks to @jbyjby1) +* Stream commands are not considered for ReadOnly routing #1271 (Thanks to @redviper) Other ------- -* Switch remaining tests to AssertJ #13 -* Promote 4.x branch to main branch #155 -* Add Wiki documentation for disconnectedBehavior option in ClientOptions #188 -* Switch travis-ci to container build #203 -* Refactor Makefile #207 -* Code cleanups #215 -* Reduce Google Guava usage #217 -* Improve contribution assets #219 -* Ensure OSGi compatibility #232 -* Upgrade netty to 4.0.36.Final #238 - -lettuce requires a minimum of Java 8 to build and run. It is tested continuously -against the latest Redis source-build. 
- -If you need any support, meet lettuce at - -* Google Group: https://groups.google.com/d/forum/lettuce-redis-client-users -or lettuce-redis-client-users@googlegroups.com -* Join the chat at https://gitter.im/mp911de/lettuce -* Github Issues: https://github.com/mp911de/lettuce/issues -* Wiki: https://github.com/mp911de/lettuce/wiki \ No newline at end of file +----- +* Refactor script content argument types to String and byte[] instead of V (value type) #1010 (Thanks to @danielsomekh) +* Render Redis.toString() to a Redis URI #1040 +* Pass Allocator as RedisStateMachine constructor argument #1053 +* Simplify condition to invoke "resolveCodec" method in AnnotationRedisCodecResolver #1149 (Thanks to @machi1990) +* Encode database in RedisURI in path when possible #1155 +* Remove LettuceCharsets #1156 +* Move SocketAddress resolution from RedisURI to SocketAddressResolver #1157 +* Remove deprecated timeout methods accepting TimeUnit #1158 +* Upgrade to RxJava 2.2.13 #1162 +* Add ByteBuf.touch(…) to aid buffer leak investigation #1164 +* Add warning log if MasterReplica(…, Iterable) contains multiple Sentinel URIs #1165 +* Adapt GEOHASH tests to 10 chars #1196 +* Migrate Master/Replica support to the appropriate package #1199 +* Disable RedisURIBuilderUnitTests failing on Windows OS #1204 (Thanks to @kshchepanovskyi) +* Provide a default port(DEFAULT_REDIS_PORT) to RedisURI's Builder #1205 (Thanks to @hepin1989) +* Update code for pub/sub to listen on the stateful connection object. #1207 (Thanks to @judepereira) +* Un-deprecate ClientOptions.pingBeforeActivateConnection #1208 +* Use consistently a shutdown timeout of 2 seconds in all AbstractRedisClient.shutdown methods #1214 +* Upgrade dependencies (netty to 4.1.49.Final) #1161, #1224, #1225, #1239, #1259 +* RedisURI class does not parse password when using redis-sentinel #1232 (Thanks to @kyrogue) +* Reduce log level to DEBUG for native library logging #1238 (Thanks to @DevJoey) +* Reduce visibility of fields in AbstractRedisClient #1241 +* Upgrade to stunnel 5.56 #1246 +* Add build profiles for multiple Java versions #1247 +* Replace outdated Sonatype parent POM with plugin definitions #1258 +* Upgrade to RxJava 3.0.2 #1261 +* Enable Sentinel tests after Redis fixes RESP3 handshake #1266 +* Consolidate exception translation and bubbling #1275 +* Reduce min thread count to 2 #1278 diff --git a/formatting.xml b/formatting.xml index ff57cdcb77..4128eb0641 100644 --- a/formatting.xml +++ b/formatting.xml @@ -1,6 +1,6 @@ - + diff --git a/pom.xml b/pom.xml index 39a4341d7e..b5cd481738 100644 --- a/pom.xml +++ b/pom.xml @@ -1,325 +1,819 @@ 4.0.0 - - org.sonatype.oss - oss-parent - 9 - - - biz.paluch.redis - lettuce - 4.3.0-SNAPSHOT + io.lettuce + lettuce-core + 6.0.0.BUILD-SNAPSHOT jar - lettuce - http://github.com/mp911de/lettuce/wiki + Lettuce + Advanced and thread-safe Java Redis client for synchronous, asynchronous, and reactive usage. Supports Cluster, Sentinel, Pipelining, Auto-Reconnect, Codecs and much more. 
+ http://github.com/lettuce-io/lettuce-core + + + lettuce.io + https://lettuce.io + + Apache License, Version 2.0 - http://www.apache.org/licenses/LICENSE-2.0.txt + https://www.apache.org/licenses/LICENSE-2.0.txt repo Travis CI - https://travis-ci.org/mp911de/lettuce + https://travis-ci.org/lettuce-io/lettuce-core Github - https://github.com/mp911de/lettuce/issues + https://github.com/lettuce-io/lettuce-core/issues - - will - Will Glozer - mp911de Mark Paluch + + will + Will Glozer + + 3.16.1 + 2.0.SP1 + 5.11.2 + 3.10 + 2.8.0 + 1.3.2 + 3.1.0 + 4.13 + 5.6.2 + 2.2 + 2.1.12 + 2.0.3 + 2.13.3 + 3.3.3 + 4.1.50.Final + 2.0.16 + 3.3.6.RELEASE + 1.3.8 + 1.2.1 + 2.2.19 + 3.0.4 + 1.0.3 + 1.7.25 + 4.3.26.RELEASE + UTF-8 - - true - true - true - 4.2.2.Final - 3.5.0.Final - 4.1.4.Final + true + - scm:git:https://github.com/mp911de/lettuce.git - scm:git:https://github.com/mp911de/lettuce.git - http://github.com/mp911de/lettuce - - - - 3.0 - + scm:git:https://github.com/lettuce-io/lettuce-core.git + scm:git:https://github.com/lettuce-io/lettuce-core.git + http://github.com/lettuce-io/lettuce-core + HEAD + + + + + sonatype-nexus-snapshots + Sonatype Nexus Snapshots + https://oss.sonatype.org/content/repositories/snapshots/ + + + sonatype-nexus-staging + Nexus Release Repository + https://oss.sonatype.org/service/local/staging/deploy/maven2/ + + + + + + + + io.netty + netty-bom + ${netty.version} + pom + import + + + + org.springframework + spring-framework-bom + ${spring.version} + pom + import + + + + io.zipkin.brave + brave-bom + ${brave.version} + pom + import + + + + org.junit + junit-bom + ${junit5.version} + pom + import + + + + org.apache.logging.log4j + log4j-bom + ${log4j2-version} + pom + import + + + + - + - io.reactivex - rxjava - 1.1.9 + io.netty + netty-common - io.netty - netty-common - ${netty-version} + netty-handler io.netty netty-transport - ${netty-version} - io.netty - netty-handler - ${netty-version} + io.projectreactor + reactor-core + ${reactor.version} + + + + - io.netty - netty-transport-native-epoll - ${netty-version} - linux-x86_64 - provided + org.apache.commons + commons-pool2 + ${commons-pool.version} true io.netty netty-tcnative - 1.1.33.Fork20 + 2.0.30.Final ${os.detected.classifier} - provided + true + + + + + + io.netty + netty-transport-native-epoll + linux-x86_64 true - com.google.guava - guava - 18.0 + io.netty + netty-transport-native-kqueue + osx-x86_64 + true + + - org.apache.commons - commons-pool2 - 2.4.2 + io.zipkin.brave + brave true org.latencyutils LatencyUtils - 2.0.3 + ${latencyutils.version} + true + + + + org.hdrhistogram + HdrHistogram + ${hdr-histogram-version} + true + + + + + + io.reactivex + rxjava + ${rxjava.version} + true + + + + io.reactivex + rxjava-reactive-streams + ${rxjava-reactive-streams.version} + true + + + + io.reactivex.rxjava2 + rxjava + ${rxjava2.version} true - + + io.reactivex.rxjava3 + rxjava + ${rxjava3.version} + true + org.springframework spring-beans - 4.2.4.RELEASE - provided + true + + + commons-logging + commons-logging + + org.springframework spring-context - 4.2.4.RELEASE - provided + true - javax.inject - javax.inject - 1 - provided + javax.enterprise + cdi-api + ${cdi-api.version} + true - javax.enterprise - cdi-api - 1.0 - provided + javax.inject + javax.inject + 1 + true com.google.code.findbugs jsr305 - 3.0.1 - provided + 3.0.2 true + + - junit - junit - 4.12 + org.assertj + assertj-core + ${assertj-core.version} test - + - com.googlecode.multithreadedtc - multithreadedtc - 1.01 + org.hamcrest + hamcrest-library + 
${hamcrest-library.version} test - org.mockito - mockito-core - 1.10.19 + org.apache.commons + commons-lang3 + ${commons-lang3.version} test - com.google.code.tempus-fugit - tempus-fugit - 1.1 + com.github.javaparser + javaparser-core + 3.6.3 + test + + + + + + org.apache.openwebbeans + openwebbeans-se + ${openwebbeans.version} test - org.apache.openwebbeans.test - cditest-owb - 1.2.8 + javax.annotation + javax.annotation-api + ${javax.annotation-api.version} test javax.servlet javax.servlet-api - 3.1.0 + ${javax.servlet-api.version} test + + - org.assertj - assertj-core - 3.2.0 + junit + junit + ${junit4.version} + test + + + + org.junit.jupiter + junit-jupiter-api + test + + + + org.junit.jupiter + junit-jupiter-engine + test + + + + org.junit.vintage + junit-vintage-engine + test + + + + org.junit.jupiter + junit-jupiter-params test + + org.apache.logging.log4j log4j-core - 2.6.2 test org.apache.logging.log4j log4j-slf4j-impl - 2.6.2 test - org.springframework - spring-test - 4.2.4.RELEASE + org.slf4j + jcl-over-slf4j + ${slf4j.version} + test + + + + + org.mockito + mockito-core + ${mockito.version} + test + + + + org.mockito + mockito-junit-jupiter + ${mockito.version} + test + + + + com.googlecode.multithreadedtc + multithreadedtc + 1.01 + test + + + + org.reactivestreams + reactive-streams-tck + ${reactive-streams-tck.version} + test + + + + io.projectreactor + reactor-test + ${reactor.version} test org.springframework - spring-expression - 4.2.4.RELEASE + spring-core test - org.hamcrest - hamcrest-library - 1.3 + org.springframework + spring-aop test - com.github.javaparser - javaparser-core - 2.3.0 + org.springframework + spring-test test - org.apache.commons - commons-lang3 - 3.4 + org.springframework + spring-expression + ${spring.version} test + + + + + src/main/resources + true + + + + + + kr.motd.maven + os-maven-plugin + 1.6.2 + + + + + + + org.codehaus.mojo + flatten-maven-plugin + 1.2.2 + + + + org.apache.maven.plugins + maven-assembly-plugin + 3.3.0 + + + + org.apache.maven.plugins + maven-antrun-plugin + 1.8 + + + + org.apache.maven.plugins + maven-compiler-plugin + 3.8.1 + + + + org.apache.maven.plugins + maven-jar-plugin + 3.2.0 + + + + org.apache.maven.plugins + maven-gpg-plugin + 1.6 + + + + org.apache.maven.plugins + maven-javadoc-plugin + 3.2.0 + + + + org.apache.maven.plugins + maven-release-plugin + 2.5.3 + + + + org.apache.maven.plugins + maven-surefire-plugin + 3.0.0-M4 + + + org.apache.maven.surefire + surefire-junit-platform + 3.0.0-M4 + + + + + + org.apache.maven.plugins + maven-failsafe-plugin + 3.0.0-M4 + + + org.apache.maven.surefire + surefire-junit-platform + 3.0.0-M4 + + + + + + org.apache.maven.plugins + maven-source-plugin + 3.2.1 + + + + org.codehaus.mojo + build-helper-maven-plugin + 3.1.0 + + + + org.codehaus.mojo + exec-maven-plugin + 3.0.0 + + + + org.jacoco + jacoco-maven-plugin + 0.8.5 + + + + net.nicoulaj.maven.plugins + checksum-maven-plugin + 1.9 + + + + + + + + org.codehaus.mojo + flatten-maven-plugin + + + flatten + process-resources + + flatten + + + true + oss + + remove + remove + remove + remove + + + + + flatten-clean + clean + + clean + + + + + + + org.apache.maven.plugins + maven-compiler-plugin + + -Xlint:all,-deprecation,-unchecked + -Xlint:none + true + false + 1.8 + 1.8 + + + + + org.apache.maven.plugins + maven-jar-plugin + + + + true + true + + + lettuce.core + + + + + + + org.apache.maven.plugins + maven-surefire-plugin + + + 4 + + + **/*UnitTests + **/*Tests + + + **/*Test + **/*IntegrationTests + + + + + + 
org.apache.maven.plugins + maven-failsafe-plugin + + + 4 + + + **/*IntegrationTests + **/*Test + + + **/*UnitTests + + + + + + integration-test + + integration-test + + + + + + + org.apache.maven.plugins + maven-release-plugin + + sonatype-oss-release,documentation + deploy + true + @{project.version} + + + + + org.apache.maven.plugins + maven-source-plugin + + + attach-sources + + jar + + + + + + + org.jacoco + jacoco-maven-plugin + + + prepare-agent + + prepare-agent + + + + + + + + - netty-40 - - 4.0.40.Final - - - - netty-41 + ci + sonatype-oss-release + + org.apache.maven.plugins - maven-assembly-plugin - 2.4 + maven-javadoc-plugin - src - package + attach-javadocs - single + jar + + + + + public + true +
Lettuce
+ src/main/javadoc/stylesheet.css + + https://netty.io/4.1/api/ + https://commons.apache.org/proper/commons-pool/api-2.7.0/ + http://reactivex.io/RxJava/1.x/javadoc/ + http://reactivex.io/RxJava/javadoc/ + https://projectreactor.io/docs/core/release/api/ + https://docs.spring.io/spring/docs/current/javadoc-api/ + + none + true + + + generated + a + Generated class: + + +
+
+ + + org.apache.maven.plugins + maven-gpg-plugin + + + sign-artifacts + verify + + sign - - ${project.artifactId}-${project.version} - - src/assembly/src.xml - - gnu - false - + + + + + org.apache.maven.plugins + maven-assembly-plugin + bin - ${project.artifactId}-${project.version} src/assembly/bin.xml @@ -335,9 +829,8 @@ - net.ju-n.maven.plugins + net.nicoulaj.maven.plugins checksum-maven-plugin - 1.2 MD5 @@ -377,39 +870,47 @@
+ + jmh org.openjdk.jmh jmh-core - 1.11.3 + 1.21 test org.openjdk.jmh jmh-generator-annprocess - 1.11.3 + 1.21 test - - - src/test/resources - - + + maven-surefire-plugin + + true + + + + maven-failsafe-plugin + + true + + org.codehaus.mojo build-helper-maven-plugin - 1.7 add-source - generate-sources + generate-test-sources add-test-source @@ -427,7 +928,7 @@ run-benchmarks - process-test-resources + test exec @@ -435,10 +936,25 @@ java test + -Xmx2G -classpath org.openjdk.jmh.Main .* + -tu + ns + -f + 1 + -wi + 10 + -w + 1 + -r + 1 + -i + 10 + -bm + avgt @@ -447,265 +963,156 @@ -
- + - - - kr.motd.maven - os-maven-plugin - 1.4.0.Final - - + no-install - + + true + true + - - org.apache.maven.plugins - maven-compiler-plugin - 3.1 - - -Xlint:all,-deprecation,-unchecked - -Xlint:none - true - false - 1.8 - 1.8 - - + + + + org.apache.maven.plugins + maven-install-plugin + + true + + + + org.apache.maven.plugins + maven-deploy-plugin + + true + false + + + + + - - org.apache.maven.plugins - maven-javadoc-plugin - 2.9.1 - - - attach-javadocs - - jar - - - - - public - - http://netty.io/4.0/api/ - http://commons.apache.org/proper/commons-pool/api-2.4.2/ - http://docs.guava-libraries.googlecode.com/git/javadoc/ - http://reactivex.io/RxJava/javadoc/ - - -Xdoclint:all -Xdoclint:-html -Xdoclint:-syntax - - - generated - a - Generated class: - - - - + - - org.apache.maven.plugins - maven-surefire-plugin - 2.17 - - - 4 - - - + documentation - - org.apache.maven.plugins - maven-release-plugin - 2.5 - - sonatype-oss-release - -DskipSigning=false - deploy - true - @{project.version} - - + - - org.apache.maven.plugins - maven-gpg-plugin - 1.5 - - ${skipSigning} - - - - sign-artifacts - verify - - sign - - - - + - - org.apache.maven.plugins - maven-source-plugin - 2.3 - - - attach-sources - - jar - - - - + + org.apache.maven.plugins + maven-antrun-plugin + - - org.eluder.coveralls - coveralls-maven-plugin - 3.1.0 - + + rename-reference-docs + process-resources + + + + + + + run + + - - org.jacoco - jacoco-maven-plugin - 0.7.5.201505241946 - - - prepare-agent - - prepare-agent - - - - + + - - org.apache.maven.plugins - maven-site-plugin - 3.4 - - false - - - - org.apache.maven.doxia - doxia-module-markdown - 1.6 - - - org.apache.maven.scm - maven-scm-provider-gitexe - 1.9 - - - + + org.asciidoctor + asciidoctor-maven-plugin + 2.0.0-RC.1 + + + org.asciidoctor + asciidoctorj-pdf + 1.5.0-beta.5 + + + - - org.apache.maven.plugins - maven-scm-publish-plugin - 1.1 - - gh-pages - ${project.build.directory}/site - ${github.site.upload.skip} - true - scm:git:ssh://git@github.com/mp911de/redis.paluch.biz.git - apidocs/** - - + + html + generate-resources + + process-asciidoc + + + html5 + ${project.build.directory}/site/reference/html + + book + + true + true + stylesheets + golo.css + + + + + + pdf + generate-resources + + process-asciidoc + + + pdf + + + + - - org.apache.maven.plugins - maven-shade-plugin - 2.3 - - - package - - shade - - - - com.google.guava:guava - - com/google/common/escape/** - com/google/common/eventbus/** - com/google/common/hash/** - com/google/common/html/** - com/google/common/math/** - com/google/common/xml/** - - - - - - - rx - com.lambdaworks.rx - - - - com.google - com.lambdaworks.com.google - - - - org.HdrHistogram - com.lambdaworks.org.HdrHistogram - - - - org.LatencyUtils - com.lambdaworks.org.LatencyUtils - - - - org.apache.commons.pool2 - com.lambdaworks.org.apache.commons.pool2 - - - - io - com.lambdaworks.io - - - false - true + src/main/asciidoc + index.asciidoc + book + + ${project.version} + true + 3 + true + https://raw.githubusercontent.com/wiki/lettuce-io/lettuce-core/ + + + + font + coderay + - - - - - + + + + org.apache.maven.plugins + maven-assembly-plugin + + + docs + package + + single + + + + src/assembly/docs.xml + + gnu + true + + + + + + + +
+ +
- - - - org.apache.maven.plugins - maven-project-info-reports-plugin - 2.7 - - - - index - plugin-management - distribution-management - dependency-info - dependencies - scm - issue-tracking - cim - dependency-management - project-team - summary - - - - - - diff --git a/prepare-apidocs-upload.sh b/prepare-apidocs-upload.sh deleted file mode 100755 index 90e5448c8c..0000000000 --- a/prepare-apidocs-upload.sh +++ /dev/null @@ -1,30 +0,0 @@ -#!/bin/bash -VERSION=$(xpath -e "/project/version" pom.xml 2>/dev/null | sed 's///g;s/<\/version>//g') - -if [[ "" == "${VERSION}" ]] -then - echo "Cannot determine version" - exit 1 -fi - -BASE=target/redis.paluch.biz -TARGET_BASE=/docs/api/releases/ - -git clone https://github.com/mp911de/redis.paluch.biz.git ${BASE} -cd ${BASE} && git checkout gh-pages && cd ../.. - -if [[ ${VERSION} == *"SNAPSHOT"* ]] -then - TARGET_BASE=/docs/api/snapshots/ -fi - -TARGET_DIR=${BASE}${TARGET_BASE}${VERSION} - -mkdir -p ${TARGET_DIR} - -cp -R target/site/apidocs/* ${TARGET_DIR} - -cd ${BASE} -git add . -git commit -m "Generated API docs" - diff --git a/src/assembly/bin.xml b/src/assembly/bin.xml index a65ed2ad90..2365757bfb 100644 --- a/src/assembly/bin.xml +++ b/src/assembly/bin.xml @@ -12,27 +12,17 @@ - biz.paluch.redis:lettuce:jar:${project.version} - biz.paluch.redis:lettuce:jar:shaded:${project.version} + io.lettuce:lettuce-core:jar:${project.version} - false true true - io.netty:netty-transport-native-epoll - - provided - dependencies - false - - - - io.reactivex:* io.netty:* - com.google.guava:* + io.projectreactor:* + org.reactivestreams:reactive-streams:* org.apache.commons:* org.latencyutils:* org.hdrhistogram:* @@ -42,9 +32,9 @@ - biz.paluch.redis:lettuce:*:javadoc + io.lettuce:lettuce-core:*:javadoc - apidocs + docs/apidocs true false true @@ -59,5 +49,9 @@ RELEASE-NOTES.md + + ${project.build.directory}/site/reference + docs/reference + diff --git a/src/assembly/docs.xml b/src/assembly/docs.xml new file mode 100644 index 0000000000..60d345f53e --- /dev/null +++ b/src/assembly/docs.xml @@ -0,0 +1,17 @@ + + + docs + + zip + + / + + + ${project.build.directory}/site/reference/html + + true + + + diff --git a/src/assembly/src.xml b/src/assembly/src.xml deleted file mode 100644 index 739ade2591..0000000000 --- a/src/assembly/src.xml +++ /dev/null @@ -1,23 +0,0 @@ - - - src - - zip - tar.gz - - ${project.version}-${project.version}-src - - - src/main/java - / - true - - - src/main/resources - / - true - - - \ No newline at end of file diff --git a/src/main/asciidoc/advanced-usage.asciidoc b/src/main/asciidoc/advanced-usage.asciidoc new file mode 100644 index 0000000000..1b46f997a4 --- /dev/null +++ b/src/main/asciidoc/advanced-usage.asciidoc @@ -0,0 +1,50 @@ +:auto-reconnect-link: <> +:client-options-link: <> +:client-resources-link: <> + +:custom-commands-command-output-link: <> +:custom-commands-command-exec-model-link: <> + +[[advanced-usage]] +== Advanced usage + +[[client-resources]] +=== Configuring Client resources +include::{ext-doc}/Configuring-Client-resources.asciidoc[leveloffset=+2] + +[[client-options]] +=== Client Options +include::{ext-doc}/Client-Options.asciidoc[leveloffset=+2] + +[[ssl]] +=== SSL Connections +include::{ext-doc}/SSL-Connections.asciidoc[leveloffset=+2] + +[[native-transports]] +=== Native Transports +include::{ext-doc}/Native-Transports.asciidoc[leveloffset=+2] + +[[unix-domain-sockets]] +=== Unix Domain Sockets +include::{ext-doc}/Unix-Domain-Sockets.asciidoc[leveloffset=+2] + +[[streaming-api]] +=== Streaming API 
+include::{ext-doc}/Streaming-API.asciidoc[leveloffset=+1] + +[[events]] +=== Events +include::{ext-doc}/Connection-Events.asciidoc[leveloffset=+2] + +=== Pipelining and command flushing +include::{ext-doc}/Pipelining-and-command-flushing.asciidoc[leveloffset=+2] + +=== Connection Pooling +include::{ext-doc}/Connection-Pooling-5.1.asciidoc[leveloffset=+2] + +=== Custom commands +include::{ext-doc}/Custom-commands%2C-outputs-and-command-mechanics.asciidoc[leveloffset=+2] + +[[command-execution-reliability]] +=== Command execution reliability +include::{ext-doc}/Command-execution-reliability.asciidoc[leveloffset=+2] diff --git a/src/main/asciidoc/faq.asciidoc b/src/main/asciidoc/faq.asciidoc new file mode 100644 index 0000000000..1793a75438 --- /dev/null +++ b/src/main/asciidoc/faq.asciidoc @@ -0,0 +1,5 @@ +:client-options-link: <> + +[[faq]] +== Frequently Asked Questions +include::{ext-doc}/Frequently-Asked-Questions.asciidoc[leveloffset=+1] diff --git a/src/main/asciidoc/getting-started.asciidoc b/src/main/asciidoc/getting-started.asciidoc new file mode 100644 index 0000000000..6cbfcec23a --- /dev/null +++ b/src/main/asciidoc/getting-started.asciidoc @@ -0,0 +1,34 @@ +:ssl-link: <> +:uds-link: <> +:native-transport-link: <> +:basic-synchronous-link: <> +:asynchronous-api-link: <> +:reactive-api-link: <> +:asynchronous-link: <> +:reactive-link: <> + +[[getting-started]] +== Getting Started +include::{ext-doc}/Getting-started-%285.0%29.asciidoc[leveloffset=+1] + +[[connecting-redis]] +== Connecting Redis +include::{ext-doc}/Redis-URI-and-connection-details.asciidoc[] + +[[basic-usage]] +=== Basic Usage +include::{ext-doc}/Basic-usage.asciidoc[leveloffset=+1] + +[[asynchronous-api]] +=== Asynchronous API +include::{ext-doc}/Asynchronous-API.asciidoc[leveloffset=+2] + +[[reactive-api]] +=== Reactive API +include::{ext-doc}/Reactive-API-%285.0%29.asciidoc[leveloffset=+2] + +=== Publish/Subscribe +include::{ext-doc}/Pub-Sub.asciidoc[leveloffset=+1] + +=== Transactions/Multi +include::{ext-doc}/Transactions.asciidoc[leveloffset=+1] diff --git a/src/main/asciidoc/ha-sharding.asciidoc b/src/main/asciidoc/ha-sharding.asciidoc new file mode 100644 index 0000000000..2036343b95 --- /dev/null +++ b/src/main/asciidoc/ha-sharding.asciidoc @@ -0,0 +1,27 @@ +:redis-sentinel-link: <> +:master-replica-api-link: <> +:cco-up-to-5-times: <> +:cco-link: <> +:cco-periodic-link: <> +:cco-adaptive-link: <> + +[[ha-sharding]] +== High-Availability and Sharding + +[[master-slave]] +[[master-replica]] +=== Master/Replica +include::{ext-doc}/Master-Replica.asciidoc[leveloffset=+2] + +[[redis-sentinel]] +=== Redis Sentinel +include::{ext-doc}/Redis-Sentinel.asciidoc[leveloffset=+2] + +[[redis-cluster]] +=== Redis Cluster +include::{ext-doc}/Redis-Cluster.asciidoc[leveloffset=+2] + +[[readfrom-settings]] +=== ReadFrom Settings +include::{ext-doc}/ReadFrom-Settings.asciidoc[leveloffset=+2] + diff --git a/src/main/asciidoc/images/apple-touch-icon-144.png b/src/main/asciidoc/images/apple-touch-icon-144.png new file mode 100644 index 0000000000..8adb9fff09 Binary files /dev/null and b/src/main/asciidoc/images/apple-touch-icon-144.png differ diff --git a/src/main/asciidoc/images/apple-touch-icon-180.png b/src/main/asciidoc/images/apple-touch-icon-180.png new file mode 100644 index 0000000000..d0928b5316 Binary files /dev/null and b/src/main/asciidoc/images/apple-touch-icon-180.png differ diff --git a/src/main/asciidoc/images/lettuce-green-text@2x.png b/src/main/asciidoc/images/lettuce-green-text@2x.png new file mode 100644 
index 0000000000..adff15525a Binary files /dev/null and b/src/main/asciidoc/images/lettuce-green-text@2x.png differ diff --git a/src/main/asciidoc/images/touch-icon-192x192.png b/src/main/asciidoc/images/touch-icon-192x192.png new file mode 100644 index 0000000000..450da57d8a Binary files /dev/null and b/src/main/asciidoc/images/touch-icon-192x192.png differ diff --git a/src/main/asciidoc/index-docinfo.html b/src/main/asciidoc/index-docinfo.html new file mode 100644 index 0000000000..d47a3c38a1 --- /dev/null +++ b/src/main/asciidoc/index-docinfo.html @@ -0,0 +1,5 @@ + + + + + \ No newline at end of file diff --git a/src/main/asciidoc/index.asciidoc b/src/main/asciidoc/index.asciidoc new file mode 100644 index 0000000000..d469f672fa --- /dev/null +++ b/src/main/asciidoc/index.asciidoc @@ -0,0 +1,33 @@ += Lettuce Reference Guide +Mark Paluch ; +:ext-doc: https://raw.githubusercontent.com/wiki/lettuce-io/lettuce-core +{version} +:doctype: book +:icons: font +:toc: +:sectnums: +:sectanchors: +:docinfo: +ifdef::backend-pdf[] +:title-logo-image: images/lettuce-green-text@2x.png +endif::[] + +ifdef::backend-html5[] +image::lettuce-green-text@2x.png[width=50%,link=https://lettuce.io] +endif::[] + +include::overview.asciidoc[] + +include::new-features.adoc[leveloffset=+1] + +include::getting-started.asciidoc[] + +include::ha-sharding.asciidoc[] + +include::redis-command-interfaces.asciidoc[] + +include::advanced-usage.asciidoc[] + +include::integration-extension.asciidoc[] + +include::faq.asciidoc[] diff --git a/src/main/asciidoc/integration-extension.asciidoc b/src/main/asciidoc/integration-extension.asciidoc new file mode 100644 index 0000000000..b66fccd3cc --- /dev/null +++ b/src/main/asciidoc/integration-extension.asciidoc @@ -0,0 +1,15 @@ +[[integration-extension]] +== Integration and Extension + +[[codecs]] +=== Codecs +include::{ext-doc}/Codecs.asciidoc[leveloffset=+1] + +[[cdi-support]] +=== CDI Support +include::{ext-doc}/CDI-Support.asciidoc[leveloffset=+1] + +[[spring-support]] +=== Spring Support +include::{ext-doc}/Spring-Support.asciidoc[leveloffset=+1] + diff --git a/src/main/asciidoc/new-features.adoc b/src/main/asciidoc/new-features.adoc new file mode 100644 index 0000000000..e26e4cf222 --- /dev/null +++ b/src/main/asciidoc/new-features.adoc @@ -0,0 +1,58 @@ +[[new-features]] += New & Noteworthy + +[[new-features.6-0-0]] +== What's new in Lettuce 6.0 + +* Support for RESP3 usage with Redis 6 along with RESP2/RESP3 handshake and protocol version discovery. +* ACL authentication using username and password or password-only authentication. +* Cluster topology refresh is now non-blocking. +* RxJava 3 support. +* Refined Scripting API accepting the Lua script either as `byte[]` or `String`. +* Connection and Queue failures now no longer throw an exception but properly associate the failure with the Future handle. +* Removal of deprecated API including timeout methods accepting `TimeUnit`. +Use methods accepting `Duration` instead. +* Lots of internal refinements. + +[[new-features.5-3-0]] +== What's new in Lettuce 5.3 + +* Improved SSL configuration supporting Cipher suite selection and PEM-encoded certificates. +* Fixed method signature for `randomkey()`. +* Un-deprecated `ClientOptions.pingBeforeActivateConnection` to allow connection verification during connection handshake. + +[[new-features.5-2-0]] +== What's new in Lettuce 5.2 + +* Allow randomization of read candidates using Redis Cluster. +* SSL support for Redis Sentinel. 
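As a rough companion to the Lettuce 6 items listed above (RESP2/RESP3 handshake and ACL authentication), the sketch below shows one way such a connection could be set up. The URI credentials are placeholders, and pinning the protocol version through `ClientOptions` is an assumed illustration rather than something prescribed by these notes; left unconfigured, Lettuce discovers the protocol version during the handshake as described above.

```java
import io.lettuce.core.ClientOptions;
import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisURI;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.protocol.ProtocolVersion;

public class Redis6ConnectSketch {

    public static void main(String[] args) {

        // ACL authentication: username and password embedded in the URI (placeholders).
        RedisURI uri = RedisURI.create("redis://app-user:secret@localhost:6379/0");

        RedisClient client = RedisClient.create(uri);

        // Optional: pin the protocol version explicitly instead of relying on
        // handshake-based discovery (assumed API usage for illustration).
        client.setOptions(ClientOptions.builder()
                .protocolVersion(ProtocolVersion.RESP3)
                .build());

        try (StatefulRedisConnection<String, String> connection = client.connect()) {
            System.out.println(connection.sync().ping()); // expect "PONG"
        } finally {
            client.shutdown();
        }
    }
}
```

Omitting the protocol option keeps the default behavior of negotiating RESP2 or RESP3 with the server during the connection handshake.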
+ +[[new-features.5-1-0]] +== What's new in Lettuce 5.1 + +* Add support for `ZPOPMIN`, `ZPOPMAX`, `BZPOPMIN`, `BZPOPMAX` commands. +* Add support for Redis Command Tracing through Brave, see <>. +* Add support for https://redis.io/topics/streams-intro[Redis Streams]. +* Asynchronous `connect()` for Master/Replica connections. +* <> through `AsyncConnectionPoolSupport` and `AsyncPool`. +* Dedicated exceptions for Redis `LOADING`, `BUSY`, and `NOSCRIPT` responses. +* Commands in at-most-once mode (auto-reconnect disabled) are now canceled already on disconnect. +* Global command timeouts (also for reactive and asynchronous API usage) configurable through <>. +* Host and port mappers for Lettuce usage behind connection tunnels/proxies through `SocketAddressResolver`, see <>. +* `SCRIPT LOAD` dispatch to all cluster nodes when issued through `RedisAdvancedClusterCommands`. +* Reactive `ScanStream` to iterate over the keyspace using `SCAN` commands. +* Transactions using Master/Replica connections are bound to the master node. + +[[new-features.5-0-0]] +== What's new in Lettuce 5.0 + +* New artifact coordinates: `io.lettuce:lettuce-core` and packages moved from `com.lambdaworks.redis` to `io.lettuce.core`. +* <> now Reactive Streams-based using https://projectreactor.io/[Project Reactor]. +* <> supporting dynamic command invocation and Redis Modules. +* Enhanced, immutable Key-Value objects. +* Asynchronous Cluster connect. +* Native transport support for Kqueue on macOS systems. +* Removal of support for Guava. +* Removal of deprecated `RedisConnection` and `RedisAsyncConnection` interfaces. +* Java 9 compatibility. +* HTML and PDF reference documentation along with a new project website: https://lettuce.io. diff --git a/src/main/asciidoc/overview.asciidoc b/src/main/asciidoc/overview.asciidoc new file mode 100644 index 0000000000..fed5868d26 --- /dev/null +++ b/src/main/asciidoc/overview.asciidoc @@ -0,0 +1,75 @@ +[[overview]] +== Overview + +This document is the reference guide for Lettuce. It explains how to use Lettuce, its concepts, semantics, and the syntax. + +You can read this reference guide in a linear fashion, or you can skip sections if something does not interest you. + +This section provides some basic introduction to Redis. The rest of the document refers only to Lettuce features and assumes the user is familiar with Redis concepts. + +[[overview.redis]] +=== Knowing Redis +NoSQL stores have taken the storage world by storm. It is a vast domain with a plethora of solutions, terms and patterns (to make things worse, even the term itself has multiple http://www.google.com/search?q=nosql+acronym[meanings]). While some of the principles are common, it is crucial that the user is familiar to some degree with Redis. The best way to get acquainted with these solutions is to read and follow their documentation - it usually doesn't take more than 5-10 minutes to go through them, and if you are coming from an RDBMS-only background, these exercises can often be an eye opener. + +The jumping-off point for learning about Redis is http://www.redis.io/[redis.io]. Here is a list of other useful resources: + +* The http://try.redis.io/[interactive tutorial] introduces Redis. +* The http://redis.io/commands[command reference] explains Redis commands and contains links to getting started guides, reference documentation and tutorials.
+ +=== Project Reactor + +https://projectreactor.io[Reactor] is a highly optimized reactive library for building efficient, non-blocking +applications on the JVM based on the https://github.com/reactive-streams/reactive-streams-jvm[Reactive Streams Specification]. +Reactor-based applications can sustain very high throughput message rates and operate with a very low memory footprint, +making them suitable for building efficient event-driven applications using the microservices architecture. + +Reactor implements two publishers, https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Flux.html[Flux] and +https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Mono.html[Mono], both of which support non-blocking back-pressure. +This enables exchange of data between threads with well-defined memory usage, avoiding unnecessary intermediate buffering or blocking. + +=== Non-blocking API for Redis + +Lettuce is a scalable, thread-safe Redis client based on http://netty.io[netty] and Reactor. Lettuce provides <>, <> and <> APIs to interact with Redis. + +[[overview.requirements]] +=== Requirements + +Lettuce 4.x and 5.x binaries require JDK level 8.0 and above. + +In terms of http://redis.io/[Redis], at least version 2.6 is required. + +=== Additional Help Resources + +Learning a new framework is not always straightforward. In this section, we try to provide what we think is an easy-to-follow guide for starting with Lettuce. However, if you encounter issues or you are just looking for advice, feel free to use one of the links below: + +[[overview.support]] +==== Support + +There are a few support options available: + + * The http://stackoverflow.com/questions/tagged/lettuce[lettuce tag on Stack Overflow] is for all Lettuce users to share information and help each other. Note that registration is needed *only* for posting. + * Get in touch with the community on https://gitter.im/lettuce-io/Lobby[Gitter]. + * Google Group: https://groups.google.com/d/forum/lettuce-redis-client-users[lettuce-redis-client-users] or mailto:lettuce-redis-client-users@googlegroups.com[lettuce-redis-client-users@googlegroups.com]. + * Report bugs (or ask questions) in GitHub issues: https://github.com/lettuce-io/lettuce-core/issues. + +[[overview.development]] +==== Following Development + +For information on the Lettuce source code repository, nightly builds and snapshot artifacts, please see the https://lettuce.io[Lettuce homepage]. You can help make Lettuce best serve the needs of the community by interacting with developers on http://stackoverflow.com/questions/tagged/lettuce[Stack Overflow]. To follow developer activity, look for the mailing list information on the https://lettuce.io[Lettuce homepage]. If you encounter a bug or want to suggest an improvement, please create a ticket in the Lettuce issue https://github.com/lettuce-io/lettuce-core/issues[tracker]. + +==== Project Metadata + +* Version Control – https://github.com/lettuce-io/lettuce-core +* Releases and Binary Packages – https://github.com/lettuce-io/lettuce-core/releases +* Issue tracker – https://github.com/lettuce-io/lettuce-core/issues +* Release repository – https://repo1.maven.org/maven2/ (Maven Central) +* Snapshot repository – https://oss.sonatype.org/content/repositories/snapshots/ (OSS Sonatype Snapshots) + +=== Where to go from here + * Head to <> if you feel like jumping straight into the code. + * Go to <> for Master/Replica, Redis Sentinel and Redis Cluster topics.
+ * In order to dig deeper into the core features of Reactor: + ** If you’re looking for client configuration options, performance related behavior and how to use various transports, go to <>. + ** See <> for extending Lettuce with codecs or integrate it in your CDI/Spring application. + ** You want to know more about *at-least-once* and *at-most-once*? Take a look into <>. + diff --git a/src/main/asciidoc/redis-command-interfaces.asciidoc b/src/main/asciidoc/redis-command-interfaces.asciidoc new file mode 100644 index 0000000000..ae5b750bf6 --- /dev/null +++ b/src/main/asciidoc/redis-command-interfaces.asciidoc @@ -0,0 +1,4 @@ + +[[redis-command-interfaces]] +include::{ext-doc}/Redis-Command-Interfaces.asciidoc[leveloffset=+1] + diff --git a/src/main/asciidoc/stylesheets/golo.css b/src/main/asciidoc/stylesheets/golo.css new file mode 100644 index 0000000000..5114f5e871 --- /dev/null +++ b/src/main/asciidoc/stylesheets/golo.css @@ -0,0 +1,2005 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +@import url('https://fonts.googleapis.com/css?family=Raleway:300:400:700'); +@import url(https://cdnjs.cloudflare.com/ajax/libs/semantic-ui/1.6.2/semantic.min.css); + + +#header .details br+span.author:before { + content: "\00a0\0026\00a0"; + color: rgba(0,0,0,.85); +} + +#header .details br+span.email:before { + content: "("; +} + +#header .details br+span.email:after { + content: ")"; +} + +/*! normalize.css v2.1.2 | MIT License | git.io/normalize */ +/* ========================================================================== HTML5 display definitions ========================================================================== */ +/** Correct `block` display not defined in IE 8/9. */ +@import url(https://cdnjs.cloudflare.com/ajax/libs/font-awesome/3.2.1/css/font-awesome.css); + +article, aside, details, figcaption, figure, footer, header, hgroup, main, nav, section, summary { + display: block; +} + +/** Correct `inline-block` display not defined in IE 8/9. */ +audio, canvas, video { + display: inline-block; +} + +/** Prevent modern browsers from displaying `audio` without controls. Remove excess height in iOS 5 devices. */ +audio:not([controls]) { + display: none; + height: 0; +} + +/** Address `[hidden]` styling not present in IE 8/9. Hide the `template` element in IE, Safari, and Firefox < 22. */ +[hidden], template { + display: none; +} + +script { + display: none !important; +} + +/* ========================================================================== Base ========================================================================== */ +/** 1. Set default font family to sans-serif. 2. Prevent iOS text size adjust after orientation change, without disabling user zoom. */ +html { + font-family: sans-serif; /* 1 */ + -ms-text-size-adjust: 100%; /* 2 */ + -webkit-text-size-adjust: 100%; /* 2 */ +} + +/** Remove default margin. 
*/ +body { + margin: 0; +} + +/* ========================================================================== Links ========================================================================== */ +/** Remove the gray background color from active links in IE 10. */ +a { + background: transparent; +} + +/** Address `outline` inconsistency between Chrome and other browsers. */ +a:focus { + outline: thin dotted; +} + +/** Improve readability when focused and also mouse hovered in all browsers. */ +a:active, a:hover { + outline: 0; +} + +/* ========================================================================== Typography ========================================================================== */ +/** Address variable `h1` font-size and margin within `section` and `article` contexts in Firefox 4+, Safari 5, and Chrome. */ +h1 { + font-size: 2em; + margin: 1.2em 0; +} + +/** Address styling not present in IE 8/9, Safari 5, and Chrome. */ +abbr[title] { + border-bottom: 1px dotted; +} + +/** Address style set to `bolder` in Firefox 4+, Safari 5, and Chrome. */ +b, strong { + font-weight: bold; +} + +/** Address styling not present in Safari 5 and Chrome. */ +dfn { + font-style: italic; +} + +/** Address differences between Firefox and other browsers. */ +hr { + -moz-box-sizing: content-box; + box-sizing: content-box; + height: 0; +} + +/** Address styling not present in IE 8/9. */ +mark { + background: #ff0; + color: #000; +} + +/** Correct font family set oddly in Safari 5 and Chrome. */ +code, kbd, pre, samp { + font-family: Menlo, Monaco, 'Liberation Mono', Consolas, monospace; + font-size: 1em; +} + +/** Improve readability of pre-formatted text in all browsers. */ +pre { + white-space: pre-wrap; +} + +/** Set consistent quote types. */ +q { + quotes: "\201C" "\201D" "\2018" "\2019"; +} + +/** Address inconsistent and variable font size in all browsers. */ +small { + font-size: 80%; +} + +/** Prevent `sub` and `sup` affecting `line-height` in all browsers. */ +sub, sup { + font-size: 75%; + line-height: 0; + position: relative; + vertical-align: baseline; +} + +sup { + top: -0.5em; +} + +sub { + bottom: -0.25em; +} + +/* ========================================================================== Embedded content ========================================================================== */ +/** Remove border when inside `a` element in IE 8/9. */ +img { + border: 0; +} + +/** Correct overflow displayed oddly in IE 9. */ +svg:not(:root) { + overflow: hidden; +} + +/* ========================================================================== Figures ========================================================================== */ +/** Address margin not present in IE 8/9 and Safari 5. */ +figure { + margin: 0; +} + +/* ========================================================================== Forms ========================================================================== */ +/** Define consistent border, margin, and padding. */ +fieldset { + border: 1px solid #c0c0c0; + margin: 0 2px; + padding: 0.35em 0.625em 0.75em; +} + +/** 1. Correct `color` not being inherited in IE 8/9. 2. Remove padding so people aren't caught out if they zero out fieldsets. */ +legend { + border: 0; /* 1 */ + padding: 0; /* 2 */ +} + +/** 1. Correct font family not being inherited in all browsers. 2. Correct font size not being inherited in all browsers. 3. Address margins set differently in Firefox 4+, Safari 5, and Chrome. 
*/ +button, input, select, textarea { + font-family: inherit; /* 1 */ + font-size: 100%; /* 2 */ + margin: 0; /* 3 */ +} + +/** Address Firefox 4+ setting `line-height` on `input` using `!important` in the UA stylesheet. */ +button, input { + line-height: normal; +} + +/** Address inconsistent `text-transform` inheritance for `button` and `select`. All other form control elements do not inherit `text-transform` values. Correct `button` style inheritance in Chrome, Safari 5+, and IE 8+. Correct `select` style inheritance in Firefox 4+ and Opera. */ +button, select { + text-transform: none; +} + +/** 1. Avoid the WebKit bug in Android 4.0.* where (2) destroys native `audio` and `video` controls. 2. Correct inability to style clickable `input` types in iOS. 3. Improve usability and consistency of cursor style between image-type `input` and others. */ +button, html input[type="button"], input[type="reset"], input[type="submit"] { + -webkit-appearance: button; /* 2 */ + cursor: pointer; /* 3 */ +} + +/** Re-set default cursor for disabled elements. */ +button[disabled], html input[disabled] { + cursor: default; +} + +/** 1. Address box sizing set to `content-box` in IE 8/9. 2. Remove excess padding in IE 8/9. */ +input[type="checkbox"], input[type="radio"] { + box-sizing: border-box; /* 1 */ + padding: 0; /* 2 */ +} + +/** 1. Address `appearance` set to `searchfield` in Safari 5 and Chrome. 2. Address `box-sizing` set to `border-box` in Safari 5 and Chrome (include `-moz` to future-proof). */ +input[type="search"] { + -webkit-appearance: textfield; /* 1 */ + -moz-box-sizing: content-box; + -webkit-box-sizing: content-box; /* 2 */ + box-sizing: content-box; +} + +/** Remove inner padding and search cancel button in Safari 5 and Chrome on OS X. */ +input[type="search"]::-webkit-search-cancel-button, input[type="search"]::-webkit-search-decoration { + -webkit-appearance: none; +} + +/** Remove inner padding and border in Firefox 4+. */ +button::-moz-focus-inner, input::-moz-focus-inner { + border: 0; + padding: 0; +} + +/** 1. Remove default vertical scrollbar in IE 8/9. 2. Improve readability and alignment in all browsers. */ +textarea { + overflow: auto; /* 1 */ + vertical-align: top; /* 2 */ +} + +/* ========================================================================== Tables ========================================================================== */ +/** Remove most spacing between table cells. 
*/ +table { + border-collapse: collapse; + border-spacing: 0; +} + +meta.foundation-mq-small { + font-family: "only screen and (min-width: 768px)"; + width: 768px; +} + +meta.foundation-mq-medium { + font-family: "only screen and (min-width:1280px)"; + width: 1280px; +} + +meta.foundation-mq-large { + font-family: "only screen and (min-width:1440px)"; + width: 1440px; +} + +*, *:before, *:after { + -moz-box-sizing: border-box; + -webkit-box-sizing: border-box; + box-sizing: border-box; +} + +html, body { + font-size: 100%; +} + +body { + background: white; + color: #34302d; + padding: 0; + margin: 0; + font-family: "Helvetica Neue", "Helvetica", Helvetica, Arial, sans-serif; + font-weight: 400; + font-style: normal; + line-height: 1.8em; + position: relative; + cursor: auto; +} + +#content, #content p { + line-height: 1.8em; + margin-top: 1.5em; +} + +#content li p { + margin-top: 0.25em; +} + +a:hover { + cursor: pointer; +} + +img, object, embed { + max-width: 100%; + height: auto; +} + +object, embed { + height: 100%; +} + +img { + -ms-interpolation-mode: bicubic; +} + +#map_canvas img, #map_canvas embed, #map_canvas object, .map_canvas img, .map_canvas embed, .map_canvas object { + max-width: none !important; +} + +.left { + float: left !important; +} + +.right { + float: right !important; +} + +.text-left { + text-align: left !important; +} + +.text-right { + text-align: right !important; +} + +.text-center { + text-align: center !important; +} + +.text-justify { + text-align: justify !important; +} + +.hide { + display: none; +} + +.antialiased, body { + -webkit-font-smoothing: antialiased; +} + +img { + display: inline-block; + vertical-align: middle; +} + +textarea { + height: auto; + min-height: 50px; +} + +select { + width: 100%; +} + +p.lead, .paragraph.lead > p, #preamble > .sectionbody > .paragraph:first-of-type p { + font-size: 1.21875em; +} + +.subheader, #content #toctitle, .admonitionblock td.content > .title, .exampleblock > .title, .imageblock > .title, .listingblock > .title, .literalblock > .title, .mathblock > .title, .openblock > .title, .paragraph > .title, .quoteblock > .title, .sidebarblock > .title, .tableblock > .title, .verseblock > .title, .videoblock > .title, .dlist > .title, .olist > .title, .ulist > .title, .qlist > .title, .hdlist > .title, .tableblock > caption { + color: #6db33f; + font-weight: 300; + margin-top: 0.2em; + margin-bottom: 0.5em; +} + +/* Typography resets */ +div, dl, dt, dd, ul, ol, li, h1, h2, h3, #toctitle, .sidebarblock > .content > .title, h4, h5, h6, pre, form, p, blockquote, th, td { + margin: 0; + padding: 0; + direction: ltr; +} + +/* Default Link Styles */ +a { + color: #6db33f; + line-height: inherit; + text-decoration: none; +} + +a:hover, a:focus { + color: #6db33f; + text-decoration: underline; +} + +a img { + border: none; +} + +/* Default paragraph styles */ +p { + font-family: inherit; + font-weight: normal; + font-size: 1em; + margin-bottom: 1.25em; + text-rendering: optimizeLegibility; +} + +p aside { + font-size: 0.875em; + font-style: italic; +} + +/* Default header styles */ +h1, h2, h3, #toctitle, .sidebarblock > .content > .title, h4, h5, h6 { + font-family: "Raleway", Arial, sans-serif; + font-weight: normal; + font-style: normal; + color: #34302d; + text-rendering: optimizeLegibility; + margin-top: 1.6em; + margin-bottom: 0.6em; +} + +h1 small, h2 small, h3 small, #toctitle small, .sidebarblock > .content > .title small, h4 small, h5 small, h6 small { + font-size: 60%; + color: #6db33f; + line-height: 0; +} + +h1 
{ + font-size: 2.125em; + line-height: 2em; +} + +h2 { + font-size: 1.6875em; + line-height: 1.5em; +} + +h3, #toctitle, .sidebarblock > .content > .title { + font-size: 1.375em; + line-height: 1.3em; +} + +h4 { + font-size: 1.125em; +} + +h5 { + font-size: 1.125em; +} + +h6 { + font-size: 1em; +} + +hr { + border: solid #dcd2c9; + border-width: 1px 0 0; + clear: both; + margin: 1.25em 0 1.1875em; + height: 0; +} + +/* Helpful Typography Defaults */ +em, i { + font-style: italic; + line-height: inherit; +} + +strong, b { + font-weight: bold; + line-height: inherit; +} + +small { + font-size: 60%; + line-height: inherit; +} + +code { + font-family: Consolas, "Liberation Mono", Courier, monospace; + font-weight: bold; + color: #305CB5; +} + +/* Lists */ +ul, ol, dl { + font-size: 1em; + margin-bottom: 1.25em; + list-style-position: outside; + font-family: inherit; +} + +ul, ol { + margin-left: 1.5em; +} + +ul.no-bullet, ol.no-bullet { + margin-left: 1.5em; +} + +/* Unordered Lists */ +ul li ul, ul li ol { + margin-left: 1.25em; + margin-bottom: 0; + font-size: 1em; /* Override nested font-size change */ +} + +ul.square li ul, ul.circle li ul, ul.disc li ul { + list-style: inherit; +} + +ul.square { + list-style-type: square; +} + +ul.circle { + list-style-type: circle; +} + +ul.disc { + list-style-type: disc; +} + +ul.no-bullet { + list-style: none; +} + +/* Ordered Lists */ +ol li ul, ol li ol { + margin-left: 1.25em; + margin-bottom: 0; +} + +/* Definition Lists */ +dl dt { + margin-bottom: 0.3125em; + font-weight: bold; +} + +dl dd { + margin-bottom: 1.25em; +} + +/* Abbreviations */ +abbr, acronym { + text-transform: uppercase; + font-size: 90%; + color: #34302d; + border-bottom: 1px dotted #dddddd; + cursor: help; +} + +abbr { + text-transform: none; +} + +/* Blockquotes */ +blockquote { + margin: 0 0 1.25em; + padding: 0.5625em 1.25em 0 1.1875em; + border-left: 1px solid #dddddd; +} + +blockquote cite { + display: block; + font-size: 0.8125em; + color: #655241; +} + +blockquote cite:before { + content: "\2014 \0020"; +} + +blockquote cite a, blockquote cite a:visited { + color: #655241; +} + +blockquote, blockquote p { + color: #34302d; +} + +/* Microformats */ +.vcard { + display: inline-block; + margin: 0 0 1.25em 0; + border: 1px solid #dddddd; + padding: 0.625em 0.75em; +} + +.vcard li { + margin: 0; + display: block; +} + +.vcard .fn { + font-weight: bold; + font-size: 0.9375em; +} + +.vevent .summary { + font-weight: bold; +} + +.vevent abbr { + cursor: auto; + text-decoration: none; + font-weight: bold; + border: none; + padding: 0 0.0625em; +} + +@media only screen and (min-width: 768px) { + h1, h2, h3, #toctitle, .sidebarblock > .content > .title, h4, h5, h6 { + } + + h1 { + font-size: 2.75em; + } + + h2 { + font-size: 2.3125em; + } + + h3, #toctitle, .sidebarblock > .content > .title { + font-size: 1.6875em; + } + + h4 { + font-size: 1.4375em; + } +} + +/* Print styles. 
Inlined to avoid required HTTP connection: www.phpied.com/delay-loading-your-print-css/ Credit to Paul Irish and HTML5 Boilerplate (html5boilerplate.com) +*/ +.print-only { + display: none !important; +} + +@media print { + * { + background: transparent !important; + color: #000 !important; /* Black prints faster: h5bp.com/s */ + box-shadow: none !important; + text-shadow: none !important; + } + + a, a:visited { + text-decoration: underline; + } + + a[href]:after { + content: " (" attr(href) ")"; + } + + abbr[title]:after { + content: " (" attr(title) ")"; + } + + .ir a:after, a[href^="javascript:"]:after, a[href^="#"]:after { + content: ""; + } + + pre, blockquote { + border: 1px solid #999; + page-break-inside: avoid; + } + + thead { + display: table-header-group; /* h5bp.com/t */ + } + + tr, img { + page-break-inside: avoid; + } + + img { + max-width: 100% !important; + } + + @page { + margin: 0.5cm; + } + + p, h2, h3, #toctitle, .sidebarblock > .content > .title { + orphans: 3; + widows: 3; + } + + h2, h3, #toctitle, .sidebarblock > .content > .title { + page-break-after: avoid; + } + + .hide-on-print { + display: none !important; + } + + .print-only { + display: block !important; + } + + .hide-for-print { + display: none !important; + } + + .show-for-print { + display: inherit !important; + } +} + +/* Tables */ +table { + background: white; + margin-bottom: 1.25em; + border: solid 1px #34302d; +} + +table thead, table tfoot { + font-weight: bold; +} + +table thead tr th, table thead tr td, table tfoot tr th, table tfoot tr td { + padding: 0.5em 0.625em 0.625em; + font-size: inherit; + color: #34302d; + text-align: left; +} + +table thead tr th { + color: white; + background: #34302d; +} + +table tr th, table tr td { + padding: 0.5625em 0.625em; + font-size: inherit; + color: #34302d; + border: 0 none; +} + +table tr.even, table tr.alt, table tr:nth-of-type(even) { + background: #f2F2F2; +} + +table thead tr th, table tfoot tr th, table tbody tr td, table tr td, table tfoot tr td { + display: table-cell; +} + +.clearfix:before, .clearfix:after, .float-group:before, .float-group:after { + content: " "; + display: table; +} + +.clearfix:after, .float-group:after { + clear: both; +} + +*:not(pre) > code { + font-size: inherit; + padding: 0; + white-space: nowrap; + background-color: inherit; + border: 0 solid #dddddd; + -webkit-border-radius: 6px; + border-radius: 6px; + text-shadow: none; +} + +pre, pre > code { + color: black; + font-family: monospace, serif; + font-weight: normal; +} + +.keyseq { + color: #774417; +} + +kbd:not(.keyseq) { + display: inline-block; + color: #211306; + font-size: 0.75em; + background-color: #F7F7F7; + border: 1px solid #ccc; + -webkit-border-radius: 3px; + border-radius: 3px; + -webkit-box-shadow: 0 1px 0 rgba(0, 0, 0, 0.2), 0 0 0 2px white inset; + box-shadow: 0 1px 0 rgba(0, 0, 0, 0.2), 0 0 0 2px white inset; + margin: -0.15em 0.15em 0 0.15em; + padding: 0.2em 0.6em 0.2em 0.5em; + vertical-align: middle; + white-space: nowrap; +} + +.keyseq kbd:first-child { + margin-left: 0; +} + +.keyseq kbd:last-child { + margin-right: 0; +} + +.menuseq, .menu { + color: black; +} + +b.button:before, b.button:after { + position: relative; + top: -1px; + font-weight: normal; +} + +b.button:before { + content: "["; + padding: 0 3px 0 2px; +} + +b.button:after { + content: "]"; + padding: 0 2px 0 3px; +} + +p a > code:hover { + color: #541312; +} + +#header, #content, #footnotes, #footer { + width: 100%; + margin-left: auto; + margin-right: auto; + margin-top: 0; + 
margin-bottom: 0; + max-width: 62.5em; + *zoom: 1; + position: relative; + padding-left: 4em; + padding-right: 4em; +} + +#header:before, #header:after, #content:before, #content:after, #footnotes:before, #footnotes:after, #footer:before, #footer:after { + content: " "; + display: table; +} + +#header:after, #content:after, #footnotes:after, #footer:after { + clear: both; +} + +#header { + margin-bottom: 2.5em; +} + +#header > h1 { + color: #34302d; + font-weight: 400; +} + +#header span { + color: #34302d; +} + +#header #revnumber { + text-transform: capitalize; +} + +#header br { + display: none; +} + +#header br + span { +} + +#revdate { + display: block; +} + +#toc { + border-bottom: 1px solid #e6dfd8; + padding-bottom: 1.25em; +} + +#toc > ul { + margin-left: 0.25em; +} + +#toc ul.sectlevel0 > li > a { + font-style: italic; +} + +#toc ul.sectlevel0 ul.sectlevel1 { + margin-left: 0; + margin-top: 0.5em; + margin-bottom: 0.5em; +} + +#toc ul { + list-style-type: none; +} + +#toctitle { + color: #385dbd; +} + +@media only screen and (min-width: 768px) { + body.toc2 { + padding-left: 15em; + padding-right: 0; + } + + #toc.toc2 { + position: fixed; + width: 15em; + left: 0; + border-bottom: 0; + z-index: 1000; + padding: 1em; + height: 100%; + top: 0px; + background: #F1F1F1; + overflow: auto; + + -moz-transition-property: top; + -o-transition-property: top; + -webkit-transition-property: top; + transition-property: top; + -moz-transition-duration: 0.4s; + -o-transition-duration: 0.4s; + -webkit-transition-duration: 0.4s; + transition-duration: 0.4s; + } + + #reactor-header { + position: fixed; + top: -75px; + left: 0; + right: 0; + height: 75px; + + + -moz-transition-property: top; + -o-transition-property: top; + -webkit-transition-property: top; + transition-property: top; + -moz-transition-duration: 0.4s; + -o-transition-duration: 0.4s; + -webkit-transition-duration: 0.4s; + transition-duration: 0.4s; + } + + body.head-show #toc.toc2 { + top: 75px; + } + body.head-show #reactor-header { + top: 0; + } + + #toc.toc2 a { + color: #34302d; + font-family: "Raleway", Arial, sans-serif; + } + + #toc.toc2 #toctitle { + margin-top: 0; + font-size: 1.2em; + } + + #toc.toc2 > ul { + font-size: .90em; + } + + #toc.toc2 ul ul { + margin-left: 0; + padding-left: 0.4em; + } + + #toc.toc2 ul.sectlevel0 ul.sectlevel1 { + padding-left: 0; + margin-top: 0.5em; + margin-bottom: 0.5em; + } + + body.toc2.toc-right { + padding-left: 0; + padding-right: 15em; + } + + body.toc2.toc-right #toc.toc2 { + border-right: 0; + border-left: 1px solid #e6dfd8; + left: auto; + right: 0; + } +} + +@media only screen and (min-width: 1280px) { + body.toc2 { + padding-left: 20em; + padding-right: 0; + } + + #toc.toc2 { + width: 20em; + } + + #toc.toc2 #toctitle { + font-size: 1.375em; + } + + #toc.toc2 > ul { + font-size: 0.95em; + } + + #toc.toc2 ul ul { + padding-left: 1.25em; + } + + body.toc2.toc-right { + padding-left: 0; + padding-right: 20em; + } +} + +#content #toc { + border-style: solid; + border-width: 1px; + border-color: #d9d9d9; + margin-bottom: 1.25em; + padding: 1.25em; + background: #f2f2f2; + border-width: 0; + -webkit-border-radius: 6px; + border-radius: 6px; +} + +#content #toc > :first-child { + margin-top: 0; +} + +#content #toc > :last-child { + margin-bottom: 0; +} + +#content #toc a { + text-decoration: none; +} + +#content #toctitle { + font-weight: bold; + font-family: "Raleway", Arial, sans-serif; + font-size: 1em; + padding-left: 0.125em; +} + +#footer { + max-width: 100%; + background-color: 
white; + padding: 1.25em; + color: #CCC; + border-top: 3px solid #F1F1F1; +} + +#footer-text { + color: #444; + line-height: 1.44; +} + +.sect1 { + padding-bottom: 1.25em; +} + +.sect1 + .sect1 { + border-top: 1px solid #e6dfd8; +} + +#content h1 > a.anchor, h2 > a.anchor, h3 > a.anchor, #toctitle > a.anchor, .sidebarblock > .content > .title > a.anchor, h4 > a.anchor, h5 > a.anchor, h6 > a.anchor { + position: absolute; + width: 1em; + margin-left: -1em; + display: block; + text-decoration: none; + visibility: hidden; + text-align: center; + font-weight: normal; +} + +#content h1 > a.anchor:before, h2 > a.anchor:before, h3 > a.anchor:before, #toctitle > a.anchor:before, .sidebarblock > .content > .title > a.anchor:before, h4 > a.anchor:before, h5 > a.anchor:before, h6 > a.anchor:before { + content: '\00A7'; + font-size: .85em; + vertical-align: text-top; + display: block; + margin-top: 0.05em; +} + +#content h1:hover > a.anchor, #content h1 > a.anchor:hover, h2:hover > a.anchor, h2 > a.anchor:hover, h3:hover > a.anchor, #toctitle:hover > a.anchor, .sidebarblock > .content > .title:hover > a.anchor, h3 > a.anchor:hover, #toctitle > a.anchor:hover, .sidebarblock > .content > .title > a.anchor:hover, h4:hover > a.anchor, h4 > a.anchor:hover, h5:hover > a.anchor, h5 > a.anchor:hover, h6:hover > a.anchor, h6 > a.anchor:hover { + visibility: visible; +} + +#content h1 > a.link, h2 > a.link, h3 > a.link, #toctitle > a.link, .sidebarblock > .content > .title > a.link, h4 > a.link, h5 > a.link, h6 > a.link { + color: #34302d; + text-decoration: none; +} + +#content h1 > a.link:hover, h2 > a.link:hover, h3 > a.link:hover, #toctitle > a.link:hover, .sidebarblock > .content > .title > a.link:hover, h4 > a.link:hover, h5 > a.link:hover, h6 > a.link:hover { + color: #34302d; +} + +.imageblock, .literalblock, .listingblock, .mathblock, .verseblock, .videoblock { + margin-bottom: 1.25em; + margin-top: 1.25em; +} + +.admonitionblock td.content > .title, .exampleblock > .title, .imageblock > .title, .listingblock > .title, .literalblock > .title, .mathblock > .title, .openblock > .title, .paragraph > .title, .quoteblock > .title, .sidebarblock > .title, .tableblock > .title, .verseblock > .title, .videoblock > .title, .dlist > .title, .olist > .title, .ulist > .title, .qlist > .title, .hdlist > .title { + text-align: left; + font-weight: bold; +} + +.tableblock > caption { + text-align: left; + font-weight: bold; + white-space: nowrap; + overflow: visible; + max-width: 0; +} + +table.tableblock #preamble > .sectionbody > .paragraph:first-of-type p { + font-size: inherit; +} + +.admonitionblock > table { + border: 0; + background: none; + width: 100%; +} + +.admonitionblock > table td.icon { + text-align: center; + width: 80px; +} + +.admonitionblock > table td.icon img { + max-width: none; +} + +.admonitionblock > table td.icon .title { + font-weight: bold; + text-transform: uppercase; +} + +.admonitionblock > table td.content { + padding-left: 1.125em; + padding-right: 1.25em; + border-left: 1px solid #dcd2c9; + color: #34302d; +} + +.admonitionblock > table td.content > :last-child > :last-child { + margin-bottom: 0; +} + +.exampleblock > .content { + border-top: 1px solid #6db33f; + border-bottom: 1px solid #6db33f; + margin-bottom: 1.25em; + padding: 1.25em; + background: white; +} + +.exampleblock > .content > :first-child { + margin-top: 0; +} + +.exampleblock > .content > :last-child { + margin-bottom: 0; +} + +.exampleblock > .content h1, .exampleblock > .content h2, .exampleblock > .content h3, 
.exampleblock > .content #toctitle, .sidebarblock.exampleblock > .content > .title, .exampleblock > .content h4, .exampleblock > .content h5, .exampleblock > .content h6, .exampleblock > .content p { + color: #333333; +} + +.exampleblock > .content h1, .exampleblock > .content h2, .exampleblock > .content h3, .exampleblock > .content #toctitle, .sidebarblock.exampleblock > .content > .title, .exampleblock > .content h4, .exampleblock > .content h5, .exampleblock > .content h6 { + margin-bottom: 0.625em; +} + +.exampleblock > .content h1.subheader, .exampleblock > .content h2.subheader, .exampleblock > .content h3.subheader, .exampleblock > .content .subheader#toctitle, .sidebarblock.exampleblock > .content > .subheader.title, .exampleblock > .content h4.subheader, .exampleblock > .content h5.subheader, .exampleblock > .content h6.subheader { +} + +.exampleblock.result > .content { + -webkit-box-shadow: 0 1px 8px #d9d9d9; + box-shadow: 0 1px 8px #d9d9d9; +} + +.sidebarblock { + padding: 1.25em 2em; + background: #F1F1F1; + margin: 2em -2em; + +} + +.sidebarblock > :first-child { + margin-top: 0; +} + +.sidebarblock > :last-child { + margin-bottom: 0; +} + +.sidebarblock h1, .sidebarblock h2, .sidebarblock h3, .sidebarblock #toctitle, .sidebarblock > .content > .title, .sidebarblock h4, .sidebarblock h5, .sidebarblock h6, .sidebarblock p { + color: #333333; +} + +.sidebarblock h1, .sidebarblock h2, .sidebarblock h3, .sidebarblock #toctitle, .sidebarblock > .content > .title, .sidebarblock h4, .sidebarblock h5, .sidebarblock h6 { + margin-bottom: 0.625em; +} + +.sidebarblock h1.subheader, .sidebarblock h2.subheader, .sidebarblock h3.subheader, .sidebarblock .subheader#toctitle, .sidebarblock > .content > .subheader.title, .sidebarblock h4.subheader, .sidebarblock h5.subheader, .sidebarblock h6.subheader { +} + +.sidebarblock > .content > .title { + color: #6db33f; + margin-top: 0; + font-size: 1.2em; +} + +.exampleblock > .content > :last-child > :last-child, .exampleblock > .content .olist > ol > li:last-child > :last-child, .exampleblock > .content .ulist > ul > li:last-child > :last-child, .exampleblock > .content .qlist > ol > li:last-child > :last-child, .sidebarblock > .content > :last-child > :last-child, .sidebarblock > .content .olist > ol > li:last-child > :last-child, .sidebarblock > .content .ulist > ul > li:last-child > :last-child, .sidebarblock > .content .qlist > ol > li:last-child > :last-child { + margin-bottom: 0; +} + +.literalblock pre:not([class]), .listingblock pre:not([class]) { + background-color:#f2f2f2; +} + +.literalblock pre, .literalblock pre[class], .listingblock pre, .listingblock pre[class] { + border-width: 1px; + border-style: solid; + border-color: rgba(21, 35, 71, 0.1); + -webkit-border-radius: 6px; + border-radius: 6px; + padding: 0.8em; + word-wrap: break-word; +} + +.literalblock pre.nowrap, .literalblock pre[class].nowrap, .listingblock pre.nowrap, .listingblock pre[class].nowrap { + overflow-x: auto; + white-space: pre; + word-wrap: normal; +} + +.literalblock pre > code, .literalblock pre[class] > code, .listingblock pre > code, .listingblock pre[class] > code { + display: block; +} + +@media only screen { + .literalblock pre, .literalblock pre[class], .listingblock pre, .listingblock pre[class] { + font-size: 0.72em; + } +} + +@media only screen and (min-width: 768px) { + .literalblock pre, .literalblock pre[class], .listingblock pre, .listingblock pre[class] { + font-size: 0.81em; + } +} + +@media only screen and (min-width: 1280px) { + 
.literalblock pre, .literalblock pre[class], .listingblock pre, .listingblock pre[class] { + font-size: 0.9em; + } +} + +.listingblock pre.highlight { + padding: 0; + line-height: 1.4em; +} + +.listingblock pre.highlight > code { + padding: 0.8em; +} + +.listingblock > .content { + position: relative; +} + +.listingblock:hover code[class*=" language-"]:before { + text-transform: uppercase; + font-size: 0.9em; + color: #999; + position: absolute; + top: 0.375em; + right: 0.375em; +} + +.listingblock:hover code.asciidoc:before { + content: "asciidoc"; +} + +.listingblock:hover code.clojure:before { + content: "clojure"; +} + +.listingblock:hover code.css:before { + content: "css"; +} + +.listingblock:hover code.groovy:before { + content: "groovy"; +} + +.listingblock:hover code.html:before { + content: "html"; +} + +.listingblock:hover code.java:before { + content: "java"; +} + +.listingblock:hover code.javascript:before { + content: "javascript"; +} + +.listingblock:hover code.python:before { + content: "python"; +} + +.listingblock:hover code.ruby:before { + content: "ruby"; +} + +.listingblock:hover code.sass:before { + content: "sass"; +} + +.listingblock:hover code.scss:before { + content: "scss"; +} + +.listingblock:hover code.xml:before { + content: "xml"; +} + +.listingblock:hover code.yaml:before { + content: "yaml"; +} + +.listingblock.terminal pre .command:before { + content: attr(data-prompt); + padding-right: 0.5em; + color: #999; +} + +.listingblock.terminal pre .command:not([data-prompt]):before { + content: '$'; +} + +table.pyhltable { + border: 0; + margin-bottom: 0; +} + +table.pyhltable td { + vertical-align: top; + padding-top: 0; + padding-bottom: 0; +} + +table.pyhltable td.code { + padding-left: .75em; + padding-right: 0; +} + +.highlight.pygments .lineno, table.pyhltable td:not(.code) { + color: #999; + padding-left: 0; + padding-right: .5em; + border-right: 1px solid #dcd2c9; +} + +.highlight.pygments .lineno { + display: inline-block; + margin-right: .25em; +} + +table.pyhltable .linenodiv { + background-color: transparent !important; + padding-right: 0 !important; +} + +.quoteblock { + margin: 0 0 1.25em; + padding: 0.5625em 1.25em 0 1.1875em; + border-left: 3px solid #dddddd; +} + +.quoteblock blockquote { + margin: 0 0 1.25em 0; + padding: 0 0 0.5625em 0; + border: 0; +} + +.quoteblock blockquote > .paragraph:last-child p { + margin-bottom: 0; +} + +.quoteblock .attribution { + margin-top: -.25em; + padding-bottom: 0.5625em; + font-size: 0.8125em; +} + +.quoteblock .attribution br { + display: none; +} + +.quoteblock .attribution cite { + display: block; + margin-bottom: 0.625em; +} + +table thead th, table tfoot th { + font-weight: bold; +} + +table.tableblock.grid-all { + border-collapse: separate; + border-radius: 6px; + border-top: 1px solid #34302d; + border-bottom: 1px solid #34302d; +} + +table.tableblock.frame-topbot, table.tableblock.frame-none { + border-left: 0; + border-right: 0; +} + +table.tableblock.frame-sides, table.tableblock.frame-none { + border-top: 0; + border-bottom: 0; +} + +table.tableblock td .paragraph:last-child p > p:last-child, table.tableblock th > p:last-child, table.tableblock td > p:last-child { + margin-bottom: 0; +} + +th.tableblock.halign-left, td.tableblock.halign-left { + text-align: left; +} + +th.tableblock.halign-right, td.tableblock.halign-right { + text-align: right; +} + +th.tableblock.halign-center, td.tableblock.halign-center { + text-align: center; +} + +th.tableblock.valign-top, td.tableblock.valign-top { + 
vertical-align: top; +} + +th.tableblock.valign-bottom, td.tableblock.valign-bottom { + vertical-align: bottom; +} + +th.tableblock.valign-middle, td.tableblock.valign-middle { + vertical-align: middle; +} + +tbody tr th { + display: table-cell; + background: rgba(105, 60, 22, 0.25); +} + +tbody tr th, tbody tr th p, tfoot tr th, tfoot tr th p { + color: #211306; + font-weight: bold; +} + +td > div.verse { + white-space: pre; +} + +ol { + margin-left: 1.75em; +} + +ul li ol { + margin-left: 1.5em; +} + +dl dd { + margin-left: 1.125em; +} + +dl dd:last-child, dl dd:last-child > :last-child { + margin-bottom: 0; +} + +ol > li p, ul > li p, ul dd, ol dd, .olist .olist, .ulist .ulist, .ulist .olist, .olist .ulist { + margin-bottom: 0.625em; +} + +ul.unstyled, ol.unnumbered, ul.checklist, ul.none { + list-style-type: none; +} + +ul.unstyled, ol.unnumbered, ul.checklist { + margin-left: 0.625em; +} + +ul.checklist li > p:first-child > i[class^="icon-check"]:first-child, ul.checklist li > p:first-child > input[type="checkbox"]:first-child { + margin-right: 0.25em; +} + +ul.checklist li > p:first-child > input[type="checkbox"]:first-child { + position: relative; + top: 1px; +} + +ul.inline { + margin: 0 auto 0.625em auto; + margin-left: -1.375em; + margin-right: 0; + padding: 0; + list-style: none; + overflow: hidden; +} + +ul.inline > li { + list-style: none; + float: left; + margin-left: 1.375em; + display: block; +} + +ul.inline > li > * { + display: block; +} + +.unstyled dl dt { + font-weight: normal; + font-style: normal; +} + +ol.arabic { + list-style-type: decimal; +} + +ol.decimal { + list-style-type: decimal-leading-zero; +} + +ol.loweralpha { + list-style-type: lower-alpha; +} + +ol.upperalpha { + list-style-type: upper-alpha; +} + +ol.lowerroman { + list-style-type: lower-roman; +} + +ol.upperroman { + list-style-type: upper-roman; +} + +ol.lowergreek { + list-style-type: lower-greek; +} + +.hdlist > table, .colist > table { + border: 0; + background: none; +} + +.hdlist > table > tbody > tr, .colist > table > tbody > tr { + background: none; +} + +td.hdlist1 { + padding-right: .75em; + font-weight: bold; +} + +td.hdlist1, td.hdlist2 { + vertical-align: top; +} + +.literalblock + .colist, .listingblock + .colist { + margin-top: -0.5em; +} + +.colist > table tr > td:first-of-type { + padding: 0 .75em; +} + +.colist > table tr > td:last-of-type { + padding: 0.25em 0; +} + +.qanda > ol > li > p > em:only-child { + color: #063f40; +} + +.thumb, .th { + line-height: 0; + display: inline-block; + border: solid 4px white; + -webkit-box-shadow: 0 0 0 1px #dddddd; + box-shadow: 0 0 0 1px #dddddd; +} + +.imageblock.left, .imageblock[style*="float: left"] { + margin: 0.25em 0.625em 1.25em 0; +} + +.imageblock.right, .imageblock[style*="float: right"] { + margin: 0.25em 0 1.25em 0.625em; +} + +.imageblock > .title { + margin-bottom: 0; +} + +.imageblock.thumb, .imageblock.th { + border-width: 6px; +} + +.imageblock.thumb > .title, .imageblock.th > .title { + padding: 0 0.125em; +} + +.image.left, .image.right { + margin-top: 0.25em; + margin-bottom: 0.25em; + display: inline-block; + line-height: 0; +} + +.image.left { + margin-right: 0.625em; +} + +.image.right { + margin-left: 0.625em; +} + +a.image { + text-decoration: none; +} + +span.footnote, span.footnoteref { + vertical-align: super; + font-size: 0.875em; +} + +span.footnote a, span.footnoteref a { + text-decoration: none; +} + +#footnotes { + padding-top: 0.75em; + padding-bottom: 0.75em; + margin-bottom: 0.625em; +} + +#footnotes hr { + 
width: 20%; + min-width: 6.25em; + margin: -.25em 0 .75em 0; + border-width: 1px 0 0 0; +} + +#footnotes .footnote { + padding: 0 0.375em; + font-size: 0.875em; + margin-left: 1.2em; + text-indent: -1.2em; + margin-bottom: .2em; +} + +#footnotes .footnote a:first-of-type { + font-weight: bold; + text-decoration: none; +} + +#footnotes .footnote:last-of-type { + margin-bottom: 0; +} + +#content #footnotes { + margin-top: -0.625em; + margin-bottom: 0; + padding: 0.75em 0; +} + +.gist .file-data > table { + border: none; + background: #fff; + width: 100%; + margin-bottom: 0; +} + +.gist .file-data > table td.line-data { + width: 99%; +} + +div.unbreakable { + page-break-inside: avoid; +} + +.big { + font-size: larger; +} + +.small { + font-size: smaller; +} + +.underline { + text-decoration: underline; +} + +.overline { + text-decoration: overline; +} + +.line-through { + text-decoration: line-through; +} + +.aqua { + color: #00bfbf; +} + +.aqua-background { + background-color: #00fafa; +} + +.black { + color: black; +} + +.black-background { + background-color: black; +} + +.blue { + color: #0000bf; +} + +.blue-background { + background-color: #0000fa; +} + +.fuchsia { + color: #bf00bf; +} + +.fuchsia-background { + background-color: #fa00fa; +} + +.gray { + color: #606060; +} + +.gray-background { + background-color: #7d7d7d; +} + +.green { + color: #006000; +} + +.green-background { + background-color: #007d00; +} + +.lime { + color: #00bf00; +} + +.lime-background { + background-color: #00fa00; +} + +.maroon { + color: #600000; +} + +.maroon-background { + background-color: #7d0000; +} + +.navy { + color: #000060; +} + +.navy-background { + background-color: #00007d; +} + +.olive { + color: #606000; +} + +.olive-background { + background-color: #7d7d00; +} + +.purple { + color: #600060; +} + +.purple-background { + background-color: #7d007d; +} + +.red { + color: #bf0000; +} + +.red-background { + background-color: #fa0000; +} + +.silver { + color: #909090; +} + +.silver-background { + background-color: #bcbcbc; +} + +.teal { + color: #006060; +} + +.teal-background { + background-color: #007d7d; +} + +.white { + color: #bfbfbf; +} + +.white-background { + background-color: #fafafa; +} + +.yellow { + color: #bfbf00; +} + +.yellow-background { + background-color: #fafa00; +} + +span.icon > [class^="icon-"], span.icon > [class*=" icon-"] { + cursor: default; +} + +.admonitionblock td.icon [class^="icon-"]:before { + font-size: 2.5em; + text-shadow: 1px 1px 2px rgba(0, 0, 0, 0.5); + cursor: default; +} + +.admonitionblock td.icon .icon-note:before { + content: "\f05a"; + color: #095557; + color: #064042; +} + +.admonitionblock td.icon .icon-tip:before { + content: "\f0eb"; + text-shadow: 1px 1px 2px rgba(155, 155, 0, 0.8); + color: #111; +} + +.admonitionblock td.icon .icon-warning:before { + content: "\f071"; + color: #bf6900; +} + +.admonitionblock td.icon .icon-caution:before { + content: "\f06d"; + color: #bf3400; +} + +.admonitionblock td.icon .icon-important:before { + content: "\f06a"; + color: #bf0000; +} + +.conum { + display: inline-block; + color: white !important; + background-color: #211306; + -webkit-border-radius: 100px; + border-radius: 100px; + text-align: center; + width: 20px; + height: 20px; + font-size: 12px; + font-weight: bold; + line-height: 20px; + font-family: Arial, sans-serif; + font-style: normal; + position: relative; + top: -2px; + letter-spacing: -1px; +} + +.conum * { + color: white !important; +} + +.conum + b { + display: none; +} + +.conum:after { + 
content: attr(data-value); +} + +.conum:not([data-value]):empty { + display: none; +} + +body { + padding-top: 60px; +} + +#toc.toc2 ul ul { + padding-left: 1em; +} +#toc.toc2 ul ul.sectlevel2 { +} + +#toctitle { + color: #34302d; + display: none; +} + +#header h1 { + font-weight: bold; + position: relative; + left: -0.0625em; +} + +#header h1 span.lo { + color: #dc9424; +} + +#content h2, #content h3, #content #toctitle, #content .sidebarblock > .content > .title, #content h4, #content h5, #content #toctitle { + font-weight: normal; + position: relative; + left: -0.0625em; +} + +#content h2 { + font-weight: bold; +} + +.literalblock .content pre.highlight, .listingblock .content pre.highlight { + background-color:#f2f2f2; +} + +.admonitionblock > table td.content { + border-color: #e6dfd8; +} + +table.tableblock.grid-all { + -webkit-border-radius: 0; + border-radius: 0; +} + +#footer { + background-color: #while; + color: #34302d; +} + +.imageblock .title { + text-align: center; +} + +#content h1.sect0 { + font-size: 48px; +} + +#toc > ul > li > a { + font-size: large; +} diff --git a/src/main/java/com/lambdaworks/codec/Base16.java b/src/main/java/com/lambdaworks/codec/Base16.java deleted file mode 100644 index f7d606fbf2..0000000000 --- a/src/main/java/com/lambdaworks/codec/Base16.java +++ /dev/null @@ -1,50 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.codec; - -/** - * High-performance base16 (AKA hex) codec. - * - * @author Will Glozer - */ -public class Base16 { - private static final char[] upper = "0123456789ABCDEF".toCharArray(); - private static final char[] lower = "0123456789abcdef".toCharArray(); - private static final byte[] decode = new byte[128]; - - static { - for (int i = 0; i < 10; i++) { - decode['0' + i] = (byte) i; - decode['A' + i] = (byte) (10 + i); - decode['a' + i] = (byte) (10 + i); - } - } - - /** - * Utility constructor. - */ - private Base16() { - - } - - /** - * Encode bytes to base16 chars. - * - * @param src Bytes to encode. - * @param upper Use upper or lowercase chars. - * - * @return Encoded chars. - */ - public static char[] encode(byte[] src, boolean upper) { - char[] table = upper ? Base16.upper : Base16.lower; - char[] dst = new char[src.length * 2]; - - for (int si = 0, di = 0; si < src.length; si++) { - byte b = src[si]; - dst[di++] = table[(b & 0xf0) >>> 4]; - dst[di++] = table[(b & 0x0f)]; - } - - return dst; - } -} diff --git a/src/main/java/com/lambdaworks/codec/CRC16.java b/src/main/java/com/lambdaworks/codec/CRC16.java deleted file mode 100644 index ff5fb4e880..0000000000 --- a/src/main/java/com/lambdaworks/codec/CRC16.java +++ /dev/null @@ -1,60 +0,0 @@ -package com.lambdaworks.codec; - -/** - * @author Mark Paluch - *
 - * <ul>
 - * <li>Name: XMODEM (also known as ZMODEM or CRC-16/ACORN)</li>
 - * <li>Width: 16 bit</li>
 - * <li>Poly: 1021 (That is actually x^16 + x^12 + x^5 + 1)</li>
 - * <li>Initialization: 0000</li>
 - * <li>Reflect Input byte: False</li>
 - * <li>Reflect Output CRC: False</li>
 - * <li>Xor constant to output CRC: 0000</li>
 - * </ul>
- * @since 3.0 - */ -public class CRC16 { - - private static final int[] LOOKUP_TABLE = { 0x0000, 0x1021, 0x2042, 0x3063, 0x4084, 0x50A5, 0x60C6, 0x70E7, 0x8108, 0x9129, - 0xA14A, 0xB16B, 0xC18C, 0xD1AD, 0xE1CE, 0xF1EF, 0x1231, 0x0210, 0x3273, 0x2252, 0x52B5, 0x4294, 0x72F7, 0x62D6, - 0x9339, 0x8318, 0xB37B, 0xA35A, 0xD3BD, 0xC39C, 0xF3FF, 0xE3DE, 0x2462, 0x3443, 0x0420, 0x1401, 0x64E6, 0x74C7, - 0x44A4, 0x5485, 0xA56A, 0xB54B, 0x8528, 0x9509, 0xE5EE, 0xF5CF, 0xC5AC, 0xD58D, 0x3653, 0x2672, 0x1611, 0x0630, - 0x76D7, 0x66F6, 0x5695, 0x46B4, 0xB75B, 0xA77A, 0x9719, 0x8738, 0xF7DF, 0xE7FE, 0xD79D, 0xC7BC, 0x48C4, 0x58E5, - 0x6886, 0x78A7, 0x0840, 0x1861, 0x2802, 0x3823, 0xC9CC, 0xD9ED, 0xE98E, 0xF9AF, 0x8948, 0x9969, 0xA90A, 0xB92B, - 0x5AF5, 0x4AD4, 0x7AB7, 0x6A96, 0x1A71, 0x0A50, 0x3A33, 0x2A12, 0xDBFD, 0xCBDC, 0xFBBF, 0xEB9E, 0x9B79, 0x8B58, - 0xBB3B, 0xAB1A, 0x6CA6, 0x7C87, 0x4CE4, 0x5CC5, 0x2C22, 0x3C03, 0x0C60, 0x1C41, 0xEDAE, 0xFD8F, 0xCDEC, 0xDDCD, - 0xAD2A, 0xBD0B, 0x8D68, 0x9D49, 0x7E97, 0x6EB6, 0x5ED5, 0x4EF4, 0x3E13, 0x2E32, 0x1E51, 0x0E70, 0xFF9F, 0xEFBE, - 0xDFDD, 0xCFFC, 0xBF1B, 0xAF3A, 0x9F59, 0x8F78, 0x9188, 0x81A9, 0xB1CA, 0xA1EB, 0xD10C, 0xC12D, 0xF14E, 0xE16F, - 0x1080, 0x00A1, 0x30C2, 0x20E3, 0x5004, 0x4025, 0x7046, 0x6067, 0x83B9, 0x9398, 0xA3FB, 0xB3DA, 0xC33D, 0xD31C, - 0xE37F, 0xF35E, 0x02B1, 0x1290, 0x22F3, 0x32D2, 0x4235, 0x5214, 0x6277, 0x7256, 0xB5EA, 0xA5CB, 0x95A8, 0x8589, - 0xF56E, 0xE54F, 0xD52C, 0xC50D, 0x34E2, 0x24C3, 0x14A0, 0x0481, 0x7466, 0x6447, 0x5424, 0x4405, 0xA7DB, 0xB7FA, - 0x8799, 0x97B8, 0xE75F, 0xF77E, 0xC71D, 0xD73C, 0x26D3, 0x36F2, 0x0691, 0x16B0, 0x6657, 0x7676, 0x4615, 0x5634, - 0xD94C, 0xC96D, 0xF90E, 0xE92F, 0x99C8, 0x89E9, 0xB98A, 0xA9AB, 0x5844, 0x4865, 0x7806, 0x6827, 0x18C0, 0x08E1, - 0x3882, 0x28A3, 0xCB7D, 0xDB5C, 0xEB3F, 0xFB1E, 0x8BF9, 0x9BD8, 0xABBB, 0xBB9A, 0x4A75, 0x5A54, 0x6A37, 0x7A16, - 0x0AF1, 0x1AD0, 0x2AB3, 0x3A92, 0xFD2E, 0xED0F, 0xDD6C, 0xCD4D, 0xBDAA, 0xAD8B, 0x9DE8, 0x8DC9, 0x7C26, 0x6C07, - 0x5C64, 0x4C45, 0x3CA2, 0x2C83, 0x1CE0, 0x0CC1, 0xEF1F, 0xFF3E, 0xCF5D, 0xDF7C, 0xAF9B, 0xBFBA, 0x8FD9, 0x9FF8, - 0x6E17, 0x7E36, 0x4E55, 0x5E74, 0x2E93, 0x3EB2, 0x0ED1, 0x1EF0 }; - - /** - * Utility constructor. - */ - private CRC16() { - - } - - /** - * Create a CRC16 checksum from the bytes. - * - * @param bytes input bytes - * @return CRC16 as interger value - */ - public static int crc16(byte[] bytes) { - int crc = 0x0000; - - for (byte b : bytes) { - crc = ((crc << 8) ^ LOOKUP_TABLE[((crc >>> 8) ^ (b & 0xFF)) & 0xFF]); - } - return crc & 0xFFFF; - } - -} diff --git a/src/main/java/com/lambdaworks/codec/package-info.java b/src/main/java/com/lambdaworks/codec/package-info.java deleted file mode 100644 index d1646d9b5f..0000000000 --- a/src/main/java/com/lambdaworks/codec/package-info.java +++ /dev/null @@ -1,4 +0,0 @@ -/** - * Base16 and CRC16 codecs. - */ -package com.lambdaworks.codec; \ No newline at end of file diff --git a/src/main/java/com/lambdaworks/redis/AbstractRedisAsyncCommands.java b/src/main/java/com/lambdaworks/redis/AbstractRedisAsyncCommands.java deleted file mode 100644 index bb49c23a11..0000000000 --- a/src/main/java/com/lambdaworks/redis/AbstractRedisAsyncCommands.java +++ /dev/null @@ -1,1854 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. 
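For reference, and not part of the diff itself: a minimal sketch of how the two removed codec utilities are invoked, based on the signatures shown above. The CodecExample class name and the sample payload are illustrative only; 0x31C3 is the published check value for CRC-16/XMODEM over the ASCII string "123456789".

    import java.nio.charset.StandardCharsets;

    import com.lambdaworks.codec.Base16;
    import com.lambdaworks.codec.CRC16;

    public class CodecExample {

        public static void main(String[] args) {
            byte[] payload = "123456789".getBytes(StandardCharsets.US_ASCII);

            // Hex-encode the payload using the upper-case table.
            char[] hex = Base16.encode(payload, true);
            System.out.println(new String(hex));   // 313233343536373839

            // CRC-16/XMODEM of the standard check string "123456789" is 0x31C3.
            int crc = CRC16.crc16(payload);
            System.out.printf("%04X%n", crc);       // 31C3
        }
    }
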
- -package com.lambdaworks.redis; - -import static com.lambdaworks.redis.protocol.CommandType.EXEC; - -import java.util.Date; -import java.util.List; -import java.util.Map; -import java.util.Set; -import java.util.concurrent.TimeUnit; - -import com.lambdaworks.redis.GeoArgs.Unit; -import com.lambdaworks.redis.api.StatefulConnection; -import com.lambdaworks.redis.api.async.*; -import com.lambdaworks.redis.cluster.api.async.RedisClusterAsyncCommands; -import com.lambdaworks.redis.codec.RedisCodec; -import com.lambdaworks.redis.internal.LettuceAssert; -import com.lambdaworks.redis.output.*; -import com.lambdaworks.redis.protocol.*; - -/** - * An asynchronous and thread-safe API for a Redis connection. - * - * @param Key type. - * @param Value type. - * @author Will Glozer - */ -public abstract class AbstractRedisAsyncCommands - implements RedisHashesAsyncConnection, RedisKeysAsyncConnection, RedisStringsAsyncConnection, - RedisListsAsyncConnection, RedisSetsAsyncConnection, RedisSortedSetsAsyncConnection, - RedisScriptingAsyncConnection, RedisServerAsyncConnection, RedisHLLAsyncConnection, - BaseRedisAsyncConnection, RedisClusterAsyncConnection, RedisGeoAsyncConnection, - - RedisHashAsyncCommands, RedisKeyAsyncCommands, RedisStringAsyncCommands, RedisListAsyncCommands, - RedisSetAsyncCommands, RedisSortedSetAsyncCommands, RedisScriptingAsyncCommands, - RedisServerAsyncCommands, RedisHLLAsyncCommands, BaseRedisAsyncCommands, - RedisTransactionalAsyncCommands, RedisGeoAsyncCommands, RedisClusterAsyncCommands { - - protected MultiOutput multi; - protected RedisCommandBuilder commandBuilder; - protected RedisCodec codec; - protected StatefulConnection connection; - - /** - * Initialize a new instance. - * - * @param connection the connection to operate on - * @param codec the codec for command encoding - */ - public AbstractRedisAsyncCommands(StatefulConnection connection, RedisCodec codec) { - this.connection = connection; - this.codec = codec; - commandBuilder = new RedisCommandBuilder(codec); - } - - @Override - public RedisFuture append(K key, V value) { - return dispatch(commandBuilder.append(key, value)); - } - - @Override - public String auth(String password) { - AsyncCommand cmd = authAsync(password); - return LettuceFutures.awaitOrCancel(cmd, connection.getTimeout(), connection.getTimeoutUnit()); - } - - public AsyncCommand authAsync(String password) { - return dispatch(commandBuilder.auth(password)); - } - - @Override - public RedisFuture bgrewriteaof() { - return dispatch(commandBuilder.bgrewriteaof()); - } - - @Override - public RedisFuture bgsave() { - return dispatch(commandBuilder.bgsave()); - } - - @Override - public RedisFuture bitcount(K key) { - return dispatch(commandBuilder.bitcount(key)); - } - - @Override - public RedisFuture bitcount(K key, long start, long end) { - return dispatch(commandBuilder.bitcount(key, start, end)); - } - - @Override - public RedisFuture> bitfield(K key, BitFieldArgs bitFieldArgs) { - return dispatch(commandBuilder.bitfield(key, bitFieldArgs)); - } - - @Override - public RedisFuture bitpos(K key, boolean state) { - return dispatch(commandBuilder.bitpos(key, state)); - } - - @Override - public RedisFuture bitpos(K key, boolean state, long start, long end) { - return dispatch(commandBuilder.bitpos(key, state, start, end)); - } - - @Override - public RedisFuture bitopAnd(K destination, K... 
keys) { - return dispatch(commandBuilder.bitopAnd(destination, keys)); - } - - @Override - public RedisFuture bitopNot(K destination, K source) { - return dispatch(commandBuilder.bitopNot(destination, source)); - } - - @Override - public RedisFuture bitopOr(K destination, K... keys) { - return dispatch(commandBuilder.bitopOr(destination, keys)); - } - - @Override - public RedisFuture bitopXor(K destination, K... keys) { - return dispatch(commandBuilder.bitopXor(destination, keys)); - } - - @Override - public RedisFuture> blpop(long timeout, K... keys) { - return dispatch(commandBuilder.blpop(timeout, keys)); - } - - @Override - public RedisFuture> brpop(long timeout, K... keys) { - return dispatch(commandBuilder.brpop(timeout, keys)); - } - - @Override - public RedisFuture brpoplpush(long timeout, K source, K destination) { - return dispatch(commandBuilder.brpoplpush(timeout, source, destination)); - } - - @Override - public RedisFuture clientGetname() { - return dispatch(commandBuilder.clientGetname()); - } - - @Override - public RedisFuture clientSetname(K name) { - return dispatch(commandBuilder.clientSetname(name)); - } - - @Override - public RedisFuture clientKill(String addr) { - return dispatch(commandBuilder.clientKill(addr)); - } - - @Override - public RedisFuture clientKill(KillArgs killArgs) { - return dispatch(commandBuilder.clientKill(killArgs)); - } - - @Override - public RedisFuture clientPause(long timeout) { - return dispatch(commandBuilder.clientPause(timeout)); - } - - @Override - public RedisFuture clientList() { - return dispatch(commandBuilder.clientList()); - } - - @Override - public RedisFuture> command() { - return dispatch(commandBuilder.command()); - } - - @Override - public RedisFuture> commandInfo(String... commands) { - return dispatch(commandBuilder.commandInfo(commands)); - } - - @Override - public RedisFuture> commandInfo(CommandType... 
commands) { - String[] stringCommands = new String[commands.length]; - for (int i = 0; i < commands.length; i++) { - stringCommands[i] = commands[i].name(); - } - - return commandInfo(stringCommands); - } - - @Override - public RedisFuture commandCount() { - return dispatch(commandBuilder.commandCount()); - } - - @Override - public RedisFuture> configGet(String parameter) { - return dispatch(commandBuilder.configGet(parameter)); - } - - @Override - public RedisFuture configResetstat() { - return dispatch(commandBuilder.configResetstat()); - } - - @Override - public RedisFuture configSet(String parameter, String value) { - return dispatch(commandBuilder.configSet(parameter, value)); - } - - @Override - public RedisFuture configRewrite() { - return dispatch(commandBuilder.configRewrite()); - } - - @Override - public RedisFuture dbsize() { - return dispatch(commandBuilder.dbsize()); - } - - @Override - public RedisFuture debugCrashAndRecover(Long delay) { - return dispatch(commandBuilder.debugCrashAndRecover(delay)); - } - - @Override - public RedisFuture debugHtstats(int db) { - return dispatch(commandBuilder.debugHtstats(db)); - } - - @Override - public RedisFuture debugObject(K key) { - return dispatch(commandBuilder.debugObject(key)); - } - - @Override - public void debugOom() { - dispatch(commandBuilder.debugOom()); - } - - @Override - public RedisFuture debugReload() { - return dispatch(commandBuilder.debugReload()); - } - - @Override - public RedisFuture debugRestart(Long delay) { - return dispatch(commandBuilder.debugRestart(delay)); - } - - @Override - public RedisFuture debugSdslen(K key) { - return dispatch(commandBuilder.debugSdslen(key)); - } - - @Override - public void debugSegfault() { - dispatch(commandBuilder.debugSegfault()); - } - - @Override - public RedisFuture decr(K key) { - return dispatch(commandBuilder.decr(key)); - } - - @Override - public RedisFuture decrby(K key, long amount) { - return dispatch(commandBuilder.decrby(key, amount)); - } - - @Override - public RedisFuture del(K... keys) { - return dispatch(commandBuilder.del(keys)); - } - - public RedisFuture del(Iterable keys) { - return dispatch(commandBuilder.del(keys)); - } - - @Override - public RedisFuture unlink(K... keys) { - return dispatch(commandBuilder.unlink(keys)); - } - - public RedisFuture unlink(Iterable keys) { - return dispatch(commandBuilder.unlink(keys)); - } - - @Override - public RedisFuture discard() { - return dispatch(commandBuilder.discard()); - } - - @Override - public RedisFuture dump(K key) { - return dispatch(commandBuilder.dump(key)); - } - - @Override - public RedisFuture echo(V msg) { - return dispatch(commandBuilder.echo(msg)); - } - - @Override - @SuppressWarnings("unchecked") - public RedisFuture eval(String script, ScriptOutputType type, K... keys) { - return (RedisFuture) dispatch(commandBuilder.eval(script, type, keys)); - } - - @Override - @SuppressWarnings("unchecked") - public RedisFuture eval(String script, ScriptOutputType type, K[] keys, V... values) { - return (RedisFuture) dispatch(commandBuilder.eval(script, type, keys, values)); - } - - @Override - @SuppressWarnings("unchecked") - public RedisFuture evalsha(String digest, ScriptOutputType type, K... keys) { - return (RedisFuture) dispatch(commandBuilder.evalsha(digest, type, keys)); - } - - @Override - @SuppressWarnings("unchecked") - public RedisFuture evalsha(String digest, ScriptOutputType type, K[] keys, V... 
values) { - return (RedisFuture) dispatch(commandBuilder.evalsha(digest, type, keys, values)); - } - - @Override - public RedisFuture exists(K key) { - return dispatch(commandBuilder.exists(key)); - } - - @Override - public RedisFuture exists(K... keys) { - return dispatch(commandBuilder.exists(keys)); - } - - public RedisFuture exists(Iterable keys) { - return dispatch(commandBuilder.exists(keys)); - } - - @Override - public RedisFuture expire(K key, long seconds) { - return dispatch(commandBuilder.expire(key, seconds)); - } - - @Override - public RedisFuture expireat(K key, Date timestamp) { - return expireat(key, timestamp.getTime() / 1000); - } - - @Override - public RedisFuture expireat(K key, long timestamp) { - return dispatch(commandBuilder.expireat(key, timestamp)); - } - - @Override - public RedisFuture> exec() { - return dispatch(EXEC, null); - } - - @Override - public RedisFuture flushall() { - return dispatch(commandBuilder.flushall()); - } - - @Override - public RedisFuture flushallAsync() { - return dispatch(commandBuilder.flushallAsync()); - } - - @Override - public RedisFuture flushdb() { - return dispatch(commandBuilder.flushdb()); - } - - @Override - public RedisFuture flushdbAsync() { - return dispatch(commandBuilder.flushdbAsync()); - } - - @Override - public RedisFuture get(K key) { - return dispatch(commandBuilder.get(key)); - } - - @Override - public RedisFuture getbit(K key, long offset) { - return dispatch(commandBuilder.getbit(key, offset)); - } - - @Override - public RedisFuture getrange(K key, long start, long end) { - return dispatch(commandBuilder.getrange(key, start, end)); - } - - @Override - public RedisFuture getset(K key, V value) { - return dispatch(commandBuilder.getset(key, value)); - } - - @Override - public RedisFuture hdel(K key, K... fields) { - return dispatch(commandBuilder.hdel(key, fields)); - } - - @Override - public RedisFuture hexists(K key, K field) { - return dispatch(commandBuilder.hexists(key, field)); - } - - @Override - public RedisFuture hget(K key, K field) { - return dispatch(commandBuilder.hget(key, field)); - } - - @Override - public RedisFuture hincrby(K key, K field, long amount) { - return dispatch(commandBuilder.hincrby(key, field, amount)); - } - - @Override - public RedisFuture hincrbyfloat(K key, K field, double amount) { - return dispatch(commandBuilder.hincrbyfloat(key, field, amount)); - } - - @Override - public RedisFuture> hgetall(K key) { - return dispatch(commandBuilder.hgetall(key)); - } - - @Override - public RedisFuture hgetall(KeyValueStreamingChannel channel, K key) { - return dispatch(commandBuilder.hgetall(channel, key)); - } - - @Override - public RedisFuture> hkeys(K key) { - return dispatch(commandBuilder.hkeys(key)); - } - - @Override - public RedisFuture hkeys(KeyStreamingChannel channel, K key) { - return dispatch(commandBuilder.hkeys(channel, key)); - } - - @Override - public RedisFuture hlen(K key) { - return dispatch(commandBuilder.hlen(key)); - } - - @Override - public RedisFuture hstrlen(K key, K field) { - return dispatch(commandBuilder.hstrlen(key, field)); - } - - @Override - public RedisFuture> hmget(K key, K... fields) { - return dispatch(commandBuilder.hmget(key, fields)); - } - - @Override - public RedisFuture hmget(ValueStreamingChannel channel, K key, K... 
fields) { - return dispatch(commandBuilder.hmget(channel, key, fields)); - } - - @Override - public RedisFuture hmset(K key, Map map) { - return dispatch(commandBuilder.hmset(key, map)); - } - - @Override - public RedisFuture hset(K key, K field, V value) { - return dispatch(commandBuilder.hset(key, field, value)); - } - - @Override - public RedisFuture hsetnx(K key, K field, V value) { - return dispatch(commandBuilder.hsetnx(key, field, value)); - } - - @Override - public RedisFuture> hvals(K key) { - return dispatch(commandBuilder.hvals(key)); - } - - @Override - public RedisFuture hvals(ValueStreamingChannel channel, K key) { - return dispatch(commandBuilder.hvals(channel, key)); - } - - @Override - public RedisFuture incr(K key) { - return dispatch(commandBuilder.incr(key)); - } - - @Override - public RedisFuture incrby(K key, long amount) { - return dispatch(commandBuilder.incrby(key, amount)); - } - - @Override - public RedisFuture incrbyfloat(K key, double amount) { - return dispatch(commandBuilder.incrbyfloat(key, amount)); - } - - @Override - public RedisFuture info() { - return dispatch(commandBuilder.info()); - } - - @Override - public RedisFuture info(String section) { - return dispatch(commandBuilder.info(section)); - } - - @Override - public RedisFuture> keys(K pattern) { - return dispatch(commandBuilder.keys(pattern)); - } - - @Override - public RedisFuture keys(KeyStreamingChannel channel, K pattern) { - return dispatch(commandBuilder.keys(channel, pattern)); - } - - @Override - public RedisFuture lastsave() { - return dispatch(commandBuilder.lastsave()); - } - - @Override - public RedisFuture lindex(K key, long index) { - return dispatch(commandBuilder.lindex(key, index)); - } - - @Override - public RedisFuture linsert(K key, boolean before, V pivot, V value) { - return dispatch(commandBuilder.linsert(key, before, pivot, value)); - } - - @Override - public RedisFuture llen(K key) { - return dispatch(commandBuilder.llen(key)); - } - - @Override - public RedisFuture lpop(K key) { - return dispatch(commandBuilder.lpop(key)); - } - - @Override - public RedisFuture lpush(K key, V... values) { - return dispatch(commandBuilder.lpush(key, values)); - } - - @Override - public RedisFuture lpushx(K key, V value) { - return dispatch(commandBuilder.lpushx(key, value)); - } - - @Override - public RedisFuture lpushx(K key, V... 
values) { - return dispatch(commandBuilder.lpushx(key, values)); - } - - @Override - public RedisFuture> lrange(K key, long start, long stop) { - return dispatch(commandBuilder.lrange(key, start, stop)); - } - - @Override - public RedisFuture lrange(ValueStreamingChannel channel, K key, long start, long stop) { - return dispatch(commandBuilder.lrange(channel, key, start, stop)); - } - - @Override - public RedisFuture lrem(K key, long count, V value) { - return dispatch(commandBuilder.lrem(key, count, value)); - } - - @Override - public RedisFuture lset(K key, long index, V value) { - return dispatch(commandBuilder.lset(key, index, value)); - } - - @Override - public RedisFuture ltrim(K key, long start, long stop) { - return dispatch(commandBuilder.ltrim(key, start, stop)); - } - - @Override - public RedisFuture migrate(String host, int port, K key, int db, long timeout) { - return dispatch(commandBuilder.migrate(host, port, key, db, timeout)); - } - - @Override - public RedisFuture migrate(String host, int port, int db, long timeout, MigrateArgs migrateArgs) { - return dispatch(commandBuilder.migrate(host, port, db, timeout, migrateArgs)); - } - - @Override - public RedisFuture> mget(K... keys) { - return dispatch(commandBuilder.mget(keys)); - } - - public RedisFuture> mget(Iterable keys) { - return dispatch(commandBuilder.mget(keys)); - } - - @Override - public RedisFuture mget(ValueStreamingChannel channel, K... keys) { - return dispatch(commandBuilder.mget(channel, keys)); - } - - public RedisFuture mget(ValueStreamingChannel channel, Iterable keys) { - return dispatch(commandBuilder.mget(channel, keys)); - } - - @Override - public RedisFuture move(K key, int db) { - return dispatch(commandBuilder.move(key, db)); - } - - @Override - public RedisFuture multi() { - return dispatch(commandBuilder.multi()); - } - - @Override - public RedisFuture mset(Map map) { - return dispatch(commandBuilder.mset(map)); - } - - @Override - public RedisFuture msetnx(Map map) { - return dispatch(commandBuilder.msetnx(map)); - } - - @Override - public RedisFuture objectEncoding(K key) { - return dispatch(commandBuilder.objectEncoding(key)); - } - - @Override - public RedisFuture objectIdletime(K key) { - return dispatch(commandBuilder.objectIdletime(key)); - } - - @Override - public RedisFuture objectRefcount(K key) { - return dispatch(commandBuilder.objectRefcount(key)); - } - - @Override - public RedisFuture persist(K key) { - return dispatch(commandBuilder.persist(key)); - } - - @Override - public RedisFuture pexpire(K key, long milliseconds) { - return dispatch(commandBuilder.pexpire(key, milliseconds)); - } - - @Override - public RedisFuture pexpireat(K key, Date timestamp) { - return pexpireat(key, timestamp.getTime()); - } - - @Override - public RedisFuture pexpireat(K key, long timestamp) { - return dispatch(commandBuilder.pexpireat(key, timestamp)); - } - - @Override - public RedisFuture ping() { - return dispatch(commandBuilder.ping()); - } - - @Override - public RedisFuture readOnly() { - AsyncCommand cmd = dispatch(commandBuilder.readOnly()); - return cmd; - } - - @Override - public RedisFuture readWrite() { - AsyncCommand cmd = dispatch(commandBuilder.readWrite()); - return cmd; - } - - @Override - public RedisFuture pttl(K key) { - return dispatch(commandBuilder.pttl(key)); - } - - @Override - public RedisFuture publish(K channel, V message) { - return dispatch(commandBuilder.publish(channel, message)); - } - - @Override - public RedisFuture> pubsubChannels() { - return 
dispatch(commandBuilder.pubsubChannels()); - } - - @Override - public RedisFuture> pubsubChannels(K channel) { - return dispatch(commandBuilder.pubsubChannels(channel)); - } - - @Override - public RedisFuture> pubsubNumsub(K... channels) { - return dispatch(commandBuilder.pubsubNumsub(channels)); - } - - @Override - public RedisFuture pubsubNumpat() { - return dispatch(commandBuilder.pubsubNumpat()); - } - - @Override - public RedisFuture quit() { - return dispatch(commandBuilder.quit()); - } - - @Override - public RedisFuture> role() { - return dispatch(commandBuilder.role()); - } - - @Override - public RedisFuture randomkey() { - return dispatch(commandBuilder.randomkey()); - } - - @Override - public RedisFuture rename(K key, K newKey) { - return dispatch(commandBuilder.rename(key, newKey)); - } - - @Override - public RedisFuture renamenx(K key, K newKey) { - return dispatch(commandBuilder.renamenx(key, newKey)); - } - - @Override - public RedisFuture restore(K key, long ttl, byte[] value) { - return dispatch(commandBuilder.restore(key, ttl, value)); - } - - @Override - public RedisFuture rpop(K key) { - return dispatch(commandBuilder.rpop(key)); - } - - @Override - public RedisFuture rpoplpush(K source, K destination) { - return dispatch(commandBuilder.rpoplpush(source, destination)); - } - - @Override - public RedisFuture rpush(K key, V... values) { - return dispatch(commandBuilder.rpush(key, values)); - } - - @Override - public RedisFuture rpushx(K key, V value) { - return dispatch(commandBuilder.rpushx(key, value)); - } - - @Override - public RedisFuture rpushx(K key, V... values) { - return dispatch(commandBuilder.rpushx(key, values)); - } - - @Override - public RedisFuture sadd(K key, V... members) { - return dispatch(commandBuilder.sadd(key, members)); - } - - @Override - public RedisFuture save() { - return dispatch(commandBuilder.save()); - } - - @Override - public RedisFuture scard(K key) { - return dispatch(commandBuilder.scard(key)); - } - - @Override - public RedisFuture> scriptExists(String... digests) { - return dispatch(commandBuilder.scriptExists(digests)); - } - - @Override - public RedisFuture scriptFlush() { - return dispatch(commandBuilder.scriptFlush()); - } - - @Override - public RedisFuture scriptKill() { - return dispatch(commandBuilder.scriptKill()); - } - - @Override - public RedisFuture scriptLoad(V script) { - return dispatch(commandBuilder.scriptLoad(script)); - } - - @Override - public RedisFuture> sdiff(K... keys) { - return dispatch(commandBuilder.sdiff(keys)); - } - - @Override - public RedisFuture sdiff(ValueStreamingChannel channel, K... keys) { - return dispatch(commandBuilder.sdiff(channel, keys)); - } - - @Override - public RedisFuture sdiffstore(K destination, K... 
keys) { - return dispatch(commandBuilder.sdiffstore(destination, keys)); - } - - public String select(int db) { - AsyncCommand cmd = selectAsync(db); - String status = LettuceFutures.awaitOrCancel(cmd, connection.getTimeout(), connection.getTimeoutUnit()); - return status; - } - - protected AsyncCommand selectAsync(int db) { - return dispatch(commandBuilder.select(db)); - } - - @Override - public RedisFuture set(K key, V value) { - return dispatch(commandBuilder.set(key, value)); - } - - @Override - public RedisFuture set(K key, V value, SetArgs setArgs) { - return dispatch(commandBuilder.set(key, value, setArgs)); - } - - @Override - public RedisFuture setbit(K key, long offset, int value) { - return dispatch(commandBuilder.setbit(key, offset, value)); - } - - @Override - public RedisFuture setex(K key, long seconds, V value) { - return dispatch(commandBuilder.setex(key, seconds, value)); - } - - @Override - public RedisFuture psetex(K key, long milliseconds, V value) { - return dispatch(commandBuilder.psetex(key, milliseconds, value)); - } - - @Override - public RedisFuture setnx(K key, V value) { - return dispatch(commandBuilder.setnx(key, value)); - } - - @Override - public RedisFuture setrange(K key, long offset, V value) { - return dispatch(commandBuilder.setrange(key, offset, value)); - } - - @Deprecated - public void shutdown() { - dispatch(commandBuilder.shutdown()); - } - - @Override - public void shutdown(boolean save) { - dispatch(commandBuilder.shutdown(save)); - } - - @Override - public RedisFuture> sinter(K... keys) { - return dispatch(commandBuilder.sinter(keys)); - } - - @Override - public RedisFuture sinter(ValueStreamingChannel channel, K... keys) { - return dispatch(commandBuilder.sinter(channel, keys)); - } - - @Override - public RedisFuture sinterstore(K destination, K... 
keys) { - return dispatch(commandBuilder.sinterstore(destination, keys)); - } - - @Override - public RedisFuture sismember(K key, V member) { - return dispatch(commandBuilder.sismember(key, member)); - } - - @Override - public RedisFuture smove(K source, K destination, V member) { - return dispatch(commandBuilder.smove(source, destination, member)); - } - - @Override - public RedisFuture slaveof(String host, int port) { - return dispatch(commandBuilder.slaveof(host, port)); - } - - @Override - public RedisFuture slaveofNoOne() { - return dispatch(commandBuilder.slaveofNoOne()); - } - - @Override - public RedisFuture> slowlogGet() { - return dispatch(commandBuilder.slowlogGet()); - } - - @Override - public RedisFuture> slowlogGet(int count) { - return dispatch(commandBuilder.slowlogGet(count)); - } - - @Override - public RedisFuture slowlogLen() { - return dispatch(commandBuilder.slowlogLen()); - } - - @Override - public RedisFuture slowlogReset() { - return dispatch(commandBuilder.slowlogReset()); - } - - @Override - public RedisFuture> smembers(K key) { - return dispatch(commandBuilder.smembers(key)); - } - - @Override - public RedisFuture smembers(ValueStreamingChannel channel, K key) { - return dispatch(commandBuilder.smembers(channel, key)); - } - - @Override - public RedisFuture> sort(K key) { - return dispatch(commandBuilder.sort(key)); - } - - @Override - public RedisFuture sort(ValueStreamingChannel channel, K key) { - return dispatch(commandBuilder.sort(channel, key)); - } - - @Override - public RedisFuture> sort(K key, SortArgs sortArgs) { - return dispatch(commandBuilder.sort(key, sortArgs)); - } - - @Override - public RedisFuture sort(ValueStreamingChannel channel, K key, SortArgs sortArgs) { - return dispatch(commandBuilder.sort(channel, key, sortArgs)); - } - - @Override - public RedisFuture sortStore(K key, SortArgs sortArgs, K destination) { - return dispatch(commandBuilder.sortStore(key, sortArgs, destination)); - } - - @Override - public RedisFuture spop(K key) { - return dispatch(commandBuilder.spop(key)); - } - - @Override - public RedisFuture> spop(K key, long count) { - return dispatch(commandBuilder.spop(key, count)); - } - - @Override - public RedisFuture srandmember(K key) { - return dispatch(commandBuilder.srandmember(key)); - } - - @Override - public RedisFuture> srandmember(K key, long count) { - return dispatch(commandBuilder.srandmember(key, count)); - } - - @Override - public RedisFuture srandmember(ValueStreamingChannel channel, K key, long count) { - return dispatch(commandBuilder.srandmember(channel, key, count)); - } - - @Override - public RedisFuture srem(K key, V... members) { - return dispatch(commandBuilder.srem(key, members)); - } - - @Override - public RedisFuture> sunion(K... keys) { - return dispatch(commandBuilder.sunion(keys)); - } - - @Override - public RedisFuture sunion(ValueStreamingChannel channel, K... keys) { - return dispatch(commandBuilder.sunion(channel, keys)); - } - - @Override - public RedisFuture sunionstore(K destination, K... keys) { - return dispatch(commandBuilder.sunionstore(destination, keys)); - } - - @Override - public RedisFuture sync() { - return dispatch(commandBuilder.sync()); - } - - @Override - public RedisFuture strlen(K key) { - return dispatch(commandBuilder.strlen(key)); - } - - @Override - public RedisFuture touch(K... 
keys) { - return dispatch(commandBuilder.touch(keys)); - } - - public RedisFuture touch(Iterable keys) { - return dispatch(commandBuilder.touch(keys)); - } - - @Override - public RedisFuture ttl(K key) { - return dispatch(commandBuilder.ttl(key)); - } - - @Override - public RedisFuture type(K key) { - return dispatch(commandBuilder.type(key)); - } - - @Override - public RedisFuture watch(K... keys) { - return dispatch(commandBuilder.watch(keys)); - } - - @Override - public RedisFuture unwatch() { - return dispatch(commandBuilder.unwatch()); - } - - @Override - public RedisFuture zadd(K key, double score, V member) { - return dispatch(commandBuilder.zadd(key, null, score, member)); - } - - @Override - public RedisFuture zadd(K key, Object... scoresAndValues) { - return dispatch(commandBuilder.zadd(key, null, scoresAndValues)); - } - - @Override - public RedisFuture zadd(K key, ScoredValue... scoredValues) { - return dispatch(commandBuilder.zadd(key, null, scoredValues)); - } - - @Override - public RedisFuture zadd(K key, ZAddArgs zAddArgs, double score, V member) { - return dispatch(commandBuilder.zadd(key, zAddArgs, score, member)); - } - - @Override - public RedisFuture zadd(K key, ZAddArgs zAddArgs, Object... scoresAndValues) { - return dispatch(commandBuilder.zadd(key, zAddArgs, scoresAndValues)); - } - - @Override - public RedisFuture zadd(K key, ZAddArgs zAddArgs, ScoredValue... scoredValues) { - return dispatch(commandBuilder.zadd(key, zAddArgs, scoredValues)); - } - - @Override - public RedisFuture zaddincr(K key, double score, V member) { - return dispatch(commandBuilder.zaddincr(key, score, member)); - } - - @Override - public RedisFuture zcard(K key) { - return dispatch(commandBuilder.zcard(key)); - } - - @Override - public RedisFuture zcount(K key, double min, double max) { - return dispatch(commandBuilder.zcount(key, min, max)); - } - - @Override - public RedisFuture zcount(K key, String min, String max) { - return dispatch(commandBuilder.zcount(key, min, max)); - } - - @Override - public RedisFuture zincrby(K key, double amount, K member) { - return dispatch(commandBuilder.zincrby(key, amount, member)); - } - - @Override - public RedisFuture zinterstore(K destination, K... keys) { - return dispatch(commandBuilder.zinterstore(destination, keys)); - } - - @Override - public RedisFuture zinterstore(K destination, ZStoreArgs storeArgs, K... 
keys) { - return dispatch(commandBuilder.zinterstore(destination, storeArgs, keys)); - } - - @Override - public RedisFuture> zrange(K key, long start, long stop) { - return dispatch(commandBuilder.zrange(key, start, stop)); - } - - @Override - public RedisFuture>> zrangeWithScores(K key, long start, long stop) { - return dispatch(commandBuilder.zrangeWithScores(key, start, stop)); - } - - @Override - public RedisFuture> zrangebyscore(K key, double min, double max) { - return dispatch(commandBuilder.zrangebyscore(key, min, max)); - } - - @Override - public RedisFuture> zrangebyscore(K key, String min, String max) { - return dispatch(commandBuilder.zrangebyscore(key, min, max)); - } - - @Override - public RedisFuture> zrangebyscore(K key, double min, double max, long offset, long count) { - return dispatch(commandBuilder.zrangebyscore(key, min, max, offset, count)); - } - - @Override - public RedisFuture> zrangebyscore(K key, String min, String max, long offset, long count) { - return dispatch(commandBuilder.zrangebyscore(key, min, max, offset, count)); - } - - @Override - public RedisFuture>> zrangebyscoreWithScores(K key, double min, double max) { - return dispatch(commandBuilder.zrangebyscoreWithScores(key, min, max)); - } - - @Override - public RedisFuture>> zrangebyscoreWithScores(K key, String min, String max) { - return dispatch(commandBuilder.zrangebyscoreWithScores(key, min, max)); - } - - @Override - public RedisFuture>> zrangebyscoreWithScores(K key, double min, double max, long offset, long count) { - return dispatch(commandBuilder.zrangebyscoreWithScores(key, min, max, offset, count)); - } - - @Override - public RedisFuture>> zrangebyscoreWithScores(K key, String min, String max, long offset, long count) { - return dispatch(commandBuilder.zrangebyscoreWithScores(key, min, max, offset, count)); - } - - @Override - public RedisFuture zrange(ValueStreamingChannel channel, K key, long start, long stop) { - return dispatch(commandBuilder.zrange(channel, key, start, stop)); - } - - @Override - public RedisFuture zrangeWithScores(ScoredValueStreamingChannel channel, K key, long start, long stop) { - return dispatch(commandBuilder.zrangeWithScores(channel, key, start, stop)); - } - - @Override - public RedisFuture zrangebyscore(ValueStreamingChannel channel, K key, double min, double max) { - return dispatch(commandBuilder.zrangebyscore(channel, key, min, max)); - } - - @Override - public RedisFuture zrangebyscore(ValueStreamingChannel channel, K key, String min, String max) { - return dispatch(commandBuilder.zrangebyscore(channel, key, min, max)); - } - - @Override - public RedisFuture zrangebyscore(ValueStreamingChannel channel, K key, double min, double max, long offset, - long count) { - return dispatch(commandBuilder.zrangebyscore(channel, key, min, max, offset, count)); - } - - @Override - public RedisFuture zrangebyscore(ValueStreamingChannel channel, K key, String min, String max, long offset, - long count) { - return dispatch(commandBuilder.zrangebyscore(channel, key, min, max, offset, count)); - } - - @Override - public RedisFuture zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double min, double max) { - return dispatch(commandBuilder.zrangebyscoreWithScores(channel, key, min, max)); - } - - @Override - public RedisFuture zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String min, String max) { - return dispatch(commandBuilder.zrangebyscoreWithScores(channel, key, min, max)); - } - - @Override - public RedisFuture 
zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double min, double max, - long offset, long count) { - return dispatch(commandBuilder.zrangebyscoreWithScores(channel, key, min, max, offset, count)); - } - - @Override - public RedisFuture zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String min, String max, - long offset, long count) { - return dispatch(commandBuilder.zrangebyscoreWithScores(channel, key, min, max, offset, count)); - } - - @Override - public RedisFuture zrank(K key, V member) { - return dispatch(commandBuilder.zrank(key, member)); - } - - @Override - public RedisFuture zrem(K key, V... members) { - return dispatch(commandBuilder.zrem(key, members)); - } - - @Override - public RedisFuture zremrangebyrank(K key, long start, long stop) { - return dispatch(commandBuilder.zremrangebyrank(key, start, stop)); - } - - @Override - public RedisFuture zremrangebyscore(K key, double min, double max) { - return dispatch(commandBuilder.zremrangebyscore(key, min, max)); - } - - @Override - public RedisFuture zremrangebyscore(K key, String min, String max) { - return dispatch(commandBuilder.zremrangebyscore(key, min, max)); - } - - @Override - public RedisFuture> zrevrange(K key, long start, long stop) { - return dispatch(commandBuilder.zrevrange(key, start, stop)); - } - - @Override - public RedisFuture>> zrevrangeWithScores(K key, long start, long stop) { - return dispatch(commandBuilder.zrevrangeWithScores(key, start, stop)); - } - - @Override - public RedisFuture> zrevrangebyscore(K key, double max, double min) { - return dispatch(commandBuilder.zrevrangebyscore(key, max, min)); - } - - @Override - public RedisFuture> zrevrangebyscore(K key, String max, String min) { - return dispatch(commandBuilder.zrevrangebyscore(key, max, min)); - } - - @Override - public RedisFuture> zrevrangebyscore(K key, double max, double min, long offset, long count) { - return dispatch(commandBuilder.zrevrangebyscore(key, max, min, offset, count)); - } - - @Override - public RedisFuture> zrevrangebyscore(K key, String max, String min, long offset, long count) { - return dispatch(commandBuilder.zrevrangebyscore(key, max, min, offset, count)); - } - - @Override - public RedisFuture>> zrevrangebyscoreWithScores(K key, double max, double min) { - return dispatch(commandBuilder.zrevrangebyscoreWithScores(key, max, min)); - } - - @Override - public RedisFuture>> zrevrangebyscoreWithScores(K key, String max, String min) { - return dispatch(commandBuilder.zrevrangebyscoreWithScores(key, max, min)); - } - - @Override - public RedisFuture>> zrevrangebyscoreWithScores(K key, double max, double min, long offset, long count) { - return dispatch(commandBuilder.zrevrangebyscoreWithScores(key, max, min, offset, count)); - } - - @Override - public RedisFuture>> zrevrangebyscoreWithScores(K key, String max, String min, long offset, long count) { - return dispatch(commandBuilder.zrevrangebyscoreWithScores(key, max, min, offset, count)); - } - - @Override - public RedisFuture zrevrange(ValueStreamingChannel channel, K key, long start, long stop) { - return dispatch(commandBuilder.zrevrange(channel, key, start, stop)); - } - - @Override - public RedisFuture zrevrangeWithScores(ScoredValueStreamingChannel channel, K key, long start, long stop) { - return dispatch(commandBuilder.zrevrangeWithScores(channel, key, start, stop)); - } - - @Override - public RedisFuture zrevrangebyscore(ValueStreamingChannel channel, K key, double max, double min) { - return 
dispatch(commandBuilder.zrevrangebyscore(channel, key, max, min)); - } - - @Override - public RedisFuture zrevrangebyscore(ValueStreamingChannel channel, K key, String max, String min) { - return dispatch(commandBuilder.zrevrangebyscore(channel, key, max, min)); - } - - @Override - public RedisFuture zrevrangebyscore(ValueStreamingChannel channel, K key, double max, double min, long offset, - long count) { - return dispatch(commandBuilder.zrevrangebyscore(channel, key, max, min, offset, count)); - } - - @Override - public RedisFuture zrevrangebyscore(ValueStreamingChannel channel, K key, String max, String min, long offset, - long count) { - return dispatch(commandBuilder.zrevrangebyscore(channel, key, max, min, offset, count)); - } - - @Override - public RedisFuture zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double max, double min) { - return dispatch(commandBuilder.zrevrangebyscoreWithScores(channel, key, max, min)); - } - - @Override - public RedisFuture zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String max, String min) { - return dispatch(commandBuilder.zrevrangebyscoreWithScores(channel, key, max, min)); - } - - @Override - public RedisFuture zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double max, double min, - long offset, long count) { - return dispatch(commandBuilder.zrevrangebyscoreWithScores(channel, key, max, min, offset, count)); - } - - @Override - public RedisFuture zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String max, String min, - long offset, long count) { - return dispatch(commandBuilder.zrevrangebyscoreWithScores(channel, key, max, min, offset, count)); - } - - @Override - public RedisFuture zrevrank(K key, V member) { - return dispatch(commandBuilder.zrevrank(key, member)); - } - - @Override - public RedisFuture zscore(K key, V member) { - return dispatch(commandBuilder.zscore(key, member)); - } - - @Override - public RedisFuture zunionstore(K destination, K... keys) { - return dispatch(commandBuilder.zunionstore(destination, keys)); - } - - @Override - public RedisFuture zunionstore(K destination, ZStoreArgs storeArgs, K... 
keys) { - return dispatch(commandBuilder.zunionstore(destination, storeArgs, keys)); - } - - @Override - public RedisFuture> scan() { - return dispatch(commandBuilder.scan()); - } - - @Override - public RedisFuture> scan(ScanArgs scanArgs) { - return dispatch(commandBuilder.scan(scanArgs)); - } - - @Override - public RedisFuture> scan(ScanCursor scanCursor, ScanArgs scanArgs) { - return dispatch(commandBuilder.scan(scanCursor, scanArgs)); - } - - @Override - public RedisFuture> scan(ScanCursor scanCursor) { - return dispatch(commandBuilder.scan(scanCursor)); - } - - @Override - public RedisFuture scan(KeyStreamingChannel channel) { - return dispatch(commandBuilder.scanStreaming(channel)); - } - - @Override - public RedisFuture scan(KeyStreamingChannel channel, ScanArgs scanArgs) { - return dispatch(commandBuilder.scanStreaming(channel, scanArgs)); - } - - @Override - public RedisFuture scan(KeyStreamingChannel channel, ScanCursor scanCursor, ScanArgs scanArgs) { - return dispatch(commandBuilder.scanStreaming(channel, scanCursor, scanArgs)); - } - - @Override - public RedisFuture scan(KeyStreamingChannel channel, ScanCursor scanCursor) { - return dispatch(commandBuilder.scanStreaming(channel, scanCursor)); - } - - @Override - public RedisFuture> sscan(K key) { - return dispatch(commandBuilder.sscan(key)); - } - - @Override - public RedisFuture> sscan(K key, ScanArgs scanArgs) { - return dispatch(commandBuilder.sscan(key, scanArgs)); - } - - @Override - public RedisFuture> sscan(K key, ScanCursor scanCursor, ScanArgs scanArgs) { - return dispatch(commandBuilder.sscan(key, scanCursor, scanArgs)); - } - - @Override - public RedisFuture> sscan(K key, ScanCursor scanCursor) { - return dispatch(commandBuilder.sscan(key, scanCursor)); - } - - @Override - public RedisFuture sscan(ValueStreamingChannel channel, K key) { - return dispatch(commandBuilder.sscanStreaming(channel, key)); - } - - @Override - public RedisFuture sscan(ValueStreamingChannel channel, K key, ScanArgs scanArgs) { - return dispatch(commandBuilder.sscanStreaming(channel, key, scanArgs)); - } - - @Override - public RedisFuture sscan(ValueStreamingChannel channel, K key, ScanCursor scanCursor, ScanArgs scanArgs) { - return dispatch(commandBuilder.sscanStreaming(channel, key, scanCursor, scanArgs)); - } - - @Override - public RedisFuture sscan(ValueStreamingChannel channel, K key, ScanCursor scanCursor) { - return dispatch(commandBuilder.sscanStreaming(channel, key, scanCursor)); - } - - @Override - public RedisFuture> hscan(K key) { - return dispatch(commandBuilder.hscan(key)); - } - - @Override - public RedisFuture> hscan(K key, ScanArgs scanArgs) { - return dispatch(commandBuilder.hscan(key, scanArgs)); - } - - @Override - public RedisFuture> hscan(K key, ScanCursor scanCursor, ScanArgs scanArgs) { - return dispatch(commandBuilder.hscan(key, scanCursor, scanArgs)); - } - - @Override - public RedisFuture> hscan(K key, ScanCursor scanCursor) { - return dispatch(commandBuilder.hscan(key, scanCursor)); - } - - @Override - public RedisFuture hscan(KeyValueStreamingChannel channel, K key) { - return dispatch(commandBuilder.hscanStreaming(channel, key)); - } - - @Override - public RedisFuture hscan(KeyValueStreamingChannel channel, K key, ScanArgs scanArgs) { - return dispatch(commandBuilder.hscanStreaming(channel, key, scanArgs)); - } - - @Override - public RedisFuture hscan(KeyValueStreamingChannel channel, K key, ScanCursor scanCursor, - ScanArgs scanArgs) { - return dispatch(commandBuilder.hscanStreaming(channel, key, scanCursor, 
scanArgs)); - } - - @Override - public RedisFuture hscan(KeyValueStreamingChannel channel, K key, ScanCursor scanCursor) { - return dispatch(commandBuilder.hscanStreaming(channel, key, scanCursor)); - } - - @Override - public RedisFuture> zscan(K key) { - return dispatch(commandBuilder.zscan(key)); - } - - @Override - public RedisFuture> zscan(K key, ScanArgs scanArgs) { - return dispatch(commandBuilder.zscan(key, scanArgs)); - } - - @Override - public RedisFuture> zscan(K key, ScanCursor scanCursor, ScanArgs scanArgs) { - return dispatch(commandBuilder.zscan(key, scanCursor, scanArgs)); - } - - @Override - public RedisFuture> zscan(K key, ScanCursor scanCursor) { - return dispatch(commandBuilder.zscan(key, scanCursor)); - } - - @Override - public RedisFuture zscan(ScoredValueStreamingChannel channel, K key) { - return dispatch(commandBuilder.zscanStreaming(channel, key)); - } - - @Override - public RedisFuture zscan(ScoredValueStreamingChannel channel, K key, ScanArgs scanArgs) { - return dispatch(commandBuilder.zscanStreaming(channel, key, scanArgs)); - } - - @Override - public RedisFuture zscan(ScoredValueStreamingChannel channel, K key, ScanCursor scanCursor, - ScanArgs scanArgs) { - return dispatch(commandBuilder.zscanStreaming(channel, key, scanCursor, scanArgs)); - } - - @Override - public RedisFuture zscan(ScoredValueStreamingChannel channel, K key, ScanCursor scanCursor) { - return dispatch(commandBuilder.zscanStreaming(channel, key, scanCursor)); - } - - @Override - public String digest(V script) { - return LettuceStrings.digest(codec.encodeValue(script)); - } - - @Override - public RedisFuture> time() { - return dispatch(commandBuilder.time()); - } - - @Override - public RedisFuture waitForReplication(int replicas, long timeout) { - return dispatch(commandBuilder.wait(replicas, timeout)); - } - - @Override - public RedisFuture pfadd(K key, V value, V... moreValues) { - return dispatch(commandBuilder.pfadd(key, value, moreValues)); - } - - @Override - public RedisFuture pfadd(K key, V... values) { - return dispatch(commandBuilder.pfadd(key, values)); - } - - @Override - public RedisFuture pfmerge(K destkey, K sourcekey, K... moreSourceKeys) { - return dispatch(commandBuilder.pfmerge(destkey, sourcekey, moreSourceKeys)); - } - - @Override - public RedisFuture pfmerge(K destkey, K... sourcekeys) { - return dispatch(commandBuilder.pfmerge(destkey, sourcekeys)); - } - - @Override - public RedisFuture pfcount(K key, K... moreKeys) { - return dispatch(commandBuilder.pfcount(key, moreKeys)); - } - - @Override - public RedisFuture pfcount(K... keys) { - return dispatch(commandBuilder.pfcount(keys)); - } - - @Override - public RedisFuture clusterBumpepoch() { - return dispatch(commandBuilder.clusterBumpepoch()); - } - - @Override - public RedisFuture clusterMeet(String ip, int port) { - return dispatch(commandBuilder.clusterMeet(ip, port)); - } - - @Override - public RedisFuture clusterForget(String nodeId) { - return dispatch(commandBuilder.clusterForget(nodeId)); - } - - @Override - public RedisFuture clusterAddSlots(int... slots) { - return dispatch(commandBuilder.clusterAddslots(slots)); - } - - @Override - public RedisFuture clusterDelSlots(int... 
slots) { - return dispatch(commandBuilder.clusterDelslots(slots)); - } - - @Override - public RedisFuture clusterInfo() { - return dispatch(commandBuilder.clusterInfo()); - } - - @Override - public RedisFuture clusterMyId() { - return dispatch(commandBuilder.clusterMyId()); - } - - @Override - public RedisFuture clusterNodes() { - return dispatch(commandBuilder.clusterNodes()); - } - - @Override - public RedisFuture> clusterGetKeysInSlot(int slot, int count) { - return dispatch(commandBuilder.clusterGetKeysInSlot(slot, count)); - } - - @Override - public RedisFuture clusterCountKeysInSlot(int slot) { - return dispatch(commandBuilder.clusterCountKeysInSlot(slot)); - } - - @Override - public RedisFuture clusterCountFailureReports(String nodeId) { - return dispatch(commandBuilder.clusterCountFailureReports(nodeId)); - } - - @Override - public RedisFuture clusterKeyslot(K key) { - return dispatch(commandBuilder.clusterKeyslot(key)); - } - - @Override - public RedisFuture clusterSaveconfig() { - return dispatch(commandBuilder.clusterSaveconfig()); - } - - @Override - public RedisFuture clusterSetConfigEpoch(long configEpoch) { - return dispatch(commandBuilder.clusterSetConfigEpoch(configEpoch)); - } - - @Override - public RedisFuture> clusterSlots() { - return dispatch(commandBuilder.clusterSlots()); - } - - @Override - public RedisFuture clusterSetSlotNode(int slot, String nodeId) { - return dispatch(commandBuilder.clusterSetSlotNode(slot, nodeId)); - } - - @Override - public RedisFuture clusterSetSlotStable(int slot) { - return dispatch(commandBuilder.clusterSetSlotStable(slot)); - } - - @Override - public RedisFuture clusterSetSlotMigrating(int slot, String nodeId) { - return dispatch(commandBuilder.clusterSetSlotMigrating(slot, nodeId)); - } - - @Override - public RedisFuture clusterSetSlotImporting(int slot, String nodeId) { - return dispatch(commandBuilder.clusterSetSlotImporting(slot, nodeId)); - } - - @Override - public RedisFuture clusterFailover(boolean force) { - return dispatch(commandBuilder.clusterFailover(force)); - } - - @Override - public RedisFuture clusterReset(boolean hard) { - return dispatch(commandBuilder.clusterReset(hard)); - } - - @Override - public RedisFuture asking() { - return dispatch(commandBuilder.asking()); - } - - @Override - public RedisFuture clusterReplicate(String nodeId) { - return dispatch(commandBuilder.clusterReplicate(nodeId)); - } - - @Override - public RedisFuture clusterFlushslots() { - return dispatch(commandBuilder.clusterFlushslots()); - } - - @Override - public RedisFuture> clusterSlaves(String nodeId) { - return dispatch(commandBuilder.clusterSlaves(nodeId)); - } - - @Override - public RedisFuture zlexcount(K key, String min, String max) { - return dispatch(commandBuilder.zlexcount(key, min, max)); - } - - @Override - public RedisFuture zremrangebylex(K key, String min, String max) { - return dispatch(commandBuilder.zremrangebylex(key, min, max)); - } - - @Override - public RedisFuture> zrangebylex(K key, String min, String max) { - return dispatch(commandBuilder.zrangebylex(key, min, max)); - } - - @Override - public RedisFuture> zrangebylex(K key, String min, String max, long offset, long count) { - return dispatch(commandBuilder.zrangebylex(key, min, max, offset, count)); - } - - @Override - public RedisFuture geoadd(K key, double longitude, double latitude, V member) { - return dispatch(commandBuilder.geoadd(key, longitude, latitude, member)); - } - - @Override - public RedisFuture geoadd(K key, Object... 
lngLatMember) { - return dispatch(commandBuilder.geoadd(key, lngLatMember)); - } - - @Override - public RedisFuture> geohash(K key, V... members) { - return dispatch(commandBuilder.geohash(key, members)); - } - - @Override - public RedisFuture> georadius(K key, double longitude, double latitude, double distance, GeoArgs.Unit unit) { - return dispatch(commandBuilder.georadius(key, longitude, latitude, distance, unit.name())); - } - - @Override - public RedisFuture>> georadius(K key, double longitude, double latitude, double distance, - GeoArgs.Unit unit, GeoArgs geoArgs) { - return dispatch(commandBuilder.georadius(key, longitude, latitude, distance, unit.name(), geoArgs)); - } - - @Override - public RedisFuture georadius(K key, double longitude, double latitude, double distance, Unit unit, - GeoRadiusStoreArgs geoRadiusStoreArgs) { - return dispatch(commandBuilder.georadius(key, longitude, latitude, distance, unit.name(), geoRadiusStoreArgs)); - } - - @Override - public RedisFuture> georadiusbymember(K key, V member, double distance, GeoArgs.Unit unit) { - return dispatch(commandBuilder.georadiusbymember(key, member, distance, unit.name())); - } - - @Override - public RedisFuture>> georadiusbymember(K key, V member, double distance, GeoArgs.Unit unit, - GeoArgs geoArgs) { - return dispatch(commandBuilder.georadiusbymember(key, member, distance, unit.name(), geoArgs)); - } - - @Override - public RedisFuture georadiusbymember(K key, V member, double distance, Unit unit, - GeoRadiusStoreArgs geoRadiusStoreArgs) { - return dispatch(commandBuilder.georadiusbymember(key, member, distance, unit.name(), geoRadiusStoreArgs)); - } - - @Override - public RedisFuture> geopos(K key, V... members) { - return dispatch(commandBuilder.geopos(key, members)); - } - - @Override - public RedisFuture geodist(K key, V from, V to, GeoArgs.Unit unit) { - return dispatch(commandBuilder.geodist(key, from, to, unit)); - } - - @Override - public RedisFuture dispatch(ProtocolKeyword type, CommandOutput output) { - - LettuceAssert.notNull(type, "Command type must not be null"); - LettuceAssert.notNull(output, "CommandOutput type must not be null"); - - return dispatch(new AsyncCommand<>(new Command<>(type, output))); - } - - @Override - public RedisFuture dispatch(ProtocolKeyword type, CommandOutput output, CommandArgs args) { - - LettuceAssert.notNull(type, "Command type must not be null"); - LettuceAssert.notNull(output, "CommandOutput type must not be null"); - LettuceAssert.notNull(args, "CommandArgs type must not be null"); - - return dispatch(new AsyncCommand<>(new Command<>(type, output, args))); - } - - protected RedisFuture dispatch(CommandType type, CommandOutput output) { - return dispatch(type, output, null); - } - - protected RedisFuture dispatch(CommandType type, CommandOutput output, CommandArgs args) { - return dispatch(new AsyncCommand<>(new Command<>(type, output, args))); - } - - public AsyncCommand dispatch(RedisCommand cmd) { - AsyncCommand asyncCommand = new AsyncCommand<>(cmd); - RedisCommand dispatched = connection.dispatch(asyncCommand); - if (dispatched instanceof AsyncCommand) { - return (AsyncCommand) dispatched; - } - return asyncCommand; - } - - public void setTimeout(long timeout, TimeUnit unit) { - connection.setTimeout(timeout, unit); - } - - @Override - public void close() { - connection.close(); - } - - @Override - public boolean isOpen() { - return connection.isOpen(); - } - - @Override - public void reset() { - getConnection().reset(); - } - - public StatefulConnection 
getConnection() { - return connection; - } - - @Override - public void setAutoFlushCommands(boolean autoFlush) { - connection.setAutoFlushCommands(autoFlush); - } - - @Override - public void flushCommands() { - connection.flushCommands(); - } -} diff --git a/src/main/java/com/lambdaworks/redis/AbstractRedisClient.java b/src/main/java/com/lambdaworks/redis/AbstractRedisClient.java deleted file mode 100644 index 95ce06eacc..0000000000 --- a/src/main/java/com/lambdaworks/redis/AbstractRedisClient.java +++ /dev/null @@ -1,378 +0,0 @@ -package com.lambdaworks.redis; - -import java.io.Closeable; -import java.net.SocketAddress; -import java.util.ArrayList; -import java.util.List; -import java.util.Map; -import java.util.Set; -import java.util.concurrent.*; -import java.util.concurrent.atomic.AtomicBoolean; -import java.util.function.Supplier; - -import com.lambdaworks.redis.internal.LettuceAssert; -import com.lambdaworks.redis.protocol.CommandHandler; -import com.lambdaworks.redis.pubsub.PubSubCommandHandler; -import com.lambdaworks.redis.resource.ClientResources; -import com.lambdaworks.redis.resource.DefaultClientResources; - -import io.netty.bootstrap.Bootstrap; -import io.netty.buffer.PooledByteBufAllocator; -import io.netty.channel.*; -import io.netty.channel.group.ChannelGroup; -import io.netty.channel.group.ChannelGroupFuture; -import io.netty.channel.group.DefaultChannelGroup; -import io.netty.channel.nio.NioEventLoopGroup; -import io.netty.channel.socket.nio.NioSocketChannel; -import io.netty.util.HashedWheelTimer; -import io.netty.util.concurrent.EventExecutorGroup; -import io.netty.util.concurrent.Future; -import io.netty.util.internal.ConcurrentSet; -import io.netty.util.internal.logging.InternalLogger; -import io.netty.util.internal.logging.InternalLoggerFactory; - -/** - * Base Redis client. This class holds the netty infrastructure, {@link ClientOptions} and the basic connection procedure. This - * class creates the netty {@link EventLoopGroup}s for NIO ({@link NioEventLoopGroup}) and EPoll ( - * {@link io.netty.channel.epoll.EpollEventLoopGroup}) with a default of {@code Runtime.getRuntime().availableProcessors() * 4} - * threads. Reuse the instance as much as possible since the {@link EventLoopGroup} instances are expensive and can consume a - * huge part of your resources, if you create multiple instances. - *
- * You can set the number of threads per {@link NioEventLoopGroup} by setting the {@code io.netty.eventLoopThreads} system - * property to a reasonable number of threads. - *
- * - * @author Mark Paluch - * @since 3.0 - */ -public abstract class AbstractRedisClient { - - protected static final PooledByteBufAllocator BUF_ALLOCATOR = PooledByteBufAllocator.DEFAULT; - protected static final InternalLogger logger = InternalLoggerFactory.getInstance(RedisClient.class); - - /** - * @deprecated use map eventLoopGroups instead. - */ - @Deprecated - protected EventLoopGroup eventLoopGroup; - protected EventExecutorGroup genericWorkerPool; - - protected final Map, EventLoopGroup> eventLoopGroups = new ConcurrentHashMap<>(2); - protected final HashedWheelTimer timer; - protected final ChannelGroup channels; - protected final ClientResources clientResources; - protected long timeout = 60; - protected TimeUnit unit; - protected ConnectionEvents connectionEvents = new ConnectionEvents(); - protected Set closeableResources = new ConcurrentSet<>(); - - protected volatile ClientOptions clientOptions = ClientOptions.builder().build(); - - private final boolean sharedResources; - private final AtomicBoolean shutdown = new AtomicBoolean(); - - /** - * @deprecated use {@link #AbstractRedisClient(ClientResources)} - */ - @Deprecated - protected AbstractRedisClient() { - this(null); - } - - /** - * Create a new instance with client resources. - * - * @param clientResources the client resources. If {@literal null}, the client will create a new dedicated instance of - * client resources and keep track of them. - */ - protected AbstractRedisClient(ClientResources clientResources) { - - if (clientResources == null) { - sharedResources = false; - this.clientResources = DefaultClientResources.create(); - } else { - sharedResources = true; - this.clientResources = clientResources; - } - - unit = TimeUnit.SECONDS; - - genericWorkerPool = this.clientResources.eventExecutorGroup(); - channels = new DefaultChannelGroup(genericWorkerPool.next()); - timer = new HashedWheelTimer(); - } - - /** - * Set the default timeout for {@link com.lambdaworks.redis.RedisConnection connections} created by this client. The timeout - * applies to connection attempts and non-blocking commands. - * - * @param timeout Default connection timeout. - * @param unit Unit of time for the timeout. - */ - public void setDefaultTimeout(long timeout, TimeUnit unit) { - this.timeout = timeout; - this.unit = unit; - } - - @SuppressWarnings("unchecked") - protected > T connectAsyncImpl(final CommandHandler handler, - final T connection, final Supplier socketAddressSupplier) { - - ConnectionBuilder connectionBuilder = ConnectionBuilder.connectionBuilder(); - connectionBuilder.clientOptions(clientOptions); - connectionBuilder.clientResources(clientResources); - connectionBuilder(handler, connection, socketAddressSupplier, connectionBuilder, null); - channelType(connectionBuilder, null); - return (T) initializeChannel(connectionBuilder); - } - - /** - * Populate connection builder with necessary resources. 
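The class Javadoc and constructor above stress reusing a single client instance, since the netty `EventLoopGroup`s are expensive, and show that externally supplied `ClientResources` are treated as shared and not shut down by the client. A minimal sketch of that pattern, assuming the Lettuce 4.x `com.lambdaworks.redis` API these deleted sources belong to (the `RedisClient.create(ClientResources, RedisURI)` factory and `ClientResources.shutdown()` are assumed from that era; the URI is a placeholder):

```java
import java.util.concurrent.TimeUnit;

import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.RedisURI;
import com.lambdaworks.redis.resource.ClientResources;
import com.lambdaworks.redis.resource.DefaultClientResources;

public class SharedResourcesExample {

    public static void main(String[] args) {
        // Shared resources: the client sees them as externally owned and will not
        // release them on shutdown, so they can back several RedisClient instances.
        ClientResources resources = DefaultClientResources.create();

        RedisClient client = RedisClient.create(resources, RedisURI.create("redis://localhost"));

        // Default timeout for connections created by this client; applies to
        // connection attempts and non-blocking commands.
        client.setDefaultTimeout(60, TimeUnit.SECONDS);

        // ... open connections and run commands ...

        client.shutdown();    // releases only client-owned resources
        resources.shutdown(); // the owner of the shared resources releases them
    }
}
```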
- * - * @param handler instance of a CommandHandler for writing redis commands - * @param connection implementation of a RedisConnection - * @param socketAddressSupplier address supplier for initial connect and re-connect - * @param connectionBuilder connection builder to configure the connection - * @param redisURI URI of the redis instance - */ - protected void connectionBuilder(CommandHandler handler, RedisChannelHandler connection, - Supplier socketAddressSupplier, ConnectionBuilder connectionBuilder, RedisURI redisURI) { - - Bootstrap redisBootstrap = new Bootstrap(); - redisBootstrap.option(ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK, 32 * 1024); - redisBootstrap.option(ChannelOption.WRITE_BUFFER_LOW_WATER_MARK, 8 * 1024); - redisBootstrap.option(ChannelOption.ALLOCATOR, BUF_ALLOCATOR); - - SocketOptions socketOptions = getOptions().getSocketOptions(); - - redisBootstrap.option(ChannelOption.CONNECT_TIMEOUT_MILLIS, - (int) socketOptions.getConnectTimeoutUnit().toMillis(socketOptions.getConnectTimeout())); - redisBootstrap.option(ChannelOption.SO_KEEPALIVE, socketOptions.isKeepAlive()); - redisBootstrap.option(ChannelOption.TCP_NODELAY, socketOptions.isTcpNoDelay()); - - if (redisURI == null) { - connectionBuilder.timeout(timeout, unit); - } else { - connectionBuilder.timeout(redisURI.getTimeout(), redisURI.getUnit()); - connectionBuilder.password(redisURI.getPassword()); - } - - connectionBuilder.bootstrap(redisBootstrap); - connectionBuilder.channelGroup(channels).connectionEvents(connectionEvents).timer(timer); - connectionBuilder.commandHandler(handler).socketAddressSupplier(socketAddressSupplier).connection(connection); - connectionBuilder.workerPool(genericWorkerPool); - } - - protected void channelType(ConnectionBuilder connectionBuilder, ConnectionPoint connectionPoint) { - - connectionBuilder.bootstrap().group(getEventLoopGroup(connectionPoint)); - - if (connectionPoint != null && connectionPoint.getSocket() != null) { - checkForEpollLibrary(); - connectionBuilder.bootstrap().channel(EpollProvider.epollDomainSocketChannelClass); - } else { - connectionBuilder.bootstrap().channel(NioSocketChannel.class); - } - } - - private synchronized EventLoopGroup getEventLoopGroup(ConnectionPoint connectionPoint) { - - if ((connectionPoint == null || connectionPoint.getSocket() == null) - && !eventLoopGroups.containsKey(NioEventLoopGroup.class)) { - - if (eventLoopGroup == null) { - eventLoopGroup = clientResources.eventLoopGroupProvider().allocate(NioEventLoopGroup.class); - } - - eventLoopGroups.put(NioEventLoopGroup.class, eventLoopGroup); - } - - if (connectionPoint != null && connectionPoint.getSocket() != null) { - checkForEpollLibrary(); - - if (!eventLoopGroups.containsKey(EpollProvider.epollEventLoopGroupClass)) { - EventLoopGroup epl = clientResources.eventLoopGroupProvider().allocate(EpollProvider.epollEventLoopGroupClass); - eventLoopGroups.put(EpollProvider.epollEventLoopGroupClass, epl); - } - } - - if (connectionPoint == null || connectionPoint.getSocket() == null) { - return eventLoopGroups.get(NioEventLoopGroup.class); - } - - if (connectionPoint != null && connectionPoint.getSocket() != null) { - checkForEpollLibrary(); - return eventLoopGroups.get(EpollProvider.epollEventLoopGroupClass); - } - - throw new IllegalStateException("This should not have happened in a binary decision. 
Please file a bug."); - } - - private void checkForEpollLibrary() { - EpollProvider.checkForEpollLibrary(); - } - - @SuppressWarnings("unchecked") - protected > T initializeChannel(ConnectionBuilder connectionBuilder) { - - RedisChannelHandler connection = connectionBuilder.connection(); - SocketAddress redisAddress = connectionBuilder.socketAddress(); - try { - - logger.debug("Connecting to Redis at {}", redisAddress); - - Bootstrap redisBootstrap = connectionBuilder.bootstrap(); - RedisChannelInitializer initializer = connectionBuilder.build(); - redisBootstrap.handler(initializer); - ChannelFuture connectFuture = redisBootstrap.connect(redisAddress); - - connectFuture.await(); - - if (!connectFuture.isSuccess()) { - if (connectFuture.cause() instanceof Exception) { - throw (Exception) connectFuture.cause(); - } - connectFuture.get(); - } - - try { - initializer.channelInitialized().get(connectionBuilder.getTimeout(), connectionBuilder.getTimeUnit()); - } catch (TimeoutException e) { - throw new RedisConnectionException("Could not initialize channel within " + connectionBuilder.getTimeout() + " " - + connectionBuilder.getTimeUnit(), e); - } - connection.registerCloseables(closeableResources, connection); - - return (T) connection; - } catch (RedisException e) { - connectionBuilder.commandHandler().initialState(); - throw e; - } catch (Exception e) { - connectionBuilder.commandHandler().initialState(); - throw new RedisConnectionException("Unable to connect to " + redisAddress, e); - } - } - - /** - * Shutdown this client and close all open connections. The client should be discarded after calling shutdown. The shutdown - * has 2 secs quiet time and a timeout of 15 secs. - */ - public void shutdown() { - shutdown(2, 15, TimeUnit.SECONDS); - } - - /** - * Shutdown this client and close all open connections. The client should be discarded after calling shutdown. 
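The no-argument `shutdown()` above simply delegates to the variant with an explicit quiet period and timeout. A short sketch of calling that variant directly, assuming the public `RedisClient.create(String)` factory from the same 4.x API (URI and timings are placeholders):

```java
import java.util.concurrent.TimeUnit;

import com.lambdaworks.redis.RedisClient;

public class ShutdownExample {

    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost");

        // ... use and close connections ...

        // No quiet period, force shutdown after at most 5 seconds instead of the
        // default 2s quiet time / 15s timeout.
        client.shutdown(0, 5, TimeUnit.SECONDS);
    }
}
```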
- * - * @param quietPeriod the quiet period as described in the documentation - * @param timeout the maximum amount of time to wait until the executor is shutdown regardless if a task was submitted - * during the quiet period - * @param timeUnit the unit of {@code quietPeriod} and {@code timeout} - */ - public void shutdown(long quietPeriod, long timeout, TimeUnit timeUnit) { - - if (shutdown.compareAndSet(false, true)) { - - timer.stop(); - - while (!closeableResources.isEmpty()) { - Closeable closeableResource = closeableResources.iterator().next(); - try { - closeableResource.close(); - } catch (Exception e) { - logger.debug("Exception on Close: " + e.getMessage(), e); - } - closeableResources.remove(closeableResource); - } - - List> closeFutures = new ArrayList<>(); - - for (Channel c : channels) { - ChannelPipeline pipeline = c.pipeline(); - - CommandHandler commandHandler = pipeline.get(CommandHandler.class); - if (commandHandler != null && !commandHandler.isClosed()) { - commandHandler.close(); - } - - PubSubCommandHandler psCommandHandler = pipeline.get(PubSubCommandHandler.class); - if (psCommandHandler != null && !psCommandHandler.isClosed()) { - psCommandHandler.close(); - } - } - - ChannelGroupFuture closeFuture = channels.close(); - closeFutures.add(closeFuture); - - if (!sharedResources) { - clientResources.shutdown(quietPeriod, timeout, timeUnit); - } else { - for (EventLoopGroup eventExecutors : eventLoopGroups.values()) { - Future groupCloseFuture = clientResources.eventLoopGroupProvider().release(eventExecutors, quietPeriod, - timeout, timeUnit); - closeFutures.add(groupCloseFuture); - } - } - - for (Future future : closeFutures) { - try { - future.get(); - } catch (Exception e) { - throw new RedisException(e); - } - } - } - } - - protected int getResourceCount() { - return closeableResources.size(); - } - - protected int getChannelCount() { - return channels.size(); - } - - /** - * Add a listener for the RedisConnectionState. The listener is notified every time a connect/disconnect/IO exception - * happens. The listeners are not bound to a specific connection, so every time a connection event happens on any - * connection, the listener will be notified. The corresponding netty channel handler (async connection) is passed on the - * event. - * - * @param listener must not be {@literal null} - */ - public void addListener(RedisConnectionStateListener listener) { - LettuceAssert.notNull(listener, "RedisConnectionStateListener must not be null"); - connectionEvents.addListener(listener); - } - - /** - * Removes a listener. - * - * @param listener must not be {@literal null} - */ - public void removeListener(RedisConnectionStateListener listener) { - - LettuceAssert.notNull(listener, "RedisConnectionStateListener must not be null"); - connectionEvents.removeListener(listener); - } - - /** - * Returns the {@link ClientOptions} which are valid for that client. Connections inherit the current options at the moment - * the connection is created. Changes to options will not affect existing connections. - * - * @return the {@link ClientOptions} for this client - */ - public ClientOptions getOptions() { - return clientOptions; - } - - /** - * Set the {@link ClientOptions} for the client. 
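As the `getOptions()` Javadoc above notes, connections inherit the options that are active at the moment they are created; later changes do not affect existing connections. A hedged sketch of setting options before connecting, assuming `RedisClient` exposes `setOptions` publicly and that `ClientOptions.Builder` offers methods such as `autoReconnect` and `pingBeforeActivateConnection`, as in Lettuce 4.x:

```java
import com.lambdaworks.redis.ClientOptions;
import com.lambdaworks.redis.RedisClient;

public class ClientOptionsExample {

    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost");

        // Options set now apply only to connections created afterwards;
        // existing connections keep the options they were created with.
        client.setOptions(ClientOptions.builder()
                .autoReconnect(true)
                .pingBeforeActivateConnection(true)
                .build());

        // client.connect() now yields connections carrying these options
    }
}
```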
- * - * @param clientOptions client options for the client and connections that are created after setting the options - */ - protected void setOptions(ClientOptions clientOptions) { - LettuceAssert.notNull(clientOptions, "ClientOptions must not be null"); - this.clientOptions = clientOptions; - } -} diff --git a/src/main/java/com/lambdaworks/redis/AbstractRedisReactiveCommands.java b/src/main/java/com/lambdaworks/redis/AbstractRedisReactiveCommands.java deleted file mode 100644 index 8d9326fc10..0000000000 --- a/src/main/java/com/lambdaworks/redis/AbstractRedisReactiveCommands.java +++ /dev/null @@ -1,1856 +0,0 @@ -package com.lambdaworks.redis; - -import static com.lambdaworks.redis.protocol.CommandType.EXEC; - -import java.util.Date; -import java.util.Map; -import java.util.concurrent.TimeUnit; -import java.util.function.Supplier; - -import com.lambdaworks.redis.internal.LettuceAssert; -import com.lambdaworks.redis.protocol.*; -import rx.Observable; -import rx.Subscriber; - -import com.lambdaworks.redis.GeoArgs.Unit; -import com.lambdaworks.redis.api.StatefulConnection; -import com.lambdaworks.redis.api.rx.*; -import com.lambdaworks.redis.cluster.api.rx.RedisClusterReactiveCommands; -import com.lambdaworks.redis.codec.RedisCodec; -import com.lambdaworks.redis.output.*; - -/** - * A reactive and thread-safe API for a Redis connection. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - */ -public abstract class AbstractRedisReactiveCommands implements RedisHashReactiveCommands, - RedisKeyReactiveCommands, RedisStringReactiveCommands, RedisListReactiveCommands, - RedisSetReactiveCommands, RedisSortedSetReactiveCommands, RedisScriptingReactiveCommands, - RedisServerReactiveCommands, RedisHLLReactiveCommands, BaseRedisReactiveCommands, - RedisTransactionalReactiveCommands, RedisGeoReactiveCommands, RedisClusterReactiveCommands { - - protected MultiOutput multi; - protected RedisCommandBuilder commandBuilder; - protected RedisCodec codec; - protected StatefulConnection connection; - - /** - * Initialize a new instance. 
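The reactive command implementation below wraps every call in an rx `Observable` that is deferred until subscription, so nothing is sent to Redis before a subscriber arrives. A brief usage sketch, assuming the Lettuce 4.x `StatefulRedisConnection#reactive()` entry point and RxJava 1 (key and value are placeholders):

```java
import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.api.StatefulRedisConnection;
import com.lambdaworks.redis.api.rx.RedisStringReactiveCommands;

public class ReactiveExample {

    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost");
        StatefulRedisConnection<String, String> connection = client.connect();

        RedisStringReactiveCommands<String, String> reactive = connection.reactive();

        // The command is only dispatched once the Observable is consumed;
        // toBlocking() keeps this example synchronous and race-free.
        String value = reactive.set("name", "lettuce")
                .flatMap(ok -> reactive.get("name"))
                .toBlocking()
                .single();

        System.out.println("GET name -> " + value);

        connection.close();
        client.shutdown();
    }
}
```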
- * - * @param connection the connection to operate on - * @param codec the codec for command encoding - */ - public AbstractRedisReactiveCommands(StatefulConnection connection, RedisCodec codec) { - this.connection = connection; - this.codec = codec; - commandBuilder = new RedisCommandBuilder(codec); - } - - @Override - public Observable append(K key, V value) { - return createObservable(() -> commandBuilder.append(key, value)); - } - - @Override - public Observable auth(String password) { - return createObservable(() -> commandBuilder.auth(password)); - - } - - @Override - public Observable bgrewriteaof() { - return createObservable(commandBuilder::bgrewriteaof); - } - - @Override - public Observable bgsave() { - return createObservable(commandBuilder::bgsave); - } - - @Override - public Observable bitcount(K key) { - return createObservable(() -> commandBuilder.bitcount(key)); - } - - @Override - public Observable bitcount(K key, long start, long end) { - return createObservable(() -> commandBuilder.bitcount(key, start, end)); - } - - @Override - public Observable bitfield(K key, BitFieldArgs args) { - return createDissolvingObservable(() -> commandBuilder.bitfield(key, args)); - } - - @Override - public Observable bitpos(K key, boolean state) { - return createObservable(() -> commandBuilder.bitpos(key, state)); - } - - @Override - public Observable bitpos(K key, boolean state, long start, long end) { - return createObservable(() -> commandBuilder.bitpos(key, state, start, end)); - } - - @Override - public Observable bitopAnd(K destination, K... keys) { - return createObservable(() -> commandBuilder.bitopAnd(destination, keys)); - } - - @Override - public Observable bitopNot(K destination, K source) { - return createObservable(() -> commandBuilder.bitopNot(destination, source)); - } - - @Override - public Observable bitopOr(K destination, K... keys) { - return createObservable(() -> commandBuilder.bitopOr(destination, keys)); - } - - @Override - public Observable bitopXor(K destination, K... keys) { - return createObservable(() -> commandBuilder.bitopXor(destination, keys)); - } - - @Override - public Observable> blpop(long timeout, K... keys) { - return createObservable(() -> commandBuilder.blpop(timeout, keys)); - } - - @Override - public Observable> brpop(long timeout, K... keys) { - return createObservable(() -> commandBuilder.brpop(timeout, keys)); - } - - @Override - public Observable brpoplpush(long timeout, K source, K destination) { - return createObservable(() -> commandBuilder.brpoplpush(timeout, source, destination)); - } - - @Override - public Observable clientGetname() { - return createObservable(commandBuilder::clientGetname); - } - - @Override - public Observable clientSetname(K name) { - return createObservable(() -> commandBuilder.clientSetname(name)); - } - - @Override - public Observable clientKill(String addr) { - return createObservable(() -> commandBuilder.clientKill(addr)); - } - - @Override - public Observable clientKill(KillArgs killArgs) { - return createObservable(() -> commandBuilder.clientKill(killArgs)); - } - - @Override - public Observable clientPause(long timeout) { - return createObservable(() -> commandBuilder.clientPause(timeout)); - } - - @Override - public Observable clientList() { - return createObservable(commandBuilder::clientList); - } - - @Override - public Observable command() { - return createDissolvingObservable(commandBuilder::command); - } - - @Override - public Observable commandInfo(String... 
commands) { - return createDissolvingObservable(() -> commandBuilder.commandInfo(commands)); - } - - @Override - public Observable commandInfo(CommandType... commands) { - String[] stringCommands = new String[commands.length]; - for (int i = 0; i < commands.length; i++) { - stringCommands[i] = commands[i].name(); - } - - return commandInfo(stringCommands); - } - - @Override - public Observable commandCount() { - return createObservable(commandBuilder::commandCount); - } - - @Override - public Observable configGet(String parameter) { - return createDissolvingObservable(() -> commandBuilder.configGet(parameter)); - } - - @Override - public Observable configResetstat() { - return createObservable(commandBuilder::configResetstat); - } - - @Override - public Observable configSet(String parameter, String value) { - return createObservable(() -> commandBuilder.configSet(parameter, value)); - } - - @Override - public Observable configRewrite() { - return createObservable(commandBuilder::configRewrite); - } - - @Override - public Observable dbsize() { - return createObservable(commandBuilder::dbsize); - } - - @Override - public Observable debugCrashAndRecover(Long delay) { - return createObservable(() -> (commandBuilder.debugCrashAndRecover(delay))); - } - - @Override - public Observable debugHtstats(int db) { - return createObservable(() -> commandBuilder.debugHtstats(db)); - } - - @Override - public Observable debugObject(K key) { - return createObservable(() -> commandBuilder.debugObject(key)); - } - - @Override - public Observable debugOom() { - return Observable.just(Success.Success).doOnCompleted(commandBuilder::debugOom); - } - - @Override - public Observable debugReload() { - return createObservable(() -> (commandBuilder.debugReload())); - } - - @Override - public Observable debugRestart(Long delay) { - return createObservable(() -> (commandBuilder.debugRestart(delay))); - } - - @Override - public Observable debugSdslen(K key) { - return createObservable(() -> (commandBuilder.debugSdslen(key))); - } - - @Override - public Observable debugSegfault() { - return Observable.just(Success.Success).doOnCompleted(commandBuilder::debugSegfault); - } - - @Override - public Observable decr(K key) { - return createObservable(() -> commandBuilder.decr(key)); - } - - @Override - public Observable decrby(K key, long amount) { - return createObservable(() -> commandBuilder.decrby(key, amount)); - } - - @Override - public Observable del(K... keys) { - return createObservable(() -> commandBuilder.del(keys)); - } - - public Observable del(Iterable keys) { - return createObservable(() -> commandBuilder.del(keys)); - } - - @Override - public Observable unlink(K... keys) { - return createObservable(() -> commandBuilder.unlink(keys)); - } - - public Observable unlink(Iterable keys) { - return createObservable(() -> commandBuilder.unlink(keys)); - } - - @Override - public Observable discard() { - return createObservable(commandBuilder::discard); - } - - @Override - public Observable dump(K key) { - return createObservable(() -> commandBuilder.dump(key)); - } - - @Override - public Observable echo(V msg) { - return createObservable(() -> commandBuilder.echo(msg)); - } - - @Override - @SuppressWarnings("unchecked") - public Observable eval(String script, ScriptOutputType type, K... keys) { - return (Observable) createObservable(() -> commandBuilder.eval(script, type, keys)); - } - - @Override - @SuppressWarnings("unchecked") - public Observable eval(String script, ScriptOutputType type, K[] keys, V... 
values) { - return (Observable) createObservable(() -> commandBuilder.eval(script, type, keys, values)); - } - - @Override - @SuppressWarnings("unchecked") - public Observable evalsha(String digest, ScriptOutputType type, K... keys) { - return (Observable) createObservable(() -> commandBuilder.evalsha(digest, type, keys)); - } - - @Override - @SuppressWarnings("unchecked") - public Observable evalsha(String digest, ScriptOutputType type, K[] keys, V... values) { - return (Observable) createObservable(() -> commandBuilder.evalsha(digest, type, keys, values)); - } - - public Observable exists(K key) { - return createObservable(() -> commandBuilder.exists(key)); - } - - @Override - public Observable exists(K... keys) { - return createObservable(() -> commandBuilder.exists(keys)); - } - - public Observable exists(Iterable keys) { - return createObservable(() -> commandBuilder.exists(keys)); - } - - @Override - public Observable expire(K key, long seconds) { - return createObservable(() -> commandBuilder.expire(key, seconds)); - } - - @Override - public Observable expireat(K key, long timestamp) { - return createObservable(() -> commandBuilder.expireat(key, timestamp)); - } - - @Override - public Observable expireat(K key, Date timestamp) { - return expireat(key, timestamp.getTime() / 1000); - } - - @Override - public Observable exec() { - return createDissolvingObservable(EXEC, null, null); - } - - @Override - public Observable flushall() { - return createObservable(commandBuilder::flushall); - } - - @Override - public Observable flushallAsync() { - return createObservable(commandBuilder::flushallAsync); - } - - @Override - public Observable flushdb() { - return createObservable(commandBuilder::flushdb); - } - - @Override - public Observable flushdbAsync() { - return createObservable(commandBuilder::flushdbAsync); - } - - @Override - public Observable get(K key) { - return createObservable(() -> commandBuilder.get(key)); - } - - @Override - public Observable getbit(K key, long offset) { - return createObservable(() -> commandBuilder.getbit(key, offset)); - } - - @Override - public Observable getrange(K key, long start, long end) { - return createObservable(() -> commandBuilder.getrange(key, start, end)); - } - - @Override - public Observable getset(K key, V value) { - return createObservable(() -> commandBuilder.getset(key, value)); - } - - @Override - public Observable hdel(K key, K... 
fields) { - return createObservable(() -> commandBuilder.hdel(key, fields)); - } - - @Override - public Observable hexists(K key, K field) { - return createObservable(() -> commandBuilder.hexists(key, field)); - } - - @Override - public Observable hget(K key, K field) { - return createObservable(() -> commandBuilder.hget(key, field)); - } - - @Override - public Observable hincrby(K key, K field, long amount) { - return createObservable(() -> commandBuilder.hincrby(key, field, amount)); - } - - @Override - public Observable hincrbyfloat(K key, K field, double amount) { - return createObservable(() -> commandBuilder.hincrbyfloat(key, field, amount)); - } - - @Override - public Observable> hgetall(K key) { - return createObservable(() -> commandBuilder.hgetall(key)); - } - - @Override - public Observable hgetall(KeyValueStreamingChannel channel, K key) { - return createObservable(() -> commandBuilder.hgetall(channel, key)); - } - - @Override - public Observable hkeys(K key) { - return createDissolvingObservable(() -> commandBuilder.hkeys(key)); - } - - @Override - public Observable hkeys(KeyStreamingChannel channel, K key) { - return createObservable(() -> commandBuilder.hkeys(channel, key)); - } - - @Override - public Observable hlen(K key) { - return createObservable(() -> commandBuilder.hlen(key)); - } - - @Override - public Observable hstrlen(K key, K field) { - return createObservable(() -> commandBuilder.hstrlen(key, field)); - } - - @Override - public Observable hmget(K key, K... fields) { - return createDissolvingObservable(() -> commandBuilder.hmget(key, fields)); - } - - @Override - public Observable hmget(ValueStreamingChannel channel, K key, K... fields) { - return createObservable(() -> commandBuilder.hmget(channel, key, fields)); - } - - @Override - public Observable hmset(K key, Map map) { - return createObservable(() -> commandBuilder.hmset(key, map)); - } - - @Override - public Observable hset(K key, K field, V value) { - return createObservable(() -> commandBuilder.hset(key, field, value)); - } - - @Override - public Observable hsetnx(K key, K field, V value) { - return createObservable(() -> commandBuilder.hsetnx(key, field, value)); - } - - @Override - public Observable hvals(K key) { - return createDissolvingObservable(() -> commandBuilder.hvals(key)); - } - - @Override - public Observable hvals(ValueStreamingChannel channel, K key) { - return createObservable(() -> commandBuilder.hvals(channel, key)); - } - - @Override - public Observable incr(K key) { - return createObservable(() -> commandBuilder.incr(key)); - } - - @Override - public Observable incrby(K key, long amount) { - return createObservable(() -> commandBuilder.incrby(key, amount)); - } - - @Override - public Observable incrbyfloat(K key, double amount) { - return createObservable(() -> commandBuilder.incrbyfloat(key, amount)); - } - - @Override - public Observable info() { - return createObservable(commandBuilder::info); - } - - @Override - public Observable info(String section) { - return createObservable(() -> commandBuilder.info(section)); - } - - @Override - public Observable keys(K pattern) { - return createDissolvingObservable(() -> commandBuilder.keys(pattern)); - } - - @Override - public Observable keys(KeyStreamingChannel channel, K pattern) { - return createObservable(() -> commandBuilder.keys(channel, pattern)); - } - - @Override - public Observable lastsave() { - return createObservable(commandBuilder::lastsave); - } - - @Override - public Observable lindex(K key, long index) { - return 
createObservable(() -> commandBuilder.lindex(key, index)); - } - - @Override - public Observable linsert(K key, boolean before, V pivot, V value) { - return createObservable(() -> commandBuilder.linsert(key, before, pivot, value)); - } - - @Override - public Observable llen(K key) { - return createObservable(() -> commandBuilder.llen(key)); - } - - @Override - public Observable lpop(K key) { - return createObservable(() -> commandBuilder.lpop(key)); - } - - @Override - public Observable lpush(K key, V... values) { - return createObservable(() -> commandBuilder.lpush(key, values)); - } - - @Override - public Observable lpushx(K key, V value) { - return createObservable(() -> commandBuilder.lpushx(key, value)); - } - - @Override - public Observable lpushx(K key, V... values) { - return createObservable(() -> commandBuilder.lpushx(key, values)); - } - - @Override - public Observable lrange(K key, long start, long stop) { - return createDissolvingObservable(() -> commandBuilder.lrange(key, start, stop)); - } - - @Override - public Observable lrange(ValueStreamingChannel channel, K key, long start, long stop) { - return createObservable(() -> commandBuilder.lrange(channel, key, start, stop)); - } - - @Override - public Observable lrem(K key, long count, V value) { - return createObservable(() -> commandBuilder.lrem(key, count, value)); - } - - @Override - public Observable lset(K key, long index, V value) { - return createObservable(() -> commandBuilder.lset(key, index, value)); - } - - @Override - public Observable ltrim(K key, long start, long stop) { - return createObservable(() -> commandBuilder.ltrim(key, start, stop)); - } - - @Override - public Observable migrate(String host, int port, K key, int db, long timeout) { - return createObservable(() -> commandBuilder.migrate(host, port, key, db, timeout)); - } - - @Override - public Observable migrate(String host, int port, int db, long timeout, MigrateArgs migrateArgs) { - return createObservable(() -> commandBuilder.migrate(host, port, db, timeout, migrateArgs)); - } - - @Override - public Observable mget(K... keys) { - return createDissolvingObservable(() -> commandBuilder.mget(keys)); - } - - public Observable mget(Iterable keys) { - return createDissolvingObservable(() -> commandBuilder.mget(keys)); - } - - @Override - public Observable mget(ValueStreamingChannel channel, K... 
keys) { - return createObservable(() -> commandBuilder.mget(channel, keys)); - } - - public Observable mget(ValueStreamingChannel channel, Iterable keys) { - return createObservable(() -> commandBuilder.mget(channel, keys)); - } - - @Override - public Observable move(K key, int db) { - return createObservable(() -> commandBuilder.move(key, db)); - } - - @Override - public Observable multi() { - return createObservable(commandBuilder::multi); - } - - @Override - public Observable mset(Map map) { - return createObservable(() -> commandBuilder.mset(map)); - } - - @Override - public Observable msetnx(Map map) { - return createObservable(() -> commandBuilder.msetnx(map)); - } - - @Override - public Observable objectEncoding(K key) { - return createObservable(() -> commandBuilder.objectEncoding(key)); - } - - @Override - public Observable objectIdletime(K key) { - return createObservable(() -> commandBuilder.objectIdletime(key)); - } - - @Override - public Observable objectRefcount(K key) { - return createObservable(() -> commandBuilder.objectRefcount(key)); - } - - @Override - public Observable persist(K key) { - return createObservable(() -> commandBuilder.persist(key)); - } - - @Override - public Observable pexpire(K key, long milliseconds) { - return createObservable(() -> commandBuilder.pexpire(key, milliseconds)); - } - - @Override - public Observable pexpireat(K key, Date timestamp) { - return pexpireat(key, timestamp.getTime()); - } - - @Override - public Observable pexpireat(K key, long timestamp) { - return createObservable(() -> commandBuilder.pexpireat(key, timestamp)); - } - - @Override - public Observable ping() { - return createObservable(commandBuilder::ping); - } - - @Override - public Observable readOnly() { - return createObservable(commandBuilder::readOnly); - } - - @Override - public Observable readWrite() { - return createObservable(commandBuilder::readWrite); - } - - @Override - public Observable pttl(K key) { - return createObservable(() -> commandBuilder.pttl(key)); - } - - @Override - public Observable publish(K channel, V message) { - return createObservable(() -> commandBuilder.publish(channel, message)); - } - - @Override - public Observable pubsubChannels() { - return createDissolvingObservable(commandBuilder::pubsubChannels); - } - - @Override - public Observable pubsubChannels(K channel) { - return createDissolvingObservable(() -> commandBuilder.pubsubChannels(channel)); - } - - @Override - public Observable> pubsubNumsub(K... 
channels) { - return createObservable(() -> commandBuilder.pubsubNumsub(channels)); - } - - @Override - public Observable pubsubNumpat() { - return createObservable(commandBuilder::pubsubNumpat); - } - - @Override - public Observable quit() { - return createObservable(commandBuilder::quit); - } - - @Override - public Observable role() { - return createDissolvingObservable(commandBuilder::role); - } - - @Override - public Observable randomkey() { - return createObservable(commandBuilder::randomkey); - } - - @Override - public Observable rename(K key, K newKey) { - return createObservable(() -> commandBuilder.rename(key, newKey)); - } - - @Override - public Observable renamenx(K key, K newKey) { - return createObservable(() -> commandBuilder.renamenx(key, newKey)); - } - - @Override - public Observable restore(K key, long ttl, byte[] value) { - return createObservable(() -> commandBuilder.restore(key, ttl, value)); - } - - @Override - public Observable rpop(K key) { - return createObservable(() -> commandBuilder.rpop(key)); - } - - @Override - public Observable rpoplpush(K source, K destination) { - return createObservable(() -> commandBuilder.rpoplpush(source, destination)); - } - - @Override - public Observable rpush(K key, V... values) { - return createObservable(() -> commandBuilder.rpush(key, values)); - } - - @Override - public Observable rpushx(K key, V value) { - return createObservable(() -> commandBuilder.rpushx(key, value)); - } - - @Override - public Observable rpushx(K key, V... values) { - return createObservable(() -> commandBuilder.rpushx(key, values)); - } - - @Override - public Observable sadd(K key, V... members) { - return createObservable(() -> commandBuilder.sadd(key, members)); - } - - @Override - public Observable save() { - return createObservable(commandBuilder::save); - } - - @Override - public Observable scard(K key) { - return createObservable(() -> commandBuilder.scard(key)); - } - - @Override - public Observable scriptExists(String... digests) { - return createDissolvingObservable(() -> commandBuilder.scriptExists(digests)); - } - - @Override - public Observable scriptFlush() { - return createObservable(commandBuilder::scriptFlush); - } - - @Override - public Observable scriptKill() { - return createObservable(commandBuilder::scriptKill); - } - - @Override - public Observable scriptLoad(V script) { - return createObservable(() -> commandBuilder.scriptLoad(script)); - } - - @Override - public Observable sdiff(K... keys) { - return createDissolvingObservable(() -> commandBuilder.sdiff(keys)); - } - - @Override - public Observable sdiff(ValueStreamingChannel channel, K... keys) { - return createObservable(() -> commandBuilder.sdiff(channel, keys)); - } - - @Override - public Observable sdiffstore(K destination, K... 
keys) { - return createObservable(() -> commandBuilder.sdiffstore(destination, keys)); - } - - public Observable select(int db) { - return createObservable(() -> commandBuilder.select(db)); - } - - @Override - public Observable set(K key, V value) { - return createObservable(() -> commandBuilder.set(key, value)); - } - - @Override - public Observable set(K key, V value, SetArgs setArgs) { - return createObservable(() -> commandBuilder.set(key, value, setArgs)); - } - - @Override - public Observable setbit(K key, long offset, int value) { - return createObservable(() -> commandBuilder.setbit(key, offset, value)); - } - - @Override - public Observable setex(K key, long seconds, V value) { - return createObservable(() -> commandBuilder.setex(key, seconds, value)); - } - - @Override - public Observable psetex(K key, long milliseconds, V value) { - return createObservable(() -> commandBuilder.psetex(key, milliseconds, value)); - } - - @Override - public Observable setnx(K key, V value) { - return createObservable(() -> commandBuilder.setnx(key, value)); - } - - @Override - public Observable setrange(K key, long offset, V value) { - return createObservable(() -> commandBuilder.setrange(key, offset, value)); - } - - @Override - public Observable shutdown(boolean save) { - return getSuccessObservable(createObservable(() -> commandBuilder.shutdown(save))); - } - - @Override - public Observable sinter(K... keys) { - return createDissolvingObservable(() -> commandBuilder.sinter(keys)); - } - - @Override - public Observable sinter(ValueStreamingChannel channel, K... keys) { - return createObservable(() -> commandBuilder.sinter(channel, keys)); - } - - @Override - public Observable sinterstore(K destination, K... keys) { - return createObservable(() -> commandBuilder.sinterstore(destination, keys)); - } - - @Override - public Observable sismember(K key, V member) { - return createObservable(() -> commandBuilder.sismember(key, member)); - } - - @Override - public Observable smove(K source, K destination, V member) { - return createObservable(() -> commandBuilder.smove(source, destination, member)); - } - - @Override - public Observable slaveof(String host, int port) { - return createObservable(() -> commandBuilder.slaveof(host, port)); - } - - @Override - public Observable slaveofNoOne() { - return createObservable(() -> commandBuilder.slaveofNoOne()); - } - - @Override - public Observable slowlogGet() { - return createDissolvingObservable(() -> commandBuilder.slowlogGet()); - } - - @Override - public Observable slowlogGet(int count) { - return createDissolvingObservable(() -> commandBuilder.slowlogGet(count)); - } - - @Override - public Observable slowlogLen() { - return createObservable(() -> commandBuilder.slowlogLen()); - } - - @Override - public Observable slowlogReset() { - return createObservable(() -> commandBuilder.slowlogReset()); - } - - @Override - public Observable smembers(K key) { - return createDissolvingObservable(() -> commandBuilder.smembers(key)); - } - - @Override - public Observable smembers(ValueStreamingChannel channel, K key) { - return createObservable(() -> commandBuilder.smembers(channel, key)); - } - - @Override - public Observable sort(K key) { - return createDissolvingObservable(() -> commandBuilder.sort(key)); - } - - @Override - public Observable sort(ValueStreamingChannel channel, K key) { - return createObservable(() -> commandBuilder.sort(channel, key)); - } - - @Override - public Observable sort(K key, SortArgs sortArgs) { - return createDissolvingObservable(() -> 
commandBuilder.sort(key, sortArgs)); - } - - @Override - public Observable sort(ValueStreamingChannel channel, K key, SortArgs sortArgs) { - return createObservable(() -> commandBuilder.sort(channel, key, sortArgs)); - } - - @Override - public Observable sortStore(K key, SortArgs sortArgs, K destination) { - return createObservable(() -> commandBuilder.sortStore(key, sortArgs, destination)); - } - - @Override - public Observable spop(K key) { - return createObservable(() -> commandBuilder.spop(key)); - } - - @Override - public Observable spop(K key, long count) { - return createDissolvingObservable(() -> commandBuilder.spop(key, count)); - } - - @Override - public Observable srandmember(K key) { - return createObservable(() -> commandBuilder.srandmember(key)); - } - - @Override - public Observable srandmember(K key, long count) { - return createDissolvingObservable(() -> commandBuilder.srandmember(key, count)); - } - - @Override - public Observable srandmember(ValueStreamingChannel channel, K key, long count) { - return createObservable(() -> commandBuilder.srandmember(channel, key, count)); - } - - @Override - public Observable srem(K key, V... members) { - return createObservable(() -> commandBuilder.srem(key, members)); - } - - @Override - public Observable sunion(K... keys) { - return createDissolvingObservable(() -> commandBuilder.sunion(keys)); - } - - @Override - public Observable sunion(ValueStreamingChannel channel, K... keys) { - return createObservable(() -> commandBuilder.sunion(channel, keys)); - } - - @Override - public Observable sunionstore(K destination, K... keys) { - return createObservable(() -> commandBuilder.sunionstore(destination, keys)); - } - - @Override - public Observable sync() { - return createObservable(commandBuilder::sync); - } - - @Override - public Observable strlen(K key) { - return createObservable(() -> commandBuilder.strlen(key)); - } - - @Override - public Observable touch(K... keys) { - return createObservable(() -> commandBuilder.touch(keys)); - } - - public Observable touch(Iterable keys) { - return createObservable(() -> commandBuilder.touch(keys)); - } - - @Override - public Observable ttl(K key) { - return createObservable(() -> commandBuilder.ttl(key)); - } - - @Override - public Observable type(K key) { - return createObservable(() -> commandBuilder.type(key)); - } - - @Override - public Observable watch(K... keys) { - return createObservable(() -> commandBuilder.watch(keys)); - } - - @Override - public Observable unwatch() { - return createObservable(commandBuilder::unwatch); - } - - @Override - public Observable zadd(K key, double score, V member) { - return createObservable(() -> commandBuilder.zadd(key, null, score, member)); - } - - @Override - public Observable zadd(K key, Object... scoresAndValues) { - return createObservable(() -> commandBuilder.zadd(key, null, scoresAndValues)); - } - - @Override - public Observable zadd(K key, ScoredValue... scoredValues) { - return createObservable(() -> commandBuilder.zadd(key, null, scoredValues)); - } - - @Override - public Observable zadd(K key, ZAddArgs zAddArgs, double score, V member) { - return createObservable(() -> commandBuilder.zadd(key, zAddArgs, score, member)); - } - - @Override - public Observable zadd(K key, ZAddArgs zAddArgs, Object... scoresAndValues) { - return createObservable(() -> commandBuilder.zadd(key, zAddArgs, scoresAndValues)); - } - - @Override - public Observable zadd(K key, ZAddArgs zAddArgs, ScoredValue... 
scoredValues) { - return createObservable(() -> commandBuilder.zadd(key, zAddArgs, scoredValues)); - } - - @Override - public Observable zaddincr(K key, double score, V member) { - return createObservable(() -> commandBuilder.zaddincr(key, score, member)); - } - - @Override - public Observable zcard(K key) { - return createObservable(() -> commandBuilder.zcard(key)); - } - - @Override - public Observable zcount(K key, double min, double max) { - return createObservable(() -> commandBuilder.zcount(key, min, max)); - } - - @Override - public Observable zcount(K key, String min, String max) { - return createObservable(() -> commandBuilder.zcount(key, min, max)); - } - - @Override - public Observable zincrby(K key, double amount, K member) { - return createObservable(() -> commandBuilder.zincrby(key, amount, member)); - } - - @Override - public Observable zinterstore(K destination, K... keys) { - return createObservable(() -> commandBuilder.zinterstore(destination, keys)); - } - - @Override - public Observable zinterstore(K destination, ZStoreArgs storeArgs, K... keys) { - return createObservable(() -> commandBuilder.zinterstore(destination, storeArgs, keys)); - } - - @Override - public Observable zrange(K key, long start, long stop) { - return createDissolvingObservable(() -> commandBuilder.zrange(key, start, stop)); - } - - @Override - public Observable> zrangeWithScores(K key, long start, long stop) { - return createDissolvingObservable(() -> commandBuilder.zrangeWithScores(key, start, stop)); - } - - @Override - public Observable zrangebyscore(K key, double min, double max) { - return createDissolvingObservable(() -> commandBuilder.zrangebyscore(key, min, max)); - } - - @Override - public Observable zrangebyscore(K key, String min, String max) { - return createDissolvingObservable(() -> commandBuilder.zrangebyscore(key, min, max)); - } - - @Override - public Observable zrangebyscore(K key, double min, double max, long offset, long count) { - return createDissolvingObservable(() -> commandBuilder.zrangebyscore(key, min, max, offset, count)); - } - - @Override - public Observable zrangebyscore(K key, String min, String max, long offset, long count) { - return createDissolvingObservable(() -> commandBuilder.zrangebyscore(key, min, max, offset, count)); - } - - @Override - public Observable> zrangebyscoreWithScores(K key, double min, double max) { - return createDissolvingObservable(() -> commandBuilder.zrangebyscoreWithScores(key, min, max)); - } - - @Override - public Observable> zrangebyscoreWithScores(K key, String min, String max) { - return createDissolvingObservable(() -> commandBuilder.zrangebyscoreWithScores(key, min, max)); - } - - @Override - public Observable> zrangebyscoreWithScores(K key, double min, double max, long offset, long count) { - return createDissolvingObservable(() -> commandBuilder.zrangebyscoreWithScores(key, min, max, offset, count)); - } - - @Override - public Observable> zrangebyscoreWithScores(K key, String min, String max, long offset, long count) { - return createDissolvingObservable(() -> commandBuilder.zrangebyscoreWithScores(key, min, max, offset, count)); - } - - @Override - public Observable zrange(ValueStreamingChannel channel, K key, long start, long stop) { - return createObservable(() -> commandBuilder.zrange(channel, key, start, stop)); - } - - @Override - public Observable zrangeWithScores(ScoredValueStreamingChannel channel, K key, long start, long stop) { - return createObservable(() -> commandBuilder.zrangeWithScores(channel, key, start, stop)); 
- } - - @Override - public Observable zrangebyscore(ValueStreamingChannel channel, K key, double min, double max) { - return createObservable(() -> commandBuilder.zrangebyscore(channel, key, min, max)); - } - - @Override - public Observable zrangebyscore(ValueStreamingChannel channel, K key, String min, String max) { - return createObservable(() -> commandBuilder.zrangebyscore(channel, key, min, max)); - } - - @Override - public Observable zrangebyscore(ValueStreamingChannel channel, K key, double min, double max, long offset, - long count) { - return createObservable(() -> commandBuilder.zrangebyscore(channel, key, min, max, offset, count)); - } - - @Override - public Observable zrangebyscore(ValueStreamingChannel channel, K key, String min, String max, long offset, - long count) { - return createObservable(() -> commandBuilder.zrangebyscore(channel, key, min, max, offset, count)); - } - - @Override - public Observable zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double min, double max) { - return createObservable(() -> commandBuilder.zrangebyscoreWithScores(channel, key, min, max)); - } - - @Override - public Observable zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String min, String max) { - return createObservable(() -> commandBuilder.zrangebyscoreWithScores(channel, key, min, max)); - } - - @Override - public Observable zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double min, double max, - long offset, long count) { - return createObservable(() -> commandBuilder.zrangebyscoreWithScores(channel, key, min, max, offset, count)); - } - - @Override - public Observable zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String min, String max, - long offset, long count) { - return createObservable(() -> commandBuilder.zrangebyscoreWithScores(channel, key, min, max, offset, count)); - } - - @Override - public Observable zrank(K key, V member) { - return createObservable(() -> commandBuilder.zrank(key, member)); - } - - @Override - public Observable zrem(K key, V... 
members) { - return createObservable(() -> commandBuilder.zrem(key, members)); - } - - @Override - public Observable zremrangebyrank(K key, long start, long stop) { - return createObservable(() -> commandBuilder.zremrangebyrank(key, start, stop)); - } - - @Override - public Observable zremrangebyscore(K key, double min, double max) { - return createObservable(() -> commandBuilder.zremrangebyscore(key, min, max)); - } - - @Override - public Observable zremrangebyscore(K key, String min, String max) { - return createObservable(() -> commandBuilder.zremrangebyscore(key, min, max)); - } - - @Override - public Observable zrevrange(K key, long start, long stop) { - return createDissolvingObservable(() -> commandBuilder.zrevrange(key, start, stop)); - } - - @Override - public Observable> zrevrangeWithScores(K key, long start, long stop) { - return createDissolvingObservable(() -> commandBuilder.zrevrangeWithScores(key, start, stop)); - } - - @Override - public Observable zrevrangebyscore(K key, double max, double min) { - return createDissolvingObservable(() -> commandBuilder.zrevrangebyscore(key, max, min)); - } - - @Override - public Observable zrevrangebyscore(K key, String max, String min) { - return createDissolvingObservable(() -> commandBuilder.zrevrangebyscore(key, max, min)); - } - - @Override - public Observable zrevrangebyscore(K key, double max, double min, long offset, long count) { - return createDissolvingObservable(() -> commandBuilder.zrevrangebyscore(key, max, min, offset, count)); - } - - @Override - public Observable zrevrangebyscore(K key, String max, String min, long offset, long count) { - return createDissolvingObservable(() -> commandBuilder.zrevrangebyscore(key, max, min, offset, count)); - } - - @Override - public Observable> zrevrangebyscoreWithScores(K key, double max, double min) { - return createDissolvingObservable(() -> commandBuilder.zrevrangebyscoreWithScores(key, max, min)); - } - - @Override - public Observable> zrevrangebyscoreWithScores(K key, String max, String min) { - return createDissolvingObservable(() -> commandBuilder.zrevrangebyscoreWithScores(key, max, min)); - } - - @Override - public Observable> zrevrangebyscoreWithScores(K key, double max, double min, long offset, long count) { - return createDissolvingObservable(() -> commandBuilder.zrevrangebyscoreWithScores(key, max, min, offset, count)); - } - - @Override - public Observable> zrevrangebyscoreWithScores(K key, String max, String min, long offset, long count) { - return createDissolvingObservable(() -> commandBuilder.zrevrangebyscoreWithScores(key, max, min, offset, count)); - } - - @Override - public Observable zrevrange(ValueStreamingChannel channel, K key, long start, long stop) { - return createObservable(() -> commandBuilder.zrevrange(channel, key, start, stop)); - } - - @Override - public Observable zrevrangeWithScores(ScoredValueStreamingChannel channel, K key, long start, long stop) { - return createObservable(() -> commandBuilder.zrevrangeWithScores(channel, key, start, stop)); - } - - @Override - public Observable zrevrangebyscore(ValueStreamingChannel channel, K key, double max, double min) { - return createObservable(() -> commandBuilder.zrevrangebyscore(channel, key, max, min)); - } - - @Override - public Observable zrevrangebyscore(ValueStreamingChannel channel, K key, String max, String min) { - return createObservable(() -> commandBuilder.zrevrangebyscore(channel, key, max, min)); - } - - @Override - public Observable zrevrangebyscore(ValueStreamingChannel channel, K key, double 
max, double min, long offset, - long count) { - return createObservable(() -> commandBuilder.zrevrangebyscore(channel, key, max, min, offset, count)); - } - - @Override - public Observable zrevrangebyscore(ValueStreamingChannel channel, K key, String max, String min, long offset, - long count) { - return createObservable(() -> commandBuilder.zrevrangebyscore(channel, key, max, min, offset, count)); - } - - @Override - public Observable zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double max, double min) { - return createObservable(() -> commandBuilder.zrevrangebyscoreWithScores(channel, key, max, min)); - } - - @Override - public Observable zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String max, String min) { - return createObservable(() -> commandBuilder.zrevrangebyscoreWithScores(channel, key, max, min)); - } - - @Override - public Observable zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double max, double min, - long offset, long count) { - return createObservable(() -> commandBuilder.zrevrangebyscoreWithScores(channel, key, max, min, offset, count)); - } - - @Override - public Observable zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String max, String min, - long offset, long count) { - return createObservable(() -> commandBuilder.zrevrangebyscoreWithScores(channel, key, max, min, offset, count)); - } - - @Override - public Observable zrevrank(K key, V member) { - return createObservable(() -> commandBuilder.zrevrank(key, member)); - } - - @Override - public Observable zscore(K key, V member) { - return createObservable(() -> commandBuilder.zscore(key, member)); - } - - @Override - public Observable zunionstore(K destination, K... keys) { - return createObservable(() -> commandBuilder.zunionstore(destination, keys)); - } - - @Override - public Observable zunionstore(K destination, ZStoreArgs storeArgs, K... 
keys) { - return createObservable(() -> commandBuilder.zunionstore(destination, storeArgs, keys)); - } - - @Override - public Observable> scan() { - return createObservable(commandBuilder::scan); - } - - @Override - public Observable> scan(ScanArgs scanArgs) { - return createObservable(() -> commandBuilder.scan(scanArgs)); - } - - @Override - public Observable> scan(ScanCursor scanCursor, ScanArgs scanArgs) { - return createObservable(() -> commandBuilder.scan(scanCursor, scanArgs)); - } - - @Override - public Observable> scan(ScanCursor scanCursor) { - return createObservable(() -> commandBuilder.scan(scanCursor)); - } - - @Override - public Observable scan(KeyStreamingChannel channel) { - return createObservable(() -> commandBuilder.scanStreaming(channel)); - } - - @Override - public Observable scan(KeyStreamingChannel channel, ScanArgs scanArgs) { - return createObservable(() -> commandBuilder.scanStreaming(channel, scanArgs)); - } - - @Override - public Observable scan(KeyStreamingChannel channel, ScanCursor scanCursor, ScanArgs scanArgs) { - return createObservable(() -> commandBuilder.scanStreaming(channel, scanCursor, scanArgs)); - } - - @Override - public Observable scan(KeyStreamingChannel channel, ScanCursor scanCursor) { - return createObservable(() -> commandBuilder.scanStreaming(channel, scanCursor)); - } - - @Override - public Observable> sscan(K key) { - return createObservable(() -> commandBuilder.sscan(key)); - } - - @Override - public Observable> sscan(K key, ScanArgs scanArgs) { - return createObservable(() -> commandBuilder.sscan(key, scanArgs)); - } - - @Override - public Observable> sscan(K key, ScanCursor scanCursor, ScanArgs scanArgs) { - return createObservable(() -> commandBuilder.sscan(key, scanCursor, scanArgs)); - } - - @Override - public Observable> sscan(K key, ScanCursor scanCursor) { - return createObservable(() -> commandBuilder.sscan(key, scanCursor)); - } - - @Override - public Observable sscan(ValueStreamingChannel channel, K key) { - return createObservable(() -> commandBuilder.sscanStreaming(channel, key)); - } - - @Override - public Observable sscan(ValueStreamingChannel channel, K key, ScanArgs scanArgs) { - return createObservable(() -> commandBuilder.sscanStreaming(channel, key, scanArgs)); - } - - @Override - public Observable sscan(ValueStreamingChannel channel, K key, ScanCursor scanCursor, ScanArgs scanArgs) { - return createObservable(() -> commandBuilder.sscanStreaming(channel, key, scanCursor, scanArgs)); - } - - @Override - public Observable sscan(ValueStreamingChannel channel, K key, ScanCursor scanCursor) { - return createObservable(() -> commandBuilder.sscanStreaming(channel, key, scanCursor)); - } - - @Override - public Observable> hscan(K key) { - return createObservable(() -> commandBuilder.hscan(key)); - } - - @Override - public Observable> hscan(K key, ScanArgs scanArgs) { - return createObservable(() -> commandBuilder.hscan(key, scanArgs)); - } - - @Override - public Observable> hscan(K key, ScanCursor scanCursor, ScanArgs scanArgs) { - return createObservable(() -> commandBuilder.hscan(key, scanCursor, scanArgs)); - } - - @Override - public Observable> hscan(K key, ScanCursor scanCursor) { - return createObservable(() -> commandBuilder.hscan(key, scanCursor)); - } - - @Override - public Observable hscan(KeyValueStreamingChannel channel, K key) { - return createObservable(() -> commandBuilder.hscanStreaming(channel, key)); - } - - @Override - public Observable hscan(KeyValueStreamingChannel channel, K key, ScanArgs scanArgs) { - 
return createObservable(() -> commandBuilder.hscanStreaming(channel, key, scanArgs)); - } - - @Override - public Observable hscan(KeyValueStreamingChannel channel, K key, ScanCursor scanCursor, - ScanArgs scanArgs) { - return createObservable(() -> commandBuilder.hscanStreaming(channel, key, scanCursor, scanArgs)); - } - - @Override - public Observable hscan(KeyValueStreamingChannel channel, K key, ScanCursor scanCursor) { - return createObservable(() -> commandBuilder.hscanStreaming(channel, key, scanCursor)); - } - - @Override - public Observable> zscan(K key) { - return createObservable(() -> commandBuilder.zscan(key)); - } - - @Override - public Observable> zscan(K key, ScanArgs scanArgs) { - return createObservable(() -> commandBuilder.zscan(key, scanArgs)); - } - - @Override - public Observable> zscan(K key, ScanCursor scanCursor, ScanArgs scanArgs) { - return createObservable(() -> commandBuilder.zscan(key, scanCursor, scanArgs)); - } - - @Override - public Observable> zscan(K key, ScanCursor scanCursor) { - return createObservable(() -> commandBuilder.zscan(key, scanCursor)); - } - - @Override - public Observable zscan(ScoredValueStreamingChannel channel, K key) { - return createObservable(() -> commandBuilder.zscanStreaming(channel, key)); - } - - @Override - public Observable zscan(ScoredValueStreamingChannel channel, K key, ScanArgs scanArgs) { - return createObservable(() -> commandBuilder.zscanStreaming(channel, key, scanArgs)); - } - - @Override - public Observable zscan(ScoredValueStreamingChannel channel, K key, ScanCursor scanCursor, - ScanArgs scanArgs) { - return createObservable(() -> commandBuilder.zscanStreaming(channel, key, scanCursor, scanArgs)); - } - - @Override - public Observable zscan(ScoredValueStreamingChannel channel, K key, ScanCursor scanCursor) { - return createObservable(() -> commandBuilder.zscanStreaming(channel, key, scanCursor)); - } - - @Override - public String digest(V script) { - return LettuceStrings.digest(codec.encodeValue(script)); - } - - @Override - public Observable time() { - return createDissolvingObservable(commandBuilder::time); - } - - @Override - public Observable waitForReplication(int replicas, long timeout) { - return createObservable(() -> commandBuilder.wait(replicas, timeout)); - } - - @Override - public Observable pfadd(K key, V... values) { - return createObservable(() -> commandBuilder.pfadd(key, values)); - } - - public Observable pfadd(K key, V value, V... values) { - return createObservable(() -> commandBuilder.pfadd(key, value, values)); - } - - @Override - public Observable pfmerge(K destkey, K... sourcekeys) { - return createObservable(() -> commandBuilder.pfmerge(destkey, sourcekeys)); - } - - public Observable pfmerge(K destkey, K sourceKey, K... sourcekeys) { - return createObservable(() -> commandBuilder.pfmerge(destkey, sourceKey, sourcekeys)); - } - - @Override - public Observable pfcount(K... keys) { - return createObservable(() -> commandBuilder.pfcount(keys)); - } - - public Observable pfcount(K key, K... 
keys) { - return createObservable(() -> commandBuilder.pfcount(key, keys)); - } - - @Override - public Observable clusterBumpepoch() { - return createObservable(() -> commandBuilder.clusterBumpepoch()); - } - - @Override - public Observable clusterMeet(String ip, int port) { - return createObservable(() -> commandBuilder.clusterMeet(ip, port)); - } - - @Override - public Observable clusterForget(String nodeId) { - return createObservable(() -> commandBuilder.clusterForget(nodeId)); - } - - @Override - public Observable clusterAddSlots(int... slots) { - return createObservable(() -> commandBuilder.clusterAddslots(slots)); - } - - @Override - public Observable clusterDelSlots(int... slots) { - return createObservable(() -> commandBuilder.clusterDelslots(slots)); - } - - @Override - public Observable clusterInfo() { - return createObservable(commandBuilder::clusterInfo); - } - - @Override - public Observable clusterMyId() { - return createObservable(commandBuilder::clusterMyId); - } - - @Override - public Observable clusterNodes() { - return createObservable(commandBuilder::clusterNodes); - } - - @Override - public Observable clusterGetKeysInSlot(int slot, int count) { - return createDissolvingObservable(() -> commandBuilder.clusterGetKeysInSlot(slot, count)); - } - - @Override - public Observable clusterCountKeysInSlot(int slot) { - return createObservable(() -> commandBuilder.clusterCountKeysInSlot(slot)); - } - - @Override - public Observable clusterCountFailureReports(String nodeId) { - return createObservable(() -> commandBuilder.clusterCountFailureReports(nodeId)); - } - - @Override - public Observable clusterKeyslot(K key) { - return createObservable(() -> commandBuilder.clusterKeyslot(key)); - } - - @Override - public Observable clusterSaveconfig() { - return createObservable(() -> commandBuilder.clusterSaveconfig()); - } - - @Override - public Observable clusterSetConfigEpoch(long configEpoch) { - return createObservable(() -> commandBuilder.clusterSetConfigEpoch(configEpoch)); - } - - @Override - public Observable clusterSlots() { - return createDissolvingObservable(commandBuilder::clusterSlots); - } - - @Override - public Observable clusterSetSlotNode(int slot, String nodeId) { - return createObservable(() -> commandBuilder.clusterSetSlotNode(slot, nodeId)); - } - - @Override - public Observable clusterSetSlotStable(int slot) { - return createObservable(() -> commandBuilder.clusterSetSlotStable(slot)); - } - - @Override - public Observable clusterSetSlotMigrating(int slot, String nodeId) { - return createObservable(() -> commandBuilder.clusterSetSlotMigrating(slot, nodeId)); - } - - @Override - public Observable clusterSetSlotImporting(int slot, String nodeId) { - return createObservable(() -> commandBuilder.clusterSetSlotImporting(slot, nodeId)); - } - - @Override - public Observable clusterFailover(boolean force) { - return createObservable(() -> commandBuilder.clusterFailover(force)); - } - - @Override - public Observable clusterReset(boolean hard) { - return createObservable(() -> commandBuilder.clusterReset(hard)); - } - - @Override - public Observable asking() { - return createObservable(commandBuilder::asking); - } - - @Override - public Observable clusterReplicate(String nodeId) { - return createObservable(() -> commandBuilder.clusterReplicate(nodeId)); - } - - @Override - public Observable clusterFlushslots() { - return createObservable(commandBuilder::clusterFlushslots); - } - - @Override - public Observable clusterSlaves(String nodeId) { - return 
createDissolvingObservable(() -> commandBuilder.clusterSlaves(nodeId)); - } - - @Override - public Observable zlexcount(K key, String min, String max) { - return createObservable(() -> commandBuilder.zlexcount(key, min, max)); - } - - @Override - public Observable zremrangebylex(K key, String min, String max) { - return createObservable(() -> commandBuilder.zremrangebylex(key, min, max)); - } - - @Override - public Observable zrangebylex(K key, String min, String max) { - return createDissolvingObservable(() -> commandBuilder.zrangebylex(key, min, max)); - } - - @Override - public Observable zrangebylex(K key, String min, String max, long offset, long count) { - return createDissolvingObservable(() -> commandBuilder.zrangebylex(key, min, max, offset, count)); - } - - @Override - public Observable geoadd(K key, double longitude, double latitude, V member) { - return createObservable(() -> commandBuilder.geoadd(key, longitude, latitude, member)); - } - - @Override - public Observable geoadd(K key, Object... lngLatMember) { - return createDissolvingObservable(() -> commandBuilder.geoadd(key, lngLatMember)); - } - - @Override - public Observable geohash(K key, V... members) { - return createDissolvingObservable(() -> commandBuilder.geohash(key, members)); - } - - @Override - public Observable georadius(K key, double longitude, double latitude, double distance, GeoArgs.Unit unit) { - return createDissolvingObservable(() -> commandBuilder.georadius(key, longitude, latitude, distance, unit.name())); - } - - @Override - public Observable> georadius(K key, double longitude, double latitude, double distance, GeoArgs.Unit unit, - GeoArgs geoArgs) { - return createDissolvingObservable(() -> commandBuilder.georadius(key, longitude, latitude, distance, unit.name(), - geoArgs)); - } - - @Override - public Observable georadius(K key, double longitude, double latitude, double distance, Unit unit, - GeoRadiusStoreArgs geoRadiusStoreArgs) { - return createDissolvingObservable( - () -> commandBuilder.georadius(key, longitude, latitude, distance, unit.name(), geoRadiusStoreArgs)); - } - - @Override - public Observable georadiusbymember(K key, V member, double distance, GeoArgs.Unit unit) { - return createDissolvingObservable(() -> commandBuilder.georadiusbymember(key, member, distance, unit.name())); - } - - @Override - public Observable> georadiusbymember(K key, V member, double distance, GeoArgs.Unit unit, GeoArgs geoArgs) { - return createDissolvingObservable(() -> commandBuilder.georadiusbymember(key, member, distance, unit.name(), geoArgs)); - } - - @Override - public Observable georadiusbymember(K key, V member, double distance, Unit unit, - GeoRadiusStoreArgs geoRadiusStoreArgs) { - return createDissolvingObservable( - () -> commandBuilder.georadiusbymember(key, member, distance, unit.name(), geoRadiusStoreArgs)); - } - - @Override - public Observable geopos(K key, V... 
members) { - return createDissolvingObservable(() -> commandBuilder.geopos(key, members)); - } - - @Override - public Observable geodist(K key, V from, V to, GeoArgs.Unit unit) { - return createDissolvingObservable(() -> commandBuilder.geodist(key, from, to, unit)); - } - - @Override - public Observable dispatch(ProtocolKeyword type, CommandOutput output) { - - LettuceAssert.notNull(type, "Command type must not be null"); - LettuceAssert.notNull(output, "CommandOutput type must not be null"); - - return createDissolvingObservable(() -> new Command<>(type, output)); - } - - @Override - public Observable dispatch(ProtocolKeyword type, CommandOutput output, CommandArgs args) { - - LettuceAssert.notNull(type, "Command type must not be null"); - LettuceAssert.notNull(output, "CommandOutput type must not be null"); - LettuceAssert.notNull(args, "CommandArgs type must not be null"); - - return createDissolvingObservable(() -> new Command<>(type, output, args)); - } - - protected Observable createObservable(CommandType type, CommandOutput output, CommandArgs args) { - return createObservable(() -> new Command<>(type, output, args)); - } - - public Observable createObservable(Supplier> commandSupplier) { - return Observable.create(new ReactiveCommandDispatcher(commandSupplier, connection, false)); - } - - @SuppressWarnings("unchecked") - public R createDissolvingObservable(Supplier> commandSupplier) { - return (R) Observable.create(new ReactiveCommandDispatcher<>(commandSupplier, connection, true)); - } - - @SuppressWarnings("unchecked") - public R createDissolvingObservable(CommandType type, CommandOutput output, CommandArgs args) { - return (R) Observable.create(new ReactiveCommandDispatcher(() -> new Command<>(type, output, args), - connection, true)); - } - - /** - * Emits just {@link Success#Success} or the {@link Throwable} after the inner observable is completed. 
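The `dispatch(...)` overloads deleted a few lines above expose the raw command machinery on the reactive API. A minimal, hedged sketch of how such a custom dispatch is typically issued against the 4.x `com.lambdaworks.redis` API; the connection URI, codec choice, and key/value names are illustrative assumptions, not taken from this change:

```java
import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.api.StatefulRedisConnection;
import com.lambdaworks.redis.api.rx.RedisReactiveCommands;
import com.lambdaworks.redis.codec.Utf8StringCodec;
import com.lambdaworks.redis.output.StatusOutput;
import com.lambdaworks.redis.protocol.CommandArgs;
import com.lambdaworks.redis.protocol.CommandType;

import rx.Observable;

public class DispatchExample {

    public static void main(String[] args) {
        // Assumed local Redis instance; adjust the URI for your environment.
        RedisClient client = RedisClient.create("redis://localhost:6379");
        StatefulRedisConnection<String, String> connection = client.connect();
        RedisReactiveCommands<String, String> reactive = connection.reactive();

        Utf8StringCodec codec = new Utf8StringCodec();

        // Dispatch a plain SET as a custom command. The output type must match the
        // expected reply shape: SET replies with a simple status string.
        Observable<String> status = reactive.dispatch(CommandType.SET,
                new StatusOutput<>(codec),
                new CommandArgs<>(codec).addKey("custom:key").addValue("value"));

        System.out.println(status.toBlocking().single()); // expected: OK

        connection.close();
        client.shutdown();
    }
}
```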
- * - * @param observable inner observable - * @param used for type inference - * @return Success observable - */ - protected Observable getSuccessObservable(final Observable observable) { - return Observable.create(new Observable.OnSubscribe() { - @Override - public void call(Subscriber subscriber) { - - observable.subscribe(new Subscriber() { - @Override - public void onCompleted() { - subscriber.onNext(Success.Success); - subscriber.onCompleted(); - } - - @Override - public void onError(Throwable throwable) { - subscriber.onError(throwable); - } - - @Override - public void onNext(Object k) { - - } - }); - } - }); - } - - public void setTimeout(long timeout, TimeUnit unit) { - connection.setTimeout(timeout, unit); - } - - @Override - public void close() { - connection.close(); - } - - @Override - public boolean isOpen() { - return connection.isOpen(); - } - - @Override - public void reset() { - getConnection().reset(); - } - - public StatefulConnection getConnection() { - return connection; - } -} diff --git a/src/main/java/com/lambdaworks/redis/BaseRedisAsyncConnection.java b/src/main/java/com/lambdaworks/redis/BaseRedisAsyncConnection.java deleted file mode 100644 index c1062301a8..0000000000 --- a/src/main/java/com/lambdaworks/redis/BaseRedisAsyncConnection.java +++ /dev/null @@ -1,200 +0,0 @@ -package com.lambdaworks.redis; - -import java.io.Closeable; -import java.util.List; -import java.util.Map; - -import com.lambdaworks.redis.api.async.BaseRedisAsyncCommands; -import com.lambdaworks.redis.output.CommandOutput; -import com.lambdaworks.redis.protocol.CommandArgs; -import com.lambdaworks.redis.protocol.ProtocolKeyword; - -/** - * - * Basic asynchronous executed commands. - * - * @author Mark Paluch - * @param Key type. - * @param Value type. - * @since 3.0 - * @deprecated Use {@link BaseRedisAsyncCommands} - */ -@Deprecated -public interface BaseRedisAsyncConnection extends Closeable, BaseRedisAsyncCommands { - - /** - * Post a message to a channel. - * - * @param channel the channel type: key - * @param message the message type: value - * @return RedisFuture<Long> integer-reply the number of clients that received the message. - */ - RedisFuture publish(K channel, V message); - - /** - * Lists the currently *active channels*. - * - * @return RedisFuture<List<K>> array-reply a list of active channels, optionally matching the specified - * pattern. - */ - RedisFuture> pubsubChannels(); - - /** - * Lists the currently *active channels*. - * - * @param channel the key - * @return RedisFuture<List<K>> array-reply a list of active channels, optionally matching the specified - * pattern. - */ - RedisFuture> pubsubChannels(K channel); - - /** - * Returns the number of subscribers (not counting clients subscribed to patterns) for the specified channels. - * - * @param channels channel keys - * @return array-reply a list of channels and number of subscribers for every channel. - */ - RedisFuture> pubsubNumsub(K... channels); - - /** - * Returns the number of subscriptions to patterns. - * - * @return RedisFuture<Long> integer-reply the number of patterns all the clients are subscribed to. - */ - RedisFuture pubsubNumpat(); - - /** - * Echo the given string. - * - * @param msg the message type: value - * @return RedisFuture<V> bulk-string-reply - */ - RedisFuture echo(V msg); - - /** - * Return the role of the instance in the context of replication. 
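The `publish`/`pubsub*` methods documented in the deprecated `BaseRedisAsyncConnection` above carry over unchanged to the replacement `BaseRedisAsyncCommands` interface. A hedged usage sketch; the host, channel name, and message are assumptions for illustration:

```java
import java.util.Map;

import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.RedisFuture;
import com.lambdaworks.redis.api.StatefulRedisConnection;
import com.lambdaworks.redis.api.async.RedisAsyncCommands;

public class PublishExample {

    public static void main(String[] args) throws Exception {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        StatefulRedisConnection<String, String> connection = client.connect();
        RedisAsyncCommands<String, String> async = connection.async();

        // Post a message; the reply is the number of clients that received it.
        RedisFuture<Long> receivers = async.publish("news", "hello subscribers");

        // Ask for the subscriber count of the channel (pattern subscribers are not counted).
        RedisFuture<Map<String, Long>> counts = async.pubsubNumsub("news");

        System.out.println("received by " + receivers.get() + " clients, counts: " + counts.get());

        connection.close();
        client.shutdown();
    }
}
```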
- * - * @return RedisFuture<List<Object>> array-reply where the first element is one of master, slave, sentinel and - * the additional elements are role-specific. - */ - RedisFuture> role(); - - /** - * Ping the server. - * - * @return RedisFuture<String> simple-string-reply - */ - RedisFuture ping(); - - /** - * Close the connection. - * - * @return RedisFuture<String> simple-string-reply always OK. - */ - RedisFuture quit(); - - /** - * Create a SHA1 digest from a Lua script. - * - * @param script script content - * @return the SHA1 value - */ - String digest(V script); - - /** - * Discard all commands issued after MULTI. - * - * @return RedisFuture<String> simple-string-reply always {@code OK}. - */ - RedisFuture discard(); - - /** - * Execute all commands issued after MULTI. - * - * @return RedisFuture<List<Object>> array-reply each element being the reply to each of the commands in the - * atomic transaction. - * - * When using {@code WATCH}, {@code EXEC} can return a - */ - RedisFuture> exec(); - - /** - * Mark the start of a transaction block. - * - * @return RedisFuture<String> simple-string-reply always {@code OK}. - */ - RedisFuture multi(); - - /** - * Watch the given keys to determine execution of the MULTI/EXEC block. - * - * @param keys the key - * @return RedisFuture<String> simple-string-reply always {@code OK}. - */ - RedisFuture watch(K... keys); - - /** - * Forget about all watched keys. - * - * @return RedisFuture<String> simple-string-reply always {@code OK}. - */ - RedisFuture unwatch(); - - /** - * Wait for replication. - * - * @param replicas minimum number of replicas - * @param timeout timeout in milliseconds - * @return number of replicas - */ - RedisFuture waitForReplication(int replicas, long timeout); - - /** - * Dispatch a command to the Redis Server. Please note the command output type must fit to the command response. - * - * @param type the command, must not be {@literal null}. - * @param output the command output, must not be {@literal null}. - * @param response type - * @return the command response - */ - RedisFuture dispatch(ProtocolKeyword type, CommandOutput output); - - /** - * Dispatch a command to the Redis Server. Please note the command output type must fit to the command response. - * - * @param type the command, must not be {@literal null}. - * @param output the command output, must not be {@literal null}. - * @param args the command arguments, must not be {@literal null}. - * @param response type - * @return the command response - */ - RedisFuture dispatch(ProtocolKeyword type, CommandOutput output, CommandArgs args); - - /** - * Close the connection. The connection will become not usable anymore as soon as this method was called. - */ - @Override - void close(); - - /** - * - * @return true if the connection is open (connected and not closed). - */ - boolean isOpen(); - - /** - * Disable or enable auto-flush behavior. Default is {@literal true}. If autoFlushCommands is disabled, multiple commands - * can be issued without writing them actually to the transport. Commands are buffered until a {@link #flushCommands()} is - * issued. After calling {@link #flushCommands()} commands are sent to the transport and executed by Redis. - * - * @param autoFlush state of autoFlush. - */ - void setAutoFlushCommands(boolean autoFlush); - - /** - * Flush pending commands. This commands forces a flush on the channel and can be used to buffer ("pipeline") commands to - * achieve batching. No-op if channel is not connected. 
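The `setAutoFlushCommands`/`flushCommands` contract documented above is the basis for explicit command batching ("pipelining"): commands are buffered locally until a flush writes them to the transport in one go. A minimal sketch, assuming the async command object was obtained from a connected `StatefulRedisConnection`; the key names and batch size are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;

import com.lambdaworks.redis.LettuceFutures;
import com.lambdaworks.redis.RedisFuture;
import com.lambdaworks.redis.api.async.RedisAsyncCommands;

class PipeliningExample {

    // `async` is assumed to come from StatefulRedisConnection#async().
    static void writeBatch(RedisAsyncCommands<String, String> async) {
        List<RedisFuture<String>> futures = new ArrayList<>();

        async.setAutoFlushCommands(false);        // buffer commands instead of writing them immediately
        try {
            for (int i = 0; i < 100; i++) {
                futures.add(async.set("key:" + i, "value:" + i));
            }
            async.flushCommands();                // write the whole batch to the transport at once
            LettuceFutures.awaitAll(1, TimeUnit.MINUTES,
                    futures.toArray(new RedisFuture[futures.size()]));
        } finally {
            async.setAutoFlushCommands(true);     // restore the default auto-flush behavior
        }
    }
}
```

Disabling auto-flush trades per-command latency for throughput; the finally block matters because the flag applies to the whole connection, not just this batch.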
- */ - void flushCommands(); - -} diff --git a/src/main/java/com/lambdaworks/redis/BaseRedisConnection.java b/src/main/java/com/lambdaworks/redis/BaseRedisConnection.java deleted file mode 100644 index e8418d786a..0000000000 --- a/src/main/java/com/lambdaworks/redis/BaseRedisConnection.java +++ /dev/null @@ -1,195 +0,0 @@ -package com.lambdaworks.redis; - -import java.io.Closeable; -import java.util.List; -import java.util.Map; - -import com.lambdaworks.redis.output.CommandOutput; -import com.lambdaworks.redis.protocol.CommandArgs; -import com.lambdaworks.redis.protocol.ProtocolKeyword; - -/** - * - * Basic synchronous executed commands. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 3.0 - * @deprecated Use {@link com.lambdaworks.redis.api.sync.BaseRedisCommands} - */ -@Deprecated -public interface BaseRedisConnection extends Closeable { - - /** - * Post a message to a channel. - * - * @param channel the channel type: key - * @param message the message type: value - * @return Long integer-reply the number of clients that received the message. - */ - Long publish(K channel, V message); - - /** - * Lists the currently *active channels*. - * - * @return List<K> array-reply a list of active channels, optionally matching the specified pattern. - */ - List pubsubChannels(); - - /** - * Lists the currently *active channels*. - * - * @param channel the key - * @return List<K> array-reply a list of active channels, optionally matching the specified pattern. - */ - List pubsubChannels(K channel); - - /** - * Returns the number of subscribers (not counting clients subscribed to patterns) for the specified channels. - * - * @param channels channel keys - * @return array-reply a list of channels and number of subscribers for every channel. - */ - Map pubsubNumsub(K... channels); - - /** - * Returns the number of subscriptions to patterns. - * - * @return Long integer-reply the number of patterns all the clients are subscribed to. - */ - Long pubsubNumpat(); - - /** - * Echo the given string. - * - * @param msg the message type: value - * @return V bulk-string-reply - */ - V echo(V msg); - - /** - * Return the role of the instance in the context of replication. - * - * @return List<Object> array-reply where the first element is one of master, slave, sentinel and the additional - * elements are role-specific. - */ - List role(); - - /** - * Ping the server. - * - * @return String simple-string-reply - */ - String ping(); - - /** - * Switch connection to Read-Only mode when connecting to a cluster. - * - * @return String simple-string-reply. - */ - String readOnly(); - - /** - * Switch connection to Read-Write mode (default) when connecting to a cluster. - * - * @return String simple-string-reply. - */ - String readWrite(); - - /** - * Close the connection. - * - * @return String simple-string-reply always OK. - */ - String quit(); - - /** - * Create a SHA1 digest from a Lua script. - * - * @param script script content - * @return the SHA1 value - */ - String digest(V script); - - /** - * Discard all commands issued after MULTI. - * - * @return String simple-string-reply always {@code OK}. - */ - String discard(); - - /** - * Execute all commands issued after MULTI. - * - * @return List<Object> array-reply each element being the reply to each of the commands in the atomic transaction. - * - * When using {@code WATCH}, {@code EXEC} can return a - */ - List exec(); - - /** - * Mark the start of a transaction block. 
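The `MULTI`/`EXEC`/`DISCARD` methods documented above map directly onto Redis transactions. A hedged sketch using the synchronous API; the key names are assumptions, and in the 4.x API `exec()` returns a plain `List<Object>`:

```java
import java.util.List;

import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.api.StatefulRedisConnection;
import com.lambdaworks.redis.api.sync.RedisCommands;

public class TransactionExample {

    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        StatefulRedisConnection<String, String> connection = client.connect();
        RedisCommands<String, String> sync = connection.sync();

        sync.watch("balance");            // abort the transaction if this key changes concurrently
        sync.multi();                     // start queueing commands
        sync.set("balance", "10");
        sync.incr("withdrawals");

        List<Object> replies = sync.exec(); // one reply per queued command when the transaction committed
        System.out.println(replies);

        connection.close();
        client.shutdown();
    }
}
```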
- * - * @return String simple-string-reply always {@code OK}. - */ - String multi(); - - /** - * Watch the given keys to determine execution of the MULTI/EXEC block. - * - * @param keys the key - * @return String simple-string-reply always {@code OK}. - */ - String watch(K... keys); - - /** - * Forget about all watched keys. - * - * @return String simple-string-reply always {@code OK}. - */ - String unwatch(); - - /** - * Wait for replication. - * - * @param replicas minimum number of replicas - * @param timeout timeout in milliseconds - * @return number of replicas - */ - Long waitForReplication(int replicas, long timeout); - - /** - * Dispatch a command to the Redis Server. Please note the command output type must fit to the command response. - * - * @param type the command, must not be {@literal null}. - * @param output the command output, must not be {@literal null}. - * @param response type - * @return the command response - */ - T dispatch(ProtocolKeyword type, CommandOutput output); - - /** - * Dispatch a command to the Redis Server. Please note the command output type must fit to the command response. - * - * @param type the command, must not be {@literal null}. - * @param output the command output, must not be {@literal null}. - * @param args the command arguments, must not be {@literal null}. - * @param response type - * @return the command response - */ - T dispatch(ProtocolKeyword type, CommandOutput output, CommandArgs args); - - /** - * Close the connection. The connection will become not usable anymore as soon as this method was called. - */ - @Override - void close(); - - /** - * - * @return true if the connection is open (connected and not closed). - */ - boolean isOpen(); - -} diff --git a/src/main/java/com/lambdaworks/redis/BitFieldArgs.java b/src/main/java/com/lambdaworks/redis/BitFieldArgs.java deleted file mode 100644 index 7e562e7acc..0000000000 --- a/src/main/java/com/lambdaworks/redis/BitFieldArgs.java +++ /dev/null @@ -1,458 +0,0 @@ -package com.lambdaworks.redis; - -import java.util.ArrayList; -import java.util.Collections; -import java.util.List; - -import com.lambdaworks.redis.internal.LettuceAssert; -import com.lambdaworks.redis.protocol.CommandArgs; -import com.lambdaworks.redis.protocol.CommandType; -import com.lambdaworks.redis.protocol.LettuceCharsets; -import com.lambdaworks.redis.protocol.ProtocolKeyword; - -/** - * Arguments and types for the {@code BITFIELD} command. - * - * @author Mark Paluch - * @since 4.2 - */ -public class BitFieldArgs { - - private List commands; - - /** - * Creates a new {@link BitFieldArgs} instance. - */ - public BitFieldArgs() { - this(new ArrayList<>()); - } - - private BitFieldArgs(List commands) { - LettuceAssert.notNull(commands, "Commands must not be null"); - this.commands = commands; - } - - public static class Builder { - - /** - * Utility constructor. - */ - private Builder() { - - } - - /** - * Adds a new {@link Get} subcommand. - * - * @param bitFieldType the bit field type, must not be {@literal null}. - * @param offset bitfield offset - * @return a new {@link Get} subcommand for the given {@code bitFieldType} and {@code offset}. - */ - public static BitFieldArgs get(BitFieldType bitFieldType, int offset) { - return new BitFieldArgs().get(bitFieldType, offset); - } - - /** - * Adds a new {@link Set} subcommand. - * - * @param bitFieldType the bit field type, must not be {@literal null}. 
- * @param offset bitfield offset - * @param value the value - * @return a new {@link Set} subcommand for the given {@code bitFieldType}, {@code offset} and {@code value}. - */ - public static BitFieldArgs set(BitFieldType bitFieldType, int offset, long value) { - return new BitFieldArgs().set(bitFieldType, offset, value); - } - - /** - * Adds a new {@link IncrBy} subcommand. - * - * @param bitFieldType the bit field type, must not be {@literal null}. - * @param offset bitfield offset - * @param value the value - * @return a new {@link IncrBy} subcommand for the given {@code bitFieldType}, {@code offset} and {@code value}. - */ - public static BitFieldArgs incrBy(BitFieldType bitFieldType, int offset, long value) { - return new BitFieldArgs().incrBy(bitFieldType, offset, value); - } - - /** - * Adds a new {@link Overflow} subcommand. - * - * @param overflowType type of overflow, must not be {@literal null}. - * @return a new {@link Overflow} subcommand for the given {@code overflowType}. - */ - public static BitFieldArgs overflow(OverflowType overflowType) { - return new BitFieldArgs().overflow(overflowType); - } - } - - /** - * Creates a new signed {@link BitFieldType} for the given number of {@code bits}. - * - * Redis allows up to {@code 64} bits for unsigned integers. - * - * @param bits - * @return - */ - public static BitFieldType signed(int bits) { - return new BitFieldType(true, bits); - } - - /** - * Creates a new unsigned {@link BitFieldType} for the given number of {@code bits}. Redis allows up to {@code 63} bits for - * unsigned integers. - * - * @param bits - * @return - */ - public static BitFieldType unsigned(int bits) { - return new BitFieldType(false, bits); - } - - /** - * Adds a new {@link SubCommand} to the {@code BITFIELD} execution. - * - * @param subCommand - */ - private BitFieldArgs addSubCommand(SubCommand subCommand) { - LettuceAssert.notNull(subCommand, "SubCommand must not be null"); - commands.add(subCommand); - return this; - } - - /** - * Adds a new {@link Get} subcommand using offset {@code 0} and the field type of the previous command. - * - * @return a new {@link Get} subcommand for the given {@code bitFieldType} and {@code offset}. - * @throws IllegalStateException if no previous field type was found - */ - public BitFieldArgs get() { - return get(previousFieldType()); - } - - /** - * Adds a new {@link Get} subcommand using offset {@code 0}. - * - * @param bitFieldType the bit field type, must not be {@literal null}. - * @return a new {@link Get} subcommand for the given {@code bitFieldType} and {@code offset}. - */ - public BitFieldArgs get(BitFieldType bitFieldType) { - return get(bitFieldType, 0); - } - - /** - * Adds a new {@link Get} subcommand. - * - * @param bitFieldType the bit field type, must not be {@literal null}. - * @param offset bitfield offset - * @return a new {@link Get} subcommand for the given {@code bitFieldType} and {@code offset}. - */ - public BitFieldArgs get(BitFieldType bitFieldType, int offset) { - return addSubCommand(new Get(bitFieldType, offset)); - } - - /** - * Adds a new {@link Get} subcommand using the field type of the previous command. - * - * @param offset bitfield offset - * @return a new {@link Get} subcommand for the given {@code bitFieldType} and {@code offset}. 
- * @throws IllegalStateException if no previous field type was found - */ - public BitFieldArgs get(int offset) { - return get(previousFieldType(), offset); - } - - /** - * Adds a new {@link Set} subcommand using offset {@code 0} and the field type of the previous command. - * - * @param value the value - * @return a new {@link Set} subcommand for the given {@code bitFieldType}, {@code offset} and {@code value}. - * @throws IllegalStateException if no previous field type was found - */ - public BitFieldArgs set(long value) { - return set(previousFieldType(), value); - } - - /** - * Adds a new {@link Set} subcommand using offset {@code 0}. - * - * @param bitFieldType the bit field type, must not be {@literal null}. - * @param value the value - * @return a new {@link Set} subcommand for the given {@code bitFieldType}, {@code offset} and {@code value}. - */ - public BitFieldArgs set(BitFieldType bitFieldType, long value) { - return set(bitFieldType, 0, value); - } - - /** - * Adds a new {@link Set} subcommand using the field type of the previous command. - * - * @param offset bitfield offset - * @param value the value - * @return a new {@link Set} subcommand for the given {@code bitFieldType}, {@code offset} and {@code value}. - * @throws IllegalStateException if no previous field type was found - */ - public BitFieldArgs set(int offset, long value) { - return set(previousFieldType(), offset, value); - } - - /** - * Adds a new {@link Set} subcommand. - * - * @param bitFieldType the bit field type, must not be {@literal null}. - * @param offset bitfield offset - * @param value the value - * @return a new {@link Set} subcommand for the given {@code bitFieldType}, {@code offset} and {@code value}. - */ - public BitFieldArgs set(BitFieldType bitFieldType, int offset, long value) { - return addSubCommand(new Set(bitFieldType, offset, value)); - } - - /** - * Adds a new {@link IncrBy} subcommand using offset {@code 0} and the field type of the previous command. - * - * @param value the value - * @return a new {@link IncrBy} subcommand for the given {@code bitFieldType}, {@code offset} and {@code value}. - * @throws IllegalStateException if no previous field type was found - */ - public BitFieldArgs incrBy(long value) { - return incrBy(previousFieldType(), value); - } - - /** - * Adds a new {@link IncrBy} subcommand using offset {@code 0}. - * - * @param bitFieldType the bit field type, must not be {@literal null}. - * @param value the value - * @return a new {@link IncrBy} subcommand for the given {@code bitFieldType}, {@code offset} and {@code value}. - */ - public BitFieldArgs incrBy(BitFieldType bitFieldType, long value) { - return incrBy(bitFieldType, 0, value); - } - - /** - * Adds a new {@link IncrBy} subcommand using the field type of the previous command. - * - * @param offset bitfield offset - * @param value the value - * @return a new {@link IncrBy} subcommand for the given {@code bitFieldType}, {@code offset} and {@code value}. - * @throws IllegalStateException if no previous field type was found - */ - public BitFieldArgs incrBy(int offset, long value) { - return incrBy(previousFieldType(), offset, value); - } - - /** - * Adds a new {@link IncrBy} subcommand. - * - * @param bitFieldType the bit field type, must not be {@literal null}. - * @param offset bitfield offset - * @param value the value - * @return a new {@link IncrBy} subcommand for the given {@code bitFieldType}, {@code offset} and {@code value}. 
- */ - public BitFieldArgs incrBy(BitFieldType bitFieldType, int offset, long value) { - return addSubCommand(new IncrBy(bitFieldType, offset, value)); - } - - /** - * Adds a new {@link Overflow} subcommand. - * - * @param overflowType type of overflow, must not be {@literal null}. - * @return a new {@link Overflow} subcommand for the given {@code overflowType}. - */ - public BitFieldArgs overflow(OverflowType overflowType) { - return addSubCommand(new Overflow(overflowType)); - } - - private BitFieldType previousFieldType() { - - List list = new ArrayList<>(commands); - Collections.reverse(list); - - for (SubCommand command : list) { - - if (command instanceof Get) { - return ((Get) command).bitFieldType; - } - - if (command instanceof Set) { - return ((Set) command).bitFieldType; - } - - if (command instanceof IncrBy) { - return ((IncrBy) command).bitFieldType; - } - } - - throw new IllegalStateException("No previous field type found"); - } - - /** - * Representation for the {@code SET} subcommand for {@code BITFIELD}. - */ - private static class Set extends SubCommand { - - private final BitFieldType bitFieldType; - private final long offset; - private final long value; - - private Set(BitFieldType bitFieldType, int offset, long value) { - - LettuceAssert.notNull(bitFieldType, "BitFieldType must not be null"); - LettuceAssert.isTrue(offset > -1, "Offset must be greater or equal to 0"); - - this.offset = offset; - this.bitFieldType = bitFieldType; - this.value = value; - } - - @Override - void build(CommandArgs args) { - args.add(CommandType.SET).add(bitFieldType.asString()).add(offset).add(value); - } - } - - /** - * Representation for the {@code GET} subcommand for {@code BITFIELD}. - */ - private static class Get extends SubCommand { - - private final BitFieldType bitFieldType; - private final long offset; - - private Get(BitFieldType bitFieldType, int offset) { - - LettuceAssert.notNull(bitFieldType, "BitFieldType must not be null"); - LettuceAssert.isTrue(offset > -1, "Offset must be greater or equal to 0"); - - this.offset = offset; - this.bitFieldType = bitFieldType; - } - - @Override - void build(CommandArgs args) { - args.add(CommandType.GET).add(bitFieldType.asString()).add(offset); - } - } - - /** - * Representation for the {@code INCRBY} subcommand for {@code BITFIELD}. - */ - private static class IncrBy extends SubCommand { - - private final BitFieldType bitFieldType; - private final long offset; - private final long value; - - private IncrBy(BitFieldType bitFieldType, int offset, long value) { - - LettuceAssert.notNull(bitFieldType, "BitFieldType must not be null"); - LettuceAssert.isTrue(offset > -1, "Offset must be greater or equal to 0"); - - this.offset = offset; - this.bitFieldType = bitFieldType; - this.value = value; - } - - @Override - void build(CommandArgs args) { - args.add(CommandType.INCRBY).add(bitFieldType.asString()).add(offset).add(value); - } - } - - /** - * Representation for the {@code INCRBY} subcommand for {@code BITFIELD}. - */ - private static class Overflow extends SubCommand { - - private final OverflowType overflowType; - - private Overflow(OverflowType overflowType) { - - LettuceAssert.notNull(overflowType, "OverflowType must not be null"); - this.overflowType = overflowType; - } - - @Override - void build(CommandArgs args) { - args.add("OVERFLOW").add(overflowType); - } - } - - /** - * Base class for bitfield subcommands. 
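The builder methods above compose `BITFIELD` subcommands, and the short `get`/`set`/`incrBy` overloads reuse the field type of the preceding subcommand. A hedged sketch of putting them together (the key name is an assumption; `bitfield(...)` on the string commands is the consumer of these args in the 4.2-era API):

```java
import java.util.List;

import com.lambdaworks.redis.BitFieldArgs;
import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.api.StatefulRedisConnection;
import com.lambdaworks.redis.api.sync.RedisCommands;

public class BitFieldExample {

    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        StatefulRedisConnection<String, String> connection = client.connect();
        RedisCommands<String, String> commands = connection.sync();

        // SET u8 at offset 0 to 255, INCRBY the same field type by 10 with wrapping
        // overflow, then GET the field back. The short incrBy/get overloads pick up
        // the unsigned 8-bit type from the preceding SET.
        BitFieldArgs bitFieldArgs = BitFieldArgs.Builder
                .set(BitFieldArgs.unsigned(8), 0, 255)
                .overflow(BitFieldArgs.OverflowType.WRAP)
                .incrBy(0, 10)
                .get(0);

        List<Long> replies = commands.bitfield("bitfield:key", bitFieldArgs);
        System.out.println(replies); // e.g. [0, 9, 9] on a fresh key: SET returns the previous value

        connection.close();
        client.shutdown();
    }
}
```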
- */ - private abstract static class SubCommand { - abstract void build(CommandArgs args); - } - - void build(CommandArgs args) { - - for (SubCommand command : commands) { - command.build(args); - } - } - - /** - * Represents the overflow types for the {@code OVERFLOW} subcommand argument. - */ - public enum OverflowType implements ProtocolKeyword { - - WRAP, SAT, FAIL; - - public final byte[] bytes; - - private OverflowType() { - bytes = name().getBytes(LettuceCharsets.ASCII); - } - - @Override - public byte[] getBytes() { - return bytes; - } - } - - /** - * Represents a bit field type with details about signed/unsigned and the number of bits. - */ - public static class BitFieldType { - - private final boolean signed; - private final int bits; - - private BitFieldType(boolean signed, int bits) { - - LettuceAssert.isTrue(bits > 0, "Bits must be greater 0"); - - if (signed) { - LettuceAssert.isTrue(bits < 65, "Signed integers support only up to 64 bits"); - } else { - LettuceAssert.isTrue(bits < 64, "Unsigned integers support only up to 63 bits"); - } - - this.signed = signed; - this.bits = bits; - } - - /** - * - * @return {@literal true} if the bitfield type is signed. - */ - public boolean isSigned() { - return signed; - } - - /** - * - * @return number of bits. - */ - public int getBits() { - return bits; - } - - private String asString() { - return (signed ? "i" : "u") + bits; - } - } -} diff --git a/src/main/java/com/lambdaworks/redis/ChannelGroupListener.java b/src/main/java/com/lambdaworks/redis/ChannelGroupListener.java deleted file mode 100644 index 2bd4d42a4b..0000000000 --- a/src/main/java/com/lambdaworks/redis/ChannelGroupListener.java +++ /dev/null @@ -1,36 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis; - -import io.netty.channel.ChannelHandler; -import io.netty.channel.ChannelHandlerContext; -import io.netty.channel.ChannelInboundHandlerAdapter; -import io.netty.channel.group.ChannelGroup; - -/** - * A netty {@link ChannelHandler} responsible for monitoring the channel and adding/removing the channel from/to the - * ChannelGroup. - * - * @author Will Glozer - */ -@ChannelHandler.Sharable -class ChannelGroupListener extends ChannelInboundHandlerAdapter { - - private ChannelGroup channels; - - public ChannelGroupListener(ChannelGroup channels) { - this.channels = channels; - } - - @Override - public void channelActive(ChannelHandlerContext ctx) throws Exception { - channels.add(ctx.channel()); - super.channelActive(ctx); - } - - @Override - public void channelInactive(ChannelHandlerContext ctx) throws Exception { - channels.remove(ctx.channel()); - super.channelInactive(ctx); - } -} diff --git a/src/main/java/com/lambdaworks/redis/ClientOptions.java b/src/main/java/com/lambdaworks/redis/ClientOptions.java deleted file mode 100644 index 64039f1ac9..0000000000 --- a/src/main/java/com/lambdaworks/redis/ClientOptions.java +++ /dev/null @@ -1,320 +0,0 @@ -package com.lambdaworks.redis; - -import java.io.Serializable; - -import com.lambdaworks.redis.internal.LettuceAssert; - -/** - * Client Options to control the behavior of {@link RedisClient}. 
- * - * @author Mark Paluch - */ -public class ClientOptions implements Serializable { - - public static final boolean DEFAULT_PING_BEFORE_ACTIVATE_CONNECTION = false; - public static final boolean DEFAULT_AUTO_RECONNECT = true; - public static final boolean DEFAULT_CANCEL_CMD_RECONNECT_FAIL = false; - public static final boolean DEFAULT_SUSPEND_RECONNECT_PROTO_FAIL = false; - public static final int DEFAULT_REQUEST_QUEUE_SIZE = Integer.MAX_VALUE; - public static final DisconnectedBehavior DEFAULT_DISCONNECTED_BEHAVIOR = DisconnectedBehavior.DEFAULT; - public static final SocketOptions DEFAULT_SOCKET_OPTIONS = SocketOptions.create(); - public static final SslOptions DEFAULT_SSL_OPTIONS = SslOptions.create(); - - private final boolean pingBeforeActivateConnection; - private final boolean autoReconnect; - private final boolean cancelCommandsOnReconnectFailure; - private final boolean suspendReconnectOnProtocolFailure; - private final int requestQueueSize; - private final DisconnectedBehavior disconnectedBehavior; - private final SocketOptions socketOptions; - private final SslOptions sslOptions; - - protected ClientOptions(Builder builder) { - pingBeforeActivateConnection = builder.pingBeforeActivateConnection; - cancelCommandsOnReconnectFailure = builder.cancelCommandsOnReconnectFailure; - autoReconnect = builder.autoReconnect; - suspendReconnectOnProtocolFailure = builder.suspendReconnectOnProtocolFailure; - requestQueueSize = builder.requestQueueSize; - disconnectedBehavior = builder.disconnectedBehavior; - socketOptions = builder.socketOptions; - sslOptions = builder.sslOptions; - } - - protected ClientOptions(ClientOptions original) { - this.pingBeforeActivateConnection = original.isPingBeforeActivateConnection(); - this.autoReconnect = original.isAutoReconnect(); - this.cancelCommandsOnReconnectFailure = original.isCancelCommandsOnReconnectFailure(); - this.suspendReconnectOnProtocolFailure = original.isSuspendReconnectOnProtocolFailure(); - this.requestQueueSize = original.getRequestQueueSize(); - this.disconnectedBehavior = original.getDisconnectedBehavior(); - this.socketOptions = original.getSocketOptions(); - this.sslOptions = original.getSslOptions(); - } - - /** - * Create a copy of {@literal options} - * - * @param options the original - * @return A new instance of {@link ClientOptions} containing the values of {@literal options} - */ - public static ClientOptions copyOf(ClientOptions options) { - return new ClientOptions(options); - } - - /** - * Returns a new {@link ClientOptions.Builder} to construct {@link ClientOptions}. - * - * @return a new {@link ClientOptions.Builder} to construct {@link ClientOptions}. - */ - public static ClientOptions.Builder builder() { - return new ClientOptions.Builder(); - } - - /** - * Create a new instance of {@link ClientOptions} with default settings. - * - * @return a new instance of {@link ClientOptions} with default settings - */ - public static ClientOptions create() { - return builder().build(); - } - - /** - * Builder for {@link ClientOptions}. 
- */ - public static class Builder { - - private boolean pingBeforeActivateConnection = DEFAULT_PING_BEFORE_ACTIVATE_CONNECTION; - private boolean autoReconnect = DEFAULT_AUTO_RECONNECT; - private boolean cancelCommandsOnReconnectFailure = DEFAULT_CANCEL_CMD_RECONNECT_FAIL; - private boolean suspendReconnectOnProtocolFailure = DEFAULT_SUSPEND_RECONNECT_PROTO_FAIL; - private int requestQueueSize = DEFAULT_REQUEST_QUEUE_SIZE; - private DisconnectedBehavior disconnectedBehavior = DEFAULT_DISCONNECTED_BEHAVIOR; - private SocketOptions socketOptions = DEFAULT_SOCKET_OPTIONS; - private SslOptions sslOptions = DEFAULT_SSL_OPTIONS; - - /** - * @deprecated Use {@link ClientOptions#builder()} - */ - @Deprecated - public Builder() { - } - - /** - * Sets the {@literal PING} before activate connection flag. Defaults to {@literal false}. See - * {@link #DEFAULT_PING_BEFORE_ACTIVATE_CONNECTION}. - * - * @param pingBeforeActivateConnection true/false - * @return {@code this} - */ - public Builder pingBeforeActivateConnection(boolean pingBeforeActivateConnection) { - this.pingBeforeActivateConnection = pingBeforeActivateConnection; - return this; - } - - /** - * Enables or disables auto reconnection on connection loss. Defaults to {@literal true}. See - * {@link #DEFAULT_AUTO_RECONNECT}. - * - * @param autoReconnect true/false - * @return {@code this} - */ - public Builder autoReconnect(boolean autoReconnect) { - this.autoReconnect = autoReconnect; - return this; - } - - /** - * Suspends reconnect when reconnects run into protocol failures (SSL verification, PING before connect fails). Defaults - * to {@literal false}. See {@link #DEFAULT_SUSPEND_RECONNECT_PROTO_FAIL}. - * - * @param suspendReconnectOnProtocolFailure true/false - * @return {@code this} - */ - public Builder suspendReconnectOnProtocolFailure(boolean suspendReconnectOnProtocolFailure) { - this.suspendReconnectOnProtocolFailure = suspendReconnectOnProtocolFailure; - return this; - } - - /** - * Allows cancelling queued commands in case a reconnect fails.Defaults to {@literal false}. See - * {@link #DEFAULT_CANCEL_CMD_RECONNECT_FAIL}. - * - * @param cancelCommandsOnReconnectFailure true/false - * @return {@code this} - */ - public Builder cancelCommandsOnReconnectFailure(boolean cancelCommandsOnReconnectFailure) { - this.cancelCommandsOnReconnectFailure = cancelCommandsOnReconnectFailure; - return this; - } - - /** - * Set the per-connection request queue size. The command invocation will lead to a {@link RedisException} if the queue - * size is exceeded. Setting the {@code requestQueueSize} to a lower value will lead earlier to exceptions during - * overload or while the connection is in a disconnected state. A higher value means hitting the boundary will take - * longer to occur, but more requests will potentially be queued up and more heap space is used. Defaults to - * {@link Integer#MAX_VALUE}. See {@link #DEFAULT_REQUEST_QUEUE_SIZE}. - * - * @param requestQueueSize the queue size. - * @return {@code this} - */ - public Builder requestQueueSize(int requestQueueSize) { - this.requestQueueSize = requestQueueSize; - return this; - } - - /** - * Sets the behavior for command invocation when connections are in a disconnected state. Defaults to {@literal true}. - * See {@link #DEFAULT_DISCONNECTED_BEHAVIOR}. - * - * @param disconnectedBehavior must not be {@literal null}. 
- * @return {@code this} - */ - public Builder disconnectedBehavior(DisconnectedBehavior disconnectedBehavior) { - - LettuceAssert.notNull(disconnectedBehavior, "DisconnectedBehavior must not be null"); - this.disconnectedBehavior = disconnectedBehavior; - return this; - } - - /** - * Sets the low-level {@link SocketOptions} for the connections kept to Redis servers. See - * {@link #DEFAULT_SOCKET_OPTIONS}. - * - * @param socketOptions must not be {@literal null}. - * @return {@code this} - */ - public Builder socketOptions(SocketOptions socketOptions) { - - LettuceAssert.notNull(socketOptions, "SocketOptions must not be null"); - this.socketOptions = socketOptions; - return this; - } - - /** - * Sets the {@link SslOptions} for SSL connections kept to Redis servers. See {@link #DEFAULT_SSL_OPTIONS}. - * - * @param sslOptions must not be {@literal null}. - * @return {@code this} - */ - public Builder sslOptions(SslOptions sslOptions) { - - LettuceAssert.notNull(sslOptions, "SslOptions must not be null"); - this.sslOptions = sslOptions; - return this; - } - - /** - * Create a new instance of {@link ClientOptions}. - * - * @return new instance of {@link ClientOptions} - */ - public ClientOptions build() { - return new ClientOptions(this); - } - } - - /** - * Enables initial {@literal PING} barrier before any connection is usable. If {@literal true} (default is {@literal false} - * ), every connection and reconnect will issue a {@literal PING} command and awaits its response before the connection is - * activated and enabled for use. If the check fails, the connect/reconnect is treated as failure. - * - * @return {@literal true} if {@literal PING} barrier is enabled. - */ - public boolean isPingBeforeActivateConnection() { - return pingBeforeActivateConnection; - } - - /** - * Controls auto-reconnect behavior on connections. If auto-reconnect is {@literal true} (default), it is enabled. As soon - * as a connection gets closed/reset without the intention to close it, the client will try to reconnect and re-issue any - * queued commands. - * - * This flag has also the effect that disconnected connections will refuse commands and cancel these with an exception. - * - * @return {@literal true} if auto-reconnect is enabled. - */ - public boolean isAutoReconnect() { - return autoReconnect; - } - - /** - * If this flag is {@literal true} any queued commands will be canceled when a reconnect fails within the activation - * sequence. Default is {@literal false}. - * - * @return {@literal true} if commands should be cancelled on reconnect failures. - */ - public boolean isCancelCommandsOnReconnectFailure() { - return cancelCommandsOnReconnectFailure; - } - - /** - * If this flag is {@literal true} the reconnect will be suspended on protocol errors. Protocol errors are errors while SSL - * negotiation or when PING before connect fails. - * - * @return {@literal true} if reconnect will be suspended on protocol errors. - */ - public boolean isSuspendReconnectOnProtocolFailure() { - return suspendReconnectOnProtocolFailure; - } - - /** - * Request queue size for a connection. This value applies per connection. The command invocation will throw a - * {@link RedisException} if the queue size is exceeded and a new command is requested. Defaults to - * {@link Integer#MAX_VALUE}. - * - * @return the request queue size. - */ - public int getRequestQueueSize() { - return requestQueueSize; - } - - /** - * Behavior for command invocation when connections are in a disconnected state. 
Defaults to - * {@link DisconnectedBehavior#DEFAULT true}. See {@link #DEFAULT_DISCONNECTED_BEHAVIOR}. - * - * @return the behavior for command invocation when connections are in a disconnected state - */ - public DisconnectedBehavior getDisconnectedBehavior() { - return disconnectedBehavior; - } - - /** - * Returns the {@link SocketOptions}. - * - * @return the {@link SocketOptions}. - */ - public SocketOptions getSocketOptions() { - return socketOptions; - } - - /** - * Returns the {@link SslOptions}. - * - * @return the {@link SslOptions}. - */ - public SslOptions getSslOptions() { - return sslOptions; - } - - /** - * Behavior of connections in disconnected state. - */ - public enum DisconnectedBehavior { - - /** - * Accept commands when auto-reconnect is enabled, reject commands when auto-reconnect is disabled. - */ - DEFAULT, - - /** - * Accept commands in disconnected state. - */ - ACCEPT_COMMANDS, - - /** - * Reject commands in disconnected state. - */ - REJECT_COMMANDS, - } -} diff --git a/src/main/java/com/lambdaworks/redis/CloseEvents.java b/src/main/java/com/lambdaworks/redis/CloseEvents.java deleted file mode 100644 index 05476d44ec..0000000000 --- a/src/main/java/com/lambdaworks/redis/CloseEvents.java +++ /dev/null @@ -1,29 +0,0 @@ -package com.lambdaworks.redis; - -import java.util.Set; - -import io.netty.util.internal.ConcurrentSet; - -/** - * Close Events Facility. Can register/unregister CloseListener and fire a closed event to all registered listeners. - * - * @author Mark Paluch - * @since 3.0 - */ -class CloseEvents { - private Set listeners = new ConcurrentSet(); - - public void fireEventClosed(Object resource) { - for (CloseListener listener : listeners) { - listener.resourceClosed(resource); - } - } - - public void addListener(CloseListener listener) { - listeners.add(listener); - } - - interface CloseListener { - void resourceClosed(Object resource); - } -} diff --git a/src/main/java/com/lambdaworks/redis/ConnectionBuilder.java b/src/main/java/com/lambdaworks/redis/ConnectionBuilder.java deleted file mode 100644 index 13cbeafed2..0000000000 --- a/src/main/java/com/lambdaworks/redis/ConnectionBuilder.java +++ /dev/null @@ -1,198 +0,0 @@ -package com.lambdaworks.redis; - -import java.net.SocketAddress; -import java.util.ArrayList; -import java.util.List; -import java.util.concurrent.TimeUnit; -import java.util.function.Supplier; - -import com.lambdaworks.redis.internal.LettuceAssert; -import com.lambdaworks.redis.protocol.CommandEncoder; -import com.lambdaworks.redis.protocol.CommandHandler; -import com.lambdaworks.redis.protocol.ConnectionWatchdog; -import com.lambdaworks.redis.protocol.ReconnectionListener; -import com.lambdaworks.redis.resource.ClientResources; - -import io.netty.bootstrap.Bootstrap; -import io.netty.channel.ChannelHandler; -import io.netty.channel.group.ChannelGroup; -import io.netty.util.Timer; -import io.netty.util.concurrent.EventExecutorGroup; - -/** - * Connection builder for connections. This class is part of the internal API. 
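
Taken together, the ClientOptions pieces above are normally assembled through the builder and applied to a client before connecting. A minimal sketch, assuming the `com.lambdaworks.redis` 4.x API covered by this diff; host, port, and the queue size are placeholders:

```java
import com.lambdaworks.redis.ClientOptions;
import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.SocketOptions;

public class ClientOptionsExample {

    public static void main(String[] args) {

        // Sketch: conservative options - verify the connection with PING and
        // fail fast while disconnected instead of queueing commands without bound.
        ClientOptions options = ClientOptions.builder()
                .pingBeforeActivateConnection(true)
                .autoReconnect(true)
                .requestQueueSize(10_000)
                .disconnectedBehavior(ClientOptions.DisconnectedBehavior.REJECT_COMMANDS)
                .socketOptions(SocketOptions.create())
                .build();

        RedisClient client = RedisClient.create("redis://localhost:6379");
        client.setOptions(options);

        // ... connect and use the client, then release resources
        client.shutdown();
    }
}
```

Rejecting commands while disconnected pairs naturally with a bounded request queue size, since both cap the amount of work buffered during an outage.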
- * - * @author Mark Paluch - */ -public class ConnectionBuilder { - - private Supplier socketAddressSupplier; - private ConnectionEvents connectionEvents; - private RedisChannelHandler connection; - private CommandHandler commandHandler; - private ChannelGroup channelGroup; - private Timer timer; - private Bootstrap bootstrap; - private ClientOptions clientOptions; - private EventExecutorGroup workerPool; - private long timeout; - private TimeUnit timeUnit; - private ClientResources clientResources; - private char[] password; - private ReconnectionListener reconnectionListener = ReconnectionListener.NO_OP; - - public static ConnectionBuilder connectionBuilder() { - return new ConnectionBuilder(); - } - - protected List buildHandlers() { - - LettuceAssert.assertState(channelGroup != null, "ChannelGroup must be set"); - LettuceAssert.assertState(connectionEvents != null, "ConnectionEvents must be set"); - LettuceAssert.assertState(connection != null, "Connection must be set"); - LettuceAssert.assertState(clientResources != null, "ClientResources must be set"); - - List handlers = new ArrayList<>(); - - connection.setOptions(clientOptions); - - handlers.add(new ChannelGroupListener(channelGroup)); - handlers.add(new CommandEncoder()); - handlers.add(commandHandler); - handlers.add(connection); - handlers.add(new ConnectionEventTrigger(connectionEvents, connection, clientResources.eventBus())); - - if (clientOptions.isAutoReconnect()) { - handlers.add(createConnectionWatchdog()); - } - - return handlers; - } - - protected ConnectionWatchdog createConnectionWatchdog() { - - LettuceAssert.assertState(bootstrap != null, "Bootstrap must be set for autoReconnect=true"); - LettuceAssert.assertState(timer != null, "Timer must be set for autoReconnect=true"); - LettuceAssert.assertState(socketAddressSupplier != null, "SocketAddressSupplier must be set for autoReconnect=true"); - - ConnectionWatchdog watchdog = new ConnectionWatchdog(clientResources.reconnectDelay(), clientOptions, bootstrap, timer, - workerPool, socketAddressSupplier, reconnectionListener); - - watchdog.setListenOnChannelInactive(true); - return watchdog; - } - - public RedisChannelInitializer build() { - return new PlainChannelInitializer(clientOptions.isPingBeforeActivateConnection(), password(), buildHandlers(), - clientResources.eventBus()); - } - - public ConnectionBuilder socketAddressSupplier(Supplier socketAddressSupplier) { - this.socketAddressSupplier = socketAddressSupplier; - return this; - } - - public SocketAddress socketAddress() { - LettuceAssert.assertState(socketAddressSupplier != null, "SocketAddressSupplier must be set"); - return socketAddressSupplier.get(); - } - - public ConnectionBuilder timeout(long timeout, TimeUnit timeUnit) { - this.timeout = timeout; - this.timeUnit = timeUnit; - return this; - } - - public long getTimeout() { - return timeout; - } - - public TimeUnit getTimeUnit() { - return timeUnit; - } - - public ConnectionBuilder reconnectionListener(ReconnectionListener reconnectionListener) { - - LettuceAssert.notNull(reconnectionListener, "ReconnectionListener must not be null"); - this.reconnectionListener = reconnectionListener; - return this; - } - - public ConnectionBuilder clientOptions(ClientOptions clientOptions) { - this.clientOptions = clientOptions; - return this; - } - - public ConnectionBuilder workerPool(EventExecutorGroup workerPool) { - this.workerPool = workerPool; - return this; - } - - public ConnectionBuilder connectionEvents(ConnectionEvents connectionEvents) { - 
this.connectionEvents = connectionEvents; - return this; - } - - public ConnectionBuilder connection(RedisChannelHandler connection) { - this.connection = connection; - return this; - } - - public ConnectionBuilder channelGroup(ChannelGroup channelGroup) { - this.channelGroup = channelGroup; - return this; - } - - public ConnectionBuilder commandHandler(CommandHandler commandHandler) { - this.commandHandler = commandHandler; - return this; - } - - public ConnectionBuilder timer(Timer timer) { - this.timer = timer; - return this; - } - - public ConnectionBuilder bootstrap(Bootstrap bootstrap) { - this.bootstrap = bootstrap; - return this; - } - - public ConnectionBuilder clientResources(ClientResources clientResources) { - this.clientResources = clientResources; - return this; - } - - public ConnectionBuilder password(char[] password) { - this.password = password; - return this; - } - - public RedisChannelHandler connection() { - return connection; - } - - public CommandHandler commandHandler() { - return commandHandler; - } - - public Bootstrap bootstrap() { - return bootstrap; - } - - public ClientOptions clientOptions() { - return clientOptions; - } - - public ClientResources clientResources() { - return clientResources; - } - - public char[] password() { - return password; - } - - public EventExecutorGroup workerPool() { - return workerPool; - } -} diff --git a/src/main/java/com/lambdaworks/redis/ConnectionEventTrigger.java b/src/main/java/com/lambdaworks/redis/ConnectionEventTrigger.java deleted file mode 100644 index e2cd48469e..0000000000 --- a/src/main/java/com/lambdaworks/redis/ConnectionEventTrigger.java +++ /dev/null @@ -1,64 +0,0 @@ -package com.lambdaworks.redis; - -import java.net.SocketAddress; - -import com.lambdaworks.redis.event.EventBus; -import com.lambdaworks.redis.event.connection.ConnectionDeactivatedEvent; - -import io.netty.channel.Channel; -import io.netty.channel.ChannelHandler; -import io.netty.channel.ChannelHandlerContext; -import io.netty.channel.ChannelInboundHandlerAdapter; -import io.netty.channel.local.LocalAddress; - -/** - * @author Mark Paluch - * @since 3.0 - */ -@ChannelHandler.Sharable -class ConnectionEventTrigger extends ChannelInboundHandlerAdapter { - private final ConnectionEvents connectionEvents; - private final RedisChannelHandler connection; - private final EventBus eventBus; - - public ConnectionEventTrigger(ConnectionEvents connectionEvents, RedisChannelHandler connection, EventBus eventBus) { - this.connectionEvents = connectionEvents; - this.connection = connection; - this.eventBus = eventBus; - } - - @Override - public void channelActive(ChannelHandlerContext ctx) throws Exception { - connectionEvents.fireEventRedisConnected(connection); - super.channelActive(ctx); - } - - @Override - public void channelInactive(ChannelHandlerContext ctx) throws Exception { - connectionEvents.fireEventRedisDisconnected(connection); - eventBus.publish(new ConnectionDeactivatedEvent(local(ctx), remote(ctx))); - super.channelInactive(ctx); - } - - @Override - public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception { - connectionEvents.fireEventRedisExceptionCaught(connection, cause); - super.exceptionCaught(ctx, cause); - } - - static SocketAddress remote(ChannelHandlerContext ctx) { - if (ctx.channel() != null && ctx.channel().remoteAddress() != null) { - return ctx.channel().remoteAddress(); - } - return new LocalAddress("unknown"); - } - - static SocketAddress local(ChannelHandlerContext ctx) { - Channel channel = 
ctx.channel(); - if (channel != null && channel.localAddress() != null) { - return channel.localAddress(); - } - return LocalAddress.ANY; - } - -} diff --git a/src/main/java/com/lambdaworks/redis/ConnectionEvents.java b/src/main/java/com/lambdaworks/redis/ConnectionEvents.java deleted file mode 100644 index 7a470d5183..0000000000 --- a/src/main/java/com/lambdaworks/redis/ConnectionEvents.java +++ /dev/null @@ -1,81 +0,0 @@ -package com.lambdaworks.redis; - -import java.util.Set; -import java.util.concurrent.CompletableFuture; - -import io.netty.util.internal.ConcurrentSet; - -/** - * Close Events Facility. Can register/unregister CloseListener and fire a closed event to all registered listeners. - * - * @author Mark Paluch - * @since 3.0 - */ -public class ConnectionEvents { - private final Set listeners = new ConcurrentSet<>(); - - protected void fireEventRedisConnected(RedisChannelHandler connection) { - for (RedisConnectionStateListener listener : listeners) { - listener.onRedisConnected(connection); - } - } - - protected void fireEventRedisDisconnected(RedisChannelHandler connection) { - for (RedisConnectionStateListener listener : listeners) { - listener.onRedisDisconnected(connection); - } - } - - protected void fireEventRedisExceptionCaught(RedisChannelHandler connection, Throwable cause) { - for (RedisConnectionStateListener listener : listeners) { - listener.onRedisExceptionCaught(connection, cause); - } - } - - public void addListener(RedisConnectionStateListener listener) { - listeners.add(listener); - } - - public void removeListener(RedisConnectionStateListener listener) { - listeners.remove(listener); - } - - /** - * Internal event before a channel is closed. - */ - public static class PrepareClose { - private CompletableFuture prepareCloseFuture = new CompletableFuture<>(); - - public CompletableFuture getPrepareCloseFuture() { - return prepareCloseFuture; - } - } - - /** - * Internal event when a channel is closed. - */ - public static class Close { - } - - /** - * Internal event when a channel is activated. - */ - public static class Activated { - } - - /** - * Internal event when a reconnect is initiated. - */ - public static class Reconnect { - - private final int attempt; - - public Reconnect(int attempt) { - this.attempt = attempt; - } - - public int getAttempt() { - return attempt; - } - } -} diff --git a/src/main/java/com/lambdaworks/redis/ConnectionId.java b/src/main/java/com/lambdaworks/redis/ConnectionId.java deleted file mode 100644 index c69da6b514..0000000000 --- a/src/main/java/com/lambdaworks/redis/ConnectionId.java +++ /dev/null @@ -1,26 +0,0 @@ -package com.lambdaworks.redis; - -import java.net.SocketAddress; - -/** - * Connection identifier. A connection identifier consists of the {@link #localAddress()} and the {@link #remoteAddress()}. - * - * @author Mark Paluch - * @since 3.4 - */ -public interface ConnectionId { - - /** - * Returns the local address. - * - * @return the local address - */ - SocketAddress localAddress(); - - /** - * Returns the remote address. - * - * @return the remote address - */ - SocketAddress remoteAddress(); -} diff --git a/src/main/java/com/lambdaworks/redis/ConnectionPoint.java b/src/main/java/com/lambdaworks/redis/ConnectionPoint.java deleted file mode 100644 index 3d033d181f..0000000000 --- a/src/main/java/com/lambdaworks/redis/ConnectionPoint.java +++ /dev/null @@ -1,30 +0,0 @@ -package com.lambdaworks.redis; - -/** - * Interface for a connection point described with a host and port or socket. 
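
ConnectionEvents above fans out to registered RedisConnectionStateListeners; application code usually registers such a listener through the client rather than touching this class directly. A sketch under that assumption (the listener body is illustrative only):

```java
import com.lambdaworks.redis.RedisChannelHandler;
import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.RedisConnectionStateListener;

public class ConnectionListenerExample {

    public static void main(String[] args) {

        RedisClient client = RedisClient.create("redis://localhost:6379");

        // The listener methods are invoked via ConnectionEvents when the channel
        // becomes active/inactive or an exception is caught.
        client.addListener(new RedisConnectionStateListener() {

            @Override
            public void onRedisConnected(RedisChannelHandler<?, ?> connection) {
                System.out.println("connected: " + connection);
            }

            @Override
            public void onRedisDisconnected(RedisChannelHandler<?, ?> connection) {
                System.out.println("disconnected: " + connection);
            }

            @Override
            public void onRedisExceptionCaught(RedisChannelHandler<?, ?> connection, Throwable cause) {
                cause.printStackTrace();
            }
        });

        client.connect().sync().ping();
        client.shutdown();
    }
}
```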
- * - * @author Mark Paluch - */ -public interface ConnectionPoint { - - /** - * Returns the host that should represent the hostname or IPv4/IPv6 literal. - * - * @return the hostname/IP address - */ - String getHost(); - - /** - * Get the current port number. - * - * @return the port number - */ - int getPort(); - - /** - * Get the socket path. - * - * @return path to a Unix Domain Socket - */ - String getSocket(); -} diff --git a/src/main/java/com/lambdaworks/redis/Connections.java b/src/main/java/com/lambdaworks/redis/Connections.java deleted file mode 100644 index 20d95eb58b..0000000000 --- a/src/main/java/com/lambdaworks/redis/Connections.java +++ /dev/null @@ -1,112 +0,0 @@ -package com.lambdaworks.redis; - -import java.util.concurrent.ExecutionException; - -import com.lambdaworks.redis.internal.LettuceAssert; - -/** - * Utility for checking a connection's state. - * - * @author Mark Paluch - * @since 3.0 - */ -class Connections { - - /** - * Utility constructor. - */ - private Connections() { - - } - - /** - * - * @param connection must be either a {@link com.lambdaworks.redis.RedisAsyncConnection} or - * {@link com.lambdaworks.redis.RedisConnection} and must not be {@literal null} - * @return true if the connection is valid (ping works) - * @throws java.lang.NullPointerException if connection is null - * @throws java.lang.IllegalArgumentException if connection is not a supported type - */ - public static final boolean isValid(Object connection) { - - LettuceAssert.notNull(connection, "Connection must not be null"); - if (connection instanceof RedisAsyncConnection) { - RedisAsyncConnection redisAsyncConnection = (RedisAsyncConnection) connection; - try { - redisAsyncConnection.ping().get(); - return true; - } catch (ExecutionException | RuntimeException e) { - return false; - } catch (InterruptedException e) { - Thread.currentThread().interrupt(); - throw new RedisCommandInterruptedException(e); - } - } - - if (connection instanceof RedisConnection) { - RedisConnection redisConnection = (RedisConnection) connection; - try { - redisConnection.ping(); - return true; - } catch (RuntimeException e) { - return false; - } - } - - throw new IllegalArgumentException("Connection class " + connection.getClass() + " not supported"); - } - - /** - * - * @param connection must be either a {@link com.lambdaworks.redis.RedisAsyncConnection} or - * {@link com.lambdaworks.redis.RedisConnection} and must not be {@literal null} - * @return true if the connection is open. - * @throws java.lang.NullPointerException if connection is null - * @throws java.lang.IllegalArgumentException if connection is not a supported type - */ - public static final boolean isOpen(Object connection) { - - LettuceAssert.notNull(connection, "Connection must not be null"); - if (connection instanceof RedisAsyncConnection) { - RedisAsyncConnection redisAsyncConnection = (RedisAsyncConnection) connection; - return redisAsyncConnection.isOpen(); - } - - if (connection instanceof RedisConnection) { - RedisConnection redisConnection = (RedisConnection) connection; - return redisConnection.isOpen(); - } - - throw new IllegalArgumentException("Connection class " + connection.getClass() + " not supported"); - } - - /** - * Closes silently a connection. 
- * - * @param connection must be either a {@link com.lambdaworks.redis.RedisAsyncConnection} or - * {@link com.lambdaworks.redis.RedisConnection} and must not be {@literal null} - * @throws java.lang.NullPointerException if connection is null - * @throws java.lang.IllegalArgumentException if connection is not a supported type - */ - public static void close(Object connection) { - - LettuceAssert.notNull(connection, "Connection must not be null"); - try { - if (connection instanceof RedisAsyncConnection) { - RedisAsyncConnection redisAsyncConnection = (RedisAsyncConnection) connection; - redisAsyncConnection.close(); - return; - } - - if (connection instanceof RedisConnection) { - RedisConnection redisConnection = (RedisConnection) connection; - redisConnection.close(); - return; - } - } catch (RuntimeException e) { - return; - } - throw new IllegalArgumentException("Connection class " + connection.getClass() + " not supported"); - - } -} diff --git a/src/main/java/com/lambdaworks/redis/EpollProvider.java b/src/main/java/com/lambdaworks/redis/EpollProvider.java deleted file mode 100644 index 95e1bf6447..0000000000 --- a/src/main/java/com/lambdaworks/redis/EpollProvider.java +++ /dev/null @@ -1,92 +0,0 @@ -package com.lambdaworks.redis; - -import java.lang.reflect.Constructor; -import java.net.SocketAddress; -import java.util.concurrent.Callable; -import java.util.concurrent.ThreadFactory; - -import com.lambdaworks.redis.internal.LettuceAssert; - -import com.lambdaworks.redis.internal.LettuceClassUtils; -import io.netty.channel.Channel; -import io.netty.channel.EventLoopGroup; -import io.netty.util.internal.logging.InternalLogger; -import io.netty.util.internal.logging.InternalLoggerFactory; - -/** - * Wraps and provides Epoll classes. This is to protect the user from {@link ClassNotFoundException}'s caused by the absence of - * the {@literal netty-transport-native-epoll} library during runtime. Internal API. - * - * @author Mark Paluch - */ -public class EpollProvider { - - protected static final InternalLogger logger = InternalLoggerFactory.getInstance(EpollProvider.class); - - public final static Class epollEventLoopGroupClass; - public final static Class epollDomainSocketChannelClass; - public final static Class domainSocketAddressClass; - static { - - epollEventLoopGroupClass = getClass("io.netty.channel.epoll.EpollEventLoopGroup"); - epollDomainSocketChannelClass = getClass("io.netty.channel.epoll.EpollDomainSocketChannel"); - domainSocketAddressClass = getClass("io.netty.channel.unix.DomainSocketAddress"); - if (epollDomainSocketChannelClass == null || epollEventLoopGroupClass == null) { - logger.debug("Starting without optional Epoll library"); - } - } - - /** - * Try to load class {@literal className}. - * - * @param className - * @param Expected return type for casting. - * @return instance of {@literal className} or null - */ - private static Class getClass(String className) { - try { - return (Class) LettuceClassUtils.forName(className); - } catch (ClassNotFoundException e) { - logger.debug("Cannot load class " + className, e); - } - return null; - } - - /** - * Check whether the Epoll library is available on the class path. 
- * - * @throws IllegalStateException if the {@literal netty-transport-native-epoll} library is not available - * - */ - static void checkForEpollLibrary() { - - LettuceAssert.assertState(domainSocketAddressClass != null && epollDomainSocketChannelClass != null, - "Cannot connect using sockets without the optional netty-transport-native-epoll library on the class path"); - } - - static SocketAddress newSocketAddress(String socketPath) { - return get(() -> { - Constructor constructor = domainSocketAddressClass.getConstructor(String.class); - return constructor.newInstance(socketPath); - }); - } - - public static EventLoopGroup newEventLoopGroup(int nThreads, ThreadFactory threadFactory) { - - try { - Constructor constructor = epollEventLoopGroupClass - .getConstructor(Integer.TYPE, ThreadFactory.class); - return constructor.newInstance(nThreads, threadFactory); - } catch (Exception e) { - throw new IllegalStateException(e); - } - } - - private static V get(Callable supplier) { - try { - return supplier.call(); - } catch (Exception e) { - throw new IllegalStateException(e); - } - } -} diff --git a/src/main/java/com/lambdaworks/redis/FutureSyncInvocationHandler.java b/src/main/java/com/lambdaworks/redis/FutureSyncInvocationHandler.java deleted file mode 100644 index db0091d129..0000000000 --- a/src/main/java/com/lambdaworks/redis/FutureSyncInvocationHandler.java +++ /dev/null @@ -1,57 +0,0 @@ -package com.lambdaworks.redis; - -import java.lang.reflect.InvocationTargetException; -import java.lang.reflect.Method; - -import com.lambdaworks.redis.api.StatefulConnection; -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.internal.AbstractInvocationHandler; - -/** - * Invocation-handler to synchronize API calls which use Futures as backend. This class leverages the need to implement a full - * sync class which just delegates every request. - * - * @param Key type. - * @param Value type. 
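
EpollProvider exists so that Unix domain socket connections work when the optional netty-transport-native-epoll artifact is on the class path. A hedged sketch of what that enables at the API level (the socket path is a placeholder):

```java
import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.RedisURI;
import com.lambdaworks.redis.api.sync.RedisCommands;

public class UnixSocketExample {

    public static void main(String[] args) {

        // Requires netty-transport-native-epoll at runtime; otherwise
        // EpollProvider.checkForEpollLibrary() raises an IllegalStateException.
        RedisURI uri = RedisURI.Builder.socket("/var/run/redis/redis.sock").build();

        RedisClient client = RedisClient.create(uri);
        RedisCommands<String, String> commands = client.connect().sync();
        System.out.println(commands.ping());

        client.shutdown();
    }
}
```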
- * @author Mark Paluch - * @since 3.0 - */ -class FutureSyncInvocationHandler extends AbstractInvocationHandler { - - private final StatefulConnection connection; - private final Object asyncApi; - private final MethodTranslator translator; - - public FutureSyncInvocationHandler(StatefulConnection connection, Object asyncApi, Class[] interfaces) { - this.connection = connection; - this.asyncApi = asyncApi; - this.translator = new MethodTranslator(asyncApi.getClass(), interfaces); - } - - @Override - @SuppressWarnings("unchecked") - protected Object handleInvocation(Object proxy, Method method, Object[] args) throws Throwable { - - try { - - Method targetMethod = this.translator.get(method); - Object result = targetMethod.invoke(asyncApi, args); - - if (result instanceof RedisFuture) { - RedisFuture command = (RedisFuture) result; - if (!method.getName().equals("exec") && !method.getName().equals("multi")) { - if (connection instanceof StatefulRedisConnection && ((StatefulRedisConnection) connection).isMulti()) { - return null; - } - } - - LettuceFutures.awaitOrCancel(command, connection.getTimeout(), connection.getTimeoutUnit()); - return command.get(); - } - return result; - } catch (InvocationTargetException e) { - throw e.getTargetException(); - } - - } -} diff --git a/src/main/java/com/lambdaworks/redis/GeoArgs.java b/src/main/java/com/lambdaworks/redis/GeoArgs.java deleted file mode 100644 index e443e92c5f..0000000000 --- a/src/main/java/com/lambdaworks/redis/GeoArgs.java +++ /dev/null @@ -1,181 +0,0 @@ -package com.lambdaworks.redis; - -import com.lambdaworks.redis.internal.LettuceAssert; -import com.lambdaworks.redis.protocol.CommandArgs; -import com.lambdaworks.redis.protocol.CommandKeyword; - -/** - * Args for {@literal GEORADIUS} and {@literal GEORADIUSBYMEMBER} commands. - * - * @author Mark Paluch - */ -public class GeoArgs { - - private boolean withdistance; - private boolean withcoordinates; - private boolean withhash; - private Long count; - private Sort sort = Sort.none; - - /** - * Request distance for results. - * - * @return {@code this} - */ - public GeoArgs withDistance() { - withdistance = true; - return this; - } - - /** - * Request coordinates for results. - * - * @return {@code this} - */ - public GeoArgs withCoordinates() { - withcoordinates = true; - return this; - } - - /** - * Request geohash for results. - * - * @return {@code this} - */ - public GeoArgs withHash() { - withhash = true; - return this; - } - - /** - * Limit results to {@code count} entries. - * - * @param count number greater 0 - * @return {@code this} - */ - public GeoArgs withCount(long count) { - LettuceAssert.isTrue(count > 0, "Count must be greater 0"); - this.count = count; - return this; - } - - /** - * - * @return {@literal true} if distance is requested. - */ - public boolean isWithDistance() { - return withdistance; - } - - /** - * - * @return {@literal true} if coordinates are requested. - */ - public boolean isWithCoordinates() { - return withcoordinates; - } - - /** - * - * @return {@literal true} if geohash is requested. - */ - public boolean isWithHash() { - return withhash; - } - - /** - * Sort results ascending. - * - * @return {@code this} - */ - public GeoArgs asc() { - return sort(Sort.asc); - } - - /** - * Sort results descending. - * - * @return {@code this} - */ - public GeoArgs desc() { - return sort(Sort.desc); - } - - /** - * Sort results. 
- * - * @param sort sort order, must not be {@literal null} - * @return {@code this} - */ - public GeoArgs sort(Sort sort) { - LettuceAssert.notNull(sort, "Sort must not be null"); - - this.sort = sort; - return this; - } - - /** - * Sort order. - */ - public enum Sort { - /** - * ascending. - */ - asc, - - /** - * descending. - */ - desc, - - /** - * no sort order. - */ - none; - } - - /** - * Supported geo unit. - */ - public enum Unit { - /** - * meter. - */ - m, - /** - * kilometer. - */ - km, - /** - * feet. - */ - ft, - /** - * mile. - */ - mi; - } - - public void build(CommandArgs args) { - if (withdistance) { - args.add("withdist"); - } - - if (withhash) { - args.add("withhash"); - } - - if (withcoordinates) { - args.add("withcoord"); - } - - if (sort != null && sort != Sort.none) { - args.add(sort.name()); - } - - if (count != null) { - args.add(CommandKeyword.COUNT).add(count); - } - - } -} diff --git a/src/main/java/com/lambdaworks/redis/GeoCoordinates.java b/src/main/java/com/lambdaworks/redis/GeoCoordinates.java deleted file mode 100644 index d5bbe855c7..0000000000 --- a/src/main/java/com/lambdaworks/redis/GeoCoordinates.java +++ /dev/null @@ -1,48 +0,0 @@ -package com.lambdaworks.redis; - -import com.lambdaworks.redis.output.DoubleOutput; - -/** - * A tuple consisting of numerical geo data points to describe geo coordinates. - * - * @author Mark Paluch - */ -public class GeoCoordinates { - - public final Number x; - public final Number y; - - public GeoCoordinates(Number x, Number y) { - - this.x = x; - this.y = y; - } - - @Override - public boolean equals(Object o) { - if (this == o) - return true; - if (!(o instanceof GeoCoordinates)) - return false; - - GeoCoordinates geoCoords = (GeoCoordinates) o; - - if (x != null ? !x.equals(geoCoords.x) : geoCoords.x != null) - return false; - return !(y != null ? !y.equals(geoCoords.y) : geoCoords.y != null); - } - - @Override - public int hashCode() { - int result = x != null ? x.hashCode() : 0; - result = 31 * result + (y != null ? y.hashCode() : 0); - return result; - } - - @Override - public String toString() { - - return String.format("(%s, %s)", x, y); - } - -} diff --git a/src/main/java/com/lambdaworks/redis/GeoRadiusStoreArgs.java b/src/main/java/com/lambdaworks/redis/GeoRadiusStoreArgs.java deleted file mode 100644 index 66981f78db..0000000000 --- a/src/main/java/com/lambdaworks/redis/GeoRadiusStoreArgs.java +++ /dev/null @@ -1,123 +0,0 @@ -package com.lambdaworks.redis; - -import com.lambdaworks.redis.GeoArgs.Sort; -import com.lambdaworks.redis.internal.LettuceAssert; -import com.lambdaworks.redis.protocol.CommandArgs; -import com.lambdaworks.redis.protocol.CommandKeyword; - -/** - * Store Args for {@literal GEORADIUS} to store {@literal GEORADIUS} results or {@literal GEORADIUS} distances in a sorted set. - * - * @author Mark Paluch - */ -public class GeoRadiusStoreArgs { - - private K storeKey; - private K storeDistKey; - private Long count; - private Sort sort = Sort.none; - - /** - * Store the resulting members with their location in the new Geo set {@code storeKey}. - * Cannot be used together with {@link #withStoreDist(Object)}. - * - * @param storeKey the destination key. - * @return {@code this} - */ - public GeoRadiusStoreArgs withStore(K storeKey) { - LettuceAssert.notNull(storeKey, "StoreKey must not be null"); - this.storeKey = storeKey; - return this; - } - - /** - * Store the resulting members with their distance in the sorted set {@code storeKey}. - * Cannot be used together with {@link #withStore(Object)}. 
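
GeoArgs and GeoRadiusStoreArgs drive the GEORADIUS family of commands. A short sketch of the non-storing variant; the key, coordinates, and member names are illustrative only:

```java
import java.util.List;

import com.lambdaworks.redis.GeoArgs;
import com.lambdaworks.redis.GeoWithin;
import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.api.sync.RedisCommands;

public class GeoArgsExample {

    public static void main(String[] args) {

        RedisClient client = RedisClient.create("redis://localhost:6379");
        RedisCommands<String, String> commands = client.connect().sync();

        commands.geoadd("Sicily", 13.361389, 38.115556, "Palermo");
        commands.geoadd("Sicily", 15.087269, 37.502669, "Catania");

        // Request distance and coordinates, closest first, at most 10 results.
        GeoArgs geoArgs = new GeoArgs().withDistance().withCoordinates().withCount(10).asc();
        List<GeoWithin<String>> nearby = commands.georadius("Sicily", 15, 37, 200, GeoArgs.Unit.km, geoArgs);

        nearby.forEach(hit -> System.out.println(hit.member + " at " + hit.distance + " km"));

        client.shutdown();
    }
}
```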
- * - * @param storeKey the destination key. - * @return {@code this} - */ - public GeoRadiusStoreArgs withStoreDist(K storeKey) { - LettuceAssert.notNull(storeKey, "StoreKey must not be null"); - this.storeDistKey = storeKey; - return this; - } - - /** - * Limit results to {@code count} entries. - * - * @param count number greater 0 - * @return {@code this} - */ - public GeoRadiusStoreArgs withCount(long count) { - LettuceAssert.isTrue(count > 0, "Count must be greater 0"); - this.count = count; - return this; - } - - /** - * Sort results ascending. - * - * @return {@code this} - */ - public GeoRadiusStoreArgs asc() { - return sort(Sort.asc); - } - - /** - * Sort results descending. - * - * @return {@code this} - */ - public GeoRadiusStoreArgs desc() { - return sort(Sort.desc); - } - - /** - * - * @return the key for storing results - */ - public K getStoreKey() { - return storeKey; - } - - /** - * - * @return the key for storing distance results - */ - public K getStoreDistKey() { - return storeDistKey; - } - - /** - * Sort results. - * - * @param sort sort order, must not be {@literal null} - * @return {@code this} - */ - public GeoRadiusStoreArgs sort(Sort sort) { - LettuceAssert.notNull(sort, "Sort must not be null"); - - this.sort = sort; - return this; - } - - public void build(CommandArgs args) { - - if (sort != null && sort != Sort.none) { - args.add(sort.name()); - } - - if (count != null) { - args.add(CommandKeyword.COUNT).add(count); - } - - if (storeKey != null) { - args.add("STORE").addKey((K) storeKey); - } - - if (storeDistKey != null) { - args.add("STOREDIST").addKey((K) storeDistKey); - } - } -} diff --git a/src/main/java/com/lambdaworks/redis/GeoWithin.java b/src/main/java/com/lambdaworks/redis/GeoWithin.java deleted file mode 100644 index df546ab179..0000000000 --- a/src/main/java/com/lambdaworks/redis/GeoWithin.java +++ /dev/null @@ -1,67 +0,0 @@ -package com.lambdaworks.redis; - -/** - * Geo element within a certain radius. Contains: - *
- * <ul>
- * <li>the member</li>
- * <li>the distance from the reference point (if requested)</li>
- * <li>the geohash (if requested)</li>
- * <li>the coordinates (if requested)</li>
- * </ul>
- * - * @param Value type. - * @author Mark Paluch - */ -public class GeoWithin { - - public final V member; - public final Double distance; - public final Long geohash; - public final GeoCoordinates coordinates; - - public GeoWithin(V member, Double distance, Long geohash, GeoCoordinates coordinates) { - this.member = member; - this.distance = distance; - this.geohash = geohash; - this.coordinates = coordinates; - } - - @Override - public boolean equals(Object o) { - if (this == o) - return true; - if (!(o instanceof GeoWithin)) - return false; - - GeoWithin geoWithin = (GeoWithin) o; - - if (member != null ? !member.equals(geoWithin.member) : geoWithin.member != null) - return false; - if (distance != null ? !distance.equals(geoWithin.distance) : geoWithin.distance != null) - return false; - if (geohash != null ? !geohash.equals(geoWithin.geohash) : geoWithin.geohash != null) - return false; - return !(coordinates != null ? !coordinates.equals(geoWithin.coordinates) : geoWithin.coordinates != null); - } - - @Override - public int hashCode() { - int result = member != null ? member.hashCode() : 0; - result = 31 * result + (distance != null ? distance.hashCode() : 0); - result = 31 * result + (geohash != null ? geohash.hashCode() : 0); - result = 31 * result + (coordinates != null ? coordinates.hashCode() : 0); - return result; - } - - @Override - public String toString() { - final StringBuffer sb = new StringBuffer(); - sb.append(getClass().getSimpleName()); - sb.append(" [member=").append(member); - sb.append(", distance=").append(distance); - sb.append(", geohash=").append(geohash); - sb.append(", coordinates=").append(coordinates); - sb.append(']'); - return sb.toString(); - } -} diff --git a/src/main/java/com/lambdaworks/redis/JavaRuntime.java b/src/main/java/com/lambdaworks/redis/JavaRuntime.java deleted file mode 100644 index 46fbfb57a9..0000000000 --- a/src/main/java/com/lambdaworks/redis/JavaRuntime.java +++ /dev/null @@ -1,17 +0,0 @@ -package com.lambdaworks.redis; - -import static com.lambdaworks.redis.internal.LettuceClassUtils.isPresent; - -/** - * Utility to determine which Java runtime is used. - * - * @author Mark Paluch - */ -public class JavaRuntime { - - /** - * Constant whether the current JDK is Java 8 or higher. - */ - public final static boolean AT_LEAST_JDK_8 = isPresent("java.lang.FunctionalInterface"); - -} diff --git a/src/main/java/com/lambdaworks/redis/KeyScanCursor.java b/src/main/java/com/lambdaworks/redis/KeyScanCursor.java deleted file mode 100644 index 0618273719..0000000000 --- a/src/main/java/com/lambdaworks/redis/KeyScanCursor.java +++ /dev/null @@ -1,20 +0,0 @@ -package com.lambdaworks.redis; - -import java.util.ArrayList; -import java.util.List; - -/** - * Cursor providing a list of keys. - * - * @param Key type. - * @author Mark Paluch - * @since 3.0 - */ -public class KeyScanCursor extends ScanCursor { - - private final List keys = new ArrayList<>(); - - public List getKeys() { - return keys; - } -} diff --git a/src/main/java/com/lambdaworks/redis/KeyValue.java b/src/main/java/com/lambdaworks/redis/KeyValue.java deleted file mode 100644 index 5a1e3f3492..0000000000 --- a/src/main/java/com/lambdaworks/redis/KeyValue.java +++ /dev/null @@ -1,45 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis; - -/** - * A key-value pair. - * - * @param Key type. - * @param Value type. 
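
KeyScanCursor above is the result type of the cursor-based SCAN command; iteration continues by passing the previous cursor back until it reports finished. A sketch, with the match pattern and page size as placeholders:

```java
import com.lambdaworks.redis.KeyScanCursor;
import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.ScanArgs;
import com.lambdaworks.redis.api.sync.RedisCommands;

public class ScanExample {

    public static void main(String[] args) {

        RedisClient client = RedisClient.create("redis://localhost:6379");
        RedisCommands<String, String> commands = client.connect().sync();

        ScanArgs scanArgs = ScanArgs.Builder.limit(100).match("user:*");

        // First page, then keep feeding the returned cursor back until finished.
        KeyScanCursor<String> cursor = commands.scan(scanArgs);
        cursor.getKeys().forEach(System.out::println);

        while (!cursor.isFinished()) {
            cursor = commands.scan(cursor, scanArgs);
            cursor.getKeys().forEach(System.out::println);
        }

        client.shutdown();
    }
}
```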
- * @author Will Glozer - */ -public class KeyValue { - - public final K key; - public final V value; - - /** - * - * @param key the key - * @param value the value - */ - public KeyValue(K key, V value) { - this.key = key; - this.value = value; - } - - @Override - public boolean equals(Object o) { - if (o == null || getClass() != o.getClass()) { - return false; - } - KeyValue that = (KeyValue) o; - return key.equals(that.key) && value.equals(that.value); - } - - @Override - public int hashCode() { - return 31 * key.hashCode() + value.hashCode(); - } - - @Override - public String toString() { - return String.format("(%s, %s)", key, value); - } -} diff --git a/src/main/java/com/lambdaworks/redis/KillArgs.java b/src/main/java/com/lambdaworks/redis/KillArgs.java deleted file mode 100644 index ad90193999..0000000000 --- a/src/main/java/com/lambdaworks/redis/KillArgs.java +++ /dev/null @@ -1,108 +0,0 @@ -package com.lambdaworks.redis; - -import static com.lambdaworks.redis.protocol.CommandKeyword.*; -import static com.lambdaworks.redis.protocol.CommandType.*; - -import com.lambdaworks.redis.protocol.CommandArgs; - -/** - * - * Argument list builder for the redis CLIENT KILL command. Static import the - * methods from {@link Builder} and chain the method calls: {@code id(1).skipme()}. - * - * @author Mark Paluch - * @since 3.0 - */ -public class KillArgs { - - private static enum Type { - NORMAL, SLAVE, PUBSUB - } - - private Boolean skipme; - private String addr; - private Long id; - private Type type; - - /** - * Static builder methods. - */ - public static class Builder { - - /** - * Utility constructor. - */ - private Builder() { - - } - - public static KillArgs skipme() { - return new KillArgs().skipme(); - } - - public static KillArgs addr(String addr) { - return new KillArgs().addr(addr); - } - - public static KillArgs id(long id) { - return new KillArgs().id(id); - } - - public static KillArgs typePubsub() { - return new KillArgs().type(Type.PUBSUB); - } - - public static KillArgs typeNormal() { - return new KillArgs().type(Type.NORMAL); - } - - public static KillArgs typeSlave() { - return new KillArgs().type(Type.SLAVE); - } - - } - - public KillArgs skipme() { - return this.skipme(true); - } - - public KillArgs skipme(boolean state) { - this.skipme = state; - return this; - } - - public KillArgs addr(String addr) { - this.addr = addr; - return this; - } - - public KillArgs id(long id) { - this.id = id; - return this; - } - - public KillArgs type(Type type) { - this.type = type; - return this; - } - - void build(CommandArgs args) { - - if (skipme != null) { - args.add(SKIPME).add(skipme.booleanValue() ? "yes" : "no"); - } - - if (id != null) { - args.add(ID).add(id); - } - - if (addr != null) { - args.add(ADDR).add(addr); - } - - if (type != null) { - args.add(TYPE).add(type.name().toLowerCase()); - } - - } -} diff --git a/src/main/java/com/lambdaworks/redis/LettuceFutures.java b/src/main/java/com/lambdaworks/redis/LettuceFutures.java deleted file mode 100644 index a85a5cd3bd..0000000000 --- a/src/main/java/com/lambdaworks/redis/LettuceFutures.java +++ /dev/null @@ -1,112 +0,0 @@ -package com.lambdaworks.redis; - -import java.util.concurrent.ExecutionException; -import java.util.concurrent.Future; -import java.util.concurrent.TimeUnit; -import java.util.concurrent.TimeoutException; - -/** - * Utility to {@link #awaitAll(long, TimeUnit, Future[])} futures until they are done and to synchronize future execution using - * {@link #awaitOrCancel(RedisFuture, long, TimeUnit)}. 
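
KillArgs above builds the filter list for CLIENT KILL. A hedged sketch of how it is typically chained and passed to the sync API; the client address is a placeholder:

```java
import com.lambdaworks.redis.KillArgs;
import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.api.sync.RedisCommands;

public class ClientKillExample {

    public static void main(String[] args) {

        RedisClient client = RedisClient.create("redis://localhost:6379");
        RedisCommands<String, String> commands = client.connect().sync();

        // Kill other normal client connections from the given address,
        // but keep the current connection alive (SKIPME yes).
        Long killed = commands.clientKill(KillArgs.Builder.typeNormal().addr("192.0.2.10:52341").skipme());
        System.out.println("connections killed: " + killed);

        client.shutdown();
    }
}
```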
- * - * @author Mark Paluch - * @since 3.0 - */ -public class LettuceFutures { - - private LettuceFutures() { - - } - - /** - * Wait until futures are complete or the supplied timeout is reached. Commands are not canceled (in contrast to - * {@link #awaitOrCancel(RedisFuture, long, TimeUnit)}) when the timeout expires. - * - * @param timeout Maximum time to wait for futures to complete. - * @param unit Unit of time for the timeout. - * @param futures Futures to wait for. - * @return {@literal true} if all futures complete in time, otherwise {@literal false} - */ - public static boolean awaitAll(long timeout, TimeUnit unit, Future... futures) { - boolean complete; - - try { - long nanos = unit.toNanos(timeout); - long time = System.nanoTime(); - - for (Future f : futures) { - if (nanos < 0) { - return false; - } - f.get(nanos, TimeUnit.NANOSECONDS); - long now = System.nanoTime(); - nanos -= now - time; - time = now; - } - - complete = true; - } catch (TimeoutException e) { - complete = false; - } catch (ExecutionException e) { - if (e.getCause() instanceof RedisCommandExecutionException) { - throw new RedisCommandExecutionException(e.getCause().getMessage(), e.getCause()); - } - throw new RedisException(e.getCause()); - } catch (InterruptedException e) { - Thread.currentThread().interrupt(); - throw new RedisCommandInterruptedException(e); - } catch (Exception e) { - throw new RedisCommandExecutionException(e); - } - - return complete; - } - - /** - * Wait until futures are complete or the supplied timeout is reached. Commands are canceled if the timeout is reached but - * the command is not finished. - * - * @param cmd Command to wait for - * @param timeout Maximum time to wait for futures to complete - * @param unit Unit of time for the timeout - * @param Result type - * - * @return Result of the command. - */ - public static T awaitOrCancel(RedisFuture cmd, long timeout, TimeUnit unit) { - return await(timeout, unit, cmd); - } - - /** - * Wait until futures are complete or the supplied timeout is reached. Commands are canceled if the timeout is reached but - * the command is not finished. - * - * @param cmd Command to wait for - * @param timeout Maximum time to wait for futures to complete - * @param unit Unit of time for the timeout - * @param Result type - * @deprecated The method name does not reflect what the method is doing, therefore it is deprecated. Use - * {@link #awaitOrCancel(RedisFuture, long, TimeUnit)} instead. The semantics did not change and - * {@link #awaitOrCancel(RedisFuture, long, TimeUnit)} simply calls this method. - * @return True if all futures complete in time. 
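
awaitAll and awaitOrCancel are the synchronization points the sync facade uses internally, but they are also handy for batching asynchronous commands. A sketch; the timeout is arbitrary:

```java
import java.util.concurrent.TimeUnit;

import com.lambdaworks.redis.LettuceFutures;
import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.RedisFuture;
import com.lambdaworks.redis.api.async.RedisAsyncCommands;

public class AwaitAllExample {

    public static void main(String[] args) {

        RedisClient client = RedisClient.create("redis://localhost:6379");
        RedisAsyncCommands<String, String> async = client.connect().async();

        RedisFuture<String> set = async.set("key", "value");
        RedisFuture<String> get = async.get("key");

        // Wait for both commands; unlike awaitOrCancel, awaitAll does not
        // cancel the commands when the timeout expires.
        boolean completed = LettuceFutures.awaitAll(500, TimeUnit.MILLISECONDS, set, get);
        System.out.println("completed in time: " + completed);

        client.shutdown();
    }
}
```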
- */ - @Deprecated - public static T await(long timeout, TimeUnit unit, RedisFuture cmd) { - try { - if (!cmd.await(timeout, unit)) { - cmd.cancel(true); - throw new RedisCommandTimeoutException(); - } - - return cmd.get(); - } catch (InterruptedException e) { - Thread.currentThread().interrupt(); - throw new RedisCommandInterruptedException(e); - } catch (ExecutionException e) { - if (e.getCause() instanceof RedisCommandExecutionException) { - throw new RedisCommandExecutionException(e.getCause().getMessage(), e.getCause()); - } - throw new RedisException(e.getCause()); - } - } -} diff --git a/src/main/java/com/lambdaworks/redis/LettuceStrings.java b/src/main/java/com/lambdaworks/redis/LettuceStrings.java deleted file mode 100644 index c591fffea9..0000000000 --- a/src/main/java/com/lambdaworks/redis/LettuceStrings.java +++ /dev/null @@ -1,83 +0,0 @@ -package com.lambdaworks.redis; - -import java.nio.ByteBuffer; -import java.security.MessageDigest; -import java.security.NoSuchAlgorithmException; - -import com.lambdaworks.codec.Base16; - -/** - * Helper for {@link String} checks. This class is part of the internal API and may change without further notice. - * - * @author Mark Paluch - * @since 3.0 - */ -public class LettuceStrings { - - /** - * Utility constructor. - */ - private LettuceStrings() { - - } - - /** - * Checks if a CharSequence is empty ("") or null. - * - * @param cs the char sequence - * @return true if empty - */ - public static boolean isEmpty(final CharSequence cs) { - return cs == null || cs.length() == 0; - } - - /** - * Checks if a CharSequence is not empty ("") and not null. - * - * @param cs the char sequence - * @return true if not empty - * - */ - public static boolean isNotEmpty(final CharSequence cs) { - return !isEmpty(cs); - } - - /** - * Convert double to string. If double is infinite, returns positive/negative infinity {@code +inf} and {@code -inf}. - * - * @param n the double. - * @return string representation of {@code n} - */ - public static String string(double n) { - if (Double.isInfinite(n)) { - return (n > 0) ? "+inf" : "-inf"; - } - return Double.toString(n); - } - - /** - * Create SHA1 digest from Lua script. - * - * @param script the script - * @return the Base16 encoded SHA1 value - */ - public static String digest(byte[] script) { - return digest(ByteBuffer.wrap(script)); - } - - /** - * Create SHA1 digest from Lua script. - * - * @param script the script - * @return the Base16 encoded SHA1 value - */ - public static String digest(ByteBuffer script) { - try { - MessageDigest md = MessageDigest.getInstance("SHA1"); - md.update(script); - return new String(Base16.encode(md.digest(), false)); - } catch (NoSuchAlgorithmException e) { - throw new RedisException("JVM does not support SHA1"); - } - } -} diff --git a/src/main/java/com/lambdaworks/redis/MapScanCursor.java b/src/main/java/com/lambdaworks/redis/MapScanCursor.java deleted file mode 100644 index 020dc818cf..0000000000 --- a/src/main/java/com/lambdaworks/redis/MapScanCursor.java +++ /dev/null @@ -1,25 +0,0 @@ -package com.lambdaworks.redis; - -import java.util.LinkedHashMap; -import java.util.Map; - -/** - * Scan cursor for maps. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 3.0 - */ -public class MapScanCursor extends ScanCursor { - - private final Map map = new LinkedHashMap<>(); - - /** - * - * @return the map result. 
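
LettuceStrings.digest computes the same SHA1 that Redis uses to address Lua scripts, which makes it a natural companion to EVALSHA, even though the class is documented as internal API. A sketch under that assumption; the script and key are illustrative only:

```java
import java.nio.charset.StandardCharsets;

import com.lambdaworks.redis.LettuceStrings;
import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.ScriptOutputType;
import com.lambdaworks.redis.api.sync.RedisCommands;

public class ScriptDigestExample {

    public static void main(String[] args) {

        RedisClient client = RedisClient.create("redis://localhost:6379");
        RedisCommands<String, String> commands = client.connect().sync();

        String script = "return redis.call('GET', KEYS[1])";
        String sha1 = LettuceStrings.digest(script.getBytes(StandardCharsets.UTF_8));

        // Load the script once, then address it by its SHA1 digest.
        commands.scriptLoad(script);
        String value = commands.evalsha(sha1, ScriptOutputType.VALUE, "key");
        System.out.println(value);

        client.shutdown();
    }
}
```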
- */ - public Map getMap() { - return map; - } -} diff --git a/src/main/java/com/lambdaworks/redis/MigrateArgs.java b/src/main/java/com/lambdaworks/redis/MigrateArgs.java deleted file mode 100644 index 539780c64c..0000000000 --- a/src/main/java/com/lambdaworks/redis/MigrateArgs.java +++ /dev/null @@ -1,106 +0,0 @@ -package com.lambdaworks.redis; - -import java.util.ArrayList; -import java.util.Iterator; -import java.util.List; - -import com.lambdaworks.redis.internal.LettuceAssert; -import com.lambdaworks.redis.protocol.CommandArgs; -import com.lambdaworks.redis.protocol.CommandKeyword; -import com.lambdaworks.redis.protocol.CommandType; - -/** - * Argument list builder for the new redis MIGRATE command. Static import - * the methods from {@link Builder} and chain the method calls: {@code ex(10).nx()}. - * - * @author Mark Paluch - */ -public class MigrateArgs { - - private boolean copy = false; - private boolean replace = false; - List keys = new ArrayList<>(); - - public static class Builder { - - /** - * Utility constructor. - */ - private Builder() { - - } - - public static MigrateArgs copy() { - return new MigrateArgs().copy(); - } - - public static MigrateArgs replace() { - return new MigrateArgs().replace(); - } - - public static MigrateArgs key(K key) { - return new MigrateArgs().key(key); - } - - @SafeVarargs - public static MigrateArgs keys(K... keys) { - return new MigrateArgs().keys(keys); - } - - public static MigrateArgs keys(Iterable keys) { - return new MigrateArgs().keys(keys); - } - } - - public MigrateArgs copy() { - this.copy = true; - return this; - } - - public MigrateArgs replace() { - this.replace = true; - return this; - } - - public MigrateArgs key(K key) { - LettuceAssert.notNull(key, "Key must not be null"); - this.keys.add(key); - return this; - } - - @SafeVarargs - public final MigrateArgs keys(K... 
keys) { - LettuceAssert.notEmpty(keys, "Keys must not be empty"); - for (K key : keys) { - this.keys.add(key); - } - return this; - } - - public MigrateArgs keys(Iterable keys) { - LettuceAssert.notNull(keys, "Keys must not be null"); - Iterator iterator = keys.iterator(); - while (iterator.hasNext()) { - this.keys.add(iterator.next()); - } - return this; - } - - @SuppressWarnings("unchecked") - public void build(CommandArgs args) { - - if (copy) { - args.add(CommandKeyword.COPY); - } - - if (replace) { - args.add(CommandKeyword.REPLACE); - } - - if (keys.size() > 1) { - args.add(CommandType.KEYS); - args.addKeys((Iterable) keys); - } - - } -} diff --git a/src/main/java/com/lambdaworks/redis/PlainChannelInitializer.java b/src/main/java/com/lambdaworks/redis/PlainChannelInitializer.java deleted file mode 100644 index a8db635a0a..0000000000 --- a/src/main/java/com/lambdaworks/redis/PlainChannelInitializer.java +++ /dev/null @@ -1,138 +0,0 @@ -package com.lambdaworks.redis; - -import static com.lambdaworks.redis.ConnectionEventTrigger.local; -import static com.lambdaworks.redis.ConnectionEventTrigger.remote; - -import java.util.List; -import java.util.concurrent.CompletableFuture; - -import com.lambdaworks.redis.codec.Utf8StringCodec; -import com.lambdaworks.redis.event.EventBus; -import com.lambdaworks.redis.event.connection.ConnectedEvent; -import com.lambdaworks.redis.event.connection.ConnectionActivatedEvent; -import com.lambdaworks.redis.event.connection.DisconnectedEvent; -import com.lambdaworks.redis.protocol.AsyncCommand; -import io.netty.channel.Channel; -import io.netty.channel.ChannelHandler; -import io.netty.channel.ChannelHandlerContext; -import io.netty.channel.ChannelPipeline; - -/** - * @author Mark Paluch - */ -class PlainChannelInitializer extends io.netty.channel.ChannelInitializer implements RedisChannelInitializer { - - final static RedisCommandBuilder INITIALIZING_CMD_BUILDER = new RedisCommandBuilder<>(new Utf8StringCodec()); - - protected boolean pingBeforeActivate; - protected CompletableFuture initializedFuture = new CompletableFuture<>(); - protected final char[] password; - private final List handlers; - private final EventBus eventBus; - - public PlainChannelInitializer(boolean pingBeforeActivateConnection, char[] password, List handlers, - EventBus eventBus) { - this.pingBeforeActivate = pingBeforeActivateConnection; - this.password = password; - this.handlers = handlers; - this.eventBus = eventBus; - } - - @Override - protected void initChannel(Channel channel) throws Exception { - - if (channel.pipeline().get("channelActivator") == null) { - - channel.pipeline().addLast("channelActivator", new RedisChannelInitializerImpl() { - - private AsyncCommand pingCommand; - - @Override - public CompletableFuture channelInitialized() { - return initializedFuture; - } - - @Override - public void channelInactive(ChannelHandlerContext ctx) throws Exception { - eventBus.publish(new DisconnectedEvent(local(ctx), remote(ctx))); - initializedFuture = new CompletableFuture<>(); - pingCommand = null; - super.channelInactive(ctx); - } - - @Override - public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception { - if (evt instanceof ConnectionEvents.Close) { - if (ctx.channel().isOpen()) { - ctx.channel().close(); - } - } - - if (evt instanceof ConnectionEvents.Activated) { - if (!initializedFuture.isDone()) { - initializedFuture.complete(true); - eventBus.publish(new ConnectionActivatedEvent(local(ctx), remote(ctx))); - } - } - 
super.userEventTriggered(ctx, evt); - } - - @Override - public void channelActive(final ChannelHandlerContext ctx) throws Exception { - eventBus.publish(new ConnectedEvent(local(ctx), remote(ctx))); - if (pingBeforeActivate) { - if (password != null && password.length != 0) { - pingCommand = new AsyncCommand<>(INITIALIZING_CMD_BUILDER.auth(new String(password))); - } else { - pingCommand = new AsyncCommand<>(INITIALIZING_CMD_BUILDER.ping()); - } - pingBeforeActivate(pingCommand, initializedFuture, ctx, handlers); - } else { - super.channelActive(ctx); - } - } - - @Override - public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception { - if (!initializedFuture.isDone()) { - initializedFuture.completeExceptionally(cause); - } - super.exceptionCaught(ctx, cause); - } - }); - } - - for (ChannelHandler handler : handlers) { - removeIfExists(channel.pipeline(), handler.getClass()); - channel.pipeline().addLast(handler); - } - } - - static void pingBeforeActivate(final AsyncCommand cmd, final CompletableFuture initializedFuture, - final ChannelHandlerContext ctx, final List handlers) throws Exception { - cmd.handle((o, throwable) -> { - if (throwable == null) { - initializedFuture.complete(true); - ctx.fireChannelActive(); - } else { - initializedFuture.completeExceptionally(throwable); - } - return null; - }); - - ctx.channel().writeAndFlush(cmd); - } - - static void removeIfExists(ChannelPipeline pipeline, Class handlerClass) { - ChannelHandler channelHandler = pipeline.get(handlerClass); - if (channelHandler != null) { - pipeline.remove(channelHandler); - } - } - - @Override - public CompletableFuture channelInitialized() { - return initializedFuture; - } - -} diff --git a/src/main/java/com/lambdaworks/redis/ReactiveCommandDispatcher.java b/src/main/java/com/lambdaworks/redis/ReactiveCommandDispatcher.java deleted file mode 100644 index dbacd78b53..0000000000 --- a/src/main/java/com/lambdaworks/redis/ReactiveCommandDispatcher.java +++ /dev/null @@ -1,197 +0,0 @@ -package com.lambdaworks.redis; - -import java.util.Arrays; -import java.util.Collection; -import java.util.function.Supplier; - -import com.lambdaworks.redis.api.StatefulConnection; -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.internal.LettuceAssert; -import com.lambdaworks.redis.output.StreamingOutput; -import com.lambdaworks.redis.protocol.CommandWrapper; -import com.lambdaworks.redis.protocol.RedisCommand; - -import rx.Observable; -import rx.Subscriber; - -/** - * Reactive command dispatcher. 
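
ReactiveCommandDispatcher backs the rx-based API: a command is dispatched when its Observable is subscribed to, not when the method is called. A sketch of what that looks like from the caller's side; key, value, and the crude sleep are placeholders:

```java
import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.api.rx.RedisReactiveCommands;

public class ReactiveExample {

    public static void main(String[] args) throws InterruptedException {

        RedisClient client = RedisClient.create("redis://localhost:6379");
        RedisReactiveCommands<String, String> reactive = client.connect().reactive();

        // Nothing is sent to Redis until subscribe(); the dispatcher wraps the
        // command and emits the result (or the elements of a collection) via onNext.
        reactive.set("key", "value")
                .flatMap(ok -> reactive.get("key"))
                .subscribe(value -> System.out.println("got: " + value));

        Thread.sleep(100); // crude wait for the asynchronous pipeline in this sketch
        client.shutdown();
    }
}
```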
- * - * @author Mark Paluch - */ -public class ReactiveCommandDispatcher implements Observable.OnSubscribe { - - private Supplier> commandSupplier; - private volatile RedisCommand command; - private StatefulConnection connection; - private boolean dissolve; - - /** - * - * @param staticCommand static command, must not be {@literal null} - * @param connection the connection, must not be {@literal null} - * @param dissolve dissolve collections into particular elements - */ - public ReactiveCommandDispatcher(RedisCommand staticCommand, StatefulConnection connection, - boolean dissolve) { - this(() -> staticCommand, connection, dissolve); - } - - /** - * - * @param commandSupplier command supplier, must not be {@literal null} - * @param connection the connection, must not be {@literal null} - * @param dissolve dissolve collections into particular elements - */ - public ReactiveCommandDispatcher(Supplier> commandSupplier, StatefulConnection connection, - boolean dissolve) { - - LettuceAssert.notNull(commandSupplier, "CommandSupplier must not be null"); - LettuceAssert.notNull(connection, "StatefulConnection must not be null"); - - this.commandSupplier = commandSupplier; - this.connection = connection; - this.dissolve = dissolve; - this.command = commandSupplier.get(); - } - - @Override - public void call(Subscriber subscriber) { - - // Reuse the first command but then discard it. - RedisCommand command = this.command; - if (command == null) { - command = commandSupplier.get(); - } - - if (command.getOutput() instanceof StreamingOutput) { - StreamingOutput streamingOutput = (StreamingOutput) command.getOutput(); - - if (connection instanceof StatefulRedisConnection && ((StatefulRedisConnection) connection).isMulti()) { - streamingOutput.setSubscriber(new DelegatingWrapper( - Arrays.asList(new ObservableSubscriberWrapper<>(subscriber), streamingOutput.getSubscriber()))); - } else { - streamingOutput.setSubscriber(new ObservableSubscriberWrapper<>(subscriber)); - } - } - - connection.dispatch(new ObservableCommand<>(command, subscriber, dissolve)); - - this.command = null; - - } - - private static class ObservableCommand extends CommandWrapper { - - private final Subscriber subscriber; - private final boolean dissolve; - private boolean completed = false; - - public ObservableCommand(RedisCommand command, Subscriber subscriber, boolean dissolve) { - super(command); - this.subscriber = subscriber; - this.dissolve = dissolve; - } - - @Override - @SuppressWarnings("unchecked") - public void complete() { - if (completed || subscriber.isUnsubscribed()) { - return; - } - - try { - super.complete(); - - if (getOutput() != null) { - Object result = getOutput().get(); - if (!(getOutput() instanceof StreamingOutput) && result != null) { - - if (dissolve && result instanceof Collection) { - Collection collection = (Collection) result; - for (T t : collection) { - subscriber.onNext(t); - } - } else { - subscriber.onNext((T) result); - } - } - - if (getOutput().hasError()) { - subscriber.onError(new RedisCommandExecutionException(getOutput().getError())); - completed = true; - return; - } - } - - try { - subscriber.onCompleted(); - } catch (Exception e) { - completeExceptionally(e); - } - } finally { - completed = true; - } - } - - @Override - public void cancel() { - - if (completed || subscriber.isUnsubscribed()) { - return; - } - - super.cancel(); - subscriber.onCompleted(); - completed = true; - } - - @Override - public boolean completeExceptionally(Throwable throwable) { - if (completed || 
subscriber.isUnsubscribed()) { - return false; - } - - boolean b = super.completeExceptionally(throwable); - subscriber.onError(throwable); - completed = true; - return b; - } - } - - static class ObservableSubscriberWrapper implements StreamingOutput.Subscriber { - - private Subscriber subscriber; - - public ObservableSubscriberWrapper(Subscriber subscriber) { - this.subscriber = subscriber; - } - - @Override - public void onNext(T t) { - - if(subscriber.isUnsubscribed()) { - return; - } - - subscriber.onNext(t); - } - } - - static class DelegatingWrapper implements StreamingOutput.Subscriber { - - private Collection> subscribers; - - public DelegatingWrapper(Collection> subscribers) { - this.subscribers = subscribers; - } - - @Override - public void onNext(T t) { - - for (StreamingOutput.Subscriber subscriber : subscribers) { - subscriber.onNext(t); - } - } - } -} diff --git a/src/main/java/com/lambdaworks/redis/ReadFrom.java b/src/main/java/com/lambdaworks/redis/ReadFrom.java deleted file mode 100644 index ea878616d6..0000000000 --- a/src/main/java/com/lambdaworks/redis/ReadFrom.java +++ /dev/null @@ -1,88 +0,0 @@ -package com.lambdaworks.redis; - -import java.util.List; - -import com.lambdaworks.redis.models.role.RedisNodeDescription; - -/** - * Defines from which Redis nodes data is read. - * - * @author Mark Paluch - * @since 4.0 - */ -public abstract class ReadFrom { - - /** - * Setting to read from the master only. - */ - public final static ReadFrom MASTER = new ReadFromImpl.ReadFromMaster(); - - /** - * Setting to read preferred from the master and fall back to a slave if the master is not available. - */ - public final static ReadFrom MASTER_PREFERRED = new ReadFromImpl.ReadFromMasterPreferred(); - - /** - * Setting to read from the slave only. - */ - public final static ReadFrom SLAVE = new ReadFromImpl.ReadFromSlave(); - - /** - * Setting to read from the nearest node. - */ - public final static ReadFrom NEAREST = new ReadFromImpl.ReadFromNearest(); - - /** - * Chooses the nodes from the matching Redis nodes that match this read selector. - * - * @param nodes set of nodes that are suitable for reading - * @return List of {@link RedisNodeDescription}s that are selected for reading - */ - public abstract List select(Nodes nodes); - - /** - * Retrieve the {@link ReadFrom} preset by name. - * - * @param name the name of the read from setting - * @return the {@link ReadFrom} preset - * @throws IllegalArgumentException if {@code name} is empty, {@literal null} or the {@link ReadFrom} preset is unknown. - */ - public static ReadFrom valueOf(String name) { - if (LettuceStrings.isEmpty(name)) { - throw new IllegalArgumentException("Name must not be empty"); - } - - if (name.equalsIgnoreCase("master")) { - return MASTER; - } - - if (name.equalsIgnoreCase("masterPreferred")) { - return MASTER_PREFERRED; - } - - if (name.equalsIgnoreCase("slave")) { - return SLAVE; - } - - if (name.equalsIgnoreCase("nearest")) { - return NEAREST; - } - - throw new IllegalArgumentException("ReadFrom " + name + " not supported"); - } - - /** - * Descriptor of nodes that are available for the current read operation. - */ - public interface Nodes extends Iterable { - - /** - * Returns the list of nodes that are applicable for the read operation. The list is ordered by latency. - * - * @return the collection of nodes that are applicable for reading. 
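For context, a minimal sketch of what the ReactiveCommandDispatcher removed here enables at the API level: reactive commands are lazy Observables, and collection replies are dissolved into individual onNext signals. The com.lambdaworks.redis.api.rx package name and the URI are assumptions for illustration.

import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.api.StatefulRedisConnection;
import com.lambdaworks.redis.api.rx.RedisReactiveCommands;

public class ReactiveDissolveSketch {
    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        StatefulRedisConnection<String, String> connection = client.connect();
        RedisReactiveCommands<String, String> reactive = connection.reactive();

        // Commands are only sent once the Observable is subscribed.
        reactive.sadd("tags", "redis", "netty", "rx").toBlocking().single();

        // SMEMBERS replies with a Set, but the dispatcher dissolves it into one onNext per member.
        reactive.smembers("tags").toBlocking().forEach(System.out::println);

        connection.close();
        client.shutdown();
    }
}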
- * - */ - List getNodes(); - } - -} diff --git a/src/main/java/com/lambdaworks/redis/ReadFromImpl.java b/src/main/java/com/lambdaworks/redis/ReadFromImpl.java deleted file mode 100644 index ec45fae1b1..0000000000 --- a/src/main/java/com/lambdaworks/redis/ReadFromImpl.java +++ /dev/null @@ -1,82 +0,0 @@ -package com.lambdaworks.redis; - -import java.util.ArrayList; -import java.util.Collections; -import java.util.List; - -import com.lambdaworks.redis.internal.LettuceLists; -import com.lambdaworks.redis.models.role.RedisInstance; -import com.lambdaworks.redis.models.role.RedisNodeDescription; - -/** - * Collection of common read setting implementations. - * - * @author Mark Paluch - * @since 4.0 - */ -class ReadFromImpl { - - /** - * Read from master only. - */ - static final class ReadFromMaster extends ReadFrom { - @Override - public List select(Nodes nodes) { - for (RedisNodeDescription node : nodes) { - if (node.getRole() == RedisInstance.Role.MASTER) { - return LettuceLists.newList(node); - } - } - return Collections.emptyList(); - } - } - - /** - * Read preffered from master. If the master is not available, read from a slave. - */ - static final class ReadFromMasterPreferred extends ReadFrom { - @Override - public List select(Nodes nodes) { - List result = new ArrayList<>(); - - for (RedisNodeDescription node : nodes) { - if (node.getRole() == RedisInstance.Role.MASTER) { - result.add(node); - } - } - - for (RedisNodeDescription node : nodes) { - if (node.getRole() == RedisInstance.Role.SLAVE) { - result.add(node); - } - } - return result; - } - } - - /** - * Read from slave only. - */ - static final class ReadFromSlave extends ReadFrom { - @Override - public List select(Nodes nodes) { - List result = new ArrayList<>(); - for (RedisNodeDescription node : nodes) { - if (node.getRole() == RedisInstance.Role.SLAVE) { - result.add(node); - } - } - return result; - } - } - - /** - * Read from nearest node. - */ - static final class ReadFromNearest extends ReadFrom { - @Override - public List select(Nodes nodes) { - return nodes.getNodes(); - } - } -} diff --git a/src/main/java/com/lambdaworks/redis/RedisAsyncCommandsImpl.java b/src/main/java/com/lambdaworks/redis/RedisAsyncCommandsImpl.java deleted file mode 100644 index bd223a9928..0000000000 --- a/src/main/java/com/lambdaworks/redis/RedisAsyncCommandsImpl.java +++ /dev/null @@ -1,36 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis; - -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.api.async.RedisAsyncCommands; -import com.lambdaworks.redis.cluster.api.async.RedisClusterAsyncCommands; -import com.lambdaworks.redis.codec.RedisCodec; - -/** - * An asynchronous and thread-safe API for a Redis connection. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - */ -public class RedisAsyncCommandsImpl extends AbstractRedisAsyncCommands implements RedisAsyncConnection, - RedisClusterAsyncConnection, RedisAsyncCommands, RedisClusterAsyncCommands { - - /** - * Initialize a new instance. 
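The ReadFrom presets removed here are typically consumed through a master/slave connection. A minimal sketch, assuming the com.lambdaworks.redis.masterslave API is available in this version; host, port, and key are placeholders.

import com.lambdaworks.redis.ReadFrom;
import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.RedisURI;
import com.lambdaworks.redis.codec.StringCodec;
import com.lambdaworks.redis.masterslave.MasterSlave;
import com.lambdaworks.redis.masterslave.StatefulRedisMasterSlaveConnection;

public class ReadFromSketch {
    public static void main(String[] args) {
        RedisClient client = RedisClient.create();
        StatefulRedisMasterSlaveConnection<String, String> connection = MasterSlave.connect(
                client, StringCodec.UTF8, RedisURI.create("redis://localhost:6379"));

        // Prefer the master and fall back to slaves, equivalent to ReadFrom.valueOf("masterPreferred").
        connection.setReadFrom(ReadFrom.MASTER_PREFERRED);

        System.out.println(connection.sync().get("some-key"));

        connection.close();
        client.shutdown();
    }
}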
- * - * @param connection the connection to operate on - * @param codec the codec for command encoding - * - */ - public RedisAsyncCommandsImpl(StatefulRedisConnection connection, RedisCodec codec) { - super(connection, codec); - } - - @Override - @SuppressWarnings("unchecked") - public StatefulRedisConnection getStatefulConnection() { - return (StatefulRedisConnection) connection; - } -} diff --git a/src/main/java/com/lambdaworks/redis/RedisAsyncConnection.java b/src/main/java/com/lambdaworks/redis/RedisAsyncConnection.java deleted file mode 100644 index dc68fe22e9..0000000000 --- a/src/main/java/com/lambdaworks/redis/RedisAsyncConnection.java +++ /dev/null @@ -1,53 +0,0 @@ -package com.lambdaworks.redis; - -import java.util.concurrent.TimeUnit; - -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.api.async.RedisAsyncCommands; -import com.lambdaworks.redis.api.async.RedisTransactionalAsyncCommands; - -/** - * A complete asynchronous and thread-safe Redis API with 400+ Methods. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 3.0 - * @deprecated Use {@link RedisAsyncCommands} - */ -@Deprecated -public interface RedisAsyncConnection extends RedisHashesAsyncConnection, RedisKeysAsyncConnection, - RedisStringsAsyncConnection, RedisListsAsyncConnection, RedisSetsAsyncConnection, - RedisSortedSetsAsyncConnection, RedisScriptingAsyncConnection, RedisServerAsyncConnection, - RedisHLLAsyncConnection, RedisGeoAsyncConnection, BaseRedisAsyncConnection, - RedisClusterAsyncConnection { - /** - * Set the default timeout for operations. - * - * @param timeout the timeout value - * @param unit the unit of the timeout value - */ - void setTimeout(long timeout, TimeUnit unit); - - /** - * Authenticate to the server. - * - * @param password the password - * @return String simple-string-reply - */ - String auth(String password); - - /** - * Change the selected database for the current connection. - * - * @param db the database number - * @return String simple-string-reply - */ - String select(int db); - - /** - * @return the underlying connection. - */ - StatefulRedisConnection getStatefulConnection(); - -} diff --git a/src/main/java/com/lambdaworks/redis/RedisChannelHandler.java b/src/main/java/com/lambdaworks/redis/RedisChannelHandler.java deleted file mode 100644 index 4d52769182..0000000000 --- a/src/main/java/com/lambdaworks/redis/RedisChannelHandler.java +++ /dev/null @@ -1,225 +0,0 @@ -package com.lambdaworks.redis; - -import java.io.Closeable; -import java.io.IOException; -import java.lang.reflect.Proxy; -import java.util.Arrays; -import java.util.Collection; -import java.util.concurrent.TimeUnit; - -import com.lambdaworks.redis.api.StatefulConnection; -import com.lambdaworks.redis.internal.LettuceAssert; -import com.lambdaworks.redis.protocol.RedisCommand; - -import io.netty.channel.ChannelHandlerContext; -import io.netty.channel.ChannelInboundHandlerAdapter; -import io.netty.util.internal.logging.InternalLogger; -import io.netty.util.internal.logging.InternalLoggerFactory; - -/** - * Abstract base for every redis connection. Provides basic connection functionality and tracks open resources. - * - * @param Key type. - * @param Value type. 
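A minimal sketch of the RedisAsyncCommands API that supersedes the deprecated RedisAsyncConnection removed here, assuming RedisFuture implements CompletionStage as in the 4.x line; the URI is a placeholder.

import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.RedisFuture;
import com.lambdaworks.redis.api.StatefulRedisConnection;
import com.lambdaworks.redis.api.async.RedisAsyncCommands;

public class AsyncCommandsSketch {
    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        StatefulRedisConnection<String, String> connection = client.connect();
        RedisAsyncCommands<String, String> async = connection.async();

        RedisFuture<String> set = async.set("answer", "42");
        RedisFuture<String> get = async.get("answer");

        // Non-blocking composition through CompletionStage.
        get.thenAccept(value -> System.out.println("answer = " + value));

        // Block only so the demo does not shut down before the replies arrive.
        set.toCompletableFuture().join();
        get.toCompletableFuture().join();

        connection.close();
        client.shutdown();
    }
}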
- * @author Mark Paluch - * @since 3.0 - */ -public abstract class RedisChannelHandler extends ChannelInboundHandlerAdapter implements Closeable { - - private static final InternalLogger logger = InternalLoggerFactory.getInstance(RedisChannelHandler.class); - - protected long timeout; - protected TimeUnit unit; - - private CloseEvents closeEvents = new CloseEvents(); - private final RedisChannelWriter channelWriter; - private volatile boolean closed; - private volatile boolean active = true; - private volatile ClientOptions clientOptions; - - // If DEBUG level logging has been enabled at startup. - private final boolean debugEnabled; - - /** - * @param writer the channel writer - * @param timeout timeout value - * @param unit unit of the timeout - */ - public RedisChannelHandler(RedisChannelWriter writer, long timeout, TimeUnit unit) { - - this.channelWriter = writer; - debugEnabled = logger.isDebugEnabled(); - - writer.setRedisChannelHandler(this); - setTimeout(timeout, unit); - } - - @Override - public void channelRegistered(ChannelHandlerContext ctx) throws Exception { - closed = false; - } - - /** - * Set the command timeout for this connection. - * - * @param timeout Command timeout. - * @param unit Unit of time for the timeout. - */ - public void setTimeout(long timeout, TimeUnit unit) { - this.timeout = timeout; - this.unit = unit; - } - - /** - * Close the connection. - */ - @Override - public synchronized void close() { - - if(debugEnabled) { - logger.debug("close()"); - } - - if (closed) { - logger.warn("Connection is already closed"); - return; - } - - if (!closed) { - active = false; - closed = true; - channelWriter.close(); - closeEvents.fireEventClosed(this); - closeEvents = new CloseEvents(); - } - - } - - @Override - public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception { - channelRead(msg); - } - - /** - * Invoked on a channel read. - * - * @param msg channel message - */ - public void channelRead(Object msg) { - - } - - protected > C dispatch(C cmd) { - - if(debugEnabled) { - logger.debug("dispatching command {}", cmd); - } - - return channelWriter.write(cmd); - } - - /** - * Register Closeable resources. Internal access only. - * - * @param registry registry of closeables - * @param closeables closeables to register - */ - public void registerCloseables(final Collection registry, final Closeable... closeables) { - registry.addAll(Arrays.asList(closeables)); - - addListener(resource -> { - for (Closeable closeable : closeables) { - if (closeable == RedisChannelHandler.this) { - continue; - } - - try { - closeable.close(); - } catch (IOException e) { - if(debugEnabled) { - logger.debug(e.toString(), e); - } - } - } - - registry.removeAll(Arrays.asList(closeables)); - }); - } - - protected void addListener(CloseEvents.CloseListener listener) { - closeEvents.addListener(listener); - } - - /** - * - * @return true if the connection is closed (final state in the connection lifecyle). - */ - public boolean isClosed() { - return closed; - } - - /** - * Notification when the connection becomes active (connected). - */ - public void activated() { - active = true; - closed = false; - } - - /** - * Notification when the connection becomes inactive (disconnected). - */ - public void deactivated() { - active = false; - } - - /** - * - * @return the channel writer - */ - public RedisChannelWriter getChannelWriter() { - return channelWriter; - } - - /** - * - * @return true if the connection is active and not closed. 
- */ - public boolean isOpen() { - return active; - } - - public void reset() { - channelWriter.reset(); - } - - public ClientOptions getOptions() { - return clientOptions; - } - - public void setOptions(ClientOptions clientOptions) { - LettuceAssert.notNull(clientOptions, "ClientOptions must not be null"); - this.clientOptions = clientOptions; - } - - public long getTimeout() { - return timeout; - } - - public TimeUnit getTimeoutUnit() { - return unit; - } - - protected T syncHandler(Object asyncApi, Class... interfaces) { - FutureSyncInvocationHandler h = new FutureSyncInvocationHandler<>((StatefulConnection) this, asyncApi, interfaces); - return (T) Proxy.newProxyInstance(AbstractRedisClient.class.getClassLoader(), interfaces, h); - } - - public void setAutoFlushCommands(boolean autoFlush) { - getChannelWriter().setAutoFlushCommands(autoFlush); - } - - public void flushCommands() { - getChannelWriter().flushCommands(); - } -} diff --git a/src/main/java/com/lambdaworks/redis/RedisChannelInitializer.java b/src/main/java/com/lambdaworks/redis/RedisChannelInitializer.java deleted file mode 100644 index e4674d41ef..0000000000 --- a/src/main/java/com/lambdaworks/redis/RedisChannelInitializer.java +++ /dev/null @@ -1,20 +0,0 @@ -package com.lambdaworks.redis; - -import java.util.concurrent.Future; - -import io.netty.channel.ChannelHandler; - -/** - * Channel initializer to set up the transport before a Redis connection can be used. This is part of the internal API. This - * class is part of the internal API. - * - * @author Mark Paluch - */ -public interface RedisChannelInitializer extends ChannelHandler { - - /** - * - * @return future to synchronize channel initialization. Returns a new future for every reconnect. - */ - Future channelInitialized(); -} diff --git a/src/main/java/com/lambdaworks/redis/RedisChannelInitializerImpl.java b/src/main/java/com/lambdaworks/redis/RedisChannelInitializerImpl.java deleted file mode 100644 index 9381ae63a3..0000000000 --- a/src/main/java/com/lambdaworks/redis/RedisChannelInitializerImpl.java +++ /dev/null @@ -1,11 +0,0 @@ -package com.lambdaworks.redis; - -import io.netty.channel.ChannelDuplexHandler; - -/** - * Channel initializer to set up the transport before a Redis connection can be used. This class is part of the internal API. - * - * @author Mark Paluch - */ -public abstract class RedisChannelInitializerImpl extends ChannelDuplexHandler implements RedisChannelInitializer { -} diff --git a/src/main/java/com/lambdaworks/redis/RedisChannelWriter.java b/src/main/java/com/lambdaworks/redis/RedisChannelWriter.java deleted file mode 100644 index 214364266a..0000000000 --- a/src/main/java/com/lambdaworks/redis/RedisChannelWriter.java +++ /dev/null @@ -1,58 +0,0 @@ -package com.lambdaworks.redis; - -import java.io.Closeable; - -import com.lambdaworks.redis.protocol.RedisCommand; - -/** - * Writer for a channel. Writers push commands on to the communication channel and maintain a state for the commands. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 3.0 - */ -public interface RedisChannelWriter extends Closeable { - - /** - * Write a command on the channel. The command may be changed/wrapped during write and the written instance is returned - * after the call. - * - * @param command the redis command - * @param result type - * @param command type - * @return the written redis command - */ - > C write(C command); - - @Override - void close(); - - /** - * Reset the writer state. 
Queued commands will be canceled and the internal state will be reset. This is useful when the - * internal state machine gets out of sync with the connection. - */ - void reset(); - - /** - * Set the corresponding connection instance in order to notify it about channel active/inactive state. - * - * @param redisChannelHandler the channel handler (external connection object) - */ - void setRedisChannelHandler(RedisChannelHandler redisChannelHandler); - - /** - * Disable or enable auto-flush behavior. Default is {@literal true}. If autoFlushCommands is disabled, multiple commands - * can be issued without writing them actually to the transport. Commands are buffered until a {@link #flushCommands()} is - * issued. After calling {@link #flushCommands()} commands are sent to the transport and executed by Redis. - * - * @param autoFlush state of autoFlush. - */ - void setAutoFlushCommands(boolean autoFlush); - - /** - * Flush pending commands. This commands forces a flush on the channel and can be used to buffer ("pipeline") commands to - * achieve batching. No-op if channel is not connected. - */ - void flushCommands(); -} diff --git a/src/main/java/com/lambdaworks/redis/RedisClient.java b/src/main/java/com/lambdaworks/redis/RedisClient.java deleted file mode 100644 index 7564015e89..0000000000 --- a/src/main/java/com/lambdaworks/redis/RedisClient.java +++ /dev/null @@ -1,942 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis; - -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.api.async.RedisAsyncCommands; -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.codec.RedisCodec; -import com.lambdaworks.redis.codec.StringCodec; -import com.lambdaworks.redis.codec.Utf8StringCodec; -import com.lambdaworks.redis.internal.LettuceAssert; -import com.lambdaworks.redis.internal.LettuceFactories; -import com.lambdaworks.redis.protocol.CommandHandler; -import com.lambdaworks.redis.protocol.RedisCommand; -import com.lambdaworks.redis.pubsub.PubSubCommandHandler; -import com.lambdaworks.redis.pubsub.StatefulRedisPubSubConnection; -import com.lambdaworks.redis.pubsub.StatefulRedisPubSubConnectionImpl; -import com.lambdaworks.redis.resource.ClientResources; -import com.lambdaworks.redis.resource.SocketAddressResolver; -import com.lambdaworks.redis.sentinel.StatefulRedisSentinelConnectionImpl; -import com.lambdaworks.redis.sentinel.api.StatefulRedisSentinelConnection; -import com.lambdaworks.redis.sentinel.api.async.RedisSentinelAsyncCommands; - -import java.net.ConnectException; -import java.net.SocketAddress; -import java.util.List; -import java.util.Queue; -import java.util.concurrent.ExecutionException; -import java.util.concurrent.TimeUnit; -import java.util.concurrent.TimeoutException; -import java.util.function.Supplier; - -import static com.lambdaworks.redis.LettuceStrings.isEmpty; -import static com.lambdaworks.redis.LettuceStrings.isNotEmpty; -import static com.lambdaworks.redis.internal.LettuceClassUtils.isPresent; - -/** - * A scalable thread-safe Redis client. Multiple threads may share one connection if they avoid - * blocking and transactional operations such as BLPOP and MULTI/EXEC. {@link RedisClient} is an expensive resource. It holds a - * set of netty's {@link io.netty.channel.EventLoopGroup}'s that consist of up to {@code Number of CPU's * 4} threads. Reuse - * this instance as much as possible. 
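A minimal sketch of the manual batching ("pipelining") behavior described by the setAutoFlushCommands/flushCommands contract above; the URI and batch size are placeholders.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;

import com.lambdaworks.redis.LettuceFutures;
import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.RedisFuture;
import com.lambdaworks.redis.api.StatefulRedisConnection;
import com.lambdaworks.redis.api.async.RedisAsyncCommands;

public class PipeliningSketch {
    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        StatefulRedisConnection<String, String> connection = client.connect();
        RedisAsyncCommands<String, String> async = connection.async();

        connection.setAutoFlushCommands(false);        // buffer commands instead of writing them
        List<RedisFuture<?>> futures = new ArrayList<>();
        for (int i = 0; i < 100; i++) {
            futures.add(async.set("key-" + i, String.valueOf(i)));
        }
        connection.flushCommands();                    // write the whole batch to the transport at once

        LettuceFutures.awaitAll(1, TimeUnit.MINUTES, futures.toArray(new RedisFuture[0]));
        connection.setAutoFlushCommands(true);

        connection.close();
        client.shutdown();
    }
}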
- * - * @author Will Glozer - * @author Mark Paluch - */ -public class RedisClient extends AbstractRedisClient { - - private final static RedisURI EMPTY_URI = new RedisURI(); - private final static boolean POOL_AVAILABLE = isPresent("org.apache.commons.pool2.impl.GenericObjectPool"); - - private final RedisURI redisURI; - - protected RedisClient(ClientResources clientResources, RedisURI redisURI) { - super(clientResources); - - assertNotNull(redisURI); - - this.redisURI = redisURI; - setDefaultTimeout(redisURI.getTimeout(), redisURI.getUnit()); - } - - /** - * Creates a uri-less RedisClient. You can connect to different Redis servers but you must supply a {@link RedisURI} on - * connecting. Methods without having a {@link RedisURI} will fail with a {@link java.lang.IllegalStateException}. - * - * @deprecated Use the factory method {@link #create()} - */ - @Deprecated - public RedisClient() { - this(EMPTY_URI); - } - - /** - * Create a new client that connects to the supplied host on the default port. - * - * @param host Server hostname. - * @deprecated Use the factory method {@link #create(String)} - */ - @Deprecated - public RedisClient(String host) { - this(host, RedisURI.DEFAULT_REDIS_PORT); - } - - /** - * Create a new client that connects to the supplied host and port. Connection attempts and non-blocking commands will - * {@link #setDefaultTimeout timeout} after 60 seconds. - * - * @param host Server hostname. - * @param port Server port. - * @deprecated Use the factory method {@link #create(RedisURI)} - */ - @Deprecated - public RedisClient(String host, int port) { - this(RedisURI.Builder.redis(host, port).build()); - } - - /** - * Create a new client that connects to the supplied host and port. Connection attempts and non-blocking commands will - * {@link #setDefaultTimeout timeout} after 60 seconds. - * - * @param redisURI Redis URI. - * @deprecated Use the factory method {@link #create(RedisURI)} - */ - @Deprecated - public RedisClient(RedisURI redisURI) { - this(null, redisURI); - } - - /** - * Creates a uri-less RedisClient with default {@link ClientResources}. You can connect to different Redis servers but you - * must supply a {@link RedisURI} on connecting. Methods without having a {@link RedisURI} will fail with a - * {@link java.lang.IllegalStateException}. - * - * @return a new instance of {@link RedisClient} - */ - public static RedisClient create() { - return new RedisClient(null, EMPTY_URI); - } - - /** - * Create a new client that connects to the supplied {@link RedisURI uri} with default {@link ClientResources}. You can - * connect to different Redis servers but you must supply a {@link RedisURI} on connecting. - * - * @param redisURI the Redis URI, must not be {@literal null} - * @return a new instance of {@link RedisClient} - */ - public static RedisClient create(RedisURI redisURI) { - assertNotNull(redisURI); - return new RedisClient(null, redisURI); - } - - /** - * Create a new client that connects to the supplied uri with default {@link ClientResources}. You can connect to different - * Redis servers but you must supply a {@link RedisURI} on connecting. - * - * @param uri the Redis URI, must not be {@literal null} - * @return a new instance of {@link RedisClient} - */ - public static RedisClient create(String uri) { - LettuceAssert.notEmpty(uri, "URI must not be empty"); - return new RedisClient(null, RedisURI.create(uri)); - } - - /** - * Creates a uri-less RedisClient with shared {@link ClientResources}. 
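A short sketch of the factory methods above, which supersede the deprecated constructors; the URIs and timeout are placeholders.

import java.util.concurrent.TimeUnit;

import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.RedisURI;

public class ClientCreationSketch {
    public static void main(String[] args) {
        // From a URI string.
        RedisClient fromString = RedisClient.create("redis://localhost:6379/0");

        // From a RedisURI built programmatically.
        RedisClient fromUri = RedisClient.create(
                RedisURI.Builder.redis("localhost", 6379).withTimeout(60, TimeUnit.SECONDS).build());

        fromString.shutdown();
        fromUri.shutdown();
    }
}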
You need to shut down the {@link ClientResources} - * upon shutting down your application. You can connect to different Redis servers but you must supply a {@link RedisURI} on - * connecting. Methods without having a {@link RedisURI} will fail with a {@link java.lang.IllegalStateException}. - * - * @param clientResources the client resources, must not be {@literal null} - * @return a new instance of {@link RedisClient} - */ - public static RedisClient create(ClientResources clientResources) { - assertNotNull(clientResources); - return new RedisClient(clientResources, EMPTY_URI); - } - - /** - * Create a new client that connects to the supplied uri with shared {@link ClientResources}.You need to shut down the - * {@link ClientResources} upon shutting down your application. You can connect to different Redis servers but you must - * supply a {@link RedisURI} on connecting. - * - * @param clientResources the client resources, must not be {@literal null} - * @param uri the Redis URI, must not be {@literal null} - * - * @return a new instance of {@link RedisClient} - */ - public static RedisClient create(ClientResources clientResources, String uri) { - assertNotNull(clientResources); - LettuceAssert.notEmpty(uri, "URI must not be empty"); - return create(clientResources, RedisURI.create(uri)); - } - - /** - * Create a new client that connects to the supplied {@link RedisURI uri} with shared {@link ClientResources}. You need to - * shut down the {@link ClientResources} upon shutting down your application.You can connect to different Redis servers but - * you must supply a {@link RedisURI} on connecting. - * - * @param clientResources the client resources, must not be {@literal null} - * @param redisURI the Redis URI, must not be {@literal null} - * @return a new instance of {@link RedisClient} - */ - public static RedisClient create(ClientResources clientResources, RedisURI redisURI) { - assertNotNull(clientResources); - assertNotNull(redisURI); - return new RedisClient(clientResources, redisURI); - } - - /** - * Creates a connection pool for synchronous connections. 5 max idle connections and 20 max active connections. Please keep - * in mind to free all collections and close the pool once you do not need it anymore. Requires Apache commons-pool2 - * dependency. - * - * @return a new {@link RedisConnectionPool} instance - */ - public RedisConnectionPool> pool() { - return pool(5, 20); - } - - /** - * Creates a connection pool for synchronous connections. Please keep in mind to free all collections and close the pool - * once you do not need it anymore. Requires Apache commons-pool2 dependency. - * - * @param maxIdle max idle connections in pool - * @param maxActive max active connections in pool - * @return a new {@link RedisConnectionPool} instance - */ - public RedisConnectionPool> pool(int maxIdle, int maxActive) { - return pool(newStringStringCodec(), maxIdle, maxActive); - } - - /** - * Creates a connection pool for synchronous connections. Please keep in mind to free all collections and close the pool - * once you do not need it anymore. Requires Apache commons-pool2 dependency. 
- * - * @param codec Use this codec to encode/decode keys and values, must not be {@literal null} - * @param maxIdle max idle connections in pool - * @param maxActive max active connections in pool - * @param Key type - * @param Value type - * @return a new {@link RedisConnectionPool} instance - */ - @SuppressWarnings("unchecked") - public RedisConnectionPool> pool(final RedisCodec codec, int maxIdle, int maxActive) { - - checkPoolDependency(); - checkForRedisURI(); - LettuceAssert.notNull(codec, "RedisCodec must not be null"); - - long maxWait = makeTimeout(); - RedisConnectionPool> pool = new RedisConnectionPool<>( - new RedisConnectionPool.RedisConnectionProvider>() { - @Override - public RedisCommands createConnection() { - return connect(codec, redisURI).sync(); - } - - @Override - @SuppressWarnings("rawtypes") - public Class> getComponentType() { - return (Class) RedisCommands.class; - } - }, maxActive, maxIdle, maxWait); - - pool.addListener(closeableResources::remove); - - closeableResources.add(pool); - - return pool; - } - - protected long makeTimeout() { - return TimeUnit.MILLISECONDS.convert(timeout, unit); - } - - /** - * Creates a connection pool for asynchronous connections. 5 max idle connections and 20 max active connections. Please keep - * in mind to free all collections and close the pool once you do not need it anymore. Requires Apache commons-pool2 - * dependency. - * - * @return a new {@link RedisConnectionPool} instance - */ - public RedisConnectionPool> asyncPool() { - return asyncPool(5, 20); - } - - /** - * Creates a connection pool for asynchronous connections. Please keep in mind to free all collections and close the pool - * once you do not need it anymore. Requires Apache commons-pool2 dependency. - * - * @param maxIdle max idle connections in pool - * @param maxActive max active connections in pool - * @return a new {@link RedisConnectionPool} instance - */ - public RedisConnectionPool> asyncPool(int maxIdle, int maxActive) { - return asyncPool(newStringStringCodec(), maxIdle, maxActive); - } - - /** - * Creates a connection pool for asynchronous connections. Please keep in mind to free all collections and close the pool - * once you do not need it anymore. Requires Apache commons-pool2 dependency. - * - * @param codec Use this codec to encode/decode keys and values, must not be {@literal null} - * @param maxIdle max idle connections in pool - * @param maxActive max active connections in pool - * @param Key type - * @param Value type - * @return a new {@link RedisConnectionPool} instance - */ - public RedisConnectionPool> asyncPool(final RedisCodec codec, int maxIdle, - int maxActive) { - - checkPoolDependency(); - checkForRedisURI(); - LettuceAssert.notNull(codec, "RedisCodec must not be null"); - - long maxWait = makeTimeout(); - RedisConnectionPool> pool = new RedisConnectionPool<>( - new RedisConnectionPool.RedisConnectionProvider>() { - @Override - public RedisAsyncCommands createConnection() { - return connectStandalone(codec, redisURI, defaultTimeout()).async(); - } - - @Override - @SuppressWarnings({ "rawtypes", "unchecked" }) - public Class> getComponentType() { - return (Class) RedisAsyncCommands.class; - } - }, maxActive, maxIdle, maxWait); - - pool.addListener(closeableResources::remove); - - closeableResources.add(pool); - - return pool; - } - - /** - * Open a new connection to a Redis server that treats keys and values as UTF-8 strings. 
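A hedged sketch of the commons-pool2 backed pool above, which hands out command interfaces rather than raw connections; allocateConnection/freeConnection are assumed from the pre-5.0 RedisConnectionPool API, and the URI is a placeholder.

import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.RedisConnectionPool;
import com.lambdaworks.redis.api.sync.RedisCommands;

public class ConnectionPoolSketch {
    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        RedisConnectionPool<RedisCommands<String, String>> pool = client.pool(5, 20);

        RedisCommands<String, String> commands = pool.allocateConnection();
        try {
            commands.set("hello", "world");
        } finally {
            pool.freeConnection(commands);   // return the connection instead of closing it
        }

        pool.close();
        client.shutdown();
    }
}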
- * - * @return A new stateful Redis connection - */ - public StatefulRedisConnection connect() { - return connect(newStringStringCodec()); - } - - /** - * Open a new connection to a Redis server. Use the supplied {@link RedisCodec codec} to encode/decode keys and values. - * - * @param codec Use this codec to encode/decode keys and values, must not be {@literal null} - * @param Key type - * @param Value type - * @return A new stateful Redis connection - */ - public StatefulRedisConnection connect(RedisCodec codec) { - checkForRedisURI(); - return connectStandalone(codec, this.redisURI, defaultTimeout()); - } - - /** - * Open a new connection to a Redis server using the supplied {@link RedisURI} that treats keys and values as UTF-8 strings. - * - * @param redisURI the Redis server to connect to, must not be {@literal null} - * @return A new connection - */ - public StatefulRedisConnection connect(RedisURI redisURI) { - return connectStandalone(newStringStringCodec(), redisURI, Timeout.from(redisURI)); - } - - /** - * Open a new connection to a Redis server using the supplied {@link RedisURI} and the supplied {@link RedisCodec codec} to - * encode/decode keys. - * - * @param codec Use this codec to encode/decode keys and values, must not be {@literal null} - * @param redisURI the Redis server to connect to, must not be {@literal null} - * @param Key type - * @param Value type - * @return A new connection - */ - public StatefulRedisConnection connect(RedisCodec codec, RedisURI redisURI) { - return connectStandalone(codec, redisURI, Timeout.from(redisURI)); - } - - /** - * Open a new asynchronous connection to a Redis server that treats keys and values as UTF-8 strings. - * - * @return A new connection - */ - @Deprecated - public RedisAsyncCommands connectAsync() { - return connect(newStringStringCodec()).async(); - } - - /** - * Open a new asynchronous connection to a Redis server. Use the supplied {@link RedisCodec codec} to encode/decode keys and - * values. - * - * @param codec Use this codec to encode/decode keys and values, must not be {@literal null} - * @param Key type - * @param Value type - * @return A new connection - * @deprecated Use {@code connect(codec).async()} - */ - @Deprecated - public RedisAsyncCommands connectAsync(RedisCodec codec) { - return connectStandalone(codec, redisURI, defaultTimeout()).async(); - } - - /** - * Open a new asynchronous connection to a Redis server using the supplied {@link RedisURI} that treats keys and values as - * UTF-8 strings. - * - * @param redisURI the Redis server to connect to, must not be {@literal null} - * @return A new connection - * @deprecated Use {@code connect(redisURI).async()} - */ - @Deprecated - public RedisAsyncCommands connectAsync(RedisURI redisURI) { - return connectStandalone(newStringStringCodec(), redisURI, Timeout.from(redisURI)).async(); - } - - /** - * Open a new asynchronous connection to a Redis server using the supplied {@link RedisURI} and the supplied - * {@link RedisCodec codec} to encode/decode keys. 
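A minimal sketch of the connect(...) variants above, one with the default UTF-8 codec and one with an explicit codec plus a per-connection RedisURI; the URIs are placeholders.

import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.RedisURI;
import com.lambdaworks.redis.api.StatefulRedisConnection;
import com.lambdaworks.redis.api.sync.RedisCommands;
import com.lambdaworks.redis.codec.StringCodec;

public class ConnectSketch {
    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");

        // Default: UTF-8 String keys and values.
        StatefulRedisConnection<String, String> connection = client.connect();
        RedisCommands<String, String> sync = connection.sync();
        sync.set("greeting", "hello");
        System.out.println(sync.get("greeting"));
        connection.close();

        // Explicit codec and a per-connection RedisURI.
        StatefulRedisConnection<String, String> other = client.connect(StringCodec.UTF8,
                RedisURI.create("redis://localhost:6380"));
        other.close();

        client.shutdown();
    }
}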
- * - * @param codec Use this codec to encode/decode keys and values, must not be {@literal null} - * @param redisURI the Redis server to connect to, must not be {@literal null} - * @param Key type - * @param Value type - * @return A new connection - * @deprecated Use {@code connect(codec, redisURI).async()} - */ - @Deprecated - public RedisAsyncCommands connectAsync(RedisCodec codec, RedisURI redisURI) { - return connectStandalone(codec, redisURI, Timeout.from(redisURI)).async(); - } - - private StatefulRedisConnection connectStandalone(RedisCodec codec, RedisURI redisURI, Timeout timeout) { - - assertNotNull(codec); - checkValidRedisURI(redisURI); - - Queue> queue = LettuceFactories.newConcurrentQueue(); - - CommandHandler handler = new CommandHandler<>(clientOptions, clientResources, queue); - - StatefulRedisConnectionImpl connection = newStatefulRedisConnection(handler, codec, timeout.timeout, - timeout.timeUnit); - connectStateful(handler, connection, redisURI); - return connection; - } - - private void connectStateful(CommandHandler handler, StatefulRedisConnectionImpl connection, - RedisURI redisURI) { - - ConnectionBuilder connectionBuilder; - if (redisURI.isSsl()) { - SslConnectionBuilder sslConnectionBuilder = SslConnectionBuilder.sslConnectionBuilder(); - sslConnectionBuilder.ssl(redisURI); - connectionBuilder = sslConnectionBuilder; - } else { - connectionBuilder = ConnectionBuilder.connectionBuilder(); - } - - connectionBuilder.clientOptions(clientOptions); - connectionBuilder.clientResources(clientResources); - connectionBuilder(handler, connection, getSocketAddressSupplier(redisURI), connectionBuilder, redisURI); - channelType(connectionBuilder, redisURI); - initializeChannel(connectionBuilder); - - if (redisURI.getPassword() != null && redisURI.getPassword().length != 0) { - connection.async().auth(new String(redisURI.getPassword())); - } - - if (redisURI.getDatabase() != 0) { - connection.async().select(redisURI.getDatabase()); - } - - } - - /** - * Open a new pub/sub connection to a Redis server that treats keys and values as UTF-8 strings. - * - * @return A new stateful pub/sub connection - */ - public StatefulRedisPubSubConnection connectPubSub() { - return connectPubSub(newStringStringCodec(), redisURI, defaultTimeout()); - } - - /** - * Open a new pub/sub connection to a Redis server using the supplied {@link RedisURI} that treats keys and values as UTF-8 - * strings. - * - * @param redisURI the Redis server to connect to, must not be {@literal null} - * @return A new stateful pub/sub connection - */ - public StatefulRedisPubSubConnection connectPubSub(RedisURI redisURI) { - return connectPubSub(newStringStringCodec(), redisURI, Timeout.from(redisURI)); - } - - /** - * Open a new pub/sub connection to the Redis server using the supplied {@link RedisURI} and use the supplied - * {@link RedisCodec codec} to encode/decode keys and values. - * - * @param codec Use this codec to encode/decode keys and values, must not be {@literal null} - * @param Key type - * @param Value type - * @return A new stateful pub/sub connection - */ - public StatefulRedisPubSubConnection connectPubSub(RedisCodec codec) { - checkForRedisURI(); - return connectPubSub(codec, redisURI, defaultTimeout()); - } - - /** - * Open a new pub/sub connection to the Redis server using the supplied {@link RedisURI} and use the supplied - * {@link RedisCodec codec} to encode/decode keys and values. 
- * - * @param codec Use this codec to encode/decode keys and values, must not be {@literal null} - * @param redisURI the redis server to connect to, must not be {@literal null} - * @param Key type - * @param Value type - * @return A new connection - */ - public StatefulRedisPubSubConnection connectPubSub(RedisCodec codec, RedisURI redisURI) { - return connectPubSub(codec, redisURI, Timeout.from(redisURI)); - } - - private StatefulRedisPubSubConnection connectPubSub(RedisCodec codec, RedisURI redisURI, - Timeout timeout) { - - assertNotNull(codec); - checkValidRedisURI(redisURI); - - Queue> queue = LettuceFactories.newConcurrentQueue(); - - PubSubCommandHandler handler = new PubSubCommandHandler<>(clientOptions, clientResources, queue, codec); - StatefulRedisPubSubConnectionImpl connection = newStatefulRedisPubSubConnection(handler, codec, timeout.timeout, - timeout.timeUnit); - - connectStateful(handler, connection, redisURI); - - return connection; - } - - /** - * Open a connection to a Redis Sentinel that treats keys and values as UTF-8 strings. - * - * @return A new stateful Redis Sentinel connection - */ - public StatefulRedisSentinelConnection connectSentinel() { - return connectSentinel(newStringStringCodec()); - } - - /** - * Open a connection to a Redis Sentinel that treats keys and use the supplied {@link RedisCodec codec} to encode/decode - * keys and values. The client {@link RedisURI} must contain one or more sentinels. - * - * @param codec Use this codec to encode/decode keys and values, must not be {@literal null} - * @param Key type - * @param Value type - * @return A new stateful Redis Sentinel connection - */ - public StatefulRedisSentinelConnection connectSentinel(RedisCodec codec) { - checkForRedisURI(); - return connectSentinel(codec, redisURI, defaultTimeout()); - } - - /** - * Open a connection to a Redis Sentinel using the supplied {@link RedisURI} that treats keys and values as UTF-8 strings. - * The client {@link RedisURI} must contain one or more sentinels. - * - * @param redisURI the Redis server to connect to, must not be {@literal null} - * @return A new connection - */ - public StatefulRedisSentinelConnection connectSentinel(RedisURI redisURI) { - return connectSentinel(newStringStringCodec(), redisURI, Timeout.from(redisURI)); - } - - /** - * Open a connection to a Redis Sentinel using the supplied {@link RedisURI} and use the supplied {@link RedisCodec codec} - * to encode/decode keys and values. The client {@link RedisURI} must contain one or more sentinels. - * - * @param codec the Redis server to connect to, must not be {@literal null} - * @param redisURI the Redis server to connect to, must not be {@literal null} - * @param Key type - * @param Value type - * @return A new connection - */ - public StatefulRedisSentinelConnection connectSentinel(RedisCodec codec, RedisURI redisURI) { - return connectSentinel(codec, redisURI, Timeout.from(redisURI)); - } - - /** - * Open a new asynchronous connection to a Redis Sentinel that treats keys and values as UTF-8 strings. You must supply a - * valid RedisURI containing one or more sentinels. - * - * @return a new connection - * @deprecated Use {@code connectSentinel().async()} - */ - @Deprecated - public RedisSentinelAsyncCommands connectSentinelAsync() { - return connectSentinel(newStringStringCodec(), redisURI, defaultTimeout()).async(); - } - - /** - * Open a new asynchronous connection to a Redis Sentinela nd use the supplied {@link RedisCodec codec} to encode/decode - * keys and values. 
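A minimal sketch of the pub/sub connection API documented above; the listener, channel name, and sleep are illustrative only.

import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.pubsub.RedisPubSubAdapter;
import com.lambdaworks.redis.pubsub.StatefulRedisPubSubConnection;

public class PubSubSketch {
    public static void main(String[] args) throws InterruptedException {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        StatefulRedisPubSubConnection<String, String> connection = client.connectPubSub();

        connection.addListener(new RedisPubSubAdapter<String, String>() {
            @Override
            public void message(String channel, String message) {
                System.out.printf("%s -> %s%n", channel, message);
            }
        });
        connection.sync().subscribe("news");

        Thread.sleep(10_000);   // keep the demo alive long enough to receive messages

        connection.close();
        client.shutdown();
    }
}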
You must supply a valid RedisURI containing one or more sentinels. - * - * @param codec Use this codec to encode/decode keys and values, must not be {@literal null} - * @param Key type - * @param Value type - * @return a new connection - * @deprecated Use {@code connectSentinel(codec).async()} - */ - @Deprecated - public RedisSentinelAsyncCommands connectSentinelAsync(RedisCodec codec) { - checkForRedisURI(); - return connectSentinel(codec, redisURI, defaultTimeout()).async(); - } - - /** - * Open a new asynchronous connection to a Redis Sentinel using the supplied {@link RedisURI} that treats keys and values as - * UTF-8 strings. You must supply a valid RedisURI containing a redis host or one or more sentinels. - * - * @param redisURI the Redis server to connect to, must not be {@literal null} - * @return A new connection - * @deprecated Use {@code connectSentinel(redisURI).async()} - */ - @Deprecated - public RedisSentinelAsyncCommands connectSentinelAsync(RedisURI redisURI) { - return connectSentinel(newStringStringCodec(), redisURI, Timeout.from(redisURI)).async(); - } - - /** - * Open a new asynchronous connection to a Redis Sentinel using the supplied {@link RedisURI} and use the supplied - * {@link RedisCodec codec} to encode/decode keys and values. You must supply a valid RedisURI containing a redis host or - * one or more sentinels. - * - * @param codec Use this codec to encode/decode keys and values, must not be {@literal null} - * @param redisURI the Redis server to connect to, must not be {@literal null} - * @param Key type - * @param Value type - * @return A new connection - * @deprecated Use {@code connectSentinel(codec, redisURI).async()} - */ - @Deprecated - public RedisSentinelAsyncCommands connectSentinelAsync(RedisCodec codec, RedisURI redisURI) { - return connectSentinel(codec, redisURI, Timeout.from(redisURI)).async(); - } - - private StatefulRedisSentinelConnection connectSentinel(RedisCodec codec, RedisURI redisURI, - Timeout timeout) { - assertNotNull(codec); - checkValidRedisURI(redisURI); - - Queue> queue = LettuceFactories.newConcurrentQueue(); - - ConnectionBuilder connectionBuilder = ConnectionBuilder.connectionBuilder(); - connectionBuilder.clientOptions(ClientOptions.copyOf(getOptions())); - connectionBuilder.clientResources(clientResources); - - final CommandHandler commandHandler = new CommandHandler<>(clientOptions, clientResources, queue); - - StatefulRedisSentinelConnectionImpl connection = newStatefulRedisSentinelConnection(commandHandler, codec, - timeout.timeout, timeout.timeUnit); - - logger.debug("Trying to get a Sentinel connection for one of: " + redisURI.getSentinels()); - - connectionBuilder(commandHandler, connection, getSocketAddressSupplier(redisURI), connectionBuilder, redisURI); - - if (redisURI.getSentinels().isEmpty() && (isNotEmpty(redisURI.getHost()) || !isEmpty(redisURI.getSocket()))) { - channelType(connectionBuilder, redisURI); - initializeChannel(connectionBuilder); - } else { - boolean connected = false; - boolean first = true; - Exception causingException = null; - validateUrisAreOfSameConnectionType(redisURI.getSentinels()); - for (RedisURI uri : redisURI.getSentinels()) { - if (first) { - channelType(connectionBuilder, uri); - first = false; - } - connectionBuilder.socketAddressSupplier(getSocketAddressSupplier(uri)); - - if (logger.isDebugEnabled()) { - SocketAddress socketAddress = SocketAddressResolver.resolve(redisURI, clientResources.dnsResolver()); - logger.debug("Connecting to Sentinel, address: " + socketAddress); - } - 
try { - initializeChannel(connectionBuilder); - connected = true; - break; - } catch (Exception e) { - logger.warn("Cannot connect sentinel at " + uri + ": " + e.toString()); - causingException = e; - if (e instanceof ConnectException) { - continue; - } - } - } - if (!connected) { - throw new RedisConnectionException("Cannot connect to a sentinel: " + redisURI.getSentinels(), - causingException); - } - } - - return connection; - } - - /** - * Create a new instance of {@link StatefulRedisPubSubConnectionImpl} or a subclass. - * - * @param commandHandler the command handler - * @param codec codec - * @param Key-Type - * @param Value Type - * @return new instance of StatefulRedisPubSubConnectionImpl - * @deprecated Use {@link #newStatefulRedisPubSubConnection(PubSubCommandHandler, RedisCodec, long, TimeUnit)} - */ - @Deprecated - protected StatefulRedisPubSubConnectionImpl newStatefulRedisPubSubConnection( - PubSubCommandHandler commandHandler, RedisCodec codec) { - return newStatefulRedisPubSubConnection(commandHandler, codec, timeout, unit); - } - - /** - * Create a new instance of {@link StatefulRedisPubSubConnectionImpl} or a subclass. - * - * @param commandHandler the command handler - * @param codec codec - * @param timeout default timeout - * @param unit default timeout unit - * @param Key-Type - * @param Value Type - * @return new instance of StatefulRedisPubSubConnectionImpl - */ - protected StatefulRedisPubSubConnectionImpl newStatefulRedisPubSubConnection( - PubSubCommandHandler commandHandler, RedisCodec codec, long timeout, TimeUnit unit) { - return new StatefulRedisPubSubConnectionImpl<>(commandHandler, codec, timeout, unit); - } - - /** - * Create a new instance of {@link StatefulRedisSentinelConnectionImpl} or a subclass. - * - * @param commandHandler the command handler - * @param codec codec - * @param Key-Type - * @param Value Type - * @return new instance of StatefulRedisSentinelConnectionImpl - * @deprecated Use {@link #newStatefulRedisSentinelConnection(CommandHandler, RedisCodec, long, TimeUnit)} - */ - @Deprecated - protected StatefulRedisSentinelConnectionImpl newStatefulRedisSentinelConnection( - CommandHandler commandHandler, RedisCodec codec) { - return newStatefulRedisSentinelConnection(commandHandler, codec, timeout, unit); - } - - /** - * Create a new instance of {@link StatefulRedisSentinelConnectionImpl} or a subclass. - * - * @param commandHandler the command handler - * @param codec codec - * @param timeout default timeout - * @param unit default timeout unit - * @param Key-Type - * @param Value Type - * @return new instance of StatefulRedisSentinelConnectionImpl - */ - protected StatefulRedisSentinelConnectionImpl newStatefulRedisSentinelConnection( - CommandHandler commandHandler, RedisCodec codec, long timeout, TimeUnit unit) { - return new StatefulRedisSentinelConnectionImpl<>(commandHandler, codec, timeout, unit); - } - - /** - * Create a new instance of {@link StatefulRedisConnectionImpl} or a subclass. 
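The Sentinel support documented above resolves the master through one of the configured sentinels. A minimal sketch; the sentinel address and master id are placeholders.

import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.RedisURI;
import com.lambdaworks.redis.sentinel.api.StatefulRedisSentinelConnection;

public class SentinelSketch {
    public static void main(String[] args) {
        RedisURI sentinelUri = RedisURI.Builder.sentinel("localhost", 26379, "mymaster").build();
        RedisClient client = RedisClient.create(sentinelUri);

        StatefulRedisSentinelConnection<String, String> sentinel = client.connectSentinel();
        System.out.println(sentinel.sync().getMasterAddrByName("mymaster"));

        sentinel.close();
        client.shutdown();
    }
}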
- * - * @param commandHandler the command handler - * @param codec codec - * @param Key-Type - * @param Value Type - * @return new instance of StatefulRedisConnectionImpl - * @deprecated use {@link #newStatefulRedisConnection(CommandHandler, RedisCodec, long, TimeUnit)} - */ - @Deprecated - protected StatefulRedisConnectionImpl newStatefulRedisConnection(CommandHandler commandHandler, - RedisCodec codec) { - return newStatefulRedisConnection(commandHandler, codec, timeout, unit); - } - - /** - * Create a new instance of {@link StatefulRedisConnectionImpl} or a subclass. - * - * @param commandHandler the command handler - * @param codec codec - * @param timeout default timeout - * @param unit default timeout unit - * @param Key-Type - * @param Value Type - * @return new instance of StatefulRedisConnectionImpl - */ - protected StatefulRedisConnectionImpl newStatefulRedisConnection(CommandHandler commandHandler, - RedisCodec codec, long timeout, TimeUnit unit) { - return new StatefulRedisConnectionImpl<>(commandHandler, codec, timeout, unit); - } - - private void validateUrisAreOfSameConnectionType(List redisUris) { - boolean unixDomainSocket = false; - boolean inetSocket = false; - for (RedisURI sentinel : redisUris) { - if (sentinel.getSocket() != null) { - unixDomainSocket = true; - } - if (sentinel.getHost() != null) { - inetSocket = true; - } - } - - if (unixDomainSocket && inetSocket) { - throw new RedisConnectionException("You cannot mix unix domain socket and IP socket URI's"); - } - } - - private Supplier getSocketAddressSupplier(final RedisURI redisURI) { - return () -> { - try { - SocketAddress socketAddress = getSocketAddress(redisURI); - logger.debug("Resolved SocketAddress {} using {}", socketAddress, redisURI); - return socketAddress; - } catch (InterruptedException e) { - Thread.currentThread().interrupt(); - throw new RedisCommandInterruptedException(e); - } catch (TimeoutException | ExecutionException e) { - throw new RedisException(e); - } - }; - } - - /** - * Returns the {@link ClientResources} which are used with that client. 
- * - * @return the {@link ClientResources} for this client - */ - public ClientResources getResources() { - return clientResources; - } - - protected SocketAddress getSocketAddress(RedisURI redisURI) - throws InterruptedException, TimeoutException, ExecutionException { - SocketAddress redisAddress; - - if (redisURI.getSentinelMasterId() != null && !redisURI.getSentinels().isEmpty()) { - logger.debug("Connecting to Redis using Sentinels {}, MasterId {}", redisURI.getSentinels(), - redisURI.getSentinelMasterId()); - redisAddress = lookupRedis(redisURI); - - if (redisAddress == null) { - throw new RedisConnectionException( - "Cannot provide redisAddress using sentinel for masterId " + redisURI.getSentinelMasterId()); - } - - } else { - redisAddress = SocketAddressResolver.resolve(redisURI, clientResources.dnsResolver()); - } - return redisAddress; - } - - private SocketAddress lookupRedis(RedisURI sentinelUri) throws InterruptedException, TimeoutException, ExecutionException { - RedisSentinelAsyncCommands connection = connectSentinel(sentinelUri).async(); - try { - return connection.getMasterAddrByName(sentinelUri.getSentinelMasterId()).get(timeout, unit); - } finally { - connection.close(); - } - } - - private void checkValidRedisURI(RedisURI redisURI) { - - LettuceAssert.notNull(redisURI, "A valid RedisURI is needed"); - - if (redisURI.getSentinels().isEmpty()) { - if (isEmpty(redisURI.getHost()) && isEmpty(redisURI.getSocket())) { - throw new IllegalArgumentException("RedisURI for Redis Standalone does not contain a host or a socket"); - } - } else { - - if (isEmpty(redisURI.getSentinelMasterId())) { - throw new IllegalArgumentException("TRedisURI for Redis Sentinel requires a masterId"); - } - - for (RedisURI sentinel : redisURI.getSentinels()) { - if (isEmpty(sentinel.getHost()) && isEmpty(sentinel.getSocket())) { - throw new IllegalArgumentException("RedisURI for Redis Sentinel does not contain a host or a socket"); - } - } - } - } - - protected RedisCodec newStringStringCodec() { - return StringCodec.UTF8; - } - - private static void assertNotNull(RedisCodec codec) { - LettuceAssert.notNull(codec, "RedisCodec must not be null"); - } - - private static void assertNotNull(RedisURI redisURI) { - LettuceAssert.notNull(redisURI, "RedisURI must not be null"); - } - - private static void assertNotNull(ClientResources clientResources) { - LettuceAssert.notNull(clientResources, "ClientResources must not be null"); - } - - private void checkForRedisURI() { - LettuceAssert.assertState(this.redisURI != EMPTY_URI, - "RedisURI is not available. Use RedisClient(Host), RedisClient(Host, Port) or RedisClient(RedisURI) to construct your client."); - checkValidRedisURI(this.redisURI); - } - - private void checkPoolDependency() { - LettuceAssert.assertState(POOL_AVAILABLE, - "Cannot use connection pooling without the optional Apache commons-pool2 library on the class path"); - } - - /** - * Set the {@link ClientOptions} for the client. 
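A sketch combining shared ClientResources with setOptions(ClientOptions) as referenced above; the builder options shown (autoReconnect, pingBeforeActivateConnection) are assumed from the 4.x ClientOptions builder, and the URI is a placeholder.

import com.lambdaworks.redis.ClientOptions;
import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.resource.ClientResources;
import com.lambdaworks.redis.resource.DefaultClientResources;

public class ClientConfigurationSketch {
    public static void main(String[] args) {
        ClientResources resources = DefaultClientResources.create();
        RedisClient client = RedisClient.create(resources, "redis://localhost:6379");

        client.setOptions(ClientOptions.builder()
                .autoReconnect(true)
                .pingBeforeActivateConnection(true)
                .build());

        client.shutdown();
        resources.shutdown();   // the application owns shared resources, not the client
    }
}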
- * - * @param clientOptions the new client options - * @throws IllegalArgumentException if {@literal clientOptions} is null - */ - @Override - public void setOptions(ClientOptions clientOptions) { - super.setOptions(clientOptions); - } - - private Timeout defaultTimeout() { - return Timeout.of(timeout, unit); - } - - private static class Timeout { - - final long timeout; - final TimeUnit timeUnit; - - private Timeout(long timeout, TimeUnit timeUnit) { - this.timeout = timeout; - this.timeUnit = timeUnit; - } - - private static Timeout of(long timeout, TimeUnit timeUnit) { - return new Timeout(timeout, timeUnit); - } - - private static Timeout from(RedisURI redisURI) { - - LettuceAssert.notNull(redisURI, "A valid RedisURI is needed"); - return new Timeout(redisURI.getTimeout(), redisURI.getUnit()); - } - } -} diff --git a/src/main/java/com/lambdaworks/redis/RedisClusterAsyncConnection.java b/src/main/java/com/lambdaworks/redis/RedisClusterAsyncConnection.java deleted file mode 100644 index 4cc2a4435e..0000000000 --- a/src/main/java/com/lambdaworks/redis/RedisClusterAsyncConnection.java +++ /dev/null @@ -1,266 +0,0 @@ -package com.lambdaworks.redis; - -import java.util.List; -import java.util.concurrent.TimeUnit; - -import com.lambdaworks.redis.cluster.api.async.RedisClusterAsyncCommands; - -/** - * A complete asynchronous and thread-safe cluster Redis API with 400+ Methods. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 3.0 - * @deprecated Use {@link RedisClusterAsyncCommands} - */ -@Deprecated -public interface RedisClusterAsyncConnection extends RedisHashesAsyncConnection, RedisKeysAsyncConnection, - RedisStringsAsyncConnection, RedisListsAsyncConnection, RedisSetsAsyncConnection, - RedisSortedSetsAsyncConnection, RedisScriptingAsyncConnection, RedisServerAsyncConnection, - RedisHLLAsyncConnection, RedisGeoAsyncConnection, BaseRedisAsyncConnection { - - /** - * Set the default timeout for operations. - * - * @param timeout the timeout value - * @param unit the unit of the timeout value - */ - void setTimeout(long timeout, TimeUnit unit); - - /** - * Authenticate to the server. - * - * @param password the password - * @return String simple-string-reply - */ - String auth(String password); - - /** - * Meet another cluster node to include the node into the cluster. The command starts the cluster handshake and returns with - * {@literal OK} when the node was added to the cluster. - * - * @param ip IP address of the host - * @param port port number. - * @return String simple-string-reply - */ - RedisFuture clusterMeet(String ip, int port); - - /** - * Blacklist and remove the cluster node from the cluster. - * - * @param nodeId the node Id - * @return String simple-string-reply - */ - RedisFuture clusterForget(String nodeId); - - /** - * Adds slots to the cluster node. The current node will become the master for the specified slots. - * - * @param slots one or more slots from {@literal 0} to {@literal 16384} - * @return String simple-string-reply - */ - RedisFuture clusterAddSlots(int... slots); - - /** - * Removes slots from the cluster node. - * - * @param slots one or more slots from {@literal 0} to {@literal 16384} - * @return String simple-string-reply - */ - RedisFuture clusterDelSlots(int... slots); - - /** - * Assign a slot to a node. 
The command migrates the specified slot from the current node to the specified node in - * {@code nodeId} - * - * @param slot the slot - * @param nodeId the id of the node that will become the master for the slot - * @return String simple-string-reply - */ - RedisFuture clusterSetSlotNode(int slot, String nodeId); - - /** - * Clears migrating / importing state from the slot. - * - * @param slot the slot - * @return String simple-string-reply - */ - RedisFuture clusterSetSlotStable(int slot); - - /** - * Flag a slot as {@literal MIGRATING} (outgoing) towards the node specified in {@code nodeId}. The slot must be handled by - * the current node in order to be migrated. - * - * @param slot the slot - * @param nodeId the id of the node is targeted to become the master for the slot - * @return String simple-string-reply - */ - RedisFuture clusterSetSlotMigrating(int slot, String nodeId); - - /** - * Flag a slot as {@literal IMPORTING} (incoming) from the node specified in {@code nodeId}. - * - * @param slot the slot - * @param nodeId the id of the node is the master of the slot - * @return String simple-string-reply - */ - RedisFuture clusterSetSlotImporting(int slot, String nodeId); - - /** - * Get information and statistics about the cluster viewed by the current node. - * - * @return String bulk-string-reply as a collection of text lines. - */ - RedisFuture clusterInfo(); - - /** - * Obtain the nodeId for the currently connected node. - * - * @return String simple-string-reply - */ - RedisFuture clusterMyId(); - - /** - * Obtain details about all cluster nodes. Can be parsed using - * {@link com.lambdaworks.redis.cluster.models.partitions.ClusterPartitionParser#parse} - * - * @return String bulk-string-reply as a collection of text lines - */ - RedisFuture clusterNodes(); - - /** - * List slaves for a certain node identified by its {@code nodeId}. Can be parsed using - * {@link com.lambdaworks.redis.cluster.models.partitions.ClusterPartitionParser#parse} - * - * @param nodeId node id of the master node - * @return List<String> array-reply list of slaves. The command returns data in the same format as - * {@link #clusterNodes()} but one line per slave. - */ - RedisFuture> clusterSlaves(String nodeId); - - /** - * Retrieve the list of keys within the {@code slot}. - * - * @param slot the slot - * @param count maximal number of keys - * @return List<K> array-reply list of keys - */ - RedisFuture> clusterGetKeysInSlot(int slot, int count); - - /** - * Returns the number of keys in the specified Redis Cluster hash {@code slot}. - * - * @param slot the slot - * @return Integer reply: The number of keys in the specified hash slot, or an error if the hash slot is invalid. - */ - RedisFuture clusterCountKeysInSlot(int slot); - - /** - * Returns the number of failure reports for the specified node. Failure reports are the way Redis Cluster uses in order to - * promote a {@literal PFAIL} state, that means a node is not reachable, to a {@literal FAIL} state, that means that the - * majority of masters in the cluster agreed within a window of time that the node is not reachable. - * - * @param nodeId the node id - * @return Integer reply: The number of active failure reports for the node. - */ - RedisFuture clusterCountFailureReports(String nodeId); - - /** - * Returns an integer identifying the hash slot the specified key hashes to. This command is mainly useful for debugging and - * testing, since it exposes via an API the underlying Redis implementation of the hashing algorithm. 
Basically the same as - * {@link com.lambdaworks.redis.cluster.SlotHash#getSlot(byte[])}. If not, call Houston and report that we've got a problem. - * - * @param key the key. - * @return Integer reply: The hash slot number. - */ - RedisFuture clusterKeyslot(K key); - - /** - * Forces a node to save the nodes.conf configuration on disk. - * - * @return String simple-string-reply: {@code OK} or an error if the operation fails. - */ - RedisFuture clusterSaveconfig(); - - /** - * This command sets a specific config epoch in a fresh node. It only works when: - *
- * <ul>
- * <li>The nodes table of the node is empty.</li>
- * <li>The node current config epoch is zero.</li>
- * </ul>
- * - * @param configEpoch the config epoch - * @return String simple-string-reply: {@code OK} or an error if the operation fails. - */ - RedisFuture clusterSetConfigEpoch(long configEpoch); - - /** - * Get array of cluster slots to node mappings. - * - * @return RedisFuture<List<Object>> array-reply nested list of slot ranges with IP/Port mappings. - */ - RedisFuture> clusterSlots(); - - /** - * - * @return String simple-string-reply - */ - RedisFuture asking(); - - /** - * Turn this node into a slave of the node with the id {@code nodeId}. - * - * @param nodeId master node id - * @return String simple-string-reply - */ - RedisFuture clusterReplicate(String nodeId); - - /** - * Failover a cluster node. Turns the currently connected node into a master and the master into its slave. - * - * @param force do not coordinate with master if {@literal true} - * @return String simple-string-reply - */ - RedisFuture clusterFailover(boolean force); - - /** - * Reset a node performing a soft or hard reset: - *
- * <ul>
- * <li>All other nodes are forgotten</li>
- * <li>All the assigned / open slots are released</li>
- * <li>If the node is a slave, it turns into a master</li>
- * <li>Only for hard reset: a new Node ID is generated</li>
- * <li>Only for hard reset: currentEpoch and configEpoch are set to 0</li>
- * <li>The new configuration is saved and the cluster state updated</li>
- * <li>If the node was a slave, the whole data set is flushed away</li>
- * </ul>
- * - * @param hard {@literal true} for hard reset. Generates a new nodeId and currentEpoch/configEpoch are set to 0 - * @return String simple-string-reply - */ - RedisFuture clusterReset(boolean hard); - - /** - * Delete all the slots associated with the specified node. The number of deleted slots is returned. - * - * @return String simple-string-reply - */ - RedisFuture clusterFlushslots(); - - /** - * Tells a Redis cluster slave node that the client is ok reading possibly stale data and is not interested in running write - * queries. - * - * @return String simple-string-reply - */ - RedisFuture readOnly(); - - /** - * Resets readOnly flag. - * - * @return String simple-string-reply - */ - RedisFuture readWrite(); - -} diff --git a/src/main/java/com/lambdaworks/redis/RedisClusterConnection.java b/src/main/java/com/lambdaworks/redis/RedisClusterConnection.java deleted file mode 100644 index 6ddc5654f6..0000000000 --- a/src/main/java/com/lambdaworks/redis/RedisClusterConnection.java +++ /dev/null @@ -1,268 +0,0 @@ -package com.lambdaworks.redis; - -import java.util.List; -import java.util.concurrent.TimeUnit; - -import com.lambdaworks.redis.api.sync.BaseRedisCommands; - -/** - * A complete synchronous and thread-safe cluster Redis API with 400+ Methods. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 3.0 - * @deprecated Use {@link com.lambdaworks.redis.cluster.api.sync.RedisClusterCommands} - */ -@Deprecated -public interface RedisClusterConnection extends RedisHashesConnection, RedisKeysConnection, - RedisStringsConnection, RedisListsConnection, RedisSetsConnection, RedisSortedSetsConnection, - RedisScriptingConnection, RedisServerConnection, RedisHLLConnection, RedisGeoConnection, - BaseRedisConnection, AutoCloseable { - - /** - * Set the default timeout for operations. - * - * @param timeout the timeout value - * @param unit the unit of the timeout value - */ - void setTimeout(long timeout, TimeUnit unit); - - /** - * Authenticate to the server. - * - * @param password the password - * @return String simple-string-reply - */ - String auth(String password); - - /** - * Meet another cluster node to include the node into the cluster. The command starts the cluster handshake and returns with - * {@literal OK} when the node was added to the cluster. - * - * @param ip IP address of the host - * @param port port number. - * @return String simple-string-reply - */ - String clusterMeet(String ip, int port); - - /** - * Blacklist and remove the cluster node from the cluster. - * - * @param nodeId the node Id - * @return String simple-string-reply - */ - String clusterForget(String nodeId); - - /** - * Adds slots to the cluster node. The current node will become the master for the specified slots. - * - * @param slots one or more slots from {@literal 0} to {@literal 16384} - * @return String simple-string-reply - */ - String clusterAddSlots(int... slots); - - /** - * Removes slots from the cluster node. - * - * @param slots one or more slots from {@literal 0} to {@literal 16384} - * @return String simple-string-reply - */ - String clusterDelSlots(int... slots); - - /** - * Assign a slot to a node. The command migrates the specified slot from the current node to the specified node in - * {@code nodeId} - * - * @param slot the slot - * @param nodeId the id of the node that will become the master for the slot - * @return String simple-string-reply - */ - String clusterSetSlotNode(int slot, String nodeId); - - /** - * Clears migrating / importing state from the slot. 
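The readOnly and readWrite methods removed above control whether a single connection may serve possibly stale reads from a replica. A small sketch of how they pair up on a node-specific connection, assuming getConnection(nodeId) is available and replicaNodeId is a placeholder taken from CLUSTER NODES output:

    import com.lambdaworks.redis.cluster.api.sync.RedisAdvancedClusterCommands;
    import com.lambdaworks.redis.cluster.api.sync.RedisClusterCommands;

    final class ReplicaReadSketch {

        // Reads a key from a replica, opting in to possibly stale data only for that read.
        static String readFromReplica(RedisAdvancedClusterCommands<String, String> cluster, String replicaNodeId,
                String key) {

            RedisClusterCommands<String, String> replica = cluster.getConnection(replicaNodeId);

            replica.readOnly(); // allow reads of possibly stale data on this connection
            try {
                return replica.get(key);
            } finally {
                replica.readWrite(); // reset the connection to the default behaviour
            }
        }
    }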
- * - * @param slot the slot - * @return String simple-string-reply - */ - String clusterSetSlotStable(int slot); - - /** - * Flag a slot as {@literal MIGRATING} (outgoing) towards the node specified in {@code nodeId}. The slot must be handled by - * the current node in order to be migrated. - * - * @param slot the slot - * @param nodeId the id of the node is targeted to become the master for the slot - * @return String simple-string-reply - */ - String clusterSetSlotMigrating(int slot, String nodeId); - - /** - * Flag a slot as {@literal IMPORTING} (incoming) from the node specified in {@code nodeId}. - * - * @param slot the slot - * @param nodeId the id of the node is the master of the slot - * @return String simple-string-reply - */ - String clusterSetSlotImporting(int slot, String nodeId); - - /** - * Get information and statistics about the cluster viewed by the current node. - * - * @return String bulk-string-reply as a collection of text lines. - */ - String clusterInfo(); - - /** - * Obtain the nodeId for the currently connected node. - * - * @return String simple-string-reply - */ - String clusterMyId(); - - /** - * Obtain details about all cluster nodes. Can be parsed using - * {@link com.lambdaworks.redis.cluster.models.partitions.ClusterPartitionParser#parse} - * - * @return String bulk-string-reply as a collection of text lines - */ - String clusterNodes(); - - /** - * List slaves for a certain node identified by its {@code nodeId}. Can be parsed using - * {@link com.lambdaworks.redis.cluster.models.partitions.ClusterPartitionParser#parse} - * - * @param nodeId node id of the master node - * @return List<String> array-reply list of slaves. The command returns data in the same format as - * {@link #clusterNodes()} but one line per slave. - */ - List clusterSlaves(String nodeId); - - /** - * Retrieve the list of keys within the {@code slot}. - * - * @param slot the slot - * @param count maximal number of keys - * @return List<K> array-reply list of keys - */ - List clusterGetKeysInSlot(int slot, int count); - - /** - * Returns the number of keys in the specified Redis Cluster hash {@code slot}. - * - * @param slot the slot - * @return Integer reply: The number of keys in the specified hash slot, or an error if the hash slot is invalid. - */ - Long clusterCountKeysInSlot(int slot); - - /** - * Returns the number of failure reports for the specified node. Failure reports are the way Redis Cluster uses in order to - * promote a {@literal PFAIL} state, that means a node is not reachable, to a {@literal FAIL} state, that means that the - * majority of masters in the cluster agreed within a window of time that the node is not reachable. - * - * @param nodeId the node id - * @return Integer reply: The number of active failure reports for the node. - */ - Long clusterCountFailureReports(String nodeId); - - /** - * Returns an integer identifying the hash slot the specified key hashes to. This command is mainly useful for debugging and - * testing, since it exposes via an API the underlying Redis implementation of the hashing algorithm. Basically the same as - * {@link com.lambdaworks.redis.cluster.SlotHash#getSlot(byte[])}. If not, call Houston and report that we've got a problem. - * - * @param key the key. - * @return Integer reply: The hash slot number. - */ - Long clusterKeyslot(K key); - - /** - * Forces a node to save the nodes.conf configuration on disk. - * - * @return String simple-string-reply: {@code OK} or an error if the operation fails. 
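As the clusterKeyslot Javadoc above points out, the slot reported by the server should agree with the client-side SlotHash calculation. A quick sketch of that check, assuming an already connected RedisClusterCommands instance:

    import java.nio.charset.StandardCharsets;

    import com.lambdaworks.redis.cluster.SlotHash;
    import com.lambdaworks.redis.cluster.api.sync.RedisClusterCommands;

    final class SlotCheckSketch {

        // Compares the slot reported by CLUSTER KEYSLOT with the slot computed locally.
        static void checkSlot(RedisClusterCommands<String, String> commands, String key) {
            long serverSlot = commands.clusterKeyslot(key);
            int localSlot = SlotHash.getSlot(key.getBytes(StandardCharsets.UTF_8));
            System.out.printf("%s -> server: %d, local: %d%n", key, serverSlot, localSlot);
        }
    }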
- */ - String clusterSaveconfig(); - - /** - * This command sets a specific config epoch in a fresh node. It only works when: - *
- * <ul>
- * <li>The nodes table of the node is empty.</li>
- * <li>The node current config epoch is zero.</li>
- * </ul>
- * - * @param configEpoch the config epoch - * @return String simple-string-reply: {@code OK} or an error if the operation fails. - */ - String clusterSetConfigEpoch(long configEpoch); - - /** - * Get array of cluster slots to node mappings. - * - * @return List<Object> array-reply nested list of slot ranges with IP/Port mappings. - */ - List clusterSlots(); - - /** - * The asking command is required after a {@code -ASK} redirection. The client should issue {@code ASKING} before to - * actually send the command to the target instance. See the Redis Cluster specification for more information. - * - * @return String simple-string-reply - */ - String asking(); - - /** - * Turn this node into a slave of the node with the id {@code nodeId}. - * - * @param nodeId master node id - * @return String simple-string-reply - */ - String clusterReplicate(String nodeId); - - /** - * Failover a cluster node. Turns the currently connected node into a master and the master into its slave. - * - * @param force do not coordinate with master if {@literal true} - * @return String simple-string-reply - */ - String clusterFailover(boolean force); - - /** - * Reset a node performing a soft or hard reset: - *
- * <ul>
- * <li>All other nodes are forgotten</li>
- * <li>All the assigned / open slots are released</li>
- * <li>If the node is a slave, it turns into a master</li>
- * <li>Only for hard reset: a new Node ID is generated</li>
- * <li>Only for hard reset: currentEpoch and configEpoch are set to 0</li>
- * <li>The new configuration is saved and the cluster state updated</li>
- * <li>If the node was a slave, the whole data set is flushed away</li>
- * </ul>
- * - * @param hard {@literal true} for hard reset. Generates a new nodeId and currentEpoch/configEpoch are set to 0 - * @return String simple-string-reply - */ - String clusterReset(boolean hard); - - /** - * Delete all the slots associated with the specified node. The number of deleted slots is returned. - * - * @return String simple-string-reply - */ - String clusterFlushslots(); - - /** - * Tells a Redis cluster slave node that the client is ok reading possibly stale data and is not interested in running write - * queries. - * - * @return String simple-string-reply - */ - String readOnly(); - - /** - * Resets readOnly flag. - * - * @return String simple-string-reply - */ - String readWrite(); - -} diff --git a/src/main/java/com/lambdaworks/redis/RedisCommandBuilder.java b/src/main/java/com/lambdaworks/redis/RedisCommandBuilder.java deleted file mode 100644 index 10fd532db5..0000000000 --- a/src/main/java/com/lambdaworks/redis/RedisCommandBuilder.java +++ /dev/null @@ -1,2543 +0,0 @@ -package com.lambdaworks.redis; - -import static com.lambdaworks.redis.LettuceStrings.string; -import static com.lambdaworks.redis.protocol.CommandKeyword.*; -import static com.lambdaworks.redis.protocol.CommandType.*; - -import java.util.Date; -import java.util.List; -import java.util.Map; -import java.util.Set; - -import com.lambdaworks.redis.codec.RedisCodec; -import com.lambdaworks.redis.internal.LettuceAssert; -import com.lambdaworks.redis.output.*; -import com.lambdaworks.redis.protocol.BaseRedisCommandBuilder; -import com.lambdaworks.redis.protocol.Command; -import com.lambdaworks.redis.protocol.CommandArgs; -import com.lambdaworks.redis.protocol.RedisCommand; - -/** - * @param - * @param - * @author Mark Paluch - */ -@SuppressWarnings({"unchecked", "Convert2Diamond", "WeakerAccess", "varargs"}) -class RedisCommandBuilder extends BaseRedisCommandBuilder { - - static final String MUST_NOT_CONTAIN_NULL_ELEMENTS = "must not contain null elements"; - static final String MUST_NOT_BE_EMPTY = "must not be empty"; - static final String MUST_NOT_BE_NULL = "must not be null"; - - public RedisCommandBuilder(RedisCodec codec) { - super(codec); - } - - public Command append(K key, V value) { - notNullKey(key); - - return createCommand(APPEND, new IntegerOutput(codec), key, value); - } - - public Command auth(String password) { - LettuceAssert.notNull(password, "Password " + MUST_NOT_BE_NULL); - LettuceAssert.notEmpty(password, "Password " + MUST_NOT_BE_EMPTY); - - CommandArgs args = new CommandArgs(codec).add(password); - return createCommand(AUTH, new StatusOutput(codec), args); - } - - public Command bgrewriteaof() { - return createCommand(BGREWRITEAOF, new StatusOutput(codec)); - } - - public Command bgsave() { - return createCommand(BGSAVE, new StatusOutput(codec)); - } - - public Command bitcount(K key) { - notNullKey(key); - - CommandArgs args = new CommandArgs(codec).addKey(key); - return createCommand(BITCOUNT, new IntegerOutput(codec), args); - } - - public Command bitcount(K key, long start, long end) { - notNullKey(key); - - CommandArgs args = new CommandArgs(codec); - args.addKey(key).add(start).add(end); - return createCommand(BITCOUNT, new IntegerOutput(codec), args); - } - - public Command> bitfield(K key, BitFieldArgs bitFieldArgs) { - notNullKey(key); - LettuceAssert.notNull(bitFieldArgs, "BitFieldArgs must not be null"); - - CommandArgs args = new CommandArgs(codec); - args.addKey(key); - - bitFieldArgs.build(args); - - return createCommand(BITFIELD, (CommandOutput) new ArrayOutput(codec), 
args); - } - - public Command bitpos(K key, boolean state) { - notNullKey(key); - - CommandArgs args = new CommandArgs(codec); - args.addKey(key).add(state ? 1 : 0); - return createCommand(BITPOS, new IntegerOutput(codec), args); - } - - public Command bitpos(K key, boolean state, long start, long end) { - notNullKey(key); - - CommandArgs args = new CommandArgs(codec); - args.addKey(key).add(state ? 1 : 0).add(start).add(end); - return createCommand(BITPOS, new IntegerOutput(codec), args); - } - - public Command bitopAnd(K destination, K... keys) { - LettuceAssert.notNull(destination, "Destination " + MUST_NOT_BE_NULL); - notEmpty(keys); - - CommandArgs args = new CommandArgs(codec); - args.add(AND).addKey(destination).addKeys(keys); - return createCommand(BITOP, new IntegerOutput(codec), args); - } - - public Command bitopNot(K destination, K source) { - LettuceAssert.notNull(destination, "Destination " + MUST_NOT_BE_NULL); - LettuceAssert.notNull(source, "Source " + MUST_NOT_BE_NULL); - - CommandArgs args = new CommandArgs(codec); - args.add(NOT).addKey(destination).addKey(source); - return createCommand(BITOP, new IntegerOutput(codec), args); - } - - public Command bitopOr(K destination, K... keys) { - LettuceAssert.notNull(destination, "Destination " + MUST_NOT_BE_NULL); - notEmpty(keys); - - CommandArgs args = new CommandArgs(codec); - args.add(OR).addKey(destination).addKeys(keys); - return createCommand(BITOP, new IntegerOutput(codec), args); - } - - public Command bitopXor(K destination, K... keys) { - LettuceAssert.notNull(destination, "Destination " + MUST_NOT_BE_NULL); - notEmpty(keys); - - CommandArgs args = new CommandArgs(codec); - args.add(XOR).addKey(destination).addKeys(keys); - return createCommand(BITOP, new IntegerOutput(codec), args); - } - - public Command> blpop(long timeout, K... keys) { - notEmpty(keys); - - CommandArgs args = new CommandArgs(codec).addKeys(keys).add(timeout); - return createCommand(BLPOP, new KeyValueOutput(codec), args); - } - - public Command> brpop(long timeout, K... 
keys) { - notEmpty(keys); - - CommandArgs args = new CommandArgs(codec).addKeys(keys).add(timeout); - return createCommand(BRPOP, new KeyValueOutput(codec), args); - } - - public Command brpoplpush(long timeout, K source, K destination) { - LettuceAssert.notNull(source, "Source " + MUST_NOT_BE_NULL); - LettuceAssert.notNull(destination, "Destination " + MUST_NOT_BE_NULL); - - CommandArgs args = new CommandArgs(codec); - args.addKey(source).addKey(destination).add(timeout); - return createCommand(BRPOPLPUSH, new ValueOutput(codec), args); - } - - public Command clientGetname() { - CommandArgs args = new CommandArgs(codec).add(GETNAME); - return createCommand(CLIENT, new KeyOutput(codec), args); - } - - public Command clientSetname(K name) { - LettuceAssert.notNull(name, "Name " + MUST_NOT_BE_NULL); - - CommandArgs args = new CommandArgs(codec).add(SETNAME).addKey(name); - return createCommand(CLIENT, new StatusOutput(codec), args); - } - - public Command clientKill(String addr) { - LettuceAssert.notNull(addr, "Addr " + MUST_NOT_BE_NULL); - LettuceAssert.notEmpty(addr, "Addr " + MUST_NOT_BE_EMPTY); - - CommandArgs args = new CommandArgs(codec).add(KILL).add(addr); - return createCommand(CLIENT, new StatusOutput(codec), args); - } - - public Command clientKill(KillArgs killArgs) { - LettuceAssert.notNull(killArgs, "KillArgs " + MUST_NOT_BE_NULL); - - CommandArgs args = new CommandArgs(codec).add(KILL); - killArgs.build(args); - return createCommand(CLIENT, new IntegerOutput(codec), args); - } - - public Command clientPause(long timeout) { - CommandArgs args = new CommandArgs(codec).add(PAUSE).add(timeout); - return createCommand(CLIENT, new StatusOutput(codec), args); - } - - public Command clientList() { - CommandArgs args = new CommandArgs(codec).add(LIST); - return createCommand(CLIENT, new StatusOutput(codec), args); - } - - public Command> command() { - CommandArgs args = new CommandArgs(codec); - return createCommand(COMMAND, new ArrayOutput(codec), args); - } - - public Command> commandInfo(String... 
commands) { - LettuceAssert.notNull(commands, "Commands " + MUST_NOT_BE_NULL); - LettuceAssert.notEmpty(commands, "Commands " + MUST_NOT_BE_EMPTY); - LettuceAssert.noNullElements(commands, "Commands " + MUST_NOT_CONTAIN_NULL_ELEMENTS); - - CommandArgs args = new CommandArgs(codec); - args.add(INFO); - - for (String command : commands) { - args.add(command); - } - - return createCommand(COMMAND, new ArrayOutput(codec), args); - } - - public Command commandCount() { - CommandArgs args = new CommandArgs(codec).add(COUNT); - return createCommand(COMMAND, new IntegerOutput(codec), args); - } - - public Command configRewrite() { - CommandArgs args = new CommandArgs(codec).add(REWRITE); - return createCommand(CONFIG, new StatusOutput(codec), args); - } - - public Command> configGet(String parameter) { - LettuceAssert.notNull(parameter, "Parameter " + MUST_NOT_BE_NULL); - LettuceAssert.notEmpty(parameter, "Parameter " + MUST_NOT_BE_EMPTY); - - CommandArgs args = new CommandArgs(codec).add(GET).add(parameter); - return createCommand(CONFIG, new StringListOutput(codec), args); - } - - public Command configResetstat() { - CommandArgs args = new CommandArgs(codec).add(RESETSTAT); - return createCommand(CONFIG, new StatusOutput(codec), args); - } - - public Command configSet(String parameter, String value) { - LettuceAssert.notNull(parameter, "Parameter " + MUST_NOT_BE_NULL); - LettuceAssert.notEmpty(parameter, "Parameter " + MUST_NOT_BE_EMPTY); - LettuceAssert.notNull(value, "Value " + MUST_NOT_BE_NULL); - - CommandArgs args = new CommandArgs(codec).add(SET).add(parameter).add(value); - return createCommand(CONFIG, new StatusOutput(codec), args); - } - - public Command dbsize() { - return createCommand(DBSIZE, new IntegerOutput(codec)); - } - - public Command debugCrashAndRecover(Long delay) { - CommandArgs args = new CommandArgs(codec).add("CRASH-AND-RECOVER"); - if (delay != null) { - args.add(delay); - } - return createCommand(DEBUG, new StatusOutput(codec), args); - } - - public Command debugObject(K key) { - notNullKey(key); - - CommandArgs args = new CommandArgs(codec).add(OBJECT).addKey(key); - return createCommand(DEBUG, new StatusOutput(codec), args); - } - - public Command debugOom() { - return createCommand(DEBUG, null, new CommandArgs(codec).add("OOM")); - } - - public Command debugHtstats(int db) { - CommandArgs args = new CommandArgs(codec).add(HTSTATS).add(db); - return createCommand(DEBUG, new StatusOutput(codec), args); - } - - public Command debugReload() { - return createCommand(DEBUG, new StatusOutput(codec), new CommandArgs(codec).add(RELOAD)); - } - - public Command debugRestart(Long delay) { - CommandArgs args = new CommandArgs(codec).add(RESTART); - if (delay != null) { - args.add(delay); - } - return createCommand(DEBUG, new StatusOutput(codec), args); - } - - public Command debugSdslen(K key) { - notNullKey(key); - - return createCommand(DEBUG, new StatusOutput(codec), new CommandArgs(codec).add("SDSLEN").addKey(key)); - } - - public Command debugSegfault() { - CommandArgs args = new CommandArgs(codec).add(SEGFAULT); - return createCommand(DEBUG, null, args); - } - - public Command decr(K key) { - notNullKey(key); - - return createCommand(DECR, new IntegerOutput(codec), key); - } - - public Command decrby(K key, long amount) { - notNullKey(key); - - CommandArgs args = new CommandArgs(codec).addKey(key).add(amount); - return createCommand(DECRBY, new IntegerOutput(codec), args); - } - - public Command del(K... 
keys) { - notEmpty(keys); - - CommandArgs args = new CommandArgs(codec).addKeys(keys); - return createCommand(DEL, new IntegerOutput(codec), args); - } - - public Command del(Iterable keys) { - LettuceAssert.notNull(keys, "Keys " + MUST_NOT_BE_NULL); - - CommandArgs args = new CommandArgs(codec).addKeys(keys); - return createCommand(DEL, new IntegerOutput(codec), args); - } - - public Command unlink(K... keys) { - notEmpty(keys); - - CommandArgs args = new CommandArgs(codec).addKeys(keys); - return createCommand(UNLINK, new IntegerOutput(codec), args); - } - - public Command unlink(Iterable keys) { - LettuceAssert.notNull(keys, "Keys " + MUST_NOT_BE_NULL); - - CommandArgs args = new CommandArgs(codec).addKeys(keys); - return createCommand(UNLINK, new IntegerOutput(codec), args); - } - - public Command discard() { - return createCommand(DISCARD, new StatusOutput(codec)); - } - - public Command dump(K key) { - notNullKey(key); - - CommandArgs args = new CommandArgs(codec).addKey(key); - return createCommand(DUMP, new ByteArrayOutput(codec), args); - } - - public Command echo(V msg) { - LettuceAssert.notNull(msg, "message " + MUST_NOT_BE_NULL); - - CommandArgs args = new CommandArgs(codec).addValue(msg); - return createCommand(ECHO, new ValueOutput(codec), args); - } - - public Command eval(String script, ScriptOutputType type, K... keys) { - LettuceAssert.notNull(script, "Script " + MUST_NOT_BE_NULL); - LettuceAssert.notEmpty(script, "Script " + MUST_NOT_BE_EMPTY); - LettuceAssert.notNull(type, "ScriptOutputType " + MUST_NOT_BE_NULL); - LettuceAssert.notNull(keys, "Keys " + MUST_NOT_BE_NULL); - - CommandArgs args = new CommandArgs(codec); - args.add(script).add(keys.length).addKeys(keys); - CommandOutput output = newScriptOutput(codec, type); - return createCommand(EVAL, output, args); - } - - public Command eval(String script, ScriptOutputType type, K[] keys, V... values) { - LettuceAssert.notNull(script, "Script " + MUST_NOT_BE_NULL); - LettuceAssert.notEmpty(script, "Script " + MUST_NOT_BE_EMPTY); - LettuceAssert.notNull(type, "ScriptOutputType " + MUST_NOT_BE_NULL); - LettuceAssert.notNull(keys, "Keys " + MUST_NOT_BE_NULL); - LettuceAssert.notNull(values, "Values " + MUST_NOT_BE_NULL); - - CommandArgs args = new CommandArgs(codec); - args.add(script).add(keys.length).addKeys(keys).addValues(values); - CommandOutput output = newScriptOutput(codec, type); - return createCommand(EVAL, output, args); - } - - public Command evalsha(String digest, ScriptOutputType type, K... keys) { - LettuceAssert.notNull(digest, "Digest " + MUST_NOT_BE_NULL); - LettuceAssert.notEmpty(digest, "Digest " + MUST_NOT_BE_EMPTY); - LettuceAssert.notNull(type, "ScriptOutputType " + MUST_NOT_BE_NULL); - LettuceAssert.notNull(keys, "Keys " + MUST_NOT_BE_NULL); - - CommandArgs args = new CommandArgs(codec); - args.add(digest).add(keys.length).addKeys(keys); - CommandOutput output = newScriptOutput(codec, type); - return createCommand(EVALSHA, output, args); - } - - public Command evalsha(String digest, ScriptOutputType type, K[] keys, V... 
values) { - LettuceAssert.notNull(digest, "Digest " + MUST_NOT_BE_NULL); - LettuceAssert.notEmpty(digest, "Digest " + MUST_NOT_BE_EMPTY); - LettuceAssert.notNull(type, "ScriptOutputType " + MUST_NOT_BE_NULL); - LettuceAssert.notNull(keys, "Keys " + MUST_NOT_BE_NULL); - LettuceAssert.notNull(values, "Values " + MUST_NOT_BE_NULL); - - CommandArgs args = new CommandArgs(codec); - args.add(digest).add(keys.length).addKeys(keys).addValues(values); - CommandOutput output = newScriptOutput(codec, type); - return createCommand(EVALSHA, output, args); - } - - public Command exists(K key) { - notNullKey(key); - - return createCommand(EXISTS, new BooleanOutput(codec), key); - } - - public Command exists(K... keys) { - notEmpty(keys); - - return createCommand(EXISTS, new IntegerOutput(codec), new CommandArgs(codec).addKeys(keys)); - } - - public Command exists(Iterable keys) { - LettuceAssert.notNull(keys, "Keys " + MUST_NOT_BE_NULL); - - return createCommand(EXISTS, new IntegerOutput(codec), new CommandArgs(codec).addKeys(keys)); - } - - public Command expire(K key, long seconds) { - notNullKey(key); - - CommandArgs args = new CommandArgs(codec).addKey(key).add(seconds); - return createCommand(EXPIRE, new BooleanOutput(codec), args); - } - - public Command expireat(K key, long timestamp) { - notNullKey(key); - - CommandArgs args = new CommandArgs(codec).addKey(key).add(timestamp); - return createCommand(EXPIREAT, new BooleanOutput(codec), args); - } - - public Command flushall() { - return createCommand(FLUSHALL, new StatusOutput(codec)); - } - - public Command flushallAsync() { - return createCommand(FLUSHALL, new StatusOutput(codec), new CommandArgs(codec).add(ASYNC)); - } - - public Command flushdb() { - return createCommand(FLUSHDB, new StatusOutput(codec)); - } - - public Command flushdbAsync() { - return createCommand(FLUSHDB, new StatusOutput(codec), new CommandArgs(codec).add(ASYNC)); - } - - public Command get(K key) { - notNullKey(key); - - return createCommand(GET, new ValueOutput(codec), key); - } - - public Command getbit(K key, long offset) { - notNullKey(key); - - CommandArgs args = new CommandArgs(codec).addKey(key).add(offset); - return createCommand(GETBIT, new IntegerOutput(codec), args); - } - - public Command getrange(K key, long start, long end) { - notNullKey(key); - - CommandArgs args = new CommandArgs(codec).addKey(key).add(start).add(end); - return createCommand(GETRANGE, new ValueOutput(codec), args); - } - - public Command getset(K key, V value) { - notNullKey(key); - - return createCommand(GETSET, new ValueOutput(codec), key, value); - } - - public Command hdel(K key, K... 
fields) { - notNullKey(key); - LettuceAssert.notNull(fields, "Fields " + MUST_NOT_BE_NULL); - LettuceAssert.notEmpty(fields, "Fields " + MUST_NOT_BE_EMPTY); - - CommandArgs args = new CommandArgs(codec).addKey(key).addKeys(fields); - return createCommand(HDEL, new IntegerOutput(codec), args); - } - - public Command hexists(K key, K field) { - notNullKey(key); - LettuceAssert.notNull(field, "Field " + MUST_NOT_BE_NULL); - - CommandArgs args = new CommandArgs(codec).addKey(key).addKey(field); - return createCommand(HEXISTS, new BooleanOutput(codec), args); - } - - public Command hget(K key, K field) { - notNullKey(key); - LettuceAssert.notNull(field, "Field " + MUST_NOT_BE_NULL); - - CommandArgs args = new CommandArgs(codec).addKey(key).addKey(field); - return createCommand(HGET, new ValueOutput(codec), args); - } - - public Command hincrby(K key, K field, long amount) { - notNullKey(key); - LettuceAssert.notNull(field, "Field " + MUST_NOT_BE_NULL); - - CommandArgs args = new CommandArgs(codec).addKey(key).addKey(field).add(amount); - return createCommand(HINCRBY, new IntegerOutput(codec), args); - } - - public Command hincrbyfloat(K key, K field, double amount) { - notNullKey(key); - LettuceAssert.notNull(field, "Field " + MUST_NOT_BE_NULL); - - CommandArgs args = new CommandArgs(codec).addKey(key).addKey(field).add(amount); - return createCommand(HINCRBYFLOAT, new DoubleOutput(codec), args); - } - - public Command> hgetall(K key) { - notNullKey(key); - - return createCommand(HGETALL, new MapOutput(codec), key); - } - - public Command hgetall(KeyValueStreamingChannel channel, K key) { - notNullKey(key); - notNull(channel); - - return createCommand(HGETALL, new KeyValueStreamingOutput(codec, channel), key); - } - - public Command> hkeys(K key) { - notNullKey(key); - - return createCommand(HKEYS, new KeyListOutput(codec), key); - } - - public Command hkeys(KeyStreamingChannel channel, K key) { - notNullKey(key); - notNull(channel); - - return createCommand(HKEYS, new KeyStreamingOutput(codec, channel), key); - } - - public Command hlen(K key) { - notNullKey(key); - - return createCommand(HLEN, new IntegerOutput(codec), key); - } - - public Command hstrlen(K key, K field) { - notNullKey(key); - LettuceAssert.notNull(field, "Field " + MUST_NOT_BE_NULL); - - CommandArgs args = new CommandArgs(codec).addKey(key).addKey(field); - return createCommand(HSTRLEN, new IntegerOutput(codec), args); - } - - public Command> hmget(K key, K... fields) { - notNullKey(key); - LettuceAssert.notNull(fields, "Fields " + MUST_NOT_BE_NULL); - LettuceAssert.notEmpty(fields, "Fields " + MUST_NOT_BE_EMPTY); - - CommandArgs args = new CommandArgs(codec).addKey(key).addKeys(fields); - return createCommand(HMGET, new ValueListOutput(codec), args); - } - - public Command hmget(ValueStreamingChannel channel, K key, K... 
fields) { - notNullKey(key); - LettuceAssert.notNull(fields, "Fields " + MUST_NOT_BE_NULL); - LettuceAssert.notEmpty(fields, "Fields " + MUST_NOT_BE_EMPTY); - notNull(channel); - - CommandArgs args = new CommandArgs(codec).addKey(key).addKeys(fields); - return createCommand(HMGET, new ValueStreamingOutput(codec, channel), args); - } - - public Command hmset(K key, Map map) { - notNullKey(key); - LettuceAssert.notNull(map, "Map " + MUST_NOT_BE_NULL); - LettuceAssert.isTrue(!map.isEmpty(), "Map " + MUST_NOT_BE_EMPTY); - - CommandArgs args = new CommandArgs(codec).addKey(key).add(map); - return createCommand(HMSET, new StatusOutput(codec), args); - } - - public Command hset(K key, K field, V value) { - notNullKey(key); - LettuceAssert.notNull(field, "Field " + MUST_NOT_BE_NULL); - - CommandArgs args = new CommandArgs(codec).addKey(key).addKey(field).addValue(value); - return createCommand(HSET, new BooleanOutput(codec), args); - } - - public Command hsetnx(K key, K field, V value) { - notNullKey(key); - LettuceAssert.notNull(field, "Field " + MUST_NOT_BE_NULL); - - CommandArgs args = new CommandArgs(codec).addKey(key).addKey(field).addValue(value); - return createCommand(HSETNX, new BooleanOutput(codec), args); - } - - public Command> hvals(K key) { - notNullKey(key); - - return createCommand(HVALS, new ValueListOutput(codec), key); - } - - public Command hvals(ValueStreamingChannel channel, K key) { - notNullKey(key); - notNull(channel); - - return createCommand(HVALS, new ValueStreamingOutput(codec, channel), key); - } - - public Command incr(K key) { - notNullKey(key); - - return createCommand(INCR, new IntegerOutput(codec), key); - } - - public Command incrby(K key, long amount) { - notNullKey(key); - - CommandArgs args = new CommandArgs(codec).addKey(key).add(amount); - return createCommand(INCRBY, new IntegerOutput(codec), args); - } - - public Command incrbyfloat(K key, double amount) { - notNullKey(key); - - CommandArgs args = new CommandArgs(codec).addKey(key).add(amount); - return createCommand(INCRBYFLOAT, new DoubleOutput(codec), args); - } - - public Command info() { - return createCommand(INFO, new StatusOutput(codec)); - } - - public Command info(String section) { - LettuceAssert.notNull(section, "Section " + MUST_NOT_BE_NULL); - - CommandArgs args = new CommandArgs(codec).add(section); - return createCommand(INFO, new StatusOutput(codec), args); - } - - public Command> keys(K pattern) { - LettuceAssert.notNull(pattern, "Pattern " + MUST_NOT_BE_NULL); - - return createCommand(KEYS, new KeyListOutput(codec), pattern); - } - - public Command keys(KeyStreamingChannel channel, K pattern) { - LettuceAssert.notNull(pattern, "Pattern " + MUST_NOT_BE_NULL); - notNull(channel); - - return createCommand(KEYS, new KeyStreamingOutput(codec, channel), pattern); - } - - public Command lastsave() { - return createCommand(LASTSAVE, new DateOutput(codec)); - } - - public Command lindex(K key, long index) { - notNullKey(key); - - CommandArgs args = new CommandArgs(codec).addKey(key).add(index); - return createCommand(LINDEX, new ValueOutput(codec), args); - } - - public Command linsert(K key, boolean before, V pivot, V value) { - notNullKey(key); - - CommandArgs args = new CommandArgs(codec); - args.addKey(key).add(before ? 
BEFORE : AFTER).addValue(pivot).addValue(value); - return createCommand(LINSERT, new IntegerOutput(codec), args); - } - - public Command llen(K key) { - notNullKey(key); - - return createCommand(LLEN, new IntegerOutput(codec), key); - } - - public Command lpop(K key) { - notNullKey(key); - - return createCommand(LPOP, new ValueOutput(codec), key); - } - - public Command lpush(K key, V... values) { - notNullKey(key); - notEmptyValues(values); - - return createCommand(LPUSH, new IntegerOutput(codec), key, values); - } - - public Command lpushx(K key, V... values) { - notNullKey(key); - notEmptyValues(values); - - CommandArgs args = new CommandArgs(codec).addKey(key).addValues(values); - - return createCommand(LPUSHX, new IntegerOutput(codec), args); - } - - public Command> lrange(K key, long start, long stop) { - notNullKey(key); - - CommandArgs args = new CommandArgs(codec).addKey(key).add(start).add(stop); - return createCommand(LRANGE, new ValueListOutput(codec), args); - } - - public Command lrange(ValueStreamingChannel channel, K key, long start, long stop) { - notNullKey(key); - notNull(channel); - - CommandArgs args = new CommandArgs(codec).addKey(key).add(start).add(stop); - return createCommand(LRANGE, new ValueStreamingOutput(codec, channel), args); - } - - public Command lrem(K key, long count, V value) { - notNullKey(key); - - CommandArgs args = new CommandArgs(codec).addKey(key).add(count).addValue(value); - return createCommand(LREM, new IntegerOutput(codec), args); - } - - public Command lset(K key, long index, V value) { - notNullKey(key); - - CommandArgs args = new CommandArgs(codec).addKey(key).add(index).addValue(value); - return createCommand(LSET, new StatusOutput(codec), args); - } - - public Command ltrim(K key, long start, long stop) { - notNullKey(key); - - CommandArgs args = new CommandArgs(codec).addKey(key).add(start).add(stop); - return createCommand(LTRIM, new StatusOutput(codec), args); - } - - public Command migrate(String host, int port, K key, int db, long timeout) { - LettuceAssert.notNull(host, "Host " + MUST_NOT_BE_NULL); - LettuceAssert.notEmpty(host, "Host " + MUST_NOT_BE_EMPTY); - notNullKey(key); - - CommandArgs args = new CommandArgs(codec); - args.add(host).add(port).addKey(key).add(db).add(timeout); - return createCommand(MIGRATE, new StatusOutput(codec), args); - } - - public Command migrate(String host, int port, int db, long timeout, MigrateArgs migrateArgs) { - LettuceAssert.notNull(host, "Host " + MUST_NOT_BE_NULL); - LettuceAssert.notEmpty(host, "Host " + MUST_NOT_BE_EMPTY); - LettuceAssert.notNull(migrateArgs, "migrateArgs " + MUST_NOT_BE_NULL); - - CommandArgs args = new CommandArgs(codec); - - args.add(host).add(port); - - if (migrateArgs.keys.size() == 1) { - args.addKey(migrateArgs.keys.get(0)); - } else { - args.add(""); - } - - args.add(db).add(timeout); - migrateArgs.build(args); - - return createCommand(MIGRATE, new StatusOutput(codec), args); - } - - public Command> mget(K... keys) { - notEmpty(keys); - - CommandArgs args = new CommandArgs(codec).addKeys(keys); - return createCommand(MGET, new ValueListOutput(codec), args); - } - - public Command> mget(Iterable keys) { - LettuceAssert.notNull(keys, "Keys " + MUST_NOT_BE_NULL); - - CommandArgs args = new CommandArgs(codec).addKeys(keys); - return createCommand(MGET, new ValueListOutput(codec), args); - } - - public Command mget(ValueStreamingChannel channel, K... 
keys) { - notEmpty(keys); - notNull(channel); - - CommandArgs args = new CommandArgs(codec).addKeys(keys); - return createCommand(MGET, new ValueStreamingOutput(codec, channel), args); - } - - public Command mget(ValueStreamingChannel channel, Iterable keys) { - LettuceAssert.notNull(keys, "Keys " + MUST_NOT_BE_NULL); - notNull(channel); - - CommandArgs args = new CommandArgs(codec).addKeys(keys); - return createCommand(MGET, new ValueStreamingOutput(codec, channel), args); - } - - public Command move(K key, int db) { - notNullKey(key); - - CommandArgs args = new CommandArgs(codec).addKey(key).add(db); - return createCommand(MOVE, new BooleanOutput(codec), args); - } - - public Command multi() { - return createCommand(MULTI, new StatusOutput(codec)); - } - - public Command mset(Map map) { - LettuceAssert.notNull(map, "Map " + MUST_NOT_BE_NULL); - LettuceAssert.isTrue(!map.isEmpty(), "Map " + MUST_NOT_BE_EMPTY); - - CommandArgs args = new CommandArgs(codec).add(map); - return createCommand(MSET, new StatusOutput(codec), args); - } - - public Command msetnx(Map map) { - LettuceAssert.notNull(map, "Map " + MUST_NOT_BE_NULL); - LettuceAssert.isTrue(!map.isEmpty(), "Map " + MUST_NOT_BE_EMPTY); - - CommandArgs args = new CommandArgs(codec).add(map); - return createCommand(MSETNX, new BooleanOutput(codec), args); - } - - public Command objectEncoding(K key) { - notNullKey(key); - - CommandArgs args = new CommandArgs(codec).add(ENCODING).addKey(key); - return createCommand(OBJECT, new StatusOutput(codec), args); - } - - public Command objectIdletime(K key) { - notNullKey(key); - - CommandArgs args = new CommandArgs(codec).add(IDLETIME).addKey(key); - return createCommand(OBJECT, new IntegerOutput(codec), args); - } - - public Command objectRefcount(K key) { - notNullKey(key); - - CommandArgs args = new CommandArgs(codec).add(REFCOUNT).addKey(key); - return createCommand(OBJECT, new IntegerOutput(codec), args); - } - - public Command persist(K key) { - notNullKey(key); - - return createCommand(PERSIST, new BooleanOutput(codec), key); - } - - public Command pexpire(K key, long milliseconds) { - notNullKey(key); - - CommandArgs args = new CommandArgs(codec).addKey(key).add(milliseconds); - return createCommand(PEXPIRE, new BooleanOutput(codec), args); - } - - public Command pexpireat(K key, long timestamp) { - notNullKey(key); - - CommandArgs args = new CommandArgs(codec).addKey(key).add(timestamp); - return createCommand(PEXPIREAT, new BooleanOutput(codec), args); - } - - public Command ping() { - return createCommand(PING, new StatusOutput(codec)); - } - - public Command readOnly() { - return createCommand(READONLY, new StatusOutput(codec)); - } - - public Command readWrite() { - return createCommand(READWRITE, new StatusOutput(codec)); - } - - public Command pttl(K key) { - notNullKey(key); - - CommandArgs args = new CommandArgs(codec).addKey(key); - return createCommand(PTTL, new IntegerOutput(codec), args); - } - - public Command publish(K channel, V message) { - LettuceAssert.notNull(channel, "Channel " + MUST_NOT_BE_NULL); - - CommandArgs args = new CommandArgs(codec).addKey(channel).addValue(message); - return createCommand(PUBLISH, new IntegerOutput(codec), args); - } - - public Command> pubsubChannels() { - CommandArgs args = new CommandArgs(codec).add(CHANNELS); - return createCommand(PUBSUB, new KeyListOutput(codec), args); - } - - public Command> pubsubChannels(K pattern) { - LettuceAssert.notNull(pattern, "Pattern " + MUST_NOT_BE_NULL); - - CommandArgs args = new 
CommandArgs(codec).add(CHANNELS).addKey(pattern); - return createCommand(PUBSUB, new KeyListOutput(codec), args); - } - - @SuppressWarnings({ "unchecked", "rawtypes" }) - public Command> pubsubNumsub(K... pattern) { - LettuceAssert.notNull(pattern, "Pattern " + MUST_NOT_BE_NULL); - LettuceAssert.notEmpty(pattern, "Pattern " + MUST_NOT_BE_EMPTY); - - CommandArgs args = new CommandArgs(codec).add(NUMSUB).addKeys(pattern); - return createCommand(PUBSUB, (MapOutput) new MapOutput((RedisCodec) codec), args); - } - - public Command pubsubNumpat() { - CommandArgs args = new CommandArgs(codec).add(NUMPAT); - return createCommand(PUBSUB, new IntegerOutput(codec), args); - } - - public Command quit() { - return createCommand(QUIT, new StatusOutput(codec)); - } - - public Command randomkey() { - return createCommand(RANDOMKEY, new ValueOutput(codec)); - } - - public Command> role() { - return createCommand(ROLE, new ArrayOutput(codec)); - } - - public Command rename(K key, K newKey) { - notNullKey(key); - LettuceAssert.notNull(newKey, "NewKey " + MUST_NOT_BE_NULL); - - CommandArgs args = new CommandArgs(codec).addKey(key).addKey(newKey); - return createCommand(RENAME, new StatusOutput(codec), args); - } - - public Command renamenx(K key, K newKey) { - notNullKey(key); - LettuceAssert.notNull(newKey, "NewKey " + MUST_NOT_BE_NULL); - - CommandArgs args = new CommandArgs(codec).addKey(key).addKey(newKey); - return createCommand(RENAMENX, new BooleanOutput(codec), args); - } - - public Command restore(K key, long ttl, byte[] value) { - notNullKey(key); - LettuceAssert.notNull(value, "Value " + MUST_NOT_BE_NULL); - - CommandArgs args = new CommandArgs(codec).addKey(key).add(ttl).add(value); - return createCommand(RESTORE, new StatusOutput(codec), args); - } - - public Command rpop(K key) { - notNullKey(key); - - return createCommand(RPOP, new ValueOutput(codec), key); - } - - public Command rpoplpush(K source, K destination) { - LettuceAssert.notNull(source, "Source " + MUST_NOT_BE_NULL); - LettuceAssert.notNull(destination, "Destination " + MUST_NOT_BE_NULL); - - CommandArgs args = new CommandArgs(codec).addKey(source).addKey(destination); - return createCommand(RPOPLPUSH, new ValueOutput(codec), args); - } - - public Command rpush(K key, V... values) { - notNullKey(key); - notEmptyValues(values); - - return createCommand(RPUSH, new IntegerOutput(codec), key, values); - } - - public Command rpushx(K key, V... values) { - notNullKey(key); - notEmptyValues(values); - - CommandArgs args = new CommandArgs(codec).addKey(key).addValues(values); - return createCommand(RPUSHX, new IntegerOutput(codec), args); - } - - - public Command sadd(K key, V... members) { - notNullKey(key); - LettuceAssert.notNull(members, "Members " + MUST_NOT_BE_NULL); - LettuceAssert.notEmpty(members, "Members " + MUST_NOT_BE_EMPTY); - - return createCommand(SADD, new IntegerOutput(codec), key, members); - } - - public Command save() { - return createCommand(SAVE, new StatusOutput(codec)); - } - - public Command scard(K key) { - notNullKey(key); - - return createCommand(SCARD, new IntegerOutput(codec), key); - } - - public Command> scriptExists(String... 
digests) { - LettuceAssert.notNull(digests, "Digests " + MUST_NOT_BE_NULL); - LettuceAssert.notEmpty(digests, "Digests " + MUST_NOT_BE_EMPTY); - LettuceAssert.noNullElements(digests, "Digests " + MUST_NOT_CONTAIN_NULL_ELEMENTS); - - CommandArgs args = new CommandArgs(codec).add(EXISTS); - for (String sha : digests) { - args.add(sha); - } - return createCommand(SCRIPT, new BooleanListOutput(codec), args); - } - - public Command scriptFlush() { - CommandArgs args = new CommandArgs(codec).add(FLUSH); - return createCommand(SCRIPT, new StatusOutput(codec), args); - } - - public Command scriptKill() { - CommandArgs args = new CommandArgs(codec).add(KILL); - return createCommand(SCRIPT, new StatusOutput(codec), args); - } - - public Command scriptLoad(V script) { - LettuceAssert.notNull(script, "Script " + MUST_NOT_BE_NULL); - - CommandArgs args = new CommandArgs(codec).add(LOAD).addValue(script); - return createCommand(SCRIPT, new StatusOutput(codec), args); - } - - public Command> sdiff(K... keys) { - notEmpty(keys); - - CommandArgs args = new CommandArgs(codec).addKeys(keys); - return createCommand(SDIFF, new ValueSetOutput(codec), args); - } - - public Command sdiff(ValueStreamingChannel channel, K... keys) { - notEmpty(keys); - notNull(channel); - - CommandArgs args = new CommandArgs(codec).addKeys(keys); - return createCommand(SDIFF, new ValueStreamingOutput(codec, channel), args); - } - - public Command sdiffstore(K destination, K... keys) { - notEmpty(keys); - LettuceAssert.notNull(destination, "Destination " + MUST_NOT_BE_NULL); - - CommandArgs args = new CommandArgs(codec).addKey(destination).addKeys(keys); - return createCommand(SDIFFSTORE, new IntegerOutput(codec), args); - } - - public Command select(int db) { - CommandArgs args = new CommandArgs(codec).add(db); - return createCommand(SELECT, new StatusOutput(codec), args); - } - - public Command set(K key, V value) { - notNullKey(key); - - return createCommand(SET, new StatusOutput(codec), key, value); - } - - public Command set(K key, V value, SetArgs setArgs) { - notNullKey(key); - - CommandArgs args = new CommandArgs(codec).addKey(key).addValue(value); - setArgs.build(args); - return createCommand(SET, new StatusOutput(codec), args); - } - - public Command setbit(K key, long offset, int value) { - notNullKey(key); - - CommandArgs args = new CommandArgs(codec).addKey(key).add(offset).add(value); - return createCommand(SETBIT, new IntegerOutput(codec), args); - } - - public Command setex(K key, long seconds, V value) { - notNullKey(key); - - CommandArgs args = new CommandArgs(codec).addKey(key).add(seconds).addValue(value); - return createCommand(SETEX, new StatusOutput(codec), args); - } - - public Command psetex(K key, long milliseconds, V value) { - notNullKey(key); - - CommandArgs args = new CommandArgs(codec).addKey(key).add(milliseconds).addValue(value); - return createCommand(PSETEX, new StatusOutput(codec), args); - } - - public Command setnx(K key, V value) { - notNullKey(key); - return createCommand(SETNX, new BooleanOutput(codec), key, value); - } - - public Command setrange(K key, long offset, V value) { - notNullKey(key); - - CommandArgs args = new CommandArgs(codec).addKey(key).add(offset).addValue(value); - return createCommand(SETRANGE, new IntegerOutput(codec), args); - } - - @Deprecated - public Command shutdown() { - return createCommand(SHUTDOWN, new StatusOutput(codec)); - } - - public Command shutdown(boolean save) { - CommandArgs args = new CommandArgs(codec); - return createCommand(SHUTDOWN, new 
StatusOutput(codec), save ? args.add(SAVE) : args.add(NOSAVE)); - } - - public Command> sinter(K... keys) { - notEmpty(keys); - - CommandArgs args = new CommandArgs(codec).addKeys(keys); - return createCommand(SINTER, new ValueSetOutput(codec), args); - } - - public Command sinter(ValueStreamingChannel channel, K... keys) { - notEmpty(keys); - notNull(channel); - - CommandArgs args = new CommandArgs(codec).addKeys(keys); - return createCommand(SINTER, new ValueStreamingOutput(codec, channel), args); - } - - public Command sinterstore(K destination, K... keys) { - LettuceAssert.notNull(destination, "Destination " + MUST_NOT_BE_NULL); - notEmpty(keys); - - CommandArgs args = new CommandArgs(codec).addKey(destination).addKeys(keys); - return createCommand(SINTERSTORE, new IntegerOutput(codec), args); - } - - public Command sismember(K key, V member) { - notNullKey(key); - return createCommand(SISMEMBER, new BooleanOutput(codec), key, member); - } - - public Command smove(K source, K destination, V member) { - LettuceAssert.notNull(source, "Source " + MUST_NOT_BE_NULL); - LettuceAssert.notNull(destination, "Destination " + MUST_NOT_BE_NULL); - - CommandArgs args = new CommandArgs(codec).addKey(source).addKey(destination).addValue(member); - return createCommand(SMOVE, new BooleanOutput(codec), args); - } - - public Command slaveof(String host, int port) { - LettuceAssert.notNull(host, "Host " + MUST_NOT_BE_NULL); - LettuceAssert.notEmpty(host, "Host " + MUST_NOT_BE_EMPTY); - - CommandArgs args = new CommandArgs(codec).add(host).add(port); - return createCommand(SLAVEOF, new StatusOutput(codec), args); - } - - public Command slaveofNoOne() { - CommandArgs args = new CommandArgs(codec).add(NO).add(ONE); - return createCommand(SLAVEOF, new StatusOutput(codec), args); - } - - public Command> slowlogGet() { - CommandArgs args = new CommandArgs(codec).add(GET); - return createCommand(SLOWLOG, new NestedMultiOutput(codec), args); - } - - public Command> slowlogGet(int count) { - CommandArgs args = new CommandArgs(codec).add(GET).add(count); - return createCommand(SLOWLOG, new NestedMultiOutput(codec), args); - } - - public Command slowlogLen() { - CommandArgs args = new CommandArgs(codec).add(LEN); - return createCommand(SLOWLOG, new IntegerOutput(codec), args); - } - - public Command slowlogReset() { - CommandArgs args = new CommandArgs(codec).add(RESET); - return createCommand(SLOWLOG, new StatusOutput(codec), args); - } - - public Command> smembers(K key) { - notNullKey(key); - - return createCommand(SMEMBERS, new ValueSetOutput(codec), key); - } - - public Command smembers(ValueStreamingChannel channel, K key) { - notNullKey(key); - notNull(channel); - - return createCommand(SMEMBERS, new ValueStreamingOutput(codec, channel), key); - } - - public Command> sort(K key) { - notNullKey(key); - - return createCommand(SORT, new ValueListOutput(codec), key); - } - - public Command> sort(K key, SortArgs sortArgs) { - notNullKey(key); - LettuceAssert.notNull(sortArgs, "SortArgs " + MUST_NOT_BE_NULL); - - CommandArgs args = new CommandArgs(codec).addKey(key); - sortArgs.build(args, null); - return createCommand(SORT, new ValueListOutput(codec), args); - } - - public Command sort(ValueStreamingChannel channel, K key) { - notNullKey(key); - notNull(channel); - - return createCommand(SORT, new ValueStreamingOutput(codec, channel), key); - } - - public Command sort(ValueStreamingChannel channel, K key, SortArgs sortArgs) { - notNullKey(key); - notNull(channel); - LettuceAssert.notNull(sortArgs, "SortArgs " + 
MUST_NOT_BE_NULL); - - CommandArgs args = new CommandArgs(codec).addKey(key); - sortArgs.build(args, null); - return createCommand(SORT, new ValueStreamingOutput(codec, channel), args); - } - - public Command sortStore(K key, SortArgs sortArgs, K destination) { - notNullKey(key); - LettuceAssert.notNull(destination, "Destination " + MUST_NOT_BE_NULL); - LettuceAssert.notNull(sortArgs, "SortArgs " + MUST_NOT_BE_NULL); - - CommandArgs args = new CommandArgs(codec).addKey(key); - sortArgs.build(args, destination); - return createCommand(SORT, new IntegerOutput(codec), args); - } - - public Command spop(K key) { - notNullKey(key); - - return createCommand(SPOP, new ValueOutput(codec), key); - } - - public Command> spop(K key, long count) { - notNullKey(key); - - CommandArgs args = new CommandArgs(codec).addKey(key).add(count); - return createCommand(SPOP, new ValueSetOutput(codec), args); - } - - public Command srandmember(K key) { - notNullKey(key); - - return createCommand(SRANDMEMBER, new ValueOutput(codec), key); - } - - public Command> srandmember(K key, long count) { - notNullKey(key); - - CommandArgs args = new CommandArgs(codec).addKey(key).add(count); - return createCommand(SRANDMEMBER, new ValueListOutput(codec), args); - } - - public Command srandmember(ValueStreamingChannel channel, K key, long count) { - notNullKey(key); - - CommandArgs args = new CommandArgs(codec).addKey(key).add(count); - return createCommand(SRANDMEMBER, new ValueStreamingOutput(codec, channel), args); - } - - public Command srem(K key, V... members) { - notNullKey(key); - LettuceAssert.notNull(members, "Members " + MUST_NOT_BE_NULL); - LettuceAssert.notEmpty(members, "Members " + MUST_NOT_BE_EMPTY); - - return createCommand(SREM, new IntegerOutput(codec), key, members); - } - - public Command> sunion(K... keys) { - notEmpty(keys); - - CommandArgs args = new CommandArgs(codec).addKeys(keys); - return createCommand(SUNION, new ValueSetOutput(codec), args); - } - - public Command sunion(ValueStreamingChannel channel, K... keys) { - notEmpty(keys); - notNull(channel); - - CommandArgs args = new CommandArgs(codec).addKeys(keys); - return createCommand(SUNION, new ValueStreamingOutput(codec, channel), args); - } - - public Command sunionstore(K destination, K... keys) { - LettuceAssert.notNull(destination, "Destination " + MUST_NOT_BE_NULL); - notEmpty(keys); - - CommandArgs args = new CommandArgs(codec).addKey(destination).addKeys(keys); - return createCommand(SUNIONSTORE, new IntegerOutput(codec), args); - } - - public Command sync() { - return createCommand(SYNC, new StatusOutput(codec)); - } - - public Command strlen(K key) { - notNullKey(key); - - return createCommand(STRLEN, new IntegerOutput(codec), key); - } - - public Command touch(K... keys) { - notEmpty(keys); - - CommandArgs args = new CommandArgs(codec).addKeys(keys); - return createCommand(TOUCH, new IntegerOutput(codec), args); - } - - public Command touch(Iterable keys) { - LettuceAssert.notNull(keys, "Keys " + MUST_NOT_BE_NULL); - - CommandArgs args = new CommandArgs(codec).addKeys(keys); - return createCommand(TOUCH, new IntegerOutput(codec), args); - } - - public Command ttl(K key) { - notNullKey(key); - - return createCommand(TTL, new IntegerOutput(codec), key); - } - - public Command type(K key) { - notNullKey(key); - - return createCommand(TYPE, new StatusOutput(codec), key); - } - - public Command watch(K... 
keys) { - notEmpty(keys); - - CommandArgs args = new CommandArgs(codec).addKeys(keys); - return createCommand(WATCH, new StatusOutput(codec), args); - } - - public Command wait(int replicas, long timeout) { - CommandArgs args = new CommandArgs(codec).add(replicas).add(timeout); - - return createCommand(WAIT, new IntegerOutput(codec), args); - } - - public Command unwatch() { - return createCommand(UNWATCH, new StatusOutput(codec)); - } - - public Command zadd(K key, ZAddArgs zAddArgs, double score, V member) { - notNullKey(key); - - CommandArgs args = new CommandArgs(codec).addKey(key); - - if (zAddArgs != null) { - zAddArgs.build(args); - } - args.add(score).addValue(member); - - return createCommand(ZADD, new IntegerOutput(codec), args); - } - - public Command zaddincr(K key, double score, V member) { - notNullKey(key); - - CommandArgs args = new CommandArgs(codec).addKey(key); - args.add(INCR); - args.add(score).addValue(member); - - return createCommand(ZADD, new DoubleOutput(codec), args); - } - - @SuppressWarnings("unchecked") - public Command zadd(K key, ZAddArgs zAddArgs, Object... scoresAndValues) { - notNullKey(key); - LettuceAssert.notNull(scoresAndValues, "ScoresAndValues " + MUST_NOT_BE_NULL); - LettuceAssert.notEmpty(scoresAndValues, "ScoresAndValues " + MUST_NOT_BE_EMPTY); - LettuceAssert.noNullElements(scoresAndValues, "ScoresAndValues " + MUST_NOT_CONTAIN_NULL_ELEMENTS); - - CommandArgs args = new CommandArgs(codec).addKey(key); - - if (zAddArgs != null) { - zAddArgs.build(args); - } - - if (allElementsInstanceOf(scoresAndValues, ScoredValue.class)) { - - for (Object o : scoresAndValues) { - ScoredValue scoredValue = (ScoredValue) o; - - args.add(scoredValue.score); - args.addValue(scoredValue.value); - } - - } else { - LettuceAssert.isTrue(scoresAndValues.length % 2 == 0, - "ScoresAndValues.length must be a multiple of 2 and contain a " - + "sequence of score1, value1, score2, value2, scoreN,valueN"); - - for (int i = 0; i < scoresAndValues.length; i += 2) { - args.add((Double) scoresAndValues[i]); - args.addValue((V) scoresAndValues[i + 1]); - } - } - - return createCommand(ZADD, new IntegerOutput(codec), args); - } - - private boolean allElementsInstanceOf(Object[] objects, Class expectedAssignableType) { - - for (Object object : objects) { - if (!expectedAssignableType.isAssignableFrom(object.getClass())) { - return false; - } - } - - return true; - } - - public Command zcard(K key) { - notNullKey(key); - - return createCommand(ZCARD, new IntegerOutput(codec), key); - } - - public Command zcount(K key, double min, double max) { - return zcount(key, string(min), string(max)); - } - - public Command zcount(K key, String min, String max) { - notNullKey(key); - notNullMinMax(min, max); - - CommandArgs args = new CommandArgs(codec).addKey(key).add(min).add(max); - return createCommand(ZCOUNT, new IntegerOutput(codec), args); - } - - public Command zincrby(K key, double amount, K member) { - notNullKey(key); - - CommandArgs args = new CommandArgs(codec).addKey(key).add(amount).addKey(member); - return createCommand(ZINCRBY, new DoubleOutput(codec), args); - } - - public Command zinterstore(K destination, K... keys) { - notEmpty(keys); - - return zinterstore(destination, new ZStoreArgs(), keys); - } - - public Command zinterstore(K destination, ZStoreArgs storeArgs, K... 
keys) { - LettuceAssert.notNull(destination, "Destination " + MUST_NOT_BE_NULL); - LettuceAssert.notNull(storeArgs, "ZStoreArgs " + MUST_NOT_BE_NULL); - notEmpty(keys); - - CommandArgs args = new CommandArgs(codec).addKey(destination).add(keys.length).addKeys(keys); - storeArgs.build(args); - return createCommand(ZINTERSTORE, new IntegerOutput(codec), args); - } - - public Command> zrange(K key, long start, long stop) { - notNullKey(key); - - CommandArgs args = new CommandArgs(codec).addKey(key).add(start).add(stop); - return createCommand(ZRANGE, new ValueListOutput(codec), args); - } - - public Command>> zrangeWithScores(K key, long start, long stop) { - notNullKey(key); - - CommandArgs args = new CommandArgs(codec); - args.addKey(key).add(start).add(stop).add(WITHSCORES); - return createCommand(ZRANGE, new ScoredValueListOutput(codec), args); - } - - public Command> zrangebyscore(K key, double min, double max) { - return zrangebyscore(key, string(min), string(max)); - } - - public Command> zrangebyscore(K key, String min, String max) { - notNullKey(key); - notNullMinMax(min, max); - - CommandArgs args = new CommandArgs(codec).addKey(key).add(min).add(max); - return createCommand(ZRANGEBYSCORE, new ValueListOutput(codec), args); - } - - public Command> zrangebyscore(K key, double min, double max, long offset, long count) { - return zrangebyscore(key, string(min), string(max), offset, count); - } - - public Command> zrangebyscore(K key, String min, String max, long offset, long count) { - notNullKey(key); - notNullMinMax(min, max); - - CommandArgs args = new CommandArgs(codec); - args.addKey(key).add(min).add(max).add(LIMIT).add(offset).add(count); - return createCommand(ZRANGEBYSCORE, new ValueListOutput(codec), args); - } - - public Command>> zrangebyscoreWithScores(K key, double min, double max) { - return zrangebyscoreWithScores(key, string(min), string(max)); - } - - public Command>> zrangebyscoreWithScores(K key, String min, String max) { - notNullKey(key); - notNullMinMax(min, max); - - CommandArgs args = new CommandArgs(codec); - args.addKey(key).add(min).add(max).add(WITHSCORES); - return createCommand(ZRANGEBYSCORE, new ScoredValueListOutput(codec), args); - } - - public Command>> zrangebyscoreWithScores(K key, double min, double max, long offset, long count) { - return zrangebyscoreWithScores(key, string(min), string(max), offset, count); - } - - public Command>> zrangebyscoreWithScores(K key, String min, String max, long offset, long count) { - notNullKey(key); - notNullMinMax(min, max); - - CommandArgs args = new CommandArgs(codec); - args.addKey(key).add(min).add(max).add(WITHSCORES).add(LIMIT).add(offset).add(count); - return createCommand(ZRANGEBYSCORE, new ScoredValueListOutput(codec), args); - } - - public Command zrange(ValueStreamingChannel channel, K key, long start, long stop) { - CommandArgs args = new CommandArgs(codec).addKey(key).add(start).add(stop); - return createCommand(ZRANGE, new ValueStreamingOutput(codec, channel), args); - } - - public Command zrangeWithScores(ScoredValueStreamingChannel channel, K key, long start, long stop) { - notNullKey(key); - notNull(channel); - - CommandArgs args = new CommandArgs(codec); - args.addKey(key).add(start).add(stop).add(WITHSCORES); - return createCommand(ZRANGE, new ScoredValueStreamingOutput(codec, channel), args); - } - - public Command zrangebyscore(ValueStreamingChannel channel, K key, double min, double max) { - return zrangebyscore(channel, key, string(min), string(max)); - } - - public Command 
zrangebyscore(ValueStreamingChannel channel, K key, String min, String max) { - notNullKey(key); - notNullMinMax(min, max); - LettuceAssert.notNull(channel, "ScoredValueStreamingChannel " + MUST_NOT_BE_NULL); - - CommandArgs args = new CommandArgs(codec).addKey(key).add(min).add(max); - return createCommand(ZRANGEBYSCORE, new ValueStreamingOutput(codec, channel), args); - } - - public Command zrangebyscore(ValueStreamingChannel channel, K key, double min, double max, long offset, - long count) { - return zrangebyscore(channel, key, string(min), string(max), offset, count); - } - - public Command zrangebyscore(ValueStreamingChannel channel, K key, String min, String max, long offset, - long count) { - notNullKey(key); - notNullMinMax(min, max); - LettuceAssert.notNull(channel, "ScoredValueStreamingChannel " + MUST_NOT_BE_NULL); - - CommandArgs args = new CommandArgs(codec); - args.addKey(key).add(min).add(max).add(LIMIT).add(offset).add(count); - return createCommand(ZRANGEBYSCORE, new ValueStreamingOutput(codec, channel), args); - } - - public Command zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double min, double max) { - return zrangebyscoreWithScores(channel, key, string(min), string(max)); - } - - public Command zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String min, String max) { - notNullKey(key); - notNullMinMax(min, max); - notNull(channel); - - CommandArgs args = new CommandArgs(codec); - args.addKey(key).add(min).add(max).add(WITHSCORES); - return createCommand(ZRANGEBYSCORE, new ScoredValueStreamingOutput(codec, channel), args); - } - - public Command zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double min, double max, - long offset, long count) { - return zrangebyscoreWithScores(channel, key, string(min), string(max), offset, count); - } - - public Command zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String min, String max, - long offset, long count) { - notNullKey(key); - notNullMinMax(min, max); - notNull(channel); - - CommandArgs args = new CommandArgs(codec); - args.addKey(key).add(min).add(max).add(WITHSCORES).add(LIMIT).add(offset).add(count); - return createCommand(ZRANGEBYSCORE, new ScoredValueStreamingOutput(codec, channel), args); - } - - public Command zrank(K key, V member) { - notNullKey(key); - - return createCommand(ZRANK, new IntegerOutput(codec), key, member); - } - - public Command zrem(K key, V... 
members) { - notNullKey(key); - LettuceAssert.notNull(members, "Members " + MUST_NOT_BE_NULL); - LettuceAssert.notEmpty(members, "Members " + MUST_NOT_BE_EMPTY); - - return createCommand(ZREM, new IntegerOutput(codec), key, members); - } - - public Command zremrangebyrank(K key, long start, long stop) { - notNullKey(key); - - CommandArgs args = new CommandArgs(codec).addKey(key).add(start).add(stop); - return createCommand(ZREMRANGEBYRANK, new IntegerOutput(codec), args); - } - - public Command zremrangebyscore(K key, double min, double max) { - return zremrangebyscore(key, string(min), string(max)); - } - - public Command zremrangebyscore(K key, String min, String max) { - notNullKey(key); - notNullMinMax(min, max); - - CommandArgs args = new CommandArgs(codec).addKey(key).add(min).add(max); - return createCommand(ZREMRANGEBYSCORE, new IntegerOutput(codec), args); - } - - public Command> zrevrange(K key, long start, long stop) { - notNullKey(key); - - CommandArgs args = new CommandArgs(codec).addKey(key).add(start).add(stop); - return createCommand(ZREVRANGE, new ValueListOutput(codec), args); - } - - public Command>> zrevrangeWithScores(K key, long start, long stop) { - notNullKey(key); - - CommandArgs args = new CommandArgs(codec); - args.addKey(key).add(start).add(stop).add(WITHSCORES); - return createCommand(ZREVRANGE, new ScoredValueListOutput(codec), args); - } - - public Command> zrevrangebyscore(K key, double max, double min) { - return zrevrangebyscore(key, string(max), string(min)); - } - - public Command> zrevrangebyscore(K key, String max, String min) { - notNullKey(key); - notNullMinMax(min, max); - - CommandArgs args = new CommandArgs(codec).addKey(key).add(max).add(min); - return createCommand(ZREVRANGEBYSCORE, new ValueListOutput(codec), args); - } - - public Command> zrevrangebyscore(K key, double max, double min, long offset, long count) { - return zrevrangebyscore(key, string(max), string(min), offset, count); - } - - public Command> zrevrangebyscore(K key, String max, String min, long offset, long count) { - notNullKey(key); - notNullMinMax(min, max); - - CommandArgs args = new CommandArgs(codec); - args.addKey(key).add(max).add(min).add(LIMIT).add(offset).add(count); - return createCommand(ZREVRANGEBYSCORE, new ValueListOutput(codec), args); - } - - public Command>> zrevrangebyscoreWithScores(K key, double max, double min) { - return zrevrangebyscoreWithScores(key, string(max), string(min)); - } - - public Command>> zrevrangebyscoreWithScores(K key, String max, String min) { - notNullKey(key); - notNullMinMax(min, max); - - CommandArgs args = new CommandArgs(codec); - args.addKey(key).add(max).add(min).add(WITHSCORES); - return createCommand(ZREVRANGEBYSCORE, new ScoredValueListOutput(codec), args); - } - - public Command>> zrevrangebyscoreWithScores(K key, double max, double min, long offset, - long count) { - return zrevrangebyscoreWithScores(key, string(max), string(min), offset, count); - } - - public Command>> zrevrangebyscoreWithScores(K key, String max, String min, long offset, - long count) { - notNullKey(key); - notNullMinMax(min, max); - - CommandArgs args = new CommandArgs(codec); - args.addKey(key).add(max).add(min).add(WITHSCORES).add(LIMIT).add(offset).add(count); - return createCommand(ZREVRANGEBYSCORE, new ScoredValueListOutput(codec), args); - } - - public Command zrevrange(ValueStreamingChannel channel, K key, long start, long stop) { - notNullKey(key); - notNull(channel); - - CommandArgs args = new 
CommandArgs(codec).addKey(key).add(start).add(stop); - return createCommand(ZREVRANGE, new ValueStreamingOutput(codec, channel), args); - } - - public Command zrevrangeWithScores(ScoredValueStreamingChannel channel, K key, long start, long stop) { - notNullKey(key); - LettuceAssert.notNull(channel, "ValueStreamingChannel " + MUST_NOT_BE_NULL); - - CommandArgs args = new CommandArgs(codec); - args.addKey(key).add(start).add(stop).add(WITHSCORES); - return createCommand(ZREVRANGE, new ScoredValueStreamingOutput(codec, channel), args); - } - - public Command zrevrangebyscore(ValueStreamingChannel channel, K key, double max, double min) { - return zrevrangebyscore(channel, key, string(max), string(min)); - } - - public Command zrevrangebyscore(ValueStreamingChannel channel, K key, String max, String min) { - notNullKey(key); - notNullMinMax(min, max); - notNull(channel); - - CommandArgs args = new CommandArgs(codec).addKey(key).add(max).add(min); - return createCommand(ZREVRANGEBYSCORE, new ValueStreamingOutput(codec, channel), args); - } - - public Command zrevrangebyscore(ValueStreamingChannel channel, K key, double max, double min, long offset, - long count) { - return zrevrangebyscore(channel, key, string(max), string(min), offset, count); - } - - public Command zrevrangebyscore(ValueStreamingChannel channel, K key, String max, String min, long offset, - long count) { - notNullKey(key); - notNullMinMax(min, max); - notNull(channel); - - CommandArgs args = new CommandArgs(codec); - args.addKey(key).add(max).add(min).add(LIMIT).add(offset).add(count); - return createCommand(ZREVRANGEBYSCORE, new ValueStreamingOutput(codec, channel), args); - } - - public Command zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double max, - double min) { - return zrevrangebyscoreWithScores(channel, key, string(max), string(min)); - } - - public Command zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String max, - String min) { - notNullKey(key); - notNullMinMax(min, max); - notNull(channel); - - CommandArgs args = new CommandArgs(codec); - args.addKey(key).add(max).add(min).add(WITHSCORES); - return createCommand(ZREVRANGEBYSCORE, new ScoredValueStreamingOutput(codec, channel), args); - } - - public Command zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double max, double min, - long offset, long count) { - notNullKey(key); - LettuceAssert.notNull(min, "Min " + MUST_NOT_BE_NULL); - LettuceAssert.notNull(max, "Max " + MUST_NOT_BE_NULL); - notNull(channel); - return zrevrangebyscoreWithScores(channel, key, string(max), string(min), offset, count); - } - - public Command zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String max, String min, - long offset, long count) { - notNullKey(key); - notNullMinMax(min, max); - notNull(channel); - - CommandArgs args = new CommandArgs(codec); - args.addKey(key).add(max).add(min).add(WITHSCORES).add(LIMIT).add(offset).add(count); - return createCommand(ZREVRANGEBYSCORE, new ScoredValueStreamingOutput(codec, channel), args); - } - - public Command zrevrank(K key, V member) { - notNullKey(key); - - return createCommand(ZREVRANK, new IntegerOutput(codec), key, member); - } - - public Command zscore(K key, V member) { - notNullKey(key); - - return createCommand(ZSCORE, new DoubleOutput(codec), key, member); - } - - public Command zunionstore(K destination, K... 
keys) { - notEmpty(keys); - LettuceAssert.notNull(destination, "Destination " + MUST_NOT_BE_NULL); - - return zunionstore(destination, new ZStoreArgs(), keys); - } - - public Command zunionstore(K destination, ZStoreArgs storeArgs, K... keys) { - notEmpty(keys); - - CommandArgs args = new CommandArgs(codec); - args.addKey(destination).add(keys.length).addKeys(keys); - storeArgs.build(args); - return createCommand(ZUNIONSTORE, new IntegerOutput(codec), args); - } - - public RedisCommand zlexcount(K key, String min, String max) { - notNullKey(key); - notNullMinMax(min, max); - - CommandArgs args = new CommandArgs(codec); - args.addKey(key).add(min).add(max); - return createCommand(ZLEXCOUNT, new IntegerOutput(codec), args); - } - - public RedisCommand zremrangebylex(K key, String min, String max) { - notNullKey(key); - notNullMinMax(min, max); - - CommandArgs args = new CommandArgs(codec); - args.addKey(key).add(min).add(max); - return createCommand(ZREMRANGEBYLEX, new IntegerOutput(codec), args); - } - - public RedisCommand> zrangebylex(K key, String min, String max) { - notNullKey(key); - notNullMinMax(min, max); - - CommandArgs args = new CommandArgs(codec); - args.addKey(key).add(min).add(max); - return createCommand(ZRANGEBYLEX, new ValueListOutput(codec), args); - } - - public RedisCommand> zrangebylex(K key, String min, String max, long offset, long count) { - notNullKey(key); - notNullMinMax(min, max); - - CommandArgs args = new CommandArgs(codec); - args.addKey(key).add(min).add(max).add(LIMIT).add(offset).add(count); - return createCommand(ZRANGEBYLEX, new ValueListOutput(codec), args); - } - - public Command> time() { - CommandArgs args = new CommandArgs(codec); - return createCommand(TIME, new ValueListOutput(codec), args); - } - - public Command> scan() { - return scan(ScanCursor.INITIAL, null); - } - - public Command> scan(ScanCursor scanCursor) { - return scan(scanCursor, null); - } - - public Command> scan(ScanArgs scanArgs) { - return scan(ScanCursor.INITIAL, scanArgs); - } - - public Command> scan(ScanCursor scanCursor, ScanArgs scanArgs) { - CommandArgs args = new CommandArgs(codec); - - scanArgs(scanCursor, scanArgs, args); - - KeyScanOutput output = new KeyScanOutput(codec); - return createCommand(SCAN, output, args); - } - - protected void scanArgs(ScanCursor scanCursor, ScanArgs scanArgs, CommandArgs args) { - LettuceAssert.notNull(scanCursor, "ScanCursor " + MUST_NOT_BE_NULL); - LettuceAssert.isTrue(!scanCursor.isFinished(), "ScanCursor must not be finished"); - - args.add(scanCursor.getCursor()); - - if (scanArgs != null) { - scanArgs.build(args); - } - } - - public Command scanStreaming(KeyStreamingChannel channel) { - notNull(channel); - LettuceAssert.notNull(channel, "KeyStreamingChannel " + MUST_NOT_BE_NULL); - - return scanStreaming(channel, ScanCursor.INITIAL, null); - } - - public Command scanStreaming(KeyStreamingChannel channel, ScanCursor scanCursor) { - notNull(channel); - LettuceAssert.notNull(channel, "KeyStreamingChannel " + MUST_NOT_BE_NULL); - - return scanStreaming(channel, scanCursor, null); - } - - public Command scanStreaming(KeyStreamingChannel channel, ScanArgs scanArgs) { - notNull(channel); - LettuceAssert.notNull(channel, "KeyStreamingChannel " + MUST_NOT_BE_NULL); - - return scanStreaming(channel, ScanCursor.INITIAL, scanArgs); - } - - public Command scanStreaming(KeyStreamingChannel channel, ScanCursor scanCursor, - ScanArgs scanArgs) { - notNull(channel); - LettuceAssert.notNull(channel, "KeyStreamingChannel " + MUST_NOT_BE_NULL); - - 
CommandArgs args = new CommandArgs(codec); - scanArgs(scanCursor, scanArgs, args); - - KeyScanStreamingOutput output = new KeyScanStreamingOutput(codec, channel); - return createCommand(SCAN, output, args); - } - - public Command> sscan(K key) { - notNullKey(key); - - return sscan(key, ScanCursor.INITIAL, null); - } - - public Command> sscan(K key, ScanCursor scanCursor) { - notNullKey(key); - - return sscan(key, scanCursor, null); - } - - public Command> sscan(K key, ScanArgs scanArgs) { - notNullKey(key); - - return sscan(key, ScanCursor.INITIAL, scanArgs); - } - - public Command> sscan(K key, ScanCursor scanCursor, ScanArgs scanArgs) { - notNullKey(key); - - CommandArgs args = new CommandArgs(codec); - args.addKey(key); - - scanArgs(scanCursor, scanArgs, args); - - ValueScanOutput output = new ValueScanOutput(codec); - return createCommand(SSCAN, output, args); - } - - public Command sscanStreaming(ValueStreamingChannel channel, K key) { - notNullKey(key); - notNull(channel); - - return sscanStreaming(channel, key, ScanCursor.INITIAL, null); - } - - public Command sscanStreaming(ValueStreamingChannel channel, K key, ScanCursor scanCursor) { - notNullKey(key); - notNull(channel); - - return sscanStreaming(channel, key, scanCursor, null); - } - - public Command sscanStreaming(ValueStreamingChannel channel, K key, ScanArgs scanArgs) { - notNullKey(key); - notNull(channel); - - return sscanStreaming(channel, key, ScanCursor.INITIAL, scanArgs); - } - - public Command sscanStreaming(ValueStreamingChannel channel, K key, ScanCursor scanCursor, - ScanArgs scanArgs) { - notNullKey(key); - notNull(channel); - - CommandArgs args = new CommandArgs(codec); - - args.addKey(key); - scanArgs(scanCursor, scanArgs, args); - - ValueScanStreamingOutput output = new ValueScanStreamingOutput(codec, channel); - return createCommand(SSCAN, output, args); - } - - public Command> hscan(K key) { - notNullKey(key); - - return hscan(key, ScanCursor.INITIAL, null); - } - - public Command> hscan(K key, ScanCursor scanCursor) { - notNullKey(key); - - return hscan(key, scanCursor, null); - } - - public Command> hscan(K key, ScanArgs scanArgs) { - notNullKey(key); - - return hscan(key, ScanCursor.INITIAL, scanArgs); - } - - public Command> hscan(K key, ScanCursor scanCursor, ScanArgs scanArgs) { - notNullKey(key); - - CommandArgs args = new CommandArgs(codec); - args.addKey(key); - - scanArgs(scanCursor, scanArgs, args); - - MapScanOutput output = new MapScanOutput(codec); - return createCommand(HSCAN, output, args); - } - - public Command hscanStreaming(KeyValueStreamingChannel channel, K key) { - notNullKey(key); - notNull(channel); - - return hscanStreaming(channel, key, ScanCursor.INITIAL, null); - } - - public Command hscanStreaming(KeyValueStreamingChannel channel, K key, - ScanCursor scanCursor) { - notNullKey(key); - notNull(channel); - - return hscanStreaming(channel, key, scanCursor, null); - } - - public Command hscanStreaming(KeyValueStreamingChannel channel, K key, ScanArgs scanArgs) { - notNullKey(key); - notNull(channel); - - return hscanStreaming(channel, key, ScanCursor.INITIAL, scanArgs); - } - - public Command hscanStreaming(KeyValueStreamingChannel channel, K key, ScanCursor scanCursor, - ScanArgs scanArgs) { - notNullKey(key); - notNull(channel); - - CommandArgs args = new CommandArgs(codec); - - args.addKey(key); - scanArgs(scanCursor, scanArgs, args); - - KeyValueScanStreamingOutput output = new KeyValueScanStreamingOutput(codec, channel); - return createCommand(HSCAN, output, args); - } - - 
public Command> zscan(K key) { - notNullKey(key); - - return zscan(key, ScanCursor.INITIAL, null); - } - - public Command> zscan(K key, ScanCursor scanCursor) { - notNullKey(key); - - return zscan(key, scanCursor, null); - } - - public Command> zscan(K key, ScanArgs scanArgs) { - notNullKey(key); - - return zscan(key, ScanCursor.INITIAL, scanArgs); - } - - public Command> zscan(K key, ScanCursor scanCursor, ScanArgs scanArgs) { - notNullKey(key); - - CommandArgs args = new CommandArgs(codec); - args.addKey(key); - - scanArgs(scanCursor, scanArgs, args); - - ScoredValueScanOutput output = new ScoredValueScanOutput(codec); - return createCommand(ZSCAN, output, args); - } - - public Command zscanStreaming(ScoredValueStreamingChannel channel, K key) { - notNullKey(key); - notNull(channel); - - return zscanStreaming(channel, key, ScanCursor.INITIAL, null); - } - - public Command zscanStreaming(ScoredValueStreamingChannel channel, K key, - ScanCursor scanCursor) { - notNullKey(key); - notNull(channel); - - return zscanStreaming(channel, key, scanCursor, null); - } - - public Command zscanStreaming(ScoredValueStreamingChannel channel, K key, ScanArgs scanArgs) { - notNullKey(key); - notNull(channel); - - return zscanStreaming(channel, key, ScanCursor.INITIAL, scanArgs); - } - - public Command zscanStreaming(ScoredValueStreamingChannel channel, K key, ScanCursor scanCursor, - ScanArgs scanArgs) { - notNullKey(key); - notNull(channel); - - CommandArgs args = new CommandArgs(codec); - - args.addKey(key); - scanArgs(scanCursor, scanArgs, args); - - ScoredValueScanStreamingOutput output = new ScoredValueScanStreamingOutput(codec, channel); - return createCommand(ZSCAN, output, args); - } - - public Command pfadd(K key, V value, V... moreValues) { - notNullKey(key); - LettuceAssert.notNull(value, "Value " + MUST_NOT_BE_NULL); - LettuceAssert.notNull(moreValues, "MoreValues " + MUST_NOT_BE_NULL); - LettuceAssert.noNullElements(moreValues, "MoreValues " + MUST_NOT_CONTAIN_NULL_ELEMENTS); - - CommandArgs args = new CommandArgs(codec).addKey(key).addValue(value).addValues(moreValues); - return createCommand(PFADD, new IntegerOutput(codec), args); - } - - public Command pfadd(K key, V... values) { - notNullKey(key); - notEmptyValues(values); - LettuceAssert.noNullElements(values, "Values " + MUST_NOT_CONTAIN_NULL_ELEMENTS); - - CommandArgs args = new CommandArgs(codec).addKey(key).addValues(values); - return createCommand(PFADD, new IntegerOutput(codec), args); - } - - public Command pfcount(K key, K... moreKeys) { - notNullKey(key); - LettuceAssert.notNull(moreKeys, "MoreKeys " + MUST_NOT_BE_NULL); - LettuceAssert.noNullElements(moreKeys, "MoreKeys " + MUST_NOT_CONTAIN_NULL_ELEMENTS); - - CommandArgs args = new CommandArgs(codec).addKey(key).addKeys(moreKeys); - return createCommand(PFCOUNT, new IntegerOutput(codec), args); - } - - public Command pfcount(K... keys) { - notEmpty(keys); - - CommandArgs args = new CommandArgs(codec).addKeys(keys); - return createCommand(PFCOUNT, new IntegerOutput(codec), args); - } - - @SuppressWarnings("unchecked") - public Command pfmerge(K destkey, K sourcekey, K... 
moreSourceKeys) { - LettuceAssert.notNull(destkey, "Destkey " + MUST_NOT_BE_NULL); - LettuceAssert.notNull(sourcekey, "Sourcekey " + MUST_NOT_BE_NULL); - LettuceAssert.notNull(moreSourceKeys, "MoreSourceKeys " + MUST_NOT_BE_NULL); - LettuceAssert.noNullElements(moreSourceKeys, "MoreSourceKeys " + MUST_NOT_CONTAIN_NULL_ELEMENTS); - - CommandArgs args = new CommandArgs(codec).addKeys(destkey).addKey(sourcekey).addKeys(moreSourceKeys); - return createCommand(PFMERGE, new StatusOutput(codec), args); - } - - @SuppressWarnings("unchecked") - public Command pfmerge(K destkey, K... sourcekeys) { - LettuceAssert.notNull(destkey, "Destkey " + MUST_NOT_BE_NULL); - LettuceAssert.notNull(sourcekeys, "Sourcekeys " + MUST_NOT_BE_NULL); - LettuceAssert.notEmpty(sourcekeys, "Sourcekeys " + MUST_NOT_BE_EMPTY); - LettuceAssert.noNullElements(sourcekeys, "Sourcekeys " + MUST_NOT_CONTAIN_NULL_ELEMENTS); - - CommandArgs args = new CommandArgs(codec).addKeys(destkey).addKeys(sourcekeys); - return createCommand(PFMERGE, new StatusOutput(codec), args); - } - - public Command clusterBumpepoch() { - CommandArgs args = new CommandArgs(codec).add(BUMPEPOCH); - return createCommand(CLUSTER, new StatusOutput(codec), args); - } - - public Command clusterMeet(String ip, int port) { - LettuceAssert.notNull(ip, "IP " + MUST_NOT_BE_NULL); - LettuceAssert.notEmpty(ip, "IP " + MUST_NOT_BE_EMPTY); - - CommandArgs args = new CommandArgs(codec).add(MEET).add(ip).add(port); - return createCommand(CLUSTER, new StatusOutput(codec), args); - } - - public Command clusterForget(String nodeId) { - assertNodeId(nodeId); - - CommandArgs args = new CommandArgs(codec).add(FORGET).add(nodeId); - return createCommand(CLUSTER, new StatusOutput(codec), args); - } - - public Command clusterAddslots(int[] slots) { - notEmptySlots(slots); - - CommandArgs args = new CommandArgs(codec).add(ADDSLOTS); - - for (int slot : slots) { - args.add(slot); - } - return createCommand(CLUSTER, new StatusOutput(codec), args); - } - - public Command clusterDelslots(int[] slots) { - notEmptySlots(slots); - - CommandArgs args = new CommandArgs(codec).add(DELSLOTS); - - for (int slot : slots) { - args.add(slot); - } - return createCommand(CLUSTER, new StatusOutput(codec), args); - } - - public Command clusterInfo() { - CommandArgs args = new CommandArgs(codec).add(INFO); - - return createCommand(CLUSTER, new StatusOutput(codec), args); - } - - public Command clusterMyId() { - CommandArgs args = new CommandArgs(codec).add(MYID); - - return createCommand(CLUSTER, new StatusOutput(codec), args); - } - - public Command clusterNodes() { - CommandArgs args = new CommandArgs(codec).add(NODES); - - return createCommand(CLUSTER, new StatusOutput(codec), args); - } - - public Command> clusterGetKeysInSlot(int slot, int count) { - CommandArgs args = new CommandArgs(codec).add(GETKEYSINSLOT).add(slot).add(count); - return createCommand(CLUSTER, new KeyListOutput(codec), args); - } - - public Command clusterCountKeysInSlot(int slot) { - CommandArgs args = new CommandArgs(codec).add(COUNTKEYSINSLOT).add(slot); - return createCommand(CLUSTER, new IntegerOutput(codec), args); - } - - public Command clusterCountFailureReports(String nodeId) { - assertNodeId(nodeId); - - CommandArgs args = new CommandArgs(codec).add("COUNT-FAILURE-REPORTS").add(nodeId); - return createCommand(CLUSTER, new IntegerOutput(codec), args); - } - - public Command clusterKeyslot(K key) { - CommandArgs args = new CommandArgs(codec).add(KEYSLOT).addKey(key); - return createCommand(CLUSTER, new 
IntegerOutput(codec), args); - } - - public Command clusterSaveconfig() { - CommandArgs args = new CommandArgs(codec).add(SAVECONFIG); - return createCommand(CLUSTER, new StatusOutput(codec), args); - } - - public Command clusterSetConfigEpoch(long configEpoch) { - CommandArgs args = new CommandArgs(codec).add("SET-CONFIG-EPOCH").add(configEpoch); - return createCommand(CLUSTER, new StatusOutput(codec), args); - } - - public Command> clusterSlots() { - CommandArgs args = new CommandArgs(codec).add(SLOTS); - return createCommand(CLUSTER, new ArrayOutput(codec), args); - } - - public Command clusterSetSlotNode(int slot, String nodeId) { - assertNodeId(nodeId); - - CommandArgs args = new CommandArgs(codec).add(SETSLOT).add(slot).add(NODE).add(nodeId); - return createCommand(CLUSTER, new StatusOutput(codec), args); - } - - public Command clusterSetSlotStable(int slot) { - - CommandArgs args = new CommandArgs(codec).add(SETSLOT).add(slot).add(STABLE); - return createCommand(CLUSTER, new StatusOutput(codec), args); - } - - public Command clusterSetSlotMigrating(int slot, String nodeId) { - assertNodeId(nodeId); - - CommandArgs args = new CommandArgs(codec).add(SETSLOT).add(slot).add(MIGRATING).add(nodeId); - return createCommand(CLUSTER, new StatusOutput(codec), args); - } - - public Command clusterSetSlotImporting(int slot, String nodeId) { - assertNodeId(nodeId); - - CommandArgs args = new CommandArgs(codec).add(SETSLOT).add(slot).add(IMPORTING).add(nodeId); - return createCommand(CLUSTER, new StatusOutput(codec), args); - } - - public Command clusterReplicate(String nodeId) { - assertNodeId(nodeId); - - CommandArgs args = new CommandArgs(codec).add(REPLICATE).add(nodeId); - return createCommand(CLUSTER, new StatusOutput(codec), args); - } - - public Command asking() { - - CommandArgs args = new CommandArgs(codec); - return createCommand(ASKING, new StatusOutput(codec), args); - } - - public Command clusterFlushslots() { - - CommandArgs args = new CommandArgs(codec).add(FLUSHSLOTS); - return createCommand(CLUSTER, new StatusOutput(codec), args); - } - - public Command> clusterSlaves(String nodeId) { - assertNodeId(nodeId); - - CommandArgs args = new CommandArgs(codec).add(SLAVES).add(nodeId); - return createCommand(CLUSTER, new StringListOutput(codec), args); - } - - public Command clusterFailover(boolean force) { - - CommandArgs args = new CommandArgs(codec).add(FAILOVER); - if (force) { - args.add(FORCE); - } - return createCommand(CLUSTER, new StatusOutput(codec), args); - } - - public Command clusterReset(boolean hard) { - - CommandArgs args = new CommandArgs(codec).add(RESET); - if (hard) { - args.add(HARD); - } else { - args.add(SOFT); - } - return createCommand(CLUSTER, new StatusOutput(codec), args); - } - - public Command geoadd(K key, double longitude, double latitude, V member) { - notNullKey(key); - - CommandArgs args = new CommandArgs(codec).addKey(key).add(longitude).add(latitude).addValue(member); - return createCommand(GEOADD, new IntegerOutput(codec), args); - } - - public Command geoadd(K key, Object[] lngLatMember) { - - notNullKey(key); - LettuceAssert.notNull(lngLatMember, "LngLatMember " + MUST_NOT_BE_NULL); - LettuceAssert.notEmpty(lngLatMember, "LngLatMember " + MUST_NOT_BE_EMPTY); - LettuceAssert.noNullElements(lngLatMember, "LngLatMember " + MUST_NOT_CONTAIN_NULL_ELEMENTS); - LettuceAssert.isTrue(lngLatMember.length % 3 == 0, "LngLatMember.length must be a multiple of 3 and contain a " - + "sequence of longitude1, latitude1, member1, longitude2, latitude2, member2, 
... longitudeN, latitudeN, memberN"); - - CommandArgs args = new CommandArgs(codec).addKey(key); - - for (int i = 0; i < lngLatMember.length; i += 3) { - args.add((Double) lngLatMember[i]); - args.add((Double) lngLatMember[i + 1]); - args.addValue((V) lngLatMember[i + 2]); - } - - return createCommand(GEOADD, new IntegerOutput(codec), args); - } - - public Command> geohash(K key, V... members) { - notNullKey(key); - LettuceAssert.notNull(members, "Members " + MUST_NOT_BE_NULL); - LettuceAssert.notEmpty(members, "Members " + MUST_NOT_BE_EMPTY); - - CommandArgs args = new CommandArgs(codec).addKey(key).addValues(members); - return createCommand(GEOHASH, new StringListOutput(codec), args); - } - - public Command> georadius(K key, double longitude, double latitude, double distance, String unit) { - notNullKey(key); - LettuceAssert.notNull(unit, "Unit " + MUST_NOT_BE_NULL); - LettuceAssert.notEmpty(unit, "Unit " + MUST_NOT_BE_EMPTY); - - CommandArgs args = new CommandArgs(codec).addKey(key).add(longitude).add(latitude).add(distance).add(unit); - return createCommand(GEORADIUS, new ValueSetOutput(codec), args); - } - - public Command>> georadius(K key, double longitude, double latitude, double distance, String unit, - GeoArgs geoArgs) { - - notNullKey(key); - LettuceAssert.notNull(unit, "Unit " + MUST_NOT_BE_NULL); - LettuceAssert.notEmpty(unit, "Unit " + MUST_NOT_BE_EMPTY); - LettuceAssert.notNull(geoArgs, "GeoArgs " + MUST_NOT_BE_NULL); - CommandArgs args = new CommandArgs(codec).addKey(key).add(longitude).add(latitude).add(distance).add(unit); - geoArgs.build(args); - - return createCommand(GEORADIUS, new GeoWithinListOutput(codec, geoArgs.isWithDistance(), geoArgs.isWithHash(), - geoArgs.isWithCoordinates()), args); - } - - public Command georadius(K key, double longitude, double latitude, double distance, String unit, - GeoRadiusStoreArgs geoRadiusStoreArgs) { - - notNullKey(key); - LettuceAssert.notNull(unit, "Unit " + MUST_NOT_BE_NULL); - LettuceAssert.notEmpty(unit, "Unit " + MUST_NOT_BE_EMPTY); - LettuceAssert.notNull(geoRadiusStoreArgs, "GeoRadiusStoreArgs " + MUST_NOT_BE_NULL); - LettuceAssert.isTrue(geoRadiusStoreArgs.getStoreKey() != null || geoRadiusStoreArgs.getStoreDistKey() != null, - "At least STORE key or STORDIST key is required"); - - CommandArgs args = new CommandArgs(codec).addKey(key).add(longitude).add(latitude).add(distance).add(unit); - geoRadiusStoreArgs.build(args); - - return createCommand(GEORADIUS, new IntegerOutput(codec), args); - } - - public Command> georadiusbymember(K key, V member, double distance, String unit) { - - notNullKey(key); - LettuceAssert.notNull(unit, "Unit " + MUST_NOT_BE_NULL); - LettuceAssert.notEmpty(unit, "Unit " + MUST_NOT_BE_EMPTY); - - CommandArgs args = new CommandArgs(codec).addKey(key).addValue(member).add(distance).add(unit); - return createCommand(GEORADIUSBYMEMBER, new ValueSetOutput(codec), args); - } - - public Command>> georadiusbymember(K key, V member, double distance, String unit, GeoArgs geoArgs) { - - notNullKey(key); - LettuceAssert.notNull(geoArgs, "GeoArgs " + MUST_NOT_BE_NULL); - LettuceAssert.notNull(unit, "Unit " + MUST_NOT_BE_NULL); - LettuceAssert.notEmpty(unit, "Unit " + MUST_NOT_BE_EMPTY); - - CommandArgs args = new CommandArgs(codec).addKey(key).addValue(member).add(distance).add(unit); - geoArgs.build(args); - - return createCommand(GEORADIUSBYMEMBER, new GeoWithinListOutput(codec, geoArgs.isWithDistance(), - geoArgs.isWithHash(), geoArgs.isWithCoordinates()), args); - } - - public Command georadiusbymember(K 
key, V member, double distance, String unit, - GeoRadiusStoreArgs geoRadiusStoreArgs) { - - notNullKey(key); - LettuceAssert.notNull(geoRadiusStoreArgs, "GeoRadiusStoreArgs " + MUST_NOT_BE_NULL); - LettuceAssert.notNull(unit, "Unit " + MUST_NOT_BE_NULL); - LettuceAssert.notEmpty(unit, "Unit " + MUST_NOT_BE_EMPTY); - LettuceAssert.isTrue(geoRadiusStoreArgs.getStoreKey() != null || geoRadiusStoreArgs.getStoreDistKey() != null, - "At least STORE key or STORDIST key is required"); - - CommandArgs args = new CommandArgs(codec).addKey(key).addValue(member).add(distance).add(unit); - geoRadiusStoreArgs.build(args); - - return createCommand(GEORADIUSBYMEMBER, new IntegerOutput(codec), args); - } - - @SuppressWarnings({ "unchecked", "rawtypes" }) - public Command> geopos(K key, V[] members) { - notNullKey(key); - LettuceAssert.notNull(members, "Members " + MUST_NOT_BE_NULL); - LettuceAssert.notEmpty(members, "Members " + MUST_NOT_BE_EMPTY); - CommandArgs args = new CommandArgs(codec).addKey(key).addValues(members); - - return (Command) createCommand(GEOPOS, new GeoCoordinatesListOutput(codec), args); - } - - public Command geodist(K key, V from, V to, GeoArgs.Unit unit) { - notNullKey(key); - LettuceAssert.notNull(from, "From " + MUST_NOT_BE_NULL); - LettuceAssert.notNull(from, "To " + MUST_NOT_BE_NULL); - - CommandArgs args = new CommandArgs(codec).addKey(key).addValue(from).addValue(to); - - if (unit != null) { - args.add(unit.name()); - } - - return createCommand(GEODIST, new DoubleOutput(codec), args); - } - - public void notNull(ScoredValueStreamingChannel channel) { - LettuceAssert.notNull(channel, "ScoredValueStreamingChannel " + MUST_NOT_BE_NULL); - } - - public void notNull(KeyStreamingChannel channel) { - LettuceAssert.notNull(channel, "KeyValueStreamingChannel " + MUST_NOT_BE_NULL); - } - - public void notNull(ValueStreamingChannel channel) { - LettuceAssert.notNull(channel, "ValueStreamingChannel " + MUST_NOT_BE_NULL); - } - - public void notNull(KeyValueStreamingChannel channel) { - LettuceAssert.notNull(channel, "KeyValueStreamingChannel " + MUST_NOT_BE_NULL); - } - - private void notNullKey(K key) { - LettuceAssert.notNull(key, "Key " + MUST_NOT_BE_NULL); - } - - public void notNullMinMax(String min, String max) { - LettuceAssert.notNull(min, "Min " + MUST_NOT_BE_NULL); - LettuceAssert.notNull(max, "Max " + MUST_NOT_BE_NULL); - } - - private void notEmpty(K[] keys) { - LettuceAssert.notNull(keys, "Keys " + MUST_NOT_BE_NULL); - LettuceAssert.notEmpty(keys, "Keys " + MUST_NOT_BE_EMPTY); - } - - private void notEmptyValues(V[] values) { - LettuceAssert.notNull(values, "Values " + MUST_NOT_BE_NULL); - LettuceAssert.notEmpty(values, "Values " + MUST_NOT_BE_EMPTY); - } - - private void assertNodeId(String nodeId) { - LettuceAssert.notNull(nodeId, "NodeId " + MUST_NOT_BE_NULL); - LettuceAssert.notEmpty(nodeId, "NodeId " + MUST_NOT_BE_EMPTY); - } - - private void notEmptySlots(int[] slots) { - LettuceAssert.notNull(slots, "Slots " + MUST_NOT_BE_NULL); - LettuceAssert.notEmpty(slots, "Slots " + MUST_NOT_BE_EMPTY); - } -} diff --git a/src/main/java/com/lambdaworks/redis/RedisCommandExecutionException.java b/src/main/java/com/lambdaworks/redis/RedisCommandExecutionException.java deleted file mode 100644 index 7b40d72542..0000000000 --- a/src/main/java/com/lambdaworks/redis/RedisCommandExecutionException.java +++ /dev/null @@ -1,23 +0,0 @@ -package com.lambdaworks.redis; - -/** - * Exception for errors states reported by Redis. 
- * - * @author Mark Paluch - */ -@SuppressWarnings("serial") -public class RedisCommandExecutionException extends RedisException { - - public RedisCommandExecutionException(Throwable cause) { - super(cause); - } - - public RedisCommandExecutionException(String msg) { - super(msg); - } - - public RedisCommandExecutionException(String msg, Throwable e) { - super(msg, e); - } - -} diff --git a/src/main/java/com/lambdaworks/redis/RedisCommandInterruptedException.java b/src/main/java/com/lambdaworks/redis/RedisCommandInterruptedException.java deleted file mode 100644 index 2cfb344276..0000000000 --- a/src/main/java/com/lambdaworks/redis/RedisCommandInterruptedException.java +++ /dev/null @@ -1,16 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis; - -/** - * Exception thrown when the thread executing a redis command is interrupted. - * - * @author Will Glozer - */ -@SuppressWarnings("serial") -public class RedisCommandInterruptedException extends RedisException { - - public RedisCommandInterruptedException(Throwable e) { - super("Command interrupted", e); - } -} diff --git a/src/main/java/com/lambdaworks/redis/RedisCommandTimeoutException.java b/src/main/java/com/lambdaworks/redis/RedisCommandTimeoutException.java deleted file mode 100644 index 904140d0e4..0000000000 --- a/src/main/java/com/lambdaworks/redis/RedisCommandTimeoutException.java +++ /dev/null @@ -1,18 +0,0 @@ -package com.lambdaworks.redis; - -/** - * Exception thrown when the command waiting timeout is exceeded. - * - * @author Mark Paluch - */ -@SuppressWarnings("serial") -public class RedisCommandTimeoutException extends RedisException { - - public RedisCommandTimeoutException() { - super("Command timed out"); - } - - public RedisCommandTimeoutException(String msg) { - super(msg); - } -} diff --git a/src/main/java/com/lambdaworks/redis/RedisConnection.java b/src/main/java/com/lambdaworks/redis/RedisConnection.java deleted file mode 100644 index 007c3a9c54..0000000000 --- a/src/main/java/com/lambdaworks/redis/RedisConnection.java +++ /dev/null @@ -1,52 +0,0 @@ -package com.lambdaworks.redis; - -import java.util.concurrent.TimeUnit; - -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.api.sync.*; - -/** - * - * A complete synchronous and thread-safe Redis API with 400+ Methods. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 3.0 - * @deprecated Use {@link RedisCommands} - */ -@Deprecated -public interface RedisConnection extends RedisHashesConnection, RedisKeysConnection, - RedisStringsConnection, RedisListsConnection, RedisSetsConnection, RedisSortedSetsConnection, - RedisScriptingConnection, RedisServerConnection, RedisHLLConnection, RedisGeoConnection, - BaseRedisConnection, RedisClusterConnection, RedisTransactionalCommands { - - /** - * Set the default timeout for operations. - * - * @param timeout the timeout value - * @param unit the unit of the timeout value - */ - void setTimeout(long timeout, TimeUnit unit); - - /** - * Authenticate to the server. - * - * @param password the password - * @return String simple-string-reply - */ - String auth(String password); - - /** - * Change the selected database for the current connection. - * - * @param db the database number - * @return String simple-string-reply - */ - String select(int db); - - /** - * @return the underlying connection. 
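As an aside to the removals above, a minimal sketch of how the exception types documented here typically surface around a synchronous call, assuming the 4.x `com.lambdaworks.redis` API shown in this diff and a Redis server reachable at `redis://localhost:6379` (the URI, key name, and timeout are illustrative):

```java
import java.util.concurrent.TimeUnit;

import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.RedisCommandExecutionException;
import com.lambdaworks.redis.RedisCommandTimeoutException;
import com.lambdaworks.redis.api.StatefulRedisConnection;
import com.lambdaworks.redis.api.sync.RedisCommands;

public class CommandExceptionSketch {

    public static void main(String[] args) {

        RedisClient client = RedisClient.create("redis://localhost:6379");
        StatefulRedisConnection<String, String> connection = client.connect();
        connection.setTimeout(2, TimeUnit.SECONDS); // bound the synchronous wait per command

        RedisCommands<String, String> commands = connection.sync();

        try {
            commands.set("counter", "not-a-number");
            commands.incr("counter"); // server replies with an error: value is not an integer
        } catch (RedisCommandExecutionException e) {
            // error replies reported by Redis surface as RedisCommandExecutionException
            System.err.println("Redis error reply: " + e.getMessage());
        } catch (RedisCommandTimeoutException e) {
            // thrown when no reply arrives within the configured timeout
            System.err.println("Command timed out: " + e.getMessage());
        } finally {
            connection.close();
            client.shutdown();
        }
    }
}
```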
- */ - StatefulRedisConnection getStatefulConnection(); -} diff --git a/src/main/java/com/lambdaworks/redis/RedisConnectionException.java b/src/main/java/com/lambdaworks/redis/RedisConnectionException.java deleted file mode 100644 index 36181cd8b4..0000000000 --- a/src/main/java/com/lambdaworks/redis/RedisConnectionException.java +++ /dev/null @@ -1,19 +0,0 @@ -package com.lambdaworks.redis; - -/** - * Exception for connection failures. - * - * @author Mark Paluch - */ -@SuppressWarnings("serial") -public class RedisConnectionException extends RedisException { - - public RedisConnectionException(String msg) { - super(msg); - } - - public RedisConnectionException(String msg, Throwable e) { - super(msg, e); - } - -} diff --git a/src/main/java/com/lambdaworks/redis/RedisConnectionPool.java b/src/main/java/com/lambdaworks/redis/RedisConnectionPool.java deleted file mode 100644 index a3051da4b4..0000000000 --- a/src/main/java/com/lambdaworks/redis/RedisConnectionPool.java +++ /dev/null @@ -1,231 +0,0 @@ -package com.lambdaworks.redis; - -import java.io.Closeable; -import java.lang.reflect.InvocationTargetException; -import java.lang.reflect.Method; -import java.lang.reflect.Proxy; -import java.util.Collections; -import java.util.Set; - -import com.lambdaworks.redis.internal.AbstractInvocationHandler; -import org.apache.commons.pool2.BasePooledObjectFactory; -import org.apache.commons.pool2.PooledObject; -import org.apache.commons.pool2.PooledObjectFactory; -import org.apache.commons.pool2.impl.DefaultPooledObject; -import org.apache.commons.pool2.impl.GenericObjectPool; -import org.apache.commons.pool2.impl.GenericObjectPoolConfig; - -/** - * Connection pool for redis connections. - * - * @author Mark Paluch - * @param Connection type. - * @since 3.0 - */ -public class RedisConnectionPool implements Closeable { - - private final RedisConnectionProvider redisConnectionProvider; - private GenericObjectPool objectPool; - private CloseEvents closeEvents = new CloseEvents(); - - /** - * Create a new connection pool - * - * @param redisConnectionProvider the connection provider - * @param maxActive max active connections - * @param maxIdle max idle connections - * @param maxWait max wait time (ms) for a connection - */ - public RedisConnectionPool(RedisConnectionProvider redisConnectionProvider, int maxActive, int maxIdle, long maxWait) { - this.redisConnectionProvider = redisConnectionProvider; - - GenericObjectPoolConfig config = new GenericObjectPoolConfig(); - config.setMaxIdle(maxIdle); - config.setMaxTotal(maxActive); - config.setMaxWaitMillis(maxWait); - config.setTestOnBorrow(true); - - objectPool = new GenericObjectPool(createFactory(redisConnectionProvider), config); - } - - private PooledObjectFactory createFactory(final RedisConnectionProvider redisConnectionProvider) { - return new BasePooledObjectFactory() { - - @SuppressWarnings("unchecked") - @Override - public T create() throws Exception { - - T connection = redisConnectionProvider.createConnection(); - PooledConnectionInvocationHandler h = new PooledConnectionInvocationHandler(connection, - RedisConnectionPool.this); - - Object proxy = Proxy.newProxyInstance(getClass().getClassLoader(), - new Class[] { redisConnectionProvider.getComponentType() }, h); - - return (T) proxy; - } - - @Override - public PooledObject wrap(T obj) { - return new DefaultPooledObject(obj); - } - - @Override - public boolean validateObject(PooledObject p) { - return Connections.isOpen(p.getObject()); - } - - @Override - @SuppressWarnings("unchecked") - public 
void destroyObject(PooledObject p) throws Exception { - - T object = p.getObject(); - if (Proxy.isProxyClass(object.getClass())) { - PooledConnectionInvocationHandler invocationHandler = (PooledConnectionInvocationHandler) Proxy - .getInvocationHandler(object); - - object = invocationHandler.getConnection(); - } - - Connections.close(object); - } - }; - } - - /** - * Allocate a connection from the pool. It must be returned using freeConnection (or alternatively call {@code close()} on - * the connection). - * - * The connections returned by this method are proxies to the underlying connections. - * - * @return a pooled connection. - */ - public T allocateConnection() { - try { - return objectPool.borrowObject(); - } catch (RedisException e) { - throw e; - } catch (Exception e) { - throw new RedisException(e.getMessage(), e); - } - } - - /** - * Return a connection into the pool. - * - * @param t the connection. - */ - public void freeConnection(T t) { - objectPool.returnObject(t); - } - - /** - * - * @return the number of idle connections - */ - public int getNumIdle() { - return objectPool.getNumIdle(); - } - - /** - * - * @return the number of active connections. - */ - public int getNumActive() { - return objectPool.getNumActive(); - } - - /** - * Close the pool and close all idle connections. Active connections won't be closed. - */ - @Override - public void close() { - objectPool.close(); - objectPool = null; - - closeEvents.fireEventClosed(this); - closeEvents = null; - } - - /** - * - * @return the component type (pool resource type). - */ - public Class getComponentType() { - return redisConnectionProvider.getComponentType(); - } - - /** - * Adds a CloseListener. - * - * @param listener the listener - */ - void addListener(CloseEvents.CloseListener listener) { - closeEvents.addListener(listener); - } - - /** - * Invocation handler which takes care of connection.close(). Connections are returned to the pool on a close()-call. - * - * @author Mark Paluch - * @param Connection type. - * @since 3.0 - */ - static class PooledConnectionInvocationHandler extends AbstractInvocationHandler { - public static final Set DISABLED_METHODS = Collections.singleton("getStatefulConnection"); - - private T connection; - private final RedisConnectionPool pool; - - public PooledConnectionInvocationHandler(T connection, RedisConnectionPool pool) { - this.connection = connection; - this.pool = pool; - - } - - @SuppressWarnings("unchecked") - @Override - protected Object handleInvocation(Object proxy, Method method, Object[] args) throws Throwable { - - if (DISABLED_METHODS.contains(method.getName())) { - throw new UnsupportedOperationException( - "Calls to " + method.getName() + " are not supported on pooled connections"); - } - - if (connection == null) { - throw new RedisException("Connection is deallocated and cannot be used anymore."); - } - - if (method.getName().equals("close")) { - if (pool.objectPool == null) { - return method.invoke(connection, args); - } - pool.freeConnection((T) proxy); - return null; - } - - try { - return method.invoke(connection, args); - } catch (InvocationTargetException e) { - throw e.getTargetException(); - } - } - - public T getConnection() { - return connection; - } - } - - /** - * Connection provider for redis connections. - * - * @author Mark Paluch - * @param Connection type. 
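A borrow/use/return sketch against the pooling API removed here. Only `allocateConnection`, `freeConnection`, the idle/active counters, and the close-returns-to-pool behavior are taken from this class; the pool's connection type and how the pool instance is created are assumptions of the sketch.

```java
import com.lambdaworks.redis.RedisConnectionPool;
import com.lambdaworks.redis.api.sync.RedisCommands;

public class PoolUsageSketch {

    // The pool is handed in; creating it (e.g. via the deprecated RedisClient factory methods)
    // is out of scope for this sketch.
    static void withPooledConnection(RedisConnectionPool<RedisCommands<String, String>> pool) {

        RedisCommands<String, String> commands = pool.allocateConnection();
        try {
            commands.set("greeting", "hello");
        } finally {
            // returns the proxied connection to the pool; calling close() on the proxy is equivalent
            pool.freeConnection(commands);
        }

        System.out.println("idle=" + pool.getNumIdle() + ", active=" + pool.getNumActive());
    }
}
```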
- * @since 3.0 - */ - interface RedisConnectionProvider { - T createConnection(); - - Class getComponentType(); - } -} diff --git a/src/main/java/com/lambdaworks/redis/RedisConnectionStateListener.java b/src/main/java/com/lambdaworks/redis/RedisConnectionStateListener.java deleted file mode 100644 index 2ebaf139a2..0000000000 --- a/src/main/java/com/lambdaworks/redis/RedisConnectionStateListener.java +++ /dev/null @@ -1,33 +0,0 @@ -// Copyright (C) 2013 - ze. All rights reserved. -package com.lambdaworks.redis; - -/** - * Simple interface for Redis connection state monitoring. - * - * @author ze - */ -public interface RedisConnectionStateListener { - /** - * Event handler for successful connection event. - * - * @param connection Source connection. - */ - void onRedisConnected(RedisChannelHandler connection); - - /** - * Event handler for disconnection event. - * - * @param connection Source connection. - */ - void onRedisDisconnected(RedisChannelHandler connection); - - /** - * - * Event handler for exceptions. - * - * @param connection Source connection. - * - * @param cause Caught exception. - */ - void onRedisExceptionCaught(RedisChannelHandler connection, Throwable cause); -} \ No newline at end of file diff --git a/src/main/java/com/lambdaworks/redis/RedisException.java b/src/main/java/com/lambdaworks/redis/RedisException.java deleted file mode 100644 index c9f78826b5..0000000000 --- a/src/main/java/com/lambdaworks/redis/RedisException.java +++ /dev/null @@ -1,23 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis; - -/** - * Exception thrown when Redis returns an error message, or when the client fails for any reason. - * - * @author Will Glozer - */ -@SuppressWarnings("serial") -public class RedisException extends RuntimeException { - public RedisException(String msg) { - super(msg); - } - - public RedisException(String msg, Throwable e) { - super(msg, e); - } - - public RedisException(Throwable cause) { - super(cause); - } -} diff --git a/src/main/java/com/lambdaworks/redis/RedisFuture.java b/src/main/java/com/lambdaworks/redis/RedisFuture.java deleted file mode 100644 index cae755b022..0000000000 --- a/src/main/java/com/lambdaworks/redis/RedisFuture.java +++ /dev/null @@ -1,33 +0,0 @@ -package com.lambdaworks.redis; - -import java.util.concurrent.CompletionStage; -import java.util.concurrent.Future; -import java.util.concurrent.TimeUnit; - -/** - * Redis Future, extends a Listenable Future (Notification on Complete). The execution of the notification happens either on - * finish of the future execution or, if the future is completed already, immediately. - * - * @param Value type. - * @author Mark Paluch - * @since 3.0 - */ -public interface RedisFuture extends CompletionStage, Future { - - /** - * - * @return error text, if any error occured. - */ - String getError(); - - /** - * Wait up to the specified time for the command output to become available. - * - * @param timeout Maximum time to wait for a result. - * @param unit Unit of time for the timeout. - * - * @return true if the output became available. 
- * @throws InterruptedException if the current thread is interrupted while waiting - */ - boolean await(long timeout, TimeUnit unit) throws InterruptedException; -} diff --git a/src/main/java/com/lambdaworks/redis/RedisGeoAsyncConnection.java b/src/main/java/com/lambdaworks/redis/RedisGeoAsyncConnection.java deleted file mode 100644 index b1adf42531..0000000000 --- a/src/main/java/com/lambdaworks/redis/RedisGeoAsyncConnection.java +++ /dev/null @@ -1,155 +0,0 @@ -package com.lambdaworks.redis; - -import java.util.List; -import java.util.Set; - -import com.lambdaworks.redis.GeoArgs.Unit; - -/** - * Asynchronous executed commands for Geo-Commands. - * - * @author Mark Paluch - * @since 3.3 - * @deprecated Use {@link com.lambdaworks.redis.api.async.RedisGeoAsyncCommands} - */ -@Deprecated -public interface RedisGeoAsyncConnection { - - /** - * Single geo add. - * - * @param key the key of the geo set - * @param longitude the longitude coordinate according to WGS84 - * @param latitude the latitude coordinate according to WGS84 - * @param member the member to add - * @return Long integer-reply the number of elements that were added to the set - */ - RedisFuture geoadd(K key, double longitude, double latitude, V member); - - /** - * Multi geo add. - * - * @param key the key of the geo set - * @param lngLatMember triplets of double longitude, double latitude and V member - * @return Long integer-reply the number of elements that were added to the set - */ - RedisFuture geoadd(K key, Object... lngLatMember); - - /** - * Retrieve Geohash strings representing the position of one or more elements in a sorted set value representing a geospatial index. - * - * @param key the key of the geo set - * @param members the members - * @return bulk reply Geohash strings in the order of {@code members}. Returns {@literal null} if a member is not found. - */ - RedisFuture> geohash(K key, V... members); - - /** - * Retrieve members selected by distance with the center of {@code longitude} and {@code latitude}. - * - * @param key the key of the geo set - * @param longitude the longitude coordinate according to WGS84 - * @param latitude the latitude coordinate according to WGS84 - * @param distance radius distance - * @param unit distance unit - * @return bulk reply - */ - RedisFuture> georadius(K key, double longitude, double latitude, double distance, GeoArgs.Unit unit); - - /** - * Retrieve members selected by distance with the center of {@code longitude} and {@code latitude}. - * - * @param key the key of the geo set - * @param longitude the longitude coordinate according to WGS84 - * @param latitude the latitude coordinate according to WGS84 - * @param distance radius distance - * @param unit distance unit - * @param geoArgs args to control the result - * @return nested multi-bulk reply. The {@link GeoWithin} contains only fields which were requested by {@link GeoArgs} - */ - RedisFuture>> georadius(K key, double longitude, double latitude, double distance, GeoArgs.Unit unit, - GeoArgs geoArgs); - - /** - * Perform a {@link #georadius(Object, double, double, double, GeoArgs.Unit, GeoArgs)} query and store the results in a - * sorted set. 
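A minimal sketch combining `RedisFuture` with the asynchronous geo API documented above, assuming lettuce 4.x and a server at `redis://localhost:6379` (URI, key, and member names are illustrative):

```java
import java.util.concurrent.TimeUnit;

import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.RedisFuture;
import com.lambdaworks.redis.api.StatefulRedisConnection;
import com.lambdaworks.redis.api.async.RedisAsyncCommands;

public class AsyncGeoSketch {

    public static void main(String[] args) throws InterruptedException {

        RedisClient client = RedisClient.create("redis://localhost:6379");
        StatefulRedisConnection<String, String> connection = client.connect();
        RedisAsyncCommands<String, String> async = connection.async();

        // GEOADD resolves to the number of newly added members; RedisFuture is also a CompletionStage
        RedisFuture<Long> added = async.geoadd("Sicily", 13.361389, 38.115556, "Palermo");
        added.thenAccept(count -> System.out.println("added " + count + " member(s)"));

        // ... and a Future with a bounded wait, per await(timeout, unit)
        if (!added.await(1, TimeUnit.SECONDS)) {
            System.err.println("no reply within 1 second");
        }

        connection.close();
        client.shutdown();
    }
}
```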
- * - * @param key the key of the geo set - * @param longitude the longitude coordinate according to WGS84 - * @param latitude the latitude coordinate according to WGS84 - * @param distance radius distance - * @param unit distance unit - * @param geoRadiusStoreArgs args to store either the resulting elements with their distance or the resulting elements with - * their locations a sorted set. - * @return Long integer-reply the number of elements in the result - */ - RedisFuture georadius(K key, double longitude, double latitude, double distance, GeoArgs.Unit unit, - GeoRadiusStoreArgs geoRadiusStoreArgs); - - /** - * Retrieve members selected by distance with the center of {@code member}. The member itself is always contained in the - * results. - * - * @param key the key of the geo set - * @param member reference member - * @param distance radius distance - * @param unit distance unit - * @return set of members - */ - RedisFuture> georadiusbymember(K key, V member, double distance, GeoArgs.Unit unit); - - /** - * - * Retrieve members selected by distance with the center of {@code member}. The member itself is always contained in the - * results. - * - * @param key the key of the geo set - * @param member reference member - * @param distance radius distance - * @param unit distance unit - * @param geoArgs args to control the result - * @return nested multi-bulk reply. The {@link GeoWithin} contains only fields which were requested by {@link GeoArgs} - */ - RedisFuture>> georadiusbymember(K key, V member, double distance, GeoArgs.Unit unit, GeoArgs geoArgs); - - /** - * Perform a {@link #georadiusbymember(Object, Object, double, GeoArgs.Unit, GeoArgs)} query and store the results in a - * sorted set. - * - * @param key the key of the geo set - * @param member reference member - * @param distance radius distance - * @param unit distance unit - * @param geoRadiusStoreArgs args to store either the resulting elements with their distance or the resulting elements with - * their locations a sorted set. - * @return Long integer-reply the number of elements in the result - */ - RedisFuture georadiusbymember(K key, V member, double distance, GeoArgs.Unit unit, - GeoRadiusStoreArgs geoRadiusStoreArgs); - - /** - * Get geo coordinates for the {@code members}. - * - * @param key the key of the geo set - * @param members the members - * - * @return a list of {@link GeoCoordinates}s representing the x,y position of each element specified in the arguments. For - * missing elements {@literal null} is returned. - */ - RedisFuture> geopos(K key, V... members); - - /** - * - * Retrieve distance between points {@code from} and {@code to}. If one or more elements are missing {@literal null} is - * returned. Default in meters by , otherwise according to {@code unit} - * - * @param key the key of the geo set - * @param from from member - * @param to to member - * @param unit distance unit - * - * @return distance between points {@code from} and {@code to}. If one or more elements are missing {@literal null} is - * returned. 
- */ - RedisFuture geodist(K key, V from, V to, GeoArgs.Unit unit); -} diff --git a/src/main/java/com/lambdaworks/redis/RedisGeoConnection.java b/src/main/java/com/lambdaworks/redis/RedisGeoConnection.java deleted file mode 100644 index c973bbef4e..0000000000 --- a/src/main/java/com/lambdaworks/redis/RedisGeoConnection.java +++ /dev/null @@ -1,151 +0,0 @@ -package com.lambdaworks.redis; - -import java.util.List; -import java.util.Set; - -import com.lambdaworks.redis.GeoArgs.Unit; - -/** - * Synchronous executed commands for Geo-Commands. - * - * @author Mark Paluch - * @since 3.3 - * @deprecated Use {@link com.lambdaworks.redis.api.sync.RedisGeoCommands} - */ -@Deprecated -public interface RedisGeoConnection { - - /** - * Single geo add. - * - * @param key the key of the geo set - * @param longitude the longitude coordinate according to WGS84 - * @param latitude the latitude coordinate according to WGS84 - * @param member the member to add - * @return Long integer-reply the number of elements that were added to the set - */ - Long geoadd(K key, double longitude, double latitude, V member); - - /** - * Multi geo add. - * - * @param key the key of the geo set - * @param lngLatMember triplets of double longitude, double latitude and V member - * @return Long integer-reply the number of elements that were added to the set - */ - Long geoadd(K key, Object... lngLatMember); - - /** - * Retrieve Geohash strings representing the position of one or more elements in a sorted set value representing a geospatial index. - * - * @param key the key of the geo set - * @param members the members - * @return bulk reply Geohash strings in the order of {@code members}. Returns {@literal null} if a member is not found. - */ - List geohash(K key, V... members); - - /** - * Retrieve members selected by distance with the center of {@code longitude} and {@code latitude}. - * - * @param key the key of the geo set - * @param longitude the longitude coordinate according to WGS84 - * @param latitude the latitude coordinate according to WGS84 - * @param distance radius distance - * @param unit distance unit - * @return bulk reply - */ - Set georadius(K key, double longitude, double latitude, double distance, GeoArgs.Unit unit); - - /** - * Retrieve members selected by distance with the center of {@code longitude} and {@code latitude}. - * - * @param key the key of the geo set - * @param longitude the longitude coordinate according to WGS84 - * @param latitude the latitude coordinate according to WGS84 - * @param distance radius distance - * @param unit distance unit - * @param geoArgs args to control the result - * @return nested multi-bulk reply. The {@link GeoWithin} contains only fields which were requested by {@link GeoArgs} - */ - List> georadius(K key, double longitude, double latitude, double distance, GeoArgs.Unit unit, GeoArgs geoArgs); - - /** - * Perform a {@link #georadius(Object, double, double, double, Unit)} query and store the results in a sorted set. - * - * @param key the key of the geo set - * @param longitude the longitude coordinate according to WGS84 - * @param latitude the latitude coordinate according to WGS84 - * @param distance radius distance - * @param unit distance unit - * @param geoRadiusStoreArgs args to store either the resulting elements with their distance or the resulting elements with - * their locations a sorted set. 
- * @return Long integer-reply the number of elements in the result - */ - Long georadius(K key, double longitude, double latitude, double distance, GeoArgs.Unit unit, - GeoRadiusStoreArgs geoRadiusStoreArgs); - - /** - * Retrieve members selected by distance with the center of {@code member}. The member itself is always contained in the - * results. - * - * @param key the key of the geo set - * @param member reference member - * @param distance radius distance - * @param unit distance unit - * @return set of members - */ - Set georadiusbymember(K key, V member, double distance, GeoArgs.Unit unit); - - /** - * - * Retrieve members selected by distance with the center of {@code member}. The member itself is always contained in the - * results. - * - * @param key the key of the geo set - * @param member reference member - * @param distance radius distance - * @param unit distance unit - * @param geoArgs args to control the result - * @return nested multi-bulk reply. The {@link GeoWithin} contains only fields which were requested by {@link GeoArgs} - */ - List> georadiusbymember(K key, V member, double distance, GeoArgs.Unit unit, GeoArgs geoArgs); - - /** - * Perform a {@link #georadiusbymember(Object, Object, double, Unit, GeoArgs)} query and store the results in a sorted set. - * - * @param key the key of the geo set - * @param member reference member - * @param distance radius distance - * @param unit distance unit - * @param geoRadiusStoreArgs args to store either the resulting elements with their distance or the resulting elements with - * their locations a sorted set. - * @return Long integer-reply the number of elements in the result - */ - Long georadiusbymember(K key, V member, double distance, GeoArgs.Unit unit, GeoRadiusStoreArgs geoRadiusStoreArgs); - - /** - * Get geo coordinates for the {@code members}. - * - * @param key the key of the geo set - * @param members the members - * - * @return a list of {@link GeoCoordinates}s representing the x,y position of each element specified in the arguments. For - * missing elements {@literal null} is returned. - */ - List geopos(K key, V... members); - - /** - * - * Retrieve distance between points {@code from} and {@code to}. If one or more elements are missing {@literal null} is - * returned. Default in meters by, otherwise according to {@code unit} - * - * @param key the key of the geo set - * @param from from member - * @param to to member - * @param unit distance unit - * - * @return distance between points {@code from} and {@code to}. If one or more elements are missing {@literal null} is - * returned. - */ - Double geodist(K key, V from, V to, GeoArgs.Unit unit); -} diff --git a/src/main/java/com/lambdaworks/redis/RedisHLLAsyncConnection.java b/src/main/java/com/lambdaworks/redis/RedisHLLAsyncConnection.java deleted file mode 100644 index 9ba08ec3ba..0000000000 --- a/src/main/java/com/lambdaworks/redis/RedisHLLAsyncConnection.java +++ /dev/null @@ -1,52 +0,0 @@ -package com.lambdaworks.redis; - -import com.lambdaworks.redis.api.async.RedisHLLAsyncCommands; - -/** - * Asynchronous executed commands for HyperLogLog (PF* commands). - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 3.0 - * @deprecated Use {@link RedisHLLAsyncCommands} - */ -@Deprecated -public interface RedisHLLAsyncConnection { - /** - * Adds the specified elements to the specified HyperLogLog. 
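Both removed geo interfaces name their replacements in their `@deprecated` tags (`com.lambdaworks.redis.api.async.RedisGeoAsyncCommands` and `com.lambdaworks.redis.api.sync.RedisGeoCommands`). As a rough migration sketch, the same GEOADD/GEORADIUS operations are reachable through a `StatefulRedisConnection`; the connection URI, key, coordinates and member names below are illustrative assumptions, not taken from the removed sources:

```java
import java.util.Set;

import com.lambdaworks.redis.GeoArgs;
import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.api.StatefulRedisConnection;
import com.lambdaworks.redis.api.sync.RedisCommands;

public class GeoMigrationSketch {

    public static void main(String[] args) {

        RedisClient client = RedisClient.create("redis://localhost");   // assumed local instance
        StatefulRedisConnection<String, String> connection = client.connect();
        RedisCommands<String, String> commands = connection.sync();     // RedisCommands includes the geo commands

        // GEOADD key longitude latitude member
        commands.geoadd("Sicily", 13.361389, 38.115556, "Palermo");
        commands.geoadd("Sicily", 15.087269, 37.502669, "Catania");

        // GEORADIUS: members within 200 km of the given center point
        Set<String> nearby = commands.georadius("Sicily", 15, 37, 200, GeoArgs.Unit.km);
        System.out.println(nearby);

        connection.close();
        client.shutdown();
    }
}
```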
- * - * @param key the key - * @param value the value - * @param moreValues more values - * - * @return RedisFuture<Long> integer-reply specifically: - * - * 1 if at least 1 HyperLogLog internal register was altered. 0 otherwise. - */ - RedisFuture pfadd(K key, V value, V... moreValues); - - /** - * Merge N different HyperLogLogs into a single one. - * - * @param destkey the destination key - * @param sourcekey the source key - * @param moreSourceKeys more source keys - * - * @return RedisFuture<String> simple-string-reply The command just returns {@code OK}. - */ - RedisFuture pfmerge(K destkey, K sourcekey, K... moreSourceKeys); - - /** - * Return the approximated cardinality of the set(s) observed by the HyperLogLog at key(s). - * - * @param key the key - * @param moreKeys more keys - * - * @return RedisFuture<Long> integer-reply specifically: - * - * The approximated number of unique elements observed via {@code PFADD}. - */ - RedisFuture pfcount(K key, K... moreKeys); - -} diff --git a/src/main/java/com/lambdaworks/redis/RedisHLLConnection.java b/src/main/java/com/lambdaworks/redis/RedisHLLConnection.java deleted file mode 100644 index 18053be98d..0000000000 --- a/src/main/java/com/lambdaworks/redis/RedisHLLConnection.java +++ /dev/null @@ -1,52 +0,0 @@ -package com.lambdaworks.redis; - -import com.lambdaworks.redis.api.sync.RedisHLLCommands; - -/** - * Synchronous executed commands for HyperLogLog (PF* commands). - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 3.0 - * @deprecated Use {@link RedisHLLCommands} - */ -@Deprecated -public interface RedisHLLConnection { - /** - * Adds the specified elements to the specified HyperLogLog. - * - * @param key the key - * @param value the value - * @param moreValues more values - * - * @return Long integer-reply specifically: - * - * 1 if at least 1 HyperLogLog internal register was altered. 0 otherwise. - */ - Long pfadd(K key, V value, V... moreValues); - - /** - * Merge N different HyperLogLogs into a single one. - * - * @param destkey the destination key - * @param sourcekey the source key - * @param moreSourceKeys more source keys - * - * @return Long simple-string-reply The command just returns {@code OK}. - */ - String pfmerge(K destkey, K sourcekey, K... moreSourceKeys); - - /** - * Return the approximated cardinality of the set(s) observed by the HyperLogLog at key(s). - * - * @param key the key - * @param moreKeys more keys - * - * @return Long integer-reply specifically: - * - * The approximated number of unique elements observed via {@code PFADD}. - */ - Long pfcount(K key, K... moreKeys); - -} diff --git a/src/main/java/com/lambdaworks/redis/RedisHashesAsyncConnection.java b/src/main/java/com/lambdaworks/redis/RedisHashesAsyncConnection.java deleted file mode 100644 index 39a3c3fd7a..0000000000 --- a/src/main/java/com/lambdaworks/redis/RedisHashesAsyncConnection.java +++ /dev/null @@ -1,281 +0,0 @@ -package com.lambdaworks.redis; - -import java.util.List; -import java.util.Map; - -import com.lambdaworks.redis.api.async.RedisHashAsyncCommands; -import com.lambdaworks.redis.output.KeyStreamingChannel; -import com.lambdaworks.redis.output.KeyValueStreamingChannel; -import com.lambdaworks.redis.output.ValueStreamingChannel; - -/** - * Asynchronous executed commands for Hashes (Key-Value pairs). - * - * @param Key type. - * @param Value type. 
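The HyperLogLog interfaces removed here are superseded by `RedisHLLAsyncCommands`/`RedisHLLCommands`. A minimal sketch of the PF* commands over the synchronous API, assuming a `RedisCommands<String, String>` handle obtained as in the geo sketch above; the key and value names are invented for illustration:

```java
// PFADD returns 1 if at least one internal register was altered, 0 otherwise
commands.pfadd("hits:2016-05-01", "user:1", "user:2", "user:3");
commands.pfadd("hits:2016-05-02", "user:2", "user:4");

// PFMERGE combines several HyperLogLogs into a destination key
commands.pfmerge("hits:2016-05", "hits:2016-05-01", "hits:2016-05-02");

// PFCOUNT returns the approximated cardinality of the merged set
Long unique = commands.pfcount("hits:2016-05");
```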
- * @author Mark Paluch - * @since 3.0 - * @deprecated Use {@link RedisHashAsyncCommands} - */ -@Deprecated -public interface RedisHashesAsyncConnection { - - /** - * Delete one or more hash fields. - * - * @param key the key - * @param fields the field type: key - * @return RedisFuture<Long> integer-reply the number of fields that were removed from the hash, not including - * specified but non existing fields. - */ - RedisFuture hdel(K key, K... fields); - - /** - * Determine if a hash field exists. - * - * @param key the key - * @param field the field type: key - * @return RedisFuture<Boolean> integer-reply specifically: - * - * {@literal true} if the hash contains {@code field}. {@literal false} if the hash does not contain {@code field}, - * or {@code key} does not exist. - */ - RedisFuture hexists(K key, K field); - - /** - * Get the value of a hash field. - * - * @param key the key - * @param field the field type: key - * @return RedisFuture<V> bulk-string-reply the value associated with {@code field}, or {@code null} when - * {@code field} is not present in the hash or {@code key} does not exist. - */ - RedisFuture hget(K key, K field); - - /** - * Increment the integer value of a hash field by the given number. - * - * @param key the key - * @param field the field type: key - * @param amount the increment type: long - * @return RedisFuture<Long> integer-reply the value at {@code field} after the increment operation. - */ - RedisFuture hincrby(K key, K field, long amount); - - /** - * Increment the float value of a hash field by the given amount. - * - * @param key the key - * @param field the field type: key - * @param amount the increment type: double - * @return RedisFuture<Double;> bulk-string-reply the value of {@code field} after the increment. - */ - RedisFuture hincrbyfloat(K key, K field, double amount); - - /** - * Get all the fields and values in a hash. - * - * @param key the key - * @return RedisFuture<Map<K,V>> array-reply list of fields and their values stored in the hash, or an empty - * list when {@code key} does not exist. - */ - RedisFuture> hgetall(K key); - - /** - * Stream over all the fields and values in a hash. - * - * @param channel the channel - * @param key the key - * - * @return RedisFuture<Long> count of the keys. - */ - RedisFuture hgetall(KeyValueStreamingChannel channel, K key); - - /** - * Get all the fields in a hash. - * - * @param key the key - * @return RedisFuture<List<K>> array-reply list of fields in the hash, or an empty list when {@code key} does - * not exist. - */ - RedisFuture> hkeys(K key); - - /** - * Get all the fields in a hash. - * - * @param channel the channel - * @param key the key - * - * @return RedisFuture<Long> count of the keys. - */ - RedisFuture hkeys(KeyStreamingChannel channel, K key); - - /** - * Get the number of fields in a hash. - * - * @param key the key - * @return RedisFuture<Long> integer-reply number of fields in the hash, or {@literal false} when {@code key} does not - * exist. - */ - RedisFuture hlen(K key); - - /** - * Get the values of all the given hash fields. - * - * @param key the key - * @param fields the field type: key - * @return RedisFuture<List<V>> array-reply list of values associated with the given fields, in the same - */ - RedisFuture> hmget(K key, K... fields); - - /** - * Stream over the values of all the given hash fields. 
- * - * @param channel the channel - * @param key the key - * @param fields the fields - * - * @return RedisFuture<Long> count of the keys - */ - RedisFuture hmget(ValueStreamingChannel channel, K key, K... fields); - - /** - * Set multiple hash fields to multiple values. - * - * @param key the key - * @param map the null - * @return RedisFuture<String> simple-string-reply - */ - RedisFuture hmset(K key, Map map); - - /** - * Incrementally iterate hash fields and associated values. - * - * @param key the key - * @return RedisFuture<MapScanCursor<K, V>> scan cursor. - */ - RedisFuture> hscan(K key); - - /** - * Incrementally iterate hash fields and associated values. - * - * @param key the key - * @param scanArgs scan arguments - * @return RedisFuture<MapScanCursor<K, V>> scan cursor. - */ - RedisFuture> hscan(K key, ScanArgs scanArgs); - - /** - * Incrementally iterate hash fields and associated values. - * - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @param scanArgs scan arguments - * @return RedisFuture<MapScanCursor<K, V>> scan cursor. - */ - RedisFuture> hscan(K key, ScanCursor scanCursor, ScanArgs scanArgs); - - /** - * Incrementally iterate hash fields and associated values. - * - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @return RedisFuture<MapScanCursor<K, V>> scan cursor. - */ - RedisFuture> hscan(K key, ScanCursor scanCursor); - - /** - * Incrementally iterate hash fields and associated values. - * - * @param channel streaming channel that receives a call for every key-value pair - * @param key the key - * @return RedisFuture<StreamScanCursor> scan cursor. - */ - RedisFuture hscan(KeyValueStreamingChannel channel, K key); - - /** - * Incrementally iterate hash fields and associated values. - * - * @param channel streaming channel that receives a call for every key-value pair - * @param key the key - * @param scanArgs scan arguments - * @return RedisFuture<StreamScanCursor> scan cursor. - */ - RedisFuture hscan(KeyValueStreamingChannel channel, K key, ScanArgs scanArgs); - - /** - * Incrementally iterate hash fields and associated values. - * - * @param channel streaming channel that receives a call for every key-value pair - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @param scanArgs scan arguments - * @return RedisFuture<StreamScanCursor> scan cursor. - */ - RedisFuture hscan(KeyValueStreamingChannel channel, K key, ScanCursor scanCursor, ScanArgs scanArgs); - - /** - * Incrementally iterate hash fields and associated values. - * - * @param channel streaming channel that receives a call for every key-value pair - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @return RedisFuture<StreamScanCursor> scan cursor. - */ - RedisFuture hscan(KeyValueStreamingChannel channel, K key, ScanCursor scanCursor); - - /** - * Set the string value of a hash field. - * - * @param key the key - * @param field the field type: key - * @param value the value - * @return RedisFuture<Boolean> integer-reply specifically: - * - * {@literal true} if {@code field} is a new field in the hash and {@code value} was set. {@literal false} if - * {@code field} already exists in the hash and the value was updated. - */ - RedisFuture hset(K key, K field, V value); - - /** - * Set the value of a hash field, only if the field does not exist. 
- * - * @param key the key - * @param field the field type: key - * @param value the value - * @return RedisFuture<Boolean> integer-reply specifically: - * - * {@code 1} if {@code field} is a new field in the hash and {@code value} was set. {@code 0} if {@code field} - * already exists in the hash and no operation was performed. - */ - RedisFuture hsetnx(K key, K field, V value); - - /** - * Get the string length of the field value in a hash. - * - * @param key the key - * @param field the field type: key - * @return RedisFuture<Long> integer-reply the string length of the {@code field} value, or {@code 0} when - * {@code field} is not present in the hash or {@code key} does not exist at all. - */ - RedisFuture hstrlen(K key, K field); - - /** - * Get all the values in a hash. - * - * @param key the key - * @return RedisFuture<List<V>> array-reply list of values in the hash, or an empty list when {@code key} does - * not exist. - */ - RedisFuture> hvals(K key); - - /** - * Stream over all the values in a hash. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * - * @return RedisFuture<Long> count of the keys. - */ - RedisFuture hvals(ValueStreamingChannel channel, K key); -} diff --git a/src/main/java/com/lambdaworks/redis/RedisHashesConnection.java b/src/main/java/com/lambdaworks/redis/RedisHashesConnection.java deleted file mode 100644 index 99e2520b1e..0000000000 --- a/src/main/java/com/lambdaworks/redis/RedisHashesConnection.java +++ /dev/null @@ -1,278 +0,0 @@ -package com.lambdaworks.redis; - -import java.util.List; -import java.util.Map; - -import com.lambdaworks.redis.api.sync.RedisHashCommands; -import com.lambdaworks.redis.output.KeyStreamingChannel; -import com.lambdaworks.redis.output.KeyValueStreamingChannel; -import com.lambdaworks.redis.output.ValueStreamingChannel; - -/** - * Synchronous executed commands for Hashes (Key-Value pairs). - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 3.0 - * @deprecated Use {@link RedisHashCommands} - */ -@Deprecated -public interface RedisHashesConnection { - - /** - * Delete one or more hash fields. - * - * @param key the key - * @param fields the field type: key - * @return Long integer-reply the number of fields that were removed from the hash, not including specified but non existing - * fields. - */ - Long hdel(K key, K... fields); - - /** - * Determine if a hash field exists. - * - * @param key the key - * @param field the field type: key - * @return Boolean integer-reply specifically: - * - * {@literal true} if the hash contains {@code field}. {@literal false} if the hash does not contain {@code field}, - * or {@code key} does not exist. - */ - Boolean hexists(K key, K field); - - /** - * Get the value of a hash field. - * - * @param key the key - * @param field the field type: key - * @return V bulk-string-reply the value associated with {@code field}, or {@literal null} when {@code field} is not present - * in the hash or {@code key} does not exist. - */ - V hget(K key, K field); - - /** - * Increment the integer value of a hash field by the given number. - * - * @param key the key - * @param field the field type: key - * @param amount the increment type: long - * @return Long integer-reply the value at {@code field} after the increment operation. - */ - Long hincrby(K key, K field, long amount); - - /** - * Increment the float value of a hash field by the given amount. 
- * - * @param key the key - * @param field the field type: key - * @param amount the increment type: double - * @return Double bulk-string-reply the value of {@code field} after the increment. - */ - Double hincrbyfloat(K key, K field, double amount); - - /** - * Get all the fields and values in a hash. - * - * @param key the key - * @return Map<K,V> array-reply list of fields and their values stored in the hash, or an empty list when {@code key} - * does not exist. - */ - Map hgetall(K key); - - /** - * Stream over all the fields and values in a hash. - * - * @param channel the channel - * @param key the key - * - * @return Long count of the keys. - */ - Long hgetall(KeyValueStreamingChannel channel, K key); - - /** - * Get all the fields in a hash. - * - * @param key the key - * @return List<K> array-reply list of fields in the hash, or an empty list when {@code key} does not exist. - */ - List hkeys(K key); - - /** - * Stream over all the fields in a hash. - * - * @param channel the channel - * @param key the key - * - * @return Long count of the keys. - */ - Long hkeys(KeyStreamingChannel channel, K key); - - /** - * Get the number of fields in a hash. - * - * @param key the key - * @return Long integer-reply number of fields in the hash, or {@code 0} when {@code key} does not exist. - */ - Long hlen(K key); - - /** - * Get the values of all the given hash fields. - * - * @param key the key - * @param fields the field type: key - * @return List<V> array-reply list of values associated with the given fields, in the same - */ - List hmget(K key, K... fields); - - /** - * Stream over the values of all the given hash fields. - * - * @param channel the channel - * @param key the key - * @param fields the fields - * - * @return Long count of the keys - */ - Long hmget(ValueStreamingChannel channel, K key, K... fields); - - /** - * Set multiple hash fields to multiple values. - * - * @param key the key - * @param map the null - * @return String simple-string-reply - */ - String hmset(K key, Map map); - - /** - * Incrementally iterate hash fields and associated values. - * - * @param key the key - * @return MapScanCursor<K, V> map scan cursor. - */ - MapScanCursor hscan(K key); - - /** - * Incrementally iterate hash fields and associated values. - * - * @param key the key - * @param scanArgs scan arguments - * @return MapScanCursor<K, V> map scan cursor. - */ - MapScanCursor hscan(K key, ScanArgs scanArgs); - - /** - * Incrementally iterate hash fields and associated values. - * - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @param scanArgs scan arguments - * @return MapScanCursor<K, V> map scan cursor. - */ - MapScanCursor hscan(K key, ScanCursor scanCursor, ScanArgs scanArgs); - - /** - * Incrementally iterate hash fields and associated values. - * - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @return MapScanCursor<K, V> map scan cursor. - */ - MapScanCursor hscan(K key, ScanCursor scanCursor); - - /** - * Incrementally iterate hash fields and associated values. - * - * @param channel streaming channel that receives a call for every key-value pair - * @param key the key - * @return StreamScanCursor scan cursor. - */ - StreamScanCursor hscan(KeyValueStreamingChannel channel, K key); - - /** - * Incrementally iterate hash fields and associated values. 
- * - * @param channel streaming channel that receives a call for every key-value pair - * @param key the key - * @param scanArgs scan arguments - * @return StreamScanCursor scan cursor. - */ - StreamScanCursor hscan(KeyValueStreamingChannel channel, K key, ScanArgs scanArgs); - - /** - * Incrementally iterate hash fields and associated values. - * - * @param channel streaming channel that receives a call for every key-value pair - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @param scanArgs scan arguments - * @return StreamScanCursor scan cursor. - */ - StreamScanCursor hscan(KeyValueStreamingChannel channel, K key, ScanCursor scanCursor, ScanArgs scanArgs); - - /** - * Incrementally iterate hash fields and associated values. - * - * @param channel streaming channel that receives a call for every key-value pair - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @return StreamScanCursor scan cursor. - */ - StreamScanCursor hscan(KeyValueStreamingChannel channel, K key, ScanCursor scanCursor); - - /** - * Set the string value of a hash field. - * - * @param key the key - * @param field the field type: key - * @param value the value - * @return Boolean integer-reply specifically: - * - * {@literal true} if {@code field} is a new field in the hash and {@code value} was set. {@literal false} if - * {@code field} already exists in the hash and the value was updated. - */ - Boolean hset(K key, K field, V value); - - /** - * Set the value of a hash field, only if the field does not exist. - * - * @param key the key - * @param field the field type: key - * @param value the value - * @return Boolean integer-reply specifically: - * - * {@code 1} if {@code field} is a new field in the hash and {@code value} was set. {@code 0} if {@code field} - * already exists in the hash and no operation was performed. - */ - Boolean hsetnx(K key, K field, V value); - - /** - * Get the string length of the field value in a hash. - * - * @param key the key - * @param field the field type: key - * @return Long integer-reply the string length of the {@code field} value, or {@code 0} when {@code field} is not present - * in the hash or {@code key} does not exist at all. - */ - Long hstrlen(K key, K field); - - /** - * Get all the values in a hash. - * - * @param key the key - * @return List<V> array-reply list of values in the hash, or an empty list when {@code key} does not exist. - */ - List hvals(K key); - - /** - * Stream over all the values in a hash. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * - * @return Long count of the keys. - */ - Long hvals(ValueStreamingChannel channel, K key); -} diff --git a/src/main/java/com/lambdaworks/redis/RedisKeysAsyncConnection.java b/src/main/java/com/lambdaworks/redis/RedisKeysAsyncConnection.java deleted file mode 100644 index bcd7a3efa6..0000000000 --- a/src/main/java/com/lambdaworks/redis/RedisKeysAsyncConnection.java +++ /dev/null @@ -1,411 +0,0 @@ -package com.lambdaworks.redis; - -import java.util.Date; -import java.util.List; - -import com.lambdaworks.redis.output.KeyStreamingChannel; -import com.lambdaworks.redis.output.ValueStreamingChannel; - -/** - * Asynchronous executed commands for Keys (Key manipulation/querying). - * - * @param Key type. - * @param Value type. 
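The hash interfaces deleted above are replaced by `RedisHashAsyncCommands`/`RedisHashCommands`. A short sketch of typical hash usage through the synchronous API, again assuming a `RedisCommands<String, String>` handle; the key and field names are illustrative only:

```java
// HSET/HSETNX report whether the field was newly created
commands.hset("user:42", "name", "Ada");
commands.hsetnx("user:42", "email", "ada@example.com");

// HGETALL returns all fields and values as a Map
Map<String, String> fields = commands.hgetall("user:42");

// HSCAN iterates large hashes incrementally instead of loading them in one reply
MapScanCursor<String, String> cursor = commands.hscan("user:42", ScanArgs.Builder.limit(50));
Map<String, String> page = cursor.getMap();
```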
- * @author Mark Paluch - * @since 3.0 - * @deprecated Use {@literal RedisKeyAsyncCommands} - */ -@Deprecated -public interface RedisKeysAsyncConnection { - - /** - * Delete one or more keys. - * - * @param keys the keys - * @return RedisFuture<Long> integer-reply The number of keys that were removed. - */ - RedisFuture del(K... keys); - - /** - * Unlink one or more keys (non blocking DEL). - * - * @param keys the keys - * @return RedisFuture<Long> integer-reply The number of keys that were removed. - */ - RedisFuture unlink(K... keys); - - /** - * Return a serialized version of the value stored at the specified key. - * - * @param key the key - * @return RedisFuture<byte[]>bulk-string-reply the serialized value. - */ - RedisFuture dump(K key); - - /** - * Determine if a key exists. - * - * @param key the key - * @return RedisFuture<Boolean> integer-reply specifically: - * - * {@literal true} if the key exists. {@literal false} if the key does not exist. - * @deprecated Use {@link #exists(Object[])} instead - */ - @Deprecated - RedisFuture exists(K key); - - /** - * Determine how many keys exist. - * - * @param keys the keys - * @return Long integer-reply specifically: Number of existing keys - */ - RedisFuture exists(K... keys); - - /** - * Set a key's time to live in seconds. - * - * @param key the key - * @param seconds the seconds type: long - * @return RedisFuture<Boolean> integer-reply specifically: - * - * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not - * be set. - */ - RedisFuture expire(K key, long seconds); - - /** - * Set the expiration for a key as a UNIX timestamp. - * - * @param key the key - * @param timestamp the timestamp type: posix time - * @return RedisFuture<Boolean> integer-reply specifically: - * - * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not - * be set (see: {@code EXPIRE}). - */ - RedisFuture expireat(K key, Date timestamp); - - /** - * Set the expiration for a key as a UNIX timestamp. - * - * @param key the key - * @param timestamp the timestamp type: posix time - * @return RedisFuture<Boolean> integer-reply specifically: - * - * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not - * be set (see: {@code EXPIRE}). - */ - RedisFuture expireat(K key, long timestamp); - - /** - * Find all keys matching the given pattern. - * - * @param pattern the pattern type: patternkey (pattern) - * @return RedisFuture<List<K>> array-reply list of keys matching {@code pattern}. - */ - RedisFuture> keys(K pattern); - - /** - * Find all keys matching the given pattern. - * - * @param channel the channel - * @param pattern the pattern - * - * @return RedisFuture<Long> array-reply list of keys matching {@code pattern}. - */ - RedisFuture keys(KeyStreamingChannel channel, K pattern); - - /** - * Atomically transfer a key from a Redis instance to another one. - * - * @param host the host - * @param port the port - * @param key the key - * @param db the database - * @param timeout the timeout in milliseconds - * - * @return RedisFuture<String> simple-string-reply The command returns OK on success. - */ - RedisFuture migrate(String host, int port, K key, int db, long timeout); - - /** - * Atomically transfer one or more keys from a Redis instance to another one. 
- * - * @param host the host - * @param port the port - * @param db the database - * @param timeout the timeout in milliseconds - * @param migrateArgs migrate args that allow to configure further options - * @return String simple-string-reply The command returns OK on success. - */ - RedisFuture migrate(String host, int port, int db, long timeout, MigrateArgs migrateArgs); - - /** - * Move a key to another database. - * - * @param key the key - * @param db the db type: long - * @return RedisFuture<Boolean> integer-reply specifically: - */ - RedisFuture move(K key, int db); - - /** - * returns the kind of internal representation used in order to store the value associated with a key. - * - * @param key the key - * @return RedisFuture<String> - */ - RedisFuture objectEncoding(K key); - - /** - * returns the number of seconds since the object stored at the specified key is idle (not requested by read or write - * operations). - * - * @param key the key - * @return RedisFuture<Long> number of seconds since the object stored at the specified key is idle. - */ - RedisFuture objectIdletime(K key); - - /** - * returns the number of references of the value associated with the specified key. - * - * @param key the key - * @return RedisFuture<Long> - */ - RedisFuture objectRefcount(K key); - - /** - * Remove the expiration from a key. - * - * @param key the key - * @return RedisFuture<Boolean> integer-reply specifically: - * - * {@literal true} if the timeout was removed. {@literal false} if {@code key} does not exist or does not have an - * associated timeout. - */ - RedisFuture persist(K key); - - /** - * Set a key's time to live in milliseconds. - * - * @param key the key - * @param milliseconds the milliseconds type: long - * @return integer-reply, specifically: - * - * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not - * be set. - */ - RedisFuture pexpire(K key, long milliseconds); - - /** - * Set the expiration for a key as a UNIX timestamp specified in milliseconds. - * - * @param key the key - * @param timestamp the milliseconds-timestamp type: posix time - * @return RedisFuture<Boolean> integer-reply specifically: - * - * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not - * be set (see: {@code EXPIRE}). - */ - RedisFuture pexpireat(K key, Date timestamp); - - /** - * Set the expiration for a key as a UNIX timestamp specified in milliseconds. - * - * @param key the key - * @param timestamp the milliseconds-timestamp type: posix time - * @return RedisFuture<Boolean> integer-reply specifically: - * - * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not - * be set (see: {@code EXPIRE}). - */ - RedisFuture pexpireat(K key, long timestamp); - - /** - * Get the time to live for a key in milliseconds. - * - * @param key the key - * @return RedisFuture<Long> integer-reply TTL in milliseconds, or a negative value in order to signal an error (see - * the description above). - */ - RedisFuture pttl(K key); - - /** - * Return a random key from the keyspace. - * - * @return RedisFuture<V> bulk-string-reply the random key, or {@literal null} when the database is empty. - */ - RedisFuture randomkey(); - - /** - * Rename a key. 
- * - * @param key the key - * @param newKey the newkey type: key - * @return RedisFuture<String> simple-string-reply - */ - RedisFuture rename(K key, K newKey); - - /** - * Rename a key, only if the new key does not exist. - * - * @param key the key - * @param newKey the newkey type: key - * @return RedisFuture<Boolean> integer-reply specifically: - * - * {@literal true} if {@code key} was renamed to {@code newkey}. {@literal false} if {@code newkey} already exists. - */ - RedisFuture renamenx(K key, K newKey); - - /** - * Create a key using the provided serialized value, previously obtained using DUMP. - * - * @param key the key - * @param ttl the ttl type: long - * @param value the serialized-value type: string - * @return RedisFuture<String> simple-string-reply The command returns OK on success. - */ - RedisFuture restore(K key, long ttl, byte[] value); - - /** - * Sort the elements in a list, set or sorted set. - * - * @param key the key - * @return RedisFuture<List<V>> array-reply list of sorted elements. - */ - RedisFuture> sort(K key); - - /** - * Sort the elements in a list, set or sorted set. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @return RedisFuture<Long> number of values. - */ - RedisFuture sort(ValueStreamingChannel channel, K key); - - /** - * Sort the elements in a list, set or sorted set. - * - * @param key the key - * @param sortArgs sort arguments - * @return RedisFuture<List<V>> array-reply list of sorted elements. - */ - RedisFuture> sort(K key, SortArgs sortArgs); - - /** - * Sort the elements in a list, set or sorted set. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param sortArgs sort arguments - * @return RedisFuture<Long> number of values. - */ - RedisFuture sort(ValueStreamingChannel channel, K key, SortArgs sortArgs); - - /** - * Sort the elements in a list, set or sorted set. - * - * @param key the key - * @param sortArgs sort arguments - * @param destination the destination key to store sort results - * @return RedisFuture<Long> number of values. - */ - RedisFuture sortStore(K key, SortArgs sortArgs, K destination); - - /** - * Touch one or more keys. Touch sets the last accessed time for a key. Non-exsitent keys wont get created. - * - * @param keys the keys - * @return RedisFuture<Long> integer-reply the number of found keys. - */ - RedisFuture touch(K... keys); - - /** - * Get the time to live for a key. - * - * @param key the key - * @return RedisFuture<Long> integer-reply TTL in seconds, or a negative value in order to signal an error (see the - * description above). - */ - RedisFuture ttl(K key); - - /** - * Determine the type stored at key. - * - * @param key the key - * @return RedisFuture<String> simple-string-reply type of {@code key}, or {@code none} when {@code key} does not - * exist. - */ - RedisFuture type(K key); - - /** - * Incrementally iterate the keys space. - * - * @return RedisFuture<KeyScanCursor<K>> scan cursor. - */ - RedisFuture> scan(); - - /** - * Incrementally iterate the keys space. - * - * @param scanArgs scan arguments - * @return RedisFuture<KeyScanCursor<K>> scan cursor. - */ - RedisFuture> scan(ScanArgs scanArgs); - - /** - * Incrementally iterate the keys space. - * - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @param scanArgs scan arguments - * @return RedisFuture<KeyScanCursor<K>> scan cursor. 
- */ - RedisFuture> scan(ScanCursor scanCursor, ScanArgs scanArgs); - - /** - * Incrementally iterate the keys space. - * - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @return RedisFuture<KeyScanCursor<K>> scan cursor. - */ - RedisFuture> scan(ScanCursor scanCursor); - - /** - * Incrementally iterate the keys space. - * - * @param channel streaming channel that receives a call for every key - * @return RedisFuture<StreamScanCursor> scan cursor. - */ - RedisFuture scan(KeyStreamingChannel channel); - - /** - * Incrementally iterate the keys space. - * - * @param channel streaming channel that receives a call for every key - * @param scanArgs scan arguments - * @return RedisFuture<StreamScanCursor> scan cursor. - */ - RedisFuture scan(KeyStreamingChannel channel, ScanArgs scanArgs); - - /** - * Incrementally iterate the keys space. - * - * @param channel streaming channel that receives a call for every key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @param scanArgs scan arguments - * @return RedisFuture<StreamScanCursor> scan cursor. - */ - RedisFuture scan(KeyStreamingChannel channel, ScanCursor scanCursor, ScanArgs scanArgs); - - /** - * Incrementally iterate the keys space. - * - * @param channel streaming channel that receives a call for every key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @return RedisFuture<StreamScanCursor> scan cursor. - */ - RedisFuture scan(KeyStreamingChannel channel, ScanCursor scanCursor); - -} diff --git a/src/main/java/com/lambdaworks/redis/RedisKeysConnection.java b/src/main/java/com/lambdaworks/redis/RedisKeysConnection.java deleted file mode 100644 index 93855efdeb..0000000000 --- a/src/main/java/com/lambdaworks/redis/RedisKeysConnection.java +++ /dev/null @@ -1,406 +0,0 @@ -package com.lambdaworks.redis; - -import java.util.Date; -import java.util.List; - -import com.lambdaworks.redis.output.KeyStreamingChannel; -import com.lambdaworks.redis.output.ValueStreamingChannel; - -/** - * Synchronous executed commands for Keys (Key manipulation/querying). - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 3.0 - * @deprecated Use {@literal RedisKeyCommands} - */ -@Deprecated -public interface RedisKeysConnection { - - /** - * Delete one or more keys. - * - * @param keys the keys - * @return Long integer-reply The number of keys that were removed. - */ - Long del(K... keys); - - /** - * Unlink one or more keys (non blocking DEL). - * - * @param keys the keys - * @return Long integer-reply The number of keys that were removed. - */ - Long unlink(K... keys); - - /** - * Return a serialized version of the value stored at the specified key. - * - * @param key the key - * @return byte[] bulk-string-reply the serialized value. - */ - byte[] dump(K key); - - /** - * Determine if a key exists. - * - * @param key the key - * @return Boolean integer-reply specifically: - * - * {@literal true} if the key exists. {@literal false} if the key does not exist. - * @deprecated Use {@link #exists(Object[])} instead - */ - @Deprecated - Boolean exists(K key); - - /** - * Determine how many keys exist. - * - * @param keys the keys - * @return Long integer-reply specifically: Number of existing keys - */ - Long exists(K... keys); - - /** - * Set a key's time to live in seconds. 
- * - * @param key the key - * @param seconds the seconds type: long - * @return Boolean integer-reply specifically: - * - * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not - * be set. - */ - Boolean expire(K key, long seconds); - - /** - * Set the expiration for a key as a UNIX timestamp. - * - * @param key the key - * @param timestamp the timestamp type: posix time - * @return Boolean integer-reply specifically: - * - * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not - * be set (see: {@code EXPIRE}). - */ - Boolean expireat(K key, Date timestamp); - - /** - * Set the expiration for a key as a UNIX timestamp. - * - * @param key the key - * @param timestamp the timestamp type: posix time - * @return Boolean integer-reply specifically: - * - * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not - * be set (see: {@code EXPIRE}). - */ - Boolean expireat(K key, long timestamp); - - /** - * Find all keys matching the given pattern. - * - * @param pattern the pattern type: patternkey (pattern) - * @return List<K> array-reply list of keys matching {@code pattern}. - */ - List keys(K pattern); - - /** - * Find all keys matching the given pattern. - * - * @param channel the channel - * @param pattern the pattern - * @return Long array-reply list of keys matching {@code pattern}. - */ - Long keys(KeyStreamingChannel channel, K pattern); - - /** - * Atomically transfer a key from a Redis instance to another one. - * - * @param host the host - * @param port the port - * @param key the key - * @param db the database - * @param timeout the timeout in milliseconds - * @return String simple-string-reply The command returns OK on success. - */ - String migrate(String host, int port, K key, int db, long timeout); - - /** - * Atomically transfer one or more keys from a Redis instance to another one. - * - * @param host the host - * @param port the port - * @param db the database - * @param timeout the timeout in milliseconds - * @param migrateArgs migrate args that allow to configure further options - * @return String simple-string-reply The command returns OK on success. - */ - String migrate(String host, int port, int db, long timeout, MigrateArgs migrateArgs); - - /** - * Move a key to another database. - * - * @param key the key - * @param db the db type: long - * @return Boolean integer-reply specifically: - */ - Boolean move(K key, int db); - - /** - * returns the kind of internal representation used in order to store the value associated with a key. - * - * @param key the key - * @return String - */ - String objectEncoding(K key); - - /** - * returns the number of seconds since the object stored at the specified key is idle (not requested by read or write - * operations). - * - * @param key the key - * @return number of seconds since the object stored at the specified key is idle. - */ - Long objectIdletime(K key); - - /** - * returns the number of references of the value associated with the specified key. - * - * @param key the key - * @return Long - */ - Long objectRefcount(K key); - - /** - * Remove the expiration from a key. - * - * @param key the key - * @return Boolean integer-reply specifically: - * - * {@literal true} if the timeout was removed. {@literal false} if {@code key} does not exist or does not have an - * associated timeout. 
- */ - Boolean persist(K key); - - /** - * Set a key's time to live in milliseconds. - * - * @param key the key - * @param milliseconds the milliseconds type: long - * @return integer-reply, specifically: - * - * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not - * be set. - */ - Boolean pexpire(K key, long milliseconds); - - /** - * Set the expiration for a key as a UNIX timestamp specified in milliseconds. - * - * @param key the key - * @param timestamp the milliseconds-timestamp type: posix time - * @return Boolean integer-reply specifically: - * - * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not - * be set (see: {@code EXPIRE}). - */ - Boolean pexpireat(K key, Date timestamp); - - /** - * Set the expiration for a key as a UNIX timestamp specified in milliseconds. - * - * @param key the key - * @param timestamp the milliseconds-timestamp type: posix time - * @return Boolean integer-reply specifically: - * - * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not - * be set (see: {@code EXPIRE}). - */ - Boolean pexpireat(K key, long timestamp); - - /** - * Get the time to live for a key in milliseconds. - * - * @param key the key - * @return Long integer-reply TTL in milliseconds, or a negative value in order to signal an error (see the description - * above). - */ - Long pttl(K key); - - /** - * Return a random key from the keyspace. - * - * @return V bulk-string-reply the random key, or {@literal null} when the database is empty. - */ - V randomkey(); - - /** - * Rename a key. - * - * @param key the key - * @param newKey the newkey type: key - * @return String simple-string-reply - */ - String rename(K key, K newKey); - - /** - * Rename a key, only if the new key does not exist. - * - * @param key the key - * @param newKey the newkey type: key - * @return Boolean integer-reply specifically: - * - * {@literal true} if {@code key} was renamed to {@code newkey}. {@literal false} if {@code newkey} already exists. - */ - Boolean renamenx(K key, K newKey); - - /** - * Create a key using the provided serialized value, previously obtained using DUMP. - * - * @param key the key - * @param ttl the ttl type: long - * @param value the serialized-value type: string - * @return String simple-string-reply The command returns OK on success. - */ - String restore(K key, long ttl, byte[] value); - - /** - * Sort the elements in a list, set or sorted set. - * - * @param key the key - * @return List<V> array-reply list of sorted elements. - */ - List sort(K key); - - /** - * Sort the elements in a list, set or sorted set. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @return Long number of values. - */ - Long sort(ValueStreamingChannel channel, K key); - - /** - * Sort the elements in a list, set or sorted set. - * - * @param key the key - * @param sortArgs sort arguments - * @return List<V> array-reply list of sorted elements. - */ - List sort(K key, SortArgs sortArgs); - - /** - * Sort the elements in a list, set or sorted set. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param sortArgs sort arguments - * @return Long number of values. - */ - Long sort(ValueStreamingChannel channel, K key, SortArgs sortArgs); - - /** - * Sort the elements in a list, set or sorted set. 
- * - * @param key the key - * @param sortArgs sort arguments - * @param destination the destination key to store sort results - * @return Long number of values. - */ - Long sortStore(K key, SortArgs sortArgs, K destination); - - /** - * Touch one or more keys. Touch sets the last accessed time for a key. Non-exsitent keys wont get created. - * - * @param keys the keys - * @return Long integer-reply the number of found keys. - */ - Long touch(K... keys); - - /** - * Get the time to live for a key. - * - * @param key the key - * @return Long integer-reply TTL in seconds, or a negative value in order to signal an error (see the description above). - */ - Long ttl(K key); - - /** - * Determine the type stored at key. - * - * @param key the key - * @return String simple-string-reply type of {@code key}, or {@code none} when {@code key} does not exist. - */ - String type(K key); - - /** - * Incrementally iterate the keys space. - * - * @return KeyScanCursor<K> scan cursor. - */ - KeyScanCursor scan(); - - /** - * Incrementally iterate the keys space. - * - * @param scanArgs scan arguments - * @return KeyScanCursor<K> scan cursor. - */ - KeyScanCursor scan(ScanArgs scanArgs); - - /** - * Incrementally iterate the keys space. - * - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @param scanArgs scan arguments - * @return KeyScanCursor<K> scan cursor. - */ - KeyScanCursor scan(ScanCursor scanCursor, ScanArgs scanArgs); - - /** - * Incrementally iterate the keys space. - * - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @return KeyScanCursor<K> scan cursor. - */ - KeyScanCursor scan(ScanCursor scanCursor); - - /** - * Incrementally iterate the keys space. - * - * @param channel streaming channel that receives a call for every key - * @return StreamScanCursor scan cursor. - */ - StreamScanCursor scan(KeyStreamingChannel channel); - - /** - * Incrementally iterate the keys space. - * - * @param channel streaming channel that receives a call for every key - * @param scanArgs scan arguments - * @return StreamScanCursor scan cursor. - */ - StreamScanCursor scan(KeyStreamingChannel channel, ScanArgs scanArgs); - - /** - * Incrementally iterate the keys space. - * - * @param channel streaming channel that receives a call for every key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @param scanArgs scan arguments - * @return StreamScanCursor scan cursor. - */ - StreamScanCursor scan(KeyStreamingChannel channel, ScanCursor scanCursor, ScanArgs scanArgs); - - /** - * Incrementally iterate the keys space. - * - * @param channel streaming channel that receives a call for every key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @return StreamScanCursor scan cursor. - */ - StreamScanCursor scan(KeyStreamingChannel channel, ScanCursor scanCursor); -} diff --git a/src/main/java/com/lambdaworks/redis/RedisListsAsyncConnection.java b/src/main/java/com/lambdaworks/redis/RedisListsAsyncConnection.java deleted file mode 100644 index 8bf311ff88..0000000000 --- a/src/main/java/com/lambdaworks/redis/RedisListsAsyncConnection.java +++ /dev/null @@ -1,222 +0,0 @@ -package com.lambdaworks.redis; - -import java.util.List; - -import com.lambdaworks.redis.api.async.RedisListAsyncCommands; -import com.lambdaworks.redis.output.ValueStreamingChannel; - -/** - * Asynchronous executed commands for Lists. - * - * @param Key type. - * @param Value type. 
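For the key-space interfaces, the replacements are `RedisKeyAsyncCommands`/`RedisKeyCommands`. The cursor-based SCAN described above can be driven as in the following sketch; the match pattern and page size are arbitrary, and a `RedisCommands<String, String>` handle is assumed as before:

```java
ScanArgs args = ScanArgs.Builder.limit(100).match("user:*");

// First page of keys
KeyScanCursor<String> cursor = commands.scan(args);
for (String key : cursor.getKeys()) {
    System.out.println(key);
}

// Resume from the returned cursor until the server reports it is finished
while (!cursor.isFinished()) {
    cursor = commands.scan(cursor, args);
    for (String key : cursor.getKeys()) {
        System.out.println(key);
    }
}
```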
- * @author Mark Paluch - * @since 3.0 - * @deprecated Use {@link RedisListAsyncCommands} - */ -@Deprecated -public interface RedisListsAsyncConnection { - - /** - * Remove and get the first element in a list, or block until one is available. - * - * @param timeout the timeout in seconds - * @param keys the keys - * @return RedisFuture<KeyValue<K,V>> array-reply specifically: - * - * A {@literal null} multi-bulk when no element could be popped and the timeout expired. A two-element multi-bulk - * with the first element being the name of the key where an element was popped and the second element being the - * value of the popped element. - */ - RedisFuture> blpop(long timeout, K... keys); - - /** - * Remove and get the last element in a list, or block until one is available. - * - * @param timeout the timeout in seconds - * @param keys the keys - * @return RedisFuture<KeyValue<K,V>> array-reply specifically: - * - * A {@literal null} multi-bulk when no element could be popped and the timeout expired. A two-element multi-bulk - * with the first element being the name of the key where an element was popped and the second element being the - * value of the popped element. - */ - RedisFuture> brpop(long timeout, K... keys); - - /** - * Pop a value from a list, push it to another list and return it; or block until one is available. - * - * @param timeout the timeout in seconds - * @param source the source key - * @param destination the destination type: key - * @return RedisFuture<V> bulk-string-reply the element being popped from {@code source} and pushed to - * {@code destination}. If {@code timeout} is reached, a - */ - RedisFuture brpoplpush(long timeout, K source, K destination); - - /** - * Get an element from a list by its index. - * - * @param key the key - * @param index the index type: long - * @return RedisFuture<V> bulk-string-reply the requested element, or {@literal null} when {@code index} is out of - * range. - */ - RedisFuture lindex(K key, long index); - - /** - * Insert an element before or after another element in a list. - * - * @param key the key - * @param before the before - * @param pivot the pivot - * @param value the value - * @return RedisFuture<Long> integer-reply the length of the list after the insert operation, or {@code -1} when the - * value {@code pivot} was not found. - */ - RedisFuture linsert(K key, boolean before, V pivot, V value); - - /** - * Get the length of a list. - * - * @param key the key - * @return RedisFuture<Long> integer-reply the length of the list at {@code key}. - */ - RedisFuture llen(K key); - - /** - * Remove and get the first element in a list. - * - * @param key the key - * @return RedisFuture<V> bulk-string-reply the value of the first element, or {@literal null} when {@code key} does - * not exist. - */ - RedisFuture lpop(K key); - - /** - * Prepend one or multiple values to a list. - * - * @param key the key - * @param values the value - * @return RedisFuture<Long> integer-reply the length of the list after the push operations. - */ - RedisFuture lpush(K key, V... values); - - /** - * Prepend a value to a list, only if the list exists. - * - * @param key the key - * @param value the value - * @return RedisFuture<Long> integer-reply the length of the list after the push operation. - * @deprecated Use {@link #lpushx(Object, Object[])} - */ - @Deprecated - RedisFuture lpushx(K key, V value); - - /** - * Prepend values to a list, only if the list exists. 
- * - * @param key the key - * @param values the values - * @return Long integer-reply the length of the list after the push operation. - */ - RedisFuture lpushx(K key, V... values); - - /** - * Get a range of elements from a list. - * - * @param key the key - * @param start the start type: long - * @param stop the stop type: long - * @return RedisFuture<List<V>> array-reply list of elements in the specified range. - */ - RedisFuture> lrange(K key, long start, long stop); - - /** - * Get a range of elements from a list. - * - * @param channel the channel - * @param key the key - * @param start the start type: long - * @param stop the stop type: long - * @return RedisFuture<Long> count of elements in the specified range. - */ - RedisFuture lrange(ValueStreamingChannel channel, K key, long start, long stop); - - /** - * Remove elements from a list. - * - * @param key the key - * @param count the count type: long - * @param value the value - * @return RedisFuture<Long> integer-reply the number of removed elements. - */ - RedisFuture lrem(K key, long count, V value); - - /** - * Set the value of an element in a list by its index. - * - * @param key the key - * @param index the index type: long - * @param value the value - * @return RedisFuture<String> simple-string-reply - */ - RedisFuture lset(K key, long index, V value); - - /** - * Trim a list to the specified range. - * - * @param key the key - * @param start the start type: long - * @param stop the stop type: long - * @return RedisFuture<String> simple-string-reply - */ - RedisFuture ltrim(K key, long start, long stop); - - /** - * Remove and get the last element in a list. - * - * @param key the key - * @return RedisFuture<V> bulk-string-reply the value of the last element, or {@literal null} when {@code key} does - * not exist. - */ - RedisFuture rpop(K key); - - /** - * Remove the last element in a list, append it to another list and return it. - * - * @param source the source key - * @param destination the destination type: key - * @return RedisFuture<V> bulk-string-reply the element being popped and pushed. - */ - RedisFuture rpoplpush(K source, K destination); - - /** - * Append one or multiple values to a list. - * - * @param key the key - * @param values the value - * @return RedisFuture<Long> integer-reply the length of the list after the push operation. - */ - RedisFuture rpush(K key, V... values); - - /** - * Append a value to a list, only if the list exists. - * - * @param key the key - * @param value the value - * @return RedisFuture<Long> integer-reply the length of the list after the push operation. - * @deprecated Use {@link #rpushx(java.lang.Object, java.lang.Object[])} - */ - @Deprecated - RedisFuture rpushx(K key, V value); - - /** - * Append values to a list, only if the list exists. - * - * @param key the key - * @param values the values - * @return RedisFuture<Long> integer-reply the length of the list after the push operation. - */ - RedisFuture rpushx(K key, V... values); -} diff --git a/src/main/java/com/lambdaworks/redis/RedisListsConnection.java b/src/main/java/com/lambdaworks/redis/RedisListsConnection.java deleted file mode 100644 index bcdd3b9124..0000000000 --- a/src/main/java/com/lambdaworks/redis/RedisListsConnection.java +++ /dev/null @@ -1,220 +0,0 @@ -package com.lambdaworks.redis; - -import java.util.List; - -import com.lambdaworks.redis.api.sync.RedisListCommands; -import com.lambdaworks.redis.output.ValueStreamingChannel; - -/** - * - * Synchronous executed commands for Lists. 
- * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 3.0 - * @deprecated Use {@literal RedisListCommands} - */ -@Deprecated -public interface RedisListsConnection { - - /** - * Remove and get the first element in a list, or block until one is available. - * - * @param timeout the timeout in seconds - * @param keys the keys - * @return KeyValue<K,V> array-reply specifically: - * - * A {@literal null} multi-bulk when no element could be popped and the timeout expired. A two-element multi-bulk - * with the first element being the name of the key where an element was popped and the second element being the - * value of the popped element. - */ - KeyValue blpop(long timeout, K... keys); - - /** - * Remove and get the last element in a list, or block until one is available. - * - * @param timeout the timeout in seconds - * @param keys the keys - * @return KeyValue<K,V> array-reply specifically: - * - * A {@literal null} multi-bulk when no element could be popped and the timeout expired. A two-element multi-bulk - * with the first element being the name of the key where an element was popped and the second element being the - * value of the popped element. - */ - KeyValue brpop(long timeout, K... keys); - - /** - * Pop a value from a list, push it to another list and return it; or block until one is available. - * - * @param timeout the timeout in seconds - * @param source the source key - * @param destination the destination type: key - * @return V bulk-string-reply the element being popped from {@code source} and pushed to {@code destination}. If - * {@code timeout} is reached, a - */ - V brpoplpush(long timeout, K source, K destination); - - /** - * Get an element from a list by its index. - * - * @param key the key - * @param index the index type: long - * @return V bulk-string-reply the requested element, or {@literal null} when {@code index} is out of range. - */ - V lindex(K key, long index); - - /** - * Insert an element before or after another element in a list. - * - * @param key the key - * @param before the before - * @param pivot the pivot - * @param value the value - * @return Long integer-reply the length of the list after the insert operation, or {@code -1} when the value {@code pivot} - * was not found. - */ - Long linsert(K key, boolean before, V pivot, V value); - - /** - * Get the length of a list. - * - * @param key the key - * @return Long integer-reply the length of the list at {@code key}. - */ - Long llen(K key); - - /** - * Remove and get the first element in a list. - * - * @param key the key - * @return V bulk-string-reply the value of the first element, or {@literal null} when {@code key} does not exist. - */ - V lpop(K key); - - /** - * Prepend one or multiple values to a list. - * - * @param key the key - * @param values the value - * @return Long integer-reply the length of the list after the push operations. - */ - Long lpush(K key, V... values); - - /** - * Prepend a value to a list, only if the list exists. - * - * @param key the key - * @param value the value - * @return Long integer-reply the length of the list after the push operation. - * @deprecated Use {@link #lpushx(Object, Object[])} - */ - @Deprecated - Long lpushx(K key, V value); - - /** - * Prepend values to a list, only if the list exists. - * - * @param key the key - * @param values the values - * @return Long integer-reply the length of the list after the push operation. - */ - Long lpushx(K key, V... values); - - /** - * Get a range of elements from a list. 
- * - * @param key the key - * @param start the start type: long - * @param stop the stop type: long - * @return List<V> array-reply list of elements in the specified range. - */ - List lrange(K key, long start, long stop); - - /** - * Get a range of elements from a list. - * - * @param channel the channel - * @param key the key - * @param start the start type: long - * @param stop the stop type: long - * @return Long count of elements in the specified range. - */ - Long lrange(ValueStreamingChannel channel, K key, long start, long stop); - - /** - * Remove elements from a list. - * - * @param key the key - * @param count the count type: long - * @param value the value - * @return Long integer-reply the number of removed elements. - */ - Long lrem(K key, long count, V value); - - /** - * Set the value of an element in a list by its index. - * - * @param key the key - * @param index the index type: long - * @param value the value - * @return String simple-string-reply - */ - String lset(K key, long index, V value); - - /** - * Trim a list to the specified range. - * - * @param key the key - * @param start the start type: long - * @param stop the stop type: long - * @return String simple-string-reply - */ - String ltrim(K key, long start, long stop); - - /** - * Remove and get the last element in a list. - * - * @param key the key - * @return V bulk-string-reply the value of the last element, or {@literal null} when {@code key} does not exist. - */ - V rpop(K key); - - /** - * Remove the last element in a list, append it to another list and return it. - * - * @param source the source key - * @param destination the destination type: key - * @return V bulk-string-reply the element being popped and pushed. - */ - V rpoplpush(K source, K destination); - - /** - * Append one or multiple values to a list. - * - * @param key the key - * @param values the value - * @return Long integer-reply the length of the list after the push operation. - */ - Long rpush(K key, V... values); - - /** - * Append a value to a list, only if the list exists. - * - * @param key the key - * @param value the value - * @return Long integer-reply the length of the list after the push operation. - * @deprecated Use {@link #rpushx(Object, Object[])} - */ - @Deprecated - Long rpushx(K key, V value); - - /** - * Append values to a list, only if the list exists. - * - * @param key the key - * @param values the values - * @return Long integer-reply the length of the list after the push operation. - */ - Long rpushx(K key, V... values); -} diff --git a/src/main/java/com/lambdaworks/redis/RedisReactiveCommandsImpl.java b/src/main/java/com/lambdaworks/redis/RedisReactiveCommandsImpl.java deleted file mode 100644 index 16f29388d4..0000000000 --- a/src/main/java/com/lambdaworks/redis/RedisReactiveCommandsImpl.java +++ /dev/null @@ -1,34 +0,0 @@ -package com.lambdaworks.redis; - -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.api.rx.RedisReactiveCommands; -import com.lambdaworks.redis.cluster.api.rx.RedisClusterReactiveCommands; -import com.lambdaworks.redis.codec.RedisCodec; - -/** - * A reactive and thread-safe API for a Redis Sentinel connection. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - */ -public class RedisReactiveCommandsImpl extends AbstractRedisReactiveCommands implements - RedisReactiveCommands, RedisClusterReactiveCommands { - - /** - * Initialize a new instance. 
-     *
-     * @param connection the connection to operate on
-     * @param codec the codec for command encoding
-     *
-     */
-    public RedisReactiveCommandsImpl(StatefulRedisConnection<K, V> connection, RedisCodec<K, V> codec) {
-        super(connection, codec);
-    }
-
-    @Override
-    @SuppressWarnings("unchecked")
-    public StatefulRedisConnection<K, V> getStatefulConnection() {
-        return (StatefulRedisConnection<K, V>) connection;
-    }
-}
diff --git a/src/main/java/com/lambdaworks/redis/RedisScriptingAsyncConnection.java b/src/main/java/com/lambdaworks/redis/RedisScriptingAsyncConnection.java
deleted file mode 100644
index 52cb17b7c1..0000000000
--- a/src/main/java/com/lambdaworks/redis/RedisScriptingAsyncConnection.java
+++ /dev/null
@@ -1,96 +0,0 @@
-package com.lambdaworks.redis;
-
-import java.util.List;
-
-import com.lambdaworks.redis.api.async.RedisScriptingAsyncCommands;
-
-/**
- * Asynchronous executed commands for Scripting.
- *
- * @param <K> Key type.
- * @param <V> Value type.
- * @author Mark Paluch
- * @since 3.0
- * @deprecated Use {@literal RedisScriptingAsyncCommands}
- */
-@Deprecated
-public interface RedisScriptingAsyncConnection<K, V> {
-    /**
-     * Execute a Lua script server side.
-     *
-     * @param script Lua 5.1 script.
-     * @param type output type
-     * @param keys key names
-     * @param <T> expected return type
-     * @return script result
-     */
-    <T> RedisFuture<T> eval(String script, ScriptOutputType type, K... keys);
-
-    /**
-     * Execute a Lua script server side.
-     *
-     * @param script Lua 5.1 script.
-     * @param type the type
-     * @param keys the keys
-     * @param values the values
-     * @param <T> expected return type
-     * @return script result
-     */
-    <T> RedisFuture<T> eval(String script, ScriptOutputType type, K[] keys, V... values);
-
-    /**
-     * Evaluates a script cached on the server side by its SHA1 digest
-     *
-     * @param digest SHA1 of the script
-     * @param type the type
-     * @param keys the keys
-     * @param <T> expected return type
-     * @return script result
-     */
-    <T> RedisFuture<T> evalsha(String digest, ScriptOutputType type, K... keys);
-
-    /**
-     * Execute a Lua script server side.
-     *
-     * @param digest SHA1 of the script
-     * @param type the type
-     * @param keys the keys
-     * @param values the values
-     * @param <T> expected return type
-     * @return script result
-     */
-    <T> RedisFuture<T> evalsha(String digest, ScriptOutputType type, K[] keys, V... values);
-
-    /**
-     * Check existence of scripts in the script cache.
-     *
-     * @param digests script digests
-     * @return RedisFuture&lt;List&lt;Boolean&gt;&gt; array-reply The command returns an array of integers that correspond to
-     *         the specified SHA1 digest arguments. For every corresponding SHA1 digest of a script that actually exists in the
-     *         script cache, an 1 is returned, otherwise 0 is returned.
-     */
-    RedisFuture<List<Boolean>> scriptExists(String... digests);
-
-    /**
-     * Remove all the scripts from the script cache.
-     *
-     * @return RedisFuture&lt;String&gt; simple-string-reply
-     */
-    RedisFuture<String> scriptFlush();
-
-    /**
-     * Kill the script currently in execution.
-     *
-     * @return RedisFuture&lt;String&gt; simple-string-reply
-     */
-    RedisFuture<String> scriptKill();
-
-    /**
-     * Load the specified Lua script into the script cache.
-     *
-     * @param script script content
-     * @return RedisFuture&lt;String&gt; bulk-string-reply This command returns the SHA1 digest of the script added into the
-     *         script cache.
-     */
-    RedisFuture<String> scriptLoad(V script);
-}
diff --git a/src/main/java/com/lambdaworks/redis/RedisScriptingConnection.java b/src/main/java/com/lambdaworks/redis/RedisScriptingConnection.java
deleted file mode 100644
index 04e3fd8b75..0000000000
--- a/src/main/java/com/lambdaworks/redis/RedisScriptingConnection.java
+++ /dev/null
@@ -1,95 +0,0 @@
-package com.lambdaworks.redis;
-
-import java.util.List;
-
-import com.lambdaworks.redis.api.sync.RedisScriptingCommands;
-
-/**
- * Synchronous executed commands for Scripting.
- *
- * @param <K> Key type.
- * @param <V> Value type.
- * @author Mark Paluch
- * @since 3.0
- * @deprecated Use {@link RedisScriptingCommands}
- */
-@Deprecated
-public interface RedisScriptingConnection<K, V> {
-    /**
-     * Execute a Lua script server side.
-     *
-     * @param script Lua 5.1 script.
-     * @param type output type
-     * @param keys key names
-     * @param <T> expected return type
-     * @return script result
-     */
-    <T> T eval(String script, ScriptOutputType type, K... keys);
-
-    /**
-     * Execute a Lua script server side.
-     *
-     * @param script Lua 5.1 script.
-     * @param type the type
-     * @param keys the keys
-     * @param values the values
-     * @param <T> expected return type
-     * @return script result
-     */
-    <T> T eval(String script, ScriptOutputType type, K[] keys, V... values);
-
-    /**
-     * Evaluates a script cached on the server side by its SHA1 digest
-     *
-     * @param digest SHA1 of the script
-     * @param type the type
-     * @param keys the keys
-     * @param <T> expected return type
-     * @return script result
-     */
-    <T> T evalsha(String digest, ScriptOutputType type, K... keys);
-
-    /**
-     * Execute a Lua script server side.
-     *
-     * @param digest SHA1 of the script
-     * @param type the type
-     * @param keys the keys
-     * @param values the values
-     * @param <T> expected return type
-     * @return script result
-     */
-    <T> T evalsha(String digest, ScriptOutputType type, K[] keys, V... values);
-
-    /**
-     * Check existence of scripts in the script cache.
-     *
-     * @param digests script digests
-     * @return List&lt;Boolean&gt; array-reply The command returns an array of integers that correspond to the specified SHA1
-     *         digest arguments. For every corresponding SHA1 digest of a script that actually exists in the script cache, an 1
-     *         is returned, otherwise 0 is returned.
-     */
-    List<Boolean> scriptExists(String... digests);
-
-    /**
-     * Remove all the scripts from the script cache.
-     *
-     * @return String simple-string-reply
-     */
-    String scriptFlush();
-
-    /**
-     * Kill the script currently in execution.
-     *
-     * @return String simple-string-reply
-     */
-    String scriptKill();
-
-    /**
-     * Load the specified Lua script into the script cache.
-     *
-     * @param script script content
-     * @return String bulk-string-reply This command returns the SHA1 digest of the script added into the script cache.
-     */
-    String scriptLoad(V script);
-}
diff --git a/src/main/java/com/lambdaworks/redis/RedisSentinelAsyncConnection.java b/src/main/java/com/lambdaworks/redis/RedisSentinelAsyncConnection.java
deleted file mode 100644
index 051b210fb6..0000000000
--- a/src/main/java/com/lambdaworks/redis/RedisSentinelAsyncConnection.java
+++ /dev/null
@@ -1,114 +0,0 @@
-package com.lambdaworks.redis;
-
-import java.io.Closeable;
-import java.net.SocketAddress;
-import java.util.List;
-import java.util.Map;
-
-import com.lambdaworks.redis.sentinel.api.async.RedisSentinelAsyncCommands;
-
-/**
- * Asynchronous executed commands for Sentinel.
- *
- * @param <K> Key type.
- * @param <V> Value type.
- * @author Mark Paluch
- * @since 3.0
- * @deprecated Use {@link RedisSentinelAsyncCommands}
- */
-@Deprecated
-public interface RedisSentinelAsyncConnection<K, V> extends Closeable {
-
-    /**
-     * Return the ip and port number of the master with that name.
-     *
-     * @param key the key
-     * @return Future&lt;SocketAddress&gt;
-     */
-    RedisFuture<SocketAddress> getMasterAddrByName(K key);
-
-    /**
-     * Enumerates all the monitored masters and their states.
-     *
-     * @return RedisFuture&lt;Map&lt;K, V&gt;&gt;
-     */
-    RedisFuture<List<Map<K, V>>> masters();
-
-    /**
-     * Show the state and info of the specified master.
-     *
-     * @param key the key
-     * @return RedisFuture&lt;Map&lt;K, V&gt;&gt;
-     */
-    RedisFuture<Map<K, V>> master(K key);
-
-    /**
-     * Provides a list of slaves for the master with the specified name.
-     *
-     * @param key the key
-     * @return RedisFuture&lt;List&lt;Map&lt;K, V&gt;&gt;&gt;
-     */
-    RedisFuture<List<Map<K, V>>> slaves(K key);
-
-    /**
-     * This command will reset all the masters with matching name.
-     *
-     * @param key the key
-     * @return RedisFuture&lt;Long&gt;
-     */
-    RedisFuture<Long> reset(K key);
-
-    /**
-     * Perform a failover.
-     *
-     * @param key the master id
-     * @return RedisFuture&lt;String&gt;
-     */
-    RedisFuture<String> failover(K key);
-
-    /**
-     * This command tells the Sentinel to start monitoring a new master with the specified name, ip, port, and quorum.
-     *
-     * @param key the key
-     * @param ip the IP address
-     * @param port the port
-     * @param quorum the quorum count
-     * @return RedisFuture&lt;String&gt;
-     */
-    RedisFuture<String> monitor(K key, String ip, int port, int quorum);
-
-    /**
-     * Multiple option / value pairs can be specified (or none at all).
-     *
-     * @param key the key
-     * @param option the option
-     * @param value the value
-     *
-     * @return RedisFuture&lt;String&gt; simple-string-reply {@code OK} if {@code SET} was executed correctly.
-     */
-    RedisFuture<String> set(K key, String option, V value);
-
-    /**
-     * remove the specified master.
-     *
-     * @param key the key
-     * @return RedisFuture&lt;String&gt;
-     */
-    RedisFuture<String> remove(K key);
-
-    /**
-     * Ping the server.
-     *
-     * @return RedisFuture&lt;String&gt; simple-string-reply
-     */
-    RedisFuture<String> ping();
-
-    @Override
-    void close();
-
-    /**
-     *
-     * @return true if the connection is open (connected and not closed).
-     */
-    boolean isOpen();
-}
diff --git a/src/main/java/com/lambdaworks/redis/RedisServerAsyncConnection.java b/src/main/java/com/lambdaworks/redis/RedisServerAsyncConnection.java
deleted file mode 100644
index b4ca65b0c2..0000000000
--- a/src/main/java/com/lambdaworks/redis/RedisServerAsyncConnection.java
+++ /dev/null
@@ -1,304 +0,0 @@
-package com.lambdaworks.redis;
-
-import java.util.Date;
-import java.util.List;
-
-import com.lambdaworks.redis.api.async.RedisServerAsyncCommands;
-import com.lambdaworks.redis.protocol.CommandType;
-
-/**
- * Asynchronous executed commands for Server Control.
- *
- * @param <K> Key type.
- * @param <V> Value type.
- * @author Mark Paluch
- * @since 3.0
- * @deprecated Use {@link RedisServerAsyncCommands}
- */
-@Deprecated
-public interface RedisServerAsyncConnection<K, V> {
-    /**
-     * Asynchronously rewrite the append-only file.
-     *
-     * @return RedisFuture&lt;String&gt; simple-string-reply always {@code OK}.
-     */
-    RedisFuture<String> bgrewriteaof();
-
-    /**
-     * Asynchronously save the dataset to disk.
-     *
-     * @return RedisFuture&lt;String&gt; simple-string-reply
-     */
-    RedisFuture<String> bgsave();
-
-    /**
-     * Get the current connection name.
-     *
-     * @return RedisFuture&lt;K&gt; bulk-string-reply The connection name, or a null bulk reply if no name is set.
-     */
-    RedisFuture<K> clientGetname();
-
-    /**
-     * Set the current connection name.
- * - * @param name the client name - * @return RedisFuture<String> simple-string-reply {@code OK} if the connection name was successfully set. - */ - RedisFuture clientSetname(K name); - - /** - * Kill the connection of a client identified by ip:port. - * - * @param addr the addr in format ip:port - * @return RedisFuture<String> simple-string-reply {@code OK} if the connection exists and has been closed - */ - RedisFuture clientKill(String addr); - - /** - * Kill connections of clients which are filtered by {@code killArgs} - * - * @param killArgs args for the kill operation - * @return RedisFuture<Long> integer-reply number of killed connections - */ - RedisFuture clientKill(KillArgs killArgs); - - /** - * Stop processing commands from clients for some time. - * - * @param timeout the timeout value in milliseconds - * @return RedisFuture<String> simple-string-reply The command returns OK or an error if the timeout is invalid. - */ - RedisFuture clientPause(long timeout); - - /** - * Get the list of client connections. - * - * @return RedisFuture<String> bulk-string-reply a unique string, formatted as follows: One client connection per line - * (separated by LF), each line is composed of a succession of property=value fields separated by a space character. - */ - RedisFuture clientList(); - - /** - * Returns an array reply of details about all Redis commands. - * - * @return RedisFuture<List<Object>> array-reply - */ - RedisFuture> command(); - - /** - * Returns an array reply of details about the requested commands. - * - * @param commands the commands to query for - * @return RedisFuture<List<Object>> array-reply - */ - RedisFuture> commandInfo(String... commands); - - /** - * Returns an array reply of details about the requested commands. - * - * @param commands the commands to query for - * @return RedisFuture<List<Object>> array-reply - */ - RedisFuture> commandInfo(CommandType... commands); - - /** - * Get total number of Redis commands. - * - * @return RedisFuture<Long> integer-reply of number of total commands in this Redis server. - */ - RedisFuture commandCount(); - - /** - * Get the value of a configuration parameter. - * - * @param parameter the parameter - * @return RedisFuture<List<String>> bulk-string-reply - */ - RedisFuture> configGet(String parameter); - - /** - * Reset the stats returned by INFO. - * - * @return RedisFuture<String> simple-string-reply always {@code OK}. - */ - RedisFuture configResetstat(); - - /** - * Rewrite the configuration file with the in memory configuration. - * - * @return RedisFuture<String> simple-string-reply {@code OK} when the configuration was rewritten properly. Otherwise - * an error is returned. - */ - RedisFuture configRewrite(); - - /** - * Set a configuration parameter to the given value. - * - * @param parameter the parameter name - * @param value the parameter value - * @return RedisFuture<String> simple-string-reply: {@code OK} when the configuration was set properly. Otherwise an - * error is returned. - */ - RedisFuture configSet(String parameter, String value); - - /** - * Return the number of keys in the selected database. - * - * @return RedisFuture<Long> integer-reply - */ - RedisFuture dbsize(); - - /** - * Get debugging information about a key. - * - * @param key the key - * @return RedisFuture<String> simple-string-reply - */ - RedisFuture debugObject(K key); - - /** - * Make the server crash: Invalid pointer access. - */ - void debugSegfault(); - - /** - * Make the server crash: Out of memory. 
- */ - void debugOom(); - - /** - * Get debugging information about the internal hash-table state. - * - * @param db the database number - * @return String simple-string-reply - */ - RedisFuture debugHtstats(int db); - - /** - * Remove all keys from all databases. - * - * @return RedisFuture<String> simple-string-reply - */ - RedisFuture flushall(); - - /** - * Remove all keys asynchronously from all databases. - * - * @return String simple-string-reply - */ - RedisFuture flushallAsync(); - - /** - * Remove all keys from the current database. - * - * @return RedisFuture<String> simple-string-reply - */ - RedisFuture flushdb(); - - /** - * Remove all keys asynchronously from the current database. - * - * @return String simple-string-reply - */ - RedisFuture flushdbAsync(); - - /** - * Get information and statistics about the server. - * - * @return RedisFuture<String> bulk-string-reply as a collection of text lines. - */ - RedisFuture info(); - - /** - * Get information and statistics about the server. - * - * @param section the section type: string - * @return RedisFuture<String> bulk-string-reply as a collection of text lines. - */ - RedisFuture info(String section); - - /** - * Get the UNIX time stamp of the last successful save to disk. - * - * @return RedisFuture<Date> integer-reply an UNIX time stamp. - */ - RedisFuture lastsave(); - - /** - * Synchronously save the dataset to disk. - * - * @return RedisFuture<String> simple-string-reply The commands returns OK on success. - */ - RedisFuture save(); - - /** - * Synchronously save the dataset to disk and then shut down the server. - * - * @param save {@literal true} force save operation - */ - void shutdown(boolean save); - - /** - * Make the server a slave of another instance. - * - * @param host the host type: string - * @param port the port type: string - * @return RedisFuture<String> simple-string-reply - */ - RedisFuture slaveof(String host, int port); - - /** - * Promote server as master. - * - * @return RedisFuture<String> simple-string-reply - */ - RedisFuture slaveofNoOne(); - - /** - * Read the slow log. - * - * @return List<Object> deeply nested multi bulk replies - */ - RedisFuture> slowlogGet(); - - /** - * Read the slow log. - * - * @param count the count - * @return List<Object> deeply nested multi bulk replies - */ - RedisFuture> slowlogGet(int count); - - /** - * Obtaining the current length of the slow log. - * - * @return RedisFuture<Long> length of the slow log. - */ - RedisFuture slowlogLen(); - - /** - * Resetting the slow log. - * - * @return RedisFuture<String> simple-string-reply The commands returns OK on success. - */ - RedisFuture slowlogReset(); - - /** - * Internal command used for replication. - * - * @return RedisFuture<String> - */ - @Deprecated - RedisFuture sync(); - - /** - * Return the current server time. - * - * @return RedisFuture<List<V>> array-reply specifically: - * - * A multi bulk reply containing two elements: - * - * unix time in seconds. microseconds. 
- */ - RedisFuture> time(); -} diff --git a/src/main/java/com/lambdaworks/redis/RedisServerConnection.java b/src/main/java/com/lambdaworks/redis/RedisServerConnection.java deleted file mode 100644 index 0a3f393fa0..0000000000 --- a/src/main/java/com/lambdaworks/redis/RedisServerConnection.java +++ /dev/null @@ -1,304 +0,0 @@ -package com.lambdaworks.redis; - -import java.util.Date; -import java.util.List; - -import com.lambdaworks.redis.api.sync.RedisServerCommands; -import com.lambdaworks.redis.protocol.CommandType; - -/** - * Synchronous executed commands for Server Control. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 3.0 - * @deprecated Use {@link RedisServerCommands} - */ -@Deprecated -public interface RedisServerConnection extends RedisServerCommands { - /** - * Asynchronously rewrite the append-only file. - * - * @return String simple-string-reply always {@code OK}. - */ - String bgrewriteaof(); - - /** - * Asynchronously save the dataset to disk. - * - * @return String simple-string-reply - */ - String bgsave(); - - /** - * Get the current connection name. - * - * @return K bulk-string-reply The connection name, or a null bulk reply if no name is set. - */ - K clientGetname(); - - /** - * Set the current connection name. - * - * @param name the client name - * @return simple-string-reply {@code OK} if the connection name was successfully set. - */ - String clientSetname(K name); - - /** - * Kill the connection of a client identified by ip:port. - * - * @param addr ip:port - * @return String simple-string-reply {@code OK} if the connection exists and has been closed - */ - String clientKill(String addr); - - /** - * Kill connections of clients which are filtered by {@code killArgs} - * - * @param killArgs args for the kill operation - * @return Long integer-reply number of killed connections - */ - Long clientKill(KillArgs killArgs); - - /** - * Stop processing commands from clients for some time. - * - * @param timeout the timeout value in milliseconds - * @return String simple-string-reply The command returns OK or an error if the timeout is invalid. - */ - String clientPause(long timeout); - - /** - * Get the list of client connections. - * - * @return String bulk-string-reply a unique string, formatted as follows: One client connection per line (separated by LF), - * each line is composed of a succession of property=value fields separated by a space character. - */ - String clientList(); - - /** - * Returns an array reply of details about all Redis commands. - * - * @return List<Object> array-reply - */ - List command(); - - /** - * Returns an array reply of details about the requested commands. - * - * @param commands the commands to query for - * @return List<Object> array-reply - */ - List commandInfo(String... commands); - - /** - * Returns an array reply of details about the requested commands. - * - * @param commands the commands to query for - * @return List<Object> array-reply - */ - List commandInfo(CommandType... commands); - - /** - * Get total number of Redis commands. - * - * @return Long integer-reply of number of total commands in this Redis server. - */ - Long commandCount(); - - /** - * Get the value of a configuration parameter. - * - * @param parameter name of the parameter - * @return List<String> bulk-string-reply - */ - List configGet(String parameter); - - /** - * Reset the stats returned by INFO. - * - * @return String simple-string-reply always {@code OK}. 
- */ - String configResetstat(); - - /** - * Rewrite the configuration file with the in memory configuration. - * - * @return String simple-string-reply {@code OK} when the configuration was rewritten properly. Otherwise an error is - * returned. - */ - String configRewrite(); - - /** - * Set a configuration parameter to the given value. - * - * @param parameter the parameter name - * @param value the parameter value - * @return String simple-string-reply: {@code OK} when the configuration was set properly. Otherwise an error is returned. - */ - String configSet(String parameter, String value); - - /** - * Return the number of keys in the selected database. - * - * @return Long integer-reply - */ - Long dbsize(); - - /** - * Get debugging information about a key. - * - * @param key the key - * @return String simple-string-reply - */ - String debugObject(K key); - - /** - * Make the server crash: Invalid pointer access. - */ - void debugSegfault(); - - /** - * Make the server crash: Out of memory. - */ - void debugOom(); - - /** - * Get debugging information about the internal hash-table state. - * - * @param db the database number - * @return String simple-string-reply - */ - String debugHtstats(int db); - - /** - * Remove all keys from all databases. - * - * @return String simple-string-reply - */ - String flushall(); - - /** - * Remove all keys asynchronously from all databases. - * - * @return String simple-string-reply - */ - String flushallAsync(); - - /** - * Remove all keys from the current database. - * - * @return String simple-string-reply - */ - String flushdb(); - - /** - * Remove all keys asynchronously from the current database. - * - * @return String simple-string-reply - */ - String flushdbAsync(); - - /** - * Get information and statistics about the server. - * - * @return String bulk-string-reply as a collection of text lines. - */ - String info(); - - /** - * Get information and statistics about the server. - * - * @param section the section type: string - * @return String bulk-string-reply as a collection of text lines. - */ - String info(String section); - - /** - * Get the UNIX time stamp of the last successful save to disk. - * - * @return Date integer-reply an UNIX time stamp. - */ - Date lastsave(); - - /** - * Synchronously save the dataset to disk. - * - * @return String simple-string-reply The commands returns OK on success. - */ - String save(); - - /** - * Synchronously save the dataset to disk and then shut down the server. - * - * @param save {@literal true} force save operation - */ - void shutdown(boolean save); - - /** - * Make the server a slave of another instance, or promote it as master. - * - * @param host the host type: string - * @param port the port type: string - * @return String simple-string-reply - */ - String slaveof(String host, int port); - - /** - * Promote server as master. - * - * @return String simple-string-reply - */ - String slaveofNoOne(); - - /** - * Read the slow log. - * - * @return List<Object> deeply nested multi bulk replies - */ - List slowlogGet(); - - /** - * Read the slow log. - * - * @param count the count - * @return List<Object> deeply nested multi bulk replies - */ - List slowlogGet(int count); - - /** - * Obtaining the current length of the slow log. - * - * @return Long length of the slow log. - */ - Long slowlogLen(); - - /** - * Resetting the slow log. - * - * @return String simple-string-reply The commands returns OK on success. - */ - String slowlogReset(); - - /** - * Internal command used for replication. 
- * - * @return String simple-string-reply - */ - @Deprecated - String sync(); - - /** - * Return the current server time. - * - * @return List<V> array-reply specifically: - * - * A multi bulk reply containing two elements: - * - * unix time in seconds. microseconds. - */ - List time(); - -} diff --git a/src/main/java/com/lambdaworks/redis/RedisSetsAsyncConnection.java b/src/main/java/com/lambdaworks/redis/RedisSetsAsyncConnection.java deleted file mode 100644 index 1a5c4aa759..0000000000 --- a/src/main/java/com/lambdaworks/redis/RedisSetsAsyncConnection.java +++ /dev/null @@ -1,292 +0,0 @@ -package com.lambdaworks.redis; - -import java.util.List; -import java.util.Set; - -import com.lambdaworks.redis.api.async.RedisSetAsyncCommands; -import com.lambdaworks.redis.output.ValueStreamingChannel; - -/** - * Asynchronous executed commands for Sets. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 3.0 - * @deprecated Use {@link RedisSetAsyncCommands} - */ -@Deprecated -public interface RedisSetsAsyncConnection { - - /** - * Add one or more members to a set. - * - * @param key the key - * @param members the member type: value - * @return RedisFuture<Long> integer-reply the number of elements that were added to the set, not including all the - * elements already present into the set. - */ - RedisFuture sadd(K key, V... members); - - /** - * Get the number of members in a set. - * - * @param key the key - * @return RedisFuture<Long> integer-reply the cardinality (number of elements) of the set, or {@literal false} if - * {@code key} does not exist. - */ - RedisFuture scard(K key); - - /** - * Subtract multiple sets. - * - * @param keys the key - * @return RedisFuture<Set<V>> array-reply list with members of the resulting set. - */ - RedisFuture> sdiff(K... keys); - - /** - * Subtract multiple sets. - * - * @param channel streaming channel that receives a call for every value - * @param keys the keys - * @return RedisFuture<Long> count of members of the resulting set. - */ - RedisFuture sdiff(ValueStreamingChannel channel, K... keys); - - /** - * Subtract multiple sets and store the resulting set in a key. - * - * @param destination the destination type: key - * @param keys the key - * @return RedisFuture<Long> integer-reply the number of elements in the resulting set. - */ - RedisFuture sdiffstore(K destination, K... keys); - - /** - * Intersect multiple sets. - * - * @param keys the key - * @return RedisFuture<Set<V>> array-reply list with members of the resulting set. - */ - RedisFuture> sinter(K... keys); - - /** - * Intersect multiple sets. - * - * @param channel streaming channel that receives a call for every value - * @param keys the keys - * @return RedisFuture<Long> count of members of the resulting set. - */ - RedisFuture sinter(ValueStreamingChannel channel, K... keys); - - /** - * Intersect multiple sets and store the resulting set in a key. - * - * @param destination the destination type: key - * @param keys the key - * @return RedisFuture<Long> integer-reply the number of elements in the resulting set. - */ - RedisFuture sinterstore(K destination, K... keys); - - /** - * Determine if a given value is a member of a set. - * - * @param key the key - * @param member the member type: value - * @return RedisFuture<Boolean> integer-reply specifically: - * - * {@literal true} if the element is a member of the set. {@literal false} if the element is not a member of the - * set, or if {@code key} does not exist. 
- */ - RedisFuture sismember(K key, V member); - - /** - * Move a member from one set to another. - * - * @param source the source key - * @param destination the destination type: key - * @param member the member type: value - * @return RedisFuture<Boolean> integer-reply specifically: - * - * {@literal true} if the element is moved. {@literal false} if the element is not a member of {@code source} and no - * operation was performed. - */ - RedisFuture smove(K source, K destination, V member); - - /** - * Get all the members in a set. - * - * @param key the key - * @return RedisFuture<Set<V>> array-reply all elements of the set. - */ - RedisFuture> smembers(K key); - - /** - * Get all the members in a set. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @return RedisFuture<Long> count of members of the resulting set. - */ - RedisFuture smembers(ValueStreamingChannel channel, K key); - - /** - * Remove and return a random member from a set. - * - * @param key the key - * @return RedisFuture<V> bulk-string-reply the removed element, or {@literal null} when {@code key} does not exist. - */ - RedisFuture spop(K key); - - /** - * Remove and return one or multiple random members from a set. - * - * @param key the key - * @param count number of members to pop - * @return RedisFuture<Set<V>> bulk-string-reply the removed element, or {@literal null} when {@code key} does not exist. - */ - RedisFuture> spop(K key, long count); - - /** - * Get one or multiple random members from a set. - * - * @param key the key - * - * @return RedisFuture<V> bulk-string-reply without the additional {@code count} argument the command returns a Bulk - * Reply with the randomly selected element, or {@literal null} when {@code key} does not exist. - */ - RedisFuture srandmember(K key); - - /** - * Get one or multiple random members from a set. - * - * @param key the key - * @param count the count type: long - * @return RedisFuture<Set<V>> bulk-string-reply without the additional {@code count} argument the command - * returns a Bulk Reply with the randomly selected element, or {@literal null} when {@code key} does not exist. - */ - RedisFuture> srandmember(K key, long count); - - /** - * Get one or multiple random members from a set. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param count the count - * @return RedisFuture<Long> count of members of the resulting set. - */ - RedisFuture srandmember(ValueStreamingChannel channel, K key, long count); - - /** - * Remove one or more members from a set. - * - * @param key the key - * @param members the member type: value - * @return RedisFuture<Long> integer-reply the number of members that were removed from the set, not including non - * existing members. - */ - RedisFuture srem(K key, V... members); - - /** - * Add multiple sets. - * - * @param keys the key - * @return RedisFuture<Set<V>> array-reply list with members of the resulting set. - */ - RedisFuture> sunion(K... keys); - - /** - * Add multiple sets. - * - * @param channel streaming channel that receives a call for every value - * @param keys the key - * @return RedisFuture<Long> count of members of the resulting set. - */ - RedisFuture sunion(ValueStreamingChannel channel, K... keys); - - /** - * Add multiple sets and store the resulting set in a key. 
- * - * @param destination the destination type: key - * @param keys the key - * @return RedisFuture<Long> integer-reply the number of elements in the resulting set. - */ - RedisFuture sunionstore(K destination, K... keys); - - /** - * Incrementally iterate Set elements. - * - * @param key the key - * @return RedisFuture<ValueScanCursor>V<> scan cursor. - */ - RedisFuture> sscan(K key); - - /** - * Incrementally iterate Set elements. - * - * @param key the key - * @param scanArgs scan arguments - * @return RedisFuture<ValueScanCursor>V<> scan cursor. - */ - RedisFuture> sscan(K key, ScanArgs scanArgs); - - /** - * Incrementally iterate Set elements. - * - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @param scanArgs scan arguments - * @return RedisFuture<ValueScanCursor>V<> scan cursor. - */ - RedisFuture> sscan(K key, ScanCursor scanCursor, ScanArgs scanArgs); - - /** - * Incrementally iterate Set elements. - * - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @return RedisFuture<ValueScanCursor>V<> scan cursor. - */ - RedisFuture> sscan(K key, ScanCursor scanCursor); - - /** - * Incrementally iterate Set elements. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @return RedisFuture<StreamScanCursor> scan cursor. - */ - RedisFuture sscan(ValueStreamingChannel channel, K key); - - /** - * Incrementally iterate Set elements. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param scanArgs scan arguments - * @return RedisFuture<StreamScanCursor> scan cursor. - */ - RedisFuture sscan(ValueStreamingChannel channel, K key, ScanArgs scanArgs); - - /** - * Incrementally iterate Set elements. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @param scanArgs scan arguments - * @return RedisFuture<StreamScanCursor> scan cursor. - */ - RedisFuture sscan(ValueStreamingChannel channel, K key, ScanCursor scanCursor, ScanArgs scanArgs); - - /** - * Incrementally iterate Set elements. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @return RedisFuture<StreamScanCursor> scan cursor. - */ - RedisFuture sscan(ValueStreamingChannel channel, K key, ScanCursor scanCursor); -} diff --git a/src/main/java/com/lambdaworks/redis/RedisSetsConnection.java b/src/main/java/com/lambdaworks/redis/RedisSetsConnection.java deleted file mode 100644 index 9104cb8d02..0000000000 --- a/src/main/java/com/lambdaworks/redis/RedisSetsConnection.java +++ /dev/null @@ -1,291 +0,0 @@ -package com.lambdaworks.redis; - -import java.util.List; -import java.util.Set; - -import com.lambdaworks.redis.api.sync.RedisSetCommands; -import com.lambdaworks.redis.output.ValueStreamingChannel; - -/** - * Synchronous executed commands for Sets. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 3.0 - * @deprecated Use {@link RedisSetCommands} - */ -@Deprecated -public interface RedisSetsConnection { - - /** - * Add one or more members to a set. 
- * - * @param key the key - * @param members the member type: value - * @return Long integer-reply the number of elements that were added to the set, not including all the elements already - * present into the set. - */ - Long sadd(K key, V... members); - - /** - * Get the number of members in a set. - * - * @param key the key - * @return Long integer-reply the cardinality (number of elements) of the set, or {@literal false} if {@code key} does not - * exist. - */ - Long scard(K key); - - /** - * Subtract multiple sets. - * - * @param keys the key - * @return Set<V> array-reply list with members of the resulting set. - */ - Set sdiff(K... keys); - - /** - * Subtract multiple sets. - * - * @param channel the channel - * @param keys the keys - * @return Long count of members of the resulting set. - */ - Long sdiff(ValueStreamingChannel channel, K... keys); - - /** - * Subtract multiple sets and store the resulting set in a key. - * - * @param destination the destination type: key - * @param keys the key - * @return Long integer-reply the number of elements in the resulting set. - */ - Long sdiffstore(K destination, K... keys); - - /** - * Intersect multiple sets. - * - * @param keys the key - * @return Set<V> array-reply list with members of the resulting set. - */ - Set sinter(K... keys); - - /** - * Intersect multiple sets. - * - * @param channel the channel - * @param keys the keys - * @return Long count of members of the resulting set. - */ - Long sinter(ValueStreamingChannel channel, K... keys); - - /** - * Intersect multiple sets and store the resulting set in a key. - * - * @param destination the destination type: key - * @param keys the key - * @return Long integer-reply the number of elements in the resulting set. - */ - Long sinterstore(K destination, K... keys); - - /** - * Determine if a given value is a member of a set. - * - * @param key the key - * @param member the member type: value - * @return Boolean integer-reply specifically: - * - * {@literal true} if the element is a member of the set. {@literal false} if the element is not a member of the - * set, or if {@code key} does not exist. - */ - Boolean sismember(K key, V member); - - /** - * Move a member from one set to another. - * - * @param source the source key - * @param destination the destination type: key - * @param member the member type: value - * @return Boolean integer-reply specifically: - * - * {@literal true} if the element is moved. {@literal false} if the element is not a member of {@code source} and no - * operation was performed. - */ - Boolean smove(K source, K destination, V member); - - /** - * Get all the members in a set. - * - * @param key the key - * @return Set<V> array-reply all elements of the set. - */ - Set smembers(K key); - - /** - * Get all the members in a set. - * - * @param channel the channel - * @param key the keys - * @return Long count of members of the resulting set. - */ - Long smembers(ValueStreamingChannel channel, K key); - - /** - * Remove and return a random member from a set. - * - * @param key the key - * @return V bulk-string-reply the removed element, or {@literal null} when {@code key} does not exist. - */ - V spop(K key); - - /** - * Remove and return one or multiple random members from a set. - * - * @param key the key - * @param count number of members to pop - * @return Set<V> bulk-string-reply the removed element, or {@literal null} when {@code key} does not exist. - */ - Set spop(K key, long count); - - /** - * Get one or multiple random members from a set. 
- * - * @param key the key - * - * @return V bulk-string-reply without the additional {@code count} argument the command returns a Bulk Reply with the - * randomly selected element, or {@literal null} when {@code key} does not exist. - */ - V srandmember(K key); - - /** - * Get one or multiple random members from a set. - * - * @param key the key - * @param count the count type: long - * @return Set<V> bulk-string-reply without the additional {@code count} argument the command returns a Bulk Reply - * with the randomly selected element, or {@literal null} when {@code key} does not exist. - */ - List srandmember(K key, long count); - - /** - * Get one or multiple random members from a set. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param count the count - * @return Long count of members of the resulting set. - */ - Long srandmember(ValueStreamingChannel channel, K key, long count); - - /** - * Remove one or more members from a set. - * - * @param key the key - * @param members the member type: value - * @return Long integer-reply the number of members that were removed from the set, not including non existing members. - */ - Long srem(K key, V... members); - - /** - * Add multiple sets. - * - * @param keys the key - * @return Set<V> array-reply list with members of the resulting set. - */ - Set sunion(K... keys); - - /** - * Add multiple sets. - * - * @param channel streaming channel that receives a call for every value - * @param keys the keys - * @return Long count of members of the resulting set. - */ - Long sunion(ValueStreamingChannel channel, K... keys); - - /** - * Add multiple sets and store the resulting set in a key. - * - * @param destination the destination type: key - * @param keys the key - * @return Long integer-reply the number of elements in the resulting set. - */ - Long sunionstore(K destination, K... keys); - - /** - * Incrementally iterate Set elements. - * - * @param key the key - * @return ValueScanCursor<V> scan cursor. - */ - ValueScanCursor sscan(K key); - - /** - * Incrementally iterate Set elements. - * - * @param key the key - * @param scanArgs scan arguments - * @return ValueScanCursor<V> scan cursor. - */ - ValueScanCursor sscan(K key, ScanArgs scanArgs); - - /** - * Incrementally iterate Set elements. - * - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @param scanArgs scan arguments - * @return ValueScanCursor<V> scan cursor. - */ - ValueScanCursor sscan(K key, ScanCursor scanCursor, ScanArgs scanArgs); - - /** - * Incrementally iterate Set elements. - * - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @return ValueScanCursor<V> scan cursor. - */ - ValueScanCursor sscan(K key, ScanCursor scanCursor); - - /** - * Incrementally iterate Set elements. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @return StreamScanCursor scan cursor. - */ - StreamScanCursor sscan(ValueStreamingChannel channel, K key); - - /** - * Incrementally iterate Set elements. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param scanArgs scan arguments - * @return StreamScanCursor scan cursor. - */ - StreamScanCursor sscan(ValueStreamingChannel channel, K key, ScanArgs scanArgs); - - /** - * Incrementally iterate Set elements. 
- * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @param scanArgs scan arguments - * @return StreamScanCursor scan cursor. - */ - StreamScanCursor sscan(ValueStreamingChannel channel, K key, ScanCursor scanCursor, ScanArgs scanArgs); - - /** - * Incrementally iterate Set elements. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @return StreamScanCursor scan cursor. - */ - StreamScanCursor sscan(ValueStreamingChannel channel, K key, ScanCursor scanCursor); -} diff --git a/src/main/java/com/lambdaworks/redis/RedisSortedSetsAsyncConnection.java b/src/main/java/com/lambdaworks/redis/RedisSortedSetsAsyncConnection.java deleted file mode 100644 index e30e008c01..0000000000 --- a/src/main/java/com/lambdaworks/redis/RedisSortedSetsAsyncConnection.java +++ /dev/null @@ -1,812 +0,0 @@ -package com.lambdaworks.redis; - -import java.util.List; - -import com.lambdaworks.redis.api.async.RedisSortedSetAsyncCommands; -import com.lambdaworks.redis.output.ScoredValueStreamingChannel; -import com.lambdaworks.redis.output.ValueStreamingChannel; - -/** - * Asynchronous executed commands for Sorted Sets. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 3.0 - * @deprecated Use {@literal RedisSortedSetAsyncCommands} - */ -@Deprecated -public interface RedisSortedSetsAsyncConnection { - /** - * Add one or more members to a sorted set, or update its score if it already exists. - * - * @param key the key - * @param score the score - * @param member the member - * - * @return RedisFuture<Long> integer-reply specifically: - * - * The number of elements added to the sorted sets, not including elements already existing for which the score was - * updated. - */ - RedisFuture zadd(K key, double score, V member); - - /** - * Add one or more members to a sorted set, or update its score if it already exists. - * - * @param key the key - * @param scoresAndValues the scoresAndValue tuples (score,value,score,value,...) - * @return RedisFuture<Long> integer-reply specifically: - * - * The number of elements added to the sorted sets, not including elements already existing for which the score was - * updated. - */ - RedisFuture zadd(K key, Object... scoresAndValues); - - /** - * Add one or more members to a sorted set, or update its score if it already exists. - * - * @param key the key - * @param zAddArgs arguments for zadd - * @param score the score - * @param member the member - * - * @return RedisFuture<Long> integer-reply specifically: - * - * The number of elements added to the sorted sets, not including elements already existing for which the score was - * updated. - */ - RedisFuture zadd(K key, ZAddArgs zAddArgs, double score, V member); - - /** - * Add one or more members to a sorted set, or update its score if it already exists. - * - * @param key the key - * @param zAddArgs arguments for zadd - * @param scoresAndValues the scoresAndValue tuples (score,value,score,value,...) - * @return RedisFuture<Long> integer-reply specifically: - * - * The number of elements added to the sorted sets, not including elements already existing for which the score was - * updated. - */ - RedisFuture zadd(K key, ZAddArgs zAddArgs, Object... 
scoresAndValues); - - /** - * ZADD acts like ZINCRBY - * - * @param key the key - * @param score the score - * @param member the member - * - * @return RedisFuture<Long> integer-reply specifically: - * - * The total number of elements changed - */ - RedisFuture zaddincr(K key, double score, V member); - - /** - * Get the number of members in a sorted set. - * - * @param key the key - * @return RedisFuture<Long> integer-reply the cardinality (number of elements) of the sorted set, or {@literal false} - * if {@code key} does not exist. - */ - RedisFuture zcard(K key); - - /** - * Count the members in a sorted set with scores within the given values. - * - * @param key the key - * @param min min score - * @param max max score - * @return RedisFuture<Long> integer-reply the number of elements in the specified score range. - */ - RedisFuture zcount(K key, double min, double max); - - /** - * Count the members in a sorted set with scores within the given values. - * - * @param key the key - * @param min min score - * @param max max score - * @return RedisFuture<Long> integer-reply the number of elements in the specified score range. - */ - RedisFuture zcount(K key, String min, String max); - - /** - * Increment the score of a member in a sorted set. - * - * @param key the key - * @param amount the increment type: long - * @param member the member type: key - * @return RedisFuture<Double;> bulk-string-reply the new score of {@code member} (a double precision floating point - * number), represented as string. - */ - RedisFuture zincrby(K key, double amount, K member); - - /** - * Intersect multiple sorted sets and store the resulting sorted set in a new key. - * - * @param destination the destination - * @param keys the keys - * - * @return RedisFuture<Long> integer-reply the number of elements in the resulting sorted set at {@code destination}. - */ - RedisFuture zinterstore(K destination, K... keys); - - /** - * Intersect multiple sorted sets and store the resulting sorted set in a new key. - * - * @param destination the destination - * @param storeArgs the storeArgs - * @param keys the keys - * - * @return RedisFuture<Long> integer-reply the number of elements in the resulting sorted set at {@code destination}. - */ - RedisFuture zinterstore(K destination, ZStoreArgs storeArgs, K... keys); - - /** - * Return a range of members in a sorted set, by index. - * - * @param key the key - * @param start the start - * @param stop the stop - * @return RedisFuture<List<V>> array-reply list of elements in the specified range. - */ - RedisFuture> zrange(K key, long start, long stop); - - /** - * Return a range of members in a sorted set, by index. - * - * @param key the key - * @param start the start - * @param stop the stop - * @return RedisFuture<List<ScoredValue<V>>> array-reply list of elements in the specified range. - */ - RedisFuture>> zrangeWithScores(K key, long start, long stop); - - /** - * Return a range of members in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @return RedisFuture<List<V>> array-reply list of elements in the specified score range. - */ - RedisFuture> zrangebyscore(K key, double min, double max); - - /** - * Return a range of members in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @return RedisFuture<List<V>> array-reply list of elements in the specified score range. 
- */ - RedisFuture> zrangebyscore(K key, String min, String max); - - /** - * Return a range of members in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return RedisFuture<List<V>> array-reply list of elements in the specified score range. - */ - RedisFuture> zrangebyscore(K key, double min, double max, long offset, long count); - - /** - * Return a range of members in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return RedisFuture<List<V>> array-reply list of elements in the specified score range. - */ - RedisFuture> zrangebyscore(K key, String min, String max, long offset, long count); - - /** - * Return a range of members with score in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @return RedisFuture<List<ScoredValue<V>>> array-reply list of elements in the specified score range. - */ - RedisFuture>> zrangebyscoreWithScores(K key, double min, double max); - - /** - * Return a range of members with score in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @return RedisFuture<List<ScoredValue<V>>> array-reply list of elements in the specified score range. - */ - RedisFuture>> zrangebyscoreWithScores(K key, String min, String max); - - /** - * Return a range of members with score in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return RedisFuture<List<ScoredValue<V>>> array-reply list of elements in the specified score range. - */ - RedisFuture>> zrangebyscoreWithScores(K key, double min, double max, long offset, long count); - - /** - * Return a range of members with score in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return RedisFuture<List<ScoredValue<V>>> array-reply list of elements in the specified score range. - */ - RedisFuture>> zrangebyscoreWithScores(K key, String min, String max, long offset, long count); - - /** - * Stream over a range of members in a sorted set, by index. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param start the start - * @param stop the stop - * @return RedisFuture<Long> count of elements in the specified range. - */ - RedisFuture zrange(ValueStreamingChannel channel, K key, long start, long stop); - - /** - * Stream over a range of members with scores in a sorted set, by index. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param start the start - * @param stop the stop - * @return RedisFuture<Long> count of elements in the specified range. - */ - RedisFuture zrangeWithScores(ScoredValueStreamingChannel channel, K key, long start, long stop); - - /** - * Stream over a range of members in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @return RedisFuture<Long> count of elements in the specified score range. 
- */ - RedisFuture zrangebyscore(ValueStreamingChannel channel, K key, double min, double max); - - /** - * Stream over a range of members in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @return RedisFuture<Long> count of elements in the specified score range. - */ - RedisFuture zrangebyscore(ValueStreamingChannel channel, K key, String min, String max); - - /** - * Stream over a range of members in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return RedisFuture<Long> count of elements in the specified score range. - */ - RedisFuture zrangebyscore(ValueStreamingChannel channel, K key, double min, double max, long offset, long count); - - /** - * Stream over a range of members in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return RedisFuture<Long> count of elements in the specified score range. - */ - RedisFuture zrangebyscore(ValueStreamingChannel channel, K key, String min, String max, long offset, long count); - - /** - * Stream over a range of members with scores in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @return RedisFuture<Long> count of elements in the specified score range. - */ - RedisFuture zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double min, double max); - - /** - * Stream over a range of members with scores in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @return RedisFuture<Long> count of elements in the specified score range. - */ - RedisFuture zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String min, String max); - - /** - * Stream over a range of members with scores in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return RedisFuture<Long> count of elements in the specified score range. - */ - RedisFuture zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double min, double max, - long offset, long count); - - /** - * Stream over a range of members with scores in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return RedisFuture<Long> count of elements in the specified score range. - */ - RedisFuture zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String min, String max, - long offset, long count); - - /** - * Determine the index of a member in a sorted set. - * - * @param key the key - * @param member the member type: value - * @return RedisFuture<Long> integer-reply the rank of {@code member}. 
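The streaming overloads above take a ValueStreamingChannel or ScoredValueStreamingChannel and invoke it once per decoded element instead of materializing a List. A minimal sketch, assuming the channel interfaces expose a single onValue callback and using a made-up "ranking" key and score range:

```java
import com.lambdaworks.redis.RedisFuture;
import com.lambdaworks.redis.ScoredValue;
import com.lambdaworks.redis.api.async.RedisAsyncCommands;
import com.lambdaworks.redis.output.ScoredValueStreamingChannel;
import com.lambdaworks.redis.output.ValueStreamingChannel;

public class StreamingRangeExample {

    // Pushes each member to the channel as it is decoded instead of building a List,
    // which keeps memory usage flat for very large sorted sets.
    static RedisFuture<Long> streamMembers(RedisAsyncCommands<String, String> async) {

        ValueStreamingChannel<String> channel = new ValueStreamingChannel<String>() {
            @Override
            public void onValue(String value) {
                System.out.println("member: " + value);
            }
        };

        // The future completes with the number of elements that were streamed.
        return async.zrangebyscore(channel, "ranking", 0, 100);
    }

    static RedisFuture<Long> streamMembersWithScores(RedisAsyncCommands<String, String> async) {

        ScoredValueStreamingChannel<String> channel = new ScoredValueStreamingChannel<String>() {
            @Override
            public void onValue(ScoredValue<String> scoredValue) {
                System.out.println(scoredValue);
            }
        };

        return async.zrangebyscoreWithScores(channel, "ranking", 0, 100);
    }
}
```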
If {@code member} does not exist in the sorted - * set or {@code key} does not exist, - */ - RedisFuture zrank(K key, V member); - - /** - * Remove one or more members from a sorted set. - * - * @param key the key - * @param members the member type: value - * @return RedisFuture<Long> integer-reply specifically: - * - * The number of members removed from the sorted set, not including non existing members. - */ - RedisFuture zrem(K key, V... members); - - /** - * Remove all members in a sorted set within the given indexes. - * - * @param key the key - * @param start the start type: long - * @param stop the stop type: long - * @return RedisFuture<Long> integer-reply the number of elements removed. - */ - RedisFuture zremrangebyrank(K key, long start, long stop); - - /** - * Remove all members in a sorted set within the given scores. - * - * @param key the key - * @param min min score - * @param max max score - * @return RedisFuture<Long> integer-reply the number of elements removed. - */ - RedisFuture zremrangebyscore(K key, double min, double max); - - /** - * Remove all members in a sorted set within the given scores. - * - * @param key the key - * @param min min score - * @param max max score - * @return RedisFuture<Long> integer-reply the number of elements removed. - */ - RedisFuture zremrangebyscore(K key, String min, String max); - - /** - * Return a range of members in a sorted set, by index, with scores ordered from high to low. - * - * @param key the key - * @param start the start - * @param stop the stop - * @return RedisFuture<List<V>> array-reply list of elements in the specified range. - */ - RedisFuture> zrevrange(K key, long start, long stop); - - /** - * Return a range of members with scores in a sorted set, by index, with scores ordered from high to low. - * - * @param key the key - * @param start the start - * @param stop the stop - * @return RedisFuture<List<V>> array-reply list of elements in the specified range. - */ - RedisFuture>> zrevrangeWithScores(K key, long start, long stop); - - /** - * Return a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param min min score - * @param max max score - * @return RedisFuture<List<V>> array-reply list of elements in the specified score range. - */ - RedisFuture> zrevrangebyscore(K key, double max, double min); - - /** - * Return a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param min min score - * @param max max score - * @return RedisFuture<List<V>> array-reply list of elements in the specified score range. - */ - RedisFuture> zrevrangebyscore(K key, String max, String min); - - /** - * Return a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return RedisFuture<List<V>> array-reply list of elements in the specified score range. - */ - RedisFuture> zrevrangebyscore(K key, double max, double min, long offset, long count); - - /** - * Return a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param max max score - * @param min min score - * @param offset the offset - * @param count the count - * @return RedisFuture<List<V>> array-reply list of elements in the specified score range. 
- */ - RedisFuture> zrevrangebyscore(K key, String max, String min, long offset, long count); - - /** - * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param max max score - * @param min min score - * @return RedisFuture<List<ScoredValue<V>>> array-reply list of elements in the specified score range. - */ - RedisFuture>> zrevrangebyscoreWithScores(K key, double max, double min); - - /** - * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param max max score - * @param min min score - * @return RedisFuture<List<ScoredValue<V>>> array-reply list of elements in the specified score range. - */ - RedisFuture>> zrevrangebyscoreWithScores(K key, String max, String min); - - /** - * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param max max score - * @param min min score - * @param offset the offset - * @param count the count - * @return RedisFuture<List<ScoredValue<V>>> array-reply list of elements in the specified score range. - */ - RedisFuture>> zrevrangebyscoreWithScores(K key, double max, double min, long offset, long count); - - /** - * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param max max score - * @param min min score - * @param offset the offset - * @param count the count - * @return RedisFuture<List<ScoredValue<V>>> array-reply list of elements in the specified score range. - */ - RedisFuture>> zrevrangebyscoreWithScores(K key, String max, String min, long offset, long count); - - /** - * Stream over a range of members in a sorted set, by index, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param start the start - * @param stop the stop - * @return RedisFuture<Long> count of elements in the specified range. - */ - RedisFuture zrevrange(ValueStreamingChannel channel, K key, long start, long stop); - - /** - * Stream over a range of members with scores in a sorted set, by index, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param start the start - * @param stop the stop - * @return RedisFuture<Long> count of elements in the specified range. - */ - RedisFuture zrevrangeWithScores(ScoredValueStreamingChannel channel, K key, long start, long stop); - - /** - * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param max max score - * @param min min score - * @return RedisFuture<Long> count of elements in the specified range. - */ - RedisFuture zrevrangebyscore(ValueStreamingChannel channel, K key, double max, double min); - - /** - * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @return RedisFuture<Long> count of elements in the specified range. 
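The zrevrange* family returns elements ordered from the highest score down, and the by-score variants take their bounds as (max, min) rather than (min, max). A small "top N" sketch against the synchronous command interface; the key name and the +inf/-inf bounds are illustrative:

```java
import java.util.List;

import com.lambdaworks.redis.ScoredValue;
import com.lambdaworks.redis.api.sync.RedisCommands;

public class TopScoresExample {

    // Returns the n highest-scoring members with their scores; note the (max, min) order.
    static List<ScoredValue<String>> topN(RedisCommands<String, String> sync, int n) {
        return sync.zrevrangebyscoreWithScores("ranking", "+inf", "-inf", 0, n);
    }

    // Index-based variant: ranks 0..n-1 when ordered from the highest score downwards.
    static List<String> topNByIndex(RedisCommands<String, String> sync, int n) {
        return sync.zrevrange("ranking", 0, n - 1);
    }
}
```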
- */ - RedisFuture zrevrangebyscore(ValueStreamingChannel channel, K key, String max, String min); - - /** - * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return RedisFuture<Long> count of elements in the specified range. - */ - RedisFuture zrevrangebyscore(ValueStreamingChannel channel, K key, double max, double min, long offset, long count); - - /** - * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return RedisFuture<Long> count of elements in the specified range. - */ - RedisFuture zrevrangebyscore(ValueStreamingChannel channel, K key, String max, String min, long offset, long count); - - /** - * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @return RedisFuture<Long> count of elements in the specified range. - */ - RedisFuture zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double max, double min); - - /** - * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @return RedisFuture<Long> count of elements in the specified range. - */ - RedisFuture zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String max, String min); - - /** - * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return RedisFuture<Long> count of elements in the specified range. - */ - RedisFuture zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double max, double min, - long offset, long count); - - /** - * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return RedisFuture<Long> count of elements in the specified range. - */ - RedisFuture zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String max, String min, - long offset, long count); - - /** - * Determine the index of a member in a sorted set, with scores ordered from high to low. - * - * @param key the key - * @param member the member type: value - * @return RedisFuture<Long> integer-reply the rank of {@code member}. 
If {@code member} does not exist in the sorted - * set or {@code key} does not exist, - */ - RedisFuture zrevrank(K key, V member); - - /** - * Get the score associated with the given member in a sorted set. - * - * @param key the key - * @param member the member type: value - * @return RedisFuture<Double;> bulk-string-reply the score of {@code member} (a double precision floating point - * number), represented as string. - */ - RedisFuture zscore(K key, V member); - - /** - * Add multiple sorted sets and store the resulting sorted set in a new key. - * - * @param destination destination key - * @param keys source keys - * - * @return RedisFuture<Long> integer-reply the number of elements in the resulting sorted set at {@code destination}. - */ - RedisFuture zunionstore(K destination, K... keys); - - /** - * Add multiple sorted sets and store the resulting sorted set in a new key. - * - * @param destination the destination - * @param storeArgs the storeArgs - * @param keys the keys - * @return RedisFuture<Long> integer-reply the number of elements in the resulting sorted set at {@code destination}. - */ - RedisFuture zunionstore(K destination, ZStoreArgs storeArgs, K... keys); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param key the key - * @return RedisFuture<ScoredValueScanCursor<V>> scan cursor. - */ - RedisFuture> zscan(K key); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param key the key - * @param scanArgs scan arguments - * @return RedisFuture<ScoredValueScanCursor<V>> scan cursor. - */ - RedisFuture> zscan(K key, ScanArgs scanArgs); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @param scanArgs scan arguments - * @return RedisFuture<ScoredValueScanCursor<V>> scan cursor. - */ - RedisFuture> zscan(K key, ScanCursor scanCursor, ScanArgs scanArgs); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @return RedisFuture<ScoredValueScanCursor<V>> scan cursor. - */ - RedisFuture> zscan(K key, ScanCursor scanCursor); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @return RedisFuture<StreamScanCursor> scan cursor. - */ - RedisFuture zscan(ScoredValueStreamingChannel channel, K key); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param scanArgs scan arguments - * @return RedisFuture<StreamScanCursor> scan cursor. - */ - RedisFuture zscan(ScoredValueStreamingChannel channel, K key, ScanArgs scanArgs); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @param scanArgs scan arguments - * @return RedisFuture<StreamScanCursor> scan cursor. 
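zscan pages through a sorted set cursor by cursor, so even very large sets can be traversed without one huge reply; zrangebylex covers the lexicographical range commands listed at the end of this interface. A sketch assuming the synchronous command interface and hypothetical "ranking" and "dictionary" keys:

```java
import java.util.List;

import com.lambdaworks.redis.ScanArgs;
import com.lambdaworks.redis.ScoredValue;
import com.lambdaworks.redis.ScoredValueScanCursor;
import com.lambdaworks.redis.api.sync.RedisCommands;

public class SortedSetScanExample {

    // Walks the whole sorted set in batches; every round trip returns a chunk of
    // members plus the cursor needed to resume, until the cursor reports it is finished.
    static void scanAll(RedisCommands<String, String> sync) {

        ScoredValueScanCursor<String> cursor = sync.zscan("ranking", ScanArgs.Builder.limit(100));

        while (true) {
            for (ScoredValue<String> sv : cursor.getValues()) {
                System.out.println(sv);
            }
            if (cursor.isFinished()) {
                break;
            }
            cursor = sync.zscan("ranking", cursor, ScanArgs.Builder.limit(100));
        }
    }

    // Lexicographical range over members sharing the same score: everything from
    // "a" (inclusive) up to, but not including, "b".
    static List<String> membersStartingWithA(RedisCommands<String, String> sync) {
        return sync.zrangebylex("dictionary", "[a", "(b");
    }
}
```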
- */ - RedisFuture zscan(ScoredValueStreamingChannel channel, K key, ScanCursor scanCursor, ScanArgs scanArgs); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @return RedisFuture<StreamScanCursor> scan cursor. - */ - RedisFuture zscan(ScoredValueStreamingChannel channel, K key, ScanCursor scanCursor); - - /** - * Count the number of members in a sorted set between a given lexicographical range. - * - * @param key the key - * @param min min score - * @param max max score - * @return RedisFuture<Long> integer-reply the number of elements in the specified score range. - */ - RedisFuture zlexcount(K key, String min, String max); - - /** - * Remove all members in a sorted set between the given lexicographical range. - * - * @param key the key - * @param min min score - * @param max max score - * @return RedisFuture<Long> integer-reply the number of elements removed. - */ - RedisFuture zremrangebylex(K key, String min, String max); - - /** - * Return a range of members in a sorted set, by lexicographical range. - * - * @param key the key - * @param min min score - * @param max max score - * @return RedisFuture<List<V>> array-reply list of elements in the specified score range. - */ - RedisFuture> zrangebylex(K key, String min, String max); - - /** - * Return a range of members in a sorted set, by lexicographical range. - * - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return RedisFuture<List<V>> array-reply list of elements in the specified score range. - */ - RedisFuture> zrangebylex(K key, String min, String max, long offset, long count); -} diff --git a/src/main/java/com/lambdaworks/redis/RedisSortedSetsConnection.java b/src/main/java/com/lambdaworks/redis/RedisSortedSetsConnection.java deleted file mode 100644 index 97867a80e0..0000000000 --- a/src/main/java/com/lambdaworks/redis/RedisSortedSetsConnection.java +++ /dev/null @@ -1,807 +0,0 @@ -package com.lambdaworks.redis; - -import java.util.List; - -import com.lambdaworks.redis.api.sync.RedisSortedSetCommands; -import com.lambdaworks.redis.output.ScoredValueStreamingChannel; -import com.lambdaworks.redis.output.ValueStreamingChannel; - -/** - * Synchronous executed commands for Sorted Sets. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 3.0 - * @deprecated Use {@literal RedisSortedSetCommands} - */ -@Deprecated -public interface RedisSortedSetsConnection { - /** - * Add one or more members to a sorted set, or update its score if it already exists. - * - * @param key the key - * @param score the score - * @param member the member - * - * @return Long integer-reply specifically: - * - * The number of elements added to the sorted sets, not including elements already existing for which the score was - * updated. - */ - Long zadd(K key, double score, V member); - - /** - * Add one or more members to a sorted set, or update its score if it already exists. - * - * @param key the key - * @param scoresAndValues the scoresAndValue tuples (score,value,score,value,...) - * @return Long integer-reply specifically: - * - * The number of elements added to the sorted sets, not including elements already existing for which the score was - * updated. - */ - Long zadd(K key, Object... 
scoresAndValues); - - /** - * Add one or more members to a sorted set, or update its score if it already exists. - * - * @param key the key - * @param zAddArgs arguments for zadd - * @param score the score - * @param member the member - * - * @return Long integer-reply specifically: - * - * The number of elements added to the sorted sets, not including elements already existing for which the score was - * updated. - */ - Long zadd(K key, ZAddArgs zAddArgs, double score, V member); - - /** - * Add one or more members to a sorted set, or update its score if it already exists. - * - * @param key the key - * @param zAddArgs arguments for zadd - * @param scoresAndValues the scoresAndValue tuples (score,value,score,value,...) - * @return Long integer-reply specifically: - * - * The number of elements added to the sorted sets, not including elements already existing for which the score was - * updated. - */ - Long zadd(K key, ZAddArgs zAddArgs, Object... scoresAndValues); - - /** - * ZADD acts like ZINCRBY - * - * @param key the key - * @param score the score - * @param member the member - * - * @return Long integer-reply specifically: - * - * The total number of elements changed - */ - Double zaddincr(K key, double score, V member); - - /** - * Get the number of members in a sorted set. - * - * @param key the key - * @return Long integer-reply the cardinality (number of elements) of the sorted set, or {@literal false} if {@code key} - * does not exist. - */ - Long zcard(K key); - - /** - * Count the members in a sorted set with scores within the given values. - * - * @param key the key - * @param min min score - * @param max max score - * @return Long integer-reply the number of elements in the specified score range. - */ - Long zcount(K key, double min, double max); - - /** - * Count the members in a sorted set with scores within the given values. - * - * @param key the key - * @param min min score - * @param max max score - * @return Long integer-reply the number of elements in the specified score range. - */ - Long zcount(K key, String min, String max); - - /** - * Increment the score of a member in a sorted set. - * - * @param key the key - * @param amount the increment type: long - * @param member the member type: key - * @return Double bulk-string-reply the new score of {@code member} (a double precision floating point number), represented - * as string. - */ - Double zincrby(K key, double amount, K member); - - /** - * Intersect multiple sorted sets and store the resulting sorted set in a new key. - * - * @param destination the destination - * @param keys the keys - * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. - */ - Long zinterstore(K destination, K... keys); - - /** - * Intersect multiple sorted sets and store the resulting sorted set in a new key. - * - * @param destination the destination - * @param storeArgs the storeArgs - * @param keys the keys - * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. - */ - Long zinterstore(K destination, ZStoreArgs storeArgs, K... keys); - - /** - * Return a range of members in a sorted set, by index. - * - * @param key the key - * @param start the start - * @param stop the stop - * @return List<V> array-reply list of elements in the specified range. - */ - List zrange(K key, long start, long stop); - - /** - * Return a range of members with scores in a sorted set, by index. 
- * - * @param key the key - * @param start the start - * @param stop the stop - * @return List<V> array-reply list of elements in the specified range. - */ - List> zrangeWithScores(K key, long start, long stop); - - /** - * Return a range of members in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @return List<V> array-reply list of elements in the specified score range. - */ - List zrangebyscore(K key, double min, double max); - - /** - * Return a range of members in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @return List<V> array-reply list of elements in the specified score range. - */ - List zrangebyscore(K key, String min, String max); - - /** - * Return a range of members in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return List<V> array-reply list of elements in the specified score range. - */ - List zrangebyscore(K key, double min, double max, long offset, long count); - - /** - * Return a range of members in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return List<V> array-reply list of elements in the specified score range. - */ - List zrangebyscore(K key, String min, String max, long offset, long count); - - /** - * Return a range of members with score in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. - */ - List> zrangebyscoreWithScores(K key, double min, double max); - - /** - * Return a range of members with score in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. - */ - List> zrangebyscoreWithScores(K key, String min, String max); - - /** - * Return a range of members with score in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. - */ - List> zrangebyscoreWithScores(K key, double min, double max, long offset, long count); - - /** - * Return a range of members with score in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. - */ - List> zrangebyscoreWithScores(K key, String min, String max, long offset, long count); - - /** - * Return a range of members in a sorted set, by index. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param start the start - * @param stop the stop - * @return Long count of elements in the specified range. - */ - Long zrange(ValueStreamingChannel channel, K key, long start, long stop); - - /** - * Stream over a range of members with scores in a sorted set, by index. 
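The offset/count overloads of zrangebyscore map to the LIMIT clause of the underlying Redis command, which makes simple pagination over a score range possible. A sketch; the page size, key name and the use of zcount to bound the loop are illustrative choices:

```java
import java.util.List;

import com.lambdaworks.redis.api.sync.RedisCommands;

public class ScoreRangePagingExample {

    // Pages through a score range using the LIMIT-style offset/count overload.
    static void printPages(RedisCommands<String, String> sync, int pageSize) {

        long total = sync.zcount("ranking", "0", "+inf");

        for (long offset = 0; offset < total; offset += pageSize) {
            List<String> page = sync.zrangebyscore("ranking", "0", "+inf", offset, pageSize);
            System.out.println("page @" + offset + ": " + page);
        }
    }
}
```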
- * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param start the start - * @param stop the stop - * @return Long count of elements in the specified range. - */ - Long zrangeWithScores(ScoredValueStreamingChannel channel, K key, long start, long stop); - - /** - * Stream over a range of members in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified score range. - */ - Long zrangebyscore(ValueStreamingChannel channel, K key, double min, double max); - - /** - * Stream over a range of members in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified score range. - */ - Long zrangebyscore(ValueStreamingChannel channel, K key, String min, String max); - - /** - * Stream over range of members in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified score range. - */ - Long zrangebyscore(ValueStreamingChannel channel, K key, double min, double max, long offset, long count); - - /** - * Stream over a range of members in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified score range. - */ - Long zrangebyscore(ValueStreamingChannel channel, K key, String min, String max, long offset, long count); - - /** - * Stream over a range of members with scores in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified score range. - */ - Long zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double min, double max); - - /** - * Stream over a range of members with scores in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified score range. - */ - Long zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String min, String max); - - /** - * Stream over a range of members with scores in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified score range. - */ - Long zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double min, double max, long offset, long count); - - /** - * Stream over a range of members with scores in a sorted set, by score. 
- * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified score range. - */ - Long zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String min, String max, long offset, long count); - - /** - * Determine the index of a member in a sorted set. - * - * @param key the key - * @param member the member type: value - * @return Long integer-reply the rank of {@code member}. If {@code member} does not exist in the sorted set or {@code key} - * does not exist, - */ - Long zrank(K key, V member); - - /** - * Remove one or more members from a sorted set. - * - * @param key the key - * @param members the member type: value - * @return Long integer-reply specifically: - * - * The number of members removed from the sorted set, not including non existing members. - */ - Long zrem(K key, V... members); - - /** - * Remove all members in a sorted set within the given indexes. - * - * @param key the key - * @param start the start type: long - * @param stop the stop type: long - * @return Long integer-reply the number of elements removed. - */ - Long zremrangebyrank(K key, long start, long stop); - - /** - * Remove all members in a sorted set within the given scores. - * - * @param key the key - * @param min min score - * @param max max score - * @return Long integer-reply the number of elements removed. - */ - Long zremrangebyscore(K key, double min, double max); - - /** - * Remove all members in a sorted set within the given scores. - * - * @param key the key - * @param min min score - * @param max max score - * @return Long integer-reply the number of elements removed. - */ - Long zremrangebyscore(K key, String min, String max); - - /** - * Return a range of members in a sorted set, by index, with scores ordered from high to low. - * - * @param key the key - * @param start the start - * @param stop the stop - * @return List<V> array-reply list of elements in the specified range. - */ - List zrevrange(K key, long start, long stop); - - /** - * Return a range of members with scores in a sorted set, by index, with scores ordered from high to low. - * - * @param key the key - * @param start the start - * @param stop the stop - * @return List<V> array-reply list of elements in the specified range. - */ - List> zrevrangeWithScores(K key, long start, long stop); - - /** - * Return a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param min min score - * @param max max score - * @return List<V> array-reply list of elements in the specified score range. - */ - List zrevrangebyscore(K key, double max, double min); - - /** - * Return a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param min min score - * @param max max score - * @return List<V> array-reply list of elements in the specified score range. - */ - List zrevrangebyscore(K key, String max, String min); - - /** - * Return a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param max max score - * @param min min score - * @param offset the withscores - * @param count the null - * @return List<V> array-reply list of elements in the specified score range. 
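zrem removes individual members, while zremrangebyrank is handy for capping a sorted set at a fixed size because negative ranks count back from the highest score. A sketch under those assumptions; key and member names are made up:

```java
import com.lambdaworks.redis.api.sync.RedisCommands;

public class TrimSortedSetExample {

    // Keeps only the maxSize highest-scoring members: removing ranks 0..-(maxSize+1)
    // drops everything below the top maxSize entries.
    static long trimToSize(RedisCommands<String, String> sync, String key, long maxSize) {
        return sync.zremrangebyrank(key, 0, -(maxSize + 1));
    }

    // Explicit removal of individual members; returns how many actually existed.
    static long removeMembers(RedisCommands<String, String> sync, String key) {
        return sync.zrem(key, "alice", "mallory");
    }
}
```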
- */ - List zrevrangebyscore(K key, double max, double min, long offset, long count); - - /** - * Return a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param max max score - * @param min min score - * @param offset the offset - * @param count the count - * @return List<V> array-reply list of elements in the specified score range. - */ - List zrevrangebyscore(K key, String max, String min, long offset, long count); - - /** - * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param max max score - * @param min min score - * @return List<V> array-reply list of elements in the specified score range. - */ - List> zrevrangebyscoreWithScores(K key, double max, double min); - - /** - * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param max max score - * @param min min score - * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. - */ - List> zrevrangebyscoreWithScores(K key, String max, String min); - - /** - * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param max max score - * @param min min score - * @param offset the offset - * @param count the count - * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. - */ - List> zrevrangebyscoreWithScores(K key, double max, double min, long offset, long count); - - /** - * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param max max score - * @param min min score - * @param offset the offset - * @param count the count - * @return List<V> array-reply list of elements in the specified score range. - */ - List> zrevrangebyscoreWithScores(K key, String max, String min, long offset, long count); - - /** - * Stream over a range of members in a sorted set, by index, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param start the start - * @param stop the stop - * @return Long count of elements in the specified range. - */ - Long zrevrange(ValueStreamingChannel channel, K key, long start, long stop); - - /** - * Stream over a range of members with scores in a sorted set, by index, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param start the start - * @param stop the stop - * @return Long count of elements in the specified range. - */ - Long zrevrangeWithScores(ScoredValueStreamingChannel channel, K key, long start, long stop); - - /** - * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param max max score - * @param min min score - * @return Long count of elements in the specified range. - */ - Long zrevrangebyscore(ValueStreamingChannel channel, K key, double max, double min); - - /** - * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. 
- * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified range. - */ - Long zrevrangebyscore(ValueStreamingChannel channel, K key, String max, String min); - - /** - * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified range. - */ - Long zrevrangebyscore(ValueStreamingChannel channel, K key, double max, double min, long offset, long count); - - /** - * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified range. - */ - Long zrevrangebyscore(ValueStreamingChannel channel, K key, String max, String min, long offset, long count); - - /** - * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified range. - */ - Long zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double max, double min); - - /** - * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified range. - */ - Long zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String max, String min); - - /** - * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified range. - */ - Long zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double max, double min, long offset, - long count); - - /** - * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified range. - */ - Long zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String max, String min, long offset, - long count); - - /** - * Determine the index of a member in a sorted set, with scores ordered from high to low. - * - * @param key the key - * @param member the member type: value - * @return Long integer-reply the rank of {@code member}. 
If {@code member} does not exist in the sorted set or {@code key} - * does not exist, - */ - Long zrevrank(K key, V member); - - /** - * Get the score associated with the given member in a sorted set. - * - * @param key the key - * @param member the member type: value - * @return Double bulk-string-reply the score of {@code member} (a double precision floating point number), represented as - * string. - */ - Double zscore(K key, V member); - - /** - * Add multiple sorted sets and store the resulting sorted set in a new key. - * - * @param destination destination key - * @param keys source keys - * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. - */ - Long zunionstore(K destination, K... keys); - - /** - * Add multiple sorted sets and store the resulting sorted set in a new key. - * - * @param destination the destination - * @param storeArgs the storeArgs - * @param keys the keys - * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. - */ - Long zunionstore(K destination, ZStoreArgs storeArgs, K... keys); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param key the key - * @return ScoredValueScanCursor<V> scan cursor. - */ - ScoredValueScanCursor zscan(K key); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param key the key - * @param scanArgs scan arguments - * @return ScoredValueScanCursor<V> scan cursor. - */ - ScoredValueScanCursor zscan(K key, ScanArgs scanArgs); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @param scanArgs scan arguments - * @return ScoredValueScanCursor<V> scan cursor. - */ - ScoredValueScanCursor zscan(K key, ScanCursor scanCursor, ScanArgs scanArgs); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @return ScoredValueScanCursor<V> scan cursor. - */ - ScoredValueScanCursor zscan(K key, ScanCursor scanCursor); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @return StreamScanCursor scan cursor. - */ - StreamScanCursor zscan(ScoredValueStreamingChannel channel, K key); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param scanArgs scan arguments - * @return StreamScanCursor scan cursor. - */ - StreamScanCursor zscan(ScoredValueStreamingChannel channel, K key, ScanArgs scanArgs); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @param scanArgs scan arguments - * @return StreamScanCursor scan cursor. - */ - StreamScanCursor zscan(ScoredValueStreamingChannel channel, K key, ScanCursor scanCursor, ScanArgs scanArgs); - - /** - * Incrementally iterate sorted sets elements and associated scores. 
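zunionstore and zinterstore write their result to a destination key and accept ZStoreArgs for weights and the aggregation function. The sketch below assumes the ZStoreArgs.Builder chaining shown (weights followed by max) composes as in current Lettuce versions; all key names are illustrative:

```java
import com.lambdaworks.redis.ZStoreArgs;
import com.lambdaworks.redis.api.sync.RedisCommands;

public class StoreArgsExample {

    // Combines two leaderboards into destination keys; ZStoreArgs selects weights
    // and the aggregation applied to members that occur in several source sets.
    static void mergeLeaderboards(RedisCommands<String, String> sync) {

        // Plain union: scores of common members are summed.
        Long unionSize = sync.zunionstore("ranking:all", "ranking:eu", "ranking:us");

        // Weighted intersection that keeps the higher of the two weighted scores.
        Long interSize = sync.zinterstore("ranking:both",
                ZStoreArgs.Builder.weights(2, 1).max(), "ranking:eu", "ranking:us");

        System.out.println("union=" + unionSize + ", intersection=" + interSize);
    }
}
```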
- * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @return StreamScanCursor scan cursor. - */ - StreamScanCursor zscan(ScoredValueStreamingChannel channel, K key, ScanCursor scanCursor); - - /** - * Count the number of members in a sorted set between a given lexicographical range. - * - * @param key the key - * @param min min score - * @param max max score - * @return Long integer-reply the number of elements in the specified score range. - */ - Long zlexcount(K key, String min, String max); - - /** - * Remove all members in a sorted set between the given lexicographical range. - * - * @param key the key - * @param min min score - * @param max max score - * @return Long integer-reply the number of elements removed. - */ - Long zremrangebylex(K key, String min, String max); - - /** - * Return a range of members in a sorted set, by lexicographical range. - * - * @param key the key - * @param min min score - * @param max max score - * @return List<V> array-reply list of elements in the specified score range. - */ - List zrangebylex(K key, String min, String max); - - /** - * Return a range of members in a sorted set, by lexicographical range. - * - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return List<V> array-reply list of elements in the specified score range. - */ - List zrangebylex(K key, String min, String max, long offset, long count); -} diff --git a/src/main/java/com/lambdaworks/redis/RedisStringsAsyncConnection.java b/src/main/java/com/lambdaworks/redis/RedisStringsAsyncConnection.java deleted file mode 100644 index df1bb95452..0000000000 --- a/src/main/java/com/lambdaworks/redis/RedisStringsAsyncConnection.java +++ /dev/null @@ -1,347 +0,0 @@ -package com.lambdaworks.redis; - -import com.lambdaworks.redis.output.ValueStreamingChannel; - -import java.util.List; -import java.util.Map; - -/** - * Asynchronous executed commands for Strings. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 3.0 - * @deprecated Use {@literal RedisStringAsyncCommands} - */ -@Deprecated -public interface RedisStringsAsyncConnection { - - /** - * Append a value to a key. - * - * @param key the key - * @param value the value - * @return RedisFuture<Long> integer-reply the length of the string after the append operation. - */ - RedisFuture append(K key, V value); - - /** - * Count set bits in a string. - * - * @param key the key - * - * @return RedisFuture<Long> integer-reply The number of bits set to 1. - */ - RedisFuture bitcount(K key); - - /** - * Count set bits in a string. - * - * @param key the key - * @param start the start - * @param end the end - * - * @return RedisFuture<Long> integer-reply The number of bits set to 1. - */ - RedisFuture bitcount(K key, long start, long end); - - /** - * Execute {@code BITFIELD} with its subcommands. - * - * @param key the key - * @param bitFieldArgs the args containing subcommands, must not be {@literal null}. - * - * @return Long bulk-reply the results from the bitfield commands. - */ - RedisFuture> bitfield(K key, BitFieldArgs bitFieldArgs); - - /** - * Find first bit set or clear in a string. - * - * @param key the key - * @param state the state - * - * @return RedisFuture<Long> integer-reply The command returns the position of the first bit set to 1 or 0 according - * to the request. 
- * - * If we look for set bits (the bit argument is 1) and the string is empty or composed of just zero bytes, -1 is - * returned. - * - * If we look for clear bits (the bit argument is 0) and the string only contains bit set to 1, the function returns - * the first bit not part of the string on the right. So if the string is three bytes set to the value 0xff the - * command {@code BITPOS key 0} will return 24, since up to bit 23 all the bits are 1. - * - * Basically the function considers the right of the string as padded with zeros if you look for clear bits and - * specify no range or the start argument only. - * - * However this behavior changes if you are looking for clear bits and specify a range with both - * start and end. If no clear bit is found in the specified range, the function - * returns -1 as the user specified a clear range and there are no 0 bits in that range. - */ - RedisFuture bitpos(K key, boolean state); - - /** - * Find first bit set or clear in a string. - * - * @param key the key - * @param state the bit type: long - * @param start the start type: long - * @param end the end type: long - * @return RedisFuture<Long> integer-reply The command returns the position of the first bit set to 1 or 0 according - * to the request. - * - * If we look for set bits (the bit argument is 1) and the string is empty or composed of just zero bytes, -1 is - * returned. - * - * If we look for clear bits (the bit argument is 0) and the string only contains bit set to 1, the function returns - * the first bit not part of the string on the right. So if the string is three bytes set to the value 0xff the - * command {@code BITPOS key 0} will return 24, since up to bit 23 all the bits are 1. - * - * Basically the function considers the right of the string as padded with zeros if you look for clear bits and - * specify no range or the start argument only. - * - * However this behavior changes if you are looking for clear bits and specify a range with both - * start and end. If no clear bit is found in the specified range, the function - * returns -1 as the user specified a clear range and there are no 0 bits in that range. - */ - RedisFuture bitpos(K key, boolean state, long start, long end); - - /** - * Perform bitwise AND between strings. - * - * @param destination result key of the operation - * @param keys operation input key names - * @return RedisFuture<Long> integer-reply The size of the string stored in the destination key, that is equal to the - * size of the longest input string. - */ - RedisFuture bitopAnd(K destination, K... keys); - - /** - * Perform bitwise NOT between strings. - * - * @param destination result key of the operation - * @param source operation input key names - * @return RedisFuture<Long> integer-reply The size of the string stored in the destination key, that is equal to the - * size of the longest input string. - */ - RedisFuture bitopNot(K destination, K source); - - /** - * Perform bitwise OR between strings. - * - * @param destination result key of the operation - * @param keys operation input key names - * @return RedisFuture<Long> integer-reply The size of the string stored in the destination key, that is equal to the - * size of the longest input string. - */ - RedisFuture bitopOr(K destination, K... keys); - - /** - * Perform bitwise XOR between strings.
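The bit-oriented string commands above combine naturally into bitmap-style counters: setbit marks an event, the bitop* methods combine bitmaps, and bitcount/bitpos interrogate them. A rough sketch against the asynchronous interface; the key naming scheme and user ids are invented for the example:

```java
import com.lambdaworks.redis.RedisFuture;
import com.lambdaworks.redis.api.async.RedisAsyncCommands;

public class BitOpsExample {

    // Tracks daily activity as one bit per user id and combines two days with BITOP.
    static void bitmapDemo(RedisAsyncCommands<String, String> async) throws Exception {

        async.setbit("active:2016-06-01", 42, 1).get();   // user 42 active on day 1
        async.setbit("active:2016-06-02", 42, 1).get();   // user 42 active on day 2
        async.setbit("active:2016-06-02", 7, 1).get();    // user 7 active on day 2

        // Users active on both days end up in the destination bitmap.
        async.bitopAnd("active:both", "active:2016-06-01", "active:2016-06-02").get();

        RedisFuture<Long> activeBoth = async.bitcount("active:both");
        RedisFuture<Long> firstActive = async.bitpos("active:2016-06-02", true);

        System.out.println("active on both days: " + activeBoth.get());
        System.out.println("first set bit on day 2: " + firstActive.get());
    }
}
```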
- * - * @param destination result key of the operation - * @param keys operation input key names - * @return RedisFuture<Long> integer-reply The size of the string stored in the destination key, that is equal to the - * size of the longest input string. - */ - RedisFuture bitopXor(K destination, K... keys); - - /** - * Decrement the integer value of a key by one. - * - * @param key the key - * @return RedisFuture<Long> integer-reply the value of {@code key} after the decrement - */ - RedisFuture decr(K key); - - /** - * Decrement the integer value of a key by the given number. - * - * @param key the key - * @param amount the decrement type: long - * @return RedisFuture<Long> integer-reply the value of {@code key} after the decrement - */ - RedisFuture decrby(K key, long amount); - - /** - * Get the value of a key. - * - * @param key the key - * @return RedisFuture<V> bulk-string-reply the value of {@code key}, or {@literal null} when {@code key} does not - * exist. - */ - RedisFuture get(K key); - - /** - * Returns the bit value at offset in the string value stored at key. - * - * @param key the key - * @param offset the offset type: long - * @return RedisFuture<Long> integer-reply the bit value stored at offset. - */ - RedisFuture getbit(K key, long offset); - - /** - * Get a substring of the string stored at a key. - * - * @param key the key - * @param start the start type: long - * @param end the end type: long - * @return RedisFuture<V> bulk-string-reply - */ - RedisFuture getrange(K key, long start, long end); - - /** - * Set the string value of a key and return its old value. - * - * @param key the key - * @param value the value - * @return RedisFuture<V> bulk-string-reply the old value stored at {@code key}, or {@literal null} when {@code key} - * did not exist. - */ - RedisFuture getset(K key, V value); - - /** - * Increment the integer value of a key by one. - * - * @param key the key - * @return RedisFuture<Long> integer-reply the value of {@code key} after the increment - */ - RedisFuture incr(K key); - - /** - * Increment the integer value of a key by the given amount. - * - * @param key the key - * @param amount the increment type: long - * @return RedisFuture<Long> integer-reply the value of {@code key} after the increment - */ - RedisFuture incrby(K key, long amount); - - /** - * Increment the float value of a key by the given amount. - * - * @param key the key - * @param amount the increment type: double - * @return RedisFuture<Double;> bulk-string-reply the value of {@code key} after the increment. - */ - RedisFuture incrbyfloat(K key, double amount); - - /** - * Get the values of all the given keys. - * - * @param keys the key - * @return RedisFuture<List<V>> array-reply list of values at the specified keys. - */ - RedisFuture> mget(K... keys); - - /** - * Stream the values of all the given keys. - * - * @param channel the channel - * @param keys the keys - * - * @return RedisFuture<Long> array-reply list of values at the specified keys. - */ - RedisFuture mget(ValueStreamingChannel channel, K... keys); - - /** - * Set multiple keys to multiple values. - * - * @param map the null - * @return RedisFuture<String> simple-string-reply always {@code OK} since {@code MSET} can't fail. - */ - RedisFuture mset(Map map); - - /** - * Set multiple keys to multiple values, only if none of the keys exist. - * - * @param map the null - * @return RedisFuture<Boolean> integer-reply specifically: - * - * {@code 1} if the all the keys were set. 
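incr, incrby and incrbyfloat interpret the stored string as an integer or float and update it atomically, which is the usual way to build counters. A short sketch with hypothetical key names:

```java
import com.lambdaworks.redis.RedisFuture;
import com.lambdaworks.redis.api.async.RedisAsyncCommands;

public class CounterExample {

    // Simple page-view counter: INCR/INCRBY operate on the integer representation of
    // the stored string, INCRBYFLOAT on its floating-point representation.
    static void count(RedisAsyncCommands<String, String> async) throws Exception {

        RedisFuture<Long> single = async.incr("page:views");
        RedisFuture<Long> batch = async.incrby("page:views", 10);
        RedisFuture<Double> rating = async.incrbyfloat("page:rating", 0.5);

        System.out.println("views=" + batch.get()
                + " (after single incr: " + single.get() + "), rating=" + rating.get());
    }
}
```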
{@code 0} if no key was set (at least one key already existed). - */ - RedisFuture msetnx(Map map); - - /** - * Set the string value of a key. - * - * @param key the key - * @param value the value - * - * @return RedisFuture<String> simple-string-reply {@code OK} if {@code SET} was executed correctly. - */ - RedisFuture set(K key, V value); - - /** - * Set the string value of a key. - * - * @param key the key - * @param value the value - * @param setArgs the setArgs - * - * @return RedisFuture<V> simple-string-reply {@code OK} if {@code SET} was executed correctly. - */ - RedisFuture set(K key, V value, SetArgs setArgs); - - /** - * Sets or clears the bit at offset in the string value stored at key. - * - * @param key the key - * @param offset the offset type: long - * @param value the value type: string - * @return RedisFuture<Long> integer-reply the original bit value stored at offset. - */ - RedisFuture setbit(K key, long offset, int value); - - /** - * Set the value and expiration of a key. - * - * @param key the key - * @param seconds the seconds type: long - * @param value the value - * @return RedisFuture<String> simple-string-reply - */ - RedisFuture setex(K key, long seconds, V value); - - /** - * Set the value and expiration in milliseconds of a key. - * - * @param key the key - * @param milliseconds the milliseconds type: long - * @param value the value - * @return RedisFuture<String> simple-string-reply - */ - RedisFuture psetex(K key, long milliseconds, V value); - - /** - * Set the value of a key, only if the key does not exist. - * - * @param key the key - * @param value the value - * @return RedisFuture<Boolean> integer-reply specifically: - * - * {@code 1} if the key was set {@code 0} if the key was not set - */ - RedisFuture setnx(K key, V value); - - /** - * Overwrite part of a string at key starting at the specified offset. - * - * @param key the key - * @param offset the offset type: long - * @param value the value - * @return RedisFuture<Long> integer-reply the length of the string after it was modified by the command. - */ - RedisFuture setrange(K key, long offset, V value); - - /** - * Get the length of the value stored in a key. - * - * @param key the key - * @return RedisFuture<Long> integer-reply the length of the string at {@code key}, or {@code 0} when {@code key} does - * not exist. - */ - RedisFuture strlen(K key); -} diff --git a/src/main/java/com/lambdaworks/redis/RedisStringsConnection.java b/src/main/java/com/lambdaworks/redis/RedisStringsConnection.java deleted file mode 100644 index 96799fbc14..0000000000 --- a/src/main/java/com/lambdaworks/redis/RedisStringsConnection.java +++ /dev/null @@ -1,343 +0,0 @@ -package com.lambdaworks.redis; - -import com.lambdaworks.redis.api.sync.RedisStringCommands; -import com.lambdaworks.redis.output.ValueStreamingChannel; - -import java.util.List; -import java.util.Map; - -/** - * Synchronous executed commands for Strings. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 3.0 - * @deprecated Use {@link RedisStringCommands} - */ -@Deprecated -public interface RedisStringsConnection { - - /** - * Append a value to a key. - * - * @param key the key - * @param value the value - * @return Long integer-reply the length of the string after the append operation. - */ - Long append(K key, V value); - - /** - * Count set bits in a string. - * - * @param key the key - * - * @return Long integer-reply The number of bits set to 1. - */ - Long bitcount(K key); - - /** - * Count set bits in a string. 
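set with SetArgs corresponds to the SET command with its NX/XX and EX/PX modifiers, which is commonly used for set-if-absent keys with a TTL. A sketch, assuming SetArgs.Builder.nx().ex(30) composes as shown and that the reply is null when NX prevents the write; the lock key and token are illustrative:

```java
import com.lambdaworks.redis.SetArgs;
import com.lambdaworks.redis.api.async.RedisAsyncCommands;

public class SetArgsExample {

    // Acquires a simple lock-style key: only set when absent (NX), expiring after 30 seconds (EX).
    static boolean tryAcquire(RedisAsyncCommands<String, String> async, String token) throws Exception {

        String reply = async.set("lock:report", token, SetArgs.Builder.nx().ex(30)).get();

        // SET .. NX replies OK when the key was set and null when it already existed.
        return "OK".equals(reply);
    }
}
```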
- * - * @param key the key - * @param start the start - * @param end the end - * - * @return Long integer-reply The number of bits set to 1. - */ - Long bitcount(K key, long start, long end); - - /** - * Execute {@code BITFIELD} with its subcommands. - * - * @param key the key - * @param bitFieldArgs the args containing subcommands, must not be {@literal null}. - * - * @return Long bulk-reply the results from the bitfield commands. - */ - List bitfield(K key, BitFieldArgs bitFieldArgs); - - /** - * Find first bit set or clear in a string. - * - * @param key the key - * @param state the state - * - * @return Long integer-reply The command returns the position of the first bit set to 1 or 0 according to the request. - * - * If we look for set bits (the bit argument is 1) and the string is empty or composed of just zero bytes, -1 is - * returned. - * - * If we look for clear bits (the bit argument is 0) and the string only contains bit set to 1, the function returns - * the first bit not part of the string on the right. So if the string is tree bytes set to the value 0xff the - * command {@code BITPOS key 0} will return 24, since up to bit 23 all the bits are 1. - * - * Basically the function consider the right of the string as padded with zeros if you look for clear bits and - * specify no range or the start argument only. - * - * However this behavior changes if you are looking for clear bits and specify a range with both - * start and end. If no clear bit is found in the specified range, the function - * returns -1 as the user specified a clear range and there are no 0 bits in that range. - */ - Long bitpos(K key, boolean state); - - /** - * Find first bit set or clear in a string. - * - * @param key the key - * @param state the bit type: long - * @param start the start type: long - * @param end the end type: long - * @return Long integer-reply The command returns the position of the first bit set to 1 or 0 according to the request. - * - * If we look for set bits (the bit argument is 1) and the string is empty or composed of just zero bytes, -1 is - * returned. - * - * If we look for clear bits (the bit argument is 0) and the string only contains bit set to 1, the function returns - * the first bit not part of the string on the right. So if the string is tree bytes set to the value 0xff the - * command {@code BITPOS key 0} will return 24, since up to bit 23 all the bits are 1. - * - * Basically the function consider the right of the string as padded with zeros if you look for clear bits and - * specify no range or the start argument only. - * - * However this behavior changes if you are looking for clear bits and specify a range with both - * start and end. If no clear bit is found in the specified range, the function - * returns -1 as the user specified a clear range and there are no 0 bits in that range. - */ - Long bitpos(K key, boolean state, long start, long end); - - /** - * Perform bitwise AND between strings. - * - * @param destination result key of the operation - * @param keys operation input key names - * @return Long integer-reply The size of the string stored in the destination key, that is equal to the size of the longest - * input string. - */ - Long bitopAnd(K destination, K... keys); - - /** - * Perform bitwise NOT between strings. - * - * @param destination result key of the operation - * @param source operation input key names - * @return Long integer-reply The size of the string stored in the destination key, that is equal to the size of the longest - * input string. 
- */ - Long bitopNot(K destination, K source); - - /** - * Perform bitwise OR between strings. - * - * @param destination result key of the operation - * @param keys operation input key names - * @return Long integer-reply The size of the string stored in the destination key, that is equal to the size of the longest - * input string. - */ - Long bitopOr(K destination, K... keys); - - /** - * Perform bitwise XOR between strings. - * - * @param destination result key of the operation - * @param keys operation input key names - * @return Long integer-reply The size of the string stored in the destination key, that is equal to the size of the longest - * input string. - */ - Long bitopXor(K destination, K... keys); - - /** - * Decrement the integer value of a key by one. - * - * @param key the key - * @return Long integer-reply the value of {@code key} after the decrement - */ - Long decr(K key); - - /** - * Decrement the integer value of a key by the given number. - * - * @param key the key - * @param amount the decrement type: long - * @return Long integer-reply the value of {@code key} after the decrement - */ - Long decrby(K key, long amount); - - /** - * Get the value of a key. - * - * @param key the key - * @return V bulk-string-reply the value of {@code key}, or {@literal null} when {@code key} does not exist. - */ - V get(K key); - - /** - * Returns the bit value at offset in the string value stored at key. - * - * @param key the key - * @param offset the offset type: long - * @return Long integer-reply the bit value stored at offset. - */ - Long getbit(K key, long offset); - - /** - * Get a substring of the string stored at a key. - * - * @param key the key - * @param start the start type: long - * @param end the end type: long - * @return V bulk-string-reply - */ - V getrange(K key, long start, long end); - - /** - * Set the string value of a key and return its old value. - * - * @param key the key - * @param value the value - * @return V bulk-string-reply the old value stored at {@code key}, or {@literal null} when {@code key} did not exist. - */ - V getset(K key, V value); - - /** - * Increment the integer value of a key by one. - * - * @param key the key - * @return Long integer-reply the value of {@code key} after the increment - */ - Long incr(K key); - - /** - * Increment the integer value of a key by the given amount. - * - * @param key the key - * @param amount the increment type: long - * @return Long integer-reply the value of {@code key} after the increment - */ - Long incrby(K key, long amount); - - /** - * Increment the float value of a key by the given amount. - * - * @param key the key - * @param amount the increment type: double - * @return Double bulk-string-reply the value of {@code key} after the increment. - */ - Double incrbyfloat(K key, double amount); - - /** - * Get the values of all the given keys. - * - * @param keys the key - * @return List<V> array-reply list of values at the specified keys. - */ - List mget(K... keys); - - /** - * Stream over the values of all the given keys. - * - * @param channel the channel - * @param keys the keys - * - * @return Long array-reply list of values at the specified keys. - */ - Long mget(ValueStreamingChannel channel, K... keys); - - /** - * Set multiple keys to multiple values. - * - * @param map the null - * @return String simple-string-reply always {@code OK} since {@code MSET} can't fail. - */ - String mset(Map map); - - /** - * Set multiple keys to multiple values, only if none of the keys exist. 
- * - * @param map the null - * @return Boolean integer-reply specifically: - * - * {@code 1} if the all the keys were set. {@code 0} if no key was set (at least one key already existed). - */ - Boolean msetnx(Map map); - - /** - * Set the string value of a key. - * - * @param key the key - * @param value the value - * - * @return String simple-string-reply {@code OK} if {@code SET} was executed correctly. - */ - String set(K key, V value); - - /** - * Set the string value of a key. - * - * @param key the key - * @param value the value - * @param setArgs the setArgs - * - * @return String simple-string-reply {@code OK} if {@code SET} was executed correctly. - */ - String set(K key, V value, SetArgs setArgs); - - /** - * Sets or clears the bit at offset in the string value stored at key. - * - * @param key the key - * @param offset the offset type: long - * @param value the value type: string - * @return Long integer-reply the original bit value stored at offset. - */ - Long setbit(K key, long offset, int value); - - /** - * Set the value and expiration of a key. - * - * @param key the key - * @param seconds the seconds type: long - * @param value the value - * @return String simple-string-reply - */ - String setex(K key, long seconds, V value); - - /** - * Set the value and expiration in milliseconds of a key. - * - * @param key the key - * @param milliseconds the milliseconds type: long - * @param value the value - * @return String simple-string-reply - */ - String psetex(K key, long milliseconds, V value); - - /** - * Set the value of a key, only if the key does not exist. - * - * @param key the key - * @param value the value - * @return Boolean integer-reply specifically: - * - * {@code 1} if the key was set {@code 0} if the key was not set - */ - Boolean setnx(K key, V value); - - /** - * Overwrite part of a string at key starting at the specified offset. - * - * @param key the key - * @param offset the offset type: long - * @param value the value - * @return Long integer-reply the length of the string after it was modified by the command. - */ - Long setrange(K key, long offset, V value); - - /** - * Get the length of the value stored in a key. - * - * @param key the key - * @return Long integer-reply the length of the string at {@code key}, or {@code 0} when {@code key} does not exist. - */ - Long strlen(K key); -} diff --git a/src/main/java/com/lambdaworks/redis/RedisURI.java b/src/main/java/com/lambdaworks/redis/RedisURI.java deleted file mode 100644 index 905edfd399..0000000000 --- a/src/main/java/com/lambdaworks/redis/RedisURI.java +++ /dev/null @@ -1,1156 +0,0 @@ -package com.lambdaworks.redis; - -import static com.lambdaworks.redis.LettuceStrings.isEmpty; -import static com.lambdaworks.redis.LettuceStrings.isNotEmpty; - -import java.io.Serializable; -import java.io.UnsupportedEncodingException; -import java.net.InetSocketAddress; -import java.net.SocketAddress; -import java.net.URI; -import java.net.URLEncoder; -import java.util.*; -import java.util.concurrent.TimeUnit; -import java.util.stream.Collectors; - -import com.lambdaworks.redis.internal.HostAndPort; -import com.lambdaworks.redis.internal.LettuceAssert; -import com.lambdaworks.redis.internal.LettuceSets; -import com.lambdaworks.redis.protocol.LettuceCharsets; - -/** - * Redis URI. Contains connection details for the Redis/Sentinel connections. You can provide the database, password and - * timeouts within the RedisURI. - * - * You have following possibilities to create a {@link RedisURI}: - * - *
- *   • Use an URI:
- *     {@code RedisURI.create("redis://localhost/");}
- *     See {@link #create(String)} for more options.
- *   • Use the Builder:
- *     {@code RedisURI.Builder.redis("localhost", 6379).withPassword("password").withDatabase(1).build();}
- *     See {@link com.lambdaworks.redis.RedisURI.Builder#redis(String)} and
- *     {@link com.lambdaworks.redis.RedisURI.Builder#sentinel(String)} for more options.
- *   • Construct your own instance:
- *     {@code new RedisURI("localhost", 6379, 60, TimeUnit.SECONDS);}
- *     or
- *     {@code RedisURI uri = new RedisURI(); uri.setHost("localhost");}
- *
- * URI syntax
- *
- * Redis Standalone:
- *   redis://[password@]host[:port][/database][?timeout=timeout[d|h|m|s|ms|us|ns]][&database=database]
- * Redis Standalone (SSL):
- *   rediss://[password@]host[:port][/database][?timeout=timeout[d|h|m|s|ms|us|ns]][&database=database]
- * Redis Standalone (Unix Domain Sockets):
- *   redis-socket://[password@]path[?timeout=timeout[d|h|m|s|ms|us|ns]][&database=database]
- * Redis Sentinel:
- *   redis-sentinel://[password@]host1[:port1][,host2[:port2]][,hostN[:portN]][/database][?timeout=timeout[d|h|m|s|ms|us|ns]][&sentinelMasterId=sentinelMasterId][&database=database]
- *
- * Schemes
- *   • redis           Redis Standalone
- *   • rediss          Redis Standalone SSL
- *   • redis-socket    Redis Standalone Unix Domain Socket
- *   • redis-sentinel  Redis Sentinel
- *
- * Timeout units
- *   • d   Days
- *   • h   Hours
- *   • m   Minutes
- *   • s   Seconds
- *   • ms  Milliseconds
- *   • us  Microseconds
- *   • ns  Nanoseconds
- *
- * Hint: The database parameter within the query part has higher precedence than the database in the path.
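For illustration, a minimal sketch of the creation styles and URI syntax described above, using only the Builder and create(...) methods of the RedisURI class removed in this diff (host names, ports, passwords and the sentinel master id are placeholder values):

    import java.util.concurrent.TimeUnit;
    import com.lambdaworks.redis.RedisURI;

    class RedisUriSketch {
        static void sketch() {
            // Builder style: standalone host with authentication, database 2 and a 10 second command timeout
            RedisURI standalone = RedisURI.Builder.redis("localhost", 6379)
                    .withPassword("password")
                    .withDatabase(2)
                    .withTimeout(10, TimeUnit.SECONDS)
                    .build();

            // URI-string style: the same details expressed with the query syntax shown above
            RedisURI fromString = RedisURI.create("redis://password@localhost:6379/2?timeout=10s");

            // Sentinel style: the sentinelMasterId query parameter is mandatory for redis-sentinel URIs
            RedisURI sentinel = RedisURI.create("redis-sentinel://sentinel1:26379,sentinel2:26379?sentinelMasterId=mymaster");
        }
    }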

- * - * - * RedisURI supports Redis Standalone, Redis Sentinel and Redis Cluster with plain, SSL, TLS and unix domain socket connections. - * - * @author Mark Paluch - * @since 3.0 - */ -@SuppressWarnings("serial") -public class RedisURI implements Serializable, ConnectionPoint { - - public static final String URI_SCHEME_REDIS_SENTINEL = "redis-sentinel"; - public static final String URI_SCHEME_REDIS = "redis"; - public static final String URI_SCHEME_REDIS_SECURE = "rediss"; - public static final String URI_SCHEME_REDIS_SECURE_ALT = "redis+ssl"; - public static final String URI_SCHEME_REDIS_TLS_ALT = "redis+tls"; - public static final String URI_SCHEME_REDIS_SOCKET = "redis-socket"; - public static final String URI_SCHEME_REDIS_SOCKET_ALT = "redis+socket"; - public static final String PARAMETER_NAME_TIMEOUT = "timeout"; - public static final String PARAMETER_NAME_DATABASE = "database"; - public static final String PARAMETER_NAME_DATABASE_ALT = "db"; - public static final String PARAMETER_NAME_SENTINEL_MASTER_ID = "sentinelMasterId"; - - public static final Map TIME_UNIT_MAP; - - static { - Map unitMap = new HashMap(); - unitMap.put("ns", TimeUnit.NANOSECONDS); - unitMap.put("us", TimeUnit.MICROSECONDS); - unitMap.put("ms", TimeUnit.MILLISECONDS); - unitMap.put("s", TimeUnit.SECONDS); - unitMap.put("m", TimeUnit.MINUTES); - unitMap.put("h", TimeUnit.HOURS); - unitMap.put("d", TimeUnit.DAYS); - TIME_UNIT_MAP = Collections.unmodifiableMap(unitMap); - } - - /** - * The default sentinel port. - */ - public static final int DEFAULT_SENTINEL_PORT = 26379; - - /** - * The default redis port. - */ - public static final int DEFAULT_REDIS_PORT = 6379; - - /** - * Default timeout: 60 sec - */ - public static final long DEFAULT_TIMEOUT = 60; - public static final TimeUnit DEFAULT_TIMEOUT_UNIT = TimeUnit.SECONDS; - - private String host; - private String socket; - private String sentinelMasterId; - private int port; - private int database; - private char[] password; - private boolean ssl = false; - private boolean verifyPeer = true; - private boolean startTls = false; - private long timeout = 60; - private TimeUnit unit = TimeUnit.SECONDS; - private final List sentinels = new ArrayList<>(); - - /** - * Default empty constructor. - */ - public RedisURI() { - } - - /** - * Constructor with host/port and timeout. - * - * @param host the host - * @param port the port - * @param timeout timeout value - * @param unit unit of the timeout value - */ - public RedisURI(String host, int port, long timeout, TimeUnit unit) { - this.host = host; - this.port = port; - this.timeout = timeout; - this.unit = unit; - } - - /** - * Returns a new {@link RedisURI.Builder} to construct a {@link RedisURI}. - * - * @return a new {@link RedisURI.Builder} to construct a {@link RedisURI}. - */ - public static RedisURI.Builder builder() { - return new Builder(); - } - - /** - * Create a Redis URI from host and port. - * - * @param host the host - * @param port the port - * @return An instance of {@link RedisURI} containing details from the {@code host} and {@code port}. - */ - public static RedisURI create(String host, int port) { - return Builder.redis(host, port).build(); - } - - /** - * Create a Redis URI from an URI string. - * - * The uri must follow conventions of {@link java.net.URI} - * - * @param uri The URI string. - * @return An instance of {@link RedisURI} containing details from the URI. 
- */ - public static RedisURI create(String uri) { - LettuceAssert.notEmpty(uri, "URI must not be empty"); - return create(URI.create(uri)); - } - - /** - * Create a Redis URI from an URI string: - * - * The uri must follow conventions of {@link java.net.URI} - * - * @param uri The URI. - * @return An instance of {@link RedisURI} containing details from the URI. - */ - public static RedisURI create(URI uri) { - return buildRedisUriFromUri(uri); - } - - /** - * Returns the host. - * - * @return the host. - */ - public String getHost() { - return host; - } - - /** - * Sets the Redis host. - * - * @param host the host - */ - public void setHost(String host) { - this.host = host; - } - - /** - * Returns the Sentinel Master Id. - * - * @return the Sentinel Master Id. - */ - public String getSentinelMasterId() { - return sentinelMasterId; - } - - /** - * Sets the Sentinel Master Id. - * - * @param sentinelMasterId the Sentinel Master Id. - */ - public void setSentinelMasterId(String sentinelMasterId) { - this.sentinelMasterId = sentinelMasterId; - } - - /** - * Returns the Redis port. - * - * @return the Redis port - */ - public int getPort() { - return port; - } - - /** - * Sets the Redis port. Defaults to {@link #DEFAULT_REDIS_PORT}. - * - * @param port the Redis port - */ - public void setPort(int port) { - this.port = port; - } - - /** - * Returns the Unix Domain Socket path. - * - * @return the Unix Domain Socket path. - */ - public String getSocket() { - return socket; - } - - /** - * Sets the Unix Domain Socket path. - * - * @param socket the Unix Domain Socket path. - */ - public void setSocket(String socket) { - this.socket = socket; - } - - /** - * Returns the password. - * - * @return the password - */ - public char[] getPassword() { - return password; - } - - /** - * Sets the password. Use empty string to skip authentication. - * - * @param password the password, must not be {@literal null}. - */ - public void setPassword(String password) { - - LettuceAssert.notNull(password, "Password must not be null"); - this.password = password.toCharArray(); - } - - /** - * Returns the command timeout for synchronous command execution. - * - * @return the Timeout - */ - public long getTimeout() { - return timeout; - } - - /** - * Sets the command timeout for synchronous command execution. - * - * @param timeout the command timeout for synchronous command execution. - */ - public void setTimeout(long timeout) { - this.timeout = timeout; - } - - /** - * Returns the {@link TimeUnit} for the command timeout. - * - * @return the {@link TimeUnit} for the command timeout. - */ - public TimeUnit getUnit() { - return unit; - } - - /** - * Sets the {@link TimeUnit} for the command timeout. - * - * @param unit the {@link TimeUnit} for the command timeout, must not be {@literal null} - */ - public void setUnit(TimeUnit unit) { - - LettuceAssert.notNull(unit, "TimeUnit must not be null"); - this.unit = unit; - } - - /** - * Returns the Redis database number. Databases are only available for Redis Standalone and Redis Master/Slave. - * - * @return - */ - public int getDatabase() { - return database; - } - - /** - * Sets the Redis database number. Databases are only available for Redis Standalone and Redis Master/Slave. - * - * @param database the Redis database number. - */ - public void setDatabase(int database) { - this.database = database; - } - - /** - * Returns {@literal true} if SSL mode is enabled. - * - * @return {@literal true} if SSL mode is enabled. 
- */ - public boolean isSsl() { - return ssl; - } - - /** - * Sets whether to use SSL model. - * - * @param ssl - */ - public void setSsl(boolean ssl) { - this.ssl = ssl; - } - - /** - * Sets whether to verify peers when using {@link #isSsl() SSL}. - * - * @return {@literal true} to verify peers when using {@link #isSsl() SSL}. - */ - public boolean isVerifyPeer() { - return verifyPeer; - } - - /** - * Sets whether to verify peers when using {@link #isSsl() SSL}. - * - * @param verifyPeer {@literal true} to verify peers when using {@link #isSsl() SSL}. - */ - public void setVerifyPeer(boolean verifyPeer) { - this.verifyPeer = verifyPeer; - } - - /** - * Returns {@literal true} if StartTLS is enabled. - * - * @return {@literal true} if StartTLS is enabled. - */ - public boolean isStartTls() { - return startTls; - } - - /** - * Returns whether StartTLS is enabled. - * - * @param startTls {@literal true} if StartTLS is enabled. - */ - public void setStartTls(boolean startTls) { - this.startTls = startTls; - } - - /** - * - * @return the list of {@link RedisURI Redis Sentinel URIs}. - */ - public List getSentinels() { - return sentinels; - } - - /** - * Creates an URI based on the RedisURI. - * - * @return URI based on the RedisURI - */ - public URI toURI() { - String scheme = getScheme(); - String authority = getAuthority(scheme); - String queryString = getQueryString(); - String uri = scheme + "://" + authority; - - if (!queryString.isEmpty()) { - uri += "?" + queryString; - } - - return URI.create(uri); - } - - private static RedisURI buildRedisUriFromUri(URI uri) { - Builder builder; - if (uri.getScheme().equals(URI_SCHEME_REDIS_SENTINEL)) { - builder = configureSentinel(uri); - } else { - builder = configureStandalone(uri); - } - - String userInfo = uri.getUserInfo(); - - if (isEmpty(userInfo) && isNotEmpty(uri.getAuthority()) && uri.getAuthority().indexOf('@') > 0) { - userInfo = uri.getAuthority().substring(0, uri.getAuthority().indexOf('@')); - } - - if (isNotEmpty(userInfo)) { - String password = userInfo; - if (password.startsWith(":")) { - password = password.substring(1); - } else { - - int index = password.indexOf(':'); - if (index > 0) { - password = password.substring(index + 1); - } - } - if (password != null && !password.equals("")) { - builder.withPassword(password); - } - } - - if (isNotEmpty(uri.getPath()) && builder.socket == null) { - String pathSuffix = uri.getPath().substring(1); - - if (isNotEmpty(pathSuffix)) { - builder.withDatabase(Integer.parseInt(pathSuffix)); - } - } - - if (isNotEmpty(uri.getQuery())) { - StringTokenizer st = new StringTokenizer(uri.getQuery(), "&;"); - while (st.hasMoreTokens()) { - String queryParam = st.nextToken(); - String forStartWith = queryParam.toLowerCase(); - if (forStartWith.startsWith(PARAMETER_NAME_TIMEOUT + "=")) { - parseTimeout(builder, queryParam.toLowerCase()); - } - - if (forStartWith.startsWith(PARAMETER_NAME_DATABASE + "=") - || queryParam.startsWith(PARAMETER_NAME_DATABASE_ALT + "=")) { - parseDatabase(builder, queryParam); - } - - if (forStartWith.startsWith(PARAMETER_NAME_SENTINEL_MASTER_ID.toLowerCase() + "=")) { - parseSentinelMasterId(builder, queryParam); - } - } - } - - if (uri.getScheme().equals(URI_SCHEME_REDIS_SENTINEL)) { - LettuceAssert.notEmpty(builder.sentinelMasterId, "URI must contain the sentinelMasterId"); - } - - return builder.build(); - } - - private String getAuthority(final String scheme) { - - String authority = null; - if (host != null) { - authority = urlEncode(host) + getPortPart(port, scheme); - 
} - - if (sentinels.size() != 0) { - String joinedSentinels = sentinels.stream() - .map(redisURI -> urlEncode(redisURI.getHost()) + getPortPart(redisURI.getPort(), scheme)) - .collect(Collectors.joining(",")); - - authority = joinedSentinels; - } - - if (socket != null) { - authority = urlEncode(socket); - } - - if (password != null && password.length != 0) { - authority = urlEncode(new String(password)) + "@" + authority; - } - return authority; - } - - private String getQueryString() { - List queryPairs = new ArrayList<>(); - - if (database != 0) { - queryPairs.add(PARAMETER_NAME_DATABASE + "=" + database); - } - - if (sentinelMasterId != null) { - queryPairs.add(PARAMETER_NAME_SENTINEL_MASTER_ID + "=" + urlEncode(sentinelMasterId)); - } - - if (timeout != 0 && unit != null && (timeout != DEFAULT_TIMEOUT && !unit.equals(DEFAULT_TIMEOUT_UNIT))) { - queryPairs.add(PARAMETER_NAME_TIMEOUT + "=" + timeout + toQueryParamUnit(unit)); - } - - return queryPairs.stream().collect(Collectors.joining("&")); - } - - private String getPortPart(int port, String scheme) { - - if (URI_SCHEME_REDIS_SENTINEL.equals(scheme) && port == DEFAULT_SENTINEL_PORT) { - return ""; - } - - if (URI_SCHEME_REDIS.equals(scheme) && port == DEFAULT_REDIS_PORT) { - return ""; - } - - return ":" + port; - } - - private String getScheme() { - String scheme = URI_SCHEME_REDIS; - - if (isSsl()) { - if (isStartTls()) { - scheme = URI_SCHEME_REDIS_TLS_ALT; - } else { - scheme = URI_SCHEME_REDIS_SECURE; - } - } - - if (socket != null) { - scheme = URI_SCHEME_REDIS_SOCKET; - } - - if (host == null && !sentinels.isEmpty()) { - scheme = URI_SCHEME_REDIS_SENTINEL; - } - return scheme; - } - - private String toQueryParamUnit(TimeUnit unit) { - - for (Map.Entry entry : TIME_UNIT_MAP.entrySet()) { - if (entry.getValue().equals(unit)) { - return entry.getKey(); - } - } - return ""; - } - - /** - * URL encode the {@code str} without slash escaping {@code %2F} - * - * @param str - * @return the URL-encoded string - */ - private String urlEncode(String str) { - try { - return URLEncoder.encode(str, LettuceCharsets.UTF8.name()).replaceAll("%2F", "/"); - } catch (UnsupportedEncodingException e) { - throw new IllegalStateException(e); - } - } - - /** - * - * @return the resolved {@link SocketAddress} based either on host/port or the socket. - */ - public SocketAddress getResolvedAddress() { - if (getSocket() != null) { - return EpollProvider.newSocketAddress(getSocket()); - } - InetSocketAddress socketAddress = new InetSocketAddress(host, port); - - return socketAddress; - } - - @Override - public String toString() { - final StringBuilder sb = new StringBuilder(); - sb.append(getClass().getSimpleName()); - - sb.append(" ["); - - if (host != null) { - sb.append("host='").append(host).append('\''); - sb.append(", port=").append(port); - } - - if (socket != null) { - sb.append("socket='").append(socket).append('\''); - } - - if (sentinelMasterId != null) { - sb.append("sentinels=").append(getSentinels()); - sb.append(", sentinelMasterId=").append(sentinelMasterId); - } - - sb.append(']'); - return sb.toString(); - } - - @Override - public boolean equals(Object o) { - if (this == o) - return true; - if (!(o instanceof RedisURI)) - return false; - - RedisURI redisURI = (RedisURI) o; - - if (port != redisURI.port) - return false; - if (database != redisURI.database) - return false; - if (host != null ? !host.equals(redisURI.host) : redisURI.host != null) - return false; - if (socket != null ? 
!socket.equals(redisURI.socket) : redisURI.socket != null) - return false; - if (sentinelMasterId != null ? !sentinelMasterId.equals(redisURI.sentinelMasterId) : redisURI.sentinelMasterId != null) - return false; - return !(sentinels != null ? !sentinels.equals(redisURI.sentinels) : redisURI.sentinels != null); - - } - - @Override - public int hashCode() { - int result = host != null ? host.hashCode() : 0; - result = 31 * result + (socket != null ? socket.hashCode() : 0); - result = 31 * result + (sentinelMasterId != null ? sentinelMasterId.hashCode() : 0); - result = 31 * result + port; - result = 31 * result + database; - result = 31 * result + (sentinels != null ? sentinels.hashCode() : 0); - return result; - } - - private static void parseTimeout(Builder builder, String queryParam) { - int index = queryParam.indexOf('='); - if (index < 0) { - return; - } - - String timeoutString = queryParam.substring(index + 1); - - int numbersEnd = 0; - while (numbersEnd < timeoutString.length() && Character.isDigit(timeoutString.charAt(numbersEnd))) { - numbersEnd++; - } - - if (numbersEnd == 0) { - if (timeoutString.startsWith("-")) { - builder.withTimeout(0, TimeUnit.MILLISECONDS); - } else { - // no-op, leave defaults - } - } else { - String timeoutValueString = timeoutString.substring(0, numbersEnd); - long timeoutValue = Long.parseLong(timeoutValueString); - builder.withTimeout(timeoutValue, TimeUnit.MILLISECONDS); - - String suffix = timeoutString.substring(numbersEnd); - TimeUnit timeoutUnit = TIME_UNIT_MAP.get(suffix); - if (timeoutUnit == null) { - timeoutUnit = TimeUnit.MILLISECONDS; - } - - builder.withTimeout(timeoutValue, timeoutUnit); - } - } - - private static void parseDatabase(Builder builder, String queryParam) { - int index = queryParam.indexOf('='); - if (index < 0) { - return; - } - - String databaseString = queryParam.substring(index + 1); - - int numbersEnd = 0; - while (numbersEnd < databaseString.length() && Character.isDigit(databaseString.charAt(numbersEnd))) { - numbersEnd++; - } - - if (numbersEnd != 0) { - String databaseValueString = databaseString.substring(0, numbersEnd); - int value = Integer.parseInt(databaseValueString); - builder.withDatabase(value); - } - } - - private static void parseSentinelMasterId(Builder builder, String queryParam) { - int index = queryParam.indexOf('='); - if (index < 0) { - return; - } - - String masterIdString = queryParam.substring(index + 1); - if (isNotEmpty(masterIdString)) { - builder.withSentinelMasterId(masterIdString); - } - } - - private static Builder configureStandalone(URI uri) { - Builder builder; - Set allowedSchemes = LettuceSets.unmodifiableSet(URI_SCHEME_REDIS, URI_SCHEME_REDIS_SECURE, - URI_SCHEME_REDIS_SOCKET, URI_SCHEME_REDIS_SOCKET_ALT, URI_SCHEME_REDIS_SECURE_ALT, URI_SCHEME_REDIS_TLS_ALT); - - if (!allowedSchemes.contains(uri.getScheme())) { - throw new IllegalArgumentException("Scheme " + uri.getScheme() + " not supported"); - } - - if (URI_SCHEME_REDIS_SOCKET.equals(uri.getScheme()) || URI_SCHEME_REDIS_SOCKET_ALT.equals(uri.getScheme())) { - builder = Builder.socket(uri.getPath()); - } else { - if (uri.getPort() > 0) { - builder = Builder.redis(uri.getHost(), uri.getPort()); - } else { - builder = Builder.redis(uri.getHost()); - } - } - - if (URI_SCHEME_REDIS_SECURE.equals(uri.getScheme()) || URI_SCHEME_REDIS_SECURE_ALT.equals(uri.getScheme())) { - builder.withSsl(true); - } - - if (URI_SCHEME_REDIS_TLS_ALT.equals(uri.getScheme())) { - builder.withSsl(true); - builder.withStartTls(true); - } - return builder; - 
} - - private static RedisURI.Builder configureSentinel(URI uri) { - String masterId = uri.getFragment(); - - RedisURI.Builder builder = null; - - if (isNotEmpty(uri.getHost())) { - if (uri.getPort() != -1) { - builder = RedisURI.Builder.sentinel(uri.getHost(), uri.getPort()); - } else { - builder = RedisURI.Builder.sentinel(uri.getHost()); - } - } - - if (builder == null && isNotEmpty(uri.getAuthority())) { - String authority = uri.getAuthority(); - if (authority.indexOf('@') > -1) { - authority = authority.substring(authority.indexOf('@') + 1); - } - - String[] hosts = authority.split("\\,"); - for (String host : hosts) { - HostAndPort hostAndPort = HostAndPort.parse(host); - if (builder == null) { - if (hostAndPort.hasPort()) { - builder = RedisURI.Builder.sentinel(hostAndPort.getHostText(), hostAndPort.getPort()); - } else { - builder = RedisURI.Builder.sentinel(hostAndPort.getHostText()); - } - } else { - if (hostAndPort.hasPort()) { - builder.withSentinel(hostAndPort.getHostText(), hostAndPort.getPort()); - } else { - builder.withSentinel(hostAndPort.getHostText()); - } - } - } - } - - if (isNotEmpty(masterId)) { - builder.withSentinelMasterId(masterId); - } - - LettuceAssert.notNull(builder, "Invalid URI, cannot get host part"); - return builder; - } - - /** - * Builder for Redis URI. - */ - public static class Builder { - - private String host; - private String socket; - private String sentinelMasterId; - private int port; - private int database; - private char[] password; - private boolean ssl = false; - private boolean verifyPeer = true; - private boolean startTls = false; - private long timeout = 60; - private TimeUnit unit = TimeUnit.SECONDS; - private final List sentinels = new ArrayList<>(); - - /** - * @deprecated Use {@link RedisURI#builder()} - */ - @Deprecated - public Builder() { - } - - /** - * Set Redis socket. Creates a new builder. - * - * @param socket the host name - * @return New builder with Redis socket. - */ - public static Builder socket(String socket) { - - LettuceAssert.notNull(socket, "Socket must not be null"); - - Builder builder = RedisURI.builder(); - builder.socket = socket; - return builder; - } - - /** - * Set Redis host. Creates a new builder. - * - * @param host the host name - * @return New builder with Redis host/port. - */ - public static Builder redis(String host) { - return redis(host, DEFAULT_REDIS_PORT); - } - - /** - * Set Redis host and port. Creates a new builder - * - * @param host the host name - * @param port the port - * @return New builder with Redis host/port. - */ - public static Builder redis(String host, int port) { - - LettuceAssert.notEmpty(host, "Host must not be empty"); - LettuceAssert.isTrue(isValidPort(port), String.format("Port out of range: %s", port)); - - Builder builder = RedisURI.builder(); - return builder.withHost(host).withPort(port); - } - - /** - * Set Sentinel host. Creates a new builder. - * - * @param host the host name - * @return New builder with Sentinel host/port. - */ - public static Builder sentinel(String host) { - - LettuceAssert.notEmpty(host, "Host must not be empty"); - - Builder builder = RedisURI.builder(); - return builder.withSentinel(host); - } - - /** - * Set Sentinel host and port. Creates a new builder. - * - * @param host the host name - * @param port the port - * @return New builder with Sentinel host/port. 
- */ - public static Builder sentinel(String host, int port) { - - LettuceAssert.notEmpty(host, "Host must not be empty"); - LettuceAssert.isTrue(isValidPort(port), String.format("Port out of range: %s", port)); - - Builder builder = RedisURI.builder(); - return builder.withSentinel(host, port); - } - - /** - * Set Sentinel host and master id. Creates a new builder. - * - * @param host the host name - * @param masterId sentinel master id - * @return New builder with Sentinel host/port. - */ - public static Builder sentinel(String host, String masterId) { - return sentinel(host, DEFAULT_SENTINEL_PORT, masterId); - } - - /** - * Set Sentinel host, port and master id. Creates a new builder. - * - * @param host the host name - * @param port the port - * @param masterId sentinel master id - * @return New builder with Sentinel host/port. - */ - public static Builder sentinel(String host, int port, String masterId) { - - LettuceAssert.notEmpty(host, "Host must not be empty"); - LettuceAssert.isTrue(isValidPort(port), String.format("Port out of range: %s", port)); - - Builder builder = RedisURI.builder(); - return builder.withSentinelMasterId(masterId).withSentinel(host, port); - } - - /** - * Add a withSentinel host to the existing builder. - * - * @param host the host name - * @return the builder - */ - public Builder withSentinel(String host) { - return withSentinel(host, DEFAULT_SENTINEL_PORT); - } - - /** - * Add a withSentinel host/port to the existing builder. - * - * @param host the host name - * @param port the port - * @return the builder - */ - public Builder withSentinel(String host, int port) { - - LettuceAssert.assertState(this.host == null, "Cannot use with Redis mode."); - LettuceAssert.notEmpty(host, "Host must not be empty"); - LettuceAssert.isTrue(isValidPort(port), String.format("Port out of range: %s", port)); - - sentinels.add(HostAndPort.of(host, port)); - return this; - } - - /** - * Adds host information to the builder. Does only affect Redis URI, cannot be used with Sentinel connections. - * - * @param host the port - * @return the builder - */ - public Builder withHost(String host) { - - LettuceAssert.assertState(this.sentinels.isEmpty(), "Sentinels are non-empty. Cannot use in Sentinel mode."); - LettuceAssert.notEmpty(host, "Host must not be empty"); - - this.host = host; - return this; - } - - /** - * Adds port information to the builder. Does only affect Redis URI, cannot be used with Sentinel connections. - * - * @param port the port - * @return the builder - */ - public Builder withPort(int port) { - - LettuceAssert.assertState(this.host != null, "Host is null. Cannot use in Sentinel mode."); - LettuceAssert.isTrue(isValidPort(port), String.format("Port out of range: %s", port)); - - this.port = port; - return this; - } - - /** - * Adds ssl information to the builder. Does only affect Redis URI, cannot be used with Sentinel connections. - * - * @param ssl {@literal true} if use SSL - * @return the builder - */ - public Builder withSsl(boolean ssl) { - - LettuceAssert.assertState(this.host != null, "Host is null. Cannot use in Sentinel mode."); - - this.ssl = ssl; - return this; - } - - /** - * Enables/disables StartTLS when using SSL. Does only affect Redis URI, cannot be used with Sentinel connections. - * - * @param startTls {@literal true} if use StartTLS - * @return the builder - */ - public Builder withStartTls(boolean startTls) { - - LettuceAssert.assertState(this.host != null, "Host is null. 
Cannot use in Sentinel mode."); - - this.startTls = startTls; - return this; - } - - /** - * Enables/disables peer verification. Does only affect Redis URI, cannot be used with Sentinel connections. - * - * @param verifyPeer {@literal true} to verify hosts when using SSL - * @return the builder - */ - public Builder withVerifyPeer(boolean verifyPeer) { - - LettuceAssert.assertState(this.host != null, "Host is null. Cannot use in Sentinel mode."); - - this.verifyPeer = verifyPeer; - return this; - } - - /** - * Adds database selection. - * - * @param database the database number - * @return the builder - */ - public Builder withDatabase(int database) { - - LettuceAssert.isTrue(database >= 0 && database <= 15, "Invalid database number: " + database); - - this.database = database; - return this; - } - - /** - * Adds authentication. - * - * @param password the password - * @return the builder - */ - public Builder withPassword(String password) { - - LettuceAssert.notNull(password, "Password must not be null"); - - this.password = password.toCharArray(); - return this; - } - - /** - * Adds timeout. - * - * @param timeout must be greater or equal 0" - * @param unit the timeout time unit. - * @return the builder - */ - public Builder withTimeout(long timeout, TimeUnit unit) { - - LettuceAssert.notNull(unit, "TimeUnit must not be null"); - LettuceAssert.isTrue(timeout >= 0, "Timeout must be greater or equal 0"); - - this.timeout = timeout; - this.unit = unit; - return this; - } - - /** - * Adds a sentinel master Id. - * - * @param sentinelMasterId sentinel master id, must not be empty or {@literal null} - * @return the builder - */ - public Builder withSentinelMasterId(String sentinelMasterId) { - - LettuceAssert.notEmpty(sentinelMasterId, "Sentinel master id must not empty"); - - this.sentinelMasterId = sentinelMasterId; - return this; - } - - /** - * - * @return the RedisURI. - */ - public RedisURI build() { - - if (sentinels.isEmpty() && LettuceStrings.isEmpty(host) && LettuceStrings.isEmpty(socket)) { - throw new IllegalStateException( - "Cannot build a RedisURI. One of the following must be provided Host, Socket or Sentinel"); - } - - RedisURI redisURI = new RedisURI(); - redisURI.setHost(host); - redisURI.setPort(port); - redisURI.password = password; - redisURI.setDatabase(database); - - redisURI.setSentinelMasterId(sentinelMasterId); - - for (HostAndPort sentinel : sentinels) { - redisURI.getSentinels().add(new RedisURI(sentinel.getHostText(), sentinel.getPort(), timeout, unit)); - } - - redisURI.setSocket(socket); - redisURI.setSsl(ssl); - redisURI.setStartTls(startTls); - redisURI.setVerifyPeer(verifyPeer); - - redisURI.setTimeout(timeout); - redisURI.setUnit(unit); - - return redisURI; - } - } - - /** Return true for valid port numbers. */ - private static boolean isValidPort(int port) { - return port >= 0 && port <= 65535; - } -} diff --git a/src/main/java/com/lambdaworks/redis/ScanArgs.java b/src/main/java/com/lambdaworks/redis/ScanArgs.java deleted file mode 100644 index 8a8a413591..0000000000 --- a/src/main/java/com/lambdaworks/redis/ScanArgs.java +++ /dev/null @@ -1,88 +0,0 @@ -package com.lambdaworks.redis; - -import static com.lambdaworks.redis.protocol.CommandKeyword.*; - -import com.lambdaworks.redis.internal.LettuceAssert; -import com.lambdaworks.redis.protocol.CommandArgs; - -/** - * Argument list builder for the redis scan commans (scan, hscan, sscan, zscan) . 
Static import the methods from {@link Builder} - * and chain the method calls: {@code matches("weight_*").limit(0, 2)}. - * - * @author Mark Paluch - * @since 3.0 - */ -public class ScanArgs { - - private Long count; - private String match; - - /** - * Static builder methods. - */ - public static class Builder { - - /** - * Utility constructor. - */ - private Builder() { - - } - - /** - * Create a new instance of {@link ScanArgs} with limit. - * - * @param count number of elements to scan - * @return a new instance of {@link ScanArgs} - */ - public static ScanArgs limit(long count) { - return new ScanArgs().limit(count); - } - - /** - * Create a new instance of {@link ScanArgs} with match filter. - * - * @param matches the filter - * @return a new instance of {@link ScanArgs} - */ - public static ScanArgs matches(String matches) { - return new ScanArgs().match(matches); - } - } - - /** - * Match filter - * - * @param match the filter - * @return the current instance of {@link ScanArgs} - */ - public ScanArgs match(String match) { - LettuceAssert.notNull(match, "Match must not be null"); - this.match = match; - return this; - } - - /** - * Limit the scan by count - * - * @param count number of elements to scan - * @return the current instance of {@link ScanArgs} - */ - public ScanArgs limit(long count) { - this.count = count; - return this; - } - - void build(CommandArgs args) { - - if (match != null) { - args.add(MATCH).add(match); - } - - if (count != null) { - args.add(COUNT).add(count); - } - - } - -} diff --git a/src/main/java/com/lambdaworks/redis/ScanCursor.java b/src/main/java/com/lambdaworks/redis/ScanCursor.java deleted file mode 100644 index d18082e706..0000000000 --- a/src/main/java/com/lambdaworks/redis/ScanCursor.java +++ /dev/null @@ -1,102 +0,0 @@ -package com.lambdaworks.redis; - -import com.lambdaworks.redis.internal.LettuceAssert; - -/** - * Generic Cursor data structure. - * - * @author Mark Paluch - * @since 3.0 - */ -public class ScanCursor { - - /** - * Finished cursor. - */ - public final static ScanCursor FINISHED = new ImmutableScanCursor("0", true); - - /** - * Initial cursor. - */ - public final static ScanCursor INITIAL = new ImmutableScanCursor("0", false); - - private String cursor; - private boolean finished; - - /** - * Creates a new {@link ScanCursor}. - */ - public ScanCursor() { - } - - /** - * Creates a new {@link ScanCursor}. - * - * @param cursor - * @param finished - */ - public ScanCursor(String cursor, boolean finished) { - this.cursor = cursor; - this.finished = finished; - } - - /** - * - * @return cursor id - */ - public String getCursor() { - return cursor; - } - - /** - * Set the cursor - * - * @param cursor the cursor id - */ - public void setCursor(String cursor) { - LettuceAssert.notEmpty(cursor, "Cursor must not be empty"); - - this.cursor = cursor; - } - - /** - * - * @return true if the scan operation of this cursor is finished. - */ - public boolean isFinished() { - return finished; - } - - public void setFinished(boolean finished) { - this.finished = finished; - } - - /** - * Creates a Scan-Cursor reference. 
- * - * @param cursor the cursor id - * @return ScanCursor - */ - public static ScanCursor of(String cursor) { - ScanCursor scanCursor = new ScanCursor(); - scanCursor.setCursor(cursor); - return scanCursor; - } - - private static class ImmutableScanCursor extends ScanCursor { - - public ImmutableScanCursor(String cursor, boolean finished) { - super(cursor, finished); - } - - @Override - public void setCursor(String cursor) { - throw new UnsupportedOperationException("setCursor not supported on " + getClass().getSimpleName()); - } - - @Override - public void setFinished(boolean finished) { - throw new UnsupportedOperationException("setFinished not supported on " + getClass().getSimpleName()); - } - } -} diff --git a/src/main/java/com/lambdaworks/redis/ScoredValue.java b/src/main/java/com/lambdaworks/redis/ScoredValue.java deleted file mode 100644 index dd6e9679e9..0000000000 --- a/src/main/java/com/lambdaworks/redis/ScoredValue.java +++ /dev/null @@ -1,42 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis; - -/** - * A value and its associated score from a ZSET. - * - * @param Value type. - * @author Will Glozer - */ -public class ScoredValue { - public final double score; - public final V value; - - public ScoredValue(double score, V value) { - this.score = score; - this.value = value; - } - - @Override - public boolean equals(Object o) { - if (o == null || getClass() != o.getClass()) { - return false; - } - ScoredValue that = (ScoredValue) o; - return Double.compare(that.score, score) == 0 && value.equals(that.value); - } - - @Override - public int hashCode() { - - long temp = Double.doubleToLongBits(score); - int result = (int) (temp ^ (temp >>> 32)); - result = 31 * result + (value != null ? value.hashCode() : 0); - return result; - } - - @Override - public String toString() { - return String.format("(%f, %s)", score, value); - } -} diff --git a/src/main/java/com/lambdaworks/redis/ScoredValueScanCursor.java b/src/main/java/com/lambdaworks/redis/ScoredValueScanCursor.java deleted file mode 100644 index 6cddadfe19..0000000000 --- a/src/main/java/com/lambdaworks/redis/ScoredValueScanCursor.java +++ /dev/null @@ -1,23 +0,0 @@ -package com.lambdaworks.redis; - -import java.util.ArrayList; -import java.util.List; - -/** - * Cursor providing a list of {@link com.lambdaworks.redis.ScoredValue} - * - * @param Value type. - * @author Mark Paluch - * @since 3.0 - */ -public class ScoredValueScanCursor extends ScanCursor { - - private final List> values = new ArrayList<>(); - - public ScoredValueScanCursor() { - } - - public List> getValues() { - return values; - } -} diff --git a/src/main/java/com/lambdaworks/redis/ScriptOutputType.java b/src/main/java/com/lambdaworks/redis/ScriptOutputType.java deleted file mode 100644 index 61984e799b..0000000000 --- a/src/main/java/com/lambdaworks/redis/ScriptOutputType.java +++ /dev/null @@ -1,40 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis; - -/** - * A Lua script returns one of the following types: - * - *
- *   • {@link #BOOLEAN} boolean
- *   • {@link #INTEGER} 64-bit integer
- *   • {@link #STATUS} status string
- *   • {@link #VALUE} value
- *   • {@link #MULTI} of these types
- *
- * Redis to Lua conversion table:
- *   • Redis integer reply -> Lua number
- *   • Redis bulk reply -> Lua string
- *   • Redis multi bulk reply -> Lua table (may have other Redis data types nested)
- *   • Redis status reply -> Lua table with a single {@code ok} field containing the status
- *   • Redis error reply -> Lua table with a single {@code err} field containing the error
- *   • Redis Nil bulk reply and Nil multi bulk reply -> Lua false boolean type
- *
- * Lua to Redis conversion table:
- *   • Lua number -> Redis integer reply (the number is converted into an integer)
- *   • Lua string -> Redis bulk reply
- *   • Lua table (array) -> Redis multi bulk reply (truncated to the first {@literal null} inside the Lua array if any)
- *   • Lua table with a single {@code ok} field -> Redis status reply
- *   • Lua table with a single {@code err} field -> Redis error reply
- *   • Lua boolean false -> Redis Nil bulk reply.
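As a rough illustration of matching the output type to what a Lua script returns: the snippet below is a sketch only, and the eval(...) call shape is assumed from the Lettuce synchronous scripting API, which is not part of this diff:

    import com.lambdaworks.redis.ScriptOutputType;
    import com.lambdaworks.redis.api.sync.RedisCommands;

    class ScriptOutputTypeSketch {
        // Assumed signature: <T> T eval(String script, ScriptOutputType type, K... keys)
        static void sketch(RedisCommands<String, String> commands) {
            // The script returns a Lua number, so INTEGER yields a 64-bit integer (Long)
            Long counter = commands.eval("return redis.call('incr', KEYS[1])", ScriptOutputType.INTEGER, "counter");

            // A script returning a bulk string is decoded as a value, so VALUE yields a String here
            String value = commands.eval("return redis.call('get', KEYS[1])", ScriptOutputType.VALUE, "counter");
        }
    }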
- * - * @author Will Glozer - */ -public enum ScriptOutputType { - BOOLEAN, INTEGER, MULTI, STATUS, VALUE -} diff --git a/src/main/java/com/lambdaworks/redis/SetArgs.java b/src/main/java/com/lambdaworks/redis/SetArgs.java deleted file mode 100644 index e35cad1c61..0000000000 --- a/src/main/java/com/lambdaworks/redis/SetArgs.java +++ /dev/null @@ -1,82 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. -// Copyright (C) 2013 - Vincent Rischmann. All rights reserved. - -package com.lambdaworks.redis; - -import com.lambdaworks.redis.protocol.CommandArgs; - -/** - * Argument list builder for the new redis SET command starting from Redis 2.6.12. - * Static import the methods from {@link Builder} and chain the method calls: {@code ex(10).nx()}. - * - * @author Vincent Rischmann - */ -public class SetArgs { - private Long ex; - private Long px; - private boolean nx = false; - private boolean xx = false; - - public static class Builder { - /** - * Utility constructor. - */ - private Builder() { - - } - - public static SetArgs ex(long ex) { - return new SetArgs().ex(ex); - } - - public static SetArgs px(long px) { - return new SetArgs().px(px); - } - - public static SetArgs nx() { - return new SetArgs().nx(); - } - - public static SetArgs xx() { - return new SetArgs().xx(); - } - } - - public SetArgs ex(long ex) { - this.ex = ex; - return this; - } - - public SetArgs px(long px) { - this.px = px; - return this; - } - - public SetArgs nx() { - this.nx = true; - return this; - } - - public SetArgs xx() { - this.xx = true; - return this; - } - - public void build(CommandArgs args) { - if (ex != null) { - args.add("EX").add(ex); - } - - if (px != null) { - args.add("PX").add(px); - } - - if (nx) { - args.add("NX"); - } - - if (xx) { - args.add("XX"); - } - } -} diff --git a/src/main/java/com/lambdaworks/redis/SocketOptions.java b/src/main/java/com/lambdaworks/redis/SocketOptions.java deleted file mode 100644 index bf1b1ba266..0000000000 --- a/src/main/java/com/lambdaworks/redis/SocketOptions.java +++ /dev/null @@ -1,174 +0,0 @@ -package com.lambdaworks.redis; - -import java.util.concurrent.TimeUnit; - -import com.lambdaworks.redis.internal.LettuceAssert; - -/** - * Options to configure low-level socket options for the connections kept to Redis servers. 
- * - * @author Mark Paluch - * @since 4.3 - */ -public class SocketOptions { - - public static final long DEFAULT_CONNECT_TIMEOUT = 10; - public static final TimeUnit DEFAULT_CONNECT_TIMEOUT_UNIT = TimeUnit.SECONDS; - - public static final boolean DEFAULT_SO_KEEPALIVE = false; - public static final boolean DEFAULT_SO_NO_DELAY = false; - - private final long connectTimeout; - private final TimeUnit connectTimeoutUnit; - private final boolean keepAlive; - private final boolean tcpNoDelay; - - protected SocketOptions(Builder builder) { - - this.connectTimeout = builder.connectTimeout; - this.connectTimeoutUnit = builder.connectTimeoutUnit; - this.keepAlive = builder.keepAlive; - this.tcpNoDelay = builder.tcpNoDelay; - } - - protected SocketOptions(SocketOptions original) { - this.connectTimeout = original.getConnectTimeout(); - this.connectTimeoutUnit = original.getConnectTimeoutUnit(); - this.keepAlive = original.isKeepAlive(); - this.tcpNoDelay = original.isTcpNoDelay(); - } - - /** - * Create a copy of {@literal options} - * - * @param options the original - * @return A new instance of {@link SocketOptions} containing the values of {@literal options} - */ - public static SocketOptions copyOf(SocketOptions options) { - return new SocketOptions(options); - } - - /** - * Returns a new {@link SocketOptions.Builder} to construct {@link SocketOptions}. - * - * @return a new {@link SocketOptions.Builder} to construct {@link SocketOptions}. - */ - public static SocketOptions.Builder builder() { - return new SocketOptions.Builder(); - } - - /** - * Create a new {@link SocketOptions} using default settings. - * - * @return a new instance of default cluster client client options. - */ - public static SocketOptions create() { - return builder().build(); - } - - /** - * Builder for {@link SocketOptions}. - */ - public static class Builder { - - private long connectTimeout = DEFAULT_CONNECT_TIMEOUT; - private TimeUnit connectTimeoutUnit = DEFAULT_CONNECT_TIMEOUT_UNIT; - private boolean keepAlive = DEFAULT_SO_KEEPALIVE; - private boolean tcpNoDelay = DEFAULT_SO_NO_DELAY; - - private Builder() { - } - - /** - * Set connection timeout. Defaults to {@literal 10 SECONDS}. See {@link #DEFAULT_CONNECT_TIMEOUT} and - * {@link #DEFAULT_CONNECT_TIMEOUT_UNIT}. - * - * @param connectTimeout connection timeout, must be greater {@literal 0}. - * @param connectTimeoutUnit unit for {@code connectTimeout}, must not be {@literal null}. - * @return {@code this} - */ - public Builder connectTimeout(long connectTimeout, TimeUnit connectTimeoutUnit) { - - LettuceAssert.isTrue(connectTimeout > 0, "Connect timeout must be greater 0"); - LettuceAssert.notNull(connectTimeoutUnit, "TimeUnit must not be null"); - - this.connectTimeout = connectTimeout; - this.connectTimeoutUnit = connectTimeoutUnit; - return this; - } - - /** - * Sets whether to enable TCP keepalive. Defaults to {@literal false}. See {@link #DEFAULT_SO_KEEPALIVE}. - * - * @param keepAlive whether to enable or disable the TCP keepalive. - * @return {@code this} - * @see java.net.SocketOptions#SO_KEEPALIVE - */ - public Builder keepAlive(boolean keepAlive) { - - this.keepAlive = keepAlive; - return this; - } - - /** - * Sets whether to disable Nagle's algorithm. Defaults to {@literal false} (Nagle enabled). See - * {@link #DEFAULT_SO_NO_DELAY}. - * - * @param tcpNoDelay {@literal true} to disable Nagle's algorithm, {@link false} to enable Nagle's algorithm. 
- * @return {@code this} - * @see java.net.SocketOptions#TCP_NODELAY - */ - public Builder tcpNoDelay(boolean tcpNoDelay) { - - this.tcpNoDelay = tcpNoDelay; - return this; - } - - /** - * Create a new instance of {@link SocketOptions} - * - * @return new instance of {@link SocketOptions} - */ - public SocketOptions build() { - return new SocketOptions(this); - } - } - - /** - * Returns the connection timeout. - * - * @return the connection timeout. - */ - public long getConnectTimeout() { - return connectTimeout; - } - - /** - * Returns the the connection timeout unit. - * - * @return the connection timeout unit. - */ - public TimeUnit getConnectTimeoutUnit() { - return connectTimeoutUnit; - } - - /** - * Returns whether to enable TCP keepalive. - * - * @return whether to enable TCP keepalive - * @see java.net.SocketOptions#SO_KEEPALIVE - */ - public boolean isKeepAlive() { - return keepAlive; - } - - /** - * Returns whether to use TCP NoDelay. - * - * @return {@literal true} to disable Nagle's algorithm, {@link false} to enable Nagle's algorithm. - * @see java.net.SocketOptions#TCP_NODELAY - */ - public boolean isTcpNoDelay() { - return tcpNoDelay; - } -} diff --git a/src/main/java/com/lambdaworks/redis/SortArgs.java b/src/main/java/com/lambdaworks/redis/SortArgs.java deleted file mode 100644 index e4219e9fdd..0000000000 --- a/src/main/java/com/lambdaworks/redis/SortArgs.java +++ /dev/null @@ -1,130 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis; - -import static com.lambdaworks.redis.protocol.CommandKeyword.*; -import static com.lambdaworks.redis.protocol.CommandType.GET; - -import java.util.ArrayList; -import java.util.List; - -import com.lambdaworks.redis.protocol.CommandArgs; -import com.lambdaworks.redis.protocol.CommandKeyword; - -/** - * Argument list builder for the redis SORT command. Static import the methods from - * {@link Builder} and chain the method calls: {@code by("weight_*").desc().limit(0, 2)}. - * - * @author Will Glozer - */ -public class SortArgs { - private String by; - private Long offset, count; - private List get; - private CommandKeyword order; - private boolean alpha; - - /** - * Static builder methods. - */ - public static class Builder { - /** - * Utility constructor. 
- */ - private Builder() { - - } - - public static SortArgs by(String pattern) { - return new SortArgs().by(pattern); - } - - public static SortArgs limit(long offset, long count) { - return new SortArgs().limit(offset, count); - } - - public static SortArgs get(String pattern) { - return new SortArgs().get(pattern); - } - - public static SortArgs asc() { - return new SortArgs().asc(); - } - - public static SortArgs desc() { - return new SortArgs().desc(); - } - - public static SortArgs alpha() { - return new SortArgs().alpha(); - } - } - - public SortArgs by(String pattern) { - by = pattern; - return this; - } - - public SortArgs limit(long offset, long count) { - this.offset = offset; - this.count = count; - return this; - } - - public SortArgs get(String pattern) { - if (get == null) { - get = new ArrayList<>(); - } - get.add(pattern); - return this; - } - - public SortArgs asc() { - order = ASC; - return this; - } - - public SortArgs desc() { - order = DESC; - return this; - } - - public SortArgs alpha() { - alpha = true; - return this; - } - - void build(CommandArgs args, K store) { - - if (by != null) { - args.add(BY); - args.add(by); - } - - if (get != null) { - for (String pattern : get) { - args.add(GET); - args.add(pattern); - } - } - - if (offset != null) { - args.add(LIMIT); - args.add(offset); - args.add(count); - } - - if (order != null) { - args.add(order); - } - - if (alpha) { - args.add(ALPHA); - } - - if (store != null) { - args.add(STORE); - args.addKey(store); - } - } -} diff --git a/src/main/java/com/lambdaworks/redis/SslConnectionBuilder.java b/src/main/java/com/lambdaworks/redis/SslConnectionBuilder.java deleted file mode 100644 index 19cd08dd74..0000000000 --- a/src/main/java/com/lambdaworks/redis/SslConnectionBuilder.java +++ /dev/null @@ -1,235 +0,0 @@ -package com.lambdaworks.redis; - -import static com.lambdaworks.redis.ConnectionEventTrigger.local; -import static com.lambdaworks.redis.ConnectionEventTrigger.remote; -import static com.lambdaworks.redis.PlainChannelInitializer.INITIALIZING_CMD_BUILDER; -import static com.lambdaworks.redis.PlainChannelInitializer.pingBeforeActivate; -import static com.lambdaworks.redis.PlainChannelInitializer.removeIfExists; - -import java.io.IOException; -import java.io.InputStream; -import java.security.GeneralSecurityException; -import java.security.KeyStore; -import java.util.List; -import java.util.concurrent.CompletableFuture; - -import javax.net.ssl.*; - -import com.lambdaworks.redis.event.EventBus; -import com.lambdaworks.redis.event.connection.ConnectedEvent; -import com.lambdaworks.redis.event.connection.ConnectionActivatedEvent; -import com.lambdaworks.redis.event.connection.DisconnectedEvent; -import com.lambdaworks.redis.internal.LettuceAssert; -import com.lambdaworks.redis.protocol.AsyncCommand; - -import io.netty.channel.*; -import io.netty.handler.ssl.SslContext; -import io.netty.handler.ssl.SslContextBuilder; -import io.netty.handler.ssl.SslHandler; -import io.netty.handler.ssl.SslHandshakeCompletionEvent; -import io.netty.handler.ssl.util.InsecureTrustManagerFactory; - -/** - * Connection builder for SSL connections. This class is part of the internal API. 
- * - * @author Mark Paluch - */ -public class SslConnectionBuilder extends ConnectionBuilder { - private RedisURI redisURI; - - public static SslConnectionBuilder sslConnectionBuilder() { - return new SslConnectionBuilder(); - } - - public SslConnectionBuilder ssl(RedisURI redisURI) { - this.redisURI = redisURI; - return this; - } - - @Override - protected List buildHandlers() { - LettuceAssert.assertState(redisURI != null, "RedisURI must not be null"); - LettuceAssert.assertState(redisURI.isSsl(), "RedisURI is not configured for SSL (ssl is false)"); - - return super.buildHandlers(); - } - - @Override - public RedisChannelInitializer build() { - - final List channelHandlers = buildHandlers(); - - return new SslChannelInitializer(clientOptions().isPingBeforeActivateConnection(), channelHandlers, redisURI, - clientResources().eventBus(), clientOptions().getSslOptions()); - } - - /** - * @author Mark Paluch - */ - static class SslChannelInitializer extends io.netty.channel.ChannelInitializer implements RedisChannelInitializer { - - private final boolean pingBeforeActivate; - private final List handlers; - private final RedisURI redisURI; - private final EventBus eventBus; - private final SslOptions sslOptions; - private CompletableFuture initializedFuture = new CompletableFuture<>(); - - public SslChannelInitializer(boolean pingBeforeActivate, List handlers, RedisURI redisURI, - EventBus eventBus, SslOptions sslOptions) { - this.pingBeforeActivate = pingBeforeActivate; - this.handlers = handlers; - this.redisURI = redisURI; - this.eventBus = eventBus; - this.sslOptions = sslOptions; - } - - @Override - protected void initChannel(Channel channel) throws Exception { - - SSLParameters sslParams = new SSLParameters(); - - SslContextBuilder sslContextBuilder = SslContextBuilder.forClient().sslProvider(sslOptions.getSslProvider()); - if (redisURI.isVerifyPeer()) { - sslParams.setEndpointIdentificationAlgorithm("HTTPS"); - } else { - sslContextBuilder.trustManager(InsecureTrustManagerFactory.INSTANCE); - } - - if (sslOptions.getTruststore() != null) { - try (InputStream is = sslOptions.getTruststore().openStream()) { - sslContextBuilder.trustManager(createTrustManagerFactory(is, - sslOptions.getTruststorePassword().length == 0 ? 
null : sslOptions.getTruststorePassword())); - } - } - - SslContext sslContext = sslContextBuilder.build(); - - SSLEngine sslEngine = sslContext.newEngine(channel.alloc(), redisURI.getHost(), redisURI.getPort()); - sslEngine.setSSLParameters(sslParams); - - removeIfExists(channel.pipeline(), SslHandler.class); - - if (channel.pipeline().get("first") == null) { - channel.pipeline().addFirst("first", new ChannelDuplexHandler() { - - @Override - public void channelActive(ChannelHandlerContext ctx) throws Exception { - eventBus.publish(new ConnectedEvent(local(ctx), remote(ctx))); - super.channelActive(ctx); - } - - @Override - public void channelInactive(ChannelHandlerContext ctx) throws Exception { - eventBus.publish(new DisconnectedEvent(local(ctx), remote(ctx))); - super.channelInactive(ctx); - } - }); - } - - SslHandler sslHandler = new SslHandler(sslEngine, redisURI.isStartTls()); - channel.pipeline().addLast(sslHandler); - - if (channel.pipeline().get("channelActivator") == null) { - channel.pipeline().addLast("channelActivator", new RedisChannelInitializerImpl() { - - private AsyncCommand pingCommand; - - @Override - public CompletableFuture channelInitialized() { - return initializedFuture; - } - - @Override - public void channelInactive(ChannelHandlerContext ctx) throws Exception { - initializedFuture = new CompletableFuture<>(); - pingCommand = null; - super.channelInactive(ctx); - } - - @Override - public void channelActive(ChannelHandlerContext ctx) throws Exception { - if (initializedFuture.isDone()) { - super.channelActive(ctx); - } - } - - @Override - public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception { - if (evt instanceof SslHandshakeCompletionEvent && !initializedFuture.isDone()) { - - SslHandshakeCompletionEvent event = (SslHandshakeCompletionEvent) evt; - if (event.isSuccess()) { - if (pingBeforeActivate) { - if (redisURI.getPassword() != null && redisURI.getPassword().length != 0) { - pingCommand = new AsyncCommand<>( - INITIALIZING_CMD_BUILDER.auth(new String(redisURI.getPassword()))); - } else { - pingCommand = new AsyncCommand<>(INITIALIZING_CMD_BUILDER.ping()); - } - pingBeforeActivate(pingCommand, initializedFuture, ctx, handlers); - } else { - ctx.fireChannelActive(); - } - } else { - initializedFuture.completeExceptionally(event.cause()); - } - } - - if (evt instanceof ConnectionEvents.Close) { - if (ctx.channel().isOpen()) { - ctx.channel().close(); - } - } - - if (evt instanceof ConnectionEvents.Activated) { - if (!initializedFuture.isDone()) { - initializedFuture.complete(true); - eventBus.publish(new ConnectionActivatedEvent(local(ctx), remote(ctx))); - } - } - - super.userEventTriggered(ctx, evt); - } - - @Override - public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception { - if (cause instanceof SSLHandshakeException || cause.getCause() instanceof SSLException) { - initializedFuture.completeExceptionally(cause); - } - super.exceptionCaught(ctx, cause); - } - }); - } - - for (ChannelHandler handler : handlers) { - removeIfExists(channel.pipeline(), handler.getClass()); - channel.pipeline().addLast(handler); - } - } - - @Override - public CompletableFuture channelInitialized() { - return initializedFuture; - } - - private static TrustManagerFactory createTrustManagerFactory(InputStream inputStream, char[] storePassword) - throws GeneralSecurityException, IOException { - - KeyStore trustStore = KeyStore.getInstance(KeyStore.getDefaultType()); - - try { - trustStore.load(inputStream, 
storePassword); - } finally { - inputStream.close(); - } - - TrustManagerFactory trustManagerFactory = TrustManagerFactory - .getInstance(TrustManagerFactory.getDefaultAlgorithm()); - trustManagerFactory.init(trustStore); - - return trustManagerFactory; - } - - } -} diff --git a/src/main/java/com/lambdaworks/redis/SslOptions.java b/src/main/java/com/lambdaworks/redis/SslOptions.java deleted file mode 100644 index d665f23a03..0000000000 --- a/src/main/java/com/lambdaworks/redis/SslOptions.java +++ /dev/null @@ -1,222 +0,0 @@ -package com.lambdaworks.redis; - -import java.io.File; -import java.net.MalformedURLException; -import java.net.URL; - -import com.lambdaworks.redis.internal.LettuceAssert; - -import io.netty.handler.ssl.OpenSsl; -import io.netty.handler.ssl.SslProvider; - -/** - * Options to configure SSL options for the connections kept to Redis servers. - * - * @author Mark Paluch - * @since 4.3 - */ -public class SslOptions { - - public static final SslProvider DEFAULT_SSL_PROVIDER = SslProvider.JDK; - - private final SslProvider sslProvider; - private final URL truststore; - private final char[] truststorePassword; - - protected SslOptions(Builder builder) { - this.sslProvider = builder.sslProvider; - this.truststore = builder.truststore; - this.truststorePassword = builder.truststorePassword; - } - - protected SslOptions(SslOptions original) { - this.sslProvider = original.getSslProvider(); - this.truststore = original.getTruststore(); - this.truststorePassword = original.getTruststorePassword(); - } - - /** - * Create a copy of {@literal options} - * - * @param options the original - * @return A new instance of {@link SslOptions} containing the values of {@literal options} - */ - public static SslOptions copyOf(SslOptions options) { - return new SslOptions(options); - } - - /** - * Returns a new {@link SslOptions.Builder} to construct {@link SslOptions}. - * - * @return a new {@link SslOptions.Builder} to construct {@link SslOptions}. - */ - public static SslOptions.Builder builder() { - return new SslOptions.Builder(); - } - - /** - * Create a new {@link SslOptions} using default settings. - * - * @return a new instance of default cluster client client options. - */ - public static SslOptions create() { - return builder().build(); - } - - /** - * Builder for {@link SslOptions}. - */ - public static class Builder { - - private SslProvider sslProvider = DEFAULT_SSL_PROVIDER; - private URL truststore; - private char[] truststorePassword = new char[0]; - - private Builder() { - } - - /** - * Use the JDK SSL provider for SSL connections. - * - * @return {@code this} - */ - public Builder jdkSslProvider() { - return sslProvider(SslProvider.JDK); - } - - /** - * Use the OpenSSL provider for SSL connections. The OpenSSL provider requires the - * {@code netty-tcnative} dependency with the OpenSSL JNI - * binary. - * - * @return {@code this} - * @throws IllegalStateException if OpenSSL is not available - */ - public Builder openSslProvider() { - return sslProvider(SslProvider.OPENSSL); - } - - private Builder sslProvider(SslProvider sslProvider) { - - if (sslProvider == SslProvider.OPENSSL) { - if (!OpenSsl.isAvailable()) { - throw new IllegalStateException("OpenSSL SSL Provider is not available"); - } - } - - this.sslProvider = sslProvider; - - return this; - } - - /** - * Sets the Truststore file to load trusted certificates. The trust store file must be supported by - * {@link java.security.KeyStore} which is {@code jks} by default. 
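Since the Builder above throws an IllegalStateException from `openSslProvider()` whenever the OpenSSL binding is missing, callers that merely prefer OpenSSL can guard the choice with Netty's availability check. A rough sketch:

```java
import com.lambdaworks.redis.SslOptions;

import io.netty.handler.ssl.OpenSsl;

public class SslProviderSelection {

    /**
     * Prefer the OpenSSL provider when the netty-tcnative binding is on the
     * classpath, otherwise fall back to the JDK provider.
     */
    static SslOptions.Builder preferredProvider() {
        return OpenSsl.isAvailable()
                ? SslOptions.builder().openSslProvider()
                : SslOptions.builder().jdkSslProvider();
    }
}
```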
Truststores are reloaded on each connection attempt - * that allows to replace certificates during runtime. - * - * @param truststore the truststore file, must not be {@literal null}. - * @return {@code this} - */ - public Builder truststore(File truststore) { - return truststore(truststore, ""); - } - - /** - * Sets the Truststore file to load trusted certificates. The trust store file must be supported by - * {@link java.security.KeyStore} which is {@code jks} by default. Truststores are reloaded on each connection attempt - * that allows to replace certificates during runtime. - * - * @param truststore the truststore file, must not be {@literal null}. - * @param truststorePassword the truststore password. May be empty to omit password and the truststore integrity check. - * @return {@code this} - */ - public Builder truststore(File truststore, String truststorePassword) { - - LettuceAssert.notNull(truststore, "Truststore must not be null"); - LettuceAssert.isTrue(truststore.exists(), String.format("Truststore file %s does not exist", truststore)); - LettuceAssert.isTrue(truststore.isFile(), String.format("Truststore file %s is not a file", truststore)); - - try { - this.truststore = truststore.toURI().toURL(); - } catch (MalformedURLException e) { - throw new IllegalArgumentException(e); - } - - if (LettuceStrings.isNotEmpty(truststorePassword)) { - this.truststorePassword = truststorePassword.toCharArray(); - } else { - this.truststorePassword = new char[0]; - } - - return this; - } - - /** - * Sets the Truststore resource to load trusted certificates. The trust store file must be supported by - * {@link java.security.KeyStore} which is {@code jks} by default. Truststores are reloaded on each connection attempt - * that allows to replace certificates during runtime. - * - * @param truststore the truststore file, must not be {@literal null}. - * @return {@code this} - */ - public Builder truststore(URL truststore) { - return truststore(truststore, ""); - } - - /** - * Sets the Truststore resource to load trusted certificates. The trust store file must be supported by - * {@link java.security.KeyStore} which is {@code jks} by default. Truststores are reloaded on each connection attempt - * that allows to replace certificates during runtime. - * - * @param truststore the truststore file, must not be {@literal null}. - * @param truststorePassword the truststore password. May be empty to omit password and the truststore integrity check. - * @return {@code this} - */ - public Builder truststore(URL truststore, String truststorePassword) { - - LettuceAssert.notNull(truststore, "Truststore must not be null"); - this.truststore = truststore; - - if (LettuceStrings.isNotEmpty(truststorePassword)) { - this.truststorePassword = truststorePassword.toCharArray(); - } else { - this.truststorePassword = new char[0]; - } - - return this; - } - - /** - * Create a new instance of {@link SslOptions} - * - * @return new instance of {@link SslOptions} - */ - public SslOptions build() { - return new SslOptions(this); - } - } - - /** - * - * @return the configured {@link SslProvider}. - */ - public SslProvider getSslProvider() { - return sslProvider; - } - - /** - * - * @return the truststore {@link URL}. - */ - public URL getTruststore() { - return truststore; - } - - /** - * - * @return the password for the truststore. May be empty. 
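The `truststore(..)` overloads above accept either a File or a URL plus an optional password. A usage sketch follows; the `ClientOptions.builder().sslOptions(..)` hook is implied by the `getSslOptions()` call in SslConnectionBuilder but is not part of this hunk, and the truststore path, password, host, and port are placeholders.

```java
import java.io.File;

import com.lambdaworks.redis.ClientOptions;
import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.RedisURI;
import com.lambdaworks.redis.SslOptions;

public class SslOptionsSketch {

    public static void main(String[] args) {

        SslOptions sslOptions = SslOptions.builder()
                .jdkSslProvider()
                .truststore(new File("mytruststore.jks"), "changeit") // placeholder path/password
                .build();

        RedisURI uri = RedisURI.Builder.redis("localhost", 6443).withSsl(true).build();

        RedisClient client = RedisClient.create(uri);
        client.setOptions(ClientOptions.builder().sslOptions(sslOptions).build());
    }
}
```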
- */ - public char[] getTruststorePassword() { - return truststorePassword; - } -} diff --git a/src/main/java/com/lambdaworks/redis/StatefulRedisConnectionImpl.java b/src/main/java/com/lambdaworks/redis/StatefulRedisConnectionImpl.java deleted file mode 100644 index 65911a1c3a..0000000000 --- a/src/main/java/com/lambdaworks/redis/StatefulRedisConnectionImpl.java +++ /dev/null @@ -1,208 +0,0 @@ -package com.lambdaworks.redis; - -import static com.lambdaworks.redis.protocol.CommandType.AUTH; -import static com.lambdaworks.redis.protocol.CommandType.DISCARD; -import static com.lambdaworks.redis.protocol.CommandType.EXEC; -import static com.lambdaworks.redis.protocol.CommandType.MULTI; -import static com.lambdaworks.redis.protocol.CommandType.READONLY; -import static com.lambdaworks.redis.protocol.CommandType.READWRITE; -import static com.lambdaworks.redis.protocol.CommandType.SELECT; - -import java.util.concurrent.TimeUnit; -import java.util.function.Consumer; - -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.api.async.RedisAsyncCommands; -import com.lambdaworks.redis.api.rx.RedisReactiveCommands; -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.cluster.api.sync.RedisClusterCommands; -import com.lambdaworks.redis.codec.RedisCodec; -import com.lambdaworks.redis.output.MultiOutput; -import com.lambdaworks.redis.protocol.CompleteableCommand; -import com.lambdaworks.redis.protocol.ConnectionWatchdog; -import com.lambdaworks.redis.protocol.RedisCommand; -import com.lambdaworks.redis.protocol.TransactionalCommand; -import io.netty.channel.ChannelHandler; - -/** - * A thread-safe connection to a Redis server. Multiple threads may share one {@link StatefulRedisConnectionImpl} - * - * A {@link ConnectionWatchdog} monitors each connection and reconnects automatically until {@link #close} is called. All - * pending commands will be (re)sent after successful reconnection. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - */ -@ChannelHandler.Sharable -public class StatefulRedisConnectionImpl extends RedisChannelHandler implements StatefulRedisConnection { - - protected final RedisCodec codec; - protected final RedisCommands sync; - protected final RedisAsyncCommandsImpl async; - protected final RedisReactiveCommandsImpl reactive; - - protected MultiOutput multi; - private char[] password; - private int db; - private boolean readOnly; - - /** - * Initialize a new connection. - * - * @param writer the channel writer - * @param codec Codec used to encode/decode keys and values. - * @param timeout Maximum time to wait for a response. - * @param unit Unit of time for the timeout. - */ - public StatefulRedisConnectionImpl(RedisChannelWriter writer, RedisCodec codec, long timeout, TimeUnit unit) { - super(writer, timeout, unit); - - this.codec = codec; - this.async = newRedisAsyncCommandsImpl(); - this.sync = newRedisSyncCommandsImpl(); - this.reactive = newRedisReactiveCommandsImpl(); - } - - @Override - public RedisAsyncCommands async() { - return async; - } - - /** - * Create a new instance of {@link RedisCommands}. Can be overriden to extend. - * - * @return a new instance - */ - protected RedisCommands newRedisSyncCommandsImpl() { - return syncHandler(async(), RedisCommands.class, RedisClusterCommands.class); - } - - /** - * Create a new instance of {@link RedisAsyncCommandsImpl}. Can be overriden to extend. 
- * - * @return a new instance - */ - protected RedisAsyncCommandsImpl newRedisAsyncCommandsImpl() { - return new RedisAsyncCommandsImpl<>(this, codec); - } - - @Override - public RedisReactiveCommands reactive() { - return reactive; - } - - /** - * Create a new instance of {@link RedisReactiveCommandsImpl}. Can be overriden to extend. - * - * @return a new instance - */ - protected RedisReactiveCommandsImpl newRedisReactiveCommandsImpl() { - return new RedisReactiveCommandsImpl<>(this, codec); - } - - @Override - public RedisCommands sync() { - return sync; - } - - @Override - public boolean isMulti() { - return multi != null; - } - - @Override - public void activated() { - - super.activated(); - // do not block in here, since the channel flow will be interrupted. - if (password != null) { - async.authAsync(new String(password)); - } - - if (db != 0) { - async.selectAsync(db); - } - - if (readOnly) { - async.readOnly(); - } - } - - @Override - public > C dispatch(C cmd) { - - RedisCommand local = cmd; - - if (local.getType().name().equals(AUTH.name())) { - local = attachOnComplete(local, status -> { - if ("OK".equals(status) && cmd.getArgs().getFirstString() != null) { - this.password = cmd.getArgs().getFirstString().toCharArray(); - } - }); - } - - if (local.getType().name().equals(SELECT.name())) { - local = attachOnComplete(local, status -> { - if ("OK".equals(status) && cmd.getArgs().getFirstInteger() != null) { - this.db = cmd.getArgs().getFirstInteger().intValue(); - } - }); - } - - if (local.getType().name().equals(READONLY.name())) { - local = attachOnComplete(local, status -> { - if ("OK".equals(status)) { - this.readOnly = true; - } - }); - } - - if (local.getType().name().equals(READWRITE.name())) { - local = attachOnComplete(local, status -> { - if ("OK".equals(status)) { - this.readOnly = false; - } - }); - } - - if (local.getType().name().equals(DISCARD.name())) { - if (multi != null) { - multi.cancel(); - multi = null; - } - } - - if (local.getType().name().equals(EXEC.name())) { - MultiOutput multiOutput = this.multi; - this.multi = null; - if (multiOutput == null) { - multiOutput = new MultiOutput<>(codec); - } - local.setOutput((MultiOutput) multiOutput); - } - - if (multi != null) { - local = new TransactionalCommand<>(local); - multi.add(local); - } - - try { - return (C) super.dispatch(local); - } finally { - if (cmd.getType().name().equals(MULTI.name())) { - multi = (multi == null ? new MultiOutput<>(codec) : multi); - } - } - } - - private RedisCommand attachOnComplete(RedisCommand command, Consumer consumer) { - - if (command instanceof CompleteableCommand) { - CompleteableCommand completeable = (CompleteableCommand) command; - completeable.onComplete(consumer); - } - return command; - } - -} diff --git a/src/main/java/com/lambdaworks/redis/StreamScanCursor.java b/src/main/java/com/lambdaworks/redis/StreamScanCursor.java deleted file mode 100644 index e3554dbe65..0000000000 --- a/src/main/java/com/lambdaworks/redis/StreamScanCursor.java +++ /dev/null @@ -1,19 +0,0 @@ -package com.lambdaworks.redis; - -/** - * Cursor result using the Streaming API. Provides the count of retrieved elements. 
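The `dispatch(..)` logic further down in this class routes MULTI, EXEC, and DISCARD through a MultiOutput and wraps queued commands in TransactionalCommand instances. From the caller's side this surfaces as plain `multi()`/`exec()` calls; the sketch below uses the synchronous API and assumes `exec()` returning `List<Object>` as in the 4.x interfaces, with placeholder keys.

```java
import java.util.List;

import com.lambdaworks.redis.api.sync.RedisCommands;

public class TransactionSketch {

    static void runTransaction(RedisCommands<String, String> redis) {

        redis.multi();                         // subsequent commands are queued, not executed
        redis.set("key", "value");             // queued until EXEC
        redis.incr("counter");                 // queued until EXEC

        List<Object> results = redis.exec();   // replies of the queued commands, in order
        System.out.println(results);
    }
}
```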
- * - * @author Mark Paluch - * @since 3.0 - */ -public class StreamScanCursor extends ScanCursor { - private long count; - - public long getCount() { - return count; - } - - public void setCount(long count) { - this.count = count; - } -} diff --git a/src/main/java/com/lambdaworks/redis/ValueScanCursor.java b/src/main/java/com/lambdaworks/redis/ValueScanCursor.java deleted file mode 100644 index 090f4db29c..0000000000 --- a/src/main/java/com/lambdaworks/redis/ValueScanCursor.java +++ /dev/null @@ -1,23 +0,0 @@ -package com.lambdaworks.redis; - -import java.util.ArrayList; -import java.util.List; - -/** - * Cursor providing a list of values. - * - * @param Value type. - * @author Mark Paluch - * @since 3.0 - */ -public class ValueScanCursor extends ScanCursor { - - private final List values = new ArrayList<>(); - - public ValueScanCursor() { - } - - public List getValues() { - return values; - } -} diff --git a/src/main/java/com/lambdaworks/redis/ZAddArgs.java b/src/main/java/com/lambdaworks/redis/ZAddArgs.java deleted file mode 100644 index f3c3059615..0000000000 --- a/src/main/java/com/lambdaworks/redis/ZAddArgs.java +++ /dev/null @@ -1,65 +0,0 @@ -package com.lambdaworks.redis; - -import com.lambdaworks.redis.protocol.CommandArgs; - -/** - * Argument list builder for the improved redis ZADD command starting from Redis - * 3.0.2. Static import the methods from {@link Builder} and call the methods: {@code xx()} or {@code nx()} . - * - * @author Mark Paluch - */ -public class ZAddArgs { - private boolean nx = false; - private boolean xx = false; - private boolean ch = false; - - public static class Builder { - /** - * Utility constructor. - */ - private Builder() { - - } - - public static ZAddArgs nx() { - return new ZAddArgs().nx(); - } - - public static ZAddArgs xx() { - return new ZAddArgs().xx(); - } - - public static ZAddArgs ch() { - return new ZAddArgs().ch(); - } - } - - private ZAddArgs nx() { - this.nx = true; - return this; - } - - private ZAddArgs ch() { - this.ch = true; - return this; - } - - private ZAddArgs xx() { - this.xx = true; - return this; - } - - public void build(CommandArgs args) { - if (nx) { - args.add("NX"); - } - - if (xx) { - args.add("XX"); - } - - if (ch) { - args.add("CH"); - } - } -} diff --git a/src/main/java/com/lambdaworks/redis/ZStoreArgs.java b/src/main/java/com/lambdaworks/redis/ZStoreArgs.java deleted file mode 100644 index 1117caa92a..0000000000 --- a/src/main/java/com/lambdaworks/redis/ZStoreArgs.java +++ /dev/null @@ -1,104 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis; - -import static com.lambdaworks.redis.protocol.CommandKeyword.*; - -import java.util.ArrayList; -import java.util.List; - -import com.lambdaworks.redis.protocol.CommandArgs; - -/** - * Argument list builder for the redis ZUNIONSTORE and ZINTERSTORE commands. Static import the methods from {@link Builder} and - * chain the method calls: {@code weights(1, 2).max()}. - * - * @author Will Glozer - */ -public class ZStoreArgs { - private static enum Aggregate { - SUM, MIN, MAX - } - - private List weights; - private Aggregate aggregate; - - /** - * Static builder methods. - */ - public static class Builder { - - /** - * Utility constructor. - */ - private Builder() { - - } - - public static ZStoreArgs weights(long... 
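Because the ZAddArgs mutators above are private, the NX/XX/CH flags are only reachable through the static Builder methods, one flag per call site. The `zadd(key, ZAddArgs, ...)` overloads appear later in this diff; a short sketch with placeholder key and members:

```java
import com.lambdaworks.redis.ZAddArgs;
import com.lambdaworks.redis.api.sync.RedisCommands;

public class ZAddArgsSketch {

    static void zaddVariants(RedisCommands<String, String> redis) {

        // NX: only add "alice" if she is not in the sorted set yet.
        Long added = redis.zadd("ranking", ZAddArgs.Builder.nx(), 1.0, "alice");

        // CH: count members that were changed (added or updated), not only additions.
        Long changed = redis.zadd("ranking", ZAddArgs.Builder.ch(), 2.0, "alice");

        System.out.println(added + " added, " + changed + " changed");
    }
}
```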
weights) { - return new ZStoreArgs().weights(weights); - } - - public static ZStoreArgs sum() { - return new ZStoreArgs().sum(); - } - - public static ZStoreArgs min() { - return new ZStoreArgs().min(); - } - - public static ZStoreArgs max() { - return new ZStoreArgs().max(); - } - } - - public ZStoreArgs weights(long... weights) { - this.weights = new ArrayList<>(weights.length); - for (long weight : weights) { - this.weights.add(weight); - } - return this; - } - - public ZStoreArgs sum() { - aggregate = Aggregate.SUM; - return this; - } - - public ZStoreArgs min() { - aggregate = Aggregate.MIN; - return this; - } - - public ZStoreArgs max() { - aggregate = Aggregate.MAX; - return this; - } - - void build(CommandArgs args) { - if (weights != null) { - args.add(WEIGHTS); - for (long weight : weights) { - args.add(weight); - } - } - - if (aggregate != null) { - args.add(AGGREGATE); - switch (aggregate) { - case SUM: - args.add(SUM); - break; - case MIN: - args.add(MIN); - break; - case MAX: - args.add(MAX); - break; - default: - throw new IllegalArgumentException("Aggregation " + aggregate + " not supported"); - } - } - } -} diff --git a/src/main/java/com/lambdaworks/redis/api/StatefulConnection.java b/src/main/java/com/lambdaworks/redis/api/StatefulConnection.java deleted file mode 100644 index 02c3c01c2a..0000000000 --- a/src/main/java/com/lambdaworks/redis/api/StatefulConnection.java +++ /dev/null @@ -1,84 +0,0 @@ -package com.lambdaworks.redis.api; - -import java.util.concurrent.TimeUnit; - -import com.lambdaworks.redis.ClientOptions; -import com.lambdaworks.redis.protocol.RedisCommand; - -/** - * A stateful connection providing command dispatching, timeouts and open/close methods. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - */ -public interface StatefulConnection extends AutoCloseable { - - /** - * Set the default command timeout for this connection. - * - * @param timeout Command timeout. - * @param unit Unit of time for the timeout. - */ - void setTimeout(long timeout, TimeUnit unit); - - /** - * @return the timeout unit. - */ - TimeUnit getTimeoutUnit(); - - /** - * @return the timeout. - */ - long getTimeout(); - - /** - * Dispatch a command. Write a command on the channel. The command may be changed/wrapped during write and the written - * instance is returned after the call. This command does not wait until the command completes and does not guarantee - * whether the command is executed successfully. - * - * @param command the Redis command - * @param result type - * @param command type - * @return the written redis command - */ - > C dispatch(C command); - - /** - * Close the connection. The connection will become not usable anymore as soon as this method was called. - */ - void close(); - - /** - * @return true if the connection is open (connected and not closed). - */ - boolean isOpen(); - - /** - * - * @return the client options valid for this connection. - */ - ClientOptions getOptions(); - - /** - * Reset the command state. Queued commands will be canceled and the internal state will be reset. This is useful when the - * internal state machine gets out of sync with the connection. - */ - void reset(); - - /** - * Disable or enable auto-flush behavior. Default is {@literal true}. If autoFlushCommands is disabled, multiple commands - * can be issued without writing them actually to the transport. Commands are buffered until a {@link #flushCommands()} is - * issued. 
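Unlike ZAddArgs, the ZStoreArgs mutators above are public, so weights and the aggregate can be chained off the static Builder. Combined with the `zunionstore`/`zinterstore` overloads that accept a ZStoreArgs (declared later in this diff), a call reads roughly like this, with placeholder key names:

```java
import com.lambdaworks.redis.ZStoreArgs;
import com.lambdaworks.redis.api.sync.RedisCommands;

public class ZStoreArgsSketch {

    static void weightedUnion(RedisCommands<String, String> redis) {

        // WEIGHTS 1 2 AGGREGATE MAX, stored under "dest".
        Long size = redis.zunionstore("dest",
                ZStoreArgs.Builder.weights(1, 2).max(), "zset1", "zset2");

        System.out.println("members in dest: " + size);
    }
}
```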
After calling {@link #flushCommands()} commands are sent to the transport and executed by Redis. - * - * @param autoFlush state of autoFlush. - */ - void setAutoFlushCommands(boolean autoFlush); - - /** - * Flush pending commands. This commands forces a flush on the channel and can be used to buffer ("pipeline") commands to - * achieve batching. No-op if channel is not connected. - */ - void flushCommands(); -} diff --git a/src/main/java/com/lambdaworks/redis/api/StatefulRedisConnection.java b/src/main/java/com/lambdaworks/redis/api/StatefulRedisConnection.java deleted file mode 100644 index 4c59cbe967..0000000000 --- a/src/main/java/com/lambdaworks/redis/api/StatefulRedisConnection.java +++ /dev/null @@ -1,47 +0,0 @@ -package com.lambdaworks.redis.api; - -import com.lambdaworks.redis.api.async.RedisAsyncCommands; -import com.lambdaworks.redis.api.rx.RedisReactiveCommands; -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.protocol.ConnectionWatchdog; - -/** - * A thread-safe connection to a redis server. Multiple threads may share one {@link StatefulRedisConnection}. - * - * A {@link ConnectionWatchdog} monitors each connection and reconnects automatically until {@link #close} is called. All - * pending commands will be (re)sent after successful reconnection. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - */ -public interface StatefulRedisConnection extends StatefulConnection { - - /** - * - * @return true, if the connection is within a transaction. - */ - boolean isMulti(); - - /** - * Returns the {@link RedisCommands} API for the current connection. Does not create a new connection. - * - * @return the synchronous API for the underlying connection. - */ - RedisCommands sync(); - - /** - * Returns the {@link RedisAsyncCommands} API for the current connection. Does not create a new connection. - * - * @return the asynchronous API for the underlying connection. - */ - RedisAsyncCommands async(); - - /** - * Returns the {@link RedisReactiveCommands} API for the current connection. Does not create a new connection. - * - * @return the reactive API for the underlying connection. - */ - RedisReactiveCommands reactive(); -} diff --git a/src/main/java/com/lambdaworks/redis/api/async/BaseRedisAsyncCommands.java b/src/main/java/com/lambdaworks/redis/api/async/BaseRedisAsyncCommands.java deleted file mode 100644 index f52953b96b..0000000000 --- a/src/main/java/com/lambdaworks/redis/api/async/BaseRedisAsyncCommands.java +++ /dev/null @@ -1,151 +0,0 @@ -package com.lambdaworks.redis.api.async; - -import java.util.List; -import java.util.Map; -import com.lambdaworks.redis.protocol.CommandArgs; -import com.lambdaworks.redis.protocol.ProtocolKeyword; -import com.lambdaworks.redis.output.CommandOutput; -import com.lambdaworks.redis.RedisFuture; - -/** - * - * Asynchronous executed commands for basic commands. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateAsyncApi - */ -public interface BaseRedisAsyncCommands extends AutoCloseable { - - /** - * Post a message to a channel. - * - * @param channel the channel type: key - * @param message the message type: value - * @return Long integer-reply the number of clients that received the message. - */ - RedisFuture publish(K channel, V message); - - /** - * Lists the currently *active channels*. 
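The `setAutoFlushCommands(..)`/`flushCommands()` pair described above is what enables manual command batching ("pipelining"). A sketch using the asynchronous API follows; `LettuceFutures.awaitAll(..)` is assumed from the client's utility classes (it is not part of this hunk), and the keys are placeholders.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;

import com.lambdaworks.redis.LettuceFutures;
import com.lambdaworks.redis.RedisFuture;
import com.lambdaworks.redis.api.StatefulRedisConnection;
import com.lambdaworks.redis.api.async.RedisAsyncCommands;

public class PipeliningSketch {

    static void writeBatch(StatefulRedisConnection<String, String> connection) {

        RedisAsyncCommands<String, String> async = connection.async();

        connection.setAutoFlushCommands(false);          // buffer instead of writing immediately

        List<RedisFuture<?>> futures = new ArrayList<>();
        for (int i = 0; i < 100; i++) {
            futures.add(async.set("key-" + i, "value-" + i));
        }

        connection.flushCommands();                      // write the buffered commands in one go

        LettuceFutures.awaitAll(1, TimeUnit.MINUTES, futures.toArray(new RedisFuture[0]));

        connection.setAutoFlushCommands(true);           // restore the default behavior
    }
}
```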
- * - * @return List<K> array-reply a list of active channels, optionally matching the specified pattern. - */ - RedisFuture> pubsubChannels(); - - /** - * Lists the currently *active channels*. - * - * @param channel the key - * @return List<K> array-reply a list of active channels, optionally matching the specified pattern. - */ - RedisFuture> pubsubChannels(K channel); - - /** - * Returns the number of subscribers (not counting clients subscribed to patterns) for the specified channels. - * - * @param channels channel keys - * @return array-reply a list of channels and number of subscribers for every channel. - */ - RedisFuture> pubsubNumsub(K... channels); - - /** - * Returns the number of subscriptions to patterns. - * - * @return Long integer-reply the number of patterns all the clients are subscribed to. - */ - RedisFuture pubsubNumpat(); - - /** - * Echo the given string. - * - * @param msg the message type: value - * @return V bulk-string-reply - */ - RedisFuture echo(V msg); - - /** - * Return the role of the instance in the context of replication. - * - * @return List<Object> array-reply where the first element is one of master, slave, sentinel and the additional - * elements are role-specific. - */ - RedisFuture> role(); - - /** - * Ping the server. - * - * @return String simple-string-reply - */ - RedisFuture ping(); - - /** - * Switch connection to Read-Only mode when connecting to a cluster. - * - * @return String simple-string-reply. - */ - RedisFuture readOnly(); - - /** - * Switch connection to Read-Write mode (default) when connecting to a cluster. - * - * @return String simple-string-reply. - */ - RedisFuture readWrite(); - - /** - * Close the connection. - * - * @return String simple-string-reply always OK. - */ - RedisFuture quit(); - - /** - * Wait for replication. - * - * @param replicas minimum number of replicas - * @param timeout timeout in milliseconds - * @return number of replicas - */ - RedisFuture waitForReplication(int replicas, long timeout); - - /** - * Dispatch a command to the Redis Server. Please note the command output type must fit to the command response. - * - * @param type the command, must not be {@literal null}. - * @param output the command output, must not be {@literal null}. - * @param response type - * @return the command response - */ - RedisFuture dispatch(ProtocolKeyword type, CommandOutput output); - - /** - * Dispatch a command to the Redis Server. Please note the command output type must fit to the command response. - * - * @param type the command, must not be {@literal null}. - * @param output the command output, must not be {@literal null}. - * @param args the command arguments, must not be {@literal null}. - * @param response type - * @return the command response - */ - RedisFuture dispatch(ProtocolKeyword type, CommandOutput output, CommandArgs args); - - /** - * Close the connection. The connection will become not usable anymore as soon as this method was called. - */ - void close(); - - /** - * - * @return true if the connection is open (connected and not closed). - */ - boolean isOpen(); - - /** - * Reset the command state. Queued commands will be canceled and the internal state will be reset. This is useful when the - * internal state machine gets out of sync with the connection. 
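The `dispatch(ProtocolKeyword, CommandOutput[, CommandArgs])` methods above are the escape hatch for commands that have no dedicated method. A sketch follows; `Utf8StringCodec`, `StatusOutput`, `ValueOutput`, `CommandType`, and `CommandArgs` come from the codec/output/protocol packages elsewhere in this code base (not in this hunk), and the key is a placeholder.

```java
import com.lambdaworks.redis.RedisFuture;
import com.lambdaworks.redis.api.async.RedisAsyncCommands;
import com.lambdaworks.redis.codec.RedisCodec;
import com.lambdaworks.redis.codec.Utf8StringCodec;
import com.lambdaworks.redis.output.StatusOutput;
import com.lambdaworks.redis.output.ValueOutput;
import com.lambdaworks.redis.protocol.CommandArgs;
import com.lambdaworks.redis.protocol.CommandType;

public class DispatchSketch {

    static void rawCommands(RedisAsyncCommands<String, String> async) {

        RedisCodec<String, String> codec = new Utf8StringCodec();

        // PING without arguments; the output type must match the expected reply.
        RedisFuture<String> pong = async.dispatch(CommandType.PING, new StatusOutput<>(codec));

        // GET with an explicit key argument.
        RedisFuture<String> value = async.dispatch(CommandType.GET,
                new ValueOutput<>(codec), new CommandArgs<>(codec).addKey("key"));

        pong.thenAccept(System.out::println);
        value.thenAccept(System.out::println);
    }
}
```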
- */ - void reset(); -} diff --git a/src/main/java/com/lambdaworks/redis/api/async/RedisAsyncCommands.java b/src/main/java/com/lambdaworks/redis/api/async/RedisAsyncCommands.java deleted file mode 100644 index e3c57e540c..0000000000 --- a/src/main/java/com/lambdaworks/redis/api/async/RedisAsyncCommands.java +++ /dev/null @@ -1,52 +0,0 @@ -package com.lambdaworks.redis.api.async; - -import java.util.concurrent.TimeUnit; - -import com.lambdaworks.redis.RedisAsyncConnection; -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.cluster.api.async.RedisClusterAsyncCommands; - -/** - * A complete asynchronous and thread-safe Redis API with 400+ Methods. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 3.0 - */ -public interface RedisAsyncCommands extends RedisHashAsyncCommands, RedisKeyAsyncCommands, - RedisStringAsyncCommands, RedisListAsyncCommands, RedisSetAsyncCommands, - RedisSortedSetAsyncCommands, RedisScriptingAsyncCommands, RedisServerAsyncCommands, - RedisHLLAsyncCommands, BaseRedisAsyncCommands, RedisClusterAsyncCommands, - RedisTransactionalAsyncCommands, RedisGeoAsyncCommands, RedisAsyncConnection { - - /** - * Set the default timeout for operations. - * - * @param timeout the timeout value - * @param unit the unit of the timeout value - */ - void setTimeout(long timeout, TimeUnit unit); - - /** - * Authenticate to the server. - * - * @param password the password - * @return String simple-string-reply - */ - String auth(String password); - - /** - * Change the selected database for the current connection. - * - * @param db the database number - * @return String simple-string-reply - */ - String select(int db); - - /** - * @return the underlying connection. - */ - StatefulRedisConnection getStatefulConnection(); - -} diff --git a/src/main/java/com/lambdaworks/redis/api/async/RedisHLLAsyncCommands.java b/src/main/java/com/lambdaworks/redis/api/async/RedisHLLAsyncCommands.java deleted file mode 100644 index 4fcda15ab6..0000000000 --- a/src/main/java/com/lambdaworks/redis/api/async/RedisHLLAsyncCommands.java +++ /dev/null @@ -1,48 +0,0 @@ -package com.lambdaworks.redis.api.async; - -import com.lambdaworks.redis.RedisFuture; - -/** - * Asynchronous executed commands for HyperLogLog (PF* commands). - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 3.0 - * @generated by com.lambdaworks.apigenerator.CreateAsyncApi - */ -public interface RedisHLLAsyncCommands { - - /** - * Adds the specified elements to the specified HyperLogLog. - * - * @param key the key - * @param values the values - * - * @return Long integer-reply specifically: - * - * 1 if at least 1 HyperLogLog internal register was altered. 0 otherwise. - */ - RedisFuture pfadd(K key, V... values); - - /** - * Merge N different HyperLogLogs into a single one. - * - * @param destkey the destination key - * @param sourcekeys the source key - * - * @return String simple-string-reply The command just returns {@code OK}. - */ - RedisFuture pfmerge(K destkey, K... sourcekeys); - - /** - * Return the approximated cardinality of the set(s) observed by the HyperLogLog at key(s). - * - * @param keys the keys - * - * @return Long integer-reply specifically: - * - * The approximated number of unique elements observed via {@code PFADD}. - */ - RedisFuture pfcount(K... 
keys); -} diff --git a/src/main/java/com/lambdaworks/redis/api/async/RedisScriptingAsyncCommands.java b/src/main/java/com/lambdaworks/redis/api/async/RedisScriptingAsyncCommands.java deleted file mode 100644 index 5eed7171c1..0000000000 --- a/src/main/java/com/lambdaworks/redis/api/async/RedisScriptingAsyncCommands.java +++ /dev/null @@ -1,103 +0,0 @@ -package com.lambdaworks.redis.api.async; - -import java.util.List; -import com.lambdaworks.redis.ScriptOutputType; -import com.lambdaworks.redis.RedisFuture; - -/** - * Asynchronous executed commands for Scripting. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateAsyncApi - */ -public interface RedisScriptingAsyncCommands { - - /** - * Execute a Lua script server side. - * - * @param script Lua 5.1 script. - * @param type output type - * @param keys key names - * @param expected return type - * @return script result - */ - RedisFuture eval(String script, ScriptOutputType type, K... keys); - - /** - * Execute a Lua script server side. - * - * @param script Lua 5.1 script. - * @param type the type - * @param keys the keys - * @param values the values - * @param expected return type - * @return script result - */ - RedisFuture eval(String script, ScriptOutputType type, K[] keys, V... values); - - /** - * Evaluates a script cached on the server side by its SHA1 digest - * - * @param digest SHA1 of the script - * @param type the type - * @param keys the keys - * @param expected return type - * @return script result - */ - RedisFuture evalsha(String digest, ScriptOutputType type, K... keys); - - /** - * Execute a Lua script server side. - * - * @param digest SHA1 of the script - * @param type the type - * @param keys the keys - * @param values the values - * @param expected return type - * @return script result - */ - RedisFuture evalsha(String digest, ScriptOutputType type, K[] keys, V... values); - - /** - * Check existence of scripts in the script cache. - * - * @param digests script digests - * @return List<Boolean> array-reply The command returns an array of integers that correspond to the specified SHA1 - * digest arguments. For every corresponding SHA1 digest of a script that actually exists in the script cache, an 1 - * is returned, otherwise 0 is returned. - */ - RedisFuture> scriptExists(String... digests); - - /** - * Remove all the scripts from the script cache. - * - * @return String simple-string-reply - */ - RedisFuture scriptFlush(); - - /** - * Kill the script currently in execution. - * - * @return String simple-string-reply - */ - RedisFuture scriptKill(); - - /** - * Load the specified Lua script into the script cache. - * - * @param script script content - * @return String bulk-string-reply This command returns the SHA1 digest of the script added into the script cache. - */ - RedisFuture scriptLoad(V script); - - /** - * Create a SHA1 digest from a Lua script. 
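Tying the scripting methods above together: a script can be evaluated directly with `eval(..)`, or loaded once with `scriptLoad(..)` and re-run by its digest with `evalsha(..)`. A short sketch, with a placeholder key:

```java
import com.lambdaworks.redis.RedisFuture;
import com.lambdaworks.redis.ScriptOutputType;
import com.lambdaworks.redis.api.async.RedisAsyncCommands;

public class ScriptingSketch {

    static void evalAndCache(RedisAsyncCommands<String, String> async) throws Exception {

        String script = "return redis.call('GET', KEYS[1])";

        // One-shot evaluation; VALUE tells the client to decode a bulk-string reply.
        RedisFuture<String> direct = async.eval(script, ScriptOutputType.VALUE, "key");

        // Load into the script cache, then execute by SHA1 digest.
        String sha1 = async.scriptLoad(script).get();
        RedisFuture<String> cached = async.evalsha(sha1, ScriptOutputType.VALUE, "key");

        System.out.println(direct.get() + " / " + cached.get());
    }
}
```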
- * - * @param script script content - * @return the SHA1 value - */ - String digest(V script); -} diff --git a/src/main/java/com/lambdaworks/redis/api/async/RedisSortedSetAsyncCommands.java b/src/main/java/com/lambdaworks/redis/api/async/RedisSortedSetAsyncCommands.java deleted file mode 100644 index 27c3285e38..0000000000 --- a/src/main/java/com/lambdaworks/redis/api/async/RedisSortedSetAsyncCommands.java +++ /dev/null @@ -1,836 +0,0 @@ -package com.lambdaworks.redis.api.async; - -import java.util.List; -import com.lambdaworks.redis.ScanArgs; -import com.lambdaworks.redis.ScanCursor; -import com.lambdaworks.redis.ScoredValue; -import com.lambdaworks.redis.ScoredValueScanCursor; -import com.lambdaworks.redis.StreamScanCursor; -import com.lambdaworks.redis.ZAddArgs; -import com.lambdaworks.redis.ZStoreArgs; -import com.lambdaworks.redis.output.ScoredValueStreamingChannel; -import com.lambdaworks.redis.output.ValueStreamingChannel; -import com.lambdaworks.redis.RedisFuture; - -/** - * Asynchronous executed commands for Sorted Sets. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateAsyncApi - */ -public interface RedisSortedSetAsyncCommands { - - /** - * Add one or more members to a sorted set, or update its score if it already exists. - * - * @param key the key - * @param score the score - * @param member the member - * - * @return Long integer-reply specifically: - * - * The number of elements added to the sorted sets, not including elements already existing for which the score was - * updated. - */ - RedisFuture zadd(K key, double score, V member); - - /** - * Add one or more members to a sorted set, or update its score if it already exists. - * - * @param key the key - * @param scoresAndValues the scoresAndValue tuples (score,value,score,value,...) - * @return Long integer-reply specifically: - * - * The number of elements added to the sorted sets, not including elements already existing for which the score was - * updated. - */ - RedisFuture zadd(K key, Object... scoresAndValues); - - /** - * Add one or more members to a sorted set, or update its score if it already exists. - * - * @param key the key - * @param scoredValues the scored values - * @return Long integer-reply specifically: - * - * The number of elements added to the sorted sets, not including elements already existing for which the score was - * updated. - */ - RedisFuture zadd(K key, ScoredValue... scoredValues); - - /** - * Add one or more members to a sorted set, or update its score if it already exists. - * - * @param key the key - * @param zAddArgs arguments for zadd - * @param score the score - * @param member the member - * - * @return Long integer-reply specifically: - * - * The number of elements added to the sorted sets, not including elements already existing for which the score was - * updated. - */ - RedisFuture zadd(K key, ZAddArgs zAddArgs, double score, V member); - - /** - * Add one or more members to a sorted set, or update its score if it already exists. - * - * @param key the key - * @param zAddArgs arguments for zadd - * @param scoresAndValues the scoresAndValue tuples (score,value,score,value,...) - * @return Long integer-reply specifically: - * - * The number of elements added to the sorted sets, not including elements already existing for which the score was - * updated. - */ - RedisFuture zadd(K key, ZAddArgs zAddArgs, Object... 
scoresAndValues); - - /** - * Add one or more members to a sorted set, or update its score if it already exists. - * - * @param key the ke - * @param zAddArgs arguments for zadd - * @param scoredValues the scored values - * @return Long integer-reply specifically: - * - * The number of elements added to the sorted sets, not including elements already existing for which the score was - * updated. - */ - RedisFuture zadd(K key, ZAddArgs zAddArgs, ScoredValue... scoredValues); - - /** - * ZADD acts like ZINCRBY - * - * @param key the key - * @param score the score - * @param member the member - * - * @return Long integer-reply specifically: - * - * The total number of elements changed - */ - RedisFuture zaddincr(K key, double score, V member); - - /** - * Get the number of members in a sorted set. - * - * @param key the key - * @return Long integer-reply the cardinality (number of elements) of the sorted set, or {@literal false} if {@code key} - * does not exist. - */ - RedisFuture zcard(K key); - - /** - * Count the members in a sorted set with scores within the given values. - * - * @param key the key - * @param min min score - * @param max max score - * @return Long integer-reply the number of elements in the specified score range. - */ - RedisFuture zcount(K key, double min, double max); - - /** - * Count the members in a sorted set with scores within the given values. - * - * @param key the key - * @param min min score - * @param max max score - * @return Long integer-reply the number of elements in the specified score range. - */ - RedisFuture zcount(K key, String min, String max); - - /** - * Increment the score of a member in a sorted set. - * - * @param key the key - * @param amount the increment type: long - * @param member the member type: key - * @return Double bulk-string-reply the new score of {@code member} (a double precision floating point number), represented - * as string. - */ - RedisFuture zincrby(K key, double amount, K member); - - /** - * Intersect multiple sorted sets and store the resulting sorted set in a new key. - * - * @param destination the destination - * @param keys the keys - * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. - */ - RedisFuture zinterstore(K destination, K... keys); - - /** - * Intersect multiple sorted sets and store the resulting sorted set in a new key. - * - * @param destination the destination - * @param storeArgs the storeArgs - * @param keys the keys - * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. - */ - RedisFuture zinterstore(K destination, ZStoreArgs storeArgs, K... keys); - - /** - * Return a range of members in a sorted set, by index. - * - * @param key the key - * @param start the start - * @param stop the stop - * @return List<V> array-reply list of elements in the specified range. - */ - RedisFuture> zrange(K key, long start, long stop); - - /** - * Return a range of members with scores in a sorted set, by index. - * - * @param key the key - * @param start the start - * @param stop the stop - * @return List<V> array-reply list of elements in the specified range. - */ - RedisFuture>> zrangeWithScores(K key, long start, long stop); - - /** - * Return a range of members in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @return List<V> array-reply list of elements in the specified score range. 
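The `zadd`/`zcard`/`zrange*` methods above have blocking counterparts on RedisCommands with identical signatures minus the RedisFuture wrapper; the following sketch uses those for brevity, with placeholder keys and members.

```java
import java.util.List;

import com.lambdaworks.redis.ScoredValue;
import com.lambdaworks.redis.api.sync.RedisCommands;

public class SortedSetSketch {

    static void basics(RedisCommands<String, String> redis) {

        redis.zadd("ranking", 1.0, "alice");                 // single score/member
        redis.zadd("ranking", 2.0, "bob", 3.0, "carol");     // score,value,score,value,... form

        Long size = redis.zcard("ranking");                  // 3

        // Members with their scores, lowest score first.
        List<ScoredValue<String>> all = redis.zrangeWithScores("ranking", 0, -1);

        System.out.println(size + " members: " + all);
    }
}
```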
- */ - RedisFuture> zrangebyscore(K key, double min, double max); - - /** - * Return a range of members in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @return List<V> array-reply list of elements in the specified score range. - */ - RedisFuture> zrangebyscore(K key, String min, String max); - - /** - * Return a range of members in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return List<V> array-reply list of elements in the specified score range. - */ - RedisFuture> zrangebyscore(K key, double min, double max, long offset, long count); - - /** - * Return a range of members in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return List<V> array-reply list of elements in the specified score range. - */ - RedisFuture> zrangebyscore(K key, String min, String max, long offset, long count); - - /** - * Return a range of members with score in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. - */ - RedisFuture>> zrangebyscoreWithScores(K key, double min, double max); - - /** - * Return a range of members with score in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. - */ - RedisFuture>> zrangebyscoreWithScores(K key, String min, String max); - - /** - * Return a range of members with score in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. - */ - RedisFuture>> zrangebyscoreWithScores(K key, double min, double max, long offset, long count); - - /** - * Return a range of members with score in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. - */ - RedisFuture>> zrangebyscoreWithScores(K key, String min, String max, long offset, long count); - - /** - * Return a range of members in a sorted set, by index. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param start the start - * @param stop the stop - * @return Long count of elements in the specified range. - */ - RedisFuture zrange(ValueStreamingChannel channel, K key, long start, long stop); - - /** - * Stream over a range of members with scores in a sorted set, by index. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param start the start - * @param stop the stop - * @return Long count of elements in the specified range. - */ - RedisFuture zrangeWithScores(ScoredValueStreamingChannel channel, K key, long start, long stop); - - /** - * Stream over a range of members in a sorted set, by score. 
- * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified score range. - */ - RedisFuture zrangebyscore(ValueStreamingChannel channel, K key, double min, double max); - - /** - * Stream over a range of members in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified score range. - */ - RedisFuture zrangebyscore(ValueStreamingChannel channel, K key, String min, String max); - - /** - * Stream over range of members in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified score range. - */ - RedisFuture zrangebyscore(ValueStreamingChannel channel, K key, double min, double max, long offset, long count); - - /** - * Stream over a range of members in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified score range. - */ - RedisFuture zrangebyscore(ValueStreamingChannel channel, K key, String min, String max, long offset, long count); - - /** - * Stream over a range of members with scores in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified score range. - */ - RedisFuture zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double min, double max); - - /** - * Stream over a range of members with scores in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified score range. - */ - RedisFuture zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String min, String max); - - /** - * Stream over a range of members with scores in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified score range. - */ - RedisFuture zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double min, double max, long offset, long count); - - /** - * Stream over a range of members with scores in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified score range. - */ - RedisFuture zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String min, String max, long offset, long count); - - /** - * Determine the index of a member in a sorted set. 
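The streaming overloads above push each member into a ValueStreamingChannel (or ScoredValueStreamingChannel) instead of building a List, which keeps client-side memory flat for large ranges; only the element count is returned. A sketch using the synchronous variant, with a placeholder key:

```java
import com.lambdaworks.redis.api.sync.RedisCommands;
import com.lambdaworks.redis.output.ValueStreamingChannel;

public class StreamingRangeSketch {

    static void streamRange(RedisCommands<String, String> redis) {

        // Called once per member; nothing is accumulated on the client side.
        ValueStreamingChannel<String> channel = value -> System.out.println("member: " + value);

        Long streamed = redis.zrangebyscore(channel, "ranking", 0, 100);
        System.out.println(streamed + " members streamed");
    }
}
```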
- * - * @param key the key - * @param member the member type: value - * @return Long integer-reply the rank of {@code member}. If {@code member} does not exist in the sorted set or {@code key} - * does not exist, - */ - RedisFuture zrank(K key, V member); - - /** - * Remove one or more members from a sorted set. - * - * @param key the key - * @param members the member type: value - * @return Long integer-reply specifically: - * - * The number of members removed from the sorted set, not including non existing members. - */ - RedisFuture zrem(K key, V... members); - - /** - * Remove all members in a sorted set within the given indexes. - * - * @param key the key - * @param start the start type: long - * @param stop the stop type: long - * @return Long integer-reply the number of elements removed. - */ - RedisFuture zremrangebyrank(K key, long start, long stop); - - /** - * Remove all members in a sorted set within the given scores. - * - * @param key the key - * @param min min score - * @param max max score - * @return Long integer-reply the number of elements removed. - */ - RedisFuture zremrangebyscore(K key, double min, double max); - - /** - * Remove all members in a sorted set within the given scores. - * - * @param key the key - * @param min min score - * @param max max score - * @return Long integer-reply the number of elements removed. - */ - RedisFuture zremrangebyscore(K key, String min, String max); - - /** - * Return a range of members in a sorted set, by index, with scores ordered from high to low. - * - * @param key the key - * @param start the start - * @param stop the stop - * @return List<V> array-reply list of elements in the specified range. - */ - RedisFuture> zrevrange(K key, long start, long stop); - - /** - * Return a range of members with scores in a sorted set, by index, with scores ordered from high to low. - * - * @param key the key - * @param start the start - * @param stop the stop - * @return List<V> array-reply list of elements in the specified range. - */ - RedisFuture>> zrevrangeWithScores(K key, long start, long stop); - - /** - * Return a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param min min score - * @param max max score - * @return List<V> array-reply list of elements in the specified score range. - */ - RedisFuture> zrevrangebyscore(K key, double max, double min); - - /** - * Return a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param min min score - * @param max max score - * @return List<V> array-reply list of elements in the specified score range. - */ - RedisFuture> zrevrangebyscore(K key, String max, String min); - - /** - * Return a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param max max score - * @param min min score - * @param offset the withscores - * @param count the null - * @return List<V> array-reply list of elements in the specified score range. - */ - RedisFuture> zrevrangebyscore(K key, double max, double min, long offset, long count); - - /** - * Return a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param max max score - * @param min min score - * @param offset the offset - * @param count the count - * @return List<V> array-reply list of elements in the specified score range. 
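Since the `zrevrange*` methods above mirror their ascending counterparts but order from high to low, a typical "top N" query reads as follows (synchronous variant, placeholder key):

```java
import java.util.List;

import com.lambdaworks.redis.ScoredValue;
import com.lambdaworks.redis.api.sync.RedisCommands;

public class TopNSketch {

    static void topTen(RedisCommands<String, String> redis) {

        // Highest-scored ten members, with their scores.
        List<ScoredValue<String>> top = redis.zrevrangeWithScores("ranking", 0, 9);
        System.out.println(top);
    }
}
```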
- */ - RedisFuture> zrevrangebyscore(K key, String max, String min, long offset, long count); - - /** - * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param max max score - * @param min min score - * @return List<V> array-reply list of elements in the specified score range. - */ - RedisFuture>> zrevrangebyscoreWithScores(K key, double max, double min); - - /** - * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param max max score - * @param min min score - * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. - */ - RedisFuture>> zrevrangebyscoreWithScores(K key, String max, String min); - - /** - * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param max max score - * @param min min score - * @param offset the offset - * @param count the count - * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. - */ - RedisFuture>> zrevrangebyscoreWithScores(K key, double max, double min, long offset, long count); - - /** - * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param max max score - * @param min min score - * @param offset the offset - * @param count the count - * @return List<V> array-reply list of elements in the specified score range. - */ - RedisFuture>> zrevrangebyscoreWithScores(K key, String max, String min, long offset, long count); - - /** - * Stream over a range of members in a sorted set, by index, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param start the start - * @param stop the stop - * @return Long count of elements in the specified range. - */ - RedisFuture zrevrange(ValueStreamingChannel channel, K key, long start, long stop); - - /** - * Stream over a range of members with scores in a sorted set, by index, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param start the start - * @param stop the stop - * @return Long count of elements in the specified range. - */ - RedisFuture zrevrangeWithScores(ScoredValueStreamingChannel channel, K key, long start, long stop); - - /** - * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param max max score - * @param min min score - * @return Long count of elements in the specified range. - */ - RedisFuture zrevrangebyscore(ValueStreamingChannel channel, K key, double max, double min); - - /** - * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified range. - */ - RedisFuture zrevrangebyscore(ValueStreamingChannel channel, K key, String max, String min); - - /** - * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. 
- * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified range. - */ - RedisFuture zrevrangebyscore(ValueStreamingChannel channel, K key, double max, double min, long offset, long count); - - /** - * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified range. - */ - RedisFuture zrevrangebyscore(ValueStreamingChannel channel, K key, String max, String min, long offset, long count); - - /** - * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified range. - */ - RedisFuture zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double max, double min); - - /** - * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified range. - */ - RedisFuture zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String max, String min); - - /** - * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified range. - */ - RedisFuture zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double max, double min, long offset, long count); - - /** - * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified range. - */ - RedisFuture zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String max, String min, long offset, long count); - - /** - * Determine the index of a member in a sorted set, with scores ordered from high to low. - * - * @param key the key - * @param member the member type: value - * @return Long integer-reply the rank of {@code member}. If {@code member} does not exist in the sorted set or {@code key} - * does not exist, - */ - RedisFuture zrevrank(K key, V member); - - /** - * Get the score associated with the given member in a sorted set. - * - * @param key the key - * @param member the member type: value - * @return Double bulk-string-reply the score of {@code member} (a double precision floating point number), represented as - * string. 
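The streaming variants above push each result through a ValueStreamingChannel callback instead of materialising a List. Below is a minimal sketch of that pattern with the asynchronous API; the "highscores" key, the local connection URI, and the RedisClient/RedisAsyncCommands bootstrapping are illustrative assumptions, not part of this change.

```java
import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.RedisFuture;
import com.lambdaworks.redis.api.StatefulRedisConnection;
import com.lambdaworks.redis.api.async.RedisAsyncCommands;
import com.lambdaworks.redis.output.ValueStreamingChannel;

public class StreamingRangeExample {
    public static void main(String[] args) throws Exception {
        RedisClient client = RedisClient.create("redis://localhost");        // assumed local Redis
        StatefulRedisConnection<String, String> connection = client.connect();
        RedisAsyncCommands<String, String> async = connection.async();

        // the channel is invoked once per member instead of buffering the whole range in a List
        ValueStreamingChannel<String> channel = value -> System.out.println("member: " + value);

        // top scores first; the future completes with the number of streamed elements
        RedisFuture<Long> count = async.zrevrangebyscore(channel, "highscores", "+inf", "-inf");
        System.out.println("streamed " + count.get() + " members");

        connection.close();
        client.shutdown();
    }
}
```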
- */ - RedisFuture zscore(K key, V member); - - /** - * Add multiple sorted sets and store the resulting sorted set in a new key. - * - * @param destination destination key - * @param keys source keys - * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. - */ - RedisFuture zunionstore(K destination, K... keys); - - /** - * Add multiple sorted sets and store the resulting sorted set in a new key. - * - * @param destination the destination - * @param storeArgs the storeArgs - * @param keys the keys - * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. - */ - RedisFuture zunionstore(K destination, ZStoreArgs storeArgs, K... keys); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param key the key - * @return ScoredValueScanCursor<V> scan cursor. - */ - RedisFuture> zscan(K key); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param key the key - * @param scanArgs scan arguments - * @return ScoredValueScanCursor<V> scan cursor. - */ - RedisFuture> zscan(K key, ScanArgs scanArgs); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @param scanArgs scan arguments - * @return ScoredValueScanCursor<V> scan cursor. - */ - RedisFuture> zscan(K key, ScanCursor scanCursor, ScanArgs scanArgs); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @return ScoredValueScanCursor<V> scan cursor. - */ - RedisFuture> zscan(K key, ScanCursor scanCursor); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @return StreamScanCursor scan cursor. - */ - RedisFuture zscan(ScoredValueStreamingChannel channel, K key); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param scanArgs scan arguments - * @return StreamScanCursor scan cursor. - */ - RedisFuture zscan(ScoredValueStreamingChannel channel, K key, ScanArgs scanArgs); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @param scanArgs scan arguments - * @return StreamScanCursor scan cursor. - */ - RedisFuture zscan(ScoredValueStreamingChannel channel, K key, ScanCursor scanCursor, ScanArgs scanArgs); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @return StreamScanCursor scan cursor. - */ - RedisFuture zscan(ScoredValueStreamingChannel channel, K key, ScanCursor scanCursor); - - /** - * Count the number of members in a sorted set between a given lexicographical range. 
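ZSCAN returns one page of elements plus a cursor that has to be passed back in to fetch the next page. A sketch of driving that loop until the cursor reports it is finished; the "myzset" key, the page size, and the local Redis instance are illustrative assumptions.

```java
import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.ScanArgs;
import com.lambdaworks.redis.ScanCursor;
import com.lambdaworks.redis.ScoredValueScanCursor;
import com.lambdaworks.redis.api.StatefulRedisConnection;
import com.lambdaworks.redis.api.async.RedisAsyncCommands;

public class ZscanExample {
    public static void main(String[] args) throws Exception {
        RedisClient client = RedisClient.create("redis://localhost");        // assumed local Redis
        StatefulRedisConnection<String, String> connection = client.connect();
        RedisAsyncCommands<String, String> async = connection.async();

        ScanCursor cursor = ScanCursor.INITIAL;
        do {
            // each call returns a RedisFuture; get() blocks here only to keep the sketch short
            ScoredValueScanCursor<String> page =
                    async.zscan("myzset", cursor, ScanArgs.Builder.limit(100)).get();
            page.getValues().forEach(System.out::println);
            cursor = page;
        } while (!cursor.isFinished());

        connection.close();
        client.shutdown();
    }
}
```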
- * - * @param key the key - * @param min min score - * @param max max score - * @return Long integer-reply the number of elements in the specified score range. - */ - RedisFuture zlexcount(K key, String min, String max); - - /** - * Remove all members in a sorted set between the given lexicographical range. - * - * @param key the key - * @param min min score - * @param max max score - * @return Long integer-reply the number of elements removed. - */ - RedisFuture zremrangebylex(K key, String min, String max); - - /** - * Return a range of members in a sorted set, by lexicographical range. - * - * @param key the key - * @param min min score - * @param max max score - * @return List<V> array-reply list of elements in the specified score range. - */ - RedisFuture> zrangebylex(K key, String min, String max); - - /** - * Return a range of members in a sorted set, by lexicographical range. - * - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return List<V> array-reply list of elements in the specified score range. - */ - RedisFuture> zrangebylex(K key, String min, String max, long offset, long count); -} diff --git a/src/main/java/com/lambdaworks/redis/api/async/RedisTransactionalAsyncCommands.java b/src/main/java/com/lambdaworks/redis/api/async/RedisTransactionalAsyncCommands.java deleted file mode 100644 index bd35a19d01..0000000000 --- a/src/main/java/com/lambdaworks/redis/api/async/RedisTransactionalAsyncCommands.java +++ /dev/null @@ -1,54 +0,0 @@ -package com.lambdaworks.redis.api.async; - -import java.util.List; -import com.lambdaworks.redis.RedisFuture; - -/** - * Asynchronous executed commands for Transactions. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateAsyncApi - */ -public interface RedisTransactionalAsyncCommands { - - /** - * Discard all commands issued after MULTI. - * - * @return String simple-string-reply always {@code OK}. - */ - RedisFuture discard(); - - /** - * Execute all commands issued after MULTI. - * - * @return List<Object> array-reply each element being the reply to each of the commands in the atomic transaction. - * - * When using {@code WATCH}, {@code EXEC} can return a - */ - RedisFuture> exec(); - - /** - * Mark the start of a transaction block. - * - * @return String simple-string-reply always {@code OK}. - */ - RedisFuture multi(); - - /** - * Watch the given keys to determine execution of the MULTI/EXEC block. - * - * @param keys the key - * @return String simple-string-reply always {@code OK}. - */ - RedisFuture watch(K... keys); - - /** - * Forget about all watched keys. - * - * @return String simple-string-reply always {@code OK}. - */ - RedisFuture unwatch(); -} diff --git a/src/main/java/com/lambdaworks/redis/api/async/package-info.java b/src/main/java/com/lambdaworks/redis/api/async/package-info.java deleted file mode 100644 index 4ac51e1ea8..0000000000 --- a/src/main/java/com/lambdaworks/redis/api/async/package-info.java +++ /dev/null @@ -1,4 +0,0 @@ -/** - * Standalone Redis API for asynchronous executed commands. 
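The transactional commands declared in the file above (MULTI, EXEC, WATCH) queue subsequent commands and run them atomically on EXEC. A hedged usage sketch with the asynchronous API; set and incrby come from the string-command interface, which is not part of this excerpt, and the key name is made up.

```java
import java.util.List;

import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.RedisFuture;
import com.lambdaworks.redis.api.StatefulRedisConnection;
import com.lambdaworks.redis.api.async.RedisAsyncCommands;

public class TransactionExample {
    public static void main(String[] args) throws Exception {
        RedisClient client = RedisClient.create("redis://localhost");        // assumed local Redis
        StatefulRedisConnection<String, String> connection = client.connect();
        RedisAsyncCommands<String, String> async = connection.async();

        async.multi();                                   // open the transaction block
        async.set("balance", "100");                     // queued, not executed yet
        async.incrby("balance", 50);                     // queued as well
        RedisFuture<List<Object>> exec = async.exec();   // EXEC runs both queued commands

        // one reply per queued command, in order: "OK" and 150
        System.out.println(exec.get());

        connection.close();
        client.shutdown();
    }
}
```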
- */ -package com.lambdaworks.redis.api.async; \ No newline at end of file diff --git a/src/main/java/com/lambdaworks/redis/api/package-info.java b/src/main/java/com/lambdaworks/redis/api/package-info.java deleted file mode 100644 index a2d9537746..0000000000 --- a/src/main/java/com/lambdaworks/redis/api/package-info.java +++ /dev/null @@ -1,4 +0,0 @@ -/** - * Standalone Redis connection API. - */ -package com.lambdaworks.redis.api; \ No newline at end of file diff --git a/src/main/java/com/lambdaworks/redis/api/rx/BaseRedisReactiveCommands.java b/src/main/java/com/lambdaworks/redis/api/rx/BaseRedisReactiveCommands.java deleted file mode 100644 index e6b4312175..0000000000 --- a/src/main/java/com/lambdaworks/redis/api/rx/BaseRedisReactiveCommands.java +++ /dev/null @@ -1,153 +0,0 @@ -package com.lambdaworks.redis.api.rx; - -import java.util.Collection; -import java.util.Map; - -import com.lambdaworks.redis.output.CommandOutput; -import com.lambdaworks.redis.protocol.CommandArgs; -import com.lambdaworks.redis.protocol.ProtocolKeyword; - -import rx.Observable; - -/** - * - * Observable commands for basic commands. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateReactiveApi - */ -public interface BaseRedisReactiveCommands extends AutoCloseable { - - /** - * Post a message to a channel. - * - * @param channel the channel type: key - * @param message the message type: value - * @return Long integer-reply the number of clients that received the message. - */ - Observable publish(K channel, V message); - - /** - * Lists the currently *active channels*. - * - * @return K array-reply a list of active channels, optionally matching the specified pattern. - */ - Observable pubsubChannels(); - - /** - * Lists the currently *active channels*. - * - * @param channel the key - * @return K array-reply a list of active channels, optionally matching the specified pattern. - */ - Observable pubsubChannels(K channel); - - /** - * Returns the number of subscribers (not counting clients subscribed to patterns) for the specified channels. - * - * @param channels channel keys - * @return array-reply a list of channels and number of subscribers for every channel. - */ - Observable> pubsubNumsub(K... channels); - - /** - * Returns the number of subscriptions to patterns. - * - * @return Long integer-reply the number of patterns all the clients are subscribed to. - */ - Observable pubsubNumpat(); - - /** - * Echo the given string. - * - * @param msg the message type: value - * @return V bulk-string-reply - */ - Observable echo(V msg); - - /** - * Return the role of the instance in the context of replication. - * - * @return Object array-reply where the first element is one of master, slave, sentinel and the additional elements are - * role-specific. - */ - Observable role(); - - /** - * Ping the server. - * - * @return String simple-string-reply - */ - Observable ping(); - - /** - * Switch connection to Read-Only mode when connecting to a cluster. - * - * @return String simple-string-reply. - */ - Observable readOnly(); - - /** - * Switch connection to Read-Write mode (default) when connecting to a cluster. - * - * @return String simple-string-reply. - */ - Observable readWrite(); - - /** - * Close the connection. - * - * @return String simple-string-reply always OK. - */ - Observable quit(); - - /** - * Wait for replication. 
- * - * @param replicas minimum number of replicas - * @param timeout timeout in milliseconds - * @return number of replicas - */ - Observable waitForReplication(int replicas, long timeout); - - /** - * Dispatch a command to the Redis Server. Please note the command output type must fit to the command response. - * - * @param type the command, must not be {@literal null}. - * @param output the command output, must not be {@literal null}. - * @param response type - * @return the command response - */ - Observable dispatch(ProtocolKeyword type, CommandOutput output); - - /** - * Dispatch a command to the Redis Server. Please note the command output type must fit to the command response. - * - * @param type the command, must not be {@literal null}. - * @param output the command output, must not be {@literal null}. - * @param args the command arguments, must not be {@literal null}. - * @param response type - * @return the command response - */ - Observable dispatch(ProtocolKeyword type, CommandOutput output, CommandArgs args); - - /** - * Close the connection. The connection will become not usable anymore as soon as this method was called. - */ - void close(); - - /** - * - * @return true if the connection is open (connected and not closed). - */ - boolean isOpen(); - - /** - * Reset the command state. Queued commands will be canceled and the internal state will be reset. This is useful when the - * internal state machine gets out of sync with the connection. - */ - void reset(); -} diff --git a/src/main/java/com/lambdaworks/redis/api/rx/RedisGeoReactiveCommands.java b/src/main/java/com/lambdaworks/redis/api/rx/RedisGeoReactiveCommands.java deleted file mode 100644 index abd49e4011..0000000000 --- a/src/main/java/com/lambdaworks/redis/api/rx/RedisGeoReactiveCommands.java +++ /dev/null @@ -1,152 +0,0 @@ -package com.lambdaworks.redis.api.rx; - -import com.lambdaworks.redis.GeoArgs; -import com.lambdaworks.redis.GeoCoordinates; -import com.lambdaworks.redis.GeoRadiusStoreArgs; -import com.lambdaworks.redis.GeoWithin; -import java.util.List; -import java.util.Set; -import rx.Observable; - -/** - * Observable commands for the Geo-API. - * - * @author Mark Paluch - * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateReactiveApi - */ -public interface RedisGeoReactiveCommands { - - /** - * Single geo add. - * - * @param key the key of the geo set - * @param longitude the longitude coordinate according to WGS84 - * @param latitude the latitude coordinate according to WGS84 - * @param member the member to add - * @return Long integer-reply the number of elements that were added to the set - */ - Observable geoadd(K key, double longitude, double latitude, V member); - - /** - * Multi geo add. - * - * @param key the key of the geo set - * @param lngLatMember triplets of double longitude, double latitude and V member - * @return Long integer-reply the number of elements that were added to the set - */ - Observable geoadd(K key, Object... lngLatMember); - - /** - * Retrieve Geohash strings representing the position of one or more elements in a sorted set value representing a geospatial index. - * - * @param key the key of the geo set - * @param members the members - * @return bulk reply Geohash strings in the order of {@code members}. Returns {@literal null} if a member is not found. - */ - Observable geohash(K key, V... members); - - /** - * Retrieve members selected by distance with the center of {@code longitude} and {@code latitude}. 
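The reactive flavour of the base commands wraps every reply in an rx.Observable, so nothing is sent until the Observable is subscribed. A small sketch, assuming a local Redis and the RxJava 1 based 4.x API shown in these files; the "news" channel name is made up.

```java
import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.api.StatefulRedisConnection;
import com.lambdaworks.redis.api.rx.RedisReactiveCommands;

public class ReactivePingExample {
    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost");        // assumed local Redis
        StatefulRedisConnection<String, String> connection = client.connect();
        RedisReactiveCommands<String, String> rx = connection.reactive();

        // Observables are lazy: the PING is only sent once subscribe() is called
        rx.ping().subscribe(pong -> System.out.println("PING -> " + pong));

        // publish() emits the number of receiving subscribers once the command completes
        Long receivers = rx.publish("news", "hello").toBlocking().single();
        System.out.println("delivered to " + receivers + " subscribers");

        connection.close();
        client.shutdown();
    }
}
```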
- * - * @param key the key of the geo set - * @param longitude the longitude coordinate according to WGS84 - * @param latitude the latitude coordinate according to WGS84 - * @param distance radius distance - * @param unit distance unit - * @return bulk reply - */ - Observable georadius(K key, double longitude, double latitude, double distance, GeoArgs.Unit unit); - - /** - * Retrieve members selected by distance with the center of {@code longitude} and {@code latitude}. - * - * @param key the key of the geo set - * @param longitude the longitude coordinate according to WGS84 - * @param latitude the latitude coordinate according to WGS84 - * @param distance radius distance - * @param unit distance unit - * @param geoArgs args to control the result - * @return nested multi-bulk reply. The {@link GeoWithin} contains only fields which were requested by {@link GeoArgs} - */ - Observable> georadius(K key, double longitude, double latitude, double distance, GeoArgs.Unit unit, GeoArgs geoArgs); - - /** - * Perform a {@link #georadius(Object, double, double, double, Unit, GeoArgs)} query and store the results in a sorted set. - * - * @param key the key of the geo set - * @param longitude the longitude coordinate according to WGS84 - * @param latitude the latitude coordinate according to WGS84 - * @param distance radius distance - * @param unit distance unit - * @param geoRadiusStoreArgs args to store either the resulting elements with their distance or the resulting elements with - * their locations a sorted set. - * @return Long integer-reply the number of elements in the result - */ - Observable georadius(K key, double longitude, double latitude, double distance, GeoArgs.Unit unit, GeoRadiusStoreArgs geoRadiusStoreArgs); - - /** - * Retrieve members selected by distance with the center of {@code member}. The member itself is always contained in the - * results. - * - * @param key the key of the geo set - * @param member reference member - * @param distance radius distance - * @param unit distance unit - * @return set of members - */ - Observable georadiusbymember(K key, V member, double distance, GeoArgs.Unit unit); - - /** - * - * Retrieve members selected by distance with the center of {@code member}. The member itself is always contained in the - * results. - * - * @param key the key of the geo set - * @param member reference member - * @param distance radius distance - * @param unit distance unit - * @param geoArgs args to control the result - * @return nested multi-bulk reply. The {@link GeoWithin} contains only fields which were requested by {@link GeoArgs} - */ - Observable> georadiusbymember(K key, V member, double distance, GeoArgs.Unit unit, GeoArgs geoArgs); - - /** - * Perform a {@link #georadiusbymember(Object, Object, double, Unit, GeoArgs)} query and store the results in a sorted set. - * - * @param key the key of the geo set - * @param member reference member - * @param distance radius distance - * @param unit distance unit - * @param geoRadiusStoreArgs args to store either the resulting elements with their distance or the resulting elements with - * their locations a sorted set. - * @return Long integer-reply the number of elements in the result - */ - Observable georadiusbymember(K key, V member, double distance, GeoArgs.Unit unit, GeoRadiusStoreArgs geoRadiusStoreArgs); - - /** - * Get geo coordinates for the {@code members}. 
- * - * @param key the key of the geo set - * @param members the members - * - * @return a list of {@link GeoCoordinates}s representing the x,y position of each element specified in the arguments. For - * missing elements {@literal null} is returned. - */ - Observable geopos(K key, V... members); - - /** - * - * Retrieve distance between points {@code from} and {@code to}. If one or more elements are missing {@literal null} is - * returned. Default in meters by, otherwise according to {@code unit} - * - * @param key the key of the geo set - * @param from from member - * @param to to member - * @param unit distance unit - * - * @return distance between points {@code from} and {@code to}. If one or more elements are missing {@literal null} is - * returned. - */ - Observable geodist(K key, V from, V to, GeoArgs.Unit unit); -} diff --git a/src/main/java/com/lambdaworks/redis/api/rx/RedisHLLReactiveCommands.java b/src/main/java/com/lambdaworks/redis/api/rx/RedisHLLReactiveCommands.java deleted file mode 100644 index dd8bd8a351..0000000000 --- a/src/main/java/com/lambdaworks/redis/api/rx/RedisHLLReactiveCommands.java +++ /dev/null @@ -1,48 +0,0 @@ -package com.lambdaworks.redis.api.rx; - -import rx.Observable; - -/** - * Observable commands for HyperLogLog (PF* commands). - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateReactiveApi - */ -public interface RedisHLLReactiveCommands { - - /** - * Adds the specified elements to the specified HyperLogLog. - * - * @param key the key - * @param values the values - * - * @return Long integer-reply specifically: - * - * 1 if at least 1 HyperLogLog internal register was altered. 0 otherwise. - */ - Observable pfadd(K key, V... values); - - /** - * Merge N different HyperLogLogs into a single one. - * - * @param destkey the destination key - * @param sourcekeys the source key - * - * @return String simple-string-reply The command just returns {@code OK}. - */ - Observable pfmerge(K destkey, K... sourcekeys); - - /** - * Return the approximated cardinality of the set(s) observed by the HyperLogLog at key(s). - * - * @param keys the keys - * - * @return Long integer-reply specifically: - * - * The approximated number of unique elements observed via {@code PFADD}. - */ - Observable pfcount(K... keys); -} diff --git a/src/main/java/com/lambdaworks/redis/api/rx/RedisHashReactiveCommands.java b/src/main/java/com/lambdaworks/redis/api/rx/RedisHashReactiveCommands.java deleted file mode 100644 index e10d080d61..0000000000 --- a/src/main/java/com/lambdaworks/redis/api/rx/RedisHashReactiveCommands.java +++ /dev/null @@ -1,280 +0,0 @@ -package com.lambdaworks.redis.api.rx; - -import java.util.List; -import java.util.Map; -import com.lambdaworks.redis.MapScanCursor; -import com.lambdaworks.redis.ScanArgs; -import com.lambdaworks.redis.ScanCursor; -import com.lambdaworks.redis.StreamScanCursor; -import com.lambdaworks.redis.output.KeyStreamingChannel; -import com.lambdaworks.redis.output.KeyValueStreamingChannel; -import com.lambdaworks.redis.output.ValueStreamingChannel; -import rx.Observable; - -/** - * Observable commands for Hashes (Key-Value pairs). - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateReactiveApi - */ -public interface RedisHashReactiveCommands { - - /** - * Delete one or more hash fields. 
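The Geo commands above store members with WGS84 coordinates (longitude first) and query them by radius or distance. A short sketch using the reactive API; the "offices" key and the coordinates are sample data only.

```java
import com.lambdaworks.redis.GeoArgs;
import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.api.StatefulRedisConnection;
import com.lambdaworks.redis.api.rx.RedisReactiveCommands;

public class GeoExample {
    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost");        // assumed local Redis
        StatefulRedisConnection<String, String> connection = client.connect();
        RedisReactiveCommands<String, String> rx = connection.reactive();

        // longitude first, then latitude (WGS84); the cities are illustrative sample data
        rx.geoadd("offices", 8.6821, 50.1109, "frankfurt").toBlocking().single();
        rx.geoadd("offices", 13.4050, 52.5200, "berlin").toBlocking().single();

        // members within 300 km of a centre point
        rx.georadius("offices", 9.0, 51.0, 300, GeoArgs.Unit.km)
          .toBlocking().forEach(member -> System.out.println("nearby: " + member));

        // distance between two members, reported in kilometres
        Double distance = rx.geodist("offices", "frankfurt", "berlin", GeoArgs.Unit.km)
                            .toBlocking().single();
        System.out.println("frankfurt-berlin: " + distance + " km");

        connection.close();
        client.shutdown();
    }
}
```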
- * - * @param key the key - * @param fields the field type: key - * @return Long integer-reply the number of fields that were removed from the hash, not including specified but non existing - * fields. - */ - Observable hdel(K key, K... fields); - - /** - * Determine if a hash field exists. - * - * @param key the key - * @param field the field type: key - * @return Boolean integer-reply specifically: - * - * {@literal true} if the hash contains {@code field}. {@literal false} if the hash does not contain {@code field}, - * or {@code key} does not exist. - */ - Observable hexists(K key, K field); - - /** - * Get the value of a hash field. - * - * @param key the key - * @param field the field type: key - * @return V bulk-string-reply the value associated with {@code field}, or {@literal null} when {@code field} is not present - * in the hash or {@code key} does not exist. - */ - Observable hget(K key, K field); - - /** - * Increment the integer value of a hash field by the given number. - * - * @param key the key - * @param field the field type: key - * @param amount the increment type: long - * @return Long integer-reply the value at {@code field} after the increment operation. - */ - Observable hincrby(K key, K field, long amount); - - /** - * Increment the float value of a hash field by the given amount. - * - * @param key the key - * @param field the field type: key - * @param amount the increment type: double - * @return Double bulk-string-reply the value of {@code field} after the increment. - */ - Observable hincrbyfloat(K key, K field, double amount); - - /** - * Get all the fields and values in a hash. - * - * @param key the key - * @return Map<K,V> array-reply list of fields and their values stored in the hash, or an empty list when {@code key} - * does not exist. - */ - Observable> hgetall(K key); - - /** - * Stream over all the fields and values in a hash. - * - * @param channel the channel - * @param key the key - * - * @return Long count of the keys. - */ - Observable hgetall(KeyValueStreamingChannel channel, K key); - - /** - * Get all the fields in a hash. - * - * @param key the key - * @return K array-reply list of fields in the hash, or an empty list when {@code key} does not exist. - */ - Observable hkeys(K key); - - /** - * Stream over all the fields in a hash. - * - * @param channel the channel - * @param key the key - * - * @return Long count of the keys. - */ - Observable hkeys(KeyStreamingChannel channel, K key); - - /** - * Get the number of fields in a hash. - * - * @param key the key - * @return Long integer-reply number of fields in the hash, or {@code 0} when {@code key} does not exist. - */ - Observable hlen(K key); - - /** - * Get the values of all the given hash fields. - * - * @param key the key - * @param fields the field type: key - * @return V array-reply list of values associated with the given fields, in the same - */ - Observable hmget(K key, K... fields); - - /** - * Stream over the values of all the given hash fields. - * - * @param channel the channel - * @param key the key - * @param fields the fields - * - * @return Long count of the keys - */ - Observable hmget(ValueStreamingChannel channel, K key, K... fields); - - /** - * Set multiple hash fields to multiple values. - * - * @param key the key - * @param map the null - * @return String simple-string-reply - */ - Observable hmset(K key, Map map); - - /** - * Incrementally iterate hash fields and associated values. - * - * @param key the key - * @return MapScanCursor<K, V> map scan cursor. 
- */ - Observable> hscan(K key); - - /** - * Incrementally iterate hash fields and associated values. - * - * @param key the key - * @param scanArgs scan arguments - * @return MapScanCursor<K, V> map scan cursor. - */ - Observable> hscan(K key, ScanArgs scanArgs); - - /** - * Incrementally iterate hash fields and associated values. - * - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @param scanArgs scan arguments - * @return MapScanCursor<K, V> map scan cursor. - */ - Observable> hscan(K key, ScanCursor scanCursor, ScanArgs scanArgs); - - /** - * Incrementally iterate hash fields and associated values. - * - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @return MapScanCursor<K, V> map scan cursor. - */ - Observable> hscan(K key, ScanCursor scanCursor); - - /** - * Incrementally iterate hash fields and associated values. - * - * @param channel streaming channel that receives a call for every key-value pair - * @param key the key - * @return StreamScanCursor scan cursor. - */ - Observable hscan(KeyValueStreamingChannel channel, K key); - - /** - * Incrementally iterate hash fields and associated values. - * - * @param channel streaming channel that receives a call for every key-value pair - * @param key the key - * @param scanArgs scan arguments - * @return StreamScanCursor scan cursor. - */ - Observable hscan(KeyValueStreamingChannel channel, K key, ScanArgs scanArgs); - - /** - * Incrementally iterate hash fields and associated values. - * - * @param channel streaming channel that receives a call for every key-value pair - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @param scanArgs scan arguments - * @return StreamScanCursor scan cursor. - */ - Observable hscan(KeyValueStreamingChannel channel, K key, ScanCursor scanCursor, ScanArgs scanArgs); - - /** - * Incrementally iterate hash fields and associated values. - * - * @param channel streaming channel that receives a call for every key-value pair - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @return StreamScanCursor scan cursor. - */ - Observable hscan(KeyValueStreamingChannel channel, K key, ScanCursor scanCursor); - - /** - * Set the string value of a hash field. - * - * @param key the key - * @param field the field type: key - * @param value the value - * @return Boolean integer-reply specifically: - * - * {@literal true} if {@code field} is a new field in the hash and {@code value} was set. {@literal false} if - * {@code field} already exists in the hash and the value was updated. - */ - Observable hset(K key, K field, V value); - - /** - * Set the value of a hash field, only if the field does not exist. - * - * @param key the key - * @param field the field type: key - * @param value the value - * @return Boolean integer-reply specifically: - * - * {@code 1} if {@code field} is a new field in the hash and {@code value} was set. {@code 0} if {@code field} - * already exists in the hash and no operation was performed. - */ - Observable hsetnx(K key, K field, V value); - - /** - * Get the string length of the field value in a hash. 
- * - * @param key the key - * @param field the field type: key - * @return Long integer-reply the string length of the {@code field} value, or {@code 0} when {@code field} is not present - * in the hash or {@code key} does not exist at all. - */ - Observable hstrlen(K key, K field); - - /** - * Get all the values in a hash. - * - * @param key the key - * @return V array-reply list of values in the hash, or an empty list when {@code key} does not exist. - */ - Observable hvals(K key); - - /** - * Stream over all the values in a hash. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * - * @return Long count of the keys. - */ - Observable hvals(ValueStreamingChannel channel, K key); -} diff --git a/src/main/java/com/lambdaworks/redis/api/rx/RedisKeyReactiveCommands.java b/src/main/java/com/lambdaworks/redis/api/rx/RedisKeyReactiveCommands.java deleted file mode 100644 index 5fd66b53f5..0000000000 --- a/src/main/java/com/lambdaworks/redis/api/rx/RedisKeyReactiveCommands.java +++ /dev/null @@ -1,399 +0,0 @@ -package com.lambdaworks.redis.api.rx; - -import java.util.Date; -import java.util.List; -import com.lambdaworks.redis.KeyScanCursor; -import com.lambdaworks.redis.MigrateArgs; -import com.lambdaworks.redis.ScanArgs; -import com.lambdaworks.redis.ScanCursor; -import com.lambdaworks.redis.SortArgs; -import com.lambdaworks.redis.StreamScanCursor; -import com.lambdaworks.redis.output.KeyStreamingChannel; -import com.lambdaworks.redis.output.ValueStreamingChannel; -import rx.Observable; - -/** - * Observable commands for Keys (Key manipulation/querying). - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateReactiveApi - */ -public interface RedisKeyReactiveCommands { - - /** - * Delete one or more keys. - * - * @param keys the keys - * @return Long integer-reply The number of keys that were removed. - */ - Observable del(K... keys); - - /** - * Unlink one or more keys (non blocking DEL). - * - * @param keys the keys - * @return Long integer-reply The number of keys that were removed. - */ - Observable unlink(K... keys); - - /** - * Return a serialized version of the value stored at the specified key. - * - * @param key the key - * @return byte[] bulk-string-reply the serialized value. - */ - Observable dump(K key); - - /** - * Determine how many keys exist. - * - * @param keys the keys - * @return Long integer-reply specifically: Number of existing keys - */ - Observable exists(K... keys); - - /** - * Set a key's time to live in seconds. - * - * @param key the key - * @param seconds the seconds type: long - * @return Boolean integer-reply specifically: - * - * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not - * be set. - */ - Observable expire(K key, long seconds); - - /** - * Set the expiration for a key as a UNIX timestamp. - * - * @param key the key - * @param timestamp the timestamp type: posix time - * @return Boolean integer-reply specifically: - * - * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not - * be set (see: {@code EXPIRE}). - */ - Observable expireat(K key, Date timestamp); - - /** - * Set the expiration for a key as a UNIX timestamp. - * - * @param key the key - * @param timestamp the timestamp type: posix time - * @return Boolean integer-reply specifically: - * - * {@literal true} if the timeout was set. 
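Hashes map field names to values under a single key. A minimal reactive sketch of the hash commands listed above; the "user:1" key and its fields are illustrative.

```java
import java.util.Map;

import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.api.StatefulRedisConnection;
import com.lambdaworks.redis.api.rx.RedisReactiveCommands;

public class HashExample {
    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost");        // assumed local Redis
        StatefulRedisConnection<String, String> connection = client.connect();
        RedisReactiveCommands<String, String> rx = connection.reactive();

        // hset emits true when the field is new, false when it overwrote an existing value
        rx.hset("user:1", "name", "Ada").toBlocking().single();
        rx.hset("user:1", "lang", "en").toBlocking().single();

        // hgetall emits the whole hash as a single Map
        Map<String, String> fields = rx.hgetall("user:1").toBlocking().single();
        System.out.println(fields);                       // {name=Ada, lang=en}, order not guaranteed

        connection.close();
        client.shutdown();
    }
}
```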
{@literal false} if {@code key} does not exist or the timeout could not - * be set (see: {@code EXPIRE}). - */ - Observable expireat(K key, long timestamp); - - /** - * Find all keys matching the given pattern. - * - * @param pattern the pattern type: patternkey (pattern) - * @return K array-reply list of keys matching {@code pattern}. - */ - Observable keys(K pattern); - - /** - * Find all keys matching the given pattern. - * - * @param channel the channel - * @param pattern the pattern - * @return Long array-reply list of keys matching {@code pattern}. - */ - Observable keys(KeyStreamingChannel channel, K pattern); - - /** - * Atomically transfer a key from a Redis instance to another one. - * - * @param host the host - * @param port the port - * @param key the key - * @param db the database - * @param timeout the timeout in milliseconds - * @return String simple-string-reply The command returns OK on success. - */ - Observable migrate(String host, int port, K key, int db, long timeout); - - /** - * Atomically transfer one or more keys from a Redis instance to another one. - * - * @param host the host - * @param port the port - * @param db the database - * @param timeout the timeout in milliseconds - * @param migrateArgs migrate args that allow to configure further options - * @return String simple-string-reply The command returns OK on success. - */ - Observable migrate(String host, int port, int db, long timeout, MigrateArgs migrateArgs); - - /** - * Move a key to another database. - * - * @param key the key - * @param db the db type: long - * @return Boolean integer-reply specifically: - */ - Observable move(K key, int db); - - /** - * returns the kind of internal representation used in order to store the value associated with a key. - * - * @param key the key - * @return String - */ - Observable objectEncoding(K key); - - /** - * returns the number of seconds since the object stored at the specified key is idle (not requested by read or write - * operations). - * - * @param key the key - * @return number of seconds since the object stored at the specified key is idle. - */ - Observable objectIdletime(K key); - - /** - * returns the number of references of the value associated with the specified key. - * - * @param key the key - * @return Long - */ - Observable objectRefcount(K key); - - /** - * Remove the expiration from a key. - * - * @param key the key - * @return Boolean integer-reply specifically: - * - * {@literal true} if the timeout was removed. {@literal false} if {@code key} does not exist or does not have an - * associated timeout. - */ - Observable persist(K key); - - /** - * Set a key's time to live in milliseconds. - * - * @param key the key - * @param milliseconds the milliseconds type: long - * @return integer-reply, specifically: - * - * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not - * be set. - */ - Observable pexpire(K key, long milliseconds); - - /** - * Set the expiration for a key as a UNIX timestamp specified in milliseconds. - * - * @param key the key - * @param timestamp the milliseconds-timestamp type: posix time - * @return Boolean integer-reply specifically: - * - * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not - * be set (see: {@code EXPIRE}). - */ - Observable pexpireat(K key, Date timestamp); - - /** - * Set the expiration for a key as a UNIX timestamp specified in milliseconds. 
- * - * @param key the key - * @param timestamp the milliseconds-timestamp type: posix time - * @return Boolean integer-reply specifically: - * - * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not - * be set (see: {@code EXPIRE}). - */ - Observable pexpireat(K key, long timestamp); - - /** - * Get the time to live for a key in milliseconds. - * - * @param key the key - * @return Long integer-reply TTL in milliseconds, or a negative value in order to signal an error (see the description - * above). - */ - Observable pttl(K key); - - /** - * Return a random key from the keyspace. - * - * @return V bulk-string-reply the random key, or {@literal null} when the database is empty. - */ - Observable randomkey(); - - /** - * Rename a key. - * - * @param key the key - * @param newKey the newkey type: key - * @return String simple-string-reply - */ - Observable rename(K key, K newKey); - - /** - * Rename a key, only if the new key does not exist. - * - * @param key the key - * @param newKey the newkey type: key - * @return Boolean integer-reply specifically: - * - * {@literal true} if {@code key} was renamed to {@code newkey}. {@literal false} if {@code newkey} already exists. - */ - Observable renamenx(K key, K newKey); - - /** - * Create a key using the provided serialized value, previously obtained using DUMP. - * - * @param key the key - * @param ttl the ttl type: long - * @param value the serialized-value type: string - * @return String simple-string-reply The command returns OK on success. - */ - Observable restore(K key, long ttl, byte[] value); - - /** - * Sort the elements in a list, set or sorted set. - * - * @param key the key - * @return V array-reply list of sorted elements. - */ - Observable sort(K key); - - /** - * Sort the elements in a list, set or sorted set. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @return Long number of values. - */ - Observable sort(ValueStreamingChannel channel, K key); - - /** - * Sort the elements in a list, set or sorted set. - * - * @param key the key - * @param sortArgs sort arguments - * @return V array-reply list of sorted elements. - */ - Observable sort(K key, SortArgs sortArgs); - - /** - * Sort the elements in a list, set or sorted set. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param sortArgs sort arguments - * @return Long number of values. - */ - Observable sort(ValueStreamingChannel channel, K key, SortArgs sortArgs); - - /** - * Sort the elements in a list, set or sorted set. - * - * @param key the key - * @param sortArgs sort arguments - * @param destination the destination key to store sort results - * @return Long number of values. - */ - Observable sortStore(K key, SortArgs sortArgs, K destination); - - /** - * Touch one or more keys. Touch sets the last accessed time for a key. Non-exsitent keys wont get created. - * - * @param keys the keys - * @return Long integer-reply the number of found keys. - */ - Observable touch(K... keys); - - /** - * Get the time to live for a key. - * - * @param key the key - * @return Long integer-reply TTL in seconds, or a negative value in order to signal an error (see the description above). - */ - Observable ttl(K key); - - /** - * Determine the type stored at key. - * - * @param key the key - * @return String simple-string-reply type of {@code key}, or {@code none} when {@code key} does not exist. 
- */ - Observable type(K key); - - /** - * Incrementally iterate the keys space. - * - * @return KeyScanCursor<K> scan cursor. - */ - Observable> scan(); - - /** - * Incrementally iterate the keys space. - * - * @param scanArgs scan arguments - * @return KeyScanCursor<K> scan cursor. - */ - Observable> scan(ScanArgs scanArgs); - - /** - * Incrementally iterate the keys space. - * - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @param scanArgs scan arguments - * @return KeyScanCursor<K> scan cursor. - */ - Observable> scan(ScanCursor scanCursor, ScanArgs scanArgs); - - /** - * Incrementally iterate the keys space. - * - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @return KeyScanCursor<K> scan cursor. - */ - Observable> scan(ScanCursor scanCursor); - - /** - * Incrementally iterate the keys space. - * - * @param channel streaming channel that receives a call for every key - * @return StreamScanCursor scan cursor. - */ - Observable scan(KeyStreamingChannel channel); - - /** - * Incrementally iterate the keys space. - * - * @param channel streaming channel that receives a call for every key - * @param scanArgs scan arguments - * @return StreamScanCursor scan cursor. - */ - Observable scan(KeyStreamingChannel channel, ScanArgs scanArgs); - - /** - * Incrementally iterate the keys space. - * - * @param channel streaming channel that receives a call for every key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @param scanArgs scan arguments - * @return StreamScanCursor scan cursor. - */ - Observable scan(KeyStreamingChannel channel, ScanCursor scanCursor, ScanArgs scanArgs); - - /** - * Incrementally iterate the keys space. - * - * @param channel streaming channel that receives a call for every key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @return StreamScanCursor scan cursor. - */ - Observable scan(KeyStreamingChannel channel, ScanCursor scanCursor); -} diff --git a/src/main/java/com/lambdaworks/redis/api/rx/RedisListReactiveCommands.java b/src/main/java/com/lambdaworks/redis/api/rx/RedisListReactiveCommands.java deleted file mode 100644 index 552f9de0e8..0000000000 --- a/src/main/java/com/lambdaworks/redis/api/rx/RedisListReactiveCommands.java +++ /dev/null @@ -1,216 +0,0 @@ -package com.lambdaworks.redis.api.rx; - -import java.util.List; -import com.lambdaworks.redis.KeyValue; -import com.lambdaworks.redis.output.ValueStreamingChannel; -import rx.Observable; - -/** - * Observable commands for Lists. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateReactiveApi - */ -public interface RedisListReactiveCommands { - - /** - * Remove and get the first element in a list, or block until one is available. - * - * @param timeout the timeout in seconds - * @param keys the keys - * @return KeyValue<K,V> array-reply specifically: - * - * A {@literal null} multi-bulk when no element could be popped and the timeout expired. A two-element multi-bulk - * with the first element being the name of the key where an element was popped and the second element being the - * value of the popped element. - */ - Observable> blpop(long timeout, K... keys); - - /** - * Remove and get the last element in a list, or block until one is available. 
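SCAN iterates the key space incrementally: each call returns one page of keys plus a cursor for the next call. A sketch of that cursor loop with the reactive API; ScanCursor.INITIAL and the page size of 200 are conventional choices, not mandated by this change.

```java
import com.lambdaworks.redis.KeyScanCursor;
import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.ScanArgs;
import com.lambdaworks.redis.ScanCursor;
import com.lambdaworks.redis.api.StatefulRedisConnection;
import com.lambdaworks.redis.api.rx.RedisReactiveCommands;

public class ScanExample {
    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost");        // assumed local Redis
        StatefulRedisConnection<String, String> connection = client.connect();
        RedisReactiveCommands<String, String> rx = connection.reactive();

        ScanCursor cursor = ScanCursor.INITIAL;
        do {
            // each SCAN call returns one page of keys plus the cursor for the next call
            KeyScanCursor<String> page =
                    rx.scan(cursor, ScanArgs.Builder.limit(200)).toBlocking().single();
            page.getKeys().forEach(System.out::println);
            cursor = page;
        } while (!cursor.isFinished());

        connection.close();
        client.shutdown();
    }
}
```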
- * - * @param timeout the timeout in seconds - * @param keys the keys - * @return KeyValue<K,V> array-reply specifically: - * - * A {@literal null} multi-bulk when no element could be popped and the timeout expired. A two-element multi-bulk - * with the first element being the name of the key where an element was popped and the second element being the - * value of the popped element. - */ - Observable> brpop(long timeout, K... keys); - - /** - * Pop a value from a list, push it to another list and return it; or block until one is available. - * - * @param timeout the timeout in seconds - * @param source the source key - * @param destination the destination type: key - * @return V bulk-string-reply the element being popped from {@code source} and pushed to {@code destination}. If - * {@code timeout} is reached, a - */ - Observable brpoplpush(long timeout, K source, K destination); - - /** - * Get an element from a list by its index. - * - * @param key the key - * @param index the index type: long - * @return V bulk-string-reply the requested element, or {@literal null} when {@code index} is out of range. - */ - Observable lindex(K key, long index); - - /** - * Insert an element before or after another element in a list. - * - * @param key the key - * @param before the before - * @param pivot the pivot - * @param value the value - * @return Long integer-reply the length of the list after the insert operation, or {@code -1} when the value {@code pivot} - * was not found. - */ - Observable linsert(K key, boolean before, V pivot, V value); - - /** - * Get the length of a list. - * - * @param key the key - * @return Long integer-reply the length of the list at {@code key}. - */ - Observable llen(K key); - - /** - * Remove and get the first element in a list. - * - * @param key the key - * @return V bulk-string-reply the value of the first element, or {@literal null} when {@code key} does not exist. - */ - Observable lpop(K key); - - /** - * Prepend one or multiple values to a list. - * - * @param key the key - * @param values the value - * @return Long integer-reply the length of the list after the push operations. - */ - Observable lpush(K key, V... values); - - /** - * Prepend a value to a list, only if the list exists. - * - * @param key the key - * @param value the value - * @return Long integer-reply the length of the list after the push operation. - * @deprecated Use {@link #lpushx(Object, Object[])} - */ - Observable lpushx(K key, V value); - - /** - * Prepend values to a list, only if the list exists. - * - * @param key the key - * @param values the values - * @return Long integer-reply the length of the list after the push operation. - */ - Observable lpushx(K key, V... values); - - /** - * Get a range of elements from a list. - * - * @param key the key - * @param start the start type: long - * @param stop the stop type: long - * @return V array-reply list of elements in the specified range. - */ - Observable lrange(K key, long start, long stop); - - /** - * Get a range of elements from a list. - * - * @param channel the channel - * @param key the key - * @param start the start type: long - * @param stop the stop type: long - * @return Long count of elements in the specified range. - */ - Observable lrange(ValueStreamingChannel channel, K key, long start, long stop); - - /** - * Remove elements from a list. - * - * @param key the key - * @param count the count type: long - * @param value the value - * @return Long integer-reply the number of removed elements. 
- */ - Observable lrem(K key, long count, V value); - - /** - * Set the value of an element in a list by its index. - * - * @param key the key - * @param index the index type: long - * @param value the value - * @return String simple-string-reply - */ - Observable lset(K key, long index, V value); - - /** - * Trim a list to the specified range. - * - * @param key the key - * @param start the start type: long - * @param stop the stop type: long - * @return String simple-string-reply - */ - Observable ltrim(K key, long start, long stop); - - /** - * Remove and get the last element in a list. - * - * @param key the key - * @return V bulk-string-reply the value of the last element, or {@literal null} when {@code key} does not exist. - */ - Observable rpop(K key); - - /** - * Remove the last element in a list, append it to another list and return it. - * - * @param source the source key - * @param destination the destination type: key - * @return V bulk-string-reply the element being popped and pushed. - */ - Observable rpoplpush(K source, K destination); - - /** - * Append one or multiple values to a list. - * - * @param key the key - * @param values the value - * @return Long integer-reply the length of the list after the push operation. - */ - Observable rpush(K key, V... values); - - /** - * Append a value to a list, only if the list exists. - * - * @param key the key - * @param value the value - * @return Long integer-reply the length of the list after the push operation. - * @deprecated Use {@link #rpushx(java.lang.Object, java.lang.Object[])} - */ - Observable rpushx(K key, V value); - - /** - * Append values to a list, only if the list exists. - * - * @param key the key - * @param values the values - * @return Long integer-reply the length of the list after the push operation. - */ - Observable rpushx(K key, V... values); -} diff --git a/src/main/java/com/lambdaworks/redis/api/rx/RedisReactiveCommands.java b/src/main/java/com/lambdaworks/redis/api/rx/RedisReactiveCommands.java deleted file mode 100644 index 01182aa4b8..0000000000 --- a/src/main/java/com/lambdaworks/redis/api/rx/RedisReactiveCommands.java +++ /dev/null @@ -1,53 +0,0 @@ -package com.lambdaworks.redis.api.rx; - -import java.util.concurrent.TimeUnit; - -import rx.Observable; - -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.cluster.api.rx.RedisClusterReactiveCommands; - -/** - * A complete reactive and thread-safe Redis API with 400+ Methods. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - */ -public interface RedisReactiveCommands extends RedisHashReactiveCommands, RedisKeyReactiveCommands, - RedisStringReactiveCommands, RedisListReactiveCommands, RedisSetReactiveCommands, - RedisSortedSetReactiveCommands, RedisScriptingReactiveCommands, RedisServerReactiveCommands, - RedisHLLReactiveCommands, BaseRedisReactiveCommands, RedisClusterReactiveCommands, - RedisTransactionalReactiveCommands, RedisGeoReactiveCommands { - - /** - * Set the default timeout for operations. - * - * @param timeout the timeout value - * @param unit the unit of the timeout value - */ - void setTimeout(long timeout, TimeUnit unit); - - /** - * Authenticate to the server. - * - * @param password the password - * @return String simple-string-reply - */ - Observable auth(String password); - - /** - * Change the selected database for the current connection. 
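Lists support push and pop at both ends, which makes them a simple queue. A reactive sketch of the list commands above; the "jobs" key and its payloads are made up.

```java
import java.util.List;

import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.api.StatefulRedisConnection;
import com.lambdaworks.redis.api.rx.RedisReactiveCommands;

public class ListExample {
    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost");        // assumed local Redis
        StatefulRedisConnection<String, String> connection = client.connect();
        RedisReactiveCommands<String, String> rx = connection.reactive();

        // RPUSH appends and reports the new list length
        Long length = rx.rpush("jobs", "a", "b", "c").toBlocking().single();
        System.out.println("queue length: " + length);

        // LRANGE 0..-1 emits every element; toList() collects them into a single List
        List<String> jobs = rx.lrange("jobs", 0, -1).toList().toBlocking().single();
        System.out.println(jobs);                          // [a, b, c]

        // LPOP removes and returns the head of the list
        System.out.println("next job: " + rx.lpop("jobs").toBlocking().single());

        connection.close();
        client.shutdown();
    }
}
```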
- * - * @param db the database number - * @return String simple-string-reply - */ - Observable select(int db); - - /** - * @return the underlying connection. - */ - StatefulRedisConnection getStatefulConnection(); - -} diff --git a/src/main/java/com/lambdaworks/redis/api/rx/RedisScriptingReactiveCommands.java b/src/main/java/com/lambdaworks/redis/api/rx/RedisScriptingReactiveCommands.java deleted file mode 100644 index c20b82292d..0000000000 --- a/src/main/java/com/lambdaworks/redis/api/rx/RedisScriptingReactiveCommands.java +++ /dev/null @@ -1,103 +0,0 @@ -package com.lambdaworks.redis.api.rx; - -import java.util.List; -import com.lambdaworks.redis.ScriptOutputType; -import rx.Observable; - -/** - * Observable commands for Scripting. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateReactiveApi - */ -public interface RedisScriptingReactiveCommands { - - /** - * Execute a Lua script server side. - * - * @param script Lua 5.1 script. - * @param type output type - * @param keys key names - * @param expected return type - * @return script result - */ - Observable eval(String script, ScriptOutputType type, K... keys); - - /** - * Execute a Lua script server side. - * - * @param script Lua 5.1 script. - * @param type the type - * @param keys the keys - * @param values the values - * @param expected return type - * @return script result - */ - Observable eval(String script, ScriptOutputType type, K[] keys, V... values); - - /** - * Evaluates a script cached on the server side by its SHA1 digest - * - * @param digest SHA1 of the script - * @param type the type - * @param keys the keys - * @param expected return type - * @return script result - */ - Observable evalsha(String digest, ScriptOutputType type, K... keys); - - /** - * Execute a Lua script server side. - * - * @param digest SHA1 of the script - * @param type the type - * @param keys the keys - * @param values the values - * @param expected return type - * @return script result - */ - Observable evalsha(String digest, ScriptOutputType type, K[] keys, V... values); - - /** - * Check existence of scripts in the script cache. - * - * @param digests script digests - * @return Boolean array-reply The command returns an array of integers that correspond to the specified SHA1 - * digest arguments. For every corresponding SHA1 digest of a script that actually exists in the script cache, an 1 - * is returned, otherwise 0 is returned. - */ - Observable scriptExists(String... digests); - - /** - * Remove all the scripts from the script cache. - * - * @return String simple-string-reply - */ - Observable scriptFlush(); - - /** - * Kill the script currently in execution. - * - * @return String simple-string-reply - */ - Observable scriptKill(); - - /** - * Load the specified Lua script into the script cache. - * - * @param script script content - * @return String bulk-string-reply This command returns the SHA1 digest of the script added into the script cache. - */ - Observable scriptLoad(V script); - - /** - * Create a SHA1 digest from a Lua script. 
- * - * @param script script content - * @return the SHA1 value - */ - String digest(V script); -} diff --git a/src/main/java/com/lambdaworks/redis/api/rx/RedisServerReactiveCommands.java b/src/main/java/com/lambdaworks/redis/api/rx/RedisServerReactiveCommands.java deleted file mode 100644 index 22993104f3..0000000000 --- a/src/main/java/com/lambdaworks/redis/api/rx/RedisServerReactiveCommands.java +++ /dev/null @@ -1,337 +0,0 @@ -package com.lambdaworks.redis.api.rx; - -import java.util.Date; -import java.util.List; -import com.lambdaworks.redis.KillArgs; -import com.lambdaworks.redis.protocol.CommandType; -import rx.Observable; - -/** - * Observable commands for Server Control. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateReactiveApi - */ -public interface RedisServerReactiveCommands { - - /** - * Asynchronously rewrite the append-only file. - * - * @return String simple-string-reply always {@code OK}. - */ - Observable bgrewriteaof(); - - /** - * Asynchronously save the dataset to disk. - * - * @return String simple-string-reply - */ - Observable bgsave(); - - /** - * Get the current connection name. - * - * @return K bulk-string-reply The connection name, or a null bulk reply if no name is set. - */ - Observable clientGetname(); - - /** - * Set the current connection name. - * - * @param name the client name - * @return simple-string-reply {@code OK} if the connection name was successfully set. - */ - Observable clientSetname(K name); - - /** - * Kill the connection of a client identified by ip:port. - * - * @param addr ip:port - * @return String simple-string-reply {@code OK} if the connection exists and has been closed - */ - Observable clientKill(String addr); - - /** - * Kill connections of clients which are filtered by {@code killArgs} - * - * @param killArgs args for the kill operation - * @return Long integer-reply number of killed connections - */ - Observable clientKill(KillArgs killArgs); - - /** - * Stop processing commands from clients for some time. - * - * @param timeout the timeout value in milliseconds - * @return String simple-string-reply The command returns OK or an error if the timeout is invalid. - */ - Observable clientPause(long timeout); - - /** - * Get the list of client connections. - * - * @return String bulk-string-reply a unique string, formatted as follows: One client connection per line (separated by LF), - * each line is composed of a succession of property=value fields separated by a space character. - */ - Observable clientList(); - - /** - * Returns an array reply of details about all Redis commands. - * - * @return Object array-reply - */ - Observable command(); - - /** - * Returns an array reply of details about the requested commands. - * - * @param commands the commands to query for - * @return Object array-reply - */ - Observable commandInfo(String... commands); - - /** - * Returns an array reply of details about the requested commands. - * - * @param commands the commands to query for - * @return Object array-reply - */ - Observable commandInfo(CommandType... commands); - - /** - * Get total number of Redis commands. - * - * @return Long integer-reply of number of total commands in this Redis server. - */ - Observable commandCount(); - - /** - * Get the value of a configuration parameter. 
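EVALSHA executes a script already cached on the server, addressed by its SHA1 digest, which avoids resending the script body on every call. A sketch that loads a small Lua script and then invokes it by digest; the "counter" key and the increment value are illustrative.

```java
import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.ScriptOutputType;
import com.lambdaworks.redis.api.StatefulRedisConnection;
import com.lambdaworks.redis.api.rx.RedisReactiveCommands;

public class ScriptingExample {
    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost");        // assumed local Redis
        StatefulRedisConnection<String, String> connection = client.connect();
        RedisReactiveCommands<String, String> rx = connection.reactive();

        // a tiny Lua script: increment KEYS[1] by ARGV[1] and return the new value
        String script = "return redis.call('incrby', KEYS[1], ARGV[1])";

        // cache the script first, then invoke it by SHA1 digest
        String sha = rx.scriptLoad(script).toBlocking().single();
        Long counter = rx.<Long>evalsha(sha, ScriptOutputType.INTEGER,
                new String[] { "counter" }, "5").toBlocking().single();
        System.out.println("counter is now " + counter);

        connection.close();
        client.shutdown();
    }
}
```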
- * - * @param parameter name of the parameter - * @return String bulk-string-reply - */ - Observable configGet(String parameter); - - /** - * Reset the stats returned by INFO. - * - * @return String simple-string-reply always {@code OK}. - */ - Observable configResetstat(); - - /** - * Rewrite the configuration file with the in memory configuration. - * - * @return String simple-string-reply {@code OK} when the configuration was rewritten properly. Otherwise an error is - * returned. - */ - Observable configRewrite(); - - /** - * Set a configuration parameter to the given value. - * - * @param parameter the parameter name - * @param value the parameter value - * @return String simple-string-reply: {@code OK} when the configuration was set properly. Otherwise an error is returned. - */ - Observable configSet(String parameter, String value); - - /** - * Return the number of keys in the selected database. - * - * @return Long integer-reply - */ - Observable dbsize(); - - /** - * Crash and recover - * @param delay optional delay in milliseconds - * @return String simple-string-reply - */ - Observable debugCrashAndRecover(Long delay); - - /** - * Get debugging information about the internal hash-table state. - * - * @param db the database number - * @return String simple-string-reply - */ - Observable debugHtstats(int db); - - /** - * Get debugging information about a key. - * - * @param key the key - * @return String simple-string-reply - */ - Observable debugObject(K key); - - /** - * Make the server crash: Out of memory. - * - * @return nothing, because the server crashes before returning. - */ - Observable debugOom(); - - /** - * Make the server crash: Invalid pointer access. - * - * @return nothing, because the server crashes before returning. - */ - Observable debugSegfault(); - - /** - * Save RDB, clear the database and reload RDB. - * - * @return String simple-string-reply The commands returns OK on success. - */ - Observable debugReload(); - - /** - * Restart the server gracefully. - * @param delay optional delay in milliseconds - * @return String simple-string-reply - */ - Observable debugRestart(Long delay); - - /** - * Get debugging information about the internal SDS length. - * - * @param key the key - * @return String simple-string-reply - */ - Observable debugSdslen(K key); - - /** - * Remove all keys from all databases. - * - * @return String simple-string-reply - */ - Observable flushall(); - - /** - * Remove all keys asynchronously from all databases. - * - * @return String simple-string-reply - */ - Observable flushallAsync(); - - /** - * Remove all keys from the current database. - * - * @return String simple-string-reply - */ - Observable flushdb(); - - /** - * Remove all keys asynchronously from the current database. - * - * @return String simple-string-reply - */ - Observable flushdbAsync(); - - /** - * Get information and statistics about the server. - * - * @return String bulk-string-reply as a collection of text lines. - */ - Observable info(); - - /** - * Get information and statistics about the server. - * - * @param section the section type: string - * @return String bulk-string-reply as a collection of text lines. - */ - Observable info(String section); - - /** - * Get the UNIX time stamp of the last successful save to disk. - * - * @return Date integer-reply an UNIX time stamp. - */ - Observable lastsave(); - - /** - * Synchronously save the dataset to disk. - * - * @return String simple-string-reply The commands returns OK on success. 
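The server-control interface removed here consists mostly of one-shot status and info queries. A hedged sketch of read-only usage follows; it assumes an already-connected `RedisServerReactiveCommands<String, String>` instance (connection setup not shown) and uses illustrative output formatting.

```java
import com.lambdaworks.redis.api.rx.RedisServerReactiveCommands;

class ServerSketch {

    // Print a few server-level facts; each call is only sent to Redis once subscribed to.
    static void printServerState(RedisServerReactiveCommands<String, String> server) {
        server.clientGetname().subscribe(name -> System.out.println("connection name: " + name));
        server.dbsize().subscribe(keys -> System.out.println("keys in current db: " + keys));
        server.configGet("maxmemory").subscribe(value -> System.out.println("maxmemory config: " + value));
        server.info("replication").subscribe(System.out::println);
    }
}
```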
- */ - Observable save(); - - /** - * Synchronously save the dataset to disk and then shut down the server. - * - * @param save {@literal true} force save operation - * @return nothing because the server shuts down before returning a value - */ - Observable shutdown(boolean save); - - /** - * Make the server a slave of another instance, or promote it as master. - * - * @param host the host type: string - * @param port the port type: string - * @return String simple-string-reply - */ - Observable slaveof(String host, int port); - - /** - * Promote server as master. - * - * @return String simple-string-reply - */ - Observable slaveofNoOne(); - - /** - * Read the slow log. - * - * @return Object deeply nested multi bulk replies - */ - Observable slowlogGet(); - - /** - * Read the slow log. - * - * @param count the count - * @return Object deeply nested multi bulk replies - */ - Observable slowlogGet(int count); - - /** - * Obtaining the current length of the slow log. - * - * @return Long length of the slow log. - */ - Observable slowlogLen(); - - /** - * Resetting the slow log. - * - * @return String simple-string-reply The commands returns OK on success. - */ - Observable slowlogReset(); - - /** - * Internal command used for replication. - * - * @return String simple-string-reply - */ - @Deprecated - Observable sync(); - - /** - * Return the current server time. - * - * @return V array-reply specifically: - * - * A multi bulk reply containing two elements: - * - * unix time in seconds. microseconds. - */ - Observable time(); -} diff --git a/src/main/java/com/lambdaworks/redis/api/rx/RedisSetReactiveCommands.java b/src/main/java/com/lambdaworks/redis/api/rx/RedisSetReactiveCommands.java deleted file mode 100644 index 5ddcb7c9c1..0000000000 --- a/src/main/java/com/lambdaworks/redis/api/rx/RedisSetReactiveCommands.java +++ /dev/null @@ -1,292 +0,0 @@ -package com.lambdaworks.redis.api.rx; - -import java.util.Set; -import com.lambdaworks.redis.ScanArgs; -import com.lambdaworks.redis.ScanCursor; -import com.lambdaworks.redis.StreamScanCursor; -import com.lambdaworks.redis.ValueScanCursor; -import com.lambdaworks.redis.output.ValueStreamingChannel; -import rx.Observable; - -/** - * Observable commands for Sets. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateReactiveApi - */ -public interface RedisSetReactiveCommands { - - /** - * Add one or more members to a set. - * - * @param key the key - * @param members the member type: value - * @return Long integer-reply the number of elements that were added to the set, not including all the elements already - * present into the set. - */ - Observable sadd(K key, V... members); - - /** - * Get the number of members in a set. - * - * @param key the key - * @return Long integer-reply the cardinality (number of elements) of the set, or {@literal false} if {@code key} does not - * exist. - */ - Observable scard(K key); - - /** - * Subtract multiple sets. - * - * @param keys the key - * @return V array-reply list with members of the resulting set. - */ - Observable sdiff(K... keys); - - /** - * Subtract multiple sets. - * - * @param channel the channel - * @param keys the keys - * @return Long count of members of the resulting set. - */ - Observable sdiff(ValueStreamingChannel channel, K... keys); - - /** - * Subtract multiple sets and store the resulting set in a key. 
- * - * @param destination the destination type: key - * @param keys the key - * @return Long integer-reply the number of elements in the resulting set. - */ - Observable sdiffstore(K destination, K... keys); - - /** - * Intersect multiple sets. - * - * @param keys the key - * @return V array-reply list with members of the resulting set. - */ - Observable sinter(K... keys); - - /** - * Intersect multiple sets. - * - * @param channel the channel - * @param keys the keys - * @return Long count of members of the resulting set. - */ - Observable sinter(ValueStreamingChannel channel, K... keys); - - /** - * Intersect multiple sets and store the resulting set in a key. - * - * @param destination the destination type: key - * @param keys the key - * @return Long integer-reply the number of elements in the resulting set. - */ - Observable sinterstore(K destination, K... keys); - - /** - * Determine if a given value is a member of a set. - * - * @param key the key - * @param member the member type: value - * @return Boolean integer-reply specifically: - * - * {@literal true} if the element is a member of the set. {@literal false} if the element is not a member of the - * set, or if {@code key} does not exist. - */ - Observable sismember(K key, V member); - - /** - * Move a member from one set to another. - * - * @param source the source key - * @param destination the destination type: key - * @param member the member type: value - * @return Boolean integer-reply specifically: - * - * {@literal true} if the element is moved. {@literal false} if the element is not a member of {@code source} and no - * operation was performed. - */ - Observable smove(K source, K destination, V member); - - /** - * Get all the members in a set. - * - * @param key the key - * @return V array-reply all elements of the set. - */ - Observable smembers(K key); - - /** - * Get all the members in a set. - * - * @param channel the channel - * @param key the keys - * @return Long count of members of the resulting set. - */ - Observable smembers(ValueStreamingChannel channel, K key); - - /** - * Remove and return a random member from a set. - * - * @param key the key - * @return V bulk-string-reply the removed element, or {@literal null} when {@code key} does not exist. - */ - Observable spop(K key); - - /** - * Remove and return one or multiple random members from a set. - * - * @param key the key - * @param count number of members to pop - * @return V bulk-string-reply the removed element, or {@literal null} when {@code key} does not exist. - */ - Observable spop(K key, long count); - - /** - * Get one random member from a set. - * - * @param key the key - * - * @return V bulk-string-reply without the additional {@code count} argument the command returns a Bulk Reply with the - * randomly selected element, or {@literal null} when {@code key} does not exist. - */ - Observable srandmember(K key); - - /** - * Get one or multiple random members from a set. - * - * @param key the key - * @param count the count type: long - * @return V bulk-string-reply without the additional {@code count} argument the command returns a Bulk Reply - * with the randomly selected element, or {@literal null} when {@code key} does not exist. - */ - Observable srandmember(K key, long count); - - /** - * Get one or multiple random members from a set. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param count the count - * @return Long count of members of the resulting set. 
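As a quick orientation for the Set commands being removed, here is a hedged sketch of the basic membership operations. The key naming scheme and the `recordVisit` helper are assumptions made for illustration only.

```java
import com.lambdaworks.redis.api.rx.RedisSetReactiveCommands;

class SetSketch {

    // Track unique visitors per day with a Redis set.
    static void recordVisit(RedisSetReactiveCommands<String, String> redis, String day, String userId) {
        String key = "visitors:" + day;

        // SADD returns how many of the given members were actually new.
        redis.sadd(key, userId).subscribe(added -> System.out.println("newly added: " + added));

        // SMEMBERS emits one Observable item per member of the set.
        redis.smembers(key).subscribe(member -> System.out.println("visitor: " + member));
    }
}
```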
- */ - Observable srandmember(ValueStreamingChannel channel, K key, long count); - - /** - * Remove one or more members from a set. - * - * @param key the key - * @param members the member type: value - * @return Long integer-reply the number of members that were removed from the set, not including non existing members. - */ - Observable srem(K key, V... members); - - /** - * Add multiple sets. - * - * @param keys the key - * @return V array-reply list with members of the resulting set. - */ - Observable sunion(K... keys); - - /** - * Add multiple sets. - * - * @param channel streaming channel that receives a call for every value - * @param keys the keys - * @return Long count of members of the resulting set. - */ - Observable sunion(ValueStreamingChannel channel, K... keys); - - /** - * Add multiple sets and store the resulting set in a key. - * - * @param destination the destination type: key - * @param keys the key - * @return Long integer-reply the number of elements in the resulting set. - */ - Observable sunionstore(K destination, K... keys); - - /** - * Incrementally iterate Set elements. - * - * @param key the key - * @return ValueScanCursor<V> scan cursor. - */ - Observable> sscan(K key); - - /** - * Incrementally iterate Set elements. - * - * @param key the key - * @param scanArgs scan arguments - * @return ValueScanCursor<V> scan cursor. - */ - Observable> sscan(K key, ScanArgs scanArgs); - - /** - * Incrementally iterate Set elements. - * - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @param scanArgs scan arguments - * @return ValueScanCursor<V> scan cursor. - */ - Observable> sscan(K key, ScanCursor scanCursor, ScanArgs scanArgs); - - /** - * Incrementally iterate Set elements. - * - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @return ValueScanCursor<V> scan cursor. - */ - Observable> sscan(K key, ScanCursor scanCursor); - - /** - * Incrementally iterate Set elements. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @return StreamScanCursor scan cursor. - */ - Observable sscan(ValueStreamingChannel channel, K key); - - /** - * Incrementally iterate Set elements. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param scanArgs scan arguments - * @return StreamScanCursor scan cursor. - */ - Observable sscan(ValueStreamingChannel channel, K key, ScanArgs scanArgs); - - /** - * Incrementally iterate Set elements. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @param scanArgs scan arguments - * @return StreamScanCursor scan cursor. - */ - Observable sscan(ValueStreamingChannel channel, K key, ScanCursor scanCursor, ScanArgs scanArgs); - - /** - * Incrementally iterate Set elements. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @return StreamScanCursor scan cursor. 
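The `sscan` overloads above implement cursor-based iteration: each call returns a `ValueScanCursor` holding one batch of members plus the cursor state needed to resume. A blocking sketch of a full scan, under the assumption of an existing connection, might look like this.

```java
import com.lambdaworks.redis.ValueScanCursor;
import com.lambdaworks.redis.api.rx.RedisSetReactiveCommands;

class SetScanSketch {

    // Walk a large set page by page instead of loading it all at once with SMEMBERS.
    static void scanAll(RedisSetReactiveCommands<String, String> redis, String key) {
        ValueScanCursor<String> cursor = redis.sscan(key).toBlocking().single();
        cursor.getValues().forEach(System.out::println);

        while (!cursor.isFinished()) {
            // Resume from the previous cursor until the server reports completion.
            cursor = redis.sscan(key, cursor).toBlocking().single();
            cursor.getValues().forEach(System.out::println);
        }
    }
}
```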
- */ - Observable sscan(ValueStreamingChannel channel, K key, ScanCursor scanCursor); -} diff --git a/src/main/java/com/lambdaworks/redis/api/rx/RedisSortedSetReactiveCommands.java b/src/main/java/com/lambdaworks/redis/api/rx/RedisSortedSetReactiveCommands.java deleted file mode 100644 index bdc42ded83..0000000000 --- a/src/main/java/com/lambdaworks/redis/api/rx/RedisSortedSetReactiveCommands.java +++ /dev/null @@ -1,836 +0,0 @@ -package com.lambdaworks.redis.api.rx; - -import java.util.List; -import com.lambdaworks.redis.ScanArgs; -import com.lambdaworks.redis.ScanCursor; -import com.lambdaworks.redis.ScoredValue; -import com.lambdaworks.redis.ScoredValueScanCursor; -import com.lambdaworks.redis.StreamScanCursor; -import com.lambdaworks.redis.ZAddArgs; -import com.lambdaworks.redis.ZStoreArgs; -import com.lambdaworks.redis.output.ScoredValueStreamingChannel; -import com.lambdaworks.redis.output.ValueStreamingChannel; -import rx.Observable; - -/** - * Observable commands for Sorted Sets. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateReactiveApi - */ -public interface RedisSortedSetReactiveCommands { - - /** - * Add one or more members to a sorted set, or update its score if it already exists. - * - * @param key the key - * @param score the score - * @param member the member - * - * @return Long integer-reply specifically: - * - * The number of elements added to the sorted sets, not including elements already existing for which the score was - * updated. - */ - Observable zadd(K key, double score, V member); - - /** - * Add one or more members to a sorted set, or update its score if it already exists. - * - * @param key the key - * @param scoresAndValues the scoresAndValue tuples (score,value,score,value,...) - * @return Long integer-reply specifically: - * - * The number of elements added to the sorted sets, not including elements already existing for which the score was - * updated. - */ - Observable zadd(K key, Object... scoresAndValues); - - /** - * Add one or more members to a sorted set, or update its score if it already exists. - * - * @param key the key - * @param scoredValues the scored values - * @return Long integer-reply specifically: - * - * The number of elements added to the sorted sets, not including elements already existing for which the score was - * updated. - */ - Observable zadd(K key, ScoredValue... scoredValues); - - /** - * Add one or more members to a sorted set, or update its score if it already exists. - * - * @param key the key - * @param zAddArgs arguments for zadd - * @param score the score - * @param member the member - * - * @return Long integer-reply specifically: - * - * The number of elements added to the sorted sets, not including elements already existing for which the score was - * updated. - */ - Observable zadd(K key, ZAddArgs zAddArgs, double score, V member); - - /** - * Add one or more members to a sorted set, or update its score if it already exists. - * - * @param key the key - * @param zAddArgs arguments for zadd - * @param scoresAndValues the scoresAndValue tuples (score,value,score,value,...) - * @return Long integer-reply specifically: - * - * The number of elements added to the sorted sets, not including elements already existing for which the score was - * updated. - */ - Observable zadd(K key, ZAddArgs zAddArgs, Object... scoresAndValues); - - /** - * Add one or more members to a sorted set, or update its score if it already exists. 
- * - * @param key the ke - * @param zAddArgs arguments for zadd - * @param scoredValues the scored values - * @return Long integer-reply specifically: - * - * The number of elements added to the sorted sets, not including elements already existing for which the score was - * updated. - */ - Observable zadd(K key, ZAddArgs zAddArgs, ScoredValue... scoredValues); - - /** - * ZADD acts like ZINCRBY - * - * @param key the key - * @param score the score - * @param member the member - * - * @return Long integer-reply specifically: - * - * The total number of elements changed - */ - Observable zaddincr(K key, double score, V member); - - /** - * Get the number of members in a sorted set. - * - * @param key the key - * @return Long integer-reply the cardinality (number of elements) of the sorted set, or {@literal false} if {@code key} - * does not exist. - */ - Observable zcard(K key); - - /** - * Count the members in a sorted set with scores within the given values. - * - * @param key the key - * @param min min score - * @param max max score - * @return Long integer-reply the number of elements in the specified score range. - */ - Observable zcount(K key, double min, double max); - - /** - * Count the members in a sorted set with scores within the given values. - * - * @param key the key - * @param min min score - * @param max max score - * @return Long integer-reply the number of elements in the specified score range. - */ - Observable zcount(K key, String min, String max); - - /** - * Increment the score of a member in a sorted set. - * - * @param key the key - * @param amount the increment type: long - * @param member the member type: key - * @return Double bulk-string-reply the new score of {@code member} (a double precision floating point number), represented - * as string. - */ - Observable zincrby(K key, double amount, K member); - - /** - * Intersect multiple sorted sets and store the resulting sorted set in a new key. - * - * @param destination the destination - * @param keys the keys - * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. - */ - Observable zinterstore(K destination, K... keys); - - /** - * Intersect multiple sorted sets and store the resulting sorted set in a new key. - * - * @param destination the destination - * @param storeArgs the storeArgs - * @param keys the keys - * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. - */ - Observable zinterstore(K destination, ZStoreArgs storeArgs, K... keys); - - /** - * Return a range of members in a sorted set, by index. - * - * @param key the key - * @param start the start - * @param stop the stop - * @return V array-reply list of elements in the specified range. - */ - Observable zrange(K key, long start, long stop); - - /** - * Return a range of members with scores in a sorted set, by index. - * - * @param key the key - * @param start the start - * @param stop the stop - * @return V array-reply list of elements in the specified range. - */ - Observable> zrangeWithScores(K key, long start, long stop); - - /** - * Return a range of members in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @return V array-reply list of elements in the specified score range. - */ - Observable zrangebyscore(K key, double min, double max); - - /** - * Return a range of members in a sorted set, by score. 
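To make the sorted-set basics above concrete, here is a small hedged sketch that adds members, bumps a score, and reads a range back. Key and member names are illustrative; the command object is assumed to come from an existing connection.

```java
import com.lambdaworks.redis.api.rx.RedisSortedSetReactiveCommands;

class SortedSetSketch {

    // Maintain a simple score board: add members, bump a score, read a range back.
    static void scoreBoard(RedisSortedSetReactiveCommands<String, String> redis) {
        redis.zadd("board", 10.0, "alice").toBlocking().single();
        redis.zadd("board", 25.0, "bob").toBlocking().single();

        // ZINCRBY returns the new score of the member.
        Double newScore = redis.zincrby("board", 5.0, "alice").toBlocking().single();
        System.out.println("alice now has " + newScore);

        // ZRANGE 0 -1 emits every member, lowest score first.
        redis.zrange("board", 0, -1).subscribe(System.out::println);
    }
}
```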
- * - * @param key the key - * @param min min score - * @param max max score - * @return V array-reply list of elements in the specified score range. - */ - Observable zrangebyscore(K key, String min, String max); - - /** - * Return a range of members in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return V array-reply list of elements in the specified score range. - */ - Observable zrangebyscore(K key, double min, double max, long offset, long count); - - /** - * Return a range of members in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return V array-reply list of elements in the specified score range. - */ - Observable zrangebyscore(K key, String min, String max, long offset, long count); - - /** - * Return a range of members with score in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @return ScoredValue<V> array-reply list of elements in the specified score range. - */ - Observable> zrangebyscoreWithScores(K key, double min, double max); - - /** - * Return a range of members with score in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @return ScoredValue<V> array-reply list of elements in the specified score range. - */ - Observable> zrangebyscoreWithScores(K key, String min, String max); - - /** - * Return a range of members with score in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return ScoredValue<V> array-reply list of elements in the specified score range. - */ - Observable> zrangebyscoreWithScores(K key, double min, double max, long offset, long count); - - /** - * Return a range of members with score in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return ScoredValue<V> array-reply list of elements in the specified score range. - */ - Observable> zrangebyscoreWithScores(K key, String min, String max, long offset, long count); - - /** - * Return a range of members in a sorted set, by index. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param start the start - * @param stop the stop - * @return Long count of elements in the specified range. - */ - Observable zrange(ValueStreamingChannel channel, K key, long start, long stop); - - /** - * Stream over a range of members with scores in a sorted set, by index. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param start the start - * @param stop the stop - * @return Long count of elements in the specified range. - */ - Observable zrangeWithScores(ScoredValueStreamingChannel channel, K key, long start, long stop); - - /** - * Stream over a range of members in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified score range. 
- */ - Observable zrangebyscore(ValueStreamingChannel channel, K key, double min, double max); - - /** - * Stream over a range of members in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified score range. - */ - Observable zrangebyscore(ValueStreamingChannel channel, K key, String min, String max); - - /** - * Stream over range of members in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified score range. - */ - Observable zrangebyscore(ValueStreamingChannel channel, K key, double min, double max, long offset, long count); - - /** - * Stream over a range of members in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified score range. - */ - Observable zrangebyscore(ValueStreamingChannel channel, K key, String min, String max, long offset, long count); - - /** - * Stream over a range of members with scores in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified score range. - */ - Observable zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double min, double max); - - /** - * Stream over a range of members with scores in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified score range. - */ - Observable zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String min, String max); - - /** - * Stream over a range of members with scores in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified score range. - */ - Observable zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double min, double max, long offset, long count); - - /** - * Stream over a range of members with scores in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified score range. - */ - Observable zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String min, String max, long offset, long count); - - /** - * Determine the index of a member in a sorted set. - * - * @param key the key - * @param member the member type: value - * @return Long integer-reply the rank of {@code member}. 
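The streaming variants above differ from the buffered ones only in how results are delivered: elements are pushed into a `ValueStreamingChannel` callback and the Observable carries just the count. A hedged sketch contrasting the two, with an assumed `"board"` key:

```java
import com.lambdaworks.redis.api.rx.RedisSortedSetReactiveCommands;

class RangeByScoreSketch {

    // Pull a score window, then stream the same window through a channel callback.
    static void scoresBetween(RedisSortedSetReactiveCommands<String, String> redis, double min, double max) {
        // Buffered variant: one Observable item per matching member.
        redis.zrangebyscore("board", min, max).subscribe(member -> System.out.println("member: " + member));

        // Streaming variant: members go to the channel, the Observable emits the count.
        redis.zrangebyscore(member -> System.out.println("streamed: " + member), "board", min, max)
                .subscribe(count -> System.out.println("members streamed: " + count));
    }
}
```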
If {@code member} does not exist in the sorted set or {@code key} - * does not exist, - */ - Observable zrank(K key, V member); - - /** - * Remove one or more members from a sorted set. - * - * @param key the key - * @param members the member type: value - * @return Long integer-reply specifically: - * - * The number of members removed from the sorted set, not including non existing members. - */ - Observable zrem(K key, V... members); - - /** - * Remove all members in a sorted set within the given indexes. - * - * @param key the key - * @param start the start type: long - * @param stop the stop type: long - * @return Long integer-reply the number of elements removed. - */ - Observable zremrangebyrank(K key, long start, long stop); - - /** - * Remove all members in a sorted set within the given scores. - * - * @param key the key - * @param min min score - * @param max max score - * @return Long integer-reply the number of elements removed. - */ - Observable zremrangebyscore(K key, double min, double max); - - /** - * Remove all members in a sorted set within the given scores. - * - * @param key the key - * @param min min score - * @param max max score - * @return Long integer-reply the number of elements removed. - */ - Observable zremrangebyscore(K key, String min, String max); - - /** - * Return a range of members in a sorted set, by index, with scores ordered from high to low. - * - * @param key the key - * @param start the start - * @param stop the stop - * @return V array-reply list of elements in the specified range. - */ - Observable zrevrange(K key, long start, long stop); - - /** - * Return a range of members with scores in a sorted set, by index, with scores ordered from high to low. - * - * @param key the key - * @param start the start - * @param stop the stop - * @return V array-reply list of elements in the specified range. - */ - Observable> zrevrangeWithScores(K key, long start, long stop); - - /** - * Return a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param min min score - * @param max max score - * @return V array-reply list of elements in the specified score range. - */ - Observable zrevrangebyscore(K key, double max, double min); - - /** - * Return a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param min min score - * @param max max score - * @return V array-reply list of elements in the specified score range. - */ - Observable zrevrangebyscore(K key, String max, String min); - - /** - * Return a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param max max score - * @param min min score - * @param offset the withscores - * @param count the null - * @return V array-reply list of elements in the specified score range. - */ - Observable zrevrangebyscore(K key, double max, double min, long offset, long count); - - /** - * Return a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param max max score - * @param min min score - * @param offset the offset - * @param count the count - * @return V array-reply list of elements in the specified score range. - */ - Observable zrevrangebyscore(K key, String max, String min, long offset, long count); - - /** - * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. 
- * - * @param key the key - * @param max max score - * @param min min score - * @return V array-reply list of elements in the specified score range. - */ - Observable> zrevrangebyscoreWithScores(K key, double max, double min); - - /** - * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param max max score - * @param min min score - * @return ScoredValue<V> array-reply list of elements in the specified score range. - */ - Observable> zrevrangebyscoreWithScores(K key, String max, String min); - - /** - * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param max max score - * @param min min score - * @param offset the offset - * @param count the count - * @return ScoredValue<V> array-reply list of elements in the specified score range. - */ - Observable> zrevrangebyscoreWithScores(K key, double max, double min, long offset, long count); - - /** - * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param max max score - * @param min min score - * @param offset the offset - * @param count the count - * @return V array-reply list of elements in the specified score range. - */ - Observable> zrevrangebyscoreWithScores(K key, String max, String min, long offset, long count); - - /** - * Stream over a range of members in a sorted set, by index, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param start the start - * @param stop the stop - * @return Long count of elements in the specified range. - */ - Observable zrevrange(ValueStreamingChannel channel, K key, long start, long stop); - - /** - * Stream over a range of members with scores in a sorted set, by index, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param start the start - * @param stop the stop - * @return Long count of elements in the specified range. - */ - Observable zrevrangeWithScores(ScoredValueStreamingChannel channel, K key, long start, long stop); - - /** - * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param max max score - * @param min min score - * @return Long count of elements in the specified range. - */ - Observable zrevrangebyscore(ValueStreamingChannel channel, K key, double max, double min); - - /** - * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified range. - */ - Observable zrevrangebyscore(ValueStreamingChannel channel, K key, String max, String min); - - /** - * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified range. 
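The reverse-order range methods are the usual way to read a leaderboard highest-first. A short hedged sketch; the `"board"` key and the score threshold are assumptions for illustration, and the string bounds use standard Redis interval syntax (`+inf`, `(` for exclusive).

```java
import com.lambdaworks.redis.api.rx.RedisSortedSetReactiveCommands;

class TopScoresSketch {

    // Highest scores first: top 10 members together with their scores.
    static void printTop10(RedisSortedSetReactiveCommands<String, String> redis) {
        redis.zrevrangeWithScores("board", 0, 9).subscribe(System.out::println);

        // String bounds allow open intervals and infinities, e.g. everything strictly above 100.
        redis.zrevrangebyscore("board", "+inf", "(100")
                .subscribe(member -> System.out.println("above 100: " + member));
    }
}
```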
- */ - Observable zrevrangebyscore(ValueStreamingChannel channel, K key, double max, double min, long offset, long count); - - /** - * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified range. - */ - Observable zrevrangebyscore(ValueStreamingChannel channel, K key, String max, String min, long offset, long count); - - /** - * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified range. - */ - Observable zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double max, double min); - - /** - * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified range. - */ - Observable zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String max, String min); - - /** - * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified range. - */ - Observable zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double max, double min, long offset, long count); - - /** - * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified range. - */ - Observable zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String max, String min, long offset, long count); - - /** - * Determine the index of a member in a sorted set, with scores ordered from high to low. - * - * @param key the key - * @param member the member type: value - * @return Long integer-reply the rank of {@code member}. If {@code member} does not exist in the sorted set or {@code key} - * does not exist, - */ - Observable zrevrank(K key, V member); - - /** - * Get the score associated with the given member in a sorted set. - * - * @param key the key - * @param member the member type: value - * @return Double bulk-string-reply the score of {@code member} (a double precision floating point number), represented as - * string. - */ - Observable zscore(K key, V member); - - /** - * Add multiple sorted sets and store the resulting sorted set in a new key. - * - * @param destination destination key - * @param keys source keys - * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. 
- */ - Observable zunionstore(K destination, K... keys); - - /** - * Add multiple sorted sets and store the resulting sorted set in a new key. - * - * @param destination the destination - * @param storeArgs the storeArgs - * @param keys the keys - * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. - */ - Observable zunionstore(K destination, ZStoreArgs storeArgs, K... keys); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param key the key - * @return ScoredValueScanCursor<V> scan cursor. - */ - Observable> zscan(K key); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param key the key - * @param scanArgs scan arguments - * @return ScoredValueScanCursor<V> scan cursor. - */ - Observable> zscan(K key, ScanArgs scanArgs); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @param scanArgs scan arguments - * @return ScoredValueScanCursor<V> scan cursor. - */ - Observable> zscan(K key, ScanCursor scanCursor, ScanArgs scanArgs); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @return ScoredValueScanCursor<V> scan cursor. - */ - Observable> zscan(K key, ScanCursor scanCursor); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @return StreamScanCursor scan cursor. - */ - Observable zscan(ScoredValueStreamingChannel channel, K key); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param scanArgs scan arguments - * @return StreamScanCursor scan cursor. - */ - Observable zscan(ScoredValueStreamingChannel channel, K key, ScanArgs scanArgs); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @param scanArgs scan arguments - * @return StreamScanCursor scan cursor. - */ - Observable zscan(ScoredValueStreamingChannel channel, K key, ScanCursor scanCursor, ScanArgs scanArgs); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @return StreamScanCursor scan cursor. - */ - Observable zscan(ScoredValueStreamingChannel channel, K key, ScanCursor scanCursor); - - /** - * Count the number of members in a sorted set between a given lexicographical range. - * - * @param key the key - * @param min min score - * @param max max score - * @return Long integer-reply the number of elements in the specified score range. - */ - Observable zlexcount(K key, String min, String max); - - /** - * Remove all members in a sorted set between the given lexicographical range. 
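`zunionstore` and `zscore` from the interface above combine naturally: merge several sorted sets, then look individual members up in the result. A hedged sketch with assumed key names:

```java
import com.lambdaworks.redis.api.rx.RedisSortedSetReactiveCommands;

class MergeBoardsSketch {

    // Merge two score boards into one; members present in both get their scores summed.
    static void merge(RedisSortedSetReactiveCommands<String, String> redis) {
        redis.zunionstore("board:total", "board:week1", "board:week2")
                .subscribe(size -> System.out.println("members in merged board: " + size));

        // ZSCORE looks a single member up in the merged result.
        redis.zscore("board:total", "alice").subscribe(score -> System.out.println("alice: " + score));
    }
}
```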
- * - * @param key the key - * @param min min score - * @param max max score - * @return Long integer-reply the number of elements removed. - */ - Observable zremrangebylex(K key, String min, String max); - - /** - * Return a range of members in a sorted set, by lexicographical range. - * - * @param key the key - * @param min min score - * @param max max score - * @return V array-reply list of elements in the specified score range. - */ - Observable zrangebylex(K key, String min, String max); - - /** - * Return a range of members in a sorted set, by lexicographical range. - * - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return V array-reply list of elements in the specified score range. - */ - Observable zrangebylex(K key, String min, String max, long offset, long count); -} diff --git a/src/main/java/com/lambdaworks/redis/api/rx/RedisStringReactiveCommands.java b/src/main/java/com/lambdaworks/redis/api/rx/RedisStringReactiveCommands.java deleted file mode 100644 index cf522e63ac..0000000000 --- a/src/main/java/com/lambdaworks/redis/api/rx/RedisStringReactiveCommands.java +++ /dev/null @@ -1,343 +0,0 @@ -package com.lambdaworks.redis.api.rx; - -import com.lambdaworks.redis.BitFieldArgs; -import com.lambdaworks.redis.SetArgs; -import com.lambdaworks.redis.output.ValueStreamingChannel; -import rx.Observable; - -import java.util.Map; - -/** - * Observable commands for Strings. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateReactiveApi - */ -public interface RedisStringReactiveCommands { - - /** - * Append a value to a key. - * - * @param key the key - * @param value the value - * @return Long integer-reply the length of the string after the append operation. - */ - Observable append(K key, V value); - - /** - * Count set bits in a string. - * - * @param key the key - * - * @return Long integer-reply The number of bits set to 1. - */ - Observable bitcount(K key); - - /** - * Count set bits in a string. - * - * @param key the key - * @param start the start - * @param end the end - * - * @return Long integer-reply The number of bits set to 1. - */ - Observable bitcount(K key, long start, long end); - - /** - * Execute {@code BITFIELD} with its subcommands. - * - * @param key the key - * @param bitFieldArgs the args containing subcommands, must not be {@literal null}. - * - * @return Long bulk-reply the results from the bitfield commands. - */ - Observable bitfield(K key, BitFieldArgs bitFieldArgs); - - /** - * Find first bit set or clear in a string. - * - * @param key the key - * @param state the state - * - * @return Long integer-reply The command returns the position of the first bit set to 1 or 0 according to the request. - * - * If we look for set bits (the bit argument is 1) and the string is empty or composed of just zero bytes, -1 is - * returned. - * - * If we look for clear bits (the bit argument is 0) and the string only contains bit set to 1, the function returns - * the first bit not part of the string on the right. So if the string is tree bytes set to the value 0xff the - * command {@code BITPOS key 0} will return 24, since up to bit 23 all the bits are 1. - * - * Basically the function consider the right of the string as padded with zeros if you look for clear bits and - * specify no range or the start argument only. 
- * - * However this behavior changes if you are looking for clear bits and specify a range with both - * start and end. If no clear bit is found in the specified range, the function - * returns -1 as the user specified a clear range and there are no 0 bits in that range. - */ - Observable bitpos(K key, boolean state); - - /** - * Find first bit set or clear in a string. - * - * @param key the key - * @param state the bit type: long - * @param start the start type: long - * @param end the end type: long - * @return Long integer-reply The command returns the position of the first bit set to 1 or 0 according to the request. - * - * If we look for set bits (the bit argument is 1) and the string is empty or composed of just zero bytes, -1 is - * returned. - * - * If we look for clear bits (the bit argument is 0) and the string only contains bit set to 1, the function returns - * the first bit not part of the string on the right. So if the string is tree bytes set to the value 0xff the - * command {@code BITPOS key 0} will return 24, since up to bit 23 all the bits are 1. - * - * Basically the function consider the right of the string as padded with zeros if you look for clear bits and - * specify no range or the start argument only. - * - * However this behavior changes if you are looking for clear bits and specify a range with both - * start and end. If no clear bit is found in the specified range, the function - * returns -1 as the user specified a clear range and there are no 0 bits in that range. - */ - Observable bitpos(K key, boolean state, long start, long end); - - /** - * Perform bitwise AND between strings. - * - * @param destination result key of the operation - * @param keys operation input key names - * @return Long integer-reply The size of the string stored in the destination key, that is equal to the size of the longest - * input string. - */ - Observable bitopAnd(K destination, K... keys); - - /** - * Perform bitwise NOT between strings. - * - * @param destination result key of the operation - * @param source operation input key names - * @return Long integer-reply The size of the string stored in the destination key, that is equal to the size of the longest - * input string. - */ - Observable bitopNot(K destination, K source); - - /** - * Perform bitwise OR between strings. - * - * @param destination result key of the operation - * @param keys operation input key names - * @return Long integer-reply The size of the string stored in the destination key, that is equal to the size of the longest - * input string. - */ - Observable bitopOr(K destination, K... keys); - - /** - * Perform bitwise XOR between strings. - * - * @param destination result key of the operation - * @param keys operation input key names - * @return Long integer-reply The size of the string stored in the destination key, that is equal to the size of the longest - * input string. - */ - Observable bitopXor(K destination, K... keys); - - /** - * Decrement the integer value of a key by one. - * - * @param key the key - * @return Long integer-reply the value of {@code key} after the decrement - */ - Observable decr(K key); - - /** - * Decrement the integer value of a key by the given number. - * - * @param key the key - * @param amount the decrement type: long - * @return Long integer-reply the value of {@code key} after the decrement - */ - Observable decrby(K key, long amount); - - /** - * Get the value of a key. 
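The bit operations documented above are typically used on bitmap-style keys. The following hedged sketch combines two bitmaps and inspects the result; the key names are illustrative assumptions.

```java
import com.lambdaworks.redis.api.rx.RedisStringReactiveCommands;

class BitOpsSketch {

    // Combine two bitmaps and inspect the result.
    static void combine(RedisStringReactiveCommands<String, String> redis) {
        // Bitwise AND of the source keys, stored under "bits:both".
        redis.bitopAnd("bits:both", "bits:monday", "bits:tuesday")
                .subscribe(len -> System.out.println("destination length in bytes: " + len));

        // Number of bits set to 1 in the combined bitmap.
        redis.bitcount("bits:both").subscribe(count -> System.out.println("bits set: " + count));

        // Position of the first set bit, or -1 if none is set.
        redis.bitpos("bits:both", true).subscribe(pos -> System.out.println("first set bit: " + pos));
    }
}
```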
- * - * @param key the key - * @return V bulk-string-reply the value of {@code key}, or {@literal null} when {@code key} does not exist. - */ - Observable get(K key); - - /** - * Returns the bit value at offset in the string value stored at key. - * - * @param key the key - * @param offset the offset type: long - * @return Long integer-reply the bit value stored at offset. - */ - Observable getbit(K key, long offset); - - /** - * Get a substring of the string stored at a key. - * - * @param key the key - * @param start the start type: long - * @param end the end type: long - * @return V bulk-string-reply - */ - Observable getrange(K key, long start, long end); - - /** - * Set the string value of a key and return its old value. - * - * @param key the key - * @param value the value - * @return V bulk-string-reply the old value stored at {@code key}, or {@literal null} when {@code key} did not exist. - */ - Observable getset(K key, V value); - - /** - * Increment the integer value of a key by one. - * - * @param key the key - * @return Long integer-reply the value of {@code key} after the increment - */ - Observable incr(K key); - - /** - * Increment the integer value of a key by the given amount. - * - * @param key the key - * @param amount the increment type: long - * @return Long integer-reply the value of {@code key} after the increment - */ - Observable incrby(K key, long amount); - - /** - * Increment the float value of a key by the given amount. - * - * @param key the key - * @param amount the increment type: double - * @return Double bulk-string-reply the value of {@code key} after the increment. - */ - Observable incrbyfloat(K key, double amount); - - /** - * Get the values of all the given keys. - * - * @param keys the key - * @return V array-reply list of values at the specified keys. - */ - Observable mget(K... keys); - - /** - * Stream over the values of all the given keys. - * - * @param channel the channel - * @param keys the keys - * - * @return Long array-reply list of values at the specified keys. - */ - Observable mget(ValueStreamingChannel channel, K... keys); - - /** - * Set multiple keys to multiple values. - * - * @param map the null - * @return String simple-string-reply always {@code OK} since {@code MSET} can't fail. - */ - Observable mset(Map map); - - /** - * Set multiple keys to multiple values, only if none of the keys exist. - * - * @param map the null - * @return Boolean integer-reply specifically: - * - * {@code 1} if the all the keys were set. {@code 0} if no key was set (at least one key already existed). - */ - Observable msetnx(Map map); - - /** - * Set the string value of a key. - * - * @param key the key - * @param value the value - * - * @return String simple-string-reply {@code OK} if {@code SET} was executed correctly. - */ - Observable set(K key, V value); - - /** - * Set the string value of a key. - * - * @param key the key - * @param value the value - * @param setArgs the setArgs - * - * @return String simple-string-reply {@code OK} if {@code SET} was executed correctly. - */ - Observable set(K key, V value, SetArgs setArgs); - - /** - * Sets or clears the bit at offset in the string value stored at key. - * - * @param key the key - * @param offset the offset type: long - * @param value the value type: string - * @return Long integer-reply the original bit value stored at offset. - */ - Observable setbit(K key, long offset, int value); - - /** - * Set the value and expiration of a key. 
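For the plain string commands, `SET` with `SetArgs` covers the common "set if absent, with expiry" pattern, and the counter commands live in the same interface. A hedged sketch under the assumption of an existing connection; note that when the NX condition is not met the reply may be empty rather than `OK`.

```java
import com.lambdaworks.redis.SetArgs;
import com.lambdaworks.redis.api.rx.RedisStringReactiveCommands;

class StringSketch {

    // Cache a value only if it is absent, with a 60 second expiry, then read it back.
    static void cache(RedisStringReactiveCommands<String, String> redis, String key, String value) {
        redis.set(key, value, SetArgs.Builder.nx().ex(60))
                .subscribe(status -> System.out.println("SET reply: " + status));

        redis.get(key).subscribe(current -> System.out.println("cached value: " + current));

        // INCR-style counters use the same interface.
        redis.incr("counter:" + key).subscribe(count -> System.out.println("reads so far: " + count));
    }
}
```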
- * - * @param key the key - * @param seconds the seconds type: long - * @param value the value - * @return String simple-string-reply - */ - Observable setex(K key, long seconds, V value); - - /** - * Set the value and expiration in milliseconds of a key. - * - * @param key the key - * @param milliseconds the milliseconds type: long - * @param value the value - * @return String simple-string-reply - */ - Observable psetex(K key, long milliseconds, V value); - - /** - * Set the value of a key, only if the key does not exist. - * - * @param key the key - * @param value the value - * @return Boolean integer-reply specifically: - * - * {@code 1} if the key was set {@code 0} if the key was not set - */ - Observable setnx(K key, V value); - - /** - * Overwrite part of a string at key starting at the specified offset. - * - * @param key the key - * @param offset the offset type: long - * @param value the value - * @return Long integer-reply the length of the string after it was modified by the command. - */ - Observable setrange(K key, long offset, V value); - - /** - * Get the length of the value stored in a key. - * - * @param key the key - * @return Long integer-reply the length of the string at {@code key}, or {@code 0} when {@code key} does not exist. - */ - Observable strlen(K key); -} diff --git a/src/main/java/com/lambdaworks/redis/api/rx/RedisTransactionalReactiveCommands.java b/src/main/java/com/lambdaworks/redis/api/rx/RedisTransactionalReactiveCommands.java deleted file mode 100644 index 3cc1f8e776..0000000000 --- a/src/main/java/com/lambdaworks/redis/api/rx/RedisTransactionalReactiveCommands.java +++ /dev/null @@ -1,54 +0,0 @@ -package com.lambdaworks.redis.api.rx; - -import java.util.List; -import rx.Observable; - -/** - * Observable commands for Transactions. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateReactiveApi - */ -public interface RedisTransactionalReactiveCommands { - - /** - * Discard all commands issued after MULTI. - * - * @return String simple-string-reply always {@code OK}. - */ - Observable discard(); - - /** - * Execute all commands issued after MULTI. - * - * @return Object array-reply each element being the reply to each of the commands in the atomic transaction. - * - * When using {@code WATCH}, {@code EXEC} can return a - */ - Observable exec(); - - /** - * Mark the start of a transaction block. - * - * @return String simple-string-reply always {@code OK}. - */ - Observable multi(); - - /** - * Watch the given keys to determine execution of the MULTI/EXEC block. - * - * @param keys the key - * @return String simple-string-reply always {@code OK}. - */ - Observable watch(K... keys); - - /** - * Forget about all watched keys. - * - * @return String simple-string-reply always {@code OK}. - */ - Observable unwatch(); -} diff --git a/src/main/java/com/lambdaworks/redis/api/rx/Success.java b/src/main/java/com/lambdaworks/redis/api/rx/Success.java deleted file mode 100644 index 37e1c6906f..0000000000 --- a/src/main/java/com/lambdaworks/redis/api/rx/Success.java +++ /dev/null @@ -1,10 +0,0 @@ -package com.lambdaworks.redis.api.rx; - -/** - * An enum representing a successful operation. 
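The transactional interface above only marks the MULTI/EXEC boundaries; the queued commands come from the other command interfaces. The sketch below is a rough illustration, assuming the combined `RedisReactiveCommands` view of a single connection; because reactive commands are only dispatched on subscription, each queued command needs a subscriber, and the exact transaction semantics should be verified against the Lettuce version in use.

```java
import java.util.List;

import com.lambdaworks.redis.api.rx.RedisReactiveCommands;

class TransactionSketch {

    // MULTI ... EXEC: queue two writes and read the per-command replies from EXEC.
    static void transfer(RedisReactiveCommands<String, String> redis) {
        redis.multi().toBlocking().single();

        // Inside MULTI each command is queued; subscribing dispatches it to the server.
        redis.set("balance:a", "90").subscribe();
        redis.set("balance:b", "110").subscribe();

        // EXEC emits one reply per queued command.
        List<Object> replies = redis.exec().toList().toBlocking().single();
        System.out.println("transaction replies: " + replies);
    }
}
```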
- * @author Mark Paluch - * @since 4.0 - */ -public enum Success { - Success; -} diff --git a/src/main/java/com/lambdaworks/redis/api/rx/package-info.java b/src/main/java/com/lambdaworks/redis/api/rx/package-info.java deleted file mode 100644 index 36be73a34f..0000000000 --- a/src/main/java/com/lambdaworks/redis/api/rx/package-info.java +++ /dev/null @@ -1,4 +0,0 @@ -/** - * Standalone Redis API for reactive commands. - */ -package com.lambdaworks.redis.api.rx; \ No newline at end of file diff --git a/src/main/java/com/lambdaworks/redis/api/sync/BaseRedisCommands.java b/src/main/java/com/lambdaworks/redis/api/sync/BaseRedisCommands.java deleted file mode 100644 index a88c2b84dc..0000000000 --- a/src/main/java/com/lambdaworks/redis/api/sync/BaseRedisCommands.java +++ /dev/null @@ -1,150 +0,0 @@ -package com.lambdaworks.redis.api.sync; - -import java.util.List; -import java.util.Map; -import com.lambdaworks.redis.protocol.CommandArgs; -import com.lambdaworks.redis.protocol.ProtocolKeyword; -import com.lambdaworks.redis.output.CommandOutput; - -/** - * - * Synchronous executed commands for basic commands. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateSyncApi - */ -public interface BaseRedisCommands extends AutoCloseable { - - /** - * Post a message to a channel. - * - * @param channel the channel type: key - * @param message the message type: value - * @return Long integer-reply the number of clients that received the message. - */ - Long publish(K channel, V message); - - /** - * Lists the currently *active channels*. - * - * @return List<K> array-reply a list of active channels, optionally matching the specified pattern. - */ - List pubsubChannels(); - - /** - * Lists the currently *active channels*. - * - * @param channel the key - * @return List<K> array-reply a list of active channels, optionally matching the specified pattern. - */ - List pubsubChannels(K channel); - - /** - * Returns the number of subscribers (not counting clients subscribed to patterns) for the specified channels. - * - * @param channels channel keys - * @return array-reply a list of channels and number of subscribers for every channel. - */ - Map pubsubNumsub(K... channels); - - /** - * Returns the number of subscriptions to patterns. - * - * @return Long integer-reply the number of patterns all the clients are subscribed to. - */ - Long pubsubNumpat(); - - /** - * Echo the given string. - * - * @param msg the message type: value - * @return V bulk-string-reply - */ - V echo(V msg); - - /** - * Return the role of the instance in the context of replication. - * - * @return List<Object> array-reply where the first element is one of master, slave, sentinel and the additional - * elements are role-specific. - */ - List role(); - - /** - * Ping the server. - * - * @return String simple-string-reply - */ - String ping(); - - /** - * Switch connection to Read-Only mode when connecting to a cluster. - * - * @return String simple-string-reply. - */ - String readOnly(); - - /** - * Switch connection to Read-Write mode (default) when connecting to a cluster. - * - * @return String simple-string-reply. - */ - String readWrite(); - - /** - * Close the connection. - * - * @return String simple-string-reply always OK. - */ - String quit(); - - /** - * Wait for replication. 
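In contrast to the reactive interfaces above, the synchronous `BaseRedisCommands` removed here returns plain values and blocks until the reply arrives. A small hedged sketch; channel and message names are illustrative.

```java
import com.lambdaworks.redis.api.sync.BaseRedisCommands;

class SyncBaseSketch {

    // The synchronous API blocks on each call and returns the decoded reply directly.
    static void pingAndPublish(BaseRedisCommands<String, String> redis) {
        System.out.println("PING -> " + redis.ping());

        Long receivers = redis.publish("news", "hello subscribers");
        System.out.println("message delivered to " + receivers + " subscribers");

        System.out.println("active channels: " + redis.pubsubChannels());
    }
}
```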
- * - * @param replicas minimum number of replicas - * @param timeout timeout in milliseconds - * @return number of replicas - */ - Long waitForReplication(int replicas, long timeout); - - /** - * Dispatch a command to the Redis Server. Please note the command output type must fit to the command response. - * - * @param type the command, must not be {@literal null}. - * @param output the command output, must not be {@literal null}. - * @param response type - * @return the command response - */ - T dispatch(ProtocolKeyword type, CommandOutput output); - - /** - * Dispatch a command to the Redis Server. Please note the command output type must fit to the command response. - * - * @param type the command, must not be {@literal null}. - * @param output the command output, must not be {@literal null}. - * @param args the command arguments, must not be {@literal null}. - * @param response type - * @return the command response - */ - T dispatch(ProtocolKeyword type, CommandOutput output, CommandArgs args); - - /** - * Close the connection. The connection will become not usable anymore as soon as this method was called. - */ - void close(); - - /** - * - * @return true if the connection is open (connected and not closed). - */ - boolean isOpen(); - - /** - * Reset the command state. Queued commands will be canceled and the internal state will be reset. This is useful when the - * internal state machine gets out of sync with the connection. - */ - void reset(); -} diff --git a/src/main/java/com/lambdaworks/redis/api/sync/RedisCommands.java b/src/main/java/com/lambdaworks/redis/api/sync/RedisCommands.java deleted file mode 100644 index 00e40e5125..0000000000 --- a/src/main/java/com/lambdaworks/redis/api/sync/RedisCommands.java +++ /dev/null @@ -1,51 +0,0 @@ -package com.lambdaworks.redis.api.sync; - -import java.util.concurrent.TimeUnit; - -import com.lambdaworks.redis.RedisConnection; -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.cluster.api.sync.RedisClusterCommands; - -/** - * - * A complete synchronous and thread-safe Redis API with 400+ Methods. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 3.0 - */ -public interface RedisCommands extends RedisHashCommands, RedisKeyCommands, RedisStringCommands, - RedisListCommands, RedisSetCommands, RedisSortedSetCommands, RedisScriptingCommands, - RedisServerCommands, RedisHLLCommands, BaseRedisCommands, RedisClusterCommands, - RedisTransactionalCommands, RedisGeoCommands, RedisConnection { - - /** - * Set the default timeout for operations. - * - * @param timeout the timeout value - * @param unit the unit of the timeout value - */ - void setTimeout(long timeout, TimeUnit unit); - - /** - * Authenticate to the server. - * - * @param password the password - * @return String simple-string-reply - */ - String auth(String password); - - /** - * Change the selected database for the current Commands. - * - * @param db the database number - * @return String simple-string-reply - */ - String select(int db); - - /** - * @return the underlying connection. 
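The `dispatch()` methods above let a caller send an arbitrary command with an explicit output type, and the surrounding `RedisCommands` interface adds per-connection settings such as the timeout and the selected database. Below is a sketch under stated assumptions: I believe `CommandType.PING`, `StatusOutput` and `Utf8StringCodec` are the classes that pair with this signature in the `com.lambdaworks.redis` 4.x tree, but treat the exact combination as illustrative rather than authoritative.

```java
import java.util.concurrent.TimeUnit;

import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.api.StatefulRedisConnection;
import com.lambdaworks.redis.api.sync.RedisCommands;
import com.lambdaworks.redis.codec.Utf8StringCodec;
import com.lambdaworks.redis.output.StatusOutput;
import com.lambdaworks.redis.protocol.CommandType;

public class ConnectionAndDispatchExample {

    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        StatefulRedisConnection<String, String> connection = client.connect();
        RedisCommands<String, String> redis = connection.sync();

        // Per-connection settings: command timeout and logical database.
        redis.setTimeout(10, TimeUnit.SECONDS);
        redis.select(1);

        // dispatch() sends a command manually; the output type has to match the reply.
        // Here PING is dispatched with a status (simple-string) output.
        String pong = redis.dispatch(CommandType.PING, new StatusOutput<>(new Utf8StringCodec()));
        System.out.println(pong);

        // The synchronous facade always gives access back to its stateful connection.
        StatefulRedisConnection<String, String> stateful = redis.getStatefulConnection();
        System.out.println("open: " + stateful.isOpen());

        connection.close();
        client.shutdown();
    }
}
```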
- */ - StatefulRedisConnection getStatefulConnection(); -} diff --git a/src/main/java/com/lambdaworks/redis/api/sync/RedisGeoCommands.java b/src/main/java/com/lambdaworks/redis/api/sync/RedisGeoCommands.java deleted file mode 100644 index b7c5ba2100..0000000000 --- a/src/main/java/com/lambdaworks/redis/api/sync/RedisGeoCommands.java +++ /dev/null @@ -1,151 +0,0 @@ -package com.lambdaworks.redis.api.sync; - -import com.lambdaworks.redis.GeoArgs; -import com.lambdaworks.redis.GeoCoordinates; -import com.lambdaworks.redis.GeoRadiusStoreArgs; -import com.lambdaworks.redis.GeoWithin; -import java.util.List; -import java.util.Set; - -/** - * Synchronous executed commands for the Geo-API. - * - * @author Mark Paluch - * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateSyncApi - */ -public interface RedisGeoCommands { - - /** - * Single geo add. - * - * @param key the key of the geo set - * @param longitude the longitude coordinate according to WGS84 - * @param latitude the latitude coordinate according to WGS84 - * @param member the member to add - * @return Long integer-reply the number of elements that were added to the set - */ - Long geoadd(K key, double longitude, double latitude, V member); - - /** - * Multi geo add. - * - * @param key the key of the geo set - * @param lngLatMember triplets of double longitude, double latitude and V member - * @return Long integer-reply the number of elements that were added to the set - */ - Long geoadd(K key, Object... lngLatMember); - - /** - * Retrieve Geohash strings representing the position of one or more elements in a sorted set value representing a geospatial index. - * - * @param key the key of the geo set - * @param members the members - * @return bulk reply Geohash strings in the order of {@code members}. Returns {@literal null} if a member is not found. - */ - List geohash(K key, V... members); - - /** - * Retrieve members selected by distance with the center of {@code longitude} and {@code latitude}. - * - * @param key the key of the geo set - * @param longitude the longitude coordinate according to WGS84 - * @param latitude the latitude coordinate according to WGS84 - * @param distance radius distance - * @param unit distance unit - * @return bulk reply - */ - Set georadius(K key, double longitude, double latitude, double distance, GeoArgs.Unit unit); - - /** - * Retrieve members selected by distance with the center of {@code longitude} and {@code latitude}. - * - * @param key the key of the geo set - * @param longitude the longitude coordinate according to WGS84 - * @param latitude the latitude coordinate according to WGS84 - * @param distance radius distance - * @param unit distance unit - * @param geoArgs args to control the result - * @return nested multi-bulk reply. The {@link GeoWithin} contains only fields which were requested by {@link GeoArgs} - */ - List> georadius(K key, double longitude, double latitude, double distance, GeoArgs.Unit unit, GeoArgs geoArgs); - - /** - * Perform a {@link #georadius(Object, double, double, double, Unit, GeoArgs)} query and store the results in a sorted set. - * - * @param key the key of the geo set - * @param longitude the longitude coordinate according to WGS84 - * @param latitude the latitude coordinate according to WGS84 - * @param distance radius distance - * @param unit distance unit - * @param geoRadiusStoreArgs args to store either the resulting elements with their distance or the resulting elements with - * their locations a sorted set. 
- * @return Long integer-reply the number of elements in the result - */ - Long georadius(K key, double longitude, double latitude, double distance, GeoArgs.Unit unit, GeoRadiusStoreArgs geoRadiusStoreArgs); - - /** - * Retrieve members selected by distance with the center of {@code member}. The member itself is always contained in the - * results. - * - * @param key the key of the geo set - * @param member reference member - * @param distance radius distance - * @param unit distance unit - * @return set of members - */ - Set georadiusbymember(K key, V member, double distance, GeoArgs.Unit unit); - - /** - * - * Retrieve members selected by distance with the center of {@code member}. The member itself is always contained in the - * results. - * - * @param key the key of the geo set - * @param member reference member - * @param distance radius distance - * @param unit distance unit - * @param geoArgs args to control the result - * @return nested multi-bulk reply. The {@link GeoWithin} contains only fields which were requested by {@link GeoArgs} - */ - List> georadiusbymember(K key, V member, double distance, GeoArgs.Unit unit, GeoArgs geoArgs); - - /** - * Perform a {@link #georadiusbymember(Object, Object, double, Unit, GeoArgs)} query and store the results in a sorted set. - * - * @param key the key of the geo set - * @param member reference member - * @param distance radius distance - * @param unit distance unit - * @param geoRadiusStoreArgs args to store either the resulting elements with their distance or the resulting elements with - * their locations a sorted set. - * @return Long integer-reply the number of elements in the result - */ - Long georadiusbymember(K key, V member, double distance, GeoArgs.Unit unit, GeoRadiusStoreArgs geoRadiusStoreArgs); - - /** - * Get geo coordinates for the {@code members}. - * - * @param key the key of the geo set - * @param members the members - * - * @return a list of {@link GeoCoordinates}s representing the x,y position of each element specified in the arguments. For - * missing elements {@literal null} is returned. - */ - List geopos(K key, V... members); - - /** - * - * Retrieve distance between points {@code from} and {@code to}. If one or more elements are missing {@literal null} is - * returned. Default in meters by, otherwise according to {@code unit} - * - * @param key the key of the geo set - * @param from from member - * @param to to member - * @param unit distance unit - * - * @return distance between points {@code from} and {@code to}. If one or more elements are missing {@literal null} is - * returned. - */ - Double geodist(K key, V from, V to, GeoArgs.Unit unit); -} diff --git a/src/main/java/com/lambdaworks/redis/api/sync/RedisHLLCommands.java b/src/main/java/com/lambdaworks/redis/api/sync/RedisHLLCommands.java deleted file mode 100644 index 3151281632..0000000000 --- a/src/main/java/com/lambdaworks/redis/api/sync/RedisHLLCommands.java +++ /dev/null @@ -1,47 +0,0 @@ -package com.lambdaworks.redis.api.sync; - - -/** - * Synchronous executed commands for HyperLogLog (PF* commands). - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 3.0 - * @generated by com.lambdaworks.apigenerator.CreateSyncApi - */ -public interface RedisHLLCommands { - - /** - * Adds the specified elements to the specified HyperLogLog. - * - * @param key the key - * @param values the values - * - * @return Long integer-reply specifically: - * - * 1 if at least 1 HyperLogLog internal register was altered. 0 otherwise. 
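A minimal sketch of the Geo commands deleted in this hunk (GEOADD, GEORADIUS, GEODIST). The longitude-before-latitude ordering comes straight from the Javadoc above; the key, member names and coordinates are invented, and the connection bootstrap is assumed.

```java
import java.util.Set;

import com.lambdaworks.redis.GeoArgs;
import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.api.sync.RedisCommands;

public class GeoExample {

    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        RedisCommands<String, String> redis = client.connect().sync();

        // GEOADD: longitude first, then latitude (WGS84), then the member name.
        redis.geoadd("offices", 8.6638775, 49.5282537, "Weinheim");
        redis.geoadd("offices", 8.3796281, 48.9978127, "Office tower");

        // GEORADIUS: members within 50 km of the given point.
        Set<String> nearby = redis.georadius("offices", 8.6582861, 49.5285695, 50, GeoArgs.Unit.km);
        System.out.println(nearby);

        // GEODIST between two members, measured in kilometres.
        Double distance = redis.geodist("offices", "Weinheim", "Office tower", GeoArgs.Unit.km);
        System.out.println(distance + " km");

        client.shutdown();
    }
}
```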
- */ - Long pfadd(K key, V... values); - - /** - * Merge N different HyperLogLogs into a single one. - * - * @param destkey the destination key - * @param sourcekeys the source key - * - * @return String simple-string-reply The command just returns {@code OK}. - */ - String pfmerge(K destkey, K... sourcekeys); - - /** - * Return the approximated cardinality of the set(s) observed by the HyperLogLog at key(s). - * - * @param keys the keys - * - * @return Long integer-reply specifically: - * - * The approximated number of unique elements observed via {@code PFADD}. - */ - Long pfcount(K... keys); -} diff --git a/src/main/java/com/lambdaworks/redis/api/sync/RedisHashCommands.java b/src/main/java/com/lambdaworks/redis/api/sync/RedisHashCommands.java deleted file mode 100644 index 9fdc19c7fe..0000000000 --- a/src/main/java/com/lambdaworks/redis/api/sync/RedisHashCommands.java +++ /dev/null @@ -1,279 +0,0 @@ -package com.lambdaworks.redis.api.sync; - -import java.util.List; -import java.util.Map; -import com.lambdaworks.redis.MapScanCursor; -import com.lambdaworks.redis.ScanArgs; -import com.lambdaworks.redis.ScanCursor; -import com.lambdaworks.redis.StreamScanCursor; -import com.lambdaworks.redis.output.KeyStreamingChannel; -import com.lambdaworks.redis.output.KeyValueStreamingChannel; -import com.lambdaworks.redis.output.ValueStreamingChannel; - -/** - * Synchronous executed commands for Hashes (Key-Value pairs). - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateSyncApi - */ -public interface RedisHashCommands { - - /** - * Delete one or more hash fields. - * - * @param key the key - * @param fields the field type: key - * @return Long integer-reply the number of fields that were removed from the hash, not including specified but non existing - * fields. - */ - Long hdel(K key, K... fields); - - /** - * Determine if a hash field exists. - * - * @param key the key - * @param field the field type: key - * @return Boolean integer-reply specifically: - * - * {@literal true} if the hash contains {@code field}. {@literal false} if the hash does not contain {@code field}, - * or {@code key} does not exist. - */ - Boolean hexists(K key, K field); - - /** - * Get the value of a hash field. - * - * @param key the key - * @param field the field type: key - * @return V bulk-string-reply the value associated with {@code field}, or {@literal null} when {@code field} is not present - * in the hash or {@code key} does not exist. - */ - V hget(K key, K field); - - /** - * Increment the integer value of a hash field by the given number. - * - * @param key the key - * @param field the field type: key - * @param amount the increment type: long - * @return Long integer-reply the value at {@code field} after the increment operation. - */ - Long hincrby(K key, K field, long amount); - - /** - * Increment the float value of a hash field by the given amount. - * - * @param key the key - * @param field the field type: key - * @param amount the increment type: double - * @return Double bulk-string-reply the value of {@code field} after the increment. - */ - Double hincrbyfloat(K key, K field, double amount); - - /** - * Get all the fields and values in a hash. - * - * @param key the key - * @return Map<K,V> array-reply list of fields and their values stored in the hash, or an empty list when {@code key} - * does not exist. - */ - Map hgetall(K key); - - /** - * Stream over all the fields and values in a hash. 
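A short sketch of the HyperLogLog commands above (PFADD, PFCOUNT, PFMERGE), assuming a local Redis and made-up key names. The point of the structure is approximate distinct counting with constant memory, so the counts printed here are estimates.

```java
import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.api.sync.RedisCommands;

public class HyperLogLogExample {

    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        RedisCommands<String, String> redis = client.connect().sync();

        // PFADD registers observed elements; duplicates do not grow the structure.
        redis.pfadd("visitors:2016-01-01", "user-1", "user-2", "user-3");
        redis.pfadd("visitors:2016-01-02", "user-2", "user-4");

        // PFCOUNT returns the approximated cardinality of one or several HyperLogLogs.
        Long perDay = redis.pfcount("visitors:2016-01-01");
        Long combined = redis.pfcount("visitors:2016-01-01", "visitors:2016-01-02");

        // PFMERGE folds several HyperLogLogs into a destination key.
        redis.pfmerge("visitors:total", "visitors:2016-01-01", "visitors:2016-01-02");

        System.out.println(perDay + " / " + combined + " / " + redis.pfcount("visitors:total"));
        client.shutdown();
    }
}
```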
- * - * @param channel the channel - * @param key the key - * - * @return Long count of the keys. - */ - Long hgetall(KeyValueStreamingChannel channel, K key); - - /** - * Get all the fields in a hash. - * - * @param key the key - * @return List<K> array-reply list of fields in the hash, or an empty list when {@code key} does not exist. - */ - List hkeys(K key); - - /** - * Stream over all the fields in a hash. - * - * @param channel the channel - * @param key the key - * - * @return Long count of the keys. - */ - Long hkeys(KeyStreamingChannel channel, K key); - - /** - * Get the number of fields in a hash. - * - * @param key the key - * @return Long integer-reply number of fields in the hash, or {@code 0} when {@code key} does not exist. - */ - Long hlen(K key); - - /** - * Get the values of all the given hash fields. - * - * @param key the key - * @param fields the field type: key - * @return List<V> array-reply list of values associated with the given fields, in the same - */ - List hmget(K key, K... fields); - - /** - * Stream over the values of all the given hash fields. - * - * @param channel the channel - * @param key the key - * @param fields the fields - * - * @return Long count of the keys - */ - Long hmget(ValueStreamingChannel channel, K key, K... fields); - - /** - * Set multiple hash fields to multiple values. - * - * @param key the key - * @param map the null - * @return String simple-string-reply - */ - String hmset(K key, Map map); - - /** - * Incrementally iterate hash fields and associated values. - * - * @param key the key - * @return MapScanCursor<K, V> map scan cursor. - */ - MapScanCursor hscan(K key); - - /** - * Incrementally iterate hash fields and associated values. - * - * @param key the key - * @param scanArgs scan arguments - * @return MapScanCursor<K, V> map scan cursor. - */ - MapScanCursor hscan(K key, ScanArgs scanArgs); - - /** - * Incrementally iterate hash fields and associated values. - * - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @param scanArgs scan arguments - * @return MapScanCursor<K, V> map scan cursor. - */ - MapScanCursor hscan(K key, ScanCursor scanCursor, ScanArgs scanArgs); - - /** - * Incrementally iterate hash fields and associated values. - * - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @return MapScanCursor<K, V> map scan cursor. - */ - MapScanCursor hscan(K key, ScanCursor scanCursor); - - /** - * Incrementally iterate hash fields and associated values. - * - * @param channel streaming channel that receives a call for every key-value pair - * @param key the key - * @return StreamScanCursor scan cursor. - */ - StreamScanCursor hscan(KeyValueStreamingChannel channel, K key); - - /** - * Incrementally iterate hash fields and associated values. - * - * @param channel streaming channel that receives a call for every key-value pair - * @param key the key - * @param scanArgs scan arguments - * @return StreamScanCursor scan cursor. - */ - StreamScanCursor hscan(KeyValueStreamingChannel channel, K key, ScanArgs scanArgs); - - /** - * Incrementally iterate hash fields and associated values. - * - * @param channel streaming channel that receives a call for every key-value pair - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @param scanArgs scan arguments - * @return StreamScanCursor scan cursor. 
- */ - StreamScanCursor hscan(KeyValueStreamingChannel channel, K key, ScanCursor scanCursor, ScanArgs scanArgs); - - /** - * Incrementally iterate hash fields and associated values. - * - * @param channel streaming channel that receives a call for every key-value pair - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @return StreamScanCursor scan cursor. - */ - StreamScanCursor hscan(KeyValueStreamingChannel channel, K key, ScanCursor scanCursor); - - /** - * Set the string value of a hash field. - * - * @param key the key - * @param field the field type: key - * @param value the value - * @return Boolean integer-reply specifically: - * - * {@literal true} if {@code field} is a new field in the hash and {@code value} was set. {@literal false} if - * {@code field} already exists in the hash and the value was updated. - */ - Boolean hset(K key, K field, V value); - - /** - * Set the value of a hash field, only if the field does not exist. - * - * @param key the key - * @param field the field type: key - * @param value the value - * @return Boolean integer-reply specifically: - * - * {@code 1} if {@code field} is a new field in the hash and {@code value} was set. {@code 0} if {@code field} - * already exists in the hash and no operation was performed. - */ - Boolean hsetnx(K key, K field, V value); - - /** - * Get the string length of the field value in a hash. - * - * @param key the key - * @param field the field type: key - * @return Long integer-reply the string length of the {@code field} value, or {@code 0} when {@code field} is not present - * in the hash or {@code key} does not exist at all. - */ - Long hstrlen(K key, K field); - - /** - * Get all the values in a hash. - * - * @param key the key - * @return List<V> array-reply list of values in the hash, or an empty list when {@code key} does not exist. - */ - List hvals(K key); - - /** - * Stream over all the values in a hash. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * - * @return Long count of the keys. - */ - Long hvals(ValueStreamingChannel channel, K key); -} diff --git a/src/main/java/com/lambdaworks/redis/api/sync/RedisKeyCommands.java b/src/main/java/com/lambdaworks/redis/api/sync/RedisKeyCommands.java deleted file mode 100644 index e4c2b825dc..0000000000 --- a/src/main/java/com/lambdaworks/redis/api/sync/RedisKeyCommands.java +++ /dev/null @@ -1,398 +0,0 @@ -package com.lambdaworks.redis.api.sync; - -import java.util.Date; -import java.util.List; -import com.lambdaworks.redis.KeyScanCursor; -import com.lambdaworks.redis.MigrateArgs; -import com.lambdaworks.redis.ScanArgs; -import com.lambdaworks.redis.ScanCursor; -import com.lambdaworks.redis.SortArgs; -import com.lambdaworks.redis.StreamScanCursor; -import com.lambdaworks.redis.output.KeyStreamingChannel; -import com.lambdaworks.redis.output.ValueStreamingChannel; - -/** - * Synchronous executed commands for Keys (Key manipulation/querying). - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateSyncApi - */ -public interface RedisKeyCommands { - - /** - * Delete one or more keys. - * - * @param keys the keys - * @return Long integer-reply The number of keys that were removed. - */ - Long del(K... keys); - - /** - * Unlink one or more keys (non blocking DEL). - * - * @param keys the keys - * @return Long integer-reply The number of keys that were removed. 
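To make the hash interface above concrete, here is a sketch combining field-level writes (HSET, HSETNX, HGETALL) with cursor-based iteration via HSCAN. The `getMap()` and `isFinished()` accessors are what I understand the cursor classes referenced above to expose; the key and field names are invented.

```java
import java.util.Map;

import com.lambdaworks.redis.MapScanCursor;
import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.api.sync.RedisCommands;

public class HashExample {

    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        RedisCommands<String, String> redis = client.connect().sync();

        // HSET/HSETNX write individual fields; HGETALL returns the whole hash.
        redis.hset("user:1", "name", "Ada");
        redis.hsetnx("user:1", "lang", "en");          // only written because the field is new
        Map<String, String> user = redis.hgetall("user:1");
        System.out.println(user);

        // HSCAN walks the hash one page at a time, useful for very large hashes.
        MapScanCursor<String, String> cursor = redis.hscan("user:1");
        while (true) {
            System.out.println(cursor.getMap());       // the fields returned by this page
            if (cursor.isFinished()) {
                break;
            }
            cursor = redis.hscan("user:1", cursor);    // resume from the previous cursor
        }

        client.shutdown();
    }
}
```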
- */ - Long unlink(K... keys); - - /** - * Return a serialized version of the value stored at the specified key. - * - * @param key the key - * @return byte[] bulk-string-reply the serialized value. - */ - byte[] dump(K key); - - /** - * Determine how many keys exist. - * - * @param keys the keys - * @return Long integer-reply specifically: Number of existing keys - */ - Long exists(K... keys); - - /** - * Set a key's time to live in seconds. - * - * @param key the key - * @param seconds the seconds type: long - * @return Boolean integer-reply specifically: - * - * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not - * be set. - */ - Boolean expire(K key, long seconds); - - /** - * Set the expiration for a key as a UNIX timestamp. - * - * @param key the key - * @param timestamp the timestamp type: posix time - * @return Boolean integer-reply specifically: - * - * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not - * be set (see: {@code EXPIRE}). - */ - Boolean expireat(K key, Date timestamp); - - /** - * Set the expiration for a key as a UNIX timestamp. - * - * @param key the key - * @param timestamp the timestamp type: posix time - * @return Boolean integer-reply specifically: - * - * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not - * be set (see: {@code EXPIRE}). - */ - Boolean expireat(K key, long timestamp); - - /** - * Find all keys matching the given pattern. - * - * @param pattern the pattern type: patternkey (pattern) - * @return List<K> array-reply list of keys matching {@code pattern}. - */ - List keys(K pattern); - - /** - * Find all keys matching the given pattern. - * - * @param channel the channel - * @param pattern the pattern - * @return Long array-reply list of keys matching {@code pattern}. - */ - Long keys(KeyStreamingChannel channel, K pattern); - - /** - * Atomically transfer a key from a Redis instance to another one. - * - * @param host the host - * @param port the port - * @param key the key - * @param db the database - * @param timeout the timeout in milliseconds - * @return String simple-string-reply The command returns OK on success. - */ - String migrate(String host, int port, K key, int db, long timeout); - - /** - * Atomically transfer one or more keys from a Redis instance to another one. - * - * @param host the host - * @param port the port - * @param db the database - * @param timeout the timeout in milliseconds - * @param migrateArgs migrate args that allow to configure further options - * @return String simple-string-reply The command returns OK on success. - */ - String migrate(String host, int port, int db, long timeout, MigrateArgs migrateArgs); - - /** - * Move a key to another database. - * - * @param key the key - * @param db the db type: long - * @return Boolean integer-reply specifically: - */ - Boolean move(K key, int db); - - /** - * returns the kind of internal representation used in order to store the value associated with a key. - * - * @param key the key - * @return String - */ - String objectEncoding(K key); - - /** - * returns the number of seconds since the object stored at the specified key is idle (not requested by read or write - * operations). - * - * @param key the key - * @return number of seconds since the object stored at the specified key is idle. 
- */ - Long objectIdletime(K key); - - /** - * returns the number of references of the value associated with the specified key. - * - * @param key the key - * @return Long - */ - Long objectRefcount(K key); - - /** - * Remove the expiration from a key. - * - * @param key the key - * @return Boolean integer-reply specifically: - * - * {@literal true} if the timeout was removed. {@literal false} if {@code key} does not exist or does not have an - * associated timeout. - */ - Boolean persist(K key); - - /** - * Set a key's time to live in milliseconds. - * - * @param key the key - * @param milliseconds the milliseconds type: long - * @return integer-reply, specifically: - * - * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not - * be set. - */ - Boolean pexpire(K key, long milliseconds); - - /** - * Set the expiration for a key as a UNIX timestamp specified in milliseconds. - * - * @param key the key - * @param timestamp the milliseconds-timestamp type: posix time - * @return Boolean integer-reply specifically: - * - * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not - * be set (see: {@code EXPIRE}). - */ - Boolean pexpireat(K key, Date timestamp); - - /** - * Set the expiration for a key as a UNIX timestamp specified in milliseconds. - * - * @param key the key - * @param timestamp the milliseconds-timestamp type: posix time - * @return Boolean integer-reply specifically: - * - * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not - * be set (see: {@code EXPIRE}). - */ - Boolean pexpireat(K key, long timestamp); - - /** - * Get the time to live for a key in milliseconds. - * - * @param key the key - * @return Long integer-reply TTL in milliseconds, or a negative value in order to signal an error (see the description - * above). - */ - Long pttl(K key); - - /** - * Return a random key from the keyspace. - * - * @return V bulk-string-reply the random key, or {@literal null} when the database is empty. - */ - V randomkey(); - - /** - * Rename a key. - * - * @param key the key - * @param newKey the newkey type: key - * @return String simple-string-reply - */ - String rename(K key, K newKey); - - /** - * Rename a key, only if the new key does not exist. - * - * @param key the key - * @param newKey the newkey type: key - * @return Boolean integer-reply specifically: - * - * {@literal true} if {@code key} was renamed to {@code newkey}. {@literal false} if {@code newkey} already exists. - */ - Boolean renamenx(K key, K newKey); - - /** - * Create a key using the provided serialized value, previously obtained using DUMP. - * - * @param key the key - * @param ttl the ttl type: long - * @param value the serialized-value type: string - * @return String simple-string-reply The command returns OK on success. - */ - String restore(K key, long ttl, byte[] value); - - /** - * Sort the elements in a list, set or sorted set. - * - * @param key the key - * @return List<V> array-reply list of sorted elements. - */ - List sort(K key); - - /** - * Sort the elements in a list, set or sorted set. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @return Long number of values. - */ - Long sort(ValueStreamingChannel channel, K key); - - /** - * Sort the elements in a list, set or sorted set. 
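A brief sketch of the expiry-related key commands documented above (EXPIRE, PEXPIRE, PTTL, PERSIST, EXISTS). The key name and TTL values are invented; the bootstrap is assumed as in the earlier sketches.

```java
import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.api.sync.RedisCommands;

public class KeyExpiryExample {

    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        RedisCommands<String, String> redis = client.connect().sync();

        redis.set("token", "abc");

        // EXPIRE sets a TTL in seconds, PEXPIRE in milliseconds; PTTL reads it back in milliseconds.
        redis.expire("token", 60);
        System.out.println("ttl (ms): " + redis.pttl("token"));

        // PERSIST removes the TTL again; EXISTS counts how many of the given keys exist.
        redis.persist("token");
        System.out.println("exists: " + redis.exists("token"));
        System.out.println("ttl (ms) after persist: " + redis.pttl("token"));   // -1: no expiry

        client.shutdown();
    }
}
```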
- * - * @param key the key - * @param sortArgs sort arguments - * @return List<V> array-reply list of sorted elements. - */ - List sort(K key, SortArgs sortArgs); - - /** - * Sort the elements in a list, set or sorted set. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param sortArgs sort arguments - * @return Long number of values. - */ - Long sort(ValueStreamingChannel channel, K key, SortArgs sortArgs); - - /** - * Sort the elements in a list, set or sorted set. - * - * @param key the key - * @param sortArgs sort arguments - * @param destination the destination key to store sort results - * @return Long number of values. - */ - Long sortStore(K key, SortArgs sortArgs, K destination); - - /** - * Touch one or more keys. Touch sets the last accessed time for a key. Non-exsitent keys wont get created. - * - * @param keys the keys - * @return Long integer-reply the number of found keys. - */ - Long touch(K... keys); - - /** - * Get the time to live for a key. - * - * @param key the key - * @return Long integer-reply TTL in seconds, or a negative value in order to signal an error (see the description above). - */ - Long ttl(K key); - - /** - * Determine the type stored at key. - * - * @param key the key - * @return String simple-string-reply type of {@code key}, or {@code none} when {@code key} does not exist. - */ - String type(K key); - - /** - * Incrementally iterate the keys space. - * - * @return KeyScanCursor<K> scan cursor. - */ - KeyScanCursor scan(); - - /** - * Incrementally iterate the keys space. - * - * @param scanArgs scan arguments - * @return KeyScanCursor<K> scan cursor. - */ - KeyScanCursor scan(ScanArgs scanArgs); - - /** - * Incrementally iterate the keys space. - * - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @param scanArgs scan arguments - * @return KeyScanCursor<K> scan cursor. - */ - KeyScanCursor scan(ScanCursor scanCursor, ScanArgs scanArgs); - - /** - * Incrementally iterate the keys space. - * - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @return KeyScanCursor<K> scan cursor. - */ - KeyScanCursor scan(ScanCursor scanCursor); - - /** - * Incrementally iterate the keys space. - * - * @param channel streaming channel that receives a call for every key - * @return StreamScanCursor scan cursor. - */ - StreamScanCursor scan(KeyStreamingChannel channel); - - /** - * Incrementally iterate the keys space. - * - * @param channel streaming channel that receives a call for every key - * @param scanArgs scan arguments - * @return StreamScanCursor scan cursor. - */ - StreamScanCursor scan(KeyStreamingChannel channel, ScanArgs scanArgs); - - /** - * Incrementally iterate the keys space. - * - * @param channel streaming channel that receives a call for every key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @param scanArgs scan arguments - * @return StreamScanCursor scan cursor. - */ - StreamScanCursor scan(KeyStreamingChannel channel, ScanCursor scanCursor, ScanArgs scanArgs); - - /** - * Incrementally iterate the keys space. - * - * @param channel streaming channel that receives a call for every key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @return StreamScanCursor scan cursor. 
- */ - StreamScanCursor scan(KeyStreamingChannel channel, ScanCursor scanCursor); -} diff --git a/src/main/java/com/lambdaworks/redis/api/sync/RedisListCommands.java b/src/main/java/com/lambdaworks/redis/api/sync/RedisListCommands.java deleted file mode 100644 index db469ea64c..0000000000 --- a/src/main/java/com/lambdaworks/redis/api/sync/RedisListCommands.java +++ /dev/null @@ -1,215 +0,0 @@ -package com.lambdaworks.redis.api.sync; - -import java.util.List; -import com.lambdaworks.redis.KeyValue; -import com.lambdaworks.redis.output.ValueStreamingChannel; - -/** - * Synchronous executed commands for Lists. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateSyncApi - */ -public interface RedisListCommands { - - /** - * Remove and get the first element in a list, or block until one is available. - * - * @param timeout the timeout in seconds - * @param keys the keys - * @return KeyValue<K,V> array-reply specifically: - * - * A {@literal null} multi-bulk when no element could be popped and the timeout expired. A two-element multi-bulk - * with the first element being the name of the key where an element was popped and the second element being the - * value of the popped element. - */ - KeyValue blpop(long timeout, K... keys); - - /** - * Remove and get the last element in a list, or block until one is available. - * - * @param timeout the timeout in seconds - * @param keys the keys - * @return KeyValue<K,V> array-reply specifically: - * - * A {@literal null} multi-bulk when no element could be popped and the timeout expired. A two-element multi-bulk - * with the first element being the name of the key where an element was popped and the second element being the - * value of the popped element. - */ - KeyValue brpop(long timeout, K... keys); - - /** - * Pop a value from a list, push it to another list and return it; or block until one is available. - * - * @param timeout the timeout in seconds - * @param source the source key - * @param destination the destination type: key - * @return V bulk-string-reply the element being popped from {@code source} and pushed to {@code destination}. If - * {@code timeout} is reached, a - */ - V brpoplpush(long timeout, K source, K destination); - - /** - * Get an element from a list by its index. - * - * @param key the key - * @param index the index type: long - * @return V bulk-string-reply the requested element, or {@literal null} when {@code index} is out of range. - */ - V lindex(K key, long index); - - /** - * Insert an element before or after another element in a list. - * - * @param key the key - * @param before the before - * @param pivot the pivot - * @param value the value - * @return Long integer-reply the length of the list after the insert operation, or {@code -1} when the value {@code pivot} - * was not found. - */ - Long linsert(K key, boolean before, V pivot, V value); - - /** - * Get the length of a list. - * - * @param key the key - * @return Long integer-reply the length of the list at {@code key}. - */ - Long llen(K key); - - /** - * Remove and get the first element in a list. - * - * @param key the key - * @return V bulk-string-reply the value of the first element, or {@literal null} when {@code key} does not exist. - */ - V lpop(K key); - - /** - * Prepend one or multiple values to a list. - * - * @param key the key - * @param values the value - * @return Long integer-reply the length of the list after the push operations. 
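The SCAN family above iterates the keyspace incrementally instead of blocking the server like KEYS. Here is a sketch of a full scan loop; the `ScanArgs.Builder` chain and the cursor accessors are written from memory against the 4.x API and may need adjusting, and the MATCH pattern is invented.

```java
import com.lambdaworks.redis.KeyScanCursor;
import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.ScanArgs;
import com.lambdaworks.redis.api.sync.RedisCommands;

public class ScanExample {

    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        RedisCommands<String, String> redis = client.connect().sync();

        // SCAN walks the keyspace in pages; MATCH and COUNT are hints, not guarantees.
        ScanArgs scanArgs = ScanArgs.Builder.limit(100).match("user:*");

        KeyScanCursor<String> cursor = redis.scan(scanArgs);
        while (true) {
            cursor.getKeys().forEach(System.out::println);   // keys returned by this page
            if (cursor.isFinished()) {
                break;
            }
            cursor = redis.scan(cursor, scanArgs);           // resume from the previous cursor
        }

        client.shutdown();
    }
}
```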
- */ - Long lpush(K key, V... values); - - /** - * Prepend a value to a list, only if the list exists. - * - * @param key the key - * @param value the value - * @return Long integer-reply the length of the list after the push operation. - * @deprecated Use {@link #lpushx(Object, Object[])} - */ - Long lpushx(K key, V value); - - /** - * Prepend values to a list, only if the list exists. - * - * @param key the key - * @param values the values - * @return Long integer-reply the length of the list after the push operation. - */ - Long lpushx(K key, V... values); - - /** - * Get a range of elements from a list. - * - * @param key the key - * @param start the start type: long - * @param stop the stop type: long - * @return List<V> array-reply list of elements in the specified range. - */ - List lrange(K key, long start, long stop); - - /** - * Get a range of elements from a list. - * - * @param channel the channel - * @param key the key - * @param start the start type: long - * @param stop the stop type: long - * @return Long count of elements in the specified range. - */ - Long lrange(ValueStreamingChannel channel, K key, long start, long stop); - - /** - * Remove elements from a list. - * - * @param key the key - * @param count the count type: long - * @param value the value - * @return Long integer-reply the number of removed elements. - */ - Long lrem(K key, long count, V value); - - /** - * Set the value of an element in a list by its index. - * - * @param key the key - * @param index the index type: long - * @param value the value - * @return String simple-string-reply - */ - String lset(K key, long index, V value); - - /** - * Trim a list to the specified range. - * - * @param key the key - * @param start the start type: long - * @param stop the stop type: long - * @return String simple-string-reply - */ - String ltrim(K key, long start, long stop); - - /** - * Remove and get the last element in a list. - * - * @param key the key - * @return V bulk-string-reply the value of the last element, or {@literal null} when {@code key} does not exist. - */ - V rpop(K key); - - /** - * Remove the last element in a list, append it to another list and return it. - * - * @param source the source key - * @param destination the destination type: key - * @return V bulk-string-reply the element being popped and pushed. - */ - V rpoplpush(K source, K destination); - - /** - * Append one or multiple values to a list. - * - * @param key the key - * @param values the value - * @return Long integer-reply the length of the list after the push operation. - */ - Long rpush(K key, V... values); - - /** - * Append a value to a list, only if the list exists. - * - * @param key the key - * @param value the value - * @return Long integer-reply the length of the list after the push operation. - * @deprecated Use {@link #rpushx(java.lang.Object, java.lang.Object[])} - */ - Long rpushx(K key, V value); - - /** - * Append values to a list, only if the list exists. - * - * @param key the key - * @param values the values - * @return Long integer-reply the length of the list after the push operation. - */ - Long rpushx(K key, V... 
values); -} diff --git a/src/main/java/com/lambdaworks/redis/api/sync/RedisScriptingCommands.java b/src/main/java/com/lambdaworks/redis/api/sync/RedisScriptingCommands.java deleted file mode 100644 index 8f23f65c7a..0000000000 --- a/src/main/java/com/lambdaworks/redis/api/sync/RedisScriptingCommands.java +++ /dev/null @@ -1,102 +0,0 @@ -package com.lambdaworks.redis.api.sync; - -import java.util.List; -import com.lambdaworks.redis.ScriptOutputType; - -/** - * Synchronous executed commands for Scripting. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateSyncApi - */ -public interface RedisScriptingCommands { - - /** - * Execute a Lua script server side. - * - * @param script Lua 5.1 script. - * @param type output type - * @param keys key names - * @param expected return type - * @return script result - */ - T eval(String script, ScriptOutputType type, K... keys); - - /** - * Execute a Lua script server side. - * - * @param script Lua 5.1 script. - * @param type the type - * @param keys the keys - * @param values the values - * @param expected return type - * @return script result - */ - T eval(String script, ScriptOutputType type, K[] keys, V... values); - - /** - * Evaluates a script cached on the server side by its SHA1 digest - * - * @param digest SHA1 of the script - * @param type the type - * @param keys the keys - * @param expected return type - * @return script result - */ - T evalsha(String digest, ScriptOutputType type, K... keys); - - /** - * Execute a Lua script server side. - * - * @param digest SHA1 of the script - * @param type the type - * @param keys the keys - * @param values the values - * @param expected return type - * @return script result - */ - T evalsha(String digest, ScriptOutputType type, K[] keys, V... values); - - /** - * Check existence of scripts in the script cache. - * - * @param digests script digests - * @return List<Boolean> array-reply The command returns an array of integers that correspond to the specified SHA1 - * digest arguments. For every corresponding SHA1 digest of a script that actually exists in the script cache, an 1 - * is returned, otherwise 0 is returned. - */ - List scriptExists(String... digests); - - /** - * Remove all the scripts from the script cache. - * - * @return String simple-string-reply - */ - String scriptFlush(); - - /** - * Kill the script currently in execution. - * - * @return String simple-string-reply - */ - String scriptKill(); - - /** - * Load the specified Lua script into the script cache. - * - * @param script script content - * @return String bulk-string-reply This command returns the SHA1 digest of the script added into the script cache. - */ - String scriptLoad(V script); - - /** - * Create a SHA1 digest from a Lua script. - * - * @param script script content - * @return the SHA1 value - */ - String digest(V script); -} diff --git a/src/main/java/com/lambdaworks/redis/api/sync/RedisServerCommands.java b/src/main/java/com/lambdaworks/redis/api/sync/RedisServerCommands.java deleted file mode 100644 index 7dd5ac1321..0000000000 --- a/src/main/java/com/lambdaworks/redis/api/sync/RedisServerCommands.java +++ /dev/null @@ -1,331 +0,0 @@ -package com.lambdaworks.redis.api.sync; - -import java.util.Date; -import java.util.List; -import com.lambdaworks.redis.KillArgs; -import com.lambdaworks.redis.protocol.CommandType; - -/** - * Synchronous executed commands for Server Control. - * - * @param Key type. - * @param Value type. 
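A minimal sketch of the list commands removed just above (RPUSH, LPUSH, LRANGE, BLPOP). Key names, the one-second blocking timeout and the bootstrap are illustrative assumptions.

```java
import java.util.List;

import com.lambdaworks.redis.KeyValue;
import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.api.sync.RedisCommands;

public class ListExample {

    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        RedisCommands<String, String> redis = client.connect().sync();

        // RPUSH appends, LPUSH prepends; both return the new list length.
        redis.rpush("queue", "a", "b");
        redis.lpush("queue", "first");

        // LRANGE with 0..-1 returns the whole list.
        List<String> all = redis.lrange("queue", 0, -1);
        System.out.println(all);                       // [first, a, b]

        // BLPOP blocks for up to 1 second and reports which key produced the element.
        KeyValue<String, String> popped = redis.blpop(1, "queue");
        System.out.println(popped);

        client.shutdown();
    }
}
```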
- * @author Mark Paluch - * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateSyncApi - */ -public interface RedisServerCommands { - - /** - * Asynchronously rewrite the append-only file. - * - * @return String simple-string-reply always {@code OK}. - */ - String bgrewriteaof(); - - /** - * Asynchronously save the dataset to disk. - * - * @return String simple-string-reply - */ - String bgsave(); - - /** - * Get the current connection name. - * - * @return K bulk-string-reply The connection name, or a null bulk reply if no name is set. - */ - K clientGetname(); - - /** - * Set the current connection name. - * - * @param name the client name - * @return simple-string-reply {@code OK} if the connection name was successfully set. - */ - String clientSetname(K name); - - /** - * Kill the connection of a client identified by ip:port. - * - * @param addr ip:port - * @return String simple-string-reply {@code OK} if the connection exists and has been closed - */ - String clientKill(String addr); - - /** - * Kill connections of clients which are filtered by {@code killArgs} - * - * @param killArgs args for the kill operation - * @return Long integer-reply number of killed connections - */ - Long clientKill(KillArgs killArgs); - - /** - * Stop processing commands from clients for some time. - * - * @param timeout the timeout value in milliseconds - * @return String simple-string-reply The command returns OK or an error if the timeout is invalid. - */ - String clientPause(long timeout); - - /** - * Get the list of client connections. - * - * @return String bulk-string-reply a unique string, formatted as follows: One client connection per line (separated by LF), - * each line is composed of a succession of property=value fields separated by a space character. - */ - String clientList(); - - /** - * Returns an array reply of details about all Redis commands. - * - * @return List<Object> array-reply - */ - List command(); - - /** - * Returns an array reply of details about the requested commands. - * - * @param commands the commands to query for - * @return List<Object> array-reply - */ - List commandInfo(String... commands); - - /** - * Returns an array reply of details about the requested commands. - * - * @param commands the commands to query for - * @return List<Object> array-reply - */ - List commandInfo(CommandType... commands); - - /** - * Get total number of Redis commands. - * - * @return Long integer-reply of number of total commands in this Redis server. - */ - Long commandCount(); - - /** - * Get the value of a configuration parameter. - * - * @param parameter name of the parameter - * @return List<String> bulk-string-reply - */ - List configGet(String parameter); - - /** - * Reset the stats returned by INFO. - * - * @return String simple-string-reply always {@code OK}. - */ - String configResetstat(); - - /** - * Rewrite the configuration file with the in memory configuration. - * - * @return String simple-string-reply {@code OK} when the configuration was rewritten properly. Otherwise an error is - * returned. - */ - String configRewrite(); - - /** - * Set a configuration parameter to the given value. - * - * @param parameter the parameter name - * @param value the parameter value - * @return String simple-string-reply: {@code OK} when the configuration was set properly. Otherwise an error is returned. - */ - String configSet(String parameter, String value); - - /** - * Return the number of keys in the selected database. 
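The scripting interface removed a little earlier in this hunk pairs EVAL/EVALSHA with SCRIPT LOAD so that the Lua body is shipped once and referenced by digest afterwards. A sketch under stated assumptions (local Redis, String codec, invented key names):

```java
import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.ScriptOutputType;
import com.lambdaworks.redis.api.sync.RedisCommands;

public class ScriptingExample {

    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        RedisCommands<String, String> redis = client.connect().sync();

        // Load the script once and reuse the SHA1 digest with EVALSHA.
        String script = "return redis.call('incrby', KEYS[1], ARGV[1])";
        String sha = redis.scriptLoad(script);

        Long counter = redis.evalsha(sha, ScriptOutputType.INTEGER, new String[] { "counter" }, "5");
        System.out.println(counter);

        // EVAL ships the script body directly; the output type must match the reply.
        Long doubled = redis.eval("return 2 * tonumber(ARGV[1])", ScriptOutputType.INTEGER,
                new String[0], "21");
        System.out.println(doubled);

        client.shutdown();
    }
}
```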
- * - * @return Long integer-reply - */ - Long dbsize(); - - /** - * Crash and recover - * @param delay optional delay in milliseconds - * @return String simple-string-reply - */ - String debugCrashAndRecover(Long delay); - - /** - * Get debugging information about the internal hash-table state. - * - * @param db the database number - * @return String simple-string-reply - */ - String debugHtstats(int db); - - /** - * Get debugging information about a key. - * - * @param key the key - * @return String simple-string-reply - */ - String debugObject(K key); - - /** - * Make the server crash: Out of memory. - */ - void debugOom(); - - /** - * Make the server crash: Invalid pointer access. - */ - void debugSegfault(); - - /** - * Save RDB, clear the database and reload RDB. - * - * @return String simple-string-reply The commands returns OK on success. - */ - String debugReload(); - - /** - * Restart the server gracefully. - * @param delay optional delay in milliseconds - * @return String simple-string-reply - */ - String debugRestart(Long delay); - - /** - * Get debugging information about the internal SDS length. - * - * @param key the key - * @return String simple-string-reply - */ - String debugSdslen(K key); - - /** - * Remove all keys from all databases. - * - * @return String simple-string-reply - */ - String flushall(); - - /** - * Remove all keys asynchronously from all databases. - * - * @return String simple-string-reply - */ - String flushallAsync(); - - /** - * Remove all keys from the current database. - * - * @return String simple-string-reply - */ - String flushdb(); - - /** - * Remove all keys asynchronously from the current database. - * - * @return String simple-string-reply - */ - String flushdbAsync(); - - /** - * Get information and statistics about the server. - * - * @return String bulk-string-reply as a collection of text lines. - */ - String info(); - - /** - * Get information and statistics about the server. - * - * @param section the section type: string - * @return String bulk-string-reply as a collection of text lines. - */ - String info(String section); - - /** - * Get the UNIX time stamp of the last successful save to disk. - * - * @return Date integer-reply an UNIX time stamp. - */ - Date lastsave(); - - /** - * Synchronously save the dataset to disk. - * - * @return String simple-string-reply The commands returns OK on success. - */ - String save(); - - /** - * Synchronously save the dataset to disk and then shut down the server. - * - * @param save {@literal true} force save operation - */ - void shutdown(boolean save); - - /** - * Make the server a slave of another instance, or promote it as master. - * - * @param host the host type: string - * @param port the port type: string - * @return String simple-string-reply - */ - String slaveof(String host, int port); - - /** - * Promote server as master. - * - * @return String simple-string-reply - */ - String slaveofNoOne(); - - /** - * Read the slow log. - * - * @return List<Object> deeply nested multi bulk replies - */ - List slowlogGet(); - - /** - * Read the slow log. - * - * @param count the count - * @return List<Object> deeply nested multi bulk replies - */ - List slowlogGet(int count); - - /** - * Obtaining the current length of the slow log. - * - * @return Long length of the slow log. - */ - Long slowlogLen(); - - /** - * Resetting the slow log. - * - * @return String simple-string-reply The commands returns OK on success. - */ - String slowlogReset(); - - /** - * Internal command used for replication. 
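A short sketch of a few of the server-control commands above (CLIENT SETNAME, CONFIG GET, DBSIZE, SLOWLOG). The connection name and the configuration parameter are examples only; output shown in comments is what a default local instance would typically report.

```java
import java.util.List;

import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.api.sync.RedisCommands;

public class ServerExample {

    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        RedisCommands<String, String> redis = client.connect().sync();

        // Identify this connection in CLIENT LIST output.
        redis.clientSetname("reporting-job");

        // Read a configuration parameter and the keyspace size of the selected database.
        List<String> maxmemory = redis.configGet("maxmemory");
        System.out.println(maxmemory);                 // e.g. [maxmemory, 0]
        System.out.println("keys: " + redis.dbsize());

        // Slow log inspection: the last 10 entries and the current length.
        System.out.println(redis.slowlogGet(10));
        System.out.println("slowlog length: " + redis.slowlogLen());

        client.shutdown();
    }
}
```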
- * - * @return String simple-string-reply - * - * @Deprecated/ - String sync(); - - /** - * Return the current server time. - * - * @return List<V> array-reply specifically: - * - * A multi bulk reply containing two elements: - * - * unix time in seconds. microseconds. - */ - List time(); -} diff --git a/src/main/java/com/lambdaworks/redis/api/sync/RedisSetCommands.java b/src/main/java/com/lambdaworks/redis/api/sync/RedisSetCommands.java deleted file mode 100644 index 6a67e804e2..0000000000 --- a/src/main/java/com/lambdaworks/redis/api/sync/RedisSetCommands.java +++ /dev/null @@ -1,292 +0,0 @@ -package com.lambdaworks.redis.api.sync; - -import java.util.List; -import java.util.Set; -import com.lambdaworks.redis.ScanArgs; -import com.lambdaworks.redis.ScanCursor; -import com.lambdaworks.redis.StreamScanCursor; -import com.lambdaworks.redis.ValueScanCursor; -import com.lambdaworks.redis.output.ValueStreamingChannel; - -/** - * Synchronous executed commands for Sets. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateSyncApi - */ -public interface RedisSetCommands { - - /** - * Add one or more members to a set. - * - * @param key the key - * @param members the member type: value - * @return Long integer-reply the number of elements that were added to the set, not including all the elements already - * present into the set. - */ - Long sadd(K key, V... members); - - /** - * Get the number of members in a set. - * - * @param key the key - * @return Long integer-reply the cardinality (number of elements) of the set, or {@literal false} if {@code key} does not - * exist. - */ - Long scard(K key); - - /** - * Subtract multiple sets. - * - * @param keys the key - * @return Set<V> array-reply list with members of the resulting set. - */ - Set sdiff(K... keys); - - /** - * Subtract multiple sets. - * - * @param channel the channel - * @param keys the keys - * @return Long count of members of the resulting set. - */ - Long sdiff(ValueStreamingChannel channel, K... keys); - - /** - * Subtract multiple sets and store the resulting set in a key. - * - * @param destination the destination type: key - * @param keys the key - * @return Long integer-reply the number of elements in the resulting set. - */ - Long sdiffstore(K destination, K... keys); - - /** - * Intersect multiple sets. - * - * @param keys the key - * @return Set<V> array-reply list with members of the resulting set. - */ - Set sinter(K... keys); - - /** - * Intersect multiple sets. - * - * @param channel the channel - * @param keys the keys - * @return Long count of members of the resulting set. - */ - Long sinter(ValueStreamingChannel channel, K... keys); - - /** - * Intersect multiple sets and store the resulting set in a key. - * - * @param destination the destination type: key - * @param keys the key - * @return Long integer-reply the number of elements in the resulting set. - */ - Long sinterstore(K destination, K... keys); - - /** - * Determine if a given value is a member of a set. - * - * @param key the key - * @param member the member type: value - * @return Boolean integer-reply specifically: - * - * {@literal true} if the element is a member of the set. {@literal false} if the element is not a member of the - * set, or if {@code key} does not exist. - */ - Boolean sismember(K key, V member); - - /** - * Move a member from one set to another. 
- * - * @param source the source key - * @param destination the destination type: key - * @param member the member type: value - * @return Boolean integer-reply specifically: - * - * {@literal true} if the element is moved. {@literal false} if the element is not a member of {@code source} and no - * operation was performed. - */ - Boolean smove(K source, K destination, V member); - - /** - * Get all the members in a set. - * - * @param key the key - * @return Set<V> array-reply all elements of the set. - */ - Set smembers(K key); - - /** - * Get all the members in a set. - * - * @param channel the channel - * @param key the keys - * @return Long count of members of the resulting set. - */ - Long smembers(ValueStreamingChannel channel, K key); - - /** - * Remove and return a random member from a set. - * - * @param key the key - * @return V bulk-string-reply the removed element, or {@literal null} when {@code key} does not exist. - */ - V spop(K key); - - /** - * Remove and return one or multiple random members from a set. - * - * @param key the key - * @param count number of members to pop - * @return V bulk-string-reply the removed element, or {@literal null} when {@code key} does not exist. - */ - Set spop(K key, long count); - - /** - * Get one random member from a set. - * - * @param key the key - * - * @return V bulk-string-reply without the additional {@code count} argument the command returns a Bulk Reply with the - * randomly selected element, or {@literal null} when {@code key} does not exist. - */ - V srandmember(K key); - - /** - * Get one or multiple random members from a set. - * - * @param key the key - * @param count the count type: long - * @return Set<V> bulk-string-reply without the additional {@code count} argument the command returns a Bulk Reply - * with the randomly selected element, or {@literal null} when {@code key} does not exist. - */ - List srandmember(K key, long count); - - /** - * Get one or multiple random members from a set. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param count the count - * @return Long count of members of the resulting set. - */ - Long srandmember(ValueStreamingChannel channel, K key, long count); - - /** - * Remove one or more members from a set. - * - * @param key the key - * @param members the member type: value - * @return Long integer-reply the number of members that were removed from the set, not including non existing members. - */ - Long srem(K key, V... members); - - /** - * Add multiple sets. - * - * @param keys the key - * @return Set<V> array-reply list with members of the resulting set. - */ - Set sunion(K... keys); - - /** - * Add multiple sets. - * - * @param channel streaming channel that receives a call for every value - * @param keys the keys - * @return Long count of members of the resulting set. - */ - Long sunion(ValueStreamingChannel channel, K... keys); - - /** - * Add multiple sets and store the resulting set in a key. - * - * @param destination the destination type: key - * @param keys the key - * @return Long integer-reply the number of elements in the resulting set. - */ - Long sunionstore(K destination, K... keys); - - /** - * Incrementally iterate Set elements. - * - * @param key the key - * @return ValueScanCursor<V> scan cursor. - */ - ValueScanCursor sscan(K key); - - /** - * Incrementally iterate Set elements. - * - * @param key the key - * @param scanArgs scan arguments - * @return ValueScanCursor<V> scan cursor. 
- */ - ValueScanCursor sscan(K key, ScanArgs scanArgs); - - /** - * Incrementally iterate Set elements. - * - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @param scanArgs scan arguments - * @return ValueScanCursor<V> scan cursor. - */ - ValueScanCursor sscan(K key, ScanCursor scanCursor, ScanArgs scanArgs); - - /** - * Incrementally iterate Set elements. - * - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @return ValueScanCursor<V> scan cursor. - */ - ValueScanCursor sscan(K key, ScanCursor scanCursor); - - /** - * Incrementally iterate Set elements. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @return StreamScanCursor scan cursor. - */ - StreamScanCursor sscan(ValueStreamingChannel channel, K key); - - /** - * Incrementally iterate Set elements. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param scanArgs scan arguments - * @return StreamScanCursor scan cursor. - */ - StreamScanCursor sscan(ValueStreamingChannel channel, K key, ScanArgs scanArgs); - - /** - * Incrementally iterate Set elements. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @param scanArgs scan arguments - * @return StreamScanCursor scan cursor. - */ - StreamScanCursor sscan(ValueStreamingChannel channel, K key, ScanCursor scanCursor, ScanArgs scanArgs); - - /** - * Incrementally iterate Set elements. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @return StreamScanCursor scan cursor. - */ - StreamScanCursor sscan(ValueStreamingChannel channel, K key, ScanCursor scanCursor); -} diff --git a/src/main/java/com/lambdaworks/redis/api/sync/RedisSortedSetCommands.java b/src/main/java/com/lambdaworks/redis/api/sync/RedisSortedSetCommands.java deleted file mode 100644 index 00062b93b0..0000000000 --- a/src/main/java/com/lambdaworks/redis/api/sync/RedisSortedSetCommands.java +++ /dev/null @@ -1,835 +0,0 @@ -package com.lambdaworks.redis.api.sync; - -import java.util.List; -import com.lambdaworks.redis.ScanArgs; -import com.lambdaworks.redis.ScanCursor; -import com.lambdaworks.redis.ScoredValue; -import com.lambdaworks.redis.ScoredValueScanCursor; -import com.lambdaworks.redis.StreamScanCursor; -import com.lambdaworks.redis.ZAddArgs; -import com.lambdaworks.redis.ZStoreArgs; -import com.lambdaworks.redis.output.ScoredValueStreamingChannel; -import com.lambdaworks.redis.output.ValueStreamingChannel; - -/** - * Synchronous executed commands for Sorted Sets. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateSyncApi - */ -public interface RedisSortedSetCommands { - - /** - * Add one or more members to a sorted set, or update its score if it already exists. - * - * @param key the key - * @param score the score - * @param member the member - * - * @return Long integer-reply specifically: - * - * The number of elements added to the sorted sets, not including elements already existing for which the score was - * updated. 
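To round off the set interface above, a minimal sketch of SADD, SINTER, SISMEMBER, SCARD and SPOP. Key and member names are invented; the bootstrap is assumed as before.

```java
import java.util.Set;

import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.api.sync.RedisCommands;

public class SetExample {

    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        RedisCommands<String, String> redis = client.connect().sync();

        redis.sadd("tags:post-1", "redis", "java", "client");
        redis.sadd("tags:post-2", "redis", "netty");

        // SINTER/SUNION/SDIFF combine sets without storing the result.
        Set<String> common = redis.sinter("tags:post-1", "tags:post-2");
        System.out.println(common);                    // [redis]

        // SISMEMBER and SCARD answer membership and size questions.
        System.out.println(redis.sismember("tags:post-1", "java"));   // true
        System.out.println(redis.scard("tags:post-2"));               // 2

        // SPOP removes and returns a random member.
        System.out.println(redis.spop("tags:post-2"));

        client.shutdown();
    }
}
```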
- */ - Long zadd(K key, double score, V member); - - /** - * Add one or more members to a sorted set, or update its score if it already exists. - * - * @param key the key - * @param scoresAndValues the scoresAndValue tuples (score,value,score,value,...) - * @return Long integer-reply specifically: - * - * The number of elements added to the sorted sets, not including elements already existing for which the score was - * updated. - */ - Long zadd(K key, Object... scoresAndValues); - - /** - * Add one or more members to a sorted set, or update its score if it already exists. - * - * @param key the key - * @param scoredValues the scored values - * @return Long integer-reply specifically: - * - * The number of elements added to the sorted sets, not including elements already existing for which the score was - * updated. - */ - Long zadd(K key, ScoredValue... scoredValues); - - /** - * Add one or more members to a sorted set, or update its score if it already exists. - * - * @param key the key - * @param zAddArgs arguments for zadd - * @param score the score - * @param member the member - * - * @return Long integer-reply specifically: - * - * The number of elements added to the sorted sets, not including elements already existing for which the score was - * updated. - */ - Long zadd(K key, ZAddArgs zAddArgs, double score, V member); - - /** - * Add one or more members to a sorted set, or update its score if it already exists. - * - * @param key the key - * @param zAddArgs arguments for zadd - * @param scoresAndValues the scoresAndValue tuples (score,value,score,value,...) - * @return Long integer-reply specifically: - * - * The number of elements added to the sorted sets, not including elements already existing for which the score was - * updated. - */ - Long zadd(K key, ZAddArgs zAddArgs, Object... scoresAndValues); - - /** - * Add one or more members to a sorted set, or update its score if it already exists. - * - * @param key the ke - * @param zAddArgs arguments for zadd - * @param scoredValues the scored values - * @return Long integer-reply specifically: - * - * The number of elements added to the sorted sets, not including elements already existing for which the score was - * updated. - */ - Long zadd(K key, ZAddArgs zAddArgs, ScoredValue... scoredValues); - - /** - * ZADD acts like ZINCRBY - * - * @param key the key - * @param score the score - * @param member the member - * - * @return Long integer-reply specifically: - * - * The total number of elements changed - */ - Double zaddincr(K key, double score, V member); - - /** - * Get the number of members in a sorted set. - * - * @param key the key - * @return Long integer-reply the cardinality (number of elements) of the sorted set, or {@literal false} if {@code key} - * does not exist. - */ - Long zcard(K key); - - /** - * Count the members in a sorted set with scores within the given values. - * - * @param key the key - * @param min min score - * @param max max score - * @return Long integer-reply the number of elements in the specified score range. - */ - Long zcount(K key, double min, double max); - - /** - * Count the members in a sorted set with scores within the given values. - * - * @param key the key - * @param min min score - * @param max max score - * @return Long integer-reply the number of elements in the specified score range. - */ - Long zcount(K key, String min, String max); - - /** - * Increment the score of a member in a sorted set. 
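The ZADD variants above differ mainly in how scored members are supplied (single member, varargs tuples, `ScoredValue`s) and in the optional `ZAddArgs` modifiers. A small usage sketch under the same assumptions as before (a `RedisCommands<String, String>` handle named `commands`); the key `ranking` and the scores are made up for illustration.

```java
import com.lambdaworks.redis.ZAddArgs;
import com.lambdaworks.redis.api.sync.RedisCommands;

class SortedSetAddExample {

    static void populate(RedisCommands<String, String> commands) {
        // Single member and the varargs (score, value, score, value, ...) form.
        commands.zadd("ranking", 10.0, "alice");
        commands.zadd("ranking", 20.0, "bob", 30.0, "carol");

        // NX: only add new members, never update scores of existing ones.
        commands.zadd("ranking", ZAddArgs.Builder.nx(), 99.0, "alice");

        // ZADD ... INCR behaves like ZINCRBY and returns the new score.
        Double newScore = commands.zaddincr("ranking", 5.0, "alice");

        Long total = commands.zcard("ranking");                 // number of members
        Long inRange = commands.zcount("ranking", 10.0, 25.0);  // members with 10 <= score <= 25
        System.out.printf("alice=%s total=%d inRange=%d%n", newScore, total, inRange);
    }
}
```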
- * - * @param key the key - * @param amount the increment type: long - * @param member the member type: key - * @return Double bulk-string-reply the new score of {@code member} (a double precision floating point number), represented - * as string. - */ - Double zincrby(K key, double amount, K member); - - /** - * Intersect multiple sorted sets and store the resulting sorted set in a new key. - * - * @param destination the destination - * @param keys the keys - * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. - */ - Long zinterstore(K destination, K... keys); - - /** - * Intersect multiple sorted sets and store the resulting sorted set in a new key. - * - * @param destination the destination - * @param storeArgs the storeArgs - * @param keys the keys - * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. - */ - Long zinterstore(K destination, ZStoreArgs storeArgs, K... keys); - - /** - * Return a range of members in a sorted set, by index. - * - * @param key the key - * @param start the start - * @param stop the stop - * @return List<V> array-reply list of elements in the specified range. - */ - List zrange(K key, long start, long stop); - - /** - * Return a range of members with scores in a sorted set, by index. - * - * @param key the key - * @param start the start - * @param stop the stop - * @return List<V> array-reply list of elements in the specified range. - */ - List> zrangeWithScores(K key, long start, long stop); - - /** - * Return a range of members in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @return List<V> array-reply list of elements in the specified score range. - */ - List zrangebyscore(K key, double min, double max); - - /** - * Return a range of members in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @return List<V> array-reply list of elements in the specified score range. - */ - List zrangebyscore(K key, String min, String max); - - /** - * Return a range of members in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return List<V> array-reply list of elements in the specified score range. - */ - List zrangebyscore(K key, double min, double max, long offset, long count); - - /** - * Return a range of members in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return List<V> array-reply list of elements in the specified score range. - */ - List zrangebyscore(K key, String min, String max, long offset, long count); - - /** - * Return a range of members with score in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. - */ - List> zrangebyscoreWithScores(K key, double min, double max); - - /** - * Return a range of members with score in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. - */ - List> zrangebyscoreWithScores(K key, String min, String max); - - /** - * Return a range of members with score in a sorted set, by score. 
- * - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. - */ - List> zrangebyscoreWithScores(K key, double min, double max, long offset, long count); - - /** - * Return a range of members with score in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. - */ - List> zrangebyscoreWithScores(K key, String min, String max, long offset, long count); - - /** - * Return a range of members in a sorted set, by index. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param start the start - * @param stop the stop - * @return Long count of elements in the specified range. - */ - Long zrange(ValueStreamingChannel channel, K key, long start, long stop); - - /** - * Stream over a range of members with scores in a sorted set, by index. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param start the start - * @param stop the stop - * @return Long count of elements in the specified range. - */ - Long zrangeWithScores(ScoredValueStreamingChannel channel, K key, long start, long stop); - - /** - * Stream over a range of members in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified score range. - */ - Long zrangebyscore(ValueStreamingChannel channel, K key, double min, double max); - - /** - * Stream over a range of members in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified score range. - */ - Long zrangebyscore(ValueStreamingChannel channel, K key, String min, String max); - - /** - * Stream over range of members in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified score range. - */ - Long zrangebyscore(ValueStreamingChannel channel, K key, double min, double max, long offset, long count); - - /** - * Stream over a range of members in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified score range. - */ - Long zrangebyscore(ValueStreamingChannel channel, K key, String min, String max, long offset, long count); - - /** - * Stream over a range of members with scores in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified score range. 
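The streaming variants push each element into a `ValueStreamingChannel` (or `ScoredValueStreamingChannel`) instead of materializing a `List`, which keeps memory usage flat for large ranges. A sketch under the same assumptions; because the channel interface declares a single callback method, a lambda can serve as the channel.

```java
import com.lambdaworks.redis.api.sync.RedisCommands;
import com.lambdaworks.redis.output.ValueStreamingChannel;

class StreamingRangeExample {

    static void streamRange(RedisCommands<String, String> commands) {
        // The channel is invoked once per value; only the element count is returned.
        ValueStreamingChannel<String> channel = value -> System.out.println("member: " + value);

        Long streamed = commands.zrangebyscore(channel, "ranking", 0.0, 100.0);
        System.out.println("streamed " + streamed + " members");
    }
}
```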
- */ - Long zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double min, double max); - - /** - * Stream over a range of members with scores in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified score range. - */ - Long zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String min, String max); - - /** - * Stream over a range of members with scores in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified score range. - */ - Long zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double min, double max, long offset, long count); - - /** - * Stream over a range of members with scores in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified score range. - */ - Long zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String min, String max, long offset, long count); - - /** - * Determine the index of a member in a sorted set. - * - * @param key the key - * @param member the member type: value - * @return Long integer-reply the rank of {@code member}. If {@code member} does not exist in the sorted set or {@code key} - * does not exist, - */ - Long zrank(K key, V member); - - /** - * Remove one or more members from a sorted set. - * - * @param key the key - * @param members the member type: value - * @return Long integer-reply specifically: - * - * The number of members removed from the sorted set, not including non existing members. - */ - Long zrem(K key, V... members); - - /** - * Remove all members in a sorted set within the given indexes. - * - * @param key the key - * @param start the start type: long - * @param stop the stop type: long - * @return Long integer-reply the number of elements removed. - */ - Long zremrangebyrank(K key, long start, long stop); - - /** - * Remove all members in a sorted set within the given scores. - * - * @param key the key - * @param min min score - * @param max max score - * @return Long integer-reply the number of elements removed. - */ - Long zremrangebyscore(K key, double min, double max); - - /** - * Remove all members in a sorted set within the given scores. - * - * @param key the key - * @param min min score - * @param max max score - * @return Long integer-reply the number of elements removed. - */ - Long zremrangebyscore(K key, String min, String max); - - /** - * Return a range of members in a sorted set, by index, with scores ordered from high to low. - * - * @param key the key - * @param start the start - * @param stop the stop - * @return List<V> array-reply list of elements in the specified range. - */ - List zrevrange(K key, long start, long stop); - - /** - * Return a range of members with scores in a sorted set, by index, with scores ordered from high to low. - * - * @param key the key - * @param start the start - * @param stop the stop - * @return List<V> array-reply list of elements in the specified range. 
- */ - List> zrevrangeWithScores(K key, long start, long stop); - - /** - * Return a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param min min score - * @param max max score - * @return List<V> array-reply list of elements in the specified score range. - */ - List zrevrangebyscore(K key, double max, double min); - - /** - * Return a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param min min score - * @param max max score - * @return List<V> array-reply list of elements in the specified score range. - */ - List zrevrangebyscore(K key, String max, String min); - - /** - * Return a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param max max score - * @param min min score - * @param offset the withscores - * @param count the null - * @return List<V> array-reply list of elements in the specified score range. - */ - List zrevrangebyscore(K key, double max, double min, long offset, long count); - - /** - * Return a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param max max score - * @param min min score - * @param offset the offset - * @param count the count - * @return List<V> array-reply list of elements in the specified score range. - */ - List zrevrangebyscore(K key, String max, String min, long offset, long count); - - /** - * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param max max score - * @param min min score - * @return List<V> array-reply list of elements in the specified score range. - */ - List> zrevrangebyscoreWithScores(K key, double max, double min); - - /** - * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param max max score - * @param min min score - * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. - */ - List> zrevrangebyscoreWithScores(K key, String max, String min); - - /** - * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param max max score - * @param min min score - * @param offset the offset - * @param count the count - * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. - */ - List> zrevrangebyscoreWithScores(K key, double max, double min, long offset, long count); - - /** - * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param max max score - * @param min min score - * @param offset the offset - * @param count the count - * @return List<V> array-reply list of elements in the specified score range. - */ - List> zrevrangebyscoreWithScores(K key, String max, String min, long offset, long count); - - /** - * Stream over a range of members in a sorted set, by index, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param start the start - * @param stop the stop - * @return Long count of elements in the specified range. 
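A common use of the reverse range commands is a "top N" query: highest scores first, limited with offset and count. A brief sketch using the `String` min/max form so the open-ended `+inf`/`-inf` bounds of the Redis protocol can be expressed; key name and page size are illustrative.

```java
import java.util.List;

import com.lambdaworks.redis.api.sync.RedisCommands;

class TopNExample {

    static List<String> topTen(RedisCommands<String, String> commands) {
        // Highest-scoring 10 members of the sorted set, scores descending.
        return commands.zrevrangebyscore("ranking", "+inf", "-inf", 0, 10);
    }
}
```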
- */ - Long zrevrange(ValueStreamingChannel channel, K key, long start, long stop); - - /** - * Stream over a range of members with scores in a sorted set, by index, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param start the start - * @param stop the stop - * @return Long count of elements in the specified range. - */ - Long zrevrangeWithScores(ScoredValueStreamingChannel channel, K key, long start, long stop); - - /** - * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param max max score - * @param min min score - * @return Long count of elements in the specified range. - */ - Long zrevrangebyscore(ValueStreamingChannel channel, K key, double max, double min); - - /** - * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified range. - */ - Long zrevrangebyscore(ValueStreamingChannel channel, K key, String max, String min); - - /** - * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified range. - */ - Long zrevrangebyscore(ValueStreamingChannel channel, K key, double max, double min, long offset, long count); - - /** - * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified range. - */ - Long zrevrangebyscore(ValueStreamingChannel channel, K key, String max, String min, long offset, long count); - - /** - * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified range. - */ - Long zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double max, double min); - - /** - * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified range. - */ - Long zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String max, String min); - - /** - * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. 
- * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified range. - */ - Long zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double max, double min, long offset, long count); - - /** - * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified range. - */ - Long zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String max, String min, long offset, long count); - - /** - * Determine the index of a member in a sorted set, with scores ordered from high to low. - * - * @param key the key - * @param member the member type: value - * @return Long integer-reply the rank of {@code member}. If {@code member} does not exist in the sorted set or {@code key} - * does not exist, - */ - Long zrevrank(K key, V member); - - /** - * Get the score associated with the given member in a sorted set. - * - * @param key the key - * @param member the member type: value - * @return Double bulk-string-reply the score of {@code member} (a double precision floating point number), represented as - * string. - */ - Double zscore(K key, V member); - - /** - * Add multiple sorted sets and store the resulting sorted set in a new key. - * - * @param destination destination key - * @param keys source keys - * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. - */ - Long zunionstore(K destination, K... keys); - - /** - * Add multiple sorted sets and store the resulting sorted set in a new key. - * - * @param destination the destination - * @param storeArgs the storeArgs - * @param keys the keys - * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. - */ - Long zunionstore(K destination, ZStoreArgs storeArgs, K... keys); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param key the key - * @return ScoredValueScanCursor<V> scan cursor. - */ - ScoredValueScanCursor zscan(K key); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param key the key - * @param scanArgs scan arguments - * @return ScoredValueScanCursor<V> scan cursor. - */ - ScoredValueScanCursor zscan(K key, ScanArgs scanArgs); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @param scanArgs scan arguments - * @return ScoredValueScanCursor<V> scan cursor. - */ - ScoredValueScanCursor zscan(K key, ScanCursor scanCursor, ScanArgs scanArgs); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @return ScoredValueScanCursor<V> scan cursor. - */ - ScoredValueScanCursor zscan(K key, ScanCursor scanCursor); - - /** - * Incrementally iterate sorted sets elements and associated scores. 
- * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @return StreamScanCursor scan cursor. - */ - StreamScanCursor zscan(ScoredValueStreamingChannel channel, K key); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param scanArgs scan arguments - * @return StreamScanCursor scan cursor. - */ - StreamScanCursor zscan(ScoredValueStreamingChannel channel, K key, ScanArgs scanArgs); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @param scanArgs scan arguments - * @return StreamScanCursor scan cursor. - */ - StreamScanCursor zscan(ScoredValueStreamingChannel channel, K key, ScanCursor scanCursor, ScanArgs scanArgs); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @return StreamScanCursor scan cursor. - */ - StreamScanCursor zscan(ScoredValueStreamingChannel channel, K key, ScanCursor scanCursor); - - /** - * Count the number of members in a sorted set between a given lexicographical range. - * - * @param key the key - * @param min min score - * @param max max score - * @return Long integer-reply the number of elements in the specified score range. - */ - Long zlexcount(K key, String min, String max); - - /** - * Remove all members in a sorted set between the given lexicographical range. - * - * @param key the key - * @param min min score - * @param max max score - * @return Long integer-reply the number of elements removed. - */ - Long zremrangebylex(K key, String min, String max); - - /** - * Return a range of members in a sorted set, by lexicographical range. - * - * @param key the key - * @param min min score - * @param max max score - * @return List<V> array-reply list of elements in the specified score range. - */ - List zrangebylex(K key, String min, String max); - - /** - * Return a range of members in a sorted set, by lexicographical range. - * - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return List<V> array-reply list of elements in the specified score range. - */ - List zrangebylex(K key, String min, String max, long offset, long count); -} diff --git a/src/main/java/com/lambdaworks/redis/api/sync/RedisStringCommands.java b/src/main/java/com/lambdaworks/redis/api/sync/RedisStringCommands.java deleted file mode 100644 index 4daf15764e..0000000000 --- a/src/main/java/com/lambdaworks/redis/api/sync/RedisStringCommands.java +++ /dev/null @@ -1,342 +0,0 @@ -package com.lambdaworks.redis.api.sync; - -import java.util.List; -import java.util.Map; -import com.lambdaworks.redis.output.ValueStreamingChannel; -import com.lambdaworks.redis.BitFieldArgs; -import com.lambdaworks.redis.SetArgs; - -/** - * Synchronous executed commands for Strings. - * - * @param Key type. - * @param Value type. 
- * @author Mark Paluch - * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateSyncApi - */ -public interface RedisStringCommands { - - /** - * Append a value to a key. - * - * @param key the key - * @param value the value - * @return Long integer-reply the length of the string after the append operation. - */ - Long append(K key, V value); - - /** - * Count set bits in a string. - * - * @param key the key - * - * @return Long integer-reply The number of bits set to 1. - */ - Long bitcount(K key); - - /** - * Count set bits in a string. - * - * @param key the key - * @param start the start - * @param end the end - * - * @return Long integer-reply The number of bits set to 1. - */ - Long bitcount(K key, long start, long end); - - /** - * Execute {@code BITFIELD} with its subcommands. - * - * @param key the key - * @param bitFieldArgs the args containing subcommands, must not be {@literal null}. - * - * @return Long bulk-reply the results from the bitfield commands. - */ - List bitfield(K key, BitFieldArgs bitFieldArgs); - - /** - * Find first bit set or clear in a string. - * - * @param key the key - * @param state the state - * - * @return Long integer-reply The command returns the position of the first bit set to 1 or 0 according to the request. - * - * If we look for set bits (the bit argument is 1) and the string is empty or composed of just zero bytes, -1 is - * returned. - * - * If we look for clear bits (the bit argument is 0) and the string only contains bit set to 1, the function returns - * the first bit not part of the string on the right. So if the string is tree bytes set to the value 0xff the - * command {@code BITPOS key 0} will return 24, since up to bit 23 all the bits are 1. - * - * Basically the function consider the right of the string as padded with zeros if you look for clear bits and - * specify no range or the start argument only. - * - * However this behavior changes if you are looking for clear bits and specify a range with both - * start and end. If no clear bit is found in the specified range, the function - * returns -1 as the user specified a clear range and there are no 0 bits in that range. - */ - Long bitpos(K key, boolean state); - - /** - * Find first bit set or clear in a string. - * - * @param key the key - * @param state the bit type: long - * @param start the start type: long - * @param end the end type: long - * @return Long integer-reply The command returns the position of the first bit set to 1 or 0 according to the request. - * - * If we look for set bits (the bit argument is 1) and the string is empty or composed of just zero bytes, -1 is - * returned. - * - * If we look for clear bits (the bit argument is 0) and the string only contains bit set to 1, the function returns - * the first bit not part of the string on the right. So if the string is tree bytes set to the value 0xff the - * command {@code BITPOS key 0} will return 24, since up to bit 23 all the bits are 1. - * - * Basically the function consider the right of the string as padded with zeros if you look for clear bits and - * specify no range or the start argument only. - * - * However this behavior changes if you are looking for clear bits and specify a range with both - * start and end. If no clear bit is found in the specified range, the function - * returns -1 as the user specified a clear range and there are no 0 bits in that range. - */ - Long bitpos(K key, boolean state, long start, long end); - - /** - * Perform bitwise AND between strings. 
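BITCOUNT and BITPOS treat the stored string as a bit array, which pairs naturally with SETBIT (documented further below in this interface) for compact flags such as daily-active-user tracking. A sketch under the same assumptions; the key naming scheme and offsets are illustrative.

```java
import com.lambdaworks.redis.api.sync.RedisCommands;

class BitmapExample {

    static void trackLogins(RedisCommands<String, String> commands) {
        // Mark user ids 3 and 7 as "seen today" in a per-day bitmap.
        commands.setbit("logins:2016-08-01", 3, 1);
        commands.setbit("logins:2016-08-01", 7, 1);

        Long seen = commands.bitcount("logins:2016-08-01");          // bits set to 1
        Long firstSet = commands.bitpos("logins:2016-08-01", true);  // position of the first 1 bit
        System.out.println(seen + " users seen, first id: " + firstSet);
    }
}
```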
- * - * @param destination result key of the operation - * @param keys operation input key names - * @return Long integer-reply The size of the string stored in the destination key, that is equal to the size of the longest - * input string. - */ - Long bitopAnd(K destination, K... keys); - - /** - * Perform bitwise NOT between strings. - * - * @param destination result key of the operation - * @param source operation input key names - * @return Long integer-reply The size of the string stored in the destination key, that is equal to the size of the longest - * input string. - */ - Long bitopNot(K destination, K source); - - /** - * Perform bitwise OR between strings. - * - * @param destination result key of the operation - * @param keys operation input key names - * @return Long integer-reply The size of the string stored in the destination key, that is equal to the size of the longest - * input string. - */ - Long bitopOr(K destination, K... keys); - - /** - * Perform bitwise XOR between strings. - * - * @param destination result key of the operation - * @param keys operation input key names - * @return Long integer-reply The size of the string stored in the destination key, that is equal to the size of the longest - * input string. - */ - Long bitopXor(K destination, K... keys); - - /** - * Decrement the integer value of a key by one. - * - * @param key the key - * @return Long integer-reply the value of {@code key} after the decrement - */ - Long decr(K key); - - /** - * Decrement the integer value of a key by the given number. - * - * @param key the key - * @param amount the decrement type: long - * @return Long integer-reply the value of {@code key} after the decrement - */ - Long decrby(K key, long amount); - - /** - * Get the value of a key. - * - * @param key the key - * @return V bulk-string-reply the value of {@code key}, or {@literal null} when {@code key} does not exist. - */ - V get(K key); - - /** - * Returns the bit value at offset in the string value stored at key. - * - * @param key the key - * @param offset the offset type: long - * @return Long integer-reply the bit value stored at offset. - */ - Long getbit(K key, long offset); - - /** - * Get a substring of the string stored at a key. - * - * @param key the key - * @param start the start type: long - * @param end the end type: long - * @return V bulk-string-reply - */ - V getrange(K key, long start, long end); - - /** - * Set the string value of a key and return its old value. - * - * @param key the key - * @param value the value - * @return V bulk-string-reply the old value stored at {@code key}, or {@literal null} when {@code key} did not exist. - */ - V getset(K key, V value); - - /** - * Increment the integer value of a key by one. - * - * @param key the key - * @return Long integer-reply the value of {@code key} after the increment - */ - Long incr(K key); - - /** - * Increment the integer value of a key by the given amount. - * - * @param key the key - * @param amount the increment type: long - * @return Long integer-reply the value of {@code key} after the increment - */ - Long incrby(K key, long amount); - - /** - * Increment the float value of a key by the given amount. - * - * @param key the key - * @param amount the increment type: double - * @return Double bulk-string-reply the value of {@code key} after the increment. - */ - Double incrbyfloat(K key, double amount); - - /** - * Get the values of all the given keys. 
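INCR, INCRBY and INCRBYFLOAT interpret the stored string as a number and update it atomically, which is the usual building block for counters. A minimal sketch under the same assumptions; key names are illustrative.

```java
import com.lambdaworks.redis.api.sync.RedisCommands;

class CounterExample {

    static void countPageViews(RedisCommands<String, String> commands) {
        Long views = commands.incr("pageviews:/home");          // +1, creates the key if absent
        Long bulk = commands.incrby("pageviews:/home", 10);     // +10 in one round trip
        Double ratio = commands.incrbyfloat("score:/home", 0.5);

        // Values are still plain strings when read back.
        String raw = commands.get("pageviews:/home");
        System.out.printf("views=%d bulk=%d ratio=%s raw=%s%n", views, bulk, ratio, raw);
    }
}
```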
- * - * @param keys the key - * @return List<V> array-reply list of values at the specified keys. - */ - List mget(K... keys); - - /** - * Stream over the values of all the given keys. - * - * @param channel the channel - * @param keys the keys - * - * @return Long array-reply list of values at the specified keys. - */ - Long mget(ValueStreamingChannel channel, K... keys); - - /** - * Set multiple keys to multiple values. - * - * @param map the null - * @return String simple-string-reply always {@code OK} since {@code MSET} can't fail. - */ - String mset(Map map); - - /** - * Set multiple keys to multiple values, only if none of the keys exist. - * - * @param map the null - * @return Boolean integer-reply specifically: - * - * {@code 1} if the all the keys were set. {@code 0} if no key was set (at least one key already existed). - */ - Boolean msetnx(Map map); - - /** - * Set the string value of a key. - * - * @param key the key - * @param value the value - * - * @return String simple-string-reply {@code OK} if {@code SET} was executed correctly. - */ - String set(K key, V value); - - /** - * Set the string value of a key. - * - * @param key the key - * @param value the value - * @param setArgs the setArgs - * - * @return String simple-string-reply {@code OK} if {@code SET} was executed correctly. - */ - String set(K key, V value, SetArgs setArgs); - - /** - * Sets or clears the bit at offset in the string value stored at key. - * - * @param key the key - * @param offset the offset type: long - * @param value the value type: string - * @return Long integer-reply the original bit value stored at offset. - */ - Long setbit(K key, long offset, int value); - - /** - * Set the value and expiration of a key. - * - * @param key the key - * @param seconds the seconds type: long - * @param value the value - * @return String simple-string-reply - */ - String setex(K key, long seconds, V value); - - /** - * Set the value and expiration in milliseconds of a key. - * - * @param key the key - * @param milliseconds the milliseconds type: long - * @param value the value - * @return String simple-string-reply - */ - String psetex(K key, long milliseconds, V value); - - /** - * Set the value of a key, only if the key does not exist. - * - * @param key the key - * @param value the value - * @return Boolean integer-reply specifically: - * - * {@code 1} if the key was set {@code 0} if the key was not set - */ - Boolean setnx(K key, V value); - - /** - * Overwrite part of a string at key starting at the specified offset. - * - * @param key the key - * @param offset the offset type: long - * @param value the value - * @return Long integer-reply the length of the string after it was modified by the command. - */ - Long setrange(K key, long offset, V value); - - /** - * Get the length of the value stored in a key. - * - * @param key the key - * @return Long integer-reply the length of the string at {@code key}, or {@code 0} when {@code key} does not exist. - */ - Long strlen(K key); -} diff --git a/src/main/java/com/lambdaworks/redis/api/sync/RedisTransactionalCommands.java b/src/main/java/com/lambdaworks/redis/api/sync/RedisTransactionalCommands.java deleted file mode 100644 index 9867968dd3..0000000000 --- a/src/main/java/com/lambdaworks/redis/api/sync/RedisTransactionalCommands.java +++ /dev/null @@ -1,53 +0,0 @@ -package com.lambdaworks.redis.api.sync; - -import java.util.List; - -/** - * Synchronous executed commands for Transactions. - * - * @param Key type. - * @param Value type. 
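The `set(key, value, SetArgs)` variant above maps to the extended form of SET, so a conditional write and an expiry can be combined in one call instead of chaining SETNX and EXPIRE. A sketch of an "acquire a lock with a TTL" style write under the same assumptions; key, value and timeout are illustrative, not recommendations.

```java
import com.lambdaworks.redis.SetArgs;
import com.lambdaworks.redis.api.sync.RedisCommands;

class SetArgsExample {

    static boolean tryAcquire(RedisCommands<String, String> commands) {
        // NX: only set if the key does not exist yet; EX: expire after 30 seconds.
        String reply = commands.set("lock:resource", "owner-1", SetArgs.Builder.nx().ex(30));

        // Redis answers OK when the value was written, and a null bulk reply otherwise.
        return "OK".equals(reply);
    }
}
```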
- * @author Mark Paluch - * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateSyncApi - */ -public interface RedisTransactionalCommands { - - /** - * Discard all commands issued after MULTI. - * - * @return String simple-string-reply always {@code OK}. - */ - String discard(); - - /** - * Execute all commands issued after MULTI. - * - * @return List<Object> array-reply each element being the reply to each of the commands in the atomic transaction. - * - * When using {@code WATCH}, {@code EXEC} can return a - */ - List exec(); - - /** - * Mark the start of a transaction block. - * - * @return String simple-string-reply always {@code OK}. - */ - String multi(); - - /** - * Watch the given keys to determine execution of the MULTI/EXEC block. - * - * @param keys the key - * @return String simple-string-reply always {@code OK}. - */ - String watch(K... keys); - - /** - * Forget about all watched keys. - * - * @return String simple-string-reply always {@code OK}. - */ - String unwatch(); -} diff --git a/src/main/java/com/lambdaworks/redis/api/sync/package-info.java b/src/main/java/com/lambdaworks/redis/api/sync/package-info.java deleted file mode 100644 index 25444955f5..0000000000 --- a/src/main/java/com/lambdaworks/redis/api/sync/package-info.java +++ /dev/null @@ -1,4 +0,0 @@ -/** - * Standalone Redis API for synchronous executed commands. - */ -package com.lambdaworks.redis.api.sync; \ No newline at end of file diff --git a/src/main/java/com/lambdaworks/redis/cluster/AbstractNodeSelection.java b/src/main/java/com/lambdaworks/redis/cluster/AbstractNodeSelection.java deleted file mode 100644 index 9f83712db3..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/AbstractNodeSelection.java +++ /dev/null @@ -1,60 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import java.util.List; -import java.util.Map; -import java.util.stream.Collectors; - -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.cluster.api.NodeSelectionSupport; -import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection; -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; - -/** - * Abstract base class to support node selections. A node selection represents a set of Redis Cluster nodes and allows command - * execution on the selected cluster nodes. - * - * @param API type. - * @param Command command interface type to invoke multi-node operations. - * @param Key type. - * @param Value type. 
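The transactional commands above follow the plain Redis protocol flow: optionally WATCH keys, open the transaction with MULTI, queue commands, then EXEC (or DISCARD to abandon it). A sketch of that flow on the synchronous API, under the same assumptions as the earlier examples; note that the direct return values of commands queued between MULTI and EXEC are not meaningful, since the real replies arrive with EXEC.

```java
import java.util.List;

import com.lambdaworks.redis.api.sync.RedisCommands;

class TransactionExample {

    static void transfer(RedisCommands<String, String> commands) {
        // Abort the transaction if someone else touches the balance key in the meantime.
        commands.watch("balance:alice");

        commands.multi();
        commands.decrby("balance:alice", 10);
        commands.incrby("balance:bob", 10);

        // EXEC returns one reply per queued command; after a WATCH conflict the transaction is aborted.
        List<Object> replies = commands.exec();
        System.out.println("transaction replies: " + replies);
    }
}
```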
- * @author Mark Paluch - */ -abstract class AbstractNodeSelection implements NodeSelectionSupport { - - protected StatefulRedisClusterConnection globalConnection; - private ClusterConnectionProvider.Intent intent; - protected ClusterDistributionChannelWriter writer; - - public AbstractNodeSelection(StatefulRedisClusterConnection globalConnection, ClusterConnectionProvider.Intent intent) { - this.globalConnection = globalConnection; - this.intent = intent; - writer = ((StatefulRedisClusterConnectionImpl) globalConnection).getClusterDistributionChannelWriter(); - } - - protected StatefulRedisConnection getConnection(RedisClusterNode redisClusterNode) { - RedisURI uri = redisClusterNode.getUri(); - return writer.getClusterConnectionProvider().getConnection(intent, uri.getHost(), uri.getPort()); - } - - /** - * @return List of involved nodes - */ - protected abstract List nodes(); - - @Override - public int size() { - return nodes().size(); - } - - public Map> statefulMap() { - return nodes().stream().collect( - Collectors.toMap(redisClusterNode -> redisClusterNode, redisClusterNode1 -> getConnection(redisClusterNode1))); - } - - @Override - public RedisClusterNode node(int index) { - return nodes().get(index); - } - -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/AsyncExecutionsImpl.java b/src/main/java/com/lambdaworks/redis/cluster/AsyncExecutionsImpl.java deleted file mode 100644 index 69ecbef51c..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/AsyncExecutionsImpl.java +++ /dev/null @@ -1,44 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import java.util.Collection; -import java.util.Collections; -import java.util.HashMap; -import java.util.Map; -import java.util.concurrent.CompletableFuture; -import java.util.concurrent.CompletionStage; - -import com.lambdaworks.redis.cluster.api.async.AsyncExecutions; -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; - -/** - * @author Mark Paluch - */ -class AsyncExecutionsImpl implements AsyncExecutions { - - private Map> executions; - - public AsyncExecutionsImpl(Map> executions) { - this.executions = Collections.unmodifiableMap(new HashMap<>(executions)); - } - - @Override - public Map> asMap() { - return executions; - } - - @Override - public Collection nodes() { - return executions.keySet(); - } - - @Override - public CompletionStage get(RedisClusterNode redisClusterNode) { - return executions.get(redisClusterNode); - } - - @Override - @SuppressWarnings("rawtypes") - public CompletableFuture[] futures() { - return executions.values().toArray(new CompletableFuture[executions.size()]); - } -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/ClusterClientOptions.java b/src/main/java/com/lambdaworks/redis/cluster/ClusterClientOptions.java deleted file mode 100644 index c3150c832d..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/ClusterClientOptions.java +++ /dev/null @@ -1,310 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import java.util.concurrent.TimeUnit; - -import com.lambdaworks.redis.ClientOptions; -import com.lambdaworks.redis.SocketOptions; - -/** - * Client Options to control the behavior of {@link RedisClusterClient}. 
- * - * @author Mark Paluch - */ -public class ClusterClientOptions extends ClientOptions { - - public static final boolean DEFAULT_REFRESH_CLUSTER_VIEW = false; - public static final long DEFAULT_REFRESH_PERIOD = 60; - public static final TimeUnit DEFAULT_REFRESH_PERIOD_UNIT = TimeUnit.SECONDS; - public static final boolean DEFAULT_CLOSE_STALE_CONNECTIONS = true; - public static final boolean DEFAULT_VALIDATE_CLUSTER_MEMBERSHIP = true; - public static final int DEFAULT_MAX_REDIRECTS = 5; - - private final boolean validateClusterNodeMembership; - private final int maxRedirects; - private final ClusterTopologyRefreshOptions topologyRefreshOptions; - - protected ClusterClientOptions(Builder builder) { - - super(builder); - - this.validateClusterNodeMembership = builder.validateClusterNodeMembership; - this.maxRedirects = builder.maxRedirects; - - ClusterTopologyRefreshOptions refreshOptions = builder.topologyRefreshOptions; - - if (refreshOptions == null) { - refreshOptions = new ClusterTopologyRefreshOptions.Builder()// - .enablePeriodicRefresh(builder.refreshClusterView)// - .refreshPeriod(builder.refreshPeriod, builder.refreshPeriodUnit)// - .closeStaleConnections(builder.closeStaleConnections)// - .build(); - } - - this.topologyRefreshOptions = refreshOptions; - } - - protected ClusterClientOptions(ClusterClientOptions original) { - - super(original); - - this.validateClusterNodeMembership = original.validateClusterNodeMembership; - this.maxRedirects = original.maxRedirects; - this.topologyRefreshOptions = original.topologyRefreshOptions; - } - - /** - * Create a copy of {@literal options}. - * - * @param options the original - * @return A new instance of {@link ClusterClientOptions} containing the values of {@literal options} - */ - public static ClusterClientOptions copyOf(ClusterClientOptions options) { - return new ClusterClientOptions(options); - } - - /** - * Returns a new {@link ClusterClientOptions.Builder} to construct {@link ClusterClientOptions}. - * - * @return a new {@link ClusterClientOptions.Builder} to construct {@link ClusterClientOptions}. - */ - public static ClusterClientOptions.Builder builder() { - return new ClusterClientOptions.Builder(); - } - - /** - * Create a new {@link ClusterClientOptions} using default settings. - * - * @return a new instance of default cluster client client options. - */ - public static ClusterClientOptions create() { - return builder().build(); - } - - /** - * Builder for {@link ClusterClientOptions}. - */ - public static class Builder extends ClientOptions.Builder { - - private boolean refreshClusterView = DEFAULT_REFRESH_CLUSTER_VIEW; - private long refreshPeriod = DEFAULT_REFRESH_PERIOD; - private TimeUnit refreshPeriodUnit = DEFAULT_REFRESH_PERIOD_UNIT; - private boolean closeStaleConnections = DEFAULT_CLOSE_STALE_CONNECTIONS; - private boolean validateClusterNodeMembership = DEFAULT_VALIDATE_CLUSTER_MEMBERSHIP; - private int maxRedirects = DEFAULT_MAX_REDIRECTS; - private ClusterTopologyRefreshOptions topologyRefreshOptions = null; - - /** - * @deprecated Use {@link ClusterClientOptions#builder()} - */ - @Deprecated - public Builder() { - } - - /** - * Enable regular cluster topology updates. The client starts updating the cluster topology in the intervals of - * {@link Builder#refreshPeriod} /{@link Builder#refreshPeriodUnit}. Defaults to {@literal false}. See - * {@link #DEFAULT_REFRESH_CLUSTER_VIEW}. 
- * - * @param refreshClusterView {@literal true} enable regular cluster topology updates or {@literal false} to disable - * auto-updating - * @return {@code this} - * @deprecated Use {@link #topologyRefreshOptions}, see - * {@link com.lambdaworks.redis.cluster.ClusterTopologyRefreshOptions.Builder#enablePeriodicRefresh(boolean)} - */ - @Deprecated - public Builder refreshClusterView(boolean refreshClusterView) { - this.refreshClusterView = refreshClusterView; - return this; - } - - /** - * Set the refresh period. Defaults to {@literal 60 SECONDS}. See {@link #DEFAULT_REFRESH_PERIOD} and - * {@link #DEFAULT_REFRESH_PERIOD_UNIT}. - * - * @param refreshPeriod period for triggering topology updates - * @param refreshPeriodUnit unit for {@code refreshPeriod} - * @return {@code this} - * @deprecated Use {@link #topologyRefreshOptions}, see - * {@link com.lambdaworks.redis.cluster.ClusterTopologyRefreshOptions.Builder#refreshPeriod(long, TimeUnit)} - */ - @Deprecated - public Builder refreshPeriod(long refreshPeriod, TimeUnit refreshPeriodUnit) { - this.refreshPeriod = refreshPeriod; - this.refreshPeriodUnit = refreshPeriodUnit; - return this; - } - - /** - * Flag, whether to close stale connections when refreshing the cluster topology. Defaults to {@literal true}. Comes - * only into effect if {@link #isRefreshClusterView()} is {@literal true}. See - * {@link ClusterClientOptions#DEFAULT_CLOSE_STALE_CONNECTIONS}. - * - * @param closeStaleConnections {@literal true} if stale connections are cleaned up after cluster topology updates - * @return {@code this} - * @deprecated Use {@link #topologyRefreshOptions}, see - * {@link com.lambdaworks.redis.cluster.ClusterTopologyRefreshOptions.Builder#closeStaleConnections(boolean)} - */ - @Deprecated - public Builder closeStaleConnections(boolean closeStaleConnections) { - this.closeStaleConnections = closeStaleConnections; - return this; - } - - /** - * Validate the cluster node membership before allowing connections to a cluster node. Defaults to {@literal true}. See - * {@link ClusterClientOptions#DEFAULT_VALIDATE_CLUSTER_MEMBERSHIP}. - * - * @param validateClusterNodeMembership {@literal true} if validation is enabled. - * @return {@code this} - */ - public Builder validateClusterNodeMembership(boolean validateClusterNodeMembership) { - this.validateClusterNodeMembership = validateClusterNodeMembership; - return this; - } - - /** - * Number of maximal cluster redirects ({@literal -MOVED} and {@literal -ASK}) to follow in case a key was moved from - * one node to another node. Defaults to {@literal 5}. See {@link ClusterClientOptions#DEFAULT_MAX_REDIRECTS}. - * - * @param maxRedirects the limit of maximal cluster redirects - * @return {@code this} - */ - public Builder maxRedirects(int maxRedirects) { - this.maxRedirects = maxRedirects; - return this; - } - - /** - * Sets the {@link ClusterTopologyRefreshOptions} for detailed control of topology updates. 
- * - * @param topologyRefreshOptions the {@link ClusterTopologyRefreshOptions} - * @return {@code this} - */ - public Builder topologyRefreshOptions(ClusterTopologyRefreshOptions topologyRefreshOptions) { - this.topologyRefreshOptions = topologyRefreshOptions; - return this; - } - - @Override - public Builder pingBeforeActivateConnection(boolean pingBeforeActivateConnection) { - super.pingBeforeActivateConnection(pingBeforeActivateConnection); - return this; - } - - @Override - public Builder autoReconnect(boolean autoReconnect) { - super.autoReconnect(autoReconnect); - return this; - } - - @Override - public Builder suspendReconnectOnProtocolFailure(boolean suspendReconnectOnProtocolFailure) { - super.suspendReconnectOnProtocolFailure(suspendReconnectOnProtocolFailure); - return this; - } - - @Override - public Builder cancelCommandsOnReconnectFailure(boolean cancelCommandsOnReconnectFailure) { - super.cancelCommandsOnReconnectFailure(cancelCommandsOnReconnectFailure); - return this; - } - - @Override - public Builder requestQueueSize(int requestQueueSize) { - super.requestQueueSize(requestQueueSize); - return this; - } - - @Override - public Builder disconnectedBehavior(DisconnectedBehavior disconnectedBehavior) { - super.disconnectedBehavior(disconnectedBehavior); - return this; - } - - @Override - public ClientOptions.Builder socketOptions(SocketOptions socketOptions) { - super.socketOptions(socketOptions); - return this; - } - - /** - * Create a new instance of {@link ClusterClientOptions} - * - * @return new instance of {@link ClusterClientOptions} - */ - public ClusterClientOptions build() { - return new ClusterClientOptions(this); - } - } - - /** - * Flag, whether regular cluster topology updates are updated. The client starts updating the cluster topology in the - * intervals of {@link #getRefreshPeriod()} /{@link #getRefreshPeriodUnit()}. Defaults to {@literal false}. Returns the - * value from {@link ClusterTopologyRefreshOptions} if provided. - * - * @return {@literal true} it the cluster topology view is updated periodically - */ - public boolean isRefreshClusterView() { - return topologyRefreshOptions.isPeriodicRefreshEnabled(); - } - - /** - * Period between the regular cluster topology updates. Defaults to {@literal 60}. Returns the value from - * {@link ClusterTopologyRefreshOptions} if provided. - * - * @return the period between the regular cluster topology updates - */ - public long getRefreshPeriod() { - return topologyRefreshOptions.getRefreshPeriod(); - } - - /** - * Unit for the {@link #getRefreshPeriod()}. Defaults to {@link TimeUnit#SECONDS}. Returns the value from - * {@link ClusterTopologyRefreshOptions} if provided. - * - * @return unit for the {@link #getRefreshPeriod()} - */ - public TimeUnit getRefreshPeriodUnit() { - return topologyRefreshOptions.getRefreshPeriodUnit(); - } - - /** - * Flag, whether to close stale connections when refreshing the cluster topology. Defaults to {@literal true}. Comes only - * into effect if {@link #isRefreshClusterView()} is {@literal true}. Returns the value from - * {@link ClusterTopologyRefreshOptions} if provided. - * - * @return {@literal true} if stale connections are cleaned up after cluster topology updates - */ - public boolean isCloseStaleConnections() { - return topologyRefreshOptions.isCloseStaleConnections(); - } - - /** - * Validate the cluster node membership before allowing connections to a cluster node. Defaults to {@literal true}. - * - * @return {@literal true} if validation is enabled. 
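Putting the builder together: periodic topology refresh, stale-connection cleanup and the redirect limit are configured through `ClusterClientOptions` and the nested `ClusterTopologyRefreshOptions`, then applied to the cluster client. A sketch assuming a reachable node at `redis://localhost:7379` and a `setOptions` call on `RedisClusterClient`; the values shown are illustrative, not recommendations.

```java
import java.util.concurrent.TimeUnit;

import com.lambdaworks.redis.cluster.ClusterClientOptions;
import com.lambdaworks.redis.cluster.ClusterTopologyRefreshOptions;
import com.lambdaworks.redis.cluster.RedisClusterClient;

class ClusterOptionsExample {

    static RedisClusterClient configuredClient() {
        ClusterTopologyRefreshOptions refreshOptions = new ClusterTopologyRefreshOptions.Builder()
                .enablePeriodicRefresh(true)               // poll the topology in the background
                .refreshPeriod(30, TimeUnit.SECONDS)
                .closeStaleConnections(true)               // drop connections to nodes that left the cluster
                .build();

        ClusterClientOptions options = ClusterClientOptions.builder()
                .topologyRefreshOptions(refreshOptions)
                .maxRedirects(5)                           // follow at most 5 -MOVED/-ASK redirects
                .validateClusterNodeMembership(true)
                .build();

        RedisClusterClient client = RedisClusterClient.create("redis://localhost:7379");
        client.setOptions(options);
        return client;
    }
}
```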
- */ - public boolean isValidateClusterNodeMembership() { - return validateClusterNodeMembership; - } - - /** - * Number of maximal of cluster redirects ({@literal -MOVED} and {@literal -ASK}) to follow in case a key was moved from one - * node to another node. Defaults to {@literal 5}. See {@link ClusterClientOptions#DEFAULT_MAX_REDIRECTS}. - * - * @return the maximal number of followed cluster redirects - */ - public int getMaxRedirects() { - return maxRedirects; - } - - /** - * The {@link ClusterTopologyRefreshOptions} for detailed control of topology updates. - * - * @return the {@link ClusterTopologyRefreshOptions}. - */ - public ClusterTopologyRefreshOptions getTopologyRefreshOptions() { - return topologyRefreshOptions; - } - -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/ClusterCommand.java b/src/main/java/com/lambdaworks/redis/cluster/ClusterCommand.java deleted file mode 100644 index ba0bbf59e1..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/ClusterCommand.java +++ /dev/null @@ -1,118 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import com.lambdaworks.redis.RedisChannelWriter; -import com.lambdaworks.redis.protocol.CommandArgs; -import com.lambdaworks.redis.protocol.CommandKeyword; -import com.lambdaworks.redis.protocol.CommandWrapper; -import com.lambdaworks.redis.protocol.ProtocolKeyword; -import com.lambdaworks.redis.protocol.RedisCommand; -import io.netty.buffer.ByteBuf; - -/** - * @author Mark Paluch - * @since 3.0 - */ -class ClusterCommand extends CommandWrapper implements RedisCommand { - - private RedisChannelWriter retry; - private int redirections; - private int maxRedirections; - private boolean completed; - - /** - * - * @param command - * @param retry - * @param maxRedirections - */ - ClusterCommand(RedisCommand command, RedisChannelWriter retry, int maxRedirections) { - super(command); - this.retry = retry; - this.maxRedirections = maxRedirections; - } - - @Override - public void complete() { - - if (isMoved() || isAsk()) { - - boolean retryCommand = maxRedirections > redirections; - redirections++; - - if(retryCommand) { - try { - retry.write(this); - } catch (Exception e) { - completeExceptionally(e); - } - return; - } - } - super.complete(); - completed = true; - } - - public boolean isMoved() { - if (command.getOutput() != null && command.getOutput().getError() != null - && command.getOutput().getError().startsWith(CommandKeyword.MOVED.name())) { - return true; - } - return false; - } - - public boolean isAsk() { - if (getError() != null && getError().startsWith(CommandKeyword.ASK.name())) { - return true; - } - return false; - } - - @Override - public CommandArgs getArgs() { - return command.getArgs(); - } - - @Override - public void encode(ByteBuf buf) { - command.encode(buf); - } - - @Override - public boolean completeExceptionally(Throwable ex) { - boolean result = command.completeExceptionally(ex); - completed = true; - return result; - } - - @Override - public ProtocolKeyword getType() { - return command.getType(); - } - - public boolean isCompleted() { - return completed; - } - - @Override - public boolean isDone() { - return isCompleted(); - } - - public String getError() { - if (command.getOutput() != null) { - return command.getOutput().getError(); - } - return null; - } - - @Override - public String toString() { - final StringBuilder sb = new StringBuilder(); - sb.append(getClass().getSimpleName()); - sb.append(" [command=").append(command); - sb.append(", redirections=").append(redirections); - sb.append(", 
maxRedirections=").append(maxRedirections); - sb.append(']'); - return sb.toString(); - } -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/ClusterConnectionProvider.java b/src/main/java/com/lambdaworks/redis/cluster/ClusterConnectionProvider.java deleted file mode 100644 index a27f803ad8..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/ClusterConnectionProvider.java +++ /dev/null @@ -1,114 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import java.io.Closeable; - -import com.lambdaworks.redis.ReadFrom; -import com.lambdaworks.redis.RedisException; -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.cluster.models.partitions.Partitions; - -/** - * Connection provider for cluster operations. - * - * @author Mark Paluch - * @since 3.0 - */ -interface ClusterConnectionProvider extends Closeable { - - /** - * Provide a connection for the intent and cluster slot. The underlying connection is bound to the nodeId. If the slot - * responsibility changes, the connection will not point to the updated nodeId. - * - * @param intent {@link com.lambdaworks.redis.cluster.ClusterConnectionProvider.Intent#READ} or - * {@link com.lambdaworks.redis.cluster.ClusterConnectionProvider.Intent#WRITE} {@literal READ} connections will be - * provided in {@literal READONLY} mode - * @param slot the slot-hash of the key, see {@link SlotHash} - * @return a valid connection which handles the slot. - * @throws RedisException if no know node can be found for the slot - */ - StatefulRedisConnection getConnection(Intent intent, int slot); - - /** - * Provide a connection for the intent and host/port. The connection can survive cluster topology updates. The connection * - * will be closed if the node identified by {@code host} and {@code port} is no longer part of the cluster. - * - * @param intent {@link com.lambdaworks.redis.cluster.ClusterConnectionProvider.Intent#READ} or - * {@link com.lambdaworks.redis.cluster.ClusterConnectionProvider.Intent#WRITE} {@literal READ} connections will be - * provided in {@literal READONLY} mode - * @param host host of the node - * @param port port of the node - * @return a valid connection to the given host. - * @throws RedisException if the host is not part of the cluster - */ - StatefulRedisConnection getConnection(Intent intent, String host, int port); - - /** - * Provide a connection for the intent and nodeId. The connection can survive cluster topology updates. The connection will - * be closed if the node identified by {@code nodeId} is no longer part of the cluster. - * - * - * @param intent Connection intent {@literal READ} or {@literal WRITE} - * @param nodeId the nodeId of the cluster node - * @return a valid connection to the given nodeId. - * @throws RedisException if the {@code nodeId} is not part of the cluster - */ - StatefulRedisConnection getConnection(Intent intent, String nodeId); - - /** - * Close the connections and free all resources. - */ - @Override - void close(); - - /** - * Reset the writer state. Queued commands will be canceled and the internal state will be reset. This is useful when the - * internal state machine gets out of sync with the connection. - */ - void reset(); - - /** - * Close connections that are not in use anymore/not part of the cluster. - */ - void closeStaleConnections(); - - /** - * Update partitions. - * - * @param partitions the new partitions - */ - void setPartitions(Partitions partitions); - - /** - * Disable or enable auto-flush behavior. Default is {@literal true}. 
If autoFlushCommands is disabled, multiple commands - * can be issued without writing them actually to the transport. Commands are buffered until a {@link #flushCommands()} is - * issued. After calling {@link #flushCommands()} commands are sent to the transport and executed by Redis. - * - * @param autoFlush state of autoFlush. - */ - void setAutoFlushCommands(boolean autoFlush); - - /** - * Flush pending commands. This commands forces a flush on the channel and can be used to buffer ("pipeline") commands to - * achieve batching. No-op if channel is not connected. - */ - void flushCommands(); - - /** - * Set from which nodes data is read. The setting is used as default for read operations on this connection. See the - * documentation for {@link ReadFrom} for more information. - * - * @param readFrom the read from setting, must not be {@literal null} - */ - void setReadFrom(ReadFrom readFrom); - - /** - * Gets the {@link ReadFrom} setting for this connection. Defaults to {@link ReadFrom#MASTER} if not set. - * - * @return the read from setting - */ - ReadFrom getReadFrom(); - - enum Intent { - READ, WRITE; - } -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/ClusterDistributionChannelWriter.java b/src/main/java/com/lambdaworks/redis/cluster/ClusterDistributionChannelWriter.java deleted file mode 100644 index 15415dbee4..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/ClusterDistributionChannelWriter.java +++ /dev/null @@ -1,247 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import static com.lambdaworks.redis.cluster.SlotHash.getSlot; - -import com.lambdaworks.redis.*; -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.cluster.models.partitions.Partitions; -import com.lambdaworks.redis.internal.HostAndPort; -import com.lambdaworks.redis.internal.LettuceAssert; -import com.lambdaworks.redis.protocol.CommandArgs; -import com.lambdaworks.redis.protocol.CommandKeyword; -import com.lambdaworks.redis.protocol.ProtocolKeyword; -import com.lambdaworks.redis.protocol.RedisCommand; - -import io.netty.util.concurrent.EventExecutorGroup; - -/** - * Channel writer for cluster operation. This writer looks up the right partition by hash/slot for the operation. - * - * @param Key type. - * @param Value type. 
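As a side note on the hash/slot lookup this writer performs: keys map to one of 16384 slots via a CRC16-based hash, and hash tags in braces pin related keys to the same slot. A small sketch using the `SlotHash` helper follows; the exact `byte[]` overload is an assumption based on how the writer uses it.

```java
import com.lambdaworks.redis.cluster.SlotHash;

public class SlotHashExample {

    public static void main(String[] args) {

        // Plain keys hash over the whole key.
        int slot = SlotHash.getSlot("user:1000".getBytes());

        // With a hash tag, only the part inside {} is hashed, so both keys land on the same slot.
        int tagged = SlotHash.getSlot("{user:1000}.followers".getBytes());

        System.out.println(slot + " == " + tagged);
    }
}
```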
- * @author Mark Paluch - * @since 3.0 - */ -class ClusterDistributionChannelWriter implements RedisChannelWriter { - - private final RedisChannelWriter defaultWriter; - private final ClusterEventListener clusterEventListener; - private final EventExecutorGroup eventExecutors; - private final int executionLimit; - private ClusterConnectionProvider clusterConnectionProvider; - private boolean closed = false; - - long p20, p21, p22, p23, p24, p25, p26; - long p30, p31, p32, p33, p34, p35, p36, p37; - - ClusterDistributionChannelWriter(ClientOptions clientOptions, RedisChannelWriter defaultWriter, - ClusterEventListener clusterEventListener, EventExecutorGroup eventExecutors) { - - if (clientOptions instanceof ClusterClientOptions) { - this.executionLimit = ((ClusterClientOptions) clientOptions).getMaxRedirects(); - } else { - this.executionLimit = 5; - } - - this.defaultWriter = defaultWriter; - this.clusterEventListener = clusterEventListener; - this.eventExecutors = eventExecutors; - } - - @Override - @SuppressWarnings("unchecked") - public > C write(C command) { - - LettuceAssert.notNull(command, "Command must not be null"); - - if (closed) { - throw new RedisException("Connection is closed"); - } - - RedisCommand commandToSend = command; - CommandArgs args = command.getArgs(); - - if (!(command instanceof ClusterCommand)) { - commandToSend = new ClusterCommand<>(command, this, executionLimit); - } - - - if (commandToSend instanceof ClusterCommand && !commandToSend.isDone()) { - - ClusterCommand clusterCommand = (ClusterCommand) commandToSend; - if (clusterCommand.isMoved() || clusterCommand.isAsk()) { - - HostAndPort target; - boolean asking; - if (clusterCommand.isMoved()) { - target = getMoveTarget(clusterCommand.getError()); - clusterEventListener.onMovedRedirection(); - asking = false; - } else { - target = getAskTarget(clusterCommand.getError()); - asking = true; - clusterEventListener.onAskRedirection(); - } - - commandToSend.getOutput().setError((String) null); - - eventExecutors.submit(() -> { - - try { - RedisChannelHandler connection = (RedisChannelHandler) clusterConnectionProvider - .getConnection(ClusterConnectionProvider.Intent.WRITE, target.getHostText(), target.getPort()); - - if (asking) { - // set asking bit - StatefulRedisConnection statefulRedisConnection = (StatefulRedisConnection) connection; - statefulRedisConnection.async().asking(); - } - - connection.getChannelWriter().write(command); - } catch (Exception e) { - command.completeExceptionally(e); - } - }); - - return command; - } - } - - RedisChannelWriter channelWriter = null; - - if (args != null && args.getFirstEncodedKey() != null) { - int hash = getSlot(args.getFirstEncodedKey()); - ClusterConnectionProvider.Intent intent = getIntent(command.getType()); - - RedisChannelHandler connection = (RedisChannelHandler) clusterConnectionProvider.getConnection(intent, - hash); - - channelWriter = connection.getChannelWriter(); - } - - if (channelWriter instanceof ClusterDistributionChannelWriter) { - ClusterDistributionChannelWriter writer = (ClusterDistributionChannelWriter) channelWriter; - channelWriter = writer.defaultWriter; - } - - if (command.getOutput() != null) { - commandToSend.getOutput().setError((String) null); - } - - if (channelWriter != null && channelWriter != this && channelWriter != defaultWriter) { - return channelWriter.write((C) commandToSend); - } - - defaultWriter.write((C) commandToSend); - - return command; - } - - private ClusterConnectionProvider.Intent getIntent(ProtocolKeyword type) { - 
for (ProtocolKeyword readOnlyCommand : ReadOnlyCommands.READ_ONLY_COMMANDS) { - if (readOnlyCommand == type) { - return ClusterConnectionProvider.Intent.READ; - } - } - - return ClusterConnectionProvider.Intent.WRITE; - } - - static HostAndPort getMoveTarget(String errorMessage) { - - LettuceAssert.notEmpty(errorMessage, "ErrorMessage must not be empty"); - LettuceAssert.isTrue(errorMessage.startsWith(CommandKeyword.MOVED.name()), - "ErrorMessage must start with " + CommandKeyword.MOVED); - - String[] movedMessageParts = errorMessage.split(" "); - LettuceAssert.isTrue(movedMessageParts.length >= 3, "ErrorMessage must consist of 3 tokens (" + errorMessage + ")"); - - return HostAndPort.parseCompat(movedMessageParts[2]); - } - - static HostAndPort getAskTarget(String errorMessage) { - - LettuceAssert.notEmpty(errorMessage, "ErrorMessage must not be empty"); - LettuceAssert.isTrue(errorMessage.startsWith(CommandKeyword.ASK.name()), - "ErrorMessage must start with " + CommandKeyword.ASK); - - String[] movedMessageParts = errorMessage.split(" "); - LettuceAssert.isTrue(movedMessageParts.length >= 3, "ErrorMessage must consist of 3 tokens (" + errorMessage + ")"); - - return HostAndPort.parseCompat(movedMessageParts[2]); - } - - @Override - public void close() { - - if (closed) { - return; - } - - closed = true; - - if (defaultWriter != null) { - defaultWriter.close(); - } - - if (clusterConnectionProvider != null) { - clusterConnectionProvider.close(); - clusterConnectionProvider = null; - } - - } - - @Override - public void setRedisChannelHandler(RedisChannelHandler redisChannelHandler) { - defaultWriter.setRedisChannelHandler(redisChannelHandler); - } - - @Override - public void setAutoFlushCommands(boolean autoFlush) { - getClusterConnectionProvider().setAutoFlushCommands(autoFlush); - } - - @Override - public void flushCommands() { - getClusterConnectionProvider().flushCommands(); - } - - public ClusterConnectionProvider getClusterConnectionProvider() { - return clusterConnectionProvider; - } - - @Override - public void reset() { - defaultWriter.reset(); - clusterConnectionProvider.reset(); - } - - public void setClusterConnectionProvider(ClusterConnectionProvider clusterConnectionProvider) { - this.clusterConnectionProvider = clusterConnectionProvider; - } - - public void setPartitions(Partitions partitions) { - if (clusterConnectionProvider != null) { - clusterConnectionProvider.setPartitions(partitions); - } - } - - /** - * Set from which nodes data is read. The setting is used as default for read operations on this connection. See the - * documentation for {@link ReadFrom} for more information. - * - * @param readFrom the read from setting, must not be {@literal null} - */ - public void setReadFrom(ReadFrom readFrom) { - clusterConnectionProvider.setReadFrom(readFrom); - } - - /** - * Gets the {@link ReadFrom} setting for this connection. Defaults to {@link ReadFrom#MASTER} if not set. 
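From the application side, the read-from behavior routed through this writer is set on the cluster connection. A hedged sketch, assuming the `ReadFrom` constants of this API generation (for example `ReadFrom.SLAVE` and `ReadFrom.NEAREST`):

```java
import com.lambdaworks.redis.ReadFrom;
import com.lambdaworks.redis.cluster.RedisClusterClient;
import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection;

public class ReadFromExample {

    public static void main(String[] args) {

        RedisClusterClient clusterClient = RedisClusterClient.create("redis://localhost:7000");
        StatefulRedisClusterConnection<String, String> connection = clusterClient.connect();

        // Route read-only commands (see ReadOnlyCommands) to slaves; writes still go to masters.
        connection.setReadFrom(ReadFrom.SLAVE);

        String value = connection.sync().get("key:1");   // may be served by a replica
        System.out.println(value);

        connection.close();
        clusterClient.shutdown();
    }
}
```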
- * - * @return the read from setting - */ - public ReadFrom getReadFrom() { - return clusterConnectionProvider.getReadFrom(); - } -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/ClusterEventListener.java b/src/main/java/com/lambdaworks/redis/cluster/ClusterEventListener.java deleted file mode 100644 index 852779784f..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/ClusterEventListener.java +++ /dev/null @@ -1,30 +0,0 @@ -package com.lambdaworks.redis.cluster; - -/** - * @author Mark Paluch - */ -interface ClusterEventListener { - - void onAskRedirection(); - - void onMovedRedirection(); - - void onReconnection(int attempt); - - static ClusterEventListener NO_OP = new ClusterEventListener() { - @Override - public void onAskRedirection() { - - } - - @Override - public void onMovedRedirection() { - - } - - @Override - public void onReconnection(int attempt) { - - } - }; -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/ClusterNodeCommandHandler.java b/src/main/java/com/lambdaworks/redis/cluster/ClusterNodeCommandHandler.java deleted file mode 100644 index dc28064f2b..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/ClusterNodeCommandHandler.java +++ /dev/null @@ -1,131 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import java.util.ArrayList; -import java.util.Collection; -import java.util.Queue; -import java.util.Set; - -import com.lambdaworks.redis.ClientOptions; -import com.lambdaworks.redis.RedisChannelWriter; -import com.lambdaworks.redis.RedisException; -import com.lambdaworks.redis.internal.LettuceSets; -import com.lambdaworks.redis.protocol.CommandHandler; -import com.lambdaworks.redis.protocol.ConnectionWatchdog; -import com.lambdaworks.redis.protocol.RedisCommand; -import com.lambdaworks.redis.resource.ClientResources; - -import io.netty.channel.ChannelHandler; -import io.netty.util.internal.logging.InternalLogger; -import io.netty.util.internal.logging.InternalLoggerFactory; - -/** - * Command handler for node connections within the Redis Cluster context. This handler can requeue commands if it is - * disconnected and closed but has commands in the queue. If the handler was connected it would retry commands using the - * {@literal MOVED} or {@literal ASK} redirection. - * - * @author Mark Paluch - */ -@ChannelHandler.Sharable -class ClusterNodeCommandHandler extends CommandHandler { - - private static final InternalLogger logger = InternalLoggerFactory.getInstance(ClusterNodeCommandHandler.class); - private static final Set CHANNEL_OPEN_STATES = LettuceSets.unmodifiableSet(LifecycleState.ACTIVATING, - LifecycleState.ACTIVE, LifecycleState.CONNECTED); - - private final RedisChannelWriter clusterChannelWriter; - - /** - * Initialize a new instance that handles commands from the supplied queue. - * - * @param clientOptions client options for this connection - * @param clientResources client resources for this connection - * @param queue The command queue - * @param clusterChannelWriter top-most channel writer. - */ - public ClusterNodeCommandHandler(ClientOptions clientOptions, ClientResources clientResources, - Queue> queue, RedisChannelWriter clusterChannelWriter) { - super(clientOptions, clientResources, queue); - this.clusterChannelWriter = clusterChannelWriter; - } - - /** - * Prepare the closing of the channel. 
- */ - public void prepareClose() { - if (channel != null) { - ConnectionWatchdog connectionWatchdog = channel.pipeline().get(ConnectionWatchdog.class); - if (connectionWatchdog != null) { - connectionWatchdog.setReconnectSuspended(true); - } - } - } - - /** - * Move queued and buffered commands from the inactive connection to the master command writer. This is done only if the - * current connection is disconnected and auto-reconnect is enabled (command-retries). If the connection would be open, we - * could get into a race that the commands we're moving are right now in processing. Alive connections can handle redirects - * and retries on their own. - */ - @Override - public void close() { - - logger.debug("{} close()", logPrefix()); - - if (clusterChannelWriter != null) { - - if (isAutoReconnect() && !CHANNEL_OPEN_STATES.contains(getState())) { - - Collection> commands = shiftCommands(queue); - retriggerCommands(commands); - } - - Collection> commands = shiftCommands(commandBuffer); - retriggerCommands(commands); - } - - super.close(); - } - - protected void retriggerCommands(Collection> commands) { - - for (RedisCommand queuedCommand : commands) { - - if (queuedCommand == null || queuedCommand.isCancelled()) { - continue; - } - - try { - clusterChannelWriter.write(queuedCommand); - } catch (RedisException e) { - queuedCommand.completeExceptionally(e); - } - } - } - - /** - * Retrieve commands within a lock to prevent concurrent modification - */ - private Collection> shiftCommands(Collection> source) { - - synchronized (stateLock) { - - try { - - lockWritersExclusive(); - - try { - return new ArrayList<>(source); - } finally { - source.clear(); - } - - } finally { - unlockWritersExclusive(); - } - } - } - - public boolean isAutoReconnect() { - return clientOptions.isAutoReconnect(); - } -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/ClusterTopologyRefreshScheduler.java b/src/main/java/com/lambdaworks/redis/cluster/ClusterTopologyRefreshScheduler.java deleted file mode 100644 index 07c963ded7..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/ClusterTopologyRefreshScheduler.java +++ /dev/null @@ -1,186 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import java.util.concurrent.TimeUnit; -import java.util.concurrent.atomic.AtomicReference; - -import com.lambdaworks.redis.resource.ClientResources; - -import io.netty.util.concurrent.EventExecutorGroup; -import io.netty.util.internal.logging.InternalLogger; -import io.netty.util.internal.logging.InternalLoggerFactory; - -/** - * @author Mark Paluch - */ -class ClusterTopologyRefreshScheduler implements Runnable, ClusterEventListener { - - private static final InternalLogger logger = InternalLoggerFactory.getInstance(ClusterTopologyRefreshScheduler.class); - private static final ClusterTopologyRefreshOptions FALLBACK_OPTIONS = ClusterTopologyRefreshOptions.create(); - - private final RedisClusterClient redisClusterClient; - private final ClientResources clientResources; - private final ClusterTopologyRefreshTask clusterTopologyRefreshTask; - private final AtomicReference timeoutRef = new AtomicReference<>(); - - ClusterTopologyRefreshScheduler(RedisClusterClient redisClusterClient, ClientResources clientResources) { - - this.redisClusterClient = redisClusterClient; - this.clientResources = clientResources; - this.clusterTopologyRefreshTask = new ClusterTopologyRefreshTask(redisClusterClient); - } - - @Override - public void run() { - - logger.debug("ClusterTopologyRefreshScheduler.run()"); - - if 
(isEventLoopActive() && redisClusterClient.getClusterClientOptions() != null) { - if (!redisClusterClient.getClusterClientOptions().isRefreshClusterView()) { - logger.debug("Periodic ClusterTopologyRefresh is disabled"); - return; - } - } else { - logger.debug("Periodic ClusterTopologyRefresh is disabled"); - return; - } - - clientResources.eventExecutorGroup().submit(clusterTopologyRefreshTask); - } - - private void indicateTopologyRefreshSignal() { - - logger.debug("ClusterTopologyRefreshScheduler.indicateTopologyRefreshSignal()"); - - if (!acquireTimeout()) { - return; - } - - if (isEventLoopActive() && redisClusterClient.getClusterClientOptions() != null) { - clientResources.eventExecutorGroup().submit(clusterTopologyRefreshTask); - } else { - logger.debug("Adaptive ClusterTopologyRefresh is disabled"); - } - } - - /** - * Check if the {@link EventExecutorGroup} is active - * - * @return false if the worker pool is terminating, shutdown or terminated - */ - protected boolean isEventLoopActive() { - - EventExecutorGroup eventExecutors = clientResources.eventExecutorGroup(); - if (eventExecutors.isShuttingDown() || eventExecutors.isShutdown() || eventExecutors.isTerminated()) { - return false; - } - - return true; - } - - private boolean acquireTimeout() { - - Timeout existingTimeout = timeoutRef.get(); - - if (existingTimeout != null) { - if (!existingTimeout.isExpired()) { - return false; - } - } - - ClusterTopologyRefreshOptions refreshOptions = getClusterTopologyRefreshOptions(); - Timeout timeout = new Timeout(refreshOptions.getAdaptiveRefreshTimeout(), - refreshOptions.getAdaptiveRefreshTimeoutUnit()); - - if (timeoutRef.compareAndSet(existingTimeout, timeout)) { - return true; - } - - return false; - } - - @Override - public void onAskRedirection() { - - if (isEnabled(ClusterTopologyRefreshOptions.RefreshTrigger.ASK_REDIRECT)) { - indicateTopologyRefreshSignal(); - } - } - - @Override - public void onMovedRedirection() { - - if (isEnabled(ClusterTopologyRefreshOptions.RefreshTrigger.MOVED_REDIRECT)) { - indicateTopologyRefreshSignal(); - } - } - - @Override - public void onReconnection(int attempt) { - - if (isEnabled(ClusterTopologyRefreshOptions.RefreshTrigger.PERSISTENT_RECONNECTS) - && attempt >= getClusterTopologyRefreshOptions().getRefreshTriggersReconnectAttempts()) { - indicateTopologyRefreshSignal(); - } - } - - private ClusterTopologyRefreshOptions getClusterTopologyRefreshOptions() { - - ClusterClientOptions clusterClientOptions = redisClusterClient.getClusterClientOptions(); - - if (clusterClientOptions != null) { - return clusterClientOptions.getTopologyRefreshOptions(); - } - - return FALLBACK_OPTIONS; - } - - private boolean isEnabled(ClusterTopologyRefreshOptions.RefreshTrigger refreshTrigger) { - return getClusterTopologyRefreshOptions().getAdaptiveRefreshTriggers().contains(refreshTrigger); - } - - /** - * Value object to represent a timeout. 
- * - * @author Mark Paluch - * @since 4.2 - */ - private class Timeout { - - private final long expiresMs; - - public Timeout(long timeout, TimeUnit timeUnit) { - this.expiresMs = System.currentTimeMillis() + timeUnit.toMillis(timeout); - } - - public boolean isExpired() { - return expiresMs < System.currentTimeMillis(); - } - - public long remaining() { - - long diff = expiresMs - System.currentTimeMillis(); - if (diff > 0) { - return diff; - } - return 0; - } - } - - private static class ClusterTopologyRefreshTask implements Runnable { - - private final RedisClusterClient redisClusterClient; - - public ClusterTopologyRefreshTask(RedisClusterClient redisClusterClient) { - this.redisClusterClient = redisClusterClient; - } - - public void run() { - - if (logger.isDebugEnabled()) { - logger.debug("ClusterTopologyRefreshTask requesting partitions from {}", - redisClusterClient.getTopologyRefreshSource()); - } - redisClusterClient.reloadPartitions(); - } - } -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/DynamicAsyncNodeSelection.java b/src/main/java/com/lambdaworks/redis/cluster/DynamicAsyncNodeSelection.java deleted file mode 100644 index 305d7f29dd..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/DynamicAsyncNodeSelection.java +++ /dev/null @@ -1,53 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import java.util.HashMap; -import java.util.Iterator; -import java.util.List; -import java.util.Map; -import java.util.function.Predicate; -import java.util.stream.Collectors; - -import com.lambdaworks.redis.api.async.RedisAsyncCommands; -import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection; -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; - -/** - * @param Command command interface type to invoke multi-node operations. - * @param Key type. - * @param Value type. - * @author Mark Paluch - */ -class DynamicAsyncNodeSelection extends DynamicNodeSelection, CMD, K, V> { - - public DynamicAsyncNodeSelection(StatefulRedisClusterConnection globalConnection, - Predicate selector, ClusterConnectionProvider.Intent intent) { - super(globalConnection, selector, intent); - } - - public Iterator> iterator() { - List list = nodes().stream().collect(Collectors.toList()); - return list.stream().map(node -> getConnection(node).async()).iterator(); - } - - @Override - public RedisAsyncCommands commands(int index) { - return statefulMap().get(nodes().get(index)).async(); - } - - @Override - public Map> asMap() { - - List list = nodes().stream().collect(Collectors.toList()); - Map> map = new HashMap<>(); - - list.forEach((key) -> map.put(key, getConnection(key).async())); - - return map; - } - - // This method is never called, the value is supplied by AOP magic. - @Override - public CMD commands() { - return null; - } -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/DynamicNodeSelection.java b/src/main/java/com/lambdaworks/redis/cluster/DynamicNodeSelection.java deleted file mode 100644 index be6fa3fcc5..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/DynamicNodeSelection.java +++ /dev/null @@ -1,33 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import java.util.List; -import java.util.function.Predicate; -import java.util.stream.Collectors; - -import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection; -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; - -/** - * Dynamic selection of nodes. - * - * @param API type. 
- * @param Command command interface type to invoke multi-node operations. - * @param Key type. - * @param Value type. - * @author Mark Paluch - */ -abstract class DynamicNodeSelection extends AbstractNodeSelection { - - private final Predicate selector; - - public DynamicNodeSelection(StatefulRedisClusterConnection globalConnection, Predicate selector, - ClusterConnectionProvider.Intent intent) { - super(globalConnection, intent); - this.selector = selector; - } - - @Override - protected List nodes() { - return globalConnection.getPartitions().getPartitions().stream().filter(selector).collect(Collectors.toList()); - } -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/DynamicSyncNodeSelection.java b/src/main/java/com/lambdaworks/redis/cluster/DynamicSyncNodeSelection.java deleted file mode 100644 index d8a1275478..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/DynamicSyncNodeSelection.java +++ /dev/null @@ -1,53 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import java.util.HashMap; -import java.util.Iterator; -import java.util.List; -import java.util.Map; -import java.util.function.Predicate; -import java.util.stream.Collectors; - -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection; -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; - -/** - * @param Command command interface type to invoke multi-node operations. - * @param Key type. - * @param Value type. - * @author Mark Paluch - */ -class DynamicSyncNodeSelection extends DynamicNodeSelection, CMD, K, V> { - - public DynamicSyncNodeSelection(StatefulRedisClusterConnection globalConnection, - Predicate selector, ClusterConnectionProvider.Intent intent) { - super(globalConnection, selector, intent); - } - - public Iterator> iterator() { - List list = nodes().stream().collect(Collectors.toList()); - return list.stream().map(node -> getConnection(node).sync()).iterator(); - } - - @Override - public RedisCommands commands(int index) { - return statefulMap().get(nodes().get(index)).sync(); - } - - @Override - public Map> asMap() { - - List list = nodes().stream().collect(Collectors.toList()); - Map> map = new HashMap<>(); - - list.forEach((key) -> map.put(key, getConnection(key).sync())); - - return map; - } - - // This method is never called, the value is supplied by AOP magic. 
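These dynamic selections back the node-selection API on the advanced cluster commands. A hedged usage sketch, assuming the `masters()` entry point and the `AsyncExecutions` result type of this API generation:

```java
import java.util.concurrent.ExecutionException;

import com.lambdaworks.redis.cluster.RedisClusterClient;
import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection;
import com.lambdaworks.redis.cluster.api.async.AsyncExecutions;
import com.lambdaworks.redis.cluster.api.async.AsyncNodeSelection;
import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode;

public class NodeSelectionExample {

    public static void main(String[] args) throws InterruptedException, ExecutionException {

        RedisClusterClient clusterClient = RedisClusterClient.create("redis://localhost:7000");
        StatefulRedisClusterConnection<String, String> connection = clusterClient.connect();

        // Select all current masters; the selection is dynamic, so topology changes are picked up.
        AsyncNodeSelection<String, String> masters = connection.async().masters();

        // One logical call fans out to every selected node.
        AsyncExecutions<String> pings = masters.commands().ping();

        for (RedisClusterNode node : pings.nodes()) {
            System.out.println(node.getNodeId() + " -> " + pings.get(node).toCompletableFuture().get());
        }

        connection.close();
        clusterClient.shutdown();
    }
}
```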
- @Override - public CMD commands() { - return null; - } -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/MultiNodeExecution.java b/src/main/java/com/lambdaworks/redis/cluster/MultiNodeExecution.java deleted file mode 100644 index 5417420baf..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/MultiNodeExecution.java +++ /dev/null @@ -1,97 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import java.lang.reflect.Proxy; -import java.util.Map; -import java.util.concurrent.Callable; -import java.util.concurrent.ExecutionException; -import java.util.concurrent.atomic.AtomicLong; -import java.util.function.Predicate; - -import com.lambdaworks.redis.RedisCommandInterruptedException; -import com.lambdaworks.redis.RedisException; -import com.lambdaworks.redis.RedisFuture; -import com.lambdaworks.redis.api.StatefulConnection; -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.cluster.api.NodeSelectionSupport; -import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection; -import com.lambdaworks.redis.cluster.api.sync.NodeSelection; -import com.lambdaworks.redis.cluster.api.sync.NodeSelectionCommands; -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; - -/** - * Utility to perform and synchronize command executions on multiple cluster nodes. - * - * @author Mark Paluch - */ -class MultiNodeExecution { - static T execute(Callable function) { - try { - return function.call(); - } catch (Exception e) { - throw new RedisException(e); - } - } - - /** - * Aggregate (sum) results of the {@link RedisFuture}s. - * - * @param executions mapping of a key to the future - * @return future producing an aggregation result - */ - protected static RedisFuture aggregateAsync(Map> executions) { - return new PipelinedRedisFuture<>(executions, objectPipelinedRedisFuture -> { - AtomicLong result = new AtomicLong(); - for (RedisFuture future : executions.values()) { - Long value = execute(() -> future.get()); - if (value != null) { - result.getAndAdd(value); - } - } - - return result.get(); - }); - } - - /** - * Returns the result of the first {@link RedisFuture} and guarantee that all futures are finished. - * - * @param executions mapping of a key to the future - * @param result type - * @return future returning the first result. - */ - protected static RedisFuture firstOfAsync(Map> executions) { - return new PipelinedRedisFuture<>(executions, objectPipelinedRedisFuture -> { - // make sure, that all futures are executed before returning the result. - for (RedisFuture future : executions.values()) { - execute(() -> future.get()); - } - for (RedisFuture future : executions.values()) { - return execute(() -> future.get()); - } - return null; - }); - } - - /** - * Returns always {@literal OK} and guarantee that all futures are finished. - * - * @param executions mapping of a key to the future - * @return future returning the first result. - */ - protected static RedisFuture alwaysOkOfAsync(Map> executions) { - return new PipelinedRedisFuture<>(executions, objectPipelinedRedisFuture -> { - // make sure, that all futures are executed before returning the result. 
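These helpers are what make multi-key commands behave transparently on a cluster: a call such as `DEL` with keys in different slots is partitioned per slot, executed on the owning nodes, and the per-node reply counts are summed, as in the sketch below (assuming a reachable local cluster).

```java
import java.util.concurrent.ExecutionException;

import com.lambdaworks.redis.cluster.RedisClusterClient;
import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection;
import com.lambdaworks.redis.cluster.api.async.RedisAdvancedClusterAsyncCommands;

public class MultiKeyAggregationExample {

    public static void main(String[] args) throws InterruptedException, ExecutionException {

        RedisClusterClient clusterClient = RedisClusterClient.create("redis://localhost:7000");
        StatefulRedisClusterConnection<String, String> connection = clusterClient.connect();
        RedisAdvancedClusterAsyncCommands<String, String> async = connection.async();

        async.set("user:1", "a").get();
        async.set("user:2", "b").get();

        // The two keys may hash to different slots/nodes; DEL is split per slot and the
        // individual reply counts are aggregated into a single Long.
        Long removed = async.del("user:1", "user:2").get();
        System.out.println("deleted: " + removed);   // expected: 2

        connection.close();
        clusterClient.shutdown();
    }
}
```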
- for (RedisFuture future : executions.values()) { - try { - future.get(); - } catch (InterruptedException e) { - Thread.currentThread().interrupt(); - throw new RedisCommandInterruptedException(e); - } catch (ExecutionException e) { - // swallow exceptions - } - } - return "OK"; - } ); - } -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/NodeSelectionInvocationHandler.java b/src/main/java/com/lambdaworks/redis/cluster/NodeSelectionInvocationHandler.java deleted file mode 100644 index 38d7e95145..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/NodeSelectionInvocationHandler.java +++ /dev/null @@ -1,219 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import java.lang.reflect.InvocationTargetException; -import java.lang.reflect.Method; -import java.util.*; -import java.util.concurrent.*; -import java.util.stream.Collectors; - -import com.lambdaworks.redis.RedisCommandExecutionException; -import com.lambdaworks.redis.RedisCommandInterruptedException; -import com.lambdaworks.redis.RedisCommandTimeoutException; -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.cluster.api.NodeSelectionSupport; -import com.lambdaworks.redis.cluster.api.async.RedisClusterAsyncCommands; -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; -import com.lambdaworks.redis.internal.AbstractInvocationHandler; -import com.lambdaworks.redis.internal.LettuceAssert; - -/** - * Invocation handler to trigger commands on multiple connections and return a holder for the values. - * - * @author Mark Paluch - */ -class NodeSelectionInvocationHandler extends AbstractInvocationHandler { - - private AbstractNodeSelection selection; - private boolean sync; - private long timeout; - private TimeUnit unit; - private final Map nodeSelectionMethods = new ConcurrentHashMap<>(); - private final Map connectionMethod = new ConcurrentHashMap<>(); - public final static Method NULL_MARKER_METHOD; - - static { - try { - NULL_MARKER_METHOD = NodeSelectionInvocationHandler.class.getDeclaredMethod("handleInvocation", Object.class, - Method.class, Object[].class); - } catch (NoSuchMethodException e) { - throw new IllegalStateException(e); - } - } - - public NodeSelectionInvocationHandler(AbstractNodeSelection selection) { - this(selection, false, 0, null); - } - - public NodeSelectionInvocationHandler(AbstractNodeSelection selection, boolean sync, long timeout, TimeUnit unit) { - if (sync) { - LettuceAssert.isTrue(timeout > 0, "Timeout must be greater 0 when using sync mode"); - LettuceAssert.notNull(unit, "Unit must not be null when using sync mode"); - } - - this.selection = selection; - this.sync = sync; - this.unit = unit; - this.timeout = timeout; - } - - @Override - @SuppressWarnings("rawtypes") - protected Object handleInvocation(Object proxy, Method method, Object[] args) throws Throwable { - - try { - Method targetMethod = findMethod(RedisClusterAsyncCommands.class, method, connectionMethod); - - Map> connections = new HashMap<>(selection.size(), 1); - connections.putAll(selection.statefulMap()); - - if (targetMethod != null) { - - Map> executions = new HashMap<>(); - for (Map.Entry> entry : connections.entrySet()) { - - CompletionStage result = (CompletionStage) targetMethod.invoke(entry.getValue().async(), args); - executions.put(entry.getKey(), result); - } - - if (sync) { - if (!awaitAll(timeout, unit, executions.values())) { - throw createTimeoutException(executions); - } - - if (atLeastOneFailed(executions)) { - throw 
createExecutionException(executions); - } - - return new SyncExecutionsImpl(executions); - - } - return new AsyncExecutionsImpl<>((Map) executions); - } - - if (method.getName().equals("commands") && args.length == 0) { - return proxy; - } - - targetMethod = findMethod(NodeSelectionSupport.class, method, nodeSelectionMethods); - return targetMethod.invoke(selection, args); - } catch (InvocationTargetException e) { - throw e.getTargetException(); - } - } - - public static boolean awaitAll(long timeout, TimeUnit unit, Collection> futures) { - boolean complete; - - try { - long nanos = unit.toNanos(timeout); - long time = System.nanoTime(); - - for (CompletionStage f : futures) { - if (nanos < 0) { - return false; - } - try { - f.toCompletableFuture().get(nanos, TimeUnit.NANOSECONDS); - } catch (ExecutionException e) { - // ignore - } - long now = System.nanoTime(); - nanos -= now - time; - time = now; - } - - complete = true; - } catch (TimeoutException e) { - complete = false; - } catch (Exception e) { - throw new RedisCommandInterruptedException(e); - } - - return complete; - } - - private boolean atLeastOneFailed(Map> executions) { - return executions.values().stream() - .filter(completionStage -> completionStage.toCompletableFuture().isCompletedExceptionally()).findFirst() - .isPresent(); - } - - private RedisCommandTimeoutException createTimeoutException(Map> executions) { - List notFinished = new ArrayList<>(); - executions.forEach((redisClusterNode, completionStage) -> { - if (!completionStage.toCompletableFuture().isDone()) { - notFinished.add(redisClusterNode); - } - }); - String description = getNodeDescription(notFinished); - return new RedisCommandTimeoutException("Command timed out for node(s): " + description); - } - - private RedisCommandExecutionException createExecutionException(Map> executions) { - List failed = new ArrayList<>(); - executions.forEach((redisClusterNode, completionStage) -> { - if (!completionStage.toCompletableFuture().isCompletedExceptionally()) { - failed.add(redisClusterNode); - } - }); - - RedisCommandExecutionException e = new RedisCommandExecutionException( - "Multi-node command execution failed on node(s): " + getNodeDescription(failed)); - - executions.forEach((redisClusterNode, completionStage) -> { - CompletableFuture completableFuture = completionStage.toCompletableFuture(); - if (completableFuture.isCompletedExceptionally()) { - try { - completableFuture.get(); - } catch (Exception innerException) { - - if (innerException instanceof ExecutionException) { - e.addSuppressed(innerException.getCause()); - } else { - e.addSuppressed(innerException); - } - } - } - }); - return e; - } - - private String getNodeDescription(List notFinished) { - return String.join(", ", - notFinished.stream().map(redisClusterNode -> getDescriptor(redisClusterNode)).collect(Collectors.toList())); - } - - private String getDescriptor(RedisClusterNode redisClusterNode) { - StringBuffer buffer = new StringBuffer(redisClusterNode.getNodeId()); - buffer.append(" ("); - - if (redisClusterNode.getUri() != null) { - buffer.append(redisClusterNode.getUri().getHost()).append(':').append(redisClusterNode.getUri().getPort()); - } - - buffer.append(')'); - return buffer.toString(); - } - - private Method findMethod(Class type, Method method, Map cache) { - - Method result = cache.get(method); - if (result != null && result != NULL_MARKER_METHOD) { - return result; - } - - for (Method typeMethod : type.getMethods()) { - if (!typeMethod.getName().equals(method.getName()) - || 
!Arrays.equals(typeMethod.getParameterTypes(), method.getParameterTypes())) { - continue; - } - - cache.put(method, typeMethod); - return typeMethod; - } - - // Null-marker to avoid full class method scans. - cache.put(method, NULL_MARKER_METHOD); - return null; - } -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/PartitionAccessor.java b/src/main/java/com/lambdaworks/redis/cluster/PartitionAccessor.java deleted file mode 100644 index 26b7dd0752..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/PartitionAccessor.java +++ /dev/null @@ -1,54 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import java.util.ArrayList; -import java.util.Collection; -import java.util.List; -import java.util.function.Predicate; - -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; - -/** - * Accessor for Partitions. - * - * @author Mark Paluch - */ -class PartitionAccessor { - - private final Collection partitions; - - PartitionAccessor(Collection partitions) { - this.partitions = partitions; - } - - List getMasters() { - return get(redisClusterNode -> redisClusterNode.is(RedisClusterNode.NodeFlag.MASTER)); - } - - List getSlaves() { - return get(redisClusterNode -> redisClusterNode.is(RedisClusterNode.NodeFlag.SLAVE)); - - } - - List getSlaves(RedisClusterNode master) { - return get(redisClusterNode -> redisClusterNode.is(RedisClusterNode.NodeFlag.SLAVE) - && master.getNodeId().equals(redisClusterNode.getSlaveOf())); - } - - List getReadCandidates(RedisClusterNode master) { - return get(redisClusterNode -> redisClusterNode.getNodeId().equals(master.getNodeId()) - || (redisClusterNode.is(RedisClusterNode.NodeFlag.SLAVE) && master.getNodeId().equals( - redisClusterNode.getSlaveOf()))); - } - - List get(Predicate test) { - - List result = new ArrayList<>(partitions.size()); - for (RedisClusterNode partition : partitions) { - if (test.test(partition)) { - result.add(partition); - } - } - return result; - } - -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/PipelinedRedisFuture.java b/src/main/java/com/lambdaworks/redis/cluster/PipelinedRedisFuture.java deleted file mode 100644 index d4090c740c..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/PipelinedRedisFuture.java +++ /dev/null @@ -1,63 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import java.util.Map; -import java.util.concurrent.CompletableFuture; -import java.util.concurrent.CompletionStage; -import java.util.concurrent.CountDownLatch; -import java.util.concurrent.TimeUnit; -import java.util.function.Function; - -import com.lambdaworks.redis.RedisFuture; - -/** - * Pipelining for commands that are executed on multiple cluster nodes. Merges results and emits one composite result. 
- * - * @author Mark Paluch - */ -class PipelinedRedisFuture extends CompletableFuture implements RedisFuture { - - private CountDownLatch latch = new CountDownLatch(1); - - public PipelinedRedisFuture(CompletionStage completionStage, Function converter) { - completionStage.thenAccept(v -> complete(converter.apply(v))) - .exceptionally(throwable -> { - completeExceptionally(throwable); - return null; - }); - } - - public PipelinedRedisFuture(Map> executions, Function, V> converter) { - - CompletableFuture.allOf(executions.values().toArray(new CompletableFuture[executions.size()])) - .thenRun(() -> complete(converter.apply(this))).exceptionally(throwable -> { - completeExceptionally(throwable); - return null; - }); - } - - @Override - public boolean complete(V value) { - boolean result = super.complete(value); - latch.countDown(); - return result; - } - - @Override - public boolean completeExceptionally(Throwable ex) { - - boolean value = super.completeExceptionally(ex); - latch.countDown(); - return value; - } - - @Override - public String getError() { - return null; - } - - @Override - public boolean await(long timeout, TimeUnit unit) throws InterruptedException { - return latch.await(timeout, unit); - } - -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/PooledClusterConnectionProvider.java b/src/main/java/com/lambdaworks/redis/cluster/PooledClusterConnectionProvider.java deleted file mode 100644 index b6c7f46ecb..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/PooledClusterConnectionProvider.java +++ /dev/null @@ -1,535 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import java.net.InetSocketAddress; -import java.net.SocketAddress; -import java.util.*; -import java.util.concurrent.ConcurrentHashMap; -import java.util.function.Function; -import java.util.function.Supplier; -import java.util.stream.Collectors; - -import com.lambdaworks.redis.*; -import com.lambdaworks.redis.api.StatefulConnection; -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.cluster.models.partitions.Partitions; -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; -import com.lambdaworks.redis.codec.RedisCodec; -import com.lambdaworks.redis.internal.HostAndPort; -import com.lambdaworks.redis.internal.LettuceAssert; -import com.lambdaworks.redis.internal.LettuceLists; -import com.lambdaworks.redis.models.role.RedisInstance; -import com.lambdaworks.redis.models.role.RedisNodeDescription; -import com.lambdaworks.redis.resource.SocketAddressResolver; - -import io.netty.util.internal.logging.InternalLogger; -import io.netty.util.internal.logging.InternalLoggerFactory; - -/** - * Connection provider with built-in connection caching. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 3.0 - */ -@SuppressWarnings({ "unchecked", "rawtypes" }) -class PooledClusterConnectionProvider implements ClusterConnectionProvider { - private static final InternalLogger logger = InternalLoggerFactory.getInstance(PooledClusterConnectionProvider.class); - - // Contains NodeId-identified and HostAndPort-identified connections. 
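The same nodeId versus host-and-port duality used for the connection cache is exposed on the public cluster connection, which is handy for the occasional node-specific command. A hedged sketch, assuming the `getConnection(host, port)` accessor on `StatefulRedisClusterConnection`:

```java
import com.lambdaworks.redis.api.StatefulRedisConnection;
import com.lambdaworks.redis.cluster.RedisClusterClient;
import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection;
import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode;

public class DirectNodeConnectionExample {

    public static void main(String[] args) {

        RedisClusterClient clusterClient = RedisClusterClient.create("redis://localhost:7000");
        StatefulRedisClusterConnection<String, String> connection = clusterClient.connect();

        for (RedisClusterNode node : connection.getPartitions()) {
            // Host/port-identified connections survive topology updates as long as the node
            // remains in the cluster; nodeId-identified ones are bound to that node id.
            StatefulRedisConnection<String, String> nodeConnection =
                    connection.getConnection(node.getUri().getHost(), node.getUri().getPort());
            System.out.println(node.getNodeId() + " -> " + nodeConnection.sync().ping());
        }

        connection.close();
        clusterClient.shutdown();
    }
}
```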
- private final Map> connections = new ConcurrentHashMap<>(); - private final Object stateLock = new Object(); - private final boolean debugEnabled; - private final StatefulRedisConnection writers[] = new StatefulRedisConnection[SlotHash.SLOT_COUNT]; - private final StatefulRedisConnection readers[][] = new StatefulRedisConnection[SlotHash.SLOT_COUNT][]; - private final RedisClusterClient redisClusterClient; - private final ConnectionFactory connectionFactory; - - private Partitions partitions; - private boolean autoFlushCommands = true; - private ReadFrom readFrom; - - public PooledClusterConnectionProvider(RedisClusterClient redisClusterClient, RedisChannelWriter clusterWriter, - RedisCodec redisCodec) { - this.redisClusterClient = redisClusterClient; - this.debugEnabled = logger.isDebugEnabled(); - this.connectionFactory = new ConnectionFactory(redisClusterClient, redisCodec, clusterWriter); - } - - @Override - @SuppressWarnings({ "unchecked", "hiding", "rawtypes" }) - public StatefulRedisConnection getConnection(Intent intent, int slot) { - if (debugEnabled) { - logger.debug("getConnection(" + intent + ", " + slot + ")"); - } - - try { - if (intent == Intent.READ && readFrom != null) { - return getReadConnection(slot); - } - return getWriteConnection(slot); - } catch (RedisException e) { - throw e; - } catch (RuntimeException e) { - throw new RedisException(e); - } - } - - private StatefulRedisConnection getWriteConnection(int slot) { - StatefulRedisConnection writer;// avoid races when reconfiguring partitions. - synchronized (stateLock) { - writer = writers[slot]; - } - - if (writer == null) { - RedisClusterNode partition = partitions.getPartitionBySlot(slot); - if (partition == null) { - throw new RedisException("Cannot determine a partition for slot " + slot + " (Partitions: " + partitions + ")"); - } - - // Use always host and port for slot-oriented operations. We don't want to get reconnected on a different - // host because the nodeId can be handled by a different host. - RedisURI uri = partition.getUri(); - ConnectionKey key = new ConnectionKey(Intent.WRITE, uri.getHost(), uri.getPort()); - return writers[slot] = getOrCreateConnection(key); - - } - return writer; - } - - protected StatefulRedisConnection getReadConnection(int slot) { - StatefulRedisConnection readerCandidates[];// avoid races when reconfiguring partitions. 
- synchronized (stateLock) { - readerCandidates = readers[slot]; - } - - if (readerCandidates == null) { - RedisClusterNode master = partitions.getPartitionBySlot(slot); - if (master == null) { - throw new RedisException( - "Cannot determine a partition to read for slot " + slot + " (Partitions: " + partitions + ")"); - } - - List candidates = getReadCandidates(master); - List selection = readFrom.select(new ReadFrom.Nodes() { - @Override - public List getNodes() { - return candidates; - } - - @Override - public Iterator iterator() { - return candidates.iterator(); - } - }); - - if (selection.isEmpty()) { - throw new RedisException("Cannot determine a partition to read for slot " + slot + " (Partitions: " + partitions - + ") with setting " + readFrom); - } - - readerCandidates = getReadFromConnections(selection); - readers[slot] = readerCandidates; - } - - // try working connections at first - for (StatefulRedisConnection readerCandidate : readerCandidates) { - if (!readerCandidate.isOpen()) { - continue; - } - return readerCandidate; - } - - // fall-back to the first connection for same behavior as writing - return readerCandidates[0]; - } - - private StatefulRedisConnection[] getReadFromConnections(List selection) { - StatefulRedisConnection[] readerCandidates; - // Use always host and port for slot-oriented operations. We don't want to get reconnected on a different - // host because the nodeId can be handled by a different host. - - readerCandidates = new StatefulRedisConnection[selection.size()]; - - for (int i = 0; i < selection.size(); i++) { - RedisNodeDescription redisClusterNode = selection.get(i); - - RedisURI uri = redisClusterNode.getUri(); - ConnectionKey key = new ConnectionKey( - redisClusterNode.getRole() == RedisInstance.Role.MASTER ? 
Intent.WRITE : Intent.READ, uri.getHost(), - uri.getPort()); - - readerCandidates[i] = getOrCreateConnection(key); - } - - return readerCandidates; - } - - private List getReadCandidates(RedisClusterNode master) { - - return partitions.stream() // - .filter(partition -> isReadCandidate(master, partition)) // - .collect(Collectors.toList()); - } - - private boolean isReadCandidate(RedisClusterNode master, RedisClusterNode partition) { - return master.getNodeId().equals(partition.getNodeId()) || master.getNodeId().equals(partition.getSlaveOf()); - } - - @Override - public StatefulRedisConnection getConnection(Intent intent, String nodeId) { - if (debugEnabled) { - logger.debug("getConnection(" + intent + ", " + nodeId + ")"); - } - - ConnectionKey key = new ConnectionKey(intent, nodeId); - return getOrCreateConnection(key); - } - - @Override - @SuppressWarnings({ "unchecked", "hiding", "rawtypes" }) - public StatefulRedisConnection getConnection(Intent intent, String host, int port) { - try { - if (debugEnabled) { - logger.debug("getConnection(" + intent + ", " + host + ", " + port + ")"); - } - - if (validateClusterNodeMembership()) { - RedisClusterNode redisClusterNode = getPartition(host, port); - - if (redisClusterNode == null) { - HostAndPort hostAndPort = HostAndPort.of(host, port); - throw invalidConnectionPoint(hostAndPort.toString()); - } - } - - ConnectionKey key = new ConnectionKey(intent, host, port); - return getOrCreateConnection(key); - } catch (RedisException e) { - throw e; - } catch (RuntimeException e) { - throw new RedisException(e); - } - } - - private RedisClusterNode getPartition(String host, int port) { - for (RedisClusterNode partition : partitions) { - RedisURI uri = partition.getUri(); - if (port == uri.getPort() && host.equals(uri.getHost())) { - return partition; - } - } - return null; - } - - @Override - public void close() { - - this.connections.clear(); - resetFastConnectionCache(); - - new HashMap<>(this.connections) // - .values() // - .stream() // - .filter(StatefulConnection::isOpen).forEach(StatefulConnection::close); - } - - @Override - public void reset() { - allConnections().forEach(StatefulRedisConnection::reset); - } - - /** - * Synchronize on {@code stateLock} to initiate a happens-before relation and clear the thread caches of other threads. - * - * @param partitions the new partitions. - */ - @Override - public void setPartitions(Partitions partitions) { - boolean reconfigurePartitions = false; - - synchronized (stateLock) { - if (this.partitions != null) { - reconfigurePartitions = true; - } - this.partitions = partitions; - } - - if (reconfigurePartitions) { - reconfigurePartitions(); - } - } - - private void reconfigurePartitions() { - - if (!redisClusterClient.expireStaleConnections()) { - return; - } - - Set staleConnections = getStaleConnectionKeys(); - - for (ConnectionKey key : staleConnections) { - StatefulRedisConnection connection = connections.get(key); - - RedisChannelHandler redisChannelHandler = (RedisChannelHandler) connection; - - if (redisChannelHandler.getChannelWriter() instanceof ClusterNodeCommandHandler) { - ClusterNodeCommandHandler clusterNodeCommandHandler = (ClusterNodeCommandHandler) redisChannelHandler - .getChannelWriter(); - clusterNodeCommandHandler.prepareClose(); - } - } - - resetFastConnectionCache(); - closeStaleConnections(); - } - - /** - * Close stale connections. 
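Stale-connection cleanup is driven by topology updates. Besides the periodic and adaptive refresh paths, a refresh can be forced from application code; the sketch below is grounded in the `reloadPartitions()` call used by the refresh task above and assumes it is part of the public client API.

```java
import com.lambdaworks.redis.cluster.RedisClusterClient;

public class ManualTopologyReloadExample {

    public static void main(String[] args) {

        RedisClusterClient clusterClient = RedisClusterClient.create("redis://localhost:7000");

        // Re-read the cluster topology and push the new Partitions into open connections;
        // connection providers then close connections to nodes that are no longer part of
        // the cluster (when stale-connection expiry is enabled).
        clusterClient.reloadPartitions();

        clusterClient.shutdown();
    }
}
```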
- */ - @Override - public void closeStaleConnections() { - logger.debug("closeStaleConnections() count before expiring: {}", getConnectionCount()); - - Set stale = getStaleConnectionKeys(); - - for (ConnectionKey connectionKey : stale) { - StatefulRedisConnection connection = connections.get(connectionKey); - if (connection != null) { - connections.remove(connectionKey); - connection.close(); - } - } - - logger.debug("closeStaleConnections() count after expiring: {}", getConnectionCount()); - } - - /** - * Retrieve a set of PoolKey's for all pooled connections that are within the pool but not within the {@link Partitions}. - * - * @return Set of {@link ConnectionKey}s - */ - private Set getStaleConnectionKeys() { - Map> map = new HashMap<>(connections); - Set stale = new HashSet<>(); - - for (ConnectionKey connectionKey : map.keySet()) { - - if (connectionKey.nodeId != null && partitions.getPartitionByNodeId(connectionKey.nodeId) != null) { - continue; - } - - if (connectionKey.host != null && getPartition(connectionKey.host, connectionKey.port) != null) { - continue; - } - stale.add(connectionKey); - } - return stale; - } - - /** - * Set auto-flush on all commands. Synchronize on {@code stateLock} to initiate a happens-before relation and clear the - * thread caches of other threads. - * - * @param autoFlush state of autoFlush. - */ - @Override - public void setAutoFlushCommands(boolean autoFlush) { - synchronized (stateLock) { - this.autoFlushCommands = autoFlush; - } - - allConnections().forEach(connection -> connection.setAutoFlushCommands(autoFlush)); - } - - private Collection> allConnections() { - return LettuceLists.unmodifiableList(connections.values()); - } - - @Override - public void flushCommands() { - allConnections().forEach(StatefulConnection::flushCommands); - } - - @Override - public void setReadFrom(ReadFrom readFrom) { - synchronized (stateLock) { - this.readFrom = readFrom; - Arrays.fill(readers, null); - } - } - - @Override - public ReadFrom getReadFrom() { - return this.readFrom; - } - - /** - * - * @return number of connections. - */ - long getConnectionCount() { - return connections.size(); - } - - /** - * Reset the internal connection cache. This is necessary because the {@link Partitions} have no reference to the connection - * cache. - * - * Synchronize on {@code stateLock} to initiate a happens-before relation and clear the thread caches of other threads. - */ - private void resetFastConnectionCache() { - synchronized (stateLock) { - Arrays.fill(writers, null); - Arrays.fill(readers, null); - - } - } - - private RuntimeException invalidConnectionPoint(String message) { - return new IllegalArgumentException( - "Connection to " + message + " not allowed. 
This connection point is not known in the cluster view"); - } - - private Supplier getSocketAddressSupplier(final ConnectionKey connectionKey) { - return () -> { - if (connectionKey.nodeId != null) { - SocketAddress socketAddress = getSocketAddress(connectionKey.nodeId); - logger.debug("Resolved SocketAddress {} using for Cluster node {}", socketAddress, connectionKey.nodeId); - return socketAddress; - } - SocketAddress socketAddress = new InetSocketAddress(connectionKey.host, connectionKey.port); - logger.debug("Resolved SocketAddress {} using for Cluster node at {}:{}", socketAddress, connectionKey.host, - connectionKey.port); - return socketAddress; - }; - } - - private SocketAddress getSocketAddress(String nodeId) { - for (RedisClusterNode partition : partitions) { - if (partition.getNodeId().equals(nodeId)) { - return SocketAddressResolver.resolve(partition.getUri(), redisClusterClient.getResources().dnsResolver()); - } - } - return null; - } - - /** - * Connection to identify a connection either by nodeId or host/port. - */ - private static class ConnectionKey { - private final ClusterConnectionProvider.Intent intent; - private final String nodeId; - private final String host; - private final int port; - - public ConnectionKey(Intent intent, String nodeId) { - this.intent = intent; - this.nodeId = nodeId; - this.host = null; - this.port = 0; - } - - public ConnectionKey(Intent intent, String host, int port) { - this.intent = intent; - this.host = host; - this.port = port; - this.nodeId = null; - } - - @Override - public boolean equals(Object o) { - if (this == o) - return true; - if (!(o instanceof ConnectionKey)) - return false; - - ConnectionKey key = (ConnectionKey) o; - - if (port != key.port) - return false; - if (intent != key.intent) - return false; - if (nodeId != null ? !nodeId.equals(key.nodeId) : key.nodeId != null) - return false; - return !(host != null ? !host.equals(key.host) : key.host != null); - } - - @Override - public int hashCode() { - int result = intent != null ? intent.name().hashCode() : 0; - result = 31 * result + (nodeId != null ? nodeId.hashCode() : 0); - result = 31 * result + (host != null ? 
host.hashCode() : 0); - result = 31 * result + port; - return result; - } - } - - private boolean validateClusterNodeMembership() { - return redisClusterClient.getClusterClientOptions() == null - || redisClusterClient.getClusterClientOptions().isValidateClusterNodeMembership(); - } - - private StatefulRedisConnection getOrCreateConnection(ConnectionKey key) { - return connections.computeIfAbsent(key, connectionFactory); - } - - private class ConnectionFactory implements Function> { - - private final RedisClusterClient redisClusterClient; - private final RedisCodec redisCodec; - private final RedisChannelWriter clusterWriter; - - public ConnectionFactory(RedisClusterClient redisClusterClient, RedisCodec redisCodec, - RedisChannelWriter clusterWriter) { - this.redisClusterClient = redisClusterClient; - this.redisCodec = redisCodec; - this.clusterWriter = clusterWriter; - } - - @Override - public StatefulRedisConnection apply(ConnectionKey key) { - - StatefulRedisConnection connection = null; - - if (key.nodeId != null) { - if (partitions.getPartitionByNodeId(key.nodeId) == null) { - throw invalidConnectionPoint("node id " + key.nodeId); - } - - // NodeId connections do not provide command recovery due to cluster reconfiguration - connection = redisClusterClient.connectToNode(redisCodec, key.nodeId, null, getSocketAddressSupplier(key)); - } - - if (key.host != null) { - - if (validateClusterNodeMembership()) { - if (getPartition(key.host, key.port) == null) { - throw invalidConnectionPoint(key.host + ":" + key.port); - } - } - - // Host and port connections do provide command recovery due to cluster reconfiguration - connection = redisClusterClient.connectToNode(redisCodec, key.host + ":" + key.port, clusterWriter, - getSocketAddressSupplier(key)); - } - - LettuceAssert.notNull(connection, "Connection is null. Check ConnectionKey because host and nodeId are null"); - - try { - if (key.intent == Intent.READ) { - connection.sync().readOnly(); - } - - synchronized (stateLock) { - connection.setAutoFlushCommands(autoFlushCommands); - } - } catch (RuntimeException e) { - connection.close(); - throw e; - } - - return connection; - } - } -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/ReadOnlyCommands.java b/src/main/java/com/lambdaworks/redis/cluster/ReadOnlyCommands.java deleted file mode 100644 index 22e9846cc6..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/ReadOnlyCommands.java +++ /dev/null @@ -1,44 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import java.util.HashSet; -import java.util.Set; - -import com.lambdaworks.redis.protocol.CommandType; -import com.lambdaworks.redis.protocol.ProtocolKeyword; - -/** - * Contains all command names that are read-only commands. 
- * - * @author Mark Paluch - */ -class ReadOnlyCommands { - - public final static ProtocolKeyword READ_ONLY_COMMANDS[]; - - static { - - Set set = new HashSet(CommandName.values().length); - - for (CommandName commandNames : CommandName.values()) { - set.add(CommandType.valueOf(commandNames.name())); - } - - READ_ONLY_COMMANDS = set.toArray(new ProtocolKeyword[set.size()]); - } - - enum CommandName { - ASKING, BITCOUNT, BITPOS, CLIENT, COMMAND, DUMP, ECHO, EXISTS, // - GEODIST, GEOPOS, GEORADIUS, GEORADIUSBYMEMBER, GEOHASH, GET, GETBIT, // - GETRANGE, HEXISTS, HGET, HGETALL, HKEYS, HLEN, HMGET, HSCAN, HSTRLEN, // - HVALS, INFO, KEYS, LINDEX, LLEN, LRANGE, MGET, MULTI, PFCOUNT, PTTL, // - RANDOMKEY, READWRITE, SCAN, SCARD, SCRIPT, // - SDIFF, SINTER, SISMEMBER, SMEMBERS, SRANDMEMBER, SSCAN, STRLEN, // - SUNION, TIME, TTL, TYPE, WAIT, ZCARD, ZCOUNT, ZLEXCOUNT, ZRANGE, // - ZRANGEBYLEX, ZRANGEBYSCORE, ZRANK, ZREVRANGE, /* ZREVRANGEBYLEX , */ZREVRANGEBYSCORE, ZREVRANK, ZSCAN, ZSCORE, // - - // Pub/Sub commands are no key-space commands so they are safe to execute on slave nodes - PUBLISH, PUBSUB, PSUBSCRIBE, PUNSUBSCRIBE, SUBSCRIBE, UNSUBSCRIBE - - } - -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/ReconnectEventListener.java b/src/main/java/com/lambdaworks/redis/cluster/ReconnectEventListener.java deleted file mode 100644 index b8891ac9f4..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/ReconnectEventListener.java +++ /dev/null @@ -1,21 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import com.lambdaworks.redis.ConnectionEvents.Reconnect; -import com.lambdaworks.redis.protocol.ReconnectionListener; - -/** - * @author Mark Paluch - */ -class ReconnectEventListener implements ReconnectionListener { - - private final ClusterEventListener clusterEventListener; - - public ReconnectEventListener(ClusterEventListener clusterEventListener) { - this.clusterEventListener = clusterEventListener; - } - - @Override - public void onReconnect(Reconnect reconnect) { - clusterEventListener.onReconnection(reconnect.getAttempt()); - } -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/RedisAdvancedClusterAsyncCommandsImpl.java b/src/main/java/com/lambdaworks/redis/cluster/RedisAdvancedClusterAsyncCommandsImpl.java deleted file mode 100644 index 89a2b5fd17..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/RedisAdvancedClusterAsyncCommandsImpl.java +++ /dev/null @@ -1,539 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import static com.lambdaworks.redis.cluster.ClusterScanSupport.asyncClusterKeyScanCursorMapper; -import static com.lambdaworks.redis.cluster.ClusterScanSupport.asyncClusterStreamScanCursorMapper; -import static com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode.NodeFlag.MASTER; - -import java.lang.reflect.Proxy; -import java.util.*; -import java.util.function.BiFunction; -import java.util.function.Function; -import java.util.function.Predicate; - -import com.lambdaworks.redis.*; -import com.lambdaworks.redis.api.async.RedisAsyncCommands; -import com.lambdaworks.redis.api.async.RedisKeyAsyncCommands; -import com.lambdaworks.redis.api.async.RedisScriptingAsyncCommands; -import com.lambdaworks.redis.api.async.RedisServerAsyncCommands; -import com.lambdaworks.redis.cluster.ClusterScanSupport.ScanCursorMapper; -import com.lambdaworks.redis.cluster.api.NodeSelectionSupport; -import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection; -import com.lambdaworks.redis.cluster.api.async.AsyncNodeSelection; -import 
com.lambdaworks.redis.cluster.api.async.NodeSelectionAsyncCommands; -import com.lambdaworks.redis.cluster.api.async.RedisAdvancedClusterAsyncCommands; -import com.lambdaworks.redis.cluster.api.async.RedisClusterAsyncCommands; -import com.lambdaworks.redis.cluster.models.partitions.Partitions; -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; -import com.lambdaworks.redis.codec.RedisCodec; -import com.lambdaworks.redis.output.IntegerOutput; -import com.lambdaworks.redis.output.KeyStreamingChannel; -import com.lambdaworks.redis.output.ValueStreamingChannel; -import com.lambdaworks.redis.protocol.AsyncCommand; -import com.lambdaworks.redis.protocol.Command; -import com.lambdaworks.redis.protocol.CommandType; - -/** - * An advanced asynchronous and thread-safe API for a Redis Cluster connection. - * - * @author Mark Paluch - * @since 3.3 - */ -@SuppressWarnings("unchecked") -public class RedisAdvancedClusterAsyncCommandsImpl extends AbstractRedisAsyncCommands implements - RedisAdvancedClusterAsyncConnection, RedisAdvancedClusterAsyncCommands { - - private Random random = new Random(); - - /** - * Initialize a new connection. - * - * @param connection the stateful connection - * @param codec Codec used to encode/decode keys and values. - */ - public RedisAdvancedClusterAsyncCommandsImpl(StatefulRedisClusterConnectionImpl connection, RedisCodec codec) { - super(connection, codec); - } - - @Override - public RedisFuture del(K... keys) { - return del(Arrays.asList(keys)); - } - - @Override - public RedisFuture del(Iterable keys) { - Map> partitioned = SlotHash.partition(codec, keys); - - if (partitioned.size() < 2) { - return super.del(keys); - } - - Map> executions = new HashMap<>(); - - for (Map.Entry> entry : partitioned.entrySet()) { - RedisFuture del = super.del(entry.getValue()); - executions.put(entry.getKey(), del); - } - - return MultiNodeExecution.aggregateAsync(executions); - } - - @Override - public RedisFuture unlink(K... keys) { - return unlink(Arrays.asList(keys)); - } - - @Override - public RedisFuture unlink(Iterable keys) { - Map> partitioned = SlotHash.partition(codec, keys); - - if (partitioned.size() < 2) { - return super.unlink(keys); - } - - Map> executions = new HashMap<>(); - - for (Map.Entry> entry : partitioned.entrySet()) { - RedisFuture unlink = super.unlink(entry.getValue()); - executions.put(entry.getKey(), unlink); - } - - return MultiNodeExecution.aggregateAsync(executions); - } - - @Override - public RedisFuture exists(K... keys) { - return exists(Arrays.asList(keys)); - } - - public RedisFuture exists(Iterable keys) { - Map> partitioned = SlotHash.partition(codec, keys); - - if (partitioned.size() < 2) { - return super.exists(keys); - } - - Map> executions = new HashMap<>(); - - for (Map.Entry> entry : partitioned.entrySet()) { - RedisFuture exists = super.exists(entry.getValue()); - executions.put(entry.getKey(), exists); - } - - return MultiNodeExecution.aggregateAsync(executions); - } - - @Override - public RedisFuture> mget(K... 
keys) { - return mget(Arrays.asList(keys)); - } - - @Override - public RedisFuture> mget(Iterable keys) { - Map> partitioned = SlotHash.partition(codec, keys); - - if (partitioned.size() < 2) { - return super.mget(keys); - } - - Map slots = SlotHash.getSlots(partitioned); - Map>> executions = new HashMap<>(); - - for (Map.Entry> entry : partitioned.entrySet()) { - RedisFuture> mget = super.mget(entry.getValue()); - executions.put(entry.getKey(), mget); - } - - // restore order of key - return new PipelinedRedisFuture<>(executions, objectPipelinedRedisFuture -> { - List result = new ArrayList<>(); - for (K opKey : keys) { - int slot = slots.get(opKey); - - int position = partitioned.get(slot).indexOf(opKey); - RedisFuture> listRedisFuture = executions.get(slot); - result.add(MultiNodeExecution.execute(() -> listRedisFuture.get().get(position))); - } - - return result; - }); - } - - @Override - public RedisFuture mget(ValueStreamingChannel channel, K... keys) { - return mget(channel, Arrays.asList(keys)); - } - - @Override - public RedisFuture mget(ValueStreamingChannel channel, Iterable keys) { - Map> partitioned = SlotHash.partition(codec, keys); - - if (partitioned.size() < 2) { - return super.mget(channel, keys); - } - - Map> executions = new HashMap<>(); - - for (Map.Entry> entry : partitioned.entrySet()) { - RedisFuture del = super.mget(channel, entry.getValue()); - executions.put(entry.getKey(), del); - } - - return MultiNodeExecution.aggregateAsync(executions); - } - - @Override - public RedisFuture mset(Map map) { - Map> partitioned = SlotHash.partition(codec, map.keySet()); - - if (partitioned.size() < 2) { - return super.mset(map); - } - - Map> executions = new HashMap<>(); - - for (Map.Entry> entry : partitioned.entrySet()) { - - Map op = new HashMap<>(); - entry.getValue().forEach(k -> op.put(k, map.get(k))); - - RedisFuture mset = super.mset(op); - executions.put(entry.getKey(), mset); - } - - return MultiNodeExecution.firstOfAsync(executions); - } - - @Override - public RedisFuture msetnx(Map map) { - Map> partitioned = SlotHash.partition(codec, map.keySet()); - - if (partitioned.size() < 2) { - return super.msetnx(map); - } - - Map> executions = new HashMap<>(); - - for (Map.Entry> entry : partitioned.entrySet()) { - - Map op = new HashMap<>(); - entry.getValue().forEach(k -> op.put(k, map.get(k))); - - RedisFuture msetnx = super.msetnx(op); - executions.put(entry.getKey(), msetnx); - } - - return new PipelinedRedisFuture<>(executions, objectPipelinedRedisFuture -> { - for (RedisFuture listRedisFuture : executions.values()) { - Boolean b = MultiNodeExecution.execute(() -> listRedisFuture.get()); - if (b != null && b) { - return true; - } - } - - return false; - }); - } - - @Override - public RedisFuture clientSetname(K name) { - Map> executions = new HashMap<>(); - - for (RedisClusterNode redisClusterNode : getStatefulConnection().getPartitions()) { - RedisClusterAsyncCommands byNodeId = getConnection(redisClusterNode.getNodeId()); - if (byNodeId.isOpen()) { - executions.put("NodeId: " + redisClusterNode.getNodeId(), byNodeId.clientSetname(name)); - } - - RedisURI uri = redisClusterNode.getUri(); - RedisClusterAsyncCommands byHost = getConnection(uri.getHost(), uri.getPort()); - if (byHost.isOpen()) { - executions.put("HostAndPort: " + redisClusterNode.getNodeId(), byHost.clientSetname(name)); - } - } - - return MultiNodeExecution.firstOfAsync(executions); - } - - @Override - public RedisFuture> clusterGetKeysInSlot(int slot, int count) { - RedisClusterAsyncCommands 
connectionBySlot = findConnectionBySlot(slot); - - if (connectionBySlot != null) { - return connectionBySlot.clusterGetKeysInSlot(slot, count); - } - - return super.clusterGetKeysInSlot(slot, count); - } - - @Override - public RedisFuture clusterCountKeysInSlot(int slot) { - RedisClusterAsyncCommands connectionBySlot = findConnectionBySlot(slot); - - if (connectionBySlot != null) { - return connectionBySlot.clusterCountKeysInSlot(slot); - } - - return super.clusterCountKeysInSlot(slot); - } - - @Override - public RedisFuture dbsize() { - Map> executions = executeOnMasters(RedisServerAsyncCommands::dbsize); - return MultiNodeExecution.aggregateAsync(executions); - } - - @Override - public RedisFuture flushall() { - Map> executions = executeOnMasters(RedisServerAsyncCommands::flushall); - return MultiNodeExecution.firstOfAsync(executions); - } - - @Override - public RedisFuture flushdb() { - Map> executions = executeOnMasters(RedisServerAsyncCommands::flushdb); - return MultiNodeExecution.firstOfAsync(executions); - } - - @Override - public RedisFuture scriptFlush() { - Map> executions = executeOnNodes(RedisScriptingAsyncCommands::scriptFlush, - redisClusterNode -> true); - return MultiNodeExecution.firstOfAsync(executions); - } - - @Override - public RedisFuture scriptKill() { - Map> executions = executeOnNodes(RedisScriptingAsyncCommands::scriptFlush, - redisClusterNode -> true); - return MultiNodeExecution.alwaysOkOfAsync(executions); - } - - @Override - public RedisFuture randomkey() { - - Partitions partitions = getStatefulConnection().getPartitions(); - int index = random.nextInt(partitions.size()); - - RedisClusterAsyncCommands connection = getConnection(partitions.getPartition(index).getNodeId()); - return connection.randomkey(); - } - - @Override - public RedisFuture> keys(K pattern) { - Map>> executions = executeOnMasters(commands -> commands.keys(pattern)); - - return new PipelinedRedisFuture<>(executions, objectPipelinedRedisFuture -> { - List result = new ArrayList<>(); - for (RedisFuture> future : executions.values()) { - result.addAll(MultiNodeExecution.execute(() -> future.get())); - } - return result; - }); - } - - @Override - public RedisFuture keys(KeyStreamingChannel channel, K pattern) { - Map> executions = executeOnMasters(commands -> commands.keys(channel, pattern)); - return MultiNodeExecution.aggregateAsync(executions); - } - - @Override - public void shutdown(boolean save) { - - executeOnNodes(commands -> { - commands.shutdown(save); - - Command command = new Command<>(CommandType.SHUTDOWN, new IntegerOutput<>(codec), null); - AsyncCommand async = new AsyncCommand(command); - async.complete(); - return async; - }, redisClusterNode -> true); - } - - @Override - public RedisFuture touch(K... keys) { - return touch(Arrays.asList(keys)); - } - - public RedisFuture touch(Iterable keys) { - Map> partitioned = SlotHash.partition(codec, keys); - - if (partitioned.size() < 2) { - return super.touch(keys); - } - - Map> executions = new HashMap<>(); - - for (Map.Entry> entry : partitioned.entrySet()) { - RedisFuture touch = super.touch(entry.getValue()); - executions.put(entry.getKey(), touch); - } - - return MultiNodeExecution.aggregateAsync(executions); - } - - /** - * Run a command on all available masters, - * - * @param function function producing the command - * @param result type - * @return map of a key (counter) and commands. 
- */ - protected Map> executeOnMasters( - Function, RedisFuture> function) { - return executeOnNodes(function, redisClusterNode -> redisClusterNode.is(MASTER)); - } - - /** - * Run a command on all available nodes that match {@code filter}. - * - * @param function function producing the command - * @param filter filter function for the node selection - * @param result type - * @return map of a key (counter) and commands. - */ - protected Map> executeOnNodes( - Function, RedisFuture> function, Function filter) { - Map> executions = new HashMap<>(); - - for (RedisClusterNode redisClusterNode : getStatefulConnection().getPartitions()) { - - if (!filter.apply(redisClusterNode)) { - continue; - } - - RedisURI uri = redisClusterNode.getUri(); - RedisClusterAsyncCommands connection = getConnection(uri.getHost(), uri.getPort()); - if (connection.isOpen()) { - executions.put(redisClusterNode.getNodeId(), function.apply(connection)); - } - } - return executions; - } - - private RedisClusterAsyncCommands findConnectionBySlot(int slot) { - RedisClusterNode node = getStatefulConnection().getPartitions().getPartitionBySlot(slot); - if (node != null) { - return getConnection(node.getUri().getHost(), node.getUri().getPort()); - } - - return null; - } - - @Override - public RedisClusterAsyncCommands getConnection(String nodeId) { - return getStatefulConnection().getConnection(nodeId).async(); - } - - @Override - public RedisClusterAsyncCommands getConnection(String host, int port) { - return getStatefulConnection().getConnection(host, port).async(); - } - - @Override - public StatefulRedisClusterConnection getStatefulConnection() { - return (StatefulRedisClusterConnection) connection; - } - - @Override - public AsyncNodeSelection nodes(Predicate predicate) { - return nodes(predicate, false); - } - - @Override - public AsyncNodeSelection readonly(Predicate predicate) { - return nodes(predicate, ClusterConnectionProvider.Intent.READ, false); - } - - @Override - public AsyncNodeSelection nodes(Predicate predicate, boolean dynamic) { - return nodes(predicate, ClusterConnectionProvider.Intent.WRITE, dynamic); - } - - @SuppressWarnings("unchecked") - protected AsyncNodeSelection nodes(Predicate predicate, ClusterConnectionProvider.Intent intent, - boolean dynamic) { - - NodeSelectionSupport, ?> selection; - - if (dynamic) { - selection = new DynamicAsyncNodeSelection<>(getStatefulConnection(), predicate, intent); - } else { - selection = new StaticAsyncNodeSelection<>(getStatefulConnection(), predicate, intent); - } - - NodeSelectionInvocationHandler h = new NodeSelectionInvocationHandler((AbstractNodeSelection) selection); - return (AsyncNodeSelection) Proxy.newProxyInstance(NodeSelectionSupport.class.getClassLoader(), new Class[] { - NodeSelectionAsyncCommands.class, AsyncNodeSelection.class }, h); - } - - @Override - public RedisFuture> scan() { - return clusterScan(ScanCursor.INITIAL, (connection, cursor) -> connection.scan(), asyncClusterKeyScanCursorMapper()); - } - - @Override - public RedisFuture> scan(ScanArgs scanArgs) { - return clusterScan(ScanCursor.INITIAL, (connection, cursor) -> connection.scan(scanArgs), - asyncClusterKeyScanCursorMapper()); - } - - @Override - public RedisFuture> scan(ScanCursor scanCursor, ScanArgs scanArgs) { - return clusterScan(scanCursor, (connection, cursor) -> connection.scan(cursor, scanArgs), - asyncClusterKeyScanCursorMapper()); - } - - @Override - public RedisFuture> scan(ScanCursor scanCursor) { - return clusterScan(scanCursor, (connection, cursor) -> 
connection.scan(cursor), asyncClusterKeyScanCursorMapper()); - } - - @Override - public RedisFuture scan(KeyStreamingChannel channel) { - return clusterScan(ScanCursor.INITIAL, (connection, cursor) -> connection.scan(channel), - asyncClusterStreamScanCursorMapper()); - } - - @Override - public RedisFuture scan(KeyStreamingChannel channel, ScanArgs scanArgs) { - return clusterScan(ScanCursor.INITIAL, (connection, cursor) -> connection.scan(channel, scanArgs), - asyncClusterStreamScanCursorMapper()); - } - - @Override - public RedisFuture scan(KeyStreamingChannel channel, ScanCursor scanCursor, ScanArgs scanArgs) { - return clusterScan(scanCursor, (connection, cursor) -> connection.scan(channel, cursor, scanArgs), - asyncClusterStreamScanCursorMapper()); - } - - @Override - public RedisFuture scan(KeyStreamingChannel channel, ScanCursor scanCursor) { - return clusterScan(scanCursor, (connection, cursor) -> connection.scan(channel, cursor), - asyncClusterStreamScanCursorMapper()); - } - - private RedisFuture clusterScan(ScanCursor cursor, - BiFunction, ScanCursor, RedisFuture> scanFunction, - ScanCursorMapper> resultMapper) { - - return clusterScan(getStatefulConnection(), cursor, scanFunction, resultMapper); - } - - /** - * Perform a SCAN in the cluster. - * - */ - static RedisFuture clusterScan(StatefulRedisClusterConnection connection, - ScanCursor cursor, BiFunction, ScanCursor, RedisFuture> scanFunction, - ScanCursorMapper> mapper) { - - List nodeIds = ClusterScanSupport.getNodeIds(connection, cursor); - String currentNodeId = ClusterScanSupport.getCurrentNodeId(cursor, nodeIds); - ScanCursor continuationCursor = ClusterScanSupport.getContinuationCursor(cursor); - - RedisFuture scanCursor = scanFunction.apply(connection.getConnection(currentNodeId).async(), continuationCursor); - return mapper.map(nodeIds, currentNodeId, scanCursor); - } - -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/RedisAdvancedClusterAsyncConnection.java b/src/main/java/com/lambdaworks/redis/cluster/RedisAdvancedClusterAsyncConnection.java deleted file mode 100644 index 4bbc480aa6..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/RedisAdvancedClusterAsyncConnection.java +++ /dev/null @@ -1,59 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import com.lambdaworks.redis.RedisClusterAsyncConnection; -import com.lambdaworks.redis.RedisException; -import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection; - -/** - * Advanced asynchronous and thread-safe cluster API. - * - * @author Mark Paluch - * @since 3.3 - * @deprecated Use {@link com.lambdaworks.redis.cluster.api.async.RedisAdvancedClusterAsyncCommands} - */ -@Deprecated -public interface RedisAdvancedClusterAsyncConnection extends RedisClusterAsyncConnection { - - /** - * Retrieve a connection to the specified cluster node using the nodeId. Host and port are looked up in the node list. This - * connection is bound to the node id. Once the cluster topology view is updated, the connection will try to reconnect the - * to the node with the specified {@code nodeId}, that behavior can also lead to a closed connection once the node with the - * specified {@code nodeId} is no longer part of the cluster. - * - * Do not close the connections. Otherwise, unpredictable behavior will occur. The nodeId must be part of the cluster and is - * validated against the current topology view in {@link com.lambdaworks.redis.cluster.models.partitions.Partitions}. 
- * - * In contrast to the {@link RedisAdvancedClusterAsyncConnection}, node-connections do not route commands to other cluster - * nodes. - * - * @param nodeId the node Id - * @return a connection to the requested cluster node - * @throws RedisException if the requested node identified by {@code nodeId} is not part of the cluster - */ - RedisClusterAsyncConnection getConnection(String nodeId); - - /** - * Retrieve a connection to the specified cluster node using the nodeId. This connection is bound to a host and port. - * Updates to the cluster topology view can close the connection once the host, identified by {@code host} and {@code port}, - * are no longer part of the cluster. - * - * Do not close the connections. Otherwise, unpredictable behavior will occur. The node must be part of the cluster and - * host/port are validated (exact check) against the current topology view in - * {@link com.lambdaworks.redis.cluster.models.partitions.Partitions}. - * - * In contrast to the {@link RedisAdvancedClusterAsyncConnection}, node-connections do not route commands to other cluster - * nodes. - * - * @param host the host - * @param port the port - * @return a connection to the requested cluster node - * @throws RedisException if the requested node identified by {@code host} and {@code port} is not part of the cluster - */ - RedisClusterAsyncConnection getConnection(String host, int port); - - /** - * @return the underlying connection. - */ - StatefulRedisClusterConnection getStatefulConnection(); - -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/RedisAdvancedClusterConnection.java b/src/main/java/com/lambdaworks/redis/cluster/RedisAdvancedClusterConnection.java deleted file mode 100644 index 30f30ba03a..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/RedisAdvancedClusterConnection.java +++ /dev/null @@ -1,57 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import com.lambdaworks.redis.RedisClusterConnection; -import com.lambdaworks.redis.RedisException; -import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection; - -/** - * Advanced synchronous and thread-safe cluster API. - * - * @author Mark Paluch - * @since 3.3 - * @deprecated Use {@link com.lambdaworks.redis.cluster.api.sync.RedisAdvancedClusterCommands} - */ -@Deprecated -public interface RedisAdvancedClusterConnection extends RedisClusterConnection { - - /** - * Retrieve a connection to the specified cluster node using the nodeId. Host and port are looked up in the node list. This - * connection is bound to the node id. Once the cluster topology view is updated, the connection will try to reconnect the - * to the node with the specified {@code nodeId}, that behavior can also lead to a closed connection once the node with the - * specified {@code nodeId} is no longer part of the cluster. - * - * Do not close the connections. Otherwise, unpredictable behavior will occur. The nodeId must be part of the cluster and is - * validated against the current topology view in {@link com.lambdaworks.redis.cluster.models.partitions.Partitions}. - * - * In contrast to the {@link RedisAdvancedClusterConnection}, node-connections do not route commands to other cluster nodes. - * - * @param nodeId the node Id - * @return a connection to the requested cluster node - * @throws RedisException if the requested node identified by {@code nodeId} is not part of the cluster - */ - RedisClusterConnection getConnection(String nodeId); - - /** - * Retrieve a connection to the specified cluster node using the nodeId. 
This connection is bound to a host and port. - * Updates to the cluster topology view can close the connection once the host, identified by {@code host} and {@code port}, - * are no longer part of the cluster. - * - * Do not close the connections. Otherwise, unpredictable behavior will occur. The node must be part of the cluster and - * host/port are validated (exact check) against the current topology view in - * {@link com.lambdaworks.redis.cluster.models.partitions.Partitions}. - * - * In contrast to the {@link RedisAdvancedClusterConnection}, node-connections do not route commands to other cluster nodes. - * - * @param host the host - * @param port the port - * @return a connection to the requested cluster node - * @throws RedisException if the requested node identified by {@code host} and {@code port} is not part of the cluster - */ - RedisClusterConnection getConnection(String host, int port); - - /** - * @return the underlying connection. - */ - StatefulRedisClusterConnection getStatefulConnection(); - -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/RedisAdvancedClusterReactiveCommandsImpl.java b/src/main/java/com/lambdaworks/redis/cluster/RedisAdvancedClusterReactiveCommandsImpl.java deleted file mode 100644 index ce559f0390..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/RedisAdvancedClusterReactiveCommandsImpl.java +++ /dev/null @@ -1,484 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import static com.lambdaworks.redis.cluster.ClusterScanSupport.reactiveClusterKeyScanCursorMapper; -import static com.lambdaworks.redis.cluster.ClusterScanSupport.reactiveClusterStreamScanCursorMapper; -import static com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode.NodeFlag.MASTER; - -import java.util.*; -import java.util.function.BiFunction; -import java.util.function.Function; -import java.util.stream.Collectors; - -import com.lambdaworks.redis.internal.LettuceLists; -import rx.Observable; - -import com.lambdaworks.redis.*; -import com.lambdaworks.redis.api.rx.RedisKeyReactiveCommands; -import com.lambdaworks.redis.api.rx.RedisScriptingReactiveCommands; -import com.lambdaworks.redis.api.rx.RedisServerReactiveCommands; -import com.lambdaworks.redis.api.rx.Success; -import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection; -import com.lambdaworks.redis.cluster.api.rx.RedisAdvancedClusterReactiveCommands; -import com.lambdaworks.redis.cluster.api.rx.RedisClusterReactiveCommands; -import com.lambdaworks.redis.cluster.models.partitions.Partitions; -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; -import com.lambdaworks.redis.codec.RedisCodec; -import com.lambdaworks.redis.output.KeyStreamingChannel; -import com.lambdaworks.redis.output.ValueStreamingChannel; - -/** - * An advanced reactive and thread-safe API to a Redis Cluster connection. - * - * @author Mark Paluch - * @since 4.0 - */ -public class RedisAdvancedClusterReactiveCommandsImpl extends AbstractRedisReactiveCommands implements - RedisAdvancedClusterReactiveCommands { - - private Random random = new Random(); - - /** - * Initialize a new connection. - * - * @param connection the stateful connection - * @param codec Codec used to encode/decode keys and values. - */ - public RedisAdvancedClusterReactiveCommandsImpl(StatefulRedisClusterConnectionImpl connection, RedisCodec codec) { - super(connection, codec); - } - - @Override - public Observable del(K... 
keys) { - return del(Arrays.asList(keys)); - } - - @Override - public Observable del(Iterable keys) { - - Map> partitioned = SlotHash.partition(codec, keys); - - if (partitioned.size() < 2) { - return super.del(keys); - } - - List> observables = new ArrayList<>(); - - for (Map.Entry> entry : partitioned.entrySet()) { - observables.add(super.del(entry.getValue())); - } - - return Observable.merge(observables).reduce((accu, next) -> accu + next); - } - - @Override - public Observable unlink(K... keys) { - return unlink(Arrays.asList(keys)); - } - - @Override - public Observable unlink(Iterable keys) { - - Map> partitioned = SlotHash.partition(codec, keys); - - if (partitioned.size() < 2) { - return super.unlink(keys); - } - - List> observables = new ArrayList<>(); - - for (Map.Entry> entry : partitioned.entrySet()) { - observables.add(super.unlink(entry.getValue())); - } - - return Observable.merge(observables).reduce((accu, next) -> accu + next); - } - - @Override - public Observable exists(K... keys) { - return exists(Arrays.asList(keys)); - } - - public Observable exists(Iterable keys) { - - Map> partitioned = SlotHash.partition(codec, keys); - - if (partitioned.size() < 2) { - return super.exists(keys); - } - - List> observables = new ArrayList<>(); - - for (Map.Entry> entry : partitioned.entrySet()) { - observables.add(super.exists(entry.getValue())); - } - - return Observable.merge(observables).reduce((accu, next) -> accu + next); - } - - @Override - public Observable mget(K... keys) { - return mget(Arrays.asList(keys)); - } - - public Observable mget(Iterable keys) { - - List keyList = LettuceLists.newList(keys); - Map> partitioned = SlotHash.partition(codec, keyList); - - if (partitioned.size() < 2) { - return super.mget(keyList); - } - - List> observables = new ArrayList<>(); - - for (Map.Entry> entry : partitioned.entrySet()) { - observables.add(super.mget(entry.getValue())); - } - Observable observable = Observable.concat(Observable.from(observables)); - - Observable> map = observable.toList().map(vs -> { - - Object[] values = new Object[vs.size()]; - int offset = 0; - for (Map.Entry> entry : partitioned.entrySet()) { - - for (int i = 0; i < keyList.size(); i++) { - - int index = entry.getValue().indexOf(keyList.get(i)); - if (index == -1) { - continue; - } - - values[i] = vs.get(offset + index); - } - - offset += entry.getValue().size(); - } - - List objects = (List) new ArrayList<>(Arrays.asList(values)); - return objects; - }); - - return map.compose(new FlattenTransform<>()); - } - - @Override - public Observable mget(ValueStreamingChannel channel, K... 
keys) { - return mget(channel, Arrays.asList(keys)); - } - - public Observable mget(ValueStreamingChannel channel, Iterable keys) { - - List keyList = LettuceLists.newList(keys); - Map> partitioned = SlotHash.partition(codec, keyList); - - if (partitioned.size() < 2) { - return super.mget(channel, keyList); - } - - List> observables = new ArrayList<>(); - - for (Map.Entry> entry : partitioned.entrySet()) { - observables.add(super.mget(channel, entry.getValue())); - } - - return Observable.merge(observables).reduce((accu, next) -> accu + next); - } - - @Override - public Observable msetnx(Map map) { - - return pipeliningWithMap(map, kvMap -> super.msetnx(kvMap), - booleanObservable -> booleanObservable.reduce((accu, next) -> accu || next)); - } - - @Override - public Observable mset(Map map) { - return pipeliningWithMap(map, kvMap -> super.mset(kvMap), Observable::last); - } - - @Override - public Observable clusterGetKeysInSlot(int slot, int count) { - RedisClusterReactiveCommands connectionBySlot = findConnectionBySlot(slot); - - if (connectionBySlot != null) { - return connectionBySlot.clusterGetKeysInSlot(slot, count); - } - - return super.clusterGetKeysInSlot(slot, count); - } - - @Override - public Observable clusterCountKeysInSlot(int slot) { - RedisClusterReactiveCommands connectionBySlot = findConnectionBySlot(slot); - - if (connectionBySlot != null) { - return connectionBySlot.clusterCountKeysInSlot(slot); - } - - return super.clusterCountKeysInSlot(slot); - } - - @Override - public Observable clientSetname(K name) { - List> observables = new ArrayList<>(); - - for (RedisClusterNode redisClusterNode : getStatefulConnection().getPartitions()) { - RedisClusterReactiveCommands byNodeId = getConnection(redisClusterNode.getNodeId()); - if (byNodeId.isOpen()) { - observables.add(byNodeId.clientSetname(name)); - } - - RedisClusterReactiveCommands byHost = getConnection(redisClusterNode.getUri().getHost(), redisClusterNode - .getUri().getPort()); - if (byHost.isOpen()) { - observables.add(byHost.clientSetname(name)); - } - } - - return Observable.merge(observables).last(); - } - - @Override - public Observable dbsize() { - Map> observables = executeOnMasters(RedisServerReactiveCommands::dbsize); - return Observable.merge(observables.values()).reduce((accu, next) -> accu + next); - } - - @Override - public Observable flushall() { - Map> observables = executeOnMasters(RedisServerReactiveCommands::flushall); - return Observable.merge(observables.values()).last(); - } - - @Override - public Observable flushdb() { - Map> observables = executeOnMasters(RedisServerReactiveCommands::flushdb); - return Observable.merge(observables.values()).last(); - } - - @Override - public Observable keys(K pattern) { - Map> observables = executeOnMasters(commands -> commands.keys(pattern)); - return Observable.merge(observables.values()); - } - - @Override - public Observable keys(KeyStreamingChannel channel, K pattern) { - Map> observables = executeOnMasters(commands -> commands.keys(channel, pattern)); - return Observable.merge(observables.values()).reduce((accu, next) -> accu + next); - } - - @Override - public Observable randomkey() { - - Partitions partitions = getStatefulConnection().getPartitions(); - int index = random.nextInt(partitions.size()); - - RedisClusterReactiveCommands connection = getConnection(partitions.getPartition(index).getNodeId()); - return connection.randomkey(); - } - - @Override - public Observable scriptFlush() { - Map> observables = 
executeOnNodes(RedisScriptingReactiveCommands::scriptFlush, - redisClusterNode -> true); - return Observable.merge(observables.values()).last(); - } - - @Override - public Observable scriptKill() { - Map> observables = executeOnNodes(RedisScriptingReactiveCommands::scriptFlush, - redisClusterNode -> true); - return Observable.merge(observables.values()).onErrorReturn(throwable -> "OK").last(); - } - - @Override - public Observable shutdown(boolean save) { - Map> observables = executeOnNodes(commands -> commands.shutdown(save), - redisClusterNode -> true); - return Observable.merge(observables.values()).onErrorReturn(throwable -> null).last(); - } - - @Override - public Observable touch(K... keys) { - return touch(Arrays.asList(keys)); - } - - public Observable touch(Iterable keys) { - - List keyList = LettuceLists.newList(keys); - Map> partitioned = SlotHash.partition(codec, keyList); - - if (partitioned.size() < 2) { - return super.touch(keyList); - } - - List> observables = new ArrayList<>(); - - for (Map.Entry> entry : partitioned.entrySet()) { - observables.add(super.touch(entry.getValue())); - } - - return Observable.merge(observables).reduce((accu, next) -> accu + next); - } - - /** - * Run a command on all available masters, - * - * @param function function producing the command - * @param result type - * @return map of a key (counter) and commands. - */ - protected Map> executeOnMasters( - Function, Observable> function) { - return executeOnNodes(function, redisClusterNode -> redisClusterNode.is(MASTER)); - } - - /** - * Run a command on all available nodes that match {@code filter}. - * - * @param function function producing the command - * @param filter filter function for the node selection - * @param result type - * @return map of a key (counter) and commands. 
- */ - protected Map> executeOnNodes( - Function, Observable> function, Function filter) { - Map> executions = new HashMap<>(); - - for (RedisClusterNode redisClusterNode : getStatefulConnection().getPartitions()) { - - if (!filter.apply(redisClusterNode)) { - continue; - } - - RedisURI uri = redisClusterNode.getUri(); - RedisClusterReactiveCommands connection = getConnection(uri.getHost(), uri.getPort()); - if (connection.isOpen()) { - executions.put(redisClusterNode.getNodeId(), function.apply(connection)); - } - } - return executions; - } - - private RedisClusterReactiveCommands findConnectionBySlot(int slot) { - RedisClusterNode node = getStatefulConnection().getPartitions().getPartitionBySlot(slot); - if (node != null) { - return getConnection(node.getUri().getHost(), node.getUri().getPort()); - } - - return null; - } - - @Override - public StatefulRedisClusterConnection getStatefulConnection() { - return (StatefulRedisClusterConnection) connection; - } - - @Override - public RedisClusterReactiveCommands getConnection(String nodeId) { - return getStatefulConnection().getConnection(nodeId).reactive(); - } - - @Override - public RedisClusterReactiveCommands getConnection(String host, int port) { - return getStatefulConnection().getConnection(host, port).reactive(); - } - - @Override - public Observable> scan() { - return clusterScan(ScanCursor.INITIAL, (connection, cursor) -> connection.scan(), reactiveClusterKeyScanCursorMapper()); - } - - @Override - public Observable> scan(ScanArgs scanArgs) { - return clusterScan(ScanCursor.INITIAL, (connection, cursor) -> connection.scan(scanArgs), - reactiveClusterKeyScanCursorMapper()); - } - - @Override - public Observable> scan(ScanCursor scanCursor, ScanArgs scanArgs) { - return clusterScan(scanCursor, (connection, cursor) -> connection.scan(cursor, scanArgs), - reactiveClusterKeyScanCursorMapper()); - } - - @Override - public Observable> scan(ScanCursor scanCursor) { - return clusterScan(scanCursor, (connection, cursor) -> connection.scan(cursor), reactiveClusterKeyScanCursorMapper()); - } - - @Override - public Observable scan(KeyStreamingChannel channel) { - return clusterScan(ScanCursor.INITIAL, (connection, cursor) -> connection.scan(channel), - reactiveClusterStreamScanCursorMapper()); - } - - @Override - public Observable scan(KeyStreamingChannel channel, ScanArgs scanArgs) { - return clusterScan(ScanCursor.INITIAL, (connection, cursor) -> connection.scan(channel, scanArgs), - reactiveClusterStreamScanCursorMapper()); - } - - @Override - public Observable scan(KeyStreamingChannel channel, ScanCursor scanCursor, ScanArgs scanArgs) { - return clusterScan(scanCursor, (connection, cursor) -> connection.scan(channel, cursor, scanArgs), - reactiveClusterStreamScanCursorMapper()); - } - - @Override - public Observable scan(KeyStreamingChannel channel, ScanCursor scanCursor) { - return clusterScan(scanCursor, (connection, cursor) -> connection.scan(channel, cursor), - reactiveClusterStreamScanCursorMapper()); - } - - private Observable clusterScan(ScanCursor cursor, - BiFunction, ScanCursor, Observable> scanFunction, - ClusterScanSupport.ScanCursorMapper> resultMapper) { - - return clusterScan(getStatefulConnection(), cursor, scanFunction, (ClusterScanSupport.ScanCursorMapper) resultMapper); - } - - /** - * Perform a SCAN in the cluster. 
- * - */ - static Observable clusterScan(StatefulRedisClusterConnection connection, - ScanCursor cursor, BiFunction, ScanCursor, Observable> scanFunction, - ClusterScanSupport.ScanCursorMapper> mapper) { - - List nodeIds = ClusterScanSupport.getNodeIds(connection, cursor); - String currentNodeId = ClusterScanSupport.getCurrentNodeId(cursor, nodeIds); - ScanCursor continuationCursor = ClusterScanSupport.getContinuationCursor(cursor); - - Observable scanCursor = scanFunction.apply(connection.getConnection(currentNodeId).reactive(), continuationCursor); - return mapper.map(nodeIds, currentNodeId, scanCursor); - } - - private Observable pipeliningWithMap(Map map, Function, Observable> function, - Function, Observable> resultFunction) { - - Map> partitioned = SlotHash.partition(codec, map.keySet()); - - if (partitioned.size() < 2) { - return function.apply(map); - } - - List> observables = partitioned.values().stream().map(ks -> { - Map op = new HashMap<>(); - ks.forEach(k -> op.put(k, map.get(k))); - return function.apply(op); - }).collect(Collectors.toList()); - - return resultFunction.apply(Observable.merge(observables)); - } - - static class FlattenTransform implements Observable.Transformer, T> { - - @Override - public Observable call(Observable> source) { - return source.flatMap(values -> Observable.from(values)); - } - } - -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/RedisClusterClient.java b/src/main/java/com/lambdaworks/redis/cluster/RedisClusterClient.java deleted file mode 100644 index 0dbe3163e7..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/RedisClusterClient.java +++ /dev/null @@ -1,972 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import java.io.Closeable; -import java.net.SocketAddress; -import java.util.*; -import java.util.concurrent.TimeUnit; -import java.util.concurrent.atomic.AtomicBoolean; -import java.util.concurrent.atomic.AtomicReference; -import java.util.function.Consumer; -import java.util.function.Function; -import java.util.function.Predicate; -import java.util.function.Supplier; - -import com.lambdaworks.redis.*; -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.cluster.api.NodeSelectionSupport; -import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection; -import com.lambdaworks.redis.cluster.api.async.RedisAdvancedClusterAsyncCommands; -import com.lambdaworks.redis.cluster.api.sync.RedisAdvancedClusterCommands; -import com.lambdaworks.redis.cluster.event.ClusterTopologyChangedEvent; -import com.lambdaworks.redis.cluster.models.partitions.Partitions; -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; -import com.lambdaworks.redis.cluster.topology.ClusterTopologyRefresh; -import com.lambdaworks.redis.cluster.topology.NodeConnectionFactory; -import com.lambdaworks.redis.cluster.topology.TopologyComparators; -import com.lambdaworks.redis.codec.RedisCodec; -import com.lambdaworks.redis.codec.StringCodec; -import com.lambdaworks.redis.internal.LettuceAssert; -import com.lambdaworks.redis.internal.LettuceFactories; -import com.lambdaworks.redis.internal.LettuceLists; -import com.lambdaworks.redis.output.ValueStreamingChannel; -import com.lambdaworks.redis.protocol.CommandHandler; -import com.lambdaworks.redis.protocol.RedisCommand; -import com.lambdaworks.redis.pubsub.PubSubCommandHandler; -import com.lambdaworks.redis.pubsub.StatefulRedisPubSubConnection; -import com.lambdaworks.redis.pubsub.StatefulRedisPubSubConnectionImpl; -import 
com.lambdaworks.redis.resource.ClientResources; -import com.lambdaworks.redis.resource.SocketAddressResolver; - -import io.netty.util.concurrent.ScheduledFuture; -import io.netty.util.internal.logging.InternalLogger; -import io.netty.util.internal.logging.InternalLoggerFactory; - -/** - * A scalable thread-safe Redis cluster client. Multiple threads may share one connection. The - * cluster client handles command routing based on the first key of the command and maintains a view of the cluster that is - * available when calling the {@link #getPartitions()} method. - * - *

- * Connections to the cluster members are opened on the first access to the cluster node and managed by the
- * {@link StatefulRedisClusterConnection}. You should not use transactional commands on cluster connections since {@code
- * MULTI}, {@code EXEC} and {@code DISCARD} have no key and cannot be assigned to a particular node.
- * <p>
- * The Redis cluster client provides a {@link RedisAdvancedClusterCommands sync}, {@link RedisAdvancedClusterAsyncCommands
- * async} and {@link com.lambdaworks.redis.cluster.api.rx.RedisAdvancedClusterReactiveCommands reactive} API.
- * <p>
- * Connections to particular nodes can be obtained by {@link StatefulRedisClusterConnection#getConnection(String)} providing the
- * node id or {@link StatefulRedisClusterConnection#getConnection(String, int)} by host and port.
- * <p>
- * Multi-key operations have to operate on keys that hash to the same slot. The following commands do not need to follow that
- * rule since they are pipelined according to their hash values to multiple nodes in parallel on the sync, async and reactive
- * API (see the usage sketch after this list):
- * <ul>
- * <li>{@link RedisAdvancedClusterAsyncCommands#del(Object[]) DEL}</li>
- * <li>{@link RedisAdvancedClusterAsyncCommands#unlink(Object[]) UNLINK}</li>
- * <li>{@link RedisAdvancedClusterAsyncCommands#mget(Object[]) MGET}</li>
- * <li>{@link RedisAdvancedClusterAsyncCommands#mget(ValueStreamingChannel, Object[]) MGET with streaming}</li>
- * <li>{@link RedisAdvancedClusterAsyncCommands#mset(Map) MSET}</li>
- * <li>{@link RedisAdvancedClusterAsyncCommands#msetnx(Map) MSETNX}</li>
- * </ul>
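A minimal usage sketch of the multi-key routing described in the list above, using the 4.x `com.lambdaworks.redis` API shown in this diff. The endpoint `redis://localhost:7000`, the key names and the class name are hypothetical:

```java
import java.util.List;

import com.lambdaworks.redis.RedisFuture;
import com.lambdaworks.redis.RedisURI;
import com.lambdaworks.redis.cluster.RedisClusterClient;
import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection;
import com.lambdaworks.redis.cluster.api.async.RedisAdvancedClusterAsyncCommands;

public class MultiKeyRouting {

    public static void main(String[] args) throws Exception {

        RedisClusterClient clusterClient = RedisClusterClient.create(RedisURI.create("redis://localhost:7000"));
        StatefulRedisClusterConnection<String, String> connection = clusterClient.connect();
        RedisAdvancedClusterAsyncCommands<String, String> async = connection.async();

        // The keys may hash to different slots; MGET is partitioned per slot,
        // executed in parallel and the results are merged back in key order.
        RedisFuture<List<String>> values = async.mget("user:1", "user:2", "user:3");
        System.out.println(values.get());

        connection.close();
        clusterClient.shutdown();
    }
}
```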

- * The following commands on the Cluster sync, async and reactive API are implemented with a Cluster-flavor (a usage sketch
- * follows this list):
- * <ul>
- * <li>{@link RedisAdvancedClusterAsyncCommands#clientSetname(Object)} Executes {@code CLIENT SETNAME} on all connections and
- * initializes new connections with the {@code clientName}.</li>
- * <li>{@link RedisAdvancedClusterAsyncCommands#flushall()} Runs {@code FLUSHALL} on all master nodes.</li>
- * <li>{@link RedisAdvancedClusterAsyncCommands#flushdb()} Executes {@code FLUSHDB} on all master nodes.</li>
- * <li>{@link RedisAdvancedClusterAsyncCommands#keys(Object)} Executes {@code KEYS} on all master nodes.</li>
- * <li>{@link RedisAdvancedClusterAsyncCommands#randomkey()} Returns a random key from a random master node.</li>
- * <li>{@link RedisAdvancedClusterAsyncCommands#scriptFlush()} Executes {@code SCRIPT FLUSH} on all nodes.</li>
- * <li>{@link RedisAdvancedClusterAsyncCommands#scriptKill()} Executes {@code SCRIPT KILL} on all nodes.</li>
- * <li>{@link RedisAdvancedClusterAsyncCommands#shutdown(boolean)} Executes {@code SHUTDOWN} on all nodes.</li>
- * <li>{@link RedisAdvancedClusterAsyncCommands#scan()} Executes a {@code SCAN} on all nodes according to {@link ReadFrom}. The
- * resulting cursor must be reused across the {@code SCAN} to scan iteratively across the whole cluster.</li>
- * </ul>
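The cluster-flavored commands above can be used the same way on the synchronous API; a short sketch, assuming the `connection` from the previous sketch and `RedisAdvancedClusterCommands` imported from `com.lambdaworks.redis.cluster.api.sync`. The key pattern is hypothetical:

```java
// Assuming the connection from the previous sketch.
RedisAdvancedClusterCommands<String, String> sync = connection.sync();

Long totalKeys = sync.dbsize();              // DBSIZE aggregated over all master nodes
List<String> userKeys = sync.keys("user:*"); // KEYS fanned out to all master nodes, results merged
String randomKey = sync.randomkey();         // a random key from a random master node
```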

- * Cluster commands can be issued to multiple hosts in parallel by using the {@link NodeSelectionSupport} API. A set of nodes is
- * selected using a {@link java.util.function.Predicate} and commands can be issued to the node selection:
- *
- * <pre>{@code
- * AsyncExecutions<String> ping = commands.masters().commands().ping();
- * Collection<RedisClusterNode> nodes = ping.nodes();
- * nodes.stream().forEach(redisClusterNode -> ping.get(redisClusterNode));
- * }</pre>
- *

- * - * {@link RedisClusterClient} is an expensive resource. Reuse this instance or the {@link ClientResources} as much as possible. - * - * @author Mark Paluch - * @since 3.0 - */ -public class RedisClusterClient extends AbstractRedisClient { - - private static final InternalLogger logger = InternalLoggerFactory.getInstance(RedisClusterClient.class); - - protected AtomicBoolean clusterTopologyRefreshActivated = new AtomicBoolean(false); - protected AtomicReference> clusterTopologyRefreshFuture = new AtomicReference<>(); - - private final ClusterTopologyRefresh refresh = new ClusterTopologyRefresh(new NodeConnectionFactoryImpl(), getResources()); - private final ClusterTopologyRefreshScheduler clusterTopologyRefreshScheduler = new ClusterTopologyRefreshScheduler(this, - getResources()); - - private Partitions partitions; - private final Iterable initialUris; - - private RedisClusterClient() { - - setOptions(ClusterClientOptions.create()); - this.initialUris = Collections.emptyList(); - } - - /** - * Initialize the client with an initial cluster URI. - * - * @param initialUri initial cluster URI - * @deprecated Use {@link #create(RedisURI)} - */ - @Deprecated - public RedisClusterClient(RedisURI initialUri) { - this(Collections.singletonList(assertNotNull(initialUri))); - } - - /** - * Initialize the client with a list of cluster URI's. All uris are tried in sequence for connecting initially to the - * cluster. If any uri is successful for connection, the others are not tried anymore. The initial uri is needed to discover - * the cluster structure for distributing the requests. - * - * @param redisURIs iterable of initial {@link RedisURI cluster URIs}. Must not be {@literal null} and not empty. - * @deprecated Use {@link #create(Iterable)} - */ - @Deprecated - public RedisClusterClient(List redisURIs) { - this(null, redisURIs); - } - - /** - * Initialize the client with a list of cluster URI's. All uris are tried in sequence for connecting initially to the - * cluster. If any uri is successful for connection, the others are not tried anymore. The initial uri is needed to discover - * the cluster structure for distributing the requests. - * - * @param clientResources the client resources. If {@literal null}, the client will create a new dedicated instance of - * client resources and keep track of them. - * @param redisURIs iterable of initial {@link RedisURI cluster URIs}. Must not be {@literal null} and not empty. 
- */ - protected RedisClusterClient(ClientResources clientResources, Iterable redisURIs) { - - super(clientResources); - - assertNotEmpty(redisURIs); - assertSameOptions(redisURIs); - - this.initialUris = Collections.unmodifiableList(LettuceLists.newList(redisURIs)); - - setDefaultTimeout(getFirstUri().getTimeout(), getFirstUri().getUnit()); - setOptions(ClusterClientOptions.builder().build()); - } - - private static void assertSameOptions(Iterable redisURIs) { - - Boolean ssl = null; - Boolean startTls = null; - Boolean verifyPeer = null; - - for (RedisURI redisURI : redisURIs) { - - if (ssl == null) { - ssl = redisURI.isSsl(); - } - if (startTls == null) { - startTls = redisURI.isStartTls(); - } - if (verifyPeer == null) { - verifyPeer = redisURI.isVerifyPeer(); - } - - if (ssl.booleanValue() != redisURI.isSsl()) { - throw new IllegalArgumentException( - "RedisURI " + redisURI + " SSL is not consistent with the other seed URI SSL settings"); - } - - if (startTls.booleanValue() != redisURI.isStartTls()) { - throw new IllegalArgumentException( - "RedisURI " + redisURI + " StartTLS is not consistent with the other seed URI StartTLS settings"); - } - - if (verifyPeer.booleanValue() != redisURI.isVerifyPeer()) { - throw new IllegalArgumentException( - "RedisURI " + redisURI + " VerifyPeer is not consistent with the other seed URI VerifyPeer settings"); - } - } - } - - /** - * Create a new client that connects to the supplied {@link RedisURI uri} with default {@link ClientResources}. You can - * connect to different Redis servers but you must supply a {@link RedisURI} on connecting. - * - * @param redisURI the Redis URI, must not be {@literal null} - * @return a new instance of {@link RedisClusterClient} - */ - public static RedisClusterClient create(RedisURI redisURI) { - assertNotNull(redisURI); - return create(Collections.singleton(redisURI)); - } - - /** - * Create a new client that connects to the supplied {@link RedisURI uri} with default {@link ClientResources}. You can - * connect to different Redis servers but you must supply a {@link RedisURI} on connecting. - * - * @param redisURIs one or more Redis URI, must not be {@literal null} and not empty - * @return a new instance of {@link RedisClusterClient} - */ - public static RedisClusterClient create(Iterable redisURIs) { - assertNotEmpty(redisURIs); - assertSameOptions(redisURIs); - return new RedisClusterClient(null, redisURIs); - } - - /** - * Create a new client that connects to the supplied uri with default {@link ClientResources}. You can connect to different - * Redis servers but you must supply a {@link RedisURI} on connecting. - * - * @param uri the Redis URI, must not be {@literal null} - * @return a new instance of {@link RedisClusterClient} - */ - public static RedisClusterClient create(String uri) { - LettuceAssert.notNull(uri, "URI must not be null"); - return create(RedisURI.create(uri)); - } - - /** - * Create a new client that connects to the supplied {@link RedisURI uri} with shared {@link ClientResources}. You need to - * shut down the {@link ClientResources} upon shutting down your application.You can connect to different Redis servers but - * you must supply a {@link RedisURI} on connecting. 
- * - * @param clientResources the client resources, must not be {@literal null} - * @param redisURI the Redis URI, must not be {@literal null} - * @return a new instance of {@link RedisClusterClient} - */ - public static RedisClusterClient create(ClientResources clientResources, RedisURI redisURI) { - assertNotNull(clientResources); - assertNotNull(redisURI); - return create(clientResources, Collections.singleton(redisURI)); - } - - /** - * Create a new client that connects to the supplied uri with shared {@link ClientResources}.You need to shut down the - * {@link ClientResources} upon shutting down your application. You can connect to different Redis servers but you must - * supply a {@link RedisURI} on connecting. - * - * @param clientResources the client resources, must not be {@literal null} - * @param uri the Redis URI, must not be {@literal null} - * @return a new instance of {@link RedisClusterClient} - */ - public static RedisClusterClient create(ClientResources clientResources, String uri) { - assertNotNull(clientResources); - LettuceAssert.notNull(uri, "URI must not be null"); - return create(clientResources, RedisURI.create(uri)); - } - - /** - * Create a new client that connects to the supplied {@link RedisURI uri} with shared {@link ClientResources}. You need to - * shut down the {@link ClientResources} upon shutting down your application.You can connect to different Redis servers but - * you must supply a {@link RedisURI} on connecting. - * - * @param clientResources the client resources, must not be {@literal null} - * @param redisURIs one or more Redis URI, must not be {@literal null} and not empty - * @return a new instance of {@link RedisClusterClient} - */ - public static RedisClusterClient create(ClientResources clientResources, Iterable redisURIs) { - assertNotNull(clientResources); - assertNotEmpty(redisURIs); - assertSameOptions(redisURIs); - return new RedisClusterClient(clientResources, redisURIs); - } - - /** - * Connect to a Redis Cluster and treat keys and values as UTF-8 strings. - *

- * What to expect from this connection (see the sketch after this list):
- * <ul>
- * <li>A default connection is created to the node with the lowest latency</li>
- * <li>Keyless commands are sent to the default connection</li>
- * <li>Single-key keyspace commands are routed to the appropriate node</li>
- * <li>Multi-key keyspace commands require the same slot-hash and are routed to the appropriate node</li>
- * <li>Pub/sub commands are sent to the node that handles the slot derived from the pub/sub channel</li>
- * </ul>
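A sketch of the routing behavior listed above, assuming the `clusterClient` from the first sketch; the key and value are hypothetical:

```java
// Assuming the clusterClient from the first sketch.
StatefulRedisClusterConnection<String, String> connection = clusterClient.connect();
RedisAdvancedClusterCommands<String, String> sync = connection.sync();

sync.set("user:1", "alice");       // routed to the master serving the slot of "user:1"
String value = sync.get("user:1"); // same slot-based routing, no manual node lookup
```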
- * - * @return A new stateful Redis Cluster connection - */ - public StatefulRedisClusterConnection connect() { - return connect(newStringStringCodec()); - } - - /** - * Connect to a Redis Cluster. Use the supplied {@link RedisCodec codec} to encode/decode keys and values. - *

- * What to expect from this connection:
- * <ul>
- * <li>A default connection is created to the node with the lowest latency</li>
- * <li>Keyless commands are sent to the default connection</li>
- * <li>Single-key keyspace commands are routed to the appropriate node</li>
- * <li>Multi-key keyspace commands require the same slot-hash and are routed to the appropriate node</li>
- * <li>Pub/sub commands are sent to the node that handles the slot derived from the pub/sub channel</li>
- * </ul>
- * - * @param codec Use this codec to encode/decode keys and values, must not be {@literal null} - * @param Key type - * @param Value type - * @return A new stateful Redis Cluster connection - */ - @SuppressWarnings("unchecked") - public StatefulRedisClusterConnection connect(RedisCodec codec) { - return connectClusterImpl(codec); - } - - /** - * Connect to a Redis Cluster using pub/sub connections and treat keys and values as UTF-8 strings. - *

- * What to expect from this connection:
- * <ul>
- * <li>A default connection is created to the node with the least number of clients</li>
- * <li>Pub/sub commands are sent to the node with the least number of clients</li>
- * <li>Keyless commands are sent to the default connection</li>
- * <li>Single-key keyspace commands are routed to the appropriate node</li>
- * <li>Multi-key keyspace commands require the same slot-hash and are routed to the appropriate node</li>
- * </ul>
- * - * @return A new stateful Redis Cluster connection - */ - public StatefulRedisPubSubConnection connectPubSub() { - return connectPubSub(newStringStringCodec()); - } - - /** - * Connect to a Redis Cluster using pub/sub connections. Use the supplied {@link RedisCodec codec} to encode/decode keys and - * values. - *

- * What to expect from this connection (see the sketch after this list):
- * <ul>
- * <li>A default connection is created to the node with the least number of clients</li>
- * <li>Pub/sub commands are sent to the node with the least number of clients</li>
- * <li>Keyless commands are sent to the default connection</li>
- * <li>Single-key keyspace commands are routed to the appropriate node</li>
- * <li>Multi-key keyspace commands require the same slot-hash and are routed to the appropriate node</li>
- * </ul>
- * - * @param codec Use this codec to encode/decode keys and values, must not be {@literal null} - * @param Key type - * @param Value type - * @return A new stateful Redis Cluster connection - */ - @SuppressWarnings("unchecked") - public StatefulRedisPubSubConnection connectPubSub(RedisCodec codec) { - return connectClusterPubSubImpl(codec); - } - - /** - * Open a new synchronous connection to a Redis Cluster that treats keys and values as UTF-8 strings. - * - * @return A new connection - * @deprecated Use {@code connect().sync()} - */ - @Deprecated - public RedisAdvancedClusterCommands connectCluster() { - return connectCluster(newStringStringCodec()); - } - - /** - * Open a new synchronous connection to a Redis Cluster. Use the supplied {@link RedisCodec codec} to encode/decode keys and - * values. - * - * @param codec Use this codec to encode/decode keys and values, must not be {@literal null} - * @param Key type - * @param Value type - * @return A new connection - * @deprecated @deprecated Use {@code connect(codec).sync()} - */ - @SuppressWarnings("unchecked") - @Deprecated - public RedisAdvancedClusterCommands connectCluster(RedisCodec codec) { - return connectClusterImpl(codec).sync(); - } - - /** - * Open a new asynchronous connection to a Redis Cluster that treats keys and values as UTF-8 strings. - * - * @return A new connection - * @deprecated Use {@code connect().async()} - */ - @Deprecated - public RedisAdvancedClusterAsyncCommands connectClusterAsync() { - return connectClusterImpl(newStringStringCodec()).async(); - } - - /** - * Open a new asynchronous connection to a Redis Cluster. Use the supplied {@link RedisCodec codec} to encode/decode keys - * and values. - * - * @param codec Use this codec to encode/decode keys and values, must not be {@literal null} - * @param Key type - * @param Value type - * @return A new connection - * @deprecated @deprecated Use {@code connect(codec).async()} - */ - @Deprecated - public RedisAdvancedClusterAsyncCommands connectClusterAsync(RedisCodec codec) { - return connectClusterImpl(codec).async(); - } - - protected StatefulRedisConnection connectToNode(final SocketAddress socketAddress) { - return connectToNode(newStringStringCodec(), socketAddress.toString(), null, new Supplier() { - @Override - public SocketAddress get() { - return socketAddress; - } - }); - } - - /** - * Create a connection to a redis socket address. 
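The `create(...)` factories and `connect(...)` variants documented in this deleted file form the public entry point of the cluster client. A minimal usage sketch; the class name, seed URI and key below are illustrative placeholders, not taken from this change:

```java
import com.lambdaworks.redis.RedisURI;
import com.lambdaworks.redis.cluster.RedisClusterClient;
import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection;
import com.lambdaworks.redis.cluster.api.sync.RedisAdvancedClusterCommands;
import com.lambdaworks.redis.resource.ClientResources;
import com.lambdaworks.redis.resource.DefaultClientResources;

public class ClusterConnectExample {

    public static void main(String[] args) {
        // With the ClientResources overloads the application owns the resources
        // and must shut them down itself, as the Javadoc above states.
        ClientResources resources = DefaultClientResources.create();
        RedisClusterClient client = RedisClusterClient.create(resources,
                RedisURI.create("redis://localhost:7000")); // placeholder seed node

        StatefulRedisClusterConnection<String, String> connection = client.connect();
        RedisAdvancedClusterCommands<String, String> sync = connection.sync();
        sync.set("key", "value"); // routed to the node that owns the slot of "key"

        connection.close();
        client.shutdown();
        resources.shutdown();
    }
}
```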
- * - * @param codec Use this codec to encode/decode keys and values, must not be {@literal null} - * @param nodeId the nodeId - * @param clusterWriter global cluster writer - * @param socketAddressSupplier supplier for the socket address - * @param Key type - * @param Value type - * @return A new connection - */ - StatefulRedisConnection connectToNode(RedisCodec codec, String nodeId, - RedisChannelWriter clusterWriter, final Supplier socketAddressSupplier) { - - assertNotNull(codec); - assertNotEmpty(initialUris); - - LettuceAssert.notNull(socketAddressSupplier, "SocketAddressSupplier must not be null"); - - logger.debug("connectNode(" + nodeId + ")"); - Queue> queue = LettuceFactories.newConcurrentQueue(); - - ClusterNodeCommandHandler handler = new ClusterNodeCommandHandler(clientOptions, getResources(), queue, - clusterWriter); - StatefulRedisConnectionImpl connection = new StatefulRedisConnectionImpl(handler, codec, timeout, unit); - - try { - connectStateful(handler, connection, getFirstUri(), socketAddressSupplier); - - connection.registerCloseables(closeableResources, connection); - } catch (RedisException e) { - connection.close(); - throw e; - } - - return connection; - } - - /** - * Create a clustered pub/sub connection with command distributor. - * - * @param codec Use this codec to encode/decode keys and values, must not be {@literal null} - * @param Key type - * @param Value type - * @return a new connection - */ - StatefulRedisClusterConnectionImpl connectClusterImpl(RedisCodec codec) { - - if (partitions == null) { - initializePartitions(); - } - - activateTopologyRefreshIfNeeded(); - - logger.debug("connectCluster(" + initialUris + ")"); - Queue> queue = LettuceFactories.newConcurrentQueue(); - - Supplier socketAddressSupplier = getSocketAddressSupplier(TopologyComparators::sortByClientCount); - - CommandHandler handler = new CommandHandler(clientOptions, clientResources, queue); - - ClusterDistributionChannelWriter clusterWriter = new ClusterDistributionChannelWriter(clientOptions, - handler, clusterTopologyRefreshScheduler, getResources().eventExecutorGroup()); - PooledClusterConnectionProvider pooledClusterConnectionProvider = new PooledClusterConnectionProvider(this, - clusterWriter, codec); - - clusterWriter.setClusterConnectionProvider(pooledClusterConnectionProvider); - - StatefulRedisClusterConnectionImpl connection = new StatefulRedisClusterConnectionImpl<>(clusterWriter, codec, - timeout, unit); - - connection.setReadFrom(ReadFrom.MASTER); - connection.setPartitions(partitions); - - boolean connected = false; - RedisException causingException = null; - int connectionAttempts = Math.max(1, partitions.size()); - - for (int i = 0; i < connectionAttempts; i++) { - try { - connectStateful(handler, connection, getFirstUri(), socketAddressSupplier); - connected = true; - break; - } catch (RedisException e) { - logger.warn(e.getMessage()); - causingException = e; - } - } - - if (!connected) { - connection.close(); - if (causingException != null) { - throw causingException; - } - } - - connection.registerCloseables(closeableResources, connection, clusterWriter, pooledClusterConnectionProvider); - - return connection; - } - - /** - * Create a clustered connection with command distributor. 
- * - * @param codec Use this codec to encode/decode keys and values, must not be {@literal null} - * @param Key type - * @param Value type - * @return a new connection - */ - StatefulRedisPubSubConnectionImpl connectClusterPubSubImpl(RedisCodec codec) { - - if (partitions == null) { - initializePartitions(); - } - - activateTopologyRefreshIfNeeded(); - - logger.debug("connectClusterPubSub(" + initialUris + ")"); - Queue> queue = LettuceFactories.newConcurrentQueue(); - - Supplier socketAddressSupplier = getSocketAddressSupplier(TopologyComparators::sortByClientCount); - - PubSubCommandHandler handler = new PubSubCommandHandler(clientOptions, clientResources, queue, codec); - - ClusterDistributionChannelWriter clusterWriter = new ClusterDistributionChannelWriter(clientOptions, - handler, clusterTopologyRefreshScheduler, getResources().eventExecutorGroup()); - PooledClusterConnectionProvider pooledClusterConnectionProvider = new PooledClusterConnectionProvider(this, - clusterWriter, codec); - - clusterWriter.setClusterConnectionProvider(pooledClusterConnectionProvider); - - StatefulRedisPubSubConnectionImpl connection = new StatefulRedisPubSubConnectionImpl<>(clusterWriter, codec, - timeout, unit); - - clusterWriter.setPartitions(partitions); - - boolean connected = false; - RedisException causingException = null; - int connectionAttempts = Math.max(1, partitions.size()); - - for (int i = 0; i < connectionAttempts; i++) { - try { - connectStateful(handler, connection, getFirstUri(), socketAddressSupplier); - connected = true; - break; - } catch (RedisException e) { - logger.warn(e.getMessage()); - causingException = e; - } - } - - if (!connected) { - connection.close(); - throw causingException; - } - - connection.registerCloseables(closeableResources, connection, clusterWriter, pooledClusterConnectionProvider); - - if (getFirstUri().getPassword() != null) { - connection.async().auth(new String(getFirstUri().getPassword())); - } - - return connection; - } - - /** - * Connect to a endpoint provided by {@code socketAddressSupplier} using connection settings (authentication, SSL) from - * {@code connectionSettings}. - * - * @param handler - * @param connection - * @param connectionSettings - * @param socketAddressSupplier - * @param - * @param - */ - private void connectStateful(CommandHandler handler, StatefulRedisConnectionImpl connection, - RedisURI connectionSettings, Supplier socketAddressSupplier) { - - connectStateful0(handler, connection, connectionSettings, socketAddressSupplier); - - if (connectionSettings.getPassword() != null && connectionSettings.getPassword().length != 0) { - connection.async().auth(new String(connectionSettings.getPassword())); - } - } - - /** - * Connect to a endpoint provided by {@code socketAddressSupplier} using connection settings (authentication, SSL) from - * {@code connectionSettings}. 
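The `connectStateful` helpers above pick up authentication and SSL settings from the seed `RedisURI`. For reference, a sketch of assembling such a URI with the builder API; host, port and password are placeholders:

```java
import java.util.concurrent.TimeUnit;

import com.lambdaworks.redis.RedisURI;

class SeedUriExample {

    // Builds a seed URI carrying the settings (password, SSL, timeout) that the
    // connectStateful(...) helpers propagate to node connections.
    static RedisURI seedUri() {
        return RedisURI.Builder.redis("cluster-node-1", 7000) // placeholder host/port
                .withPassword("secret")                       // placeholder password
                .withSsl(true)
                .withVerifyPeer(true)
                .withTimeout(10, TimeUnit.SECONDS)
                .build();
    }
}
```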
- * - * @param handler - * @param connection - * @param connectionSettings - * @param socketAddressSupplier - * @param - * @param - */ - private void connectStateful(CommandHandler handler, StatefulRedisClusterConnectionImpl connection, - RedisURI connectionSettings, Supplier socketAddressSupplier) { - - connectStateful0(handler, connection, connectionSettings, socketAddressSupplier); - - if (connectionSettings.getPassword() != null && connectionSettings.getPassword().length != 0) { - connection.async().auth(new String(connectionSettings.getPassword())); - } - } - - /** - * Connect to a endpoint provided by {@code socketAddressSupplier} using connection settings (SSL) from {@code - * connectionSettings}. - * - * @param handler - * @param connection - * @param connectionSettings - * @param socketAddressSupplier - * @param - * @param - */ - private void connectStateful0(CommandHandler handler, RedisChannelHandler connection, - RedisURI connectionSettings, Supplier socketAddressSupplier) { - - ConnectionBuilder connectionBuilder; - if (connectionSettings.isSsl()) { - SslConnectionBuilder sslConnectionBuilder = SslConnectionBuilder.sslConnectionBuilder(); - sslConnectionBuilder.ssl(connectionSettings); - connectionBuilder = sslConnectionBuilder; - } else { - connectionBuilder = ConnectionBuilder.connectionBuilder(); - } - - connectionBuilder.reconnectionListener(new ReconnectEventListener(clusterTopologyRefreshScheduler)); - connectionBuilder.clientOptions(clientOptions); - connectionBuilder.clientResources(clientResources); - connectionBuilder(handler, connection, socketAddressSupplier, connectionBuilder, connectionSettings); - channelType(connectionBuilder, connectionSettings); - - initializeChannel(connectionBuilder); - } - - /** - * Reload partitions and re-initialize the distribution table. - */ - public void reloadPartitions() { - - if (partitions == null) { - initializePartitions(); - partitions.updateCache(); - } else { - - Partitions loadedPartitions = loadPartitions(); - if (TopologyComparators.isChanged(getPartitions(), loadedPartitions)) { - - logger.debug("Using a new cluster topology"); - - List before = new ArrayList(getPartitions()); - List after = new ArrayList(loadedPartitions); - - getResources().eventBus().publish(new ClusterTopologyChangedEvent(before, after)); - } - - this.partitions.reload(loadedPartitions.getPartitions()); - } - - updatePartitionsInConnections(); - } - - protected void updatePartitionsInConnections() { - - forEachClusterConnection(input -> { - input.setPartitions(partitions); - }); - } - - protected void initializePartitions() { - - Partitions loadedPartitions = loadPartitions(); - this.partitions = loadedPartitions; - } - - /** - * Retrieve the cluster view. Partitions are shared amongst all connections opened by this client instance. - * - * @return the partitions. - */ - public Partitions getPartitions() { - if (partitions == null) { - initializePartitions(); - } - return partitions; - } - - /** - * Retrieve partitions. Nodes within {@link Partitions} are ordered by latency. Lower latency nodes come first. 
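`getPartitions()` exposes the shared topology view described above. A small illustrative sketch of inspecting it, assuming an already created `RedisClusterClient`:

```java
import com.lambdaworks.redis.cluster.RedisClusterClient;
import com.lambdaworks.redis.cluster.models.partitions.Partitions;
import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode;

class PartitionsInspection {

    // Prints each known cluster node with its URI and the number of slots it serves.
    static void printTopology(RedisClusterClient clusterClient) {
        Partitions partitions = clusterClient.getPartitions();
        for (RedisClusterNode node : partitions) {
            System.out.println(node.getNodeId() + " " + node.getUri()
                    + " slots=" + node.getSlots().size());
        }
    }
}
```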
- * - * @return Partitions - */ - protected Partitions loadPartitions() { - - Iterable topologyRefreshSource = getTopologyRefreshSource(); - Map partitions = refresh.loadViews(topologyRefreshSource, useDynamicRefreshSources()); - - if (partitions.isEmpty()) { - throw new RedisException("Cannot retrieve initial cluster partitions from initial URIs " + topologyRefreshSource); - } - - Partitions loadedPartitions = partitions.values().iterator().next(); - RedisURI viewedBy = refresh.getViewedBy(partitions, loadedPartitions); - - for (RedisClusterNode partition : loadedPartitions) { - if (viewedBy != null) { - RedisURI uri = partition.getUri(); - applyUriConnectionSettings(viewedBy, uri); - } - } - - activateTopologyRefreshIfNeeded(); - - return loadedPartitions; - } - - private void activateTopologyRefreshIfNeeded() { - - if (getOptions() instanceof ClusterClientOptions) { - ClusterClientOptions options = (ClusterClientOptions) getOptions(); - ClusterTopologyRefreshOptions topologyRefreshOptions = options.getTopologyRefreshOptions(); - - if (!topologyRefreshOptions.isPeriodicRefreshEnabled() || clusterTopologyRefreshActivated.get()) { - return; - } - - if (clusterTopologyRefreshActivated.compareAndSet(false, true)) { - ScheduledFuture scheduledFuture = genericWorkerPool - .scheduleAtFixedRate(clusterTopologyRefreshScheduler, options.getRefreshPeriod(), - options.getRefreshPeriod(), options.getRefreshPeriodUnit()); - clusterTopologyRefreshFuture.set(scheduledFuture); - } - } - } - - protected RedisURI getFirstUri() { - assertNotEmpty(initialUris); - Iterator iterator = initialUris.iterator(); - return iterator.next(); - } - - /** - * Returns a {@link Supplier} for {@link SocketAddress connection points}. - * - * @param sortFunction Sort function to enforce a specific order. The sort function must not change the order or the input - * parameter but create a new collection with the desired order, must not be {@literal null}. - * @return {@link Supplier} for {@link SocketAddress connection points}. - */ - protected Supplier getSocketAddressSupplier( - Function> sortFunction) { - - LettuceAssert.notNull(sortFunction, "Sort function must not be null"); - - final RoundRobinSocketAddressSupplier socketAddressSupplier = new RoundRobinSocketAddressSupplier(partitions, - sortFunction, clientResources); - return () -> { - if (partitions.isEmpty()) { - SocketAddress socketAddress = SocketAddressResolver.resolve(getFirstUri(), clientResources.dnsResolver()); - logger.debug("Resolved SocketAddress {} using {}", socketAddress, getFirstUri()); - return socketAddress; - } - - return socketAddressSupplier.get(); - }; - } - - protected RedisCodec newStringStringCodec() { - return StringCodec.UTF8; - } - - /** - * Sets the new cluster topology. The partitions are not applied to existing connections. - * - * @param partitions partitions object - */ - public void setPartitions(Partitions partitions) { - this.partitions = partitions; - } - - /** - * Returns the {@link ClientResources} which are used with that client. - * - * @return the {@link ClientResources} for this client - */ - public ClientResources getResources() { - return clientResources; - } - - /** - * Shutdown this client and close all open connections. The client should be discarded after calling shutdown. 
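`activateTopologyRefreshIfNeeded()` only schedules the background job when the configured `ClusterClientOptions` carry `ClusterTopologyRefreshOptions` with periodic refresh enabled. A sketch of switching that on; the 30-second period is an arbitrary example value and the builder method names follow the 4.x options API:

```java
import java.util.concurrent.TimeUnit;

import com.lambdaworks.redis.cluster.ClusterClientOptions;
import com.lambdaworks.redis.cluster.ClusterTopologyRefreshOptions;
import com.lambdaworks.redis.cluster.RedisClusterClient;

class TopologyRefreshSetup {

    // Enables periodic topology refresh so the client picks up cluster changes on its own.
    static void enablePeriodicRefresh(RedisClusterClient clusterClient) {
        ClusterTopologyRefreshOptions refreshOptions = ClusterTopologyRefreshOptions.builder()
                .enablePeriodicRefresh(30, TimeUnit.SECONDS) // example period
                .build();

        clusterClient.setOptions(ClusterClientOptions.builder()
                .topologyRefreshOptions(refreshOptions)
                .build());
    }
}
```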
- * - * @param quietPeriod the quiet period as described in the documentation - * @param timeout the maximum amount of time to wait until the executor is shutdown regardless if a task was submitted - * during the quiet period - * @param timeUnit the unit of {@code quietPeriod} and {@code timeout} - */ - @Override - public void shutdown(long quietPeriod, long timeout, TimeUnit timeUnit) { - - if(clusterTopologyRefreshActivated.compareAndSet(true, false)){ - - ScheduledFuture scheduledFuture = clusterTopologyRefreshFuture.get(); - - try { - scheduledFuture.cancel(false); - clusterTopologyRefreshFuture.set(null); - } catch (Exception e) { - logger.debug("Could not unschedule Cluster topology refresh", e); - } - } - - super.shutdown(quietPeriod, timeout, timeUnit); - } - - protected void forEachClusterConnection(Consumer> function) { - forEachCloseable(input -> input instanceof StatefulRedisClusterConnectionImpl, function); - } - - protected void forEachCloseable(Predicate selector, Consumer function) { - for (Closeable c : closeableResources) { - if (selector.test(c)) { - function.accept((T) c); - } - } - } - - /** - * Set the {@link ClusterClientOptions} for the client. - * - * @param clientOptions client options for the client and connections that are created after setting the options - */ - public void setOptions(ClusterClientOptions clientOptions) { - super.setOptions(clientOptions); - } - - /** - * Returns the initial {@link RedisURI URIs}. - * - * @return the initial {@link RedisURI URIs} - */ - protected Iterable getInitialUris() { - return initialUris; - } - - ClusterClientOptions getClusterClientOptions() { - if (getOptions() instanceof ClusterClientOptions) { - return (ClusterClientOptions) getOptions(); - } - return null; - } - - boolean expireStaleConnections() { - return getClusterClientOptions() == null || getClusterClientOptions().isCloseStaleConnections(); - } - - static void applyUriConnectionSettings(RedisURI from, RedisURI to) { - - if (from.getPassword() != null && from.getPassword().length != 0) { - to.setPassword(new String(from.getPassword())); - } - - to.setTimeout(from.getTimeout()); - to.setUnit(from.getUnit()); - to.setSsl(from.isSsl()); - to.setStartTls(from.isStartTls()); - to.setVerifyPeer(from.isVerifyPeer()); - } - - private static void assertNotNull(RedisCodec codec) { - LettuceAssert.notNull(codec, "RedisCodec must not be null"); - } - - private static void assertNotEmpty(Iterable redisURIs) { - LettuceAssert.notNull(redisURIs, "RedisURIs must not be null"); - LettuceAssert.isTrue(redisURIs.iterator().hasNext(), "RedisURIs must not be empty"); - } - - private static RedisURI assertNotNull(RedisURI redisURI) { - LettuceAssert.notNull(redisURI, "RedisURI must not be null"); - return redisURI; - } - - private static void assertNotNull(ClientResources clientResources) { - LettuceAssert.notNull(clientResources, "ClientResources must not be null"); - } - - protected Iterable getTopologyRefreshSource() { - - boolean initialSeedNodes = !useDynamicRefreshSources(); - - Iterable seed; - if (initialSeedNodes || partitions == null || partitions.isEmpty()) { - seed = RedisClusterClient.this.initialUris; - } else { - List uris = new ArrayList<>(); - for (RedisClusterNode partition : TopologyComparators.sortByUri(partitions)) { - uris.add(partition.getUri()); - } - seed = uris; - } - return seed; - } - - protected boolean useDynamicRefreshSources() { - - if (getClusterClientOptions() != null) { - ClusterTopologyRefreshOptions topologyRefreshOptions = 
getClusterClientOptions().getTopologyRefreshOptions(); - - return topologyRefreshOptions.useDynamicRefreshSources(); - } - return true; - } - - private class NodeConnectionFactoryImpl implements NodeConnectionFactory { - @Override - public StatefulRedisConnection connectToNode(RedisCodec codec, SocketAddress socketAddress) { - return RedisClusterClient.this.connectToNode(codec, socketAddress.toString(), null, new Supplier() { - @Override - public SocketAddress get() { - return socketAddress; - } - }); - } - } - -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/RoundRobin.java b/src/main/java/com/lambdaworks/redis/cluster/RoundRobin.java deleted file mode 100644 index bb22c99e39..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/RoundRobin.java +++ /dev/null @@ -1,47 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import java.util.Collection; - -/** - * Circular element provider. This class allows infinite scrolling over a collection with the possibility to provide an initial - * offset. - * - * @author Mark Paluch - */ -class RoundRobin { - - protected final Collection collection; - protected V offset; - - public RoundRobin(Collection collection) { - this(collection, null); - } - - public RoundRobin(Collection collection, V offset) { - this.collection = collection; - this.offset = offset; - } - - /** - * Returns the next item. - * - * @return the next item - */ - public V next() { - if (offset != null) { - boolean accept = false; - for (V element : collection) { - if (element == offset) { - accept = true; - continue; - } - - if (accept) { - return offset = element; - } - } - } - - return offset = collection.iterator().next(); - } -} \ No newline at end of file diff --git a/src/main/java/com/lambdaworks/redis/cluster/RoundRobinSocketAddressSupplier.java b/src/main/java/com/lambdaworks/redis/cluster/RoundRobinSocketAddressSupplier.java deleted file mode 100644 index fc7214d103..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/RoundRobinSocketAddressSupplier.java +++ /dev/null @@ -1,67 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import java.net.SocketAddress; -import java.util.ArrayList; -import java.util.Collection; -import java.util.function.Function; -import java.util.function.Supplier; - -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; -import com.lambdaworks.redis.internal.LettuceAssert; -import com.lambdaworks.redis.resource.ClientResources; -import com.lambdaworks.redis.resource.SocketAddressResolver; - -import io.netty.util.internal.logging.InternalLogger; -import io.netty.util.internal.logging.InternalLoggerFactory; - -/** - * Round-Robin socket address supplier. Cluster nodes are iterated circular/infinitely. 
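The removed `RoundRobin` helper simply cycles over a collection, remembering where it left off. A simplified, self-contained sketch of the same idea, not the removed class itself:

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.Iterator;

// Simplified illustration of the circular-iteration idea behind the removed RoundRobin helper.
class SimpleRoundRobin<V> {

    private final Collection<V> elements;
    private Iterator<V> iterator;

    SimpleRoundRobin(Collection<V> elements) {
        this.elements = elements;
        this.iterator = elements.iterator();
    }

    V next() {
        if (!iterator.hasNext()) {
            iterator = elements.iterator(); // wrap around to the first element
        }
        return iterator.next();
    }

    public static void main(String[] args) {
        SimpleRoundRobin<String> nodes = new SimpleRoundRobin<>(Arrays.asList("node-a", "node-b", "node-c"));
        for (int i = 0; i < 5; i++) {
            System.out.println(nodes.next()); // node-a, node-b, node-c, node-a, node-b
        }
    }
}
```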
- * - * @author Mark Paluch - */ -class RoundRobinSocketAddressSupplier implements Supplier { - - private static final InternalLogger logger = InternalLoggerFactory.getInstance(RoundRobinSocketAddressSupplier.class); - private final Collection partitions; - private final Collection clusterNodes = new ArrayList<>(); - private final Function, Collection> sortFunction; - private final ClientResources clientResources; - private RoundRobin roundRobin; - - public RoundRobinSocketAddressSupplier(Collection partitions, - Function, Collection> sortFunction, - ClientResources clientResources) { - - LettuceAssert.notNull(partitions, "Partitions must not be null"); - LettuceAssert.notNull(sortFunction, "Sort-Function must not be null"); - - this.partitions = partitions; - this.clusterNodes.addAll(partitions); - this.roundRobin = new RoundRobin<>(clusterNodes); - this.sortFunction = (Function) sortFunction; - this.clientResources = clientResources; - resetRoundRobin(); - } - - @Override - public SocketAddress get() { - if (!clusterNodes.containsAll(partitions)) { - resetRoundRobin(); - } - - RedisClusterNode redisClusterNode = roundRobin.next(); - return getSocketAddress(redisClusterNode); - } - - protected void resetRoundRobin() { - clusterNodes.clear(); - clusterNodes.addAll(sortFunction.apply(partitions)); - roundRobin.offset = null; - } - - protected SocketAddress getSocketAddress(RedisClusterNode redisClusterNode) { - SocketAddress resolvedAddress = SocketAddressResolver.resolve(redisClusterNode.getUri(), clientResources.dnsResolver()); - logger.debug("Resolved SocketAddress {} using for Cluster node {}", resolvedAddress, redisClusterNode.getNodeId()); - return resolvedAddress; - } -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/SlotHash.java b/src/main/java/com/lambdaworks/redis/cluster/SlotHash.java deleted file mode 100644 index 680f9d3e16..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/SlotHash.java +++ /dev/null @@ -1,138 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import java.nio.ByteBuffer; -import java.util.*; - -import com.lambdaworks.codec.CRC16; -import com.lambdaworks.redis.codec.RedisCodec; - -/** - * Utility to calculate the slot from a key. - * - * @author Mark Paluch - * @since 3.0 - */ -public class SlotHash { - - /** - * Constant for a subkey start. - */ - public static final byte SUBKEY_START = (byte) '{'; - - /** - * Constant for a subkey end. - */ - public static final byte SUBKEY_END = (byte) '}'; - - /** - * Number of redis cluster slot hashes. - */ - public static final int SLOT_COUNT = 16384; - - private SlotHash() { - - } - - /** - * Calculate the slot from the given key. - * - * @param key the key - * @return slot - */ - public static final int getSlot(String key) { - return getSlot(key.getBytes()); - } - - /** - * Calculate the slot from the given key. - * - * @param key the key - * @return slot - */ - public static final int getSlot(byte[] key) { - return getSlot(ByteBuffer.wrap(key)); - } - - /** - * Calculate the slot from the given key. 
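`SlotHash.getSlot(...)` computes the cluster slot of a key and honours `{...}` hash tags, as the `SUBKEY_START`/`SUBKEY_END` handling above shows. An illustrative sketch; the key names are arbitrary:

```java
import com.lambdaworks.redis.cluster.SlotHash;

class SlotHashExample {

    public static void main(String[] args) {
        // Plain key: the whole key is hashed.
        int slot1 = SlotHash.getSlot("user:42:name");

        // Hash-tagged keys: only the part between '{' and '}' is hashed,
        // so both keys map to the same slot and can be used in multi-key commands.
        int slot2 = SlotHash.getSlot("{user:42}:name");
        int slot3 = SlotHash.getSlot("{user:42}:email");

        System.out.println(slot1 + " " + slot2 + " " + slot3); // slot2 == slot3
    }
}
```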
- * - * @param key the key - * @return slot - */ - public static final int getSlot(ByteBuffer key) { - - byte[] input = new byte[key.remaining()]; - key.duplicate().get(input); - - byte[] finalKey = input; - - int start = indexOf(input, SUBKEY_START); - if (start != -1) { - int end = indexOf(input, start + 1, SUBKEY_END); - if (end != -1 && end != start + 1) { - - finalKey = new byte[end - (start + 1)]; - System.arraycopy(input, start + 1, finalKey, 0, finalKey.length); - } - } - return CRC16.crc16(finalKey) % SLOT_COUNT; - } - - private static int indexOf(byte[] haystack, byte needle) { - return indexOf(haystack, 0, needle); - } - - private static int indexOf(byte[] haystack, int start, byte needle) { - - for (int i = start; i < haystack.length; i++) { - - if (haystack[i] == needle) { - return i; - } - } - - return -1; - } - - /** - * Partition keys by slot-hash. The resulting map honors order of the keys. - * - * @param codec codec to encode the key - * @param keys iterable of keys - * @param Key type. - * @param Value type. - * @result map between slot-hash and an ordered list of keys. - * - */ - static Map> partition(RedisCodec codec, Iterable keys) { - - Map> partitioned = new HashMap<>(); - for (K key : keys) { - int slot = getSlot(codec.encodeKey(key)); - if (!partitioned.containsKey(slot)) { - partitioned.put(slot, new ArrayList<>()); - } - Collection list = partitioned.get(slot); - list.add(key); - } - return partitioned; - } - - /** - * Create mapping between the Key and hash slot. - * - * @param partitioned map partitioned by slothash and keys - * @param - */ - static Map getSlots(Map> partitioned) { - - Map result = new HashMap<>(); - for (Map.Entry> entry : partitioned.entrySet()) { - for (K key : entry.getValue()) { - result.put(key, entry.getKey()); - } - } - - return result; - } -} \ No newline at end of file diff --git a/src/main/java/com/lambdaworks/redis/cluster/StatefulRedisClusterConnectionImpl.java b/src/main/java/com/lambdaworks/redis/cluster/StatefulRedisClusterConnectionImpl.java deleted file mode 100644 index e36a7f97bc..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/StatefulRedisClusterConnectionImpl.java +++ /dev/null @@ -1,356 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import static com.lambdaworks.redis.protocol.CommandType.*; - -import java.lang.invoke.MethodHandle; -import java.lang.invoke.MethodHandles; -import java.lang.reflect.*; -import java.util.Map; -import java.util.concurrent.ConcurrentHashMap; -import java.util.concurrent.TimeUnit; -import java.util.function.Consumer; -import java.util.function.Predicate; - -import com.lambdaworks.redis.*; -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.cluster.api.NodeSelectionSupport; -import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection; -import com.lambdaworks.redis.cluster.api.async.RedisAdvancedClusterAsyncCommands; -import com.lambdaworks.redis.cluster.api.rx.RedisAdvancedClusterReactiveCommands; -import com.lambdaworks.redis.cluster.api.sync.NodeSelection; -import com.lambdaworks.redis.cluster.api.sync.NodeSelectionCommands; -import com.lambdaworks.redis.cluster.api.sync.RedisAdvancedClusterCommands; -import com.lambdaworks.redis.cluster.api.sync.RedisClusterCommands; -import com.lambdaworks.redis.cluster.models.partitions.Partitions; -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; -import com.lambdaworks.redis.codec.RedisCodec; -import 
com.lambdaworks.redis.internal.AbstractInvocationHandler; -import com.lambdaworks.redis.internal.LettuceAssert; -import com.lambdaworks.redis.protocol.CompleteableCommand; -import com.lambdaworks.redis.protocol.ConnectionWatchdog; -import com.lambdaworks.redis.protocol.RedisCommand; - -import io.netty.channel.ChannelHandler; - -/** - * A thread-safe connection to a Redis Cluster. Multiple threads may share one {@link StatefulRedisClusterConnectionImpl} - * - * A {@link ConnectionWatchdog} monitors each connection and reconnects automatically until {@link #close} is called. All - * pending commands will be (re)sent after successful reconnection. - * - * @author Mark Paluch - * @since 4.0 - */ -@ChannelHandler.Sharable -public class StatefulRedisClusterConnectionImpl extends RedisChannelHandler - implements StatefulRedisClusterConnection { - - private Partitions partitions; - - private char[] password; - private boolean readOnly; - - protected final RedisCodec codec; - protected final RedisAdvancedClusterCommands sync; - protected final RedisAdvancedClusterAsyncCommandsImpl async; - protected final RedisAdvancedClusterReactiveCommandsImpl reactive; - - /** - * Initialize a new connection. - * - * @param writer the channel writer - * @param codec Codec used to encode/decode keys and values. - * @param timeout Maximum time to wait for a response. - * @param unit Unit of time for the timeout. - */ - public StatefulRedisClusterConnectionImpl(RedisChannelWriter writer, RedisCodec codec, long timeout, - TimeUnit unit) { - super(writer, timeout, unit); - this.codec = codec; - - this.async = new RedisAdvancedClusterAsyncCommandsImpl<>(this, codec); - this.sync = (RedisAdvancedClusterCommands) Proxy.newProxyInstance(AbstractRedisClient.class.getClassLoader(), - new Class[] { RedisAdvancedClusterConnection.class, RedisAdvancedClusterCommands.class }, syncInvocationHandler()); - this.reactive = new RedisAdvancedClusterReactiveCommandsImpl<>(this, codec); - } - - @Override - public RedisAdvancedClusterCommands sync() { - return sync; - } - - public InvocationHandler syncInvocationHandler() { - return new ClusterFutureSyncInvocationHandler<>(this, async()); - } - - @Override - public RedisAdvancedClusterAsyncCommands async() { - return async; - } - - @Override - public RedisAdvancedClusterReactiveCommands reactive() { - return reactive; - } - - @Deprecated - protected RedisAdvancedClusterReactiveCommandsImpl getReactiveCommands() { - return reactive; - } - - private RedisURI lookup(String nodeId) { - - for (RedisClusterNode partition : partitions) { - if (partition.getNodeId().equals(nodeId)) { - return partition.getUri(); - } - } - return null; - } - - @Override - public StatefulRedisConnection getConnection(String nodeId) { - - RedisURI redisURI = lookup(nodeId); - - if (redisURI == null) { - throw new RedisException("NodeId " + nodeId + " does not belong to the cluster"); - } - - return getClusterDistributionChannelWriter().getClusterConnectionProvider() - .getConnection(ClusterConnectionProvider.Intent.WRITE, nodeId); - } - - @Override - public StatefulRedisConnection getConnection(String host, int port) { - - return getClusterDistributionChannelWriter().getClusterConnectionProvider() - .getConnection(ClusterConnectionProvider.Intent.WRITE, host, port); - } - - public ClusterDistributionChannelWriter getClusterDistributionChannelWriter() { - return (ClusterDistributionChannelWriter) super.getChannelWriter(); - } - - @Override - public void activated() { - - super.activated(); - // do not block in 
here, since the channel flow will be interrupted. - if (password != null) { - async.authAsync(new String(password)); - } - - if (readOnly) { - async.readOnly(); - } - } - - @Override - public > C dispatch(C cmd) { - - RedisCommand local = cmd; - - if (local.getType().name().equals(AUTH.name())) { - local = attachOnComplete(local, status -> { - if (status.equals("OK") && cmd.getArgs().getFirstString() != null) { - this.password = cmd.getArgs().getFirstString().toCharArray(); - } - }); - } - - if (local.getType().name().equals(READONLY.name())) { - local = attachOnComplete(local, status -> { - if (status.equals("OK")) { - this.readOnly = true; - } - }); - } - - if (local.getType().name().equals(READWRITE.name())) { - local = attachOnComplete(local, status -> { - if (status.equals("OK")) { - this.readOnly = false; - } - }); - } - - return super.dispatch((C) local); - } - - private RedisCommand attachOnComplete(RedisCommand command, Consumer consumer) { - - if (command instanceof CompleteableCommand) { - CompleteableCommand completeable = (CompleteableCommand) command; - completeable.onComplete(consumer); - } - return command; - } - - public void setPartitions(Partitions partitions) { - this.partitions = partitions; - getClusterDistributionChannelWriter().setPartitions(partitions); - } - - public Partitions getPartitions() { - return partitions; - } - - @Override - public void setReadFrom(ReadFrom readFrom) { - LettuceAssert.notNull(readFrom, "ReadFrom must not be null"); - getClusterDistributionChannelWriter().setReadFrom(readFrom); - } - - @Override - public ReadFrom getReadFrom() { - return getClusterDistributionChannelWriter().getReadFrom(); - } - - /** - * Invocation-handler to synchronize API calls which use Futures as backend. This class leverages the need to implement a - * full sync class which just delegates every request. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 3.0 - */ - @SuppressWarnings("unchecked") - private static class ClusterFutureSyncInvocationHandler extends AbstractInvocationHandler { - - private final StatefulRedisClusterConnection connection; - private final Object asyncApi; - private final Map apiMethodCache = new ConcurrentHashMap<>(RedisClusterCommands.class.getMethods().length, 1); - private final Map connectionMethodCache = new ConcurrentHashMap<>(5, 1); - - private final static Constructor LOOKUP_CONSTRUCTOR; - - static { - try { - LOOKUP_CONSTRUCTOR = MethodHandles.Lookup.class.getDeclaredConstructor(Class.class, int.class); - if (!LOOKUP_CONSTRUCTOR.isAccessible()) { - LOOKUP_CONSTRUCTOR.setAccessible(true); - } - } catch (NoSuchMethodException exp) { - // should be impossible, but... 
- throw new IllegalStateException(exp); - } - } - - ClusterFutureSyncInvocationHandler(StatefulRedisClusterConnection connection, Object asyncApi) { - this.connection = connection; - this.asyncApi = asyncApi; - } - - static MethodHandles.Lookup privateMethodHandleLookup(Class declaringClass) { - try { - return LOOKUP_CONSTRUCTOR.newInstance(declaringClass, MethodHandles.Lookup.PRIVATE); - } catch (InstantiationException | IllegalAccessException | InvocationTargetException e) { - throw new IllegalStateException(e); - } - } - - static MethodHandle getDefaultMethodHandle(Method method) { - Class declaringClass = method.getDeclaringClass(); - try { - return privateMethodHandleLookup(declaringClass).unreflectSpecial(method, declaringClass); - } catch (IllegalAccessException e) { - throw new IllegalArgumentException("Did not pass in an interface method: " + method); - } - } - - /** - * - * @see AbstractInvocationHandler#handleInvocation(Object, Method, Object[]) - */ - @Override - protected Object handleInvocation(Object proxy, Method method, Object[] args) throws Throwable { - - try { - - if (method.isDefault()) { - return getDefaultMethodHandle(method).bindTo(proxy).invokeWithArguments(args); - } - - if (method.getName().equals("getConnection") && args.length > 0) { - Method targetMethod = connectionMethodCache.computeIfAbsent(method, key -> { - try { - return connection.getClass().getMethod(key.getName(), key.getParameterTypes()); - } catch (NoSuchMethodException e) { - throw new IllegalStateException(e); - } - }); - - Object result = targetMethod.invoke(connection, args); - if (result instanceof StatefulRedisClusterConnection) { - StatefulRedisClusterConnection connection = (StatefulRedisClusterConnection) result; - return connection.sync(); - } - - if (result instanceof StatefulRedisConnection) { - StatefulRedisConnection connection = (StatefulRedisConnection) result; - return connection.sync(); - } - } - - if (method.getName().equals("readonly") && args.length == 1) { - return nodes((Predicate) args[0], ClusterConnectionProvider.Intent.READ, false); - } - - if (method.getName().equals("nodes") && args.length == 1) { - return nodes((Predicate) args[0], ClusterConnectionProvider.Intent.WRITE, false); - } - - if (method.getName().equals("nodes") && args.length == 2) { - return nodes((Predicate) args[0], ClusterConnectionProvider.Intent.WRITE, - (Boolean) args[1]); - } - - Method targetMethod = apiMethodCache.computeIfAbsent(method, key -> { - - try { - return asyncApi.getClass().getMethod(key.getName(), key.getParameterTypes()); - } catch (NoSuchMethodException e) { - throw new IllegalStateException(e); - } - }); - - Object result = targetMethod.invoke(asyncApi, args); - - if (result instanceof RedisFuture) { - RedisFuture command = (RedisFuture) result; - if (!method.getName().equals("exec") && !method.getName().equals("multi")) { - if (connection instanceof StatefulRedisConnection && ((StatefulRedisConnection) connection).isMulti()) { - return null; - } - } - return LettuceFutures.awaitOrCancel(command, connection.getTimeout(), connection.getTimeoutUnit()); - } - - return result; - - } catch (InvocationTargetException e) { - throw e.getTargetException(); - } - } - - protected NodeSelection nodes(Predicate predicate, ClusterConnectionProvider.Intent intent, - boolean dynamic) { - - NodeSelectionSupport, ?> selection; - - if (dynamic) { - selection = new DynamicSyncNodeSelection<>(connection, predicate, intent); - } else { - selection = new StaticSyncNodeSelection<>(connection, predicate, 
intent); - } - - NodeSelectionInvocationHandler h = new NodeSelectionInvocationHandler((AbstractNodeSelection) selection, - true, connection.getTimeout(), connection.getTimeoutUnit()); - return (NodeSelection) Proxy.newProxyInstance(NodeSelectionSupport.class.getClassLoader(), - new Class[] { NodeSelectionCommands.class, NodeSelection.class }, h); - } - } -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/StaticAsyncNodeSelection.java b/src/main/java/com/lambdaworks/redis/cluster/StaticAsyncNodeSelection.java deleted file mode 100644 index 33d524d836..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/StaticAsyncNodeSelection.java +++ /dev/null @@ -1,53 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import java.util.HashMap; -import java.util.Iterator; -import java.util.List; -import java.util.Map; -import java.util.function.Predicate; -import java.util.stream.Collectors; - -import com.lambdaworks.redis.api.async.RedisAsyncCommands; -import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection; -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; - -/** - * @param Command command interface type to invoke multi-node operations. - * @param Key type. - * @param Value type. - * @author Mark Paluch - */ -class StaticAsyncNodeSelection extends StaticNodeSelection, CMD, K, V> { - - public StaticAsyncNodeSelection(StatefulRedisClusterConnection globalConnection, - Predicate selector, ClusterConnectionProvider.Intent intent) { - super(globalConnection, selector, intent); - } - - public Iterator> iterator() { - List list = nodes().stream().collect(Collectors.toList()); - return list.stream().map(node -> getConnection(node).async()).iterator(); - } - - @Override - public RedisAsyncCommands commands(int index) { - return statefulMap().get(nodes().get(index)).async(); - } - - @Override - public Map> asMap() { - - List list = nodes().stream().collect(Collectors.toList()); - Map> map = new HashMap<>(); - - list.forEach((key) -> map.put(key, getConnection(key).async())); - - return map; - } - - // This method is never called, the value is supplied by AOP magic. - @Override - public CMD commands() { - return null; - } -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/StaticNodeSelection.java b/src/main/java/com/lambdaworks/redis/cluster/StaticNodeSelection.java deleted file mode 100644 index d0df121af3..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/StaticNodeSelection.java +++ /dev/null @@ -1,35 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import java.util.List; -import java.util.function.Predicate; -import java.util.stream.Collectors; - -import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection; -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; - -/** - * Static selection of nodes. - * - * @param API type. - * @param Command command interface type to invoke multi-node operations. - * @param Key type. - * @param Value type. 
- * @author Mark Paluch - */ -abstract class StaticNodeSelection extends AbstractNodeSelection { - - private final List redisClusterNodes; - - public StaticNodeSelection(StatefulRedisClusterConnection globalConnection, Predicate selector, - ClusterConnectionProvider.Intent intent) { - super(globalConnection, intent); - - this.redisClusterNodes = globalConnection.getPartitions().getPartitions().stream().filter(selector) - .collect(Collectors.toList()); - } - - @Override - protected List nodes() { - return redisClusterNodes; - } -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/StaticSyncNodeSelection.java b/src/main/java/com/lambdaworks/redis/cluster/StaticSyncNodeSelection.java deleted file mode 100644 index 8633afe788..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/StaticSyncNodeSelection.java +++ /dev/null @@ -1,53 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import java.util.HashMap; -import java.util.Iterator; -import java.util.List; -import java.util.Map; -import java.util.function.Predicate; -import java.util.stream.Collectors; - -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection; -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; - -/** - * @param Command command interface type to invoke multi-node operations. - * @param Key type. - * @param Value type. - * @author Mark Paluch - */ -class StaticSyncNodeSelection extends StaticNodeSelection, CMD, K, V> { - - public StaticSyncNodeSelection(StatefulRedisClusterConnection globalConnection, Predicate selector, - ClusterConnectionProvider.Intent intent) { - super(globalConnection, selector, intent); - } - - public Iterator> iterator() { - List list = nodes().stream().collect(Collectors.toList()); - return list.stream().map(node -> getConnection(node).sync()).iterator(); - } - - @Override - public RedisCommands commands(int index) { - return statefulMap().get(nodes().get(index)).sync(); - } - - @Override - public Map> asMap() { - - List list = nodes().stream().collect(Collectors.toList()); - Map> map = new HashMap<>(); - - list.forEach((key) -> map.put(key, getConnection(key).sync())); - - return map; - } - - // This method is never called, the value is supplied by AOP magic. 
- @Override - public CMD commands() { - return null; - } -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/SyncExecutionsImpl.java b/src/main/java/com/lambdaworks/redis/cluster/SyncExecutionsImpl.java deleted file mode 100644 index 1efbad9556..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/SyncExecutionsImpl.java +++ /dev/null @@ -1,45 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import java.util.Collection; -import java.util.HashMap; -import java.util.Map; -import java.util.concurrent.CompletionStage; -import java.util.concurrent.ExecutionException; - -import com.lambdaworks.redis.cluster.api.sync.Executions; -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; - -/** - * @author Mark Paluch - */ -class SyncExecutionsImpl implements Executions { - - private Map executions; - - public SyncExecutionsImpl(Map> executions) throws ExecutionException, - InterruptedException { - - Map result = new HashMap<>(executions.size(), 1); - for (Map.Entry> entry : executions.entrySet()) { - result.put(entry.getKey(), entry.getValue().toCompletableFuture().get()); - } - - this.executions = result; - } - - @Override - public Map asMap() { - return executions; - } - - @Override - public Collection nodes() { - return executions.keySet(); - } - - @Override - public T get(RedisClusterNode redisClusterNode) { - return executions.get(redisClusterNode); - } - -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/api/NodeSelectionSupport.java b/src/main/java/com/lambdaworks/redis/cluster/api/NodeSelectionSupport.java deleted file mode 100644 index fe118badea..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/api/NodeSelectionSupport.java +++ /dev/null @@ -1,52 +0,0 @@ -package com.lambdaworks.redis.cluster.api; - -import java.util.Map; - -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; - -/** - * A node selection represents a set of Redis Cluster nodes. Provides access to particular node connection APIs and allows the - * execution of commands on the selected cluster nodes. - * - * @param API type. - * @param Command command interface type to invoke multi-node operations. - * @author Mark Paluch - * @since 4.0 - */ -public interface NodeSelectionSupport { - - /** - * - * @return number of nodes. - */ - int size(); - - /** - * - * @return commands API to run on this node selection. - */ - CMD commands(); - - /** - * Obtain the connection/commands to a particular node. - * - * @param index index of the node - * @return the connection/commands object - */ - API commands(int index); - - /** - * Get the {@link RedisClusterNode}. 
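`NodeSelectionSupport` backs the `masters()`, `slaves()` and `nodes(...)` selections of the advanced cluster command APIs. A sketch of running a command on every master node; it assumes an open cluster connection, and the class and method names are illustrative:

```java
import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection;
import com.lambdaworks.redis.cluster.api.sync.Executions;
import com.lambdaworks.redis.cluster.api.sync.NodeSelection;
import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode;

class NodeSelectionExample {

    // Pings every master node and prints the per-node result.
    static void pingMasters(StatefulRedisClusterConnection<String, String> connection) {
        NodeSelection<String, String> masters = connection.sync().masters();
        Executions<String> replies = masters.commands().ping();

        for (RedisClusterNode node : replies.nodes()) {
            System.out.println(node.getNodeId() + " -> " + replies.get(node));
        }
    }
}
```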
- * - * @param index index of the cluster node - * @return the cluster node - */ - RedisClusterNode node(int index); - - /** - * - * @return map of {@link RedisClusterNode} and the connection/commands objects - */ - Map asMap(); - -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/api/StatefulRedisClusterConnection.java b/src/main/java/com/lambdaworks/redis/cluster/api/StatefulRedisClusterConnection.java deleted file mode 100644 index b6a4502f18..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/api/StatefulRedisClusterConnection.java +++ /dev/null @@ -1,101 +0,0 @@ -package com.lambdaworks.redis.cluster.api; - -import com.lambdaworks.redis.ReadFrom; -import com.lambdaworks.redis.RedisException; -import com.lambdaworks.redis.api.StatefulConnection; -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.cluster.ClusterClientOptions; -import com.lambdaworks.redis.cluster.api.async.RedisAdvancedClusterAsyncCommands; -import com.lambdaworks.redis.cluster.api.rx.RedisAdvancedClusterReactiveCommands; -import com.lambdaworks.redis.cluster.api.sync.RedisAdvancedClusterCommands; -import com.lambdaworks.redis.cluster.models.partitions.Partitions; - -/** - * A stateful cluster connection providing. Advanced cluster connections provide transparent command routing based on the first - * command key. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - */ -public interface StatefulRedisClusterConnection extends StatefulConnection { - - /** - * Returns the {@link RedisAdvancedClusterCommands} API for the current connection. Does not create a new connection. - * - * @return the synchronous API for the underlying connection. - */ - RedisAdvancedClusterCommands sync(); - - /** - * Returns the {@link RedisAdvancedClusterAsyncCommands} API for the current connection. Does not create a new connection. - * - * @return the asynchronous API for the underlying connection. - */ - RedisAdvancedClusterAsyncCommands async(); - - /** - * Returns the {@link RedisAdvancedClusterReactiveCommands} API for the current connection. Does not create a new - * connection. - * - * @return the reactive API for the underlying connection. - */ - RedisAdvancedClusterReactiveCommands reactive(); - - /** - * Retrieve a connection to the specified cluster node using the nodeId. Host and port are looked up in the node list. This - * connection is bound to the node id. Once the cluster topology view is updated, the connection will try to reconnect the - * to the node with the specified {@code nodeId}, that behavior can also lead to a closed connection once the node with the - * specified {@code nodeId} is no longer part of the cluster. - * - * Do not close the connections. Otherwise, unpredictable behavior will occur. The nodeId must be part of the cluster and is - * validated against the current topology view in {@link com.lambdaworks.redis.cluster.models.partitions.Partitions}. - * - * - * In contrast to the {@link StatefulRedisClusterConnection}, node-connections do not route commands to other cluster nodes. - * - * @param nodeId the node Id - * @return a connection to the requested cluster node - * @throws RedisException if the requested node identified by {@code nodeId} is not part of the cluster - */ - StatefulRedisConnection getConnection(String nodeId); - - /** - * Retrieve a connection to the specified cluster node using host and port. This connection is bound to a host and port. 
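`getConnection(nodeId)` and `getConnection(host, port)` give direct access to a single node without command routing. A short sketch; the node id is a placeholder supplied by the caller:

```java
import com.lambdaworks.redis.api.sync.RedisCommands;
import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection;

class NodeConnectionExample {

    // Talks directly to one cluster node; commands are not routed to other nodes.
    // The node connection is managed by the cluster connection and must not be closed by the caller.
    static String nodeInfo(StatefulRedisClusterConnection<String, String> connection, String nodeId) {
        RedisCommands<String, String> node = connection.getConnection(nodeId).sync();
        return node.info("server");
    }
}
```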
- * Updates to the cluster topology view can close the connection once the host, identified by {@code host} and {@code port}, - * are no longer part of the cluster. - * - * Do not close the connections. Otherwise, unpredictable behavior will occur. Host and port connections are verified by - * default for cluster membership, see {@link ClusterClientOptions#isValidateClusterNodeMembership()}. - * - * In contrast to the {@link StatefulRedisClusterConnection}, node-connections do not route commands to other cluster nodes. - * - * @param host the host - * @param port the port - * @return a connection to the requested cluster node - * @throws RedisException if the requested node identified by {@code host} and {@code port} is not part of the cluster - */ - StatefulRedisConnection getConnection(String host, int port); - - /** - * Set from which nodes data is read. The setting is used as default for read operations on this connection. See the - * documentation for {@link ReadFrom} for more information. - * - * @param readFrom the read from setting, must not be {@literal null} - */ - void setReadFrom(ReadFrom readFrom); - - /** - * Gets the {@link ReadFrom} setting for this connection. Defaults to {@link ReadFrom#MASTER} if not set. - * - * @return the read from setting - */ - ReadFrom getReadFrom(); - - /** - * - * @return Known partitions for this connection. - */ - Partitions getPartitions(); -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/api/async/AsyncExecutions.java b/src/main/java/com/lambdaworks/redis/cluster/api/async/AsyncExecutions.java deleted file mode 100644 index a83f988074..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/api/async/AsyncExecutions.java +++ /dev/null @@ -1,73 +0,0 @@ -package com.lambdaworks.redis.cluster.api.async; - -import java.util.*; -import java.util.concurrent.CompletableFuture; -import java.util.concurrent.CompletionStage; -import java.util.stream.Stream; -import java.util.stream.StreamSupport; - -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; - -/** - * Result holder for a command that was executed asynchronously on multiple nodes. This API is subject to incompatible changes - * in a future release. The API is exempt from any compatibility guarantees made by lettuce. The current state implies nothing - * about the quality or performance of the API in question, only the fact that it is not "API-frozen." - * - * The NodeSelection command API and its result types are a base for discussions. - * - * - * @author Mark Paluch - * @since 4.0 - */ -public interface AsyncExecutions extends Iterable> { - - /** - * - * @return map between {@link RedisClusterNode} and the {@link CompletionStage} - */ - Map> asMap(); - - /** - * - * @return collection of nodes on which the command was executed. - */ - Collection nodes(); - - /** - * - * @param redisClusterNode the node - * @return the completion stage for this node - */ - CompletionStage get(RedisClusterNode redisClusterNode); - - /** - * - * @return array of futures. 
- */ - CompletableFuture[] futures(); - - /** - * - * @return iterator over the {@link CompletionStage}s - */ - @Override - default Iterator> iterator() { - return asMap().values().iterator(); - } - - /** - * - * @return a {@code Spliterator} over the {@link CompletionStage CompletionStages} in this collection - */ - @Override - default Spliterator> spliterator() { - return Spliterators.spliterator(iterator(), nodes().size(), 0); - } - - /** - * @return a sequential {@code Stream} over the {@link CompletionStage CompletionStages} in this collection - */ - default Stream> stream() { - return StreamSupport.stream(spliterator(), false); - } -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/api/async/AsyncNodeSelection.java b/src/main/java/com/lambdaworks/redis/cluster/api/async/AsyncNodeSelection.java deleted file mode 100644 index 5a14b7df7e..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/api/async/AsyncNodeSelection.java +++ /dev/null @@ -1,19 +0,0 @@ -package com.lambdaworks.redis.cluster.api.async; - -import com.lambdaworks.redis.api.async.RedisAsyncCommands; -import com.lambdaworks.redis.cluster.api.NodeSelectionSupport; - -/** - * Node selection with access to asynchronous executed commands on the set. This API is subject to incompatible changes in a - * future release. The API is exempt from any compatibility guarantees made by lettuce. The current state implies nothing about - * the quality or performance of the API in question, only the fact that it is not "API-frozen." - * - * The NodeSelection command API and its result types are a base for discussions. - * - * @author Mark Paluch - * @since 4.0 - */ -public interface AsyncNodeSelection extends - NodeSelectionSupport, NodeSelectionAsyncCommands> { - -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/api/async/BaseNodeSelectionAsyncCommands.java b/src/main/java/com/lambdaworks/redis/cluster/api/async/BaseNodeSelectionAsyncCommands.java deleted file mode 100644 index 70f16aa8c8..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/api/async/BaseNodeSelectionAsyncCommands.java +++ /dev/null @@ -1,97 +0,0 @@ -package com.lambdaworks.redis.cluster.api.async; - -import java.lang.AutoCloseable; -import java.util.List; -import java.util.Map; -import com.lambdaworks.redis.RedisFuture; - -/** - * - * Asynchronous executed commands on a node selection for basic commands. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateAsyncNodeSelectionClusterApi - */ -public interface BaseNodeSelectionAsyncCommands extends AutoCloseable { - - /** - * Post a message to a channel. - * - * @param channel the channel type: key - * @param message the message type: value - * @return Long integer-reply the number of clients that received the message. - */ - AsyncExecutions publish(K channel, V message); - - /** - * Lists the currently *active channels*. - * - * @return List<K> array-reply a list of active channels, optionally matching the specified pattern. - */ - AsyncExecutions> pubsubChannels(); - - /** - * Lists the currently *active channels*. - * - * @param channel the key - * @return List<K> array-reply a list of active channels, optionally matching the specified pattern. - */ - AsyncExecutions> pubsubChannels(K channel); - - /** - * Returns the number of subscribers (not counting clients subscribed to patterns) for the specified channels. 
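`AsyncExecutions` is the asynchronous counterpart to `Executions`: it exposes one `CompletionStage` per selected node. A sketch of a fan-out `PING` over all nodes, assuming an open cluster connection:

```java
import java.util.concurrent.CompletionStage;

import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection;
import com.lambdaworks.redis.cluster.api.async.AsyncExecutions;
import com.lambdaworks.redis.cluster.api.async.AsyncNodeSelection;

class AsyncFanOutExample {

    // Sends PING to every node asynchronously and prints each reply as it arrives.
    static void pingAll(StatefulRedisClusterConnection<String, String> connection) {
        AsyncNodeSelection<String, String> allNodes = connection.async().all();
        AsyncExecutions<String> executions = allNodes.commands().ping();

        for (CompletionStage<String> stage : executions) {
            stage.thenAccept(System.out::println);
        }
    }
}
```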
- * - * @param channels channel keys - * @return array-reply a list of channels and number of subscribers for every channel. - */ - AsyncExecutions> pubsubNumsub(K... channels); - - /** - * Returns the number of subscriptions to patterns. - * - * @return Long integer-reply the number of patterns all the clients are subscribed to. - */ - AsyncExecutions pubsubNumpat(); - - /** - * Echo the given string. - * - * @param msg the message type: value - * @return V bulk-string-reply - */ - AsyncExecutions echo(V msg); - - /** - * Return the role of the instance in the context of replication. - * - * @return List<Object> array-reply where the first element is one of master, slave, sentinel and the additional - * elements are role-specific. - */ - AsyncExecutions> role(); - - /** - * Ping the server. - * - * @return String simple-string-reply - */ - AsyncExecutions ping(); - - /** - * Close the connection. - * - * @return String simple-string-reply always OK. - */ - AsyncExecutions quit(); - - /** - * Wait for replication. - * - * @param replicas minimum number of replicas - * @param timeout timeout in milliseconds - * @return number of replicas - */ - AsyncExecutions waitForReplication(int replicas, long timeout); -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/api/async/NodeSelectionAsyncCommands.java b/src/main/java/com/lambdaworks/redis/cluster/api/async/NodeSelectionAsyncCommands.java deleted file mode 100644 index 8d8dd297d3..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/api/async/NodeSelectionAsyncCommands.java +++ /dev/null @@ -1,16 +0,0 @@ -package com.lambdaworks.redis.cluster.api.async; - -import com.lambdaworks.redis.cluster.api.NodeSelectionSupport; - -/** - * Asynchronous and thread-safe Redis API to execute commands on a {@link NodeSelectionSupport}. - * - * @author Mark Paluch - */ -public interface NodeSelectionAsyncCommands extends BaseNodeSelectionAsyncCommands, - NodeSelectionHashAsyncCommands, NodeSelectionHLLAsyncCommands, NodeSelectionKeyAsyncCommands, - NodeSelectionListAsyncCommands, NodeSelectionScriptingAsyncCommands, - NodeSelectionServerAsyncCommands, NodeSelectionSetAsyncCommands, NodeSelectionSortedSetAsyncCommands, - NodeSelectionStringAsyncCommands, NodeSelectionGeoAsyncCommands { - -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/api/async/NodeSelectionHLLAsyncCommands.java b/src/main/java/com/lambdaworks/redis/cluster/api/async/NodeSelectionHLLAsyncCommands.java deleted file mode 100644 index c6de42ff6b..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/api/async/NodeSelectionHLLAsyncCommands.java +++ /dev/null @@ -1,48 +0,0 @@ -package com.lambdaworks.redis.cluster.api.async; - -import com.lambdaworks.redis.RedisFuture; - -/** - * Asynchronous executed commands on a node selection for HyperLogLog (PF* commands). - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 3.0 - * @generated by com.lambdaworks.apigenerator.CreateAsyncNodeSelectionClusterApi - */ -public interface NodeSelectionHLLAsyncCommands { - - /** - * Adds the specified elements to the specified HyperLogLog. - * - * @param key the key - * @param values the values - * - * @return Long integer-reply specifically: - * - * 1 if at least 1 HyperLogLog internal register was altered. 0 otherwise. - */ - AsyncExecutions pfadd(K key, V... values); - - /** - * Merge N different HyperLogLogs into a single one. 
- * - * @param destkey the destination key - * @param sourcekeys the source key - * - * @return String simple-string-reply The command just returns {@code OK}. - */ - AsyncExecutions pfmerge(K destkey, K... sourcekeys); - - /** - * Return the approximated cardinality of the set(s) observed by the HyperLogLog at key(s). - * - * @param keys the keys - * - * @return Long integer-reply specifically: - * - * The approximated number of unique elements observed via {@code PFADD}. - */ - AsyncExecutions pfcount(K... keys); -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/api/async/NodeSelectionScriptingAsyncCommands.java b/src/main/java/com/lambdaworks/redis/cluster/api/async/NodeSelectionScriptingAsyncCommands.java deleted file mode 100644 index 8e84c5f285..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/api/async/NodeSelectionScriptingAsyncCommands.java +++ /dev/null @@ -1,95 +0,0 @@ -package com.lambdaworks.redis.cluster.api.async; - -import java.util.List; -import com.lambdaworks.redis.ScriptOutputType; -import com.lambdaworks.redis.RedisFuture; - -/** - * Asynchronous executed commands on a node selection for Scripting. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateAsyncNodeSelectionClusterApi - */ -public interface NodeSelectionScriptingAsyncCommands { - - /** - * Execute a Lua script server side. - * - * @param script Lua 5.1 script. - * @param type output type - * @param keys key names - * @param expected return type - * @return script result - */ - AsyncExecutions eval(String script, ScriptOutputType type, K... keys); - - /** - * Execute a Lua script server side. - * - * @param script Lua 5.1 script. - * @param type the type - * @param keys the keys - * @param values the values - * @param expected return type - * @return script result - */ - AsyncExecutions eval(String script, ScriptOutputType type, K[] keys, V... values); - - /** - * Evaluates a script cached on the server side by its SHA1 digest - * - * @param digest SHA1 of the script - * @param type the type - * @param keys the keys - * @param expected return type - * @return script result - */ - AsyncExecutions evalsha(String digest, ScriptOutputType type, K... keys); - - /** - * Execute a Lua script server side. - * - * @param digest SHA1 of the script - * @param type the type - * @param keys the keys - * @param values the values - * @param expected return type - * @return script result - */ - AsyncExecutions evalsha(String digest, ScriptOutputType type, K[] keys, V... values); - - /** - * Check existence of scripts in the script cache. - * - * @param digests script digests - * @return List<Boolean> array-reply The command returns an array of integers that correspond to the specified SHA1 - * digest arguments. For every corresponding SHA1 digest of a script that actually exists in the script cache, an 1 - * is returned, otherwise 0 is returned. - */ - AsyncExecutions> scriptExists(String... digests); - - /** - * Remove all the scripts from the script cache. - * - * @return String simple-string-reply - */ - AsyncExecutions scriptFlush(); - - /** - * Kill the script currently in execution. - * - * @return String simple-string-reply - */ - AsyncExecutions scriptKill(); - - /** - * Load the specified Lua script into the script cache. - * - * @param script script content - * @return String bulk-string-reply This command returns the SHA1 digest of the script added into the script cache. 
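A short sketch of how the node-selection scripting API above might be used. The `scripting` parameter is a hypothetical handle to the scripting view of a node selection, and the Lua snippet is illustrative only.

import com.lambdaworks.redis.ScriptOutputType;
import com.lambdaworks.redis.cluster.api.async.AsyncExecutions;
import com.lambdaworks.redis.cluster.api.async.NodeSelectionScriptingAsyncCommands;

class ScriptOnSelectionExample {

    // Runs the same Lua script on every node of the selection; one result is produced per node.
    static AsyncExecutions<Long> dbsizeViaLua(NodeSelectionScriptingAsyncCommands<String, String> scripting) {
        return scripting.eval("return redis.call('dbsize')", ScriptOutputType.INTEGER);
    }
}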
- */ - AsyncExecutions scriptLoad(V script); -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/api/async/NodeSelectionSortedSetAsyncCommands.java b/src/main/java/com/lambdaworks/redis/cluster/api/async/NodeSelectionSortedSetAsyncCommands.java deleted file mode 100644 index 489f704428..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/api/async/NodeSelectionSortedSetAsyncCommands.java +++ /dev/null @@ -1,836 +0,0 @@ -package com.lambdaworks.redis.cluster.api.async; - -import java.util.List; -import com.lambdaworks.redis.ScanArgs; -import com.lambdaworks.redis.ScanCursor; -import com.lambdaworks.redis.ScoredValue; -import com.lambdaworks.redis.ScoredValueScanCursor; -import com.lambdaworks.redis.StreamScanCursor; -import com.lambdaworks.redis.ZAddArgs; -import com.lambdaworks.redis.ZStoreArgs; -import com.lambdaworks.redis.output.ScoredValueStreamingChannel; -import com.lambdaworks.redis.output.ValueStreamingChannel; -import com.lambdaworks.redis.RedisFuture; - -/** - * Asynchronous executed commands on a node selection for Sorted Sets. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateAsyncNodeSelectionClusterApi - */ -public interface NodeSelectionSortedSetAsyncCommands { - - /** - * Add one or more members to a sorted set, or update its score if it already exists. - * - * @param key the key - * @param score the score - * @param member the member - * - * @return Long integer-reply specifically: - * - * The number of elements added to the sorted sets, not including elements already existing for which the score was - * updated. - */ - AsyncExecutions zadd(K key, double score, V member); - - /** - * Add one or more members to a sorted set, or update its score if it already exists. - * - * @param key the key - * @param scoresAndValues the scoresAndValue tuples (score,value,score,value,...) - * @return Long integer-reply specifically: - * - * The number of elements added to the sorted sets, not including elements already existing for which the score was - * updated. - */ - AsyncExecutions zadd(K key, Object... scoresAndValues); - - /** - * Add one or more members to a sorted set, or update its score if it already exists. - * - * @param key the key - * @param scoredValues the scored values - * @return Long integer-reply specifically: - * - * The number of elements added to the sorted sets, not including elements already existing for which the score was - * updated. - */ - AsyncExecutions zadd(K key, ScoredValue... scoredValues); - - /** - * Add one or more members to a sorted set, or update its score if it already exists. - * - * @param key the key - * @param zAddArgs arguments for zadd - * @param score the score - * @param member the member - * - * @return Long integer-reply specifically: - * - * The number of elements added to the sorted sets, not including elements already existing for which the score was - * updated. - */ - AsyncExecutions zadd(K key, ZAddArgs zAddArgs, double score, V member); - - /** - * Add one or more members to a sorted set, or update its score if it already exists. - * - * @param key the key - * @param zAddArgs arguments for zadd - * @param scoresAndValues the scoresAndValue tuples (score,value,score,value,...) - * @return Long integer-reply specifically: - * - * The number of elements added to the sorted sets, not including elements already existing for which the score was - * updated. - */ - AsyncExecutions zadd(K key, ZAddArgs zAddArgs, Object... 
scoresAndValues); - - /** - * Add one or more members to a sorted set, or update its score if it already exists. - * - * @param key the ke - * @param zAddArgs arguments for zadd - * @param scoredValues the scored values - * @return Long integer-reply specifically: - * - * The number of elements added to the sorted sets, not including elements already existing for which the score was - * updated. - */ - AsyncExecutions zadd(K key, ZAddArgs zAddArgs, ScoredValue... scoredValues); - - /** - * ZADD acts like ZINCRBY - * - * @param key the key - * @param score the score - * @param member the member - * - * @return Long integer-reply specifically: - * - * The total number of elements changed - */ - AsyncExecutions zaddincr(K key, double score, V member); - - /** - * Get the number of members in a sorted set. - * - * @param key the key - * @return Long integer-reply the cardinality (number of elements) of the sorted set, or {@literal false} if {@code key} - * does not exist. - */ - AsyncExecutions zcard(K key); - - /** - * Count the members in a sorted set with scores within the given values. - * - * @param key the key - * @param min min score - * @param max max score - * @return Long integer-reply the number of elements in the specified score range. - */ - AsyncExecutions zcount(K key, double min, double max); - - /** - * Count the members in a sorted set with scores within the given values. - * - * @param key the key - * @param min min score - * @param max max score - * @return Long integer-reply the number of elements in the specified score range. - */ - AsyncExecutions zcount(K key, String min, String max); - - /** - * Increment the score of a member in a sorted set. - * - * @param key the key - * @param amount the increment type: long - * @param member the member type: key - * @return Double bulk-string-reply the new score of {@code member} (a double precision floating point number), represented - * as string. - */ - AsyncExecutions zincrby(K key, double amount, K member); - - /** - * Intersect multiple sorted sets and store the resulting sorted set in a new key. - * - * @param destination the destination - * @param keys the keys - * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. - */ - AsyncExecutions zinterstore(K destination, K... keys); - - /** - * Intersect multiple sorted sets and store the resulting sorted set in a new key. - * - * @param destination the destination - * @param storeArgs the storeArgs - * @param keys the keys - * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. - */ - AsyncExecutions zinterstore(K destination, ZStoreArgs storeArgs, K... keys); - - /** - * Return a range of members in a sorted set, by index. - * - * @param key the key - * @param start the start - * @param stop the stop - * @return List<V> array-reply list of elements in the specified range. - */ - AsyncExecutions> zrange(K key, long start, long stop); - - /** - * Return a range of members with scores in a sorted set, by index. - * - * @param key the key - * @param start the start - * @param stop the stop - * @return List<V> array-reply list of elements in the specified range. - */ - AsyncExecutions>> zrangeWithScores(K key, long start, long stop); - - /** - * Return a range of members in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @return List<V> array-reply list of elements in the specified score range. 
- */ - AsyncExecutions> zrangebyscore(K key, double min, double max); - - /** - * Return a range of members in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @return List<V> array-reply list of elements in the specified score range. - */ - AsyncExecutions> zrangebyscore(K key, String min, String max); - - /** - * Return a range of members in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return List<V> array-reply list of elements in the specified score range. - */ - AsyncExecutions> zrangebyscore(K key, double min, double max, long offset, long count); - - /** - * Return a range of members in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return List<V> array-reply list of elements in the specified score range. - */ - AsyncExecutions> zrangebyscore(K key, String min, String max, long offset, long count); - - /** - * Return a range of members with score in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. - */ - AsyncExecutions>> zrangebyscoreWithScores(K key, double min, double max); - - /** - * Return a range of members with score in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. - */ - AsyncExecutions>> zrangebyscoreWithScores(K key, String min, String max); - - /** - * Return a range of members with score in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. - */ - AsyncExecutions>> zrangebyscoreWithScores(K key, double min, double max, long offset, long count); - - /** - * Return a range of members with score in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. - */ - AsyncExecutions>> zrangebyscoreWithScores(K key, String min, String max, long offset, long count); - - /** - * Return a range of members in a sorted set, by index. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param start the start - * @param stop the stop - * @return Long count of elements in the specified range. - */ - AsyncExecutions zrange(ValueStreamingChannel channel, K key, long start, long stop); - - /** - * Stream over a range of members with scores in a sorted set, by index. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param start the start - * @param stop the stop - * @return Long count of elements in the specified range. - */ - AsyncExecutions zrangeWithScores(ScoredValueStreamingChannel channel, K key, long start, long stop); - - /** - * Stream over a range of members in a sorted set, by score. 
- * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified score range. - */ - AsyncExecutions zrangebyscore(ValueStreamingChannel channel, K key, double min, double max); - - /** - * Stream over a range of members in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified score range. - */ - AsyncExecutions zrangebyscore(ValueStreamingChannel channel, K key, String min, String max); - - /** - * Stream over range of members in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified score range. - */ - AsyncExecutions zrangebyscore(ValueStreamingChannel channel, K key, double min, double max, long offset, long count); - - /** - * Stream over a range of members in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified score range. - */ - AsyncExecutions zrangebyscore(ValueStreamingChannel channel, K key, String min, String max, long offset, long count); - - /** - * Stream over a range of members with scores in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified score range. - */ - AsyncExecutions zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double min, double max); - - /** - * Stream over a range of members with scores in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified score range. - */ - AsyncExecutions zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String min, String max); - - /** - * Stream over a range of members with scores in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified score range. - */ - AsyncExecutions zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double min, double max, long offset, long count); - - /** - * Stream over a range of members with scores in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified score range. - */ - AsyncExecutions zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String min, String max, long offset, long count); - - /** - * Determine the index of a member in a sorted set. 
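The streaming overloads above take a ValueStreamingChannel instead of building a full List in memory. A sketch, assuming the channel can be given as a single-method callback and `sortedSets` is a hypothetical node-selection handle:

import com.lambdaworks.redis.cluster.api.async.AsyncExecutions;
import com.lambdaworks.redis.cluster.api.async.NodeSelectionSortedSetAsyncCommands;
import com.lambdaworks.redis.output.ValueStreamingChannel;

class StreamingRangeExample {

    // Streams a score range instead of materializing it; the command returns the per-node
    // count of streamed elements rather than the values themselves.
    static AsyncExecutions<Long> streamRange(NodeSelectionSortedSetAsyncCommands<String, String> sortedSets) {
        ValueStreamingChannel<String> channel = value -> System.out.println(value);
        return sortedSets.zrangebyscore(channel, "ranking", 0, 100);
    }
}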
- * - * @param key the key - * @param member the member type: value - * @return Long integer-reply the rank of {@code member}. If {@code member} does not exist in the sorted set or {@code key} - * does not exist, - */ - AsyncExecutions zrank(K key, V member); - - /** - * Remove one or more members from a sorted set. - * - * @param key the key - * @param members the member type: value - * @return Long integer-reply specifically: - * - * The number of members removed from the sorted set, not including non existing members. - */ - AsyncExecutions zrem(K key, V... members); - - /** - * Remove all members in a sorted set within the given indexes. - * - * @param key the key - * @param start the start type: long - * @param stop the stop type: long - * @return Long integer-reply the number of elements removed. - */ - AsyncExecutions zremrangebyrank(K key, long start, long stop); - - /** - * Remove all members in a sorted set within the given scores. - * - * @param key the key - * @param min min score - * @param max max score - * @return Long integer-reply the number of elements removed. - */ - AsyncExecutions zremrangebyscore(K key, double min, double max); - - /** - * Remove all members in a sorted set within the given scores. - * - * @param key the key - * @param min min score - * @param max max score - * @return Long integer-reply the number of elements removed. - */ - AsyncExecutions zremrangebyscore(K key, String min, String max); - - /** - * Return a range of members in a sorted set, by index, with scores ordered from high to low. - * - * @param key the key - * @param start the start - * @param stop the stop - * @return List<V> array-reply list of elements in the specified range. - */ - AsyncExecutions> zrevrange(K key, long start, long stop); - - /** - * Return a range of members with scores in a sorted set, by index, with scores ordered from high to low. - * - * @param key the key - * @param start the start - * @param stop the stop - * @return List<V> array-reply list of elements in the specified range. - */ - AsyncExecutions>> zrevrangeWithScores(K key, long start, long stop); - - /** - * Return a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param min min score - * @param max max score - * @return List<V> array-reply list of elements in the specified score range. - */ - AsyncExecutions> zrevrangebyscore(K key, double max, double min); - - /** - * Return a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param min min score - * @param max max score - * @return List<V> array-reply list of elements in the specified score range. - */ - AsyncExecutions> zrevrangebyscore(K key, String max, String min); - - /** - * Return a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param max max score - * @param min min score - * @param offset the withscores - * @param count the null - * @return List<V> array-reply list of elements in the specified score range. - */ - AsyncExecutions> zrevrangebyscore(K key, double max, double min, long offset, long count); - - /** - * Return a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param max max score - * @param min min score - * @param offset the offset - * @param count the count - * @return List<V> array-reply list of elements in the specified score range. 
- */ - AsyncExecutions> zrevrangebyscore(K key, String max, String min, long offset, long count); - - /** - * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param max max score - * @param min min score - * @return List<V> array-reply list of elements in the specified score range. - */ - AsyncExecutions>> zrevrangebyscoreWithScores(K key, double max, double min); - - /** - * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param max max score - * @param min min score - * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. - */ - AsyncExecutions>> zrevrangebyscoreWithScores(K key, String max, String min); - - /** - * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param max max score - * @param min min score - * @param offset the offset - * @param count the count - * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. - */ - AsyncExecutions>> zrevrangebyscoreWithScores(K key, double max, double min, long offset, long count); - - /** - * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param max max score - * @param min min score - * @param offset the offset - * @param count the count - * @return List<V> array-reply list of elements in the specified score range. - */ - AsyncExecutions>> zrevrangebyscoreWithScores(K key, String max, String min, long offset, long count); - - /** - * Stream over a range of members in a sorted set, by index, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param start the start - * @param stop the stop - * @return Long count of elements in the specified range. - */ - AsyncExecutions zrevrange(ValueStreamingChannel channel, K key, long start, long stop); - - /** - * Stream over a range of members with scores in a sorted set, by index, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param start the start - * @param stop the stop - * @return Long count of elements in the specified range. - */ - AsyncExecutions zrevrangeWithScores(ScoredValueStreamingChannel channel, K key, long start, long stop); - - /** - * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param max max score - * @param min min score - * @return Long count of elements in the specified range. - */ - AsyncExecutions zrevrangebyscore(ValueStreamingChannel channel, K key, double max, double min); - - /** - * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified range. - */ - AsyncExecutions zrevrangebyscore(ValueStreamingChannel channel, K key, String max, String min); - - /** - * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. 
- * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified range. - */ - AsyncExecutions zrevrangebyscore(ValueStreamingChannel channel, K key, double max, double min, long offset, long count); - - /** - * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified range. - */ - AsyncExecutions zrevrangebyscore(ValueStreamingChannel channel, K key, String max, String min, long offset, long count); - - /** - * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified range. - */ - AsyncExecutions zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double max, double min); - - /** - * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified range. - */ - AsyncExecutions zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String max, String min); - - /** - * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified range. - */ - AsyncExecutions zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double max, double min, long offset, long count); - - /** - * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified range. - */ - AsyncExecutions zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String max, String min, long offset, long count); - - /** - * Determine the index of a member in a sorted set, with scores ordered from high to low. - * - * @param key the key - * @param member the member type: value - * @return Long integer-reply the rank of {@code member}. If {@code member} does not exist in the sorted set or {@code key} - * does not exist, - */ - AsyncExecutions zrevrank(K key, V member); - - /** - * Get the score associated with the given member in a sorted set. - * - * @param key the key - * @param member the member type: value - * @return Double bulk-string-reply the score of {@code member} (a double precision floating point number), represented as - * string. 
- */ - AsyncExecutions zscore(K key, V member); - - /** - * Add multiple sorted sets and store the resulting sorted set in a new key. - * - * @param destination destination key - * @param keys source keys - * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. - */ - AsyncExecutions zunionstore(K destination, K... keys); - - /** - * Add multiple sorted sets and store the resulting sorted set in a new key. - * - * @param destination the destination - * @param storeArgs the storeArgs - * @param keys the keys - * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. - */ - AsyncExecutions zunionstore(K destination, ZStoreArgs storeArgs, K... keys); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param key the key - * @return ScoredValueScanCursor<V> scan cursor. - */ - AsyncExecutions> zscan(K key); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param key the key - * @param scanArgs scan arguments - * @return ScoredValueScanCursor<V> scan cursor. - */ - AsyncExecutions> zscan(K key, ScanArgs scanArgs); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @param scanArgs scan arguments - * @return ScoredValueScanCursor<V> scan cursor. - */ - AsyncExecutions> zscan(K key, ScanCursor scanCursor, ScanArgs scanArgs); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @return ScoredValueScanCursor<V> scan cursor. - */ - AsyncExecutions> zscan(K key, ScanCursor scanCursor); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @return StreamScanCursor scan cursor. - */ - AsyncExecutions zscan(ScoredValueStreamingChannel channel, K key); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param scanArgs scan arguments - * @return StreamScanCursor scan cursor. - */ - AsyncExecutions zscan(ScoredValueStreamingChannel channel, K key, ScanArgs scanArgs); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @param scanArgs scan arguments - * @return StreamScanCursor scan cursor. - */ - AsyncExecutions zscan(ScoredValueStreamingChannel channel, K key, ScanCursor scanCursor, ScanArgs scanArgs); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @return StreamScanCursor scan cursor. - */ - AsyncExecutions zscan(ScoredValueStreamingChannel channel, K key, ScanCursor scanCursor); - - /** - * Count the number of members in a sorted set between a given lexicographical range. 
- * - * @param key the key - * @param min min score - * @param max max score - * @return Long integer-reply the number of elements in the specified score range. - */ - AsyncExecutions zlexcount(K key, String min, String max); - - /** - * Remove all members in a sorted set between the given lexicographical range. - * - * @param key the key - * @param min min score - * @param max max score - * @return Long integer-reply the number of elements removed. - */ - AsyncExecutions zremrangebylex(K key, String min, String max); - - /** - * Return a range of members in a sorted set, by lexicographical range. - * - * @param key the key - * @param min min score - * @param max max score - * @return List<V> array-reply list of elements in the specified score range. - */ - AsyncExecutions> zrangebylex(K key, String min, String max); - - /** - * Return a range of members in a sorted set, by lexicographical range. - * - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return List<V> array-reply list of elements in the specified score range. - */ - AsyncExecutions> zrangebylex(K key, String min, String max, long offset, long count); -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/api/async/package-info.java b/src/main/java/com/lambdaworks/redis/cluster/api/async/package-info.java deleted file mode 100644 index 2cdba3715e..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/api/async/package-info.java +++ /dev/null @@ -1,4 +0,0 @@ -/** - * Redis Cluster API for asynchronous executed commands. - */ -package com.lambdaworks.redis.cluster.api.async; \ No newline at end of file diff --git a/src/main/java/com/lambdaworks/redis/cluster/api/package-info.java b/src/main/java/com/lambdaworks/redis/cluster/api/package-info.java deleted file mode 100644 index e2cee2c2c0..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/api/package-info.java +++ /dev/null @@ -1,4 +0,0 @@ -/** - * Redis Cluster connection API. - */ -package com.lambdaworks.redis.cluster.api; \ No newline at end of file diff --git a/src/main/java/com/lambdaworks/redis/cluster/api/rx/RedisAdvancedClusterReactiveCommands.java b/src/main/java/com/lambdaworks/redis/cluster/api/rx/RedisAdvancedClusterReactiveCommands.java deleted file mode 100644 index 9e038feae3..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/api/rx/RedisAdvancedClusterReactiveCommands.java +++ /dev/null @@ -1,280 +0,0 @@ -package com.lambdaworks.redis.cluster.api.rx; - -import java.util.Map; - -import com.lambdaworks.redis.*; -import com.lambdaworks.redis.api.rx.*; -import com.lambdaworks.redis.cluster.ClusterClientOptions; -import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection; -import com.lambdaworks.redis.output.KeyStreamingChannel; - -import rx.Observable; - -/** - * Advanced reactive and thread-safe Redis Cluster API. - * - * @author Mark Paluch - * @since 4.0 - */ -public interface RedisAdvancedClusterReactiveCommands extends RedisClusterReactiveCommands { - - /** - * Retrieve a connection to the specified cluster node using the nodeId. Host and port are looked up in the node list. 
In - * contrast to the {@link RedisAdvancedClusterReactiveCommands}, node-connections do not route commands to other cluster - * nodes - * - * @param nodeId the node Id - * @return a connection to the requested cluster node - */ - RedisClusterReactiveCommands getConnection(String nodeId); - - /** - * Retrieve a connection to the specified cluster node using host and port. In contrast to the - * {@link RedisAdvancedClusterReactiveCommands}, node-connections do not route commands to other cluster nodes. Host and - * port connections are verified by default for cluster membership, see - * {@link ClusterClientOptions#isValidateClusterNodeMembership()}. - * - * @param host the host - * @param port the port - * @return a connection to the requested cluster node - */ - RedisClusterReactiveCommands getConnection(String host, int port); - - /** - * @return the underlying connection. - */ - StatefulRedisClusterConnection getStatefulConnection(); - - /** - * Delete one or more keys with pipelining. Cross-slot keys will result in multiple calls to the particular cluster nodes. - * - * @param keys the keys - * @return Long integer-reply The number of keys that were removed. - * @see RedisKeyReactiveCommands#del(Object[]) - */ - Observable del(K... keys); - - /** - * Unlink one or more keys with pipelining. Cross-slot keys will result in multiple calls to the particular cluster nodes. - * - * @param keys the keys - * @return Long integer-reply The number of keys that were removed. - * @see RedisKeyReactiveCommands#unlink(Object[]) - */ - Observable unlink(K... keys); - - /** - * Determine how many keys exist with pipelining. Cross-slot keys will result in multiple calls to the particular cluster nodes. - * - * @param keys the keys - * @return Long integer-reply specifically: Number of existing keys - */ - Observable exists(K... keys); - - /** - * Get the values of all the given keys with pipelining. Cross-slot keys will result in multiple calls to the particular - * cluster nodes. - * - * @param keys the key - * @return V array-reply list of values at the specified keys. - * @see RedisStringReactiveCommands#mget(Object[]) - */ - Observable mget(K... keys); - - /** - * Set multiple keys to multiple values with pipelining. Cross-slot keys will result in multiple calls to the particular - * cluster nodes. - * - * @param map the map - * @return String simple-string-reply always {@code OK} since {@code MSET} can't fail. - * @see RedisStringReactiveCommands#mset(Map) - */ - Observable mset(Map map); - - /** - * Set multiple keys to multiple values, only if none of the keys exist with pipelining. Cross-slot keys will result in - * multiple calls to the particular cluster nodes. - * - * @param map the null - * @return Boolean integer-reply specifically: - * - * {@code 1} if the all the keys were set. {@code 0} if no key was set (at least one key already existed). - * - * @see RedisStringReactiveCommands#msetnx(Map) - */ - Observable msetnx(Map map); - - /** - * Set the current connection name on all cluster nodes with pipelining. - * - * @param name the client name - * @return simple-string-reply {@code OK} if the connection name was successfully set. - * @see RedisServerReactiveCommands#clientSetname(Object) - */ - Observable clientSetname(K name); - - /** - * Remove all keys from all databases on all cluster masters with pipelining. 
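A sketch of the cross-slot behaviour described above, assuming an already connected RedisAdvancedClusterReactiveCommands<String, String> named `reactive` (hypothetical) and illustrative key names:

import java.util.List;

import rx.Observable;

import com.lambdaworks.redis.cluster.api.rx.RedisAdvancedClusterReactiveCommands;

class CrossSlotMgetExample {

    // The keys may hash to different slots; the advanced API fans the MGET out to the owning
    // nodes and merges the per-node results into a single Observable.
    static List<String> readThree(RedisAdvancedClusterReactiveCommands<String, String> reactive) {
        Observable<String> values = reactive.mget("user:1", "user:2", "user:3");
        return values.toList().toBlocking().single();
    }
}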
- * - * @return String simple-string-reply - * @see RedisServerReactiveCommands#flushall() - */ - Observable flushall(); - - /** - * Remove all keys from the current database on all cluster masters with pipelining. - * - * @return String simple-string-reply - * @see RedisServerReactiveCommands#flushdb() - */ - Observable flushdb(); - - /** - * Return the number of keys in the selected database on all cluster masters. - * - * @return Long integer-reply - * @see RedisServerReactiveCommands#dbsize() - */ - Observable dbsize(); - - /** - * Find all keys matching the given pattern on all cluster masters. - * - * @param pattern the pattern type: patternkey (pattern) - * @return List<K> array-reply list of keys matching {@code pattern}. - * @see RedisKeyReactiveCommands#keys(Object) - */ - Observable keys(K pattern); - - /** - * Find all keys matching the given pattern on all cluster masters. - * - * @param channel the channel - * @param pattern the pattern - * @return Long array-reply list of keys matching {@code pattern}. - * @see RedisKeyReactiveCommands#keys(KeyStreamingChannel, Object) - */ - Observable keys(KeyStreamingChannel channel, K pattern); - - /** - * Return a random key from the keyspace on a random master. - * - * @return V bulk-string-reply the random key, or {@literal null} when the database is empty. - * @see RedisKeyReactiveCommands#randomkey() - */ - Observable randomkey(); - - /** - * Remove all the scripts from the script cache on all cluster nodes. - * - * @return String simple-string-reply - * @see RedisScriptingReactiveCommands#scriptFlush() - */ - Observable scriptFlush(); - - /** - * Kill the script currently in execution on all cluster nodes. This call does not fail even if no scripts are running. - * - * @return String simple-string-reply, always {@literal OK}. - * @see RedisScriptingReactiveCommands#scriptKill() - */ - Observable scriptKill(); - - /** - * Synchronously save the dataset to disk and then shut down all nodes of the cluster. - * - * @param save {@literal true} force save operation - * @see RedisServerReactiveCommands#shutdown(boolean) - */ - Observable shutdown(boolean save); - - /** - * Incrementally iterate the keys space over the whole Cluster. - * - * @return KeyScanCursor<K> scan cursor. - * @see RedisKeyReactiveCommands#scan(ScanArgs) - */ - Observable> scan(); - - /** - * Incrementally iterate the keys space over the whole Cluster. - * - * @param scanArgs scan arguments - * @return KeyScanCursor<K> scan cursor. - * @see RedisKeyReactiveCommands#scan(ScanArgs) - */ - Observable> scan(ScanArgs scanArgs); - - /** - * Incrementally iterate the keys space over the whole Cluster. - * - * @param scanCursor cursor to resume the scan. It's required to reuse the {@code scanCursor} instance from the previous - * {@link #scan()} call. - * @param scanArgs scan arguments - * @return KeyScanCursor<K> scan cursor. - * @see RedisKeyReactiveCommands#scan(ScanCursor, ScanArgs) - */ - Observable> scan(ScanCursor scanCursor, ScanArgs scanArgs); - - /** - * Incrementally iterate the keys space over the whole Cluster. - * - * @param scanCursor cursor to resume the scan. It's required to reuse the {@code scanCursor} instance from the previous - * {@link #scan()} call. - * @return KeyScanCursor<K> scan cursor. - * @see RedisKeyReactiveCommands#scan(ScanCursor) - */ - Observable> scan(ScanCursor scanCursor); - - /** - * Incrementally iterate the keys space over the whole Cluster. 
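A sketch of resuming the cluster-wide scan with the cursor of the previous call, as the Javadoc above requires; `reactive` is again a hypothetical, already connected handle and the page size of 200 is arbitrary:

import com.lambdaworks.redis.KeyScanCursor;
import com.lambdaworks.redis.ScanArgs;
import com.lambdaworks.redis.cluster.api.rx.RedisAdvancedClusterReactiveCommands;

class ClusterScanExample {

    // Walks the keyspace of all masters page by page. The cursor returned by each call has to
    // be passed to the next call to resume the scan.
    static void scanAll(RedisAdvancedClusterReactiveCommands<String, String> reactive) {
        KeyScanCursor<String> cursor = reactive.scan(ScanArgs.Builder.limit(200)).toBlocking().single();
        cursor.getKeys().forEach(System.out::println);

        while (!cursor.isFinished()) {
            cursor = reactive.scan(cursor, ScanArgs.Builder.limit(200)).toBlocking().single();
            cursor.getKeys().forEach(System.out::println);
        }
    }
}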
- * - * @param channel streaming channel that receives a call for every key - * @return StreamScanCursor scan cursor. - * @see RedisKeyReactiveCommands#scan(KeyStreamingChannel) - */ - Observable scan(KeyStreamingChannel channel); - - /** - * Incrementally iterate the keys space over the whole Cluster. - * - * @param channel streaming channel that receives a call for every key - * @param scanArgs scan arguments - * @return StreamScanCursor scan cursor. - * @see RedisKeyReactiveCommands#scan(KeyStreamingChannel, ScanArgs) - */ - Observable scan(KeyStreamingChannel channel, ScanArgs scanArgs); - - /** - * Incrementally iterate the keys space over the whole Cluster. - * - * @param channel streaming channel that receives a call for every key - * @param scanCursor cursor to resume the scan. It's required to reuse the {@code scanCursor} instance from the previous - * {@link #scan()} call. - * @param scanArgs scan arguments - * @return StreamScanCursor scan cursor. - * @see RedisKeyReactiveCommands#scan(KeyStreamingChannel, ScanCursor, ScanArgs) - */ - Observable scan(KeyStreamingChannel channel, ScanCursor scanCursor, ScanArgs scanArgs); - - /** - * Incrementally iterate the keys space over the whole Cluster. - * - * @param channel streaming channel that receives a call for every key - * @param scanCursor cursor to resume the scan. It's required to reuse the {@code scanCursor} instance from the previous - * {@link #scan()} call. - * @return StreamScanCursor scan cursor. - * @see RedisKeyReactiveCommands#scan(ScanCursor, ScanArgs) - */ - Observable scan(KeyStreamingChannel channel, ScanCursor scanCursor); - - /** - * Touch one or more keys with pipelining. Touch sets the last accessed time for a key. Non-exsitent keys wont get created. - * Cross-slot keys will result in multiple calls to the particular cluster nodes. - * - * @param keys the keys - * @return Long integer-reply the number of found keys. - */ - Observable touch(K... keys); - -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/api/rx/RedisClusterReactiveCommands.java b/src/main/java/com/lambdaworks/redis/cluster/api/rx/RedisClusterReactiveCommands.java deleted file mode 100644 index 68b7d483ce..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/api/rx/RedisClusterReactiveCommands.java +++ /dev/null @@ -1,313 +0,0 @@ -package com.lambdaworks.redis.cluster.api.rx; - -import java.util.Map; -import java.util.concurrent.TimeUnit; - -import rx.Observable; - -import com.lambdaworks.redis.api.rx.*; - -/** - * A complete reactive and thread-safe cluster Redis API with 400+ Methods. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - */ -public interface RedisClusterReactiveCommands extends RedisHashReactiveCommands, RedisKeyReactiveCommands, - RedisStringReactiveCommands, RedisListReactiveCommands, RedisSetReactiveCommands, - RedisSortedSetReactiveCommands, RedisScriptingReactiveCommands, RedisServerReactiveCommands, - RedisHLLReactiveCommands, RedisGeoReactiveCommands, BaseRedisReactiveCommands { - - /** - * Set the default timeout for operations. - * - * @param timeout the timeout value - * @param unit the unit of the timeout value - */ - void setTimeout(long timeout, TimeUnit unit); - - /** - * Authenticate to the server. 
- * - * @param password the password - * @return String simple-string-reply - */ - Observable auth(String password); - - /** - * Generate a new config epoch, incrementing the current epoch, assign the new epoch to this node, WITHOUT any consensus and - * persist the configuration on disk before sending packets with the new configuration. - * - * @return String simple-string-reply If the new config epoch is generated and assigned either BUMPED (epoch) or STILL - * (epoch) are returned. - */ - Observable clusterBumpepoch(); - - /** - * Meet another cluster node to include the node into the cluster. The command starts the cluster handshake and returns with - * {@literal OK} when the node was added to the cluster. - * - * @param ip IP address of the host - * @param port port number. - * @return String simple-string-reply - */ - Observable clusterMeet(String ip, int port); - - /** - * Blacklist and remove the cluster node from the cluster. - * - * @param nodeId the node Id - * @return String simple-string-reply - */ - Observable clusterForget(String nodeId); - - /** - * Adds slots to the cluster node. The current node will become the master for the specified slots. - * - * @param slots one or more slots from {@literal 0} to {@literal 16384} - * @return String simple-string-reply - */ - Observable clusterAddSlots(int... slots); - - /** - * Removes slots from the cluster node. - * - * @param slots one or more slots from {@literal 0} to {@literal 16384} - * @return String simple-string-reply - */ - Observable clusterDelSlots(int... slots); - - /** - * Assign a slot to a node. The command migrates the specified slot from the current node to the specified node in - * {@code nodeId} - * - * @param slot the slot - * @param nodeId the id of the node that will become the master for the slot - * @return String simple-string-reply - */ - Observable clusterSetSlotNode(int slot, String nodeId); - - /** - * Clears migrating / importing state from the slot. - * - * @param slot the slot - * @return String simple-string-reply - */ - Observable clusterSetSlotStable(int slot); - - /** - * Flag a slot as {@literal MIGRATING} (outgoing) towards the node specified in {@code nodeId}. The slot must be handled by - * the current node in order to be migrated. - * - * @param slot the slot - * @param nodeId the id of the node is targeted to become the master for the slot - * @return String simple-string-reply - */ - Observable clusterSetSlotMigrating(int slot, String nodeId); - - /** - * Flag a slot as {@literal IMPORTING} (incoming) from the node specified in {@code nodeId}. - * - * @param slot the slot - * @param nodeId the id of the node is the master of the slot - * @return String simple-string-reply - */ - Observable clusterSetSlotImporting(int slot, String nodeId); - - /** - * Get information and statistics about the cluster viewed by the current node. - * - * @return String bulk-string-reply as a collection of text lines. - */ - Observable clusterInfo(); - - /** - * Obtain the nodeId for the currently connected node. - * - * @return String simple-string-reply - */ - Observable clusterMyId(); - - /** - * Obtain details about all cluster nodes. Can be parsed using - * {@link com.lambdaworks.redis.cluster.models.partitions.ClusterPartitionParser#parse} - * - * @return String bulk-string-reply as a collection of text lines - */ - Observable clusterNodes(); - - /** - * List slaves for a certain node identified by its {@code nodeId}. 
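A sketch of combining clusterNodes() with ClusterPartitionParser, as suggested by the Javadoc; `commands` is a hypothetical, already connected RedisClusterReactiveCommands<String, String>:

import com.lambdaworks.redis.cluster.api.rx.RedisClusterReactiveCommands;
import com.lambdaworks.redis.cluster.models.partitions.ClusterPartitionParser;
import com.lambdaworks.redis.cluster.models.partitions.Partitions;
import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode;

class TopologyExample {

    // CLUSTER NODES returns one line of text per node; ClusterPartitionParser turns that text
    // into RedisClusterNode objects.
    static void printTopology(RedisClusterReactiveCommands<String, String> commands) {
        String nodes = commands.clusterNodes().toBlocking().single();
        Partitions partitions = ClusterPartitionParser.parse(nodes);

        for (RedisClusterNode node : partitions) {
            System.out.println(node.getNodeId() + " " + node.getUri() + " " + node.getFlags());
        }
    }
}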
Can be parsed using - * {@link com.lambdaworks.redis.cluster.models.partitions.ClusterPartitionParser#parse} - * - * @param nodeId node id of the master node - * @return List<String> array-reply list of slaves. The command returns data in the same format as - * {@link #clusterNodes()} but one line per slave. - */ - Observable clusterSlaves(String nodeId); - - /** - * Retrieve the list of keys within the {@code slot}. - * - * @param slot the slot - * @param count maximal number of keys - * @return List<K> array-reply list of keys - */ - Observable clusterGetKeysInSlot(int slot, int count); - - /** - * Returns the number of keys in the specified Redis Cluster hash {@code slot}. - * - * @param slot the slot - * @return Integer reply: The number of keys in the specified hash slot, or an error if the hash slot is invalid. - */ - Observable clusterCountKeysInSlot(int slot); - - /** - * Returns the number of failure reports for the specified node. Failure reports are the way Redis Cluster uses in order to - * promote a {@literal PFAIL} state, that means a node is not reachable, to a {@literal FAIL} state, that means that the - * majority of masters in the cluster agreed within a window of time that the node is not reachable. - * - * @param nodeId the node id - * @return Integer reply: The number of active failure reports for the node. - */ - Observable clusterCountFailureReports(String nodeId); - - /** - * Returns an integer identifying the hash slot the specified key hashes to. This command is mainly useful for debugging and - * testing, since it exposes via an API the underlying Redis implementation of the hashing algorithm. Basically the same as - * {@link com.lambdaworks.redis.cluster.SlotHash#getSlot(byte[])}. If not, call Houston and report that we've got a problem. - * - * @param key the key. - * @return Integer reply: The hash slot number. - */ - Observable clusterKeyslot(K key); - - /** - * Forces a node to save the nodes.conf configuration on disk. - * - * @return String simple-string-reply: {@code OK} or an error if the operation fails. - */ - Observable clusterSaveconfig(); - - /** - * This command sets a specific config epoch in a fresh node. It only works when: - *
- * <ul>
- * <li>The nodes table of the node is empty.</li>
- * <li>The node current config epoch is zero.</li>
- * </ul>
- * - * @param configEpoch the config epoch - * @return String simple-string-reply: {@code OK} or an error if the operation fails. - */ - Observable clusterSetConfigEpoch(long configEpoch); - - /** - * Get array of cluster slots to node mappings. - * - * @return List<Object> array-reply nested list of slot ranges with IP/Port mappings. - */ - Observable clusterSlots(); - - /** - * The asking command is required after a {@code -ASK} redirection. The client should issue {@code ASKING} before to - * actually send the command to the target instance. See the Redis Cluster specification for more information. - * - * @return String simple-string-reply - */ - Observable asking(); - - /** - * Turn this node into a slave of the node with the id {@code nodeId}. - * - * @param nodeId master node id - * @return String simple-string-reply - */ - Observable clusterReplicate(String nodeId); - - /** - * Failover a cluster node. Turns the currently connected node into a master and the master into its slave. - * - * @param force do not coordinate with master if {@literal true} - * @return String simple-string-reply - */ - Observable clusterFailover(boolean force); - - /** - * Reset a node performing a soft or hard reset: - *
- * <ul>
- * <li>All other nodes are forgotten</li>
- * <li>All the assigned / open slots are released</li>
- * <li>If the node is a slave, it turns into a master</li>
- * <li>Only for hard reset: a new Node ID is generated</li>
- * <li>Only for hard reset: currentEpoch and configEpoch are set to 0</li>
- * <li>The new configuration is saved and the cluster state updated</li>
- * <li>If the node was a slave, the whole data set is flushed away</li>
- * </ul>
- * - * @param hard {@literal true} for hard reset. Generates a new nodeId and currentEpoch/configEpoch are set to 0 - * @return String simple-string-reply - */ - Observable clusterReset(boolean hard); - - /** - * Delete all the slots associated with the specified node. The number of deleted slots is returned. - * - * @return String simple-string-reply - */ - Observable clusterFlushslots(); - - /** - * Tells a Redis cluster slave node that the client is ok reading possibly stale data and is not interested in running write - * queries. - * - * @return String simple-string-reply - */ - Observable readOnly(); - - /** - * Resets readOnly flag. - * - * @return String simple-string-reply - */ - Observable readWrite(); - - /** - * Delete a key with pipelining. Cross-slot keys will result in multiple calls to the particular cluster nodes. - * - * @param keys the key - * @return Observable<Long> integer-reply The number of keys that were removed. - */ - Observable del(K... keys); - - /** - * Get the values of all the given keys with pipelining. Cross-slot keys will result in multiple calls to the particular - * cluster nodes. - * - * @param keys the key - * @return Observable<List<V>> array-reply list of values at the specified keys. - */ - Observable mget(K... keys); - - /** - * Set multiple keys to multiple values with pipelining. Cross-slot keys will result in multiple calls to the particular - * cluster nodes. - * - * @param map the null - * @return Observable<String> simple-string-reply always {@code OK} since {@code MSET} can't fail. - */ - Observable mset(Map map); - - /** - * Set multiple keys to multiple values, only if none of the keys exist with pipelining. Cross-slot keys will result in - * multiple calls to the particular cluster nodes. - * - * @param map the null - * @return Observable<Boolean> integer-reply specifically: - * - * {@code 1} if the all the keys were set. {@code 0} if no key was set (at least one key already existed). - */ - Observable msetnx(Map map); -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/api/rx/package-info.java b/src/main/java/com/lambdaworks/redis/cluster/api/rx/package-info.java deleted file mode 100644 index 240604692c..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/api/rx/package-info.java +++ /dev/null @@ -1,4 +0,0 @@ -/** - * Redis Cluster API for reactive commands. - */ -package com.lambdaworks.redis.cluster.api.rx; \ No newline at end of file diff --git a/src/main/java/com/lambdaworks/redis/cluster/api/sync/BaseNodeSelectionCommands.java b/src/main/java/com/lambdaworks/redis/cluster/api/sync/BaseNodeSelectionCommands.java deleted file mode 100644 index 3d38f61c10..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/api/sync/BaseNodeSelectionCommands.java +++ /dev/null @@ -1,96 +0,0 @@ -package com.lambdaworks.redis.cluster.api.sync; - -import java.lang.AutoCloseable; -import java.util.List; -import java.util.Map; - -/** - * - * Synchronous executed commands on a node selection for basic commands. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateSyncNodeSelectionClusterApi - */ -public interface BaseNodeSelectionCommands extends AutoCloseable { - - /** - * Post a message to a channel. - * - * @param channel the channel type: key - * @param message the message type: value - * @return Long integer-reply the number of clients that received the message. 
- */ - Executions publish(K channel, V message); - - /** - * Lists the currently *active channels*. - * - * @return List<K> array-reply a list of active channels, optionally matching the specified pattern. - */ - Executions> pubsubChannels(); - - /** - * Lists the currently *active channels*. - * - * @param channel the key - * @return List<K> array-reply a list of active channels, optionally matching the specified pattern. - */ - Executions> pubsubChannels(K channel); - - /** - * Returns the number of subscribers (not counting clients subscribed to patterns) for the specified channels. - * - * @param channels channel keys - * @return array-reply a list of channels and number of subscribers for every channel. - */ - Executions> pubsubNumsub(K... channels); - - /** - * Returns the number of subscriptions to patterns. - * - * @return Long integer-reply the number of patterns all the clients are subscribed to. - */ - Executions pubsubNumpat(); - - /** - * Echo the given string. - * - * @param msg the message type: value - * @return V bulk-string-reply - */ - Executions echo(V msg); - - /** - * Return the role of the instance in the context of replication. - * - * @return List<Object> array-reply where the first element is one of master, slave, sentinel and the additional - * elements are role-specific. - */ - Executions> role(); - - /** - * Ping the server. - * - * @return String simple-string-reply - */ - Executions ping(); - - /** - * Close the connection. - * - * @return String simple-string-reply always OK. - */ - Executions quit(); - - /** - * Wait for replication. - * - * @param replicas minimum number of replicas - * @param timeout timeout in milliseconds - * @return number of replicas - */ - Executions waitForReplication(int replicas, long timeout); -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/api/sync/Executions.java b/src/main/java/com/lambdaworks/redis/cluster/api/sync/Executions.java deleted file mode 100644 index 2b3d889e22..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/api/sync/Executions.java +++ /dev/null @@ -1,67 +0,0 @@ -package com.lambdaworks.redis.cluster.api.sync; - -import java.util.*; -import java.util.concurrent.CompletableFuture; -import java.util.concurrent.CompletionStage; -import java.util.stream.Stream; -import java.util.stream.StreamSupport; - -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; - -/** - * Result holder for a command that was executed synchronously on multiple nodes. This API is subject to incompatible changes in - * a future release. The API is exempt from any compatibility guarantees made by lettuce. The current state implies nothing - * about the quality or performance of the API in question, only the fact that it is not "API-frozen." - * - * The NodeSelection command API and its result types are a base for discussions. - * - * - * @author Mark Paluch - * @since 4.0 - */ -public interface Executions extends Iterable { - - /** - * - * @return map between {@link RedisClusterNode} and the {@link CompletionStage} - */ - Map asMap(); - - /** - * - * @return collection of nodes on which the command was executed. 
- */ - Collection nodes(); - - /** - * - * @param redisClusterNode the node - * @return the completion stage for this node - */ - T get(RedisClusterNode redisClusterNode); - - /** - * - * @return iterator over the {@link CompletionStage}s - */ - @Override - default Iterator iterator() { - return asMap().values().iterator(); - } - - /** - * - * @return a {@code Spliterator} over the elements in this collection - */ - @Override - default Spliterator spliterator() { - return Spliterators.spliterator(iterator(), nodes().size(), 0); - } - - /** - * @return a sequential {@code Stream} over the elements in this collection - */ - default Stream stream() { - return StreamSupport.stream(spliterator(), false); - } -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/api/sync/NodeSelection.java b/src/main/java/com/lambdaworks/redis/cluster/api/sync/NodeSelection.java deleted file mode 100644 index 4aa55752c8..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/api/sync/NodeSelection.java +++ /dev/null @@ -1,21 +0,0 @@ -package com.lambdaworks.redis.cluster.api.sync; - -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.cluster.api.NodeSelectionSupport; - -/** - * Node selection with access to synchronous executed commands on the set. Commands are triggered concurrently to the selected - * nodes and synchronized afterwards. - * - * This API is subject to incompatible changes in a future release. The API is exempt from any compatibility guarantees made by - * lettuce. The current state implies nothing about the quality or performance of the API in question, only the fact that it is - * not "API-frozen." - * - * The NodeSelection command API and its result types are a base for discussions. - * - * @author Mark Paluch - * @since 4.0 - */ -public interface NodeSelection extends NodeSelectionSupport, NodeSelectionCommands> { - -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/api/sync/NodeSelectionCommands.java b/src/main/java/com/lambdaworks/redis/cluster/api/sync/NodeSelectionCommands.java deleted file mode 100644 index fa5929a784..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/api/sync/NodeSelectionCommands.java +++ /dev/null @@ -1,15 +0,0 @@ -package com.lambdaworks.redis.cluster.api.sync; - -import com.lambdaworks.redis.cluster.api.NodeSelectionSupport; - -/** - * Synchronous and thread-safe Redis API to execute commands on a {@link NodeSelectionSupport}. - * - * @author Mark Paluch - */ -public interface NodeSelectionCommands extends BaseNodeSelectionCommands, NodeSelectionHashCommands, - NodeSelectionHLLCommands, NodeSelectionKeyCommands, NodeSelectionListCommands, - NodeSelectionScriptingCommands, NodeSelectionServerCommands, NodeSelectionSetCommands, - NodeSelectionSortedSetCommands, NodeSelectionStringCommands, NodeSelectionGeoCommands { - -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/api/sync/NodeSelectionHLLCommands.java b/src/main/java/com/lambdaworks/redis/cluster/api/sync/NodeSelectionHLLCommands.java deleted file mode 100644 index 39236a01a1..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/api/sync/NodeSelectionHLLCommands.java +++ /dev/null @@ -1,46 +0,0 @@ -package com.lambdaworks.redis.cluster.api.sync; - -/** - * Synchronous executed commands on a node selection for HyperLogLog (PF* commands). - * - * @param Key type. - * @param Value type. 
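The Executions holder defined above is, in effect, a per-node result map with Stream support supplied by its default methods. As a rough, non-authoritative sketch of how calling code might consume it: the Executions<String> is assumed to come from a command such as ping() issued against a node selection (not part of this diff), and asMap() is assumed to key each node's result by its RedisClusterNode, as the Javadoc above suggests.

    import java.util.Map;

    import com.lambdaworks.redis.cluster.api.sync.Executions;
    import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode;

    public final class ExecutionsUsageSketch {

        // Prints each node's reply and reports whether every selected node answered PONG.
        public static boolean allNodesReturnedPong(Executions<String> executions) {
            for (Map.Entry<RedisClusterNode, String> entry : executions.asMap().entrySet()) {
                System.out.println(entry.getKey().getNodeId() + " -> " + entry.getValue());
            }
            // stream() is a default method built on the iterator()/spliterator() declared above.
            return executions.stream().allMatch("PONG"::equals);
        }
    }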
- * @author Mark Paluch - * @since 3.0 - * @generated by com.lambdaworks.apigenerator.CreateSyncNodeSelectionClusterApi - */ -public interface NodeSelectionHLLCommands { - - /** - * Adds the specified elements to the specified HyperLogLog. - * - * @param key the key - * @param values the values - * - * @return Long integer-reply specifically: - * - * 1 if at least 1 HyperLogLog internal register was altered. 0 otherwise. - */ - Executions pfadd(K key, V... values); - - /** - * Merge N different HyperLogLogs into a single one. - * - * @param destkey the destination key - * @param sourcekeys the source key - * - * @return String simple-string-reply The command just returns {@code OK}. - */ - Executions pfmerge(K destkey, K... sourcekeys); - - /** - * Return the approximated cardinality of the set(s) observed by the HyperLogLog at key(s). - * - * @param keys the keys - * - * @return Long integer-reply specifically: - * - * The approximated number of unique elements observed via {@code PFADD}. - */ - Executions pfcount(K... keys); -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/api/sync/NodeSelectionScriptingCommands.java b/src/main/java/com/lambdaworks/redis/cluster/api/sync/NodeSelectionScriptingCommands.java deleted file mode 100644 index e89b0132c6..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/api/sync/NodeSelectionScriptingCommands.java +++ /dev/null @@ -1,94 +0,0 @@ -package com.lambdaworks.redis.cluster.api.sync; - -import java.util.List; -import com.lambdaworks.redis.ScriptOutputType; - -/** - * Synchronous executed commands on a node selection for Scripting. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateSyncNodeSelectionClusterApi - */ -public interface NodeSelectionScriptingCommands { - - /** - * Execute a Lua script server side. - * - * @param script Lua 5.1 script. - * @param type output type - * @param keys key names - * @param expected return type - * @return script result - */ - Executions eval(String script, ScriptOutputType type, K... keys); - - /** - * Execute a Lua script server side. - * - * @param script Lua 5.1 script. - * @param type the type - * @param keys the keys - * @param values the values - * @param expected return type - * @return script result - */ - Executions eval(String script, ScriptOutputType type, K[] keys, V... values); - - /** - * Evaluates a script cached on the server side by its SHA1 digest - * - * @param digest SHA1 of the script - * @param type the type - * @param keys the keys - * @param expected return type - * @return script result - */ - Executions evalsha(String digest, ScriptOutputType type, K... keys); - - /** - * Execute a Lua script server side. - * - * @param digest SHA1 of the script - * @param type the type - * @param keys the keys - * @param values the values - * @param expected return type - * @return script result - */ - Executions evalsha(String digest, ScriptOutputType type, K[] keys, V... values); - - /** - * Check existence of scripts in the script cache. - * - * @param digests script digests - * @return List<Boolean> array-reply The command returns an array of integers that correspond to the specified SHA1 - * digest arguments. For every corresponding SHA1 digest of a script that actually exists in the script cache, an 1 - * is returned, otherwise 0 is returned. - */ - Executions> scriptExists(String... digests); - - /** - * Remove all the scripts from the script cache. 
- * - * @return String simple-string-reply - */ - Executions scriptFlush(); - - /** - * Kill the script currently in execution. - * - * @return String simple-string-reply - */ - Executions scriptKill(); - - /** - * Load the specified Lua script into the script cache. - * - * @param script script content - * @return String bulk-string-reply This command returns the SHA1 digest of the script added into the script cache. - */ - Executions scriptLoad(V script); -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/api/sync/NodeSelectionSortedSetCommands.java b/src/main/java/com/lambdaworks/redis/cluster/api/sync/NodeSelectionSortedSetCommands.java deleted file mode 100644 index 288961a928..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/api/sync/NodeSelectionSortedSetCommands.java +++ /dev/null @@ -1,835 +0,0 @@ -package com.lambdaworks.redis.cluster.api.sync; - -import java.util.List; -import com.lambdaworks.redis.ScanArgs; -import com.lambdaworks.redis.ScanCursor; -import com.lambdaworks.redis.ScoredValue; -import com.lambdaworks.redis.ScoredValueScanCursor; -import com.lambdaworks.redis.StreamScanCursor; -import com.lambdaworks.redis.ZAddArgs; -import com.lambdaworks.redis.ZStoreArgs; -import com.lambdaworks.redis.output.ScoredValueStreamingChannel; -import com.lambdaworks.redis.output.ValueStreamingChannel; - -/** - * Synchronous executed commands on a node selection for Sorted Sets. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateSyncNodeSelectionClusterApi - */ -public interface NodeSelectionSortedSetCommands { - - /** - * Add one or more members to a sorted set, or update its score if it already exists. - * - * @param key the key - * @param score the score - * @param member the member - * - * @return Long integer-reply specifically: - * - * The number of elements added to the sorted sets, not including elements already existing for which the score was - * updated. - */ - Executions zadd(K key, double score, V member); - - /** - * Add one or more members to a sorted set, or update its score if it already exists. - * - * @param key the key - * @param scoresAndValues the scoresAndValue tuples (score,value,score,value,...) - * @return Long integer-reply specifically: - * - * The number of elements added to the sorted sets, not including elements already existing for which the score was - * updated. - */ - Executions zadd(K key, Object... scoresAndValues); - - /** - * Add one or more members to a sorted set, or update its score if it already exists. - * - * @param key the key - * @param scoredValues the scored values - * @return Long integer-reply specifically: - * - * The number of elements added to the sorted sets, not including elements already existing for which the score was - * updated. - */ - Executions zadd(K key, ScoredValue... scoredValues); - - /** - * Add one or more members to a sorted set, or update its score if it already exists. - * - * @param key the key - * @param zAddArgs arguments for zadd - * @param score the score - * @param member the member - * - * @return Long integer-reply specifically: - * - * The number of elements added to the sorted sets, not including elements already existing for which the score was - * updated. - */ - Executions zadd(K key, ZAddArgs zAddArgs, double score, V member); - - /** - * Add one or more members to a sorted set, or update its score if it already exists. 
- * - * @param key the key - * @param zAddArgs arguments for zadd - * @param scoresAndValues the scoresAndValue tuples (score,value,score,value,...) - * @return Long integer-reply specifically: - * - * The number of elements added to the sorted sets, not including elements already existing for which the score was - * updated. - */ - Executions zadd(K key, ZAddArgs zAddArgs, Object... scoresAndValues); - - /** - * Add one or more members to a sorted set, or update its score if it already exists. - * - * @param key the ke - * @param zAddArgs arguments for zadd - * @param scoredValues the scored values - * @return Long integer-reply specifically: - * - * The number of elements added to the sorted sets, not including elements already existing for which the score was - * updated. - */ - Executions zadd(K key, ZAddArgs zAddArgs, ScoredValue... scoredValues); - - /** - * ZADD acts like ZINCRBY - * - * @param key the key - * @param score the score - * @param member the member - * - * @return Long integer-reply specifically: - * - * The total number of elements changed - */ - Executions zaddincr(K key, double score, V member); - - /** - * Get the number of members in a sorted set. - * - * @param key the key - * @return Long integer-reply the cardinality (number of elements) of the sorted set, or {@literal false} if {@code key} - * does not exist. - */ - Executions zcard(K key); - - /** - * Count the members in a sorted set with scores within the given values. - * - * @param key the key - * @param min min score - * @param max max score - * @return Long integer-reply the number of elements in the specified score range. - */ - Executions zcount(K key, double min, double max); - - /** - * Count the members in a sorted set with scores within the given values. - * - * @param key the key - * @param min min score - * @param max max score - * @return Long integer-reply the number of elements in the specified score range. - */ - Executions zcount(K key, String min, String max); - - /** - * Increment the score of a member in a sorted set. - * - * @param key the key - * @param amount the increment type: long - * @param member the member type: key - * @return Double bulk-string-reply the new score of {@code member} (a double precision floating point number), represented - * as string. - */ - Executions zincrby(K key, double amount, K member); - - /** - * Intersect multiple sorted sets and store the resulting sorted set in a new key. - * - * @param destination the destination - * @param keys the keys - * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. - */ - Executions zinterstore(K destination, K... keys); - - /** - * Intersect multiple sorted sets and store the resulting sorted set in a new key. - * - * @param destination the destination - * @param storeArgs the storeArgs - * @param keys the keys - * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. - */ - Executions zinterstore(K destination, ZStoreArgs storeArgs, K... keys); - - /** - * Return a range of members in a sorted set, by index. - * - * @param key the key - * @param start the start - * @param stop the stop - * @return List<V> array-reply list of elements in the specified range. - */ - Executions> zrange(K key, long start, long stop); - - /** - * Return a range of members with scores in a sorted set, by index. 
- * - * @param key the key - * @param start the start - * @param stop the stop - * @return List<V> array-reply list of elements in the specified range. - */ - Executions>> zrangeWithScores(K key, long start, long stop); - - /** - * Return a range of members in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @return List<V> array-reply list of elements in the specified score range. - */ - Executions> zrangebyscore(K key, double min, double max); - - /** - * Return a range of members in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @return List<V> array-reply list of elements in the specified score range. - */ - Executions> zrangebyscore(K key, String min, String max); - - /** - * Return a range of members in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return List<V> array-reply list of elements in the specified score range. - */ - Executions> zrangebyscore(K key, double min, double max, long offset, long count); - - /** - * Return a range of members in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return List<V> array-reply list of elements in the specified score range. - */ - Executions> zrangebyscore(K key, String min, String max, long offset, long count); - - /** - * Return a range of members with score in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. - */ - Executions>> zrangebyscoreWithScores(K key, double min, double max); - - /** - * Return a range of members with score in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. - */ - Executions>> zrangebyscoreWithScores(K key, String min, String max); - - /** - * Return a range of members with score in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. - */ - Executions>> zrangebyscoreWithScores(K key, double min, double max, long offset, long count); - - /** - * Return a range of members with score in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. - */ - Executions>> zrangebyscoreWithScores(K key, String min, String max, long offset, long count); - - /** - * Return a range of members in a sorted set, by index. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param start the start - * @param stop the stop - * @return Long count of elements in the specified range. - */ - Executions zrange(ValueStreamingChannel channel, K key, long start, long stop); - - /** - * Stream over a range of members with scores in a sorted set, by index. 
- * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param start the start - * @param stop the stop - * @return Long count of elements in the specified range. - */ - Executions zrangeWithScores(ScoredValueStreamingChannel channel, K key, long start, long stop); - - /** - * Stream over a range of members in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified score range. - */ - Executions zrangebyscore(ValueStreamingChannel channel, K key, double min, double max); - - /** - * Stream over a range of members in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified score range. - */ - Executions zrangebyscore(ValueStreamingChannel channel, K key, String min, String max); - - /** - * Stream over range of members in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified score range. - */ - Executions zrangebyscore(ValueStreamingChannel channel, K key, double min, double max, long offset, long count); - - /** - * Stream over a range of members in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified score range. - */ - Executions zrangebyscore(ValueStreamingChannel channel, K key, String min, String max, long offset, long count); - - /** - * Stream over a range of members with scores in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified score range. - */ - Executions zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double min, double max); - - /** - * Stream over a range of members with scores in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified score range. - */ - Executions zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String min, String max); - - /** - * Stream over a range of members with scores in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified score range. - */ - Executions zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double min, double max, long offset, long count); - - /** - * Stream over a range of members with scores in a sorted set, by score. 
- * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified score range. - */ - Executions zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String min, String max, long offset, long count); - - /** - * Determine the index of a member in a sorted set. - * - * @param key the key - * @param member the member type: value - * @return Long integer-reply the rank of {@code member}. If {@code member} does not exist in the sorted set or {@code key} - * does not exist, - */ - Executions zrank(K key, V member); - - /** - * Remove one or more members from a sorted set. - * - * @param key the key - * @param members the member type: value - * @return Long integer-reply specifically: - * - * The number of members removed from the sorted set, not including non existing members. - */ - Executions zrem(K key, V... members); - - /** - * Remove all members in a sorted set within the given indexes. - * - * @param key the key - * @param start the start type: long - * @param stop the stop type: long - * @return Long integer-reply the number of elements removed. - */ - Executions zremrangebyrank(K key, long start, long stop); - - /** - * Remove all members in a sorted set within the given scores. - * - * @param key the key - * @param min min score - * @param max max score - * @return Long integer-reply the number of elements removed. - */ - Executions zremrangebyscore(K key, double min, double max); - - /** - * Remove all members in a sorted set within the given scores. - * - * @param key the key - * @param min min score - * @param max max score - * @return Long integer-reply the number of elements removed. - */ - Executions zremrangebyscore(K key, String min, String max); - - /** - * Return a range of members in a sorted set, by index, with scores ordered from high to low. - * - * @param key the key - * @param start the start - * @param stop the stop - * @return List<V> array-reply list of elements in the specified range. - */ - Executions> zrevrange(K key, long start, long stop); - - /** - * Return a range of members with scores in a sorted set, by index, with scores ordered from high to low. - * - * @param key the key - * @param start the start - * @param stop the stop - * @return List<V> array-reply list of elements in the specified range. - */ - Executions>> zrevrangeWithScores(K key, long start, long stop); - - /** - * Return a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param min min score - * @param max max score - * @return List<V> array-reply list of elements in the specified score range. - */ - Executions> zrevrangebyscore(K key, double max, double min); - - /** - * Return a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param min min score - * @param max max score - * @return List<V> array-reply list of elements in the specified score range. - */ - Executions> zrevrangebyscore(K key, String max, String min); - - /** - * Return a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param max max score - * @param min min score - * @param offset the withscores - * @param count the null - * @return List<V> array-reply list of elements in the specified score range. 
- */ - Executions> zrevrangebyscore(K key, double max, double min, long offset, long count); - - /** - * Return a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param max max score - * @param min min score - * @param offset the offset - * @param count the count - * @return List<V> array-reply list of elements in the specified score range. - */ - Executions> zrevrangebyscore(K key, String max, String min, long offset, long count); - - /** - * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param max max score - * @param min min score - * @return List<V> array-reply list of elements in the specified score range. - */ - Executions>> zrevrangebyscoreWithScores(K key, double max, double min); - - /** - * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param max max score - * @param min min score - * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. - */ - Executions>> zrevrangebyscoreWithScores(K key, String max, String min); - - /** - * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param max max score - * @param min min score - * @param offset the offset - * @param count the count - * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. - */ - Executions>> zrevrangebyscoreWithScores(K key, double max, double min, long offset, long count); - - /** - * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param max max score - * @param min min score - * @param offset the offset - * @param count the count - * @return List<V> array-reply list of elements in the specified score range. - */ - Executions>> zrevrangebyscoreWithScores(K key, String max, String min, long offset, long count); - - /** - * Stream over a range of members in a sorted set, by index, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param start the start - * @param stop the stop - * @return Long count of elements in the specified range. - */ - Executions zrevrange(ValueStreamingChannel channel, K key, long start, long stop); - - /** - * Stream over a range of members with scores in a sorted set, by index, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param start the start - * @param stop the stop - * @return Long count of elements in the specified range. - */ - Executions zrevrangeWithScores(ScoredValueStreamingChannel channel, K key, long start, long stop); - - /** - * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param max max score - * @param min min score - * @return Long count of elements in the specified range. - */ - Executions zrevrangebyscore(ValueStreamingChannel channel, K key, double max, double min); - - /** - * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. 
- * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified range. - */ - Executions zrevrangebyscore(ValueStreamingChannel channel, K key, String max, String min); - - /** - * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified range. - */ - Executions zrevrangebyscore(ValueStreamingChannel channel, K key, double max, double min, long offset, long count); - - /** - * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified range. - */ - Executions zrevrangebyscore(ValueStreamingChannel channel, K key, String max, String min, long offset, long count); - - /** - * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified range. - */ - Executions zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double max, double min); - - /** - * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified range. - */ - Executions zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String max, String min); - - /** - * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified range. - */ - Executions zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double max, double min, long offset, long count); - - /** - * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified range. - */ - Executions zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String max, String min, long offset, long count); - - /** - * Determine the index of a member in a sorted set, with scores ordered from high to low. - * - * @param key the key - * @param member the member type: value - * @return Long integer-reply the rank of {@code member}. 
If {@code member} does not exist in the sorted set or {@code key} - * does not exist, - */ - Executions zrevrank(K key, V member); - - /** - * Get the score associated with the given member in a sorted set. - * - * @param key the key - * @param member the member type: value - * @return Double bulk-string-reply the score of {@code member} (a double precision floating point number), represented as - * string. - */ - Executions zscore(K key, V member); - - /** - * Add multiple sorted sets and store the resulting sorted set in a new key. - * - * @param destination destination key - * @param keys source keys - * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. - */ - Executions zunionstore(K destination, K... keys); - - /** - * Add multiple sorted sets and store the resulting sorted set in a new key. - * - * @param destination the destination - * @param storeArgs the storeArgs - * @param keys the keys - * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. - */ - Executions zunionstore(K destination, ZStoreArgs storeArgs, K... keys); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param key the key - * @return ScoredValueScanCursor<V> scan cursor. - */ - Executions> zscan(K key); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param key the key - * @param scanArgs scan arguments - * @return ScoredValueScanCursor<V> scan cursor. - */ - Executions> zscan(K key, ScanArgs scanArgs); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @param scanArgs scan arguments - * @return ScoredValueScanCursor<V> scan cursor. - */ - Executions> zscan(K key, ScanCursor scanCursor, ScanArgs scanArgs); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @return ScoredValueScanCursor<V> scan cursor. - */ - Executions> zscan(K key, ScanCursor scanCursor); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @return StreamScanCursor scan cursor. - */ - Executions zscan(ScoredValueStreamingChannel channel, K key); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param scanArgs scan arguments - * @return StreamScanCursor scan cursor. - */ - Executions zscan(ScoredValueStreamingChannel channel, K key, ScanArgs scanArgs); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @param scanArgs scan arguments - * @return StreamScanCursor scan cursor. - */ - Executions zscan(ScoredValueStreamingChannel channel, K key, ScanCursor scanCursor, ScanArgs scanArgs); - - /** - * Incrementally iterate sorted sets elements and associated scores. 
- * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @return StreamScanCursor scan cursor. - */ - Executions zscan(ScoredValueStreamingChannel channel, K key, ScanCursor scanCursor); - - /** - * Count the number of members in a sorted set between a given lexicographical range. - * - * @param key the key - * @param min min score - * @param max max score - * @return Long integer-reply the number of elements in the specified score range. - */ - Executions zlexcount(K key, String min, String max); - - /** - * Remove all members in a sorted set between the given lexicographical range. - * - * @param key the key - * @param min min score - * @param max max score - * @return Long integer-reply the number of elements removed. - */ - Executions zremrangebylex(K key, String min, String max); - - /** - * Return a range of members in a sorted set, by lexicographical range. - * - * @param key the key - * @param min min score - * @param max max score - * @return List<V> array-reply list of elements in the specified score range. - */ - Executions> zrangebylex(K key, String min, String max); - - /** - * Return a range of members in a sorted set, by lexicographical range. - * - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return List<V> array-reply list of elements in the specified score range. - */ - Executions> zrangebylex(K key, String min, String max, long offset, long count); -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/api/sync/RedisClusterCommands.java b/src/main/java/com/lambdaworks/redis/cluster/api/sync/RedisClusterCommands.java deleted file mode 100644 index 611c9e1a1a..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/api/sync/RedisClusterCommands.java +++ /dev/null @@ -1,282 +0,0 @@ -package com.lambdaworks.redis.cluster.api.sync; - -import java.util.List; -import java.util.concurrent.TimeUnit; - -import com.lambdaworks.redis.RedisClusterConnection; -import com.lambdaworks.redis.api.sync.*; - -/** - * A complete synchronous and thread-safe Redis Cluster API with 400+ Methods. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - */ -public interface RedisClusterCommands extends RedisHashCommands, RedisKeyCommands, RedisStringCommands, - RedisListCommands, RedisSetCommands, RedisSortedSetCommands, RedisScriptingCommands, - RedisServerCommands, RedisHLLCommands, RedisGeoCommands, BaseRedisCommands, AutoCloseable, - RedisClusterConnection { - - /** - * Set the default timeout for operations. - * - * @param timeout the timeout value - * @param unit the unit of the timeout value - */ - void setTimeout(long timeout, TimeUnit unit); - - /** - * Authenticate to the server. - * - * @param password the password - * @return String simple-string-reply - */ - String auth(String password); - - /** - * Generate a new config epoch, incrementing the current epoch, assign the new epoch to this node, WITHOUT any consensus and - * persist the configuration on disk before sending packets with the new configuration. - * - * @return String simple-string-reply If the new config epoch is generated and assigned either BUMPED (epoch) or STILL - * (epoch) are returned. - */ - String clusterBumpepoch(); - - /** - * Meet another cluster node to include the node into the cluster. 
The command starts the cluster handshake and returns with - * {@literal OK} when the node was added to the cluster. - * - * @param ip IP address of the host - * @param port port number. - * @return String simple-string-reply - */ - String clusterMeet(String ip, int port); - - /** - * Blacklist and remove the cluster node from the cluster. - * - * @param nodeId the node Id - * @return String simple-string-reply - */ - String clusterForget(String nodeId); - - /** - * Adds slots to the cluster node. The current node will become the master for the specified slots. - * - * @param slots one or more slots from {@literal 0} to {@literal 16384} - * @return String simple-string-reply - */ - String clusterAddSlots(int... slots); - - /** - * Removes slots from the cluster node. - * - * @param slots one or more slots from {@literal 0} to {@literal 16384} - * @return String simple-string-reply - */ - String clusterDelSlots(int... slots); - - /** - * Assign a slot to a node. The command migrates the specified slot from the current node to the specified node in - * {@code nodeId} - * - * @param slot the slot - * @param nodeId the id of the node that will become the master for the slot - * @return String simple-string-reply - */ - String clusterSetSlotNode(int slot, String nodeId); - - /** - * Clears migrating / importing state from the slot. - * - * @param slot the slot - * @return String simple-string-reply - */ - String clusterSetSlotStable(int slot); - - /** - * Flag a slot as {@literal MIGRATING} (outgoing) towards the node specified in {@code nodeId}. The slot must be handled by - * the current node in order to be migrated. - * - * @param slot the slot - * @param nodeId the id of the node is targeted to become the master for the slot - * @return String simple-string-reply - */ - String clusterSetSlotMigrating(int slot, String nodeId); - - /** - * Flag a slot as {@literal IMPORTING} (incoming) from the node specified in {@code nodeId}. - * - * @param slot the slot - * @param nodeId the id of the node is the master of the slot - * @return String simple-string-reply - */ - String clusterSetSlotImporting(int slot, String nodeId); - - /** - * Get information and statistics about the cluster viewed by the current node. - * - * @return String bulk-string-reply as a collection of text lines. - */ - String clusterInfo(); - - /** - * Obtain the nodeId for the currently connected node. - * - * @return String simple-string-reply - */ - String clusterMyId(); - - /** - * Obtain details about all cluster nodes. Can be parsed using - * {@link com.lambdaworks.redis.cluster.models.partitions.ClusterPartitionParser#parse} - * - * @return String bulk-string-reply as a collection of text lines - */ - String clusterNodes(); - - /** - * List slaves for a certain node identified by its {@code nodeId}. Can be parsed using - * {@link com.lambdaworks.redis.cluster.models.partitions.ClusterPartitionParser#parse} - * - * @param nodeId node id of the master node - * @return List<String> array-reply list of slaves. The command returns data in the same format as - * {@link #clusterNodes()} but one line per slave. - */ - List clusterSlaves(String nodeId); - - /** - * Retrieve the list of keys within the {@code slot}. - * - * @param slot the slot - * @param count maximal number of keys - * @return List<K> array-reply list of keys - */ - List clusterGetKeysInSlot(int slot, int count); - - /** - * Returns the number of keys in the specified Redis Cluster hash {@code slot}. 
- * - * @param slot the slot - * @return Integer reply: The number of keys in the specified hash slot, or an error if the hash slot is invalid. - */ - Long clusterCountKeysInSlot(int slot); - - /** - * Returns the number of failure reports for the specified node. Failure reports are the way Redis Cluster uses in order to - * promote a {@literal PFAIL} state, that means a node is not reachable, to a {@literal FAIL} state, that means that the - * majority of masters in the cluster agreed within a window of time that the node is not reachable. - * - * @param nodeId the node id - * @return Integer reply: The number of active failure reports for the node. - */ - Long clusterCountFailureReports(String nodeId); - - /** - * Returns an integer identifying the hash slot the specified key hashes to. This command is mainly useful for debugging and - * testing, since it exposes via an API the underlying Redis implementation of the hashing algorithm. Basically the same as - * {@link com.lambdaworks.redis.cluster.SlotHash#getSlot(byte[])}. If not, call Houston and report that we've got a problem. - * - * @param key the key. - * @return Integer reply: The hash slot number. - */ - Long clusterKeyslot(K key); - - /** - * Forces a node to save the nodes.conf configuration on disk. - * - * @return String simple-string-reply: {@code OK} or an error if the operation fails. - */ - String clusterSaveconfig(); - - /** - * This command sets a specific config epoch in a fresh node. It only works when: - *
    - * <ul>
    - *   <li>The nodes table of the node is empty.</li>
    - *   <li>The node current config epoch is zero.</li>
    - * </ul>
- * - * @param configEpoch the config epoch - * @return String simple-string-reply: {@code OK} or an error if the operation fails. - */ - String clusterSetConfigEpoch(long configEpoch); - - /** - * Get array of cluster slots to node mappings. - * - * @return List<Object> array-reply nested list of slot ranges with IP/Port mappings. - */ - List clusterSlots(); - - /** - * The asking command is required after a {@code -ASK} redirection. The client should issue {@code ASKING} before to - * actually send the command to the target instance. See the Redis Cluster specification for more information. - * - * @return String simple-string-reply - */ - String asking(); - - /** - * Turn this node into a slave of the node with the id {@code nodeId}. - * - * @param nodeId master node id - * @return String simple-string-reply - */ - String clusterReplicate(String nodeId); - - /** - * Failover a cluster node. Turns the currently connected node into a master and the master into its slave. - * - * @param force do not coordinate with master if {@literal true} - * @return String simple-string-reply - */ - String clusterFailover(boolean force); - - /** - * Reset a node performing a soft or hard reset: - *
    - * <ul>
    - *   <li>All other nodes are forgotten</li>
    - *   <li>All the assigned / open slots are released</li>
    - *   <li>If the node is a slave, it turns into a master</li>
    - *   <li>Only for hard reset: a new Node ID is generated</li>
    - *   <li>Only for hard reset: currentEpoch and configEpoch are set to 0</li>
    - *   <li>The new configuration is saved and the cluster state updated</li>
    - *   <li>If the node was a slave, the whole data set is flushed away</li>
    - * </ul>
- * - * @param hard {@literal true} for hard reset. Generates a new nodeId and currentEpoch/configEpoch are set to 0 - * @return String simple-string-reply - */ - String clusterReset(boolean hard); - - /** - * Delete all the slots associated with the specified node. The number of deleted slots is returned. - * - * @return String simple-string-reply - */ - String clusterFlushslots(); - - /** - * Tells a Redis cluster slave node that the client is ok reading possibly stale data and is not interested in running write - * queries. - * - * @return String simple-string-reply - */ - String readOnly(); - - /** - * Resets readOnly flag. - * - * @return String simple-string-reply - */ - String readWrite(); - - /** - * Close the connection. The connection will become not usable anymore as soon as this method was called. - */ - @Override - void close(); - -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/api/sync/package-info.java b/src/main/java/com/lambdaworks/redis/cluster/api/sync/package-info.java deleted file mode 100644 index 2621d24a0b..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/api/sync/package-info.java +++ /dev/null @@ -1,4 +0,0 @@ -/** - * Redis Cluster API for synchronous executed commands. - */ -package com.lambdaworks.redis.cluster.api.sync; \ No newline at end of file diff --git a/src/main/java/com/lambdaworks/redis/cluster/event/ClusterTopologyChangedEvent.java b/src/main/java/com/lambdaworks/redis/cluster/event/ClusterTopologyChangedEvent.java deleted file mode 100644 index b735673135..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/event/ClusterTopologyChangedEvent.java +++ /dev/null @@ -1,57 +0,0 @@ -package com.lambdaworks.redis.cluster.event; - -import java.util.Collections; -import java.util.List; - -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; -import com.lambdaworks.redis.event.Event; - -/** - * Signals a discovered cluster topology change. The event carries the view {@link #before()} and {@link #after} the change. - * - * @author Mark Paluch - * @since 3.4 - */ -public class ClusterTopologyChangedEvent implements Event { - private final List before; - private final List after; - - /** - * Creates a new {@link ClusterTopologyChangedEvent}. - * - * @param before the cluster topology view before the topology changed, must not be {@literal null} - * @param after the cluster topology view after the topology changed, must not be {@literal null} - */ - public ClusterTopologyChangedEvent(List before, List after) { - this.before = Collections.unmodifiableList(before); - this.after = Collections.unmodifiableList(after); - } - - /** - * Returns the cluster topology view before the topology changed. - * - * @return the cluster topology view before the topology changed. - */ - public List before() { - return before; - } - - /** - * Returns the cluster topology view after the topology changed. - * - * @return the cluster topology view after the topology changed. 
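Since ClusterTopologyChangedEvent only transports the two topology snapshots, a consumer typically has to diff them itself. A minimal, hypothetical sketch follows; it uses only before() (declared above), after() (declared just below), and RedisClusterNode.getNodeId(), and assumes both accessors return lists of RedisClusterNode as the constructor Javadoc indicates.

    import java.util.List;
    import java.util.Set;
    import java.util.stream.Collectors;

    import com.lambdaworks.redis.cluster.event.ClusterTopologyChangedEvent;
    import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode;

    public final class TopologyDiffSketch {

        // Nodes that appear in the new topology view but were absent from the old one.
        public static List<RedisClusterNode> addedNodes(ClusterTopologyChangedEvent event) {
            Set<String> previousIds = event.before().stream()
                    .map(RedisClusterNode::getNodeId)
                    .collect(Collectors.toSet());

            return event.after().stream()
                    .filter(node -> !previousIds.contains(node.getNodeId()))
                    .collect(Collectors.toList());
        }
    }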
- */ - public List after() { - return after; - } - - @Override - public String toString() { - final StringBuffer sb = new StringBuffer(); - sb.append(getClass().getSimpleName()); - sb.append(" [before=").append(before.size()); - sb.append(", after=").append(after.size()); - sb.append(']'); - return sb.toString(); - } -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/event/package-info.java b/src/main/java/com/lambdaworks/redis/cluster/event/package-info.java deleted file mode 100644 index 75bb219e15..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/event/package-info.java +++ /dev/null @@ -1,5 +0,0 @@ -/** - * Cluster event types. - */ -package com.lambdaworks.redis.cluster.event; - diff --git a/src/main/java/com/lambdaworks/redis/cluster/models/partitions/ClusterPartitionParser.java b/src/main/java/com/lambdaworks/redis/cluster/models/partitions/ClusterPartitionParser.java deleted file mode 100644 index 68ad37decf..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/models/partitions/ClusterPartitionParser.java +++ /dev/null @@ -1,170 +0,0 @@ -package com.lambdaworks.redis.cluster.models.partitions; - -import java.util.*; -import java.util.regex.Pattern; -import java.util.stream.Collectors; - -import com.lambdaworks.redis.LettuceStrings; -import com.lambdaworks.redis.RedisException; -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.internal.HostAndPort; -import com.lambdaworks.redis.internal.LettuceLists; - -/** - * Parser for node information output of {@code CLUSTER NODES} and {@code CLUSTER SLAVES}. - * - * @author Mark Paluch - * @since 3.0 - */ -public class ClusterPartitionParser { - - public static final String CONNECTED = "connected"; - - private static final String TOKEN_SLOT_IN_TRANSITION = "["; - private static final char TOKEN_NODE_SEPARATOR = '\n'; - private static final Pattern TOKEN_PATTERN = Pattern.compile(Character.toString(TOKEN_NODE_SEPARATOR)); - private static final Pattern SPACE_PATTERN = Pattern.compile(" "); - private static final Pattern DASH_PATTERN = Pattern.compile("\\-"); - private static final Map FLAG_MAPPING; - - static { - Map map = new HashMap<>(); - - map.put("noflags", RedisClusterNode.NodeFlag.NOFLAGS); - map.put("myself", RedisClusterNode.NodeFlag.MYSELF); - map.put("master", RedisClusterNode.NodeFlag.MASTER); - map.put("slave", RedisClusterNode.NodeFlag.SLAVE); - map.put("fail?", RedisClusterNode.NodeFlag.EVENTUAL_FAIL); - map.put("fail", RedisClusterNode.NodeFlag.FAIL); - map.put("handshake", RedisClusterNode.NodeFlag.HANDSHAKE); - map.put("noaddr", RedisClusterNode.NodeFlag.NOADDR); - FLAG_MAPPING = Collections.unmodifiableMap(map); - } - - /** - * Utility constructor. - */ - private ClusterPartitionParser() { - - } - - /** - * Parse partition lines into Partitions object. - * - * @param nodes output of CLUSTER NODES - * @return the partitions object. 
- */ - public static Partitions parse(String nodes) { - Partitions result = new Partitions(); - - try { - List mappedNodes = TOKEN_PATTERN.splitAsStream(nodes).filter(s -> !s.isEmpty()) - .map(ClusterPartitionParser::parseNode) - .collect(Collectors.toList()); - result.addAll(mappedNodes); - } catch (Exception e) { - throw new RedisException("Cannot parse " + nodes, e); - } - - return result; - } - - private static RedisClusterNode parseNode(String nodeInformation) { - - Iterator iterator = SPACE_PATTERN.splitAsStream(nodeInformation).iterator(); - - String nodeId = iterator.next(); - boolean connected = false; - RedisURI uri = null; - - String hostAndPortPart = iterator.next(); - if(hostAndPortPart.contains("@")) { - hostAndPortPart = hostAndPortPart.substring(0, hostAndPortPart.indexOf('@')); - } - - HostAndPort hostAndPort = HostAndPort.parseCompat(hostAndPortPart); - - if (LettuceStrings.isNotEmpty(hostAndPort.getHostText())) { - uri = RedisURI.Builder.redis(hostAndPort.getHostText(), hostAndPort.getPort()).build(); - } - - String flags = iterator.next(); - List flagStrings = LettuceLists.newList(flags.split("\\,")); - - Set nodeFlags = readFlags(flagStrings); - - String slaveOfString = iterator.next(); // (nodeId or -) - String slaveOf = "-".equals(slaveOfString) ? null : slaveOfString; - - long pingSentTs = getLongFromIterator(iterator, 0); - long pongReceivedTs = getLongFromIterator(iterator, 0); - long configEpoch = getLongFromIterator(iterator, 0); - - String connectedFlags = iterator.next(); // "connected" : "disconnected" - - if (CONNECTED.equals(connectedFlags)) { - connected = true; - } - - List slotStrings = LettuceLists.newList(iterator); // slot, from-to [slot->-nodeID] [slot-<-nodeID] - List slots = readSlots(slotStrings); - - RedisClusterNode partition = new RedisClusterNode(uri, nodeId, connected, slaveOf, pingSentTs, pongReceivedTs, - configEpoch, slots, nodeFlags); - - return partition; - - } - - private static Set readFlags(List flagStrings) { - - Set flags = new HashSet<>(); - for (String flagString : flagStrings) { - if (FLAG_MAPPING.containsKey(flagString)) { - flags.add(FLAG_MAPPING.get(flagString)); - } - } - return Collections.unmodifiableSet(flags); - } - - private static List readSlots(List slotStrings) { - - List slots = new ArrayList<>(); - for (String slotString : slotStrings) { - - if (slotString.startsWith(TOKEN_SLOT_IN_TRANSITION)) { - // not interesting - continue; - - } - - if (slotString.contains("-")) { - // slot range - Iterator it = DASH_PATTERN.splitAsStream(slotString).iterator(); - int from = Integer.parseInt(it.next()); - int to = Integer.parseInt(it.next()); - - for (int slot = from; slot <= to; slot++) { - slots.add(slot); - - } - continue; - } - - slots.add(Integer.parseInt(slotString)); - } - - return Collections.unmodifiableList(slots); - } - - private static long getLongFromIterator(Iterator iterator, long defaultValue) { - if (iterator.hasNext()) { - Object object = iterator.next(); - if (object instanceof String) { - return Long.parseLong((String) object); - } - } - return defaultValue; - } - -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/models/partitions/RedisClusterNode.java b/src/main/java/com/lambdaworks/redis/cluster/models/partitions/RedisClusterNode.java deleted file mode 100644 index c3991977fe..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/models/partitions/RedisClusterNode.java +++ /dev/null @@ -1,280 +0,0 @@ -package com.lambdaworks.redis.cluster.models.partitions; - -import java.io.Serializable; 
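For orientation, the parser above consumes the line-oriented CLUSTER NODES output (one node per line: node id, address, flags, master id or "-", ping/pong timestamps, config epoch, link state, and the owned slots). The following illustrative sketch uses made-up node ids and addresses in that format, and assumes Partitions is iterable over RedisClusterNode, as its use in parse() suggests.

    import com.lambdaworks.redis.cluster.models.partitions.ClusterPartitionParser;
    import com.lambdaworks.redis.cluster.models.partitions.Partitions;
    import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode;

    public final class ClusterNodesParseSketch {

        public static void main(String[] args) {
            // Two fictitious nodes: a master owning slots 0-5460 and its replica.
            String nodes =
                  "a1b2c3d4e5f6a7b8c9d0a1b2c3d4e5f6a7b8c9d0 127.0.0.1:7000 myself,master - 0 0 1 connected 0-5460\n"
                + "b2c3d4e5f6a7b8c9d0a1b2c3d4e5f6a7b8c9d0a1 127.0.0.1:7001 slave a1b2c3d4e5f6a7b8c9d0a1b2c3d4e5f6a7b8c9d0 0 1426238317239 1 connected\n";

            Partitions partitions = ClusterPartitionParser.parse(nodes);

            // Print the parsed view of each node: id, connection point, role and slot count.
            for (RedisClusterNode node : partitions) {
                System.out.printf("%s %s role=%s slots=%d%n",
                        node.getNodeId(), node.getUri(), node.getRole(), node.getSlots().size());
            }
        }
    }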
-import java.util.ArrayList; -import java.util.List; -import java.util.Set; - -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.internal.LettuceAssert; -import com.lambdaworks.redis.internal.LettuceSets; -import com.lambdaworks.redis.models.role.RedisNodeDescription; - -/** - * Representation of a Redis Cluster node. A {@link RedisClusterNode} is identified by its {@code nodeId}. A - * {@link RedisClusterNode} can be a {@link #getRole() responsible master} for zero to - * {@link com.lambdaworks.redis.cluster.SlotHash#SLOT_COUNT 16384} slots, a slave of one {@link #getSlaveOf() master} of carry - * different {@link com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode.NodeFlag flags}. - * - * @author Mark Paluch - * @since 3.0 - */ -@SuppressWarnings("serial") -public class RedisClusterNode implements Serializable, RedisNodeDescription { - private RedisURI uri; - private String nodeId; - - private boolean connected; - private String slaveOf; - private long pingSentTimestamp; - private long pongReceivedTimestamp; - private long configEpoch; - - private List slots; - private Set flags; - - public RedisClusterNode() { - - } - - public RedisClusterNode(RedisURI uri, String nodeId, boolean connected, String slaveOf, long pingSentTimestamp, - long pongReceivedTimestamp, long configEpoch, List slots, Set flags) { - this.uri = uri; - this.nodeId = nodeId; - this.connected = connected; - this.slaveOf = slaveOf; - this.pingSentTimestamp = pingSentTimestamp; - this.pongReceivedTimestamp = pongReceivedTimestamp; - this.configEpoch = configEpoch; - this.slots = slots; - this.flags = flags; - } - - public RedisClusterNode(RedisClusterNode redisClusterNode) { - this.uri = redisClusterNode.uri; - this.nodeId = redisClusterNode.nodeId; - this.connected = redisClusterNode.connected; - this.slaveOf = redisClusterNode.slaveOf; - this.pingSentTimestamp = redisClusterNode.pingSentTimestamp; - this.pongReceivedTimestamp = redisClusterNode.pongReceivedTimestamp; - this.configEpoch = redisClusterNode.configEpoch; - this.slots = new ArrayList<>(redisClusterNode.slots); - this.flags = LettuceSets.newHashSet(redisClusterNode.flags); - } - - /** - * Create a new instance of {@link RedisClusterNode} by passing the {@code nodeId} - * - * @param nodeId the nodeId - * @return a new instance of {@link RedisClusterNode} - */ - public static RedisClusterNode of(String nodeId) { - RedisClusterNode redisClusterNode = new RedisClusterNode(); - redisClusterNode.setNodeId(nodeId); - return redisClusterNode; - } - - public RedisURI getUri() { - return uri; - } - - /** - * Sets thhe connection point details. Usually the host/ip/port where a particular Redis Cluster node server is running. - * - * @param uri the {@link RedisURI}, must not be {@literal null} - */ - public void setUri(RedisURI uri) { - LettuceAssert.notNull(uri, "RedisURI must not be null"); - this.uri = uri; - } - - public String getNodeId() { - return nodeId; - } - - /** - * Sets {@code nodeId}. - * - * @param nodeId the {@code nodeId} - */ - public void setNodeId(String nodeId) { - LettuceAssert.notNull(nodeId, "NodeId must not be null"); - this.nodeId = nodeId; - } - - public boolean isConnected() { - return connected; - } - - /** - * Sets the {@code connected} flag. The {@code connected} flag describes whether the node which provided details about the - * node is connected to the particular {@link RedisClusterNode}. 
- * - * @param connected the {@code connected} flag - */ - public void setConnected(boolean connected) { - this.connected = connected; - } - - public String getSlaveOf() { - return slaveOf; - } - - /** - * Sets the replication source. - * - * @param slaveOf the replication source, can be {@literal null} - */ - public void setSlaveOf(String slaveOf) { - this.slaveOf = slaveOf; - } - - public long getPingSentTimestamp() { - return pingSentTimestamp; - } - - /** - * Sets the last {@code pingSentTimestamp}. - * - * @param pingSentTimestamp the last {@code pingSentTimestamp} - */ - public void setPingSentTimestamp(long pingSentTimestamp) { - this.pingSentTimestamp = pingSentTimestamp; - } - - public long getPongReceivedTimestamp() { - return pongReceivedTimestamp; - } - - /** - * Sets the last {@code pongReceivedTimestamp}. - * - * @param pongReceivedTimestamp the last {@code pongReceivedTimestamp} - */ - public void setPongReceivedTimestamp(long pongReceivedTimestamp) { - this.pongReceivedTimestamp = pongReceivedTimestamp; - } - - public long getConfigEpoch() { - return configEpoch; - } - - /** - * Sets the {@code configEpoch}. - * - * @param configEpoch the {@code configEpoch} - */ - public void setConfigEpoch(long configEpoch) { - this.configEpoch = configEpoch; - } - - public List getSlots() { - return slots; - } - - /** - * Sets the list of slots for which this {@link RedisClusterNode} is the - * {@link com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode.NodeFlag#MASTER}. The list is empty if this node - * is not a master or the node is not responsible for any slots at all. - * - * @param slots list of slots, must not be {@literal null} but may be empty - */ - public void setSlots(List slots) { - LettuceAssert.notNull(slots, "Slots must not be null"); - - this.slots = slots; - } - - public Set getFlags() { - return flags; - } - - /** - * Set of {@link com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode.NodeFlag node flags}. - * - * @param flags the set of node flags. - */ - public void setFlags(Set flags) { - this.flags = flags; - } - - @Override - public boolean equals(Object o) { - if (this == o) { - return true; - } - if (!(o instanceof RedisClusterNode)) { - return false; - } - - RedisClusterNode that = (RedisClusterNode) o; - - if (nodeId != null ? !nodeId.equals(that.nodeId) : that.nodeId != null) { - return false; - } - - return true; - } - - @Override - public int hashCode() { - int result = 31 * (nodeId != null ? nodeId.hashCode() : 0); - return result; - } - - @Override - public String toString() { - final StringBuilder sb = new StringBuilder(); - sb.append(getClass().getSimpleName()); - sb.append(" [uri=").append(uri); - sb.append(", nodeId='").append(nodeId).append('\''); - sb.append(", connected=").append(connected); - sb.append(", slaveOf='").append(slaveOf).append('\''); - sb.append(", pingSentTimestamp=").append(pingSentTimestamp); - sb.append(", pongReceivedTimestamp=").append(pongReceivedTimestamp); - sb.append(", configEpoch=").append(configEpoch); - sb.append(", flags=").append(flags); - if (slots != null) { - sb.append(", slot count=").append(slots.size()); - } - sb.append(']'); - return sb.toString(); - } - - /** - * - * @param nodeFlag the node flag - * @return true if the {@linkplain NodeFlag} is contained within the flags. - */ - public boolean is(NodeFlag nodeFlag) { - return getFlags().contains(nodeFlag); - } - - /** - * - * @param slot the slot hash - * @return true if the slot is contained within the handled slots. 
- */ - public boolean hasSlot(int slot) { - return getSlots().contains(slot); - } - - /** - * Returns the {@link com.lambdaworks.redis.models.role.RedisInstance.Role} of the Redis Cluster node based on the - * {@link #getFlags() flags}. - * - * @return the Redis Cluster node role - */ - @Override - public Role getRole() { - return is(NodeFlag.MASTER) ? Role.MASTER : Role.SLAVE; - } - - /** - * Redis Cluster node flags. - */ - public enum NodeFlag { - NOFLAGS, MYSELF, SLAVE, MASTER, EVENTUAL_FAIL, FAIL, HANDSHAKE, NOADDR; - } - -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/models/partitions/package-info.java b/src/main/java/com/lambdaworks/redis/cluster/models/partitions/package-info.java deleted file mode 100644 index 12cc10b0b2..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/models/partitions/package-info.java +++ /dev/null @@ -1,4 +0,0 @@ -/** - * Model and parser for the {@code CLUSTER NODES} and {@code CLUSTER SLAVES} output. - */ -package com.lambdaworks.redis.cluster.models.partitions; \ No newline at end of file diff --git a/src/main/java/com/lambdaworks/redis/cluster/models/slots/ClusterSlotRange.java b/src/main/java/com/lambdaworks/redis/cluster/models/slots/ClusterSlotRange.java deleted file mode 100644 index 750287f6b7..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/models/slots/ClusterSlotRange.java +++ /dev/null @@ -1,185 +0,0 @@ -package com.lambdaworks.redis.cluster.models.slots; - -import java.io.Serializable; -import java.util.ArrayList; -import java.util.Collections; -import java.util.List; -import java.util.Set; - -import com.google.common.net.HostAndPort; -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; -import com.lambdaworks.redis.internal.LettuceAssert; - -/** - * Represents a range of slots together with its master and slaves. 
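The `RedisClusterNode` removed above exposes its topology role through `is(NodeFlag)`, `hasSlot(int)` and `getRole()`. A small hedged usage sketch; the node id and slot list are made up for illustration:

```java
import java.util.Arrays;
import java.util.Collections;

import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode;

class NodeFlagsSketch {

    public static void main(String[] args) {
        RedisClusterNode node = new RedisClusterNode();
        node.setNodeId("c37ab8396be428403d4e55c0d317348be27ed973"); // illustrative node id
        node.setFlags(Collections.singleton(RedisClusterNode.NodeFlag.MASTER));
        node.setSlots(Arrays.asList(0, 1, 2));

        System.out.println(node.is(RedisClusterNode.NodeFlag.MASTER)); // true
        System.out.println(node.hasSlot(2));                           // true
        System.out.println(node.getRole());                            // MASTER
    }
}
```

Note that equality and hash code are based on the `nodeId` alone, so two snapshots of the same node compare equal even when their slots or flags differ.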
- * - * @author Mark Paluch - * @since 3.0 - */ -@SuppressWarnings("serial") -public class ClusterSlotRange implements Serializable { - private int from; - private int to; - - @Deprecated - private HostAndPort master; - - private RedisClusterNode masterNode; - - @Deprecated - private List slaves = Collections.emptyList(); - - private List slaveNodes = Collections.emptyList(); - - public ClusterSlotRange() { - - } - - /** - * Constructs a {@link ClusterSlotRange} - * - * @param from from slot - * @param to to slot - * @param master master for the slots, may be {@literal null} - * @param slaves list of slaves must not be {@literal null} but may be empty - * @deprecated Use {@link #ClusterSlotRange(int, int, RedisClusterNode, List)} - */ - @Deprecated - public ClusterSlotRange(int from, int to, HostAndPort master, List slaves) { - - LettuceAssert.notNull(master, "Master must not be null"); - LettuceAssert.notNull(slaves, "Slaves must not be null"); - - this.from = from; - this.to = to; - this.masterNode = toRedisClusterNode(master, null, Collections.singleton(RedisClusterNode.NodeFlag.MASTER)); - this.slaveNodes = toRedisClusterNodes(slaves, null, Collections.singleton(RedisClusterNode.NodeFlag.SLAVE)); - this.master = master; - this.slaves = slaves; - } - - /** - * Constructs a {@link ClusterSlotRange} - * - * @param from from slot - * @param to to slot - * @param masterNode master for the slots, may be {@literal null} - * @param slaveNodes list of slaves must not be {@literal null} but may be empty - */ - public ClusterSlotRange(int from, int to, RedisClusterNode masterNode, List slaveNodes) { - - LettuceAssert.notNull(masterNode, "MasterNode must not be null"); - LettuceAssert.notNull(slaveNodes, "SlaveNodes must not be null"); - - this.from = from; - this.to = to; - this.master = toHostAndPort(masterNode); - this.slaves = toHostAndPorts(slaveNodes); - this.masterNode = masterNode; - this.slaveNodes = slaveNodes; - } - - private HostAndPort toHostAndPort(RedisClusterNode redisClusterNode) { - RedisURI uri = redisClusterNode.getUri(); - return HostAndPort.fromParts(uri.getHost(), uri.getPort()); - } - - private List toHostAndPorts(List nodes) { - List result = new ArrayList<>(); - for (RedisClusterNode node : nodes) { - result.add(toHostAndPort(node)); - } - return result; - } - - private RedisClusterNode toRedisClusterNode(HostAndPort hostAndPort, String slaveOf, Set flags) { - RedisClusterNode redisClusterNode = new RedisClusterNode(); - redisClusterNode.setUri(RedisURI - .create(hostAndPort.getHostText(), hostAndPort.getPortOrDefault(RedisURI.DEFAULT_REDIS_PORT))); - redisClusterNode.setSlaveOf(slaveOf); - redisClusterNode.setFlags(flags); - return redisClusterNode; - } - - private List toRedisClusterNodes(List hostAndPorts, String slaveOf, Set flags) { - List result = new ArrayList<>(); - for (HostAndPort hostAndPort : hostAndPorts) { - result.add(toRedisClusterNode(hostAndPort, slaveOf, flags)); - } - return result; - } - - public int getFrom() { - return from; - } - - public int getTo() { - return to; - } - - /** - * @deprecated Use {@link #getMasterNode()} to retrieve the {@code nodeId} and the {@code slaveOf} details. - * @return the master host and port - */ - @Deprecated - public HostAndPort getMaster() { - return master; - } - - /** - * @deprecated Use {@link #getSlaveNodes()} to retrieve the {@code nodeId} and the {@code slaveOf} details. 
- * @return the master host and port - */ - @Deprecated - public List getSlaves() { - return slaves; - } - - public RedisClusterNode getMasterNode() { - return masterNode; - } - - public void setMasterNode(RedisClusterNode masterNode) { - this.masterNode = masterNode; - } - - public List getSlaveNodes() { - return slaveNodes; - } - - public void setSlaveNodes(List slaveNodes) { - this.slaveNodes = slaveNodes; - } - - public void setFrom(int from) { - this.from = from; - } - - public void setTo(int to) { - this.to = to; - } - - public void setMaster(HostAndPort master) { - LettuceAssert.notNull(master, "Master must not be null"); - this.master = master; - } - - public void setSlaves(List slaves) { - - LettuceAssert.notNull(slaves, "Slaves must not be null"); - this.slaves = slaves; - } - - @Override - public String toString() { - final StringBuilder sb = new StringBuilder(); - sb.append(getClass().getSimpleName()); - sb.append(" [from=").append(from); - sb.append(", to=").append(to); - sb.append(", masterNode=").append(masterNode); - sb.append(", slaveNodes=").append(slaveNodes); - sb.append(']'); - return sb.toString(); - } -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/models/slots/ClusterSlotsParser.java b/src/main/java/com/lambdaworks/redis/cluster/models/slots/ClusterSlotsParser.java deleted file mode 100644 index 38b1743ff0..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/models/slots/ClusterSlotsParser.java +++ /dev/null @@ -1,154 +0,0 @@ -package com.lambdaworks.redis.cluster.models.slots; - -import java.util.*; - -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; - -/** - * Parser for redis CLUSTER SLOTS command output. - * - * @author Mark Paluch - * @since 3.0 - */ -public class ClusterSlotsParser { - - /** - * Utility constructor. 
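`ClusterSlotsParser.parse` (below) consumes the nested array reply of `CLUSTER SLOTS`: each element holds the range start, the range end, and one `[host, port, nodeId]` entry for the master followed by one per replica. A hedged sketch of that input shape; host, ports and node ids are illustrative:

```java
import java.util.Arrays;
import java.util.List;

import com.lambdaworks.redis.cluster.models.slots.ClusterSlotRange;
import com.lambdaworks.redis.cluster.models.slots.ClusterSlotsParser;

class ClusterSlotsReplySketch {

    public static void main(String[] args) {
        // Slots 0-5460 served by a master at 127.0.0.1:7000 with one replica at 127.0.0.1:7001.
        List<?> range = Arrays.asList(
                0L,
                5460L,
                Arrays.asList("127.0.0.1", 7000L, "e7d1eecce10fd6bb5eb35b9f99a514335d9ba9ca"),
                Arrays.asList("127.0.0.1", 7001L, "illustrative-replica-node-id"));

        List<ClusterSlotRange> ranges = ClusterSlotsParser.parse(Arrays.asList(range));
        System.out.println(ranges); // one ClusterSlotRange [from=0, to=5460, ...]
    }
}
```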
- */ - private ClusterSlotsParser() { - - } - - /** - * Parse the output of the redis CLUSTER SLOTS command and convert it to a list of - * {@link com.lambdaworks.redis.cluster.models.slots.ClusterSlotRange} - * - * @param clusterSlotsOutput output of CLUSTER SLOTS command - * @return List>ClusterSlotRange> - */ - public static List parse(List clusterSlotsOutput) { - List result = new ArrayList<>(); - Map nodeCache = new HashMap<>(); - - for (Object o : clusterSlotsOutput) { - - if (!(o instanceof List)) { - continue; - } - - List range = (List) o; - if (range.size() < 2) { - continue; - } - - ClusterSlotRange clusterSlotRange = parseRange(range, nodeCache); - result.add(clusterSlotRange); - } - - Collections.sort(result, new Comparator() { - @Override - public int compare(ClusterSlotRange o1, ClusterSlotRange o2) { - return o1.getFrom() - o2.getFrom(); - } - }); - - return Collections.unmodifiableList(result); - } - - private static ClusterSlotRange parseRange(List range, Map nodeCache) { - Iterator iterator = range.iterator(); - - int from = Math.toIntExact(getLongFromIterator(iterator, 0)); - int to = Math.toIntExact(getLongFromIterator(iterator, 0)); - RedisClusterNode master = null; - - List slaves = new ArrayList<>(); - if (iterator.hasNext()) { - master = getRedisClusterNode(iterator, nodeCache); - if(master != null) { - master.setFlags(Collections.singleton(RedisClusterNode.NodeFlag.MASTER)); - Set slots = new TreeSet<>(master.getSlots()); - slots.addAll(createSlots(from, to)); - master.setSlots(new ArrayList<>(slots)); - } - } - - while (iterator.hasNext()) { - RedisClusterNode slave = getRedisClusterNode(iterator, nodeCache); - if (slave != null) { - slave.setSlaveOf(master.getNodeId()); - slave.setFlags(Collections.singleton(RedisClusterNode.NodeFlag.SLAVE)); - slaves.add(slave); - } - } - - return new ClusterSlotRange(from, to, master, Collections.unmodifiableList(slaves)); - } - - private static List createSlots(int from, int to) { - List slots = new ArrayList<>(); - for (int i = from; i < to + 1; i++) { - slots.add(i); - } - return slots; - } - - private static RedisClusterNode getRedisClusterNode(Iterator iterator, Map nodeCache) { - Object element = iterator.next(); - RedisClusterNode redisClusterNode = null; - if (element instanceof List) { - List hostAndPortList = (List) element; - if (hostAndPortList.size() < 2) { - return null; - } - - Iterator hostAndPortIterator = hostAndPortList.iterator(); - String host = (String) hostAndPortIterator.next(); - int port = Math.toIntExact(getLongFromIterator(hostAndPortIterator, 0)); - String nodeId; - - - if (hostAndPortIterator.hasNext()) { - nodeId = (String) hostAndPortIterator.next(); - - redisClusterNode = nodeCache.get(nodeId); - if(redisClusterNode == null) { - redisClusterNode = createNode(host, port); - nodeCache.put(nodeId, redisClusterNode); - redisClusterNode.setNodeId(nodeId); - } - } - else { - String key = host + ":" + port; - redisClusterNode = nodeCache.get(key); - if(redisClusterNode == null) { - redisClusterNode = createNode(host, port); - nodeCache.put(key, redisClusterNode); - } - } - } - return redisClusterNode; - } - - private static RedisClusterNode createNode(String host, int port) { - RedisClusterNode redisClusterNode = new RedisClusterNode(); - redisClusterNode.setUri(RedisURI.create(host, port)); - redisClusterNode.setSlots(new ArrayList<>()); - return redisClusterNode; - } - - private static long getLongFromIterator(Iterator iterator, long defaultValue) { - if (iterator.hasNext()) { - Object object = 
iterator.next(); - if (object instanceof String) { - return Long.parseLong((String) object); - } - - if (object instanceof Number) { - return ((Number) object).longValue(); - } - } - return defaultValue; - } -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/models/slots/package-info.java b/src/main/java/com/lambdaworks/redis/cluster/models/slots/package-info.java deleted file mode 100644 index 973d70a9ff..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/models/slots/package-info.java +++ /dev/null @@ -1,4 +0,0 @@ -/** - * Model and parser for the {@code CLUSTER SLOTS} output. - */ -package com.lambdaworks.redis.cluster.models.slots; \ No newline at end of file diff --git a/src/main/java/com/lambdaworks/redis/cluster/package-info.java b/src/main/java/com/lambdaworks/redis/cluster/package-info.java deleted file mode 100644 index 6f2c3f8807..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/package-info.java +++ /dev/null @@ -1,4 +0,0 @@ -/** - * Client for Redis Cluster, see {@link com.lambdaworks.redis.cluster.RedisClusterClient}. - */ -package com.lambdaworks.redis.cluster; \ No newline at end of file diff --git a/src/main/java/com/lambdaworks/redis/cluster/topology/ClusterTopologyRefresh.java b/src/main/java/com/lambdaworks/redis/cluster/topology/ClusterTopologyRefresh.java deleted file mode 100644 index 652d265f00..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/topology/ClusterTopologyRefresh.java +++ /dev/null @@ -1,238 +0,0 @@ -package com.lambdaworks.redis.cluster.topology; - -import java.net.SocketAddress; -import java.util.*; -import java.util.concurrent.ExecutionException; -import java.util.concurrent.TimeUnit; -import java.util.stream.Collectors; -import java.util.stream.StreamSupport; - -import com.lambdaworks.redis.LettuceStrings; -import com.lambdaworks.redis.RedisCommandInterruptedException; -import com.lambdaworks.redis.RedisConnectionException; -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.cluster.models.partitions.Partitions; -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; -import com.lambdaworks.redis.codec.Utf8StringCodec; -import com.lambdaworks.redis.resource.ClientResources; -import com.lambdaworks.redis.resource.SocketAddressResolver; - -import io.netty.util.internal.logging.InternalLogger; -import io.netty.util.internal.logging.InternalLoggerFactory; - -/** - * Utility to refresh the cluster topology view based on {@link Partitions}. - * - * @author Mark Paluch - */ -public class ClusterTopologyRefresh { - - static final Utf8StringCodec CODEC = new Utf8StringCodec(); - private static final InternalLogger logger = InternalLoggerFactory.getInstance(ClusterTopologyRefresh.class); - - private final NodeConnectionFactory nodeConnectionFactory; - private final ClientResources clientResources; - - public ClusterTopologyRefresh(NodeConnectionFactory nodeConnectionFactory, ClientResources clientResources) { - this.nodeConnectionFactory = nodeConnectionFactory; - this.clientResources = clientResources; - } - - /** - * Load partition views from a collection of {@link RedisURI}s and return the view per {@link RedisURI}. Partitions contain - * an ordered list of {@link RedisClusterNode}s. The sort key is latency. Nodes with lower latency come first. 
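As the Javadoc above describes, `loadViews` queries every reachable node with `CLUSTER NODES` and `CLIENT LIST` and returns one latency-ordered `Partitions` view per `RedisURI`. A hedged usage sketch; the `NodeConnectionFactory` and `ClientResources` are assumed to be supplied by the surrounding client setup, and the seed address is illustrative:

```java
import java.util.Collections;
import java.util.List;
import java.util.Map;

import com.lambdaworks.redis.RedisURI;
import com.lambdaworks.redis.cluster.models.partitions.Partitions;
import com.lambdaworks.redis.cluster.topology.ClusterTopologyRefresh;
import com.lambdaworks.redis.cluster.topology.NodeConnectionFactory;
import com.lambdaworks.redis.resource.ClientResources;

class TopologyRefreshSketch {

    static Map<RedisURI, Partitions> loadTopology(NodeConnectionFactory factory, ClientResources resources) {
        ClusterTopologyRefresh refresh = new ClusterTopologyRefresh(factory, resources);
        List<RedisURI> seed = Collections.singletonList(RedisURI.create("127.0.0.1", 7000));
        return refresh.loadViews(seed, true); // true: also query nodes discovered beyond the seed list
    }
}
```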
- * - * @param seed collection of {@link RedisURI}s - * @param discovery {@literal true} to discover additional nodes - * @return mapping between {@link RedisURI} and {@link Partitions} - */ - public Map loadViews(Iterable seed, boolean discovery) { - - Connections connections = getConnections(seed); - Requests requestedTopology = connections.requestTopology(); - Requests requestedClients = connections.requestClients(); - - long commandTimeoutNs = getCommandTimeoutNs(seed); - - try { - NodeTopologyViews nodeSpecificViews = getNodeSpecificViews(requestedTopology, requestedClients, commandTimeoutNs); - - if (discovery) { - Set allKnownUris = nodeSpecificViews.getClusterNodes(); - Set discoveredNodes = difference(allKnownUris, toSet(seed)); - - if (!discoveredNodes.isEmpty()) { - Connections discoveredConnections = getConnections(discoveredNodes); - connections = connections.mergeWith(discoveredConnections); - - requestedTopology = requestedTopology.mergeWith(discoveredConnections.requestTopology()); - requestedClients = requestedClients.mergeWith(discoveredConnections.requestClients()); - - nodeSpecificViews = getNodeSpecificViews(requestedTopology, requestedClients, commandTimeoutNs); - - return nodeSpecificViews.toMap(); - } - } - - return nodeSpecificViews.toMap(); - - } catch (InterruptedException e) { - - Thread.currentThread().interrupt(); - throw new RedisCommandInterruptedException(e); - } finally { - connections.close(); - } - } - - private Set toSet(Iterable seed) { - return StreamSupport.stream(seed.spliterator(), false).collect(Collectors.toCollection(HashSet::new)); - } - - NodeTopologyViews getNodeSpecificViews(Requests requestedTopology, Requests requestedClients, - long commandTimeoutNs) throws InterruptedException { - - List allNodes = new ArrayList<>(); - - Map latencies = new HashMap<>(); - Map clientCountByNodeId = new HashMap<>(); - - long waitTime = requestedTopology.await(commandTimeoutNs, TimeUnit.NANOSECONDS); - requestedClients.await(commandTimeoutNs - waitTime, TimeUnit.NANOSECONDS); - - Set nodes = requestedTopology.nodes(); - - List views = new ArrayList<>(); - for (RedisURI node : nodes) { - - try { - NodeTopologyView nodeTopologyView = NodeTopologyView.from(node, requestedTopology, requestedClients); - - if (!nodeTopologyView.isAvailable()) { - continue; - } - - List nodeWithStats = nodeTopologyView.getPartitions() // - .stream() // - .filter(ClusterTopologyRefresh::validNode) // - .map(RedisClusterNodeSnapshot::new).collect(Collectors.toList()); - - for (RedisClusterNodeSnapshot partition : nodeWithStats) { - - if (partition.getFlags().contains(RedisClusterNode.NodeFlag.MYSELF)) { - - if(partition.getUri() == null){ - partition.setUri(node); - } - - // record latency for later partition ordering - latencies.put(partition.getNodeId(), nodeTopologyView.getLatency()); - clientCountByNodeId.put(partition.getNodeId(), nodeTopologyView.getConnectedClients()); - } - } - - allNodes.addAll(nodeWithStats); - - Partitions partitions = new Partitions(); - partitions.addAll(nodeWithStats); - - nodeTopologyView.setPartitions(partitions); - - views.add(nodeTopologyView); - } catch (ExecutionException e) { - logger.warn(String.format("Cannot retrieve partition view from %s, error: %s", node, e)); - } - } - - for (RedisClusterNodeSnapshot node : allNodes) { - node.setConnectedClients(clientCountByNodeId.get(node.getNodeId())); - node.setLatencyNs(latencies.get(node.getNodeId())); - } - - for (NodeTopologyView view : views) { - 
Collections.sort(view.getPartitions().getPartitions(), TopologyComparators.LatencyComparator.INSTANCE); - view.getPartitions().updateCache(); - } - - return new NodeTopologyViews(views); - } - - private static boolean validNode(RedisClusterNode redisClusterNode) { - - if (redisClusterNode.is(RedisClusterNode.NodeFlag.NOADDR)) { - return false; - } - - if (redisClusterNode.getUri() == null || redisClusterNode.getUri().getPort() == 0 - || LettuceStrings.isEmpty(redisClusterNode.getUri().getHost())) { - return false; - } - - return true; - } - - /* - * Open connections where an address can be resolved. - */ - private Connections getConnections(Iterable redisURIs) { - - Connections connections = new Connections(); - - for (RedisURI redisURI : redisURIs) { - if (redisURI.getHost() == null || connections.connectedNodes().contains(redisURI)) { - continue; - } - - try { - SocketAddress socketAddress = SocketAddressResolver.resolve(redisURI, clientResources.dnsResolver()); - StatefulRedisConnection connection = nodeConnectionFactory.connectToNode(CODEC, socketAddress); - connection.async().clientSetname("lettuce#ClusterTopologyRefresh"); - - connections.addConnection(redisURI, connection); - } catch (RedisConnectionException e) { - - if (logger.isDebugEnabled()) { - logger.debug(e.getMessage(), e); - } else { - logger.warn(e.getMessage()); - } - } catch (RuntimeException e) { - logger.warn(String.format("Cannot connect to %s", redisURI), e); - } - } - return connections; - } - - /** - * Resolve a {@link RedisURI} from a map of cluster views by {@link Partitions} as key - * - * @param map the map - * @param partitions the key - * @return a {@link RedisURI} or null - */ - public RedisURI getViewedBy(Map map, Partitions partitions) { - - for (Map.Entry entry : map.entrySet()) { - if (entry.getValue() == partitions) { - return entry.getKey(); - } - } - - return null; - } - - private static Set difference(Set set1, Set set2) { - - Set result = set1.stream().filter(e -> !set2.contains(e)).collect(Collectors.toSet()); - result.addAll(set2.stream().filter(e -> !set1.contains(e)).collect(Collectors.toList())); - - return result; - } - - private long getCommandTimeoutNs(Iterable redisURIs) { - - RedisURI redisURI = redisURIs.iterator().next(); - return redisURI.getUnit().toNanos(redisURI.getTimeout()); - } -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/topology/Connections.java b/src/main/java/com/lambdaworks/redis/cluster/topology/Connections.java deleted file mode 100644 index 711f171975..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/topology/Connections.java +++ /dev/null @@ -1,112 +0,0 @@ -package com.lambdaworks.redis.cluster.topology; - -import java.util.Map; -import java.util.Set; -import java.util.TreeMap; - -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.output.StatusOutput; -import com.lambdaworks.redis.protocol.Command; -import com.lambdaworks.redis.protocol.CommandArgs; -import com.lambdaworks.redis.protocol.CommandKeyword; -import com.lambdaworks.redis.protocol.CommandType; - -/** - * @author Mark Paluch - */ -class Connections { - - private Map> connections = new TreeMap<>( - TopologyComparators.RedisURIComparator.INSTANCE); - - public Connections() { - } - - private Connections(Map> connections) { - this.connections = connections; - } - - /** - * Add a connection for a {@link RedisURI} - * - * @param redisURI - * @param connection - */ - public void addConnection(RedisURI redisURI, 
StatefulRedisConnection connection) { - connections.put(redisURI, connection); - } - - /* - * Initiate {@code CLUSTER NODES} on all connections and return the {@link Requests}. - * - * @return the {@link Requests}. - */ - public Requests requestTopology() { - - Requests requests = new Requests(); - - for (Map.Entry> entry : connections.entrySet()) { - - CommandArgs args = new CommandArgs<>(ClusterTopologyRefresh.CODEC).add(CommandKeyword.NODES); - Command command = new Command<>(CommandType.CLUSTER, - new StatusOutput<>(ClusterTopologyRefresh.CODEC), args); - TimedAsyncCommand timedCommand = new TimedAsyncCommand<>(command); - - entry.getValue().dispatch(timedCommand); - requests.addRequest(entry.getKey(), timedCommand); - } - - return requests; - } - - /* - * Initiate {@code CLIENT LIST} on all connections and return the {@link Requests}. - * - * @return the {@link Requests}. - */ - public Requests requestClients() { - - Requests requests = new Requests(); - - for (Map.Entry> entry : connections.entrySet()) { - - CommandArgs args = new CommandArgs<>(ClusterTopologyRefresh.CODEC).add(CommandKeyword.LIST); - Command command = new Command<>(CommandType.CLIENT, - new StatusOutput<>(ClusterTopologyRefresh.CODEC), args); - TimedAsyncCommand timedCommand = new TimedAsyncCommand<>(command); - - entry.getValue().dispatch(timedCommand); - requests.addRequest(entry.getKey(), timedCommand); - } - - return requests; - } - - /** - * Close all connections. - */ - public void close() { - for (StatefulRedisConnection connection : connections.values()) { - connection.close(); - } - } - - /** - * - * @return a set of {@link RedisURI} for which {@link Connections} has a connection. - */ - public Set connectedNodes() { - return connections.keySet(); - } - - public Connections mergeWith(Connections discoveredConnections) { - - Map> result = new TreeMap<>( - TopologyComparators.RedisURIComparator.INSTANCE); - result.putAll(this.connections); - result.putAll(discoveredConnections.connections); - - return new Connections(result); - } -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/topology/NodeConnectionFactory.java b/src/main/java/com/lambdaworks/redis/cluster/topology/NodeConnectionFactory.java deleted file mode 100644 index 45d6250a87..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/topology/NodeConnectionFactory.java +++ /dev/null @@ -1,26 +0,0 @@ -package com.lambdaworks.redis.cluster.topology; - -import java.net.SocketAddress; - -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.codec.RedisCodec; - -/** - * Factory interface to obtain {@link StatefulRedisConnection connections} to Redis cluster nodes. - * - * @author Mark Paluch - * @since 4.2 - */ -public interface NodeConnectionFactory { - - /** - * Connects to a {@link SocketAddress} with the given {@link RedisCodec}. - * - * @param codec must not be {@literal null}. - * @param socketAddress must not be {@literal null}. 
- * @param - * @param - * @return a new {@link StatefulRedisConnection} - */ - StatefulRedisConnection connectToNode(RedisCodec codec, SocketAddress socketAddress); -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/topology/NodeTopologyView.java b/src/main/java/com/lambdaworks/redis/cluster/topology/NodeTopologyView.java deleted file mode 100644 index e346b19d78..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/topology/NodeTopologyView.java +++ /dev/null @@ -1,123 +0,0 @@ -package com.lambdaworks.redis.cluster.topology; - -import java.util.concurrent.ExecutionException; - -import com.lambdaworks.redis.RedisFuture; -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.cluster.models.partitions.ClusterPartitionParser; -import com.lambdaworks.redis.cluster.models.partitions.Partitions; -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; - -/** - * @author Mark Paluch - */ -class NodeTopologyView { - - private final boolean available; - private final RedisURI redisURI; - - private Partitions partitions; - private final int connectedClients; - - private final long latency; - private final String clusterNodes; - - private final String clientList; - - NodeTopologyView(RedisURI redisURI) { - - this.available = false; - this.redisURI = redisURI; - this.partitions = new Partitions(); - this.connectedClients = 0; - this.clusterNodes = null; - this.clientList = null; - this.latency = 0; - } - - NodeTopologyView(RedisURI redisURI, String clusterNodes, String clientList, long latency) { - - this.available = true; - this.redisURI = redisURI; - - this.partitions = ClusterPartitionParser.parse(clusterNodes); - this.connectedClients = getClients(clientList); - - this.clusterNodes = clusterNodes; - this.clientList = clientList; - this.latency = latency; - - getOwnPartition().setUri(redisURI); - } - - static NodeTopologyView from(RedisURI redisURI, Requests clusterNodesRequests, Requests clientListRequests) - throws ExecutionException, InterruptedException { - - TimedAsyncCommand nodes = clusterNodesRequests.getRequest(redisURI); - TimedAsyncCommand clients = clientListRequests.getRequest(redisURI); - - if (resultAvailable(nodes) && resultAvailable(clients)) { - return new NodeTopologyView(redisURI, nodes.get(), clients.get(), nodes.duration()); - } - return new NodeTopologyView(redisURI); - } - - static boolean resultAvailable(RedisFuture redisFuture) { - - if (redisFuture != null && redisFuture.isDone() && !redisFuture.isCancelled()) { - return true; - } - - return false; - } - - private int getClients(String rawClientsOutput) { - return rawClientsOutput.trim().split("\\n").length; - } - - long getLatency() { - return latency; - } - - boolean isAvailable() { - return available; - } - - Partitions getPartitions() { - return partitions; - } - - int getConnectedClients() { - return connectedClients; - } - - String getNodeId() { - return getOwnPartition().getNodeId(); - } - - RedisURI getRedisURI() { - return getOwnPartition().getUri(); - } - - private RedisClusterNode getOwnPartition() { - for (RedisClusterNode partition : partitions) { - if (partition.is(RedisClusterNode.NodeFlag.MYSELF)) { - return partition; - } - } - - throw new IllegalStateException("Cannot determine own partition"); - } - - String getClientList() { - return clientList; - } - - String getClusterNodes() { - return clusterNodes; - } - - void setPartitions(Partitions partitions) { - this.partitions = partitions; - } -} diff --git 
a/src/main/java/com/lambdaworks/redis/cluster/topology/NodeTopologyViews.java b/src/main/java/com/lambdaworks/redis/cluster/topology/NodeTopologyViews.java deleted file mode 100644 index 3b984f4be9..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/topology/NodeTopologyViews.java +++ /dev/null @@ -1,57 +0,0 @@ -package com.lambdaworks.redis.cluster.topology; - -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.cluster.models.partitions.Partitions; -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; - -import java.util.*; - -/** - * @author Mark Paluch - */ -class NodeTopologyViews { - - private List views = new ArrayList<>(); - - public NodeTopologyViews(List views) { - this.views = views; - } - - /** - * Return cluster node URI's using the topology query sources and partitions. - * - * @return - */ - public Set getClusterNodes() { - - Set result = new HashSet<>(); - - Map knownUris = new HashMap<>(); - for (NodeTopologyView view : views) { - knownUris.put(view.getNodeId(), view.getRedisURI()); - } - - for (NodeTopologyView view : views) { - for (RedisClusterNode redisClusterNode : view.getPartitions()) { - if (knownUris.containsKey(redisClusterNode.getNodeId())) { - result.add(knownUris.get(redisClusterNode.getNodeId())); - } else { - result.add(redisClusterNode.getUri()); - } - } - } - - return result; - } - - public Map toMap() { - - Map nodeSpecificViews = new TreeMap<>(TopologyComparators.RedisURIComparator.INSTANCE); - - for (NodeTopologyView view : views) { - nodeSpecificViews.put(view.getRedisURI(), view.getPartitions()); - } - - return nodeSpecificViews; - } -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/topology/RedisClusterNodeSnapshot.java b/src/main/java/com/lambdaworks/redis/cluster/topology/RedisClusterNodeSnapshot.java deleted file mode 100644 index 5a66793ef6..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/topology/RedisClusterNodeSnapshot.java +++ /dev/null @@ -1,35 +0,0 @@ -package com.lambdaworks.redis.cluster.topology; - -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; - -/** - * @author Mark Paluch - */ -class RedisClusterNodeSnapshot extends RedisClusterNode { - - private Long latencyNs; - private Integer connectedClients; - - public RedisClusterNodeSnapshot() { - } - - public RedisClusterNodeSnapshot(RedisClusterNode redisClusterNode) { - super(redisClusterNode); - } - - Long getLatencyNs() { - return latencyNs; - } - - void setLatencyNs(Long latencyNs) { - this.latencyNs = latencyNs; - } - - Integer getConnectedClients() { - return connectedClients; - } - - void setConnectedClients(Integer connectedClients) { - this.connectedClients = connectedClients; - } -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/topology/Requests.java b/src/main/java/com/lambdaworks/redis/cluster/topology/Requests.java deleted file mode 100644 index 0e7d9179c7..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/topology/Requests.java +++ /dev/null @@ -1,73 +0,0 @@ -package com.lambdaworks.redis.cluster.topology; - -import java.util.Map; -import java.util.Set; -import java.util.TreeMap; -import java.util.concurrent.TimeUnit; - -import com.lambdaworks.redis.RedisFuture; -import com.lambdaworks.redis.RedisURI; - -/** - * @author Mark Paluch - */ -class Requests { - - Map> rawViews = new TreeMap<>( - TopologyComparators.RedisURIComparator.INSTANCE); - - Requests() { - } - - Requests(Map> rawViews) { - this.rawViews = rawViews; - } - - void addRequest(RedisURI redisURI, 
TimedAsyncCommand command) { - rawViews.put(redisURI, command); - } - - long await(long timeout, TimeUnit timeUnit) throws InterruptedException { - - long waitTime = 0; - - for (Map.Entry> entry : rawViews.entrySet()) { - long timeoutLeft = timeUnit.toNanos(timeout) - waitTime; - - if (timeoutLeft <= 0) { - break; - } - - long startWait = System.nanoTime(); - RedisFuture future = entry.getValue(); - - try { - if (!future.await(timeoutLeft, TimeUnit.NANOSECONDS)) { - break; - } - } finally { - waitTime += System.nanoTime() - startWait; - } - - } - return waitTime; - } - - Set nodes() { - return rawViews.keySet(); - } - - TimedAsyncCommand getRequest(RedisURI redisURI) { - return rawViews.get(redisURI); - } - - Requests mergeWith(Requests requests) { - - Map> result = new TreeMap<>( - TopologyComparators.RedisURIComparator.INSTANCE); - result.putAll(this.rawViews); - result.putAll(requests.rawViews); - - return new Requests(result); - } -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/topology/TimedAsyncCommand.java b/src/main/java/com/lambdaworks/redis/cluster/topology/TimedAsyncCommand.java deleted file mode 100644 index d7e5a6e5bd..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/topology/TimedAsyncCommand.java +++ /dev/null @@ -1,46 +0,0 @@ -package com.lambdaworks.redis.cluster.topology; - -import com.lambdaworks.redis.protocol.AsyncCommand; -import com.lambdaworks.redis.protocol.RedisCommand; - -import io.netty.buffer.ByteBuf; - -/** - * Timed command that records the time at which the command was encoded and completed. - * - * @param Key type - * @param Value type - * @param Result type - * @author Mark Paluch - */ -class TimedAsyncCommand extends AsyncCommand { - - long encodedAtNs = -1; - long completedAtNs = -1; - - public TimedAsyncCommand(RedisCommand command) { - super(command); - } - - @Override - public void encode(ByteBuf buf) { - completedAtNs = -1; - encodedAtNs = -1; - - super.encode(buf); - encodedAtNs = System.nanoTime(); - } - - @Override - public void complete() { - completedAtNs = System.nanoTime(); - super.complete(); - } - - public long duration() { - if (completedAtNs == -1 || encodedAtNs == -1) { - return -1; - } - return completedAtNs - encodedAtNs; - } -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/topology/TopologyComparators.java b/src/main/java/com/lambdaworks/redis/cluster/topology/TopologyComparators.java deleted file mode 100644 index 16e4c94ab0..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/topology/TopologyComparators.java +++ /dev/null @@ -1,267 +0,0 @@ -package com.lambdaworks.redis.cluster.topology; - -import java.util.Collections; -import java.util.Comparator; -import java.util.List; -import java.util.stream.Collectors; - -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.cluster.models.partitions.Partitions; -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; -import com.lambdaworks.redis.internal.LettuceAssert; -import com.lambdaworks.redis.internal.LettuceLists; -import com.lambdaworks.redis.internal.LettuceSets; - -/** - * Comparators for {@link RedisClusterNode} and {@link RedisURI}. - * - * @author Mark Paluch - */ -public class TopologyComparators { - - /** - * Sort partitions by a {@code fixedOrder} and by {@link RedisURI}. Nodes are sorted as provided in {@code fixedOrder}. - * {@link RedisURI RedisURIs}s not contained in {@code fixedOrder} are ordered after the fixed sorting and sorted wihin the - * block by comparing {@link RedisURI}. 
- * - * @param clusterNodes the sorting input - * @param fixedOrder the fixed order part - * @return List containing {@link RedisClusterNode}s ordered by {@code fixedOrder} and {@link RedisURI} - * @see #sortByUri(Iterable) - */ - public static List predefinedSort(Iterable clusterNodes, - Iterable fixedOrder) { - - LettuceAssert.notNull(clusterNodes, "Cluster nodes must not be null"); - LettuceAssert.notNull(fixedOrder, "Fixed order must not be null"); - - List fixedOrderList = LettuceLists.newList(fixedOrder); - List withOrderSpecification = LettuceLists.newList(clusterNodes)// - .stream()// - .filter(redisClusterNode -> fixedOrderList.contains(redisClusterNode.getUri()))// - .collect(Collectors.toList()); - - List withoutSpecification = LettuceLists.newList(clusterNodes)// - .stream()// - .filter(redisClusterNode -> !fixedOrderList.contains(redisClusterNode.getUri()))// - .collect(Collectors.toList()); - - Collections.sort(withOrderSpecification, new PredefinedRedisClusterNodeComparator(fixedOrderList)); - Collections.sort(withoutSpecification, (o1, o2) -> RedisURIComparator.INSTANCE.compare(o1.getUri(), o2.getUri())); - - withOrderSpecification.addAll(withoutSpecification); - - return withOrderSpecification; - } - - /** - * Sort partitions by RedisURI. - * - * @param clusterNodes - * @return List containing {@link RedisClusterNode}s ordered by {@link RedisURI} - */ - public static List sortByUri(Iterable clusterNodes) { - - LettuceAssert.notNull(clusterNodes, "Cluster nodes must not be null"); - - List ordered = LettuceLists.newList(clusterNodes); - Collections.sort(ordered, (o1, o2) -> RedisURIComparator.INSTANCE.compare(o1.getUri(), o2.getUri())); - return ordered; - } - - /** - * Sort partitions by client count. - * - * @param clusterNodes - * @return List containing {@link RedisClusterNode}s ordered by client count - */ - public static List sortByClientCount(Iterable clusterNodes) { - - LettuceAssert.notNull(clusterNodes, "Cluster nodes must not be null"); - - List ordered = LettuceLists.newList(clusterNodes); - Collections.sort(ordered, ClientCountComparator.INSTANCE); - return ordered; - } - - /** - * Sort partitions by latency. - * - * @param clusterNodes - * @return List containing {@link RedisClusterNode}s ordered by latency - */ - public static List sortByLatency(Iterable clusterNodes) { - - List ordered = LettuceLists.newList(clusterNodes); - Collections.sort(ordered, LatencyComparator.INSTANCE); - return ordered; - } - - /** - * Check if properties changed which are essential for cluster operations. - * - * @param o1 the first object to be compared. - * @param o2 the second object to be compared. - * @return {@literal true} if {@code MASTER} or {@code SLAVE} flags changed or the responsible slots changed. - */ - public static boolean isChanged(Partitions o1, Partitions o2) { - - if (o1.size() != o2.size()) { - return true; - } - - for (RedisClusterNode base : o2) { - if (!essentiallyEqualsTo(base, o1.getPartitionByNodeId(base.getNodeId()))) { - return true; - } - } - - return false; - } - - /** - * Check for {@code MASTER} or {@code SLAVE} flags and whether the responsible slots changed. - * - * @param o1 the first object to be compared. - * @param o2 the second object to be compared. - * @return {@literal true} if {@code MASTER} or {@code SLAVE} flags changed or the responsible slots changed. 
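`isChanged` above treats a topology as changed only when master/slave flags or slot ownership differ between two `Partitions`, while the sort helpers order nodes by URI, client count or measured latency. A hedged sketch of how a caller might combine them:

```java
import java.util.List;

import com.lambdaworks.redis.cluster.models.partitions.Partitions;
import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode;
import com.lambdaworks.redis.cluster.topology.TopologyComparators;

class TopologyCompareSketch {

    // Keep the current view unless master/slave flags or slot ownership actually changed.
    static Partitions chooseTopology(Partitions current, Partitions refreshed) {
        return TopologyComparators.isChanged(current, refreshed) ? refreshed : current;
    }

    // Order nodes by measured latency; plain RedisClusterNode instances without snapshot data compare as equal.
    static List<RedisClusterNode> byLatency(Partitions partitions) {
        return TopologyComparators.sortByLatency(partitions);
    }
}
```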
- */ - static boolean essentiallyEqualsTo(RedisClusterNode o1, RedisClusterNode o2) { - - if (o2 == null) { - return false; - } - - if (!sameFlags(o1, o2, RedisClusterNode.NodeFlag.MASTER)) { - return false; - } - - if (!sameFlags(o1, o2, RedisClusterNode.NodeFlag.SLAVE)) { - return false; - } - - if (!LettuceSets.newHashSet(o1.getSlots()).equals(LettuceSets.newHashSet(o2.getSlots()))) { - return false; - } - - return true; - } - - private static boolean sameFlags(RedisClusterNode base, RedisClusterNode other, RedisClusterNode.NodeFlag flag) { - if (base.getFlags().contains(flag)) { - if (!other.getFlags().contains(flag)) { - return false; - } - } else { - if (other.getFlags().contains(flag)) { - return false; - } - } - return true; - } - - static class PredefinedRedisClusterNodeComparator implements Comparator { - private final List fixedOrder; - - public PredefinedRedisClusterNodeComparator(List fixedOrder) { - this.fixedOrder = fixedOrder; - } - - @Override - public int compare(RedisClusterNode o1, RedisClusterNode o2) { - - int index1 = fixedOrder.indexOf(o1.getUri()); - int index2 = fixedOrder.indexOf(o2.getUri()); - - return Integer.compare(index1, index2); - } - } - - /** - * Compare {@link RedisClusterNodeSnapshot} based on their latency. Lowest comes first. Objects of type - * {@link RedisClusterNode} cannot be compared and yield to a result of {@literal 0}. - */ - enum LatencyComparator implements Comparator { - - INSTANCE; - - @Override - public int compare(RedisClusterNode o1, RedisClusterNode o2) { - if (o1 instanceof RedisClusterNodeSnapshot && o2 instanceof RedisClusterNodeSnapshot) { - - RedisClusterNodeSnapshot w1 = (RedisClusterNodeSnapshot) o1; - RedisClusterNodeSnapshot w2 = (RedisClusterNodeSnapshot) o2; - - if (w1.getLatencyNs() != null && w2.getLatencyNs() != null) { - return w1.getLatencyNs().compareTo(w2.getLatencyNs()); - } - - if (w1.getLatencyNs() != null && w2.getLatencyNs() == null) { - return -1; - } - - if (w1.getLatencyNs() == null && w2.getLatencyNs() != null) { - return 1; - } - } - - return 0; - } - } - - /** - * Compare {@link RedisClusterNodeSnapshot} based on their client count. Lowest comes first. Objects of type - * {@link RedisClusterNode} cannot be compared and yield to a result of {@literal 0}. - */ - enum ClientCountComparator implements Comparator { - - INSTANCE; - - @Override - public int compare(RedisClusterNode o1, RedisClusterNode o2) { - if (o1 instanceof RedisClusterNodeSnapshot && o2 instanceof RedisClusterNodeSnapshot) { - - RedisClusterNodeSnapshot w1 = (RedisClusterNodeSnapshot) o1; - RedisClusterNodeSnapshot w2 = (RedisClusterNodeSnapshot) o2; - - if (w1.getConnectedClients() != null && w2.getConnectedClients() != null) { - return w1.getConnectedClients().compareTo(w2.getConnectedClients()); - } - - if (w1.getConnectedClients() == null && w2.getConnectedClients() != null) { - return 1; - } - - if (w1.getConnectedClients() != null && w2.getConnectedClients() == null) { - return -1; - } - } - - return 0; - } - } - - /** - * Compare {@link RedisURI} based on their host and port representation. 
- */ - enum RedisURIComparator implements Comparator { - - INSTANCE; - - @Override - public int compare(RedisURI o1, RedisURI o2) { - String h1 = ""; - String h2 = ""; - - if (o1 != null) { - h1 = o1.getHost() + ":" + o1.getPort(); - } - - if (o2 != null) { - h2 = o2.getHost() + ":" + o2.getPort(); - } - - return h1.compareToIgnoreCase(h2); - } - } -} diff --git a/src/main/java/com/lambdaworks/redis/cluster/topology/package-info.java b/src/main/java/com/lambdaworks/redis/cluster/topology/package-info.java deleted file mode 100644 index 9dcaa601a2..0000000000 --- a/src/main/java/com/lambdaworks/redis/cluster/topology/package-info.java +++ /dev/null @@ -1,5 +0,0 @@ -/** - * Support for cluster topology refresh. - */ -package com.lambdaworks.redis.cluster.topology; - diff --git a/src/main/java/com/lambdaworks/redis/codec/ByteArrayCodec.java b/src/main/java/com/lambdaworks/redis/codec/ByteArrayCodec.java deleted file mode 100644 index 55d6311da9..0000000000 --- a/src/main/java/com/lambdaworks/redis/codec/ByteArrayCodec.java +++ /dev/null @@ -1,52 +0,0 @@ -package com.lambdaworks.redis.codec; - -import java.nio.ByteBuffer; - -/** - * A {@link RedisCodec} that uses plain byte arrays. - * - * @author Mark Paluch - * @since 3.3 - */ -public class ByteArrayCodec implements RedisCodec { - - public final static ByteArrayCodec INSTANCE = new ByteArrayCodec(); - private final static byte[] EMPTY = new byte[0]; - - @Override - public byte[] decodeKey(ByteBuffer bytes) { - return getBytes(bytes); - } - - @Override - public byte[] decodeValue(ByteBuffer bytes) { - return getBytes(bytes); - } - - @Override - public ByteBuffer encodeKey(byte[] key) { - - if(key == null){ - return ByteBuffer.wrap(EMPTY); - } - - return ByteBuffer.wrap(key); - } - - @Override - public ByteBuffer encodeValue(byte[] value) { - - if(value == null){ - return ByteBuffer.wrap(EMPTY); - } - - return ByteBuffer.wrap(value); - } - - private static byte[] getBytes(ByteBuffer buffer) { - byte[] b = new byte[buffer.remaining()]; - buffer.get(b); - return b; - } - -} diff --git a/src/main/java/com/lambdaworks/redis/codec/ByteBufferInputStream.java b/src/main/java/com/lambdaworks/redis/codec/ByteBufferInputStream.java deleted file mode 100644 index 805d42dea7..0000000000 --- a/src/main/java/com/lambdaworks/redis/codec/ByteBufferInputStream.java +++ /dev/null @@ -1,27 +0,0 @@ -package com.lambdaworks.redis.codec; - -import java.io.IOException; -import java.io.InputStream; -import java.nio.ByteBuffer; - -class ByteBufferInputStream extends InputStream { - - private final ByteBuffer buffer; - - public ByteBufferInputStream(ByteBuffer b) { - this.buffer = b; - } - - @Override - public int available() throws IOException { - return buffer.remaining(); - } - - @Override - public int read() throws IOException { - if (buffer.remaining() > 0) { - return (buffer.get() & 0xFF); - } - return -1; - } -} diff --git a/src/main/java/com/lambdaworks/redis/codec/CompressionCodec.java b/src/main/java/com/lambdaworks/redis/codec/CompressionCodec.java deleted file mode 100644 index 6b166685e6..0000000000 --- a/src/main/java/com/lambdaworks/redis/codec/CompressionCodec.java +++ /dev/null @@ -1,157 +0,0 @@ -package com.lambdaworks.redis.codec; - -import java.io.ByteArrayOutputStream; -import java.io.IOException; -import java.io.InputStream; -import java.io.OutputStream; -import java.nio.ByteBuffer; -import java.util.zip.DeflaterOutputStream; -import java.util.zip.GZIPInputStream; -import java.util.zip.GZIPOutputStream; -import java.util.zip.InflaterInputStream; - 
-import com.lambdaworks.redis.internal.LettuceAssert; - -/** - * A compressing/decompressing {@link RedisCodec} that wraps a typed {@link RedisCodec codec} and compresses values using GZIP - * or Deflate. See {@link com.lambdaworks.redis.codec.CompressionCodec.CompressionType} for supported compression types. - * - * @author Mark Paluch - */ -public class CompressionCodec { - - /** - * A {@link RedisCodec} that compresses values from a delegating {@link RedisCodec}. - * - * @param delegate codec used for key-value encoding/decoding, must not be {@literal null}. - * @param compressionType the compression type, must not be {@literal null}. - * @param Key type. - * @param Value type. - * @return Value-compressing codec. - */ - @SuppressWarnings({ "rawtypes", "unchecked" }) - public static RedisCodec valueCompressor(RedisCodec delegate, CompressionType compressionType) { - LettuceAssert.notNull(delegate, "RedisCodec must not be null"); - LettuceAssert.notNull(compressionType, "CompressionType must not be null"); - return (RedisCodec) new CompressingValueCodecWrapper((RedisCodec) delegate, compressionType); - } - - private static class CompressingValueCodecWrapper implements RedisCodec { - - private RedisCodec delegate; - private CompressionType compressionType; - - public CompressingValueCodecWrapper(RedisCodec delegate, CompressionType compressionType) { - this.delegate = delegate; - this.compressionType = compressionType; - } - - @Override - public Object decodeKey(ByteBuffer bytes) { - return delegate.decodeKey(bytes); - } - - @Override - public Object decodeValue(ByteBuffer bytes) { - try { - return delegate.decodeValue(decompress(bytes)); - } catch (IOException e) { - throw new IllegalStateException(e); - } - } - - @Override - public ByteBuffer encodeKey(Object key) { - return delegate.encodeKey(key); - } - - @Override - public ByteBuffer encodeValue(Object value) { - try { - return compress(delegate.encodeValue(value)); - } catch (IOException e) { - throw new IllegalStateException(e); - } - } - - private ByteBuffer compress(ByteBuffer source) throws IOException { - if (source.remaining() == 0) { - return source; - } - - ByteBufferInputStream sourceStream = new ByteBufferInputStream(source); - ByteArrayOutputStream outputStream = new ByteArrayOutputStream(source.remaining() / 2); - OutputStream compressor = null; - if (compressionType == CompressionType.GZIP) { - compressor = new GZIPOutputStream(outputStream); - } - - if (compressionType == CompressionType.DEFLATE) { - compressor = new DeflaterOutputStream(outputStream); - } - - try { - copy(sourceStream, compressor); - } finally { - compressor.close(); - } - - return ByteBuffer.wrap(outputStream.toByteArray()); - } - - private ByteBuffer decompress(ByteBuffer source) throws IOException { - if (source.remaining() == 0) { - return source; - } - - ByteBufferInputStream sourceStream = new ByteBufferInputStream(source); - ByteArrayOutputStream outputStream = new ByteArrayOutputStream(source.remaining() * 2); - InputStream decompressor = null; - if (compressionType == CompressionType.GZIP) { - decompressor = new GZIPInputStream(sourceStream); - } - - if (compressionType == CompressionType.DEFLATE) { - decompressor = new InflaterInputStream(sourceStream); - } - - try { - copy(decompressor, outputStream); - } finally { - decompressor.close(); - } - - return ByteBuffer.wrap(outputStream.toByteArray()); - } - - } - - /** - * Copies all bytes from the input stream to the output stream. Does not close or flush either stream. 
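`CompressionCodec.valueCompressor` above wraps an existing codec so that values are transparently GZIP- or Deflate-compressed while keys stay untouched. A hedged usage sketch using the `Utf8StringCodec` that appears later in this diff; in real use the codec would be passed to the connection rather than round-tripped by hand:

```java
import com.lambdaworks.redis.codec.CompressionCodec;
import com.lambdaworks.redis.codec.RedisCodec;
import com.lambdaworks.redis.codec.Utf8StringCodec;

class CompressionSketch {

    public static void main(String[] args) {
        // Values are GZIP-compressed; keys remain plain UTF-8.
        RedisCodec<String, String> codec = CompressionCodec.valueCompressor(
                new Utf8StringCodec(), CompressionCodec.CompressionType.GZIP);

        String original = "some value worth compressing";
        String roundTripped = codec.decodeValue(codec.encodeValue(original));
        System.out.println(original.equals(roundTripped)); // true
    }
}
```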
- * - * @param from the input stream to read from - * @param to the output stream to write to - * @return the number of bytes copied - * @throws IOException if an I/O error occurs - */ - private static long copy(InputStream from, OutputStream to) throws IOException { - LettuceAssert.notNull(from, "From must not be null"); - LettuceAssert.notNull(to, "From must not be null"); - byte[] buf = new byte[4096]; - long total = 0; - while (true) { - int r = from.read(buf); - if (r == -1) { - break; - } - to.write(buf, 0, r); - total += r; - } - return total; - } - - public enum CompressionType { - GZIP, DEFLATE; - } - -} diff --git a/src/main/java/com/lambdaworks/redis/codec/RedisCodec.java b/src/main/java/com/lambdaworks/redis/codec/RedisCodec.java deleted file mode 100644 index afcc3771ac..0000000000 --- a/src/main/java/com/lambdaworks/redis/codec/RedisCodec.java +++ /dev/null @@ -1,55 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis.codec; - -import java.nio.ByteBuffer; - -/** - * A {@link RedisCodec} encodes keys and values sent to Redis, and decodes keys and values in the command output. - * - * The methods are called by multiple threads and must be thread-safe. - * - * @param Key type. - * @param Value type. - * - * @author Will Glozer - * @author Mark Paluch - */ -public interface RedisCodec { - /** - * Decode the key output by redis. - * - * @param bytes Raw bytes of the key, must not be {@literal null}. - * - * @return The decoded key, may be {@literal null}. - */ - K decodeKey(ByteBuffer bytes); - - /** - * Decode the value output by redis. - * - * @param bytes Raw bytes of the value, must not be {@literal null}. - * - * @return The decoded value, may be {@literal null}. - */ - V decodeValue(ByteBuffer bytes); - - /** - * Encode the key for output to redis. - * - * @param key the key, may be {@literal null}. - * - * @return The encoded key, never {@literal null}. - */ - ByteBuffer encodeKey(K key); - - /** - * Encode the value for output to redis. - * - * @param value the value, may be {@literal null}. - * - * @return The encoded value, never {@literal null}. - */ - ByteBuffer encodeValue(V value); - -} diff --git a/src/main/java/com/lambdaworks/redis/codec/ToByteBufEncoder.java b/src/main/java/com/lambdaworks/redis/codec/ToByteBufEncoder.java deleted file mode 100644 index 719001e94d..0000000000 --- a/src/main/java/com/lambdaworks/redis/codec/ToByteBufEncoder.java +++ /dev/null @@ -1,43 +0,0 @@ -package com.lambdaworks.redis.codec; - -import io.netty.buffer.ByteBuf; - -/** - * Optimized encoder that encodes keys and values directly on a {@link ByteBuf}. This encoder does not allocate buffers, it just - * encodes data to existing buffers. - *

- * Classes implementing {@link ToByteBufEncoder} are required to implement {@link RedisCodec} as well. You should implement also - * the {@link RedisCodec#encodeKey(Object)} and {@link RedisCodec#encodeValue(Object)} methods to ensure compatibility for users - * that access the {@link RedisCodec} API only. - *

- * - * @author Mark Paluch - * @since 4.3 - */ -public interface ToByteBufEncoder { - - /** - * Encode the key for output to redis. - * - * @param key the key, may be {@literal null}. - * @param target the target buffer, must not be {@literal null}. - */ - void encodeKey(K key, ByteBuf target); - - /** - * Encode the value for output to redis. - * - * @param value the value, may be {@literal null}. - * @param target the target buffer, must not be {@literal null}. - */ - void encodeValue(V value, ByteBuf target); - - /** - * Estimates the size of the resulting byte stream. This method is called for keys and values to estimate the size for the - * temporary buffer to allocate. - * - * @param keyOrValue the key or value, may be {@link null}. - * @return the estimated number of bytes in the encoded representation. - */ - int estimateSize(Object keyOrValue); -} diff --git a/src/main/java/com/lambdaworks/redis/codec/Utf8StringCodec.java b/src/main/java/com/lambdaworks/redis/codec/Utf8StringCodec.java deleted file mode 100644 index 137808a000..0000000000 --- a/src/main/java/com/lambdaworks/redis/codec/Utf8StringCodec.java +++ /dev/null @@ -1,77 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis.codec; - -import static java.nio.charset.CoderResult.OVERFLOW; - -import java.nio.ByteBuffer; -import java.nio.CharBuffer; -import java.nio.charset.Charset; -import java.nio.charset.CharsetDecoder; - -import com.lambdaworks.redis.protocol.LettuceCharsets; - -/** - * A {@link RedisCodec} that handles UTF-8 encoded keys and values. - * - * @author Will Glozer - */ -public class Utf8StringCodec implements RedisCodec { - - private final static byte[] EMPTY = new byte[0]; - - private Charset charset; - private CharsetDecoder decoder; - private CharBuffer chars; - - - /** - * Initialize a new instance that encodes and decodes strings using the UTF-8 charset; - */ - public Utf8StringCodec() { - charset = LettuceCharsets.UTF8; - decoder = charset.newDecoder(); - chars = CharBuffer.allocate(1024); - } - - @Override - public String decodeKey(ByteBuffer bytes) { - return decode(bytes); - } - - @Override - public String decodeValue(ByteBuffer bytes) { - return decode(bytes); - } - - @Override - public ByteBuffer encodeKey(String key) { - return encode(key); - } - - @Override - public ByteBuffer encodeValue(String value) { - return encode(value); - } - - private synchronized String decode(ByteBuffer bytes) { - chars.clear(); - bytes.mark(); - - decoder.reset(); - while (decoder.decode(bytes, chars, true) == OVERFLOW || decoder.flush(chars) == OVERFLOW) { - chars = CharBuffer.allocate(chars.capacity() * 2); - bytes.reset(); - } - - return chars.flip().toString(); - } - - private ByteBuffer encode(String string) { - if (string == null) { - return ByteBuffer.wrap(EMPTY); - } - - return charset.encode(string); - } -} diff --git a/src/main/java/com/lambdaworks/redis/codec/package-info.java b/src/main/java/com/lambdaworks/redis/codec/package-info.java deleted file mode 100644 index b6f3fa5e50..0000000000 --- a/src/main/java/com/lambdaworks/redis/codec/package-info.java +++ /dev/null @@ -1,4 +0,0 @@ -/** - * Codecs for key/value type conversion. 
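The `RedisCodec` contract earlier in this diff (decode/encode for keys and values) together with `Utf8StringCodec` above is all a custom codec needs. A minimal illustrative codec, not part of this codebase, that stores keys as UTF-8 strings and values as 8-byte big-endian longs:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

import com.lambdaworks.redis.codec.RedisCodec;

class StringLongCodec implements RedisCodec<String, Long> {

    @Override
    public String decodeKey(ByteBuffer bytes) {
        return StandardCharsets.UTF_8.decode(bytes).toString();
    }

    @Override
    public Long decodeValue(ByteBuffer bytes) {
        return bytes.getLong();
    }

    @Override
    public ByteBuffer encodeKey(String key) {
        return key == null ? ByteBuffer.allocate(0) : StandardCharsets.UTF_8.encode(key);
    }

    @Override
    public ByteBuffer encodeValue(Long value) {
        if (value == null) {
            return ByteBuffer.allocate(0);
        }
        ByteBuffer buffer = ByteBuffer.allocate(Long.BYTES);
        buffer.putLong(value);
        buffer.flip(); // make the buffer readable for the transport
        return buffer;
    }
}
```

Codec methods may be called from multiple threads, so implementations should stay stateless or otherwise thread-safe, as the `RedisCodec` Javadoc above requires.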
- */ -package com.lambdaworks.redis.codec; \ No newline at end of file diff --git a/src/main/java/com/lambdaworks/redis/event/DefaultEventBus.java b/src/main/java/com/lambdaworks/redis/event/DefaultEventBus.java deleted file mode 100644 index 1d16240069..0000000000 --- a/src/main/java/com/lambdaworks/redis/event/DefaultEventBus.java +++ /dev/null @@ -1,35 +0,0 @@ -package com.lambdaworks.redis.event; - -import rx.Observable; -import rx.Scheduler; -import rx.subjects.PublishSubject; -import rx.subjects.Subject; - -/** - * Default implementation for an {@link EventBus}. Events are published using a {@link Scheduler}. - * - * @author Mark Paluch - * @since 3.4 - */ -public class DefaultEventBus implements EventBus { - - private final Subject bus; - private final Scheduler scheduler; - - public DefaultEventBus(Scheduler scheduler) { - this.bus = PublishSubject. create().toSerialized(); - this.scheduler = scheduler; - } - - @Override - public Observable get() { - return bus.onBackpressureDrop().observeOn(scheduler); - } - - @Override - public void publish(Event event) { - if (bus.hasObservers()) { - bus.onNext(event); - } - } -} diff --git a/src/main/java/com/lambdaworks/redis/event/DefaultEventPublisherOptions.java b/src/main/java/com/lambdaworks/redis/event/DefaultEventPublisherOptions.java deleted file mode 100644 index 80b634e63c..0000000000 --- a/src/main/java/com/lambdaworks/redis/event/DefaultEventPublisherOptions.java +++ /dev/null @@ -1,93 +0,0 @@ -package com.lambdaworks.redis.event; - -import java.util.concurrent.TimeUnit; - -import com.lambdaworks.redis.internal.LettuceAssert; -import com.lambdaworks.redis.metrics.CommandLatencyCollectorOptions; - -/** - * The default implementation of {@link CommandLatencyCollectorOptions}. - * - * @author Mark Paluch - */ -public class DefaultEventPublisherOptions implements EventPublisherOptions { - - public static final long DEFAULT_EMIT_INTERVAL = 10; - public static final TimeUnit DEFAULT_EMIT_INTERVAL_UNIT = TimeUnit.MINUTES; - - private static final DefaultEventPublisherOptions DISABLED = new Builder().eventEmitInterval(0, TimeUnit.SECONDS).build(); - - private final long eventEmitInterval; - private final TimeUnit eventEmitIntervalUnit; - - protected DefaultEventPublisherOptions(Builder builder) { - this.eventEmitInterval = builder.eventEmitInterval; - this.eventEmitIntervalUnit = builder.eventEmitIntervalUnit; - } - - /** - * Builder for {@link DefaultEventPublisherOptions}. - */ - public static class Builder { - - private long eventEmitInterval = DEFAULT_EMIT_INTERVAL; - private TimeUnit eventEmitIntervalUnit = DEFAULT_EMIT_INTERVAL_UNIT; - - public Builder() { - } - - /** - * Sets the emit interval and the interval unit. Event emission will be disabled if the {@code eventEmitInterval} is set - * to 0}. Defaults to 10} {@link TimeUnit#MINUTES}. See {@link DefaultEventPublisherOptions#DEFAULT_EMIT_INTERVAL} - * {@link DefaultEventPublisherOptions#DEFAULT_EMIT_INTERVAL_UNIT}. 
- * - * @param eventEmitInterval the event interval, must be greater or equal to 0} - * @param eventEmitIntervalUnit the {@link TimeUnit} for the interval, must not be null - * @return this - */ - public Builder eventEmitInterval(long eventEmitInterval, TimeUnit eventEmitIntervalUnit) { - LettuceAssert.isTrue(eventEmitInterval >= 0, "EventEmitInterval must be greater or equal to 0"); - LettuceAssert.notNull(eventEmitIntervalUnit, "EventEmitIntervalUnit must not be null"); - - this.eventEmitInterval = eventEmitInterval; - this.eventEmitIntervalUnit = eventEmitIntervalUnit; - return this; - } - - /** - * - * @return a new instance of {@link DefaultEventPublisherOptions}. - */ - public DefaultEventPublisherOptions build() { - return new DefaultEventPublisherOptions(this); - } - } - - @Override - public long eventEmitInterval() { - return eventEmitInterval; - } - - @Override - public TimeUnit eventEmitIntervalUnit() { - return eventEmitIntervalUnit; - } - - /** - * Create a new {@link DefaultEventPublisherOptions} using default settings. - * - * @return a new instance of a default {@link DefaultEventPublisherOptions} instance - */ - public static DefaultEventPublisherOptions create() { - return new Builder().build(); - } - - /** - * Create a disabled {@link DefaultEventPublisherOptions} using default settings. - * - * @return a new instance of a default {@link DefaultEventPublisherOptions} instance with disabled event emission - */ - public static DefaultEventPublisherOptions disabled() { - return DISABLED; - } -} diff --git a/src/main/java/com/lambdaworks/redis/event/Event.java b/src/main/java/com/lambdaworks/redis/event/Event.java deleted file mode 100644 index 36ea26332c..0000000000 --- a/src/main/java/com/lambdaworks/redis/event/Event.java +++ /dev/null @@ -1,11 +0,0 @@ -package com.lambdaworks.redis.event; - -/** - * - * Marker-interface for events that are published over the event bus. - * - * @author Mark Paluch - * @since 3.4 - */ -public interface Event { -} diff --git a/src/main/java/com/lambdaworks/redis/event/EventBus.java b/src/main/java/com/lambdaworks/redis/event/EventBus.java deleted file mode 100644 index e5475fc6d5..0000000000 --- a/src/main/java/com/lambdaworks/redis/event/EventBus.java +++ /dev/null @@ -1,26 +0,0 @@ -package com.lambdaworks.redis.event; - -import rx.Observable; - -/** - * Interface for an EventBus. Events can be published over the bus that are delivered to the subscribers. - * - * @author Mark Paluch - * @since 3.4 - */ -public interface EventBus { - - /** - * Subscribe to the event bus and {@link Event}s. The {@link Observable} drops events on backpressure to avoid contention. - * - * @return the observable to obtain events. - */ - Observable get(); - - /** - * Publish a {@link Event} to the bus. - * - * @param event the event to publish - */ - void publish(Event event); -} diff --git a/src/main/java/com/lambdaworks/redis/event/EventPublisherOptions.java b/src/main/java/com/lambdaworks/redis/event/EventPublisherOptions.java deleted file mode 100644 index 91264e4607..0000000000 --- a/src/main/java/com/lambdaworks/redis/event/EventPublisherOptions.java +++ /dev/null @@ -1,25 +0,0 @@ -package com.lambdaworks.redis.event; - -import java.util.concurrent.TimeUnit; - -/** - * Configuration interface for command latency collection. - * - * @author Mark Paluch - */ -public interface EventPublisherOptions { - - /** - * Returns the interval for emit metrics. 
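A hedged configuration sketch: the emit interval is typically set when building the client resources. The {@code commandLatencyPublisherOptions} builder method is assumed from the 4.x {@code ClientResources} API and is not part of this hunk.

    EventPublisherOptions options = new DefaultEventPublisherOptions.Builder()
            .eventEmitInterval(1, TimeUnit.MINUTES)
            .build();

    // Assumption: DefaultClientResources.Builder exposes commandLatencyPublisherOptions(...) in this version.
    ClientResources resources = DefaultClientResources.builder()
            .commandLatencyPublisherOptions(options)
            .build();

    RedisClient client = RedisClient.create(resources, RedisURI.create("redis://localhost"));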
- * - * @return the interval for emit metrics - */ - long eventEmitInterval(); - - /** - * Returns the {@link TimeUnit} for the event emit interval. - * - * @return the {@link TimeUnit} for the event emit interval - */ - TimeUnit eventEmitIntervalUnit(); -} diff --git a/src/main/java/com/lambdaworks/redis/event/connection/ConnectedEvent.java b/src/main/java/com/lambdaworks/redis/event/connection/ConnectedEvent.java deleted file mode 100644 index dddc33fcd5..0000000000 --- a/src/main/java/com/lambdaworks/redis/event/connection/ConnectedEvent.java +++ /dev/null @@ -1,15 +0,0 @@ -package com.lambdaworks.redis.event.connection; - -import java.net.SocketAddress; - -/** - * Event for a established TCP-level connection. - * - * @author Mark Paluch - * @since 3.4 - */ -public class ConnectedEvent extends ConnectionEventSupport { - public ConnectedEvent(SocketAddress local, SocketAddress remote) { - super(local, remote); - } -} diff --git a/src/main/java/com/lambdaworks/redis/event/connection/ConnectionActivatedEvent.java b/src/main/java/com/lambdaworks/redis/event/connection/ConnectionActivatedEvent.java deleted file mode 100644 index 1681e35002..0000000000 --- a/src/main/java/com/lambdaworks/redis/event/connection/ConnectionActivatedEvent.java +++ /dev/null @@ -1,18 +0,0 @@ -package com.lambdaworks.redis.event.connection; - -import java.net.SocketAddress; - -import com.lambdaworks.redis.ClientOptions; - -/** - * Event for a connection activation (after SSL-handshake, {@link ClientOptions#isPingBeforeActivateConnection() PING before - * activation}, and buffered command replay). - * - * @author Mark Paluch - * @since 3.4 - */ -public class ConnectionActivatedEvent extends ConnectionEventSupport { - public ConnectionActivatedEvent(SocketAddress local, SocketAddress remote) { - super(local, remote); - } -} diff --git a/src/main/java/com/lambdaworks/redis/event/connection/ConnectionDeactivatedEvent.java b/src/main/java/com/lambdaworks/redis/event/connection/ConnectionDeactivatedEvent.java deleted file mode 100644 index 497f8fe833..0000000000 --- a/src/main/java/com/lambdaworks/redis/event/connection/ConnectionDeactivatedEvent.java +++ /dev/null @@ -1,15 +0,0 @@ -package com.lambdaworks.redis.event.connection; - -import java.net.SocketAddress; - -/** - * Event for a connection deactivation. 
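For illustration, connection lifecycle events such as these can be observed through the event bus; a hedged sketch reusing {@code resources} from the sketch above:

    resources.eventBus().get()
            .ofType(ConnectedEvent.class)
            .subscribe(event -> System.out.println("Connected: " + event));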
- * - * @author Mark Paluch - * @since 3.4 - */ -public class ConnectionDeactivatedEvent extends ConnectionEventSupport { - public ConnectionDeactivatedEvent(SocketAddress local, SocketAddress remote) { - super(local, remote); - } -} diff --git a/src/main/java/com/lambdaworks/redis/event/connection/ConnectionEvent.java b/src/main/java/com/lambdaworks/redis/event/connection/ConnectionEvent.java deleted file mode 100644 index b1205cce95..0000000000 --- a/src/main/java/com/lambdaworks/redis/event/connection/ConnectionEvent.java +++ /dev/null @@ -1,14 +0,0 @@ -package com.lambdaworks.redis.event.connection; - -import com.lambdaworks.redis.ConnectionId; -import com.lambdaworks.redis.event.Event; - -/** - * Interface for Connection-related events - * - * @author Mark Paluch - * @since 3.4 - */ -public interface ConnectionEvent extends ConnectionId, Event { - -} diff --git a/src/main/java/com/lambdaworks/redis/event/connection/ConnectionEventSupport.java b/src/main/java/com/lambdaworks/redis/event/connection/ConnectionEventSupport.java deleted file mode 100644 index 78dfe5bc6e..0000000000 --- a/src/main/java/com/lambdaworks/redis/event/connection/ConnectionEventSupport.java +++ /dev/null @@ -1,56 +0,0 @@ -package com.lambdaworks.redis.event.connection; - -import java.net.SocketAddress; - -import com.lambdaworks.redis.internal.LettuceAssert; - -/** - * @author Mark Paluch - * @since 3.4 - */ -abstract class ConnectionEventSupport implements ConnectionEvent { - - private final SocketAddress local; - private final SocketAddress remote; - - ConnectionEventSupport(SocketAddress local, SocketAddress remote) { - LettuceAssert.notNull(local, "Local must not be null"); - LettuceAssert.notNull(remote, "Remote must not be null"); - - this.local = local; - this.remote = remote; - } - - /** - * Returns the local address. - * - * @return the local address - */ - public SocketAddress localAddress() { - return local; - } - - /** - * Returns the remote address. - * - * @return the remote address - */ - public SocketAddress remoteAddress() { - return remote; - } - - @Override - public String toString() { - final StringBuffer sb = new StringBuffer(); - sb.append(getClass().getSimpleName()); - sb.append(" ["); - appendConnectionId(sb); - sb.append(']'); - return sb.toString(); - } - - void appendConnectionId(StringBuffer sb) { - sb.append(local); - sb.append(" -> ").append(remote); - } -} diff --git a/src/main/java/com/lambdaworks/redis/event/connection/DisconnectedEvent.java b/src/main/java/com/lambdaworks/redis/event/connection/DisconnectedEvent.java deleted file mode 100644 index cf4b93cebe..0000000000 --- a/src/main/java/com/lambdaworks/redis/event/connection/DisconnectedEvent.java +++ /dev/null @@ -1,15 +0,0 @@ -package com.lambdaworks.redis.event.connection; - -import java.net.SocketAddress; - -/** - * Event for a disconnect on TCP-level. - * - * @author Mark Paluch - * @since 3.4 - */ -public class DisconnectedEvent extends ConnectionEventSupport { - public DisconnectedEvent(SocketAddress local, SocketAddress remote) { - super(local, remote); - } -} diff --git a/src/main/java/com/lambdaworks/redis/event/connection/package-info.java b/src/main/java/com/lambdaworks/redis/event/connection/package-info.java deleted file mode 100644 index 61d07650fc..0000000000 --- a/src/main/java/com/lambdaworks/redis/event/connection/package-info.java +++ /dev/null @@ -1,5 +0,0 @@ -/** - * Connection-related events. 
- */ -package com.lambdaworks.redis.event.connection; - diff --git a/src/main/java/com/lambdaworks/redis/event/metrics/CommandLatencyEvent.java b/src/main/java/com/lambdaworks/redis/event/metrics/CommandLatencyEvent.java deleted file mode 100644 index d179c83a29..0000000000 --- a/src/main/java/com/lambdaworks/redis/event/metrics/CommandLatencyEvent.java +++ /dev/null @@ -1,37 +0,0 @@ -package com.lambdaworks.redis.event.metrics; - -import java.util.Map; - -import com.lambdaworks.redis.event.Event; -import com.lambdaworks.redis.metrics.CommandLatencyId; -import com.lambdaworks.redis.metrics.CommandMetrics; - -/** - * Event that transports command latency metrics. This event carries latencies for multiple commands and connections. - * - * @author Mark Paluch - */ -public class CommandLatencyEvent implements Event { - - private Map latencies; - - public CommandLatencyEvent(Map latencies) { - this.latencies = latencies; - } - - /** - * Returns the latencies mapped between {@link CommandLatencyId connection/command} and the {@link CommandMetrics metrics}. - * - * @return the latency map. - */ - public Map getLatencies() { - return latencies; - } - - @Override - public String toString() { - final StringBuffer sb = new StringBuffer(); - sb.append(latencies); - return sb.toString(); - } -} diff --git a/src/main/java/com/lambdaworks/redis/event/metrics/DefaultCommandLatencyEventPublisher.java b/src/main/java/com/lambdaworks/redis/event/metrics/DefaultCommandLatencyEventPublisher.java deleted file mode 100644 index a961508d95..0000000000 --- a/src/main/java/com/lambdaworks/redis/event/metrics/DefaultCommandLatencyEventPublisher.java +++ /dev/null @@ -1,68 +0,0 @@ -package com.lambdaworks.redis.event.metrics; - -import com.lambdaworks.redis.event.EventBus; -import com.lambdaworks.redis.event.EventPublisherOptions; -import com.lambdaworks.redis.metrics.CommandLatencyCollector; - -import io.netty.util.concurrent.EventExecutorGroup; -import io.netty.util.concurrent.ScheduledFuture; - -/** - * Default implementation of a {@link CommandLatencyCollector} for command latencies. 
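A hedged sketch of consuming the latency metrics this publisher emits, again via the event bus from the earlier sketch:

    resources.eventBus().get()
            .ofType(CommandLatencyEvent.class)
            .subscribe(event -> event.getLatencies().forEach(
                    (commandId, metrics) -> System.out.println(commandId + ": " + metrics)));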
- * - * @author Mark Paluch - */ -public class DefaultCommandLatencyEventPublisher implements MetricEventPublisher { - - private final EventExecutorGroup eventExecutorGroup; - private final EventPublisherOptions options; - private final EventBus eventBus; - private final CommandLatencyCollector commandLatencyCollector; - - private final Runnable EMITTER = new Runnable() { - @Override - public void run() { - emitMetricsEvent(); - } - }; - - private volatile ScheduledFuture scheduledFuture; - - public DefaultCommandLatencyEventPublisher(EventExecutorGroup eventExecutorGroup, EventPublisherOptions options, - EventBus eventBus, CommandLatencyCollector commandLatencyCollector) { - this.eventExecutorGroup = eventExecutorGroup; - this.options = options; - this.eventBus = eventBus; - this.commandLatencyCollector = commandLatencyCollector; - - if (options.eventEmitInterval() > 0) { - scheduledFuture = this.eventExecutorGroup.scheduleAtFixedRate(EMITTER, options.eventEmitInterval(), - options.eventEmitInterval(), options.eventEmitIntervalUnit()); - } - } - - @Override - public boolean isEnabled() { - return options.eventEmitInterval() > 0 && scheduledFuture != null; - } - - @Override - public void shutdown() { - - if (scheduledFuture != null) { - scheduledFuture.cancel(true); - scheduledFuture = null; - } - } - - @Override - public void emitMetricsEvent() { - - if (!isEnabled() || !commandLatencyCollector.isEnabled()) { - return; - } - - eventBus.publish(new CommandLatencyEvent(commandLatencyCollector.retrieveMetrics())); - } - -} diff --git a/src/main/java/com/lambdaworks/redis/event/metrics/MetricEventPublisher.java b/src/main/java/com/lambdaworks/redis/event/metrics/MetricEventPublisher.java deleted file mode 100644 index c9f3260e13..0000000000 --- a/src/main/java/com/lambdaworks/redis/event/metrics/MetricEventPublisher.java +++ /dev/null @@ -1,29 +0,0 @@ -package com.lambdaworks.redis.event.metrics; - -import com.lambdaworks.redis.event.Event; - -/** - * Event publisher which publishes metrics by the use of {@link Event events}. - * - * @author Mark Paluch - * @since 3.4 - */ -public interface MetricEventPublisher { - - /** - * Emit immediately a metrics event. - */ - void emitMetricsEvent(); - - /** - * Returns {@literal true} if the metric collector is enabled. - * - * @return {@literal true} if the metric collector is enabled - */ - boolean isEnabled(); - - /** - * Shut down the event publisher. - */ - void shutdown(); -} diff --git a/src/main/java/com/lambdaworks/redis/event/metrics/package-info.java b/src/main/java/com/lambdaworks/redis/event/metrics/package-info.java deleted file mode 100644 index 9665bcda23..0000000000 --- a/src/main/java/com/lambdaworks/redis/event/metrics/package-info.java +++ /dev/null @@ -1,5 +0,0 @@ -/** - * Metric events and publishing. - */ -package com.lambdaworks.redis.event.metrics; - diff --git a/src/main/java/com/lambdaworks/redis/event/package-info.java b/src/main/java/com/lambdaworks/redis/event/package-info.java deleted file mode 100644 index 633e6bba7f..0000000000 --- a/src/main/java/com/lambdaworks/redis/event/package-info.java +++ /dev/null @@ -1,5 +0,0 @@ -/** - * Event publishing and subscription. 
- */ -package com.lambdaworks.redis.event; - diff --git a/src/main/java/com/lambdaworks/redis/internal/LettuceAssert.java b/src/main/java/com/lambdaworks/redis/internal/LettuceAssert.java deleted file mode 100644 index d7116036d7..0000000000 --- a/src/main/java/com/lambdaworks/redis/internal/LettuceAssert.java +++ /dev/null @@ -1,132 +0,0 @@ -package com.lambdaworks.redis.internal; - -import com.lambdaworks.redis.LettuceStrings; - -import java.util.Collection; - -/** - * Assertion utility class that assists in validating arguments. This class is part of the internal API and may change without - * further notice. - * - * @author Mark Paluch - */ -public class LettuceAssert { - - /** - * prevent instances. - */ - private LettuceAssert() { - } - - /** - * Assert that a string is not empty, it must not be {@code null} and it must not be empty. - * - * @param string the object to check - * @param message the exception message to use if the assertion fails - * @throws IllegalArgumentException if the object is {@code null} or the underlying string is empty - */ - public static void notEmpty(String string, String message) { - if (LettuceStrings.isEmpty(string)) { - throw new IllegalArgumentException(message); - } - } - - /** - * Assert that an object is not {@code null} . - * - * @param object the object to check - * @param message the exception message to use if the assertion fails - * @throws IllegalArgumentException if the object is {@code null} - */ - public static void notNull(Object object, String message) { - if (object == null) { - throw new IllegalArgumentException(message); - } - } - - /** - * Assert that an array has elements; that is, it must not be {@code null} and must have at least one element. - * - * @param array the array to check - * @param message the exception message to use if the assertion fails - * @throws IllegalArgumentException if the object array is {@code null} or has no elements - */ - public static void notEmpty(Object[] array, String message) { - if (array == null || array.length == 0) { - throw new IllegalArgumentException(message); - } - } - - /** - * Assert that an array has elements; that is, it must not be {@code null} and must have at least one element. - * - * @param array the array to check - * @param message the exception message to use if the assertion fails - * @throws IllegalArgumentException if the object array is {@code null} or has no elements - */ - public static void notEmpty(int[] array, String message) { - if (array == null || array.length == 0) { - throw new IllegalArgumentException(message); - } - } - - /** - * Assert that an array has no null elements. - * - * @param array the array to check - * @param message the exception message to use if the assertion fails - * @throws IllegalArgumentException if the object array contains a {@code null} element - */ - public static void noNullElements(Object[] array, String message) { - if (array != null) { - for (Object element : array) { - if (element == null) { - throw new IllegalArgumentException(message); - } - } - } - } - - /** - * Assert that a {@link java.util.Collection} has no null elements. 
- * - * @param c the collection to check - * @param message the exception message to use if the assertion fails - * @throws IllegalArgumentException if the {@link Collection} contains a {@code null} element - */ - public static void noNullElements(Collection c, String message) { - if (c != null) { - for (Object element : c) { - if (element == null) { - throw new IllegalArgumentException(message); - } - } - } - } - - /** - * Assert that {@code value} is {@literal true}. - * - * @param value the value to check - * @param message the exception message to use if the assertion fails - * @throws IllegalArgumentException if the object array contains a {@code null} element - */ - public static void isTrue(boolean value, String message) { - if (!value) { - throw new IllegalArgumentException(message); - } - } - - /** - * Ensures the truth of an expression involving the state of the calling instance, but not involving any parameters to the - * calling method. - * - * @param condition a boolean expression - * @throws IllegalStateException if {@code expression} is false - */ - public static void assertState(boolean condition, String message) { - if (!condition) { - throw new IllegalStateException(message); - } - } -} diff --git a/src/main/java/com/lambdaworks/redis/internal/LettuceClassUtils.java b/src/main/java/com/lambdaworks/redis/internal/LettuceClassUtils.java deleted file mode 100644 index 947254b752..0000000000 --- a/src/main/java/com/lambdaworks/redis/internal/LettuceClassUtils.java +++ /dev/null @@ -1,78 +0,0 @@ -package com.lambdaworks.redis.internal; - -import com.lambdaworks.redis.JavaRuntime; - -/** - * Miscellaneous class utility methods. Mainly for internal use within the framework. - * - * @author Mark Paluch - * @since 4.2 - */ -public class LettuceClassUtils { - - /** - * Determine whether the {@link Class} identified by the supplied name is present and can be loaded. Will return - * {@code false} if either the class or one of its dependencies is not present or cannot be loaded. - * - * @param className the name of the class to check - * @return whether the specified class is present - */ - public static boolean isPresent(String className) { - try { - forName(className); - return true; - } catch (Throwable ex) { - // Class or one of its dependencies is not present... - return false; - } - } - - /** - * Loads a class using the {@link #getDefaultClassLoader()}. - * - * @param className - * @return - * @throws ClassNotFoundException - */ - public static Class forName(String className) throws ClassNotFoundException { - return forName(className, getDefaultClassLoader()); - } - - private static Class forName(String className, ClassLoader classLoader) throws ClassNotFoundException { - try { - return classLoader.loadClass(className); - } catch (ClassNotFoundException ex) { - int lastDotIndex = className.lastIndexOf('.'); - if (lastDotIndex != -1) { - String innerClassName = className.substring(0, lastDotIndex) + '$' + className.substring(lastDotIndex + 1); - try { - return classLoader.loadClass(innerClassName); - } catch (ClassNotFoundException ex2) { - // swallow - let original exception get through - } - } - throw ex; - } - } - - /** - * Return the default ClassLoader to use: typically the thread context ClassLoader, if available; the ClassLoader that - * loaded the ClassUtils class will be used as fallback. 
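A small illustrative example of how such a presence check is typically used (the class name below is only an example):

    if (LettuceClassUtils.isPresent("io.netty.channel.epoll.EpollEventLoopGroup")) {
        // the optional native epoll transport is available on the classpath
    }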
- * - * @return the default ClassLoader (never null) - * @see java.lang.Thread#getContextClassLoader() - */ - private static ClassLoader getDefaultClassLoader() { - ClassLoader cl = null; - try { - cl = Thread.currentThread().getContextClassLoader(); - } catch (Throwable ex) { - // Cannot access thread context ClassLoader - falling back to system class loader... - } - if (cl == null) { - // No thread context class loader -> use class loader of this class. - cl = JavaRuntime.class.getClassLoader(); - } - return cl; - } -} diff --git a/src/main/java/com/lambdaworks/redis/internal/LettuceFactories.java b/src/main/java/com/lambdaworks/redis/internal/LettuceFactories.java deleted file mode 100644 index 8d6cf9daed..0000000000 --- a/src/main/java/com/lambdaworks/redis/internal/LettuceFactories.java +++ /dev/null @@ -1,47 +0,0 @@ -package com.lambdaworks.redis.internal; - -import java.util.ArrayDeque; -import java.util.Deque; -import java.util.Queue; -import java.util.concurrent.BlockingQueue; -import java.util.concurrent.ConcurrentLinkedDeque; -import java.util.concurrent.LinkedBlockingQueue; - -/** - * This class is part of the internal API and may change without further notice. - * - * @author Mark Paluch - * @since 4.2 - */ -public class LettuceFactories { - - /** - * Creates a new {@link Queue} that does not require external synchronization. - * - * @param - * @return a new, empty {@link ConcurrentLinkedDeque}. - */ - public final static Deque newConcurrentQueue() { - return new ConcurrentLinkedDeque(); - } - - /** - * Creates a new {@link Queue} for single producer/single consumer. - * - * @param - * @return a new, empty {@link ArrayDeque}. - */ - public final static Deque newSpScQueue() { - return new ArrayDeque<>(); - } - - /** - * Creates a new {@link BlockingQueue}. - * - * @param - * @return a new, empty {@link BlockingQueue}. - */ - public static BlockingQueue newBlockingQueue() { - return new LinkedBlockingQueue<>(); - } -} diff --git a/src/main/java/com/lambdaworks/redis/internal/package-info.java b/src/main/java/com/lambdaworks/redis/internal/package-info.java deleted file mode 100644 index a886c39099..0000000000 --- a/src/main/java/com/lambdaworks/redis/internal/package-info.java +++ /dev/null @@ -1,6 +0,0 @@ -/** - * Contains internal API. Classes in this package are part of the internal API and may change without further notice. - * - * @since 4.2 - */ -package com.lambdaworks.redis.internal; diff --git a/src/main/java/com/lambdaworks/redis/masterslave/MasterSlave.java b/src/main/java/com/lambdaworks/redis/masterslave/MasterSlave.java deleted file mode 100644 index e1fd9e1698..0000000000 --- a/src/main/java/com/lambdaworks/redis/masterslave/MasterSlave.java +++ /dev/null @@ -1,282 +0,0 @@ -package com.lambdaworks.redis.masterslave; - -import java.util.*; - -import com.lambdaworks.redis.RedisClient; -import com.lambdaworks.redis.RedisException; -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.codec.RedisCodec; -import com.lambdaworks.redis.internal.LettuceAssert; -import com.lambdaworks.redis.internal.LettuceLists; -import com.lambdaworks.redis.models.role.RedisInstance; -import com.lambdaworks.redis.models.role.RedisNodeDescription; - -import io.netty.util.internal.logging.InternalLogger; -import io.netty.util.internal.logging.InternalLoggerFactory; - -/** - * Master-Slave connection API. - *

- * This API allows connections to Redis Master/Slave setups which either run Redis Standalone or are managed by Redis Sentinel. Master-Slave connections can discover topologies and select a source for read operations using - * {@link com.lambdaworks.redis.ReadFrom}. - *

- *

- * - * Connections can be obtained by providing the {@link RedisClient}, a {@link RedisURI} and a {@link RedisCodec}. - * - *

- *  @code
- *   RedisClient client = RedisClient.create();
- *   StatefulRedisMasterSlaveConnection connection = MasterSlave.connect(client,
- *                                                                      RedisURI.create("redis://localhost"),
- *                                                                      new Utf8StringCodec());
- *   // ...
- *
- *   connection.close();
- *   client.shutdown();
- *   }
- * 
- *

- *

Topology Discovery

- *

- * Master-Slave topologies are either static or semi-static. Redis Standalone instances with attached slaves provide no - * failover/HA mechanism. Redis Sentinel managed instances are controlled by Redis Sentinel and allow failover (which includes - * master promotion). The {@link MasterSlave} API supports both mechanisms. The topology is provided by a - * {@link TopologyProvider}: - * - *

    - *
- * <ul>
- * <li>{@link MasterSlaveTopologyProvider}: Dynamic topology lookup using the {@code INFO REPLICATION} output. Slaves are listed
- * as {@code slaveN=...} entries. The initial connection can either point to a master or a slave and the topology provider will
- * discover nodes. The connection needs to be re-established outside of lettuce in a case of Master/Slave failover or topology
- * changes.</li>
- * <li>{@link StaticMasterSlaveTopologyProvider}: Topology is defined by the list of {@link RedisURI URIs} and the {@code ROLE}
- * output. MasterSlave uses only the supplied nodes and won't discover additional nodes in the setup. The connection needs to be
- * re-established outside of lettuce in a case of Master/Slave failover or topology changes.</li>
- * <li>{@link SentinelTopologyProvider}: Dynamic topology lookup using the Redis Sentinel API. In particular,
- * {@code SENTINEL MASTER} and {@code SENTINEL SLAVES} output. Master/Slave failover is handled by lettuce.</li>
- * </ul>
- *

- * Topology Updates - *

- *
    - *
- * <ul>
- * <li>Standalone Master/Slave: Performs a one-time topology lookup which remains static afterward</li>
- * <li>Redis Sentinel: Subscribes to all Sentinels and listens for Pub/Sub messages to trigger topology refreshing</li>
- * </ul>
- *

- * - * @author Mark Paluch - * @since 4.1 - */ -public class MasterSlave { - - private final static InternalLogger LOG = InternalLoggerFactory.getInstance(MasterSlave.class); - - /** - * Open a new connection to a Redis Master-Slave server/servers using the supplied {@link RedisURI} and the supplied - * {@link RedisCodec codec} to encode/decode keys. - *

- * This {@link MasterSlave} performs auto-discovery of nodes using either Redis Sentinel or Master/Slave. A {@link RedisURI} - * can point to either a master or a slave host. - *
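For example (a hedged sketch; host names and the master id are placeholders), a Sentinel-managed setup is selected when the {@link RedisURI} carries Sentinel addresses:

    RedisURI sentinelUri = RedisURI.Builder
            .sentinel("sentinel1", 26379, "mymaster")
            .withSentinel("sentinel2")
            .build();

    StatefulRedisMasterSlaveConnection<String, String> connection = MasterSlave.connect(
            client, new Utf8StringCodec(), sentinelUri);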

- * - * @param redisClient the Redis client - * @param codec Use this codec to encode/decode keys and values, must not be {@literal null} - * @param redisURI the Redis server to connect to, must not be {@literal null} - * @param Key type - * @param Value type - * @return A new connection - */ - public static StatefulRedisMasterSlaveConnection connect(RedisClient redisClient, RedisCodec codec, - RedisURI redisURI) { - - LettuceAssert.notNull(redisClient, "RedisClient must not be null"); - LettuceAssert.notNull(codec, "RedisCodec must not be null"); - LettuceAssert.notNull(redisURI, "RedisURI must not be null"); - - if (isSentinel(redisURI)) { - return connectSentinel(redisClient, codec, redisURI); - } else { - return connectMasterSlave(redisClient, codec, redisURI); - } - } - - /** - * Open a new connection to a Redis Master-Slave server/servers using the supplied {@link RedisURI} and the supplied - * {@link RedisCodec codec} to encode/decode keys. - *

- * This {@link MasterSlave} performs auto-discovery of nodes if the URI is a Redis Sentinel URI. Master/Slave URIs will be - * treated as a static topology, and no additional hosts are discovered in that case. Redis Standalone Master/Slave will - * discover the roles of the supplied {@link RedisURI URIs} and issue commands to the appropriate node. -
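A hedged sketch of this static variant with a list of node URIs (hosts are placeholders):

    List<RedisURI> nodes = Arrays.asList(
            RedisURI.create("redis://host1:6379"),
            RedisURI.create("redis://host2:6379"));

    StatefulRedisMasterSlaveConnection<String, String> connection = MasterSlave.connect(
            client, new Utf8StringCodec(), nodes);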

- * - * @param redisClient the Redis client - * @param codec Use this codec to encode/decode keys and values, must not be {@literal null} - * @param redisURIs the Redis server to connect to, must not be {@literal null} - * @param Key type - * @param Value type - * @return A new connection - */ - public static StatefulRedisMasterSlaveConnection connect(RedisClient redisClient, RedisCodec codec, - Iterable redisURIs) { - - LettuceAssert.notNull(redisClient, "RedisClient must not be null"); - LettuceAssert.notNull(codec, "RedisCodec must not be null"); - LettuceAssert.notNull(redisURIs, "RedisURIs must not be null"); - - List uriList = LettuceLists.newList(redisURIs); - LettuceAssert.isTrue(!uriList.isEmpty(), "RedisURIs must not be empty"); - - if (isSentinel(uriList.get(0))) { - return connectSentinel(redisClient, codec, uriList.get(0)); - } else { - return connectStaticMasterSlave(redisClient, codec, uriList); - } - } - - private static StatefulRedisMasterSlaveConnection connectSentinel(RedisClient redisClient, - RedisCodec codec, RedisURI redisURI) { - - TopologyProvider topologyProvider = new SentinelTopologyProvider(redisURI.getSentinelMasterId(), redisClient, redisURI); - SentinelTopologyRefresh sentinelTopologyRefresh = new SentinelTopologyRefresh(redisClient, - redisURI.getSentinelMasterId(), redisURI.getSentinels()); - - MasterSlaveTopologyRefresh refresh = new MasterSlaveTopologyRefresh(redisClient, topologyProvider); - MasterSlaveConnectionProvider connectionProvider = new MasterSlaveConnectionProvider<>(redisClient, codec, - redisURI, Collections.emptyMap()); - - connectionProvider.setKnownNodes(refresh.getNodes(redisURI)); - - MasterSlaveChannelWriter channelWriter = new MasterSlaveChannelWriter<>(connectionProvider); - StatefulRedisMasterSlaveConnectionImpl connection = new StatefulRedisMasterSlaveConnectionImpl<>(channelWriter, - codec, redisURI.getTimeout(), redisURI.getUnit()); - - Runnable runnable = () -> { - try { - - LOG.debug("Refreshing topology"); - List nodes = refresh.getNodes(redisURI); - - LOG.debug("New topology: {}", nodes); - connectionProvider.setKnownNodes(nodes); - } catch (Exception e) { - LOG.error("Error during background refresh", e); - } - }; - - try { - connection.registerCloseables(new ArrayList<>(), sentinelTopologyRefresh); - sentinelTopologyRefresh.bind(runnable); - } catch (RuntimeException e) { - - connection.close(); - throw e; - } - - return connection; - } - - private static StatefulRedisMasterSlaveConnection connectMasterSlave(RedisClient redisClient, - RedisCodec codec, RedisURI redisURI) { - - Map> initialConnections = new HashMap<>(); - - try { - - StatefulRedisConnection nodeConnection = redisClient.connect(codec, redisURI); - initialConnections.put(redisURI, nodeConnection); - - TopologyProvider topologyProvider = new MasterSlaveTopologyProvider(nodeConnection, redisURI); - - List nodes = topologyProvider.getNodes(); - RedisNodeDescription node = getConnectedNode(redisURI, nodes); - - if (node.getRole() != RedisInstance.Role.MASTER) { - - RedisNodeDescription master = lookupMaster(nodes); - nodeConnection = redisClient.connect(codec, master.getUri()); - initialConnections.put(master.getUri(), nodeConnection); - topologyProvider = new MasterSlaveTopologyProvider(nodeConnection, master.getUri()); - } - - MasterSlaveTopologyRefresh refresh = new MasterSlaveTopologyRefresh(redisClient, topologyProvider); - MasterSlaveConnectionProvider connectionProvider = new MasterSlaveConnectionProvider<>(redisClient, codec, - redisURI, 
initialConnections); - - connectionProvider.setKnownNodes(refresh.getNodes(redisURI)); - - MasterSlaveChannelWriter channelWriter = new MasterSlaveChannelWriter<>(connectionProvider); - - StatefulRedisMasterSlaveConnectionImpl connection = new StatefulRedisMasterSlaveConnectionImpl<>( - channelWriter, codec, redisURI.getTimeout(), redisURI.getUnit()); - - return connection; - - } catch (RuntimeException e) { - for (StatefulRedisConnection connection : initialConnections.values()) { - connection.close(); - } - throw e; - } - } - - private static StatefulRedisMasterSlaveConnection connectStaticMasterSlave(RedisClient redisClient, - RedisCodec codec, Iterable redisURIs) { - - Map> initialConnections = new HashMap<>(); - - try { - TopologyProvider topologyProvider = new StaticMasterSlaveTopologyProvider(redisClient, redisURIs); - - RedisURI seedNode = redisURIs.iterator().next(); - - MasterSlaveTopologyRefresh refresh = new MasterSlaveTopologyRefresh(redisClient, topologyProvider); - MasterSlaveConnectionProvider connectionProvider = new MasterSlaveConnectionProvider<>(redisClient, codec, - seedNode, initialConnections); - - List nodes = refresh.getNodes(seedNode); - if (nodes.isEmpty()) { - throw new RedisException(String.format("Cannot determine topology from %s", redisURIs)); - } - - connectionProvider.setKnownNodes(nodes); - - MasterSlaveChannelWriter channelWriter = new MasterSlaveChannelWriter<>(connectionProvider); - - StatefulRedisMasterSlaveConnectionImpl connection = new StatefulRedisMasterSlaveConnectionImpl<>( - channelWriter, codec, seedNode.getTimeout(), seedNode.getUnit()); - - return connection; - - } catch (RuntimeException e) { - for (StatefulRedisConnection connection : initialConnections.values()) { - connection.close(); - } - throw e; - } - } - - private static RedisNodeDescription lookupMaster(List nodes) { - - Optional first = nodes.stream().filter(n -> n.getRole() == RedisInstance.Role.MASTER).findFirst(); - return first.orElseThrow(() -> new IllegalStateException("Cannot lookup master from " + nodes)); - } - - private static RedisNodeDescription getConnectedNode(RedisURI redisURI, List nodes) { - - Optional first = nodes.stream().filter(n -> equals(redisURI, n)).findFirst(); - return first.orElseThrow( - () -> new IllegalStateException("Cannot lookup node descriptor for connected node at " + redisURI)); - } - - private static boolean equals(RedisURI redisURI, RedisNodeDescription node) { - return node.getUri().getHost().equals(redisURI.getHost()) && node.getUri().getPort() == redisURI.getPort(); - } - - private static boolean isSentinel(RedisURI redisURI) { - return !redisURI.getSentinels().isEmpty(); - } - -} diff --git a/src/main/java/com/lambdaworks/redis/masterslave/MasterSlaveChannelWriter.java b/src/main/java/com/lambdaworks/redis/masterslave/MasterSlaveChannelWriter.java deleted file mode 100644 index 6216cebc03..0000000000 --- a/src/main/java/com/lambdaworks/redis/masterslave/MasterSlaveChannelWriter.java +++ /dev/null @@ -1,109 +0,0 @@ -package com.lambdaworks.redis.masterslave; - -import com.lambdaworks.redis.ReadFrom; -import com.lambdaworks.redis.RedisChannelHandler; -import com.lambdaworks.redis.RedisChannelWriter; -import com.lambdaworks.redis.RedisException; -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.internal.LettuceAssert; -import com.lambdaworks.redis.protocol.ProtocolKeyword; -import com.lambdaworks.redis.protocol.RedisCommand; - -/** - * Channel writer/dispatcher that dispatches commands based on the 
intent to different connections. - * - * @author Mark Paluch - */ -class MasterSlaveChannelWriter implements RedisChannelWriter { - - private MasterSlaveConnectionProvider masterSlaveConnectionProvider; - private boolean closed = false; - - public MasterSlaveChannelWriter(MasterSlaveConnectionProvider masterSlaveConnectionProvider) { - this.masterSlaveConnectionProvider = masterSlaveConnectionProvider; - } - - @Override - public > C write(C command) { - - LettuceAssert.notNull(command, "Command must not be null"); - - if (closed) { - throw new RedisException("Connection is closed"); - } - - MasterSlaveConnectionProvider.Intent intent = getIntent(command.getType()); - StatefulRedisConnection connection = masterSlaveConnectionProvider.getConnection(intent); - - return connection.dispatch(command); - } - - private MasterSlaveConnectionProvider.Intent getIntent(ProtocolKeyword type) { - - for (ProtocolKeyword readOnlyCommand : ReadOnlyCommands.READ_ONLY_COMMANDS) { - if (readOnlyCommand == type) { - return MasterSlaveConnectionProvider.Intent.READ; - } - } - return MasterSlaveConnectionProvider.Intent.WRITE; - } - - @Override - public void close() { - - if (closed) { - return; - } - - closed = true; - - if (masterSlaveConnectionProvider != null) { - masterSlaveConnectionProvider.close(); - masterSlaveConnectionProvider = null; - } - } - - public MasterSlaveConnectionProvider getMasterSlaveConnectionProvider() { - return masterSlaveConnectionProvider; - } - - @Override - public void setRedisChannelHandler(RedisChannelHandler redisChannelHandler) { - - } - - @Override - public void setAutoFlushCommands(boolean autoFlush) { - masterSlaveConnectionProvider.setAutoFlushCommands(autoFlush); - } - - @Override - public void flushCommands() { - masterSlaveConnectionProvider.flushCommands(); - } - - @Override - public void reset() { - masterSlaveConnectionProvider.reset(); - } - - /** - * Set from which nodes data is read. The setting is used as default for read operations on this connection. See the - * documentation for {@link ReadFrom} for more information. - * - * @param readFrom the read from setting, must not be {@literal null} - */ - public void setReadFrom(ReadFrom readFrom) { - masterSlaveConnectionProvider.setReadFrom(readFrom); - } - - /** - * Gets the {@link ReadFrom} setting for this connection. Defaults to {@link ReadFrom#MASTER} if not set. 
- * - * @return the read from setting - */ - public ReadFrom getReadFrom() { - return masterSlaveConnectionProvider.getReadFrom(); - } - -} diff --git a/src/main/java/com/lambdaworks/redis/masterslave/MasterSlaveConnectionProvider.java b/src/main/java/com/lambdaworks/redis/masterslave/MasterSlaveConnectionProvider.java deleted file mode 100644 index 3350621229..0000000000 --- a/src/main/java/com/lambdaworks/redis/masterslave/MasterSlaveConnectionProvider.java +++ /dev/null @@ -1,305 +0,0 @@ -package com.lambdaworks.redis.masterslave; - -import static com.lambdaworks.redis.masterslave.MasterSlaveUtils.findNodeByHostAndPort; - -import java.util.*; -import java.util.concurrent.ConcurrentHashMap; -import java.util.function.Function; - -import com.lambdaworks.redis.ReadFrom; -import com.lambdaworks.redis.RedisClient; -import com.lambdaworks.redis.RedisException; -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.api.StatefulConnection; -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.cluster.models.partitions.Partitions; -import com.lambdaworks.redis.codec.RedisCodec; -import com.lambdaworks.redis.internal.LettuceSets; -import com.lambdaworks.redis.models.role.RedisInstance; -import com.lambdaworks.redis.models.role.RedisNodeDescription; - -import io.netty.util.internal.logging.InternalLogger; -import io.netty.util.internal.logging.InternalLoggerFactory; - -/** - * Connection provider for master/slave setups. The connection provider - * - * @author Mark Paluch - * @since 4.1 - */ -public class MasterSlaveConnectionProvider { - - private static final InternalLogger logger = InternalLoggerFactory.getInstance(MasterSlaveConnectionProvider.class); - private final boolean debugEnabled; - - // Contains HostAndPort-identified connections. - private final Map> connections = new ConcurrentHashMap<>(); - private final ConnectionFactory connectionFactory; - private final RedisURI initialRedisUri; - - private List knownNodes = new ArrayList<>(); - - private boolean autoFlushCommands = true; - private Object stateLock = new Object(); - private ReadFrom readFrom; - - @Deprecated - public MasterSlaveConnectionProvider(RedisClient redisClient, RedisCodec redisCodec, - StatefulRedisConnection masterConnection, RedisURI initialRedisUri) { - this.initialRedisUri = initialRedisUri; - this.debugEnabled = logger.isDebugEnabled(); - this.connectionFactory = new ConnectionFactory<>(redisClient, redisCodec); - connections.put(toConnectionKey(initialRedisUri), masterConnection); - } - - MasterSlaveConnectionProvider(RedisClient redisClient, RedisCodec redisCodec, RedisURI initialRedisUri, - Map> initialConnections) { - - this.initialRedisUri = initialRedisUri; - this.debugEnabled = logger.isDebugEnabled(); - this.connectionFactory = new ConnectionFactory<>(redisClient, redisCodec); - - for (Map.Entry> entry : initialConnections.entrySet()) { - connections.put(toConnectionKey(entry.getKey()), entry.getValue()); - } - } - - /** - * Retrieve a {@link StatefulRedisConnection} by the intent. - * {@link com.lambdaworks.redis.masterslave.MasterSlaveConnectionProvider.Intent#WRITE} intentions use the master - * connection, {@link com.lambdaworks.redis.masterslave.MasterSlaveConnectionProvider.Intent#READ} intentions lookup one or - * more read candidates using the {@link ReadFrom} setting. - * - * @param intent command intent - * @return the connection. 
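Putting the two intents together, a hedged usage sketch:

    StatefulRedisMasterSlaveConnection<String, String> connection = MasterSlave.connect(
            client, new Utf8StringCodec(), RedisURI.create("redis://localhost"));

    connection.setReadFrom(ReadFrom.SLAVE);       // READ intents may be routed to a slave
    connection.sync().set("key", "value");        // WRITE intent, always dispatched to the master
    String value = connection.sync().get("key");  // READ intent, candidate selected via ReadFrom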
- */ - public StatefulRedisConnection getConnection(Intent intent) { - - if (debugEnabled) { - logger.debug("getConnection(" + intent + ")"); - } - - if (readFrom != null && intent == Intent.READ) { - List selection = readFrom.select(new ReadFrom.Nodes() { - @Override - public List getNodes() { - return knownNodes; - } - - @Override - public Iterator iterator() { - return knownNodes.iterator(); - } - }); - - if (selection.isEmpty()) { - throw new RedisException(String.format("Cannot determine a node to read (Known nodes: %s) with setting %s", - knownNodes, readFrom)); - } - try { - for (RedisNodeDescription redisNodeDescription : selection) { - StatefulRedisConnection readerCandidate = getConnection(redisNodeDescription); - if (!readerCandidate.isOpen()) { - continue; - } - return readerCandidate; - } - - return getConnection(selection.get(0)); - } catch (RuntimeException e) { - throw new RedisException(e); - } - } - - return getConnection(getMaster()); - } - - protected StatefulRedisConnection getConnection(RedisNodeDescription redisNodeDescription) { - return connections.computeIfAbsent( - new ConnectionKey(redisNodeDescription.getUri().getHost(), redisNodeDescription.getUri().getPort()), - connectionFactory); - } - - /** - * - * @return number of connections. - */ - protected long getConnectionCount() { - return connections.size(); - } - - /** - * Retrieve a set of PoolKey's for all pooled connections that are within the pool but not within the {@link Partitions}. - * - * @return Set of {@link ConnectionKey}s - */ - private Set getStaleConnectionKeys() { - Map> map = new HashMap<>(connections); - Set stale = new HashSet<>(); - - for (ConnectionKey connectionKey : map.keySet()) { - - if (connectionKey.host != null - && findNodeByHostAndPort(knownNodes, connectionKey.host, connectionKey.port) != null) { - continue; - } - stale.add(connectionKey); - } - return stale; - } - - /** - * Close stale connections. - */ - public void closeStaleConnections() { - logger.debug("closeStaleConnections() count before expiring: {}", getConnectionCount()); - - Set stale = getStaleConnectionKeys(); - - for (ConnectionKey connectionKey : stale) { - StatefulRedisConnection connection = connections.get(connectionKey); - if (connection != null) { - connections.remove(connectionKey); - connection.close(); - } - } - - logger.debug("closeStaleConnections() count after expiring: {}", getConnectionCount()); - } - - public void reset() { - allConnections().forEach(StatefulRedisConnection::reset); - } - - /** - * Close all connections. 
- */ - public void close() { - allConnections().forEach(StatefulRedisConnection::close); - connections.clear(); - } - - public void flushCommands() { - allConnections().forEach(StatefulConnection::flushCommands); - } - - public void setAutoFlushCommands(boolean autoFlushCommands) { - synchronized (stateLock) { - } - allConnections().forEach(connection -> connection.setAutoFlushCommands(autoFlushCommands)); - } - - protected Collection> allConnections() { - - Set> connections = LettuceSets.newHashSet(this.connections.values()); - return (Collection) connections; - } - - /** - * - * @param knownNodes - */ - public void setKnownNodes(Collection knownNodes) { - synchronized (stateLock) { - - this.knownNodes.clear(); - this.knownNodes.addAll(knownNodes); - - closeStaleConnections(); - } - } - - public ReadFrom getReadFrom() { - return readFrom; - } - - public void setReadFrom(ReadFrom readFrom) { - synchronized (stateLock) { - this.readFrom = readFrom; - } - } - - public RedisNodeDescription getMaster() { - for (RedisNodeDescription knownNode : knownNodes) { - if (knownNode.getRole() == RedisInstance.Role.MASTER) { - return knownNode; - } - } - - throw new RedisException(String.format("Master is currently unknown: %s", knownNodes)); - } - - private class ConnectionFactory implements Function> { - - private final RedisClient redisClient; - private final RedisCodec redisCodec; - - public ConnectionFactory(RedisClient redisClient, RedisCodec redisCodec) { - this.redisClient = redisClient; - this.redisCodec = redisCodec; - } - - @Override - public StatefulRedisConnection apply(ConnectionKey key) { - - RedisURI.Builder builder = RedisURI.Builder.redis(key.host, key.port); - - if (initialRedisUri.getPassword() != null && initialRedisUri.getPassword().length != 0) { - builder.withPassword(new String(initialRedisUri.getPassword())); - } - - builder.withDatabase(initialRedisUri.getDatabase()); - - StatefulRedisConnection connection = redisClient.connect(redisCodec, builder.build()); - - synchronized (stateLock) { - connection.setAutoFlushCommands(autoFlushCommands); - } - - return connection; - } - } - - private ConnectionKey toConnectionKey(RedisURI redisURI) { - return new ConnectionKey(redisURI.getHost(), redisURI.getPort()); - } - - /** - * Connection to identify a connection by host/port. - */ - private static class ConnectionKey { - private final String host; - private final int port; - - public ConnectionKey(String host, int port) { - this.host = host; - this.port = port; - } - - @Override - public boolean equals(Object o) { - if (this == o) - return true; - if (!(o instanceof ConnectionKey)) - return false; - - ConnectionKey that = (ConnectionKey) o; - - if (port != that.port) - return false; - return !(host != null ? !host.equals(that.host) : that.host != null); - - } - - @Override - public int hashCode() { - int result = (host != null ? 
host.hashCode() : 0); - result = 31 * result + port; - return result; - } - } - - enum Intent { - READ, WRITE; - } -} diff --git a/src/main/java/com/lambdaworks/redis/masterslave/MasterSlaveTopologyProvider.java b/src/main/java/com/lambdaworks/redis/masterslave/MasterSlaveTopologyProvider.java deleted file mode 100644 index 83c55186ff..0000000000 --- a/src/main/java/com/lambdaworks/redis/masterslave/MasterSlaveTopologyProvider.java +++ /dev/null @@ -1,162 +0,0 @@ -package com.lambdaworks.redis.masterslave; - -import java.util.ArrayList; -import java.util.List; -import java.util.regex.Matcher; -import java.util.regex.Pattern; - -import com.lambdaworks.redis.RedisException; -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.internal.LettuceAssert; -import com.lambdaworks.redis.models.role.RedisInstance; -import com.lambdaworks.redis.models.role.RedisNodeDescription; - -import io.netty.util.internal.logging.InternalLogger; -import io.netty.util.internal.logging.InternalLoggerFactory; - -/** - * Topology provider using Redis Standalone and the {@code INFO REPLICATION} output. Slaves are listed as {@code slaveN=...} - * entries. - * - * @author Mark Paluch - * @since 4.1 - */ -public class MasterSlaveTopologyProvider implements TopologyProvider { - - public final static Pattern ROLE_PATTERN = Pattern.compile("^role\\:([a-z]+)$", Pattern.MULTILINE); - public final static Pattern SLAVE_PATTERN = Pattern.compile("^slave(\\d+)\\:([a-zA-Z\\,\\=\\d\\.\\:]+)$", Pattern.MULTILINE); - public final static Pattern MASTER_HOST_PATTERN = Pattern.compile("^master_host\\:([a-zA-Z\\,\\=\\d\\.\\:]+)$", - Pattern.MULTILINE); - public final static Pattern MASTER_PORT_PATTERN = Pattern.compile("^master_port\\:(\\d+)$", Pattern.MULTILINE); - public final static Pattern IP_PATTERN = Pattern.compile("ip\\=([a-zA-Z\\d\\.\\:]+)"); - public final static Pattern PORT_PATTERN = Pattern.compile("port\\=([\\d]+)"); - - private static final InternalLogger logger = InternalLoggerFactory.getInstance(MasterSlaveTopologyProvider.class); - - private final StatefulRedisConnection connection; - private final RedisURI redisURI; - - /** - * Creates a new {@link MasterSlaveTopologyProvider}. 
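To illustrate what the patterns above match, a made-up {@code INFO REPLICATION} payload (all values are invented):

    String info = "role:master\n"
            + "connected_slaves:1\n"
            + "slave0:ip=127.0.0.1,port=6380,state=online,offset=3167,lag=0\n";

    // getNodesFromInfo(info) would report the connected node as MASTER
    // plus one SLAVE node at 127.0.0.1:6380.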
- * - * @param connection must not be {@literal null} - * @param redisURI must not be {@literal null} - */ - public MasterSlaveTopologyProvider(StatefulRedisConnection connection, RedisURI redisURI) { - - LettuceAssert.notNull(connection, "Redis Connection must not be null"); - LettuceAssert.notNull(redisURI, "RedisURI must not be null"); - - this.connection = connection; - this.redisURI = redisURI; - } - - @Override - public List getNodes() { - - logger.debug("Performing topology lookup"); - - String info = connection.sync().info("replication"); - try { - return getNodesFromInfo(info); - } catch (RuntimeException e) { - throw new RedisException(e); - } - } - - protected List getNodesFromInfo(String info) { - List result = new ArrayList<>(); - - RedisNodeDescription currentNodeDescription = getCurrentNodeDescription(info); - - result.add(currentNodeDescription); - - if (currentNodeDescription.getRole() == RedisInstance.Role.MASTER) { - result.addAll(getSlavesFromInfo(info)); - } else { - result.add(getMasterFromInfo(info)); - } - - return result; - } - - private RedisNodeDescription getCurrentNodeDescription(String info) { - - Matcher matcher = ROLE_PATTERN.matcher(info); - - if (!matcher.find()) { - throw new IllegalStateException("No role property in info " + info); - } - - return getRedisNodeDescription(matcher); - } - - private List getSlavesFromInfo(String info) { - - List slaves = new ArrayList<>(); - - Matcher matcher = SLAVE_PATTERN.matcher(info); - while (matcher.find()) { - - String group = matcher.group(2); - String ip = getNested(IP_PATTERN, group, 1); - String port = getNested(PORT_PATTERN, group, 1); - - slaves.add(new RedisMasterSlaveNode(ip, Integer.parseInt(port), redisURI, RedisInstance.Role.SLAVE)); - } - - return slaves; - } - - private RedisNodeDescription getMasterFromInfo(String info) { - - Matcher masterHostMatcher = MASTER_HOST_PATTERN.matcher(info); - Matcher masterPortMatcher = MASTER_PORT_PATTERN.matcher(info); - - boolean foundHost = masterHostMatcher.find(); - boolean foundPort = masterPortMatcher.find(); - - if (!foundHost || !foundPort) { - throw new IllegalStateException("Cannot resolve master from info " + info); - } - - String host = masterHostMatcher.group(1); - int port = Integer.parseInt(masterPortMatcher.group(1)); - - return new RedisMasterSlaveNode(host, port, redisURI, RedisInstance.Role.MASTER); - } - - private String getNested(Pattern pattern, String string, int group) { - - Matcher matcher = pattern.matcher(string); - if (matcher.find()) { - return matcher.group(group); - } - - throw new IllegalArgumentException("Cannot extract group " + group + " with pattern " + pattern + " from " + string); - - } - - private RedisNodeDescription getRedisNodeDescription(Matcher matcher) { - - String roleString = matcher.group(1); - RedisInstance.Role role = null; - - if (RedisInstance.Role.MASTER.name().equalsIgnoreCase(roleString)) { - role = RedisInstance.Role.MASTER; - } - - if (RedisInstance.Role.SLAVE.name().equalsIgnoreCase(roleString)) { - role = RedisInstance.Role.SLAVE; - } - - if (role == null) { - throw new IllegalStateException("Cannot resolve role " + roleString + " to " + RedisInstance.Role.MASTER + " or " - + RedisInstance.Role.SLAVE); - } - - return new RedisMasterSlaveNode(redisURI.getHost(), redisURI.getPort(), redisURI, role); - } - -} diff --git a/src/main/java/com/lambdaworks/redis/masterslave/MasterSlaveTopologyRefresh.java b/src/main/java/com/lambdaworks/redis/masterslave/MasterSlaveTopologyRefresh.java deleted file mode 100644 index 
c8cac87356..0000000000 --- a/src/main/java/com/lambdaworks/redis/masterslave/MasterSlaveTopologyRefresh.java +++ /dev/null @@ -1,255 +0,0 @@ -package com.lambdaworks.redis.masterslave; - -import static com.lambdaworks.redis.masterslave.MasterSlaveUtils.findNodeByUri; - -import java.util.*; -import java.util.concurrent.ExecutionException; -import java.util.concurrent.TimeUnit; - -import com.lambdaworks.redis.RedisClient; -import com.lambdaworks.redis.RedisCommandInterruptedException; -import com.lambdaworks.redis.RedisFuture; -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.cluster.models.partitions.Partitions; -import com.lambdaworks.redis.models.role.RedisNodeDescription; -import com.lambdaworks.redis.output.StatusOutput; -import com.lambdaworks.redis.protocol.*; - -import io.netty.buffer.ByteBuf; -import io.netty.util.internal.logging.InternalLogger; -import io.netty.util.internal.logging.InternalLoggerFactory; - -/** - * Utility to refresh the Master-Slave topology view based on {@link RedisNodeDescription}. - * - * @author Mark Paluch - */ -class MasterSlaveTopologyRefresh { - - private static final InternalLogger logger = InternalLoggerFactory.getInstance(MasterSlaveTopologyRefresh.class); - - private final RedisClient client; - private final TopologyProvider topologyProvider; - - public MasterSlaveTopologyRefresh(RedisClient client, TopologyProvider topologyProvider) { - this.client = client; - this.topologyProvider = topologyProvider; - } - - /** - * Load master slave nodes. Result contains an ordered list of {@link RedisNodeDescription}s. The sort key is the latency. - * Nodes with lower latency come first. - * - * @param seed collection of {@link RedisURI}s - * @return mapping between {@link RedisURI} and {@link Partitions} - */ - public List getNodes(RedisURI seed) { - - List nodes = topologyProvider.getNodes(); - - addPasswordIfNeeded(nodes, seed); - - Map> connections = getConnections(nodes); - Map> rawViews = requestPing(connections); - List result = getNodeSpecificViews(rawViews, nodes, seed); - close(connections); - - return result; - } - - private void addPasswordIfNeeded(List nodes, RedisURI seed) { - - if (seed.getPassword() != null && seed.getPassword().length != 0) { - for (RedisNodeDescription node : nodes) { - node.getUri().setPassword(new String(seed.getPassword())); - } - } - } - - protected List getNodeSpecificViews( - Map> rawViews, List nodes, RedisURI seed) { - List result = new ArrayList<>(); - - long timeout = seed.getUnit().toNanos(seed.getTimeout()); - long waitTime = 0; - Map latencies = new HashMap<>(); - - for (Map.Entry> entry : rawViews.entrySet()) { - long timeoutLeft = timeout - waitTime; - - if (timeoutLeft <= 0) { - break; - } - - long startWait = System.nanoTime(); - RedisFuture future = entry.getValue(); - - try { - - if (!future.await(timeoutLeft, TimeUnit.NANOSECONDS)) { - break; - } - waitTime += System.nanoTime() - startWait; - - future.get(); - - RedisNodeDescription redisNodeDescription = findNodeByUri(nodes, entry.getKey()); - latencies.put(redisNodeDescription, entry.getValue().duration()); - result.add(redisNodeDescription); - } catch (InterruptedException e) { - Thread.currentThread().interrupt(); - throw new RedisCommandInterruptedException(e); - } catch (ExecutionException e) { - logger.warn("Cannot retrieve partition view from " + entry.getKey(), e); - } - } - - LatencyComparator comparator = new LatencyComparator(latencies); - - Collections.sort(result, 
comparator); - - return result; - } - - /* - * Async request of views. - */ - @SuppressWarnings("unchecked") - private Map> requestPing( - Map> connections) { - Map> rawViews = new TreeMap<>(RedisUriComparator.INSTANCE); - for (Map.Entry> entry : connections.entrySet()) { - - TimedAsyncCommand timed = createPingCommand(); - - entry.getValue().dispatch(timed); - rawViews.put(entry.getKey(), timed); - } - return rawViews; - } - - protected TimedAsyncCommand createPingCommand() { - CommandArgs args = new CommandArgs<>(MasterSlaveUtils.CODEC); - Command command = new Command<>(CommandType.PING, new StatusOutput<>(MasterSlaveUtils.CODEC), - args); - return new TimedAsyncCommand<>(command); - } - - private void close(Map> connections) { - for (StatefulRedisConnection connection : connections.values()) { - connection.close(); - } - } - - /* - * Open connections where an address can be resolved. - */ - private Map> getConnections(Iterable nodes) { - Map> connections = new TreeMap<>(RedisUriComparator.INSTANCE); - - for (RedisNodeDescription node : nodes) { - - try { - StatefulRedisConnection connection = client.connect(node.getUri()); - connections.put(node.getUri(), connection); - } catch (RuntimeException e) { - logger.warn("Cannot connect to " + node.getUri(), e); - } - } - return connections; - } - - /** - * Compare {@link RedisURI} based on their host and port representation. - */ - static class RedisUriComparator implements Comparator { - - public final static RedisUriComparator INSTANCE = new RedisUriComparator(); - - @Override - public int compare(RedisURI o1, RedisURI o2) { - String h1 = ""; - String h2 = ""; - - if (o1 != null) { - h1 = o1.getHost() + ":" + o1.getPort(); - } - - if (o2 != null) { - h2 = o2.getHost() + ":" + o2.getPort(); - } - - return h1.compareToIgnoreCase(h2); - } - } - - /** - * Timed command that records the time at which the command was encoded and completed. 
- * - * @param Key type - * @param Value type - * @param Result type - */ - static class TimedAsyncCommand extends AsyncCommand { - - long encodedAtNs = -1; - long completedAtNs = -1; - - public TimedAsyncCommand(RedisCommand command) { - super(command); - } - - @Override - public void encode(ByteBuf buf) { - completedAtNs = -1; - encodedAtNs = -1; - - super.encode(buf); - encodedAtNs = System.nanoTime(); - } - - @Override - public void complete() { - completedAtNs = System.nanoTime(); - super.complete(); - } - - public long duration() { - if (completedAtNs == -1 || encodedAtNs == -1) { - return -1; - } - return completedAtNs - encodedAtNs; - } - } - - static class LatencyComparator implements Comparator { - - private final Map latencies; - - public LatencyComparator(Map latencies) { - this.latencies = latencies; - } - - @Override - public int compare(RedisNodeDescription o1, RedisNodeDescription o2) { - - Long latency1 = latencies.get(o1); - Long latency2 = latencies.get(o2); - - if (latency1 != null && latency2 != null) { - return latency1.compareTo(latency2); - } - - if (latency1 != null && latency2 == null) { - return -1; - } - - if (latency1 == null && latency2 != null) { - return 1; - } - - return 0; - } - } -} diff --git a/src/main/java/com/lambdaworks/redis/masterslave/MasterSlaveUtils.java b/src/main/java/com/lambdaworks/redis/masterslave/MasterSlaveUtils.java deleted file mode 100644 index 284340cec5..0000000000 --- a/src/main/java/com/lambdaworks/redis/masterslave/MasterSlaveUtils.java +++ /dev/null @@ -1,90 +0,0 @@ -package com.lambdaworks.redis.masterslave; - -import java.util.Collection; - -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.codec.Utf8StringCodec; -import com.lambdaworks.redis.models.role.RedisNodeDescription; - -/** - * @author Mark Paluch - */ -class MasterSlaveUtils { - static final Utf8StringCodec CODEC = new Utf8StringCodec(); - - /** - * Check if properties changed. - * - * @param o1 the first object to be compared. - * @param o2 the second object to be compared. - * @return {@literal true} if {@code MASTER} or {@code SLAVE} flags changed or the URIs are changed. - */ - static boolean isChanged(Collection o1, Collection o2) { - - if (o1.size() != o2.size()) { - return true; - } - - for (RedisNodeDescription base : o2) { - if (!essentiallyEqualsTo(base, findNodeByUri(o1, base.getUri()))) { - return true; - } - } - - return false; - } - - /** - * Lookup a {@link RedisNodeDescription} by {@link RedisURI}. - * - * @param nodes - * @param lookupUri - * @return the {@link RedisNodeDescription} or {@literal null} - */ - static RedisNodeDescription findNodeByUri(Collection nodes, RedisURI lookupUri) { - return findNodeByHostAndPort(nodes, lookupUri.getHost(), lookupUri.getPort()); - } - - /** - * Lookup a {@link RedisNodeDescription} by {@code host} and {@code port}. - * - * @param nodes - * @param host - * @param port - * @return the {@link RedisNodeDescription} or {@literal null} - */ - static RedisNodeDescription findNodeByHostAndPort(Collection nodes, String host, int port) { - for (RedisNodeDescription node : nodes) { - RedisURI nodeUri = node.getUri(); - if (nodeUri.getHost().equals(host) && nodeUri.getPort() == port) { - return node; - } - } - return null; - } - - /** - * Check for {@code MASTER} or {@code SLAVE} roles and the URI. - * - * @param o1 the first object to be compared. - * @param o2 the second object to be compared. - * @return {@literal true} if {@code MASTER} or {@code SLAVE} flags changed or the URI changed. 
- */ - static boolean essentiallyEqualsTo(RedisNodeDescription o1, RedisNodeDescription o2) { - - if (o2 == null) { - return false; - } - - if (o1.getRole() != o2.getRole()) { - return false; - } - - if (!o1.getUri().equals(o2.getUri())) { - return false; - } - - return true; - } - -} diff --git a/src/main/java/com/lambdaworks/redis/masterslave/ReadOnlyCommands.java b/src/main/java/com/lambdaworks/redis/masterslave/ReadOnlyCommands.java deleted file mode 100644 index 157f2412e3..0000000000 --- a/src/main/java/com/lambdaworks/redis/masterslave/ReadOnlyCommands.java +++ /dev/null @@ -1,41 +0,0 @@ -package com.lambdaworks.redis.masterslave; - -import java.util.HashSet; -import java.util.Set; - -import com.lambdaworks.redis.protocol.CommandType; -import com.lambdaworks.redis.protocol.ProtocolKeyword; - -/** - * Contains all command names that are read-only commands. - * - * @author Mark Paluch - */ -class ReadOnlyCommands { - - public final static ProtocolKeyword READ_ONLY_COMMANDS[]; - - static { - - Set set = new HashSet(CommandName.values().length); - - for (CommandName commandNames : CommandName.values()) { - set.add(CommandType.valueOf(commandNames.name())); - } - - READ_ONLY_COMMANDS = set.toArray(new ProtocolKeyword[set.size()]); - } - - enum CommandName { - ASKING, BITCOUNT, BITPOS, CLIENT, COMMAND, DUMP, ECHO, EXISTS, - /**/GEODIST, GEOPOS, GEORADIUS, GEORADIUSBYMEMBER, GEOHASH, GET, GETBIT, - /**/GETRANGE, HEXISTS, HGET, HGETALL, HKEYS, HLEN, HMGET, HSCAN, HSTRLEN, - /**/HVALS, INFO, KEYS, LINDEX, LLEN, LRANGE, MGET, MULTI, PFCOUNT, PTTL, - /**/RANDOMKEY, READWRITE, SCAN, SCARD, SCRIPT, - /**/SDIFF, SINTER, SISMEMBER, SMEMBERS, SRANDMEMBER, SSCAN, STRLEN, - /**/SUNION, TIME, TTL, TYPE, WAIT, ZCARD, ZCOUNT, ZLEXCOUNT, ZRANGE, - /**/ZRANGEBYLEX, ZRANGEBYSCORE, ZRANK, ZREVRANGE, /* ZREVRANGEBYLEX , */ZREVRANGEBYSCORE, ZREVRANK, ZSCAN, ZSCORE, - - } - -} diff --git a/src/main/java/com/lambdaworks/redis/masterslave/RedisMasterSlaveNode.java b/src/main/java/com/lambdaworks/redis/masterslave/RedisMasterSlaveNode.java deleted file mode 100644 index 74efeac8d4..0000000000 --- a/src/main/java/com/lambdaworks/redis/masterslave/RedisMasterSlaveNode.java +++ /dev/null @@ -1,67 +0,0 @@ -package com.lambdaworks.redis.masterslave; - -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.models.role.RedisNodeDescription; - -/** - * A node within a Redis Master-Slave setup. 
- * - * @author Mark Paluch - */ -class RedisMasterSlaveNode implements RedisNodeDescription { - - private final RedisURI redisURI; - private final Role role; - - public RedisMasterSlaveNode(String host, int port, RedisURI seed, Role role) { - - RedisURI.Builder builder = RedisURI.Builder.redis(host, port); - if (seed.getPassword() != null && seed.getPassword().length != 0) { - builder.withPassword(new String(seed.getPassword())); - } - - this.redisURI = builder.withDatabase(seed.getDatabase()).build(); - this.role = role; - } - - @Override - public RedisURI getUri() { - return redisURI; - } - - @Override - public Role getRole() { - return role; - } - - @Override - public boolean equals(Object o) { - if (this == o) - return true; - if (!(o instanceof RedisMasterSlaveNode)) - return false; - - RedisMasterSlaveNode that = (RedisMasterSlaveNode) o; - - if (!redisURI.equals(that.redisURI)) - return false; - return role == that.role; - } - - @Override - public int hashCode() { - int result = redisURI.hashCode(); - result = 31 * result + role.hashCode(); - return result; - } - - @Override - public String toString() { - final StringBuffer sb = new StringBuffer(); - sb.append(getClass().getSimpleName()); - sb.append(" [redisURI=").append(redisURI); - sb.append(", role=").append(role); - sb.append(']'); - return sb.toString(); - } -} diff --git a/src/main/java/com/lambdaworks/redis/masterslave/SentinelTopologyProvider.java b/src/main/java/com/lambdaworks/redis/masterslave/SentinelTopologyProvider.java deleted file mode 100644 index 79ec00c98e..0000000000 --- a/src/main/java/com/lambdaworks/redis/masterslave/SentinelTopologyProvider.java +++ /dev/null @@ -1,106 +0,0 @@ -package com.lambdaworks.redis.masterslave; - -import static com.lambdaworks.redis.masterslave.MasterSlaveUtils.CODEC; - -import java.util.ArrayList; -import java.util.List; -import java.util.Map; -import java.util.concurrent.ExecutionException; -import java.util.concurrent.TimeUnit; -import java.util.concurrent.TimeoutException; -import java.util.stream.Collectors; - -import com.lambdaworks.redis.RedisClient; -import com.lambdaworks.redis.RedisException; -import com.lambdaworks.redis.RedisFuture; -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.internal.LettuceAssert; -import com.lambdaworks.redis.models.role.RedisInstance; -import com.lambdaworks.redis.models.role.RedisNodeDescription; -import com.lambdaworks.redis.sentinel.api.StatefulRedisSentinelConnection; - -import io.netty.util.internal.logging.InternalLogger; -import io.netty.util.internal.logging.InternalLoggerFactory; - -/** - * Topology provider using Redis Sentinel and the Sentinel API. - * - * @author Mark Paluch - * @since 4.1 - */ -public class SentinelTopologyProvider implements TopologyProvider { - - private static final InternalLogger logger = InternalLoggerFactory.getInstance(SentinelTopologyProvider.class); - - private final String masterId; - private final RedisClient redisClient; - private final RedisURI sentinelUri; - private final long timeout; - private final TimeUnit timeUnit; - - /** - * Creates a new {@link SentinelTopologyProvider}. - * - * @param masterId must not be empty - * @param redisClient must not be {@literal null}. - * @param sentinelUri must not be {@literal null}. 
- */ - public SentinelTopologyProvider(String masterId, RedisClient redisClient, RedisURI sentinelUri) { - - LettuceAssert.notEmpty(masterId, "MasterId must not be empty"); - LettuceAssert.notNull(redisClient, "RedisClient must not be null"); - LettuceAssert.notNull(sentinelUri, "Sentinel URI must not be null"); - - this.masterId = masterId; - this.redisClient = redisClient; - this.sentinelUri = sentinelUri; - this.timeout = sentinelUri.getTimeout(); - this.timeUnit = sentinelUri.getUnit(); - } - - @Override - public List getNodes() { - - logger.debug("lookup topology for masterId {}", masterId); - - try (StatefulRedisSentinelConnection connection = redisClient.connectSentinel(CODEC, sentinelUri)) { - - RedisFuture> masterFuture = connection.async().master(masterId); - RedisFuture>> slavesFuture = connection.async().slaves(masterId); - - List result = new ArrayList<>(); - try { - Map master = masterFuture.get(timeout, timeUnit); - List> slaves = slavesFuture.get(timeout, timeUnit); - - result.add(toNode(master, RedisInstance.Role.MASTER)); - result.addAll(slaves.stream().filter(SentinelTopologyProvider::isAvailable) - .map(map -> toNode(map, RedisInstance.Role.SLAVE)).collect(Collectors.toList())); - - } catch (ExecutionException | InterruptedException | TimeoutException e) { - throw new RedisException(e); - } - - return result; - } - } - - private static boolean isAvailable(Map map) { - - String flags = map.get("flags"); - if (flags != null) { - if (flags.contains("s_down") || flags.contains("o_down") || flags.contains("disconnected")) { - return false; - } - } - return true; - } - - private RedisNodeDescription toNode(Map map, RedisInstance.Role role) { - - String ip = map.get("ip"); - String port = map.get("port"); - return new RedisMasterSlaveNode(ip, Integer.parseInt(port), sentinelUri, role); - } - -} diff --git a/src/main/java/com/lambdaworks/redis/masterslave/SentinelTopologyRefresh.java b/src/main/java/com/lambdaworks/redis/masterslave/SentinelTopologyRefresh.java deleted file mode 100644 index 08acc1ecb8..0000000000 --- a/src/main/java/com/lambdaworks/redis/masterslave/SentinelTopologyRefresh.java +++ /dev/null @@ -1,160 +0,0 @@ -package com.lambdaworks.redis.masterslave; - -import java.io.Closeable; -import java.io.IOException; -import java.util.*; -import java.util.concurrent.TimeUnit; -import java.util.concurrent.atomic.AtomicReference; - -import com.lambdaworks.redis.RedisClient; -import com.lambdaworks.redis.RedisConnectionException; -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.api.StatefulConnection; -import com.lambdaworks.redis.codec.Utf8StringCodec; -import com.lambdaworks.redis.pubsub.RedisPubSubAdapter; -import com.lambdaworks.redis.pubsub.StatefulRedisPubSubConnection; - -import io.netty.util.concurrent.EventExecutorGroup; -import io.netty.util.internal.logging.InternalLogger; -import io.netty.util.internal.logging.InternalLoggerFactory; - -/** - * Sentinel Pub/Sub listener-enabled topology refresh. 
- * - * @author Mark Paluch - * @since 4.2 - */ -class SentinelTopologyRefresh implements Closeable { - - private final static InternalLogger LOG = InternalLoggerFactory.getInstance(SentinelTopologyRefresh.class); - - private final List> pubSubConnections = new ArrayList<>(); - private final RedisClient redisClient; - private final String masterId; - private final List sentinels; - private final AtomicReference timeoutRef = new AtomicReference(); - private final Set PROCESSING_CHANNELS = new HashSet<>(Arrays.asList("failover-end", "failover-end-for-timeout")); - - private int timeout = 5; - private TimeUnit timeUnit = TimeUnit.SECONDS; - - private RedisPubSubAdapter adapter; - - SentinelTopologyRefresh(RedisClient redisClient, String masterId, List sentinels) { - - this.redisClient = redisClient; - this.masterId = masterId; - this.sentinels = sentinels; - } - - @Override - public void close() throws IOException { - - pubSubConnections.forEach(c -> c.removeListener(adapter)); - pubSubConnections.forEach(StatefulConnection::close); - } - - void bind(Runnable runnable) { - - Utf8StringCodec codec = new Utf8StringCodec(); - AtomicReference ref = new AtomicReference<>(); - - sentinels.forEach(redisURI -> { - - try { - StatefulRedisPubSubConnection pubSubConnection = redisClient.connectPubSub(codec, redisURI); - pubSubConnections.add(pubSubConnection); - } catch (RedisConnectionException e) { - if (ref.get() == null) { - ref.set(e); - } else { - ref.get().addSuppressed(e); - } - } - }); - - if (sentinels.isEmpty() && ref.get() != null) { - throw ref.get(); - } - - adapter = new RedisPubSubAdapter() { - - @Override - public void message(String pattern, String channel, String message) { - - if (processingAllowed(channel, message)) { - - LOG.debug("Received topology changed signal from Redis Sentinel, scheduling topology update"); - - Timeout timeout = timeoutRef.get(); - if (timeout == null) { - getEventExecutor().submit(runnable); - } else { - getEventExecutor().schedule(runnable, timeout.remaining(), TimeUnit.MILLISECONDS); - } - } - } - }; - - pubSubConnections.forEach(c -> { - - c.addListener(adapter); - c.async().psubscribe("*"); - }); - - } - - private boolean processingAllowed(String channel, String message) { - - if (getEventExecutor().isShuttingDown()) { - return false; - } - - if (!messageMatches(channel, message)) { - return false; - } - - Timeout existingTimeout = timeoutRef.get(); - - if (existingTimeout != null) { - if (!existingTimeout.isExpired()) { - return false; - } - } - - Timeout timeout = new Timeout(this.timeout, this.timeUnit); - return timeoutRef.compareAndSet(existingTimeout, timeout); - } - - protected EventExecutorGroup getEventExecutor() { - return redisClient.getResources().eventExecutorGroup(); - } - - private boolean messageMatches(String channel, String message) { - - // trailing spaces after the master name are not bugs - if (channel.equals("+elected-leader")) { - if (message.startsWith(String.format("master %s ", masterId))) { - return true; - } - } - - if (channel.equals("+switch-master")) { - if (message.startsWith(String.format("%s ", masterId))) { - return true; - } - } - - if (channel.equals("fix-slave-config")) { - if (message.contains(String.format("@ %s ", masterId))) { - return true; - } - } - - if (PROCESSING_CHANNELS.contains(channel)) { - return true; - } - - return false; - } -} diff --git a/src/main/java/com/lambdaworks/redis/masterslave/StatefulRedisMasterSlaveConnection.java 
b/src/main/java/com/lambdaworks/redis/masterslave/StatefulRedisMasterSlaveConnection.java deleted file mode 100644 index 4532e1c143..0000000000 --- a/src/main/java/com/lambdaworks/redis/masterslave/StatefulRedisMasterSlaveConnection.java +++ /dev/null @@ -1,30 +0,0 @@ -package com.lambdaworks.redis.masterslave; - -import com.lambdaworks.redis.ReadFrom; -import com.lambdaworks.redis.api.StatefulRedisConnection; - -/** - * Redis Master-Slave connection. The connection allows slave reads by setting {@link ReadFrom}. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.1 - */ -public interface StatefulRedisMasterSlaveConnection extends StatefulRedisConnection { - - /** - * Set from which nodes data is read. The setting is used as default for read operations on this connection. See the - * documentation for {@link ReadFrom} for more information. - * - * @param readFrom the read from setting, must not be {@literal null} - */ - void setReadFrom(ReadFrom readFrom); - - /** - * Gets the {@link ReadFrom} setting for this connection. Defaults to {@link ReadFrom#MASTER} if not set. - * - * @return the read from setting - */ - ReadFrom getReadFrom(); -} diff --git a/src/main/java/com/lambdaworks/redis/masterslave/StatefulRedisMasterSlaveConnectionImpl.java b/src/main/java/com/lambdaworks/redis/masterslave/StatefulRedisMasterSlaveConnectionImpl.java deleted file mode 100644 index 9eb9823641..0000000000 --- a/src/main/java/com/lambdaworks/redis/masterslave/StatefulRedisMasterSlaveConnectionImpl.java +++ /dev/null @@ -1,42 +0,0 @@ -package com.lambdaworks.redis.masterslave; - -import java.util.concurrent.TimeUnit; - -import com.lambdaworks.redis.ReadFrom; -import com.lambdaworks.redis.StatefulRedisConnectionImpl; -import com.lambdaworks.redis.codec.RedisCodec; - -/** - * @author Mark Paluch - */ -class StatefulRedisMasterSlaveConnectionImpl extends StatefulRedisConnectionImpl implements - StatefulRedisMasterSlaveConnection { - - /** - * Initialize a new connection. - * - * @param writer the channel writer - * @param codec Codec used to encode/decode keys and values. - * @param timeout Maximum time to wait for a response. - * @param unit Unit of time for the timeout. 
- */ - public StatefulRedisMasterSlaveConnectionImpl(MasterSlaveChannelWriter writer, RedisCodec codec, long timeout, - TimeUnit unit) { - super(writer, codec, timeout, unit); - } - - @Override - public void setReadFrom(ReadFrom readFrom) { - getChannelWriter().setReadFrom(readFrom); - } - - @Override - public ReadFrom getReadFrom() { - return getChannelWriter().getReadFrom(); - } - - @Override - public MasterSlaveChannelWriter getChannelWriter() { - return (MasterSlaveChannelWriter) super.getChannelWriter(); - } -} diff --git a/src/main/java/com/lambdaworks/redis/masterslave/StaticMasterSlaveTopologyProvider.java b/src/main/java/com/lambdaworks/redis/masterslave/StaticMasterSlaveTopologyProvider.java deleted file mode 100644 index 010d959f34..0000000000 --- a/src/main/java/com/lambdaworks/redis/masterslave/StaticMasterSlaveTopologyProvider.java +++ /dev/null @@ -1,97 +0,0 @@ -package com.lambdaworks.redis.masterslave; - -import java.util.*; -import java.util.concurrent.ExecutionException; -import java.util.concurrent.Future; - -import com.lambdaworks.redis.*; -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.internal.LettuceAssert; -import com.lambdaworks.redis.models.role.RedisInstance; -import com.lambdaworks.redis.models.role.RedisNodeDescription; -import com.lambdaworks.redis.models.role.RoleParser; - -import io.netty.util.internal.logging.InternalLogger; -import io.netty.util.internal.logging.InternalLoggerFactory; - -/** - * Topology provider for a static node collection. This provider uses a static collection of nodes to determine the role of each - * {@link RedisURI node}. Node roles may change during runtime but the configuration must remain the same. This - * {@link TopologyProvider} does not auto-discover nodes. 
- * - * @author Mark Paluch - */ -public class StaticMasterSlaveTopologyProvider implements TopologyProvider { - - private static final InternalLogger logger = InternalLoggerFactory.getInstance(StaticMasterSlaveTopologyProvider.class); - - private final RedisClient redisClient; - private final Iterable redisURIs; - - public StaticMasterSlaveTopologyProvider(RedisClient redisClient, Iterable redisURIs) { - - LettuceAssert.notNull(redisClient, "RedisClient must not be null"); - LettuceAssert.notNull(redisURIs, "RedisURIs must not be null"); - LettuceAssert.notNull(redisURIs.iterator().hasNext(), "RedisURIs must not be empty"); - - this.redisClient = redisClient; - this.redisURIs = redisURIs; - } - - @Override - @SuppressWarnings("rawtypes") - public List getNodes() { - - List> connections = new ArrayList<>(); - Map>> roles = new HashMap<>(); - - try { - for (RedisURI redisURI : redisURIs) { - try { - StatefulRedisConnection connection = redisClient.connect(redisURI); - connections.add(connection); - - roles.put(redisURI, connection.async().role()); - } catch (RuntimeException e) { - logger.warn("Cannot connect to {}", redisURI, e); - } - } - - RedisURI next = redisURIs.iterator().next(); - boolean success = LettuceFutures.awaitAll(next.getTimeout(), next.getUnit(), - roles.values().toArray(new Future[roles.size()])); - - if (success) { - - List result = new ArrayList<>(); - for (Map.Entry>> entry : roles.entrySet()) { - - if (!entry.getValue().isDone()) { - continue; - } - - RedisURI key = entry.getKey(); - - RedisInstance redisInstance = RoleParser.parse(entry.getValue().get()); - result.add(new RedisMasterSlaveNode(key.getHost(), key.getPort(), key, redisInstance.getRole())); - } - - return result; - } - } catch (ExecutionException e) { - throw new IllegalStateException(e); - } catch (InterruptedException e) { - - Thread.currentThread().interrupt(); - throw new RedisCommandInterruptedException(e); - - } finally { - - for (StatefulRedisConnection connection : connections) { - connection.close(); - } - } - - return Collections.emptyList(); - } -} diff --git a/src/main/java/com/lambdaworks/redis/masterslave/Timeout.java b/src/main/java/com/lambdaworks/redis/masterslave/Timeout.java deleted file mode 100644 index b8c37cbfb7..0000000000 --- a/src/main/java/com/lambdaworks/redis/masterslave/Timeout.java +++ /dev/null @@ -1,31 +0,0 @@ -package com.lambdaworks.redis.masterslave; - -import java.util.concurrent.TimeUnit; - -/** - * Value object to represent a timeout. - * - * @author Mark Paluch - * @since 4.2 - */ -class Timeout { - - private final long expiresMs; - - public Timeout(long timeout, TimeUnit timeUnit) { - this.expiresMs = System.currentTimeMillis() + timeUnit.toMillis(timeout); - } - - public boolean isExpired() { - return expiresMs < System.currentTimeMillis(); - } - - public long remaining() { - - long diff = expiresMs - System.currentTimeMillis(); - if (diff > 0) { - return diff; - } - return 0; - } -} diff --git a/src/main/java/com/lambdaworks/redis/masterslave/TopologyProvider.java b/src/main/java/com/lambdaworks/redis/masterslave/TopologyProvider.java deleted file mode 100644 index dd354647e2..0000000000 --- a/src/main/java/com/lambdaworks/redis/masterslave/TopologyProvider.java +++ /dev/null @@ -1,25 +0,0 @@ -package com.lambdaworks.redis.masterslave; - -import java.util.List; - -import com.lambdaworks.redis.RedisException; -import com.lambdaworks.redis.models.role.RedisNodeDescription; - -/** - * Topology provider for Master-Slave topology discovery during runtime. 
Implementors of this interface return an unordered list - * of {@link RedisNodeDescription} instances. - * - * @author Mark Paluch - * @since 4.1 - */ -@FunctionalInterface -public interface TopologyProvider { - - /** - * Lookup nodes within the topology. - * - * @return list of {@link RedisNodeDescription} instances - * @throws RedisException on errors that occured during the lookup - */ - List getNodes(); -} diff --git a/src/main/java/com/lambdaworks/redis/masterslave/package-info.java b/src/main/java/com/lambdaworks/redis/masterslave/package-info.java deleted file mode 100644 index e55b21491f..0000000000 --- a/src/main/java/com/lambdaworks/redis/masterslave/package-info.java +++ /dev/null @@ -1,5 +0,0 @@ -/** - * Client support for Redis Master/Slave setups. {@link com.lambdaworks.redis.masterslave.MasterSlave} supports self-managed, - * Redis Sentinel-managed, AWS ElastiCache and Azure Redis managed Master/Slave setups. - */ -package com.lambdaworks.redis.masterslave; diff --git a/src/main/java/com/lambdaworks/redis/metrics/CommandLatencyCollector.java b/src/main/java/com/lambdaworks/redis/metrics/CommandLatencyCollector.java deleted file mode 100644 index f3de206bc4..0000000000 --- a/src/main/java/com/lambdaworks/redis/metrics/CommandLatencyCollector.java +++ /dev/null @@ -1,34 +0,0 @@ -package com.lambdaworks.redis.metrics; - -import java.net.SocketAddress; -import java.util.Map; -import java.util.concurrent.TimeUnit; - -import com.lambdaworks.redis.protocol.ProtocolKeyword; - -/** - * {@link MetricCollector} for command latencies. Command latencies are collected per connection (identified by local/remote - * tuples of {@link SocketAddress}es) and {@link ProtocolKeyword command type}. Two command latencies are available: - *
- * <ul>
- * <li>Latency between command send and first response (first response received)</li>
- * <li>Latency between command send and command completion (complete response received)</li>
- * </ul>
- * - * @author Mark Paluch - * @since 3.4 - */ -public interface CommandLatencyCollector extends MetricCollector> { - - /** - * Record the command latency per {@code connectionPoint} and {@code commandType}. - * - * @param local the local address - * @param remote the remote address - * @param commandType the command type - * @param firstResponseLatency latency value in {@link TimeUnit#NANOSECONDS} from send to the first response - * @param completionLatency latency value in {@link TimeUnit#NANOSECONDS} from send to the command completion - */ - void recordCommandLatency(SocketAddress local, SocketAddress remote, ProtocolKeyword commandType, - long firstResponseLatency, long completionLatency); - -} diff --git a/src/main/java/com/lambdaworks/redis/metrics/CommandLatencyCollectorOptions.java b/src/main/java/com/lambdaworks/redis/metrics/CommandLatencyCollectorOptions.java deleted file mode 100644 index 8c3c4cd1ab..0000000000 --- a/src/main/java/com/lambdaworks/redis/metrics/CommandLatencyCollectorOptions.java +++ /dev/null @@ -1,49 +0,0 @@ -package com.lambdaworks.redis.metrics; - -import java.util.concurrent.TimeUnit; - -/** - * Configuration interface for command latency collection. - * - * @author Mark Paluch - */ -public interface CommandLatencyCollectorOptions { - - /** - * Returns the target {@link TimeUnit} for the emitted latencies. - * - * @return the target {@link TimeUnit} for the emitted latencies - */ - TimeUnit targetUnit(); - - /** - * Returns the percentiles which should be exposed in the metric. - * - * @return the percentiles which should be exposed in the metric - */ - double[] targetPercentiles(); - - /** - * Returns whether the latencies should be reset once an event is emitted. - * - * @return {@literal true} if the latencies should be reset once an event is emitted. - */ - boolean resetLatenciesAfterEvent(); - - /** - * Returns whether to distinct latencies on local level. If {@literal true}, multiple connections to the same - * host/connection point will be recorded separately which allows to inspect every connection individually. If - * {@literal false}, multiple connections to the same host/connection point will be recorded together. This allows a - * consolidated view on one particular service. - * - * @return {@literal true} if latencies are recorded distinct on local level (per connection) - */ - boolean localDistinction(); - - /** - * Returns whether the latency collector is enabled. - * - * @return {@literal true} if the latency collector is enabled - */ - boolean isEnabled(); -} diff --git a/src/main/java/com/lambdaworks/redis/metrics/CommandLatencyId.java b/src/main/java/com/lambdaworks/redis/metrics/CommandLatencyId.java deleted file mode 100644 index 3312755820..0000000000 --- a/src/main/java/com/lambdaworks/redis/metrics/CommandLatencyId.java +++ /dev/null @@ -1,123 +0,0 @@ -package com.lambdaworks.redis.metrics; - -import java.io.Serializable; -import java.net.SocketAddress; - -import com.lambdaworks.redis.internal.LettuceAssert; -import com.lambdaworks.redis.protocol.ProtocolKeyword; - -/** - * Identifier for a command latency. Consists of a local/remote tuple of {@link SocketAddress}es and a - * {@link com.lambdaworks.redis.protocol.ProtocolKeyword commandType} part. 
- * - * @author Mark Paluch - */ -public class CommandLatencyId implements Serializable, Comparable { - - private final SocketAddress localAddress; - private final SocketAddress remoteAddress; - private final ProtocolKeyword commandType; - - protected CommandLatencyId(SocketAddress localAddress, SocketAddress remoteAddress, ProtocolKeyword commandType) { - LettuceAssert.notNull(localAddress, "LocalAddress must not be null"); - LettuceAssert.notNull(remoteAddress, "RemoteAddress must not be null"); - LettuceAssert.notNull(commandType, "CommandType must not be null"); - - this.localAddress = localAddress; - this.remoteAddress = remoteAddress; - this.commandType = commandType; - } - - /** - * Create a new instance of {@link CommandLatencyId}. - * - * @param localAddress the local address - * @param remoteAddress the remote address - * @param commandType the command type - * @return a new instance of {@link CommandLatencyId} - */ - public static CommandLatencyId create(SocketAddress localAddress, SocketAddress remoteAddress, ProtocolKeyword commandType) { - return new CommandLatencyId(localAddress, remoteAddress, commandType); - } - - /** - * Returns the local address. - * - * @return the local address - */ - public SocketAddress localAddress() { - return localAddress; - } - - /** - * Returns the remote address. - * - * @return the remote address - */ - public SocketAddress remoteAddress() { - return remoteAddress; - } - - /** - * Returns the command type. - * - * @return the command type - */ - public ProtocolKeyword commandType() { - return commandType; - } - - @Override - public boolean equals(Object o) { - if (this == o) - return true; - if (!(o instanceof CommandLatencyId)) - return false; - - CommandLatencyId that = (CommandLatencyId) o; - - if (!localAddress.equals(that.localAddress)) - return false; - if (!remoteAddress.equals(that.remoteAddress)) - return false; - return commandType.equals(that.commandType); - } - - @Override - public int hashCode() { - int result = localAddress.hashCode(); - result = 31 * result + remoteAddress.hashCode(); - result = 31 * result + commandType.hashCode(); - return result; - } - - @Override - public int compareTo(CommandLatencyId o) { - - if (o == null) { - return -1; - } - - int remoteResult = remoteAddress.toString().compareTo(o.remoteAddress.toString()); - if (remoteResult != 0) { - return remoteResult; - } - - int localResult = localAddress.toString().compareTo(o.localAddress.toString()); - if (localResult != 0) { - return localResult; - } - - return commandType.toString().compareTo(o.commandType.toString()); - } - - @Override - public String toString() { - final StringBuffer sb = new StringBuffer(); - sb.append("[").append(localAddress); - sb.append(" -> ").append(remoteAddress); - sb.append(", commandType=").append(commandType); - sb.append(']'); - return sb.toString(); - } -} diff --git a/src/main/java/com/lambdaworks/redis/metrics/DefaultCommandLatencyCollector.java b/src/main/java/com/lambdaworks/redis/metrics/DefaultCommandLatencyCollector.java deleted file mode 100644 index aa65cb45f0..0000000000 --- a/src/main/java/com/lambdaworks/redis/metrics/DefaultCommandLatencyCollector.java +++ /dev/null @@ -1,228 +0,0 @@ -package com.lambdaworks.redis.metrics; - -import com.lambdaworks.redis.metrics.CommandMetrics.CommandLatency; -import com.lambdaworks.redis.protocol.CommandType; -import com.lambdaworks.redis.protocol.ProtocolKeyword; -import io.netty.channel.local.LocalAddress; -import io.netty.util.internal.logging.InternalLogger; -import 
io.netty.util.internal.logging.InternalLoggerFactory; -import org.HdrHistogram.Histogram; -import org.LatencyUtils.LatencyStats; -import org.LatencyUtils.PauseDetector; -import org.LatencyUtils.SimplePauseDetector; - -import java.net.SocketAddress; -import java.util.Collections; -import java.util.HashMap; -import java.util.Map; -import java.util.TreeMap; -import java.util.concurrent.ConcurrentHashMap; -import java.util.concurrent.TimeUnit; -import java.util.concurrent.atomic.AtomicLong; -import java.util.concurrent.atomic.AtomicReference; - -import static com.lambdaworks.redis.internal.LettuceClassUtils.isPresent; - -/** - * Default implementation of a {@link CommandLatencyCollector} for command latencies. - * - * @author Mark Paluch - */ -public class DefaultCommandLatencyCollector implements CommandLatencyCollector { - - private static final AtomicReference PAUSE_DETECTOR = new AtomicReference<>(); - private static final boolean LATENCY_UTILS_AVAILABLE = isPresent("org.LatencyUtils.PauseDetector"); - private static final boolean HDR_UTILS_AVAILABLE = isPresent("org.HdrHistogram.Histogram"); - - private static final long MIN_LATENCY = 1000; - private static final long MAX_LATENCY = TimeUnit.MINUTES.toNanos(5); - - private final CommandLatencyCollectorOptions options; - private Map latencyMetrics = new ConcurrentHashMap<>(CommandType.values().length); - - public DefaultCommandLatencyCollector(CommandLatencyCollectorOptions options) { - this.options = options; - } - - /** - * Record the command latency per {@code connectionPoint} and {@code commandType}. - * - * @param local the local address - * @param remote the remote address - * @param commandType the command type - * @param firstResponseLatency latency value in {@link TimeUnit#NANOSECONDS} from send to the first response - * @param completionLatency latency value in {@link TimeUnit#NANOSECONDS} from send to the command completion - */ - public void recordCommandLatency(SocketAddress local, SocketAddress remote, ProtocolKeyword commandType, - long firstResponseLatency, long completionLatency) { - - if (!isEnabled()) { - return; - } - - CommandLatencyId id = createId(local, remote, commandType); - Latencies latencies = latencyMetrics.get(id); - if (latencies == null) { - - PauseDetectorWrapper wrapper = PAUSE_DETECTOR.get(); - if (wrapper == null) { - wrapper = new PauseDetectorWrapper(); - - if (PAUSE_DETECTOR.compareAndSet(null, wrapper)) { - wrapper.initialize(); - } - } - - latencies = new Latencies(PAUSE_DETECTOR.get().pauseDetector); - latencyMetrics.put(id, latencies); - } - - latencies.firstResponse.recordLatency(rangify(firstResponseLatency)); - latencies.completion.recordLatency(rangify(completionLatency)); - - } - - private CommandLatencyId createId(SocketAddress local, SocketAddress remote, ProtocolKeyword commandType) { - return CommandLatencyId.create(options.localDistinction() ? 
local : LocalAddress.ANY, remote, commandType); - } - - private long rangify(long latency) { - return Math.max(MIN_LATENCY, Math.min(MAX_LATENCY, latency)); - } - - @Override - public boolean isEnabled() { - return latencyMetrics != null && options.isEnabled(); - } - - @Override - public void shutdown() { - if (latencyMetrics != null) { - latencyMetrics.clear(); - latencyMetrics = null; - } - } - - @Override - public Map retrieveMetrics() { - Map copy = new HashMap<>(); - copy.putAll(latencyMetrics); - if (options.resetLatenciesAfterEvent()) { - latencyMetrics.clear(); - } - - Map latencies = getMetrics(copy); - return latencies; - } - - private Map getMetrics(Map latencyMetrics) { - Map latencies = new TreeMap<>(); - - for (Map.Entry entry : latencyMetrics.entrySet()) { - Histogram firstResponse = entry.getValue().firstResponse.getIntervalHistogram(); - Histogram completion = entry.getValue().completion.getIntervalHistogram(); - - if (firstResponse.getTotalCount() == 0 && completion.getTotalCount() == 0) { - continue; - } - - CommandLatency firstResponseLatency = getMetric(firstResponse); - CommandLatency completionLatency = getMetric(completion); - - CommandMetrics metrics = new CommandMetrics(firstResponse.getTotalCount(), options.targetUnit(), - firstResponseLatency, completionLatency); - - latencies.put(entry.getKey(), metrics); - } - return latencies; - } - - private CommandLatency getMetric(Histogram histogram) { - Map percentiles = getPercentiles(histogram); - - TimeUnit timeUnit = options.targetUnit(); - CommandLatency metric = new CommandLatency(timeUnit.convert(histogram.getMinValue(), TimeUnit.NANOSECONDS), - timeUnit.convert(histogram.getMaxValue(), TimeUnit.NANOSECONDS), percentiles); - - return metric; - } - - private Map getPercentiles(Histogram histogram) { - Map percentiles = new TreeMap(); - for (double targetPercentile : options.targetPercentiles()) { - percentiles.put(targetPercentile, - options.targetUnit().convert(histogram.getValueAtPercentile(targetPercentile), TimeUnit.NANOSECONDS)); - } - return percentiles; - } - - /** - * Returns {@literal true} if HdrUtils and LatencyUtils are available on the class path. - * - * @return - */ - public static boolean isAvailable() { - return LATENCY_UTILS_AVAILABLE && HDR_UTILS_AVAILABLE; - } - - /** - * Returns a disabled no-op {@link CommandLatencyCollector}. 
- * - * @return - */ - public static CommandLatencyCollector disabled() { - - return new CommandLatencyCollector() { - @Override - public void recordCommandLatency(SocketAddress local, SocketAddress remote, ProtocolKeyword commandType, - long firstResponseLatency, long completionLatency) { - } - - @Override - public void shutdown() { - } - - @Override - public Map retrieveMetrics() { - return Collections.emptyMap(); - } - - @Override - public boolean isEnabled() { - return false; - } - }; - } - - private static class Latencies { - - public final LatencyStats firstResponse; - public final LatencyStats completion; - - public Latencies(PauseDetector pauseDetector) { - firstResponse = LatencyStats.Builder.create().pauseDetector(pauseDetector).build(); - completion = LatencyStats.Builder.create().pauseDetector(pauseDetector).build(); - } - } - - private static class PauseDetectorWrapper { - public final static AtomicLong counter = new AtomicLong(); - PauseDetector pauseDetector; - - public void initialize() { - - if (counter.getAndIncrement() > 0) { - InternalLogger instance = InternalLoggerFactory.getInstance(getClass()); - instance.info("Initialized PauseDetectorWrapper more than once."); - } - - pauseDetector = new SimplePauseDetector(TimeUnit.MILLISECONDS.toNanos(10), TimeUnit.MILLISECONDS.toNanos(10), 3); - Runtime.getRuntime().addShutdownHook(new Thread("ShutdownHook for SimplePauseDetector") { - @Override - public void run() { - pauseDetector.shutdown(); - } - }); - } - } -} diff --git a/src/main/java/com/lambdaworks/redis/metrics/MetricCollector.java b/src/main/java/com/lambdaworks/redis/metrics/MetricCollector.java deleted file mode 100644 index 5dc2147b74..0000000000 --- a/src/main/java/com/lambdaworks/redis/metrics/MetricCollector.java +++ /dev/null @@ -1,31 +0,0 @@ -package com.lambdaworks.redis.metrics; - -/** - * Generic metrics collector interface. A metrics collector collects metrics and emits metric events. - * - * @author Mark Paluch - * @param data type of the metrics - * @since 3.4 - * - */ -public interface MetricCollector { - - /** - * Shut down the metrics collector. - */ - void shutdown(); - - /** - * Returns the collected/aggregated metrics. - * - * @return the the collected/aggregated metrics - */ - T retrieveMetrics(); - - /** - * Returns {@literal true} if the metric collector is enabled. - * - * @return {@literal true} if the metric collector is enabled - */ - boolean isEnabled(); -} diff --git a/src/main/java/com/lambdaworks/redis/metrics/package-info.java b/src/main/java/com/lambdaworks/redis/metrics/package-info.java deleted file mode 100644 index f62aa504a8..0000000000 --- a/src/main/java/com/lambdaworks/redis/metrics/package-info.java +++ /dev/null @@ -1,5 +0,0 @@ -/** - * Collectors for client metrics. - */ -package com.lambdaworks.redis.metrics; - diff --git a/src/main/java/com/lambdaworks/redis/models/command/package-info.java b/src/main/java/com/lambdaworks/redis/models/command/package-info.java deleted file mode 100644 index 005f9672b7..0000000000 --- a/src/main/java/com/lambdaworks/redis/models/command/package-info.java +++ /dev/null @@ -1,4 +0,0 @@ -/** - * Model and parser to for the {@code COMMAND} and {@code COMMAND INFO} output. 
- */ -package com.lambdaworks.redis.models.command; \ No newline at end of file diff --git a/src/main/java/com/lambdaworks/redis/models/role/RedisInstance.java b/src/main/java/com/lambdaworks/redis/models/role/RedisInstance.java deleted file mode 100644 index 9f0350037f..0000000000 --- a/src/main/java/com/lambdaworks/redis/models/role/RedisInstance.java +++ /dev/null @@ -1,23 +0,0 @@ -package com.lambdaworks.redis.models.role; - -/** - * Represents a redis instance according to the {@code ROLE} output. - * - * @author Mark Paluch - * @since 3.0 - */ -public interface RedisInstance { - - /** - * - * @return Redis instance role, see {@link com.lambdaworks.redis.models.role.RedisInstance.Role} - */ - Role getRole(); - - /** - * Possible Redis instance roles. - */ - public enum Role { - MASTER, SLAVE, SENTINEL; - } -} diff --git a/src/main/java/com/lambdaworks/redis/models/role/RedisMasterInstance.java b/src/main/java/com/lambdaworks/redis/models/role/RedisMasterInstance.java deleted file mode 100644 index 7cf17e9a68..0000000000 --- a/src/main/java/com/lambdaworks/redis/models/role/RedisMasterInstance.java +++ /dev/null @@ -1,71 +0,0 @@ -package com.lambdaworks.redis.models.role; - -import java.io.Serializable; -import java.util.Collections; -import java.util.List; - -import com.lambdaworks.redis.internal.LettuceAssert; - -/** - * Represents a master instance. - * - * @author Mark Paluch - * @since 3.0 - */ -@SuppressWarnings("serial") -public class RedisMasterInstance implements RedisInstance, Serializable { - - private long replicationOffset; - private List slaves = Collections.emptyList(); - - public RedisMasterInstance() { - } - - /** - * Constructs a {@link RedisMasterInstance} - * - * @param replicationOffset the replication offset - * @param slaves list of slaves, must not be {@literal null} but may be empty - */ - public RedisMasterInstance(long replicationOffset, List slaves) { - LettuceAssert.notNull(slaves, "Slaves must not be null"); - this.replicationOffset = replicationOffset; - this.slaves = slaves; - } - - /** - * - * @return always {@link com.lambdaworks.redis.models.role.RedisInstance.Role#MASTER} - */ - @Override - public Role getRole() { - return Role.MASTER; - } - - public long getReplicationOffset() { - return replicationOffset; - } - - public List getSlaves() { - return slaves; - } - - public void setReplicationOffset(long replicationOffset) { - this.replicationOffset = replicationOffset; - } - - public void setSlaves(List slaves) { - LettuceAssert.notNull(slaves, "Slaves must not be null"); - this.slaves = slaves; - } - - @Override - public String toString() { - final StringBuilder sb = new StringBuilder(); - sb.append(getClass().getSimpleName()); - sb.append(" [replicationOffset=").append(replicationOffset); - sb.append(", slaves=").append(slaves); - sb.append(']'); - return sb.toString(); - } -} diff --git a/src/main/java/com/lambdaworks/redis/models/role/RedisNodeDescription.java b/src/main/java/com/lambdaworks/redis/models/role/RedisNodeDescription.java deleted file mode 100644 index 2115a47646..0000000000 --- a/src/main/java/com/lambdaworks/redis/models/role/RedisNodeDescription.java +++ /dev/null @@ -1,18 +0,0 @@ -package com.lambdaworks.redis.models.role; - -import com.lambdaworks.redis.RedisURI; - -/** - * Description of a single Redis Node. 
- * - * @author Mark Paluch - * @since 4.0 - */ -public interface RedisNodeDescription extends RedisInstance { - - /** - * - * @return the URI of the node - */ - RedisURI getUri(); -} diff --git a/src/main/java/com/lambdaworks/redis/models/role/RedisSentinelInstance.java b/src/main/java/com/lambdaworks/redis/models/role/RedisSentinelInstance.java deleted file mode 100644 index 9ecb882924..0000000000 --- a/src/main/java/com/lambdaworks/redis/models/role/RedisSentinelInstance.java +++ /dev/null @@ -1,62 +0,0 @@ -package com.lambdaworks.redis.models.role; - -import java.io.Serializable; -import java.util.Collections; -import java.util.List; - -import com.lambdaworks.redis.internal.LettuceAssert; - -/** - * Redis sentinel instance. - * - * @author Mark Paluch - * @since 3.0 - */ -@SuppressWarnings("serial") -public class RedisSentinelInstance implements RedisInstance, Serializable { - private List monitoredMasters = Collections.emptyList(); - - public RedisSentinelInstance() { - } - - /** - * Constructs a {@link RedisSentinelInstance} - * - * @param monitoredMasters list of monitored masters, must not be {@literal null} but may be empty - */ - public RedisSentinelInstance(List monitoredMasters) { - LettuceAssert.notNull(monitoredMasters, "List of monitoredMasters must not be null"); - this.monitoredMasters = monitoredMasters; - } - - /** - * - * @return always {@link com.lambdaworks.redis.models.role.RedisInstance.Role#SENTINEL} - */ - @Override - public Role getRole() { - return Role.SENTINEL; - } - - /** - * - * @return List of monitored master names. - */ - public List getMonitoredMasters() { - return monitoredMasters; - } - - public void setMonitoredMasters(List monitoredMasters) { - LettuceAssert.notNull(monitoredMasters, "List of monitoredMasters must not be null"); - this.monitoredMasters = monitoredMasters; - } - - @Override - public String toString() { - final StringBuilder sb = new StringBuilder(); - sb.append(getClass().getSimpleName()); - sb.append(" [monitoredMasters=").append(monitoredMasters); - sb.append(']'); - return sb.toString(); - } -} diff --git a/src/main/java/com/lambdaworks/redis/models/role/RedisSlaveInstance.java b/src/main/java/com/lambdaworks/redis/models/role/RedisSlaveInstance.java deleted file mode 100644 index 8b25f9a9f2..0000000000 --- a/src/main/java/com/lambdaworks/redis/models/role/RedisSlaveInstance.java +++ /dev/null @@ -1,103 +0,0 @@ -package com.lambdaworks.redis.models.role; - -import java.io.Serializable; - -import com.lambdaworks.redis.internal.LettuceAssert; - -/** - * Redis slave instance. - * - * @author Mark Paluch - * @since 3.0 - */ -@SuppressWarnings("serial") -public class RedisSlaveInstance implements RedisInstance, Serializable { - private ReplicationPartner master; - private State state; - - public RedisSlaveInstance() { - } - - /** - * Constructs a {@link RedisSlaveInstance} - * - * @param master master for the replication, must not be {@literal null} - * @param state slave state, must not be {@literal null} - */ - RedisSlaveInstance(ReplicationPartner master, State state) { - LettuceAssert.notNull(master, "Master must not be null"); - LettuceAssert.notNull(state, "State must not be null"); - this.master = master; - this.state = state; - } - - /** - * - * @return always {@link com.lambdaworks.redis.models.role.RedisInstance.Role#SLAVE} - */ - @Override - public Role getRole() { - return Role.SLAVE; - } - - /** - * - * @return the replication master. 
- */ - public ReplicationPartner getMaster() { - return master; - } - - /** - * - * @return Slave state. - */ - public State getState() { - return state; - } - - public void setMaster(ReplicationPartner master) { - LettuceAssert.notNull(master, "Master must not be null"); - this.master = master; - } - - public void setState(State state) { - LettuceAssert.notNull(state, "State must not be null"); - this.state = state; - } - - /** - * State of the slave. - */ - public enum State { - /** - * the instance needs to connect to its master. - */ - CONNECT, - - /** - * the slave-master connection is in progress. - */ - CONNECTING, - - /** - * the master and slave are trying to perform the synchronization. - */ - SYNC, - - /** - * the slave is online. - */ - CONNECTED; - } - - @Override - public String toString() { - final StringBuilder sb = new StringBuilder(); - sb.append(getClass().getSimpleName()); - sb.append(" [master=").append(master); - sb.append(", state=").append(state); - sb.append(']'); - return sb.toString(); - } -} diff --git a/src/main/java/com/lambdaworks/redis/models/role/ReplicationPartner.java b/src/main/java/com/lambdaworks/redis/models/role/ReplicationPartner.java deleted file mode 100644 index dd0c85f46f..0000000000 --- a/src/main/java/com/lambdaworks/redis/models/role/ReplicationPartner.java +++ /dev/null @@ -1,69 +0,0 @@ -package com.lambdaworks.redis.models.role; - -import java.io.Serializable; - -import com.google.common.net.HostAndPort; -import com.lambdaworks.redis.internal.LettuceAssert; - -/** - * Replication partner providing the host and the replication offset. - * - * @author Mark Paluch - * @since 3.0 - */ -@SuppressWarnings("serial") -public class ReplicationPartner implements Serializable { - private HostAndPort host; - private long replicationOffset; - - public ReplicationPartner() { - - } - - /** - * Constructs a replication partner. - * - * @param host host information, must not be {@literal null} - * @param replicationOffset the replication offset - */ - public ReplicationPartner(HostAndPort host, long replicationOffset) { - LettuceAssert.notNull(host, "Host must not be null"); - this.host = host; - this.replicationOffset = replicationOffset; - } - - /** - * - * @return host with port of the replication partner. - */ - public HostAndPort getHost() { - return host; - } - - /** - * - * @return the replication offset. - */ - public long getReplicationOffset() { - return replicationOffset; - } - - public void setHost(HostAndPort host) { - LettuceAssert.notNull(host, "Host must not be null"); - this.host = host; - } - - public void setReplicationOffset(long replicationOffset) { - this.replicationOffset = replicationOffset; - } - - @Override - public String toString() { - final StringBuilder sb = new StringBuilder(); - sb.append(getClass().getSimpleName()); - sb.append(" [host=").append(host); - sb.append(", replicationOffset=").append(replicationOffset); - sb.append(']'); - return sb.toString(); - } -} diff --git a/src/main/java/com/lambdaworks/redis/models/role/RoleParser.java b/src/main/java/com/lambdaworks/redis/models/role/RoleParser.java deleted file mode 100644 index 3a9f2c0f7d..0000000000 --- a/src/main/java/com/lambdaworks/redis/models/role/RoleParser.java +++ /dev/null @@ -1,194 +0,0 @@ -package com.lambdaworks.redis.models.role; - -import java.util.*; - -import com.google.common.net.HostAndPort; -import com.lambdaworks.redis.internal.LettuceAssert; - -/** - * Parser for redis ROLE command output. 
- * - * @author Mark Paluch - * @since 3.0 - */ -@SuppressWarnings("serial") -public class RoleParser { - protected static final Map ROLE_MAPPING; - protected static final Map SLAVE_STATE_MAPPING; - - static { - Map roleMap = new HashMap<>(); - roleMap.put("master", RedisInstance.Role.MASTER); - roleMap.put("slave", RedisInstance.Role.SLAVE); - roleMap.put("sentinel", RedisInstance.Role.SENTINEL); - - ROLE_MAPPING = Collections.unmodifiableMap(roleMap); - - Map slaveStateMap = new HashMap<>(); - slaveStateMap.put("connect", RedisSlaveInstance.State.CONNECT); - slaveStateMap.put("connected", RedisSlaveInstance.State.CONNECTED); - slaveStateMap.put("connecting", - RedisSlaveInstance.State.CONNECTING); - slaveStateMap.put("sync", RedisSlaveInstance.State.SYNC); - - SLAVE_STATE_MAPPING = Collections.unmodifiableMap(slaveStateMap); - } - - /** - * Utility constructor. - */ - private RoleParser() { - - } - - /** - * Parse the output of the redis ROLE command and convert to a RedisInstance. - * - * @param roleOutput output of the redis ROLE command - * @return RedisInstance - */ - public static RedisInstance parse(List roleOutput) { - LettuceAssert.isTrue(roleOutput != null && !roleOutput.isEmpty(), "Empty role output"); - LettuceAssert.isTrue(roleOutput.get(0) instanceof String && ROLE_MAPPING.containsKey(roleOutput.get(0)), - "First role element must be a string (any of " + ROLE_MAPPING.keySet() + ")"); - - RedisInstance.Role role = ROLE_MAPPING.get(roleOutput.get(0)); - - switch (role) { - case MASTER: - return parseMaster(roleOutput); - - case SLAVE: - return parseSlave(roleOutput); - - case SENTINEL: - return parseSentinel(roleOutput); - } - - return null; - - } - - private static RedisInstance parseMaster(List roleOutput) { - - long replicationOffset = getMasterReplicationOffset(roleOutput); - List slaves = getMasterSlaveReplicationPartners(roleOutput); - - RedisMasterInstance redisMasterInstanceRole = new RedisMasterInstance(replicationOffset, - Collections.unmodifiableList(slaves)); - return redisMasterInstanceRole; - } - - private static RedisInstance parseSlave(List roleOutput) { - - Iterator iterator = roleOutput.iterator(); - iterator.next(); // skip first element - - String ip = getStringFromIterator(iterator, ""); - long port = getLongFromIterator(iterator, 0); - - String stateString = getStringFromIterator(iterator, null); - long replicationOffset = getLongFromIterator(iterator, 0); - - ReplicationPartner master = new ReplicationPartner(HostAndPort.fromParts(ip, Math.toIntExact(port)), replicationOffset); - - RedisSlaveInstance.State state = SLAVE_STATE_MAPPING.get(stateString); - - RedisSlaveInstance redisSlaveInstanceRole = new RedisSlaveInstance(master, state); - return redisSlaveInstanceRole; - } - - private static RedisInstance parseSentinel(List roleOutput) { - - Iterator iterator = roleOutput.iterator(); - iterator.next(); // skip first element - - List monitoredMasters = getMonitoredMasters(iterator); - - RedisSentinelInstance result = new RedisSentinelInstance(Collections.unmodifiableList(monitoredMasters)); - return result; - } - - private static List getMonitoredMasters(Iterator iterator) { - List monitoredMasters = new ArrayList<>(); - - if (!iterator.hasNext()) { - return monitoredMasters; - } - - Object masters = iterator.next(); - - if (!(masters instanceof Collection)) { - return monitoredMasters; - } - - for (Object monitoredMaster : (Collection) masters) { - if (monitoredMaster instanceof String) { - monitoredMasters.add((String) monitoredMaster); - } - } - - 
return monitoredMasters; - } - - private static List getMasterSlaveReplicationPartners(List roleOutput) { - List slaves = new ArrayList<>(); - if (roleOutput.size() > 2 && roleOutput.get(2) instanceof Collection) { - Collection slavesOutput = (Collection) roleOutput.get(2); - - for (Object slaveOutput : slavesOutput) { - if (!(slaveOutput instanceof Collection)) { - continue; - } - - ReplicationPartner replicationPartner = getMasterSlaveReplicationPartner((Collection) slaveOutput); - slaves.add(replicationPartner); - } - } - return slaves; - } - - private static ReplicationPartner getMasterSlaveReplicationPartner(Collection slaveOutput) { - Iterator iterator = slaveOutput.iterator(); - - String ip = getStringFromIterator(iterator, ""); - long port = getLongFromIterator(iterator, 0); - long replicationOffset = getLongFromIterator(iterator, 0); - - return new ReplicationPartner(HostAndPort.fromParts(ip, Math.toIntExact(port)), replicationOffset); - } - - private static long getLongFromIterator(Iterator iterator, long defaultValue) { - if (iterator.hasNext()) { - Object object = iterator.next(); - if (object instanceof String) { - return Long.parseLong((String) object); - } - - if (object instanceof Number) { - return ((Number) object).longValue(); - } - } - return defaultValue; - } - - private static String getStringFromIterator(Iterator iterator, String defaultValue) { - if (iterator.hasNext()) { - Object object = iterator.next(); - if (object instanceof String) { - return (String) object; - } - } - return defaultValue; - } - - private static long getMasterReplicationOffset(List roleOutput) { - long replicationOffset = 0; - - if (roleOutput.size() > 1 && roleOutput.get(1) instanceof Number) { - Number number = (Number) roleOutput.get(1); - replicationOffset = number.longValue(); - } - return replicationOffset; - } -} diff --git a/src/main/java/com/lambdaworks/redis/models/role/package-info.java b/src/main/java/com/lambdaworks/redis/models/role/package-info.java deleted file mode 100644 index b833561008..0000000000 --- a/src/main/java/com/lambdaworks/redis/models/role/package-info.java +++ /dev/null @@ -1,4 +0,0 @@ -/** - * Model and parser for the {@code ROLE} output. - */ -package com.lambdaworks.redis.models.role; \ No newline at end of file diff --git a/src/main/java/com/lambdaworks/redis/output/ArrayOutput.java b/src/main/java/com/lambdaworks/redis/output/ArrayOutput.java deleted file mode 100644 index 19957b2cc5..0000000000 --- a/src/main/java/com/lambdaworks/redis/output/ArrayOutput.java +++ /dev/null @@ -1,68 +0,0 @@ -package com.lambdaworks.redis.output; - -import java.nio.ByteBuffer; -import java.util.ArrayDeque; -import java.util.ArrayList; -import java.util.Deque; -import java.util.List; - -import com.lambdaworks.redis.codec.RedisCodec; - -/** - * {@link java.util.List} of objects and lists to support dynamic nested structures (List with mixed content of values and - * sublists). - * - * @param Key type. - * @param Value type. 
- * - * @author Mark Paluch - */ -public class ArrayOutput extends CommandOutput> { - private Deque counts = new ArrayDeque(); - private Deque> stack = new ArrayDeque>(); - - public ArrayOutput(RedisCodec codec) { - super(codec, new ArrayList<>()); - } - - @Override - public void set(ByteBuffer bytes) { - if (bytes != null) { - V value = codec.decodeValue(bytes); - stack.peek().add(value); - } - } - - @Override - public void set(long integer) { - stack.peek().add(integer); - } - - @Override - public void complete(int depth) { - if (counts.isEmpty()) { - return; - } - - if (depth == stack.size()) { - if (stack.peek().size() == counts.peek()) { - List pop = stack.pop(); - counts.pop(); - if (!stack.isEmpty()) { - stack.peek().add(pop); - } - } - } - } - - @Override - public void multi(int count) { - if (stack.isEmpty()) { - stack.push(output); - } else { - stack.push(new ArrayList<>(count)); - - } - counts.push(count); - } -} diff --git a/src/main/java/com/lambdaworks/redis/output/BooleanListOutput.java b/src/main/java/com/lambdaworks/redis/output/BooleanListOutput.java deleted file mode 100644 index 8c33bd01df..0000000000 --- a/src/main/java/com/lambdaworks/redis/output/BooleanListOutput.java +++ /dev/null @@ -1,42 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis.output; - -import java.util.ArrayList; -import java.util.List; - -import com.lambdaworks.redis.codec.RedisCodec; -import com.lambdaworks.redis.internal.LettuceAssert; - -/** - * {@link java.util.List} of boolean output. - * - * @author Will Glozer - * @param Key type. - * @param Value type. - */ -public class BooleanListOutput extends CommandOutput> implements StreamingOutput { - - private Subscriber subscriber; - - public BooleanListOutput(RedisCodec codec) { - super(codec, new ArrayList<>()); - setSubscriber(ListSubscriber.of(output)); - } - - @Override - public void set(long integer) { - subscriber.onNext((integer == 1) ? Boolean.TRUE : Boolean.FALSE); - } - - @Override - public void setSubscriber(Subscriber subscriber) { - LettuceAssert.notNull(subscriber, "Subscriber must not be null"); - this.subscriber = subscriber; - } - - @Override - public Subscriber getSubscriber() { - return subscriber; - } -} diff --git a/src/main/java/com/lambdaworks/redis/output/BooleanOutput.java b/src/main/java/com/lambdaworks/redis/output/BooleanOutput.java deleted file mode 100644 index 42dd2ed102..0000000000 --- a/src/main/java/com/lambdaworks/redis/output/BooleanOutput.java +++ /dev/null @@ -1,31 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis.output; - -import java.nio.ByteBuffer; - -import com.lambdaworks.redis.codec.RedisCodec; - -/** - * Boolean output. The actual value is returned as an integer where 0 indicates false and 1 indicates true, or as a null bulk - * reply for script output. - * - * @param Key type. - * @param Value type. - * @author Will Glozer - */ -public class BooleanOutput extends CommandOutput { - public BooleanOutput(RedisCodec codec) { - super(codec, null); - } - - @Override - public void set(long integer) { - output = (integer == 1) ? Boolean.TRUE : Boolean.FALSE; - } - - @Override - public void set(ByteBuffer bytes) { - output = (bytes != null) ? 
Boolean.TRUE : Boolean.FALSE; - } -} diff --git a/src/main/java/com/lambdaworks/redis/output/ByteArrayOutput.java b/src/main/java/com/lambdaworks/redis/output/ByteArrayOutput.java deleted file mode 100644 index 02d9727537..0000000000 --- a/src/main/java/com/lambdaworks/redis/output/ByteArrayOutput.java +++ /dev/null @@ -1,28 +0,0 @@ -// Copyright (C) 2012 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis.output; - -import java.nio.ByteBuffer; - -import com.lambdaworks.redis.codec.RedisCodec; - -/** - * Byte array output. - * - * @param Key type. - * @param Value type. - * @author Will Glozer - */ -public class ByteArrayOutput extends CommandOutput { - public ByteArrayOutput(RedisCodec codec) { - super(codec, null); - } - - @Override - public void set(ByteBuffer bytes) { - if (bytes != null) { - output = new byte[bytes.remaining()]; - bytes.get(output); - } - } -} diff --git a/src/main/java/com/lambdaworks/redis/output/CommandOutput.java b/src/main/java/com/lambdaworks/redis/output/CommandOutput.java deleted file mode 100644 index 871a775d7c..0000000000 --- a/src/main/java/com/lambdaworks/redis/output/CommandOutput.java +++ /dev/null @@ -1,136 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis.output; - -import java.nio.ByteBuffer; - -import com.lambdaworks.redis.codec.RedisCodec; -import com.lambdaworks.redis.internal.LettuceAssert; - -/** - * Abstract representation of the output of a redis command. - * - * @param Key type. - * @param Value type. - * @param Output type. - * - * @author Will Glozer - */ -public abstract class CommandOutput { - protected final RedisCodec codec; - protected T output; - protected String error; - - /** - * Initialize a new instance that encodes and decodes keys and values using the supplied codec. - * - * @param codec Codec used to encode/decode keys and values, must not be {@literal null}. - * @param output Initial value of output. - */ - public CommandOutput(RedisCodec codec, T output) { - LettuceAssert.notNull(codec, "RedisCodec must not be null"); - this.codec = codec; - this.output = output; - } - - /** - * Get the command output. - * - * @return The command output. - */ - public T get() { - return output; - } - - /** - * Set the command output to a sequence of bytes, or null. Concrete {@link CommandOutput} implementations must override this - * method unless they only receive an integer value which cannot be null. - * - * @param bytes The command output, or null. - */ - public void set(ByteBuffer bytes) { - throw new IllegalStateException(); - } - - /** - * Set the command output to a 64-bit signed integer. Concrete {@link CommandOutput} implementations must override this - * method unless they only receive a byte array value. - * - * @param integer The command output. - */ - public void set(long integer) { - throw new IllegalStateException(); - } - - /** - * Set command output to an error message from the server. - * - * @param error Error message. - */ - public void setError(ByteBuffer error) { - this.error = decodeAscii(error); - } - - /** - * Set command output to an error message from the client. - * - * @param error Error message. - */ - public void setError(String error) { - this.error = error; - } - - /** - * Check if the command resulted in an error. - * - * @return true if command resulted in an error. - */ - public boolean hasError() { - return this.error != null; - } - - /** - * Get the error that occurred. - * - * @return The error. 
- */ - public String getError() { - return error; - } - - /** - * Mark the command output complete. - * - * @param depth Remaining depth of output queue. - * - */ - public void complete(int depth) { - // nothing to do by default - } - - protected String decodeAscii(ByteBuffer bytes) { - if(bytes == null) { - return null; - } - - char[] chars = new char[bytes.remaining()]; - for (int i = 0; i < chars.length; i++) { - chars[i] = (char) bytes.get(); - } - return new String(chars); - } - - @Override - public String toString() { - final StringBuilder sb = new StringBuilder(); - sb.append(getClass().getSimpleName()); - sb.append(" [output=").append(output); - sb.append(", error='").append(error).append('\''); - sb.append(']'); - return sb.toString(); - } - - public void multi(int count) { - - } -} diff --git a/src/main/java/com/lambdaworks/redis/output/DateOutput.java b/src/main/java/com/lambdaworks/redis/output/DateOutput.java deleted file mode 100644 index 11c522f9b4..0000000000 --- a/src/main/java/com/lambdaworks/redis/output/DateOutput.java +++ /dev/null @@ -1,25 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis.output; - -import java.util.Date; - -import com.lambdaworks.redis.codec.RedisCodec; - -/** - * Date output with no milliseconds. - * - * @param Key type. - * @param Value type. - * @author Will Glozer - */ -public class DateOutput extends CommandOutput { - public DateOutput(RedisCodec codec) { - super(codec, null); - } - - @Override - public void set(long time) { - output = new Date(time * 1000); - } -} diff --git a/src/main/java/com/lambdaworks/redis/output/DoubleOutput.java b/src/main/java/com/lambdaworks/redis/output/DoubleOutput.java deleted file mode 100644 index e108d21ebf..0000000000 --- a/src/main/java/com/lambdaworks/redis/output/DoubleOutput.java +++ /dev/null @@ -1,27 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis.output; - -import java.nio.ByteBuffer; - -import com.lambdaworks.redis.codec.RedisCodec; - -import static java.lang.Double.parseDouble; - -/** - * Double output, may be null. - * - * @param Key type. - * @param Value type. - * @author Will Glozer - */ -public class DoubleOutput extends CommandOutput { - public DoubleOutput(RedisCodec codec) { - super(codec, null); - } - - @Override - public void set(ByteBuffer bytes) { - output = (bytes == null) ? null : parseDouble(decodeAscii(bytes)); - } -} diff --git a/src/main/java/com/lambdaworks/redis/output/GeoCoordinatesListOutput.java b/src/main/java/com/lambdaworks/redis/output/GeoCoordinatesListOutput.java deleted file mode 100644 index 191b4e7e6c..0000000000 --- a/src/main/java/com/lambdaworks/redis/output/GeoCoordinatesListOutput.java +++ /dev/null @@ -1,59 +0,0 @@ -package com.lambdaworks.redis.output; - -import static java.lang.Double.parseDouble; - -import java.nio.ByteBuffer; -import java.util.ArrayList; -import java.util.List; - -import com.lambdaworks.redis.GeoCoordinates; -import com.lambdaworks.redis.codec.RedisCodec; -import com.lambdaworks.redis.internal.LettuceAssert; - -/** - * A list output that creates a list with {@link GeoCoordinates}'s. 
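The `CommandOutput` contract above (`set(ByteBuffer)`, `set(long)`, `setError`, `complete`, plus the protected `decodeAscii` helper) is the extension point every output in this package builds on. A minimal illustrative subclass, using only members visible in the class as removed here (the subclass name is hypothetical):

```java
import java.nio.ByteBuffer;

import com.lambdaworks.redis.codec.RedisCodec;
import com.lambdaworks.redis.output.CommandOutput;

/**
 * Hypothetical output that upper-cases a single bulk string reply.
 */
class UpperCaseOutput<K, V> extends CommandOutput<K, V, String> {

    UpperCaseOutput(RedisCodec<K, V> codec) {
        super(codec, null); // no initial value; stays null for a nil reply
    }

    @Override
    public void set(ByteBuffer bytes) {
        // decodeAscii(...) is the protected helper inherited from CommandOutput
        output = (bytes == null) ? null : decodeAscii(bytes).toUpperCase();
    }
}
```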
- * - * @author Mark Paluch - */ -public class GeoCoordinatesListOutput extends CommandOutput> implements StreamingOutput { - - private Double x; - private Subscriber subscriber; - - public GeoCoordinatesListOutput(RedisCodec codec) { - super(codec, new ArrayList<>()); - setSubscriber(ListSubscriber.of(output)); - } - - @Override - public void set(ByteBuffer bytes) { - - Double value = (bytes == null) ? 0 : parseDouble(decodeAscii(bytes)); - - if (x == null) { - x = value; - return; - } - - subscriber.onNext(new GeoCoordinates(x, value)); - x = null; - } - - @Override - public void multi(int count) { - if (count == -1) { - subscriber.onNext(null); - } - } - - @Override - public void setSubscriber(Subscriber subscriber) { - LettuceAssert.notNull(subscriber, "Subscriber must not be null"); - this.subscriber = subscriber; - } - - @Override - public Subscriber getSubscriber() { - return subscriber; - } -} diff --git a/src/main/java/com/lambdaworks/redis/output/GeoWithinListOutput.java b/src/main/java/com/lambdaworks/redis/output/GeoWithinListOutput.java deleted file mode 100644 index ea9100ac51..0000000000 --- a/src/main/java/com/lambdaworks/redis/output/GeoWithinListOutput.java +++ /dev/null @@ -1,102 +0,0 @@ -package com.lambdaworks.redis.output; - -import static java.lang.Double.parseDouble; - -import java.nio.ByteBuffer; -import java.util.ArrayList; -import java.util.List; - -import com.lambdaworks.redis.GeoCoordinates; -import com.lambdaworks.redis.GeoWithin; -import com.lambdaworks.redis.codec.RedisCodec; -import com.lambdaworks.redis.internal.LettuceAssert; - -/** - * A list output that creates a list with either double/long or {@link GeoCoordinates}'s. - * - * @author Mark Paluch - */ -public class GeoWithinListOutput extends CommandOutput>> - implements StreamingOutput> { - - private V member; - private Double distance; - private Long geohash; - private GeoCoordinates coordinates; - - private Double x; - - private boolean withDistance; - private boolean withHash; - private boolean withCoordinates; - private Subscriber> subscriber; - - public GeoWithinListOutput(RedisCodec codec, boolean withDistance, boolean withHash, boolean withCoordinates) { - super(codec, new ArrayList<>()); - this.withDistance = withDistance; - this.withHash = withHash; - this.withCoordinates = withCoordinates; - setSubscriber(ListSubscriber.of(output)); - } - - @Override - public void set(long integer) { - if (member == null) { - member = (V) (Long) integer; - return; - } - - if (withHash) { - geohash = integer; - } - } - - @Override - public void set(ByteBuffer bytes) { - - if (member == null) { - member = codec.decodeValue(bytes); - return; - } - - Double value = (bytes == null) ? 
0 : parseDouble(decodeAscii(bytes)); - if (withDistance) { - if (distance == null) { - distance = value; - return; - } - } - if (withCoordinates) { - if (x == null) { - x = value; - return; - } - - coordinates = new GeoCoordinates(x, value); - return; - } - } - - @Override - public void complete(int depth) { - if (depth == 1) { - subscriber.onNext(new GeoWithin(member, distance, geohash, coordinates)); - - member = null; - distance = null; - geohash = null; - coordinates = null; - } - } - - @Override - public void setSubscriber(Subscriber> subscriber) { - LettuceAssert.notNull(subscriber, "Subscriber must not be null"); - this.subscriber = subscriber; - } - - @Override - public Subscriber> getSubscriber() { - return subscriber; - } -} diff --git a/src/main/java/com/lambdaworks/redis/output/IntegerOutput.java b/src/main/java/com/lambdaworks/redis/output/IntegerOutput.java deleted file mode 100644 index 6ccd94b9e7..0000000000 --- a/src/main/java/com/lambdaworks/redis/output/IntegerOutput.java +++ /dev/null @@ -1,30 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis.output; - -import java.nio.ByteBuffer; - -import com.lambdaworks.redis.codec.RedisCodec; - -/** - * 64-bit integer output, may be null. - * - * @param Key type. - * @param Value type. - * @author Will Glozer - */ -public class IntegerOutput extends CommandOutput { - public IntegerOutput(RedisCodec codec) { - super(codec, null); - } - - @Override - public void set(long integer) { - output = integer; - } - - @Override - public void set(ByteBuffer bytes) { - output = null; - } -} diff --git a/src/main/java/com/lambdaworks/redis/output/KeyListOutput.java b/src/main/java/com/lambdaworks/redis/output/KeyListOutput.java deleted file mode 100644 index e74757bd02..0000000000 --- a/src/main/java/com/lambdaworks/redis/output/KeyListOutput.java +++ /dev/null @@ -1,44 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis.output; - -import java.nio.ByteBuffer; -import java.util.ArrayList; -import java.util.List; - -import com.lambdaworks.redis.codec.RedisCodec; -import com.lambdaworks.redis.internal.LettuceAssert; - -/** - * {@link List} of keys output. - * - * @param Key type. - * @param Value type. - * - * @author Will Glozer - */ -public class KeyListOutput extends CommandOutput> implements StreamingOutput { - - private Subscriber subscriber; - - public KeyListOutput(RedisCodec codec) { - super(codec, new ArrayList<>()); - setSubscriber(ListSubscriber.of(output)); - } - - @Override - public void set(ByteBuffer bytes) { - subscriber.onNext(codec.decodeKey(bytes)); - } - - @Override - public void setSubscriber(Subscriber subscriber) { - LettuceAssert.notNull(subscriber, "Subscriber must not be null"); - this.subscriber = subscriber; - } - - @Override - public Subscriber getSubscriber() { - return subscriber; - } -} diff --git a/src/main/java/com/lambdaworks/redis/output/KeyOutput.java b/src/main/java/com/lambdaworks/redis/output/KeyOutput.java deleted file mode 100644 index e0906c3f50..0000000000 --- a/src/main/java/com/lambdaworks/redis/output/KeyOutput.java +++ /dev/null @@ -1,26 +0,0 @@ -// Copyright (C) 2013 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis.output; - -import java.nio.ByteBuffer; - -import com.lambdaworks.redis.codec.RedisCodec; - -/** - * Key output. - * - * @param Key type. - * @param Value type. 
- * - * @author Will Glozer - */ -public class KeyOutput extends CommandOutput { - public KeyOutput(RedisCodec codec) { - super(codec, null); - } - - @Override - public void set(ByteBuffer bytes) { - output = (bytes == null) ? null : codec.decodeKey(bytes); - } -} diff --git a/src/main/java/com/lambdaworks/redis/output/KeyScanOutput.java b/src/main/java/com/lambdaworks/redis/output/KeyScanOutput.java deleted file mode 100644 index b8f7ca9672..0000000000 --- a/src/main/java/com/lambdaworks/redis/output/KeyScanOutput.java +++ /dev/null @@ -1,26 +0,0 @@ -package com.lambdaworks.redis.output; - -import java.nio.ByteBuffer; - -import com.lambdaworks.redis.KeyScanCursor; -import com.lambdaworks.redis.codec.RedisCodec; - -/** - * {@link com.lambdaworks.redis.KeyScanCursor} for scan cursor output. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - */ -public class KeyScanOutput extends ScanOutput> { - - public KeyScanOutput(RedisCodec codec) { - super(codec, new KeyScanCursor()); - } - - @Override - protected void setOutput(ByteBuffer bytes) { - output.getKeys().add(bytes == null ? null : codec.decodeKey(bytes)); - } - -} diff --git a/src/main/java/com/lambdaworks/redis/output/KeyScanStreamingOutput.java b/src/main/java/com/lambdaworks/redis/output/KeyScanStreamingOutput.java deleted file mode 100644 index 61f5a2f167..0000000000 --- a/src/main/java/com/lambdaworks/redis/output/KeyScanStreamingOutput.java +++ /dev/null @@ -1,31 +0,0 @@ -package com.lambdaworks.redis.output; - -import java.nio.ByteBuffer; - -import com.lambdaworks.redis.StreamScanCursor; -import com.lambdaworks.redis.codec.RedisCodec; - -/** - * Streaming API for multiple Keys. You can implement this interface in order to receive a call to {@code onKey} on every key. - * Key uniqueness is not guaranteed. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - */ -public class KeyScanStreamingOutput extends ScanOutput { - - private final KeyStreamingChannel channel; - - public KeyScanStreamingOutput(RedisCodec codec, KeyStreamingChannel channel) { - super(codec, new StreamScanCursor()); - this.channel = channel; - } - - @Override - protected void setOutput(ByteBuffer bytes) { - channel.onKey(bytes == null ? null : codec.decodeKey(bytes)); - output.setCount(output.getCount() + 1); - } - -} diff --git a/src/main/java/com/lambdaworks/redis/output/KeyStreamingChannel.java b/src/main/java/com/lambdaworks/redis/output/KeyStreamingChannel.java deleted file mode 100644 index 296a0186e0..0000000000 --- a/src/main/java/com/lambdaworks/redis/output/KeyStreamingChannel.java +++ /dev/null @@ -1,19 +0,0 @@ -package com.lambdaworks.redis.output; - -/** - * Streaming API for multiple Keys. You can implement this interface in order to receive a call to {@code onKey} on every key. - * Key uniqueness is not guaranteed. - * - * @param Key type. - * @author Mark Paluch - * @since 3.0 - */ -@FunctionalInterface -public interface KeyStreamingChannel { - /** - * Called on every incoming key. - * - * @param key the key - */ - void onKey(K key); -} diff --git a/src/main/java/com/lambdaworks/redis/output/KeyStreamingOutput.java b/src/main/java/com/lambdaworks/redis/output/KeyStreamingOutput.java deleted file mode 100644 index b24a34d462..0000000000 --- a/src/main/java/com/lambdaworks/redis/output/KeyStreamingOutput.java +++ /dev/null @@ -1,30 +0,0 @@ -package com.lambdaworks.redis.output; - -import java.nio.ByteBuffer; - -import com.lambdaworks.redis.codec.RedisCodec; - -/** - * Streaming-Output of Keys. 
Returns the count of all keys (including null). - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * - */ -public class KeyStreamingOutput extends CommandOutput { - private final KeyStreamingChannel channel; - - public KeyStreamingOutput(RedisCodec codec, KeyStreamingChannel channel) { - super(codec, Long.valueOf(0)); - this.channel = channel; - } - - @Override - public void set(ByteBuffer bytes) { - - channel.onKey(bytes == null ? null : codec.decodeKey(bytes)); - output = output.longValue() + 1; - } - -} diff --git a/src/main/java/com/lambdaworks/redis/output/KeyValueOutput.java b/src/main/java/com/lambdaworks/redis/output/KeyValueOutput.java deleted file mode 100644 index 127ae29d98..0000000000 --- a/src/main/java/com/lambdaworks/redis/output/KeyValueOutput.java +++ /dev/null @@ -1,36 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis.output; - -import com.lambdaworks.redis.KeyValue; -import com.lambdaworks.redis.codec.RedisCodec; - -import java.nio.ByteBuffer; - -/** - * Key-value pair output. - * - * @param Key type. - * @param Value type. - * - * @author Will Glozer - */ -public class KeyValueOutput extends CommandOutput> { - private K key; - - public KeyValueOutput(RedisCodec codec) { - super(codec, null); - } - - @Override - public void set(ByteBuffer bytes) { - if (bytes != null) { - if (key == null) { - key = codec.decodeKey(bytes); - } else { - V value = codec.decodeValue(bytes); - output = new KeyValue(key, value); - } - } - } -} diff --git a/src/main/java/com/lambdaworks/redis/output/KeyValueScanStreamingOutput.java b/src/main/java/com/lambdaworks/redis/output/KeyValueScanStreamingOutput.java deleted file mode 100644 index 3f73e3721e..0000000000 --- a/src/main/java/com/lambdaworks/redis/output/KeyValueScanStreamingOutput.java +++ /dev/null @@ -1,41 +0,0 @@ -package com.lambdaworks.redis.output; - -import java.nio.ByteBuffer; - -import com.lambdaworks.redis.StreamScanCursor; -import com.lambdaworks.redis.codec.RedisCodec; - -/** - * Streaming-Output of Key Value Pairs. Returns the count of all Key-Value pairs (including null). - * - * @param Key type. - * @param Value type. - * - * @author Mark Paluch - */ -public class KeyValueScanStreamingOutput extends ScanOutput { - - private K key; - private KeyValueStreamingChannel channel; - - public KeyValueScanStreamingOutput(RedisCodec codec, KeyValueStreamingChannel channel) { - super(codec, new StreamScanCursor()); - this.channel = channel; - } - - @Override - protected void setOutput(ByteBuffer bytes) { - - if (key == null) { - key = codec.decodeKey(bytes); - return; - } - - V value = (bytes == null) ? null : codec.decodeValue(bytes); - - channel.onKeyValue(key, value); - output.setCount(output.getCount() + 1); - key = null; - } - -} diff --git a/src/main/java/com/lambdaworks/redis/output/KeyValueStreamingChannel.java b/src/main/java/com/lambdaworks/redis/output/KeyValueStreamingChannel.java deleted file mode 100644 index 40f3cdd30f..0000000000 --- a/src/main/java/com/lambdaworks/redis/output/KeyValueStreamingChannel.java +++ /dev/null @@ -1,22 +0,0 @@ -package com.lambdaworks.redis.output; - -/** - * Streaming API for multiple Key-Values. You can implement this interface in order to receive a call to {@code onKeyValue} on - * every key-value pair. Key uniqueness is not guaranteed. - * - * @param Key type. - * @param Value type. 
- * @author Mark Paluch - * @since 3.0 - */ -@FunctionalInterface -public interface KeyValueStreamingChannel { - /** - * - * Called on every incoming key/value pair. - * - * @param key the key - * @param value the value - */ - void onKeyValue(K key, V value); -} diff --git a/src/main/java/com/lambdaworks/redis/output/KeyValueStreamingOutput.java b/src/main/java/com/lambdaworks/redis/output/KeyValueStreamingOutput.java deleted file mode 100644 index c6d023c0d2..0000000000 --- a/src/main/java/com/lambdaworks/redis/output/KeyValueStreamingOutput.java +++ /dev/null @@ -1,38 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis.output; - -import java.nio.ByteBuffer; - -import com.lambdaworks.redis.codec.RedisCodec; - -/** - * Streaming-Output of Key Value Pairs. Returns the count of all Key-Value pairs (including null). - * - * @param Key type. - * @param Value type. - * - * @author Mark Paluch - */ -public class KeyValueStreamingOutput extends CommandOutput { - private K key; - private KeyValueStreamingChannel channel; - - public KeyValueStreamingOutput(RedisCodec codec, KeyValueStreamingChannel channel) { - super(codec, Long.valueOf(0)); - this.channel = channel; - } - - @Override - public void set(ByteBuffer bytes) { - if (key == null) { - key = codec.decodeKey(bytes); - return; - } - - V value = (bytes == null) ? null : codec.decodeValue(bytes); - channel.onKeyValue(key, value); - output = output.longValue() + 1; - key = null; - } -} diff --git a/src/main/java/com/lambdaworks/redis/output/ListOfMapsOutput.java b/src/main/java/com/lambdaworks/redis/output/ListOfMapsOutput.java deleted file mode 100644 index 214164a420..0000000000 --- a/src/main/java/com/lambdaworks/redis/output/ListOfMapsOutput.java +++ /dev/null @@ -1,59 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis.output; - -import java.nio.ByteBuffer; -import java.util.ArrayList; -import java.util.LinkedHashMap; -import java.util.List; -import java.util.Map; - -import com.lambdaworks.redis.codec.RedisCodec; - -/** - * {@link java.util.List} of maps output. - * - * @param Key type. - * @param Value type. 
- * - * @author Will Glozer - */ -public class ListOfMapsOutput extends CommandOutput>> { - private MapOutput nested; - private int mapCount = -1; - private List counts = new ArrayList<>(); - - public ListOfMapsOutput(RedisCodec codec) { - super(codec, new ArrayList<>()); - nested = new MapOutput<>(codec); - } - - @Override - public void set(ByteBuffer bytes) { - nested.set(bytes); - } - - @Override - public void complete(int depth) { - - if (!counts.isEmpty()) { - int expectedSize = counts.get(0); - - if (nested.get().size() == expectedSize) { - counts.remove(0); - output.add(new LinkedHashMap<>(nested.get())); - nested.get().clear(); - } - } - } - - @Override - public void multi(int count) { - if (mapCount == -1) { - mapCount = count; - } else { - // div 2 because of key value pair counts twice - counts.add(count / 2); - } - } -} diff --git a/src/main/java/com/lambdaworks/redis/output/ListSubscriber.java b/src/main/java/com/lambdaworks/redis/output/ListSubscriber.java deleted file mode 100644 index 6dbd8fea1f..0000000000 --- a/src/main/java/com/lambdaworks/redis/output/ListSubscriber.java +++ /dev/null @@ -1,31 +0,0 @@ -package com.lambdaworks.redis.output; - -import java.util.List; - -import com.lambdaworks.redis.internal.LettuceAssert; -import com.lambdaworks.redis.output.StreamingOutput.Subscriber; - -/** - * Simple subscriber - * @author Mark Paluch - * @since 4.2 - */ -class ListSubscriber implements Subscriber { - - private List target; - - private ListSubscriber(List target) { - - LettuceAssert.notNull(target, "Target must not be null"); - this.target = target; - } - - @Override - public void onNext(T t) { - target.add(t); - } - - static ListSubscriber of(List target) { - return new ListSubscriber<>(target); - } -} diff --git a/src/main/java/com/lambdaworks/redis/output/MapOutput.java b/src/main/java/com/lambdaworks/redis/output/MapOutput.java deleted file mode 100644 index 781871ed23..0000000000 --- a/src/main/java/com/lambdaworks/redis/output/MapOutput.java +++ /dev/null @@ -1,50 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis.output; - -import java.nio.ByteBuffer; -import java.util.LinkedHashMap; -import java.util.Map; - -import com.lambdaworks.redis.codec.RedisCodec; - -/** - * {@link Map} of keys and values output. - * - * @param Key type. - * @param Value type. - * - * @author Will Glozer - */ -public class MapOutput extends CommandOutput> { - private K key; - - public MapOutput(RedisCodec codec) { - super(codec, new LinkedHashMap<>()); - } - - @Override - public void set(ByteBuffer bytes) { - if (key == null) { - key = codec.decodeKey(bytes); - return; - } - - V value = (bytes == null) ? null : codec.decodeValue(bytes); - output.put(key, value); - key = null; - } - - @Override - @SuppressWarnings("unchecked") - public void set(long integer) { - if (key == null) { - key = (K) Long.valueOf(integer); - return; - } - - V value = (V) Long.valueOf(integer); - output.put(key, value); - key = null; - } -} diff --git a/src/main/java/com/lambdaworks/redis/output/MapScanOutput.java b/src/main/java/com/lambdaworks/redis/output/MapScanOutput.java deleted file mode 100644 index 45a440f8ab..0000000000 --- a/src/main/java/com/lambdaworks/redis/output/MapScanOutput.java +++ /dev/null @@ -1,36 +0,0 @@ -package com.lambdaworks.redis.output; - -import java.nio.ByteBuffer; - -import com.lambdaworks.redis.MapScanCursor; -import com.lambdaworks.redis.codec.RedisCodec; - -/** - * {@link com.lambdaworks.redis.MapScanCursor} for scan cursor output. 
- * - * @param Key type. - * @param Value type. - * @author Mark Paluch - */ -public class MapScanOutput extends ScanOutput> { - - private K key; - - public MapScanOutput(RedisCodec codec) { - super(codec, new MapScanCursor()); - } - - @Override - protected void setOutput(ByteBuffer bytes) { - - if (key == null) { - key = codec.decodeKey(bytes); - return; - } - - V value = (bytes == null) ? null : codec.decodeValue(bytes); - output.getMap().put(key, value); - key = null; - } - -} diff --git a/src/main/java/com/lambdaworks/redis/output/MultiOutput.java b/src/main/java/com/lambdaworks/redis/output/MultiOutput.java deleted file mode 100644 index 953fe37200..0000000000 --- a/src/main/java/com/lambdaworks/redis/output/MultiOutput.java +++ /dev/null @@ -1,93 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis.output; - -import java.nio.ByteBuffer; -import java.util.ArrayList; -import java.util.List; -import java.util.Queue; - -import com.lambdaworks.redis.RedisCommandExecutionException; -import com.lambdaworks.redis.codec.RedisCodec; -import com.lambdaworks.redis.internal.LettuceFactories; -import com.lambdaworks.redis.protocol.RedisCommand; - -/** - * Output of all commands within a MULTI block. - * - * @param Key type. - * @param Value type. - * @author Will Glozer - */ -public class MultiOutput extends CommandOutput> { - private final Queue> queue; - - public MultiOutput(RedisCodec codec) { - super(codec, new ArrayList<>()); - queue = LettuceFactories.newSpScQueue(); - } - - public void add(RedisCommand cmd) { - queue.add(cmd); - } - - public void cancel() { - for (RedisCommand c : queue) { - c.complete(); - } - } - - @Override - public void set(long integer) { - RedisCommand command = queue.peek(); - if (command != null && command.getOutput() != null) { - command.getOutput().set(integer); - } - } - - @Override - public void set(ByteBuffer bytes) { - RedisCommand command = queue.peek(); - if (command != null && command.getOutput() != null) { - command.getOutput().set(bytes); - } - } - - @Override - public void multi(int count) { - - if (count == -1 && !queue.isEmpty()) { - queue.peek().getOutput().multi(count); - } - } - - @Override - public void setError(ByteBuffer error) { - CommandOutput output = queue.isEmpty() ? this : queue.peek().getOutput(); - output.setError(decodeAscii(error)); - } - - @Override - public void complete(int depth) { - - if (queue.isEmpty()) { - return; - } - - if (depth >= 1) { - RedisCommand cmd = queue.peek(); - cmd.getOutput().complete(depth - 1); - } - - if (depth == 1) { - RedisCommand cmd = queue.remove(); - CommandOutput o = cmd.getOutput(); - output.add(!o.hasError() ? o.get() : new RedisCommandExecutionException(o.getError())); - cmd.complete(); - } else if (depth == 0 && !queue.isEmpty()) { - for (RedisCommand cmd : queue) { - cmd.complete(); - } - } - } -} diff --git a/src/main/java/com/lambdaworks/redis/output/NestedMultiOutput.java b/src/main/java/com/lambdaworks/redis/output/NestedMultiOutput.java deleted file mode 100644 index 7f8b725c10..0000000000 --- a/src/main/java/com/lambdaworks/redis/output/NestedMultiOutput.java +++ /dev/null @@ -1,56 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. 
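`MultiOutput` above collects the reply of every command queued inside a MULTI block and surfaces it, or a `RedisCommandExecutionException` for a failed element, through the list returned by `EXEC`. A sketch of how that looks from the synchronous API, assuming the pre-rename packages and a local server:

```java
import java.util.List;

import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.api.sync.RedisCommands;

public class MultiExample {

    public static void main(String[] args) {

        RedisClient client = RedisClient.create("redis://localhost"); // assumed local server
        RedisCommands<String, String> commands = client.connect().sync();

        commands.multi();             // open the MULTI block
        commands.set("key", "value"); // queued; the call returns null until EXEC
        commands.get("key");          // queued

        // One list element per queued command, in order; errors appear per element.
        List<Object> results = commands.exec();
        System.out.println(results);  // e.g. [OK, value]

        client.shutdown();
    }
}
```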
- -package com.lambdaworks.redis.output; - -import java.nio.ByteBuffer; -import java.util.ArrayList; -import java.util.Deque; -import java.util.List; - -import com.lambdaworks.redis.codec.RedisCodec; -import com.lambdaworks.redis.internal.LettuceFactories; - -/** - * {@link List} of command outputs, possibly deeply nested. - * - * @param Key type. - * @param Value type. - * @author Will Glozer - */ -public class NestedMultiOutput extends CommandOutput> { - private final Deque> stack; - private int depth; - - public NestedMultiOutput(RedisCodec codec) { - super(codec, new ArrayList<>()); - stack = LettuceFactories.newSpScQueue(); - depth = 0; - } - - @Override - public void set(long integer) { - output.add(integer); - } - - @Override - public void set(ByteBuffer bytes) { - output.add(bytes == null ? null : codec.decodeValue(bytes)); - } - - @Override - public void complete(int depth) { - if (depth > 0 && depth < this.depth) { - output = stack.pop(); - this.depth--; - } - } - - @Override - public void multi(int count) { - List a = new ArrayList<>(count); - output.add(a); - stack.push(output); - output = a; - this.depth++; - } -} diff --git a/src/main/java/com/lambdaworks/redis/output/ScanOutput.java b/src/main/java/com/lambdaworks/redis/output/ScanOutput.java deleted file mode 100644 index 975c6f4bf6..0000000000 --- a/src/main/java/com/lambdaworks/redis/output/ScanOutput.java +++ /dev/null @@ -1,39 +0,0 @@ -package com.lambdaworks.redis.output; - -import java.nio.ByteBuffer; - -import com.lambdaworks.redis.LettuceStrings; -import com.lambdaworks.redis.ScanCursor; -import com.lambdaworks.redis.codec.RedisCodec; - -/** - * Cursor handling output. - * - * @param Key type. - * @param Value type. - * @param Cursor type. - * @author Mark Paluch - */ -public abstract class ScanOutput extends CommandOutput { - - public ScanOutput(RedisCodec codec, T cursor) { - super(codec, cursor); - } - - @Override - public void set(ByteBuffer bytes) { - - if (output.getCursor() == null) { - output.setCursor(decodeAscii(bytes)); - if (LettuceStrings.isNotEmpty(output.getCursor()) && "0".equals(output.getCursor())) { - output.setFinished(true); - } - return; - } - - setOutput(bytes); - - } - - protected abstract void setOutput(ByteBuffer bytes); -} diff --git a/src/main/java/com/lambdaworks/redis/output/ScoredValueListOutput.java b/src/main/java/com/lambdaworks/redis/output/ScoredValueListOutput.java deleted file mode 100644 index c79f2e72cd..0000000000 --- a/src/main/java/com/lambdaworks/redis/output/ScoredValueListOutput.java +++ /dev/null @@ -1,53 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis.output; - -import java.nio.ByteBuffer; -import java.util.ArrayList; -import java.util.List; - -import com.lambdaworks.redis.ScoredValue; -import com.lambdaworks.redis.codec.RedisCodec; -import com.lambdaworks.redis.internal.LettuceAssert; - -/** - * {@link List} of values and their associated scores. - * - * @param Key type. - * @param Value type. 
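`ScanOutput` above marks the returned cursor as finished once the server replies with cursor `"0"`, which is what terminates the usual SCAN loop. An illustrative loop against the pre-rename synchronous API (sketch only):

```java
import com.lambdaworks.redis.KeyScanCursor;
import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.api.sync.RedisCommands;

public class ScanExample {

    public static void main(String[] args) {

        RedisClient client = RedisClient.create("redis://localhost"); // assumed local server
        RedisCommands<String, String> commands = client.connect().sync();

        // Each scan(...) call returns a KeyScanCursor populated by KeyScanOutput.
        KeyScanCursor<String> cursor = commands.scan();
        cursor.getKeys().forEach(System.out::println);

        while (!cursor.isFinished()) {
            cursor = commands.scan(cursor);
            cursor.getKeys().forEach(System.out::println);
        }

        client.shutdown();
    }
}
```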
- * - * @author Will Glozer - */ -public class ScoredValueListOutput extends CommandOutput>> - implements StreamingOutput> { - private V value; - private Subscriber> subscriber; - - public ScoredValueListOutput(RedisCodec codec) { - super(codec, new ArrayList<>()); - setSubscriber(ListSubscriber.of(output)); - } - - @Override - public void set(ByteBuffer bytes) { - if (value == null) { - value = codec.decodeValue(bytes); - return; - } - - double score = Double.parseDouble(decodeAscii(bytes)); - subscriber.onNext(new ScoredValue<>(score, value)); - value = null; - } - - @Override - public void setSubscriber(Subscriber> subscriber) { - LettuceAssert.notNull(subscriber, "Subscriber must not be null"); - this.subscriber = subscriber; - } - - @Override - public Subscriber> getSubscriber() { - return subscriber; - } -} diff --git a/src/main/java/com/lambdaworks/redis/output/ScoredValueScanOutput.java b/src/main/java/com/lambdaworks/redis/output/ScoredValueScanOutput.java deleted file mode 100644 index bc9087e2ef..0000000000 --- a/src/main/java/com/lambdaworks/redis/output/ScoredValueScanOutput.java +++ /dev/null @@ -1,37 +0,0 @@ -package com.lambdaworks.redis.output; - -import java.nio.ByteBuffer; - -import com.lambdaworks.redis.ScoredValue; -import com.lambdaworks.redis.ScoredValueScanCursor; -import com.lambdaworks.redis.codec.RedisCodec; - -/** - * {@link com.lambdaworks.redis.ScoredValueScanCursor} for scan cursor output. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - */ -public class ScoredValueScanOutput extends ScanOutput> { - - private V value; - - public ScoredValueScanOutput(RedisCodec codec) { - super(codec, new ScoredValueScanCursor()); - } - - @Override - protected void setOutput(ByteBuffer bytes) { - - if (value == null) { - value = codec.decodeValue(bytes); - return; - } - - double score = Double.parseDouble(decodeAscii(bytes)); - output.getValues().add(new ScoredValue(score, value)); - value = null; - } - -} diff --git a/src/main/java/com/lambdaworks/redis/output/ScoredValueScanStreamingOutput.java b/src/main/java/com/lambdaworks/redis/output/ScoredValueScanStreamingOutput.java deleted file mode 100644 index 831159abdf..0000000000 --- a/src/main/java/com/lambdaworks/redis/output/ScoredValueScanStreamingOutput.java +++ /dev/null @@ -1,39 +0,0 @@ -package com.lambdaworks.redis.output; - -import java.nio.ByteBuffer; - -import com.lambdaworks.redis.ScoredValue; -import com.lambdaworks.redis.StreamScanCursor; -import com.lambdaworks.redis.codec.RedisCodec; - -/** - * Streaming-Output of of values and their associated scores. Returns the count of all values (including null). - * - * @param Key type. - * @param Value type. 
- * @author Mark Paluch - */ -public class ScoredValueScanStreamingOutput extends ScanOutput { - - private V value; - private final ScoredValueStreamingChannel channel; - - public ScoredValueScanStreamingOutput(RedisCodec codec, ScoredValueStreamingChannel channel) { - super(codec, new StreamScanCursor()); - this.channel = channel; - } - - @Override - protected void setOutput(ByteBuffer bytes) { - if (value == null) { - value = codec.decodeValue(bytes); - return; - } - - double score = Double.parseDouble(decodeAscii(bytes)); - channel.onValue(new ScoredValue(score, value)); - value = null; - output.setCount(output.getCount() + 1); - } - -} diff --git a/src/main/java/com/lambdaworks/redis/output/ScoredValueStreamingChannel.java b/src/main/java/com/lambdaworks/redis/output/ScoredValueStreamingChannel.java deleted file mode 100644 index 95fb429128..0000000000 --- a/src/main/java/com/lambdaworks/redis/output/ScoredValueStreamingChannel.java +++ /dev/null @@ -1,21 +0,0 @@ -package com.lambdaworks.redis.output; - -import com.lambdaworks.redis.ScoredValue; - -/** - * Streaming API for multiple Keys. You can implement this interface in order to receive a call to {@code onValue} on every - * value. - * - * @param Value type. - * @author Mark Paluch - * @since 3.0 - */ -@FunctionalInterface -public interface ScoredValueStreamingChannel { - /** - * Called on every incoming ScoredValue. - * - * @param value the scored value - */ - void onValue(ScoredValue value); -} diff --git a/src/main/java/com/lambdaworks/redis/output/ScoredValueStreamingOutput.java b/src/main/java/com/lambdaworks/redis/output/ScoredValueStreamingOutput.java deleted file mode 100644 index 8c60f3e0c1..0000000000 --- a/src/main/java/com/lambdaworks/redis/output/ScoredValueStreamingOutput.java +++ /dev/null @@ -1,38 +0,0 @@ -package com.lambdaworks.redis.output; - -import java.nio.ByteBuffer; - -import com.lambdaworks.redis.ScoredValue; -import com.lambdaworks.redis.codec.RedisCodec; - -/** - * Streaming-Output of of values and their associated scores. Returns the count of all values (including null). - * - * @author Mark Paluch - * @param Key type. - * @param Value type. - */ -public class ScoredValueStreamingOutput extends CommandOutput { - private V value; - private final ScoredValueStreamingChannel channel; - - public ScoredValueStreamingOutput(RedisCodec codec, ScoredValueStreamingChannel channel) { - super(codec, Long.valueOf(0)); - this.channel = channel; - } - - @Override - public void set(ByteBuffer bytes) { - - if (value == null) { - value = codec.decodeValue(bytes); - return; - } - - double score = Double.parseDouble(decodeAscii(bytes)); - channel.onValue(new ScoredValue(score, value)); - value = null; - output = output.longValue() + 1; - } - -} diff --git a/src/main/java/com/lambdaworks/redis/output/StatusOutput.java b/src/main/java/com/lambdaworks/redis/output/StatusOutput.java deleted file mode 100644 index 39046a018c..0000000000 --- a/src/main/java/com/lambdaworks/redis/output/StatusOutput.java +++ /dev/null @@ -1,29 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis.output; - -import java.nio.ByteBuffer; - -import com.lambdaworks.redis.codec.RedisCodec; - -import static com.lambdaworks.redis.protocol.LettuceCharsets.buffer; - -/** - * Status message output. - * - * @param Key type. - * @param Value type. 
- * @author Will Glozer - */ -public class StatusOutput extends CommandOutput { - private static final ByteBuffer OK = buffer("OK"); - - public StatusOutput(RedisCodec codec) { - super(codec, null); - } - - @Override - public void set(ByteBuffer bytes) { - output = OK.equals(bytes) ? "OK" : decodeAscii(bytes); - } -} diff --git a/src/main/java/com/lambdaworks/redis/output/StreamingOutput.java b/src/main/java/com/lambdaworks/redis/output/StreamingOutput.java deleted file mode 100644 index 4b84752f2b..0000000000 --- a/src/main/java/com/lambdaworks/redis/output/StreamingOutput.java +++ /dev/null @@ -1,41 +0,0 @@ -package com.lambdaworks.redis.output; - -/** - * Implementors of this class support a streaming {@link CommandOutput} while the command is still processed. The receiving - * {@link Subscriber} receives {@link Subscriber#onNext(Object)} calls while the command is active. - * - * @author Mark Paluch - * @since 4.2 - */ -public interface StreamingOutput { - - /** - * Sets the {@link Subscriber}. - * - * @param subscriber - */ - void setSubscriber(Subscriber subscriber); - - /** - * Retrieves the {@link Subscriber}. - * - * @return - */ - Subscriber getSubscriber(); - - /** - * Subscriber to a {@link StreamingOutput}. - * - * @param - */ - interface Subscriber { - - /** - * Data notification sent by the {@link StreamingOutput}. - * - * @param t element - */ - void onNext(T t); - } - -} diff --git a/src/main/java/com/lambdaworks/redis/output/StringListOutput.java b/src/main/java/com/lambdaworks/redis/output/StringListOutput.java deleted file mode 100644 index c1fef0a6f1..0000000000 --- a/src/main/java/com/lambdaworks/redis/output/StringListOutput.java +++ /dev/null @@ -1,43 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis.output; - -import java.nio.ByteBuffer; -import java.util.ArrayList; -import java.util.List; - -import com.lambdaworks.redis.codec.RedisCodec; -import com.lambdaworks.redis.internal.LettuceAssert; - -/** - * {@link List} of string output. - * - * @param Key type. - * @param Value type. - * @author Will Glozer - */ -public class StringListOutput extends CommandOutput> implements StreamingOutput{ - - private Subscriber subscriber; - - public StringListOutput(RedisCodec codec) { - super(codec, new ArrayList<>()); - setSubscriber(ListSubscriber.of(output)); - } - - @Override - public void set(ByteBuffer bytes) { - subscriber.onNext(bytes == null ? null : decodeAscii(bytes)); - } - - @Override - public void setSubscriber(Subscriber subscriber) { - LettuceAssert.notNull(subscriber, "Subscriber must not be null"); - this.subscriber = subscriber; - } - - @Override - public Subscriber getSubscriber() { - return subscriber; - } -} diff --git a/src/main/java/com/lambdaworks/redis/output/ValueListOutput.java b/src/main/java/com/lambdaworks/redis/output/ValueListOutput.java deleted file mode 100644 index a22e0b129d..0000000000 --- a/src/main/java/com/lambdaworks/redis/output/ValueListOutput.java +++ /dev/null @@ -1,44 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis.output; - -import java.nio.ByteBuffer; -import java.util.ArrayList; -import java.util.List; - -import com.lambdaworks.redis.codec.RedisCodec; -import com.lambdaworks.redis.internal.LettuceAssert; - -/** - * {@link List} of values output. - * - * @param Key type. - * @param Value type. 
- * - * @author Will Glozer - */ -public class ValueListOutput extends CommandOutput> implements StreamingOutput { - - private Subscriber subscriber; - - public ValueListOutput(RedisCodec codec) { - super(codec, new ArrayList<>()); - setSubscriber(ListSubscriber.of(output)); - } - - @Override - public void set(ByteBuffer bytes) { - subscriber.onNext(bytes == null ? null : codec.decodeValue(bytes)); - } - - @Override - public void setSubscriber(Subscriber subscriber) { - LettuceAssert.notNull(subscriber, "Subscriber must not be null"); - this.subscriber = subscriber; - } - - @Override - public Subscriber getSubscriber() { - return subscriber; - } -} diff --git a/src/main/java/com/lambdaworks/redis/output/ValueOutput.java b/src/main/java/com/lambdaworks/redis/output/ValueOutput.java deleted file mode 100644 index b250ce6e95..0000000000 --- a/src/main/java/com/lambdaworks/redis/output/ValueOutput.java +++ /dev/null @@ -1,26 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis.output; - -import java.nio.ByteBuffer; - -import com.lambdaworks.redis.codec.RedisCodec; - -/** - * Value output. - * - * @param Key type. - * @param Value type. - * - * @author Will Glozer - */ -public class ValueOutput extends CommandOutput { - public ValueOutput(RedisCodec codec) { - super(codec, null); - } - - @Override - public void set(ByteBuffer bytes) { - output = (bytes == null) ? null : codec.decodeValue(bytes); - } -} diff --git a/src/main/java/com/lambdaworks/redis/output/ValueScanOutput.java b/src/main/java/com/lambdaworks/redis/output/ValueScanOutput.java deleted file mode 100644 index 1a7e976682..0000000000 --- a/src/main/java/com/lambdaworks/redis/output/ValueScanOutput.java +++ /dev/null @@ -1,26 +0,0 @@ -package com.lambdaworks.redis.output; - -import java.nio.ByteBuffer; - -import com.lambdaworks.redis.ValueScanCursor; -import com.lambdaworks.redis.codec.RedisCodec; - -/** - * {@link com.lambdaworks.redis.ValueScanCursor} for scan cursor output. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - */ -public class ValueScanOutput extends ScanOutput> { - - public ValueScanOutput(RedisCodec codec) { - super(codec, new ValueScanCursor()); - } - - @Override - protected void setOutput(ByteBuffer bytes) { - output.getValues().add(bytes == null ? null : codec.decodeValue(bytes)); - } - -} diff --git a/src/main/java/com/lambdaworks/redis/output/ValueScanStreamingOutput.java b/src/main/java/com/lambdaworks/redis/output/ValueScanStreamingOutput.java deleted file mode 100644 index e3415fa530..0000000000 --- a/src/main/java/com/lambdaworks/redis/output/ValueScanStreamingOutput.java +++ /dev/null @@ -1,31 +0,0 @@ -package com.lambdaworks.redis.output; - -import java.nio.ByteBuffer; - -import com.lambdaworks.redis.StreamScanCursor; -import com.lambdaworks.redis.codec.RedisCodec; - -/** - * Streaming API for multiple Values. You can implement this interface in order to receive a call to {@code onValue} on every - * key. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - */ -public class ValueScanStreamingOutput extends ScanOutput { - - private final ValueStreamingChannel channel; - - public ValueScanStreamingOutput(RedisCodec codec, ValueStreamingChannel channel) { - super(codec, new StreamScanCursor()); - this.channel = channel; - } - - @Override - protected void setOutput(ByteBuffer bytes) { - channel.onValue(bytes == null ? 
null : codec.decodeValue(bytes)); - output.setCount(output.getCount() + 1); - } - -} diff --git a/src/main/java/com/lambdaworks/redis/output/ValueSetOutput.java b/src/main/java/com/lambdaworks/redis/output/ValueSetOutput.java deleted file mode 100644 index 92afce5cc1..0000000000 --- a/src/main/java/com/lambdaworks/redis/output/ValueSetOutput.java +++ /dev/null @@ -1,28 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis.output; - -import java.nio.ByteBuffer; -import java.util.HashSet; -import java.util.Set; - -import com.lambdaworks.redis.codec.RedisCodec; - -/** - * {@link Set} of value output. - * - * @param Key type. - * @param Value type. - * - * @author Will Glozer - */ -public class ValueSetOutput extends CommandOutput> { - public ValueSetOutput(RedisCodec codec) { - super(codec, new HashSet<>()); - } - - @Override - public void set(ByteBuffer bytes) { - output.add(bytes == null ? null : codec.decodeValue(bytes)); - } -} diff --git a/src/main/java/com/lambdaworks/redis/output/ValueStreamingChannel.java b/src/main/java/com/lambdaworks/redis/output/ValueStreamingChannel.java deleted file mode 100644 index 0ce8633de7..0000000000 --- a/src/main/java/com/lambdaworks/redis/output/ValueStreamingChannel.java +++ /dev/null @@ -1,19 +0,0 @@ -package com.lambdaworks.redis.output; - -/** - * Streaming API for multiple Keys. You can implement this interface in order to receive a call to {@code onValue} on every - * value. - * - * @param Value type. - * @author Mark Paluch - * @since 3.0 - */ -@FunctionalInterface -public interface ValueStreamingChannel { - /** - * Called on every incoming value. - * - * @param value the value - */ - void onValue(V value); -} diff --git a/src/main/java/com/lambdaworks/redis/output/ValueStreamingOutput.java b/src/main/java/com/lambdaworks/redis/output/ValueStreamingOutput.java deleted file mode 100644 index bcb7a9c179..0000000000 --- a/src/main/java/com/lambdaworks/redis/output/ValueStreamingOutput.java +++ /dev/null @@ -1,29 +0,0 @@ -package com.lambdaworks.redis.output; - -import java.nio.ByteBuffer; - -import com.lambdaworks.redis.codec.RedisCodec; - -/** - * Streaming-Output of Values. Returns the count of all values (including null). - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - */ -public class ValueStreamingOutput extends CommandOutput { - private final ValueStreamingChannel channel; - - public ValueStreamingOutput(RedisCodec codec, ValueStreamingChannel channel) { - super(codec, Long.valueOf(0)); - this.channel = channel; - } - - @Override - public void set(ByteBuffer bytes) { - - channel.onValue(bytes == null ? null : codec.decodeValue(bytes)); - output = output.longValue() + 1; - } - -} diff --git a/src/main/java/com/lambdaworks/redis/output/package-info.java b/src/main/java/com/lambdaworks/redis/output/package-info.java deleted file mode 100644 index b966f88ce3..0000000000 --- a/src/main/java/com/lambdaworks/redis/output/package-info.java +++ /dev/null @@ -1,4 +0,0 @@ -/** - * Implementation of different output protocols including the Streaming API. 
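The streaming channels above (`ValueStreamingChannel` together with `ValueStreamingOutput`) push each decoded element to a callback instead of buffering the whole reply into a collection, and the command then returns only the element count. An illustrative call, assuming the pre-rename synchronous API:

```java
import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.api.sync.RedisCommands;

public class ValueStreamingExample {

    public static void main(String[] args) {

        RedisClient client = RedisClient.create("redis://localhost"); // assumed local server
        RedisCommands<String, String> commands = client.connect().sync();

        commands.sadd("myset", "a", "b", "c");

        // ValueStreamingChannel is a @FunctionalInterface, so a lambda is enough;
        // each member is delivered as it is decoded and only the count is returned.
        Long count = commands.smembers(value -> System.out.println("member: " + value), "myset");

        System.out.println("streamed " + count + " members");
        client.shutdown();
    }
}
```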
- */ -package com.lambdaworks.redis.output; \ No newline at end of file diff --git a/src/main/java/com/lambdaworks/redis/package-info.java b/src/main/java/com/lambdaworks/redis/package-info.java deleted file mode 100644 index 82adf0e005..0000000000 --- a/src/main/java/com/lambdaworks/redis/package-info.java +++ /dev/null @@ -1,4 +0,0 @@ -/** - * The redis client package containing {@link com.lambdaworks.redis.RedisClient} for regular and sentinel operations. - */ -package com.lambdaworks.redis; \ No newline at end of file diff --git a/src/main/java/com/lambdaworks/redis/protocol/AsyncCommand.java b/src/main/java/com/lambdaworks/redis/protocol/AsyncCommand.java deleted file mode 100644 index eeb8319109..0000000000 --- a/src/main/java/com/lambdaworks/redis/protocol/AsyncCommand.java +++ /dev/null @@ -1,163 +0,0 @@ -package com.lambdaworks.redis.protocol; - -import java.util.concurrent.CompletableFuture; -import java.util.concurrent.CountDownLatch; -import java.util.concurrent.TimeUnit; -import java.util.function.Consumer; - -import com.lambdaworks.redis.RedisCommandExecutionException; -import com.lambdaworks.redis.RedisCommandInterruptedException; -import com.lambdaworks.redis.RedisFuture; -import com.lambdaworks.redis.internal.LettuceAssert; -import com.lambdaworks.redis.output.CommandOutput; -import io.netty.buffer.ByteBuf; - -/** - * An asynchronous redis command and its result. All successfully executed commands will eventually return a - * {@link CommandOutput} object. - * - * @param Key type. - * @param Value type. - * @param Command output type. - * - * @author Mark Paluch - */ -public class AsyncCommand extends CompletableFuture implements RedisCommand, RedisFuture, - CompleteableCommand, DecoratedCommand { - - protected RedisCommand command; - protected CountDownLatch latch = new CountDownLatch(1); - - /** - * - * @param command the command, must not be {@literal null}. - * - */ - public AsyncCommand(RedisCommand command) { - LettuceAssert.notNull(command, "RedisCommand must not be null"); - this.command = command; - } - - /** - * Wait up to the specified time for the command output to become available. - * - * @param timeout Maximum time to wait for a result. - * @param unit Unit of time for the timeout. - * - * @return true if the output became available. - */ - @Override - public boolean await(long timeout, TimeUnit unit) { - try { - return latch.await(timeout, unit); - } catch (InterruptedException e) { - Thread.currentThread().interrupt(); - throw new RedisCommandInterruptedException(e); - } - } - - /** - * Get the object that holds this command's output. - * - * @return The command output object. - */ - @Override - public CommandOutput getOutput() { - return command.getOutput(); - } - - /** - * Mark this command complete and notify all waiting threads. 
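`AsyncCommand` above backs the `RedisFuture` instances handed out by the asynchronous API; because it extends `CompletableFuture`, results can be consumed via callbacks rather than by blocking. A sketch, again assuming the pre-rename packages and a local server:

```java
import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.RedisFuture;
import com.lambdaworks.redis.api.async.RedisAsyncCommands;

public class AsyncExample {

    public static void main(String[] args) {

        RedisClient client = RedisClient.create("redis://localhost"); // assumed local server
        RedisAsyncCommands<String, String> async = client.connect().async();

        RedisFuture<String> set = async.set("key", "value");
        RedisFuture<String> get = async.get("key");

        // RedisFuture extends CompletionStage, so callbacks can be chained
        // instead of blocking on get().
        get.thenAccept(value -> System.out.println("value: " + value));

        // Block here only so the sketch does not shut down before completion.
        set.toCompletableFuture().join();
        get.toCompletableFuture().join();
        client.shutdown();
    }
}
```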
- */ - @Override - public void complete() { - if (latch.getCount() == 1) { - completeResult(); - command.complete(); - } - latch.countDown(); - } - - protected void completeResult() { - if (command.getOutput() == null) { - complete(null); - } else if (command.getOutput().hasError()) { - completeExceptionally(new RedisCommandExecutionException(command.getOutput().getError())); - } else { - complete(command.getOutput().get()); - } - } - - @Override - public boolean completeExceptionally(Throwable ex) { - boolean result = false; - if (latch.getCount() == 1) { - command.completeExceptionally(ex); - result = super.completeExceptionally(ex); - } - latch.countDown(); - return result; - } - - @Override - public boolean cancel(boolean mayInterruptIfRunning) { - try { - command.cancel(); - return super.cancel(mayInterruptIfRunning); - } finally { - latch.countDown(); - } - } - - @Override - public String getError() { - return command.getOutput().getError(); - } - - @Override - public CommandArgs getArgs() { - return command.getArgs(); - } - - @Override - public String toString() { - final StringBuilder sb = new StringBuilder(); - sb.append(getClass().getSimpleName()); - sb.append(" [type=").append(getType()); - sb.append(", output=").append(getOutput()); - sb.append(", commandType=").append(command.getClass().getName()); - sb.append(']'); - return sb.toString(); - } - - @Override - public ProtocolKeyword getType() { - return command.getType(); - } - - @Override - public void cancel() { - cancel(true); - } - - @Override - public void encode(ByteBuf buf) { - command.encode(buf); - } - - @Override - public void setOutput(CommandOutput output) { - command.setOutput(output); - } - - @Override - public void onComplete(Consumer action) { - thenAccept(action); - } - - @Override - public RedisCommand getDelegate() { - return command; - } - -} diff --git a/src/main/java/com/lambdaworks/redis/protocol/BaseRedisCommandBuilder.java b/src/main/java/com/lambdaworks/redis/protocol/BaseRedisCommandBuilder.java deleted file mode 100644 index 1cd7afb3d8..0000000000 --- a/src/main/java/com/lambdaworks/redis/protocol/BaseRedisCommandBuilder.java +++ /dev/null @@ -1,59 +0,0 @@ -package com.lambdaworks.redis.protocol; - -import com.lambdaworks.redis.RedisException; -import com.lambdaworks.redis.ScriptOutputType; -import com.lambdaworks.redis.codec.RedisCodec; -import com.lambdaworks.redis.output.*; - -/** - * @author Mark Paluch - * @since 3.0 - */ -public class BaseRedisCommandBuilder { - protected RedisCodec codec; - - public BaseRedisCommandBuilder(RedisCodec codec) { - this.codec = codec; - } - - protected Command createCommand(CommandType type, CommandOutput output) { - return createCommand(type, output, (CommandArgs) null); - } - - protected Command createCommand(CommandType type, CommandOutput output, K key) { - CommandArgs args = new CommandArgs(codec).addKey(key); - return createCommand(type, output, args); - } - - protected Command createCommand(CommandType type, CommandOutput output, K key, V value) { - CommandArgs args = new CommandArgs(codec).addKey(key).addValue(value); - return createCommand(type, output, args); - } - - protected Command createCommand(CommandType type, CommandOutput output, K key, V[] values) { - CommandArgs args = new CommandArgs(codec).addKey(key).addValues(values); - return createCommand(type, output, args); - } - - protected Command createCommand(CommandType type, CommandOutput output, CommandArgs args) { - return new Command(type, output, args); - } - - @SuppressWarnings("unchecked") - 
protected CommandOutput newScriptOutput(RedisCodec codec, ScriptOutputType type) { - switch (type) { - case BOOLEAN: - return (CommandOutput) new BooleanOutput(codec); - case INTEGER: - return (CommandOutput) new IntegerOutput(codec); - case STATUS: - return (CommandOutput) new StatusOutput(codec); - case MULTI: - return (CommandOutput) new NestedMultiOutput(codec); - case VALUE: - return (CommandOutput) new ValueOutput(codec); - default: - throw new RedisException("Unsupported script output type"); - } - } -} diff --git a/src/main/java/com/lambdaworks/redis/protocol/ChannelLogDescriptor.java b/src/main/java/com/lambdaworks/redis/protocol/ChannelLogDescriptor.java deleted file mode 100644 index adfeac8ea7..0000000000 --- a/src/main/java/com/lambdaworks/redis/protocol/ChannelLogDescriptor.java +++ /dev/null @@ -1,40 +0,0 @@ -package com.lambdaworks.redis.protocol; - -import io.netty.channel.Channel; - -/** - * @author Mark Paluch - */ -class ChannelLogDescriptor { - - static String logDescriptor(Channel channel) { - - if (channel == null) { - return "unknown"; - } - - StringBuffer buffer = new StringBuffer(64); - - buffer.append("channel=").append(getId(channel)).append(", "); - - if (channel.localAddress() != null && channel.remoteAddress() != null) { - buffer.append(channel.localAddress()).append(" -> "); - } else { - buffer.append(channel); - } - - if (!channel.isActive()) { - if (buffer.length() != 0) { - buffer.append(' '); - } - - buffer.append("(inactive)"); - } - - return buffer.toString(); - } - - private static String getId(Channel channel) { - return String.format("0x%08x", channel.hashCode()); - } -} diff --git a/src/main/java/com/lambdaworks/redis/protocol/Command.java b/src/main/java/com/lambdaworks/redis/protocol/Command.java deleted file mode 100644 index 26fdfd5e2b..0000000000 --- a/src/main/java/com/lambdaworks/redis/protocol/Command.java +++ /dev/null @@ -1,194 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis.protocol; - -import com.lambdaworks.redis.internal.LettuceAssert; -import com.lambdaworks.redis.output.CommandOutput; - -import io.netty.buffer.ByteBuf; - -/** - * A Redis command with a {@link ProtocolKeyword command type}, {@link CommandArgs arguments} and an optional {@link CommandOutput - * output}. All successfully executed commands will eventually return a {@link CommandOutput} object. - * - * @param Key type. - * @param Value type. - * @param Command output type. - * - * @author Will Glozer - * @author Mark Paluch - */ -public class Command implements RedisCommand, WithLatency { - - private final ProtocolKeyword type; - - protected CommandArgs args; - protected CommandOutput output; - protected Throwable exception; - protected boolean cancelled = false; - protected boolean completed = false; - protected long sentNs = -1; - protected long firstResponseNs = -1; - protected long completedNs = -1; - - /** - * Create a new command with the supplied type. - * - * @param type Command type, must not be {@literal null}. - * @param output Command output, can be {@literal null}. - */ - public Command(ProtocolKeyword type, CommandOutput output) { - this(type, output, null); - } - - /** - * Create a new command with the supplied type and args. - * - * @param type Command type, must not be {@literal null}. - * @param output Command output, can be {@literal null}. 
- * @param args Command args, can be {@literal null} - */ - public Command(ProtocolKeyword type, CommandOutput output, CommandArgs args) { - LettuceAssert.notNull(type, "Command type must not be null"); - this.type = type; - this.output = output; - this.args = args; - } - - /** - * Get the object that holds this command's output. - * - * @return The command output object. - */ - @Override - public CommandOutput getOutput() { - return output; - } - - @Override - public boolean completeExceptionally(Throwable throwable) { - if (output != null) { - output.setError(throwable.getMessage()); - } - - exception = throwable; - return true; - } - - /** - * Mark this command complete and notify all waiting threads. - */ - @Override - public void complete() { - completed = true; - } - - @Override - public void cancel() { - cancelled = true; - } - - /** - * Encode and write this command to the supplied buffer using the new Unified - * Request Protocol. - * - * @param buf Buffer to write to. - */ - public void encode(ByteBuf buf) { - - buf.writeByte('*'); - CommandArgs.IntegerArgument.writeInteger(buf, 1 + (args != null ? args.count() : 0)); - - buf.writeBytes(CommandArgs.CRLF); - - CommandArgs.BytesArgument.writeBytes(buf, type.getBytes()); - - if (args != null) { - args.encode(buf); - } - } - - public String getError() { - return output.getError(); - } - - @Override - public CommandArgs getArgs() { - return args; - } - - /** - * - * @return the resut from the output. - */ - public T get() { - if (output != null) { - return output.get(); - } - return null; - } - - @Override - public String toString() { - final StringBuilder sb = new StringBuilder(); - sb.append(getClass().getSimpleName()); - sb.append(" [type=").append(type); - sb.append(", output=").append(output); - sb.append(']'); - return sb.toString(); - } - - public void setOutput(CommandOutput output) { - if (isCancelled() || completed) { - throw new IllegalStateException("Command is completed/cancelled. Cannot set a new output"); - } - this.output = output; - } - - @Override - public ProtocolKeyword getType() { - return type; - } - - @Override - public boolean isCancelled() { - return cancelled; - } - - @Override - public boolean isDone() { - return completed; - } - - @Override - public void sent(long timeNs) { - sentNs = timeNs; - firstResponseNs = -1; - completedNs = -1; - } - - @Override - public void firstResponse(long timeNs) { - firstResponseNs = timeNs; - } - - @Override - public void completed(long timeNs) { - completedNs = timeNs; - } - - @Override - public long getSent() { - return sentNs; - } - - @Override - public long getFirstResponse() { - return firstResponseNs; - } - - @Override - public long getCompleted() { - return completedNs; - } -} \ No newline at end of file diff --git a/src/main/java/com/lambdaworks/redis/protocol/CommandArgs.java b/src/main/java/com/lambdaworks/redis/protocol/CommandArgs.java deleted file mode 100644 index de2d79da83..0000000000 --- a/src/main/java/com/lambdaworks/redis/protocol/CommandArgs.java +++ /dev/null @@ -1,609 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. 
- -package com.lambdaworks.redis.protocol; - -import java.nio.ByteBuffer; -import java.util.ArrayList; -import java.util.List; -import java.util.Map; - -import com.lambdaworks.redis.codec.ByteArrayCodec; -import com.lambdaworks.redis.codec.ToByteBufEncoder; -import com.lambdaworks.redis.codec.RedisCodec; -import com.lambdaworks.redis.internal.LettuceAssert; - -import io.netty.buffer.ByteBuf; -import io.netty.buffer.UnpooledByteBufAllocator; - -/** - * Redis command arguments. {@link CommandArgs} is a container for multiple singular arguments. Key and Value arguments are - * encoded using the {@link RedisCodec} to their byte representation. {@link CommandArgs} provides a fluent style of adding - * multiple arguments. A {@link CommandArgs} instance can be reused across multiple commands and invocations. - * - *

- * <p>
- * Usage
- * </p>
- *
- * <pre>
- *     <code>
- *         new CommandArgs<>(codec).addKey(key).addValue(value).add(CommandKeyword.FORCE);
- *     </code>
- * </pre>
- * - * @param Key type. - * @param Value type. - * @author Will Glozer - * @author Mark Paluch - */ -public class CommandArgs { - - static final byte[] CRLF = "\r\n".getBytes(LettuceCharsets.ASCII); - - protected final RedisCodec codec; - private final List singularArguments = new ArrayList<>(10); - private Long firstInteger; - private String firstString; - private ByteBuffer firstEncodedKey; - private K firstKey; - - /** - * - * @param codec Codec used to encode/decode keys and values, must not be {@literal null}. - */ - public CommandArgs(RedisCodec codec) { - - LettuceAssert.notNull(codec, "RedisCodec must not be null"); - this.codec = codec; - } - - /** - * - * @return the number of arguments. - */ - public int count() { - return singularArguments.size(); - } - - /** - * Adds a key argument. - * - * @param key the key - * @return the command args. - */ - public CommandArgs addKey(K key) { - - if (firstKey == null) { - firstKey = key; - } - - singularArguments.add(KeyArgument.of(key, codec)); - return this; - } - - /** - * Add multiple key arguments. - * - * @param keys must not be {@literal null}. - * @return the command args. - */ - public CommandArgs addKeys(Iterable keys) { - - LettuceAssert.notNull(keys, "Keys must not be null"); - - for (K key : keys) { - addKey(key); - } - return this; - } - - /** - * Add multiple key arguments. - * - * @param keys must not be {@literal null}. - * @return the command args. - */ - public CommandArgs addKeys(K... keys) { - - LettuceAssert.notNull(keys, "Keys must not be null"); - - for (K key : keys) { - addKey(key); - } - return this; - } - - /** - * Add a value argument. - * - * @param value the value - * @return the command args. - */ - public CommandArgs addValue(V value) { - - singularArguments.add(ValueArgument.of(value, codec)); - return this; - } - - /** - * Add multiple value arguments. - * - * @param values must not be {@literal null}. - * @return the command args. - */ - public CommandArgs addValues(Iterable values) { - - LettuceAssert.notNull(values, "Values must not be null"); - - for (V value : values) { - addValue(value); - } - return this; - } - - /** - * Add multiple value arguments. - * - * @param values must not be {@literal null}. - * @return the command args. - */ - public CommandArgs addValues(V... values) { - - LettuceAssert.notNull(values, "Values must not be null"); - - for (V value : values) { - addValue(value); - } - return this; - } - - /** - * Add a map (hash) argument. - * - * @param map the map, must not be {@literal null}. - * @return the command args. - */ - public CommandArgs add(Map map) { - - LettuceAssert.notNull(map, "Map must not be null"); - - for (Map.Entry entry : map.entrySet()) { - addKey(entry.getKey()).addValue(entry.getValue()); - } - - return this; - } - - /** - * Add a string argument. The argument is represented as bulk string. - * - * @param s the string. - * @return the command args. - */ - public CommandArgs add(String s) { - - if (firstString == null) { - firstString = s; - } - - singularArguments.add(StringArgument.of(s)); - return this; - } - - /** - * Add an 64-bit integer (long) argument. - * - * @param n the argument. - * @return the command args. - */ - public CommandArgs add(long n) { - - if (firstInteger == null) { - firstInteger = n; - } - - singularArguments.add(IntegerArgument.of(n)); - return this; - } - - /** - * Add a double argument. - * - * @param n the double argument. - * @return the command args. 
- */ - public CommandArgs add(double n) { - - singularArguments.add(DoubleArgument.of(n)); - return this; - } - - /** - * Add a byte-array argument. The argument is represented as bulk string. - * - * @param value the byte-array. - * @return the command args. - */ - public CommandArgs add(byte[] value) { - - singularArguments.add(BytesArgument.of(value)); - return this; - } - - /** - * Add a {@link CommandKeyword} argument. The argument is represented as bulk string. - * - * @param keyword must not be {@literal null}. - * @return the command args. - */ - public CommandArgs add(CommandKeyword keyword) { - - LettuceAssert.notNull(keyword, "CommandKeyword must not be null"); - return add(keyword.bytes); - } - - /** - * Add a {@link CommandType} argument. The argument is represented as bulk string. - * - * @param type must not be {@literal null}. - * @return the command args. - */ - public CommandArgs add(CommandType type) { - - LettuceAssert.notNull(type, "CommandType must not be null"); - return add(type.bytes); - } - - /** - * Add a {@link ProtocolKeyword} argument. The argument is represented as bulk string. - * - * @param keyword the keyword, must not be {@literal null} - * @return the command args. - */ - public CommandArgs add(ProtocolKeyword keyword) { - - LettuceAssert.notNull(keyword, "CommandKeyword must not be null"); - return add(keyword.getBytes()); - } - - @Override - public String toString() { - - final StringBuilder sb = new StringBuilder(); - sb.append(getClass().getSimpleName()); - - ByteBuf buffer = UnpooledByteBufAllocator.DEFAULT.buffer(singularArguments.size() * 10); - encode(buffer); - buffer.resetReaderIndex(); - - byte[] bytes = new byte[buffer.readableBytes()]; - buffer.readBytes(bytes); - sb.append(" [buffer=").append(new String(bytes)); - sb.append(']'); - buffer.release(); - - return sb.toString(); - } - - /** - * Returns the first integer argument. - * - * @return the first integer argument or {@literal null}. - */ - public Long getFirstInteger() { - return firstInteger; - } - - /** - * Returns the first string argument. - * - * @return the first string argument or {@literal null}. - */ - public String getFirstString() { - return firstString; - } - - /** - * Returns the first key argument in its byte-encoded representation. - * - * @return the first key argument in its byte-encoded representation or {@literal null}. - */ - public ByteBuffer getFirstEncodedKey() { - - if (firstKey == null) { - return null; - } - - if (firstEncodedKey == null) { - firstEncodedKey = codec.encodeKey(firstKey); - } - - return firstEncodedKey.duplicate(); - } - - /** - * Encode the {@link CommandArgs} and write the arguments to the {@link ByteBuf}. - * - * @param buf the target buffer. - */ - public void encode(ByteBuf buf) { - - for (SingularArgument singularArgument : singularArguments) { - singularArgument.encode(buf); - } - } - - /** - * Single argument wrapper that can be encoded. - */ - static abstract class SingularArgument { - - /** - * Encode the argument and write it to the {@code buffer}. 
- * - * @param buffer - */ - abstract void encode(ByteBuf buffer); - } - - static class BytesArgument extends SingularArgument { - - final byte[] val; - - private BytesArgument(byte[] val) { - this.val = val; - } - - static BytesArgument of(byte[] val) { - return new BytesArgument(val); - } - - @Override - void encode(ByteBuf buffer) { - writeBytes(buffer, val); - } - - static void writeBytes(ByteBuf buffer, byte[] value) { - - buffer.writeByte('$'); - - IntegerArgument.writeInteger(buffer, value.length); - buffer.writeBytes(CRLF); - - buffer.writeBytes(value); - buffer.writeBytes(CRLF); - } - } - - static class ByteBufferArgument { - - static void writeByteBuffer(ByteBuf target, ByteBuffer value) { - - target.writeByte('$'); - - IntegerArgument.writeInteger(target, value.remaining()); - target.writeBytes(CRLF); - - target.writeBytes(value); - target.writeBytes(CRLF); - } - - static void writeByteBuf(ByteBuf target, ByteBuf value) { - - target.writeByte('$'); - - IntegerArgument.writeInteger(target, value.readableBytes()); - target.writeBytes(CRLF); - - target.writeBytes(value); - target.writeBytes(CRLF); - } - } - - static class IntegerArgument extends SingularArgument { - - final long val; - - private IntegerArgument(long val) { - this.val = val; - } - - static IntegerArgument of(long val) { - - if (val >= 0 && val < IntegerCache.cache.length) { - return IntegerCache.cache[(int) val]; - } - - return new IntegerArgument(val); - } - - @Override - void encode(ByteBuf target) { - StringArgument.writeString(target, Long.toString(val)); - } - - static void writeInteger(ByteBuf target, long value) { - - if (value < 10) { - target.writeByte((byte) ('0' + value)); - return; - } - - String asString = Long.toString(value); - - for (int i = 0; i < asString.length(); i++) { - target.writeByte((byte) asString.charAt(i)); - } - } - } - - static class IntegerCache { - - final static IntegerArgument cache[]; - - static { - int high = Integer.getInteger("biz.paluch.redis.CommandArgs.IntegerCache", 128); - cache = new IntegerArgument[high]; - for (int i = 0; i < high; i++) { - cache[i] = new IntegerArgument(i); - } - } - } - - static class DoubleArgument extends SingularArgument { - - final double val; - - private DoubleArgument(double val) { - this.val = val; - } - - static DoubleArgument of(double val) { - return new DoubleArgument(val); - } - - @Override - void encode(ByteBuf target) { - StringArgument.writeString(target, Double.toString(val)); - } - } - - static class StringArgument extends SingularArgument { - - final String val; - - private StringArgument(String val) { - this.val = val; - } - - static StringArgument of(String val) { - return new StringArgument(val); - } - - @Override - void encode(ByteBuf target) { - writeString(target, val); - } - - static void writeString(ByteBuf target, String value) { - - target.writeByte('$'); - - IntegerArgument.writeInteger(target, value.length()); - target.writeBytes(CRLF); - - for (int i = 0; i < value.length(); i++) { - target.writeByte((byte) value.charAt(i)); - } - target.writeBytes(CRLF); - } - } - - static class KeyArgument extends SingularArgument { - - final K key; - final RedisCodec codec; - - private KeyArgument(K key, RedisCodec codec) { - this.key = key; - this.codec = codec; - } - - static KeyArgument of(K key, RedisCodec codec) { - return new KeyArgument<>(key, codec); - } - - @Override - void encode(ByteBuf target) { - - if (codec == ExperimentalByteArrayCodec.INSTANCE) { - ((ExperimentalByteArrayCodec) codec).encodeKey(target, (byte[]) key); - 
return; - } - - if (codec instanceof ToByteBufEncoder) { - - ToByteBufEncoder toByteBufEncoder = (ToByteBufEncoder) codec; - ByteBuf temporaryBuffer = target.alloc().buffer(toByteBufEncoder.estimateSize(key)); - toByteBufEncoder.encodeKey(key, temporaryBuffer); - - ByteBufferArgument.writeByteBuf(target, temporaryBuffer); - temporaryBuffer.release(); - - return; - } - - ByteBufferArgument.writeByteBuffer(target, codec.encodeKey(key)); - } - } - - static class ValueArgument extends SingularArgument { - - final V val; - final RedisCodec codec; - - private ValueArgument(V val, RedisCodec codec) { - this.val = val; - this.codec = codec; - } - - static ValueArgument of(V val, RedisCodec codec) { - return new ValueArgument<>(val, codec); - } - - @Override - void encode(ByteBuf target) { - - if (codec == ExperimentalByteArrayCodec.INSTANCE) { - ((ExperimentalByteArrayCodec) codec).encodeValue(target, (byte[]) val); - return; - } - - if (codec instanceof ToByteBufEncoder) { - - ToByteBufEncoder toByteBufEncoder = (ToByteBufEncoder) codec; - ByteBuf temporaryBuffer = target.alloc().buffer(toByteBufEncoder.estimateSize(val)); - toByteBufEncoder.encodeValue(val, temporaryBuffer); - - ByteBufferArgument.writeByteBuf(target, temporaryBuffer); - temporaryBuffer.release(); - - return; - } - - ByteBufferArgument.writeByteBuffer(target, codec.encodeValue(val)); - } - } - - /** - * This codec writes directly {@code byte[]} to the target buffer without wrapping it in a {@link ByteBuffer} to reduce GC - * pressure. - */ - public final static class ExperimentalByteArrayCodec extends ByteArrayCodec { - - public final static ExperimentalByteArrayCodec INSTANCE = new ExperimentalByteArrayCodec(); - - private ExperimentalByteArrayCodec() { - - } - - public void encodeKey(ByteBuf target, byte[] key) { - - target.writeByte('$'); - - if (key == null) { - target.writeBytes("0\r\n\r\n".getBytes()); - return; - } - - IntegerArgument.writeInteger(target, key.length); - target.writeBytes(CRLF); - - target.writeBytes(key); - target.writeBytes(CRLF); - } - - public void encodeValue(ByteBuf target, byte[] value) { - encodeKey(target, value); - } - } -} diff --git a/src/main/java/com/lambdaworks/redis/protocol/CommandEncoder.java b/src/main/java/com/lambdaworks/redis/protocol/CommandEncoder.java deleted file mode 100644 index 0b06635aaa..0000000000 --- a/src/main/java/com/lambdaworks/redis/protocol/CommandEncoder.java +++ /dev/null @@ -1,108 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis.protocol; - -import java.nio.charset.Charset; -import java.util.Collection; - -import io.netty.buffer.ByteBuf; -import io.netty.channel.Channel; -import io.netty.channel.ChannelHandler; -import io.netty.channel.ChannelHandlerContext; -import io.netty.handler.codec.EncoderException; -import io.netty.handler.codec.MessageToByteEncoder; -import io.netty.util.internal.logging.InternalLogger; -import io.netty.util.internal.logging.InternalLoggerFactory; - -/** - * A netty {@link ChannelHandler} responsible for encoding commands. - * - * @author Mark Paluch - */ -@ChannelHandler.Sharable -public class CommandEncoder extends MessageToByteEncoder { - - private static final InternalLogger logger = InternalLoggerFactory.getInstance(CommandEncoder.class); - - /** - * If TRACE level logging has been enabled at startup. - */ - private final boolean traceEnabled; - - /** - * If DEBUG level logging has been enabled at startup. 
- */ - private final boolean debugEnabled; - - public CommandEncoder() { - this(true); - } - - public CommandEncoder(boolean preferDirect) { - super(preferDirect); - traceEnabled = logger.isTraceEnabled(); - debugEnabled = logger.isDebugEnabled(); - } - - @Override - protected ByteBuf allocateBuffer(ChannelHandlerContext ctx, Object msg, boolean preferDirect) throws Exception { - - if (msg instanceof Collection) { - - if (preferDirect) { - return ctx.alloc().ioBuffer(((Collection) msg).size() * 16); - } else { - return ctx.alloc().heapBuffer(((Collection) msg).size() * 16); - } - } - - if (preferDirect) { - return ctx.alloc().ioBuffer(); - } else { - return ctx.alloc().heapBuffer(); - } - } - - @Override - @SuppressWarnings("unchecked") - protected void encode(ChannelHandlerContext ctx, Object msg, ByteBuf out) throws Exception { - - if (msg instanceof RedisCommand) { - RedisCommand command = (RedisCommand) msg; - encode(ctx, out, command); - } - - if (msg instanceof Collection) { - Collection> commands = (Collection>) msg; - for (RedisCommand command : commands) { - encode(ctx, out, command); - } - } - } - - private void encode(ChannelHandlerContext ctx, ByteBuf out, RedisCommand command) { - - try { - out.markWriterIndex(); - command.encode(out); - } catch (RuntimeException e) { - out.resetWriterIndex(); - command.completeExceptionally(new EncoderException( - "Cannot encode command. Please close the connection as the connection state may be out of sync.", - e)); - } - - if (debugEnabled) { - logger.debug("{} writing command {}", logPrefix(ctx.channel()), command); - if (traceEnabled) { - logger.trace("{} Sent: {}", logPrefix(ctx.channel()), out.toString(Charset.defaultCharset()).trim()); - } - } - } - - private String logPrefix(Channel channel) { - StringBuffer buffer = new StringBuffer(64); - buffer.append('[').append(ChannelLogDescriptor.logDescriptor(channel)).append(']'); - return buffer.toString(); - } -} diff --git a/src/main/java/com/lambdaworks/redis/protocol/CommandHandler.java b/src/main/java/com/lambdaworks/redis/protocol/CommandHandler.java deleted file mode 100644 index f20c72e57b..0000000000 --- a/src/main/java/com/lambdaworks/redis/protocol/CommandHandler.java +++ /dev/null @@ -1,1027 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis.protocol; - -import java.io.IOException; -import java.net.SocketAddress; -import java.nio.channels.ClosedChannelException; -import java.nio.charset.Charset; -import java.util.*; -import java.util.concurrent.atomic.AtomicLong; - -import com.lambdaworks.redis.*; -import com.lambdaworks.redis.internal.LettuceAssert; -import com.lambdaworks.redis.internal.LettuceFactories; -import com.lambdaworks.redis.internal.LettuceLists; -import com.lambdaworks.redis.internal.LettuceSets; -import com.lambdaworks.redis.resource.ClientResources; - -import io.netty.buffer.ByteBuf; -import io.netty.buffer.ByteBufAllocator; -import io.netty.channel.*; -import io.netty.channel.local.LocalAddress; -import io.netty.util.concurrent.Future; -import io.netty.util.concurrent.GenericFutureListener; -import io.netty.util.internal.logging.InternalLogLevel; -import io.netty.util.internal.logging.InternalLogger; -import io.netty.util.internal.logging.InternalLoggerFactory; - -/** - * A netty {@link ChannelHandler} responsible for writing redis commands and reading responses from the server. - * - * @param Key type. - * @param Value type. 
- * @author Will Glozer - * @author Mark Paluch - */ -@ChannelHandler.Sharable -public class CommandHandler extends ChannelDuplexHandler implements RedisChannelWriter { - - private static final InternalLogger logger = InternalLoggerFactory.getInstance(CommandHandler.class); - private static final WriteLogListener WRITE_LOG_LISTENER = new WriteLogListener(); - private static final AtomicLong CHANNEL_COUNTER = new AtomicLong(); - - /** - * When we encounter an unexpected IOException we look for these {@link Throwable#getMessage() messages} (because we have no - * better way to distinguish) and log them at DEBUG rather than WARN, since they are generally caused by unclean client - * disconnects rather than an actual problem. - */ - private static final Set SUPPRESS_IO_EXCEPTION_MESSAGES = LettuceSets.unmodifiableSet("Connection reset by peer", - "Broken pipe", "Connection timed out"); - - protected final long commandHandlerId = CHANNEL_COUNTER.incrementAndGet(); - protected final ClientOptions clientOptions; - protected final ClientResources clientResources; - protected final Queue> queue; - protected final AtomicLong writers = new AtomicLong(); - protected final Object stateLock = new Object(); - - // all access to the commandBuffer is synchronized - protected final Deque> commandBuffer = LettuceFactories.newConcurrentQueue(); - protected final Deque> transportBuffer = LettuceFactories.newConcurrentQueue(); - protected final ByteBuf buffer = ByteBufAllocator.DEFAULT.directBuffer(8192 * 8); - protected final RedisStateMachine rsm = new RedisStateMachine(); - protected volatile Channel channel; - private volatile ConnectionWatchdog connectionWatchdog; - - // If TRACE level logging has been enabled at startup. - private final boolean traceEnabled; - - // If DEBUG level logging has been enabled at startup. - private final boolean debugEnabled; - private final Reliability reliability; - - private volatile LifecycleState lifecycleState = LifecycleState.NOT_CONNECTED; - private Thread exclusiveLockOwner; - private RedisChannelHandler redisChannelHandler; - private Throwable connectionError; - private String logPrefix; - private boolean autoFlushCommands = true; - - /** - * Initialize a new instance that handles commands from the supplied queue. - * - * @param clientOptions client options for this connection, must not be {@literal null} - * @param clientResources client resources for this connection, must not be {@literal null} - * @param queue The command queue, must not be {@literal null} - */ - public CommandHandler(ClientOptions clientOptions, ClientResources clientResources, Queue> queue) { - - LettuceAssert.notNull(clientOptions, "ClientOptions must not be null"); - LettuceAssert.notNull(clientResources, "ClientResources must not be null"); - LettuceAssert.notNull(queue, "Queue must not be null"); - - this.clientOptions = clientOptions; - this.clientResources = clientResources; - this.queue = queue; - this.traceEnabled = logger.isTraceEnabled(); - this.debugEnabled = logger.isDebugEnabled(); - this.reliability = clientOptions.isAutoReconnect() ? 
Reliability.AT_LEAST_ONCE : Reliability.AT_MOST_ONCE; - } - - /** - * @see io.netty.channel.ChannelInboundHandlerAdapter#channelRegistered(io.netty.channel.ChannelHandlerContext) - */ - @Override - public void channelRegistered(ChannelHandlerContext ctx) throws Exception { - - if (isClosed()) { - logger.debug("{} Dropping register for a closed channel", logPrefix()); - } - - synchronized (stateLock) { - channel = ctx.channel(); - } - - if (debugEnabled) { - logPrefix = null; - logger.debug("{} channelRegistered()", logPrefix()); - } - - setState(LifecycleState.REGISTERED); - - buffer.clear(); - ctx.fireChannelRegistered(); - } - - @Override - public void channelUnregistered(ChannelHandlerContext ctx) throws Exception { - - if (debugEnabled) { - logger.debug("{} channelUnregistered()", logPrefix()); - } - - if (channel != null && ctx.channel() != channel) { - logger.debug("{} My channel and ctx.channel mismatch. Propagating event to other listeners", logPrefix()); - ctx.fireChannelUnregistered(); - return; - } - - if (isClosed()) { - cancelCommands("Connection closed"); - } - - synchronized (stateLock) { - channel = null; - } - - ctx.fireChannelUnregistered(); - } - - /** - * @see io.netty.channel.ChannelInboundHandlerAdapter#channelRead(io.netty.channel.ChannelHandlerContext, java.lang.Object) - */ - @Override - public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception { - - ByteBuf input = (ByteBuf) msg; - - if (!input.isReadable() || input.refCnt() == 0) { - logger.warn("{} Input not readable {}, {}", logPrefix(), input.isReadable(), input.refCnt()); - return; - } - - if (debugEnabled) { - logger.debug("{} Received: {} bytes, {} queued commands", logPrefix(), input.readableBytes(), queue.size()); - } - - try { - if (buffer.refCnt() < 1) { - logger.warn("{} Ignoring received data for closed or abandoned connection", logPrefix()); - return; - } - - if (debugEnabled && ctx.channel() != channel) { - logger.debug("{} Ignoring data for a non-registered channel {}", logPrefix(), ctx.channel()); - return; - } - - if (traceEnabled) { - logger.trace("{} Buffer: {}", logPrefix(), input.toString(Charset.defaultCharset()).trim()); - } - - buffer.writeBytes(input); - - decode(ctx, buffer); - } finally { - input.release(); - } - } - - protected void decode(ChannelHandlerContext ctx, ByteBuf buffer) throws InterruptedException { - - while (!queue.isEmpty()) { - - RedisCommand command = queue.peek(); - if (debugEnabled) { - logger.debug("{} Queue contains: {} commands", logPrefix(), queue.size()); - } - - WithLatency withLatency = getWithLatency(command); - - if (!rsm.decode(buffer, command, command.getOutput())) { - return; - } - - recordLatency(withLatency, command.getType()); - - queue.poll(); - - try { - command.complete(); - } catch (Exception e) { - logger.warn("{} Unexpected exception during command completion: {}", logPrefix, e.toString(), e); - } - - if (buffer.refCnt() != 0) { - buffer.discardReadBytes(); - } - } - } - - private WithLatency getWithLatency(RedisCommand command) { - WithLatency withLatency = null; - - if (clientResources.commandLatencyCollector().isEnabled()) { - RedisCommand unwrappedCommand = CommandWrapper.unwrap(command); - if (unwrappedCommand instanceof WithLatency) { - withLatency = (WithLatency) unwrappedCommand; - if (withLatency.getFirstResponse() == -1) { - withLatency.firstResponse(nanoTime()); - } - } - } - return withLatency; - } - - private void recordLatency(WithLatency withLatency, ProtocolKeyword commandType) { - - if (withLatency != null 
&& clientResources.commandLatencyCollector().isEnabled() && channel != null - && remote() != null) { - - long firstResponseLatency = nanoTime() - withLatency.getFirstResponse(); - long completionLatency = nanoTime() - withLatency.getSent(); - - clientResources.commandLatencyCollector().recordCommandLatency(local(), remote(), commandType, firstResponseLatency, - completionLatency); - } - } - - private SocketAddress remote() { - return channel.remoteAddress(); - } - - private SocketAddress local() { - if (channel.localAddress() != null) { - return channel.localAddress(); - } - return LocalAddress.ANY; - } - - @Override - public > C write(C command) { - - LettuceAssert.notNull(command, "Command must not be null"); - - try { - incrementWriters(); - - if (lifecycleState == LifecycleState.CLOSED) { - throw new RedisException("Connection is closed"); - } - - if (clientOptions.getRequestQueueSize() != Integer.MAX_VALUE - && commandBuffer.size() + queue.size() >= clientOptions.getRequestQueueSize()) { - throw new RedisException("Request queue size exceeded: " + clientOptions.getRequestQueueSize() - + ". Commands are not accepted until the queue size drops."); - } - - if ((channel == null || !isConnected()) && isRejectCommand()) { - throw new RedisException("Currently not connected. Commands are rejected."); - } - - /** - * This lock causes safety for connection activation and somehow netty gets more stable and predictable performance - * than without a lock and all threads are hammering towards writeAndFlush. - */ - Channel channel = this.channel; - if (autoFlushCommands) { - - if (channel != null && isConnected() && channel.isActive()) { - writeToChannel(command, channel); - } else { - writeToBuffer(command); - } - - } else { - bufferCommand(command); - } - } finally { - decrementWriters(); - if (debugEnabled) { - logger.debug("{} write() done", logPrefix()); - } - } - - return command; - } - - protected , T> void writeToBuffer(C command) { - - if (commandBuffer.contains(command) || queue.contains(command)) { - return; - } - - if (connectionError != null) { - if (debugEnabled) { - logger.debug("{} writeToBuffer() Completing command {} due to connection error", logPrefix(), command); - } - command.completeExceptionally(connectionError); - - return; - } - - bufferCommand(command); - } - - protected , T> void writeToChannel(C command, Channel channel) { - - if (reliability == Reliability.AT_MOST_ONCE) { - // cancel on exceptions and remove from queue, because there is no housekeeping - writeAndFlush(command).addListener(new AtMostOnceWriteListener(command, queue)); - } - - if (reliability == Reliability.AT_LEAST_ONCE) { - // commands are ok to stay within the queue, reconnect will retrigger them - writeAndFlush(command).addListener(WRITE_LOG_LISTENER); - } - } - - protected void bufferCommand(RedisCommand command) { - - if (debugEnabled) { - logger.debug("{} write() buffering command {}", logPrefix(), command); - } - - commandBuffer.add(command); - } - - /** - * Wait for stateLock and increment writers. Will wait if stateLock is locked and if writer counter is negative. - */ - protected void incrementWriters() { - - if (exclusiveLockOwner == Thread.currentThread()) { - return; - } - - synchronized (stateLock) { - for (;;) { - - if (writers.get() >= 0) { - writers.incrementAndGet(); - return; - } - } - } - } - - /** - * Decrement writers without any wait. 
- */ - protected void decrementWriters() { - - if (exclusiveLockOwner == Thread.currentThread()) { - return; - } - - writers.decrementAndGet(); - } - - /** - * Wait for stateLock and no writers. Must be used in an outer {@code synchronized} block to prevent interleaving with other - * methods using writers. Sets writers to a negative value to create a lock for {@link #incrementWriters()}. - */ - protected void lockWritersExclusive() { - - if (exclusiveLockOwner == Thread.currentThread()) { - writers.decrementAndGet(); - return; - } - - synchronized (stateLock) { - for (;;) { - - if (writers.compareAndSet(0, -1)) { - exclusiveLockOwner = Thread.currentThread(); - return; - } - } - } - } - - /** - * Unlock writers. - */ - protected void unlockWritersExclusive() { - - if (exclusiveLockOwner == Thread.currentThread()) { - if (writers.incrementAndGet() == 0) { - exclusiveLockOwner = null; - } - } - } - - private boolean isRejectCommand() { - - if (clientOptions == null) { - return false; - } - - switch (clientOptions.getDisconnectedBehavior()) { - case REJECT_COMMANDS: - return true; - - case ACCEPT_COMMANDS: - return false; - - default: - case DEFAULT: - if (!clientOptions.isAutoReconnect()) { - return true; - } - - return false; - } - } - - boolean isConnected() { - return lifecycleState.ordinal() >= LifecycleState.CONNECTED.ordinal() - && lifecycleState.ordinal() < LifecycleState.DISCONNECTED.ordinal(); - } - - @Override - @SuppressWarnings({ "rawtypes", "unchecked" }) - public void flushCommands() { - - if (debugEnabled) { - logger.debug("{} flushCommands()", logPrefix()); - } - - if (channel != null && isConnected()) { - List> queuedCommands; - - synchronized (stateLock) { - try { - lockWritersExclusive(); - - if (commandBuffer.isEmpty()) { - return; - } - - queuedCommands = new ArrayList<>(commandBuffer.size()); - RedisCommand cmd; - while ((cmd = commandBuffer.poll()) != null) { - queuedCommands.add(cmd); - } - } finally { - unlockWritersExclusive(); - } - } - - if (debugEnabled) { - logger.debug("{} flushCommands() Flushing {} commands", logPrefix(), queuedCommands.size()); - } - - if (reliability == Reliability.AT_MOST_ONCE) { - // cancel on exceptions and remove from queue, because there is no housekeeping - writeAndFlush(queuedCommands).addListener(new AtMostOnceWriteListener(queuedCommands, this.queue)); - } - - if (reliability == Reliability.AT_LEAST_ONCE) { - // commands are ok to stay within the queue, reconnect will retrigger them - writeAndFlush(queuedCommands).addListener(WRITE_LOG_LISTENER); - } - } - } - - private > ChannelFuture writeAndFlush(List commands) { - - if (debugEnabled) { - logger.debug("{} write() writeAndFlush commands {}", logPrefix(), commands); - } - - transportBuffer.addAll(commands); - return channel.writeAndFlush(commands); - } - - private > ChannelFuture writeAndFlush(C command) { - - if (debugEnabled) { - logger.debug("{} write() writeAndFlush command {}", logPrefix(), command); - } - - transportBuffer.add(command); - return channel.writeAndFlush(command); - } - - /** - * @see io.netty.channel.ChannelDuplexHandler#write(io.netty.channel.ChannelHandlerContext, java.lang.Object, - * io.netty.channel.ChannelPromise) - */ - @Override - @SuppressWarnings("unchecked") - public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception { - - if (debugEnabled) { - logger.debug("{} write(ctx, {}, promise)", logPrefix(), msg); - } - - if (msg instanceof RedisCommand) { - writeSingleCommand(ctx, (RedisCommand) msg, promise); - 
return; - } - - if (msg instanceof Collection) { - writeBatch(ctx, (Collection>) msg, promise); - } - } - - private void writeSingleCommand(ChannelHandlerContext ctx, RedisCommand command, ChannelPromise promise) - throws Exception { - - if (command.isCancelled()) { - transportBuffer.remove(command); - return; - } - - queueCommand(command, promise); - ctx.write(command, promise); - } - - private void writeBatch(ChannelHandlerContext ctx, Collection> msg, ChannelPromise promise) - throws Exception { - - Collection> commands = msg; - Collection> toWrite = commands; - - boolean cancelledCommands = false; - for (RedisCommand command : commands) { - if (command.isCancelled()) { - cancelledCommands = true; - break; - } - } - - if (cancelledCommands) { - - toWrite = new ArrayList<>(commands.size()); - - for (RedisCommand command : commands) { - - if (command.isCancelled()) { - transportBuffer.remove(command); - continue; - } - - toWrite.add(command); - queueCommand(command, promise); - } - } else { - - for (RedisCommand command : toWrite) { - queueCommand(command, promise); - } - } - - if (!toWrite.isEmpty()) { - ctx.write(toWrite, promise); - } - } - - private void queueCommand(RedisCommand command, ChannelPromise promise) throws Exception { - - try { - - if (command.getOutput() == null) { - // fire&forget commands are excluded from metrics - command.complete(); - } else { - - queue.add(command); - - if (clientResources.commandLatencyCollector().isEnabled()) { - RedisCommand unwrappedCommand = CommandWrapper.unwrap(command); - if (unwrappedCommand instanceof WithLatency) { - WithLatency withLatency = (WithLatency) unwrappedCommand; - withLatency.firstResponse(-1); - withLatency.sent(nanoTime()); - } - } - } - transportBuffer.remove(command); - } catch (Exception e) { - command.completeExceptionally(e); - promise.setFailure(e); - throw e; - } - } - - private long nanoTime() { - return System.nanoTime(); - } - - @Override - public void channelActive(ChannelHandlerContext ctx) throws Exception { - - logPrefix = null; - connectionWatchdog = null; - - if (debugEnabled) { - logger.debug("{} channelActive()", logPrefix()); - } - - if (ctx != null && ctx.pipeline() != null) { - - Map map = ctx.pipeline().toMap(); - - for (ChannelHandler handler : map.values()) { - if (handler instanceof ConnectionWatchdog) { - connectionWatchdog = (ConnectionWatchdog) handler; - } - } - } - - synchronized (stateLock) { - try { - lockWritersExclusive(); - setState(LifecycleState.CONNECTED); - - try { - // Move queued commands to buffer before issuing any commands because of connection activation. - // That's necessary to prepend queued commands first as some commands might get into the queue - // after the connection was disconnected. 
They need to be prepended to the command buffer - moveQueuedCommandsToCommandBuffer(); - activateCommandHandlerAndExecuteBufferedCommands(ctx); - } catch (Exception e) { - - if (debugEnabled) { - logger.debug("{} channelActive() ran into an exception", logPrefix()); - } - - if (clientOptions.isCancelCommandsOnReconnectFailure()) { - reset(); - } - - throw e; - } - } finally { - unlockWritersExclusive(); - } - } - - super.channelActive(ctx); - if (channel != null) { - channel.eventLoop().submit(new Runnable() { - @Override - public void run() { - channel.pipeline().fireUserEventTriggered(new ConnectionEvents.Activated()); - } - }); - } - - if (debugEnabled) { - logger.debug("{} channelActive() done", logPrefix()); - } - } - - private void moveQueuedCommandsToCommandBuffer() { - - List> queuedCommands = drainCommands(queue); - Collections.reverse(queuedCommands); - - List> transportBufferCommands = drainCommands(transportBuffer); - Collections.reverse(transportBufferCommands); - - // Queued commands first because they reached the queue before commands that are still in the transport buffer. - queuedCommands.addAll(transportBufferCommands); - - logger.debug("{} moveQueuedCommandsToCommandBuffer {} command(s) added to buffer", logPrefix(), queuedCommands.size()); - for (RedisCommand command : queuedCommands) { - commandBuffer.addFirst(command); - } - } - - private List> drainCommands(Collection> source) { - - List> target = new ArrayList<>(source.size()); - target.addAll(source); - source.removeAll(target); - return target; - } - - protected void activateCommandHandlerAndExecuteBufferedCommands(ChannelHandlerContext ctx) { - - connectionError = null; - - if (debugEnabled) { - logger.debug("{} activateCommandHandlerAndExecuteBufferedCommands {} command(s) buffered", logPrefix(), - commandBuffer.size()); - } - - channel = ctx.channel(); - - if (redisChannelHandler != null) { - - if (debugEnabled) { - logger.debug("{} activating channel handler", logPrefix()); - } - - setState(LifecycleState.ACTIVATING); - redisChannelHandler.activated(); - } - setState(LifecycleState.ACTIVE); - - flushCommands(); - } - - /** - * @see io.netty.channel.ChannelInboundHandlerAdapter#channelInactive(io.netty.channel.ChannelHandlerContext) - */ - @Override - public void channelInactive(ChannelHandlerContext ctx) throws Exception { - - if (debugEnabled) { - logger.debug("{} channelInactive()", logPrefix()); - } - - if (channel != null && ctx.channel() != channel) { - logger.debug("{} My channel and ctx.channel mismatch. Propagating event to other listeners.", logPrefix()); - super.channelInactive(ctx); - return; - } - - synchronized (stateLock) { - try { - lockWritersExclusive(); - setState(LifecycleState.DISCONNECTED); - - if (redisChannelHandler != null) { - - if (debugEnabled) { - logger.debug("{} deactivating channel handler", logPrefix()); - } - - setState(LifecycleState.DEACTIVATING); - redisChannelHandler.deactivated(); - } - - setState(LifecycleState.DEACTIVATED); - - // Shift all commands to the commandBuffer so the queue is empty. 
- // Allows to run onConnect commands before executing buffered commands - commandBuffer.addAll(queue); - queue.removeAll(commandBuffer); - - } finally { - unlockWritersExclusive(); - } - } - - rsm.reset(); - - if (debugEnabled) { - logger.debug("{} channelInactive() done", logPrefix()); - } - super.channelInactive(ctx); - } - - protected void setState(LifecycleState lifecycleState) { - - if (this.lifecycleState != LifecycleState.CLOSED) { - synchronized (stateLock) { - this.lifecycleState = lifecycleState; - } - } - } - - protected LifecycleState getState() { - return lifecycleState; - } - - public boolean isClosed() { - return lifecycleState == LifecycleState.CLOSED; - } - - private void cancelCommands(String message) { - - List> toCancel; - synchronized (stateLock) { - try { - lockWritersExclusive(); - toCancel = prepareReset(); - } finally { - unlockWritersExclusive(); - } - } - - for (RedisCommand cmd : toCancel) { - if (cmd.getOutput() != null) { - cmd.getOutput().setError(message); - } - cmd.cancel(); - } - } - - protected List> prepareReset() { - - int size = 0; - if (queue != null) { - size += queue.size(); - } - - if (commandBuffer != null) { - size += commandBuffer.size(); - } - - List> toCancel = new ArrayList<>(size); - - if (queue != null) { - toCancel.addAll(queue); - queue.clear(); - } - - if (commandBuffer != null) { - toCancel.addAll(commandBuffer); - commandBuffer.clear(); - } - return toCancel; - } - - @Override - public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception { - - InternalLogLevel logLevel = InternalLogLevel.WARN; - - if (!queue.isEmpty()) { - RedisCommand command = queue.poll(); - if (debugEnabled) { - logger.debug("{} Storing exception in {}", logPrefix(), command); - } - logLevel = InternalLogLevel.DEBUG; - command.completeExceptionally(cause); - } - - if (channel == null || !channel.isActive() || !isConnected()) { - if (debugEnabled) { - logger.debug("{} Storing exception in connectionError", logPrefix()); - } - logLevel = InternalLogLevel.DEBUG; - connectionError = cause; - } - - if (cause instanceof IOException && logLevel.ordinal() > InternalLogLevel.INFO.ordinal()) { - logLevel = InternalLogLevel.INFO; - if (SUPPRESS_IO_EXCEPTION_MESSAGES.contains(cause.getMessage())) { - logLevel = InternalLogLevel.DEBUG; - } - } - - logger.log(logLevel, "{} Unexpected exception during request: {}", logPrefix, cause.toString(), cause); - } - - /** - * Close the connection. - */ - @Override - public void close() { - - if (debugEnabled) { - logger.debug("{} close()", logPrefix()); - } - - if (isClosed()) { - return; - } - - setState(LifecycleState.CLOSED); - Channel currentChannel = this.channel; - if (currentChannel != null) { - currentChannel.pipeline().fireUserEventTriggered(new ConnectionEvents.PrepareClose()); - currentChannel.pipeline().fireUserEventTriggered(new ConnectionEvents.Close()); - - ChannelFuture close = currentChannel.pipeline().close(); - if (currentChannel.isOpen()) { - close.syncUninterruptibly(); - } - } else if (connectionWatchdog != null) { - connectionWatchdog.prepareClose(new ConnectionEvents.PrepareClose()); - } - - rsm.close(); - - if (buffer.refCnt() > 0) { - buffer.release(); - } - } - - /** - * Reset the writer state. Queued commands will be canceled and the internal state will be reset. This is useful when the - * internal state machine gets out of sync with the connection. 
- */ - @Override - public void reset() { - - if (debugEnabled) { - logger.debug("{} reset()", logPrefix()); - } - - cancelCommands("Reset"); - - rsm.reset(); - - if (buffer.refCnt() > 0) { - buffer.clear(); - } - } - - /** - * Reset the command-handler to the initial not-connected state. - */ - public void initialState() { - - setState(LifecycleState.NOT_CONNECTED); - queue.clear(); - commandBuffer.clear(); - - Channel currentChannel = this.channel; - if (currentChannel != null) { - currentChannel.pipeline().fireUserEventTriggered(new ConnectionEvents.PrepareClose()); - currentChannel.pipeline().fireUserEventTriggered(new ConnectionEvents.Close()); - currentChannel.pipeline().close(); - } - } - - @Override - public void setRedisChannelHandler(RedisChannelHandler redisChannelHandler) { - this.redisChannelHandler = redisChannelHandler; - } - - @Override - public void setAutoFlushCommands(boolean autoFlush) { - synchronized (stateLock) { - this.autoFlushCommands = autoFlush; - } - } - - protected String logPrefix() { - - if (logPrefix != null) { - return logPrefix; - } - - StringBuffer buffer = new StringBuffer(64); - buffer.append('[').append("chid=0x").append(Long.toHexString(commandHandlerId)).append(", ") - .append(ChannelLogDescriptor.logDescriptor(channel)).append(']'); - return logPrefix = buffer.toString(); - } - - public enum LifecycleState { - NOT_CONNECTED, REGISTERED, CONNECTED, ACTIVATING, ACTIVE, DISCONNECTED, DEACTIVATING, DEACTIVATED, CLOSED, - } - - private enum Reliability { - AT_MOST_ONCE, AT_LEAST_ONCE; - } - - private static class AtMostOnceWriteListener implements ChannelFutureListener { - - private final Collection> sentCommands; - private final Queue queue; - - @SuppressWarnings({ "unchecked", "rawtypes" }) - public AtMostOnceWriteListener(RedisCommand sentCommand, Queue queue) { - this((Collection) LettuceLists.newList(sentCommand), queue); - } - - public AtMostOnceWriteListener(Collection> sentCommand, Queue queue) { - this.sentCommands = sentCommand; - this.queue = queue; - } - - @Override - public void operationComplete(ChannelFuture future) throws Exception { - future.await(); - if (future.cause() != null) { - - for (RedisCommand sentCommand : sentCommands) { - sentCommand.completeExceptionally(future.cause()); - } - - queue.removeAll(sentCommands); - } - } - } - - /** - * A generic future listener which logs unsuccessful writes. - */ - static class WriteLogListener implements GenericFutureListener> { - - @Override - public void operationComplete(Future future) throws Exception { - Throwable cause = future.cause(); - if (!future.isSuccess() && !(cause instanceof ClosedChannelException)) { - - String message = "Unexpected exception during request: {}"; - InternalLogLevel logLevel = InternalLogLevel.WARN; - - if (cause instanceof IOException && SUPPRESS_IO_EXCEPTION_MESSAGES.contains(cause.getMessage())) { - logLevel = InternalLogLevel.DEBUG; - } - - logger.log(logLevel, message, cause.toString(), cause); - } - } - } -} diff --git a/src/main/java/com/lambdaworks/redis/protocol/CommandKeyword.java b/src/main/java/com/lambdaworks/redis/protocol/CommandKeyword.java deleted file mode 100644 index 130f85bbeb..0000000000 --- a/src/main/java/com/lambdaworks/redis/protocol/CommandKeyword.java +++ /dev/null @@ -1,31 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis.protocol; - -/** - * Keyword modifiers for redis commands. 
- * - * @author Will Glozer - */ -public enum CommandKeyword implements ProtocolKeyword { - ADDR, ADDSLOTS, AFTER, AGGREGATE, ALPHA, AND, ASK, ASC, ASYNC, BEFORE, BUMPEPOCH, BY, CHANNELS, COPY, COUNT, COUNTKEYSINSLOT, DELSLOTS, DESC, SOFT, HARD, ENCODING, - - FAILOVER, FORGET, FLUSH, FORCE, FLUSHSLOTS, GETNAME, GETKEYSINSLOT, HTSTATS, ID, IDLETIME, KILL, KEYSLOT, LEN, LIMIT, LIST, LOAD, MATCH, - - MAX, MEET, MIN, MOVED, NO, NODE, NODES, NOSAVE, NOT, NUMSUB, NUMPAT, ONE, OR, PAUSE, REFCOUNT, REMOVE, RELOAD, REPLACE, REPLICATE, RESET, - - RESETSTAT, RESTART, REWRITE, SAVECONFIG, SDSLEN, SETNAME, SETSLOT, SLOTS, STABLE, MIGRATING, IMPORTING, SKIPME, SLAVES, STORE, SUM, SEGFAULT, WEIGHTS, - - WITHSCORES, XOR; - - public final byte[] bytes; - - private CommandKeyword() { - bytes = name().getBytes(LettuceCharsets.ASCII); - } - - @Override - public byte[] getBytes() { - return bytes; - } -} diff --git a/src/main/java/com/lambdaworks/redis/protocol/CommandType.java b/src/main/java/com/lambdaworks/redis/protocol/CommandType.java deleted file mode 100644 index 90ffebafc0..0000000000 --- a/src/main/java/com/lambdaworks/redis/protocol/CommandType.java +++ /dev/null @@ -1,89 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis.protocol; - -/** - * Redis commands. - * - * @author Will Glozer - */ -public enum CommandType implements ProtocolKeyword { - // Connection - - AUTH, ECHO, PING, QUIT, READONLY, READWRITE, SELECT, - - // Server - - BGREWRITEAOF, BGSAVE, CLIENT, COMMAND, CONFIG, DBSIZE, DEBUG, FLUSHALL, FLUSHDB, INFO, MYID, LASTSAVE, ROLE, MONITOR, SAVE, SHUTDOWN, SLAVEOF, SLOWLOG, SYNC, - - // Keys - - DEL, DUMP, EXISTS, EXPIRE, EXPIREAT, KEYS, MIGRATE, MOVE, OBJECT, PERSIST, PEXPIRE, PEXPIREAT, PTTL, RANDOMKEY, RENAME, RENAMENX, RESTORE, TOUCH, TTL, TYPE, SCAN, UNLINK, - - // String - - APPEND, GET, GETRANGE, GETSET, MGET, MSET, MSETNX, SET, SETEX, PSETEX, SETNX, SETRANGE, STRLEN, - - // Numeric - - DECR, DECRBY, INCR, INCRBY, INCRBYFLOAT, - - // List - - BLPOP, BRPOP, BRPOPLPUSH, LINDEX, LINSERT, LLEN, LPOP, LPUSH, LPUSHX, LRANGE, LREM, LSET, LTRIM, RPOP, RPOPLPUSH, RPUSH, RPUSHX, SORT, - - // Hash - - HDEL, HEXISTS, HGET, HGETALL, HINCRBY, HINCRBYFLOAT, HKEYS, HLEN, HSTRLEN, HMGET, HMSET, HSET, HSETNX, HVALS, HSCAN, - - // Transaction - - DISCARD, EXEC, MULTI, UNWATCH, WATCH, - - // HyperLogLog - - PFADD, PFCOUNT, PFMERGE, - - // Pub/Sub - - PSUBSCRIBE, PUBLISH, PUNSUBSCRIBE, SUBSCRIBE, UNSUBSCRIBE, PUBSUB, - - // Sets - - SADD, SCARD, SDIFF, SDIFFSTORE, SINTER, SINTERSTORE, SISMEMBER, SMEMBERS, SMOVE, SPOP, SRANDMEMBER, SREM, SUNION, SUNIONSTORE, SSCAN, - - // Sorted Set - - ZADD, ZCARD, ZCOUNT, ZINCRBY, ZINTERSTORE, ZRANGE, ZRANGEBYSCORE, ZRANK, ZREM, ZREMRANGEBYRANK, ZREMRANGEBYSCORE, ZREVRANGE, ZREVRANGEBYSCORE, ZREVRANK, ZSCORE, ZUNIONSTORE, ZSCAN, ZLEXCOUNT, ZREMRANGEBYLEX, ZRANGEBYLEX, - - // Scripting - - EVAL, EVALSHA, SCRIPT, - - // Bits - - BITCOUNT, BITFIELD, BITOP, GETBIT, SETBIT, BITPOS, - - // Geo - GEOADD, GEORADIUS, GEORADIUSBYMEMBER, GEOENCODE, GEODECODE, GEOPOS, GEODIST, GEOHASH, - - // Others - TIME, WAIT, - - // SENTINEL - SENTINEL, - - // CLUSTER - ASKING, CLUSTER; - - public final byte[] bytes; - - private CommandType() { - bytes = name().getBytes(LettuceCharsets.ASCII); - } - - @Override - public byte[] getBytes() { - return bytes; - } -} diff --git a/src/main/java/com/lambdaworks/redis/protocol/CommandWrapper.java b/src/main/java/com/lambdaworks/redis/protocol/CommandWrapper.java deleted file mode 100644 index 
f3675ed03f..0000000000 --- a/src/main/java/com/lambdaworks/redis/protocol/CommandWrapper.java +++ /dev/null @@ -1,122 +0,0 @@ -package com.lambdaworks.redis.protocol; - -import java.util.ArrayList; -import java.util.List; -import java.util.function.Consumer; - -import com.lambdaworks.redis.output.CommandOutput; -import io.netty.buffer.ByteBuf; - -/** - * Wrapper for a command. - * - * @author Mark Paluch - */ -public class CommandWrapper implements RedisCommand, CompleteableCommand, DecoratedCommand { - - protected RedisCommand command; - private List> onComplete = new ArrayList<>(); - - public CommandWrapper(RedisCommand command) { - this.command = command; - } - - @Override - public CommandOutput getOutput() { - return command.getOutput(); - } - - @Override - public void complete() { - - command.complete(); - - for (Consumer consumer : onComplete) { - if (getOutput() != null) { - consumer.accept(getOutput().get()); - } else { - consumer.accept(null); - } - } - } - - @Override - public void cancel() { - command.cancel(); - } - - @Override - public CommandArgs getArgs() { - return command.getArgs(); - } - - @Override - public boolean completeExceptionally(Throwable throwable) { - return command.completeExceptionally(throwable); - } - - @Override - public ProtocolKeyword getType() { - return command.getType(); - } - - @Override - public void encode(ByteBuf buf) { - command.encode(buf); - } - - @Override - public boolean isCancelled() { - return command.isCancelled(); - } - - @Override - public void setOutput(CommandOutput output) { - command.setOutput(output); - } - - @Override - public void onComplete(Consumer action) { - onComplete.add(action); - } - - @Override - public String toString() { - final StringBuilder sb = new StringBuilder(); - sb.append(getClass().getSimpleName()); - sb.append(" [type=").append(getType()); - sb.append(", output=").append(getOutput()); - sb.append(", commandType=").append(command.getClass().getName()); - sb.append(']'); - return sb.toString(); - } - - @Override - public boolean isDone() { - return command.isDone(); - } - - @Override - public RedisCommand getDelegate() { - return command; - } - - /** - * Unwrap a wrapped command. - * - * @param wrapped - * @param - * @param - * @param - * @return - */ - public static RedisCommand unwrap(RedisCommand wrapped) { - - RedisCommand result = wrapped; - while (result instanceof DecoratedCommand) { - result = ((DecoratedCommand) result).getDelegate(); - } - - return result; - } -} diff --git a/src/main/java/com/lambdaworks/redis/protocol/CompleteableCommand.java b/src/main/java/com/lambdaworks/redis/protocol/CompleteableCommand.java deleted file mode 100644 index 3d2479c549..0000000000 --- a/src/main/java/com/lambdaworks/redis/protocol/CompleteableCommand.java +++ /dev/null @@ -1,12 +0,0 @@ -package com.lambdaworks.redis.protocol; - -import java.util.function.Consumer; - -/** - * @author Mark Paluch - */ -public interface CompleteableCommand { - - void onComplete(Consumer action); - -} diff --git a/src/main/java/com/lambdaworks/redis/protocol/ConnectionWatchdog.java b/src/main/java/com/lambdaworks/redis/protocol/ConnectionWatchdog.java deleted file mode 100644 index a9175d42f7..0000000000 --- a/src/main/java/com/lambdaworks/redis/protocol/ConnectionWatchdog.java +++ /dev/null @@ -1,316 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. 
- -package com.lambdaworks.redis.protocol; - -import java.net.SocketAddress; -import java.util.concurrent.TimeUnit; -import java.util.function.Supplier; - -import com.lambdaworks.redis.ClientOptions; -import com.lambdaworks.redis.ConnectionEvents; -import com.lambdaworks.redis.RedisChannelHandler; -import com.lambdaworks.redis.internal.LettuceAssert; - -import com.lambdaworks.redis.resource.ClientResources; -import com.lambdaworks.redis.resource.Delay; -import io.netty.bootstrap.Bootstrap; -import io.netty.channel.*; -import io.netty.channel.group.ChannelGroup; -import io.netty.util.Timeout; -import io.netty.util.Timer; -import io.netty.util.TimerTask; -import io.netty.util.concurrent.EventExecutorGroup; -import io.netty.util.internal.logging.InternalLogLevel; -import io.netty.util.internal.logging.InternalLogger; -import io.netty.util.internal.logging.InternalLoggerFactory; - -/** - * A netty {@link ChannelHandler} responsible for monitoring the channel and reconnecting when the connection is lost. - * - * @author Will Glozer - * @author Mark Paluch - */ -@ChannelHandler.Sharable -public class ConnectionWatchdog extends ChannelInboundHandlerAdapter implements TimerTask { - - public static final long LOGGING_QUIET_TIME_MS = TimeUnit.MILLISECONDS.convert(5, TimeUnit.SECONDS); - - private static final InternalLogger logger = InternalLoggerFactory.getInstance(ConnectionWatchdog.class); - - private final Delay reconnectDelay; - private final Bootstrap bootstrap; - private final EventExecutorGroup reconnectWorkers; - private final ReconnectionHandler reconnectionHandler; - private final ReconnectionListener reconnectionListener; - - private Channel channel; - private final Timer timer; - - private SocketAddress remoteAddress; - private long lastReconnectionLogging = -1; - private CommandHandler commandHandler; - - private volatile int attempts; - private volatile boolean armed; - private volatile boolean listenOnChannelInactive; - private volatile Timeout reconnectScheduleTimeout; - - /** - * Create a new watchdog that adds to new connections to the supplied {@link ChannelGroup} and establishes a new - * {@link Channel} when disconnected, while reconnect is true. The socketAddressSupplier can supply the reconnect address. 
- * - * @param reconnectDelay reconnect delay, must not be {@literal null} - * @param clientOptions client options for the current connection, must not be {@literal null} - * @param bootstrap Configuration for new channels, must not be {@literal null} - * @param timer Timer used for delayed reconnect, must not be {@literal null} - * @param reconnectWorkers executor group for reconnect tasks, must not be {@literal null} - * @param socketAddressSupplier the socket address supplier to obtain an address for reconnection, may be {@literal null} - * @param reconnectionListener the reconnection listener, must not be {@literal null} - */ - public ConnectionWatchdog(Delay reconnectDelay, ClientOptions clientOptions, Bootstrap bootstrap, Timer timer, - EventExecutorGroup reconnectWorkers, Supplier socketAddressSupplier, - ReconnectionListener reconnectionListener) { - - LettuceAssert.notNull(reconnectDelay, "Delay must not be null"); - LettuceAssert.notNull(clientOptions, "ClientOptions must not be null"); - LettuceAssert.notNull(bootstrap, "Bootstrap must not be null"); - LettuceAssert.notNull(timer, "Timer must not be null"); - LettuceAssert.notNull(reconnectWorkers, "ReconnectWorkers must not be null"); - LettuceAssert.notNull(reconnectionListener, "ReconnectionListener must not be null"); - - this.reconnectDelay = reconnectDelay; - this.bootstrap = bootstrap; - this.timer = timer; - this.reconnectWorkers = reconnectWorkers; - this.reconnectionListener = reconnectionListener; - Supplier wrappedSocketAddressSupplier = new Supplier() { - @Override - public SocketAddress get() { - - if (socketAddressSupplier != null) { - try { - remoteAddress = socketAddressSupplier.get(); - } catch (RuntimeException e) { - logger.warn("Cannot retrieve the current address from socketAddressSupplier: " + e.toString() - + ", reusing old address " + remoteAddress); - } - } - - return remoteAddress; - } - }; - - this.reconnectionHandler = new ReconnectionHandler(clientOptions, bootstrap, wrappedSocketAddressSupplier); - } - - @Override - public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception { - - logger.debug("{} userEventTriggered({}, {})", commandHandler.logPrefix(), ctx, evt); - - if (evt instanceof ConnectionEvents.PrepareClose) { - - ConnectionEvents.PrepareClose prepareClose = (ConnectionEvents.PrepareClose) evt; - prepareClose(prepareClose); - } - - if (evt instanceof ConnectionEvents.Activated) { - attempts = 0; - } - - super.userEventTriggered(ctx, evt); - } - - void prepareClose(ConnectionEvents.PrepareClose prepareClose) { - - setListenOnChannelInactive(false); - setReconnectSuspended(true); - prepareClose.getPrepareCloseFuture().complete(true); - - reconnectionHandler.prepareClose(); - } - - @Override - public void channelActive(ChannelHandlerContext ctx) throws Exception { - - if(commandHandler == null) { - this.commandHandler = ctx.pipeline().get(CommandHandler.class); - } - - reconnectScheduleTimeout = null; - channel = ctx.channel(); - logger.debug("{} channelActive({})", commandHandler.logPrefix(), ctx); - remoteAddress = channel.remoteAddress(); - - super.channelActive(ctx); - } - - @Override - public void channelInactive(ChannelHandlerContext ctx) throws Exception { - - logger.debug("{} channelInactive({})", commandHandler.logPrefix(), ctx); - channel = null; - - - if (listenOnChannelInactive && !reconnectionHandler.isReconnectSuspended()) { - RedisChannelHandler channelHandler = ctx.pipeline().get(RedisChannelHandler.class); - if (channelHandler != null) { - 
reconnectionHandler.setTimeout(channelHandler.getTimeout()); - reconnectionHandler.setTimeoutUnit(channelHandler.getTimeoutUnit()); - } - - scheduleReconnect(); - } else { - logger.debug("{} Reconnect scheduling disabled", commandHandler.logPrefix(), ctx); - } - - super.channelInactive(ctx); - } - - /** - * Schedule reconnect if channel is not available/not active. - */ - public synchronized void scheduleReconnect() { - - logger.debug("{} scheduleReconnect()", commandHandler.logPrefix()); - - if (!isEventLoopGroupActive()) { - logger.debug("isEventLoopGroupActive() == false"); - return; - } - - if (commandHandler.isClosed()) { - logger.debug("Skip reconnect scheduling, CommandHandler is closed"); - return; - } - - if ((channel == null || !channel.isActive()) && reconnectScheduleTimeout == null) { - attempts++; - - final int attempt = attempts; - int timeout = (int) reconnectDelay.getTimeUnit().toMillis(reconnectDelay.createDelay(attempt)); - logger.debug("Reconnect attempt {}, delay {}ms", attempt, timeout); - this.reconnectScheduleTimeout = timer.newTimeout(new TimerTask() { - @Override - public void run(final Timeout timeout) throws Exception { - - if (!isEventLoopGroupActive()) { - logger.debug("isEventLoopGroupActive() == false"); - return; - } - - reconnectWorkers.submit(() -> { - ConnectionWatchdog.this.run(timeout); - return null; - }); - } - }, timeout, TimeUnit.MILLISECONDS); - } else { - logger.debug("{} Skipping scheduleReconnect() because I have an active channel", commandHandler.logPrefix()); - } - } - - /** - * Reconnect to the remote address that the closed channel was connected to. This creates a new {@link ChannelPipeline} with - * the same handler instances contained in the old channel's pipeline. - * - * @param timeout Timer task handle. - * - * @throws Exception when reconnection fails. 
- */ - @Override - public void run(Timeout timeout) throws Exception { - - reconnectScheduleTimeout = null; - - if (!isEventLoopGroupActive()) { - logger.debug("isEventLoopGroupActive() == false"); - return; - } - - if (commandHandler.isClosed()) { - logger.debug("Skip reconnect scheduling, CommandHandler is closed"); - return; - } - - boolean shouldLog = shouldLog(); - - InternalLogLevel infoLevel = InternalLogLevel.INFO; - InternalLogLevel warnLevel = InternalLogLevel.WARN; - - if (shouldLog) { - lastReconnectionLogging = System.currentTimeMillis(); - } else { - warnLevel = InternalLogLevel.DEBUG; - infoLevel = InternalLogLevel.DEBUG; - } - - try { - reconnectionListener.onReconnect(new ConnectionEvents.Reconnect(attempts)); - reconnect(infoLevel, warnLevel); - } catch (InterruptedException e) { - Thread.currentThread().interrupt(); - throw e; - } catch (Exception e) { - logger.log(warnLevel, "Cannot connect: {}", e.toString()); - if (!isReconnectSuspended()) { - scheduleReconnect(); - } - } - } - - protected void reconnect(InternalLogLevel infoLevel, InternalLogLevel warnLevel) throws Exception { - - logger.log(infoLevel, "Reconnecting, last destination was {}", remoteAddress); - reconnectionHandler.reconnect(infoLevel); - } - - private boolean isEventLoopGroupActive() { - - if (!isEventLoopGroupActive(bootstrap.group()) || !isEventLoopGroupActive(reconnectWorkers)) { - return false; - } - - return true; - } - - private boolean isEventLoopGroupActive(EventExecutorGroup executorService) { - - if (executorService.isShutdown() || executorService.isTerminated() || executorService.isShuttingDown()) { - return false; - } - - return true; - } - - private boolean shouldLog() { - - long quietUntil = lastReconnectionLogging + LOGGING_QUIET_TIME_MS; - - if (quietUntil > System.currentTimeMillis()) { - return false; - } - - return true; - } - - public void setListenOnChannelInactive(boolean listenOnChannelInactive) { - this.listenOnChannelInactive = listenOnChannelInactive; - } - - public boolean isListenOnChannelInactive() { - return listenOnChannelInactive; - } - - public boolean isReconnectSuspended() { - return reconnectionHandler.isReconnectSuspended(); - } - - public void setReconnectSuspended(boolean reconnectSuspended) { - reconnectionHandler.setReconnectSuspended(true); - } - - ReconnectionHandler getReconnectionHandler() { - return reconnectionHandler; - } -} diff --git a/src/main/java/com/lambdaworks/redis/protocol/DecoratedCommand.java b/src/main/java/com/lambdaworks/redis/protocol/DecoratedCommand.java deleted file mode 100644 index ca2968156b..0000000000 --- a/src/main/java/com/lambdaworks/redis/protocol/DecoratedCommand.java +++ /dev/null @@ -1,17 +0,0 @@ -package com.lambdaworks.redis.protocol; - -/** - * A decorated command allowing access to the underlying {@link #getDelegate()}. - * - * @author Mark Paluch - */ -public interface DecoratedCommand { - - /** - * The underlying command. - * - * @return never {@literal null}. - */ - RedisCommand getDelegate(); - -} diff --git a/src/main/java/com/lambdaworks/redis/protocol/LettuceCharsets.java b/src/main/java/com/lambdaworks/redis/protocol/LettuceCharsets.java deleted file mode 100644 index f3b785952a..0000000000 --- a/src/main/java/com/lambdaworks/redis/protocol/LettuceCharsets.java +++ /dev/null @@ -1,42 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis.protocol; - -import java.nio.ByteBuffer; -import java.nio.charset.Charset; - -/** - * {@link Charset}-related utilities. 
- * - * @author Will Glozer - */ -public class LettuceCharsets { - - /** - * US-ASCII charset. - */ - public static final Charset ASCII = Charset.forName("US-ASCII"); - - /** - * UTF-8 charset. - */ - public static final Charset UTF8 = Charset.forName("UTF-8"); - - /** - * Utility constructor. - */ - private LettuceCharsets() { - - } - - /** - * Create a ByteBuffer from a string using ASCII encoding. - * - * @param s the string - * @return ByteBuffer - */ - public static ByteBuffer buffer(String s) { - return ByteBuffer.wrap(s.getBytes(ASCII)); - } - -} diff --git a/src/main/java/com/lambdaworks/redis/protocol/ProtocolKeyword.java b/src/main/java/com/lambdaworks/redis/protocol/ProtocolKeyword.java deleted file mode 100644 index 39ee43f1f6..0000000000 --- a/src/main/java/com/lambdaworks/redis/protocol/ProtocolKeyword.java +++ /dev/null @@ -1,22 +0,0 @@ -package com.lambdaworks.redis.protocol; - -/** - * Interface for protocol keywords providing an encoded representation. - * - * @author Mark Paluch - */ -public interface ProtocolKeyword { - - /** - * - * @return byte[] encoded representation. - * - */ - byte[] getBytes(); - - /** - * - * @return name of the command. - */ - String name(); -} diff --git a/src/main/java/com/lambdaworks/redis/protocol/ReconnectionHandler.java b/src/main/java/com/lambdaworks/redis/protocol/ReconnectionHandler.java deleted file mode 100644 index b48cb7abd4..0000000000 --- a/src/main/java/com/lambdaworks/redis/protocol/ReconnectionHandler.java +++ /dev/null @@ -1,157 +0,0 @@ -package com.lambdaworks.redis.protocol; - -import java.net.SocketAddress; -import java.util.concurrent.TimeUnit; -import java.util.concurrent.TimeoutException; -import java.util.function.Supplier; - -import com.lambdaworks.redis.ClientOptions; -import com.lambdaworks.redis.RedisChannelInitializer; -import com.lambdaworks.redis.internal.LettuceAssert; - -import io.netty.bootstrap.Bootstrap; -import io.netty.channel.Channel; -import io.netty.channel.ChannelFuture; -import io.netty.util.internal.logging.InternalLogLevel; -import io.netty.util.internal.logging.InternalLogger; -import io.netty.util.internal.logging.InternalLoggerFactory; - -/** - * @author Mark Paluch - */ -class ReconnectionHandler { - - private static final InternalLogger logger = InternalLoggerFactory.getInstance(ReconnectionHandler.class); - - private final Supplier socketAddressSupplier; - private final Bootstrap bootstrap; - private final ClientOptions clientOptions; - - private TimeUnit timeoutUnit = TimeUnit.SECONDS; - private long timeout = 60; - - private volatile ChannelFuture currentFuture; - private boolean reconnectSuspended; - - ReconnectionHandler(ClientOptions clientOptions, Bootstrap bootstrap, - Supplier socketAddressSupplier) { - - LettuceAssert.notNull(socketAddressSupplier, "SocketAddressSupplier must not be null"); - LettuceAssert.notNull(clientOptions, "ClientOptions must not be null"); - LettuceAssert.notNull(bootstrap, "Bootstrap must not be null"); - - this.socketAddressSupplier = socketAddressSupplier; - this.bootstrap = bootstrap; - this.clientOptions = clientOptions; - } - - protected boolean reconnect(InternalLogLevel infoLevel) throws Exception { - - SocketAddress remoteAddress = socketAddressSupplier.get(); - - try { - long timeLeft = timeoutUnit.toNanos(timeout); - long start = System.nanoTime(); - - logger.debug("Reconnecting to Redis at {}", remoteAddress); - currentFuture = bootstrap.connect(remoteAddress); - if (!currentFuture.await(timeLeft, TimeUnit.NANOSECONDS)) { - if 
(currentFuture.isCancellable()) { - currentFuture.cancel(true); - } - - throw new TimeoutException("Reconnection attempt exceeded timeout of " + timeout + " " + timeoutUnit); - } - - currentFuture.sync(); - - Channel channel = currentFuture.channel(); - - RedisChannelInitializer channelInitializer = channel.pipeline().get(RedisChannelInitializer.class); - CommandHandler commandHandler = channel.pipeline().get(CommandHandler.class); - - if (channelInitializer == null) { - logger.warn("Reconnection attempt without a RedisChannelInitializer in the channel pipeline"); - close(channel); - return false; - } - - if (commandHandler == null) { - logger.warn("Reconnection attempt without a CommandHandler in the channel pipeline"); - close(channel); - return false; - } - - try { - timeLeft -= System.nanoTime() - start; - channelInitializer.channelInitialized().get(Math.max(0, timeLeft), TimeUnit.NANOSECONDS); - if (logger.isDebugEnabled()) { - logger.log(infoLevel, "Reconnected to {}, Channel {}", remoteAddress, - ChannelLogDescriptor.logDescriptor(channel)); - } else { - logger.log(infoLevel, "Reconnected to {}", remoteAddress); - } - return true; - } catch (TimeoutException e) { - channelInitializer.channelInitialized().cancel(true); - } catch (Exception e) { - if (clientOptions.isCancelCommandsOnReconnectFailure()) { - commandHandler.reset(); - } - - if (clientOptions.isSuspendReconnectOnProtocolFailure()) { - logger.error("Cannot initialize channel. Disabling autoReconnect", e); - setReconnectSuspended(true); - } else { - logger.error("Cannot initialize channel.", e); - throw e; - } - } - } finally { - currentFuture = null; - } - - return false; - } - - private void close(Channel channel) { - if (channel != null && channel.isOpen()) { - channel.close(); - } - } - - public boolean isReconnectSuspended() { - return reconnectSuspended; - } - - public void setReconnectSuspended(boolean reconnectSuspended) { - this.reconnectSuspended = reconnectSuspended; - } - - public TimeUnit getTimeoutUnit() { - return timeoutUnit; - } - - public void setTimeoutUnit(TimeUnit timeoutUnit) { - this.timeoutUnit = timeoutUnit; - } - - public long getTimeout() { - return timeout; - } - - public void setTimeout(long timeout) { - this.timeout = timeout; - } - - public void prepareClose() { - - if (currentFuture != null && !currentFuture.isDone()) { - currentFuture.cancel(true); - } - } - - ClientOptions getClientOptions() { - return clientOptions; - } -} diff --git a/src/main/java/com/lambdaworks/redis/protocol/ReconnectionListener.java b/src/main/java/com/lambdaworks/redis/protocol/ReconnectionListener.java deleted file mode 100644 index 533cd13609..0000000000 --- a/src/main/java/com/lambdaworks/redis/protocol/ReconnectionListener.java +++ /dev/null @@ -1,27 +0,0 @@ -package com.lambdaworks.redis.protocol; - -import com.lambdaworks.redis.ConnectionEvents; - -/** - * Listener for reconnection events. - * - * @author Mark Paluch - * @since 4.2 - */ -public interface ReconnectionListener { - - ReconnectionListener NO_OP = new ReconnectionListener() { - @Override - public void onReconnect(ConnectionEvents.Reconnect reconnect) { - - } - }; - - /** - * Listener method notified on a reconnection attempt. - * - * @param reconnect the event payload. 
- */ - void onReconnect(ConnectionEvents.Reconnect reconnect); - -} diff --git a/src/main/java/com/lambdaworks/redis/protocol/RedisCommand.java b/src/main/java/com/lambdaworks/redis/protocol/RedisCommand.java deleted file mode 100644 index 507b678918..0000000000 --- a/src/main/java/com/lambdaworks/redis/protocol/RedisCommand.java +++ /dev/null @@ -1,83 +0,0 @@ -package com.lambdaworks.redis.protocol; - -import com.lambdaworks.redis.output.CommandOutput; -import io.netty.buffer.ByteBuf; - -/** - * A redis command that holds an output, arguments and a state, whether it is completed or not. - * - * Commands can be wrapped. Outer commands have to notify inner commands but inner commands do not communicate with outer - * commands. - * - * @author Mark Paluch - * @param Key type. - * @param Value type. - * @param Output type. - * @since 3.0 - */ -public interface RedisCommand { - - /** - * The command output. Can be null. - * - * @return the command output. - */ - CommandOutput getOutput(); - - /** - * Complete a command. - */ - void complete(); - - /** - * Cancel a command. - */ - void cancel(); - - /** - * - * @return the current command args - */ - CommandArgs getArgs(); - - /** - * - * @param throwable the exception - * @return {@code true} if this invocation caused this CompletableFuture to transition to a completed state, else - * {@code false} - */ - boolean completeExceptionally(Throwable throwable); - - /** - * - * @return the redis command type like {@literal SADD}, {@literal HMSET}, {@literal QUIT}. - */ - ProtocolKeyword getType(); - - /** - * Encode the command. - * - * @param buf byte buffer to operate on. - */ - void encode(ByteBuf buf); - - /** - * - * @return true if the command is cancelled. - */ - boolean isCancelled(); - - /** - * - * @return true if the command is completed. - */ - boolean isDone(); - - /** - * Set a new output. Only possible as long as the command is not completed/cancelled. - * - * @param output the new command output - * @throws IllegalStateException if the command is cancelled/completed - */ - void setOutput(CommandOutput output); -} diff --git a/src/main/java/com/lambdaworks/redis/protocol/RedisStateMachine.java b/src/main/java/com/lambdaworks/redis/protocol/RedisStateMachine.java deleted file mode 100644 index a4677d43e6..0000000000 --- a/src/main/java/com/lambdaworks/redis/protocol/RedisStateMachine.java +++ /dev/null @@ -1,499 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis.protocol; - -import static com.lambdaworks.redis.protocol.LettuceCharsets.buffer; -import static com.lambdaworks.redis.protocol.RedisStateMachine.State.Type.*; - -import java.nio.ByteBuffer; -import java.util.Arrays; -import java.util.concurrent.atomic.AtomicBoolean; - -import com.lambdaworks.redis.RedisException; -import com.lambdaworks.redis.output.CommandOutput; - -import io.netty.buffer.ByteBuf; -import io.netty.buffer.ByteBufProcessor; -import io.netty.buffer.PooledByteBufAllocator; -import io.netty.util.Version; -import io.netty.util.internal.logging.InternalLogger; -import io.netty.util.internal.logging.InternalLoggerFactory; - -/** - * State machine that decodes redis server responses encoded according to the Unified - * Request Protocol (RESP). - * - * @param Key type. - * @param Value type. 
- * @author Will Glozer - * @author Mark Paluch - */ -public class RedisStateMachine { - - private static final InternalLogger logger = InternalLoggerFactory.getInstance(RedisStateMachine.class); - private static final ByteBuffer QUEUED = buffer("QUEUED"); - - static class State { - enum Type { - SINGLE, ERROR, INTEGER, BULK, MULTI, BYTES - } - - Type type = null; - int count = -1; - } - - private final State[] stack; - private int stackElements; - - // If DEBUG level logging has been enabled at startup. - private final boolean debugEnabled; - private final LongProcessor longProcessor; - private final ByteBuf responseElementBuffer = PooledByteBufAllocator.DEFAULT.directBuffer(1024); - private final AtomicBoolean closed = new AtomicBoolean(); - - /** - * Initialize a new instance. - */ - public RedisStateMachine() { - stack = new State[32]; - debugEnabled = logger.isDebugEnabled(); - - Version nettyBufferVersion = Version.identify().get("netty-buffer"); - - boolean useNetty40ByteBufCompatibility = false; - if (nettyBufferVersion != null) { - useNetty40ByteBufCompatibility = nettyBufferVersion.artifactVersion().startsWith("4.0"); - } - - LongProcessor longProcessor; - if (!useNetty40ByteBufCompatibility) { - try { - longProcessor = (LongProcessor) Class - .forName("com.lambdaworks.redis.protocol.RedisStateMachine$Netty41LongProcessor").newInstance(); - } catch (ReflectiveOperationException e) { - throw new RedisException("Cannot create Netty41ToLongProcessor instance", e); - } - } else { - longProcessor = new LongProcessor(); - } - - this.longProcessor = longProcessor; - } - - /** - * Decode a command using the input buffer. - * - * @param buffer Buffer containing data from the server. - * @param output Current command output. - * @return true if a complete response was read. - */ - public boolean decode(ByteBuf buffer, CommandOutput output) { - return decode(buffer, null, output); - } - - /** - * Attempt to decode a redis response and return a flag indicating whether a complete response was read. - * - * @param buffer Buffer containing data from the server. - * @param command the command itself - * @param output Current command output. - * @return true if a complete response was read. 
- */ - public boolean decode(ByteBuf buffer, RedisCommand command, CommandOutput output) { - int length, end; - ByteBuffer bytes; - - if (debugEnabled) { - logger.debug("Decode {}", command); - } - - if (isEmpty(stack)) { - add(stack, new State()); - } - - if (output == null) { - return isEmpty(stack); - } - - loop: - - while (!isEmpty(stack)) { - State state = peek(stack); - - if (state.type == null) { - if (!buffer.isReadable()) { - break; - } - state.type = readReplyType(buffer); - buffer.markReaderIndex(); - } - - switch (state.type) { - case SINGLE: - if ((bytes = readLine(buffer)) == null) { - break loop; - } - - if (!QUEUED.equals(bytes)) { - safeSet(output, bytes, command); - } - break; - case ERROR: - if ((bytes = readLine(buffer)) == null) { - break loop; - } - safeSetError(output, bytes, command); - break; - case INTEGER: - if ((end = findLineEnd(buffer)) == -1) { - break loop; - } - long integer = readLong(buffer, buffer.readerIndex(), end); - safeSet(output, integer, command); - break; - case BULK: - if ((end = findLineEnd(buffer)) == -1) { - break loop; - } - length = (int) readLong(buffer, buffer.readerIndex(), end); - if (length == -1) { - safeSet(output, null, command); - } else { - state.type = BYTES; - state.count = length + 2; - buffer.markReaderIndex(); - continue loop; - } - break; - case MULTI: - if (state.count == -1) { - if ((end = findLineEnd(buffer)) == -1) { - break loop; - } - length = (int) readLong(buffer, buffer.readerIndex(), end); - state.count = length; - buffer.markReaderIndex(); - safeMulti(output, state.count, command); - } - - if (state.count <= 0) { - break; - } - - state.count--; - addFirst(stack, new State()); - - continue loop; - case BYTES: - if ((bytes = readBytes(buffer, state.count)) == null) { - break loop; - } - safeSet(output, bytes, command); - break; - default: - throw new IllegalStateException("State " + state.type + " not supported"); - } - - buffer.markReaderIndex(); - remove(stack); - - output.complete(size(stack)); - } - - if (debugEnabled) { - logger.debug("Decoded {}, empty stack: {}", command, isEmpty(stack)); - } - - return isEmpty(stack); - } - - /** - * Reset the state machine. - */ - public void reset() { - Arrays.fill(stack, null); - stackElements = 0; - } - - /** - * Close the state machine to free resources. - */ - public void close() { - if(closed.compareAndSet(false, true)) { - responseElementBuffer.release(); - } - } - - private int findLineEnd(ByteBuf buffer) { - - int start = buffer.readerIndex(); - int index = buffer.indexOf(start, buffer.writerIndex(), (byte) '\n'); - return (index > 0 && buffer.getByte(index - 1) == '\r') ? 
index : -1; - } - - private State.Type readReplyType(ByteBuf buffer) { - byte b = buffer.readByte(); - switch (b) { - case '+': - return SINGLE; - case '-': - return ERROR; - case ':': - return INTEGER; - case '$': - return BULK; - case '*': - return MULTI; - default: - throw new RedisException("Invalid first byte: " + Byte.toString(b)); - } - } - - private long readLong(ByteBuf buffer, int start, int end) { - return longProcessor.getValue(buffer, start, end); - } - - private ByteBuffer readLine(ByteBuf buffer) { - - ByteBuffer bytes = null; - int end = findLineEnd(buffer); - - if (end > -1) { - int start = buffer.readerIndex(); - responseElementBuffer.clear(); - int size = end - start - 1; - - if (responseElementBuffer.capacity() < size) { - responseElementBuffer.capacity(size); - } - - buffer.readBytes(responseElementBuffer, size); - - bytes = responseElementBuffer.internalNioBuffer(0, size); - - buffer.readerIndex(end + 1); - buffer.markReaderIndex(); - } - return bytes; - } - - private ByteBuffer readBytes(ByteBuf buffer, int count) { - - ByteBuffer bytes = null; - - if (buffer.readableBytes() >= count) { - responseElementBuffer.clear(); - - int size = count - 2; - - if (responseElementBuffer.capacity() < size) { - responseElementBuffer.capacity(size); - } - buffer.readBytes(responseElementBuffer, size); - - bytes = responseElementBuffer.internalNioBuffer(0, size); - buffer.readerIndex(buffer.readerIndex() + 2); - } - return bytes; - } - - /** - * Remove the head element from the stack. - * - * @param stack - */ - private void remove(State[] stack) { - stack[stackElements - 1] = null; - stackElements--; - } - - /** - * Add the element to the stack to be the new head element. - * - * @param stack - * @param state - */ - private void addFirst(State[] stack, State state) { - stack[stackElements++] = state; - } - - /** - * Returns the head element without removing it. - * - * @param stack - * @return - */ - private State peek(State[] stack) { - return stack[stackElements - 1]; - } - - /** - * Add a state as tail element. This method shifts the whole stack if the stack is not empty. - * - * @param stack - * @param state - */ - private void add(State[] stack, State state) { - - if (stackElements != 0) { - System.arraycopy(stack, 0, stack, 1, stackElements); - } - - stack[0] = state; - stackElements++; - } - - /** - * @param stack - * @return number of stack elements. - */ - private int size(State[] stack) { - return stackElements; - } - - /** - * @param stack - * @return true if the stack is empty. - */ - private boolean isEmpty(State[] stack) { - return stackElements == 0; - } - - /** - * Safely sets {@link CommandOutput#set(long)}. Completes a command exceptionally in case an exception occurs. - * - * @param output - * @param integer - * @param command - */ - protected void safeSet(CommandOutput output, long integer, RedisCommand command) { - - try { - output.set(integer); - } catch (Exception e) { - command.completeExceptionally(e); - } - } - - /** - * Safely sets {@link CommandOutput#set(ByteBuffer)}. Completes a command exceptionally in case an exception occurs. - * - * @param output - * @param bytes - * @param command - */ - protected void safeSet(CommandOutput output, ByteBuffer bytes, RedisCommand command) { - - try { - output.set(bytes); - } catch (Exception e) { - command.completeExceptionally(e); - } - } - - /** - * Safely sets {@link CommandOutput#multi(int)}. Completes a command exceptionally in case an exception occurs. 
- * - * @param output - * @param count - * @param command - */ - protected void safeMulti(CommandOutput output, int count, RedisCommand command) { - - try { - output.multi(count); - } catch (Exception e) { - command.completeExceptionally(e); - } - } - - /** - * Safely sets {@link CommandOutput#setError(ByteBuffer)}. Completes a command exceptionally in case an exception occurs. - * - * @param output - * @param bytes - * @param command - */ - protected void safeSetError(CommandOutput output, ByteBuffer bytes, RedisCommand command) { - - try { - output.setError(bytes); - } catch (Exception e) { - command.completeExceptionally(e); - } - } - - /** - * Compatibility code that works also on Netty 4.0. - */ - static class LongProcessor { - - public long getValue(ByteBuf buffer, int start, int end) { - - long value = 0; - - boolean negative = buffer.getByte(start) == '-'; - int offset = negative ? start + 1 : start; - while (offset < end - 1) { - int digit = buffer.getByte(offset++) - '0'; - value = value * 10 - digit; - } - if (!negative) { - value = -value; - } - - buffer.readerIndex(end + 1); - buffer.markReaderIndex(); - return value; - } - } - - /** - * Processor for Netty 4.1. Note {@link ByteBufProcessor} is deprecated but ByteProcessor does not exist in Netty 4.0. So we - * need to stick to that as long as we support Netty 4.0. - */ - @SuppressWarnings("unused") - static class Netty41LongProcessor extends LongProcessor implements ByteBufProcessor { - - long result; - boolean negative; - boolean first; - - @Override - public long getValue(ByteBuf buffer, int start, int end) { - - this.result = 0; - this.first = true; - - buffer.forEachByte(start, end - start - 1, this); - - if (!this.negative) { - this.result = -this.result; - } - buffer.readerIndex(end + 1); - - return this.result; - } - - public boolean process(byte value) throws Exception { - - if (first) { - first = false; - - if (value == '-') { - negative = true; - } else { - negative = false; - int digit = value - '0'; - result = result * 10 - digit; - } - return true; - } - - int digit = value - '0'; - result = result * 10 - digit; - - return true; - } - } -} diff --git a/src/main/java/com/lambdaworks/redis/protocol/TransactionalCommand.java b/src/main/java/com/lambdaworks/redis/protocol/TransactionalCommand.java deleted file mode 100644 index 8630e8af2c..0000000000 --- a/src/main/java/com/lambdaworks/redis/protocol/TransactionalCommand.java +++ /dev/null @@ -1,24 +0,0 @@ -package com.lambdaworks.redis.protocol; - -import java.util.concurrent.CountDownLatch; - -/** - * A wrapper for commands within a {@literal MULTI} transaction. Commands triggered within a transaction will be completed - * twice. Once on the submission and once during {@literal EXEC}. Only the second completion will complete the underlying - * command. - * - * - * @param Key type. - * @param Value type. - * @param Command output type. - * - * @author Mark Paluch - */ -public class TransactionalCommand extends AsyncCommand implements RedisCommand { - - public TransactionalCommand(RedisCommand command) { - super(command); - latch = new CountDownLatch(2); - } - -} diff --git a/src/main/java/com/lambdaworks/redis/protocol/WithLatency.java b/src/main/java/com/lambdaworks/redis/protocol/WithLatency.java deleted file mode 100644 index 7f8ed7c7df..0000000000 --- a/src/main/java/com/lambdaworks/redis/protocol/WithLatency.java +++ /dev/null @@ -1,45 +0,0 @@ -package com.lambdaworks.redis.protocol; - -/** - * Interface to items recording a latency. 
Unit of time depends on the actual implementation. - * - * @author Mark Paluch - */ -interface WithLatency { - - /** - * Sets the time of sending the item. - * @param time the time of when the item was sent. - */ - void sent(long time); - - /** - * Sets the time of the first response. - * @param time the time of the first response. - */ - void firstResponse(long time); - - /** - * Set the time of completion. - * @param time the time of completion. - */ - void completed(long time); - - /** - * @return the time of when the item was sent. - */ - long getSent(); - - /** - * - * @return the time of the first response. - */ - long getFirstResponse(); - - /** - * - * @return the time of completion. - */ - long getCompleted(); - -} diff --git a/src/main/java/com/lambdaworks/redis/protocol/package-info.java b/src/main/java/com/lambdaworks/redis/protocol/package-info.java deleted file mode 100644 index 07967b0b47..0000000000 --- a/src/main/java/com/lambdaworks/redis/protocol/package-info.java +++ /dev/null @@ -1,4 +0,0 @@ -/** - * Redis protocol layer abstraction. - */ -package com.lambdaworks.redis.protocol; \ No newline at end of file diff --git a/src/main/java/com/lambdaworks/redis/pubsub/PubSubCommandArgs.java b/src/main/java/com/lambdaworks/redis/pubsub/PubSubCommandArgs.java deleted file mode 100644 index d20788e73d..0000000000 --- a/src/main/java/com/lambdaworks/redis/pubsub/PubSubCommandArgs.java +++ /dev/null @@ -1,32 +0,0 @@ -package com.lambdaworks.redis.pubsub; - -import java.nio.ByteBuffer; - -import com.lambdaworks.redis.codec.RedisCodec; -import com.lambdaworks.redis.protocol.CommandArgs; - -/** - * - * Command args for Pub/Sub connections. This implementation hides the first key as PubSub keys are not keys from the key-space. - * - * @author Mark Paluch - * @since 4.2 - */ -class PubSubCommandArgs extends CommandArgs { - - /** - * @param codec Codec used to encode/decode keys and values, must not be {@literal null}. - */ - public PubSubCommandArgs(RedisCodec codec) { - super(codec); - } - - /** - * - * @return always {@literal null}. - */ - @Override - public ByteBuffer getFirstEncodedKey() { - return null; - } -} diff --git a/src/main/java/com/lambdaworks/redis/pubsub/PubSubCommandBuilder.java b/src/main/java/com/lambdaworks/redis/pubsub/PubSubCommandBuilder.java deleted file mode 100644 index 3bba1b8f0c..0000000000 --- a/src/main/java/com/lambdaworks/redis/pubsub/PubSubCommandBuilder.java +++ /dev/null @@ -1,81 +0,0 @@ -package com.lambdaworks.redis.pubsub; - -import static com.lambdaworks.redis.protocol.CommandKeyword.CHANNELS; -import static com.lambdaworks.redis.protocol.CommandKeyword.NUMSUB; -import static com.lambdaworks.redis.protocol.CommandType.*; - -import java.util.List; -import java.util.Map; - -import com.lambdaworks.redis.codec.RedisCodec; -import com.lambdaworks.redis.internal.LettuceAssert; -import com.lambdaworks.redis.output.CommandOutput; -import com.lambdaworks.redis.output.IntegerOutput; -import com.lambdaworks.redis.output.KeyListOutput; -import com.lambdaworks.redis.output.MapOutput; -import com.lambdaworks.redis.protocol.BaseRedisCommandBuilder; -import com.lambdaworks.redis.protocol.Command; -import com.lambdaworks.redis.protocol.CommandArgs; -import com.lambdaworks.redis.protocol.CommandType; - -/** - * Dedicated pub/sub command builder to build pub/sub commands. 
- * - * @author Mark Paluch - * @since 4.2 - */ -@SuppressWarnings("varargs") -class PubSubCommandBuilder extends BaseRedisCommandBuilder { - - static final String MUST_NOT_BE_EMPTY = "must not be empty"; - - PubSubCommandBuilder(RedisCodec codec) { - super(codec); - } - - Command publish(K channel, V message) { - CommandArgs args = new PubSubCommandArgs<>(codec).addKey(channel).addValue(message); - return createCommand(PUBLISH, new IntegerOutput<>(codec), args); - } - - Command> pubsubChannels(K pattern) { - CommandArgs args = new PubSubCommandArgs<>(codec).add(CHANNELS).addKey(pattern); - return createCommand(PUBSUB, new KeyListOutput<>(codec), args); - } - - @SafeVarargs - final Command> pubsubNumsub(K... patterns) { - LettuceAssert.notEmpty(patterns, "patterns " + MUST_NOT_BE_EMPTY); - - CommandArgs args = new PubSubCommandArgs<>(codec).add(NUMSUB).addKeys(patterns); - return createCommand(PUBSUB, new MapOutput<>((RedisCodec) codec), args); - } - - @SafeVarargs - final Command psubscribe(K... patterns) { - LettuceAssert.notEmpty(patterns, "patterns " + MUST_NOT_BE_EMPTY); - - return pubSubCommand(PSUBSCRIBE, new PubSubOutput<>(codec), patterns); - } - - @SafeVarargs - final Command punsubscribe(K... patterns) { - return pubSubCommand(PUNSUBSCRIBE, new PubSubOutput<>(codec), patterns); - } - - @SafeVarargs - final Command subscribe(K... channels) { - LettuceAssert.notEmpty(channels, "channels " + MUST_NOT_BE_EMPTY); - - return pubSubCommand(SUBSCRIBE, new PubSubOutput<>(codec), channels); - } - - @SafeVarargs - final Command unsubscribe(K... channels) { - return pubSubCommand(UNSUBSCRIBE, new PubSubOutput<>(codec), channels); - } - - Command pubSubCommand(CommandType type, CommandOutput output, K... keys) { - return new Command<>(type, output, new PubSubCommandArgs<>(codec).addKeys(keys)); - } -} diff --git a/src/main/java/com/lambdaworks/redis/pubsub/PubSubCommandHandler.java b/src/main/java/com/lambdaworks/redis/pubsub/PubSubCommandHandler.java deleted file mode 100644 index 2937ce9c61..0000000000 --- a/src/main/java/com/lambdaworks/redis/pubsub/PubSubCommandHandler.java +++ /dev/null @@ -1,67 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis.pubsub; - -import java.util.Queue; - -import com.lambdaworks.redis.ClientOptions; -import com.lambdaworks.redis.codec.RedisCodec; -import com.lambdaworks.redis.output.CommandOutput; -import com.lambdaworks.redis.protocol.CommandHandler; -import com.lambdaworks.redis.protocol.RedisCommand; - -import com.lambdaworks.redis.resource.ClientResources; -import io.netty.buffer.ByteBuf; -import io.netty.channel.ChannelHandler; -import io.netty.channel.ChannelHandlerContext; - -/** - * A netty {@link ChannelHandler} responsible for writing redis pub/sub commands and reading the response stream from the - * server. - * - * @param Key type. - * @param Value type. - * - * @author Will Glozer - */ -public class PubSubCommandHandler extends CommandHandler { - private RedisCodec codec; - private PubSubOutput output; - - /** - * Initialize a new instance. - * - * @param clientOptions client options for the connection - * @param clientResources client resources for this connection - * @param queue Command queue. - * @param codec Codec. 
- */ - public PubSubCommandHandler(ClientOptions clientOptions, ClientResources clientResources, - Queue> queue, RedisCodec codec) { - super(clientOptions, clientResources, queue); - this.codec = codec; - this.output = new PubSubOutput<>(codec); - } - - @Override - protected void decode(ChannelHandlerContext ctx, ByteBuf buffer) throws InterruptedException { - while (output.type() == null && !queue.isEmpty()) { - CommandOutput currentOutput = queue.peek().getOutput(); - if (!rsm.decode(buffer, currentOutput)) { - return; - } - queue.poll().complete(); - buffer.discardReadBytes(); - if (currentOutput instanceof PubSubOutput) { - ctx.fireChannelRead(currentOutput); - } - } - - while (rsm.decode(buffer, output)) { - ctx.fireChannelRead(output); - output = new PubSubOutput(codec); - buffer.discardReadBytes(); - } - } - -} diff --git a/src/main/java/com/lambdaworks/redis/pubsub/PubSubOutput.java b/src/main/java/com/lambdaworks/redis/pubsub/PubSubOutput.java deleted file mode 100644 index 18fecddc70..0000000000 --- a/src/main/java/com/lambdaworks/redis/pubsub/PubSubOutput.java +++ /dev/null @@ -1,96 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis.pubsub; - -import java.nio.ByteBuffer; - -import com.lambdaworks.redis.codec.RedisCodec; -import com.lambdaworks.redis.output.CommandOutput; - -/** - * One element of the redis pub/sub stream. May be a message or notification of subscription details. - * - * @param Key type. - * @param Value type. - * @param Result type. - * @author Will Glozer - */ -public class PubSubOutput extends CommandOutput { - public enum Type { - message, pmessage, psubscribe, punsubscribe, subscribe, unsubscribe - } - - private Type type; - private K channel; - private K pattern; - private long count; - - public PubSubOutput(RedisCodec codec) { - super(codec, null); - } - - public Type type() { - return type; - } - - public K channel() { - return channel; - } - - public K pattern() { - return pattern; - } - - public long count() { - return count; - } - - @Override - @SuppressWarnings({ "fallthrough", "unchecked" }) - public void set(ByteBuffer bytes) { - - if (bytes == null) { - return; - } - - if (type == null) { - type = Type.valueOf(decodeAscii(bytes)); - return; - } - - handleOutput(bytes); - } - - @SuppressWarnings("unchecked") - private void handleOutput(ByteBuffer bytes) { - switch (type) { - case pmessage: - if (pattern == null) { - pattern = codec.decodeKey(bytes); - break; - } - case message: - if (channel == null) { - channel = codec.decodeKey(bytes); - break; - } - output = (T) codec.decodeValue(bytes); - break; - case psubscribe: - case punsubscribe: - pattern = codec.decodeKey(bytes); - break; - case subscribe: - case unsubscribe: - channel = codec.decodeKey(bytes); - break; - default: - throw new UnsupportedOperationException("Operation " + type + " not supported"); - } - } - - @Override - public void set(long integer) { - count = integer; - } -} diff --git a/src/main/java/com/lambdaworks/redis/pubsub/RedisPubSubAdapter.java b/src/main/java/com/lambdaworks/redis/pubsub/RedisPubSubAdapter.java deleted file mode 100644 index dc699fad82..0000000000 --- a/src/main/java/com/lambdaworks/redis/pubsub/RedisPubSubAdapter.java +++ /dev/null @@ -1,43 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis.pubsub; - -/** - * Convenience adapter with an empty implementation of all {@link RedisPubSubListener} callback methods. - * - * @param Key type. - * @param Value type. 
- * - * @author Will Glozer - */ -public class RedisPubSubAdapter implements RedisPubSubListener { - @Override - public void message(K channel, V message) { - // empty adapter method - } - - @Override - public void message(K pattern, K channel, V message) { - // empty adapter method - } - - @Override - public void subscribed(K channel, long count) { - // empty adapter method - } - - @Override - public void psubscribed(K pattern, long count) { - // empty adapter method - } - - @Override - public void unsubscribed(K channel, long count) { - // empty adapter method - } - - @Override - public void punsubscribed(K pattern, long count) { - // empty adapter method - } -} diff --git a/src/main/java/com/lambdaworks/redis/pubsub/RedisPubSubAsyncCommandsImpl.java b/src/main/java/com/lambdaworks/redis/pubsub/RedisPubSubAsyncCommandsImpl.java deleted file mode 100644 index a22398e1ed..0000000000 --- a/src/main/java/com/lambdaworks/redis/pubsub/RedisPubSubAsyncCommandsImpl.java +++ /dev/null @@ -1,108 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis.pubsub; - -import static com.lambdaworks.redis.protocol.CommandType.PSUBSCRIBE; -import static com.lambdaworks.redis.protocol.CommandType.PUNSUBSCRIBE; -import static com.lambdaworks.redis.protocol.CommandType.SUBSCRIBE; -import static com.lambdaworks.redis.protocol.CommandType.UNSUBSCRIBE; - -import com.lambdaworks.redis.RedisAsyncCommandsImpl; -import com.lambdaworks.redis.RedisFuture; -import com.lambdaworks.redis.codec.RedisCodec; -import com.lambdaworks.redis.protocol.CommandArgs; -import com.lambdaworks.redis.pubsub.api.async.RedisPubSubAsyncCommands; -import rx.Observable; - -import java.util.List; -import java.util.Map; - -/** - * An asynchronous and thread-safe API for a Redis pub/sub connection. - * - * @param Key type. - * @param Value type. - * @author Will Glozer - */ -public class RedisPubSubAsyncCommandsImpl extends RedisAsyncCommandsImpl implements RedisPubSubConnection, - RedisPubSubAsyncCommands { - - private PubSubCommandBuilder commandBuilder; - - /** - * Initialize a new connection. - * - * @param connection the connection . - * @param codec Codec used to encode/decode keys and values. - */ - public RedisPubSubAsyncCommandsImpl(StatefulRedisPubSubConnection connection, RedisCodec codec) { - super(connection, codec); - this.connection = connection; - this.commandBuilder = new PubSubCommandBuilder<>(codec); - } - - /** - * Add a new listener. - * - * @param listener Listener. - */ - @Override - public void addListener(RedisPubSubListener listener) { - getStatefulConnection().addListener(listener); - } - - /** - * Remove an existing listener. - * - * @param listener Listener. - */ - @Override - public void removeListener(RedisPubSubListener listener) { - getStatefulConnection().removeListener(listener); - } - - @Override - @SuppressWarnings("unchecked") - public RedisFuture psubscribe(K... patterns) { - return (RedisFuture) dispatch(commandBuilder.psubscribe(patterns)); - } - - @Override - @SuppressWarnings("unchecked") - public RedisFuture punsubscribe(K... patterns) { - return (RedisFuture) dispatch(commandBuilder.punsubscribe(patterns)); - } - - @Override - @SuppressWarnings("unchecked") - public RedisFuture subscribe(K... channels) { - return (RedisFuture) dispatch(commandBuilder.subscribe(channels)); - } - - @Override - @SuppressWarnings("unchecked") - public RedisFuture unsubscribe(K... 
channels) { - return (RedisFuture) dispatch(commandBuilder.unsubscribe(channels)); - } - - @Override - public RedisFuture publish(K channel, V message) { - return dispatch(commandBuilder.publish(channel, message)); - } - - @Override - public RedisFuture> pubsubChannels(K channel) { - return dispatch(commandBuilder.pubsubChannels(channel)); - } - - @Override - public RedisFuture> pubsubNumsub(K... channels) { - return dispatch(commandBuilder.pubsubNumsub(channels)); - } - - @Override - @SuppressWarnings("unchecked") - public StatefulRedisPubSubConnection getStatefulConnection() { - return (StatefulRedisPubSubConnection) super.getStatefulConnection(); - } -} diff --git a/src/main/java/com/lambdaworks/redis/pubsub/RedisPubSubConnection.java b/src/main/java/com/lambdaworks/redis/pubsub/RedisPubSubConnection.java deleted file mode 100644 index ed47ea1d1d..0000000000 --- a/src/main/java/com/lambdaworks/redis/pubsub/RedisPubSubConnection.java +++ /dev/null @@ -1,62 +0,0 @@ -package com.lambdaworks.redis.pubsub; - -import com.lambdaworks.redis.RedisAsyncConnection; -import com.lambdaworks.redis.RedisFuture; -import com.lambdaworks.redis.pubsub.api.async.RedisPubSubAsyncCommands; - -/** - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 3.0 - * @deprecated Use {@link RedisPubSubAsyncCommands} - */ -@Deprecated -public interface RedisPubSubConnection extends RedisAsyncConnection { - - /** - * Add a new listener. - * - * @param listener Listener. - */ - void addListener(RedisPubSubListener listener); - - /** - * Remove an existing listener. - * - * @param listener Listener. - */ - void removeListener(RedisPubSubListener listener); - - /** - * Listen for messages published to channels matching the given patterns. - * - * @param patterns the patterns - * @return RedisFuture<Void> Future to synchronize {@code psubscribe} completion - */ - RedisFuture psubscribe(K... patterns); - - /** - * Stop listening for messages posted to channels matching the given patterns. - * - * @param patterns the patterns - * @return RedisFuture<Void> Future to synchronize {@code punsubscribe} completion - */ - RedisFuture punsubscribe(K... patterns); - - /** - * Listen for messages published to the given channels. - * - * @param channels the channels - * @return RedisFuture<Void> Future to synchronize {@code subscribe} completion - */ - RedisFuture subscribe(K... channels); - - /** - * Stop listening for messages posted to the given channels. - * - * @param channels the channels - * @return RedisFuture<Void> Future to synchronize {@code unsubscribe} completion. - */ - RedisFuture unsubscribe(K... channels); -} diff --git a/src/main/java/com/lambdaworks/redis/pubsub/RedisPubSubListener.java b/src/main/java/com/lambdaworks/redis/pubsub/RedisPubSubListener.java deleted file mode 100644 index 8a94f3afd4..0000000000 --- a/src/main/java/com/lambdaworks/redis/pubsub/RedisPubSubListener.java +++ /dev/null @@ -1,62 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis.pubsub; - -/** - * Interface for redis pub/sub listeners. - * - * @param Key type. - * @param Value type. - * - * @author Will Glozer - */ -public interface RedisPubSubListener { - /** - * Message received from a channel subscription. - * - * @param channel Channel. - * @param message Message. - */ - void message(K channel, V message); - - /** - * Message received from a pattern subscription. 
- * - * @param pattern Pattern - * @param channel Channel - * @param message Message - */ - void message(K pattern, K channel, V message); - - /** - * Subscribed to a channel. - * - * @param channel Channel - * @param count Subscription count. - */ - void subscribed(K channel, long count); - - /** - * Subscribed to a pattern. - * - * @param pattern Pattern. - * @param count Subscription count. - */ - void psubscribed(K pattern, long count); - - /** - * Unsubscribed from a channel. - * - * @param channel Channel - * @param count Subscription count. - */ - void unsubscribed(K channel, long count); - - /** - * Unsubscribed from a pattern. - * - * @param pattern Channel - * @param count Subscription count. - */ - void punsubscribed(K pattern, long count); -} diff --git a/src/main/java/com/lambdaworks/redis/pubsub/RedisPubSubReactiveCommandsImpl.java b/src/main/java/com/lambdaworks/redis/pubsub/RedisPubSubReactiveCommandsImpl.java deleted file mode 100644 index e6b89ced0d..0000000000 --- a/src/main/java/com/lambdaworks/redis/pubsub/RedisPubSubReactiveCommandsImpl.java +++ /dev/null @@ -1,177 +0,0 @@ -package com.lambdaworks.redis.pubsub; - -import static com.lambdaworks.redis.protocol.CommandType.*; - -import com.lambdaworks.redis.protocol.Command; -import rx.Observable; -import rx.Subscriber; - -import com.lambdaworks.redis.RedisReactiveCommandsImpl; -import com.lambdaworks.redis.api.rx.Success; -import com.lambdaworks.redis.codec.RedisCodec; -import com.lambdaworks.redis.protocol.CommandArgs; -import com.lambdaworks.redis.pubsub.api.rx.ChannelMessage; -import com.lambdaworks.redis.pubsub.api.rx.PatternMessage; -import com.lambdaworks.redis.pubsub.api.rx.RedisPubSubReactiveCommands; - -import java.util.Map; - -/** - * A reactive and thread-safe API for a Redis pub/sub connection. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - */ -public class RedisPubSubReactiveCommandsImpl extends RedisReactiveCommandsImpl implements - RedisPubSubReactiveCommands { - - private PubSubCommandBuilder commandBuilder; - - /** - * Initialize a new connection. - * - * @param connection the connection . - * @param codec Codec used to encode/decode keys and values. - */ - public RedisPubSubReactiveCommandsImpl(StatefulRedisPubSubConnection connection, RedisCodec codec) { - super(connection, codec); - this.connection = connection; - this.commandBuilder = new PubSubCommandBuilder<>(codec); - } - - /** - * Add a new listener. - * - * @param listener Listener. 
- */ - @Override - public void addListener(RedisPubSubListener listener) { - getStatefulConnection().addListener(listener); - } - - @Override - public Observable> observePatterns() { - - SubscriptionPubSubListener> listener = new SubscriptionPubSubListener>() { - @Override - public void message(K pattern, K channel, V message) { - if (subscriber == null) { - return; - } - - if (subscriber.isUnsubscribed()) { - subscriber.onCompleted(); - removeListener(this); - subscriber = null; - return; - } - - subscriber.onNext(new PatternMessage<>(pattern, channel, message)); - } - }; - - return Observable.create(new PubSubObservable<>(listener)); - } - - @Override - public Observable> observeChannels() { - - SubscriptionPubSubListener> listener = new SubscriptionPubSubListener>() { - @Override - public void message(K channel, V message) { - if (subscriber == null) { - return; - } - - if (subscriber.isUnsubscribed()) { - subscriber.onCompleted(); - removeListener(this); - subscriber = null; - return; - } - - subscriber.onNext(new ChannelMessage<>(channel, message)); - } - }; - - return Observable.create(new PubSubObservable<>(listener)); - } - - /** - * Remove an existing listener. - * - * @param listener Listener. - */ - @Override - public void removeListener(RedisPubSubListener listener) { - getStatefulConnection().removeListener(listener); - } - - @Override - public Observable psubscribe(K... patterns) { - return getSuccessObservable(createObservable(() -> commandBuilder.psubscribe(patterns))); - } - - @Override - public Observable punsubscribe(K... patterns) { - return getSuccessObservable(createObservable(() -> commandBuilder.punsubscribe(patterns))); - } - - @Override - public Observable subscribe(K... channels) { - return getSuccessObservable(createObservable(() -> commandBuilder.subscribe(channels))); - } - - @Override - public Observable unsubscribe(K... channels) { - return getSuccessObservable(createObservable(() -> commandBuilder.unsubscribe(channels))); - } - - @Override - public Observable publish(K channel, V message) { - return createObservable(() -> commandBuilder.publish(channel, message)); - } - - @Override - public Observable pubsubChannels(K channel) { - return createDissolvingObservable(() -> commandBuilder.pubsubChannels(channel)); - } - - @Override - public Observable> pubsubNumsub(K... 
channels) { - return createObservable(() -> commandBuilder.pubsubNumsub(channels)); - } - - @Override - @SuppressWarnings("unchecked") - public StatefulRedisPubSubConnection getStatefulConnection() { - return (StatefulRedisPubSubConnection) super.getStatefulConnection(); - } - - private class PubSubObservable implements Observable.OnSubscribe { - - private SubscriptionPubSubListener listener; - - public PubSubObservable(SubscriptionPubSubListener listener) { - this.listener = listener; - } - - @Override - public void call(Subscriber subscriber) { - - listener.activate(subscriber); - subscriber.onStart(); - addListener(listener); - - } - } - - private static class SubscriptionPubSubListener extends RedisPubSubAdapter { - protected Subscriber subscriber; - - public void activate(Subscriber subscriber) { - this.subscriber = subscriber; - } - } -} diff --git a/src/main/java/com/lambdaworks/redis/pubsub/StatefulRedisPubSubConnection.java b/src/main/java/com/lambdaworks/redis/pubsub/StatefulRedisPubSubConnection.java deleted file mode 100644 index 92e42ff701..0000000000 --- a/src/main/java/com/lambdaworks/redis/pubsub/StatefulRedisPubSubConnection.java +++ /dev/null @@ -1,60 +0,0 @@ -package com.lambdaworks.redis.pubsub; - -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.pubsub.api.async.RedisPubSubAsyncCommands; -import com.lambdaworks.redis.pubsub.api.rx.RedisPubSubReactiveCommands; -import com.lambdaworks.redis.pubsub.api.sync.RedisPubSubCommands; - -/** - * An asynchronous thread-safe pub/sub connection to a redis server. After one or more channels are subscribed to only pub/sub - * related commands or {@literal QUIT} may be called. - * - * Incoming messages and results of the {@literal subscribe}/{@literal unsubscribe} calls will be passed to all registered - * {@link RedisPubSubListener}s. - * - * A {@link com.lambdaworks.redis.protocol.ConnectionWatchdog} monitors each connection and reconnects automatically until - * {@link #close} is called. Channel and pattern subscriptions are renewed after reconnecting. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - */ -public interface StatefulRedisPubSubConnection extends StatefulRedisConnection { - - /** - * Returns the {@link RedisPubSubCommands} API for the current connection. Does not create a new connection. - * - * @return the synchronous API for the underlying connection. - */ - RedisPubSubCommands sync(); - - /** - * Returns the {@link RedisPubSubAsyncCommands} API for the current connection. Does not create a new connection. - * - * @return the asynchronous API for the underlying connection. - */ - RedisPubSubAsyncCommands async(); - - /** - * Returns the {@link RedisPubSubReactiveCommands} API for the current connection. Does not create a new connection. - * - * @return the reactive API for the underlying connection. - */ - RedisPubSubReactiveCommands reactive(); - - /** - * Add a new listener. - * - * @param listener Listener. - */ - void addListener(RedisPubSubListener listener); - - /** - * Remove an existing listener. - * - * @param listener Listener. 
- */ - void removeListener(RedisPubSubListener listener); - -} diff --git a/src/main/java/com/lambdaworks/redis/pubsub/StatefulRedisPubSubConnectionImpl.java b/src/main/java/com/lambdaworks/redis/pubsub/StatefulRedisPubSubConnectionImpl.java deleted file mode 100644 index 8916cef152..0000000000 --- a/src/main/java/com/lambdaworks/redis/pubsub/StatefulRedisPubSubConnectionImpl.java +++ /dev/null @@ -1,201 +0,0 @@ -package com.lambdaworks.redis.pubsub; - -import java.lang.reflect.Array; -import java.util.ArrayList; -import java.util.Collection; -import java.util.List; -import java.util.Set; -import java.util.concurrent.CopyOnWriteArrayList; -import java.util.concurrent.TimeUnit; - -import com.lambdaworks.redis.*; -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.cluster.api.sync.RedisClusterCommands; -import com.lambdaworks.redis.codec.RedisCodec; -import com.lambdaworks.redis.protocol.ConnectionWatchdog; -import com.lambdaworks.redis.pubsub.api.async.RedisPubSubAsyncCommands; -import com.lambdaworks.redis.pubsub.api.rx.RedisPubSubReactiveCommands; -import com.lambdaworks.redis.pubsub.api.sync.RedisPubSubCommands; -import io.netty.channel.ChannelHandler; -import io.netty.util.internal.ConcurrentSet; - -/** - * An thread-safe pub/sub connection to a Redis server. Multiple threads may share one {@link StatefulRedisPubSubConnectionImpl} - * - * A {@link ConnectionWatchdog} monitors each connection and reconnects automatically until {@link #close} is called. All - * pending commands will be (re)sent after successful reconnection. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - */ -@ChannelHandler.Sharable -public class StatefulRedisPubSubConnectionImpl extends StatefulRedisConnectionImpl implements - StatefulRedisPubSubConnection { - - protected final List> listeners; - protected final Set channels; - protected final Set patterns; - - /** - * Initialize a new connection. - * - * @param writer the channel writer - * @param codec Codec used to encode/decode keys and values. - * @param timeout Maximum time to wait for a response. - * @param unit Unit of time for the timeout. - */ - public StatefulRedisPubSubConnectionImpl(RedisChannelWriter writer, RedisCodec codec, long timeout, - TimeUnit unit) { - super(writer, codec, timeout, unit); - - listeners = new CopyOnWriteArrayList<>(); - channels = new ConcurrentSet<>(); - patterns = new ConcurrentSet<>(); - } - - /** - * Add a new listener. - * - * @param listener Listener. - */ - @Override - public void addListener(RedisPubSubListener listener) { - listeners.add(listener); - } - - /** - * Remove an existing listener. - * - * @param listener Listener. 
- */ - @Override - public void removeListener(RedisPubSubListener listener) { - listeners.remove(listener); - } - - @Override - public RedisPubSubAsyncCommands async() { - return (RedisPubSubAsyncCommands) async; - } - - @Override - protected RedisPubSubAsyncCommandsImpl newRedisAsyncCommandsImpl() { - return new RedisPubSubAsyncCommandsImpl<>(this, codec); - } - - @Override - public RedisPubSubCommands sync() { - return (RedisPubSubCommands) sync; - } - - @Override - protected RedisPubSubCommands newRedisSyncCommandsImpl() { - return syncHandler(async(), RedisConnection.class, RedisClusterConnection.class, RedisPubSubCommands.class); - } - - @Override - public RedisPubSubReactiveCommands reactive() { - return (RedisPubSubReactiveCommands) reactive; - } - - @Override - protected RedisPubSubReactiveCommandsImpl newRedisReactiveCommandsImpl() { - return new RedisPubSubReactiveCommandsImpl<>(this, codec); - } - - @Override - @SuppressWarnings("unchecked") - public void channelRead(Object msg) { - PubSubOutput output = (PubSubOutput) msg; - - // drop empty messages - if (output.type() == null || (output.pattern() == null && output.channel() == null && output.get() == null)) { - return; - } - - updateInternalState(output); - notifyListeners(output); - } - - private void notifyListeners(PubSubOutput output) { - // update listeners - for (RedisPubSubListener listener : listeners) { - switch (output.type()) { - case message: - listener.message(output.channel(), output.get()); - break; - case pmessage: - listener.message(output.pattern(), output.channel(), output.get()); - break; - case psubscribe: - listener.psubscribed(output.pattern(), output.count()); - break; - case punsubscribe: - listener.punsubscribed(output.pattern(), output.count()); - break; - case subscribe: - listener.subscribed(output.channel(), output.count()); - break; - case unsubscribe: - listener.unsubscribed(output.channel(), output.count()); - break; - default: - throw new UnsupportedOperationException("Operation " + output.type() + " not supported"); - } - } - } - - /** - * Re-subscribe to all previously subscribed channels and patterns. - * - * @return list of the futures of the {@literal subscribe} and {@literal psubscribe} commands. 
- */ - protected List> resubscribe() { - - List> result = new ArrayList<>(); - - if (!channels.isEmpty()) { - result.add(async().subscribe(toArray(channels))); - } - - if (!patterns.isEmpty()) { - result.add(async().psubscribe(toArray(patterns))); - } - - return result; - } - - @SuppressWarnings("unchecked") - private T[] toArray(Collection c) { - Class cls = (Class) c.iterator().next().getClass(); - T[] array = (T[]) Array.newInstance(cls, c.size()); - return c.toArray(array); - } - - private void updateInternalState(PubSubOutput output) { - // update internal state - switch (output.type()) { - case psubscribe: - patterns.add(output.pattern()); - break; - case punsubscribe: - patterns.remove(output.pattern()); - break; - case subscribe: - channels.add(output.channel()); - break; - case unsubscribe: - channels.remove(output.channel()); - break; - default: - break; - } - } - - @Override - public void activated() { - super.activated(); - resubscribe(); - } -} diff --git a/src/main/java/com/lambdaworks/redis/pubsub/api/async/RedisPubSubAsyncCommands.java b/src/main/java/com/lambdaworks/redis/pubsub/api/async/RedisPubSubAsyncCommands.java deleted file mode 100644 index 3e2f94a696..0000000000 --- a/src/main/java/com/lambdaworks/redis/pubsub/api/async/RedisPubSubAsyncCommands.java +++ /dev/null @@ -1,70 +0,0 @@ -package com.lambdaworks.redis.pubsub.api.async; - -import com.lambdaworks.redis.RedisAsyncConnection; -import com.lambdaworks.redis.RedisFuture; -import com.lambdaworks.redis.api.async.RedisAsyncCommands; -import com.lambdaworks.redis.pubsub.RedisPubSubConnection; -import com.lambdaworks.redis.pubsub.RedisPubSubListener; -import com.lambdaworks.redis.pubsub.StatefulRedisPubSubConnection; - -/** - * Asynchronous and thread-safe Redis PubSub API. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 3.0 - */ -public interface RedisPubSubAsyncCommands extends RedisAsyncCommands, RedisPubSubConnection { - - /** - * Add a new listener. - * - * @param listener Listener. - */ - void addListener(RedisPubSubListener listener); - - /** - * Remove an existing listener. - * - * @param listener Listener. - */ - void removeListener(RedisPubSubListener listener); - - /** - * Listen for messages published to channels matching the given patterns. - * - * @param patterns the patterns - * @return RedisFuture<Void> Future to synchronize {@code psubscribe} completion - */ - RedisFuture psubscribe(K... patterns); - - /** - * Stop listening for messages posted to channels matching the given patterns. - * - * @param patterns the patterns - * @return RedisFuture<Void> Future to synchronize {@code punsubscribe} completion - */ - RedisFuture punsubscribe(K... patterns); - - /** - * Listen for messages published to the given channels. - * - * @param channels the channels - * @return RedisFuture<Void> Future to synchronize {@code subscribe} completion - */ - RedisFuture subscribe(K... channels); - - /** - * Stop listening for messages posted to the given channels. - * - * @param channels the channels - * @return RedisFuture<Void> Future to synchronize {@code unsubscribe} completion. - */ - RedisFuture unsubscribe(K... channels); - - /** - * @return the underlying connection. 
- */ - StatefulRedisPubSubConnection getStatefulConnection(); -} diff --git a/src/main/java/com/lambdaworks/redis/pubsub/api/async/package-info.java b/src/main/java/com/lambdaworks/redis/pubsub/api/async/package-info.java deleted file mode 100644 index b29a6841e0..0000000000 --- a/src/main/java/com/lambdaworks/redis/pubsub/api/async/package-info.java +++ /dev/null @@ -1,4 +0,0 @@ -/** - * Pub/Sub Redis API for asynchronous executed commands. - */ -package com.lambdaworks.redis.pubsub.api.async; \ No newline at end of file diff --git a/src/main/java/com/lambdaworks/redis/pubsub/api/rx/ChannelMessage.java b/src/main/java/com/lambdaworks/redis/pubsub/api/rx/ChannelMessage.java deleted file mode 100644 index d1b8d49071..0000000000 --- a/src/main/java/com/lambdaworks/redis/pubsub/api/rx/ChannelMessage.java +++ /dev/null @@ -1,38 +0,0 @@ -package com.lambdaworks.redis.pubsub.api.rx; - -/** - * Message payload for a subscription to a channel. - * - * @author Mark Paluch - */ -public class ChannelMessage { - - private final K channel; - private final V message; - - /** - * - * @param channel the channel - * @param message the message - */ - public ChannelMessage(K channel, V message) { - this.channel = channel; - this.message = message; - } - - /** - * - * @return the channel - */ - public K getChannel() { - return channel; - } - - /** - * - * @return the message - */ - public V getMessage() { - return message; - } -} diff --git a/src/main/java/com/lambdaworks/redis/pubsub/api/rx/PatternMessage.java b/src/main/java/com/lambdaworks/redis/pubsub/api/rx/PatternMessage.java deleted file mode 100644 index 019c6b4bee..0000000000 --- a/src/main/java/com/lambdaworks/redis/pubsub/api/rx/PatternMessage.java +++ /dev/null @@ -1,49 +0,0 @@ -package com.lambdaworks.redis.pubsub.api.rx; - -/** - * Message payload for a subscription to a pattern. - * - * @author Mark Paluch - */ -public class PatternMessage { - - private final K pattern; - private final K channel; - private final V message; - - /** - * - * @param pattern the pattern - * @param channel the channel - * @param message the message - */ - public PatternMessage(K pattern, K channel, V message) { - this.pattern = pattern; - this.channel = channel; - this.message = message; - } - - /** - * - * @return the pattern - */ - public K getPattern() { - return pattern; - } - - /** - * - * @return the channel - */ - public K getChannel() { - return channel; - } - - /** - * - * @return the message - */ - public V getMessage() { - return message; - } -} diff --git a/src/main/java/com/lambdaworks/redis/pubsub/api/rx/RedisPubSubReactiveCommands.java b/src/main/java/com/lambdaworks/redis/pubsub/api/rx/RedisPubSubReactiveCommands.java deleted file mode 100644 index 80866821f8..0000000000 --- a/src/main/java/com/lambdaworks/redis/pubsub/api/rx/RedisPubSubReactiveCommands.java +++ /dev/null @@ -1,86 +0,0 @@ -package com.lambdaworks.redis.pubsub.api.rx; - -import com.lambdaworks.redis.api.rx.Success; -import rx.Observable; - -import com.lambdaworks.redis.api.rx.RedisReactiveCommands; -import com.lambdaworks.redis.pubsub.RedisPubSubListener; -import com.lambdaworks.redis.pubsub.StatefulRedisPubSubConnection; - -/** - * Asynchronous and thread-safe Redis PubSub API. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 3.0 - */ -public interface RedisPubSubReactiveCommands extends RedisReactiveCommands { - - /** - * Add a new listener. - * - * @param listener Listener. 
- */ - void addListener(RedisPubSubListener listener); - - /** - * Remove an existing listener. - * - * @param listener Listener. - */ - void removeListener(RedisPubSubListener listener); - - /** - * Observable for messages ({@literal pmessage}) received though pattern subscriptions. The connection needs to be - * subscribed to one or more patterns using {@link #psubscribe(Object[])}. - * - * @return hot observable for subscriptions to {@literal pmessage}'s. - */ - Observable> observePatterns(); - - /** - * Observable for messages ({@literal message}) received though channel subscriptions. The connection needs to be subscribed - * to one or more channels using {@link #subscribe(Object[])}. - * - * @return hot observable for subscriptions to {@literal message}'s. - */ - Observable> observeChannels(); - - /** - * Listen for messages published to channels matching the given patterns. - * - * @param patterns the patterns - * @return Observable<Success> Observable for {@code psubscribe} command - */ - Observable psubscribe(K... patterns); - - /** - * Stop listening for messages posted to channels matching the given patterns. - * - * @param patterns the patterns - * @return Observable<Success> Observable for {@code punsubscribe} command - */ - Observable punsubscribe(K... patterns); - - /** - * Listen for messages published to the given channels. - * - * @param channels the channels - * @return Observable<Success> Observable for {@code subscribe} command - */ - Observable subscribe(K... channels); - - /** - * Stop listening for messages posted to the given channels. - * - * @param channels the channels - * @return Observable<Success> Observable for {@code unsubscribe} command. - */ - Observable unsubscribe(K... channels); - - /** - * @return the underlying connection. - */ - StatefulRedisPubSubConnection getStatefulConnection(); -} diff --git a/src/main/java/com/lambdaworks/redis/pubsub/api/rx/package-info.java b/src/main/java/com/lambdaworks/redis/pubsub/api/rx/package-info.java deleted file mode 100644 index 96ad27db25..0000000000 --- a/src/main/java/com/lambdaworks/redis/pubsub/api/rx/package-info.java +++ /dev/null @@ -1,4 +0,0 @@ -/** - * Pub/Sub Redis API for reactive commands. - */ -package com.lambdaworks.redis.pubsub.api.rx; \ No newline at end of file diff --git a/src/main/java/com/lambdaworks/redis/pubsub/api/sync/RedisPubSubCommands.java b/src/main/java/com/lambdaworks/redis/pubsub/api/sync/RedisPubSubCommands.java deleted file mode 100644 index 7b19319903..0000000000 --- a/src/main/java/com/lambdaworks/redis/pubsub/api/sync/RedisPubSubCommands.java +++ /dev/null @@ -1,65 +0,0 @@ -package com.lambdaworks.redis.pubsub.api.sync; - -import com.lambdaworks.redis.RedisConnection; -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.pubsub.RedisPubSubListener; -import com.lambdaworks.redis.pubsub.StatefulRedisPubSubConnection; - -/** - * - * Synchronous and thread-safe Redis PubSub API. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - */ -public interface RedisPubSubCommands extends RedisCommands { - - /** - * Add a new listener. - * - * @param listener Listener. - */ - void addListener(RedisPubSubListener listener); - - /** - * Remove an existing listener. - * - * @param listener Listener. - */ - void removeListener(RedisPubSubListener listener); - - /** - * Listen for messages published to channels matching the given patterns. - * - * @param patterns the patterns - */ - void psubscribe(K... 
patterns); - - /** - * Stop listening for messages posted to channels matching the given patterns. - * - * @param patterns the patterns - */ - void punsubscribe(K... patterns); - - /** - * Listen for messages published to the given channels. - * - * @param channels the channels - */ - void subscribe(K... channels); - - /** - * Stop listening for messages posted to the given channels. - * - * @param channels the channels - */ - void unsubscribe(K... channels); - - /** - * @return the underlying connection. - */ - StatefulRedisPubSubConnection getStatefulConnection(); -} diff --git a/src/main/java/com/lambdaworks/redis/pubsub/api/sync/package-info.java b/src/main/java/com/lambdaworks/redis/pubsub/api/sync/package-info.java deleted file mode 100644 index ece5c9d7e5..0000000000 --- a/src/main/java/com/lambdaworks/redis/pubsub/api/sync/package-info.java +++ /dev/null @@ -1,4 +0,0 @@ -/** - * Pub/Sub Redis API for synchronous executed commands. - */ -package com.lambdaworks.redis.pubsub.api.sync; \ No newline at end of file diff --git a/src/main/java/com/lambdaworks/redis/pubsub/package-info.java b/src/main/java/com/lambdaworks/redis/pubsub/package-info.java deleted file mode 100644 index d68879e934..0000000000 --- a/src/main/java/com/lambdaworks/redis/pubsub/package-info.java +++ /dev/null @@ -1,4 +0,0 @@ -/** - * Pub/Sub connection classes. - */ -package com.lambdaworks.redis.pubsub; \ No newline at end of file diff --git a/src/main/java/com/lambdaworks/redis/resource/ClientResources.java b/src/main/java/com/lambdaworks/redis/resource/ClientResources.java deleted file mode 100644 index d8516f0ea8..0000000000 --- a/src/main/java/com/lambdaworks/redis/resource/ClientResources.java +++ /dev/null @@ -1,124 +0,0 @@ -package com.lambdaworks.redis.resource; - -import java.util.concurrent.TimeUnit; - -import com.lambdaworks.redis.event.EventBus; -import com.lambdaworks.redis.event.EventPublisherOptions; -import com.lambdaworks.redis.metrics.CommandLatencyCollector; - -import io.netty.util.concurrent.EventExecutorGroup; -import io.netty.util.concurrent.Future; - -/** - * Client Configuration. The client configuration provides heavy-weight resources such as thread pools. {@link ClientResources} - * can be shared across different client instances. Shared instances are not shut down by the client, only dedicated instances - * are shut down. - *
- * <p>
- * This interface defines the contract. See the {@link DefaultClientResources} class for the default implementation.
- * </p>
- * <p>
- * The {@link ClientResources} instance is stateful. You have to shut down the instance if you're no longer using it.
- * </p>
- *
- * {@link ClientResources} provide:
- * <ul>
- * <li>An instance of {@link EventLoopGroupProvider} to obtain particular {@link io.netty.channel.EventLoopGroup
- * EventLoopGroups}</li>
- * <li>An instance of {@link EventExecutorGroup} for performing internal computation tasks</li>
- * </ul>
- * - * @author Mark Paluch - * @since 3.4 - */ -public interface ClientResources { - - /** - * Shutdown the {@link ClientResources}. - * - * @return eventually the success/failure of the shutdown without errors. - */ - Future shutdown(); - - /** - * Shutdown the {@link ClientResources}. - * - * @param quietPeriod the quiet period as described in the documentation - * @param timeout the maximum amount of time to wait until the executor is shutdown regardless if a task was submitted - * during the quiet period - * @param timeUnit the unit of {@code quietPeriod} and {@code timeout} - * @return eventually the success/failure of the shutdown without errors. - */ - Future shutdown(long quietPeriod, long timeout, TimeUnit timeUnit); - - /** - * Returns the {@link EventLoopGroupProvider} that provides access to the particular {@link io.netty.channel.EventLoopGroup - * event loop groups}. lettuce requires at least two implementations: {@link io.netty.channel.nio.NioEventLoopGroup} for - * TCP/IP connections and {@link io.netty.channel.epoll.EpollEventLoopGroup} for unix domain socket connections (epoll). - * - * You can use {@link DefaultEventLoopGroupProvider} as default implementation or implement an own - * {@link EventLoopGroupProvider} to share existing {@link io.netty.channel.EventLoopGroup EventLoopGroup's} with lettuce. - * - * @return the {@link EventLoopGroupProvider} which provides access to the particular {@link io.netty.channel.EventLoopGroup - * event loop groups} - */ - EventLoopGroupProvider eventLoopGroupProvider(); - - /** - * Returns the computation pool used for internal operations. Such tasks are periodic Redis Cluster and Redis Sentinel - * topology updates and scheduling of connection reconnection by {@link com.lambdaworks.redis.protocol.ConnectionWatchdog}. - * - * @return the computation pool used for internal operations - */ - EventExecutorGroup eventExecutorGroup(); - - /** - * Returns the pool size (number of threads) for IO threads. The indicated size does not reflect the number for all IO - * threads. TCP and socket connections (epoll) require different IO pool. - * - * @return the pool size (number of threads) for all IO tasks. - */ - int ioThreadPoolSize(); - - /** - * Returns the pool size (number of threads) for all computation tasks. - * - * @return the pool size (number of threads to use). - */ - int computationThreadPoolSize(); - - /** - * Returns the event bus used to publish events. - * - * @return the event bus - */ - EventBus eventBus(); - - /** - * Returns the {@link EventPublisherOptions} for latency event publishing. - * - * @return the {@link EventPublisherOptions} for latency event publishing - */ - EventPublisherOptions commandLatencyPublisherOptions(); - - /** - * Returns the {@link CommandLatencyCollector}. - * - * @return the command latency collector - */ - CommandLatencyCollector commandLatencyCollector(); - - /** - * Returns the {@link DnsResolver}. - * - * @return the DNS resolver - */ - DnsResolver dnsResolver(); - - /** - * Returns the {@link Delay} for reconnect attempts. Each connection uses its own attempt counter. - * - * @return the reconnect {@link Delay}. 
- */ - Delay reconnectDelay(); - -} diff --git a/src/main/java/com/lambdaworks/redis/resource/ConstantDelay.java b/src/main/java/com/lambdaworks/redis/resource/ConstantDelay.java deleted file mode 100644 index 51cb3d3ab2..0000000000 --- a/src/main/java/com/lambdaworks/redis/resource/ConstantDelay.java +++ /dev/null @@ -1,24 +0,0 @@ -package com.lambdaworks.redis.resource; - -import java.util.concurrent.TimeUnit; - -/** - * {@link Delay} with a constant delay for each attempt. - * - * @author Mark Paluch - */ -class ConstantDelay extends Delay { - - private final long delay; - - ConstantDelay(long delay, TimeUnit timeUnit) { - - super(timeUnit); - this.delay = delay; - } - - @Override - public long createDelay(long attempt) { - return delay; - } -} diff --git a/src/main/java/com/lambdaworks/redis/resource/DefaultClientResources.java b/src/main/java/com/lambdaworks/redis/resource/DefaultClientResources.java deleted file mode 100644 index 8589bfa0c5..0000000000 --- a/src/main/java/com/lambdaworks/redis/resource/DefaultClientResources.java +++ /dev/null @@ -1,463 +0,0 @@ -package com.lambdaworks.redis.resource; - -import static com.lambdaworks.redis.resource.Futures.toBooleanPromise; - -import java.util.concurrent.TimeUnit; - -import com.lambdaworks.redis.event.DefaultEventBus; -import com.lambdaworks.redis.event.DefaultEventPublisherOptions; -import com.lambdaworks.redis.event.EventBus; -import com.lambdaworks.redis.event.EventPublisherOptions; -import com.lambdaworks.redis.event.metrics.DefaultCommandLatencyEventPublisher; -import com.lambdaworks.redis.event.metrics.MetricEventPublisher; -import com.lambdaworks.redis.internal.LettuceAssert; -import com.lambdaworks.redis.internal.LettuceLists; -import com.lambdaworks.redis.metrics.CommandLatencyCollector; -import com.lambdaworks.redis.metrics.CommandLatencyCollectorOptions; -import com.lambdaworks.redis.metrics.DefaultCommandLatencyCollector; -import com.lambdaworks.redis.metrics.DefaultCommandLatencyCollectorOptions; - -import io.netty.util.concurrent.*; -import io.netty.util.internal.SystemPropertyUtil; -import io.netty.util.internal.logging.InternalLogger; -import io.netty.util.internal.logging.InternalLoggerFactory; - -/** - * Default instance of the client resources. - *
- * <p>
- * The {@link DefaultClientResources} instance is stateful. You have to shut down the instance if you're no longer using it.
- * </p>
- *
- * {@link DefaultClientResources} allows configuring:
- * <ul>
- * <li>the {@code ioThreadPoolSize}, alternatively</li>
- * <li>a {@code eventLoopGroupProvider} which is a provided instance of {@link EventLoopGroupProvider}. Higher precedence than
- * {@code ioThreadPoolSize}.</li>
- * <li>computationThreadPoolSize</li>
- * <li>a {@code eventExecutorGroup} which is a provided instance of {@link EventExecutorGroup}. Higher precedence than
- * {@code computationThreadPoolSize}.</li>
- * <li>an {@code eventBus} which is a provided instance of {@link EventBus}.</li>
- * <li>a {@code commandLatencyCollector} which is a provided instance of
- * {@link com.lambdaworks.redis.metrics.CommandLatencyCollector}.</li>
- * <li>a {@code dnsResolver} which is a provided instance of {@link DnsResolver}.</li>
- * </ul>
- * - * @author Mark Paluch - * @since 3.4 - */ -public class DefaultClientResources implements ClientResources { - - protected static final InternalLogger logger = InternalLoggerFactory.getInstance(DefaultClientResources.class); - - public static final int MIN_IO_THREADS = 3; - public static final int MIN_COMPUTATION_THREADS = 3; - - public static final int DEFAULT_IO_THREADS; - public static final int DEFAULT_COMPUTATION_THREADS; - - public static final Delay DEFAULT_RECONNECT_DELAY = Delay.exponential(); - - static { - int threads = Math.max(1, SystemPropertyUtil.getInt("io.netty.eventLoopThreads", - Math.max(MIN_IO_THREADS, Runtime.getRuntime().availableProcessors()))); - - DEFAULT_IO_THREADS = threads; - DEFAULT_COMPUTATION_THREADS = threads; - if (logger.isDebugEnabled()) { - logger.debug("-Dio.netty.eventLoopThreads: {}", threads); - } - } - - private final boolean sharedEventLoopGroupProvider; - private final EventLoopGroupProvider eventLoopGroupProvider; - private final boolean sharedEventExecutor; - private final EventExecutorGroup eventExecutorGroup; - private final EventBus eventBus; - private final CommandLatencyCollector commandLatencyCollector; - private final boolean sharedCommandLatencyCollector; - private final EventPublisherOptions commandLatencyPublisherOptions; - private final MetricEventPublisher metricEventPublisher; - private final DnsResolver dnsResolver; - private final Delay reconnectDelay; - - private volatile boolean shutdownCalled = false; - - protected DefaultClientResources(Builder builder) { - - if (builder.eventLoopGroupProvider == null) { - int ioThreadPoolSize = builder.ioThreadPoolSize; - - if (ioThreadPoolSize < MIN_IO_THREADS) { - logger.info("ioThreadPoolSize is less than {} ({}), setting to: {}", MIN_IO_THREADS, ioThreadPoolSize, - MIN_IO_THREADS); - ioThreadPoolSize = MIN_IO_THREADS; - } - - this.sharedEventLoopGroupProvider = false; - this.eventLoopGroupProvider = new DefaultEventLoopGroupProvider(ioThreadPoolSize); - - } else { - this.sharedEventLoopGroupProvider = true; - this.eventLoopGroupProvider = builder.eventLoopGroupProvider; - } - - if (builder.eventExecutorGroup == null) { - int computationThreadPoolSize = builder.computationThreadPoolSize; - if (computationThreadPoolSize < MIN_COMPUTATION_THREADS) { - - logger.info("computationThreadPoolSize is less than {} ({}), setting to: {}", MIN_COMPUTATION_THREADS, - computationThreadPoolSize, MIN_COMPUTATION_THREADS); - computationThreadPoolSize = MIN_COMPUTATION_THREADS; - } - - eventExecutorGroup = DefaultEventLoopGroupProvider.createEventLoopGroup(DefaultEventExecutorGroup.class, - computationThreadPoolSize); - sharedEventExecutor = false; - } else { - sharedEventExecutor = true; - eventExecutorGroup = builder.eventExecutorGroup; - } - - if (builder.eventBus == null) { - eventBus = new DefaultEventBus(new RxJavaEventExecutorGroupScheduler(eventExecutorGroup)); - } else { - eventBus = builder.eventBus; - } - - if (builder.commandLatencyCollector == null) { - if (DefaultCommandLatencyCollector.isAvailable()) { - if (builder.commandLatencyCollectorOptions != null) { - commandLatencyCollector = new DefaultCommandLatencyCollector(builder.commandLatencyCollectorOptions); - } else { - commandLatencyCollector = new DefaultCommandLatencyCollector( - DefaultCommandLatencyCollectorOptions.create()); - } - } else { - logger.debug("LatencyUtils/HdrUtils are not available, metrics are disabled"); - builder.commandLatencyCollectorOptions = DefaultCommandLatencyCollectorOptions.disabled(); - 
commandLatencyCollector = DefaultCommandLatencyCollector.disabled(); - } - - sharedCommandLatencyCollector = false; - } else { - sharedCommandLatencyCollector = true; - commandLatencyCollector = builder.commandLatencyCollector; - } - - commandLatencyPublisherOptions = builder.commandLatencyPublisherOptions; - - if (commandLatencyCollector.isEnabled() && commandLatencyPublisherOptions != null) { - metricEventPublisher = new DefaultCommandLatencyEventPublisher(eventExecutorGroup, commandLatencyPublisherOptions, - eventBus, commandLatencyCollector); - } else { - metricEventPublisher = null; - } - - if (builder.dnsResolver == null) { - dnsResolver = DnsResolvers.JVM_DEFAULT; - } else { - dnsResolver = builder.dnsResolver; - } - - reconnectDelay = builder.reconnectDelay; - } - - /** - * Returns a new {@link DefaultClientResources.Builder} to construct {@link DefaultClientResources}. - * - * @return a new {@link DefaultClientResources.Builder} to construct {@link DefaultClientResources}. - */ - public static DefaultClientResources.Builder builder() { - return new DefaultClientResources.Builder(); - } - - /** - * Create a new {@link DefaultClientResources} using default settings. - * - * @return a new instance of a default client resources. - */ - public static DefaultClientResources create() { - return builder().build(); - } - - /** - * Builder for {@link DefaultClientResources}. - */ - public static class Builder { - - private int ioThreadPoolSize = DEFAULT_IO_THREADS; - private int computationThreadPoolSize = DEFAULT_COMPUTATION_THREADS; - private EventExecutorGroup eventExecutorGroup; - private EventLoopGroupProvider eventLoopGroupProvider; - private EventBus eventBus; - private CommandLatencyCollectorOptions commandLatencyCollectorOptions = DefaultCommandLatencyCollectorOptions.create(); - private CommandLatencyCollector commandLatencyCollector; - private EventPublisherOptions commandLatencyPublisherOptions = DefaultEventPublisherOptions.create(); - private DnsResolver dnsResolver = DnsResolvers.JVM_DEFAULT; - private Delay reconnectDelay = DEFAULT_RECONNECT_DELAY; - - /** - * @deprecated Use {@link DefaultClientResources#builder()} - */ - @Deprecated - public Builder() { - } - - /** - * Sets the thread pool size (number of threads to use) for I/O operations (default value is the number of CPUs). The - * thread pool size is only effective if no {@code eventLoopGroupProvider} is provided. - * - * @param ioThreadPoolSize the thread pool size - * @return this - */ - public Builder ioThreadPoolSize(int ioThreadPoolSize) { - this.ioThreadPoolSize = ioThreadPoolSize; - return this; - } - - /** - * Sets a shared {@link EventLoopGroupProvider event executor provider} that can be used across different instances of - * the RedisClient. The provided {@link EventLoopGroupProvider} instance will not be shut down when shutting down the - * client resources. You have to take care of that. This is an advanced configuration that should only be used if you - * know what you are doing. - * - * @param eventLoopGroupProvider the shared eventLoopGroupProvider - * @return this - */ - public Builder eventLoopGroupProvider(EventLoopGroupProvider eventLoopGroupProvider) { - this.eventLoopGroupProvider = eventLoopGroupProvider; - return this; - } - - /** - * Sets the thread pool size (number of threads to use) for computation operations (default value is the number of - * CPUs). The thread pool size is only effective if no {@code eventExecutorGroup} is provided. 
- * - * @param computationThreadPoolSize the thread pool size - * @return this - */ - public Builder computationThreadPoolSize(int computationThreadPoolSize) { - this.computationThreadPoolSize = computationThreadPoolSize; - return this; - } - - /** - * Sets a shared {@link EventExecutorGroup event executor group} that can be used across different instances of the - * RedisClient. The provided {@link EventExecutorGroup} instance will not be shut down when shutting down the client - * resources. You have to take care of that. This is an advanced configuration that should only be used if you know what - * you are doing. - * - * @param eventExecutorGroup the shared eventExecutorGroup - * @return this - */ - public Builder eventExecutorGroup(EventExecutorGroup eventExecutorGroup) { - this.eventExecutorGroup = eventExecutorGroup; - return this; - } - - /** - * Sets the {@link EventBus} that can that can be used across different instances of the RedisClient. - * - * @param eventBus the event bus - * @return this - */ - public Builder eventBus(EventBus eventBus) { - this.eventBus = eventBus; - return this; - } - - /** - * Sets the {@link EventPublisherOptions} to publish command latency metrics using the {@link EventBus}. - * - * @param commandLatencyPublisherOptions the {@link EventPublisherOptions} to publish command latency metrics using the - * {@link EventBus}. - * @return this - */ - public Builder commandLatencyPublisherOptions(EventPublisherOptions commandLatencyPublisherOptions) { - this.commandLatencyPublisherOptions = commandLatencyPublisherOptions; - return this; - } - - /** - * Sets the {@link CommandLatencyCollectorOptions} that can that can be used across different instances of the - * RedisClient. The options are only effective if no {@code commandLatencyCollector} is provided. - * - * @param commandLatencyCollectorOptions the command latency collector options - * @return this - */ - public Builder commandLatencyCollectorOptions(CommandLatencyCollectorOptions commandLatencyCollectorOptions) { - this.commandLatencyCollectorOptions = commandLatencyCollectorOptions; - return this; - } - - /** - * Sets the {@link CommandLatencyCollector} that can that can be used across different instances of the RedisClient. - * - * @param commandLatencyCollector the command latency collector - * @return this - */ - public Builder commandLatencyCollector(CommandLatencyCollector commandLatencyCollector) { - this.commandLatencyCollector = commandLatencyCollector; - return this; - } - - /** - * Sets the {@link DnsResolver} that can that is used to resolve hostnames to {@link java.net.InetAddress}. Defaults to - * {@link DnsResolvers#JVM_DEFAULT} - * - * @param dnsResolver the DNS resolver, must not be {@link null}. - * @return this - */ - public Builder dnsResolver(DnsResolver dnsResolver) { - - LettuceAssert.notNull(dnsResolver, "DNSResolver must not be null"); - - this.dnsResolver = dnsResolver; - return this; - } - - /** - * Sets the reconnect {@link Delay} to delay reconnect attempts. Defaults to binary exponential delay capped at - * {@literal 30 SECONDS}. - * - * @param reconnectDelay the reconnect delay, must not be {@literal null}. - * @return this - */ - public Builder reconnectDelay(Delay reconnectDelay) { - - LettuceAssert.notNull(reconnectDelay, "Delay must not be null"); - - this.reconnectDelay = reconnectDelay; - return this; - } - - /** - * - * @return a new instance of {@link DefaultClientResources}. 
- */ - public DefaultClientResources build() { - return new DefaultClientResources(this); - } - } - - @Override - protected void finalize() throws Throwable { - if (!shutdownCalled) { - logger.warn(getClass().getName() - + " was not shut down properly, shutdown() was not called before it's garbage-collected. Call shutdown() or shutdown(long,long,TimeUnit) "); - } - super.finalize(); - } - - /** - * Shutdown the {@link ClientResources}. - * - * @return eventually the success/failure of the shutdown without errors. - */ - @Override - public Future shutdown() { - return shutdown(2, 15, TimeUnit.SECONDS); - } - - /** - * Shutdown the {@link ClientResources}. - * - * @param quietPeriod the quiet period as described in the documentation - * @param timeout the maximum amount of time to wait until the executor is shutdown regardless if a task was submitted - * during the quiet period - * @param timeUnit the unit of {@code quietPeriod} and {@code timeout} - * @return eventually the success/failure of the shutdown without errors. - */ - @SuppressWarnings("unchecked") - public Future shutdown(long quietPeriod, long timeout, TimeUnit timeUnit) { - - shutdownCalled = true; - DefaultPromise overall = new DefaultPromise(GlobalEventExecutor.INSTANCE); - DefaultPromise lastRelease = new DefaultPromise(GlobalEventExecutor.INSTANCE); - Futures.PromiseAggregator> aggregator = new Futures.PromiseAggregator>( - overall); - - aggregator.expectMore(1); - - if (!sharedEventLoopGroupProvider) { - aggregator.expectMore(1); - } - - if (!sharedEventExecutor) { - aggregator.expectMore(1); - } - - aggregator.arm(); - - if (metricEventPublisher != null) { - metricEventPublisher.shutdown(); - } - - if (!sharedEventLoopGroupProvider) { - Future shutdown = eventLoopGroupProvider.shutdown(quietPeriod, timeout, timeUnit); - if (shutdown instanceof Promise) { - aggregator.add((Promise) shutdown); - } else { - aggregator.add(toBooleanPromise(shutdown)); - } - } - - if (!sharedEventExecutor) { - Future shutdown = eventExecutorGroup.shutdownGracefully(quietPeriod, timeout, timeUnit); - aggregator.add(toBooleanPromise(shutdown)); - } - - if (!sharedCommandLatencyCollector) { - commandLatencyCollector.shutdown(); - } - - aggregator.add(lastRelease); - lastRelease.setSuccess(null); - - return toBooleanPromise(overall); - } - - @Override - public EventLoopGroupProvider eventLoopGroupProvider() { - return eventLoopGroupProvider; - } - - @Override - public EventExecutorGroup eventExecutorGroup() { - return eventExecutorGroup; - } - - @Override - public int ioThreadPoolSize() { - return eventLoopGroupProvider.threadPoolSize(); - } - - @Override - public int computationThreadPoolSize() { - return LettuceLists.newList(eventExecutorGroup.iterator()).size(); - } - - @Override - public EventBus eventBus() { - return eventBus; - } - - @Override - public CommandLatencyCollector commandLatencyCollector() { - return commandLatencyCollector; - } - - @Override - public EventPublisherOptions commandLatencyPublisherOptions() { - return commandLatencyPublisherOptions; - } - - @Override - public DnsResolver dnsResolver() { - return dnsResolver; - } - - @Override - public Delay reconnectDelay() { - return reconnectDelay; - } -} diff --git a/src/main/java/com/lambdaworks/redis/resource/DefaultEventLoopGroupProvider.java b/src/main/java/com/lambdaworks/redis/resource/DefaultEventLoopGroupProvider.java deleted file mode 100644 index c51dc914ed..0000000000 --- a/src/main/java/com/lambdaworks/redis/resource/DefaultEventLoopGroupProvider.java +++ /dev/null 
@@ -1,197 +0,0 @@ -package com.lambdaworks.redis.resource; - -import static com.lambdaworks.redis.resource.Futures.toBooleanPromise; - -import java.util.HashMap; -import java.util.Map; -import java.util.concurrent.ConcurrentHashMap; -import java.util.concurrent.ExecutorService; -import java.util.concurrent.TimeUnit; - -import com.lambdaworks.redis.EpollProvider; - -import io.netty.channel.EventLoopGroup; -import io.netty.channel.nio.NioEventLoopGroup; -import io.netty.util.concurrent.*; -import io.netty.util.internal.logging.InternalLogger; -import io.netty.util.internal.logging.InternalLoggerFactory; - -/** - * Default implementation which manages one event loop group instance per type. - * - * @author Mark Paluch - * @since 3.4 - */ -public class DefaultEventLoopGroupProvider implements EventLoopGroupProvider { - - protected static final InternalLogger logger = InternalLoggerFactory.getInstance(DefaultEventLoopGroupProvider.class); - - private final Map, EventExecutorGroup> eventLoopGroups = new ConcurrentHashMap<>(2); - private final Map refCounter = new ConcurrentHashMap<>(2); - - private final int numberOfThreads; - - private volatile boolean shutdownCalled = false; - - /** - * Creates a new instance of {@link DefaultEventLoopGroupProvider}. - * - * @param numberOfThreads number of threads (pool size) - */ - public DefaultEventLoopGroupProvider(int numberOfThreads) { - this.numberOfThreads = numberOfThreads; - } - - @Override - public T allocate(Class type) { - synchronized (this) { - return addReference(getOrCreate(type)); - } - } - - private T addReference(T reference) { - - synchronized (refCounter){ - long counter = 0; - if(refCounter.containsKey(reference)){ - counter = refCounter.get(reference); - } - - logger.debug("Adding reference to {}, existing ref count {}", reference, counter); - counter++; - refCounter.put(reference, counter); - } - - return reference; - } - - private T release(T reference) { - - synchronized (refCounter) { - long counter = 0; - if (refCounter.containsKey(reference)) { - counter = refCounter.get(reference); - } - - if (counter < 1) { - logger.debug("Attempting to release {} but ref count is {}", reference, counter); - } - - counter--; - if (counter == 0) { - refCounter.remove(reference); - } else { - refCounter.put(reference, counter); - } - } - - return reference; - } - - @SuppressWarnings("unchecked") - private T getOrCreate(Class type) { - - if (shutdownCalled) { - throw new IllegalStateException("Provider is shut down and can not longer provide resources"); - } - - if (!eventLoopGroups.containsKey(type)) { - eventLoopGroups.put(type, createEventLoopGroup(type, numberOfThreads)); - } - - return (T) eventLoopGroups.get(type); - } - - /** - * Create an instance of a {@link EventExecutorGroup}. Supported types are: - *
- * <ul>
- * <li>DefaultEventExecutorGroup</li>
- * <li>NioEventLoopGroup</li>
- * <li>EpollEventLoopGroup</li>
- * </ul>
- * - * @param type the type - * @param numberOfThreads the number of threads to use for the {@link EventExecutorGroup} - * @param type parameter - * @return a new instance of a {@link EventExecutorGroup} - * @throws IllegalArgumentException if the {@code type} is not supported. - */ - public static EventExecutorGroup createEventLoopGroup(Class type, int numberOfThreads) { - if (DefaultEventExecutorGroup.class.equals(type)) { - return new DefaultEventExecutorGroup(numberOfThreads, new DefaultThreadFactory("lettuce-eventExecutorLoop", true)); - } - - if (NioEventLoopGroup.class.equals(type)) { - return new NioEventLoopGroup(numberOfThreads, new DefaultThreadFactory("lettuce-nioEventLoop", true)); - } - - if (EpollProvider.epollEventLoopGroupClass != null && EpollProvider.epollEventLoopGroupClass.equals(type)) { - return EpollProvider.newEventLoopGroup(numberOfThreads, new DefaultThreadFactory("lettuce-epollEventLoop", true)); - } - throw new IllegalArgumentException("Type " + type.getName() + " not supported"); - } - - @Override - public Promise release(EventExecutorGroup eventLoopGroup, long quietPeriod, long timeout, TimeUnit unit) { - - Class key = getKey(release(eventLoopGroup)); - - if ((key == null && eventLoopGroup.isShuttingDown()) || refCounter.containsKey(eventLoopGroup)) { - DefaultPromise promise = new DefaultPromise(GlobalEventExecutor.INSTANCE); - promise.setSuccess(true); - return promise; - } - - if (key != null) { - eventLoopGroups.remove(key); - } - - Future shutdownFuture = eventLoopGroup.shutdownGracefully(quietPeriod, timeout, unit); - return toBooleanPromise(shutdownFuture); - } - - private Class getKey(EventExecutorGroup eventLoopGroup) { - Class key = null; - - Map, EventExecutorGroup> copy = new HashMap<>(eventLoopGroups); - for (Map.Entry, EventExecutorGroup> entry : copy.entrySet()) { - if (entry.getValue() == eventLoopGroup) { - key = entry.getKey(); - break; - } - } - return key; - } - - @Override - public int threadPoolSize() { - return numberOfThreads; - } - - @Override - @SuppressWarnings("unchecked") - public Future shutdown(long quietPeriod, long timeout, TimeUnit timeUnit) { - shutdownCalled = true; - - Map, EventExecutorGroup> copy = new HashMap<>(eventLoopGroups); - - DefaultPromise overall = new DefaultPromise(GlobalEventExecutor.INSTANCE); - DefaultPromise lastRelease = new DefaultPromise(GlobalEventExecutor.INSTANCE); - Futures.PromiseAggregator> aggregator = new Futures.PromiseAggregator>( - overall); - - aggregator.expectMore(1 + copy.size()); - - aggregator.arm(); - - for (EventExecutorGroup executorGroup : copy.values()) { - Promise shutdown = toBooleanPromise(release(executorGroup, quietPeriod, timeout, timeUnit)); - aggregator.add(shutdown); - } - - aggregator.add(lastRelease); - lastRelease.setSuccess(null); - - return toBooleanPromise(overall); - } -} diff --git a/src/main/java/com/lambdaworks/redis/resource/Delay.java b/src/main/java/com/lambdaworks/redis/resource/Delay.java deleted file mode 100644 index 61627931de..0000000000 --- a/src/main/java/com/lambdaworks/redis/resource/Delay.java +++ /dev/null @@ -1,96 +0,0 @@ -package com.lambdaworks.redis.resource; - -import java.util.concurrent.TimeUnit; - -import com.lambdaworks.redis.internal.LettuceAssert; - -/** - * Base class for delays and factory class to create particular instances. {@link Delay} can be subclassed to create custom - * delay implementations based on attempts. Attempts start with {@value 1}. 
- * - * @author Mark Paluch - * @since 4.2 - */ -public abstract class Delay { - - /** - * The time unit of the delay. - */ - private final TimeUnit timeUnit; - - /** - * Creates a new {@link Delay}. - * - * @param timeUnit the time unit. - */ - Delay(TimeUnit timeUnit) { - - LettuceAssert.notNull(timeUnit, "TimeUnit must not be null"); - - this.timeUnit = timeUnit; - } - - /** - * Returns the {@link TimeUnit} associated with this {@link Delay}. - * - * @return the {@link TimeUnit} associated with this {@link Delay}. - */ - public TimeUnit getTimeUnit() { - return timeUnit; - } - - /** - * Calculate a specific delay based on the attempt. - * - * This method is to be implemented by the implementations and depending on the params that were set during construction - * time. - * - * @param attempt the attempt to calculate the delay from. - * @return the calculated delay. - */ - public abstract long createDelay(long attempt); - - /** - * Creates a new {@link ConstantDelay}. - * - * @param delay the delay, must be greater or equal to 0 - * @param timeUnit the unit of the delay. - * @return a created {@link ExponentialDelay}. - */ - public static Delay constant(int delay, TimeUnit timeUnit) { - - LettuceAssert.isTrue(delay >= 0, "Delay must be greater or equal to 0"); - - return new ConstantDelay(delay, timeUnit); - } - - /** - * Creates a new {@link ExponentialDelay} with default boundaries and factor (1, 2, 4, 8, 16, 32...). The delay begins with - * 1 and is capped at 30 milliseconds after reaching the 16th attempt. - * - * @return a created {@link ExponentialDelay}. - */ - public static Delay exponential() { - return exponential(0, TimeUnit.SECONDS.toMillis(30), TimeUnit.MILLISECONDS, 2); - } - - /** - * Creates a new {@link ExponentialDelay} on with custom boundaries and factor (eg. with upper 9000, lower 0, powerOf 10: 1, - * 10, 100, 1000, 9000, 9000, 9000, ...). - * - * @param lower the lower boundary, must be non-negative - * @param upper the upper boundary, must be greater than the lower boundary - * @param unit the unit of the delay. - * @param powersOf the base for exponential growth (eg. powers of 2, powers of 10, etc...), must be non-negative and greater - * than 1 - * @return a created {@link ExponentialDelay}. 
- */ - public static Delay exponential(long lower, long upper, TimeUnit unit, int powersOf) { - - LettuceAssert.isTrue(lower >= 0, "Lower boundary must be greater or equal to 0"); - LettuceAssert.isTrue(upper > lower, "Upper boundary must be greater than the lower boundary"); - LettuceAssert.isTrue(powersOf > 1, "PowersOf must be greater than 1"); - - return new ExponentialDelay(lower, upper, unit, powersOf); - } -} diff --git a/src/main/java/com/lambdaworks/redis/resource/DirContextDnsResolver.java b/src/main/java/com/lambdaworks/redis/resource/DirContextDnsResolver.java deleted file mode 100644 index 2dfb3d7deb..0000000000 --- a/src/main/java/com/lambdaworks/redis/resource/DirContextDnsResolver.java +++ /dev/null @@ -1,321 +0,0 @@ -package com.lambdaworks.redis.resource; - -import java.io.Closeable; -import java.io.IOException; -import java.net.InetAddress; -import java.net.UnknownHostException; -import java.util.ArrayList; -import java.util.Collections; -import java.util.List; -import java.util.Properties; - -import javax.naming.Context; -import javax.naming.InitialContext; -import javax.naming.NamingEnumeration; -import javax.naming.NamingException; -import javax.naming.directory.Attribute; -import javax.naming.directory.Attributes; -import javax.naming.directory.InitialDirContext; - -import com.google.common.net.InetAddresses; -import com.lambdaworks.redis.LettuceStrings; -import com.lambdaworks.redis.internal.LettuceAssert; - -/** - * DNS Resolver based on Java's {@link com.sun.jndi.dns.DnsContextFactory}. This resolver resolves hostnames to IPv4 and IPv6 - * addresses using {@code A}, {@code AAAA} and {@code CNAME} records. Java IP stack preferences are read from system properties - * and taken into account when resolving names. - *
- * <p>
- * The default configuration uses system-configured DNS server addresses to perform lookups, but server addresses can be
- * specified using {@link #DirContextDnsResolver(Iterable)}. Custom DNS servers can be specified by using
- * {@link #DirContextDnsResolver(String)} or {@link #DirContextDnsResolver(Iterable)}.
- * </p>
- * - * @author Mark Paluch - * @since 4.2 - */ -public class DirContextDnsResolver implements DnsResolver, Closeable { - - final static String PREFER_IPV4_KEY = "java.net.preferIPv4Stack"; - final static String PREFER_IPV6_KEY = "java.net.preferIPv6Stack"; - - private static final String CTX_FACTORY_NAME = "com.sun.jndi.dns.DnsContextFactory"; - private static final String INITIAL_TIMEOUT = "com.sun.jndi.dns.timeout.initial"; - private static final String LOOKUP_RETRIES = "com.sun.jndi.dns.timeout.retries"; - - private static final String DEFAULT_INITIAL_TIMEOUT = "1000"; - private static final String DEFAULT_RETRIES = "4"; - - private final boolean preferIpv4; - private final boolean preferIpv6; - private final Properties properties; - private final InitialDirContext context; - - /** - * Creates a new {@link DirContextDnsResolver} using system-configured DNS servers. - */ - public DirContextDnsResolver() { - this(new Properties(), new StackPreference()); - } - - /** - * Creates a new {@link DirContextDnsResolver} using a collection of DNS servers. - * - * @param dnsServer must not be {@literal null} and not empty. - */ - public DirContextDnsResolver(String dnsServer) { - this(Collections.singleton(dnsServer)); - } - - /** - * Creates a new {@link DirContextDnsResolver} using a collection of DNS servers. - * - * @param dnsServers must not be {@literal null} and not empty. - */ - public DirContextDnsResolver(Iterable dnsServers) { - this(getProperties(dnsServers), new StackPreference()); - } - - /** - * Creates a new {@link DirContextDnsResolver} for the given stack preference and {@code properties}. - * - * @param preferIpv4 flag to prefer IPv4 over IPv6 address resolution. - * @param preferIpv6 flag to prefer IPv6 over IPv4 address resolution. - * @param properties custom properties for creating the context, must not be {@literal null}. - */ - public DirContextDnsResolver(boolean preferIpv4, boolean preferIpv6, Properties properties) { - - this.preferIpv4 = preferIpv4; - this.preferIpv6 = preferIpv6; - this.properties = properties; - this.context = createContext(properties); - } - - private DirContextDnsResolver(Properties properties, StackPreference stackPreference) { - - this.properties = new Properties(properties); - this.preferIpv4 = stackPreference.preferIpv4; - this.preferIpv6 = stackPreference.preferIpv6; - this.context = createContext(properties); - } - - private InitialDirContext createContext(Properties properties) { - - LettuceAssert.notNull(properties, "Properties must not be null"); - - Properties hashtable = (Properties) properties.clone(); - hashtable.put(InitialContext.INITIAL_CONTEXT_FACTORY, CTX_FACTORY_NAME); - - if (!hashtable.containsKey(INITIAL_TIMEOUT)) { - hashtable.put(INITIAL_TIMEOUT, DEFAULT_INITIAL_TIMEOUT); - } - - if (!hashtable.containsKey(LOOKUP_RETRIES)) { - hashtable.put(LOOKUP_RETRIES, DEFAULT_RETRIES); - } - - try { - return new InitialDirContext(hashtable); - } catch (NamingException e) { - throw new IllegalStateException(e); - } - } - - @Override - public void close() throws IOException { - try { - context.close(); - } catch (NamingException e) { - throw new IOException(e); - } - } - - /** - * Perform hostname to address resolution. - * - * @param host the hostname, must not be empty or {@literal null}. 
- * @return array of one or more {@link InetAddress adresses} - * @throws UnknownHostException - */ - @Override - public InetAddress[] resolve(String host) throws UnknownHostException { - - if (InetAddresses.isInetAddress(host)) { - return new InetAddress[] { InetAddresses.forString(host) }; - } - - List inetAddresses = new ArrayList<>(); - try { - resolve(host, inetAddresses); - } catch (NamingException e) { - throw new UnknownHostException(String.format("Cannot resolve %s to a hostname because of %s", host, e)); - } - - if (inetAddresses.isEmpty()) { - throw new UnknownHostException(String.format("Cannot resolve %s to a hostname", host)); - } - - return inetAddresses.toArray(new InetAddress[inetAddresses.size()]); - } - - /** - * Resolve a hostname - * - * @param hostname - * @param inetAddresses - * @throws NamingException - * @throws UnknownHostException - */ - private void resolve(String hostname, List inetAddresses) throws NamingException, UnknownHostException { - - if (preferIpv6 || (!preferIpv4 && !preferIpv6)) { - - inetAddresses.addAll(resolve(hostname, "AAAA")); - inetAddresses.addAll(resolve(hostname, "A")); - } else { - - inetAddresses.addAll(resolve(hostname, "A")); - inetAddresses.addAll(resolve(hostname, "AAAA")); - } - - if (inetAddresses.isEmpty()) { - inetAddresses.addAll(resolveCname(hostname)); - } - } - - /** - * Resolves {@code CNAME} records to {@link InetAddress adresses}. - * - * @param hostname - * @return - * @throws NamingException - */ - @SuppressWarnings("rawtypes") - private List resolveCname(String hostname) throws NamingException { - - List inetAddresses = new ArrayList<>(); - - Attributes attrs = context.getAttributes(hostname, new String[] { "CNAME" }); - Attribute attr = attrs.get("CNAME"); - - if (attr != null && attr.size() > 0) { - NamingEnumeration e = attr.getAll(); - - while (e.hasMore()) { - String h = (String) e.next(); - - if (h.endsWith(".")) { - h = h.substring(0, h.lastIndexOf('.')); - } - try { - InetAddress[] resolved = resolve(h); - for (InetAddress inetAddress : resolved) { - inetAddresses.add(InetAddress.getByAddress(hostname, inetAddress.getAddress())); - } - - } catch (UnknownHostException e1) { - // ignore - } - } - } - - return inetAddresses; - } - - /** - * Resolve an attribute for a hostname. 
- * - * @param hostname - * @param attrName - * @return - * @throws NamingException - * @throws UnknownHostException - */ - @SuppressWarnings("rawtypes") - private List resolve(String hostname, String attrName) throws NamingException, UnknownHostException { - - Attributes attrs = context.getAttributes(hostname, new String[] { attrName }); - - List inetAddresses = new ArrayList<>(); - Attribute attr = attrs.get(attrName); - - if (attr != null && attr.size() > 0) { - NamingEnumeration e = attr.getAll(); - - while (e.hasMore()) { - InetAddress inetAddress = InetAddress.getByName("" + e.next()); - inetAddresses.add(InetAddress.getByAddress(hostname, inetAddress.getAddress())); - } - } - - return inetAddresses; - } - - private static Properties getProperties(Iterable dnsServers) { - - Properties properties = new Properties(); - StringBuffer providerUrl = new StringBuffer(); - - for (String dnsServer : dnsServers) { - - LettuceAssert.isTrue(LettuceStrings.isNotEmpty(dnsServer), "DNS Server must not be empty"); - if (providerUrl.length() != 0) { - providerUrl.append(' '); - } - providerUrl.append(String.format("dns://%s", dnsServer)); - } - - if (providerUrl.length() == 0) { - throw new IllegalArgumentException("DNS Servers must not be empty"); - } - - properties.put(Context.PROVIDER_URL, providerUrl.toString()); - - return properties; - } - - /** - * Stack preference utility. - */ - private final static class StackPreference { - - final boolean preferIpv4; - final boolean preferIpv6; - - public StackPreference() { - - boolean preferIpv4 = false; - boolean preferIpv6 = false; - - if (System.getProperty(PREFER_IPV4_KEY) == null && System.getProperty(PREFER_IPV6_KEY) == null) { - preferIpv4 = false; - preferIpv6 = false; - } - - if (System.getProperty(PREFER_IPV4_KEY) == null && System.getProperty(PREFER_IPV6_KEY) != null) { - - preferIpv6 = Boolean.getBoolean(PREFER_IPV6_KEY); - if (!preferIpv6) { - preferIpv4 = true; - } - } - - if (System.getProperty(PREFER_IPV4_KEY) != null && System.getProperty(PREFER_IPV6_KEY) == null) { - - preferIpv4 = Boolean.getBoolean(PREFER_IPV4_KEY); - if (!preferIpv4) { - preferIpv6 = true; - } - } - - if (System.getProperty(PREFER_IPV4_KEY) != null && System.getProperty(PREFER_IPV6_KEY) != null) { - - preferIpv4 = Boolean.getBoolean(PREFER_IPV4_KEY); - preferIpv6 = Boolean.getBoolean(PREFER_IPV6_KEY); - } - - this.preferIpv4 = preferIpv4; - this.preferIpv6 = preferIpv6; - } - } -} diff --git a/src/main/java/com/lambdaworks/redis/resource/DnsResolver.java b/src/main/java/com/lambdaworks/redis/resource/DnsResolver.java deleted file mode 100644 index 59c71b4b9e..0000000000 --- a/src/main/java/com/lambdaworks/redis/resource/DnsResolver.java +++ /dev/null @@ -1,23 +0,0 @@ -package com.lambdaworks.redis.resource; - -import java.net.InetAddress; -import java.net.UnknownHostException; - -/** - * Users may implement this interface to override the normal DNS lookup offered by the OS. - * - * @author Mark Paluch - * @since 4.2 - */ -public interface DnsResolver { - - /** - * Returns the IP address for the specified host name. - * - * @param host the hostname, must not be empty or {@literal null}. 
- * @return array of one or more {@link InetAddress adresses} - * @throws UnknownHostException if the given host is not recognized or the associated IP address cannot be used to build an - * {@link InetAddress} instance - */ - InetAddress[] resolve(String host) throws UnknownHostException; -} diff --git a/src/main/java/com/lambdaworks/redis/resource/DnsResolvers.java b/src/main/java/com/lambdaworks/redis/resource/DnsResolvers.java deleted file mode 100644 index d342c6fc58..0000000000 --- a/src/main/java/com/lambdaworks/redis/resource/DnsResolvers.java +++ /dev/null @@ -1,24 +0,0 @@ -package com.lambdaworks.redis.resource; - -import java.net.InetAddress; -import java.net.UnknownHostException; - -/** - * - * Predefined DNS resolvers. - * - * @author Mark Paluch - * @since 4.2 - */ -public enum DnsResolvers implements DnsResolver { - - /** - * Java VM default resolver. - */ - JVM_DEFAULT; - - @Override - public InetAddress[] resolve(String host) throws UnknownHostException { - return InetAddress.getAllByName(host); - } -} diff --git a/src/main/java/com/lambdaworks/redis/resource/ExponentialDelay.java b/src/main/java/com/lambdaworks/redis/resource/ExponentialDelay.java deleted file mode 100644 index 5485a06929..0000000000 --- a/src/main/java/com/lambdaworks/redis/resource/ExponentialDelay.java +++ /dev/null @@ -1,79 +0,0 @@ -package com.lambdaworks.redis.resource; - -import java.util.concurrent.TimeUnit; - -/** - * Delay that increases exponentially on every attempt. - * - *
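As a usage note for the `DnsResolver` contract removed above: it is a single-method interface, so a custom resolver only needs to map a hostname to one or more addresses. Below is a minimal sketch, assuming the pre-move package name from the removed file; the class name `PinnedDnsResolver` is illustrative and not part of this change.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

import com.lambdaworks.redis.resource.DnsResolver;

// Hypothetical resolver: pins a single hostname to a fixed address and otherwise
// falls back to the JVM resolver (the same behavior as DnsResolvers.JVM_DEFAULT).
class PinnedDnsResolver implements DnsResolver {

    private final String pinnedHost;
    private final InetAddress pinnedAddress;

    PinnedDnsResolver(String pinnedHost, InetAddress pinnedAddress) {
        this.pinnedHost = pinnedHost;
        this.pinnedAddress = pinnedAddress;
    }

    @Override
    public InetAddress[] resolve(String host) throws UnknownHostException {
        if (pinnedHost.equalsIgnoreCase(host)) {
            return new InetAddress[] { pinnedAddress };
        }
        return InetAddress.getAllByName(host); // delegate to the OS/JVM lookup
    }
}
```

The fallback branch mirrors `DnsResolvers.JVM_DEFAULT`, which simply delegates to `InetAddress.getAllByName`.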

- * Considering retry attempts start at 1, attempt 0 would be the initial call and will always yield 0 (or the lower bound). Then - * each retry step will by default yield 1 * 2 ^ (attemptNumber-1). Actually each step can be based on a different - * number than 1 unit of time using the growBy parameter: growBy * 2 ^ (attemptNumber-1). - *
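A stand-alone sketch of the formula just described, with the bounds applied afterwards; the method name `delay`, the unit-less return value, and the simplified overflow guard are illustrative and not part of the removed class.

```java
// Illustrative only: growBy * 2^(attempt - 1), clamped into [lower, upper].
class ExponentialDelaySketch {

    static long delay(long attempt, long lower, long upper, long growBy) {
        long step;
        if (attempt <= 0) {
            step = 0; // the initial call (attempt 0) starts from zero
        } else if (attempt >= 63) {
            step = Long.MAX_VALUE; // avoid overflow of the bit shift for large attempts
        } else {
            step = growBy * (1L << (attempt - 1));
        }
        return Math.min(upper, Math.max(lower, step)); // clamp into [lower, upper]
    }

    public static void main(String[] args) {
        // With growBy = 1, lower = 0 and upper = 30 this prints: 0 1 2 4 8 16 30 30
        for (int attempt = 0; attempt <= 7; attempt++) {
            System.out.print(delay(attempt, 0, 30, 1) + " ");
        }
    }
}
```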

- * By default with growBy = 1 this gives us 0 (initial attempt), 1, 2, 4, 8, 16, 32... - * - * Each of the resulting values that is below the lowerBound will be replaced by the lower bound, and each value - * over the upperBound will be replaced by the upper bound. - * - * @author Mark Paluch - */ -class ExponentialDelay extends Delay { - - private final long lower; - private final long upper; - private final int powersOf; - - ExponentialDelay(long lower, long upper, TimeUnit unit, int powersOf) { - - super(unit); - this.lower = lower; - this.upper = upper; - this.powersOf = powersOf; - } - - @Override - public long createDelay(long attempt) { - - long delay; - if (attempt <= 0) { // safeguard against underflow - delay = 0; - } else if (powersOf == 2) { - delay = calculatePowerOfTwo(attempt); - } else { - delay = calculateAlternatePower(attempt); - } - - return applyBounds(delay); - } - - private long calculateAlternatePower(long attempt) { - - // round will cap at Long.MAX_VALUE and pow should prevent overflows - double step = Math.pow(this.powersOf, attempt - 1); // attempt > 0 - return Math.round(step); - } - - // fastpath with bitwise operator - private long calculatePowerOfTwo(long attempt) { - - long step; - if (attempt >= 64) { // safeguard against overflow in the bitshift operation - step = Long.MAX_VALUE; - } else { - step = (1L << (attempt - 1)); - } - // round will cap at Long.MAX_VALUE - return Math.round(step); - } - - private long applyBounds(long calculatedValue) { - - if (calculatedValue < lower) { - return lower; - } - if (calculatedValue > upper) { - return upper; - } - return calculatedValue; - } -} diff --git a/src/main/java/com/lambdaworks/redis/resource/Futures.java b/src/main/java/com/lambdaworks/redis/resource/Futures.java deleted file mode 100644 index e3a5de9893..0000000000 --- a/src/main/java/com/lambdaworks/redis/resource/Futures.java +++ /dev/null @@ -1,158 +0,0 @@ -package com.lambdaworks.redis.resource; - -import java.util.LinkedHashSet; -import java.util.Set; -import java.util.concurrent.atomic.AtomicInteger; - -import com.lambdaworks.redis.internal.LettuceAssert; -import io.netty.util.concurrent.*; - -/** - * Utility class to support netty's future handling. - * - * @author Mark Paluch - * @since 3.4 - */ -class Futures { - - /** - * Create a promise that emits a {@code Boolean} value on completion of the {@code future} - * - * @param future the future. - * @return Promise emitting a {@code Boolean} value. {@literal true} if the {@code future} completed successfully, otherwise - * the cause wil be transported. - */ - static Promise toBooleanPromise(Future future) { - final DefaultPromise result = new DefaultPromise(GlobalEventExecutor.INSTANCE); - - future.addListener(new GenericFutureListener>() { - @Override - public void operationComplete(Future future) throws Exception { - - if (future.isSuccess()) { - result.setSuccess(true); - } else { - result.setFailure(future.cause()); - } - } - }); - return result; - } - - /** - * Promise aggregator that aggregates multiple promises into one {@link Promise}. The aggregator workflow is: - *
- * 1. Create a new instance of {@link com.lambdaworks.redis.resource.Futures.PromiseAggregator}
- * 2. Call {@link #expectMore(int)} until the number of expected futures is reached
- * 3. Arm the aggregator using {@link #arm()}
- * 4. Add the number of futures using {@link #add(Promise[])} until the expectation is met. The added futures can be either done or in progress.
- * 5. The {@code aggregatePromise} is released/finished as soon as the last future/promise completes
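A minimal sketch of that workflow, assuming a caller in the same package (the enclosing `Futures` class is package-private); the example class and the use of `Void` promises are illustrative only.

```java
import io.netty.util.concurrent.DefaultPromise;
import io.netty.util.concurrent.GlobalEventExecutor;
import io.netty.util.concurrent.Promise;

// Hypothetical caller; must live in com.lambdaworks.redis.resource because Futures is package-private.
class PromiseAggregatorExample {

    static void demo() {
        Promise<Void> aggregate = new DefaultPromise<>(GlobalEventExecutor.INSTANCE);
        Futures.PromiseAggregator<Void, Promise<Void>> aggregator = new Futures.PromiseAggregator<>(aggregate);

        aggregator.expectMore(2); // 1-2) declare how many promises will be aggregated
        aggregator.arm();         // 3) no further expectations are allowed after arming

        Promise<Void> first = new DefaultPromise<>(GlobalEventExecutor.INSTANCE);
        Promise<Void> second = new DefaultPromise<>(GlobalEventExecutor.INSTANCE);
        aggregator.add(first, second); // 4) promises may be pending or already completed

        first.setSuccess(null);
        second.setSuccess(null); // 5) completing the last promise finishes the aggregate promise

        aggregate.awaitUninterruptibly();
        System.out.println("aggregate succeeded: " + aggregate.isSuccess());
    }
}
```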
- * - * @param Result value type - * @param Future type - */ - static class PromiseAggregator> implements GenericFutureListener { - - private final Promise aggregatePromise; - private Set> pendingPromises; - private AtomicInteger expectedPromises = new AtomicInteger(); - private AtomicInteger processedPromises = new AtomicInteger(); - private boolean armed; - - /** - * Creates a new instance. - * - * @param aggregatePromise the {@link Promise} to notify - */ - public PromiseAggregator(Promise aggregatePromise) { - LettuceAssert.notNull(aggregatePromise, "AggregatePromise must not be null"); - this.aggregatePromise = aggregatePromise; - } - - /** - * Add the number of {@code count} to the count of expected promises. - * - * @param count number of futures/promises, that is added to the overall expectation count. - * @throws IllegalStateException if the aggregator was armed - */ - public void expectMore(int count) { - LettuceAssert.assertState(!armed, "Aggregator is armed and does not allow any further expectations"); - - expectedPromises.addAndGet(count); - } - - /** - * Arm the aggregator to expect completion of the futures. - * - * @throws IllegalStateException if the aggregator was armed - */ - public void arm() { - LettuceAssert.assertState(!armed, "Aggregator is already armed"); - armed = true; - } - - /** - * Add the given {@link Promise}s to the aggregator. - * - * @param promises the promises - * @throws IllegalStateException if the aggregator was not armed - */ - @SafeVarargs - public final PromiseAggregator add(Promise... promises) { - - LettuceAssert.notNull(promises, "Promises must not be null"); - LettuceAssert.assertState(armed, - "Aggregator is not armed and does not allow adding promises in that state. Call arm() first."); - - if (promises.length == 0) { - return this; - } - synchronized (this) { - if (pendingPromises == null) { - int size; - if (promises.length > 1) { - size = promises.length; - } else { - size = 2; - } - pendingPromises = new LinkedHashSet<>(size); - } - for (Promise p : promises) { - if (p == null) { - continue; - } - pendingPromises.add(p); - p.addListener(this); - } - } - return this; - } - - @Override - public synchronized void operationComplete(F future) throws Exception { - if (pendingPromises == null) { - aggregatePromise.setSuccess(null); - } else { - pendingPromises.remove(future); - processedPromises.incrementAndGet(); - if (!future.isSuccess()) { - Throwable cause = future.cause(); - aggregatePromise.setFailure(cause); - for (Promise pendingFuture : pendingPromises) { - pendingFuture.setFailure(cause); - } - } else if (processedPromises.get() == expectedPromises.get()) { - if (pendingPromises.isEmpty()) { - aggregatePromise.setSuccess(null); - } else { - throw new IllegalStateException( - "Processed promises == expected promises but pending promises is not empty. 
This should not have happened!"); - } - } - } - } - } -} diff --git a/src/main/java/com/lambdaworks/redis/resource/RxJavaEventExecutorGroupScheduler.java b/src/main/java/com/lambdaworks/redis/resource/RxJavaEventExecutorGroupScheduler.java deleted file mode 100644 index 581a7b68e3..0000000000 --- a/src/main/java/com/lambdaworks/redis/resource/RxJavaEventExecutorGroupScheduler.java +++ /dev/null @@ -1,106 +0,0 @@ -package com.lambdaworks.redis.resource; - -import java.util.concurrent.Future; -import java.util.concurrent.ScheduledExecutorService; -import java.util.concurrent.TimeUnit; - -import rx.Scheduler; -import rx.Subscription; -import rx.functions.Action0; -import rx.internal.schedulers.ScheduledAction; -import rx.internal.util.SubscriptionList; -import rx.subscriptions.CompositeSubscription; -import rx.subscriptions.Subscriptions; -import io.netty.util.concurrent.EventExecutor; -import io.netty.util.concurrent.EventExecutorGroup; - -/** - * A scheduler that uses a provided {@link EventExecutorGroup} instance to schedule tasks. This should typically be used as a - * computation scheduler or any other scheduler that do not schedule blocking tasks. See also - * https://github.com/ReactiveX/RxNetty - * /blob/0.5.x/rxnetty-common/src/main/java/io/reactivex/netty/threads/RxJavaEventloopScheduler.java - */ -public class RxJavaEventExecutorGroupScheduler extends Scheduler { - - private final EventExecutorGroup eventLoopGroup; - - public RxJavaEventExecutorGroupScheduler(EventExecutorGroup eventLoopGroup) { - this.eventLoopGroup = eventLoopGroup; - } - - @Override - public Worker createWorker() { - final EventExecutor eventLoop = eventLoopGroup.next(); - return new ScheduledExecutorServiceWorker(eventLoop); - } - - /** - * This code is more or less copied from rx-netty's EventloopWorker worker code. - **/ - private static class ScheduledExecutorServiceWorker extends Worker { - - /** - * Why are there two subscription holders? - * - * The serial subscriptions are used for non-delayed schedules which are always executed (and hence removed) in order. - * Since SubscriptionList holds the subs as a linked list, removals are optimal for serial removes. OTOH, delayed - * schedules are executed (and hence removed) out of order and hence a CompositeSubscription, that stores the subs in a - * hash structure is more optimal for removals. 
- */ - private final SubscriptionList serial; - private final CompositeSubscription timed; - private final SubscriptionList both; - private final ScheduledExecutorService scheduledExecutor; - - public ScheduledExecutorServiceWorker(EventExecutor scheduledExecutor) { - this.scheduledExecutor = scheduledExecutor; - serial = new SubscriptionList(); - timed = new CompositeSubscription(); - both = new SubscriptionList(serial, timed); - } - - @Override - public Subscription schedule(final Action0 action) { - return schedule(action, 0, TimeUnit.DAYS); - } - - @Override - public Subscription schedule(final Action0 action, long delayTime, TimeUnit unit) { - - if (isUnsubscribed()) { - return Subscriptions.unsubscribed(); - } - - final ScheduledAction sa; - - if (delayTime <= 0) { - sa = new ScheduledAction(action, serial); - serial.add(sa); - } else { - sa = new ScheduledAction(action, timed); - timed.add(sa); - } - - final Future result = scheduledExecutor.schedule(sa, delayTime, unit); - Subscription cancelFuture = Subscriptions.create(new Action0() { - @Override - public void call() { - result.cancel(false); - } - }); - sa.add(cancelFuture); /* An unsubscribe of the returned sub should cancel the future */ - return sa; - } - - @Override - public void unsubscribe() { - both.unsubscribe(); - } - - @Override - public boolean isUnsubscribed() { - return both.isUnsubscribed(); - } - - } -} diff --git a/src/main/java/com/lambdaworks/redis/resource/SocketAddressResolver.java b/src/main/java/com/lambdaworks/redis/resource/SocketAddressResolver.java deleted file mode 100644 index 041fb724fa..0000000000 --- a/src/main/java/com/lambdaworks/redis/resource/SocketAddressResolver.java +++ /dev/null @@ -1,37 +0,0 @@ -package com.lambdaworks.redis.resource; - -import java.net.InetAddress; -import java.net.InetSocketAddress; -import java.net.SocketAddress; -import java.net.UnknownHostException; - -import com.lambdaworks.redis.RedisURI; - -/** - * Resolves a {@link com.lambdaworks.redis.RedisURI} to a {@link java.net.SocketAddress}. - * - * @author Mark Paluch - */ -public class SocketAddressResolver { - - /** - * Resolves a {@link com.lambdaworks.redis.RedisURI} to a {@link java.net.SocketAddress}. - * - * @param redisURI must not be {@literal null} - * @param dnsResolver must not be {@literal null} - * @return the resolved {@link SocketAddress} - */ - public static SocketAddress resolve(RedisURI redisURI, DnsResolver dnsResolver) { - - if (redisURI.getSocket() != null) { - return redisURI.getResolvedAddress(); - } - - try { - InetAddress inetAddress = dnsResolver.resolve(redisURI.getHost())[0]; - return new InetSocketAddress(inetAddress, redisURI.getPort()); - } catch (UnknownHostException e) { - return redisURI.getResolvedAddress(); - } - } -} diff --git a/src/main/java/com/lambdaworks/redis/resource/package-info.java b/src/main/java/com/lambdaworks/redis/resource/package-info.java deleted file mode 100644 index fed7ad5ff9..0000000000 --- a/src/main/java/com/lambdaworks/redis/resource/package-info.java +++ /dev/null @@ -1,4 +0,0 @@ -/** - * Client resource infrastructure providers. 
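A short usage sketch for the `SocketAddressResolver` utility removed above, with a placeholder URI; it resolves through `DnsResolvers.JVM_DEFAULT` and, as the removed code shows, falls back to the URI's own resolved address when DNS resolution fails or a Unix domain socket is configured.

```java
import java.net.SocketAddress;

import com.lambdaworks.redis.RedisURI;
import com.lambdaworks.redis.resource.DnsResolvers;
import com.lambdaworks.redis.resource.SocketAddressResolver;

// Resolve a RedisURI to a SocketAddress using the JVM default resolver.
class SocketAddressResolverExample {

    static SocketAddress resolveLocal() {
        RedisURI uri = RedisURI.create("redis://localhost:6379"); // placeholder host/port
        return SocketAddressResolver.resolve(uri, DnsResolvers.JVM_DEFAULT);
    }
}
```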
- */ -package com.lambdaworks.redis.resource; \ No newline at end of file diff --git a/src/main/java/com/lambdaworks/redis/sentinel/RedisSentinelAsyncCommandsImpl.java b/src/main/java/com/lambdaworks/redis/sentinel/RedisSentinelAsyncCommandsImpl.java deleted file mode 100644 index c97abe7f29..0000000000 --- a/src/main/java/com/lambdaworks/redis/sentinel/RedisSentinelAsyncCommandsImpl.java +++ /dev/null @@ -1,144 +0,0 @@ -package com.lambdaworks.redis.sentinel; - -import java.net.InetSocketAddress; -import java.net.SocketAddress; -import java.util.List; -import java.util.Map; -import java.util.concurrent.CompletionStage; -import java.util.concurrent.atomic.AtomicReference; - -import com.lambdaworks.redis.RedisFuture; -import com.lambdaworks.redis.RedisSentinelAsyncConnection; -import com.lambdaworks.redis.api.StatefulConnection; -import com.lambdaworks.redis.codec.RedisCodec; -import com.lambdaworks.redis.internal.LettuceAssert; -import com.lambdaworks.redis.protocol.AsyncCommand; -import com.lambdaworks.redis.protocol.Command; -import com.lambdaworks.redis.protocol.RedisCommand; -import com.lambdaworks.redis.sentinel.api.StatefulRedisSentinelConnection; -import com.lambdaworks.redis.sentinel.api.async.RedisSentinelAsyncCommands; - -/** - * An asynchronous and thread-safe API for a Redis Sentinel connection. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 3.0 - */ -public class RedisSentinelAsyncCommandsImpl implements RedisSentinelAsyncCommands, - RedisSentinelAsyncConnection { - - private final SentinelCommandBuilder commandBuilder; - private final StatefulConnection connection; - - public RedisSentinelAsyncCommandsImpl(StatefulConnection connection, RedisCodec codec) { - this.connection = connection; - commandBuilder = new SentinelCommandBuilder(codec); - } - - @Override - public RedisFuture getMasterAddrByName(K key) { - - Command> cmd = commandBuilder.getMasterAddrByKey(key); - CompletionStage> future = dispatch(cmd); - AtomicReference ref = new AtomicReference<>(); - AsyncCommand convert = new AsyncCommand((RedisCommand) cmd) { - @Override - protected void completeResult() { - complete(ref.get()); - } - }; - - future.whenComplete((list, t) -> { - - if (t != null) { - convert.completeExceptionally(t); - return; - } - - if (!list.isEmpty()) { - LettuceAssert.isTrue(list.size() == 2, "List must contain exact 2 entries (Hostname, Port)"); - String hostname = (String) list.get(0); - String port = (String) list.get(1); - ref.set(new InetSocketAddress(hostname, Integer.parseInt(port))); - } - - convert.complete(); - - }); - - return convert; - } - - @Override - public RedisFuture>> masters() { - - return dispatch(commandBuilder.masters()); - } - - @Override - public RedisFuture> master(K key) { - - return dispatch(commandBuilder.master(key)); - } - - @Override - public RedisFuture>> slaves(K key) { - - return dispatch(commandBuilder.slaves(key)); - } - - @Override - public RedisFuture reset(K key) { - - return dispatch(commandBuilder.reset(key)); - } - - @Override - public RedisFuture failover(K key) { - - return dispatch(commandBuilder.failover(key)); - } - - @Override - public RedisFuture monitor(K key, String ip, int port, int quorum) { - - return dispatch(commandBuilder.monitor(key, ip, port, quorum)); - } - - @Override - public RedisFuture set(K key, String option, V value) { - - return dispatch(commandBuilder.set(key, option, value)); - } - - @Override - public RedisFuture remove(K key) { - return dispatch(commandBuilder.remove(key)); - } - - 
@Override - public RedisFuture ping() { - return dispatch(commandBuilder.ping()); - } - - public AsyncCommand dispatch(RedisCommand cmd) { - return connection.dispatch(new AsyncCommand<>(cmd)); - } - - @Override - public void close() { - connection.close(); - } - - @Override - public boolean isOpen() { - return connection.isOpen(); - } - - @Override - public StatefulRedisSentinelConnection getStatefulConnection() { - return (StatefulRedisSentinelConnection) connection; - } -} diff --git a/src/main/java/com/lambdaworks/redis/sentinel/RedisSentinelReactiveCommandsImpl.java b/src/main/java/com/lambdaworks/redis/sentinel/RedisSentinelReactiveCommandsImpl.java deleted file mode 100644 index a9bd5400d7..0000000000 --- a/src/main/java/com/lambdaworks/redis/sentinel/RedisSentinelReactiveCommandsImpl.java +++ /dev/null @@ -1,120 +0,0 @@ -package com.lambdaworks.redis.sentinel; - -import java.net.InetSocketAddress; -import java.net.SocketAddress; -import java.util.Map; -import java.util.function.Supplier; - -import rx.Observable; - -import com.lambdaworks.redis.ReactiveCommandDispatcher; -import com.lambdaworks.redis.api.StatefulConnection; -import com.lambdaworks.redis.codec.RedisCodec; -import com.lambdaworks.redis.internal.LettuceAssert; -import com.lambdaworks.redis.protocol.RedisCommand; -import com.lambdaworks.redis.sentinel.api.StatefulRedisSentinelConnection; -import com.lambdaworks.redis.sentinel.api.rx.RedisSentinelReactiveCommands; - -/** - * A reactive and thread-safe API for a Redis Sentinel connection. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 3.0 - */ -public class RedisSentinelReactiveCommandsImpl implements RedisSentinelReactiveCommands { - - private final SentinelCommandBuilder commandBuilder; - private final StatefulConnection connection; - - public RedisSentinelReactiveCommandsImpl(StatefulConnection connection, RedisCodec codec) { - this.connection = connection; - commandBuilder = new SentinelCommandBuilder(codec); - } - - @Override - public Observable getMasterAddrByName(K key) { - - Observable observable = createDissolvingObservable(() -> commandBuilder.getMasterAddrByKey(key)); - return observable.buffer(2).map(list -> { - if (list.isEmpty()) { - return null; - } - - LettuceAssert.isTrue(list.size() == 2, "List must contain exact 2 entries (Hostname, Port)"); - String hostname = (String) list.get(0); - String port = (String) list.get(1); - return new InetSocketAddress(hostname, Integer.parseInt(port)); - }); - } - - @Override - public Observable> masters() { - return createDissolvingObservable(() -> commandBuilder.masters()); - } - - @Override - public Observable> master(K key) { - return createObservable(() -> commandBuilder.master(key)); - } - - @Override - public Observable> slaves(K key) { - return createDissolvingObservable(() -> commandBuilder.slaves(key)); - } - - @Override - public Observable reset(K key) { - return createObservable(() -> commandBuilder.reset(key)); - } - - @Override - public Observable failover(K key) { - return createObservable(() -> commandBuilder.failover(key)); - } - - @Override - public Observable monitor(K key, String ip, int port, int quorum) { - return createObservable(() -> commandBuilder.monitor(key, ip, port, quorum)); - } - - @Override - public Observable set(K key, String option, V value) { - return createObservable(() -> commandBuilder.set(key, option, value)); - } - - @Override - public Observable remove(K key) { - return createObservable(() -> commandBuilder.remove(key)); - } - - @Override - 
public Observable ping() { - return createObservable(() -> commandBuilder.ping()); - } - - @Override - public void close() { - connection.close(); - } - - @Override - public boolean isOpen() { - return connection.isOpen(); - } - - @Override - public StatefulRedisSentinelConnection getStatefulConnection() { - return (StatefulRedisSentinelConnection) connection; - } - - public Observable createObservable(Supplier> commandSupplier) { - return Observable.create(new ReactiveCommandDispatcher(commandSupplier, connection, false)); - } - - @SuppressWarnings("unchecked") - public R createDissolvingObservable(Supplier> commandSupplier) { - return (R) Observable.create(new ReactiveCommandDispatcher<>(commandSupplier, connection, true)); - } -} diff --git a/src/main/java/com/lambdaworks/redis/sentinel/SentinelCommandBuilder.java b/src/main/java/com/lambdaworks/redis/sentinel/SentinelCommandBuilder.java deleted file mode 100644 index 57001f983c..0000000000 --- a/src/main/java/com/lambdaworks/redis/sentinel/SentinelCommandBuilder.java +++ /dev/null @@ -1,84 +0,0 @@ -package com.lambdaworks.redis.sentinel; - -import static com.lambdaworks.redis.protocol.CommandKeyword.FAILOVER; -import static com.lambdaworks.redis.protocol.CommandKeyword.RESET; -import static com.lambdaworks.redis.protocol.CommandKeyword.SLAVES; -import static com.lambdaworks.redis.protocol.CommandType.MONITOR; -import static com.lambdaworks.redis.protocol.CommandType.PING; -import static com.lambdaworks.redis.protocol.CommandType.SENTINEL; -import static com.lambdaworks.redis.protocol.CommandType.SET; - -import java.util.List; -import java.util.Map; - -import com.lambdaworks.redis.codec.RedisCodec; -import com.lambdaworks.redis.output.IntegerOutput; -import com.lambdaworks.redis.output.ListOfMapsOutput; -import com.lambdaworks.redis.output.MapOutput; -import com.lambdaworks.redis.output.StatusOutput; -import com.lambdaworks.redis.output.ValueListOutput; -import com.lambdaworks.redis.protocol.BaseRedisCommandBuilder; -import com.lambdaworks.redis.protocol.Command; -import com.lambdaworks.redis.protocol.CommandArgs; -import com.lambdaworks.redis.protocol.CommandKeyword; - -/** - * @author Mark Paluch - * @since 3.0 - */ -class SentinelCommandBuilder extends BaseRedisCommandBuilder { - - public SentinelCommandBuilder(RedisCodec codec) { - super(codec); - } - - public Command> getMasterAddrByKey(K key) { - CommandArgs args = new CommandArgs(codec).add("get-master-addr-by-name").addKey(key); - return createCommand(SENTINEL, new ValueListOutput(codec), args); - } - - public Command>> masters() { - CommandArgs args = new CommandArgs(codec).add("masters"); - return createCommand(SENTINEL, new ListOfMapsOutput(codec), args); - } - - public Command> master(K key) { - CommandArgs args = new CommandArgs(codec).add("master").addKey(key); - return createCommand(SENTINEL, new MapOutput(codec), args); - } - - public Command>> slaves(K key) { - CommandArgs args = new CommandArgs(codec).add(SLAVES).addKey(key); - return createCommand(SENTINEL, new ListOfMapsOutput(codec), args); - } - - public Command reset(K key) { - CommandArgs args = new CommandArgs(codec).add(RESET).addKey(key); - return createCommand(SENTINEL, new IntegerOutput(codec), args); - } - - public Command failover(K key) { - CommandArgs args = new CommandArgs(codec).add(FAILOVER).addKey(key); - return createCommand(SENTINEL, new StatusOutput(codec), args); - } - - public Command monitor(K key, String ip, int port, int quorum) { - CommandArgs args = new 
CommandArgs(codec).add(MONITOR).addKey(key).add(ip).add(port).add(quorum); - return createCommand(SENTINEL, new StatusOutput(codec), args); - } - - public Command set(K key, String option, V value) { - CommandArgs args = new CommandArgs(codec).add(SET).addKey(key).add(option).addValue(value); - return createCommand(SENTINEL, new StatusOutput(codec), args); - } - - public Command ping() { - return createCommand(PING, new StatusOutput(codec)); - } - - public Command remove(K key) { - CommandArgs args = new CommandArgs(codec).add(CommandKeyword.REMOVE).addKey(key); - return createCommand(SENTINEL, new StatusOutput(codec), args); - } - -} diff --git a/src/main/java/com/lambdaworks/redis/sentinel/StatefulRedisSentinelConnectionImpl.java b/src/main/java/com/lambdaworks/redis/sentinel/StatefulRedisSentinelConnectionImpl.java deleted file mode 100644 index feea14bb95..0000000000 --- a/src/main/java/com/lambdaworks/redis/sentinel/StatefulRedisSentinelConnectionImpl.java +++ /dev/null @@ -1,56 +0,0 @@ -package com.lambdaworks.redis.sentinel; - -import java.util.concurrent.TimeUnit; - -import com.lambdaworks.redis.RedisChannelHandler; -import com.lambdaworks.redis.RedisChannelWriter; -import com.lambdaworks.redis.codec.RedisCodec; -import com.lambdaworks.redis.protocol.RedisCommand; -import com.lambdaworks.redis.sentinel.api.StatefulRedisSentinelConnection; -import com.lambdaworks.redis.sentinel.api.async.RedisSentinelAsyncCommands; -import com.lambdaworks.redis.sentinel.api.rx.RedisSentinelReactiveCommands; -import com.lambdaworks.redis.sentinel.api.sync.RedisSentinelCommands; -import io.netty.channel.ChannelHandler; - -/** - * @author Mark Paluch - */ -@ChannelHandler.Sharable -public class StatefulRedisSentinelConnectionImpl extends RedisChannelHandler implements - StatefulRedisSentinelConnection { - - protected final RedisCodec codec; - protected final RedisSentinelCommands sync; - protected final RedisSentinelAsyncCommands async; - protected final RedisSentinelReactiveCommands reactive; - - public StatefulRedisSentinelConnectionImpl(RedisChannelWriter writer, RedisCodec codec, long timeout, - TimeUnit unit) { - super(writer, timeout, unit); - - this.codec = codec; - this.async = new RedisSentinelAsyncCommandsImpl<>(this, codec); - this.sync = syncHandler(async, RedisSentinelCommands.class); - this.reactive = new RedisSentinelReactiveCommandsImpl<>(this, codec); - } - - @Override - public > C dispatch(C cmd) { - return super.dispatch(cmd); - } - - @Override - public RedisSentinelCommands sync() { - return sync; - } - - @Override - public RedisSentinelAsyncCommands async() { - return async; - } - - @Override - public RedisSentinelReactiveCommands reactive() { - return reactive; - } -} diff --git a/src/main/java/com/lambdaworks/redis/sentinel/api/StatefulRedisSentinelConnection.java b/src/main/java/com/lambdaworks/redis/sentinel/api/StatefulRedisSentinelConnection.java deleted file mode 100644 index c09f00f444..0000000000 --- a/src/main/java/com/lambdaworks/redis/sentinel/api/StatefulRedisSentinelConnection.java +++ /dev/null @@ -1,42 +0,0 @@ -package com.lambdaworks.redis.sentinel.api; - -import com.lambdaworks.redis.api.StatefulConnection; -import com.lambdaworks.redis.protocol.ConnectionWatchdog; -import com.lambdaworks.redis.sentinel.api.async.RedisSentinelAsyncCommands; -import com.lambdaworks.redis.sentinel.api.rx.RedisSentinelReactiveCommands; -import com.lambdaworks.redis.sentinel.api.sync.RedisSentinelCommands; - -/** - * A thread-safe connection to a redis server. 
Multiple threads may share one {@link StatefulRedisSentinelConnection}. - * - * A {@link ConnectionWatchdog} monitors each connection and reconnects automatically until {@link #close} is called. All - * pending commands will be (re)sent after successful reconnection. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - */ -public interface StatefulRedisSentinelConnection extends StatefulConnection { - - /** - * Returns the {@link RedisSentinelCommands} API for the current connection. Does not create a new connection. - * - * @return the synchronous API for the underlying connection. - */ - RedisSentinelCommands sync(); - - /** - * Returns the {@link RedisSentinelAsyncCommands} API for the current connection. Does not create a new connection. * - * - * @return the asynchronous API for the underlying connection. - */ - RedisSentinelAsyncCommands async(); - - /** - * Returns the {@link RedisSentinelReactiveCommands} API for the current connection. Does not create a new connection. * - * - * @return the reactive API for the underlying connection. - */ - RedisSentinelReactiveCommands reactive(); -} diff --git a/src/main/java/com/lambdaworks/redis/sentinel/api/async/RedisSentinelAsyncCommands.java b/src/main/java/com/lambdaworks/redis/sentinel/api/async/RedisSentinelAsyncCommands.java deleted file mode 100644 index 1204d29648..0000000000 --- a/src/main/java/com/lambdaworks/redis/sentinel/api/async/RedisSentinelAsyncCommands.java +++ /dev/null @@ -1,122 +0,0 @@ -package com.lambdaworks.redis.sentinel.api.async; - -import java.io.Closeable; -import java.net.SocketAddress; -import java.util.List; -import java.util.Map; -import com.lambdaworks.redis.sentinel.api.StatefulRedisSentinelConnection; -import com.lambdaworks.redis.RedisFuture; -import com.lambdaworks.redis.RedisSentinelAsyncConnection; - -/** - * Asynchronous executed commands for Redis Sentinel. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateAsyncApi - */ -public interface RedisSentinelAsyncCommands extends Closeable, RedisSentinelAsyncConnection { - - /** - * Return the ip and port number of the master with that name. - * - * @param key the key - * @return SocketAddress; - */ - RedisFuture getMasterAddrByName(K key); - - /** - * Enumerates all the monitored masters and their states. - * - * @return Map<K, V>> - */ - RedisFuture>> masters(); - - /** - * Show the state and info of the specified master. - * - * @param key the key - * @return Map<K, V> - */ - RedisFuture> master(K key); - - /** - * Provides a list of slaves for the master with the specified name. - * - * @param key the key - * @return List<Map<K, V>> - */ - RedisFuture>> slaves(K key); - - /** - * This command will reset all the masters with matching name. - * - * @param key the key - * @return Long - */ - RedisFuture reset(K key); - - /** - * Perform a failover. - * - * @param key the master id - * @return String - */ - RedisFuture failover(K key); - - /** - * This command tells the Sentinel to start monitoring a new master with the specified name, ip, port, and quorum. - * - * @param key the key - * @param ip the IP address - * @param port the port - * @param quorum the quorum count - * @return String - */ - RedisFuture monitor(K key, String ip, int port, int quorum); - - /** - * Multiple option / value pairs can be specified (or none at all). 
- * - * @param key the key - * @param option the option - * @param value the value - * - * @return String simple-string-reply {@code OK} if {@code SET} was executed correctly. - */ - RedisFuture set(K key, String option, V value); - - /** - * remove the specified master. - * - * @param key the key - * @return String - */ - RedisFuture remove(K key); - - /** - * Ping the server. - * - * @return String simple-string-reply - */ - RedisFuture ping(); - - /** - * close the underlying connection. - */ - void close(); - - /** - * - * @return true if the connection is open (connected and not closed). - */ - boolean isOpen(); - - /** - * - * @return the underlying connection. - */ - StatefulRedisSentinelConnection getStatefulConnection(); -} diff --git a/src/main/java/com/lambdaworks/redis/sentinel/api/async/package-info.java b/src/main/java/com/lambdaworks/redis/sentinel/api/async/package-info.java deleted file mode 100644 index 96dfcd1ffa..0000000000 --- a/src/main/java/com/lambdaworks/redis/sentinel/api/async/package-info.java +++ /dev/null @@ -1,4 +0,0 @@ -/** - * Redis Sentinel API for asynchronous executed commands. - */ -package com.lambdaworks.redis.sentinel.api.async; \ No newline at end of file diff --git a/src/main/java/com/lambdaworks/redis/sentinel/api/package-info.java b/src/main/java/com/lambdaworks/redis/sentinel/api/package-info.java deleted file mode 100644 index e00fe614b6..0000000000 --- a/src/main/java/com/lambdaworks/redis/sentinel/api/package-info.java +++ /dev/null @@ -1,4 +0,0 @@ -/** - * Redis Sentinel connection API. - */ -package com.lambdaworks.redis.sentinel.api; \ No newline at end of file diff --git a/src/main/java/com/lambdaworks/redis/sentinel/api/rx/RedisSentinelReactiveCommands.java b/src/main/java/com/lambdaworks/redis/sentinel/api/rx/RedisSentinelReactiveCommands.java deleted file mode 100644 index be0217cfe2..0000000000 --- a/src/main/java/com/lambdaworks/redis/sentinel/api/rx/RedisSentinelReactiveCommands.java +++ /dev/null @@ -1,121 +0,0 @@ -package com.lambdaworks.redis.sentinel.api.rx; - -import java.io.Closeable; -import java.net.SocketAddress; -import java.util.List; -import java.util.Map; -import com.lambdaworks.redis.sentinel.api.StatefulRedisSentinelConnection; -import rx.Observable; - -/** - * Observable commands for Redis Sentinel. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateReactiveApi - */ -public interface RedisSentinelReactiveCommands extends Closeable { - - /** - * Return the ip and port number of the master with that name. - * - * @param key the key - * @return SocketAddress; - */ - Observable getMasterAddrByName(K key); - - /** - * Enumerates all the monitored masters and their states. - * - * @return Map<K, V>> - */ - Observable> masters(); - - /** - * Show the state and info of the specified master. - * - * @param key the key - * @return Map<K, V> - */ - Observable> master(K key); - - /** - * Provides a list of slaves for the master with the specified name. - * - * @param key the key - * @return Map<K, V> - */ - Observable> slaves(K key); - - /** - * This command will reset all the masters with matching name. - * - * @param key the key - * @return Long - */ - Observable reset(K key); - - /** - * Perform a failover. - * - * @param key the master id - * @return String - */ - Observable failover(K key); - - /** - * This command tells the Sentinel to start monitoring a new master with the specified name, ip, port, and quorum. 
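A hedged sketch of how the Sentinel connection API shown above is typically used, comparing the synchronous and asynchronous variants; how the `StatefulRedisSentinelConnection` is obtained is left out of the sketch.

```java
import java.net.SocketAddress;

import com.lambdaworks.redis.sentinel.api.StatefulRedisSentinelConnection;

// Hypothetical helper; the connection would usually come from the client (e.g. RedisClient#connectSentinel).
class SentinelLookupExample {

    static SocketAddress masterAddress(StatefulRedisSentinelConnection<String, String> connection, String masterId) {

        // Asynchronous variant: the returned RedisFuture is a CompletionStage.
        connection.async().getMasterAddrByName(masterId)
                .thenAccept(address -> System.out.println("master " + masterId + " runs at " + address));

        // Synchronous variant: blocks until Sentinel answers.
        return connection.sync().getMasterAddrByName(masterId);
    }
}
```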
- * - * @param key the key - * @param ip the IP address - * @param port the port - * @param quorum the quorum count - * @return String - */ - Observable monitor(K key, String ip, int port, int quorum); - - /** - * Multiple option / value pairs can be specified (or none at all). - * - * @param key the key - * @param option the option - * @param value the value - * - * @return String simple-string-reply {@code OK} if {@code SET} was executed correctly. - */ - Observable set(K key, String option, V value); - - /** - * remove the specified master. - * - * @param key the key - * @return String - */ - Observable remove(K key); - - /** - * Ping the server. - * - * @return String simple-string-reply - */ - Observable ping(); - - /** - * close the underlying connection. - */ - void close(); - - /** - * - * @return true if the connection is open (connected and not closed). - */ - boolean isOpen(); - - /** - * - * @return the underlying connection. - */ - StatefulRedisSentinelConnection getStatefulConnection(); -} diff --git a/src/main/java/com/lambdaworks/redis/sentinel/api/rx/package-info.java b/src/main/java/com/lambdaworks/redis/sentinel/api/rx/package-info.java deleted file mode 100644 index e126b5d5ab..0000000000 --- a/src/main/java/com/lambdaworks/redis/sentinel/api/rx/package-info.java +++ /dev/null @@ -1,4 +0,0 @@ -/** - * Redis Sentinel API for reactive commands. - */ -package com.lambdaworks.redis.sentinel.api.rx; \ No newline at end of file diff --git a/src/main/java/com/lambdaworks/redis/sentinel/api/sync/RedisSentinelCommands.java b/src/main/java/com/lambdaworks/redis/sentinel/api/sync/RedisSentinelCommands.java deleted file mode 100644 index aacb8a89b0..0000000000 --- a/src/main/java/com/lambdaworks/redis/sentinel/api/sync/RedisSentinelCommands.java +++ /dev/null @@ -1,120 +0,0 @@ -package com.lambdaworks.redis.sentinel.api.sync; - -import java.io.Closeable; -import java.net.SocketAddress; -import java.util.List; -import java.util.Map; -import com.lambdaworks.redis.sentinel.api.StatefulRedisSentinelConnection; - -/** - * Synchronous executed commands for Redis Sentinel. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateSyncApi - */ -public interface RedisSentinelCommands extends Closeable { - - /** - * Return the ip and port number of the master with that name. - * - * @param key the key - * @return SocketAddress; - */ - SocketAddress getMasterAddrByName(K key); - - /** - * Enumerates all the monitored masters and their states. - * - * @return Map<K, V>> - */ - List> masters(); - - /** - * Show the state and info of the specified master. - * - * @param key the key - * @return Map<K, V> - */ - Map master(K key); - - /** - * Provides a list of slaves for the master with the specified name. - * - * @param key the key - * @return List<Map<K, V>> - */ - List> slaves(K key); - - /** - * This command will reset all the masters with matching name. - * - * @param key the key - * @return Long - */ - Long reset(K key); - - /** - * Perform a failover. - * - * @param key the master id - * @return String - */ - String failover(K key); - - /** - * This command tells the Sentinel to start monitoring a new master with the specified name, ip, port, and quorum. 
- * - * @param key the key - * @param ip the IP address - * @param port the port - * @param quorum the quorum count - * @return String - */ - String monitor(K key, String ip, int port, int quorum); - - /** - * Multiple option / value pairs can be specified (or none at all). - * - * @param key the key - * @param option the option - * @param value the value - * - * @return String simple-string-reply {@code OK} if {@code SET} was executed correctly. - */ - String set(K key, String option, V value); - - /** - * remove the specified master. - * - * @param key the key - * @return String - */ - String remove(K key); - - /** - * Ping the server. - * - * @return String simple-string-reply - */ - String ping(); - - /** - * close the underlying connection. - */ - void close(); - - /** - * - * @return true if the connection is open (connected and not closed). - */ - boolean isOpen(); - - /** - * - * @return the underlying connection. - */ - StatefulRedisSentinelConnection getStatefulConnection(); -} diff --git a/src/main/java/com/lambdaworks/redis/sentinel/api/sync/package-info.java b/src/main/java/com/lambdaworks/redis/sentinel/api/sync/package-info.java deleted file mode 100644 index 9f53e21bc0..0000000000 --- a/src/main/java/com/lambdaworks/redis/sentinel/api/sync/package-info.java +++ /dev/null @@ -1,4 +0,0 @@ -/** - * Redis Sentinel API for synchronous executed commands. - */ -package com.lambdaworks.redis.sentinel.api.sync; \ No newline at end of file diff --git a/src/main/java/com/lambdaworks/redis/sentinel/package-info.java b/src/main/java/com/lambdaworks/redis/sentinel/package-info.java deleted file mode 100644 index 5341dd5776..0000000000 --- a/src/main/java/com/lambdaworks/redis/sentinel/package-info.java +++ /dev/null @@ -1,4 +0,0 @@ -/** - * Redis Sentinel connection classes. - */ -package com.lambdaworks.redis.sentinel; \ No newline at end of file diff --git a/src/main/java/com/lambdaworks/redis/support/ClientResourcesFactoryBean.java b/src/main/java/com/lambdaworks/redis/support/ClientResourcesFactoryBean.java deleted file mode 100644 index c48ae6a619..0000000000 --- a/src/main/java/com/lambdaworks/redis/support/ClientResourcesFactoryBean.java +++ /dev/null @@ -1,66 +0,0 @@ -package com.lambdaworks.redis.support; - -import org.springframework.beans.factory.FactoryBean; -import org.springframework.beans.factory.config.AbstractFactoryBean; - -import com.lambdaworks.redis.resource.ClientResources; -import com.lambdaworks.redis.resource.DefaultClientResources; - -/** - * {@link FactoryBean} that creates a {@link ClientResources} instance representing the infrastructure resources (thread pools) - * for a Redis Client. - * - * @author Mark Paluch - */ -public class ClientResourcesFactoryBean extends AbstractFactoryBean { - - private int ioThreadPoolSize = DefaultClientResources.DEFAULT_IO_THREADS; - private int computationThreadPoolSize = DefaultClientResources.DEFAULT_COMPUTATION_THREADS; - - public int getIoThreadPoolSize() { - return ioThreadPoolSize; - } - - /** - * Sets the thread pool size (number of threads to use) for I/O operations (default value is the number of CPUs). - * - * @param ioThreadPoolSize the thread pool size - */ - public void setIoThreadPoolSize(int ioThreadPoolSize) { - this.ioThreadPoolSize = ioThreadPoolSize; - } - - public int getComputationThreadPoolSize() { - return computationThreadPoolSize; - } - - /** - * Sets the thread pool size (number of threads to use) for computation operations (default value is the number of CPUs). 
- * - * @param computationThreadPoolSize the thread pool size - */ - public void setComputationThreadPoolSize(int computationThreadPoolSize) { - this.computationThreadPoolSize = computationThreadPoolSize; - } - - @Override - public Class getObjectType() { - return ClientResources.class; - } - - @Override - protected ClientResources createInstance() throws Exception { - return new DefaultClientResources.Builder().computationThreadPoolSize(computationThreadPoolSize) - .ioThreadPoolSize(ioThreadPoolSize).build(); - } - - @Override - protected void destroyInstance(ClientResources instance) throws Exception { - instance.shutdown().get(); - } - - @Override - public boolean isSingleton() { - return true; - } -} diff --git a/src/main/java/com/lambdaworks/redis/support/PoolingProxyFactory.java b/src/main/java/com/lambdaworks/redis/support/PoolingProxyFactory.java deleted file mode 100644 index 237085bf59..0000000000 --- a/src/main/java/com/lambdaworks/redis/support/PoolingProxyFactory.java +++ /dev/null @@ -1,41 +0,0 @@ -package com.lambdaworks.redis.support; - -import java.lang.reflect.Proxy; - -import com.lambdaworks.redis.RedisConnectionPool; - -/** - * Pooling proxy factory to create transparent pooling proxies. These proxies will allocate internally connections and use - * always valid connections. You don't need to allocate/free the connections anymore. - * - * @author Mark Paluch - * @since 3.0 - */ -public class PoolingProxyFactory { - - /** - * Utility constructor. - */ - private PoolingProxyFactory() { - - } - - /** - * Creates a transparent connection pooling proxy. Will re-check the connection every 5 secs. - * - * @param connectionPool The Redis connection pool - * @param Type of the connection. - * @return Transparent pooling proxy. - */ - @SuppressWarnings("unchecked") - public static T create(RedisConnectionPool connectionPool) { - Class componentType = connectionPool.getComponentType(); - - TransparentPoolingInvocationHandler h = new TransparentPoolingInvocationHandler(connectionPool); - - Object o = Proxy.newProxyInstance(PoolingProxyFactory.class.getClassLoader(), new Class[] { componentType }, h); - - return (T) o; - } - -} diff --git a/src/main/java/com/lambdaworks/redis/support/RedisClientCdiBean.java b/src/main/java/com/lambdaworks/redis/support/RedisClientCdiBean.java deleted file mode 100644 index 147240167a..0000000000 --- a/src/main/java/com/lambdaworks/redis/support/RedisClientCdiBean.java +++ /dev/null @@ -1,60 +0,0 @@ -package com.lambdaworks.redis.support; - -import java.lang.annotation.Annotation; -import java.util.Set; - -import javax.enterprise.context.spi.CreationalContext; -import javax.enterprise.inject.spi.Bean; -import javax.enterprise.inject.spi.BeanManager; - -import com.lambdaworks.redis.RedisClient; -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.resource.ClientResources; - -/** - * Factory Bean for {@link RedisClient} instances. Requires a {@link RedisURI} and allows to reuse - * {@link com.lambdaworks.redis.resource.ClientResources}. 
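The factory bean above reduces to the builder call in its `createInstance()` method; the following is a plain-Java sketch of the equivalent setup, with arbitrary example pool sizes.

```java
import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.RedisURI;
import com.lambdaworks.redis.resource.ClientResources;
import com.lambdaworks.redis.resource.DefaultClientResources;

// Programmatic equivalent of ClientResourcesFactoryBean#createInstance(), without Spring.
class ClientResourcesExample {

    static RedisClient newClient() {
        ClientResources resources = new DefaultClientResources.Builder()
                .ioThreadPoolSize(4)          // threads for I/O (defaults to the number of CPUs)
                .computationThreadPoolSize(4) // threads for computation (defaults to the number of CPUs)
                .build();

        // The caller is responsible for shutting down both the client and the shared resources.
        return RedisClient.create(resources, RedisURI.create("redis://localhost:6379"));
    }
}
```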
URI Formats: - * {@code - * redis-sentinel://host[:port][,host2[:port2]][/databaseNumber]#sentinelMasterId - * } - * - * {@code - * redis://host[:port][/databaseNumber] - * } - * - * @see RedisURI - * @author Mark Paluch - * @since 3.0 - */ -class RedisClientCdiBean extends AbstractCdiBean { - - RedisClientCdiBean(Bean redisURIBean, Bean clientResourcesBean, BeanManager beanManager, - Set qualifiers, String name) { - super(redisURIBean, clientResourcesBean, beanManager, qualifiers, name); - } - - @Override - public Class getBeanClass() { - return RedisClient.class; - } - - @Override - public RedisClient create(CreationalContext creationalContext) { - - CreationalContext uriCreationalContext = beanManager.createCreationalContext(redisURIBean); - RedisURI redisURI = (RedisURI) beanManager.getReference(redisURIBean, RedisURI.class, uriCreationalContext); - - if (clientResourcesBean != null) { - ClientResources clientResources = (ClientResources) beanManager.getReference(clientResourcesBean, - ClientResources.class, uriCreationalContext); - return RedisClient.create(clientResources, redisURI); - } - - return RedisClient.create(redisURI); - } - - @Override - public void destroy(RedisClient instance, CreationalContext creationalContext) { - instance.shutdown(); - } -} diff --git a/src/main/java/com/lambdaworks/redis/support/RedisClientFactoryBean.java b/src/main/java/com/lambdaworks/redis/support/RedisClientFactoryBean.java deleted file mode 100644 index e04ca4456d..0000000000 --- a/src/main/java/com/lambdaworks/redis/support/RedisClientFactoryBean.java +++ /dev/null @@ -1,59 +0,0 @@ -package com.lambdaworks.redis.support; - -import static com.lambdaworks.redis.LettuceStrings.isNotEmpty; - -import com.lambdaworks.redis.RedisClient; -import com.lambdaworks.redis.RedisURI; - -/** - * Factory Bean for {@link RedisClient} instances. Needs either a {@link java.net.URI} or a {@link RedisURI} as input and allows - * to reuse {@link com.lambdaworks.redis.resource.ClientResources}. 
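The URI formats quoted in these factory/CDI bean Javadocs can also be passed to `RedisURI.create` directly; a sketch with placeholder hosts and a sentinel master id of `mymaster`.

```java
import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.RedisURI;

// Placeholder hosts and ports; the fragment after '#' selects the monitored master set.
class RedisUriExamples {

    static RedisClient standalone() {
        return RedisClient.create(RedisURI.create("redis://localhost:6379/0"));
    }

    static RedisClient sentinel() {
        return RedisClient.create(RedisURI.create("redis-sentinel://sentinel1:26379,sentinel2:26379/0#mymaster"));
    }
}
```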
URI Formats: - * {@code - * redis-sentinel://host[:port][,host2[:port2]][/databaseNumber]#sentinelMasterId - * } - * - * {@code - * redis://host[:port][/databaseNumber] - * } - * - * @see RedisURI - * @see ClientResourcesFactoryBean - * @author Mark Paluch - * @since 3.0 - */ -public class RedisClientFactoryBean extends LettuceFactoryBeanSupport { - - @Override - public void afterPropertiesSet() throws Exception { - - if (getRedisURI() == null) { - RedisURI redisURI = RedisURI.create(getUri()); - - if (isNotEmpty(getPassword())) { - redisURI.setPassword(getPassword()); - } - setRedisURI(redisURI); - } - - super.afterPropertiesSet(); - } - - @Override - protected void destroyInstance(RedisClient instance) throws Exception { - instance.shutdown(); - } - - @Override - public Class getObjectType() { - return RedisClient.class; - } - - @Override - protected RedisClient createInstance() throws Exception { - - if (getClientResources() != null) { - return RedisClient.create(getClientResources(), getRedisURI()); - } - return RedisClient.create(getRedisURI()); - } -} diff --git a/src/main/java/com/lambdaworks/redis/support/RedisClusterClientCdiBean.java b/src/main/java/com/lambdaworks/redis/support/RedisClusterClientCdiBean.java deleted file mode 100644 index ee5227551b..0000000000 --- a/src/main/java/com/lambdaworks/redis/support/RedisClusterClientCdiBean.java +++ /dev/null @@ -1,55 +0,0 @@ -package com.lambdaworks.redis.support; - -import java.lang.annotation.Annotation; -import java.util.Set; - -import javax.enterprise.context.spi.CreationalContext; -import javax.enterprise.inject.spi.Bean; -import javax.enterprise.inject.spi.BeanManager; - -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.cluster.RedisClusterClient; -import com.lambdaworks.redis.resource.ClientResources; - -/** - * Factory Bean for {@link RedisClusterClient} instances. Requires a {@link RedisURI} and allows to reuse - * {@link com.lambdaworks.redis.resource.ClientResources}. 
URI Format: {@code - * redis://[password@]host[:port] - * } - * - * @see RedisURI - * @author Mark Paluch - * @since 3.0 - */ -class RedisClusterClientCdiBean extends AbstractCdiBean { - - public RedisClusterClientCdiBean(Bean redisURIBean, Bean clientResourcesBean, - BeanManager beanManager, Set qualifiers, String name) { - super(redisURIBean, clientResourcesBean, beanManager, qualifiers, name); - } - - @Override - public Class getBeanClass() { - return RedisClusterClient.class; - } - - @Override - public RedisClusterClient create(CreationalContext creationalContext) { - - CreationalContext uriCreationalContext = beanManager.createCreationalContext(redisURIBean); - RedisURI redisURI = (RedisURI) beanManager.getReference(redisURIBean, RedisURI.class, uriCreationalContext); - - if (clientResourcesBean != null) { - ClientResources clientResources = (ClientResources) beanManager.getReference(clientResourcesBean, - ClientResources.class, uriCreationalContext); - return RedisClusterClient.create(clientResources, redisURI); - } - - return RedisClusterClient.create(redisURI); - } - - @Override - public void destroy(RedisClusterClient instance, CreationalContext creationalContext) { - instance.shutdown(); - } -} diff --git a/src/main/java/com/lambdaworks/redis/support/RedisClusterClientFactoryBean.java b/src/main/java/com/lambdaworks/redis/support/RedisClusterClientFactoryBean.java deleted file mode 100644 index 5af28496b6..0000000000 --- a/src/main/java/com/lambdaworks/redis/support/RedisClusterClientFactoryBean.java +++ /dev/null @@ -1,83 +0,0 @@ -package com.lambdaworks.redis.support; - -import static com.lambdaworks.redis.LettuceStrings.isNotEmpty; - -import java.net.URI; - -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.cluster.RedisClusterClient; -import com.lambdaworks.redis.internal.LettuceAssert; - -/** - * Factory Bean for {@link RedisClusterClient} instances. Needs either a {@link URI} or a {@link RedisURI} as input and allows - * to reuse {@link com.lambdaworks.redis.resource.ClientResources}. 
URI Format: {@code - * redis://[password@]host[:port] - * } - * - * {@code - * rediss://[password@]host[:port] - * } - * - * @see RedisURI - * @see ClientResourcesFactoryBean - * @author Mark Paluch - * @since 3.0 - */ -public class RedisClusterClientFactoryBean extends LettuceFactoryBeanSupport { - - private boolean verifyPeer = false; - - @Override - public void afterPropertiesSet() throws Exception { - - if (getRedisURI() == null) { - URI uri = getUri(); - - LettuceAssert.isTrue(!uri.getScheme().equals(RedisURI.URI_SCHEME_REDIS_SENTINEL), - "Sentinel mode not supported when using RedisClusterClient"); - - RedisURI redisURI = RedisURI.create(uri); - if (isNotEmpty(getPassword())) { - redisURI.setPassword(getPassword()); - } - - if (RedisURI.URI_SCHEME_REDIS_SECURE.equals(uri.getScheme()) - || RedisURI.URI_SCHEME_REDIS_SECURE_ALT.equals(uri.getScheme()) - || RedisURI.URI_SCHEME_REDIS_TLS_ALT.equals(uri.getScheme())) { - redisURI.setVerifyPeer(verifyPeer); - } - - setRedisURI(redisURI); - } - - super.afterPropertiesSet(); - - } - - @Override - protected void destroyInstance(RedisClusterClient instance) throws Exception { - instance.shutdown(); - } - - @Override - public Class getObjectType() { - return RedisClusterClient.class; - } - - @Override - protected RedisClusterClient createInstance() throws Exception { - - if (getClientResources() != null) { - return RedisClusterClient.create(getClientResources(), getRedisURI()); - } - return RedisClusterClient.create(getRedisURI()); - } - - public boolean isVerifyPeer() { - return verifyPeer; - } - - public void setVerifyPeer(boolean verifyPeer) { - this.verifyPeer = verifyPeer; - } -} diff --git a/src/main/java/com/lambdaworks/redis/support/TransparentPoolingInvocationHandler.java b/src/main/java/com/lambdaworks/redis/support/TransparentPoolingInvocationHandler.java deleted file mode 100644 index 5234c45ccd..0000000000 --- a/src/main/java/com/lambdaworks/redis/support/TransparentPoolingInvocationHandler.java +++ /dev/null @@ -1,48 +0,0 @@ -package com.lambdaworks.redis.support; - -import java.lang.reflect.Method; - -import com.lambdaworks.redis.RedisConnectionPool; -import com.lambdaworks.redis.RedisException; -import com.lambdaworks.redis.internal.AbstractInvocationHandler; - -/** - * Invocation Handler with transparent pooling. This handler is thread-safe. 
- * - * @author Mark Paluch - * @since 3.0 - */ -public class TransparentPoolingInvocationHandler extends AbstractInvocationHandler { - - private RedisConnectionPool pool; - - public TransparentPoolingInvocationHandler(RedisConnectionPool pool) { - this.pool = pool; - } - - @Override - protected Object handleInvocation(Object proxy, Method method, Object[] args) throws Throwable { - - if (pool == null) { - throw new RedisException("Connection pool is closed"); - } - - if (method.getName().equals("close")) { - pool.close(); - pool = null; - return null; - } - - T connection = pool.allocateConnection(); - try { - return method.invoke(connection, args); - } finally { - pool.freeConnection(connection); - } - } - - public RedisConnectionPool getPool() { - return pool; - } - -} diff --git a/src/main/java/com/lambdaworks/redis/support/WithConnection.java b/src/main/java/com/lambdaworks/redis/support/WithConnection.java deleted file mode 100644 index 70c0a2f58b..0000000000 --- a/src/main/java/com/lambdaworks/redis/support/WithConnection.java +++ /dev/null @@ -1,35 +0,0 @@ -package com.lambdaworks.redis.support; - -import com.lambdaworks.redis.RedisConnectionPool; - -/** - * Execution-Template which allocates a connection around the run()-call. Use this class as adapter template and implement your - * redis calls within the run-method. - * - * @param Connection type. - * @author Mark Paluch - * @since 3.0 - */ -public abstract class WithConnection { - - /** - * Performs connection handling and invokes the run-method with a valid Redis connection. - * - * @param pool the connection pool. - */ - public WithConnection(RedisConnectionPool pool) { - T connection = pool.allocateConnection(); - try { - run(connection); - } finally { - pool.freeConnection(connection); - } - } - - /** - * Execution method. Will be called with a valid redis connection. - * - * @param connection the connection - */ - protected abstract void run(T connection); -} diff --git a/src/main/java/com/lambdaworks/redis/support/package-info.java b/src/main/java/com/lambdaworks/redis/support/package-info.java deleted file mode 100644 index 274f0f7446..0000000000 --- a/src/main/java/com/lambdaworks/redis/support/package-info.java +++ /dev/null @@ -1,5 +0,0 @@ -/** - * Supportive classes such as {@link com.lambdaworks.redis.support.RedisClientCdiBean} for CDI support, {@link com.lambdaworks.redis.support.RedisClientFactoryBean} for Spring. - */ -package com.lambdaworks.redis.support; - diff --git a/src/main/java/io/lettuce/core/AbstractRedisAsyncCommands.java b/src/main/java/io/lettuce/core/AbstractRedisAsyncCommands.java new file mode 100644 index 0000000000..46b80db0d4 --- /dev/null +++ b/src/main/java/io/lettuce/core/AbstractRedisAsyncCommands.java @@ -0,0 +1,2220 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core; + +import static io.lettuce.core.protocol.CommandType.*; + +import java.nio.charset.Charset; +import java.time.Duration; +import java.util.Date; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.concurrent.TimeUnit; + +import io.lettuce.core.GeoArgs.Unit; +import io.lettuce.core.api.StatefulConnection; +import io.lettuce.core.api.async.*; +import io.lettuce.core.cluster.api.async.RedisClusterAsyncCommands; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.output.*; +import io.lettuce.core.protocol.*; + +/** + * An asynchronous and thread-safe API for a Redis connection. + * + * @param Key type. + * @param Value type. + * @author Will Glozer + * @author Mark Paluch + * @author Tugdual Grall + */ +@SuppressWarnings("unchecked") +public abstract class AbstractRedisAsyncCommands implements RedisHashAsyncCommands, RedisKeyAsyncCommands, + RedisStringAsyncCommands, RedisListAsyncCommands, RedisSetAsyncCommands, + RedisSortedSetAsyncCommands, RedisScriptingAsyncCommands, RedisServerAsyncCommands, + RedisHLLAsyncCommands, BaseRedisAsyncCommands, RedisTransactionalAsyncCommands, + RedisGeoAsyncCommands, RedisClusterAsyncCommands { + + private final StatefulConnection connection; + private final RedisCommandBuilder commandBuilder; + + /** + * Initialize a new instance. + * + * @param connection the connection to operate on + * @param codec the codec for command encoding + */ + public AbstractRedisAsyncCommands(StatefulConnection connection, RedisCodec codec) { + this.connection = connection; + this.commandBuilder = new RedisCommandBuilder<>(codec); + } + + @Override + public RedisFuture append(K key, V value) { + return dispatch(commandBuilder.append(key, value)); + } + + @Override + public RedisFuture asking() { + return dispatch(commandBuilder.asking()); + } + + @Override + public RedisFuture auth(CharSequence password) { + + LettuceAssert.notNull(password, "Password must not be null"); + return dispatch(commandBuilder.auth(password)); + } + + public RedisFuture auth(char[] password) { + + LettuceAssert.notNull(password, "Password must not be null"); + return dispatch(commandBuilder.auth(password)); + } + + @Override + public RedisFuture auth(String username, CharSequence password) { + LettuceAssert.notNull(username, "Username must not be null"); + LettuceAssert.notNull(password, "Password must not be null"); + return dispatch(commandBuilder.auth(username, password)); + } + + public RedisFuture auth(String username, char[] password) { + LettuceAssert.notNull(username, "Username must not be null"); + LettuceAssert.notNull(password, "Password must not be null"); + return dispatch(commandBuilder.auth(username, password)); + } + + @Override + public RedisFuture bgrewriteaof() { + return dispatch(commandBuilder.bgrewriteaof()); + } + + @Override + public RedisFuture bgsave() { + return dispatch(commandBuilder.bgsave()); + } + + @Override + public RedisFuture bitcount(K key) { + return dispatch(commandBuilder.bitcount(key)); + } + + @Override + public RedisFuture bitcount(K key, long start, long end) { + return dispatch(commandBuilder.bitcount(key, start, end)); + } + + @Override + public RedisFuture> bitfield(K key, BitFieldArgs bitFieldArgs) { + return dispatch(commandBuilder.bitfield(key, bitFieldArgs)); + } + + @Override + public RedisFuture bitopAnd(K destination, K... 
keys) { + return dispatch(commandBuilder.bitopAnd(destination, keys)); + } + + @Override + public RedisFuture bitopNot(K destination, K source) { + return dispatch(commandBuilder.bitopNot(destination, source)); + } + + @Override + public RedisFuture bitopOr(K destination, K... keys) { + return dispatch(commandBuilder.bitopOr(destination, keys)); + } + + @Override + public RedisFuture bitopXor(K destination, K... keys) { + return dispatch(commandBuilder.bitopXor(destination, keys)); + } + + @Override + public RedisFuture bitpos(K key, boolean state) { + return dispatch(commandBuilder.bitpos(key, state)); + } + + @Override + public RedisFuture bitpos(K key, boolean state, long start) { + return dispatch(commandBuilder.bitpos(key, state, start)); + } + + @Override + public RedisFuture bitpos(K key, boolean state, long start, long end) { + return dispatch(commandBuilder.bitpos(key, state, start, end)); + } + + @Override + public RedisFuture> blpop(long timeout, K... keys) { + return dispatch(commandBuilder.blpop(timeout, keys)); + } + + @Override + public RedisFuture> brpop(long timeout, K... keys) { + return dispatch(commandBuilder.brpop(timeout, keys)); + } + + @Override + public RedisFuture brpoplpush(long timeout, K source, K destination) { + return dispatch(commandBuilder.brpoplpush(timeout, source, destination)); + } + + @Override + public RedisFuture clientGetname() { + return dispatch(commandBuilder.clientGetname()); + } + + @Override + public RedisFuture clientKill(String addr) { + return dispatch(commandBuilder.clientKill(addr)); + } + + @Override + public RedisFuture clientKill(KillArgs killArgs) { + return dispatch(commandBuilder.clientKill(killArgs)); + } + + @Override + public RedisFuture clientList() { + return dispatch(commandBuilder.clientList()); + } + + @Override + public RedisFuture clientId() { + return dispatch(commandBuilder.clientId()); + } + + @Override + public RedisFuture clientPause(long timeout) { + return dispatch(commandBuilder.clientPause(timeout)); + } + + @Override + public RedisFuture clientSetname(K name) { + return dispatch(commandBuilder.clientSetname(name)); + } + + @Override + public RedisFuture clientUnblock(long id, UnblockType type) { + return dispatch(commandBuilder.clientUnblock(id, type)); + } + + @Override + public RedisFuture clusterAddSlots(int... slots) { + return dispatch(commandBuilder.clusterAddslots(slots)); + } + + @Override + public RedisFuture clusterBumpepoch() { + return dispatch(commandBuilder.clusterBumpepoch()); + } + + @Override + public RedisFuture clusterCountFailureReports(String nodeId) { + return dispatch(commandBuilder.clusterCountFailureReports(nodeId)); + } + + @Override + public RedisFuture clusterCountKeysInSlot(int slot) { + return dispatch(commandBuilder.clusterCountKeysInSlot(slot)); + } + + @Override + public RedisFuture clusterDelSlots(int... 
slots) { + return dispatch(commandBuilder.clusterDelslots(slots)); + } + + @Override + public RedisFuture clusterFailover(boolean force) { + return dispatch(commandBuilder.clusterFailover(force)); + } + + @Override + public RedisFuture clusterFlushslots() { + return dispatch(commandBuilder.clusterFlushslots()); + } + + @Override + public RedisFuture clusterForget(String nodeId) { + return dispatch(commandBuilder.clusterForget(nodeId)); + } + + @Override + public RedisFuture> clusterGetKeysInSlot(int slot, int count) { + return dispatch(commandBuilder.clusterGetKeysInSlot(slot, count)); + } + + @Override + public RedisFuture clusterInfo() { + return dispatch(commandBuilder.clusterInfo()); + } + + @Override + public RedisFuture clusterKeyslot(K key) { + return dispatch(commandBuilder.clusterKeyslot(key)); + } + + @Override + public RedisFuture clusterMeet(String ip, int port) { + return dispatch(commandBuilder.clusterMeet(ip, port)); + } + + @Override + public RedisFuture clusterMyId() { + return dispatch(commandBuilder.clusterMyId()); + } + + @Override + public RedisFuture clusterNodes() { + return dispatch(commandBuilder.clusterNodes()); + } + + @Override + public RedisFuture clusterReplicate(String nodeId) { + return dispatch(commandBuilder.clusterReplicate(nodeId)); + } + + @Override + public RedisFuture clusterReset(boolean hard) { + return dispatch(commandBuilder.clusterReset(hard)); + } + + @Override + public RedisFuture clusterSaveconfig() { + return dispatch(commandBuilder.clusterSaveconfig()); + } + + @Override + public RedisFuture clusterSetConfigEpoch(long configEpoch) { + return dispatch(commandBuilder.clusterSetConfigEpoch(configEpoch)); + } + + @Override + public RedisFuture clusterSetSlotImporting(int slot, String nodeId) { + return dispatch(commandBuilder.clusterSetSlotImporting(slot, nodeId)); + } + + @Override + public RedisFuture clusterSetSlotMigrating(int slot, String nodeId) { + return dispatch(commandBuilder.clusterSetSlotMigrating(slot, nodeId)); + } + + @Override + public RedisFuture clusterSetSlotNode(int slot, String nodeId) { + return dispatch(commandBuilder.clusterSetSlotNode(slot, nodeId)); + } + + @Override + public RedisFuture clusterSetSlotStable(int slot) { + return dispatch(commandBuilder.clusterSetSlotStable(slot)); + } + + @Override + public RedisFuture> clusterSlaves(String nodeId) { + return dispatch(commandBuilder.clusterSlaves(nodeId)); + } + + @Override + public RedisFuture> clusterSlots() { + return dispatch(commandBuilder.clusterSlots()); + } + + @Override + public RedisFuture> command() { + return dispatch(commandBuilder.command()); + } + + @Override + public RedisFuture commandCount() { + return dispatch(commandBuilder.commandCount()); + } + + @Override + public RedisFuture> commandInfo(String... commands) { + return dispatch(commandBuilder.commandInfo(commands)); + } + + @Override + public RedisFuture> commandInfo(CommandType... 
commands) { + String[] stringCommands = new String[commands.length]; + for (int i = 0; i < commands.length; i++) { + stringCommands[i] = commands[i].name(); + } + + return commandInfo(stringCommands); + } + + @Override + public RedisFuture> configGet(String parameter) { + return dispatch(commandBuilder.configGet(parameter)); + } + + @Override + public RedisFuture configResetstat() { + return dispatch(commandBuilder.configResetstat()); + } + + @Override + public RedisFuture configRewrite() { + return dispatch(commandBuilder.configRewrite()); + } + + @Override + public RedisFuture configSet(String parameter, String value) { + return dispatch(commandBuilder.configSet(parameter, value)); + } + + @Override + public RedisFuture dbsize() { + return dispatch(commandBuilder.dbsize()); + } + + @Override + public RedisFuture debugCrashAndRecover(Long delay) { + return dispatch(commandBuilder.debugCrashAndRecover(delay)); + } + + @Override + public RedisFuture debugHtstats(int db) { + return dispatch(commandBuilder.debugHtstats(db)); + } + + @Override + public RedisFuture debugObject(K key) { + return dispatch(commandBuilder.debugObject(key)); + } + + @Override + public void debugOom() { + dispatch(commandBuilder.debugOom()); + } + + @Override + public RedisFuture debugReload() { + return dispatch(commandBuilder.debugReload()); + } + + @Override + public RedisFuture debugRestart(Long delay) { + return dispatch(commandBuilder.debugRestart(delay)); + } + + @Override + public RedisFuture debugSdslen(K key) { + return dispatch(commandBuilder.debugSdslen(key)); + } + + @Override + public void debugSegfault() { + dispatch(commandBuilder.debugSegfault()); + } + + @Override + public RedisFuture decr(K key) { + return dispatch(commandBuilder.decr(key)); + } + + @Override + public RedisFuture decrby(K key, long amount) { + return dispatch(commandBuilder.decrby(key, amount)); + } + + @Override + public RedisFuture del(K... 
keys) { + return dispatch(commandBuilder.del(keys)); + } + + public RedisFuture del(Iterable keys) { + return dispatch(commandBuilder.del(keys)); + } + + @Override + public String digest(String script) { + return digest(encodeScript(script)); + } + + @Override + public String digest(byte[] script) { + return LettuceStrings.digest(script); + } + + @Override + public RedisFuture discard() { + return dispatch(commandBuilder.discard()); + } + + @Override + public RedisFuture dispatch(ProtocolKeyword type, CommandOutput output) { + + LettuceAssert.notNull(type, "Command type must not be null"); + LettuceAssert.notNull(output, "CommandOutput type must not be null"); + + return dispatch(new AsyncCommand<>(new Command<>(type, output))); + } + + @Override + public RedisFuture dispatch(ProtocolKeyword type, CommandOutput output, CommandArgs args) { + + LettuceAssert.notNull(type, "Command type must not be null"); + LettuceAssert.notNull(output, "CommandOutput type must not be null"); + LettuceAssert.notNull(args, "CommandArgs type must not be null"); + + return dispatch(new AsyncCommand<>(new Command<>(type, output, args))); + } + + protected RedisFuture dispatch(CommandType type, CommandOutput output) { + return dispatch(type, output, null); + } + + protected RedisFuture dispatch(CommandType type, CommandOutput output, CommandArgs args) { + return dispatch(new AsyncCommand<>(new Command<>(type, output, args))); + } + + public AsyncCommand dispatch(RedisCommand cmd) { + AsyncCommand asyncCommand = new AsyncCommand<>(cmd); + RedisCommand dispatched = connection.dispatch(asyncCommand); + if (dispatched instanceof AsyncCommand) { + return (AsyncCommand) dispatched; + } + return asyncCommand; + } + + @Override + public RedisFuture dump(K key) { + return dispatch(commandBuilder.dump(key)); + } + + @Override + public RedisFuture echo(V msg) { + return dispatch(commandBuilder.echo(msg)); + } + + @Override + @SuppressWarnings("unchecked") + public RedisFuture eval(String script, ScriptOutputType type, K... keys) { + return eval(encodeScript(script), type, keys); + } + + @Override + public RedisFuture eval(byte[] script, ScriptOutputType type, K... keys) { + return (RedisFuture) dispatch(commandBuilder.eval(script, type, keys)); + } + + @Override + @SuppressWarnings("unchecked") + public RedisFuture eval(String script, ScriptOutputType type, K[] keys, V... values) { + return eval(encodeScript(script), type, keys, values); + } + + @Override + public RedisFuture eval(byte[] script, ScriptOutputType type, K[] keys, V... values) { + return (RedisFuture) dispatch(commandBuilder.eval(script, type, keys, values)); + } + + @Override + @SuppressWarnings("unchecked") + public RedisFuture evalsha(String digest, ScriptOutputType type, K... keys) { + return (RedisFuture) dispatch(commandBuilder.evalsha(digest, type, keys)); + } + + @Override + @SuppressWarnings("unchecked") + public RedisFuture evalsha(String digest, ScriptOutputType type, K[] keys, V... values) { + return (RedisFuture) dispatch(commandBuilder.evalsha(digest, type, keys, values)); + } + + @Override + public RedisFuture exec() { + return dispatch(EXEC, null); + } + + @Override + public RedisFuture exists(K... 
keys) { + return dispatch(commandBuilder.exists(keys)); + } + + public RedisFuture exists(Iterable keys) { + return dispatch(commandBuilder.exists(keys)); + } + + @Override + public RedisFuture expire(K key, long seconds) { + return dispatch(commandBuilder.expire(key, seconds)); + } + + @Override + public RedisFuture expireat(K key, Date timestamp) { + return expireat(key, timestamp.getTime() / 1000); + } + + @Override + public RedisFuture expireat(K key, long timestamp) { + return dispatch(commandBuilder.expireat(key, timestamp)); + } + + @Override + public void flushCommands() { + connection.flushCommands(); + } + + @Override + public RedisFuture flushall() { + return dispatch(commandBuilder.flushall()); + } + + @Override + public RedisFuture flushallAsync() { + return dispatch(commandBuilder.flushallAsync()); + } + + @Override + public RedisFuture flushdb() { + return dispatch(commandBuilder.flushdb()); + } + + @Override + public RedisFuture flushdbAsync() { + return dispatch(commandBuilder.flushdbAsync()); + } + + @Override + public RedisFuture geoadd(K key, double longitude, double latitude, V member) { + return dispatch(commandBuilder.geoadd(key, longitude, latitude, member)); + } + + @Override + public RedisFuture geoadd(K key, Object... lngLatMember) { + return dispatch(commandBuilder.geoadd(key, lngLatMember)); + } + + @Override + public RedisFuture geodist(K key, V from, V to, GeoArgs.Unit unit) { + return dispatch(commandBuilder.geodist(key, from, to, unit)); + } + + @Override + public RedisFuture>> geohash(K key, V... members) { + return dispatch(commandBuilder.geohash(key, members)); + } + + @Override + public RedisFuture> geopos(K key, V... members) { + return dispatch(commandBuilder.geopos(key, members)); + } + + @Override + public RedisFuture> georadius(K key, double longitude, double latitude, double distance, GeoArgs.Unit unit) { + return dispatch(commandBuilder.georadius(GEORADIUS, key, longitude, latitude, distance, unit.name())); + } + + @Override + public RedisFuture>> georadius(K key, double longitude, double latitude, double distance, + GeoArgs.Unit unit, GeoArgs geoArgs) { + return dispatch(commandBuilder.georadius(GEORADIUS, key, longitude, latitude, distance, unit.name(), geoArgs)); + } + + @Override + public RedisFuture georadius(K key, double longitude, double latitude, double distance, Unit unit, + GeoRadiusStoreArgs geoRadiusStoreArgs) { + return dispatch(commandBuilder.georadius(key, longitude, latitude, distance, unit.name(), geoRadiusStoreArgs)); + } + + protected RedisFuture> georadius_ro(K key, double longitude, double latitude, double distance, GeoArgs.Unit unit) { + return dispatch(commandBuilder.georadius(GEORADIUS_RO, key, longitude, latitude, distance, unit.name())); + } + + protected RedisFuture>> georadius_ro(K key, double longitude, double latitude, double distance, + GeoArgs.Unit unit, GeoArgs geoArgs) { + return dispatch(commandBuilder.georadius(GEORADIUS_RO, key, longitude, latitude, distance, unit.name(), geoArgs)); + } + + @Override + public RedisFuture> georadiusbymember(K key, V member, double distance, GeoArgs.Unit unit) { + return dispatch(commandBuilder.georadiusbymember(GEORADIUSBYMEMBER, key, member, distance, unit.name())); + } + + @Override + public RedisFuture>> georadiusbymember(K key, V member, double distance, GeoArgs.Unit unit, + GeoArgs geoArgs) { + return dispatch(commandBuilder.georadiusbymember(GEORADIUSBYMEMBER, key, member, distance, unit.name(), geoArgs)); + } + + @Override + public RedisFuture georadiusbymember(K key, V 
member, double distance, Unit unit, + GeoRadiusStoreArgs geoRadiusStoreArgs) { + return dispatch(commandBuilder.georadiusbymember(key, member, distance, unit.name(), geoRadiusStoreArgs)); + } + + protected RedisFuture> georadiusbymember_ro(K key, V member, double distance, GeoArgs.Unit unit) { + return dispatch(commandBuilder.georadiusbymember(GEORADIUSBYMEMBER_RO, key, member, distance, unit.name())); + } + + protected RedisFuture>> georadiusbymember_ro(K key, V member, double distance, GeoArgs.Unit unit, + GeoArgs geoArgs) { + return dispatch(commandBuilder.georadiusbymember(GEORADIUSBYMEMBER_RO, key, member, distance, unit.name(), geoArgs)); + } + + @Override + public RedisFuture get(K key) { + return dispatch(commandBuilder.get(key)); + } + + public StatefulConnection getConnection() { + return connection; + } + + @Override + public RedisFuture getbit(K key, long offset) { + return dispatch(commandBuilder.getbit(key, offset)); + } + + @Override + public RedisFuture getrange(K key, long start, long end) { + return dispatch(commandBuilder.getrange(key, start, end)); + } + + @Override + public RedisFuture getset(K key, V value) { + return dispatch(commandBuilder.getset(key, value)); + } + + @Override + public RedisFuture hdel(K key, K... fields) { + return dispatch(commandBuilder.hdel(key, fields)); + } + + @Override + public RedisFuture hexists(K key, K field) { + return dispatch(commandBuilder.hexists(key, field)); + } + + @Override + public RedisFuture hget(K key, K field) { + return dispatch(commandBuilder.hget(key, field)); + } + + @Override + public RedisFuture> hgetall(K key) { + return dispatch(commandBuilder.hgetall(key)); + } + + @Override + public RedisFuture hgetall(KeyValueStreamingChannel channel, K key) { + return dispatch(commandBuilder.hgetall(channel, key)); + } + + @Override + public RedisFuture hincrby(K key, K field, long amount) { + return dispatch(commandBuilder.hincrby(key, field, amount)); + } + + @Override + public RedisFuture hincrbyfloat(K key, K field, double amount) { + return dispatch(commandBuilder.hincrbyfloat(key, field, amount)); + } + + @Override + public RedisFuture> hkeys(K key) { + return dispatch(commandBuilder.hkeys(key)); + } + + @Override + public RedisFuture hkeys(KeyStreamingChannel channel, K key) { + return dispatch(commandBuilder.hkeys(channel, key)); + } + + @Override + public RedisFuture hlen(K key) { + return dispatch(commandBuilder.hlen(key)); + } + + @Override + public RedisFuture>> hmget(K key, K... fields) { + return dispatch(commandBuilder.hmgetKeyValue(key, fields)); + } + + @Override + public RedisFuture hmget(KeyValueStreamingChannel channel, K key, K... 
fields) { + return dispatch(commandBuilder.hmget(channel, key, fields)); + } + + @Override + public RedisFuture hmset(K key, Map map) { + return dispatch(commandBuilder.hmset(key, map)); + } + + @Override + public RedisFuture> hscan(K key) { + return dispatch(commandBuilder.hscan(key)); + } + + @Override + public RedisFuture> hscan(K key, ScanArgs scanArgs) { + return dispatch(commandBuilder.hscan(key, scanArgs)); + } + + @Override + public RedisFuture> hscan(K key, ScanCursor scanCursor, ScanArgs scanArgs) { + return dispatch(commandBuilder.hscan(key, scanCursor, scanArgs)); + } + + @Override + public RedisFuture> hscan(K key, ScanCursor scanCursor) { + return dispatch(commandBuilder.hscan(key, scanCursor)); + } + + @Override + public RedisFuture hscan(KeyValueStreamingChannel channel, K key) { + return dispatch(commandBuilder.hscanStreaming(channel, key)); + } + + @Override + public RedisFuture hscan(KeyValueStreamingChannel channel, K key, ScanArgs scanArgs) { + return dispatch(commandBuilder.hscanStreaming(channel, key, scanArgs)); + } + + @Override + public RedisFuture hscan(KeyValueStreamingChannel channel, K key, ScanCursor scanCursor, + ScanArgs scanArgs) { + return dispatch(commandBuilder.hscanStreaming(channel, key, scanCursor, scanArgs)); + } + + @Override + public RedisFuture hscan(KeyValueStreamingChannel channel, K key, ScanCursor scanCursor) { + return dispatch(commandBuilder.hscanStreaming(channel, key, scanCursor)); + } + + @Override + public RedisFuture hset(K key, K field, V value) { + return dispatch(commandBuilder.hset(key, field, value)); + } + + @Override + public RedisFuture hset(K key, Map map) { + return dispatch(commandBuilder.hset(key, map)); + } + + @Override + public RedisFuture hsetnx(K key, K field, V value) { + return dispatch(commandBuilder.hsetnx(key, field, value)); + } + + @Override + public RedisFuture hstrlen(K key, K field) { + return dispatch(commandBuilder.hstrlen(key, field)); + } + + @Override + public RedisFuture> hvals(K key) { + return dispatch(commandBuilder.hvals(key)); + } + + @Override + public RedisFuture hvals(ValueStreamingChannel channel, K key) { + return dispatch(commandBuilder.hvals(channel, key)); + } + + @Override + public RedisFuture incr(K key) { + return dispatch(commandBuilder.incr(key)); + } + + @Override + public RedisFuture incrby(K key, long amount) { + return dispatch(commandBuilder.incrby(key, amount)); + } + + @Override + public RedisFuture incrbyfloat(K key, double amount) { + return dispatch(commandBuilder.incrbyfloat(key, amount)); + } + + @Override + public RedisFuture info() { + return dispatch(commandBuilder.info()); + } + + @Override + public RedisFuture info(String section) { + return dispatch(commandBuilder.info(section)); + } + + @Override + public boolean isOpen() { + return connection.isOpen(); + } + + @Override + public RedisFuture> keys(K pattern) { + return dispatch(commandBuilder.keys(pattern)); + } + + @Override + public RedisFuture keys(KeyStreamingChannel channel, K pattern) { + return dispatch(commandBuilder.keys(channel, pattern)); + } + + @Override + public RedisFuture lastsave() { + return dispatch(commandBuilder.lastsave()); + } + + @Override + public RedisFuture lindex(K key, long index) { + return dispatch(commandBuilder.lindex(key, index)); + } + + @Override + public RedisFuture linsert(K key, boolean before, V pivot, V value) { + return dispatch(commandBuilder.linsert(key, before, pivot, value)); + } + + @Override + public RedisFuture llen(K key) { + return dispatch(commandBuilder.llen(key)); 
+ } + + @Override + public RedisFuture lpop(K key) { + return dispatch(commandBuilder.lpop(key)); + } + + @Override + public RedisFuture lpush(K key, V... values) { + return dispatch(commandBuilder.lpush(key, values)); + } + + @Override + public RedisFuture lpushx(K key, V... values) { + return dispatch(commandBuilder.lpushx(key, values)); + } + + @Override + public RedisFuture> lrange(K key, long start, long stop) { + return dispatch(commandBuilder.lrange(key, start, stop)); + } + + @Override + public RedisFuture lrange(ValueStreamingChannel channel, K key, long start, long stop) { + return dispatch(commandBuilder.lrange(channel, key, start, stop)); + } + + @Override + public RedisFuture lrem(K key, long count, V value) { + return dispatch(commandBuilder.lrem(key, count, value)); + } + + @Override + public RedisFuture lset(K key, long index, V value) { + return dispatch(commandBuilder.lset(key, index, value)); + } + + @Override + public RedisFuture ltrim(K key, long start, long stop) { + return dispatch(commandBuilder.ltrim(key, start, stop)); + } + + @Override + public RedisFuture memoryUsage(K key) { + return dispatch(commandBuilder.memoryUsage(key)); + } + + @Override + public RedisFuture>> mget(K... keys) { + return dispatch(commandBuilder.mgetKeyValue(keys)); + } + + public RedisFuture>> mget(Iterable keys) { + return dispatch(commandBuilder.mgetKeyValue(keys)); + } + + @Override + public RedisFuture mget(KeyValueStreamingChannel channel, K... keys) { + return dispatch(commandBuilder.mget(channel, keys)); + } + + public RedisFuture mget(KeyValueStreamingChannel channel, Iterable keys) { + return dispatch(commandBuilder.mget(channel, keys)); + } + + @Override + public RedisFuture migrate(String host, int port, K key, int db, long timeout) { + return dispatch(commandBuilder.migrate(host, port, key, db, timeout)); + } + + @Override + public RedisFuture migrate(String host, int port, int db, long timeout, MigrateArgs migrateArgs) { + return dispatch(commandBuilder.migrate(host, port, db, timeout, migrateArgs)); + } + + @Override + public RedisFuture move(K key, int db) { + return dispatch(commandBuilder.move(key, db)); + } + + @Override + public RedisFuture mset(Map map) { + return dispatch(commandBuilder.mset(map)); + } + + @Override + public RedisFuture msetnx(Map map) { + return dispatch(commandBuilder.msetnx(map)); + } + + @Override + public RedisFuture multi() { + return dispatch(commandBuilder.multi()); + } + + @Override + public RedisFuture objectEncoding(K key) { + return dispatch(commandBuilder.objectEncoding(key)); + } + + @Override + public RedisFuture objectIdletime(K key) { + return dispatch(commandBuilder.objectIdletime(key)); + } + + @Override + public RedisFuture objectRefcount(K key) { + return dispatch(commandBuilder.objectRefcount(key)); + } + + @Override + public RedisFuture persist(K key) { + return dispatch(commandBuilder.persist(key)); + } + + @Override + public RedisFuture pexpire(K key, long milliseconds) { + return dispatch(commandBuilder.pexpire(key, milliseconds)); + } + + @Override + public RedisFuture pexpireat(K key, Date timestamp) { + return pexpireat(key, timestamp.getTime()); + } + + @Override + public RedisFuture pexpireat(K key, long timestamp) { + return dispatch(commandBuilder.pexpireat(key, timestamp)); + } + + @Override + public RedisFuture pfadd(K key, V... values) { + return dispatch(commandBuilder.pfadd(key, values)); + } + + @Override + public RedisFuture pfcount(K... 
keys) { + return dispatch(commandBuilder.pfcount(keys)); + } + + @Override + public RedisFuture pfmerge(K destkey, K... sourcekeys) { + return dispatch(commandBuilder.pfmerge(destkey, sourcekeys)); + } + + @Override + public RedisFuture ping() { + return dispatch(commandBuilder.ping()); + } + + @Override + public RedisFuture psetex(K key, long milliseconds, V value) { + return dispatch(commandBuilder.psetex(key, milliseconds, value)); + } + + @Override + public RedisFuture pttl(K key) { + return dispatch(commandBuilder.pttl(key)); + } + + @Override + public RedisFuture publish(K channel, V message) { + return dispatch(commandBuilder.publish(channel, message)); + } + + @Override + public RedisFuture> pubsubChannels() { + return dispatch(commandBuilder.pubsubChannels()); + } + + @Override + public RedisFuture> pubsubChannels(K channel) { + return dispatch(commandBuilder.pubsubChannels(channel)); + } + + @Override + public RedisFuture pubsubNumpat() { + return dispatch(commandBuilder.pubsubNumpat()); + } + + @Override + public RedisFuture> pubsubNumsub(K... channels) { + return dispatch(commandBuilder.pubsubNumsub(channels)); + } + + @Override + public RedisFuture quit() { + return dispatch(commandBuilder.quit()); + } + + @Override + public RedisFuture randomkey() { + return dispatch(commandBuilder.randomkey()); + } + + @Override + public RedisFuture readOnly() { + return dispatch(commandBuilder.readOnly()); + } + + @Override + public RedisFuture readWrite() { + return dispatch(commandBuilder.readWrite()); + } + + @Override + public RedisFuture rename(K key, K newKey) { + return dispatch(commandBuilder.rename(key, newKey)); + } + + @Override + public RedisFuture renamenx(K key, K newKey) { + return dispatch(commandBuilder.renamenx(key, newKey)); + } + + @Override + public void reset() { + getConnection().reset(); + } + + @Override + public RedisFuture restore(K key, long ttl, byte[] value) { + return dispatch(commandBuilder.restore(key, value, RestoreArgs.Builder.ttl(ttl))); + } + + @Override + public RedisFuture restore(K key, byte[] value, RestoreArgs args) { + return dispatch(commandBuilder.restore(key, value, args)); + } + + @Override + public RedisFuture> role() { + return dispatch(commandBuilder.role()); + } + + @Override + public RedisFuture rpop(K key) { + return dispatch(commandBuilder.rpop(key)); + } + + @Override + public RedisFuture rpoplpush(K source, K destination) { + return dispatch(commandBuilder.rpoplpush(source, destination)); + } + + @Override + public RedisFuture rpush(K key, V... values) { + return dispatch(commandBuilder.rpush(key, values)); + } + + @Override + public RedisFuture rpushx(K key, V... values) { + return dispatch(commandBuilder.rpushx(key, values)); + } + + @Override + public RedisFuture sadd(K key, V... 
members) { + return dispatch(commandBuilder.sadd(key, members)); + } + + @Override + public RedisFuture save() { + return dispatch(commandBuilder.save()); + } + + @Override + public RedisFuture> scan() { + return dispatch(commandBuilder.scan()); + } + + @Override + public RedisFuture> scan(ScanArgs scanArgs) { + return dispatch(commandBuilder.scan(scanArgs)); + } + + @Override + public RedisFuture> scan(ScanCursor scanCursor, ScanArgs scanArgs) { + return dispatch(commandBuilder.scan(scanCursor, scanArgs)); + } + + @Override + public RedisFuture> scan(ScanCursor scanCursor) { + return dispatch(commandBuilder.scan(scanCursor)); + } + + @Override + public RedisFuture scan(KeyStreamingChannel channel) { + return dispatch(commandBuilder.scanStreaming(channel)); + } + + @Override + public RedisFuture scan(KeyStreamingChannel channel, ScanArgs scanArgs) { + return dispatch(commandBuilder.scanStreaming(channel, scanArgs)); + } + + @Override + public RedisFuture scan(KeyStreamingChannel channel, ScanCursor scanCursor, ScanArgs scanArgs) { + return dispatch(commandBuilder.scanStreaming(channel, scanCursor, scanArgs)); + } + + @Override + public RedisFuture scan(KeyStreamingChannel channel, ScanCursor scanCursor) { + return dispatch(commandBuilder.scanStreaming(channel, scanCursor)); + } + + @Override + public RedisFuture scard(K key) { + return dispatch(commandBuilder.scard(key)); + } + + @Override + public RedisFuture> scriptExists(String... digests) { + return dispatch(commandBuilder.scriptExists(digests)); + } + + @Override + public RedisFuture scriptFlush() { + return dispatch(commandBuilder.scriptFlush()); + } + + @Override + public RedisFuture scriptKill() { + return dispatch(commandBuilder.scriptKill()); + } + + @Override + public RedisFuture scriptLoad(String script) { + return scriptLoad(encodeScript(script)); + } + + @Override + public RedisFuture scriptLoad(byte[] script) { + return dispatch(commandBuilder.scriptLoad(script)); + } + + @Override + public RedisFuture> sdiff(K... keys) { + return dispatch(commandBuilder.sdiff(keys)); + } + + @Override + public RedisFuture sdiff(ValueStreamingChannel channel, K... keys) { + return dispatch(commandBuilder.sdiff(channel, keys)); + } + + @Override + public RedisFuture sdiffstore(K destination, K... 
keys) { + return dispatch(commandBuilder.sdiffstore(destination, keys)); + } + + public RedisFuture select(int db) { + return dispatch(commandBuilder.select(db)); + } + + @Override + public RedisFuture set(K key, V value) { + return dispatch(commandBuilder.set(key, value)); + } + + @Override + public RedisFuture set(K key, V value, SetArgs setArgs) { + return dispatch(commandBuilder.set(key, value, setArgs)); + } + + @Override + public void setAutoFlushCommands(boolean autoFlush) { + connection.setAutoFlushCommands(autoFlush); + } + + public void setTimeout(Duration timeout) { + connection.setTimeout(timeout); + } + + @Override + public RedisFuture setbit(K key, long offset, int value) { + return dispatch(commandBuilder.setbit(key, offset, value)); + } + + @Override + public RedisFuture setex(K key, long seconds, V value) { + return dispatch(commandBuilder.setex(key, seconds, value)); + } + + @Override + public RedisFuture setnx(K key, V value) { + return dispatch(commandBuilder.setnx(key, value)); + } + + @Override + public RedisFuture setrange(K key, long offset, V value) { + return dispatch(commandBuilder.setrange(key, offset, value)); + } + + @Override + public void shutdown(boolean save) { + dispatch(commandBuilder.shutdown(save)); + } + + @Override + public RedisFuture> sinter(K... keys) { + return dispatch(commandBuilder.sinter(keys)); + } + + @Override + public RedisFuture sinter(ValueStreamingChannel channel, K... keys) { + return dispatch(commandBuilder.sinter(channel, keys)); + } + + @Override + public RedisFuture sinterstore(K destination, K... keys) { + return dispatch(commandBuilder.sinterstore(destination, keys)); + } + + @Override + public RedisFuture sismember(K key, V member) { + return dispatch(commandBuilder.sismember(key, member)); + } + + @Override + public RedisFuture slaveof(String host, int port) { + return dispatch(commandBuilder.slaveof(host, port)); + } + + @Override + public RedisFuture slaveofNoOne() { + return dispatch(commandBuilder.slaveofNoOne()); + } + + @Override + public RedisFuture> slowlogGet() { + return dispatch(commandBuilder.slowlogGet()); + } + + @Override + public RedisFuture> slowlogGet(int count) { + return dispatch(commandBuilder.slowlogGet(count)); + } + + @Override + public RedisFuture slowlogLen() { + return dispatch(commandBuilder.slowlogLen()); + } + + @Override + public RedisFuture slowlogReset() { + return dispatch(commandBuilder.slowlogReset()); + } + + @Override + public RedisFuture> smembers(K key) { + return dispatch(commandBuilder.smembers(key)); + } + + @Override + public RedisFuture smembers(ValueStreamingChannel channel, K key) { + return dispatch(commandBuilder.smembers(channel, key)); + } + + @Override + public RedisFuture smove(K source, K destination, V member) { + return dispatch(commandBuilder.smove(source, destination, member)); + } + + @Override + public RedisFuture> sort(K key) { + return dispatch(commandBuilder.sort(key)); + } + + @Override + public RedisFuture sort(ValueStreamingChannel channel, K key) { + return dispatch(commandBuilder.sort(channel, key)); + } + + @Override + public RedisFuture> sort(K key, SortArgs sortArgs) { + return dispatch(commandBuilder.sort(key, sortArgs)); + } + + @Override + public RedisFuture sort(ValueStreamingChannel channel, K key, SortArgs sortArgs) { + return dispatch(commandBuilder.sort(channel, key, sortArgs)); + } + + @Override + public RedisFuture sortStore(K key, SortArgs sortArgs, K destination) { + return dispatch(commandBuilder.sortStore(key, sortArgs, destination)); + } + + 
@Override + public RedisFuture spop(K key) { + return dispatch(commandBuilder.spop(key)); + } + + @Override + public RedisFuture> spop(K key, long count) { + return dispatch(commandBuilder.spop(key, count)); + } + + @Override + public RedisFuture srandmember(K key) { + return dispatch(commandBuilder.srandmember(key)); + } + + @Override + public RedisFuture> srandmember(K key, long count) { + return dispatch(commandBuilder.srandmember(key, count)); + } + + @Override + public RedisFuture srandmember(ValueStreamingChannel channel, K key, long count) { + return dispatch(commandBuilder.srandmember(channel, key, count)); + } + + @Override + public RedisFuture srem(K key, V... members) { + return dispatch(commandBuilder.srem(key, members)); + } + + @Override + public RedisFuture> sscan(K key) { + return dispatch(commandBuilder.sscan(key)); + } + + @Override + public RedisFuture> sscan(K key, ScanArgs scanArgs) { + return dispatch(commandBuilder.sscan(key, scanArgs)); + } + + @Override + public RedisFuture> sscan(K key, ScanCursor scanCursor, ScanArgs scanArgs) { + return dispatch(commandBuilder.sscan(key, scanCursor, scanArgs)); + } + + @Override + public RedisFuture> sscan(K key, ScanCursor scanCursor) { + return dispatch(commandBuilder.sscan(key, scanCursor)); + } + + @Override + public RedisFuture sscan(ValueStreamingChannel channel, K key) { + return dispatch(commandBuilder.sscanStreaming(channel, key)); + } + + @Override + public RedisFuture sscan(ValueStreamingChannel channel, K key, ScanArgs scanArgs) { + return dispatch(commandBuilder.sscanStreaming(channel, key, scanArgs)); + } + + @Override + public RedisFuture sscan(ValueStreamingChannel channel, K key, ScanCursor scanCursor, + ScanArgs scanArgs) { + return dispatch(commandBuilder.sscanStreaming(channel, key, scanCursor, scanArgs)); + } + + @Override + public RedisFuture sscan(ValueStreamingChannel channel, K key, ScanCursor scanCursor) { + return dispatch(commandBuilder.sscanStreaming(channel, key, scanCursor)); + } + + @Override + public RedisFuture strlen(K key) { + return dispatch(commandBuilder.strlen(key)); + } + + @Override + public RedisFuture> sunion(K... keys) { + return dispatch(commandBuilder.sunion(keys)); + } + + @Override + public RedisFuture sunion(ValueStreamingChannel channel, K... keys) { + return dispatch(commandBuilder.sunion(channel, keys)); + } + + @Override + public RedisFuture sunionstore(K destination, K... keys) { + return dispatch(commandBuilder.sunionstore(destination, keys)); + } + + public RedisFuture swapdb(int db1, int db2) { + return dispatch(commandBuilder.swapdb(db1, db2)); + } + + @Override + public RedisFuture> time() { + return dispatch(commandBuilder.time()); + } + + @Override + public RedisFuture touch(K... keys) { + return dispatch(commandBuilder.touch(keys)); + } + + public RedisFuture touch(Iterable keys) { + return dispatch(commandBuilder.touch(keys)); + } + + @Override + public RedisFuture ttl(K key) { + return dispatch(commandBuilder.ttl(key)); + } + + @Override + public RedisFuture type(K key) { + return dispatch(commandBuilder.type(key)); + } + + @Override + public RedisFuture unlink(K... 
keys) { + return dispatch(commandBuilder.unlink(keys)); + } + + public RedisFuture unlink(Iterable keys) { + return dispatch(commandBuilder.unlink(keys)); + } + + @Override + public RedisFuture unwatch() { + return dispatch(commandBuilder.unwatch()); + } + + @Override + public RedisFuture waitForReplication(int replicas, long timeout) { + return dispatch(commandBuilder.wait(replicas, timeout)); + } + + @Override + public RedisFuture watch(K... keys) { + return dispatch(commandBuilder.watch(keys)); + } + + @Override + public RedisFuture xack(K key, K group, String... messageIds) { + return dispatch(commandBuilder.xack(key, group, messageIds)); + } + + @Override + public RedisFuture xadd(K key, Map body) { + return dispatch(commandBuilder.xadd(key, null, body)); + } + + @Override + public RedisFuture xadd(K key, XAddArgs args, Map body) { + return dispatch(commandBuilder.xadd(key, args, body)); + } + + @Override + public RedisFuture xadd(K key, Object... keysAndValues) { + return dispatch(commandBuilder.xadd(key, null, keysAndValues)); + } + + @Override + public RedisFuture xadd(K key, XAddArgs args, Object... keysAndValues) { + return dispatch(commandBuilder.xadd(key, args, keysAndValues)); + } + + @Override + public RedisFuture>> xclaim(K key, Consumer consumer, long minIdleTime, String... messageIds) { + return dispatch(commandBuilder.xclaim(key, consumer, XClaimArgs.Builder.minIdleTime(minIdleTime), messageIds)); + } + + @Override + public RedisFuture>> xclaim(K key, Consumer consumer, XClaimArgs args, String... messageIds) { + return dispatch(commandBuilder.xclaim(key, consumer, args, messageIds)); + } + + @Override + public RedisFuture xdel(K key, String... messageIds) { + return dispatch(commandBuilder.xdel(key, messageIds)); + } + + @Override + public RedisFuture xgroupCreate(XReadArgs.StreamOffset offset, K group) { + return dispatch(commandBuilder.xgroupCreate(offset, group, null)); + } + + @Override + public RedisFuture xgroupCreate(XReadArgs.StreamOffset offset, K group, XGroupCreateArgs args) { + return dispatch(commandBuilder.xgroupCreate(offset, group, args)); + } + + @Override + public RedisFuture xgroupDelconsumer(K key, Consumer consumer) { + return dispatch(commandBuilder.xgroupDelconsumer(key, consumer)); + } + + @Override + public RedisFuture xgroupDestroy(K key, K group) { + return dispatch(commandBuilder.xgroupDestroy(key, group)); + } + + @Override + public RedisFuture xgroupSetid(XReadArgs.StreamOffset offset, K group) { + return dispatch(commandBuilder.xgroupSetid(offset, group)); + } + + @Override + public RedisFuture> xinfoStream(K key) { + return dispatch(commandBuilder.xinfoStream(key)); + } + + @Override + public RedisFuture> xinfoGroups(K key) { + return dispatch(commandBuilder.xinfoGroups(key)); + } + + @Override + public RedisFuture> xinfoConsumers(K key, K group) { + return dispatch(commandBuilder.xinfoConsumers(key, group)); + } + + @Override + public RedisFuture xlen(K key) { + return dispatch(commandBuilder.xlen(key)); + } + + @Override + public RedisFuture> xpending(K key, K group) { + return dispatch(commandBuilder.xpending(key, group, Range.unbounded(), Limit.unlimited())); + } + + @Override + public RedisFuture> xpending(K key, K group, Range range, Limit limit) { + return dispatch(commandBuilder.xpending(key, group, range, limit)); + } + + @Override + public RedisFuture> xpending(K key, Consumer consumer, Range range, Limit limit) { + return dispatch(commandBuilder.xpending(key, consumer, range, limit)); + } + + @Override + public RedisFuture>> 
xrange(K key, Range range) { + return dispatch(commandBuilder.xrange(key, range, Limit.unlimited())); + } + + @Override + public RedisFuture>> xrange(K key, Range range, Limit limit) { + return dispatch(commandBuilder.xrange(key, range, limit)); + } + + @Override + public RedisFuture>> xread(XReadArgs.StreamOffset... streams) { + return dispatch(commandBuilder.xread(null, streams)); + } + + @Override + public RedisFuture>> xread(XReadArgs args, XReadArgs.StreamOffset... streams) { + return dispatch(commandBuilder.xread(args, streams)); + } + + @Override + public RedisFuture>> xreadgroup(Consumer consumer, XReadArgs.StreamOffset... streams) { + return dispatch(commandBuilder.xreadgroup(consumer, null, streams)); + } + + @Override + public RedisFuture>> xreadgroup(Consumer consumer, XReadArgs args, + XReadArgs.StreamOffset... streams) { + return dispatch(commandBuilder.xreadgroup(consumer, args, streams)); + } + + @Override + public RedisFuture>> xrevrange(K key, Range range) { + return dispatch(commandBuilder.xrevrange(key, range, Limit.unlimited())); + } + + @Override + public RedisFuture>> xrevrange(K key, Range range, Limit limit) { + return dispatch(commandBuilder.xrevrange(key, range, limit)); + } + + @Override + public RedisFuture xtrim(K key, long count) { + return xtrim(key, false, count); + } + + @Override + public RedisFuture xtrim(K key, boolean approximateTrimming, long count) { + return dispatch(commandBuilder.xtrim(key, approximateTrimming, count)); + } + + @Override + public RedisFuture>> bzpopmin(long timeout, K... keys) { + return dispatch(commandBuilder.bzpopmin(timeout, keys)); + } + + @Override + public RedisFuture>> bzpopmax(long timeout, K... keys) { + return dispatch(commandBuilder.bzpopmax(timeout, keys)); + } + + @Override + public RedisFuture zadd(K key, double score, V member) { + return dispatch(commandBuilder.zadd(key, null, score, member)); + } + + @Override + public RedisFuture zadd(K key, Object... scoresAndValues) { + return dispatch(commandBuilder.zadd(key, null, scoresAndValues)); + } + + @Override + public RedisFuture zadd(K key, ScoredValue... scoredValues) { + return dispatch(commandBuilder.zadd(key, null, (Object[]) scoredValues)); + } + + @Override + public RedisFuture zadd(K key, ZAddArgs zAddArgs, double score, V member) { + return dispatch(commandBuilder.zadd(key, zAddArgs, score, member)); + } + + @Override + public RedisFuture zadd(K key, ZAddArgs zAddArgs, Object... scoresAndValues) { + return dispatch(commandBuilder.zadd(key, zAddArgs, scoresAndValues)); + } + + @Override + public RedisFuture zadd(K key, ZAddArgs zAddArgs, ScoredValue... 
scoredValues) { + return dispatch(commandBuilder.zadd(key, zAddArgs, (Object[]) scoredValues)); + } + + @Override + public RedisFuture zaddincr(K key, double score, V member) { + return dispatch(commandBuilder.zaddincr(key, null, score, member)); + } + + @Override + public RedisFuture zaddincr(K key, ZAddArgs zAddArgs, double score, V member) { + return dispatch(commandBuilder.zaddincr(key, zAddArgs, score, member)); + } + + @Override + public RedisFuture zcard(K key) { + return dispatch(commandBuilder.zcard(key)); + } + + @Override + public RedisFuture zcount(K key, double min, double max) { + return dispatch(commandBuilder.zcount(key, min, max)); + } + + @Override + public RedisFuture zcount(K key, String min, String max) { + return dispatch(commandBuilder.zcount(key, min, max)); + } + + @Override + public RedisFuture zcount(K key, Range range) { + return dispatch(commandBuilder.zcount(key, range)); + } + + @Override + public RedisFuture zincrby(K key, double amount, V member) { + return dispatch(commandBuilder.zincrby(key, amount, member)); + } + + @Override + public RedisFuture zinterstore(K destination, K... keys) { + return dispatch(commandBuilder.zinterstore(destination, keys)); + } + + @Override + public RedisFuture zinterstore(K destination, ZStoreArgs storeArgs, K... keys) { + return dispatch(commandBuilder.zinterstore(destination, storeArgs, keys)); + } + + @Override + public RedisFuture zlexcount(K key, String min, String max) { + return dispatch(commandBuilder.zlexcount(key, min, max)); + } + + @Override + public RedisFuture zlexcount(K key, Range range) { + return dispatch(commandBuilder.zlexcount(key, range)); + } + + @Override + public RedisFuture> zpopmin(K key) { + return dispatch(commandBuilder.zpopmin(key)); + } + + @Override + public RedisFuture>> zpopmin(K key, long count) { + return dispatch(commandBuilder.zpopmin(key, count)); + } + + @Override + public RedisFuture> zpopmax(K key) { + return dispatch(commandBuilder.zpopmax(key)); + } + + @Override + public RedisFuture>> zpopmax(K key, long count) { + return dispatch(commandBuilder.zpopmax(key, count)); + } + + @Override + public RedisFuture> zrange(K key, long start, long stop) { + return dispatch(commandBuilder.zrange(key, start, stop)); + } + + @Override + public RedisFuture zrange(ValueStreamingChannel channel, K key, long start, long stop) { + return dispatch(commandBuilder.zrange(channel, key, start, stop)); + } + + @Override + public RedisFuture>> zrangeWithScores(K key, long start, long stop) { + return dispatch(commandBuilder.zrangeWithScores(key, start, stop)); + } + + @Override + public RedisFuture zrangeWithScores(ScoredValueStreamingChannel channel, K key, long start, long stop) { + return dispatch(commandBuilder.zrangeWithScores(channel, key, start, stop)); + } + + @Override + public RedisFuture> zrangebylex(K key, String min, String max) { + return dispatch(commandBuilder.zrangebylex(key, min, max)); + } + + @Override + public RedisFuture> zrangebylex(K key, Range range) { + return dispatch(commandBuilder.zrangebylex(key, range, Limit.unlimited())); + } + + @Override + public RedisFuture> zrangebylex(K key, String min, String max, long offset, long count) { + return dispatch(commandBuilder.zrangebylex(key, min, max, offset, count)); + } + + @Override + public RedisFuture> zrangebylex(K key, Range range, Limit limit) { + return dispatch(commandBuilder.zrangebylex(key, range, limit)); + } + + @Override + public RedisFuture> zrangebyscore(K key, double min, double max) { + return 
dispatch(commandBuilder.zrangebyscore(key, min, max)); + } + + @Override + public RedisFuture> zrangebyscore(K key, String min, String max) { + return dispatch(commandBuilder.zrangebyscore(key, min, max)); + } + + @Override + public RedisFuture> zrangebyscore(K key, Range range) { + return dispatch(commandBuilder.zrangebyscore(key, range, Limit.unlimited())); + } + + @Override + public RedisFuture> zrangebyscore(K key, double min, double max, long offset, long count) { + return dispatch(commandBuilder.zrangebyscore(key, min, max, offset, count)); + } + + @Override + public RedisFuture> zrangebyscore(K key, String min, String max, long offset, long count) { + return dispatch(commandBuilder.zrangebyscore(key, min, max, offset, count)); + } + + @Override + public RedisFuture> zrangebyscore(K key, Range range, Limit limit) { + return dispatch(commandBuilder.zrangebyscore(key, range, limit)); + } + + @Override + public RedisFuture zrangebyscore(ValueStreamingChannel channel, K key, double min, double max) { + return dispatch(commandBuilder.zrangebyscore(channel, key, min, max)); + } + + @Override + public RedisFuture zrangebyscore(ValueStreamingChannel channel, K key, String min, String max) { + return dispatch(commandBuilder.zrangebyscore(channel, key, min, max)); + } + + @Override + public RedisFuture zrangebyscore(ValueStreamingChannel channel, K key, Range range) { + return dispatch(commandBuilder.zrangebyscore(channel, key, range, Limit.unlimited())); + } + + @Override + public RedisFuture zrangebyscore(ValueStreamingChannel channel, K key, double min, double max, long offset, + long count) { + return dispatch(commandBuilder.zrangebyscore(channel, key, min, max, offset, count)); + } + + @Override + public RedisFuture zrangebyscore(ValueStreamingChannel channel, K key, String min, String max, long offset, + long count) { + return dispatch(commandBuilder.zrangebyscore(channel, key, min, max, offset, count)); + } + + @Override + public RedisFuture zrangebyscore(ValueStreamingChannel channel, K key, Range range, + Limit limit) { + return dispatch(commandBuilder.zrangebyscore(channel, key, range, limit)); + } + + @Override + public RedisFuture>> zrangebyscoreWithScores(K key, double min, double max) { + return dispatch(commandBuilder.zrangebyscoreWithScores(key, min, max)); + } + + @Override + public RedisFuture>> zrangebyscoreWithScores(K key, String min, String max) { + return dispatch(commandBuilder.zrangebyscoreWithScores(key, min, max)); + } + + @Override + public RedisFuture>> zrangebyscoreWithScores(K key, Range range) { + return dispatch(commandBuilder.zrangebyscoreWithScores(key, range, Limit.unlimited())); + } + + @Override + public RedisFuture>> zrangebyscoreWithScores(K key, double min, double max, long offset, long count) { + return dispatch(commandBuilder.zrangebyscoreWithScores(key, min, max, offset, count)); + } + + @Override + public RedisFuture>> zrangebyscoreWithScores(K key, String min, String max, long offset, long count) { + return dispatch(commandBuilder.zrangebyscoreWithScores(key, min, max, offset, count)); + } + + @Override + public RedisFuture>> zrangebyscoreWithScores(K key, Range range, Limit limit) { + return dispatch(commandBuilder.zrangebyscoreWithScores(key, range, limit)); + } + + @Override + public RedisFuture zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double min, double max) { + return dispatch(commandBuilder.zrangebyscoreWithScores(channel, key, min, max)); + } + + @Override + public RedisFuture 
zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String min, String max) { + return dispatch(commandBuilder.zrangebyscoreWithScores(channel, key, min, max)); + } + + @Override + public RedisFuture zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, + Range range) { + return dispatch(commandBuilder.zrangebyscoreWithScores(channel, key, range, Limit.unlimited())); + } + + @Override + public RedisFuture zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double min, double max, + long offset, long count) { + return dispatch(commandBuilder.zrangebyscoreWithScores(channel, key, min, max, offset, count)); + } + + @Override + public RedisFuture zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String min, String max, + long offset, long count) { + return dispatch(commandBuilder.zrangebyscoreWithScores(channel, key, min, max, offset, count)); + } + + @Override + public RedisFuture zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, + Range range, Limit limit) { + return dispatch(commandBuilder.zrangebyscoreWithScores(channel, key, range, limit)); + } + + @Override + public RedisFuture zrank(K key, V member) { + return dispatch(commandBuilder.zrank(key, member)); + } + + @Override + public RedisFuture zrem(K key, V... members) { + return dispatch(commandBuilder.zrem(key, members)); + } + + @Override + public RedisFuture zremrangebylex(K key, String min, String max) { + return dispatch(commandBuilder.zremrangebylex(key, min, max)); + } + + @Override + public RedisFuture zremrangebylex(K key, Range range) { + return dispatch(commandBuilder.zremrangebylex(key, range)); + } + + @Override + public RedisFuture zremrangebyrank(K key, long start, long stop) { + return dispatch(commandBuilder.zremrangebyrank(key, start, stop)); + } + + @Override + public RedisFuture zremrangebyscore(K key, double min, double max) { + return dispatch(commandBuilder.zremrangebyscore(key, min, max)); + } + + @Override + public RedisFuture zremrangebyscore(K key, String min, String max) { + return dispatch(commandBuilder.zremrangebyscore(key, min, max)); + } + + @Override + public RedisFuture zremrangebyscore(K key, Range range) { + return dispatch(commandBuilder.zremrangebyscore(key, range)); + } + + @Override + public RedisFuture> zrevrange(K key, long start, long stop) { + return dispatch(commandBuilder.zrevrange(key, start, stop)); + } + + @Override + public RedisFuture zrevrange(ValueStreamingChannel channel, K key, long start, long stop) { + return dispatch(commandBuilder.zrevrange(channel, key, start, stop)); + } + + @Override + public RedisFuture>> zrevrangeWithScores(K key, long start, long stop) { + return dispatch(commandBuilder.zrevrangeWithScores(key, start, stop)); + } + + @Override + public RedisFuture zrevrangeWithScores(ScoredValueStreamingChannel channel, K key, long start, long stop) { + return dispatch(commandBuilder.zrevrangeWithScores(channel, key, start, stop)); + } + + @Override + public RedisFuture> zrevrangebylex(K key, Range range) { + return dispatch(commandBuilder.zrevrangebylex(key, range, Limit.unlimited())); + } + + @Override + public RedisFuture> zrevrangebylex(K key, Range range, Limit limit) { + return dispatch(commandBuilder.zrevrangebylex(key, range, limit)); + } + + @Override + public RedisFuture> zrevrangebyscore(K key, double max, double min) { + return dispatch(commandBuilder.zrevrangebyscore(key, max, min)); + } + + @Override + public RedisFuture> zrevrangebyscore(K key, String max, String min) { + 
return dispatch(commandBuilder.zrevrangebyscore(key, max, min)); + } + + @Override + public RedisFuture> zrevrangebyscore(K key, Range range) { + return dispatch(commandBuilder.zrevrangebyscore(key, range, Limit.unlimited())); + } + + @Override + public RedisFuture> zrevrangebyscore(K key, double max, double min, long offset, long count) { + return dispatch(commandBuilder.zrevrangebyscore(key, max, min, offset, count)); + } + + @Override + public RedisFuture> zrevrangebyscore(K key, String max, String min, long offset, long count) { + return dispatch(commandBuilder.zrevrangebyscore(key, max, min, offset, count)); + } + + @Override + public RedisFuture> zrevrangebyscore(K key, Range range, Limit limit) { + return dispatch(commandBuilder.zrevrangebyscore(key, range, limit)); + } + + @Override + public RedisFuture zrevrangebyscore(ValueStreamingChannel channel, K key, double max, double min) { + return dispatch(commandBuilder.zrevrangebyscore(channel, key, max, min)); + } + + @Override + public RedisFuture zrevrangebyscore(ValueStreamingChannel channel, K key, String max, String min) { + return dispatch(commandBuilder.zrevrangebyscore(channel, key, max, min)); + } + + @Override + public RedisFuture zrevrangebyscore(ValueStreamingChannel channel, K key, Range range) { + return dispatch(commandBuilder.zrevrangebyscore(channel, key, range, Limit.unlimited())); + } + + @Override + public RedisFuture zrevrangebyscore(ValueStreamingChannel channel, K key, double max, double min, long offset, + long count) { + return dispatch(commandBuilder.zrevrangebyscore(channel, key, max, min, offset, count)); + } + + @Override + public RedisFuture zrevrangebyscore(ValueStreamingChannel channel, K key, String max, String min, long offset, + long count) { + return dispatch(commandBuilder.zrevrangebyscore(channel, key, max, min, offset, count)); + } + + @Override + public RedisFuture zrevrangebyscore(ValueStreamingChannel channel, K key, Range range, + Limit limit) { + return dispatch(commandBuilder.zrevrangebyscore(channel, key, range, limit)); + } + + @Override + public RedisFuture>> zrevrangebyscoreWithScores(K key, double max, double min) { + return dispatch(commandBuilder.zrevrangebyscoreWithScores(key, max, min)); + } + + @Override + public RedisFuture>> zrevrangebyscoreWithScores(K key, String max, String min) { + return dispatch(commandBuilder.zrevrangebyscoreWithScores(key, max, min)); + } + + @Override + public RedisFuture>> zrevrangebyscoreWithScores(K key, Range range) { + return dispatch(commandBuilder.zrevrangebyscoreWithScores(key, range, Limit.unlimited())); + } + + @Override + public RedisFuture>> zrevrangebyscoreWithScores(K key, double max, double min, long offset, + long count) { + return dispatch(commandBuilder.zrevrangebyscoreWithScores(key, max, min, offset, count)); + } + + @Override + public RedisFuture>> zrevrangebyscoreWithScores(K key, String max, String min, long offset, + long count) { + return dispatch(commandBuilder.zrevrangebyscoreWithScores(key, max, min, offset, count)); + } + + @Override + public RedisFuture>> zrevrangebyscoreWithScores(K key, Range range, Limit limit) { + return dispatch(commandBuilder.zrevrangebyscoreWithScores(key, range, limit)); + } + + @Override + public RedisFuture zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double max, double min) { + return dispatch(commandBuilder.zrevrangebyscoreWithScores(channel, key, max, min)); + } + + @Override + public RedisFuture zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, 
String max, String min) { + return dispatch(commandBuilder.zrevrangebyscoreWithScores(channel, key, max, min)); + } + + @Override + public RedisFuture zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, + Range range) { + return dispatch(commandBuilder.zrevrangebyscoreWithScores(channel, key, range, Limit.unlimited())); + } + + @Override + public RedisFuture zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double max, double min, + long offset, long count) { + return dispatch(commandBuilder.zrevrangebyscoreWithScores(channel, key, max, min, offset, count)); + } + + @Override + public RedisFuture zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String max, String min, + long offset, long count) { + return dispatch(commandBuilder.zrevrangebyscoreWithScores(channel, key, max, min, offset, count)); + } + + @Override + public RedisFuture zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, + Range range, Limit limit) { + return dispatch(commandBuilder.zrevrangebyscoreWithScores(channel, key, range, limit)); + } + + @Override + public RedisFuture zrevrank(K key, V member) { + return dispatch(commandBuilder.zrevrank(key, member)); + } + + @Override + public RedisFuture> zscan(K key) { + return dispatch(commandBuilder.zscan(key)); + } + + @Override + public RedisFuture> zscan(K key, ScanArgs scanArgs) { + return dispatch(commandBuilder.zscan(key, scanArgs)); + } + + @Override + public RedisFuture> zscan(K key, ScanCursor scanCursor, ScanArgs scanArgs) { + return dispatch(commandBuilder.zscan(key, scanCursor, scanArgs)); + } + + @Override + public RedisFuture> zscan(K key, ScanCursor scanCursor) { + return dispatch(commandBuilder.zscan(key, scanCursor)); + } + + @Override + public RedisFuture zscan(ScoredValueStreamingChannel channel, K key) { + return dispatch(commandBuilder.zscanStreaming(channel, key)); + } + + @Override + public RedisFuture zscan(ScoredValueStreamingChannel channel, K key, ScanArgs scanArgs) { + return dispatch(commandBuilder.zscanStreaming(channel, key, scanArgs)); + } + + @Override + public RedisFuture zscan(ScoredValueStreamingChannel channel, K key, ScanCursor scanCursor, + ScanArgs scanArgs) { + return dispatch(commandBuilder.zscanStreaming(channel, key, scanCursor, scanArgs)); + } + + @Override + public RedisFuture zscan(ScoredValueStreamingChannel channel, K key, ScanCursor scanCursor) { + return dispatch(commandBuilder.zscanStreaming(channel, key, scanCursor)); + } + + @Override + public RedisFuture zscore(K key, V member) { + return dispatch(commandBuilder.zscore(key, member)); + } + + @Override + public RedisFuture zunionstore(K destination, K... keys) { + return dispatch(commandBuilder.zunionstore(destination, keys)); + } + + @Override + public RedisFuture zunionstore(K destination, ZStoreArgs storeArgs, K... keys) { + return dispatch(commandBuilder.zunionstore(destination, storeArgs, keys)); + } + + private byte[] encodeScript(String script) { + LettuceAssert.notNull(script, "Lua script must not be null"); + LettuceAssert.notEmpty(script, "Lua script must not be empty"); + return script.getBytes(getConnection().getOptions().getScriptCharset()); + } +} diff --git a/src/main/java/io/lettuce/core/AbstractRedisClient.java b/src/main/java/io/lettuce/core/AbstractRedisClient.java new file mode 100644 index 0000000000..e631606866 --- /dev/null +++ b/src/main/java/io/lettuce/core/AbstractRedisClient.java @@ -0,0 +1,560 @@ +/* + * Copyright 2011-2020 the original author or authors. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.io.Closeable; +import java.net.SocketAddress; +import java.time.Duration; +import java.util.ArrayList; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.concurrent.*; +import java.util.concurrent.atomic.AtomicBoolean; + +import reactor.core.publisher.Mono; +import io.lettuce.core.Transports.NativeTransports; +import io.lettuce.core.internal.AsyncCloseable; +import io.lettuce.core.internal.Exceptions; +import io.lettuce.core.internal.Futures; +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.protocol.ConnectionWatchdog; +import io.lettuce.core.protocol.RedisHandshakeHandler; +import io.lettuce.core.resource.ClientResources; +import io.lettuce.core.resource.DefaultClientResources; +import io.netty.bootstrap.Bootstrap; +import io.netty.buffer.ByteBufAllocator; +import io.netty.channel.*; +import io.netty.channel.group.ChannelGroup; +import io.netty.channel.group.DefaultChannelGroup; +import io.netty.channel.nio.NioEventLoopGroup; +import io.netty.util.concurrent.EventExecutorGroup; +import io.netty.util.concurrent.Future; +import io.netty.util.internal.logging.InternalLogger; +import io.netty.util.internal.logging.InternalLoggerFactory; + +/** + * Base Redis client. This class holds the netty infrastructure, {@link ClientOptions} and the basic connection procedure. This + * class creates the netty {@link EventLoopGroup}s for NIO ({@link NioEventLoopGroup}) and EPoll ( + * {@link io.netty.channel.epoll.EpollEventLoopGroup}) with a default of {@code Runtime.getRuntime().availableProcessors() * 4} + * threads. Reuse the instance as much as possible since the {@link EventLoopGroup} instances are expensive and can consume a + * huge part of your resources, if you create multiple instances. + *
+ * You can set the number of threads per {@link NioEventLoopGroup} by setting the {@code io.netty.eventLoopThreads} system + * property to a reasonable number of threads. + *
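Illustrative sketch (not part of the diffed source): the Javadoc above mentions the io.netty.eventLoopThreads system property; in practice the pools are more often sized through ClientResources. This assumes the DefaultClientResources builder from io.lettuce.core.resource and an arbitrary pool size of 8:

    // Option 1: system property, read when Lettuce creates its default event loop groups.
    System.setProperty("io.netty.eventLoopThreads", "8");

    // Option 2 (usually preferred): explicit ClientResources, shared across clients.
    ClientResources resources = DefaultClientResources.builder()
            .ioThreadPoolSize(8)           // netty event loop threads
            .computationThreadPoolSize(8)  // event executor threads
            .build();
    RedisClient client = RedisClient.create(resources, "redis://localhost:6379");

Both sizes are assumptions for illustration; resources passed in like this are treated as shared and must be shut down by the caller, matching the sharedResources handling in this class.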
+ * + * @author Mark Paluch + * @author Jongyeol Choi + * @author Poorva Gokhale + * @since 3.0 + * @see ClientResources + */ +public abstract class AbstractRedisClient { + + private static final InternalLogger logger = InternalLoggerFactory.getInstance(AbstractRedisClient.class); + + protected final ConnectionEvents connectionEvents = new ConnectionEvents(); + protected final Set closeableResources = ConcurrentHashMap.newKeySet(); + protected final ChannelGroup channels; + + private final ClientResources clientResources; + private final Map, EventLoopGroup> eventLoopGroups = new ConcurrentHashMap<>(2); + private final boolean sharedResources; + private final AtomicBoolean shutdown = new AtomicBoolean(); + + private volatile ClientOptions clientOptions = ClientOptions.create(); + private volatile Duration defaultTimeout = RedisURI.DEFAULT_TIMEOUT_DURATION; + + /** + * Create a new instance with client resources. + * + * @param clientResources the client resources. If {@literal null}, the client will create a new dedicated instance of + * client resources and keep track of them. + */ + protected AbstractRedisClient(ClientResources clientResources) { + + if (clientResources == null) { + this.sharedResources = false; + this.clientResources = DefaultClientResources.create(); + } else { + this.sharedResources = true; + this.clientResources = clientResources; + } + + this.channels = new DefaultChannelGroup(this.clientResources.eventExecutorGroup().next()); + } + + protected int getChannelCount() { + return channels.size(); + } + + /** + * Returns the default {@link Duration timeout} for commands. + * + * @return the default {@link Duration timeout} for commands. + */ + public Duration getDefaultTimeout() { + return defaultTimeout; + } + + /** + * Set the default timeout for connections created by this client. The timeout applies to connection attempts and + * non-blocking commands. + * + * @param timeout default connection timeout, must not be {@literal null}. + * @since 5.0 + */ + public void setDefaultTimeout(Duration timeout) { + + LettuceAssert.notNull(timeout, "Timeout duration must not be null"); + LettuceAssert.isTrue(!timeout.isNegative(), "Timeout duration must be greater or equal to zero"); + + this.defaultTimeout = timeout; + } + + /** + * Set the default timeout for connections created by this client. The timeout applies to connection attempts and + * non-blocking commands. + * + * @param timeout Default connection timeout. + * @param unit Unit of time for the timeout. + * @deprecated since 5.0, use {@link #setDefaultTimeout(Duration)}. + */ + @Deprecated + public void setDefaultTimeout(long timeout, TimeUnit unit) { + setDefaultTimeout(Duration.ofNanos(unit.toNanos(timeout))); + } + + /** + * Returns the {@link ClientOptions} which are valid for that client. Connections inherit the current options at the moment + * the connection is created. Changes to options will not affect existing connections. + * + * @return the {@link ClientOptions} for this client + */ + public ClientOptions getOptions() { + return clientOptions; + } + + /** + * Set the {@link ClientOptions} for the client. + * + * @param clientOptions client options for the client and connections that are created after setting the options + */ + protected void setOptions(ClientOptions clientOptions) { + LettuceAssert.notNull(clientOptions, "ClientOptions must not be null"); + this.clientOptions = clientOptions; + } + + /** + * Returns the {@link ClientResources} which are used with that client. 
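Illustrative usage (not part of the diff): a minimal sketch of the reuse and configuration pattern described above, assuming the public RedisClient subclass; timeout and option values are arbitrary.

    RedisClient client = RedisClient.create("redis://localhost:6379");

    // Applies to connection attempts and non-blocking commands of connections created afterwards.
    client.setDefaultTimeout(Duration.ofSeconds(10));

    // Options are inherited when a connection is created; later changes do not affect existing connections.
    client.setOptions(ClientOptions.builder().autoReconnect(true).build());

    StatefulRedisConnection<String, String> connection = client.connect();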
+ * + * @return the {@link ClientResources} for this client. + * @since 6.0 + * + */ + public ClientResources getResources() { + return clientResources; + } + + protected int getResourceCount() { + return closeableResources.size(); + } + + /** + * Add a listener for the RedisConnectionState. The listener is notified every time a connect/disconnect/IO exception + * happens. The listeners are not bound to a specific connection, so every time a connection event happens on any + * connection, the listener will be notified. The corresponding netty channel handler (async connection) is passed on the + * event. + * + * @param listener must not be {@literal null} + */ + public void addListener(RedisConnectionStateListener listener) { + LettuceAssert.notNull(listener, "RedisConnectionStateListener must not be null"); + connectionEvents.addListener(listener); + } + + /** + * Removes a listener. + * + * @param listener must not be {@literal null} + */ + public void removeListener(RedisConnectionStateListener listener) { + + LettuceAssert.notNull(listener, "RedisConnectionStateListener must not be null"); + connectionEvents.removeListener(listener); + } + + /** + * Populate connection builder with necessary resources. + * + * @param socketAddressSupplier address supplier for initial connect and re-connect + * @param connectionBuilder connection builder to configure the connection + * @param redisURI URI of the Redis instance + */ + protected void connectionBuilder(Mono socketAddressSupplier, ConnectionBuilder connectionBuilder, + RedisURI redisURI) { + + Bootstrap redisBootstrap = new Bootstrap(); + redisBootstrap.option(ChannelOption.ALLOCATOR, ByteBufAllocator.DEFAULT); + + ClientOptions clientOptions = getOptions(); + SocketOptions socketOptions = clientOptions.getSocketOptions(); + + redisBootstrap.option(ChannelOption.CONNECT_TIMEOUT_MILLIS, + Math.toIntExact(socketOptions.getConnectTimeout().toMillis())); + + if (LettuceStrings.isEmpty(redisURI.getSocket())) { + redisBootstrap.option(ChannelOption.SO_KEEPALIVE, socketOptions.isKeepAlive()); + redisBootstrap.option(ChannelOption.TCP_NODELAY, socketOptions.isTcpNoDelay()); + } + + connectionBuilder.apply(redisURI); + + connectionBuilder.bootstrap(redisBootstrap); + connectionBuilder.channelGroup(channels).connectionEvents(connectionEvents); + connectionBuilder.socketAddressSupplier(socketAddressSupplier); + } + + protected void channelType(ConnectionBuilder connectionBuilder, ConnectionPoint connectionPoint) { + + LettuceAssert.notNull(connectionPoint, "ConnectionPoint must not be null"); + + connectionBuilder.bootstrap().group(getEventLoopGroup(connectionPoint)); + + if (connectionPoint.getSocket() != null) { + NativeTransports.assertAvailable(); + connectionBuilder.bootstrap().channel(NativeTransports.domainSocketChannelClass()); + } else { + connectionBuilder.bootstrap().channel(Transports.socketChannelClass()); + } + } + + private synchronized EventLoopGroup getEventLoopGroup(ConnectionPoint connectionPoint) { + + if (connectionPoint.getSocket() == null && !eventLoopGroups.containsKey(Transports.eventLoopGroupClass())) { + eventLoopGroups.put(Transports.eventLoopGroupClass(), + clientResources.eventLoopGroupProvider().allocate(Transports.eventLoopGroupClass())); + } + + if (connectionPoint.getSocket() != null) { + + NativeTransports.assertAvailable(); + + Class eventLoopGroupClass = NativeTransports.eventLoopGroupClass(); + + if (!eventLoopGroups.containsKey(NativeTransports.eventLoopGroupClass())) { + eventLoopGroups.put(eventLoopGroupClass, 
+ clientResources.eventLoopGroupProvider().allocate(eventLoopGroupClass)); + } + } + + if (connectionPoint.getSocket() == null) { + return eventLoopGroups.get(Transports.eventLoopGroupClass()); + } + + if (connectionPoint.getSocket() != null) { + NativeTransports.assertAvailable(); + return eventLoopGroups.get(NativeTransports.eventLoopGroupClass()); + } + + throw new IllegalStateException("This should not have happened in a binary decision. Please file a bug."); + } + + /** + * Retrieve the connection from {@link ConnectionFuture}. Performs a blocking {@link ConnectionFuture#get()} to synchronize + * the channel/connection initialization. Any exception is rethrown as {@link RedisConnectionException}. + * + * @param connectionFuture must not be null. + * @param Connection type. + * @return the connection. + * @throws RedisConnectionException in case of connection failures. + * @since 4.4 + */ + protected T getConnection(ConnectionFuture connectionFuture) { + + try { + return connectionFuture.get(); + } catch (InterruptedException e) { + Thread.currentThread().interrupt(); + throw RedisConnectionException.create(connectionFuture.getRemoteAddress(), e); + } catch (Exception e) { + throw RedisConnectionException.create(connectionFuture.getRemoteAddress(), Exceptions.unwrap(e)); + } + } + + /** + * Retrieve the connection from {@link ConnectionFuture}. Performs a blocking {@link ConnectionFuture#get()} to synchronize + * the channel/connection initialization. Any exception is rethrown as {@link RedisConnectionException}. + * + * @param connectionFuture must not be null. + * @param Connection type. + * @return the connection. + * @throws RedisConnectionException in case of connection failures. + * @since 5.0 + */ + protected T getConnection(CompletableFuture connectionFuture) { + + try { + return connectionFuture.get(); + } catch (InterruptedException e) { + Thread.currentThread().interrupt(); + throw RedisConnectionException.create(e); + } catch (Exception e) { + throw RedisConnectionException.create(Exceptions.unwrap(e)); + } + } + + /** + * Connect and initialize a channel from {@link ConnectionBuilder}. + * + * @param connectionBuilder must not be {@literal null}. + * @return the {@link ConnectionFuture} to synchronize the connection process. 
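Illustrative aside: the channelType/getEventLoopGroup logic above switches to the native transport whenever the connection point carries a Unix domain socket. A sketch, assuming the RedisURI socket builder and a netty native transport (epoll or kqueue) on the classpath; the socket path is hypothetical.

    RedisURI socketUri = RedisURI.Builder.socket("/var/run/redis/redis.sock").build();
    RedisClient client = RedisClient.create(socketUri);

    // NativeTransports.assertAvailable() above fails fast if no native transport is present.
    StatefulRedisConnection<String, String> connection = client.connect();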
+ * @since 4.4 + */ + @SuppressWarnings("unchecked") + protected > ConnectionFuture initializeChannelAsync( + ConnectionBuilder connectionBuilder) { + + Mono socketAddressSupplier = connectionBuilder.socketAddress(); + + if (clientResources.eventExecutorGroup().isShuttingDown()) { + throw new IllegalStateException("Cannot connect, Event executor group is terminated."); + } + + CompletableFuture socketAddressFuture = new CompletableFuture<>(); + CompletableFuture channelReadyFuture = new CompletableFuture<>(); + + socketAddressSupplier.doOnError(socketAddressFuture::completeExceptionally).doOnNext(socketAddressFuture::complete) + .subscribe(redisAddress -> { + + if (channelReadyFuture.isCancelled()) { + return; + } + initializeChannelAsync0(connectionBuilder, channelReadyFuture, redisAddress); + }, channelReadyFuture::completeExceptionally); + + return new DefaultConnectionFuture<>(socketAddressFuture, + channelReadyFuture.thenApply(channel -> (T) connectionBuilder.connection())); + } + + private void initializeChannelAsync0(ConnectionBuilder connectionBuilder, CompletableFuture channelReadyFuture, + SocketAddress redisAddress) { + + logger.debug("Connecting to Redis at {}", redisAddress); + + Bootstrap redisBootstrap = connectionBuilder.bootstrap(); + + ChannelInitializer initializer = connectionBuilder.build(redisAddress); + redisBootstrap.handler(initializer); + + clientResources.nettyCustomizer().afterBootstrapInitialized(redisBootstrap); + ChannelFuture connectFuture = redisBootstrap.connect(redisAddress); + + channelReadyFuture.whenComplete((c, t) -> { + + if (t instanceof CancellationException) { + connectFuture.cancel(true); + } + }); + + connectFuture.addListener(future -> { + + if (!future.isSuccess()) { + + logger.debug("Connecting to Redis at {}: {}", redisAddress, future.cause()); + connectionBuilder.endpoint().initialState(); + channelReadyFuture.completeExceptionally(future.cause()); + return; + } + + RedisHandshakeHandler handshakeHandler = connectFuture.channel().pipeline().get(RedisHandshakeHandler.class); + + if (handshakeHandler == null) { + channelReadyFuture.completeExceptionally(new IllegalStateException("RedisHandshakeHandler not registered")); + return; + } + + handshakeHandler.channelInitialized().whenComplete((success, throwable) -> { + + if (throwable == null) { + + logger.debug("Connecting to Redis at {}: Success", redisAddress); + RedisChannelHandler connection = connectionBuilder.connection(); + connection.registerCloseables(closeableResources, connection); + channelReadyFuture.complete(connectFuture.channel()); + return; + } + + logger.debug("Connecting to Redis at {}, initialization: {}", redisAddress, throwable); + connectionBuilder.endpoint().initialState(); + Throwable failure; + + if (throwable instanceof RedisConnectionException) { + failure = throwable; + } else if (throwable instanceof TimeoutException) { + failure = new RedisConnectionException( + "Could not initialize channel within " + connectionBuilder.getTimeout(), throwable); + } else { + failure = throwable; + } + channelReadyFuture.completeExceptionally(failure); + }); + }); + } + + /** + * Shutdown this client and close all open connections once this method is called. Once all connections are closed, the + * associated {@link ClientResources} are shut down/released gracefully considering quiet time and the shutdown timeout. The + * client should be discarded after calling shutdown. The shutdown is executed without quiet time and a timeout of 2 + * {@link TimeUnit#SECONDS}. 
+ * + * @see EventExecutorGroup#shutdownGracefully(long, long, TimeUnit) + */ + public void shutdown() { + shutdown(0, 2, TimeUnit.SECONDS); + } + + /** + * Shutdown this client and close all open connections once this method is called. Once all connections are closed, the + * associated {@link ClientResources} are shut down/released gracefully considering quiet time and the shutdown timeout. The + * client should be discarded after calling shutdown. + * + * @param quietPeriod the quiet period to allow the executor gracefully shut down. + * @param timeout the maximum amount of time to wait until the backing executor is shutdown regardless if a task was + * submitted during the quiet period. + * @since 5.0 + * @see EventExecutorGroup#shutdownGracefully(long, long, TimeUnit) + */ + public void shutdown(Duration quietPeriod, Duration timeout) { + shutdown(quietPeriod.toNanos(), timeout.toNanos(), TimeUnit.NANOSECONDS); + } + + /** + * Shutdown this client and close all open connections once this method is called. Once all connections are closed, the + * associated {@link ClientResources} are shut down/released gracefully considering quiet time and the shutdown timeout. The + * client should be discarded after calling shutdown. + * + * @param quietPeriod the quiet period to allow the executor gracefully shut down. + * @param timeout the maximum amount of time to wait until the backing executor is shutdown regardless if a task was + * submitted during the quiet period. + * @param timeUnit the unit of {@code quietPeriod} and {@code timeout}. + * @see EventExecutorGroup#shutdownGracefully(long, long, TimeUnit) + */ + public void shutdown(long quietPeriod, long timeout, TimeUnit timeUnit) { + + try { + shutdownAsync(quietPeriod, timeout, timeUnit).get(); + } catch (Exception e) { + throw Exceptions.bubble(e); + } + } + + /** + * Shutdown this client and close all open connections asynchronously. Once all connections are closed, the associated + * {@link ClientResources} are shut down/released gracefully considering quiet time and the shutdown timeout. The client + * should be discarded after calling shutdown. The shutdown is executed without quiet time and a timeout of 2 + * {@link TimeUnit#SECONDS}. + * + * @since 4.4 + * @see EventExecutorGroup#shutdownGracefully(long, long, TimeUnit) + */ + public CompletableFuture shutdownAsync() { + return shutdownAsync(0, 2, TimeUnit.SECONDS); + } + + /** + * Shutdown this client and close all open connections asynchronously. Once all connections are closed, the associated + * {@link ClientResources} are shut down/released gracefully considering quiet time and the shutdown timeout. The client + * should be discarded after calling shutdown. + * + * @param quietPeriod the quiet period to allow the executor gracefully shut down. + * @param timeout the maximum amount of time to wait until the backing executor is shutdown regardless if a task was + * submitted during the quiet period. + * @param timeUnit the unit of {@code quietPeriod} and {@code timeout}. 
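Illustrative aside: the shutdown overloads above differ only in how quiet period and timeout are supplied. A sketch of a typical teardown, with arbitrary values:

    connection.close();

    // Default: no quiet period, 2 second timeout, equivalent to shutdown(0, 2, TimeUnit.SECONDS).
    client.shutdown();

    // Or with an explicit quiet period and timeout:
    client.shutdown(Duration.ofMillis(100), Duration.ofSeconds(5));

    // Or without blocking the calling thread:
    CompletableFuture<Void> done = client.shutdownAsync(0, 2, TimeUnit.SECONDS);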
+ * @since 4.4 + * @see EventExecutorGroup#shutdownGracefully(long, long, TimeUnit) + */ + public CompletableFuture shutdownAsync(long quietPeriod, long timeout, TimeUnit timeUnit) { + + if (shutdown.compareAndSet(false, true)) { + + logger.debug("Initiate shutdown ({}, {}, {})", quietPeriod, timeout, timeUnit); + return closeResources().thenCompose((value) -> closeClientResources(quietPeriod, timeout, timeUnit)); + } + + return CompletableFuture.completedFuture(null); + } + + private CompletableFuture closeResources() { + + List> closeFutures = new ArrayList<>(); + + while (!closeableResources.isEmpty()) { + Closeable closeableResource = closeableResources.iterator().next(); + + if (closeableResource instanceof AsyncCloseable) { + + closeFutures.add(((AsyncCloseable) closeableResource).closeAsync()); + } else { + try { + closeableResource.close(); + } catch (Exception e) { + logger.debug("Exception on Close: " + e.getMessage(), e); + } + } + closeableResources.remove(closeableResource); + } + + for (Channel c : channels) { + + ChannelPipeline pipeline = c.pipeline(); + + ConnectionWatchdog commandHandler = pipeline.get(ConnectionWatchdog.class); + if (commandHandler != null) { + commandHandler.setListenOnChannelInactive(false); + } + } + + try { + closeFutures.add(Futures.toCompletionStage(channels.close())); + } catch (Exception e) { + logger.debug("Cannot close channels", e); + } + + return Futures.allOf(closeFutures); + } + + private CompletableFuture closeClientResources(long quietPeriod, long timeout, TimeUnit timeUnit) { + List> groupCloseFutures = new ArrayList<>(); + if (!sharedResources) { + Future groupCloseFuture = clientResources.shutdown(quietPeriod, timeout, timeUnit); + groupCloseFutures.add(Futures.toCompletionStage(groupCloseFuture)); + } else { + for (EventLoopGroup eventExecutors : eventLoopGroups.values()) { + Future groupCloseFuture = clientResources.eventLoopGroupProvider().release(eventExecutors, quietPeriod, + timeout, timeUnit); + groupCloseFutures.add(Futures.toCompletionStage(groupCloseFuture)); + } + } + return Futures.allOf(groupCloseFutures); + } + + protected RedisHandshake createHandshake(ConnectionState state) { + return new RedisHandshake(clientOptions.getConfiguredProtocolVersion(), clientOptions.isPingBeforeActivateConnection(), + state); + } +} diff --git a/src/main/java/io/lettuce/core/AbstractRedisReactiveCommands.java b/src/main/java/io/lettuce/core/AbstractRedisReactiveCommands.java new file mode 100644 index 0000000000..c341d96c94 --- /dev/null +++ b/src/main/java/io/lettuce/core/AbstractRedisReactiveCommands.java @@ -0,0 +1,2284 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core; + +import static io.lettuce.core.protocol.CommandType.*; + +import java.nio.charset.Charset; +import java.time.Duration; +import java.util.Date; +import java.util.Map; +import java.util.function.Supplier; + +import reactor.core.publisher.Flux; +import reactor.core.publisher.Mono; +import io.lettuce.core.GeoArgs.Unit; +import io.lettuce.core.api.StatefulConnection; +import io.lettuce.core.api.reactive.*; +import io.lettuce.core.cluster.api.reactive.RedisClusterReactiveCommands; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.output.*; +import io.lettuce.core.protocol.*; +import io.lettuce.core.resource.ClientResources; +import io.lettuce.core.tracing.TraceContext; +import io.lettuce.core.tracing.TraceContextProvider; +import io.lettuce.core.tracing.Tracing; +import io.netty.util.concurrent.EventExecutorGroup; +import io.netty.util.concurrent.ImmediateEventExecutor; + +/** + * A reactive and thread-safe API for a Redis connection. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @author Nikolai Perevozchikov + * @author Tugdual Grall + * @since 4.0 + */ +public abstract class AbstractRedisReactiveCommands implements RedisHashReactiveCommands, + RedisKeyReactiveCommands, RedisStringReactiveCommands, RedisListReactiveCommands, + RedisSetReactiveCommands, RedisSortedSetReactiveCommands, RedisScriptingReactiveCommands, + RedisServerReactiveCommands, RedisHLLReactiveCommands, BaseRedisReactiveCommands, + RedisTransactionalReactiveCommands, RedisGeoReactiveCommands, RedisClusterReactiveCommands { + + private final Object mutex = new Object(); + private final StatefulConnection connection; + private final RedisCommandBuilder commandBuilder; + private final ClientResources clientResources; + private final boolean tracingEnabled; + + private EventExecutorGroup scheduler; + + /** + * Initialize a new instance. + * + * @param connection the connection to operate on. + * @param codec the codec for command encoding. 
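Illustrative usage of the reactive API described above (not part of the diff; names and values are arbitrary). The commands object comes from a stateful connection, and the publishOnScheduler client option, checked in getScheduler() below, is assumed to control whether results are emitted on the computation executor instead of the netty I/O thread:

    client.setOptions(ClientOptions.builder()
            .publishOnScheduler(true) // emit on the event executor group rather than the I/O thread
            .build());

    StatefulRedisConnection<String, String> connection = client.connect();
    RedisReactiveCommands<String, String> reactive = connection.reactive();

    reactive.set("key", "value")
            .then(reactive.get("key"))
            .subscribe(value -> System.out.println("GET key -> " + value));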
+ */ + public AbstractRedisReactiveCommands(StatefulConnection connection, RedisCodec codec) { + this.connection = connection; + this.commandBuilder = new RedisCommandBuilder<>(codec); + this.clientResources = connection.getResources(); + this.tracingEnabled = clientResources.tracing().isEnabled(); + } + + private EventExecutorGroup getScheduler() { + + if (this.scheduler != null) { + return this.scheduler; + } + + synchronized (mutex) { + + EventExecutorGroup scheduler = ImmediateEventExecutor.INSTANCE; + + if (connection.getOptions().isPublishOnScheduler()) { + scheduler = connection.getResources().eventExecutorGroup(); + } + + return this.scheduler = scheduler; + } + } + + @Override + public Mono append(K key, V value) { + return createMono(() -> commandBuilder.append(key, value)); + } + + @Override + public Mono asking() { + return createMono(commandBuilder::asking); + } + + @Override + public Mono auth(CharSequence password) { + return createMono(() -> commandBuilder.auth(password)); + } + + @Override + public Mono auth(String username, CharSequence password) { + return createMono(() -> commandBuilder.auth(username, password)); + } + + @Override + public Mono bgrewriteaof() { + return createMono(commandBuilder::bgrewriteaof); + } + + @Override + public Mono bgsave() { + return createMono(commandBuilder::bgsave); + } + + @Override + public Mono bitcount(K key) { + return createMono(() -> commandBuilder.bitcount(key)); + } + + @Override + public Mono bitcount(K key, long start, long end) { + return createMono(() -> commandBuilder.bitcount(key, start, end)); + } + + @Override + public Flux> bitfield(K key, BitFieldArgs args) { + return createDissolvingFlux(() -> commandBuilder.bitfieldValue(key, args)); + } + + @Override + public Mono bitopAnd(K destination, K... keys) { + return createMono(() -> commandBuilder.bitopAnd(destination, keys)); + } + + @Override + public Mono bitopNot(K destination, K source) { + return createMono(() -> commandBuilder.bitopNot(destination, source)); + } + + @Override + public Mono bitopOr(K destination, K... keys) { + return createMono(() -> commandBuilder.bitopOr(destination, keys)); + } + + @Override + public Mono bitopXor(K destination, K... keys) { + return createMono(() -> commandBuilder.bitopXor(destination, keys)); + } + + @Override + public Mono bitpos(K key, boolean state) { + return createMono(() -> commandBuilder.bitpos(key, state)); + } + + @Override + public Mono bitpos(K key, boolean state, long start) { + return createMono(() -> commandBuilder.bitpos(key, state, start)); + } + + @Override + public Mono bitpos(K key, boolean state, long start, long end) { + return createMono(() -> commandBuilder.bitpos(key, state, start, end)); + } + + @Override + public Mono> blpop(long timeout, K... keys) { + return createMono(() -> commandBuilder.blpop(timeout, keys)); + } + + @Override + public Mono> brpop(long timeout, K... 
keys) { + return createMono(() -> commandBuilder.brpop(timeout, keys)); + } + + @Override + public Mono brpoplpush(long timeout, K source, K destination) { + return createMono(() -> commandBuilder.brpoplpush(timeout, source, destination)); + } + + @Override + public Mono clientGetname() { + return createMono(commandBuilder::clientGetname); + } + + @Override + public Mono clientKill(String addr) { + return createMono(() -> commandBuilder.clientKill(addr)); + } + + @Override + public Mono clientKill(KillArgs killArgs) { + return createMono(() -> commandBuilder.clientKill(killArgs)); + } + + @Override + public Mono clientList() { + return createMono(commandBuilder::clientList); + } + + @Override + public Mono clientId() { + return createMono(commandBuilder::clientId); + } + + @Override + public Mono clientPause(long timeout) { + return createMono(() -> commandBuilder.clientPause(timeout)); + } + + @Override + public Mono clientSetname(K name) { + return createMono(() -> commandBuilder.clientSetname(name)); + } + + @Override + public Mono clientUnblock(long id, UnblockType type) { + return createMono(() -> commandBuilder.clientUnblock(id, type)); + } + + public void close() { + connection.close(); + } + + @Override + public Mono clusterAddSlots(int... slots) { + return createMono(() -> commandBuilder.clusterAddslots(slots)); + } + + @Override + public Mono clusterBumpepoch() { + return createMono(() -> commandBuilder.clusterBumpepoch()); + } + + @Override + public Mono clusterCountFailureReports(String nodeId) { + return createMono(() -> commandBuilder.clusterCountFailureReports(nodeId)); + } + + @Override + public Mono clusterCountKeysInSlot(int slot) { + return createMono(() -> commandBuilder.clusterCountKeysInSlot(slot)); + } + + @Override + public Mono clusterDelSlots(int... 
slots) { + return createMono(() -> commandBuilder.clusterDelslots(slots)); + } + + @Override + public Mono clusterFailover(boolean force) { + return createMono(() -> commandBuilder.clusterFailover(force)); + } + + @Override + public Mono clusterFlushslots() { + return createMono(commandBuilder::clusterFlushslots); + } + + @Override + public Mono clusterForget(String nodeId) { + return createMono(() -> commandBuilder.clusterForget(nodeId)); + } + + @Override + public Flux clusterGetKeysInSlot(int slot, int count) { + return createDissolvingFlux(() -> commandBuilder.clusterGetKeysInSlot(slot, count)); + } + + @Override + public Mono clusterInfo() { + return createMono(commandBuilder::clusterInfo); + } + + @Override + public Mono clusterKeyslot(K key) { + return createMono(() -> commandBuilder.clusterKeyslot(key)); + } + + @Override + public Mono clusterMeet(String ip, int port) { + return createMono(() -> commandBuilder.clusterMeet(ip, port)); + } + + @Override + public Mono clusterMyId() { + return createMono(commandBuilder::clusterMyId); + } + + @Override + public Mono clusterNodes() { + return createMono(commandBuilder::clusterNodes); + } + + @Override + public Mono clusterReplicate(String nodeId) { + return createMono(() -> commandBuilder.clusterReplicate(nodeId)); + } + + @Override + public Mono clusterReset(boolean hard) { + return createMono(() -> commandBuilder.clusterReset(hard)); + } + + @Override + public Mono clusterSaveconfig() { + return createMono(() -> commandBuilder.clusterSaveconfig()); + } + + @Override + public Mono clusterSetConfigEpoch(long configEpoch) { + return createMono(() -> commandBuilder.clusterSetConfigEpoch(configEpoch)); + } + + @Override + public Mono clusterSetSlotImporting(int slot, String nodeId) { + return createMono(() -> commandBuilder.clusterSetSlotImporting(slot, nodeId)); + } + + @Override + public Mono clusterSetSlotMigrating(int slot, String nodeId) { + return createMono(() -> commandBuilder.clusterSetSlotMigrating(slot, nodeId)); + } + + @Override + public Mono clusterSetSlotNode(int slot, String nodeId) { + return createMono(() -> commandBuilder.clusterSetSlotNode(slot, nodeId)); + } + + @Override + public Mono clusterSetSlotStable(int slot) { + return createMono(() -> commandBuilder.clusterSetSlotStable(slot)); + } + + @Override + public Flux clusterSlaves(String nodeId) { + return createDissolvingFlux(() -> commandBuilder.clusterSlaves(nodeId)); + } + + @Override + public Flux clusterSlots() { + return createDissolvingFlux(commandBuilder::clusterSlots); + } + + @Override + public Flux command() { + return createDissolvingFlux(commandBuilder::command); + } + + @Override + public Mono commandCount() { + return createMono(commandBuilder::commandCount); + } + + @Override + public Flux commandInfo(String... commands) { + return createDissolvingFlux(() -> commandBuilder.commandInfo(commands)); + } + + @Override + public Flux commandInfo(CommandType... 
commands) { + String[] stringCommands = new String[commands.length]; + for (int i = 0; i < commands.length; i++) { + stringCommands[i] = commands[i].name(); + } + + return commandInfo(stringCommands); + } + + @Override + public Mono> configGet(String parameter) { + return createMono(() -> commandBuilder.configGet(parameter)); + } + + @Override + public Mono configResetstat() { + return createMono(commandBuilder::configResetstat); + } + + @Override + public Mono configRewrite() { + return createMono(commandBuilder::configRewrite); + } + + @Override + public Mono configSet(String parameter, String value) { + return createMono(() -> commandBuilder.configSet(parameter, value)); + } + + @SuppressWarnings("unchecked") + public Flux createDissolvingFlux(Supplier> commandSupplier) { + return (Flux) createFlux(commandSupplier, true); + } + + public Flux createFlux(Supplier> commandSupplier) { + return createFlux(commandSupplier, false); + } + + private Flux createFlux(Supplier> commandSupplier, boolean dissolve) { + + if (tracingEnabled) { + + return withTraceContext().flatMapMany(it -> Flux + .from(new RedisPublisher<>(decorate(commandSupplier, it), connection, dissolve, getScheduler().next()))); + } + + return Flux.from(new RedisPublisher<>(commandSupplier, connection, dissolve, getScheduler().next())); + } + + private Mono withTraceContext() { + + return Tracing.getContext() + .switchIfEmpty(Mono.fromSupplier(() -> clientResources.tracing().initialTraceContextProvider())) + .flatMap(TraceContextProvider::getTraceContextLater).defaultIfEmpty(TraceContext.EMPTY); + } + + protected Mono createMono(CommandType type, CommandOutput output, CommandArgs args) { + return createMono(() -> new Command<>(type, output, args)); + } + + public Mono createMono(Supplier> commandSupplier) { + + if (tracingEnabled) { + + return withTraceContext().flatMap(it -> Mono + .from(new RedisPublisher<>(decorate(commandSupplier, it), connection, false, getScheduler().next()))); + } + + return Mono.from(new RedisPublisher<>(commandSupplier, connection, false, getScheduler().next())); + } + + private Supplier> decorate(Supplier> commandSupplier, + TraceContext traceContext) { + return () -> new TracedCommand<>(commandSupplier.get(), traceContext); + } + + @Override + public Mono dbsize() { + return createMono(commandBuilder::dbsize); + } + + @Override + public Mono debugCrashAndRecover(Long delay) { + return createMono(() -> (commandBuilder.debugCrashAndRecover(delay))); + } + + @Override + public Mono debugHtstats(int db) { + return createMono(() -> commandBuilder.debugHtstats(db)); + } + + @Override + public Mono debugObject(K key) { + return createMono(() -> commandBuilder.debugObject(key)); + } + + @Override + public Mono debugOom() { + return createMono(commandBuilder::debugOom).then(); + } + + @Override + public Mono debugReload() { + return createMono(() -> (commandBuilder.debugReload())); + } + + @Override + public Mono debugRestart(Long delay) { + return createMono(() -> (commandBuilder.debugRestart(delay))); + } + + @Override + public Mono debugSdslen(K key) { + return createMono(() -> (commandBuilder.debugSdslen(key))); + } + + @Override + public Mono debugSegfault() { + return createFlux(commandBuilder::debugSegfault).then(); + } + + @Override + public Mono decr(K key) { + return createMono(() -> commandBuilder.decr(key)); + } + + @Override + public Mono decrby(K key, long amount) { + return createMono(() -> commandBuilder.decrby(key, amount)); + } + + @Override + public Mono del(K... 
keys) { + return createMono(() -> commandBuilder.del(keys)); + } + + public Mono del(Iterable keys) { + return createMono(() -> commandBuilder.del(keys)); + } + + @Override + public String digest(String script) { + return digest(encodeScript(script)); + } + + @Override + public String digest(byte[] script) { + return LettuceStrings.digest(script); + } + + @Override + public Mono discard() { + return createMono(commandBuilder::discard); + } + + @SuppressWarnings("unchecked") + public Flux dispatch(ProtocolKeyword type, CommandOutput output) { + + LettuceAssert.notNull(type, "Command type must not be null"); + LettuceAssert.notNull(output, "CommandOutput type must not be null"); + + return (Flux) createFlux(() -> new Command<>(type, output)); + } + + @SuppressWarnings("unchecked") + public Flux dispatch(ProtocolKeyword type, CommandOutput output, CommandArgs args) { + + LettuceAssert.notNull(type, "Command type must not be null"); + LettuceAssert.notNull(output, "CommandOutput type must not be null"); + LettuceAssert.notNull(args, "CommandArgs type must not be null"); + + return (Flux) createFlux(() -> new Command<>(type, output, args)); + } + + @Override + public Mono dump(K key) { + return createMono(() -> commandBuilder.dump(key)); + } + + @Override + public Mono echo(V msg) { + return createMono(() -> commandBuilder.echo(msg)); + } + + @Override + @SuppressWarnings("unchecked") + public Flux eval(String script, ScriptOutputType type, K... keys) { + return eval(encodeScript(script), type, keys); + } + + @Override + @SuppressWarnings("unchecked") + public Flux eval(byte[] script, ScriptOutputType type, K... keys) { + return (Flux) createFlux(() -> commandBuilder.eval(script, type, keys)); + } + + @Override + @SuppressWarnings("unchecked") + public Flux eval(String script, ScriptOutputType type, K[] keys, V... values) { + return eval(encodeScript(script), type, keys, values); + } + + @Override + @SuppressWarnings("unchecked") + public Flux eval(byte[] script, ScriptOutputType type, K[] keys, V... values) { + return (Flux) createFlux(() -> commandBuilder.eval(script, type, keys, values)); + } + + @Override + @SuppressWarnings("unchecked") + public Flux evalsha(String digest, ScriptOutputType type, K... keys) { + return (Flux) createFlux(() -> commandBuilder.evalsha(digest, type, keys)); + } + + @Override + @SuppressWarnings("unchecked") + public Flux evalsha(String digest, ScriptOutputType type, K[] keys, V... values) { + return (Flux) createFlux(() -> commandBuilder.evalsha(digest, type, keys, values)); + } + + @Override + public Mono exec() { + return createMono(EXEC, null, null); + } + + public Mono exists(K key) { + return createMono(() -> commandBuilder.exists(key)); + } + + @Override + public Mono exists(K... 
keys) { + return createMono(() -> commandBuilder.exists(keys)); + } + + public Mono exists(Iterable keys) { + return createMono(() -> commandBuilder.exists(keys)); + } + + @Override + public Mono expire(K key, long seconds) { + return createMono(() -> commandBuilder.expire(key, seconds)); + } + + @Override + public Mono expireat(K key, long timestamp) { + return createMono(() -> commandBuilder.expireat(key, timestamp)); + } + + @Override + public Mono expireat(K key, Date timestamp) { + return expireat(key, timestamp.getTime() / 1000); + } + + @Override + public void flushCommands() { + connection.flushCommands(); + } + + @Override + public Mono flushall() { + return createMono(commandBuilder::flushall); + } + + @Override + public Mono flushallAsync() { + return createMono(commandBuilder::flushallAsync); + } + + @Override + public Mono flushdb() { + return createMono(commandBuilder::flushdb); + } + + @Override + public Mono flushdbAsync() { + return createMono(commandBuilder::flushdbAsync); + } + + @Override + public Mono geoadd(K key, double longitude, double latitude, V member) { + return createMono(() -> commandBuilder.geoadd(key, longitude, latitude, member)); + } + + @Override + public Mono geoadd(K key, Object... lngLatMember) { + return createMono(() -> commandBuilder.geoadd(key, lngLatMember)); + } + + @Override + public Mono geodist(K key, V from, V to, Unit unit) { + return createMono(() -> commandBuilder.geodist(key, from, to, unit)); + } + + @Override + public Flux> geohash(K key, V... members) { + return createDissolvingFlux(() -> commandBuilder.geohash(key, members)); + } + + @Override + public Flux> geopos(K key, V... members) { + return createDissolvingFlux(() -> commandBuilder.geoposValues(key, members)); + } + + @Override + public Flux georadius(K key, double longitude, double latitude, double distance, Unit unit) { + return createDissolvingFlux(() -> commandBuilder.georadius(GEORADIUS, key, longitude, latitude, distance, unit.name())); + } + + @Override + public Flux> georadius(K key, double longitude, double latitude, double distance, Unit unit, GeoArgs geoArgs) { + return createDissolvingFlux( + () -> commandBuilder.georadius(GEORADIUS, key, longitude, latitude, distance, unit.name(), geoArgs)); + } + + @Override + public Mono georadius(K key, double longitude, double latitude, double distance, Unit unit, + GeoRadiusStoreArgs geoRadiusStoreArgs) { + return createMono(() -> commandBuilder.georadius(key, longitude, latitude, distance, unit.name(), geoRadiusStoreArgs)); + } + + protected Flux georadius_ro(K key, double longitude, double latitude, double distance, Unit unit) { + return createDissolvingFlux( + () -> commandBuilder.georadius(GEORADIUS_RO, key, longitude, latitude, distance, unit.name())); + } + + protected Flux> georadius_ro(K key, double longitude, double latitude, double distance, Unit unit, + GeoArgs geoArgs) { + return createDissolvingFlux( + () -> commandBuilder.georadius(GEORADIUS_RO, key, longitude, latitude, distance, unit.name(), geoArgs)); + } + + @Override + public Flux georadiusbymember(K key, V member, double distance, Unit unit) { + return createDissolvingFlux( + () -> commandBuilder.georadiusbymember(GEORADIUSBYMEMBER, key, member, distance, unit.name())); + } + + @Override + public Flux> georadiusbymember(K key, V member, double distance, Unit unit, GeoArgs geoArgs) { + return createDissolvingFlux( + () -> commandBuilder.georadiusbymember(GEORADIUSBYMEMBER, key, member, distance, unit.name(), geoArgs)); + } + + @Override + public Mono 
georadiusbymember(K key, V member, double distance, Unit unit, GeoRadiusStoreArgs geoRadiusStoreArgs) { + return createMono(() -> commandBuilder.georadiusbymember(key, member, distance, unit.name(), geoRadiusStoreArgs)); + } + + protected Flux georadiusbymember_ro(K key, V member, double distance, Unit unit) { + return createDissolvingFlux( + () -> commandBuilder.georadiusbymember(GEORADIUSBYMEMBER_RO, key, member, distance, unit.name())); + } + + protected Flux> georadiusbymember_ro(K key, V member, double distance, Unit unit, GeoArgs geoArgs) { + return createDissolvingFlux( + () -> commandBuilder.georadiusbymember(GEORADIUSBYMEMBER_RO, key, member, distance, unit.name(), geoArgs)); + } + + @Override + public Mono get(K key) { + return createMono(() -> commandBuilder.get(key)); + } + + public StatefulConnection getConnection() { + return connection; + } + + @Override + public Mono getbit(K key, long offset) { + return createMono(() -> commandBuilder.getbit(key, offset)); + } + + @Override + public Mono getrange(K key, long start, long end) { + return createMono(() -> commandBuilder.getrange(key, start, end)); + } + + @Override + public Mono getset(K key, V value) { + return createMono(() -> commandBuilder.getset(key, value)); + } + + @Override + public Mono hdel(K key, K... fields) { + return createMono(() -> commandBuilder.hdel(key, fields)); + } + + @Override + public Mono hexists(K key, K field) { + return createMono(() -> commandBuilder.hexists(key, field)); + } + + @Override + public Mono hget(K key, K field) { + return createMono(() -> commandBuilder.hget(key, field)); + } + + @Override + public Mono> hgetall(K key) { + return createMono(() -> commandBuilder.hgetall(key)); + } + + @Override + public Mono hgetall(KeyValueStreamingChannel channel, K key) { + return createMono(() -> commandBuilder.hgetall(channel, key)); + } + + @Override + public Mono hincrby(K key, K field, long amount) { + return createMono(() -> commandBuilder.hincrby(key, field, amount)); + } + + @Override + public Mono hincrbyfloat(K key, K field, double amount) { + return createMono(() -> commandBuilder.hincrbyfloat(key, field, amount)); + } + + @Override + public Flux hkeys(K key) { + return createDissolvingFlux(() -> commandBuilder.hkeys(key)); + } + + @Override + public Mono hkeys(KeyStreamingChannel channel, K key) { + return createMono(() -> commandBuilder.hkeys(channel, key)); + } + + @Override + public Mono hlen(K key) { + return createMono(() -> commandBuilder.hlen(key)); + } + + @Override + public Flux> hmget(K key, K... fields) { + return createDissolvingFlux(() -> commandBuilder.hmgetKeyValue(key, fields)); + } + + @Override + public Mono hmget(KeyValueStreamingChannel channel, K key, K... 
fields) { + return createMono(() -> commandBuilder.hmget(channel, key, fields)); + } + + @Override + public Mono hmset(K key, Map map) { + return createMono(() -> commandBuilder.hmset(key, map)); + } + + @Override + public Mono> hscan(K key) { + return createMono(() -> commandBuilder.hscan(key)); + } + + @Override + public Mono> hscan(K key, ScanArgs scanArgs) { + return createMono(() -> commandBuilder.hscan(key, scanArgs)); + } + + @Override + public Mono> hscan(K key, ScanCursor scanCursor, ScanArgs scanArgs) { + return createMono(() -> commandBuilder.hscan(key, scanCursor, scanArgs)); + } + + @Override + public Mono> hscan(K key, ScanCursor scanCursor) { + return createMono(() -> commandBuilder.hscan(key, scanCursor)); + } + + @Override + public Mono hscan(KeyValueStreamingChannel channel, K key) { + return createMono(() -> commandBuilder.hscanStreaming(channel, key)); + } + + @Override + public Mono hscan(KeyValueStreamingChannel channel, K key, ScanArgs scanArgs) { + return createMono(() -> commandBuilder.hscanStreaming(channel, key, scanArgs)); + } + + @Override + public Mono hscan(KeyValueStreamingChannel channel, K key, ScanCursor scanCursor, + ScanArgs scanArgs) { + return createMono(() -> commandBuilder.hscanStreaming(channel, key, scanCursor, scanArgs)); + } + + @Override + public Mono hscan(KeyValueStreamingChannel channel, K key, ScanCursor scanCursor) { + return createMono(() -> commandBuilder.hscanStreaming(channel, key, scanCursor)); + } + + @Override + public Mono hset(K key, K field, V value) { + return createMono(() -> commandBuilder.hset(key, field, value)); + } + + @Override + public Mono hset(K key, Map map) { + return createMono(() -> commandBuilder.hset(key, map)); + } + + @Override + public Mono hsetnx(K key, K field, V value) { + return createMono(() -> commandBuilder.hsetnx(key, field, value)); + } + + @Override + public Mono hstrlen(K key, K field) { + return createMono(() -> commandBuilder.hstrlen(key, field)); + } + + @Override + public Flux hvals(K key) { + return createDissolvingFlux(() -> commandBuilder.hvals(key)); + } + + @Override + public Mono hvals(ValueStreamingChannel channel, K key) { + return createMono(() -> commandBuilder.hvals(channel, key)); + } + + @Override + public Mono incr(K key) { + return createMono(() -> commandBuilder.incr(key)); + } + + @Override + public Mono incrby(K key, long amount) { + return createMono(() -> commandBuilder.incrby(key, amount)); + } + + @Override + public Mono incrbyfloat(K key, double amount) { + return createMono(() -> commandBuilder.incrbyfloat(key, amount)); + } + + @Override + public Mono info() { + return createMono(commandBuilder::info); + } + + @Override + public Mono info(String section) { + return createMono(() -> commandBuilder.info(section)); + } + + @Override + public boolean isOpen() { + return connection.isOpen(); + } + + @Override + public Flux keys(K pattern) { + return createDissolvingFlux(() -> commandBuilder.keys(pattern)); + } + + @Override + public Mono keys(KeyStreamingChannel channel, K pattern) { + return createMono(() -> commandBuilder.keys(channel, pattern)); + } + + @Override + public Mono lastsave() { + return createMono(commandBuilder::lastsave); + } + + @Override + public Mono lindex(K key, long index) { + return createMono(() -> commandBuilder.lindex(key, index)); + } + + @Override + public Mono linsert(K key, boolean before, V pivot, V value) { + return createMono(() -> commandBuilder.linsert(key, before, pivot, value)); + } + + @Override + public Mono llen(K key) { + return 
createMono(() -> commandBuilder.llen(key)); + } + + @Override + public Mono lpop(K key) { + return createMono(() -> commandBuilder.lpop(key)); + } + + @Override + public Mono lpush(K key, V... values) { + return createMono(() -> commandBuilder.lpush(key, values)); + } + + @Override + public Mono lpushx(K key, V... values) { + return createMono(() -> commandBuilder.lpushx(key, values)); + } + + @Override + public Flux lrange(K key, long start, long stop) { + return createDissolvingFlux(() -> commandBuilder.lrange(key, start, stop)); + } + + @Override + public Mono lrange(ValueStreamingChannel channel, K key, long start, long stop) { + return createMono(() -> commandBuilder.lrange(channel, key, start, stop)); + } + + @Override + public Mono lrem(K key, long count, V value) { + return createMono(() -> commandBuilder.lrem(key, count, value)); + } + + @Override + public Mono lset(K key, long index, V value) { + return createMono(() -> commandBuilder.lset(key, index, value)); + } + + @Override + public Mono ltrim(K key, long start, long stop) { + return createMono(() -> commandBuilder.ltrim(key, start, stop)); + } + + @Override + public Mono memoryUsage(K key) { + return createMono(() -> commandBuilder.memoryUsage(key)); + } + + @Override + public Flux> mget(K... keys) { + return createDissolvingFlux(() -> commandBuilder.mgetKeyValue(keys)); + } + + public Flux> mget(Iterable keys) { + return createDissolvingFlux(() -> commandBuilder.mgetKeyValue(keys)); + } + + @Override + public Mono mget(KeyValueStreamingChannel channel, K... keys) { + return createMono(() -> commandBuilder.mget(channel, keys)); + } + + public Mono mget(ValueStreamingChannel channel, Iterable keys) { + return createMono(() -> commandBuilder.mget(channel, keys)); + } + + public Mono mget(KeyValueStreamingChannel channel, Iterable keys) { + return createMono(() -> commandBuilder.mget(channel, keys)); + } + + @Override + public Mono migrate(String host, int port, K key, int db, long timeout) { + return createMono(() -> commandBuilder.migrate(host, port, key, db, timeout)); + } + + @Override + public Mono migrate(String host, int port, int db, long timeout, MigrateArgs migrateArgs) { + return createMono(() -> commandBuilder.migrate(host, port, db, timeout, migrateArgs)); + } + + @Override + public Mono move(K key, int db) { + return createMono(() -> commandBuilder.move(key, db)); + } + + @Override + public Mono mset(Map map) { + return createMono(() -> commandBuilder.mset(map)); + } + + @Override + public Mono msetnx(Map map) { + return createMono(() -> commandBuilder.msetnx(map)); + } + + @Override + public Mono multi() { + return createMono(commandBuilder::multi); + } + + @Override + public Mono objectEncoding(K key) { + return createMono(() -> commandBuilder.objectEncoding(key)); + } + + @Override + public Mono objectIdletime(K key) { + return createMono(() -> commandBuilder.objectIdletime(key)); + } + + @Override + public Mono objectRefcount(K key) { + return createMono(() -> commandBuilder.objectRefcount(key)); + } + + @Override + public Mono persist(K key) { + return createMono(() -> commandBuilder.persist(key)); + } + + @Override + public Mono pexpire(K key, long milliseconds) { + return createMono(() -> commandBuilder.pexpire(key, milliseconds)); + } + + @Override + public Mono pexpireat(K key, Date timestamp) { + return pexpireat(key, timestamp.getTime()); + } + + @Override + public Mono pexpireat(K key, long timestamp) { + return createMono(() -> commandBuilder.pexpireat(key, timestamp)); + } + + @Override + public Mono 
pfadd(K key, V... values) { + return createMono(() -> commandBuilder.pfadd(key, values)); + } + + public Mono pfadd(K key, V value, V... values) { + return createMono(() -> commandBuilder.pfadd(key, value, values)); + } + + @Override + public Mono pfcount(K... keys) { + return createMono(() -> commandBuilder.pfcount(keys)); + } + + public Mono pfcount(K key, K... keys) { + return createMono(() -> commandBuilder.pfcount(key, keys)); + } + + @Override + public Mono pfmerge(K destkey, K... sourcekeys) { + return createMono(() -> commandBuilder.pfmerge(destkey, sourcekeys)); + } + + public Mono pfmerge(K destkey, K sourceKey, K... sourcekeys) { + return createMono(() -> commandBuilder.pfmerge(destkey, sourceKey, sourcekeys)); + } + + @Override + public Mono ping() { + return createMono(commandBuilder::ping); + } + + @Override + public Mono psetex(K key, long milliseconds, V value) { + return createMono(() -> commandBuilder.psetex(key, milliseconds, value)); + } + + @Override + public Mono pttl(K key) { + return createMono(() -> commandBuilder.pttl(key)); + } + + @Override + public Mono publish(K channel, V message) { + return createMono(() -> commandBuilder.publish(channel, message)); + } + + @Override + public Flux pubsubChannels() { + return createDissolvingFlux(commandBuilder::pubsubChannels); + } + + @Override + public Flux pubsubChannels(K channel) { + return createDissolvingFlux(() -> commandBuilder.pubsubChannels(channel)); + } + + @Override + public Mono pubsubNumpat() { + return createMono(commandBuilder::pubsubNumpat); + } + + @Override + public Mono> pubsubNumsub(K... channels) { + return createMono(() -> commandBuilder.pubsubNumsub(channels)); + } + + @Override + public Mono quit() { + return createMono(commandBuilder::quit); + } + + @Override + public Mono randomkey() { + return createMono(commandBuilder::randomkey); + } + + @Override + public Mono readOnly() { + return createMono(commandBuilder::readOnly); + } + + @Override + public Mono readWrite() { + return createMono(commandBuilder::readWrite); + } + + @Override + public Mono rename(K key, K newKey) { + return createMono(() -> commandBuilder.rename(key, newKey)); + } + + @Override + public Mono renamenx(K key, K newKey) { + return createMono(() -> commandBuilder.renamenx(key, newKey)); + } + + @Override + public void reset() { + getConnection().reset(); + } + + @Override + public Mono restore(K key, long ttl, byte[] value) { + return createMono(() -> commandBuilder.restore(key, value, RestoreArgs.Builder.ttl(ttl))); + } + + @Override + public Mono restore(K key, byte[] value, RestoreArgs args) { + return createMono(() -> commandBuilder.restore(key, value, args)); + } + + @Override + public Flux role() { + return createDissolvingFlux(commandBuilder::role); + } + + @Override + public Mono rpop(K key) { + return createMono(() -> commandBuilder.rpop(key)); + } + + @Override + public Mono rpoplpush(K source, K destination) { + return createMono(() -> commandBuilder.rpoplpush(source, destination)); + } + + @Override + public Mono rpush(K key, V... values) { + return createMono(() -> commandBuilder.rpush(key, values)); + } + + @Override + public Mono rpushx(K key, V... values) { + return createMono(() -> commandBuilder.rpushx(key, values)); + } + + @Override + public Mono sadd(K key, V... 
members) { + return createMono(() -> commandBuilder.sadd(key, members)); + } + + @Override + public Mono save() { + return createMono(commandBuilder::save); + } + + @Override + public Mono> scan() { + return createMono(commandBuilder::scan); + } + + @Override + public Mono> scan(ScanArgs scanArgs) { + return createMono(() -> commandBuilder.scan(scanArgs)); + } + + @Override + public Mono> scan(ScanCursor scanCursor, ScanArgs scanArgs) { + return createMono(() -> commandBuilder.scan(scanCursor, scanArgs)); + } + + @Override + public Mono> scan(ScanCursor scanCursor) { + return createMono(() -> commandBuilder.scan(scanCursor)); + } + + @Override + public Mono scan(KeyStreamingChannel channel) { + return createMono(() -> commandBuilder.scanStreaming(channel)); + } + + @Override + public Mono scan(KeyStreamingChannel channel, ScanArgs scanArgs) { + return createMono(() -> commandBuilder.scanStreaming(channel, scanArgs)); + } + + @Override + public Mono scan(KeyStreamingChannel channel, ScanCursor scanCursor, ScanArgs scanArgs) { + return createMono(() -> commandBuilder.scanStreaming(channel, scanCursor, scanArgs)); + } + + @Override + public Mono scan(KeyStreamingChannel channel, ScanCursor scanCursor) { + return createMono(() -> commandBuilder.scanStreaming(channel, scanCursor)); + } + + @Override + public Mono scard(K key) { + return createMono(() -> commandBuilder.scard(key)); + } + + @Override + public Flux scriptExists(String... digests) { + return createDissolvingFlux(() -> commandBuilder.scriptExists(digests)); + } + + @Override + public Mono scriptFlush() { + return createMono(commandBuilder::scriptFlush); + } + + @Override + public Mono scriptKill() { + return createMono(commandBuilder::scriptKill); + } + + @Override + public Mono scriptLoad(String script) { + return scriptLoad(encodeScript(script)); + } + + @Override + public Mono scriptLoad(byte[] script) { + return createMono(() -> commandBuilder.scriptLoad(script)); + } + + @Override + public Flux sdiff(K... keys) { + return createDissolvingFlux(() -> commandBuilder.sdiff(keys)); + } + + @Override + public Mono sdiff(ValueStreamingChannel channel, K... keys) { + return createMono(() -> commandBuilder.sdiff(channel, keys)); + } + + @Override + public Mono sdiffstore(K destination, K... 
keys) { + return createMono(() -> commandBuilder.sdiffstore(destination, keys)); + } + + public Mono select(int db) { + return createMono(() -> commandBuilder.select(db)); + } + + @Override + public Mono set(K key, V value) { + return createMono(() -> commandBuilder.set(key, value)); + } + + @Override + public Mono set(K key, V value, SetArgs setArgs) { + return createMono(() -> commandBuilder.set(key, value, setArgs)); + } + + @Override + public void setAutoFlushCommands(boolean autoFlush) { + connection.setAutoFlushCommands(autoFlush); + } + + public void setTimeout(Duration timeout) { + connection.setTimeout(timeout); + } + + @Override + public Mono setbit(K key, long offset, int value) { + return createMono(() -> commandBuilder.setbit(key, offset, value)); + } + + @Override + public Mono setex(K key, long seconds, V value) { + return createMono(() -> commandBuilder.setex(key, seconds, value)); + } + + @Override + public Mono setnx(K key, V value) { + return createMono(() -> commandBuilder.setnx(key, value)); + } + + @Override + public Mono setrange(K key, long offset, V value) { + return createMono(() -> commandBuilder.setrange(key, offset, value)); + } + + @Override + public Mono shutdown(boolean save) { + return createMono(() -> commandBuilder.shutdown(save)).then(); + } + + @Override + public Flux sinter(K... keys) { + return createDissolvingFlux(() -> commandBuilder.sinter(keys)); + } + + @Override + public Mono sinter(ValueStreamingChannel channel, K... keys) { + return createMono(() -> commandBuilder.sinter(channel, keys)); + } + + @Override + public Mono sinterstore(K destination, K... keys) { + return createMono(() -> commandBuilder.sinterstore(destination, keys)); + } + + @Override + public Mono sismember(K key, V member) { + return createMono(() -> commandBuilder.sismember(key, member)); + } + + @Override + public Mono slaveof(String host, int port) { + return createMono(() -> commandBuilder.slaveof(host, port)); + } + + @Override + public Mono slaveofNoOne() { + return createMono(() -> commandBuilder.slaveofNoOne()); + } + + @Override + public Flux slowlogGet() { + return createDissolvingFlux(() -> commandBuilder.slowlogGet()); + } + + @Override + public Flux slowlogGet(int count) { + return createDissolvingFlux(() -> commandBuilder.slowlogGet(count)); + } + + @Override + public Mono slowlogLen() { + return createMono(() -> commandBuilder.slowlogLen()); + } + + @Override + public Mono slowlogReset() { + return createMono(() -> commandBuilder.slowlogReset()); + } + + @Override + public Flux smembers(K key) { + return createDissolvingFlux(() -> commandBuilder.smembers(key)); + } + + @Override + public Mono smembers(ValueStreamingChannel channel, K key) { + return createMono(() -> commandBuilder.smembers(channel, key)); + } + + @Override + public Mono smove(K source, K destination, V member) { + return createMono(() -> commandBuilder.smove(source, destination, member)); + } + + @Override + public Flux sort(K key) { + return createDissolvingFlux(() -> commandBuilder.sort(key)); + } + + @Override + public Mono sort(ValueStreamingChannel channel, K key) { + return createMono(() -> commandBuilder.sort(channel, key)); + } + + @Override + public Flux sort(K key, SortArgs sortArgs) { + return createDissolvingFlux(() -> commandBuilder.sort(key, sortArgs)); + } + + @Override + public Mono sort(ValueStreamingChannel channel, K key, SortArgs sortArgs) { + return createMono(() -> commandBuilder.sort(channel, key, sortArgs)); + } + + @Override + public Mono sortStore(K key, SortArgs 
sortArgs, K destination) { + return createMono(() -> commandBuilder.sortStore(key, sortArgs, destination)); + } + + @Override + public Mono spop(K key) { + return createMono(() -> commandBuilder.spop(key)); + } + + @Override + public Flux spop(K key, long count) { + return createDissolvingFlux(() -> commandBuilder.spop(key, count)); + } + + @Override + public Mono srandmember(K key) { + return createMono(() -> commandBuilder.srandmember(key)); + } + + @Override + public Flux srandmember(K key, long count) { + return createDissolvingFlux(() -> commandBuilder.srandmember(key, count)); + } + + @Override + public Mono srandmember(ValueStreamingChannel channel, K key, long count) { + return createMono(() -> commandBuilder.srandmember(channel, key, count)); + } + + @Override + public Mono srem(K key, V... members) { + return createMono(() -> commandBuilder.srem(key, members)); + } + + @Override + public Mono> sscan(K key) { + return createMono(() -> commandBuilder.sscan(key)); + } + + @Override + public Mono> sscan(K key, ScanArgs scanArgs) { + return createMono(() -> commandBuilder.sscan(key, scanArgs)); + } + + @Override + public Mono> sscan(K key, ScanCursor scanCursor, ScanArgs scanArgs) { + return createMono(() -> commandBuilder.sscan(key, scanCursor, scanArgs)); + } + + @Override + public Mono> sscan(K key, ScanCursor scanCursor) { + return createMono(() -> commandBuilder.sscan(key, scanCursor)); + } + + @Override + public Mono sscan(ValueStreamingChannel channel, K key) { + return createMono(() -> commandBuilder.sscanStreaming(channel, key)); + } + + @Override + public Mono sscan(ValueStreamingChannel channel, K key, ScanArgs scanArgs) { + return createMono(() -> commandBuilder.sscanStreaming(channel, key, scanArgs)); + } + + @Override + public Mono sscan(ValueStreamingChannel channel, K key, ScanCursor scanCursor, ScanArgs scanArgs) { + return createMono(() -> commandBuilder.sscanStreaming(channel, key, scanCursor, scanArgs)); + } + + @Override + public Mono sscan(ValueStreamingChannel channel, K key, ScanCursor scanCursor) { + return createMono(() -> commandBuilder.sscanStreaming(channel, key, scanCursor)); + } + + @Override + public Mono strlen(K key) { + return createMono(() -> commandBuilder.strlen(key)); + } + + @Override + public Flux sunion(K... keys) { + return createDissolvingFlux(() -> commandBuilder.sunion(keys)); + } + + @Override + public Mono sunion(ValueStreamingChannel channel, K... keys) { + return createMono(() -> commandBuilder.sunion(channel, keys)); + } + + @Override + public Mono sunionstore(K destination, K... keys) { + return createMono(() -> commandBuilder.sunionstore(destination, keys)); + } + + public Mono swapdb(int db1, int db2) { + return createMono(() -> commandBuilder.swapdb(db1, db2)); + } + + @Override + public Flux time() { + return createDissolvingFlux(commandBuilder::time); + } + + @Override + public Mono touch(K... keys) { + return createMono(() -> commandBuilder.touch(keys)); + } + + public Mono touch(Iterable keys) { + return createMono(() -> commandBuilder.touch(keys)); + } + + @Override + public Mono ttl(K key) { + return createMono(() -> commandBuilder.ttl(key)); + } + + @Override + public Mono type(K key) { + return createMono(() -> commandBuilder.type(key)); + } + + @Override + public Mono unlink(K... 
keys) { + return createMono(() -> commandBuilder.unlink(keys)); + } + + public Mono unlink(Iterable keys) { + return createMono(() -> commandBuilder.unlink(keys)); + } + + @Override + public Mono unwatch() { + return createMono(commandBuilder::unwatch); + } + + @Override + public Mono waitForReplication(int replicas, long timeout) { + return createMono(() -> commandBuilder.wait(replicas, timeout)); + } + + @Override + public Mono watch(K... keys) { + return createMono(() -> commandBuilder.watch(keys)); + } + + @Override + public Mono xack(K key, K group, String... messageIds) { + return createMono(() -> commandBuilder.xack(key, group, messageIds)); + } + + @Override + public Mono xadd(K key, Map body) { + return createMono(() -> commandBuilder.xadd(key, null, body)); + } + + @Override + public Mono xadd(K key, XAddArgs args, Map body) { + return createMono(() -> commandBuilder.xadd(key, args, body)); + } + + @Override + public Mono xadd(K key, Object... keysAndValues) { + return createMono(() -> commandBuilder.xadd(key, null, keysAndValues)); + } + + @Override + public Mono xadd(K key, XAddArgs args, Object... keysAndValues) { + return createMono(() -> commandBuilder.xadd(key, args, keysAndValues)); + } + + @Override + public Flux> xclaim(K key, Consumer consumer, long minIdleTime, String... messageIds) { + return createDissolvingFlux( + () -> commandBuilder.xclaim(key, consumer, XClaimArgs.Builder.minIdleTime(minIdleTime), messageIds)); + } + + @Override + public Flux> xclaim(K key, Consumer consumer, XClaimArgs args, String... messageIds) { + return createDissolvingFlux(() -> commandBuilder.xclaim(key, consumer, args, messageIds)); + } + + @Override + public Mono xdel(K key, String... messageIds) { + return createMono(() -> commandBuilder.xdel(key, messageIds)); + } + + @Override + public Mono xgroupCreate(XReadArgs.StreamOffset streamOffset, K group) { + return createMono(() -> commandBuilder.xgroupCreate(streamOffset, group, null)); + } + + @Override + public Mono xgroupCreate(XReadArgs.StreamOffset streamOffset, K group, XGroupCreateArgs args) { + return createMono(() -> commandBuilder.xgroupCreate(streamOffset, group, args)); + } + + @Override + public Mono xgroupDelconsumer(K key, Consumer consumer) { + return createMono(() -> commandBuilder.xgroupDelconsumer(key, consumer)); + } + + @Override + public Mono xgroupDestroy(K key, K group) { + return createMono(() -> commandBuilder.xgroupDestroy(key, group)); + } + + @Override + public Mono xgroupSetid(XReadArgs.StreamOffset streamOffset, K group) { + return createMono(() -> commandBuilder.xgroupSetid(streamOffset, group)); + } + + @Override + public Flux xinfoStream(K key) { + return createDissolvingFlux(() -> commandBuilder.xinfoStream(key)); + } + + @Override + public Flux xinfoGroups(K key) { + return createDissolvingFlux(() -> commandBuilder.xinfoGroups(key)); + } + + @Override + public Flux xinfoConsumers(K key, K group) { + return createDissolvingFlux(() -> commandBuilder.xinfoConsumers(key, group)); + } + + @Override + public Mono xlen(K key) { + return createMono(() -> commandBuilder.xlen(key)); + } + + @Override + public Flux xpending(K key, K group) { + return xpending(key, group, Range.unbounded(), Limit.unlimited()); + } + + @Override + public Flux xpending(K key, K group, Range range, Limit limit) { + return createDissolvingFlux(() -> commandBuilder.xpending(key, group, range, limit)); + } + + @Override + public Flux xpending(K key, Consumer consumer, Range range, Limit limit) { + return createDissolvingFlux(() -> 
commandBuilder.xpending(key, consumer, range, limit)); + } + + @Override + public Flux> xrange(K key, Range range) { + return createDissolvingFlux(() -> commandBuilder.xrange(key, range, Limit.unlimited())); + } + + @Override + public Flux> xrange(K key, Range range, Limit limit) { + return createDissolvingFlux(() -> commandBuilder.xrange(key, range, limit)); + } + + @Override + public Flux> xread(XReadArgs.StreamOffset... streams) { + return createDissolvingFlux(() -> commandBuilder.xread(null, streams)); + } + + @Override + public Flux> xread(XReadArgs args, XReadArgs.StreamOffset... streams) { + return createDissolvingFlux(() -> commandBuilder.xread(args, streams)); + } + + @Override + public Flux> xreadgroup(Consumer consumer, XReadArgs.StreamOffset... streams) { + return createDissolvingFlux(() -> commandBuilder.xreadgroup(consumer, null, streams)); + } + + @Override + public Flux> xreadgroup(Consumer consumer, XReadArgs args, XReadArgs.StreamOffset... streams) { + return createDissolvingFlux(() -> commandBuilder.xreadgroup(consumer, args, streams)); + } + + @Override + public Flux> xrevrange(K key, Range range) { + return xrevrange(key, range, Limit.unlimited()); + } + + @Override + public Flux> xrevrange(K key, Range range, Limit limit) { + return createDissolvingFlux(() -> commandBuilder.xrevrange(key, range, limit)); + } + + @Override + public Mono xtrim(K key, long count) { + return xtrim(key, false, count); + } + + @Override + public Mono xtrim(K key, boolean approximateTrimming, long count) { + return createMono(() -> commandBuilder.xtrim(key, approximateTrimming, count)); + } + + @Override + public Mono>> bzpopmin(long timeout, K... keys) { + return createMono(() -> commandBuilder.bzpopmin(timeout, keys)); + } + + @Override + public Mono>> bzpopmax(long timeout, K... keys) { + return createMono(() -> commandBuilder.bzpopmax(timeout, keys)); + } + + @Override + public Mono zadd(K key, double score, V member) { + return createMono(() -> commandBuilder.zadd(key, null, score, member)); + } + + @Override + public Mono zadd(K key, Object... scoresAndValues) { + return createMono(() -> commandBuilder.zadd(key, null, scoresAndValues)); + } + + @Override + public Mono zadd(K key, ScoredValue... scoredValues) { + return createMono(() -> commandBuilder.zadd(key, null, (Object[]) scoredValues)); + } + + @Override + public Mono zadd(K key, ZAddArgs zAddArgs, double score, V member) { + return createMono(() -> commandBuilder.zadd(key, zAddArgs, score, member)); + } + + @Override + public Mono zadd(K key, ZAddArgs zAddArgs, Object... scoresAndValues) { + return createMono(() -> commandBuilder.zadd(key, zAddArgs, scoresAndValues)); + } + + @Override + public Mono zadd(K key, ZAddArgs zAddArgs, ScoredValue... 
scoredValues) { + return createMono(() -> commandBuilder.zadd(key, zAddArgs, (Object[]) scoredValues)); + } + + @Override + public Mono zaddincr(K key, double score, V member) { + return createMono(() -> commandBuilder.zaddincr(key, null, score, member)); + } + + @Override + public Mono zaddincr(K key, ZAddArgs zAddArgs, double score, V member) { + return createMono(() -> commandBuilder.zaddincr(key, zAddArgs, score, member)); + } + + @Override + public Mono zcard(K key) { + return createMono(() -> commandBuilder.zcard(key)); + } + + public Mono zcount(K key, double min, double max) { + return createMono(() -> commandBuilder.zcount(key, min, max)); + } + + @Override + public Mono zcount(K key, String min, String max) { + return createMono(() -> commandBuilder.zcount(key, min, max)); + } + + @Override + public Mono zcount(K key, Range range) { + return createMono(() -> commandBuilder.zcount(key, range)); + } + + @Override + public Mono zincrby(K key, double amount, V member) { + return createMono(() -> commandBuilder.zincrby(key, amount, member)); + } + + @Override + public Mono zinterstore(K destination, K... keys) { + return createMono(() -> commandBuilder.zinterstore(destination, keys)); + } + + @Override + public Mono zinterstore(K destination, ZStoreArgs storeArgs, K... keys) { + return createMono(() -> commandBuilder.zinterstore(destination, storeArgs, keys)); + } + + @Override + public Mono zlexcount(K key, String min, String max) { + return createMono(() -> commandBuilder.zlexcount(key, min, max)); + } + + @Override + public Mono zlexcount(K key, Range range) { + return createMono(() -> commandBuilder.zlexcount(key, range)); + } + + @Override + public Mono> zpopmin(K key) { + return createMono(() -> commandBuilder.zpopmin(key)); + } + + @Override + public Flux> zpopmin(K key, long count) { + return createDissolvingFlux(() -> commandBuilder.zpopmin(key, count)); + } + + @Override + public Mono> zpopmax(K key) { + return createMono(() -> commandBuilder.zpopmax(key)); + } + + @Override + public Flux> zpopmax(K key, long count) { + return createDissolvingFlux(() -> commandBuilder.zpopmax(key, count)); + } + + @Override + public Flux zrange(K key, long start, long stop) { + return createDissolvingFlux(() -> commandBuilder.zrange(key, start, stop)); + } + + @Override + public Mono zrange(ValueStreamingChannel channel, K key, long start, long stop) { + return createMono(() -> commandBuilder.zrange(channel, key, start, stop)); + } + + @Override + public Flux> zrangeWithScores(K key, long start, long stop) { + return createDissolvingFlux(() -> commandBuilder.zrangeWithScores(key, start, stop)); + } + + @Override + public Mono zrangeWithScores(ScoredValueStreamingChannel channel, K key, long start, long stop) { + return createMono(() -> commandBuilder.zrangeWithScores(channel, key, start, stop)); + } + + @Override + public Flux zrangebylex(K key, String min, String max) { + return createDissolvingFlux(() -> commandBuilder.zrangebylex(key, min, max)); + } + + @Override + public Flux zrangebylex(K key, Range range) { + return createDissolvingFlux(() -> commandBuilder.zrangebylex(key, range, Limit.unlimited())); + } + + @Override + public Flux zrangebylex(K key, String min, String max, long offset, long count) { + return createDissolvingFlux(() -> commandBuilder.zrangebylex(key, min, max, offset, count)); + } + + @Override + public Flux zrangebylex(K key, Range range, Limit limit) { + return createDissolvingFlux(() -> commandBuilder.zrangebylex(key, range, limit)); + } + + @Override + public 
Flux zrangebyscore(K key, double min, double max) { + return createDissolvingFlux(() -> commandBuilder.zrangebyscore(key, min, max)); + } + + @Override + public Flux zrangebyscore(K key, String min, String max) { + return createDissolvingFlux(() -> commandBuilder.zrangebyscore(key, min, max)); + } + + @Override + public Flux zrangebyscore(K key, double min, double max, long offset, long count) { + return createDissolvingFlux(() -> commandBuilder.zrangebyscore(key, min, max, offset, count)); + } + + @Override + public Flux zrangebyscore(K key, String min, String max, long offset, long count) { + return createDissolvingFlux(() -> commandBuilder.zrangebyscore(key, min, max, offset, count)); + } + + @Override + public Flux zrangebyscore(K key, Range range) { + return createDissolvingFlux(() -> commandBuilder.zrangebyscore(key, range, Limit.unlimited())); + } + + @Override + public Flux zrangebyscore(K key, Range range, Limit limit) { + return createDissolvingFlux(() -> commandBuilder.zrangebyscore(key, range, limit)); + } + + @Override + public Mono zrangebyscore(ValueStreamingChannel channel, K key, double min, double max) { + return createMono(() -> commandBuilder.zrangebyscore(channel, key, min, max)); + } + + @Override + public Mono zrangebyscore(ValueStreamingChannel channel, K key, String min, String max) { + return createMono(() -> commandBuilder.zrangebyscore(channel, key, min, max)); + } + + @Override + public Mono zrangebyscore(ValueStreamingChannel channel, K key, double min, double max, long offset, long count) { + return createMono(() -> commandBuilder.zrangebyscore(channel, key, min, max, offset, count)); + } + + @Override + public Mono zrangebyscore(ValueStreamingChannel channel, K key, Range range) { + return createMono(() -> commandBuilder.zrangebyscore(channel, key, range, Limit.unlimited())); + } + + @Override + public Mono zrangebyscore(ValueStreamingChannel channel, K key, String min, String max, long offset, long count) { + return createMono(() -> commandBuilder.zrangebyscore(channel, key, min, max, offset, count)); + } + + @Override + public Mono zrangebyscore(ValueStreamingChannel channel, K key, Range range, Limit limit) { + return createMono(() -> commandBuilder.zrangebyscore(channel, key, range, limit)); + } + + @Override + public Flux> zrangebyscoreWithScores(K key, double min, double max) { + return createDissolvingFlux(() -> commandBuilder.zrangebyscoreWithScores(key, min, max)); + } + + @Override + public Flux> zrangebyscoreWithScores(K key, String min, String max) { + return createDissolvingFlux(() -> commandBuilder.zrangebyscoreWithScores(key, min, max)); + } + + @Override + public Flux> zrangebyscoreWithScores(K key, double min, double max, long offset, long count) { + return createDissolvingFlux(() -> commandBuilder.zrangebyscoreWithScores(key, min, max, offset, count)); + } + + @Override + public Flux> zrangebyscoreWithScores(K key, String min, String max, long offset, long count) { + return createDissolvingFlux(() -> commandBuilder.zrangebyscoreWithScores(key, min, max, offset, count)); + } + + @Override + public Flux> zrangebyscoreWithScores(K key, Range range) { + return createDissolvingFlux(() -> commandBuilder.zrangebyscoreWithScores(key, range, Limit.unlimited())); + } + + @Override + public Flux> zrangebyscoreWithScores(K key, Range range, Limit limit) { + return createDissolvingFlux(() -> commandBuilder.zrangebyscoreWithScores(key, range, limit)); + } + + @Override + public Mono zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, 
double min, double max) { + return createMono(() -> commandBuilder.zrangebyscoreWithScores(channel, key, min, max)); + } + + @Override + public Mono zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String min, String max) { + return createMono(() -> commandBuilder.zrangebyscoreWithScores(channel, key, min, max)); + } + + @Override + public Mono zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, Range range) { + return createMono(() -> commandBuilder.zrangebyscoreWithScores(channel, key, range, Limit.unlimited())); + } + + @Override + public Mono zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double min, double max, + long offset, long count) { + return createMono(() -> commandBuilder.zrangebyscoreWithScores(channel, key, min, max, offset, count)); + } + + @Override + public Mono zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String min, String max, + long offset, long count) { + return createMono(() -> commandBuilder.zrangebyscoreWithScores(channel, key, min, max, offset, count)); + } + + @Override + public Mono zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, Range range, + Limit limit) { + return createMono(() -> commandBuilder.zrangebyscoreWithScores(channel, key, range, limit)); + } + + @Override + public Mono zrank(K key, V member) { + return createMono(() -> commandBuilder.zrank(key, member)); + } + + @Override + public Mono zrem(K key, V... members) { + return createMono(() -> commandBuilder.zrem(key, members)); + } + + @Override + public Mono zremrangebylex(K key, String min, String max) { + return createMono(() -> commandBuilder.zremrangebylex(key, min, max)); + } + + @Override + public Mono zremrangebylex(K key, Range range) { + return createMono(() -> commandBuilder.zremrangebylex(key, range)); + } + + @Override + public Mono zremrangebyrank(K key, long start, long stop) { + return createMono(() -> commandBuilder.zremrangebyrank(key, start, stop)); + } + + @Override + public Mono zremrangebyscore(K key, double min, double max) { + return createMono(() -> commandBuilder.zremrangebyscore(key, min, max)); + } + + @Override + public Mono zremrangebyscore(K key, String min, String max) { + return createMono(() -> commandBuilder.zremrangebyscore(key, min, max)); + } + + @Override + public Mono zremrangebyscore(K key, Range range) { + return createMono(() -> commandBuilder.zremrangebyscore(key, range)); + } + + @Override + public Flux zrevrange(K key, long start, long stop) { + return createDissolvingFlux(() -> commandBuilder.zrevrange(key, start, stop)); + } + + @Override + public Mono zrevrange(ValueStreamingChannel channel, K key, long start, long stop) { + return createMono(() -> commandBuilder.zrevrange(channel, key, start, stop)); + } + + @Override + public Flux> zrevrangeWithScores(K key, long start, long stop) { + return createDissolvingFlux(() -> commandBuilder.zrevrangeWithScores(key, start, stop)); + } + + @Override + public Mono zrevrangeWithScores(ScoredValueStreamingChannel channel, K key, long start, long stop) { + return createMono(() -> commandBuilder.zrevrangeWithScores(channel, key, start, stop)); + } + + @Override + public Flux zrevrangebylex(K key, Range range) { + return createDissolvingFlux(() -> commandBuilder.zrevrangebylex(key, range, Limit.unlimited())); + } + + @Override + public Flux zrevrangebylex(K key, Range range, Limit limit) { + return createDissolvingFlux(() -> commandBuilder.zrevrangebylex(key, range, limit)); + } + + @Override + public Flux 
zrevrangebyscore(K key, double max, double min) { + return createDissolvingFlux(() -> commandBuilder.zrevrangebyscore(key, max, min)); + } + + @Override + public Flux zrevrangebyscore(K key, String max, String min) { + return createDissolvingFlux(() -> commandBuilder.zrevrangebyscore(key, max, min)); + } + + @Override + public Flux zrevrangebyscore(K key, Range range) { + return createDissolvingFlux(() -> commandBuilder.zrevrangebyscore(key, range, Limit.unlimited())); + } + + @Override + public Flux zrevrangebyscore(K key, double max, double min, long offset, long count) { + return createDissolvingFlux(() -> commandBuilder.zrevrangebyscore(key, max, min, offset, count)); + } + + @Override + public Flux zrevrangebyscore(K key, String max, String min, long offset, long count) { + return createDissolvingFlux(() -> commandBuilder.zrevrangebyscore(key, max, min, offset, count)); + } + + @Override + public Flux zrevrangebyscore(K key, Range range, Limit limit) { + return createDissolvingFlux(() -> commandBuilder.zrevrangebyscore(key, range, limit)); + } + + @Override + public Mono zrevrangebyscore(ValueStreamingChannel channel, K key, double max, double min) { + return createMono(() -> commandBuilder.zrevrangebyscore(channel, key, max, min)); + } + + @Override + public Mono zrevrangebyscore(ValueStreamingChannel channel, K key, String max, String min) { + return createMono(() -> commandBuilder.zrevrangebyscore(channel, key, max, min)); + } + + @Override + public Mono zrevrangebyscore(ValueStreamingChannel channel, K key, Range range) { + return createMono(() -> commandBuilder.zrevrangebyscore(channel, key, range, Limit.unlimited())); + } + + @Override + public Mono zrevrangebyscore(ValueStreamingChannel channel, K key, double max, double min, long offset, + long count) { + return createMono(() -> commandBuilder.zrevrangebyscore(channel, key, max, min, offset, count)); + } + + @Override + public Mono zrevrangebyscore(ValueStreamingChannel channel, K key, String max, String min, long offset, + long count) { + return createMono(() -> commandBuilder.zrevrangebyscore(channel, key, max, min, offset, count)); + } + + @Override + public Mono zrevrangebyscore(ValueStreamingChannel channel, K key, Range range, Limit limit) { + return createMono(() -> commandBuilder.zrevrangebyscore(channel, key, range, limit)); + } + + @Override + public Flux> zrevrangebyscoreWithScores(K key, double max, double min) { + return createDissolvingFlux(() -> commandBuilder.zrevrangebyscoreWithScores(key, max, min)); + } + + @Override + public Flux> zrevrangebyscoreWithScores(K key, String max, String min) { + return createDissolvingFlux(() -> commandBuilder.zrevrangebyscoreWithScores(key, max, min)); + } + + @Override + public Flux> zrevrangebyscoreWithScores(K key, Range range) { + return createDissolvingFlux(() -> commandBuilder.zrevrangebyscoreWithScores(key, range, Limit.unlimited())); + } + + @Override + public Flux> zrevrangebyscoreWithScores(K key, double max, double min, long offset, long count) { + return createDissolvingFlux(() -> commandBuilder.zrevrangebyscoreWithScores(key, max, min, offset, count)); + } + + @Override + public Flux> zrevrangebyscoreWithScores(K key, String max, String min, long offset, long count) { + return createDissolvingFlux(() -> commandBuilder.zrevrangebyscoreWithScores(key, max, min, offset, count)); + } + + @Override + public Flux> zrevrangebyscoreWithScores(K key, Range range, Limit limit) { + return createDissolvingFlux(() -> commandBuilder.zrevrangebyscoreWithScores(key, range, 
limit)); + } + + @Override + public Mono zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double max, double min) { + return createMono(() -> commandBuilder.zrevrangebyscoreWithScores(channel, key, max, min)); + } + + @Override + public Mono zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String max, String min) { + return createMono(() -> commandBuilder.zrevrangebyscoreWithScores(channel, key, max, min)); + } + + @Override + public Mono zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, Range range) { + return createMono(() -> commandBuilder.zrevrangebyscoreWithScores(channel, key, range, Limit.unlimited())); + } + + @Override + public Mono zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double max, double min, + long offset, long count) { + return createMono(() -> commandBuilder.zrevrangebyscoreWithScores(channel, key, max, min, offset, count)); + } + + @Override + public Mono zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String max, String min, + long offset, long count) { + return createMono(() -> commandBuilder.zrevrangebyscoreWithScores(channel, key, max, min, offset, count)); + } + + @Override + public Mono zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, Range range, + Limit limit) { + return createMono(() -> commandBuilder.zrevrangebyscoreWithScores(channel, key, range, limit)); + } + + @Override + public Mono zrevrank(K key, V member) { + return createMono(() -> commandBuilder.zrevrank(key, member)); + } + + @Override + public Mono> zscan(K key) { + return createMono(() -> commandBuilder.zscan(key)); + } + + @Override + public Mono> zscan(K key, ScanArgs scanArgs) { + return createMono(() -> commandBuilder.zscan(key, scanArgs)); + } + + @Override + public Mono> zscan(K key, ScanCursor scanCursor, ScanArgs scanArgs) { + return createMono(() -> commandBuilder.zscan(key, scanCursor, scanArgs)); + } + + @Override + public Mono> zscan(K key, ScanCursor scanCursor) { + return createMono(() -> commandBuilder.zscan(key, scanCursor)); + } + + @Override + public Mono zscan(ScoredValueStreamingChannel channel, K key) { + return createMono(() -> commandBuilder.zscanStreaming(channel, key)); + } + + @Override + public Mono zscan(ScoredValueStreamingChannel channel, K key, ScanArgs scanArgs) { + return createMono(() -> commandBuilder.zscanStreaming(channel, key, scanArgs)); + } + + @Override + public Mono zscan(ScoredValueStreamingChannel channel, K key, ScanCursor scanCursor, + ScanArgs scanArgs) { + return createMono(() -> commandBuilder.zscanStreaming(channel, key, scanCursor, scanArgs)); + } + + @Override + public Mono zscan(ScoredValueStreamingChannel channel, K key, ScanCursor scanCursor) { + return createMono(() -> commandBuilder.zscanStreaming(channel, key, scanCursor)); + } + + @Override + public Mono zscore(K key, V member) { + return createMono(() -> commandBuilder.zscore(key, member)); + } + + @Override + public Mono zunionstore(K destination, K... keys) { + return createMono(() -> commandBuilder.zunionstore(destination, keys)); + } + + @Override + public Mono zunionstore(K destination, ZStoreArgs storeArgs, K... 
keys) { + return createMono(() -> commandBuilder.zunionstore(destination, storeArgs, keys)); + } + + private byte[] encodeScript(String script) { + LettuceAssert.notNull(script, "Lua script must not be null"); + LettuceAssert.notEmpty(script, "Lua script must not be empty"); + return script.getBytes(getConnection().getOptions().getScriptCharset()); + } +} diff --git a/src/main/java/io/lettuce/core/BitFieldArgs.java b/src/main/java/io/lettuce/core/BitFieldArgs.java new file mode 100644 index 0000000000..9c17fdb31b --- /dev/null +++ b/src/main/java/io/lettuce/core/BitFieldArgs.java @@ -0,0 +1,664 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.nio.charset.StandardCharsets; +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; + +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.protocol.CommandArgs; +import io.lettuce.core.protocol.CommandType; +import io.lettuce.core.protocol.ProtocolKeyword; + +/** + * Argument list builder for the Redis BITFIELD command. + *
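As a usage illustration for the reactive command methods defined above: each method defers execution until subscription and returns a Mono or Flux. The following minimal sketch assumes a local Redis instance; the class name, URI, and key/member names are made up for the example.

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.reactive.RedisReactiveCommands;

public class ReactiveSetExample {

    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379"); // illustrative URI
        StatefulRedisConnection<String, String> connection = client.connect();
        RedisReactiveCommands<String, String> reactive = connection.reactive();

        // SADD yields a Mono<Long>; SMEMBERS dissolves the set reply into a Flux of members.
        reactive.sadd("known-hosts", "alpha", "beta")
                .thenMany(reactive.smembers("known-hosts"))
                .doOnNext(System.out::println)
                .blockLast();

        connection.close();
        client.shutdown();
    }
}
```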
+ * {@link BitFieldArgs} is a mutable object and instances should be used only once to avoid shared mutable state. + * + * @author Mark Paluch + * @author Ian Pojman + * @since 4.2 + */ +public class BitFieldArgs implements CompositeArgument { + + private List commands; + + /** + * Creates a new {@link BitFieldArgs} instance. + */ + public BitFieldArgs() { + this(new ArrayList<>()); + } + + private BitFieldArgs(List commands) { + LettuceAssert.notNull(commands, "Commands must not be null"); + this.commands = commands; + } + + /** + * Builder entry points for {@link BitFieldArgs}. + */ + public static class Builder { + + /** + * Utility constructor. + */ + private Builder() { + + } + + /** + * Create a new {@code GET} subcommand. + * + * @param bitFieldType the bit field type, must not be {@literal null}. + * @param offset bitfield offset + * @return a new {@code GET} subcommand for the given {@code bitFieldType} and {@code offset}. + */ + public static BitFieldArgs get(BitFieldType bitFieldType, int offset) { + return new BitFieldArgs().get(bitFieldType, offset); + } + + /** + * Create a new {@code GET} subcommand. + * + * @param bitFieldType the bit field type, must not be {@literal null}. + * @param offset bitfield offset, must not be {@literal null}. + * @return a new {@code GET} subcommand for the given {@code bitFieldType} and {@code offset}. + * @since 4.3 + */ + public static BitFieldArgs get(BitFieldType bitFieldType, Offset offset) { + return new BitFieldArgs().get(bitFieldType, offset); + } + + /** + * Create a new {@code SET} subcommand. + * + * @param bitFieldType the bit field type, must not be {@literal null}. + * @param offset bitfield offset + * @param value the value + * @return a new {@code SET} subcommand for the given {@code bitFieldType}, {@code offset} and {@code value}. + */ + public static BitFieldArgs set(BitFieldType bitFieldType, int offset, long value) { + return new BitFieldArgs().set(bitFieldType, offset, value); + } + + /** + * Create a new {@code SET} subcommand. + * + * @param bitFieldType the bit field type, must not be {@literal null}. + * @param offset bitfield offset, must not be {@literal null}. + * @param value the value + * @return a new {@code SET} subcommand for the given {@code bitFieldType}, {@code offset} and {@code value}. + * @since 4.3 + */ + public static BitFieldArgs set(BitFieldType bitFieldType, Offset offset, long value) { + return new BitFieldArgs().set(bitFieldType, offset, value); + } + + /** + * Create a new {@code INCRBY} subcommand. + * + * @param bitFieldType the bit field type, must not be {@literal null}. + * @param offset bitfield offset + * @param value the value + * @return a new {@code INCRBY} subcommand for the given {@code bitFieldType}, {@code offset} and {@code value} . + */ + public static BitFieldArgs incrBy(BitFieldType bitFieldType, int offset, long value) { + return new BitFieldArgs().incrBy(bitFieldType, offset, value); + } + + /** + * Create a new {@code INCRBY} subcommand. + * + * @param bitFieldType the bit field type, must not be {@literal null}. + * @param offset bitfield offset, must not be {@literal null}. + * @param value the value + * @return a new {@code INCRBY} subcommand for the given {@code bitFieldType}, {@code offset} and {@code value} . + * @since 4.3 + */ + public static BitFieldArgs incrBy(BitFieldType bitFieldType, Offset offset, long value) { + return new BitFieldArgs().incrBy(bitFieldType, offset, value); + } + + /** + * Adds a new {@code OVERFLOW} subcommand. 
+ * + * @param overflowType type of overflow, must not be {@literal null}. + * @return a new {@code OVERFLOW} subcommand for the given {@code overflowType}. + */ + public static BitFieldArgs overflow(OverflowType overflowType) { + return new BitFieldArgs().overflow(overflowType); + } + } + + /** + * Creates a new signed {@link BitFieldType} for the given number of {@code bits}. + * + * Redis allows up to {@code 64} bits for unsigned integers. + * + * @param bits number of bits to define the integer type width. + * @return the {@link BitFieldType}. + */ + public static BitFieldType signed(int bits) { + return new BitFieldType(true, bits); + } + + /** + * Creates a new unsigned {@link BitFieldType} for the given number of {@code bits}. Redis allows up to {@code 63} bits for + * unsigned integers. + * + * @param bits number of bits to define the integer type width. + * @return the {@link BitFieldType}. + */ + public static BitFieldType unsigned(int bits) { + return new BitFieldType(false, bits); + } + + /** + * Creates a new {@link Offset} for the given {@code offset}. + * + * @param offset zero-based offset. + * @return the {@link Offset}. + * @since 4.3 + */ + public static Offset offset(int offset) { + return new Offset(false, offset); + } + + /** + * Creates a new {@link Offset} for the given {@code offset} that is multiplied by the integer type width used in the sub + * command. + * + * @param offset offset to be multiplied by the integer type width. + * @return the {@link Offset}. + * @since 4.3 + */ + public static Offset typeWidthBasedOffset(int offset) { + return new Offset(true, offset); + } + + /** + * Adds a new {@link SubCommand} to the {@code BITFIELD} execution. + * + * @param subCommand must not be {@literal null}. + */ + private BitFieldArgs addSubCommand(SubCommand subCommand) { + + LettuceAssert.notNull(subCommand, "SubCommand must not be null"); + commands.add(subCommand); + return this; + } + + /** + * Adds a new {@code GET} subcommand using offset {@code 0} and the field type of the previous command. + * + * @return a new {@code GET} subcommand for the given {@code bitFieldType} and {@code offset}. + * @throws IllegalStateException if no previous field type was found + */ + public BitFieldArgs get() { + return get(previousFieldType()); + } + + /** + * Adds a new {@code GET} subcommand using offset {@code 0}. + * + * @param bitFieldType the bit field type, must not be {@literal null}. + * @return a new {@code GET} subcommand for the given {@code bitFieldType} and {@code offset}. + */ + public BitFieldArgs get(BitFieldType bitFieldType) { + return get(bitFieldType, 0); + } + + /** + * Adds a new {@code GET} subcommand. + * + * @param bitFieldType the bit field type, must not be {@literal null}. + * @param offset bitfield offset + * @return a new {@code GET} subcommand for the given {@code bitFieldType} and {@code offset}. + */ + public BitFieldArgs get(BitFieldType bitFieldType, int offset) { + return addSubCommand(new Get(bitFieldType, false, offset)); + } + + /** + * Adds a new {@code GET} subcommand. + * + * @param bitFieldType the bit field type, must not be {@literal null}. + * @param offset bitfield offset + * @return a new {@code GET} subcommand for the given {@code bitFieldType} and {@code offset}. 
+ * @since 4.3 + */ + public BitFieldArgs get(BitFieldType bitFieldType, Offset offset) { + + LettuceAssert.notNull(offset, "BitFieldOffset must not be null"); + + return addSubCommand(new Get(bitFieldType, offset.isMultiplyByTypeWidth(), offset.getOffset())); + } + + /** + * Adds a new {@code GET} subcommand using the field type of the previous command. + * + * @param offset bitfield offset + * @return a new {@code GET} subcommand for the given {@code bitFieldType} and {@code offset}. + * @throws IllegalStateException if no previous field type was found + */ + public BitFieldArgs get(int offset) { + return get(previousFieldType(), offset); + } + + /** + * Adds a new {@code SET} subcommand using offset {@code 0} and the field type of the previous command. + * + * @param value the value + * @return a new {@code SET} subcommand for the given {@code bitFieldType}, {@code offset} and {@code value}. + * @throws IllegalStateException if no previous field type was found + */ + public BitFieldArgs set(long value) { + return set(previousFieldType(), value); + } + + /** + * Adds a new {@code SET} subcommand using offset {@code 0}. + * + * @param bitFieldType the bit field type, must not be {@literal null}. + * @param value the value + * @return a new {@code SET} subcommand for the given {@code bitFieldType}, {@code offset} and {@code value}. + */ + public BitFieldArgs set(BitFieldType bitFieldType, long value) { + return set(bitFieldType, 0, value); + } + + /** + * Adds a new {@code SET} subcommand using the field type of the previous command. + * + * @param offset bitfield offset + * @param value the value + * @return a new {@code SET} subcommand for the given {@code bitFieldType}, {@code offset} and {@code value}. + * @throws IllegalStateException if no previous field type was found + */ + public BitFieldArgs set(int offset, long value) { + return set(previousFieldType(), offset, value); + } + + /** + * Adds a new {@code SET} subcommand. + * + * @param bitFieldType the bit field type, must not be {@literal null}. + * @param offset bitfield offset + * @param value the value + * @return a new {@code SET} subcommand for the given {@code bitFieldType}, {@code offset} and {@code value}. + */ + public BitFieldArgs set(BitFieldType bitFieldType, int offset, long value) { + return addSubCommand(new Set(bitFieldType, false, offset, value)); + } + + /** + * Adds a new {@code SET} subcommand. + * + * @param bitFieldType the bit field type, must not be {@literal null}. + * @param offset bitfield offset, must not be {@literal null}. + * @param value the value + * @return a new {@code SET} subcommand for the given {@code bitFieldType}, {@code offset} and {@code value}. + * @since 4.3 + */ + public BitFieldArgs set(BitFieldType bitFieldType, Offset offset, long value) { + + LettuceAssert.notNull(offset, "BitFieldOffset must not be null"); + + return addSubCommand(new Set(bitFieldType, offset.isMultiplyByTypeWidth(), offset.getOffset(), value)); + } + + /** + * Adds a new {@code INCRBY} subcommand using offset {@code 0} and the field type of the previous command. + * + * @param value the value + * @return a new {@code INCRBY} subcommand for the given {@code bitFieldType}, {@code offset} and {@code value}. + * @throws IllegalStateException if no previous field type was found + */ + public BitFieldArgs incrBy(long value) { + return incrBy(previousFieldType(), value); + } + + /** + * Adds a new {@code INCRBY} subcommand using offset {@code 0}. 
+ * + * @param bitFieldType the bit field type, must not be {@literal null}. + * @param value the value + * @return a new {@code INCRBY} subcommand for the given {@code bitFieldType}, {@code offset} and {@code value}. + */ + public BitFieldArgs incrBy(BitFieldType bitFieldType, long value) { + return incrBy(bitFieldType, 0, value); + } + + /** + * Adds a new {@code INCRBY} subcommand using the field type of the previous command. + * + * @param offset bitfield offset + * @param value the value + * @return a new {@code INCRBY} subcommand for the given {@code bitFieldType}, {@code offset} and {@code value}. + * @throws IllegalStateException if no previous field type was found + */ + public BitFieldArgs incrBy(int offset, long value) { + return incrBy(previousFieldType(), offset, value); + } + + /** + * Adds a new {@code INCRBY} subcommand. + * + * @param bitFieldType the bit field type, must not be {@literal null}. + * @param offset bitfield offset + * @param value the value + * @return a new {@code INCRBY} subcommand for the given {@code bitFieldType}, {@code offset} and {@code value}. + */ + public BitFieldArgs incrBy(BitFieldType bitFieldType, int offset, long value) { + return addSubCommand(new IncrBy(bitFieldType, false, offset, value)); + } + + /** + * Adds a new {@code INCRBY} subcommand. + * + * @param bitFieldType the bit field type, must not be {@literal null}. + * @param offset bitfield offset, must not be {@literal null}. + * @param value the value + * @return a new {@code INCRBY} subcommand for the given {@code bitFieldType}, {@code offset} and {@code value}. + * @since 4.3 + */ + public BitFieldArgs incrBy(BitFieldType bitFieldType, Offset offset, long value) { + + LettuceAssert.notNull(offset, "BitFieldOffset must not be null"); + + return addSubCommand(new IncrBy(bitFieldType, offset.isMultiplyByTypeWidth(), offset.getOffset(), value)); + } + + /** + * Adds a new {@code OVERFLOW} subcommand. + * + * @param overflowType type of overflow, must not be {@literal null}. + * @return a new {@code OVERFLOW} subcommand for the given {@code overflowType}. + */ + public BitFieldArgs overflow(OverflowType overflowType) { + return addSubCommand(new Overflow(overflowType)); + } + + private BitFieldType previousFieldType() { + + List list = new ArrayList<>(commands); + Collections.reverse(list); + + for (SubCommand command : list) { + + if (command instanceof Get) { + return ((Get) command).bitFieldType; + } + + if (command instanceof Set) { + return ((Set) command).bitFieldType; + } + + if (command instanceof IncrBy) { + return ((IncrBy) command).bitFieldType; + } + } + + throw new IllegalStateException("No previous field type found"); + } + + /** + * Representation for the {@code SET} subcommand for {@code BITFIELD}. 
+ */ + private static class Set extends SubCommand { + + private final BitFieldType bitFieldType; + private final boolean bitOffset; + private final long offset; + private final long value; + + private Set(BitFieldType bitFieldType, boolean bitOffset, int offset, long value) { + + LettuceAssert.notNull(bitFieldType, "BitFieldType must not be null"); + LettuceAssert.isTrue(offset > -1, "Offset must be greater or equal to 0"); + + this.bitFieldType = bitFieldType; + this.bitOffset = bitOffset; + this.offset = offset; + this.value = value; + } + + @Override + void build(CommandArgs args) { + + args.add(CommandType.SET).add(bitFieldType.asString()); + + if (bitOffset) { + args.add("#" + offset); + } else { + args.add(offset); + } + + args.add(value); + } + } + + /** + * Representation for the {@code GET} subcommand for {@code BITFIELD}. + */ + private static class Get extends SubCommand { + + private final BitFieldType bitFieldType; + private final boolean bitOffset; + private final int offset; + + private Get(BitFieldType bitFieldType, boolean bitOffset, int offset) { + + LettuceAssert.notNull(bitFieldType, "BitFieldType must not be null"); + LettuceAssert.isTrue(offset > -1, "Offset must be greater or equal to 0"); + + this.bitFieldType = bitFieldType; + this.bitOffset = bitOffset; + this.offset = offset; + } + + @Override + void build(CommandArgs args) { + + args.add(CommandType.GET).add(bitFieldType.asString()); + + if (bitOffset) { + args.add("#" + offset); + } else { + args.add(offset); + } + } + } + + /** + * Representation for the {@code INCRBY} subcommand for {@code BITFIELD}. + */ + private static class IncrBy extends SubCommand { + + private final BitFieldType bitFieldType; + private final boolean bitOffset; + private final long offset; + private final long value; + + private IncrBy(BitFieldType bitFieldType, boolean offsetWidthMultiplier, int offset, long value) { + + LettuceAssert.notNull(bitFieldType, "BitFieldType must not be null"); + LettuceAssert.isTrue(offset > -1, "Offset must be greater or equal to 0"); + + this.bitFieldType = bitFieldType; + this.bitOffset = offsetWidthMultiplier; + this.offset = offset; + this.value = value; + } + + @Override + void build(CommandArgs args) { + + args.add(CommandType.INCRBY).add(bitFieldType.asString()); + + if (bitOffset) { + args.add("#" + offset); + } else { + args.add(offset); + } + + args.add(value); + + } + } + + /** + * Representation for the {@code INCRBY} subcommand for {@code BITFIELD}. + */ + private static class Overflow extends SubCommand { + + private final OverflowType overflowType; + + private Overflow(OverflowType overflowType) { + + LettuceAssert.notNull(overflowType, "OverflowType must not be null"); + this.overflowType = overflowType; + } + + @Override + void build(CommandArgs args) { + args.add("OVERFLOW").add(overflowType); + } + } + + /** + * Base class for bitfield subcommands. + */ + private abstract static class SubCommand { + abstract void build(CommandArgs args); + } + + public void build(CommandArgs args) { + + for (SubCommand command : commands) { + command.build(args); + } + } + + /** + * Represents the overflow types for the {@code OVERFLOW} subcommand argument. 
+ */ + public enum OverflowType implements ProtocolKeyword { + + WRAP, SAT, FAIL; + + public final byte[] bytes; + + OverflowType() { + bytes = name().getBytes(StandardCharsets.US_ASCII); + } + + @Override + public byte[] getBytes() { + return bytes; + } + } + + /** + * Represents a bit field type with details about signed/unsigned and the number of bits. + */ + public static class BitFieldType { + + private final boolean signed; + private final int bits; + + private BitFieldType(boolean signed, int bits) { + + LettuceAssert.isTrue(bits > 0, "Bits must be greater 0"); + + if (signed) { + LettuceAssert.isTrue(bits < 65, "Signed integers support only up to 64 bits"); + } else { + LettuceAssert.isTrue(bits < 64, "Unsigned integers support only up to 63 bits"); + } + + this.signed = signed; + this.bits = bits; + } + + /** + * + * @return {@literal true} if the bitfield type is signed. + */ + public boolean isSigned() { + return signed; + } + + /** + * + * @return number of bits. + */ + public int getBits() { + return bits; + } + + private String asString() { + return (signed ? "i" : "u") + bits; + } + + @Override + public String toString() { + return asString(); + } + } + + /** + * Represents a bit field offset. See also Bits and + * positional offsets + * + * @since 4.3 + */ + public static class Offset { + + private final boolean multiplyByTypeWidth; + private final int offset; + + private Offset(boolean multiplyByTypeWidth, int offset) { + + this.multiplyByTypeWidth = multiplyByTypeWidth; + this.offset = offset; + } + + /** + * @return {@literal true} if the offset should be multiplied by integer width that is represented with a leading hash ( + * {@code #}) when constructing the command + */ + public boolean isMultiplyByTypeWidth() { + return multiplyByTypeWidth; + } + + /** + * + * @return the offset. + */ + public int getOffset() { + return offset; + } + + @Override + public String toString() { + return (multiplyByTypeWidth ? "#" : "") + offset; + } + } +} diff --git a/src/main/java/io/lettuce/core/ChannelGroupListener.java b/src/main/java/io/lettuce/core/ChannelGroupListener.java new file mode 100644 index 0000000000..f12eea1fe4 --- /dev/null +++ b/src/main/java/io/lettuce/core/ChannelGroupListener.java @@ -0,0 +1,59 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import static io.lettuce.core.ConnectionEventTrigger.local; +import static io.lettuce.core.ConnectionEventTrigger.remote; + +import io.lettuce.core.event.EventBus; +import io.lettuce.core.event.connection.ConnectedEvent; +import io.lettuce.core.event.connection.DisconnectedEvent; +import io.netty.channel.ChannelHandler; +import io.netty.channel.ChannelHandlerContext; +import io.netty.channel.ChannelInboundHandlerAdapter; +import io.netty.channel.group.ChannelGroup; + +/** + * A netty {@link ChannelHandler} responsible for monitoring the channel and adding/removing the channel from/to the + * ChannelGroup. 
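To illustrate the BitFieldArgs API completed above: subcommands are chained fluently, with field types created via the signed/unsigned factories and offsets via offset/typeWidthBasedOffset. A minimal sketch, in which the field widths, offsets, and values are arbitrary:

```java
import static io.lettuce.core.BitFieldArgs.Builder.set;
import static io.lettuce.core.BitFieldArgs.typeWidthBasedOffset;
import static io.lettuce.core.BitFieldArgs.unsigned;

import io.lettuce.core.BitFieldArgs;

// SET u8 #0 255, OVERFLOW SAT, INCRBY u8 #0 10, GET u8 #0
BitFieldArgs args = set(unsigned(8), typeWidthBasedOffset(0), 255)
        .overflow(BitFieldArgs.OverflowType.SAT)
        .incrBy(unsigned(8), typeWidthBasedOffset(0), 10)
        .get(unsigned(8), typeWidthBasedOffset(0));
```

The resulting args object is then handed to the BITFIELD command (for example through a bitfield(key, args) method on the command interfaces) and, since the class is mutable, it should not be reused across invocations.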
+ * + * @author Will Glozer + * @author Mark Paluch + */ +class ChannelGroupListener extends ChannelInboundHandlerAdapter { + + private final ChannelGroup channels; + private final EventBus eventBus; + + public ChannelGroupListener(ChannelGroup channels, EventBus eventBus) { + this.channels = channels; + this.eventBus = eventBus; + } + + @Override + public void channelActive(ChannelHandlerContext ctx) throws Exception { + eventBus.publish(new ConnectedEvent(local(ctx), remote(ctx))); + channels.add(ctx.channel()); + super.channelActive(ctx); + } + + @Override + public void channelInactive(ChannelHandlerContext ctx) throws Exception { + eventBus.publish(new DisconnectedEvent(local(ctx), remote(ctx))); + channels.remove(ctx.channel()); + super.channelInactive(ctx); + } +} diff --git a/src/main/java/io/lettuce/core/ClientOptions.java b/src/main/java/io/lettuce/core/ClientOptions.java new file mode 100644 index 0000000000..811428c7a7 --- /dev/null +++ b/src/main/java/io/lettuce/core/ClientOptions.java @@ -0,0 +1,539 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.io.Serializable; +import java.nio.charset.Charset; +import java.nio.charset.StandardCharsets; + +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.protocol.ProtocolVersion; +import io.lettuce.core.resource.ClientResources; + +/** + * Client Options to control the behavior of {@link RedisClient}. 
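ChannelGroupListener above publishes ConnectedEvent and DisconnectedEvent to the EventBus whenever a channel becomes active or inactive. As a sketch of how an application could observe those events, assuming access to the client's ClientResources (for example via getResources()):

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.event.connection.ConnectedEvent;
import io.lettuce.core.event.connection.DisconnectedEvent;

RedisClient client = RedisClient.create("redis://localhost:6379"); // illustrative URI

// The event bus exposes a Flux of events; connection lifecycle events originate from
// handlers such as ChannelGroupListener.
client.getResources().eventBus().get()
        .filter(event -> event instanceof ConnectedEvent || event instanceof DisconnectedEvent)
        .subscribe(event -> System.out.println("Connection event: " + event));
```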
+ * + * @author Mark Paluch + * @author Gavin Cook + */ +@SuppressWarnings("serial") +public class ClientOptions implements Serializable { + + public static final boolean DEFAULT_PING_BEFORE_ACTIVATE_CONNECTION = true; + public static final ProtocolVersion DEFAULT_PROTOCOL_VERSION = ProtocolVersion.newestSupported(); + public static final boolean DEFAULT_AUTO_RECONNECT = true; + public static final boolean DEFAULT_CANCEL_CMD_RECONNECT_FAIL = false; + public static final boolean DEFAULT_PUBLISH_ON_SCHEDULER = false; + public static final boolean DEFAULT_SUSPEND_RECONNECT_PROTO_FAIL = false; + public static final int DEFAULT_REQUEST_QUEUE_SIZE = Integer.MAX_VALUE; + public static final DisconnectedBehavior DEFAULT_DISCONNECTED_BEHAVIOR = DisconnectedBehavior.DEFAULT; + public static final Charset DEFAULT_SCRIPT_CHARSET = StandardCharsets.UTF_8; + public static final SocketOptions DEFAULT_SOCKET_OPTIONS = SocketOptions.create(); + public static final SslOptions DEFAULT_SSL_OPTIONS = SslOptions.create(); + public static final TimeoutOptions DEFAULT_TIMEOUT_OPTIONS = TimeoutOptions.create(); + public static final int DEFAULT_BUFFER_USAGE_RATIO = 3; + + private final boolean pingBeforeActivateConnection; + private final ProtocolVersion protocolVersion; + private final boolean autoReconnect; + private final boolean cancelCommandsOnReconnectFailure; + private final boolean publishOnScheduler; + private final boolean suspendReconnectOnProtocolFailure; + private final int requestQueueSize; + private final DisconnectedBehavior disconnectedBehavior; + private final Charset scriptCharset; + private final SocketOptions socketOptions; + private final SslOptions sslOptions; + private final TimeoutOptions timeoutOptions; + private final int bufferUsageRatio; + + protected ClientOptions(Builder builder) { + this.pingBeforeActivateConnection = builder.pingBeforeActivateConnection; + this.protocolVersion = builder.protocolVersion; + this.cancelCommandsOnReconnectFailure = builder.cancelCommandsOnReconnectFailure; + this.publishOnScheduler = builder.publishOnScheduler; + this.autoReconnect = builder.autoReconnect; + this.suspendReconnectOnProtocolFailure = builder.suspendReconnectOnProtocolFailure; + this.requestQueueSize = builder.requestQueueSize; + this.disconnectedBehavior = builder.disconnectedBehavior; + this.scriptCharset = builder.scriptCharset; + this.socketOptions = builder.socketOptions; + this.sslOptions = builder.sslOptions; + this.timeoutOptions = builder.timeoutOptions; + this.bufferUsageRatio = builder.bufferUsageRatio; + } + + protected ClientOptions(ClientOptions original) { + this.pingBeforeActivateConnection = original.isPingBeforeActivateConnection(); + this.protocolVersion = original.getConfiguredProtocolVersion(); + this.autoReconnect = original.isAutoReconnect(); + this.cancelCommandsOnReconnectFailure = original.isCancelCommandsOnReconnectFailure(); + this.publishOnScheduler = original.isPublishOnScheduler(); + this.suspendReconnectOnProtocolFailure = original.isSuspendReconnectOnProtocolFailure(); + this.requestQueueSize = original.getRequestQueueSize(); + this.disconnectedBehavior = original.getDisconnectedBehavior(); + this.scriptCharset = original.getScriptCharset(); + this.socketOptions = original.getSocketOptions(); + this.sslOptions = original.getSslOptions(); + this.timeoutOptions = original.getTimeoutOptions(); + this.bufferUsageRatio = original.getBufferUsageRatio(); + } + + /** + * Create a copy of {@literal options} + * + * @param options the original + * @return A new 
instance of {@link ClientOptions} containing the values of {@literal options} + */ + public static ClientOptions copyOf(ClientOptions options) { + return new ClientOptions(options); + } + + /** + * Returns a new {@link ClientOptions.Builder} to construct {@link ClientOptions}. + * + * @return a new {@link ClientOptions.Builder} to construct {@link ClientOptions}. + */ + public static ClientOptions.Builder builder() { + return new ClientOptions.Builder(); + } + + /** + * Create a new instance of {@link ClientOptions} with default settings. + * + * @return a new instance of {@link ClientOptions} with default settings + */ + public static ClientOptions create() { + return builder().build(); + } + + /** + * Builder for {@link ClientOptions}. + */ + public static class Builder { + + private boolean pingBeforeActivateConnection = DEFAULT_PING_BEFORE_ACTIVATE_CONNECTION; + private ProtocolVersion protocolVersion; + private boolean autoReconnect = DEFAULT_AUTO_RECONNECT; + private boolean cancelCommandsOnReconnectFailure = DEFAULT_CANCEL_CMD_RECONNECT_FAIL; + private boolean publishOnScheduler = DEFAULT_PUBLISH_ON_SCHEDULER; + private boolean suspendReconnectOnProtocolFailure = DEFAULT_SUSPEND_RECONNECT_PROTO_FAIL; + private int requestQueueSize = DEFAULT_REQUEST_QUEUE_SIZE; + private DisconnectedBehavior disconnectedBehavior = DEFAULT_DISCONNECTED_BEHAVIOR; + private Charset scriptCharset = DEFAULT_SCRIPT_CHARSET; + private SocketOptions socketOptions = DEFAULT_SOCKET_OPTIONS; + private SslOptions sslOptions = DEFAULT_SSL_OPTIONS; + private TimeoutOptions timeoutOptions = DEFAULT_TIMEOUT_OPTIONS; + private int bufferUsageRatio = DEFAULT_BUFFER_USAGE_RATIO; + + protected Builder() { + } + + /** + * Sets the {@literal PING} before activate connection flag. Defaults to {@literal true}. See + * {@link #DEFAULT_PING_BEFORE_ACTIVATE_CONNECTION}. This option has no effect unless forcing to use the RESP 2 protocol + * version. + * + * @param pingBeforeActivateConnection true/false + * @return {@code this} + */ + public Builder pingBeforeActivateConnection(boolean pingBeforeActivateConnection) { + this.pingBeforeActivateConnection = pingBeforeActivateConnection; + return this; + } + + /** + * Sets the {@link ProtocolVersion} to use. Defaults to {@literal RESP3}. See {@link #DEFAULT_PROTOCOL_VERSION}. + * + * @param protocolVersion version to use. + * @return {@code this} + * @since 6.0 + * @see ProtocolVersion#newestSupported() + */ + public Builder protocolVersion(ProtocolVersion protocolVersion) { + + this.protocolVersion = protocolVersion; + return this; + } + + /** + * Enables or disables auto reconnection on connection loss. Defaults to {@literal true}. See + * {@link #DEFAULT_AUTO_RECONNECT}. + * + * @param autoReconnect true/false + * @return {@code this} + */ + public Builder autoReconnect(boolean autoReconnect) { + this.autoReconnect = autoReconnect; + return this; + } + + /** + * Suspends reconnect when reconnects run into protocol failures (SSL verification, PING before connect fails). Defaults + * to {@literal false}. See {@link #DEFAULT_SUSPEND_RECONNECT_PROTO_FAIL}. + * + * @param suspendReconnectOnProtocolFailure true/false + * @return {@code this} + */ + public Builder suspendReconnectOnProtocolFailure(boolean suspendReconnectOnProtocolFailure) { + this.suspendReconnectOnProtocolFailure = suspendReconnectOnProtocolFailure; + return this; + } + + /** + * Allows cancelling queued commands in case a reconnect fails.Defaults to {@literal false}. 
See + * {@link #DEFAULT_CANCEL_CMD_RECONNECT_FAIL}. + * + * @param cancelCommandsOnReconnectFailure true/false + * @return {@code this} + */ + public Builder cancelCommandsOnReconnectFailure(boolean cancelCommandsOnReconnectFailure) { + this.cancelCommandsOnReconnectFailure = cancelCommandsOnReconnectFailure; + return this; + } + + /** + * Use a dedicated {@link reactor.core.scheduler.Scheduler} to emit reactive data signals. Enabling this option can be + * useful for reactive sequences that require a significant amount of processing with a single/a few Redis connections. + *
+ * A single Redis connection operates on a single thread. Operations that require a significant amount of processing can + * lead to a single-threaded-like behavior for all consumers of the Redis connection. When enabled, data signals will be + * emitted using a different thread served by {@link ClientResources#eventExecutorGroup()}. Defaults to {@literal false} + * , see {@link #DEFAULT_PUBLISH_ON_SCHEDULER}. + * + * @param publishOnScheduler true/false + * @return {@code this} + * @since 5.2 + * @see org.reactivestreams.Subscriber#onNext(Object) + * @see ClientResources#eventExecutorGroup() + */ + public Builder publishOnScheduler(boolean publishOnScheduler) { + this.publishOnScheduler = publishOnScheduler; + return this; + } + + /** + * Set the per-connection request queue size. The command invocation will lead to a {@link RedisException} if the queue + * size is exceeded. Setting the {@code requestQueueSize} to a lower value will lead earlier to exceptions during + * overload or while the connection is in a disconnected state. A higher value means hitting the boundary will take + * longer to occur, but more requests will potentially be queued up and more heap space is used. Defaults to + * {@link Integer#MAX_VALUE}. See {@link #DEFAULT_REQUEST_QUEUE_SIZE}. + * + * @param requestQueueSize the queue size. + * @return {@code this} + */ + public Builder requestQueueSize(int requestQueueSize) { + this.requestQueueSize = requestQueueSize; + return this; + } + + /** + * Sets the behavior for command invocation when connections are in a disconnected state. Defaults to {@literal true}. + * See {@link #DEFAULT_DISCONNECTED_BEHAVIOR}. + * + * @param disconnectedBehavior must not be {@literal null}. + * @return {@code this} + */ + public Builder disconnectedBehavior(DisconnectedBehavior disconnectedBehavior) { + + LettuceAssert.notNull(disconnectedBehavior, "DisconnectedBehavior must not be null"); + this.disconnectedBehavior = disconnectedBehavior; + return this; + } + + /** + * Sets the Lua script {@link Charset} to use to encode {@link String scripts} to {@code byte[]}. Defaults to + * {@link StandardCharsets#UTF_8}. See {@link #DEFAULT_SCRIPT_CHARSET}. + * + * @param scriptCharset must not be {@literal null}. + * @return {@code this} + * @since 6.0 + */ + public Builder scriptCharset(Charset scriptCharset) { + + LettuceAssert.notNull(scriptCharset, "ScriptCharset must not be null"); + this.scriptCharset = scriptCharset; + return this; + } + + /** + * Sets the low-level {@link SocketOptions} for the connections kept to Redis servers. See + * {@link #DEFAULT_SOCKET_OPTIONS}. + * + * @param socketOptions must not be {@literal null}. + * @return {@code this} + */ + public Builder socketOptions(SocketOptions socketOptions) { + + LettuceAssert.notNull(socketOptions, "SocketOptions must not be null"); + this.socketOptions = socketOptions; + return this; + } + + /** + * Sets the {@link SslOptions} for SSL connections kept to Redis servers. See {@link #DEFAULT_SSL_OPTIONS}. + * + * @param sslOptions must not be {@literal null}. + * @return {@code this} + */ + public Builder sslOptions(SslOptions sslOptions) { + + LettuceAssert.notNull(sslOptions, "SslOptions must not be null"); + this.sslOptions = sslOptions; + return this; + } + + /** + * Sets the {@link TimeoutOptions} to expire and cancel commands. See {@link #DEFAULT_TIMEOUT_OPTIONS}. + * + * @param timeoutOptions must not be {@literal null}. 
+ * @return {@code this} + * @since 5.1 + */ + public Builder timeoutOptions(TimeoutOptions timeoutOptions) { + + LettuceAssert.notNull(timeoutOptions, "TimeoutOptions must not be null"); + this.timeoutOptions = timeoutOptions; + return this; + } + + /** + * Buffer usage ratio for {@link io.lettuce.core.protocol.CommandHandler}. This ratio controls how often bytes are + * discarded during decoding. In particular, when buffer usage reaches {@code bufferUsageRatio / bufferUsageRatio + 1}. + * E.g. setting {@code bufferUsageRatio} to {@literal 3}, will discard read bytes once the buffer usage reaches 75 + * percent. See {@link #DEFAULT_BUFFER_USAGE_RATIO}. + * + * @param bufferUsageRatio must greater between 0 and 2^31-1, typically a value between 1 and 10 representing 50% to + * 90%. + * @return {@code this} + * @since 5.2 + */ + public Builder bufferUsageRatio(int bufferUsageRatio) { + + LettuceAssert.isTrue(bufferUsageRatio > 0 && bufferUsageRatio < Integer.MAX_VALUE, + "BufferUsageRatio must grater than 0"); + + this.bufferUsageRatio = bufferUsageRatio; + return this; + } + + /** + * Create a new instance of {@link ClientOptions}. + * + * @return new instance of {@link ClientOptions} + */ + public ClientOptions build() { + return new ClientOptions(this); + } + } + + /** + * Returns a builder to create new {@link ClientOptions} whose settings are replicated from the current + * {@link ClientOptions}. + * + * @return a {@link ClientOptions.Builder} to create new {@link ClientOptions} whose settings are replicated from the + * current {@link ClientOptions}. + * + * @since 5.1 + */ + public ClientOptions.Builder mutate() { + Builder builder = new Builder(); + + builder.autoReconnect(isAutoReconnect()).bufferUsageRatio(getBufferUsageRatio()) + .cancelCommandsOnReconnectFailure(isCancelCommandsOnReconnectFailure()) + .disconnectedBehavior(getDisconnectedBehavior()).scriptCharset(getScriptCharset()) + .publishOnScheduler(isPublishOnScheduler()).pingBeforeActivateConnection(isPingBeforeActivateConnection()) + .protocolVersion(getConfiguredProtocolVersion()).requestQueueSize(getRequestQueueSize()) + .socketOptions(getSocketOptions()).sslOptions(getSslOptions()) + .suspendReconnectOnProtocolFailure(isSuspendReconnectOnProtocolFailure()).timeoutOptions(getTimeoutOptions()); + + return builder; + } + + /** + * Enables initial {@literal PING} barrier before any connection is usable. If {@literal true} (default is {@literal true} + * ), every connection and reconnect will issue a {@literal PING} command and awaits its response before the connection is + * activated and enabled for use. If the check fails, the connect/reconnect is treated as failure. This option has no effect + * unless forcing to use the RESP 2 protocol version. + * + * @return {@literal true} if {@literal PING} barrier is enabled. + */ + public boolean isPingBeforeActivateConnection() { + return pingBeforeActivateConnection; + } + + /** + * Returns the {@link ProtocolVersion} to use. + * + * @return the {@link ProtocolVersion} to use. + */ + public ProtocolVersion getProtocolVersion() { + + ProtocolVersion protocolVersion = getConfiguredProtocolVersion(); + return protocolVersion == null ? DEFAULT_PROTOCOL_VERSION : protocolVersion; + } + + /** + * Returns the configured {@link ProtocolVersion}. May return {@code null} if unconfigured. + * + * @return the {@link ProtocolVersion} to use. May be {@code null}. 
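As an illustrative sketch (not part of this patch) of the Builder and mutate() API described above, assuming a RedisClient instance named redisClient and the usual io.lettuce.core and java.time imports:

    ClientOptions options = ClientOptions.builder()
            .autoReconnect(true)
            .pingBeforeActivateConnection(true)
            .protocolVersion(ProtocolVersion.RESP2) // the PING barrier only applies when RESP2 is forced
            .disconnectedBehavior(ClientOptions.DisconnectedBehavior.REJECT_COMMANDS)
            .timeoutOptions(TimeoutOptions.enabled(Duration.ofSeconds(10)))
            .build();

    // mutate() replicates the current settings into a fresh Builder for selective changes.
    ClientOptions relaxed = options.mutate()
            .disconnectedBehavior(ClientOptions.DisconnectedBehavior.DEFAULT)
            .build();

    redisClient.setOptions(options);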
+ * @since 6.0 + */ + public ProtocolVersion getConfiguredProtocolVersion() { + return protocolVersion; + } + + /** + * Controls auto-reconnect behavior on connections. If auto-reconnect is {@literal true} (default), it is enabled. As soon + * as a connection gets closed/reset without the intention to close it, the client will try to reconnect and re-issue any + * queued commands. + * + * This flag has also the effect that disconnected connections will refuse commands and cancel these with an exception. + * + * @return {@literal true} if auto-reconnect is enabled. + */ + public boolean isAutoReconnect() { + return autoReconnect; + } + + /** + * If this flag is {@literal true} any queued commands will be canceled when a reconnect fails within the activation + * sequence. Default is {@literal false}. + * + * @return {@literal true} if commands should be cancelled on reconnect failures. + */ + public boolean isCancelCommandsOnReconnectFailure() { + return cancelCommandsOnReconnectFailure; + } + + /** + * Use a dedicated {@link reactor.core.scheduler.Scheduler} to emit reactive data signals. Enabling this option can be + * useful for reactive sequences that require a significant amount of processing with a single/a few Redis connections. + *
+ * A single Redis connection operates on a single thread. Operations that require a significant amount of processing can + * lead to a single-threaded-like behavior for all consumers of the Redis connection. When enabled, data signals will be + * emitted using a different thread served by {@link ClientResources#eventExecutorGroup()}. Defaults to {@literal false} , + * see {@link #DEFAULT_PUBLISH_ON_SCHEDULER}. + * + * @return {@literal true} to use a dedicated {@link reactor.core.scheduler.Scheduler} + * @since 5.2 + */ + public boolean isPublishOnScheduler() { + return publishOnScheduler; + } + + /** + * If this flag is {@literal true} the reconnect will be suspended on protocol errors. Protocol errors are errors while SSL + * negotiation or when PING before connect fails. + * + * @return {@literal true} if reconnect will be suspended on protocol errors. + */ + public boolean isSuspendReconnectOnProtocolFailure() { + return suspendReconnectOnProtocolFailure; + } + + /** + * Request queue size for a connection. This value applies per connection. The command invocation will throw a + * {@link RedisException} if the queue size is exceeded and a new command is requested. Defaults to + * {@link Integer#MAX_VALUE}. + * + * @return the request queue size. + */ + public int getRequestQueueSize() { + return requestQueueSize; + } + + /** + * Behavior for command invocation when connections are in a disconnected state. Defaults to + * {@link DisconnectedBehavior#DEFAULT true}. See {@link #DEFAULT_DISCONNECTED_BEHAVIOR}. + * + * @return the behavior for command invocation when connections are in a disconnected state + */ + public DisconnectedBehavior getDisconnectedBehavior() { + return disconnectedBehavior; + } + + /** + * Returns the Lua script {@link Charset}. + * + * @return the script {@link Charset}. + * @since 6.0 + */ + public Charset getScriptCharset() { + return scriptCharset; + } + + /** + * Returns the {@link SocketOptions}. + * + * @return the {@link SocketOptions}. + */ + public SocketOptions getSocketOptions() { + return socketOptions; + } + + /** + * Returns the {@link SslOptions}. + * + * @return the {@link SslOptions}. + */ + public SslOptions getSslOptions() { + return sslOptions; + } + + /** + * Returns the {@link TimeoutOptions}. + * + * @return the {@link TimeoutOptions}. + * @since 5.1 + */ + public TimeoutOptions getTimeoutOptions() { + return timeoutOptions; + } + + /** + * Buffer usage ratio for {@link io.lettuce.core.protocol.CommandHandler}. This ratio controls how often bytes are discarded + * during decoding. In particular, when buffer usage reaches {@code bufferUsageRatio / bufferUsageRatio + 1}. E.g. setting + * {@code bufferUsageRatio} to {@literal 3}, will discard read bytes once the buffer usage reaches 75 percent. + * + * @return the buffer usage ratio. + * @since 5.2 + */ + public int getBufferUsageRatio() { + return bufferUsageRatio; + } + + /** + * Behavior of connections in disconnected state. + */ + public enum DisconnectedBehavior { + + /** + * Accept commands when auto-reconnect is enabled, reject commands when auto-reconnect is disabled. + */ + DEFAULT, + + /** + * Accept commands in disconnected state. + */ + ACCEPT_COMMANDS, + + /** + * Reject commands in disconnected state. 
+ */ + REJECT_COMMANDS, + } +} diff --git a/src/main/java/io/lettuce/core/CloseEvents.java b/src/main/java/io/lettuce/core/CloseEvents.java new file mode 100644 index 0000000000..c56b03589c --- /dev/null +++ b/src/main/java/io/lettuce/core/CloseEvents.java @@ -0,0 +1,45 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.util.Set; +import java.util.concurrent.ConcurrentHashMap; + + +/** + * Close Events Facility. Can register/unregister CloseListener and fire a closed event to all registered listeners. + * + * @author Mark Paluch + * @since 3.0 + */ +class CloseEvents { + + private Set listeners = ConcurrentHashMap.newKeySet(); + + public void fireEventClosed(Object resource) { + for (CloseListener listener : listeners) { + listener.resourceClosed(resource); + } + } + + public void addListener(CloseListener listener) { + listeners.add(listener); + } + + interface CloseListener { + void resourceClosed(Object resource); + } +} diff --git a/src/main/java/io/lettuce/core/CompositeArgument.java b/src/main/java/io/lettuce/core/CompositeArgument.java new file mode 100644 index 0000000000..70e9d0f28a --- /dev/null +++ b/src/main/java/io/lettuce/core/CompositeArgument.java @@ -0,0 +1,48 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import io.lettuce.core.protocol.CommandArgs; + +/** + * Interface for composite command argument objects. Implementing classes of {@link CompositeArgument} consolidate multiple + * arguments for a particular Redis command in to one type and reduce the amount of individual arguments passed in a method + * signature. + *
+ * Command builder call {@link #build(CommandArgs)} during command construction to contribute command arguments for command + * invocation. A composite argument is usually stateless as it can be reused multiple times by different commands. + * + * @author Mark Paluch + * @since 5.0 + * @see CommandArgs + * @see SetArgs + * @see ZStoreArgs + * @see GeoArgs + */ +public interface CompositeArgument { + + /** + * Build command arguments and contribute arguments to {@link CommandArgs}. + *
+ * Implementing classes are required to implement this method. Depending on the command nature and configured arguments, + * this method may contribute arguments but is not required to add arguments if none are specified. + * + * @param args the command arguments, must not be {@literal null}. + * @param Key type. + * @param Value type. + */ + void build(CommandArgs args); +} diff --git a/src/main/java/io/lettuce/core/ConnectionBuilder.java b/src/main/java/io/lettuce/core/ConnectionBuilder.java new file mode 100644 index 0000000000..5b48868b3b --- /dev/null +++ b/src/main/java/io/lettuce/core/ConnectionBuilder.java @@ -0,0 +1,239 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.net.SocketAddress; +import java.time.Duration; +import java.util.ArrayList; +import java.util.List; +import java.util.function.Supplier; + +import reactor.core.publisher.Mono; +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.protocol.*; +import io.lettuce.core.resource.ClientResources; +import io.netty.bootstrap.Bootstrap; +import io.netty.channel.Channel; +import io.netty.channel.ChannelHandler; +import io.netty.channel.ChannelInitializer; +import io.netty.channel.group.ChannelGroup; +import io.netty.util.Timer; + +/** + * Connection builder for connections. This class is part of the internal API. 
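To illustrate the CompositeArgument contract completed above, a minimal, hypothetical implementation might contribute arguments only when they were configured; the class name and the EX token are illustrative only:

    public class ExampleExpireArgs implements CompositeArgument {

        private Long seconds;

        public ExampleExpireArgs ex(long seconds) {
            this.seconds = seconds;
            return this;
        }

        @Override
        public <K, V> void build(CommandArgs<K, V> args) {
            // Contributing nothing is allowed when no argument was configured.
            if (seconds != null) {
                args.add("EX").add(seconds);
            }
        }
    }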
+ * + * @author Mark Paluch + */ +public class ConnectionBuilder { + + private Mono socketAddressSupplier; + private ConnectionEvents connectionEvents; + private RedisChannelHandler connection; + private Endpoint endpoint; + private Supplier commandHandlerSupplier; + private ChannelGroup channelGroup; + private Bootstrap bootstrap; + private ClientOptions clientOptions; + private Duration timeout; + private ClientResources clientResources; + private ConnectionInitializer connectionInitializer; + private ReconnectionListener reconnectionListener = ReconnectionListener.NO_OP; + private ConnectionWatchdog connectionWatchdog; + + public static ConnectionBuilder connectionBuilder() { + return new ConnectionBuilder(); + } + + /** + * Apply settings from {@link RedisURI} + * + * @param redisURI + */ + public void apply(RedisURI redisURI) { + timeout(redisURI.getTimeout()); + } + + protected List buildHandlers() { + + LettuceAssert.assertState(channelGroup != null, "ChannelGroup must be set"); + LettuceAssert.assertState(connectionEvents != null, "ConnectionEvents must be set"); + LettuceAssert.assertState(connection != null, "Connection must be set"); + LettuceAssert.assertState(clientResources != null, "ClientResources must be set"); + LettuceAssert.assertState(endpoint != null, "Endpoint must be set"); + LettuceAssert.assertState(connectionInitializer != null, "ConnectionInitializer must be set"); + + List handlers = new ArrayList<>(); + + connection.setOptions(clientOptions); + + handlers.add(new ChannelGroupListener(channelGroup, clientResources.eventBus())); + handlers.add(new CommandEncoder()); + handlers.add(getHandshakeHandler()); + handlers.add(commandHandlerSupplier.get()); + + handlers.add(new ConnectionEventTrigger(connectionEvents, connection, clientResources.eventBus())); + + if (clientOptions.isAutoReconnect()) { + handlers.add(createConnectionWatchdog()); + } + + return handlers; + } + + protected ChannelHandler getHandshakeHandler() { + return new RedisHandshakeHandler(connectionInitializer, clientResources, timeout); + } + + protected ConnectionWatchdog createConnectionWatchdog() { + + if (connectionWatchdog != null) { + return connectionWatchdog; + } + + LettuceAssert.assertState(bootstrap != null, "Bootstrap must be set for autoReconnect=true"); + LettuceAssert.assertState(socketAddressSupplier != null, "SocketAddressSupplier must be set for autoReconnect=true"); + + ConnectionWatchdog watchdog = new ConnectionWatchdog(clientResources.reconnectDelay(), clientOptions, bootstrap, + clientResources.timer(), + clientResources.eventExecutorGroup(), socketAddressSupplier, reconnectionListener, connection, + clientResources.eventBus()); + + endpoint.registerConnectionWatchdog(watchdog); + + connectionWatchdog = watchdog; + return watchdog; + } + + public ChannelInitializer build(SocketAddress socketAddress) { + return new PlainChannelInitializer(this::buildHandlers, clientResources); + } + + public ConnectionBuilder socketAddressSupplier(Mono socketAddressSupplier) { + this.socketAddressSupplier = socketAddressSupplier; + return this; + } + + public Mono socketAddress() { + LettuceAssert.assertState(socketAddressSupplier != null, "SocketAddressSupplier must be set"); + return socketAddressSupplier; + } + + public ConnectionBuilder timeout(Duration timeout) { + this.timeout = timeout; + return this; + } + + public Duration getTimeout() { + return timeout; + } + + public ConnectionBuilder reconnectionListener(ReconnectionListener reconnectionListener) { + + 
LettuceAssert.notNull(reconnectionListener, "ReconnectionListener must not be null"); + this.reconnectionListener = reconnectionListener; + return this; + } + + public ConnectionBuilder clientOptions(ClientOptions clientOptions) { + this.clientOptions = clientOptions; + return this; + } + + public ConnectionBuilder connectionEvents(ConnectionEvents connectionEvents) { + this.connectionEvents = connectionEvents; + return this; + } + + public ConnectionBuilder connection(RedisChannelHandler connection) { + this.connection = connection; + return this; + } + + public ConnectionBuilder channelGroup(ChannelGroup channelGroup) { + this.channelGroup = channelGroup; + return this; + } + + public ConnectionBuilder commandHandler(Supplier supplier) { + this.commandHandlerSupplier = supplier; + return this; + } + + public ConnectionBuilder bootstrap(Bootstrap bootstrap) { + this.bootstrap = bootstrap; + return this; + } + + public ConnectionBuilder endpoint(Endpoint endpoint) { + this.endpoint = endpoint; + return this; + } + + public ConnectionBuilder clientResources(ClientResources clientResources) { + this.clientResources = clientResources; + return this; + } + + public ConnectionBuilder connectionInitializer(ConnectionInitializer connectionInitializer) { + this.connectionInitializer = connectionInitializer; + return this; + } + + public RedisChannelHandler connection() { + return connection; + } + + public Bootstrap bootstrap() { + return bootstrap; + } + + public ClientOptions clientOptions() { + return clientOptions; + } + + public ClientResources clientResources() { + return clientResources; + } + + public Endpoint endpoint() { + return endpoint; + } + + static class PlainChannelInitializer extends ChannelInitializer { + + private final Supplier> handlers; + private final ClientResources clientResources; + + PlainChannelInitializer(Supplier> handlers, ClientResources clientResources) { + this.handlers = handlers; + this.clientResources = clientResources; + } + + @Override + protected void initChannel(Channel channel) { + doInitialize(channel); + } + + private void doInitialize(Channel channel) { + + for (ChannelHandler handler : handlers.get()) { + channel.pipeline().addLast(handler); + } + + clientResources.nettyCustomizer().afterChannelInitialized(channel); + } + } +} diff --git a/src/main/java/io/lettuce/core/ConnectionEventTrigger.java b/src/main/java/io/lettuce/core/ConnectionEventTrigger.java new file mode 100644 index 0000000000..274f61d723 --- /dev/null +++ b/src/main/java/io/lettuce/core/ConnectionEventTrigger.java @@ -0,0 +1,79 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core; + +import java.net.SocketAddress; + +import io.lettuce.core.event.EventBus; +import io.lettuce.core.event.connection.ConnectionActivatedEvent; +import io.lettuce.core.event.connection.ConnectionDeactivatedEvent; +import io.netty.channel.Channel; +import io.netty.channel.ChannelHandlerContext; +import io.netty.channel.ChannelInboundHandlerAdapter; +import io.netty.channel.local.LocalAddress; + +/** + * @author Mark Paluch + * @since 3.0 + */ +class ConnectionEventTrigger extends ChannelInboundHandlerAdapter { + + private final ConnectionEvents connectionEvents; + private final RedisChannelHandler connection; + private final EventBus eventBus; + + ConnectionEventTrigger(ConnectionEvents connectionEvents, RedisChannelHandler connection, EventBus eventBus) { + this.connectionEvents = connectionEvents; + this.connection = connection; + this.eventBus = eventBus; + } + + @Override + public void channelActive(ChannelHandlerContext ctx) throws Exception { + connectionEvents.fireEventRedisConnected(connection, ctx.channel().remoteAddress()); + eventBus.publish(new ConnectionActivatedEvent(local(ctx), remote(ctx))); + super.channelActive(ctx); + } + + @Override + public void channelInactive(ChannelHandlerContext ctx) throws Exception { + connectionEvents.fireEventRedisDisconnected(connection); + eventBus.publish(new ConnectionDeactivatedEvent(local(ctx), remote(ctx))); + super.channelInactive(ctx); + } + + @Override + public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception { + connectionEvents.fireEventRedisExceptionCaught(connection, cause); + super.exceptionCaught(ctx, cause); + } + + static SocketAddress remote(ChannelHandlerContext ctx) { + if (ctx.channel() != null && ctx.channel().remoteAddress() != null) { + return ctx.channel().remoteAddress(); + } + return new LocalAddress("unknown"); + } + + static SocketAddress local(ChannelHandlerContext ctx) { + Channel channel = ctx.channel(); + if (channel != null && channel.localAddress() != null) { + return channel.localAddress(); + } + return LocalAddress.ANY; + } + +} diff --git a/src/main/java/io/lettuce/core/ConnectionEvents.java b/src/main/java/io/lettuce/core/ConnectionEvents.java new file mode 100644 index 0000000000..b1ceae2449 --- /dev/null +++ b/src/main/java/io/lettuce/core/ConnectionEvents.java @@ -0,0 +1,80 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.net.SocketAddress; +import java.util.Set; +import java.util.concurrent.ConcurrentHashMap; + +/** + * Close Events Facility. Can register/unregister CloseListener and fire a closed event to all registered listeners. This class + * is part of the internal API and may change without further notice. 
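Application code typically observes these connection events through AbstractRedisClient#addListener(RedisConnectionStateListener) rather than using ConnectionEvents directly. A hedged sketch, assuming a RedisClient instance named client:

    client.addListener(new RedisConnectionStateListener() {

        @Override
        public void onRedisConnected(RedisChannelHandler<?, ?> connection, SocketAddress socketAddress) {
            System.out.println("Connected to " + socketAddress);
        }

        @Override
        public void onRedisDisconnected(RedisChannelHandler<?, ?> connection) {
            System.out.println("Connection lost");
        }

        @Override
        public void onRedisExceptionCaught(RedisChannelHandler<?, ?> connection, Throwable cause) {
            cause.printStackTrace();
        }
    });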
+ * + * @author Mark Paluch + * @since 3.0 + */ +public class ConnectionEvents { + + private final Set listeners = ConcurrentHashMap.newKeySet(); + + void fireEventRedisConnected(RedisChannelHandler connection, SocketAddress socketAddress) { + for (RedisConnectionStateListener listener : listeners) { + listener.onRedisConnected(connection, socketAddress); + } + } + + void fireEventRedisDisconnected(RedisChannelHandler connection) { + for (RedisConnectionStateListener listener : listeners) { + listener.onRedisDisconnected(connection); + } + } + + void fireEventRedisExceptionCaught(RedisChannelHandler connection, Throwable cause) { + for (RedisConnectionStateListener listener : listeners) { + listener.onRedisExceptionCaught(connection, cause); + } + } + + public void addListener(RedisConnectionStateListener listener) { + listeners.add(listener); + } + + public void removeListener(RedisConnectionStateListener listener) { + listeners.remove(listener); + } + + /** + * Internal event when a channel is closed. + */ + public static class Reset { + } + + /** + * Internal event when a reconnect is initiated. + */ + public static class Reconnect { + + private final int attempt; + + public Reconnect(int attempt) { + this.attempt = attempt; + } + + public int getAttempt() { + return attempt; + } + } +} diff --git a/src/main/java/io/lettuce/core/ConnectionFuture.java b/src/main/java/io/lettuce/core/ConnectionFuture.java new file mode 100644 index 0000000000..f9f7ec1397 --- /dev/null +++ b/src/main/java/io/lettuce/core/ConnectionFuture.java @@ -0,0 +1,203 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.net.SocketAddress; +import java.util.concurrent.*; +import java.util.function.BiConsumer; +import java.util.function.BiFunction; +import java.util.function.Consumer; +import java.util.function.Function; + +import io.lettuce.core.api.StatefulConnection; + +/** + * A {@code ConnectionFuture} represents the result of an asynchronous connection initialization. The future provides a + * {@link StatefulConnection} on successful completion. It also provides the remote {@link SocketAddress}. + * + * @since 4.4 + */ +public interface ConnectionFuture extends CompletionStage, Future { + + /** + * Create a {@link ConnectionFuture} given {@link SocketAddress} and {@link CompletableFuture} holding the connection + * progress. + * + * @param remoteAddress initial connection endpoint, must not be {@literal null}. + * @param delegate must not be {@literal null}. + * @return the {@link ConnectionFuture} for {@link SocketAddress} and {@link CompletableFuture}. + * @since 5.0 + */ + static ConnectionFuture from(SocketAddress remoteAddress, CompletableFuture delegate) { + return new DefaultConnectionFuture<>(remoteAddress, delegate); + } + + /** + * Create a completed {@link ConnectionFuture} given {@link SocketAddress} and {@code value} holding the value. 
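A hedged usage sketch for ConnectionFuture: RedisClient#connectAsync returns one, so the remote address and the connection become available without blocking. The client variable, codec, and URI are assumptions for illustration:

    ConnectionFuture<StatefulRedisConnection<String, String>> future = client
            .connectAsync(StringCodec.UTF8, RedisURI.create("redis://localhost"));

    future.thenAccept(connection -> {
        System.out.println("Connected via " + future.getRemoteAddress());
        connection.sync().ping();
    }).exceptionally(throwable -> {
        throwable.printStackTrace();
        return null;
    });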
+ * + * @param remoteAddress initial connection endpoint, must not be {@literal null}. + * @param value must not be {@literal null}. + * @return the {@link ConnectionFuture} for {@link SocketAddress} and {@code value}. + * @since 5.1 + */ + static ConnectionFuture completed(SocketAddress remoteAddress, T value) { + return new DefaultConnectionFuture<>(remoteAddress, CompletableFuture.completedFuture(value)); + } + + /** + * Waits if necessary for the computation to complete, and then retrieves its result. + * + * @return the computed result + * @throws CancellationException if the computation was cancelled + * @throws ExecutionException if the computation threw an exception + * @throws InterruptedException if the current thread was interrupted while waiting + */ + T get() throws InterruptedException, ExecutionException; + + /** + * Return the remote {@link SocketAddress}. + * + * @return the remote {@link SocketAddress}. May be {@literal null} until the socket address is resolved. + */ + SocketAddress getRemoteAddress(); + + /** + * Returns the result value when complete, or throws an (unchecked) exception if completed exceptionally. To better conform + * with the use of common functional forms, if a computation involved in the completion of this CompletableFuture threw an + * exception, this method throws an (unchecked) {@link CompletionException} with the underlying exception as its cause. + * + * @return the result value + * @throws CancellationException if the computation was cancelled + * @throws CompletionException if this future completed exceptionally or a completion computation threw an exception + */ + T join(); + + @Override + ConnectionFuture thenApply(Function fn); + + @Override + ConnectionFuture thenApplyAsync(Function fn); + + @Override + ConnectionFuture thenApplyAsync(Function fn, Executor executor); + + @Override + ConnectionFuture thenAccept(Consumer action); + + @Override + ConnectionFuture thenAcceptAsync(Consumer action); + + @Override + ConnectionFuture thenAcceptAsync(Consumer action, Executor executor); + + @Override + ConnectionFuture thenRun(Runnable action); + + @Override + ConnectionFuture thenRunAsync(Runnable action); + + @Override + ConnectionFuture thenRunAsync(Runnable action, Executor executor); + + @Override + ConnectionFuture thenCombine(CompletionStage other, BiFunction fn); + + @Override + ConnectionFuture thenCombineAsync(CompletionStage other, + BiFunction fn); + + @Override + ConnectionFuture thenCombineAsync(CompletionStage other, + BiFunction fn, Executor executor); + + @Override + ConnectionFuture thenAcceptBoth(CompletionStage other, BiConsumer action); + + @Override + ConnectionFuture thenAcceptBothAsync(CompletionStage other, BiConsumer action); + + @Override + ConnectionFuture thenAcceptBothAsync(CompletionStage other, BiConsumer action, + Executor executor); + + @Override + ConnectionFuture runAfterBoth(CompletionStage other, Runnable action); + + @Override + ConnectionFuture runAfterBothAsync(CompletionStage other, Runnable action); + + @Override + ConnectionFuture runAfterBothAsync(CompletionStage other, Runnable action, Executor executor); + + @Override + ConnectionFuture applyToEither(CompletionStage other, Function fn); + + @Override + ConnectionFuture applyToEitherAsync(CompletionStage other, Function fn); + + @Override + ConnectionFuture applyToEitherAsync(CompletionStage other, Function fn, Executor executor); + + @Override + ConnectionFuture acceptEither(CompletionStage other, Consumer action); + + @Override + ConnectionFuture 
acceptEitherAsync(CompletionStage other, Consumer action); + + @Override + ConnectionFuture acceptEitherAsync(CompletionStage other, Consumer action, Executor executor); + + @Override + ConnectionFuture runAfterEither(CompletionStage other, Runnable action); + + @Override + ConnectionFuture runAfterEitherAsync(CompletionStage other, Runnable action); + + @Override + ConnectionFuture runAfterEitherAsync(CompletionStage other, Runnable action, Executor executor); + + @Override + ConnectionFuture thenCompose(Function> fn); + + ConnectionFuture thenCompose(BiFunction> fn); + + @Override + ConnectionFuture thenComposeAsync(Function> fn); + + @Override + ConnectionFuture thenComposeAsync(Function> fn, Executor executor); + + @Override + ConnectionFuture exceptionally(Function fn); + + @Override + ConnectionFuture whenComplete(BiConsumer action); + + @Override + ConnectionFuture whenCompleteAsync(BiConsumer action); + + @Override + ConnectionFuture whenCompleteAsync(BiConsumer action, Executor executor); + + @Override + ConnectionFuture handle(BiFunction fn); + + @Override + ConnectionFuture handleAsync(BiFunction fn); + + @Override + ConnectionFuture handleAsync(BiFunction fn, Executor executor); +} diff --git a/src/main/java/io/lettuce/core/ConnectionId.java b/src/main/java/io/lettuce/core/ConnectionId.java new file mode 100644 index 0000000000..5107217708 --- /dev/null +++ b/src/main/java/io/lettuce/core/ConnectionId.java @@ -0,0 +1,41 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.net.SocketAddress; + +/** + * Connection identifier. A connection identifier consists of the {@link #localAddress()} and the {@link #remoteAddress()}. + * + * @author Mark Paluch + * @since 3.4 + */ +public interface ConnectionId { + + /** + * Returns the local address. + * + * @return the local address + */ + SocketAddress localAddress(); + + /** + * Returns the remote address. + * + * @return the remote address + */ + SocketAddress remoteAddress(); +} diff --git a/src/main/java/io/lettuce/core/ConnectionPoint.java b/src/main/java/io/lettuce/core/ConnectionPoint.java new file mode 100644 index 0000000000..65acdb58a6 --- /dev/null +++ b/src/main/java/io/lettuce/core/ConnectionPoint.java @@ -0,0 +1,45 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core; + +/** + * Interface for a connection point described with a host and port or socket. + * + * @author Mark Paluch + */ +public interface ConnectionPoint { + + /** + * Returns the host that should represent the hostname or IPv4/IPv6 literal. + * + * @return the hostname/IP address + */ + String getHost(); + + /** + * Get the current port number. + * + * @return the port number + */ + int getPort(); + + /** + * Get the socket path. + * + * @return path to a Unix Domain Socket + */ + String getSocket(); +} diff --git a/src/main/java/io/lettuce/core/ConnectionState.java b/src/main/java/io/lettuce/core/ConnectionState.java new file mode 100644 index 0000000000..90df2aa46c --- /dev/null +++ b/src/main/java/io/lettuce/core/ConnectionState.java @@ -0,0 +1,208 @@ +/* + * Copyright 2019-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.util.List; + +import io.lettuce.core.protocol.ProtocolVersion; + +/** + * Internal connection state representing the negotiated {@link ProtocolVersion} and other options for connection initialization + * and connection state restoration. This class is part of the internal API. + * + * @author Mark Paluch + * @since 6.0 + */ +public class ConnectionState { + + private volatile HandshakeResponse handshakeResponse; + + private volatile String username; + private volatile char[] password; + private volatile int db; + private volatile boolean readOnly; + private volatile String clientName; + + /** + * Applies settings from {@link RedisURI}. + * + * @param redisURI the URI to apply the client name and authentication. + */ + public void apply(RedisURI redisURI) { + + setClientName(redisURI.getClientName()); + setUsername(redisURI.getUsername()); + setPassword(redisURI.getPassword()); + } + + /** + * Returns the negotiated {@link ProtocolVersion}. + * + * @return the negotiated {@link ProtocolVersion} once the connection is established. + */ + public ProtocolVersion getNegotiatedProtocolVersion() { + return handshakeResponse != null ? handshakeResponse.getNegotiatedProtocolVersion() : null; + } + + /** + * Returns the client connection id. Only available when using {@link ProtocolVersion#RESP3}. + * + * @return the client connection id. Can be {@literal null} if Redis uses RESP2. + */ + public Long getConnectionId() { + return handshakeResponse != null ? handshakeResponse.getConnectionId() : null; + } + + /** + * Returns the Redis server version. Only available when using {@link ProtocolVersion#RESP3}. + * + * @return the Redis server version. + */ + public String getRedisVersion() { + return handshakeResponse != null ? handshakeResponse.getRedisVersion() : null; + } + + /** + * Returns the Redis server mode. Only available when using {@link ProtocolVersion#RESP3}. + * + * @return the Redis server mode. + */ + public String getMode() { + return handshakeResponse != null ? handshakeResponse.getMode() : null; + } + + /** + * Returns the Redis server role. 
Only available when using {@link ProtocolVersion#RESP3}. + * + * @return the Redis server role. + */ + public String getRole() { + return handshakeResponse != null ? handshakeResponse.getRole() : null; + } + + void setHandshakeResponse(HandshakeResponse handshakeResponse) { + this.handshakeResponse = handshakeResponse; + } + + /** + * Sets username/password state based on the argument count from an {@code AUTH} command. + * + * @param args + */ + protected void setUserNamePassword(List args) { + + if (args.isEmpty()) { + return; + } + + if (args.size() > 1) { + setUsername(new String(args.get(0))); + setPassword(args.get(1)); + } else { + setUsername(null); + setPassword(args.get(0)); + } + } + + protected void setUsername(String username) { + this.username = username; + } + + String getUsername() { + return username; + } + + protected void setPassword(char[] password) { + this.password = password; + } + + char[] getPassword() { + return password; + } + + boolean hasPassword() { + return this.password != null && this.password.length > 0; + } + + boolean hasUsername() { + return this.username != null && !this.username.isEmpty(); + } + + protected void setDb(int db) { + this.db = db; + } + + int getDb() { + return db; + } + + protected void setReadOnly(boolean readOnly) { + this.readOnly = readOnly; + } + + boolean isReadOnly() { + return readOnly; + } + + protected void setClientName(String clientName) { + this.clientName = clientName; + } + + String getClientName() { + return clientName; + } + + /** + * HELLO Handshake response. + */ + static class HandshakeResponse { + + private final ProtocolVersion negotiatedProtocolVersion; + private final Long connectionId; + private final String redisVersion; + private final String mode; + private final String role; + + public HandshakeResponse(ProtocolVersion negotiatedProtocolVersion, Long connectionId, String redisVersion, String mode, + String role) { + this.negotiatedProtocolVersion = negotiatedProtocolVersion; + this.connectionId = connectionId; + this.redisVersion = redisVersion; + this.role = role; + this.mode = mode; + } + + public ProtocolVersion getNegotiatedProtocolVersion() { + return negotiatedProtocolVersion; + } + + public Long getConnectionId() { + return connectionId; + } + + public String getRedisVersion() { + return redisVersion; + } + + public String getMode() { + return mode; + } + + public String getRole() { + return role; + } + } +} diff --git a/src/main/java/io/lettuce/core/Consumer.java b/src/main/java/io/lettuce/core/Consumer.java new file mode 100644 index 0000000000..1e8d57234d --- /dev/null +++ b/src/main/java/io/lettuce/core/Consumer.java @@ -0,0 +1,88 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.util.Objects; + +import io.lettuce.core.internal.LettuceAssert; + +/** + * Value object representing a Stream consumer within a consumer group. Group name and consumer name are encoded as keys. 
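As a sketch of how the Consumer value object introduced here is typically combined with the Stream commands (the commands variable, group, and key names are assumptions):

    Consumer<String> consumer = Consumer.from("payment-group", "consumer-1");

    // Read messages on behalf of this consumer, then acknowledge them.
    List<StreamMessage<String, String>> messages = commands.xreadgroup(
            consumer, XReadArgs.StreamOffset.lastConsumed("payments"));

    messages.forEach(message -> commands.xack("payments", "payment-group", message.getId()));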
+ * + * @author Mark Paluch + * @since 5.1 + * @see io.lettuce.core.codec.RedisCodec + */ +public class Consumer { + + final K group; + final K name; + + private Consumer(K group, K name) { + + this.group = group; + this.name = name; + } + + /** + * Create a new consumer. + * + * @param group name of the consumer group, must not be {@literal null} or empty. + * @param name name of the consumer, must not be {@literal null} or empty. + * @return the consumer {@link Consumer} object. + */ + public static Consumer from(K group, K name) { + + LettuceAssert.notNull(group, "Group must not be null"); + LettuceAssert.notNull(name, "Name must not be null"); + + return new Consumer<>(group, name); + } + + /** + * @return name of the group. + */ + public K getGroup() { + return group; + } + + /** + * @return name of the consumer. + */ + public K getName() { + return name; + } + + @Override + public boolean equals(Object o) { + if (this == o) + return true; + if (!(o instanceof Consumer)) + return false; + Consumer consumer = (Consumer) o; + return Objects.equals(group, consumer.group) && Objects.equals(name, consumer.name); + } + + @Override + public int hashCode() { + return Objects.hash(group, name); + } + + @Override + public String toString() { + return String.format("%s:%s", group, name); + } +} diff --git a/src/main/java/io/lettuce/core/DefaultConnectionFuture.java b/src/main/java/io/lettuce/core/DefaultConnectionFuture.java new file mode 100644 index 0000000000..39c8c16c99 --- /dev/null +++ b/src/main/java/io/lettuce/core/DefaultConnectionFuture.java @@ -0,0 +1,354 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.net.SocketAddress; +import java.util.concurrent.*; +import java.util.function.BiConsumer; +import java.util.function.BiFunction; +import java.util.function.Consumer; +import java.util.function.Function; + +/** + * Default {@link CompletableFuture} implementation. Delegates calls to the decorated {@link CompletableFuture} and provides a + * {@link SocketAddress}. 
+ * + * @since 4.4 + */ +class DefaultConnectionFuture extends CompletableFuture implements ConnectionFuture { + + private final CompletableFuture remoteAddress; + private final CompletableFuture delegate; + + public DefaultConnectionFuture(SocketAddress remoteAddress, CompletableFuture delegate) { + + this.remoteAddress = CompletableFuture.completedFuture(remoteAddress); + this.delegate = delegate; + } + + public DefaultConnectionFuture(CompletableFuture remoteAddress, CompletableFuture delegate) { + + this.remoteAddress = remoteAddress; + this.delegate = delegate; + } + + public SocketAddress getRemoteAddress() { + + if (remoteAddress.isDone() && !remoteAddress.isCompletedExceptionally()) { + return remoteAddress.join(); + } + + return null; + } + + private DefaultConnectionFuture adopt(CompletableFuture newFuture) { + return new DefaultConnectionFuture<>(remoteAddress, newFuture); + } + + @Override + public boolean isDone() { + return delegate.isDone(); + } + + @Override + public T get() throws InterruptedException, ExecutionException { + return delegate.get(); + } + + @Override + public T get(long timeout, TimeUnit unit) throws InterruptedException, ExecutionException, TimeoutException { + return delegate.get(timeout, unit); + } + + @Override + public T join() { + return delegate.join(); + } + + @Override + public T getNow(T valueIfAbsent) { + return delegate.getNow(valueIfAbsent); + } + + @Override + public boolean complete(T value) { + return delegate.complete(value); + } + + @Override + public boolean completeExceptionally(Throwable ex) { + return delegate.completeExceptionally(ex); + } + + @Override + public DefaultConnectionFuture thenApply(Function fn) { + return adopt(delegate.thenApply(fn)); + } + + @Override + public DefaultConnectionFuture thenApplyAsync(Function fn) { + return adopt(delegate.thenApplyAsync(fn)); + } + + @Override + public DefaultConnectionFuture thenApplyAsync(Function fn, Executor executor) { + return adopt(delegate.thenApplyAsync(fn, executor)); + } + + @Override + public DefaultConnectionFuture thenAccept(Consumer action) { + return adopt(delegate.thenAccept(action)); + } + + @Override + public DefaultConnectionFuture thenAcceptAsync(Consumer action) { + return adopt(delegate.thenAcceptAsync(action)); + } + + @Override + public DefaultConnectionFuture thenAcceptAsync(Consumer action, Executor executor) { + return adopt(delegate.thenAcceptAsync(action, executor)); + } + + @Override + public DefaultConnectionFuture thenRun(Runnable action) { + return adopt(delegate.thenRun(action)); + } + + @Override + public DefaultConnectionFuture thenRunAsync(Runnable action) { + return adopt(delegate.thenRunAsync(action)); + } + + @Override + public DefaultConnectionFuture thenRunAsync(Runnable action, Executor executor) { + return adopt(delegate.thenRunAsync(action, executor)); + } + + @Override + public DefaultConnectionFuture thenCombine(CompletionStage other, + BiFunction fn) { + return adopt(delegate.thenCombine(other, fn)); + } + + @Override + public DefaultConnectionFuture thenCombineAsync(CompletionStage other, + BiFunction fn) { + return adopt(delegate.thenCombineAsync(other, fn)); + } + + @Override + public DefaultConnectionFuture thenCombineAsync(CompletionStage other, + BiFunction fn, Executor executor) { + return adopt(delegate.thenCombineAsync(other, fn, executor)); + } + + @Override + public DefaultConnectionFuture thenAcceptBoth(CompletionStage other, + BiConsumer action) { + return adopt(delegate.thenAcceptBoth(other, action)); + } + + @Override + 
public DefaultConnectionFuture thenAcceptBothAsync(CompletionStage other, + BiConsumer action) { + return adopt(delegate.thenAcceptBothAsync(other, action)); + } + + @Override + public DefaultConnectionFuture thenAcceptBothAsync(CompletionStage other, + BiConsumer action, Executor executor) { + return adopt(delegate.thenAcceptBothAsync(other, action, executor)); + } + + @Override + public DefaultConnectionFuture runAfterBoth(CompletionStage other, Runnable action) { + return adopt(delegate.runAfterBoth(other, action)); + } + + @Override + public DefaultConnectionFuture runAfterBothAsync(CompletionStage other, Runnable action) { + return adopt(delegate.runAfterBothAsync(other, action)); + } + + @Override + public DefaultConnectionFuture runAfterBothAsync(CompletionStage other, Runnable action, Executor executor) { + return adopt(delegate.runAfterBothAsync(other, action, executor)); + } + + @Override + public DefaultConnectionFuture applyToEither(CompletionStage other, Function fn) { + return adopt(delegate.applyToEither(other, fn)); + } + + @Override + public DefaultConnectionFuture applyToEitherAsync(CompletionStage other, Function fn) { + return adopt(delegate.applyToEitherAsync(other, fn)); + } + + @Override + public DefaultConnectionFuture applyToEitherAsync(CompletionStage other, Function fn, + Executor executor) { + return adopt(delegate.applyToEitherAsync(other, fn, executor)); + } + + @Override + public DefaultConnectionFuture acceptEither(CompletionStage other, Consumer action) { + return adopt(delegate.acceptEither(other, action)); + } + + @Override + public DefaultConnectionFuture acceptEitherAsync(CompletionStage other, Consumer action) { + return adopt(delegate.acceptEitherAsync(other, action)); + } + + @Override + public DefaultConnectionFuture acceptEitherAsync(CompletionStage other, Consumer action, + Executor executor) { + return adopt(delegate.acceptEitherAsync(other, action, executor)); + } + + @Override + public DefaultConnectionFuture runAfterEither(CompletionStage other, Runnable action) { + return adopt(delegate.runAfterEither(other, action)); + } + + @Override + public DefaultConnectionFuture runAfterEitherAsync(CompletionStage other, Runnable action) { + return adopt(delegate.runAfterEitherAsync(other, action)); + } + + @Override + public DefaultConnectionFuture runAfterEitherAsync(CompletionStage other, Runnable action, Executor executor) { + return adopt(delegate.runAfterEitherAsync(other, action, executor)); + } + + @Override + public DefaultConnectionFuture thenCompose(Function> fn) { + return adopt(delegate.thenCompose(fn)); + } + + @Override + public ConnectionFuture thenCompose(BiFunction> fn) { + + CompletableFuture future = new CompletableFuture<>(); + + delegate.whenComplete((v, e) -> { + + try { + CompletionStage apply = fn.apply(v, e); + apply.whenComplete((u, t) -> { + + if (t != null) { + future.completeExceptionally(t); + } else { + future.complete(u); + } + }); + } catch (Exception ex) { + ExecutionException result = new ExecutionException("Exception while applying thenCompose", ex); + + if (e != null) { + result.addSuppressed(e); + } + future.completeExceptionally(result); + } + }); + + return adopt(future); + } + + @Override + public DefaultConnectionFuture thenComposeAsync(Function> fn) { + return adopt(delegate.thenComposeAsync(fn)); + } + + @Override + public DefaultConnectionFuture thenComposeAsync(Function> fn, + Executor executor) { + return adopt(delegate.thenComposeAsync(fn, executor)); + } + + @Override + public DefaultConnectionFuture 
whenComplete(BiConsumer action) { + return adopt(delegate.whenComplete(action)); + } + + @Override + public DefaultConnectionFuture whenCompleteAsync(BiConsumer action) { + return adopt(delegate.whenCompleteAsync(action)); + } + + @Override + public DefaultConnectionFuture whenCompleteAsync(BiConsumer action, Executor executor) { + return adopt(delegate.whenCompleteAsync(action, executor)); + } + + @Override + public DefaultConnectionFuture handle(BiFunction fn) { + return adopt(delegate.handle(fn)); + } + + @Override + public DefaultConnectionFuture handleAsync(BiFunction fn) { + return adopt(delegate.handleAsync(fn)); + } + + @Override + public DefaultConnectionFuture handleAsync(BiFunction fn, Executor executor) { + return adopt(delegate.handleAsync(fn, executor)); + } + + @Override + public CompletableFuture toCompletableFuture() { + return delegate.toCompletableFuture(); + } + + @Override + public DefaultConnectionFuture exceptionally(Function fn) { + return adopt(delegate.exceptionally(fn)); + } + + @Override + public boolean cancel(boolean mayInterruptIfRunning) { + return delegate.cancel(mayInterruptIfRunning); + } + + @Override + public boolean isCancelled() { + return delegate.isCancelled(); + } + + @Override + public boolean isCompletedExceptionally() { + return delegate.isCompletedExceptionally(); + } + + @Override + public void obtrudeValue(T value) { + delegate.obtrudeValue(value); + } + + @Override + public void obtrudeException(Throwable ex) { + delegate.obtrudeException(ex); + } + + @Override + public int getNumberOfDependents() { + return delegate.getNumberOfDependents(); + } +} diff --git a/src/main/java/io/lettuce/core/EpollProvider.java b/src/main/java/io/lettuce/core/EpollProvider.java new file mode 100644 index 0000000000..b5b7193dd0 --- /dev/null +++ b/src/main/java/io/lettuce/core/EpollProvider.java @@ -0,0 +1,35 @@ +/* + * Copyright 2019-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +/** + * Wraps and provides Epoll classes. This is to protect the user from {@link ClassNotFoundException}'s caused by the absence of + * the {@literal netty-transport-native-epoll} library during runtime. Internal API. + * + * @author Mark Paluch + * @since 4.4 + * @deprecated Use {@link io.lettuce.core.resource.EpollProvider} instead. + */ +@Deprecated +public class EpollProvider { + + /** + * @return {@literal true} if epoll is available. + */ + public static boolean isAvailable() { + return io.lettuce.core.resource.EpollProvider.isAvailable(); + } +} diff --git a/src/main/java/io/lettuce/core/FutureSyncInvocationHandler.java b/src/main/java/io/lettuce/core/FutureSyncInvocationHandler.java new file mode 100644 index 0000000000..783cc99437 --- /dev/null +++ b/src/main/java/io/lettuce/core/FutureSyncInvocationHandler.java @@ -0,0 +1,93 @@ +/* + * Copyright 2011-2020 the original author or authors. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.lang.reflect.InvocationTargetException; +import java.lang.reflect.Method; +import java.util.concurrent.TimeUnit; + +import io.lettuce.core.api.StatefulConnection; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.internal.AbstractInvocationHandler; +import io.lettuce.core.internal.TimeoutProvider; +import io.lettuce.core.protocol.RedisCommand; + +/** + * Invocation-handler to synchronize API calls which use Futures as backend. This class leverages the need to implement a full + * sync class which just delegates every request. + * + * @author Mark Paluch + * @since 3.0 + */ +class FutureSyncInvocationHandler extends AbstractInvocationHandler { + + private final StatefulConnection connection; + private final TimeoutProvider timeoutProvider; + private final Object asyncApi; + private final MethodTranslator translator; + + FutureSyncInvocationHandler(StatefulConnection connection, Object asyncApi, Class[] interfaces) { + this.connection = connection; + this.timeoutProvider = new TimeoutProvider(() -> connection.getOptions().getTimeoutOptions(), + () -> connection.getTimeout().toNanos()); + this.asyncApi = asyncApi; + this.translator = MethodTranslator.of(asyncApi.getClass(), interfaces); + } + + @Override + protected Object handleInvocation(Object proxy, Method method, Object[] args) throws Throwable { + + try { + + Method targetMethod = this.translator.get(method); + Object result = targetMethod.invoke(asyncApi, args); + + if (result instanceof RedisFuture) { + + RedisFuture command = (RedisFuture) result; + + if (isNonTxControlMethod(method.getName()) && isTransactionActive(connection)) { + return null; + } + + long timeout = getTimeoutNs(command); + + return LettuceFutures.awaitOrCancel(command, timeout, TimeUnit.NANOSECONDS); + } + + return result; + } catch (InvocationTargetException e) { + throw e.getTargetException(); + } + } + + private long getTimeoutNs(RedisFuture command) { + + if (command instanceof RedisCommand) { + return timeoutProvider.getTimeoutNs((RedisCommand) command); + } + + return connection.getTimeout().toNanos(); + } + + private static boolean isTransactionActive(StatefulConnection connection) { + return connection instanceof StatefulRedisConnection && ((StatefulRedisConnection) connection).isMulti(); + } + + private static boolean isNonTxControlMethod(String methodName) { + return !methodName.equals("exec") && !methodName.equals("multi") && !methodName.equals("discard"); + } +} diff --git a/src/main/java/io/lettuce/core/GeoArgs.java b/src/main/java/io/lettuce/core/GeoArgs.java new file mode 100644 index 0000000000..138b9e09ee --- /dev/null +++ b/src/main/java/io/lettuce/core/GeoArgs.java @@ -0,0 +1,275 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.protocol.CommandArgs; +import io.lettuce.core.protocol.CommandKeyword; + +/** + * + * Argument list builder for the Redis GEORADIUS and + * GEORADIUSBYMEMBER commands. + *
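+ * <p>
+ * For example, {@code GeoArgs.Builder.distance().withCoordinates().asc()} requests distance and coordinates for each
+ * result, sorted ascending (usage sketch based on the builder methods defined below).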
+ * <p>
+ * {@link GeoArgs} is a mutable object and instances should be used only once to avoid shared mutable state. + * + * @author Mark Paluch + */ +public class GeoArgs implements CompositeArgument { + + private boolean withdistance; + private boolean withcoordinates; + private boolean withhash; + private Long count; + private Sort sort = Sort.none; + + /** + * Builder entry points for {@link GeoArgs}. + */ + public static class Builder { + + /** + * Utility constructor. + */ + private Builder() { + } + + /** + * Creates new {@link GeoArgs} with {@literal WITHDIST} enabled. + * + * @return new {@link GeoArgs} with {@literal WITHDIST} enabled. + * @see GeoArgs#withDistance() + */ + public static GeoArgs distance() { + return new GeoArgs().withDistance(); + } + + /** + * Creates new {@link GeoArgs} with {@literal WITHCOORD} enabled. + * + * @return new {@link GeoArgs} with {@literal WITHCOORD} enabled. + * @see GeoArgs#withCoordinates() + */ + public static GeoArgs coordinates() { + return new GeoArgs().withCoordinates(); + } + + /** + * Creates new {@link GeoArgs} with {@literal WITHHASH} enabled. + * + * @return new {@link GeoArgs} with {@literal WITHHASH} enabled. + * @see GeoArgs#withHash() + */ + public static GeoArgs hash() { + return new GeoArgs().withHash(); + } + + /** + * Creates new {@link GeoArgs} with distance, coordinates and hash enabled. + * + * @return new {@link GeoArgs} with {@literal WITHDIST}, {@literal WITHCOORD}, {@literal WITHHASH} enabled. + * @see GeoArgs#withDistance() + * @see GeoArgs#withCoordinates() + * @see GeoArgs#withHash() + */ + public static GeoArgs full() { + return new GeoArgs().withDistance().withCoordinates().withHash(); + } + + /** + * Creates new {@link GeoArgs} with {@literal COUNT} set. + * + * @param count number greater 0. + * @return new {@link GeoArgs} with {@literal COUNT} set. + * @see GeoArgs#withCount(long) + */ + public static GeoArgs count(long count) { + return new GeoArgs().withCount(count); + } + } + + /** + * Request distance for results. + * + * @return {@code this} {@link GeoArgs}. + */ + public GeoArgs withDistance() { + + withdistance = true; + return this; + } + + /** + * Request coordinates for results. + * + * @return {@code this} {@link GeoArgs}. + */ + public GeoArgs withCoordinates() { + + withcoordinates = true; + return this; + } + + /** + * Request geohash for results. + * + * @return {@code this} {@link GeoArgs}. + */ + public GeoArgs withHash() { + withhash = true; + return this; + } + + /** + * Limit results to {@code count} entries. + * + * @param count number greater 0. + * @return {@code this} {@link GeoArgs}. + */ + public GeoArgs withCount(long count) { + + LettuceAssert.isTrue(count > 0, "Count must be greater 0"); + + this.count = count; + return this; + } + + /** + * + * @return {@literal true} if distance is requested. + */ + public boolean isWithDistance() { + return withdistance; + } + + /** + * + * @return {@literal true} if coordinates are requested. + */ + public boolean isWithCoordinates() { + return withcoordinates; + } + + /** + * + * @return {@literal true} if geohash is requested. + */ + public boolean isWithHash() { + return withhash; + } + + /** + * Sort results ascending. + * + * @return {@code this} + */ + public GeoArgs asc() { + return sort(Sort.asc); + } + + /** + * Sort results descending. + * + * @return {@code this} + */ + public GeoArgs desc() { + return sort(Sort.desc); + } + + /** + * Sort results. 
+ * + * @param sort sort order, must not be {@literal null} + * @return {@code this} + */ + public GeoArgs sort(Sort sort) { + + LettuceAssert.notNull(sort, "Sort must not be null"); + + this.sort = sort; + return this; + } + + /** + * Sort order. + */ + public enum Sort { + + /** + * ascending. + */ + asc, + + /** + * descending. + */ + desc, + + /** + * no sort order. + */ + none; + } + + /** + * Supported geo unit. + */ + public enum Unit { + + /** + * meter. + */ + m, + + /** + * kilometer. + */ + km, + + /** + * feet. + */ + ft, + + /** + * mile. + */ + mi; + } + + public void build(CommandArgs args) { + + if (withdistance) { + args.add("WITHDIST"); + } + + if (withhash) { + args.add("WITHHASH"); + } + + if (withcoordinates) { + args.add("WITHCOORD"); + } + + if (sort != null && sort != Sort.none) { + args.add(sort.name()); + } + + if (count != null) { + args.add(CommandKeyword.COUNT).add(count); + } + } +} diff --git a/src/main/java/io/lettuce/core/GeoCoordinates.java b/src/main/java/io/lettuce/core/GeoCoordinates.java new file mode 100644 index 0000000000..134dd4f08f --- /dev/null +++ b/src/main/java/io/lettuce/core/GeoCoordinates.java @@ -0,0 +1,97 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import io.lettuce.core.internal.LettuceAssert; + +/** + * A tuple consisting of numerical geo data points to describe geo coordinates. + * + * @author Mark Paluch + */ +public class GeoCoordinates { + + private final Number x; + private final Number y; + + /** + * Creates new {@link GeoCoordinates}. + * + * @param x the longitude, must not be {@literal null}. + * @param y the latitude, must not be {@literal null}. + */ + public GeoCoordinates(Number x, Number y) { + + LettuceAssert.notNull(x, "X must not be null"); + LettuceAssert.notNull(y, "Y must not be null"); + + this.x = x; + this.y = y; + } + + /** + * Creates new {@link GeoCoordinates}. + * + * @param x the longitude, must not be {@literal null}. + * @param y the latitude, must not be {@literal null}. + * @return {@link GeoCoordinates}. + */ + public static GeoCoordinates create(Number x, Number y) { + return new GeoCoordinates(x, y); + } + + /** + * + * @return the longitude. + */ + public Number getX() { + return x; + } + + /** + * + * @return the latitude. + */ + public Number getY() { + return y; + } + + @Override + public boolean equals(Object o) { + if (this == o) + return true; + if (!(o instanceof GeoCoordinates)) + return false; + + GeoCoordinates geoCoords = (GeoCoordinates) o; + + if (x != null ? !x.equals(geoCoords.x) : geoCoords.x != null) + return false; + return !(y != null ? !y.equals(geoCoords.y) : geoCoords.y != null); + } + + @Override + public int hashCode() { + int result = x != null ? x.hashCode() : 0; + result = 31 * result + (y != null ? 
y.hashCode() : 0); + return result; + } + + @Override + public String toString() { + return String.format("(%s, %s)", getX(), getY()); + } +} diff --git a/src/main/java/io/lettuce/core/GeoRadiusStoreArgs.java b/src/main/java/io/lettuce/core/GeoRadiusStoreArgs.java new file mode 100644 index 0000000000..3a6769b2e2 --- /dev/null +++ b/src/main/java/io/lettuce/core/GeoRadiusStoreArgs.java @@ -0,0 +1,191 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import io.lettuce.core.GeoArgs.Sort; +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.protocol.CommandArgs; +import io.lettuce.core.protocol.CommandKeyword; + +/** + * Argument list builder for the Redis GEORADIUS command to store + * {@literal GEORADIUS} results or {@literal GEORADIUS} distances in a sorted set. + *
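+ * <p>
+ * For example, {@code GeoRadiusStoreArgs.Builder.store("destination").withCount(10).desc()} limits the result to ten
+ * members, sorts them descending and stores them in the sorted set {@code destination} (usage sketch; the key name is
+ * illustrative).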
+ * <p>
+ * {@link GeoRadiusStoreArgs} is a mutable object and instances should be used only once to avoid shared mutable state. + * + * @author Mark Paluch + */ +public class GeoRadiusStoreArgs implements CompositeArgument { + + private K storeKey; + private K storeDistKey; + private Long count; + private Sort sort = Sort.none; + + /** + * Builder entry points for {@link GeoRadiusStoreArgs}. + */ + public static class Builder { + + /** + * Utility constructor. + */ + private Builder() { + } + + /** + * Creates new {@link GeoRadiusStoreArgs} with {@literal STORE} enabled. + * + * @param key must not be {@literal null}. + * @return new {@link GeoRadiusStoreArgs} with {@literal STORE} enabled. + * @see GeoRadiusStoreArgs#withStore(Object) + */ + public static GeoRadiusStoreArgs store(K key) { + return new GeoRadiusStoreArgs<>().withStore(key); + } + + /** + * Creates new {@link GeoRadiusStoreArgs} with {@literal STOREDIST} enabled. + * + * @param key must not be {@literal null}. + * @return new {@link GeoRadiusStoreArgs} with {@literal STOREDIST} enabled. + * @see GeoRadiusStoreArgs#withStoreDist(Object) + */ + public static GeoRadiusStoreArgs withStoreDist(K key) { + return new GeoRadiusStoreArgs<>().withStoreDist(key); + } + + /** + * Creates new {@link GeoRadiusStoreArgs} with {@literal COUNT} set. + * + * @param count number greater 0. + * @return new {@link GeoRadiusStoreArgs} with {@literal COUNT} set. + * @see GeoRadiusStoreArgs#withStoreDist(Object) + */ + public static GeoRadiusStoreArgs count(long count) { + return new GeoRadiusStoreArgs<>().withCount(count); + } + } + + /** + * Store the resulting members with their location in the new Geo set {@code storeKey}. Cannot be used together with + * {@link #withStoreDist(Object)}. + * + * @param storeKey the destination key. + * @return {@code this} {@link GeoRadiusStoreArgs}. + */ + public GeoRadiusStoreArgs withStore(K storeKey) { + + LettuceAssert.notNull(storeKey, "StoreKey must not be null"); + + this.storeKey = storeKey; + return this; + } + + /** + * Store the resulting members with their distance in the sorted set {@code storeKey}. Cannot be used together with + * {@link #withStore(Object)}. + * + * @param storeKey the destination key. + * @return {@code this} {@link GeoRadiusStoreArgs}. + */ + public GeoRadiusStoreArgs withStoreDist(K storeKey) { + + LettuceAssert.notNull(storeKey, "StoreKey must not be null"); + + this.storeDistKey = storeKey; + return this; + } + + /** + * Limit results to {@code count} entries. + * + * @param count number greater 0. + * @return {@code this} {@link GeoRadiusStoreArgs}. + */ + public GeoRadiusStoreArgs withCount(long count) { + + LettuceAssert.isTrue(count > 0, "Count must be greater 0"); + + this.count = count; + return this; + } + + /** + * Sort results ascending. + * + * @return {@code this} {@link GeoRadiusStoreArgs}. + */ + public GeoRadiusStoreArgs asc() { + return sort(Sort.asc); + } + + /** + * Sort results descending. + * + * @return {@code this} {@link GeoRadiusStoreArgs}. + */ + public GeoRadiusStoreArgs desc() { + return sort(Sort.desc); + } + + /** + * @return the key for storing results + */ + public K getStoreKey() { + return storeKey; + } + + /** + * @return the key for storing distance results + */ + public K getStoreDistKey() { + return storeDistKey; + } + + /** + * Sort results. + * + * @param sort sort order, must not be {@literal null} + * @return {@code this} {@link GeoRadiusStoreArgs}. 
+ */ + public GeoRadiusStoreArgs sort(Sort sort) { + LettuceAssert.notNull(sort, "Sort must not be null"); + + this.sort = sort; + return this; + } + + @SuppressWarnings("unchecked") + public void build(CommandArgs args) { + + if (sort != null && sort != Sort.none) { + args.add(sort.name()); + } + + if (count != null) { + args.add(CommandKeyword.COUNT).add(count); + } + + if (storeKey != null) { + args.add("STORE").addKey((K) storeKey); + } + + if (storeDistKey != null) { + args.add("STOREDIST").addKey((K) storeDistKey); + } + } +} diff --git a/src/main/java/io/lettuce/core/GeoWithin.java b/src/main/java/io/lettuce/core/GeoWithin.java new file mode 100644 index 0000000000..2bb3ab69b2 --- /dev/null +++ b/src/main/java/io/lettuce/core/GeoWithin.java @@ -0,0 +1,123 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +/** + * Geo element within a certain radius. Contains: + *
+ * <ul>
+ * <li>the member</li>
+ * <li>the distance from the reference point (if requested)</li>
+ * <li>the geohash (if requested)</li>
+ * <li>the coordinates (if requested)</li>
+ * </ul>
+ * + * @param Value type. + * @author Mark Paluch + */ +public class GeoWithin { + + private final V member; + private final Double distance; + private final Long geohash; + private final GeoCoordinates coordinates; + + /** + * Creates a new {@link GeoWithin}. + * + * @param member the member. + * @param distance the distance, may be {@literal null}. + * @param geohash the geohash, may be {@literal null}. + * @param coordinates the coordinates, may be {@literal null}. + */ + public GeoWithin(V member, Double distance, Long geohash, GeoCoordinates coordinates) { + + this.member = member; + this.distance = distance; + this.geohash = geohash; + this.coordinates = coordinates; + } + + /** + * + * @return the member within the Geo set. + */ + public V getMember() { + return member; + } + + /** + * + * @return distance if requested otherwise {@literal null}. + */ + public Double getDistance() { + return distance; + } + + /** + * + * @return geohash if requested otherwise {@literal null}. + */ + public Long getGeohash() { + return geohash; + } + + /** + * + * @return coordinates if requested otherwise {@literal null}. + */ + public GeoCoordinates getCoordinates() { + return coordinates; + } + + @Override + public boolean equals(Object o) { + if (this == o) + return true; + if (!(o instanceof GeoWithin)) + return false; + + GeoWithin geoWithin = (GeoWithin) o; + + if (member != null ? !member.equals(geoWithin.member) : geoWithin.member != null) + return false; + if (distance != null ? !distance.equals(geoWithin.distance) : geoWithin.distance != null) + return false; + if (geohash != null ? !geohash.equals(geoWithin.geohash) : geoWithin.geohash != null) + return false; + return !(coordinates != null ? !coordinates.equals(geoWithin.coordinates) : geoWithin.coordinates != null); + } + + @Override + public int hashCode() { + int result = member != null ? member.hashCode() : 0; + result = 31 * result + (distance != null ? distance.hashCode() : 0); + result = 31 * result + (geohash != null ? geohash.hashCode() : 0); + result = 31 * result + (coordinates != null ? coordinates.hashCode() : 0); + return result; + } + + @Override + public String toString() { + StringBuilder sb = new StringBuilder(); + sb.append(getClass().getSimpleName()); + sb.append(" [member=").append(member); + sb.append(", distance=").append(distance); + sb.append(", geohash=").append(geohash); + sb.append(", coordinates=").append(coordinates); + sb.append(']'); + return sb.toString(); + } +} diff --git a/src/main/java/io/lettuce/core/JavaRuntime.java b/src/main/java/io/lettuce/core/JavaRuntime.java new file mode 100644 index 0000000000..a17a4edb01 --- /dev/null +++ b/src/main/java/io/lettuce/core/JavaRuntime.java @@ -0,0 +1,32 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import static io.lettuce.core.internal.LettuceClassUtils.isPresent; + +/** + * Utility to determine which Java runtime is used. 
+ * + * @author Mark Paluch + */ +public class JavaRuntime { + + /** + * Constant whether the current JDK is Java 8 or higher. + */ + public static final boolean AT_LEAST_JDK_8 = isPresent("java.lang.FunctionalInterface"); + +} diff --git a/src/main/java/io/lettuce/core/KeyScanCursor.java b/src/main/java/io/lettuce/core/KeyScanCursor.java new file mode 100644 index 0000000000..f898dd602a --- /dev/null +++ b/src/main/java/io/lettuce/core/KeyScanCursor.java @@ -0,0 +1,35 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.util.ArrayList; +import java.util.List; + +/** + * Cursor providing a list of keys. + * + * @param Key type. + * @author Mark Paluch + * @since 3.0 + */ +public class KeyScanCursor extends ScanCursor { + + private final List keys = new ArrayList<>(); + + public List getKeys() { + return keys; + } +} diff --git a/src/main/java/io/lettuce/core/KeyValue.java b/src/main/java/io/lettuce/core/KeyValue.java new file mode 100644 index 0000000000..696bd852e9 --- /dev/null +++ b/src/main/java/io/lettuce/core/KeyValue.java @@ -0,0 +1,178 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.util.Optional; +import java.util.function.Function; + +import io.lettuce.core.internal.LettuceAssert; + +/** + * A key-value container extension to {@link Value}. A {@link KeyValue} requires always a non-null key on construction. + * + * @param Key type. + * @param Value type. + * @author Will Glozer + * @author Mark Paluch + */ +@SuppressWarnings("serial") +public class KeyValue extends Value { + + private final K key; + + /** + * Serializable constructor. + */ + protected KeyValue() { + super(null); + this.key = null; + } + + private KeyValue(K key, V value) { + + super(value); + + LettuceAssert.notNull(key, "Key must not be null"); + this.key = key; + } + + /** + * Creates a {@link KeyValue} from a {@code key} and an {@link Optional}. The resulting value contains the value from the + * {@link Optional} if a value is present. Value is empty if the {@link Optional} is empty. + * + * @param key the key, must not be {@literal null}. + * @param optional the optional. May be empty but never {@literal null}. 
+ * @param + * @param + * @param + * @return the {@link KeyValue} + */ + public static KeyValue from(K key, Optional optional) { + + LettuceAssert.notNull(optional, "Optional must not be null"); + + if (optional.isPresent()) { + return new KeyValue(key, optional.get()); + } + + return empty(key); + } + + /** + * Creates a {@link KeyValue} from a {@code key} and{@code value}. The resulting value contains the value if the + * {@code value} is not null. + * + * @param key the key, must not be {@literal null}. + * @param value the value. May be {@literal null}. + * @param + * @param + * @param + * @return the {@link KeyValue} + */ + public static KeyValue fromNullable(K key, T value) { + + if (value == null) { + return empty(key); + } + + return new KeyValue(key, value); + } + + /** + * Returns an empty {@code KeyValue} instance with the {@code key} set. No value is present for this instance. + * + * @param key the key, must not be {@literal null}. + * @param + * @param + * @return the {@link KeyValue} + */ + public static KeyValue empty(K key) { + return new KeyValue(key, null); + } + + /** + * Creates a {@link KeyValue} from a {@code key} and {@code value}. The resulting value contains the value. + * + * @param key the key. Must not be {@literal null}. + * @param value the value. Must not be {@literal null}. + * @param + * @param + * @param + * @return the {@link KeyValue} + */ + public static KeyValue just(K key, T value) { + + LettuceAssert.notNull(value, "Value must not be null"); + + return new KeyValue(key, value); + } + + @Override + public boolean equals(Object o) { + + if (this == o) + return true; + if (!(o instanceof KeyValue)) + return false; + + if (!super.equals(o)) + return false; + + KeyValue keyValue = (KeyValue) o; + + return key.equals(keyValue.key); + } + + @Override + public int hashCode() { + int result = key.hashCode(); + result = 31 * result + (hasValue() ? getValue().hashCode() : 0); + return result; + } + + @Override + public String toString() { + return hasValue() ? String.format("KeyValue[%s, %s]", key, getValue()) : String.format("KeyValue[%s].empty", key); + } + + /** + * + * @return the key + */ + public K getKey() { + return key; + } + + /** + * Returns a {@link KeyValue} consisting of the results of applying the given function to the value of this element. Mapping + * is performed only if a {@link #hasValue() value is present}. + * + * @param The element type of the new {@link KeyValue} + * @param mapper a stateless function to apply to each element + * @return the new {@link KeyValue} + */ + @SuppressWarnings("unchecked") + public KeyValue map(Function mapper) { + + LettuceAssert.notNull(mapper, "Mapper function must not be null"); + + if (hasValue()) { + return new KeyValue<>(getKey(), mapper.apply(getValue())); + } + + return (KeyValue) this; + } +} diff --git a/src/main/java/io/lettuce/core/KillArgs.java b/src/main/java/io/lettuce/core/KillArgs.java new file mode 100644 index 0000000000..52d71b60af --- /dev/null +++ b/src/main/java/io/lettuce/core/KillArgs.java @@ -0,0 +1,214 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import static io.lettuce.core.protocol.CommandKeyword.ADDR; +import static io.lettuce.core.protocol.CommandKeyword.ID; +import static io.lettuce.core.protocol.CommandKeyword.SKIPME; +import static io.lettuce.core.protocol.CommandType.TYPE; + +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.protocol.CommandArgs; + +/** + * + * Argument list builder for the Redis CLIENT KILL command. Static import the + * methods from {@link Builder} and chain the method calls: {@code id(1).skipme()}. + *
+ * <p>
+ * {@link KillArgs} is a mutable object and instances should be used only once to avoid shared mutable state. + * + * @author Mark Paluch + * @since 3.0 + */ +public class KillArgs implements CompositeArgument { + + private enum Type { + NORMAL, MASTER, SLAVE, PUBSUB + } + + private Boolean skipme; + private String addr; + private Long id; + private Type type; + + /** + * Builder entry points for {@link KillArgs}. + */ + public static class Builder { + + /** + * Utility constructor. + */ + private Builder() { + } + + /** + * Creates new {@link KillArgs} and enabling {@literal SKIPME YES}. + * + * @return new {@link KillArgs} with {@literal SKIPME YES} enabled. + * @see KillArgs#skipme() + */ + public static KillArgs skipme() { + return new KillArgs().skipme(); + } + + /** + * Creates new {@link KillArgs} setting {@literal ADDR}. + * + * @param addr must not be {@literal null}. + * @return new {@link KillArgs} with {@literal ADDR} set. + * @see KillArgs#addr(String) + */ + public static KillArgs addr(String addr) { + return new KillArgs().addr(addr); + } + + /** + * Creates new {@link KillArgs} setting {@literal ID}. + * + * @param id client id. + * @return new {@link KillArgs} with {@literal ID} set. + * @see KillArgs#id(long) + */ + public static KillArgs id(long id) { + return new KillArgs().id(id); + } + + /** + * Creates new {@link KillArgs} setting {@literal TYPE PUBSUB}. + * + * @return new {@link KillArgs} with {@literal TYPE PUBSUB} set. + * @see KillArgs#type(Type) + */ + public static KillArgs typePubsub() { + return new KillArgs().type(Type.PUBSUB); + } + + /** + * Creates new {@link KillArgs} setting {@literal TYPE NORMAL}. + * + * @return new {@link KillArgs} with {@literal TYPE NORMAL} set. + * @see KillArgs#type(Type) + */ + public static KillArgs typeNormal() { + return new KillArgs().type(Type.NORMAL); + } + + /** + * Creates new {@link KillArgs} setting {@literal TYPE MASTER}. + * + * @return new {@link KillArgs} with {@literal TYPE MASTER} set. + * @see KillArgs#type(Type) + * @since 5.0.4 + */ + public static KillArgs typeMaster() { + return new KillArgs().type(Type.MASTER); + } + + /** + * Creates new {@link KillArgs} setting {@literal TYPE SLAVE}. + * + * @return new {@link KillArgs} with {@literal TYPE SLAVE} set. + * @see KillArgs#type(Type) + */ + public static KillArgs typeSlave() { + return new KillArgs().type(Type.SLAVE); + } + } + + /** + * By default this option is enabled, that is, the client calling the command will not get killed, however setting this + * option to no will have the effect of also killing the client calling the command. + * + * @return {@code this} {@link MigrateArgs}. + */ + public KillArgs skipme() { + return this.skipme(true); + } + + /** + * By default this option is enabled, that is, the client calling the command will not get killed, however setting this + * option to no will have the effect of also killing the client calling the command. + * + * @param state + * @return {@code this} {@link KillArgs}. + */ + public KillArgs skipme(boolean state) { + + this.skipme = state; + return this; + } + + /** + * Kill the client at {@code addr}. + * + * @param addr must not be {@literal null}. + * @return {@code this} {@link KillArgs}. + */ + public KillArgs addr(String addr) { + + LettuceAssert.notNull(addr, "Client address must not be null"); + + this.addr = addr; + return this; + } + + /** + * Kill the client with its client {@code id}. + * + * @param id + * @return {@code this} {@link KillArgs}. 
+ */ + public KillArgs id(long id) { + + this.id = id; + return this; + } + + /** + * This closes the connections of all the clients in the specified {@link Type class}. Note that clients blocked into the + * {@literal MONITOR} command are considered to belong to the normal class. + * + * @param type must not be {@literal null}. + * @return {@code this} {@link KillArgs}. + */ + public KillArgs type(Type type) { + + LettuceAssert.notNull(type, "Type must not be null"); + + this.type = type; + return this; + } + + public void build(CommandArgs args) { + + if (skipme != null) { + args.add(SKIPME).add(skipme ? "YES" : "NO"); + } + + if (id != null) { + args.add(ID).add(id); + } + + if (addr != null) { + args.add(ADDR).add(addr); + } + + if (type != null) { + args.add(TYPE).add(type.name().toLowerCase()); + } + } +} diff --git a/src/main/java/io/lettuce/core/KqueueProvider.java b/src/main/java/io/lettuce/core/KqueueProvider.java new file mode 100644 index 0000000000..9d96fce58f --- /dev/null +++ b/src/main/java/io/lettuce/core/KqueueProvider.java @@ -0,0 +1,35 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +/** + * Wraps and provides kqueue classes. This is to protect the user from {@link ClassNotFoundException}'s caused by the absence of + * the {@literal netty-transport-native-kqueue} library during runtime. Internal API. + * + * @author Mark Paluch + * @since 4.4 + * @deprecated since 6.0, use {@link io.lettuce.core.resource.KqueueProvider} instead. + */ +@Deprecated +public class KqueueProvider { + + /** + * @return {@literal true} if kqueue is available. + */ + public static boolean isAvailable() { + return io.lettuce.core.resource.KqueueProvider.isAvailable(); + } +} diff --git a/src/main/java/io/lettuce/core/LettuceFutures.java b/src/main/java/io/lettuce/core/LettuceFutures.java new file mode 100644 index 0000000000..cdb47838f7 --- /dev/null +++ b/src/main/java/io/lettuce/core/LettuceFutures.java @@ -0,0 +1,76 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core; + +import java.time.Duration; +import java.util.concurrent.Future; +import java.util.concurrent.TimeUnit; + +import io.lettuce.core.internal.Futures; + +/** + * Utility to {@link #awaitAll(long, TimeUnit, Future[])} futures until they are done and to synchronize future execution using + * {@link #awaitOrCancel(RedisFuture, long, TimeUnit)}. + * + * @author Mark Paluch + * @since 3.0 + */ +public class LettuceFutures { + + private LettuceFutures() { + } + + /** + * Wait until futures are complete or the supplied timeout is reached. Commands are not canceled (in contrast to + * {@link #awaitOrCancel(RedisFuture, long, TimeUnit)}) when the timeout expires. + * + * @param timeout Maximum time to wait for futures to complete. + * @param futures Futures to wait for. + * @return {@literal true} if all futures complete in time, otherwise {@literal false} + * @since 5.0 + */ + public static boolean awaitAll(Duration timeout, Future... futures) { + return awaitAll(timeout.toNanos(), TimeUnit.NANOSECONDS, futures); + } + + /** + * Wait until futures are complete or the supplied timeout is reached. Commands are not canceled (in contrast to + * {@link #awaitOrCancel(RedisFuture, long, TimeUnit)}) when the timeout expires. + * + * @param timeout Maximum time to wait for futures to complete. + * @param unit Unit of time for the timeout. + * @param futures Futures to wait for. + * @return {@literal true} if all futures complete in time, otherwise {@literal false} + */ + public static boolean awaitAll(long timeout, TimeUnit unit, Future... futures) { + return Futures.awaitAll(timeout, unit, futures); + } + + /** + * Wait until futures are complete or the supplied timeout is reached. Commands are canceled if the timeout is reached but + * the command is not finished. + * + * @param cmd Command to wait for + * @param timeout Maximum time to wait for futures to complete + * @param unit Unit of time for the timeout + * @param Result type + * + * @return Result of the command. + */ + public static T awaitOrCancel(RedisFuture cmd, long timeout, TimeUnit unit) { + return Futures.awaitOrCancel(cmd, timeout, unit); + } +} diff --git a/src/main/java/io/lettuce/core/LettuceStrings.java b/src/main/java/io/lettuce/core/LettuceStrings.java new file mode 100644 index 0000000000..a0434660e1 --- /dev/null +++ b/src/main/java/io/lettuce/core/LettuceStrings.java @@ -0,0 +1,181 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.nio.ByteBuffer; +import java.nio.charset.Charset; +import java.nio.charset.StandardCharsets; +import java.security.MessageDigest; +import java.security.NoSuchAlgorithmException; +import java.util.Collection; +import java.util.Iterator; + +import io.lettuce.core.codec.Base16; + +/** + * Helper for {@link String} checks. This class is part of the internal API and may change without further notice. 
+ * + * @author Mark Paluch + * @since 3.0 + */ +public class LettuceStrings { + + /** + * Utility constructor. + */ + private LettuceStrings() { + + } + + /** + * Checks if a CharSequence is empty ("") or null. + * + * @param cs the char sequence + * @return true if empty + */ + public static boolean isEmpty(final CharSequence cs) { + return cs == null || cs.length() == 0; + } + + /** + * Checks if a CharSequence is not empty ("") and not null. + * + * @param cs the char sequence + * @return true if not empty + * + */ + public static boolean isNotEmpty(final CharSequence cs) { + return !isEmpty(cs); + } + + /** + * Convert {@code double} to {@link String}. If {@code n} is infinite, returns positive/negative infinity {@code +inf} and + * {@code -inf}. + * + * @param n the double. + * @return string representation of {@code n} + */ + public static String string(double n) { + if (Double.isInfinite(n)) { + return (n > 0) ? "+inf" : "-inf"; + } + return Double.toString(n); + } + + /** + * Convert {@link String} to {@code double}. If {@code s} is {@literal +inf}/{@literal -inf}, returns positive/negative + * infinity. + * + * @param s string representation of the number + * @return the {@code double} value. + * @since 4.3.3 + */ + public static double toDouble(String s) { + + if ("+inf".equals(s) || "inf".equals(s)) { + return Double.POSITIVE_INFINITY; + } + + if ("-inf".equals(s)) { + return Double.NEGATIVE_INFINITY; + } + + return Double.parseDouble(s); + } + + /** + * Create SHA1 digest from Lua script. + * + * @param script the script + * @return the Base16 encoded SHA1 value + */ + public static String digest(byte[] script) { + return digest(ByteBuffer.wrap(script)); + } + + /** + * Create SHA1 digest from Lua script. + * + * @param script the script + * @return the Base16 encoded SHA1 value + */ + public static String digest(ByteBuffer script) { + try { + MessageDigest md = MessageDigest.getInstance("SHA1"); + md.update(script); + return new String(Base16.encode(md.digest(), false)); + } catch (NoSuchAlgorithmException e) { + throw new RedisException("JVM does not support SHA1"); + } + } + + /** + * Convert a {@code String} array into a delimited {@code String} (e.g. CSV). + *
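+ * For example, {@code arrayToDelimitedString(new Object[] { "a", "b", "c" }, ",")} returns {@code "a,b,c"}.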
+ * <p>
+ * Useful for {@code toString()} implementations. + * + * @param arr the array to display + * @param delim the delimiter to use (typically a ",") + * @return the delimited {@code String} + */ + public static String arrayToDelimitedString(Object[] arr, String delim) { + + if ((arr == null || arr.length == 0)) { + return ""; + } + + if (arr.length == 1) { + return "" + arr[0]; + } + + StringBuilder sb = new StringBuilder(); + for (int i = 0; i < arr.length; i++) { + if (i > 0) { + sb.append(delim); + } + sb.append(arr[i]); + } + return sb.toString(); + } + + /** + * Convert a {@link Collection} to a delimited {@code String} (e.g. CSV). + *
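+ * For example, {@code collectionToDelimitedString(Arrays.asList("a", "b"), ",", "[", "]")} returns {@code "[a],[b]"}.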
+ * <p>
+ * Useful for {@code toString()} implementations. + * + * @param coll the {@code Collection} to convert + * @param delim the delimiter to use (typically a ",") + * @param prefix the {@code String} to start each element with + * @param suffix the {@code String} to end each element with + * @return the delimited {@code String} + */ + public static String collectionToDelimitedString(Collection coll, String delim, String prefix, String suffix) { + + if (coll == null || coll.isEmpty()) { + return ""; + } + + StringBuilder sb = new StringBuilder(); + Iterator it = coll.iterator(); + while (it.hasNext()) { + sb.append(prefix).append(it.next()).append(suffix); + if (it.hasNext()) { + sb.append(delim); + } + } + return sb.toString(); + } +} diff --git a/src/main/java/io/lettuce/core/Limit.java b/src/main/java/io/lettuce/core/Limit.java new file mode 100644 index 0000000000..8d235a8c3b --- /dev/null +++ b/src/main/java/io/lettuce/core/Limit.java @@ -0,0 +1,109 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +/** + * Value object for a slice of data (offset/count). + * + * @author Mark Paluch + * @since 4.2 + */ +public class Limit { + + private static final Limit UNLIMITED = new Limit(null, null); + + private final Long offset; + private final Long count; + + protected Limit(Long offset, Long count) { + this.offset = offset; + this.count = count; + } + + /** + * + * @return an unlimited limit. + */ + public static Limit unlimited() { + return UNLIMITED; + } + + /** + * Creates a {@link Limit} given {@code offset} and {@code count}. + * + * @param offset the offset. + * @param count the limit count. + * @return the {@link Limit} + */ + public static Limit create(long offset, long count) { + return new Limit(offset, count); + } + + /** + * Creates a {@link Limit} given {@code count}. + * + * @param count the limit count. + * @return the {@link Limit}. + * @since 4.5 + */ + public static Limit from(long count) { + return new Limit(0L, count); + } + + /** + * @return the offset or {@literal -1} if unlimited. + */ + public long getOffset() { + + if (offset != null) { + return offset; + } + + return -1; + } + + /** + * @return the count or {@literal -1} if unlimited. + */ + public long getCount() { + + if (count != null) { + return count; + } + + return -1; + } + + /** + * + * @return {@literal true} if the {@link Limit} contains a limitation. 
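+ * For example, {@code Limit.create(10, 20).isLimited()} returns {@literal true} whereas
+ * {@code Limit.unlimited().isLimited()} returns {@literal false}.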
+ */ + public boolean isLimited() { + return offset != null && count != null; + } + + @Override + public String toString() { + + StringBuilder sb = new StringBuilder(); + sb.append(getClass().getSimpleName()); + if (isLimited()) { + return sb.append(" [offset=").append(getOffset()).append(", count=").append(getCount()).append("]").toString(); + } + + return sb.append(" [unlimited]").toString(); + } +} diff --git a/src/main/java/io/lettuce/core/MapScanCursor.java b/src/main/java/io/lettuce/core/MapScanCursor.java new file mode 100644 index 0000000000..837da26349 --- /dev/null +++ b/src/main/java/io/lettuce/core/MapScanCursor.java @@ -0,0 +1,40 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.util.LinkedHashMap; +import java.util.Map; + +/** + * Scan cursor for maps. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 3.0 + */ +public class MapScanCursor extends ScanCursor { + + private final Map map = new LinkedHashMap<>(); + + /** + * + * @return the map result. + */ + public Map getMap() { + return map; + } +} diff --git a/src/main/java/io/lettuce/core/MigrateArgs.java b/src/main/java/io/lettuce/core/MigrateArgs.java new file mode 100644 index 0000000000..8d284bf217 --- /dev/null +++ b/src/main/java/io/lettuce/core/MigrateArgs.java @@ -0,0 +1,250 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.util.ArrayList; +import java.util.Arrays; +import java.util.List; + +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.protocol.CommandArgs; +import io.lettuce.core.protocol.CommandKeyword; +import io.lettuce.core.protocol.CommandType; + +/** + * Argument list builder for the Redis MIGRATE command. Static import the methods + * from {@link Builder} and chain the method calls: {@code copy().auth("foobar")}. + *
+ * <p>
+ * {@link MigrateArgs} is a mutable object and instances should be used only once to avoid shared mutable state. + * + * @author Mark Paluch + */ +public class MigrateArgs implements CompositeArgument { + + private boolean copy = false; + private boolean replace = false; + List keys = new ArrayList<>(); + private char[] password; + + /** + * Builder entry points for {@link MigrateArgs}. + */ + public static class Builder { + + /** + * Utility constructor. + */ + private Builder() { + } + + /** + * Creates new {@link MigrateArgs} and enabling {@literal COPY}. + * + * @return new {@link MigrateArgs} with {@literal COPY} enabled. + * @see MigrateArgs#copy() + */ + public static MigrateArgs copy() { + return new MigrateArgs().copy(); + } + + /** + * Creates new {@link MigrateArgs} and enabling {@literal REPLACE}. + * + * @return new {@link MigrateArgs} with {@literal REPLACE} enabled. + * @see MigrateArgs#replace() + */ + public static MigrateArgs replace() { + return new MigrateArgs().replace(); + } + + /** + * Creates new {@link MigrateArgs} setting a {@code key} to migrate. + * + * @param key must not be {@literal null}. + * @return new {@link MigrateArgs} for {@code key} to migrate. + * @see MigrateArgs#key(Object) + */ + public static MigrateArgs key(K key) { + return new MigrateArgs().key(key); + } + + /** + * Creates new {@link MigrateArgs} setting {@code keys} to migrate. + * + * @param keys must not be {@literal null}. + * @return new {@link MigrateArgs} for {@code keys} to migrate. + * @see MigrateArgs#keys(Object[]) + */ + @SafeVarargs + public static MigrateArgs keys(K... keys) { + return new MigrateArgs().keys(keys); + } + + /** + * Creates new {@link MigrateArgs} setting {@code keys} to migrate. + * + * @param keys must not be {@literal null}. + * @return new {@link MigrateArgs} for {@code keys} to migrate. + * @see MigrateArgs#keys(Iterable) + */ + public static MigrateArgs keys(Iterable keys) { + return new MigrateArgs().keys(keys); + } + + /** + * Creates new {@link MigrateArgs} with {@code AUTH} (target authentication) enabled. + * + * @return new {@link MigrateArgs} with {@code AUTH} (target authentication) enabled. + * @since 4.4.5 + * @see MigrateArgs#auth(CharSequence) + */ + public static MigrateArgs auth(CharSequence password) { + // TODO : implement auth(username,password) when https://github.com/antirez/redis/pull/7035 is fixed + return new MigrateArgs().auth(password); + } + + /** + * Creates new {@link MigrateArgs} with {@code AUTH} (target authentication) enabled. + * + * @return new {@link MigrateArgs} with {@code AUTH} (target authentication) enabled. + * @since 4.4.5 + * @see MigrateArgs#auth(char[]) + */ + public static MigrateArgs auth(char[] password) { + return new MigrateArgs().auth(password); + } + } + + /** + * Do not remove the key from the local instance by setting {@code COPY}. + * + * @return {@code this} {@link MigrateArgs}. + */ + public MigrateArgs copy() { + this.copy = true; + return this; + } + + /** + * Replace existing key on the remote instance by setting {@code REPLACE}. + * + * @return {@code this} {@link MigrateArgs}. + */ + public MigrateArgs replace() { + this.replace = true; + return this; + } + + /** + * Migrate a single {@code key}. + * + * @param key must not be {@literal null}. + * @return {@code this} {@link MigrateArgs}. + */ + public MigrateArgs key(K key) { + + LettuceAssert.notNull(key, "Key must not be null"); + + this.keys.add(key); + return this; + } + + /** + * Migrate one or more {@code keys}. 
+ * + * @param keys must not be {@literal null}. + * @return {@code this} {@link MigrateArgs}. + */ + @SafeVarargs + public final MigrateArgs keys(K... keys) { + + LettuceAssert.notEmpty(keys, "Keys must not be empty"); + + this.keys.addAll(Arrays.asList(keys)); + return this; + } + + /** + * Migrate one or more {@code keys}. + * + * @param keys must not be {@literal null}. + * @return {@code this} {@link MigrateArgs}. + */ + public MigrateArgs keys(Iterable keys) { + LettuceAssert.notNull(keys, "Keys must not be null"); + for (K key : keys) { + this.keys.add(key); + } + return this; + } + + /** + * Set {@literal AUTH} {@code password} option. + * + * @param password must not be {@literal null}. + * @return {@code this} {@link MigrateArgs}. + * @since 4.4.5 + */ + public MigrateArgs auth(CharSequence password) { + + LettuceAssert.notNull(password, "Password must not be null"); + + char[] chars = new char[password.length()]; + + for (int i = 0; i < password.length(); i++) { + chars[i] = password.charAt(i); + } + + this.password = chars; + return this; + } + + /** + * Set {@literal AUTH} {@code password} option. + * + * @param password must not be {@literal null}. + * @return {@code this} {@link MigrateArgs}. + * @since 4.4.5 + */ + public MigrateArgs auth(char[] password) { + + LettuceAssert.notNull(password, "Password must not be null"); + + this.password = Arrays.copyOf(password, password.length); + return this; + } + + @SuppressWarnings("unchecked") + public void build(CommandArgs args) { + + if (copy) { + args.add(CommandKeyword.COPY); + } + + if (replace) { + args.add(CommandKeyword.REPLACE); + } + + if (password != null) { + args.add(CommandType.AUTH).add(password); + } + + if (keys.size() > 1) { + args.add(CommandType.KEYS); + args.addKeys((List) keys); + } + } +} diff --git a/src/main/java/io/lettuce/core/Operators.java b/src/main/java/io/lettuce/core/Operators.java new file mode 100644 index 0000000000..88295f95da --- /dev/null +++ b/src/main/java/io/lettuce/core/Operators.java @@ -0,0 +1,253 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core; + +import java.lang.reflect.Field; +import java.lang.reflect.Method; +import java.security.AccessController; +import java.security.PrivilegedActionException; +import java.security.PrivilegedExceptionAction; +import java.util.Queue; +import java.util.concurrent.atomic.AtomicLongFieldUpdater; +import java.util.function.BiFunction; +import java.util.function.Supplier; + +import org.reactivestreams.Subscription; + +import reactor.core.Exceptions; +import reactor.core.publisher.Hooks; +import reactor.util.annotation.Nullable; +import reactor.util.concurrent.Queues; +import reactor.util.context.Context; +import io.lettuce.core.internal.LettuceFactories; +import io.netty.util.internal.logging.InternalLogger; +import io.netty.util.internal.logging.InternalLoggerFactory; + +/** + * Operator utilities to handle noop subscriptions, validate request size and to cap concurrent additive operations to + * Long.MAX_VALUE, which is generic to {@link Subscription#request(long)} handling. + *
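+ * <p>
+ * For example, {@code addCap(Long.MAX_VALUE - 1, 2)} returns {@code Long.MAX_VALUE} instead of overflowing.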
+ * <p>
+ * This class duplicates some methods from {@link reactor.core.publisher.Operators} to be independent from Reactor API changes. + * + * @author Mark Paluch + * @since 5.0 + */ +class Operators { + + private static final InternalLogger LOG = InternalLoggerFactory.getInstance(Operators.class); + + /** + * A key that can be used to store a sequence-specific {@link Hooks#onOperatorError(BiFunction)} hook in a {@link Context}, + * as a {@link BiFunction BiFunction<Throwable, Object, Throwable>}. + */ + private static final String KEY_ON_OPERATOR_ERROR = "reactor.onOperatorError.local"; + private static final Field onOperatorErrorHook = findOnOperatorErrorHookField(); + private static final Supplier> queueSupplier = getQueueSupplier(); + + private static Field findOnOperatorErrorHookField() { + + try { + return AccessController.doPrivileged((PrivilegedExceptionAction) () -> { + + Field field = Hooks.class.getDeclaredField("onOperatorErrorHook"); + + if (!field.isAccessible()) { + field.setAccessible(true); + } + + return field; + }); + + } catch (PrivilegedActionException e) { + return null; + } + } + + @SuppressWarnings("unchecked") + private static Supplier> getQueueSupplier() { + + try { + return AccessController.doPrivileged((PrivilegedExceptionAction>>) () -> { + Method unbounded = Queues.class.getMethod("unbounded"); + return (Supplier) unbounded.invoke(Queues.class); + }); + + } catch (PrivilegedActionException e) { + return LettuceFactories::newSpScQueue; + } + } + + /** + * Cap an addition to Long.MAX_VALUE + * + * @param a left operand + * @param b right operand + * + * @return Addition result or Long.MAX_VALUE if overflow + */ + static long addCap(long a, long b) { + + long res = a + b; + if (res < 0L) { + return Long.MAX_VALUE; + } + return res; + } + + /** + * Concurrent addition bound to Long.MAX_VALUE. Any concurrent write will "happen before" this operation. + * + * @param the parent instance type + * @param updater current field updater + * @param instance current instance to update + * @param toAdd delta to add + * @return {@literal true} if the operation succeeded. + * @since 5.0.1 + */ + public static boolean request(AtomicLongFieldUpdater updater, T instance, long toAdd) { + + if (validate(toAdd)) { + addCap(updater, instance, toAdd); + + return true; + } + + return false; + } + + /** + * Concurrent addition bound to Long.MAX_VALUE. Any concurrent write will "happen before" this operation. + * + * @param the parent instance type + * @param updater current field updater + * @param instance current instance to update + * @param toAdd delta to add + * @return value before addition or Long.MAX_VALUE + */ + static long addCap(AtomicLongFieldUpdater updater, T instance, long toAdd) { + + long r, u; + for (;;) { + r = updater.get(instance); + if (r == Long.MAX_VALUE) { + return Long.MAX_VALUE; + } + u = addCap(r, toAdd); + if (updater.compareAndSet(instance, r, u)) { + return r; + } + } + } + + /** + * Evaluate if a request is strictly positive otherwise {@link #reportBadRequest(long)} + * + * @param n the request value + * @return true if valid + */ + static boolean validate(long n) { + + if (n <= 0) { + reportBadRequest(n); + return false; + } + return true; + } + + /** + * Log an {@link IllegalArgumentException} if the request is null or negative. 
+ * + * @param n the failing demand + * + * @see Exceptions#nullOrNegativeRequestException(long) + */ + static void reportBadRequest(long n) { + + if (LOG.isDebugEnabled()) { + LOG.debug("Negative request", Exceptions.nullOrNegativeRequestException(n)); + } + } + + /** + * @param elements the invalid requested demand + * + * @return a new {@link IllegalArgumentException} with a cause message abiding to reactive stream specification rule 3.9. + */ + static IllegalArgumentException nullOrNegativeRequestException(long elements) { + return new IllegalArgumentException("Spec. Rule 3.9 - Cannot request a non strictly positive number: " + elements); + } + + /** + * Map an "operator" error given an operator parent {@link Subscription}. The result error will be passed via onError to the + * operator downstream. {@link Subscription} will be cancelled after checking for fatal error via + * {@link Exceptions#throwIfFatal(Throwable)}. Takes an additional signal, which can be added as a suppressed exception if + * it is a {@link Throwable} and the default {@link Hooks#onOperatorError(BiFunction) hook} is in place. + * + * @param subscription the linked operator parent {@link Subscription} + * @param error the callback or operator error + * @param dataSignal the value (onNext or onError) signal processed during failure + * @param context a context that might hold a local error consumer + * @return mapped {@link Throwable} + */ + static Throwable onOperatorError(@Nullable Subscription subscription, Throwable error, @Nullable Object dataSignal, + Context context) { + + Exceptions.throwIfFatal(error); + if (subscription != null) { + subscription.cancel(); + } + + Throwable t = Exceptions.unwrap(error); + BiFunction hook = context.getOrDefault(KEY_ON_OPERATOR_ERROR, null); + if (hook == null && onOperatorErrorHook != null) { + hook = getOnOperatorErrorHook(); + } + + if (hook == null) { + if (dataSignal != null) { + if (dataSignal != t && dataSignal instanceof Throwable) { + t = Exceptions.addSuppressed(t, (Throwable) dataSignal); + } + // do not wrap original value to avoid strong references + /* + * else { } + */ + } + return t; + } + return hook.apply(error, dataSignal); + } + + /** + * Create a new {@link Queue}. + * + * @return the new queue. + */ + @SuppressWarnings("unchecked") + static Queue newQueue() { + return (Queue) queueSupplier.get(); + } + + @SuppressWarnings("unchecked") + private static BiFunction getOnOperatorErrorHook() { + + try { + return (BiFunction) onOperatorErrorHook.get(Hooks.class); + } catch (ReflectiveOperationException e) { + return null; + } + } +} diff --git a/src/main/java/io/lettuce/core/OrderingReadFromAccessor.java b/src/main/java/io/lettuce/core/OrderingReadFromAccessor.java new file mode 100644 index 0000000000..08c793c53f --- /dev/null +++ b/src/main/java/io/lettuce/core/OrderingReadFromAccessor.java @@ -0,0 +1,44 @@ +/* + * Copyright 2019-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core; + +/** + * Accessor for {@link ReadFrom} ordering. Internal utility class. + * + * @author Mark Paluch + * @since 5.2 + */ +public abstract class OrderingReadFromAccessor { + + /** + * Utility constructor. + */ + private OrderingReadFromAccessor() { + } + + /** + * Returns whether this {@link ReadFrom} requires ordering of the resulting + * {@link io.lettuce.core.models.role.RedisNodeDescription nodes}. + * + * @return {@literal true} if code using {@link ReadFrom} should retain ordering or {@literal false} to allow reordering of + * {@link io.lettuce.core.models.role.RedisNodeDescription nodes}. + * @since 5.2 + * @see ReadFrom#isOrderSensitive() + */ + public static boolean isOrderSensitive(ReadFrom readFrom) { + return readFrom.isOrderSensitive(); + } +} diff --git a/src/main/java/io/lettuce/core/Range.java b/src/main/java/io/lettuce/core/Range.java new file mode 100644 index 0000000000..bef71c2176 --- /dev/null +++ b/src/main/java/io/lettuce/core/Range.java @@ -0,0 +1,269 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.util.Objects; + +import io.lettuce.core.internal.LettuceAssert; + +/** + * {@link Range} defines {@literal lower} and {@literal upper} boundaries to retrieve items from a sorted set. + * + * @author Mark Paluch + * @since 4.3 + */ +public class Range { + + private Boundary lower; + private Boundary upper; + + private Range(Boundary lower, Boundary upper) { + + LettuceAssert.notNull(lower, "Lower boundary must not be null"); + LettuceAssert.notNull(upper, "Upper boundary must not be null"); + + this.lower = lower; + this.upper = upper; + } + + /** + * Create a new range from {@code lower} and {@code upper} boundary values. Both values are included (greater than or equals + * and less than or equals). + * + * @param lower lower boundary, must not be {@literal null}. + * @param upper upper boundary, must not be {@literal null}. + * @param value type + * @return new {@link Range} + */ + public static Range create(T lower, T upper) { + + LettuceAssert.isTrue(!(lower instanceof Boundary), + "Lower must not be a Boundary. Use #from(Boundary, Boundary) instead"); + LettuceAssert.isTrue(!(upper instanceof Boundary), + "Upper must not be a Boundary. Use #from(Boundary, Boundary) instead"); + + return new Range(Boundary.including(lower), Boundary.including(upper)); + } + + /** + * Create a new range from {@code lower} and {@code upper} boundaries. + * + * @param lower lower boundary, must not be {@literal null}. + * @param upper upper boundary, must not be {@literal null}. + * @param value type. + * @return new {@link Range} + */ + public static Range from(Boundary lower, Boundary upper) { + return new Range(lower, upper); + } + + /** + * @param value type. + * @return new {@link Range} with {@code lower} and {@code upper} set to {@link Boundary#unbounded()}. 
+ */ + public static Range unbounded() { + return new Range(Boundary.unbounded(), Boundary.unbounded()); + } + + /** + * Greater than or equals {@code lower}. + * + * @param lower the lower boundary value. + * @return {@code this} {@link Range} with {@code lower} applied. + */ + public Range gte(T lower) { + + this.lower = Boundary.including(lower); + return this; + } + + /** + * Greater than {@code lower}. + * + * @param lower the lower boundary value. + * @return {@code this} {@link Range} with {@code lower} applied. + */ + public Range gt(T lower) { + + this.lower = Boundary.excluding(lower); + return this; + } + + /** + * Less than or equals {@code lower}. + * + * @param upper the upper boundary value. + * @return {@code this} {@link Range} with {@code upper} applied. + */ + public Range lte(T upper) { + + this.upper = Boundary.including(upper); + return this; + } + + /** + * Less than {@code lower}. + * + * @param upper the upper boundary value. + * @return {@code this} {@link Range} with {@code upper} applied. + */ + public Range lt(T upper) { + + this.upper = Boundary.excluding(upper); + return this; + } + + /** + * @return the lower boundary. + */ + public Boundary getLower() { + return lower; + } + + /** + * @return the upper boundary. + */ + public Boundary getUpper() { + return upper; + } + + @Override + public boolean equals(Object o) { + if (this == o) + return true; + if (!(o instanceof Range)) + return false; + Range range = (Range) o; + return Objects.equals(lower, range.lower) && Objects.equals(upper, range.upper); + } + + @Override + public int hashCode() { + return Objects.hash(lower, upper); + } + + @Override + public String toString() { + StringBuilder sb = new StringBuilder(); + sb.append(getClass().getSimpleName()).append(" ["); + sb.append(lower).append(" to ").append(upper).append("]"); + return sb.toString(); + } + + /** + * @author Mark Paluch + */ + public static class Boundary { + + private static final Boundary UNBOUNDED = new Boundary<>(null, true); + + private final T value; + private final boolean including; + + private Boundary(T value, boolean including) { + this.value = value; + this.including = including; + } + + /** + * Creates an unbounded (infinite) boundary that marks the beginning/end of the range. + * + * @return the unbounded boundary. + * @param inferred type. + */ + @SuppressWarnings("unchecked") + public static Boundary unbounded() { + return (Boundary) UNBOUNDED; + } + + /** + * Create a {@link Boundary} based on the {@code value} that includes the value when comparing ranges. Greater or + * equals, less or equals. but not Greater or equal, less or equal to {@code value}. + * + * @param value must not be {@literal null}. + * @param value type. + * @return the {@link Boundary}. + */ + public static Boundary including(T value) { + + LettuceAssert.notNull(value, "Value must not be null"); + + return new Boundary<>(value, true); + } + + /** + * Create a {@link Boundary} based on the {@code value} that excludes the value when comparing ranges. Greater or less + * to {@code value} but not greater or equal, less or equal. + * + * @param value must not be {@literal null}. + * @param value type. + * @return the {@link Boundary}. + */ + public static Boundary excluding(T value) { + + LettuceAssert.notNull(value, "Value must not be null"); + + return new Boundary<>(value, false); + } + + /** + * @return the value + */ + public T getValue() { + return value; + } + + /** + * @return {@literal true} if the boundary includes the value. 
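For orientation, here is a hedged usage sketch of the Range builders defined above (illustrative only; the sorted-set command that would consume such a range is omitted so the snippet runs standalone):

```java
import io.lettuce.core.Range;

public class RangeExample {

    public static void main(String[] args) {

        // 1 <= score <= 10 (both boundaries included)
        Range<Long> inclusive = Range.create(1L, 10L);

        // score > 1 and score <= 10 (lower boundary excluded)
        Range<Long> mixed = Range.from(Range.Boundary.excluding(1L), Range.Boundary.including(10L));

        // no lower or upper limit, narrowed afterwards via the gte(...) builder
        Range<Long> atLeastFive = Range.<Long> unbounded().gte(5L);

        System.out.println(inclusive);
        System.out.println(mixed);
        System.out.println(atLeastFive);
    }
}
```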
+ */ + public boolean isIncluding() { + return including; + } + + @Override + public boolean equals(Object o) { + if (this == o) + return true; + if (!(o instanceof Boundary)) + return false; + Boundary boundary = (Boundary) o; + return including == boundary.including && Objects.equals(value, boundary.value); + } + + @Override + public int hashCode() { + return Objects.hash(value, including); + } + + @Override + public String toString() { + + if (value == null) { + return "[unbounded]"; + } + + StringBuilder sb = new StringBuilder(); + if (including) { + sb.append('['); + } else { + sb.append('('); + } + + sb.append(value); + return sb.toString(); + } + } +} diff --git a/src/main/java/io/lettuce/core/ReadFrom.java b/src/main/java/io/lettuce/core/ReadFrom.java new file mode 100644 index 0000000000..81a084acb6 --- /dev/null +++ b/src/main/java/io/lettuce/core/ReadFrom.java @@ -0,0 +1,156 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.util.List; + +import io.lettuce.core.models.role.RedisNodeDescription; + +/** + * Defines from which Redis nodes data is read. + * + * @author Mark Paluch + * @author Ryosuke Hasebe + * @since 4.0 + */ +public abstract class ReadFrom { + + /** + * Setting to read from the master only. + */ + public static final ReadFrom MASTER = new ReadFromImpl.ReadFromMaster(); + + /** + * Setting to read preferred from the master and fall back to a replica if the master is not available. + */ + public static final ReadFrom MASTER_PREFERRED = new ReadFromImpl.ReadFromMasterPreferred(); + + /** + * Setting to read preferred from replica and fall back to master if no replica is not available. + * + * @since 5.2 + */ + public static final ReadFrom REPLICA_PREFERRED = new ReadFromImpl.ReadFromReplicaPreferred(); + + /** + * Setting to read preferred from replicas and fall back to master if no replica is not available. + * + * @since 4.4 + * @deprecated Renamed to {@link #REPLICA_PREFERRED}. + */ + @Deprecated + public static final ReadFrom SLAVE_PREFERRED = REPLICA_PREFERRED; + + /** + * Setting to read from the replica only. + * + * @since 5.2 + */ + public static final ReadFrom REPLICA = new ReadFromImpl.ReadFromReplica(); + + /** + * Setting to read from the replica only. + * + * @deprecated renamed to {@link #REPLICA}. + */ + @Deprecated + public static final ReadFrom SLAVE = REPLICA; + + /** + * Setting to read from the nearest node. + */ + public static final ReadFrom NEAREST = new ReadFromImpl.ReadFromNearest(); + + /** + * Setting to read from any node. + * + * @since 5.2 + */ + public static final ReadFrom ANY = new ReadFromImpl.ReadFromAnyNode(); + + /** + * Chooses the nodes from the matching Redis nodes that match this read selector. 
+ * + * @param nodes set of nodes that are suitable for reading + * @return List of {@link RedisNodeDescription}s that are selected for reading + */ + public abstract List select(Nodes nodes); + + /** + * Returns whether this {@link ReadFrom} requires ordering of the resulting {@link RedisNodeDescription nodes}. + * + * @return {@literal true} if code using {@link ReadFrom} should retain ordering or {@literal false} to allow reordering of + * {@link RedisNodeDescription nodes}. + * @since 5.2 + */ + protected boolean isOrderSensitive() { + return false; + } + + /** + * Retrieve the {@link ReadFrom} preset by name. + * + * @param name the name of the read from setting + * @return the {@link ReadFrom} preset + * @throws IllegalArgumentException if {@code name} is empty, {@literal null} or the {@link ReadFrom} preset is unknown. + */ + public static ReadFrom valueOf(String name) { + + if (LettuceStrings.isEmpty(name)) { + throw new IllegalArgumentException("Name must not be empty"); + } + + if (name.equalsIgnoreCase("master")) { + return MASTER; + } + + if (name.equalsIgnoreCase("masterPreferred")) { + return MASTER_PREFERRED; + } + + if (name.equalsIgnoreCase("slave") || name.equalsIgnoreCase("replica")) { + return REPLICA; + } + + if (name.equalsIgnoreCase("slavePreferred") || name.equalsIgnoreCase("replicaPreferred")) { + return REPLICA_PREFERRED; + } + + if (name.equalsIgnoreCase("nearest")) { + return NEAREST; + } + + if (name.equalsIgnoreCase("any")) { + return ANY; + } + + throw new IllegalArgumentException("ReadFrom " + name + " not supported"); + } + + /** + * Descriptor of nodes that are available for the current read operation. + */ + public interface Nodes extends Iterable { + + /** + * Returns the list of nodes that are applicable for the read operation. The list is ordered by latency. + * + * @return the collection of nodes that are applicable for reading. + * + */ + List getNodes(); + } +} diff --git a/src/main/java/io/lettuce/core/ReadFromImpl.java b/src/main/java/io/lettuce/core/ReadFromImpl.java new file mode 100644 index 0000000000..dde460282f --- /dev/null +++ b/src/main/java/io/lettuce/core/ReadFromImpl.java @@ -0,0 +1,167 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; +import java.util.function.Predicate; + +import io.lettuce.core.internal.LettuceLists; +import io.lettuce.core.models.role.RedisInstance; +import io.lettuce.core.models.role.RedisNodeDescription; + +/** + * Collection of common read setting implementations. + * + * @author Mark Paluch + * @since 4.0 + */ +class ReadFromImpl { + + private static final Predicate IS_MASTER = node -> node.getRole() == RedisInstance.Role.MASTER; + + private static final Predicate IS_REPLICA = node -> node.getRole() == RedisInstance.Role.SLAVE; + + /** + * Read from master only. 
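As a usage sketch (illustrative, not part of this change set): presets are typically picked via the constants or resolved by name through valueOf and then applied to a Master/Replica or Cluster connection; the connection part is only indicated in comments because it needs a running Redis:

```java
import io.lettuce.core.ReadFrom;

public class ReadFromExample {

    public static void main(String[] args) {

        // Resolve a preset by name, e.g. from external configuration.
        ReadFrom fromConfig = ReadFrom.valueOf("replicaPreferred");

        // valueOf returns the shared preset instances defined above.
        System.out.println(fromConfig == ReadFrom.REPLICA_PREFERRED); // true

        // Applying it requires a running Redis, hence only sketched here:
        // StatefulRedisMasterReplicaConnection<String, String> connection = ...;
        // connection.setReadFrom(ReadFrom.REPLICA_PREFERRED);
    }
}
```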
+ */ + static final class ReadFromMaster extends ReadFrom { + + @Override + public List select(Nodes nodes) { + + for (RedisNodeDescription node : nodes) { + if (node.getRole() == RedisInstance.Role.MASTER) { + return LettuceLists.newList(node); + } + } + + return Collections.emptyList(); + } + } + + /** + * Read from master and replicas. Prefer master reads and fall back to replicas if the master is not available. + */ + static final class ReadFromMasterPreferred extends OrderedPredicateReadFromAdapter { + + ReadFromMasterPreferred() { + super(IS_MASTER, IS_REPLICA); + } + } + + /** + * Read from replica only. + */ + static final class ReadFromReplica extends OrderedPredicateReadFromAdapter { + + ReadFromReplica() { + super(IS_REPLICA); + } + } + + /** + * Read from master and replicas. Prefer replica reads and fall back to master if the no replica is not available. + */ + static final class ReadFromReplicaPreferred extends OrderedPredicateReadFromAdapter { + + ReadFromReplicaPreferred() { + super(IS_REPLICA, IS_MASTER); + } + } + + /** + * Read from nearest node. + */ + static final class ReadFromNearest extends ReadFrom { + + @Override + public List select(Nodes nodes) { + return nodes.getNodes(); + } + + @Override + protected boolean isOrderSensitive() { + return true; + } + } + + /** + * Read from any node. + */ + static final class ReadFromAnyNode extends UnorderedPredicateReadFromAdapter { + + public ReadFromAnyNode() { + super(x -> true); + } + } + + /** + * {@link Predicate}-based {@link ReadFrom} implementation. + * + * @since 5.2 + */ + static class OrderedPredicateReadFromAdapter extends ReadFrom { + + private final Predicate predicates[]; + + @SafeVarargs + OrderedPredicateReadFromAdapter(Predicate... predicates) { + this.predicates = predicates; + } + + @Override + public List select(Nodes nodes) { + + List result = new ArrayList<>(nodes.getNodes().size()); + + for (Predicate predicate : predicates) { + + for (RedisNodeDescription node : nodes) { + if (predicate.test(node)) { + result.add(node); + } + } + } + + return result; + } + + @Override + protected boolean isOrderSensitive() { + return true; + } + } + + /** + * Unordered {@link Predicate}-based {@link ReadFrom} implementation. + * + * @since 5.2 + */ + static class UnorderedPredicateReadFromAdapter extends OrderedPredicateReadFromAdapter { + + @SafeVarargs + UnorderedPredicateReadFromAdapter(Predicate... predicates) { + super(predicates); + } + + @Override + protected boolean isOrderSensitive() { + return false; + } + } +} diff --git a/src/main/java/io/lettuce/core/RedisAsyncCommandsImpl.java b/src/main/java/io/lettuce/core/RedisAsyncCommandsImpl.java new file mode 100644 index 0000000000..4d3b4882a1 --- /dev/null +++ b/src/main/java/io/lettuce/core/RedisAsyncCommandsImpl.java @@ -0,0 +1,48 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core; + +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.async.RedisAsyncCommands; +import io.lettuce.core.cluster.api.async.RedisClusterAsyncCommands; +import io.lettuce.core.codec.RedisCodec; + +/** + * An asynchronous and thread-safe API for a Redis connection. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + */ +public class RedisAsyncCommandsImpl extends AbstractRedisAsyncCommands + implements RedisAsyncCommands, RedisClusterAsyncCommands { + + /** + * Initialize a new instance. + * + * @param connection the connection to operate on + * @param codec the codec for command encoding + * + */ + public RedisAsyncCommandsImpl(StatefulRedisConnection connection, RedisCodec codec) { + super(connection, codec); + } + + @Override + public StatefulRedisConnection getStatefulConnection() { + return (StatefulRedisConnection) super.getConnection(); + } +} diff --git a/src/main/java/io/lettuce/core/RedisBusyException.java b/src/main/java/io/lettuce/core/RedisBusyException.java new file mode 100644 index 0000000000..4893706919 --- /dev/null +++ b/src/main/java/io/lettuce/core/RedisBusyException.java @@ -0,0 +1,45 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +/** + * Exception that gets thrown when Redis is busy executing a Lua script with a {@code BUSY} error response. + * + * @author Mark Paluch + * @since 4.5 + */ +@SuppressWarnings("serial") +public class RedisBusyException extends RedisCommandExecutionException { + + /** + * Create a {@code RedisBusyException} with the specified detail message. + * + * @param msg the detail message. + */ + public RedisBusyException(String msg) { + super(msg); + } + + /** + * Create a {@code RedisNoScriptException} with the specified detail message and nested exception. + * + * @param msg the detail message. + * @param cause the nested exception. + */ + public RedisBusyException(String msg, Throwable cause) { + super(msg, cause); + } +} diff --git a/src/main/java/io/lettuce/core/RedisChannelHandler.java b/src/main/java/io/lettuce/core/RedisChannelHandler.java new file mode 100644 index 0000000000..3fdcb910ef --- /dev/null +++ b/src/main/java/io/lettuce/core/RedisChannelHandler.java @@ -0,0 +1,316 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core; + +import java.io.Closeable; +import java.io.IOException; +import java.lang.reflect.Proxy; +import java.time.Duration; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collection; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicIntegerFieldUpdater; + +import io.lettuce.core.api.StatefulConnection; +import io.lettuce.core.internal.AsyncCloseable; +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.protocol.*; +import io.lettuce.core.resource.ClientResources; +import io.lettuce.core.tracing.TraceContextProvider; +import io.netty.util.internal.logging.InternalLogger; +import io.netty.util.internal.logging.InternalLoggerFactory; + +/** + * Abstract base for every Redis connection. Provides basic connection functionality and tracks open resources. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 3.0 + */ +public abstract class RedisChannelHandler implements Closeable, ConnectionFacade { + + private static final InternalLogger logger = InternalLoggerFactory.getInstance(RedisChannelHandler.class); + + @SuppressWarnings("rawtypes") + private static final AtomicIntegerFieldUpdater CLOSED = AtomicIntegerFieldUpdater.newUpdater( + RedisChannelHandler.class, "closed"); + + private static final int ST_OPEN = 0; + private static final int ST_CLOSED = 1; + + private Duration timeout; + private CloseEvents closeEvents = new CloseEvents(); + + private final RedisChannelWriter channelWriter; + private final ClientResources clientResources; + private final boolean tracingEnabled; + private final boolean debugEnabled = logger.isDebugEnabled(); + private final CompletableFuture closeFuture = new CompletableFuture<>(); + + // accessed via CLOSED + @SuppressWarnings("unused") + private volatile int closed = ST_OPEN; + private volatile boolean active = true; + private volatile ClientOptions clientOptions; + + /** + * @param writer the channel writer + * @param timeout timeout value + */ + public RedisChannelHandler(RedisChannelWriter writer, Duration timeout) { + + this.channelWriter = writer; + this.clientResources = writer.getClientResources(); + this.tracingEnabled = clientResources.tracing().isEnabled(); + + writer.setConnectionFacade(this); + setTimeout(timeout); + } + + /** + * Set the command timeout for this connection. + * + * @param timeout Command timeout. + * @since 5.0 + */ + public void setTimeout(Duration timeout) { + + LettuceAssert.notNull(timeout, "Timeout duration must not be null"); + LettuceAssert.isTrue(!timeout.isNegative(), "Timeout duration must be greater or equal to zero"); + + this.timeout = timeout; + + if (channelWriter instanceof CommandExpiryWriter) { + ((CommandExpiryWriter) channelWriter).setTimeout(timeout); + } + } + + /** + * Close the connection (synchronous). + */ + @Override + public void close() { + + if (debugEnabled) { + logger.debug("close()"); + } + + closeAsync().join(); + } + + /** + * Close the connection (asynchronous). 
+ * + * @since 5.1 + */ + public CompletableFuture closeAsync() { + + if (debugEnabled) { + logger.debug("closeAsync()"); + } + + if (CLOSED.get(this) == ST_CLOSED) { + logger.warn("Connection is already closed"); + return closeFuture; + } + + if (CLOSED.compareAndSet(this, ST_OPEN, ST_CLOSED)) { + + active = false; + CompletableFuture future = channelWriter.closeAsync(); + + future.whenComplete((v, t) -> { + + closeEvents.fireEventClosed(this); + closeEvents = new CloseEvents(); + + if (t != null) { + closeFuture.completeExceptionally(t); + } else { + closeFuture.complete(v); + } + }); + } else { + logger.warn("Connection is already closed (concurrently)"); + } + + return closeFuture; + } + + protected RedisCommand dispatch(RedisCommand cmd) { + + if (debugEnabled) { + logger.debug("dispatching command {}", cmd); + } + + if (tracingEnabled) { + + RedisCommand commandToSend = cmd; + TraceContextProvider provider = CommandWrapper.unwrap(cmd, TraceContextProvider.class); + + if (provider == null) { + commandToSend = new TracedCommand<>(cmd, clientResources.tracing() + .initialTraceContextProvider().getTraceContext()); + } + + return channelWriter.write(commandToSend); + } + + return channelWriter.write(cmd); + } + + protected Collection> dispatch(Collection> commands) { + + if (debugEnabled) { + logger.debug("dispatching commands {}", commands); + } + + if (tracingEnabled) { + + Collection> withTracer = new ArrayList<>(commands.size()); + + for (RedisCommand command : commands) { + + RedisCommand commandToUse = command; + TraceContextProvider provider = CommandWrapper.unwrap(command, TraceContextProvider.class); + if (provider == null) { + commandToUse = new TracedCommand<>(command, clientResources.tracing() + .initialTraceContextProvider().getTraceContext()); + } + + withTracer.add(commandToUse); + } + + return channelWriter.write(withTracer); + + } + + return channelWriter.write(commands); + } + + /** + * Register Closeable resources. Internal access only. + * + * @param registry registry of closeables + * @param closeables closeables to register + */ + public void registerCloseables(final Collection registry, Closeable... closeables) { + + registry.addAll(Arrays.asList(closeables)); + + addListener(resource -> { + for (Closeable closeable : closeables) { + if (closeable == RedisChannelHandler.this) { + continue; + } + + try { + if (closeable instanceof AsyncCloseable) { + ((AsyncCloseable) closeable).closeAsync(); + } else { + closeable.close(); + } + } catch (IOException e) { + if (debugEnabled) { + logger.debug(e.toString(), e); + } + } + } + + registry.removeAll(Arrays.asList(closeables)); + }); + } + + protected void addListener(CloseEvents.CloseListener listener) { + closeEvents.addListener(listener); + } + + /** + * @return true if the connection is closed (final state in the connection lifecyle). + */ + public boolean isClosed() { + return CLOSED.get(this) == ST_CLOSED; + } + + /** + * Notification when the connection becomes active (connected). + */ + public void activated() { + active = true; + CLOSED.set(this, ST_OPEN); + } + + /** + * Notification when the connection becomes inactive (disconnected). + */ + public void deactivated() { + active = false; + } + + /** + * @return the channel writer + */ + public RedisChannelWriter getChannelWriter() { + return channelWriter; + } + + /** + * @return true if the connection is active and not closed. 
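From the caller's perspective, the asynchronous close path described above is typically driven like this (a minimal sketch assuming a Redis instance at localhost:6379):

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;

public class AsyncCloseExample {

    public static void main(String[] args) {

        RedisClient client = RedisClient.create("redis://localhost:6379");
        StatefulRedisConnection<String, String> connection = client.connect();

        // closeAsync() is idempotent: a concurrent or repeated call returns the same close future.
        connection.closeAsync()
                .thenCompose(ignored -> client.shutdownAsync())
                .toCompletableFuture()
                .join();
    }
}
```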
+ */ + public boolean isOpen() { + return active; + } + + @Deprecated + @Override + public void reset() { + channelWriter.reset(); + } + + public ClientOptions getOptions() { + return clientOptions; + } + + public ClientResources getResources() { + return clientResources; + } + + public void setOptions(ClientOptions clientOptions) { + LettuceAssert.notNull(clientOptions, "ClientOptions must not be null"); + this.clientOptions = clientOptions; + } + + public Duration getTimeout() { + return timeout; + } + + @SuppressWarnings("unchecked") + protected T syncHandler(Object asyncApi, Class... interfaces) { + FutureSyncInvocationHandler h = new FutureSyncInvocationHandler((StatefulConnection) this, asyncApi, interfaces); + return (T) Proxy.newProxyInstance(AbstractRedisClient.class.getClassLoader(), interfaces, h); + } + + public void setAutoFlushCommands(boolean autoFlush) { + getChannelWriter().setAutoFlushCommands(autoFlush); + } + + public void flushCommands() { + getChannelWriter().flushCommands(); + } +} diff --git a/src/main/java/io/lettuce/core/RedisChannelWriter.java b/src/main/java/io/lettuce/core/RedisChannelWriter.java new file mode 100644 index 0000000000..f464e0ed37 --- /dev/null +++ b/src/main/java/io/lettuce/core/RedisChannelWriter.java @@ -0,0 +1,106 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.io.Closeable; +import java.util.Collection; +import java.util.concurrent.CompletableFuture; + +import io.lettuce.core.internal.AsyncCloseable; +import io.lettuce.core.protocol.ConnectionFacade; +import io.lettuce.core.protocol.RedisCommand; +import io.lettuce.core.resource.ClientResources; + +/** + * Writer for a channel. Writers push commands on to the communication channel and maintain a state for the commands. + * + * @author Mark Paluch + * @since 3.0 + */ +public interface RedisChannelWriter extends Closeable, AsyncCloseable { + + /** + * Write a command on the channel. The command may be changed/wrapped during write and the written instance is returned + * after the call. + * + * @param command the Redis command. + * @param result type + * @return the written Redis command. + */ + RedisCommand write(RedisCommand command); + + /** + * Write multiple commands on the channel. The commands may be changed/wrapped during write and the written instance is + * returned after the call. + * + * @param commands the Redis commands. + * @param key type + * @param value type + * @return the written redis command + */ + Collection> write(Collection> commands); + + @Override + void close(); + + /** + * Asynchronously close the {@link RedisChannelWriter}. + * + * @return future for result synchronization. + * @since 5.1 + */ + @Override + CompletableFuture closeAsync(); + + /** + * Reset the command state. Queued commands will be canceled and the internal state will be reset. This is useful when the + * internal state machine gets out of sync with the connection (e.g. 
errors during external SSL tunneling). Calling this + * method will reset the protocol state, therefore it is considered unsafe. + * + * @deprecated since 5.2. This method is unsafe and can cause protocol offsets (i.e. Redis commands are completed with + * previous command values). + */ + @Deprecated + void reset(); + + /** + * Set the corresponding connection facade in order to notify it about channel active/inactive state. + * + * @param connection the connection facade (external connection object) + */ + void setConnectionFacade(ConnectionFacade connection); + + /** + * Disable or enable auto-flush behavior. Default is {@literal true}. If autoFlushCommands is disabled, multiple commands + * can be issued without writing them actually to the transport. Commands are buffered until a {@link #flushCommands()} is + * issued. After calling {@link #flushCommands()} commands are sent to the transport and executed by Redis. + * + * @param autoFlush state of autoFlush. + */ + void setAutoFlushCommands(boolean autoFlush); + + /** + * Flush pending commands. This commands forces a flush on the channel and can be used to buffer ("pipeline") commands to + * achieve batching. No-op if channel is not connected. + */ + void flushCommands(); + + /** + * @return the {@link ClientResources}. + * @since 5.1 + */ + ClientResources getClientResources(); +} diff --git a/src/main/java/io/lettuce/core/RedisClient.java b/src/main/java/io/lettuce/core/RedisClient.java new file mode 100644 index 0000000000..618f43c1d2 --- /dev/null +++ b/src/main/java/io/lettuce/core/RedisClient.java @@ -0,0 +1,827 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core; + +import static io.lettuce.core.LettuceStrings.isEmpty; +import static io.lettuce.core.LettuceStrings.isNotEmpty; + +import java.net.InetSocketAddress; +import java.net.SocketAddress; +import java.time.Duration; +import java.util.List; +import java.util.Queue; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.CompletionException; +import java.util.concurrent.CompletionStage; +import java.util.concurrent.LinkedBlockingQueue; +import java.util.function.Supplier; + +import io.lettuce.core.internal.ExceptionFactory; +import reactor.core.publisher.Mono; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.internal.Futures; +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.protocol.CommandExpiryWriter; +import io.lettuce.core.protocol.CommandHandler; +import io.lettuce.core.protocol.DefaultEndpoint; +import io.lettuce.core.protocol.Endpoint; +import io.lettuce.core.pubsub.PubSubCommandHandler; +import io.lettuce.core.pubsub.PubSubEndpoint; +import io.lettuce.core.pubsub.StatefulRedisPubSubConnection; +import io.lettuce.core.pubsub.StatefulRedisPubSubConnectionImpl; +import io.lettuce.core.resource.ClientResources; +import io.lettuce.core.sentinel.StatefulRedisSentinelConnectionImpl; +import io.lettuce.core.sentinel.api.StatefulRedisSentinelConnection; +import io.netty.util.internal.logging.InternalLogger; +import io.netty.util.internal.logging.InternalLoggerFactory; + +/** + * A scalable and thread-safe Redis client supporting synchronous, asynchronous and reactive + * execution models. Multiple threads may share one connection if they avoid blocking and transactional operations such as BLPOP + * and MULTI/EXEC. + *
+ * {@link RedisClient} can be used with:
+ * <ul>
+ * <li>Redis Standalone</li>
+ * <li>Redis Pub/Sub</li>
+ * <li>Redis Sentinel, Sentinel connections</li>
+ * <li>Redis Sentinel, Master connections</li>
+ * </ul>
+ * + * Redis Cluster is used through {@link io.lettuce.core.cluster.RedisClusterClient}. Master/Replica connections through + * {@link io.lettuce.core.masterreplica.MasterReplica} provide connections to Redis Master/Replica setups which run either in a + * static Master/Replica setup or are managed by Redis Sentinel. + *
+ * {@link RedisClient} is an expensive resource. It holds a set of netty's {@link io.netty.channel.EventLoopGroup}'s that use + * multiple threads. Reuse this instance as much as possible or share a {@link ClientResources} instance amongst multiple client + * instances. + * + * @author Will Glozer + * @author Mark Paluch + * @see RedisURI + * @see StatefulRedisConnection + * @see RedisFuture + * @see reactor.core.publisher.Mono + * @see reactor.core.publisher.Flux + * @see RedisCodec + * @see ClientOptions + * @see ClientResources + * @see io.lettuce.core.masterreplica.MasterReplica + * @see io.lettuce.core.cluster.RedisClusterClient + */ +public class RedisClient extends AbstractRedisClient { + + private static final InternalLogger logger = InternalLoggerFactory.getInstance(RedisClient.class); + + private static final RedisURI EMPTY_URI = new RedisURI(); + + private final RedisURI redisURI; + + protected RedisClient(ClientResources clientResources, RedisURI redisURI) { + + super(clientResources); + + assertNotNull(redisURI); + + this.redisURI = redisURI; + setDefaultTimeout(redisURI.getTimeout()); + } + + /** + * Creates a uri-less RedisClient. You can connect to different Redis servers but you must supply a {@link RedisURI} on + * connecting. Methods without having a {@link RedisURI} will fail with a {@link java.lang.IllegalStateException}. + * Non-private constructor to make {@link RedisClient} proxyable. + */ + protected RedisClient() { + this(null, EMPTY_URI); + } + + /** + * Creates a uri-less RedisClient with default {@link ClientResources}. You can connect to different Redis servers but you + * must supply a {@link RedisURI} on connecting. Methods without having a {@link RedisURI} will fail with a + * {@link java.lang.IllegalStateException}. + * + * @return a new instance of {@link RedisClient} + */ + public static RedisClient create() { + return new RedisClient(null, EMPTY_URI); + } + + /** + * Create a new client that connects to the supplied {@link RedisURI uri} with default {@link ClientResources}. You can + * connect to different Redis servers but you must supply a {@link RedisURI} on connecting. + * + * @param redisURI the Redis URI, must not be {@literal null} + * @return a new instance of {@link RedisClient} + */ + public static RedisClient create(RedisURI redisURI) { + assertNotNull(redisURI); + return new RedisClient(null, redisURI); + } + + /** + * Create a new client that connects to the supplied uri with default {@link ClientResources}. You can connect to different + * Redis servers but you must supply a {@link RedisURI} on connecting. + * + * @param uri the Redis URI, must not be {@literal null} + * @return a new instance of {@link RedisClient} + */ + public static RedisClient create(String uri) { + LettuceAssert.notEmpty(uri, "URI must not be empty"); + return new RedisClient(null, RedisURI.create(uri)); + } + + /** + * Creates a uri-less RedisClient with shared {@link ClientResources}. You need to shut down the {@link ClientResources} + * upon shutting down your application. You can connect to different Redis servers but you must supply a {@link RedisURI} on + * connecting. Methods without having a {@link RedisURI} will fail with a {@link java.lang.IllegalStateException}. 
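Since the class javadoc above stresses that RedisClient is an expensive resource, here is a hedged sketch of sharing one ClientResources instance across several clients (hosts and ports are placeholders):

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.resource.ClientResources;
import io.lettuce.core.resource.DefaultClientResources;

public class SharedResourcesExample {

    public static void main(String[] args) {

        // One set of event loops/thread pools shared by several client instances.
        ClientResources shared = DefaultClientResources.create();

        RedisClient clientA = RedisClient.create(shared, "redis://localhost:6379");
        RedisClient clientB = RedisClient.create(shared, "redis://localhost:6380");

        // ... use the clients ...

        clientA.shutdown();
        clientB.shutdown();

        // Shared resources are not shut down by the clients; release them explicitly.
        shared.shutdown();
    }
}
```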
+ * + * @param clientResources the client resources, must not be {@literal null} + * @return a new instance of {@link RedisClient} + */ + public static RedisClient create(ClientResources clientResources) { + assertNotNull(clientResources); + return new RedisClient(clientResources, EMPTY_URI); + } + + /** + * Create a new client that connects to the supplied uri with shared {@link ClientResources}.You need to shut down the + * {@link ClientResources} upon shutting down your application. You can connect to different Redis servers but you must + * supply a {@link RedisURI} on connecting. + * + * @param clientResources the client resources, must not be {@literal null} + * @param uri the Redis URI, must not be {@literal null} + * + * @return a new instance of {@link RedisClient} + */ + public static RedisClient create(ClientResources clientResources, String uri) { + assertNotNull(clientResources); + LettuceAssert.notEmpty(uri, "URI must not be empty"); + return create(clientResources, RedisURI.create(uri)); + } + + /** + * Create a new client that connects to the supplied {@link RedisURI uri} with shared {@link ClientResources}. You need to + * shut down the {@link ClientResources} upon shutting down your application.You can connect to different Redis servers but + * you must supply a {@link RedisURI} on connecting. + * + * @param clientResources the client resources, must not be {@literal null} + * @param redisURI the Redis URI, must not be {@literal null} + * @return a new instance of {@link RedisClient} + */ + public static RedisClient create(ClientResources clientResources, RedisURI redisURI) { + assertNotNull(clientResources); + assertNotNull(redisURI); + return new RedisClient(clientResources, redisURI); + } + + /** + * Open a new connection to a Redis server that treats keys and values as UTF-8 strings. + * + * @return A new stateful Redis connection + */ + public StatefulRedisConnection connect() { + return connect(newStringStringCodec()); + } + + /** + * Open a new connection to a Redis server. Use the supplied {@link RedisCodec codec} to encode/decode keys and values. + * + * @param codec Use this codec to encode/decode keys and values, must not be {@literal null} + * @param Key type + * @param Value type + * @return A new stateful Redis connection + */ + public StatefulRedisConnection connect(RedisCodec codec) { + + checkForRedisURI(); + + return getConnection(connectStandaloneAsync(codec, this.redisURI, getDefaultTimeout())); + } + + /** + * Open a new connection to a Redis server using the supplied {@link RedisURI} that treats keys and values as UTF-8 strings. + * + * @param redisURI the Redis server to connect to, must not be {@literal null} + * @return A new connection + */ + public StatefulRedisConnection connect(RedisURI redisURI) { + + assertNotNull(redisURI); + + return getConnection(connectStandaloneAsync(newStringStringCodec(), redisURI, redisURI.getTimeout())); + } + + /** + * Open a new connection to a Redis server using the supplied {@link RedisURI} and the supplied {@link RedisCodec codec} to + * encode/decode keys. 
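A minimal end-to-end sketch of the connect path above (assumes a Redis server on localhost:6379; illustrative only):

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisURI;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.sync.RedisCommands;

public class ConnectExample {

    public static void main(String[] args) {

        RedisURI uri = RedisURI.create("redis://localhost:6379/0");
        RedisClient client = RedisClient.create(uri);

        StatefulRedisConnection<String, String> connection = client.connect();
        RedisCommands<String, String> commands = connection.sync();

        commands.set("greeting", "Hello, Lettuce");
        System.out.println(commands.get("greeting"));

        connection.close();
        client.shutdown();
    }
}
```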
+ * + * @param codec Use this codec to encode/decode keys and values, must not be {@literal null} + * @param redisURI the Redis server to connect to, must not be {@literal null} + * @param Key type + * @param Value type + * @return A new connection + */ + public StatefulRedisConnection connect(RedisCodec codec, RedisURI redisURI) { + + assertNotNull(redisURI); + + return getConnection(connectStandaloneAsync(codec, redisURI, redisURI.getTimeout())); + } + + /** + * Open asynchronously a new connection to a Redis server using the supplied {@link RedisURI} and the supplied + * {@link RedisCodec codec} to encode/decode keys. + * + * @param codec Use this codec to encode/decode keys and values, must not be {@literal null} + * @param redisURI the Redis server to connect to, must not be {@literal null} + * @param Key type + * @param Value type + * @return {@link ConnectionFuture} to indicate success or failure to connect. + * @since 5.0 + */ + public ConnectionFuture> connectAsync(RedisCodec codec, RedisURI redisURI) { + + assertNotNull(redisURI); + + return transformAsyncConnectionException(connectStandaloneAsync(codec, redisURI, redisURI.getTimeout())); + } + + private ConnectionFuture> connectStandaloneAsync(RedisCodec codec, + RedisURI redisURI, Duration timeout) { + + assertNotNull(codec); + checkValidRedisURI(redisURI); + + logger.debug("Trying to get a Redis connection for: " + redisURI); + + DefaultEndpoint endpoint = new DefaultEndpoint(getOptions(), getResources()); + RedisChannelWriter writer = endpoint; + + if (CommandExpiryWriter.isSupported(getOptions())) { + writer = new CommandExpiryWriter(writer, getOptions(), getResources()); + } + + StatefulRedisConnectionImpl connection = newStatefulRedisConnection(writer, codec, timeout); + ConnectionFuture> future = connectStatefulAsync(connection, endpoint, redisURI, + () -> new CommandHandler(getOptions(), getResources(), endpoint)); + + future.whenComplete((channelHandler, throwable) -> { + + if (throwable != null) { + connection.close(); + } + }); + + return future; + } + + @SuppressWarnings("unchecked") + private ConnectionFuture connectStatefulAsync(StatefulRedisConnectionImpl connection, Endpoint endpoint, + RedisURI redisURI, Supplier commandHandlerSupplier) { + + ConnectionBuilder connectionBuilder; + if (redisURI.isSsl()) { + SslConnectionBuilder sslConnectionBuilder = SslConnectionBuilder.sslConnectionBuilder(); + sslConnectionBuilder.ssl(redisURI); + connectionBuilder = sslConnectionBuilder; + } else { + connectionBuilder = ConnectionBuilder.connectionBuilder(); + } + + ConnectionState state = connection.getConnectionState(); + state.apply(redisURI); + state.setDb(redisURI.getDatabase()); + + connectionBuilder.connection(connection); + connectionBuilder.clientOptions(getOptions()); + connectionBuilder.clientResources(getResources()); + connectionBuilder.commandHandler(commandHandlerSupplier).endpoint(endpoint); + + connectionBuilder(getSocketAddressSupplier(redisURI), connectionBuilder, redisURI); + connectionBuilder.connectionInitializer(createHandshake(state)); + channelType(connectionBuilder, redisURI); + + ConnectionFuture> future = initializeChannelAsync(connectionBuilder); + + return future.thenApply(channelHandler -> (S) connection); + } + + /** + * Open a new pub/sub connection to a Redis server that treats keys and values as UTF-8 strings. 
+ * + * @return A new stateful pub/sub connection + */ + public StatefulRedisPubSubConnection connectPubSub() { + return getConnection(connectPubSubAsync(newStringStringCodec(), redisURI, getDefaultTimeout())); + } + + /** + * Open a new pub/sub connection to a Redis server using the supplied {@link RedisURI} that treats keys and values as UTF-8 + * strings. + * + * @param redisURI the Redis server to connect to, must not be {@literal null} + * @return A new stateful pub/sub connection + */ + public StatefulRedisPubSubConnection connectPubSub(RedisURI redisURI) { + + assertNotNull(redisURI); + return getConnection(connectPubSubAsync(newStringStringCodec(), redisURI, redisURI.getTimeout())); + } + + /** + * Open a new pub/sub connection to the Redis server using the supplied {@link RedisURI} and use the supplied + * {@link RedisCodec codec} to encode/decode keys and values. + * + * @param codec Use this codec to encode/decode keys and values, must not be {@literal null} + * @param Key type + * @param Value type + * @return A new stateful pub/sub connection + */ + public StatefulRedisPubSubConnection connectPubSub(RedisCodec codec) { + checkForRedisURI(); + return getConnection(connectPubSubAsync(codec, redisURI, getDefaultTimeout())); + } + + /** + * Open a new pub/sub connection to the Redis server using the supplied {@link RedisURI} and use the supplied + * {@link RedisCodec codec} to encode/decode keys and values. + * + * @param codec Use this codec to encode/decode keys and values, must not be {@literal null} + * @param redisURI the Redis server to connect to, must not be {@literal null} + * @param Key type + * @param Value type + * @return A new connection + */ + public StatefulRedisPubSubConnection connectPubSub(RedisCodec codec, RedisURI redisURI) { + + assertNotNull(redisURI); + return getConnection(connectPubSubAsync(codec, redisURI, redisURI.getTimeout())); + } + + /** + * Open asynchronously a new pub/sub connection to the Redis server using the supplied {@link RedisURI} and use the supplied + * {@link RedisCodec codec} to encode/decode keys and values. + * + * @param codec Use this codec to encode/decode keys and values, must not be {@literal null} + * @param redisURI the redis server to connect to, must not be {@literal null} + * @param Key type + * @param Value type + * @return {@link ConnectionFuture} to indicate success or failure to connect. 
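A short pub/sub sketch using the connection type returned above (assumes a local Redis; RedisPubSubAdapter is the existing listener convenience base class):

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.pubsub.RedisPubSubAdapter;
import io.lettuce.core.pubsub.StatefulRedisPubSubConnection;

public class PubSubExample {

    public static void main(String[] args) throws InterruptedException {

        RedisClient client = RedisClient.create("redis://localhost:6379");
        StatefulRedisPubSubConnection<String, String> pubSub = client.connectPubSub();

        // Print every message received on subscribed channels.
        pubSub.addListener(new RedisPubSubAdapter<String, String>() {
            @Override
            public void message(String channel, String message) {
                System.out.println(channel + ": " + message);
            }
        });

        pubSub.sync().subscribe("news");

        Thread.sleep(10_000); // keep the JVM alive long enough to receive messages

        pubSub.close();
        client.shutdown();
    }
}
```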
+ * @since 5.0 + */ + public ConnectionFuture> connectPubSubAsync(RedisCodec codec, + RedisURI redisURI) { + + assertNotNull(redisURI); + return transformAsyncConnectionException(connectPubSubAsync(codec, redisURI, redisURI.getTimeout())); + } + + private ConnectionFuture> connectPubSubAsync(RedisCodec codec, + RedisURI redisURI, Duration timeout) { + + assertNotNull(codec); + checkValidRedisURI(redisURI); + + PubSubEndpoint endpoint = new PubSubEndpoint<>(getOptions(), getResources()); + RedisChannelWriter writer = endpoint; + + if (CommandExpiryWriter.isSupported(getOptions())) { + writer = new CommandExpiryWriter(writer, getOptions(), getResources()); + } + + StatefulRedisPubSubConnectionImpl connection = newStatefulRedisPubSubConnection(endpoint, writer, codec, timeout); + + ConnectionFuture> future = connectStatefulAsync(connection, endpoint, redisURI, + () -> new PubSubCommandHandler<>(getOptions(), getResources(), codec, endpoint)); + + return future.whenComplete((conn, throwable) -> { + + if (throwable != null) { + conn.close(); + } + }); + } + + /** + * Open a connection to a Redis Sentinel that treats keys and values as UTF-8 strings. + * + * @return A new stateful Redis Sentinel connection + */ + public StatefulRedisSentinelConnection connectSentinel() { + return connectSentinel(newStringStringCodec()); + } + + /** + * Open a connection to a Redis Sentinel that treats keys and use the supplied {@link RedisCodec codec} to encode/decode + * keys and values. The client {@link RedisURI} must contain one or more sentinels. + * + * @param codec Use this codec to encode/decode keys and values, must not be {@literal null} + * @param Key type + * @param Value type + * @return A new stateful Redis Sentinel connection + */ + public StatefulRedisSentinelConnection connectSentinel(RedisCodec codec) { + checkForRedisURI(); + return getConnection(connectSentinelAsync(codec, redisURI, getDefaultTimeout())); + } + + /** + * Open a connection to a Redis Sentinel using the supplied {@link RedisURI} that treats keys and values as UTF-8 strings. + * The client {@link RedisURI} must contain one or more sentinels. + * + * @param redisURI the Redis server to connect to, must not be {@literal null} + * @return A new connection + */ + public StatefulRedisSentinelConnection connectSentinel(RedisURI redisURI) { + + assertNotNull(redisURI); + + return getConnection(connectSentinelAsync(newStringStringCodec(), redisURI, redisURI.getTimeout())); + } + + /** + * Open a connection to a Redis Sentinel using the supplied {@link RedisURI} and use the supplied {@link RedisCodec codec} + * to encode/decode keys and values. The client {@link RedisURI} must contain one or more sentinels. + * + * @param codec the Redis server to connect to, must not be {@literal null} + * @param redisURI the Redis server to connect to, must not be {@literal null} + * @param Key type + * @param Value type + * @return A new connection + */ + public StatefulRedisSentinelConnection connectSentinel(RedisCodec codec, RedisURI redisURI) { + + assertNotNull(redisURI); + + return getConnection(connectSentinelAsync(codec, redisURI, redisURI.getTimeout())); + } + + /** + * Open asynchronously a connection to a Redis Sentinel using the supplied {@link RedisURI} and use the supplied + * {@link RedisCodec codec} to encode/decode keys and values. The client {@link RedisURI} must contain one or more + * sentinels. 
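A hedged sketch of the Sentinel connection path above (assumes a Sentinel on localhost:26379 monitoring a master named "mymaster"):

```java
import java.net.SocketAddress;

import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisURI;
import io.lettuce.core.sentinel.api.StatefulRedisSentinelConnection;

public class SentinelExample {

    public static void main(String[] args) {

        RedisURI sentinelUri = RedisURI.Builder.sentinel("localhost", 26379, "mymaster").build();
        RedisClient client = RedisClient.create(sentinelUri);

        // Talk to the Sentinel itself, e.g. to resolve the current master address.
        StatefulRedisSentinelConnection<String, String> sentinel = client.connectSentinel();
        SocketAddress master = sentinel.sync().getMasterAddrByName("mymaster");
        System.out.println("Current master: " + master);

        sentinel.close();

        // A regular connect() against the same URI resolves the master through Sentinel
        // and returns an ordinary StatefulRedisConnection:
        // StatefulRedisConnection<String, String> connection = client.connect();

        client.shutdown();
    }
}
```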
+ * + * @param codec the Redis server to connect to, must not be {@literal null} + * @param redisURI the Redis server to connect to, must not be {@literal null} + * @param Key type + * @param Value type + * @return A new connection + * @since 5.1 + */ + public CompletableFuture> connectSentinelAsync(RedisCodec codec, + RedisURI redisURI) { + + assertNotNull(redisURI); + + return transformAsyncConnectionException(connectSentinelAsync(codec, redisURI, redisURI.getTimeout()), redisURI); + } + + private CompletableFuture> connectSentinelAsync(RedisCodec codec, + RedisURI redisURI, Duration timeout) { + + assertNotNull(codec); + checkValidRedisURI(redisURI); + + logger.debug("Trying to get a Redis Sentinel connection for one of: " + redisURI.getSentinels()); + + if (redisURI.getSentinels().isEmpty() && (isNotEmpty(redisURI.getHost()) || !isEmpty(redisURI.getSocket()))) { + return doConnectSentinelAsync(codec, redisURI, timeout, redisURI.getClientName()).toCompletableFuture(); + } + + List sentinels = redisURI.getSentinels(); + Queue exceptionCollector = new LinkedBlockingQueue<>(); + validateUrisAreOfSameConnectionType(sentinels); + + Mono> connectionLoop = null; + + for (RedisURI uri : sentinels) { + + Mono> connectionMono = Mono + .fromCompletionStage(() -> doConnectSentinelAsync(codec, uri, timeout, redisURI.getClientName())) + .onErrorMap(CompletionException.class, Throwable::getCause) + .onErrorMap(e -> new RedisConnectionException("Cannot connect Redis Sentinel at " + uri, e)) + .doOnError(exceptionCollector::add); + + if (connectionLoop == null) { + connectionLoop = connectionMono; + } else { + connectionLoop = connectionLoop.onErrorResume(t -> connectionMono); + } + } + + if (connectionLoop == null) { + return Mono + .> error( + new RedisConnectionException("Cannot connect to a Redis Sentinel: " + redisURI.getSentinels())) + .toFuture(); + } + + return connectionLoop.onErrorMap(e -> { + + RedisConnectionException ex = new RedisConnectionException( + "Cannot connect to a Redis Sentinel: " + redisURI.getSentinels(), e); + + for (Throwable throwable : exceptionCollector) { + if (e != throwable) { + ex.addSuppressed(throwable); + } + } + + return ex; + }).toFuture(); + } + + private ConnectionFuture> doConnectSentinelAsync(RedisCodec codec, + RedisURI redisURI, Duration timeout, String clientName) { + + ConnectionBuilder connectionBuilder; + if (redisURI.isSsl()) { + SslConnectionBuilder sslConnectionBuilder = SslConnectionBuilder.sslConnectionBuilder(); + sslConnectionBuilder.ssl(redisURI); + connectionBuilder = sslConnectionBuilder; + } else { + connectionBuilder = ConnectionBuilder.connectionBuilder(); + } + connectionBuilder.clientOptions(ClientOptions.copyOf(getOptions())); + connectionBuilder.clientResources(getResources()); + + DefaultEndpoint endpoint = new DefaultEndpoint(getOptions(), getResources()); + RedisChannelWriter writer = endpoint; + + if (CommandExpiryWriter.isSupported(getOptions())) { + writer = new CommandExpiryWriter(writer, getOptions(), getResources()); + } + + StatefulRedisSentinelConnectionImpl connection = newStatefulRedisSentinelConnection(writer, codec, timeout); + ConnectionState state = connection.getConnectionState(); + + state.apply(redisURI); + if (LettuceStrings.isEmpty(state.getClientName())) { + state.setClientName(clientName); + } + + connectionBuilder.connectionInitializer(createHandshake(state)); + + logger.debug("Connecting to Redis Sentinel, address: " + redisURI); + + connectionBuilder.endpoint(endpoint).commandHandler(() -> new 
CommandHandler(getOptions(), getResources(), endpoint)) + .connection(connection); + connectionBuilder(getSocketAddressSupplier(redisURI), connectionBuilder, redisURI); + + channelType(connectionBuilder, redisURI); + ConnectionFuture sync = initializeChannelAsync(connectionBuilder); + + return sync.thenApply(ignore -> (StatefulRedisSentinelConnection) connection).whenComplete((ignore, e) -> { + + if (e != null) { + logger.warn("Cannot connect Redis Sentinel at " + redisURI + ": " + e.toString()); + connection.close(); + } + }); + } + + /** + * Set the {@link ClientOptions} for the client. + * + * @param clientOptions the new client options + * @throws IllegalArgumentException if {@literal clientOptions} is null + */ + @Override + public void setOptions(ClientOptions clientOptions) { + super.setOptions(clientOptions); + } + + // ------------------------------------------------------------------------- + // Implementation hooks and helper methods + // ------------------------------------------------------------------------- + + /** + * Create a new instance of {@link StatefulRedisPubSubConnectionImpl} or a subclass. + *
+ * Subclasses of {@link RedisClient} may override that method. + * + * @param endpoint the endpoint + * @param channelWriter the channel writer + * @param codec codec + * @param timeout default timeout + * @param Key-Type + * @param Value Type + * @return new instance of StatefulRedisPubSubConnectionImpl + */ + protected StatefulRedisPubSubConnectionImpl newStatefulRedisPubSubConnection(PubSubEndpoint endpoint, + RedisChannelWriter channelWriter, RedisCodec codec, Duration timeout) { + return new StatefulRedisPubSubConnectionImpl<>(endpoint, channelWriter, codec, timeout); + } + + /** + * Create a new instance of {@link StatefulRedisSentinelConnectionImpl} or a subclass. + *
+ * Subclasses of {@link RedisClient} may override that method. + * + * @param channelWriter the channel writer + * @param codec codec + * @param timeout default timeout + * @param Key-Type + * @param Value Type + * @return new instance of StatefulRedisSentinelConnectionImpl + */ + protected StatefulRedisSentinelConnectionImpl newStatefulRedisSentinelConnection( + RedisChannelWriter channelWriter, RedisCodec codec, Duration timeout) { + return new StatefulRedisSentinelConnectionImpl<>(channelWriter, codec, timeout); + } + + /** + * Create a new instance of {@link StatefulRedisConnectionImpl} or a subclass. + *
+ * Subclasses of {@link RedisClient} may override that method. + * + * @param channelWriter the channel writer + * @param codec codec + * @param timeout default timeout + * @param Key-Type + * @param Value Type + * @return new instance of StatefulRedisConnectionImpl + */ + protected StatefulRedisConnectionImpl newStatefulRedisConnection(RedisChannelWriter channelWriter, + RedisCodec codec, Duration timeout) { + return new StatefulRedisConnectionImpl<>(channelWriter, codec, timeout); + } + + /** + * Get a {@link Mono} that resolves {@link RedisURI} to a {@link SocketAddress}. Resolution is performed either using Redis + * Sentinel (if the {@link RedisURI} is configured with Sentinels) or via DNS resolution. + *
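The implementation hooks in this block are the intended extension points: a subclass can return customized connection implementations or, as sketched below, decorate address resolution while keeping the default Sentinel/DNS logic. The sketch assumes the protected RedisClient(ClientResources, RedisURI) constructor declared earlier in this class; class and variable names are illustrative:

    import java.net.SocketAddress;

    import reactor.core.publisher.Mono;

    import io.lettuce.core.RedisClient;
    import io.lettuce.core.RedisURI;
    import io.lettuce.core.resource.ClientResources;

    class ResolutionLoggingRedisClient extends RedisClient {

        ResolutionLoggingRedisClient(ClientResources resources, RedisURI uri) {
            super(resources, uri);
        }

        @Override
        protected Mono<SocketAddress> getSocketAddress(RedisURI redisURI) {
            // Delegate to the default Sentinel/DNS resolution and log the outcome.
            return super.getSocketAddress(redisURI)
                    .doOnNext(address -> System.out.println("Resolved " + redisURI + " to " + address));
        }
    }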
+ * Subclasses of {@link RedisClient} may override that method. + * + * @param redisURI must not be {@literal null}. + * @return the resolved {@link SocketAddress}. + * @see ClientResources#dnsResolver() + * @see RedisURI#getSentinels() + * @see RedisURI#getSentinelMasterId() + */ + protected Mono getSocketAddress(RedisURI redisURI) { + + return Mono.defer(() -> { + + if (redisURI.getSentinelMasterId() != null && !redisURI.getSentinels().isEmpty()) { + logger.debug("Connecting to Redis using Sentinels {}, MasterId {}", redisURI.getSentinels(), + redisURI.getSentinelMasterId()); + return lookupRedis(redisURI).switchIfEmpty(Mono.error(new RedisConnectionException( + "Cannot provide redisAddress using sentinel for masterId " + redisURI.getSentinelMasterId()))); + + } else { + return Mono.fromCallable(() -> getResources().socketAddressResolver().resolve((redisURI))); + } + }); + } + + /** + * Returns a {@link String} {@link RedisCodec codec}. + * + * @return a {@link String} {@link RedisCodec codec}. + * @see StringCodec#UTF8 + */ + protected RedisCodec newStringStringCodec() { + return StringCodec.UTF8; + } + + private static void validateUrisAreOfSameConnectionType(List redisUris) { + + boolean unixDomainSocket = false; + boolean inetSocket = false; + for (RedisURI sentinel : redisUris) { + if (sentinel.getSocket() != null) { + unixDomainSocket = true; + } + if (sentinel.getHost() != null) { + inetSocket = true; + } + } + + if (unixDomainSocket && inetSocket) { + throw new RedisConnectionException("You cannot mix unix domain socket and IP socket URI's"); + } + } + + private Mono getSocketAddressSupplier(RedisURI redisURI) { + return getSocketAddress(redisURI).doOnNext(addr -> logger.debug("Resolved SocketAddress {} using {}", addr, redisURI)); + } + + private Mono lookupRedis(RedisURI sentinelUri) { + + Duration timeout = getDefaultTimeout(); + + Mono> connection = Mono + .fromCompletionStage(() -> connectSentinelAsync(newStringStringCodec(), sentinelUri, timeout)); + + return connection.flatMap(c -> { + + String sentinelMasterId = sentinelUri.getSentinelMasterId(); + return c.reactive().getMasterAddrByName(sentinelMasterId).map(it -> { + + if (it instanceof InetSocketAddress) { + + InetSocketAddress isa = (InetSocketAddress) it; + SocketAddress resolved = getResources().socketAddressResolver() + .resolve(RedisURI.create(isa.getHostString(), isa.getPort())); + + logger.debug("Resolved Master {} SocketAddress {}:{} to {}", sentinelMasterId, isa.getHostString(), + isa.getPort(), resolved); + + return resolved; + } + + return it; + }).timeout(timeout) // + .onErrorResume(e -> { + + RedisCommandTimeoutException ex = ExceptionFactory + .createTimeoutException("Cannot obtain master using SENTINEL MASTER", timeout); + ex.addSuppressed(e); + + return Mono.fromCompletionStage(c::closeAsync).then(Mono.error(ex)); + }).flatMap(it -> Mono.fromCompletionStage(c::closeAsync) // + .thenReturn(it)); + }); + } + + private static ConnectionFuture transformAsyncConnectionException(ConnectionFuture future) { + + return future.thenCompose((v, e) -> { + + if (e != null) { + return Futures.failed(RedisConnectionException.create(future.getRemoteAddress(), e)); + } + + return CompletableFuture.completedFuture(v); + }); + } + + private static CompletableFuture transformAsyncConnectionException(CompletionStage future, RedisURI target) { + + return ConnectionFuture.from(null, future.toCompletableFuture()).thenCompose((v, e) -> { + + if (e != null) { + return 
Futures.failed(RedisConnectionException.create(target.toString(), e)); + } + + return CompletableFuture.completedFuture(v); + }).toCompletableFuture(); + } + + private static void checkValidRedisURI(RedisURI redisURI) { + + LettuceAssert.notNull(redisURI, "A valid RedisURI is required"); + + if (redisURI.getSentinels().isEmpty()) { + if (isEmpty(redisURI.getHost()) && isEmpty(redisURI.getSocket())) { + throw new IllegalArgumentException("RedisURI for Redis Standalone does not contain a host or a socket"); + } + } else { + + if (isEmpty(redisURI.getSentinelMasterId())) { + throw new IllegalArgumentException("RedisURI for Redis Sentinel requires a masterId"); + } + + for (RedisURI sentinel : redisURI.getSentinels()) { + if (isEmpty(sentinel.getHost()) && isEmpty(sentinel.getSocket())) { + throw new IllegalArgumentException("RedisURI for Redis Sentinel does not contain a host or a socket"); + } + } + } + } + + private static void assertNotNull(RedisCodec codec) { + LettuceAssert.notNull(codec, "RedisCodec must not be null"); + } + + private static void assertNotNull(RedisURI redisURI) { + LettuceAssert.notNull(redisURI, "RedisURI must not be null"); + } + + private static void assertNotNull(ClientResources clientResources) { + LettuceAssert.notNull(clientResources, "ClientResources must not be null"); + } + + private void checkForRedisURI() { + LettuceAssert.assertState(this.redisURI != EMPTY_URI, + "RedisURI is not available. Use RedisClient(Host), RedisClient(Host, Port) or RedisClient(RedisURI) to construct your client."); + checkValidRedisURI(this.redisURI); + } +} diff --git a/src/main/java/io/lettuce/core/RedisCommandBuilder.java b/src/main/java/io/lettuce/core/RedisCommandBuilder.java new file mode 100644 index 0000000000..85f224cdb4 --- /dev/null +++ b/src/main/java/io/lettuce/core/RedisCommandBuilder.java @@ -0,0 +1,3302 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
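All of the command methods in this new builder class follow one recipe: choose a CommandType, pair it with a CommandOutput that parses the reply, and collect the arguments in CommandArgs. A minimal, hand-assembled GET illustrates the shape; the class and key names are illustrative:

    import io.lettuce.core.codec.RedisCodec;
    import io.lettuce.core.codec.StringCodec;
    import io.lettuce.core.output.ValueOutput;
    import io.lettuce.core.protocol.Command;
    import io.lettuce.core.protocol.CommandArgs;
    import io.lettuce.core.protocol.CommandType;

    class CommandAssemblySketch {

        // Pair a CommandType with an output that parses the reply (here a single bulk value)
        // and the encoded arguments. RedisCommandBuilder's methods do exactly this per command.
        static Command<String, String, String> get(String key) {
            RedisCodec<String, String> codec = StringCodec.UTF8;
            CommandArgs<String, String> args = new CommandArgs<>(codec).addKey(key);
            return new Command<>(CommandType.GET, new ValueOutput<>(codec), args);
        }
    }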
+ */ +package io.lettuce.core; + +import static io.lettuce.core.LettuceStrings.string; +import static io.lettuce.core.protocol.CommandKeyword.*; +import static io.lettuce.core.protocol.CommandType.*; + +import java.nio.ByteBuffer; +import java.util.*; + +import io.lettuce.core.Range.Boundary; +import io.lettuce.core.XReadArgs.StreamOffset; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.output.*; +import io.lettuce.core.protocol.*; + +/** + * @param + * @param + * @author Mark Paluch + * @author Zhang Jessey + * @author Tugdual Grall + */ +@SuppressWarnings({ "unchecked", "varargs" }) +class RedisCommandBuilder extends BaseRedisCommandBuilder { + + private static final String MUST_NOT_CONTAIN_NULL_ELEMENTS = "must not contain null elements"; + private static final String MUST_NOT_BE_EMPTY = "must not be empty"; + private static final String MUST_NOT_BE_NULL = "must not be null"; + + private static final byte[] MINUS_BYTES = { '-' }; + private static final byte[] PLUS_BYTES = { '+' }; + + RedisCommandBuilder(RedisCodec codec) { + super(codec); + } + + Command append(K key, V value) { + notNullKey(key); + + return createCommand(APPEND, new IntegerOutput<>(codec), key, value); + } + + Command asking() { + + CommandArgs args = new CommandArgs<>(codec); + return createCommand(ASKING, new StatusOutput<>(codec), args); + } + + Command auth(CharSequence password) { + LettuceAssert.notNull(password, "Password " + MUST_NOT_BE_NULL); + LettuceAssert.notEmpty(password, "Password " + MUST_NOT_BE_EMPTY); + + char[] chars = new char[password.length()]; + for (int i = 0; i < password.length(); i++) { + chars[i] = password.charAt(i); + } + return auth(chars); + } + + Command auth(char[] password) { + LettuceAssert.notNull(password, "Password " + MUST_NOT_BE_NULL); + LettuceAssert.isTrue(password.length > 0, "Password " + MUST_NOT_BE_EMPTY); + + CommandArgs args = new CommandArgs<>(codec).add(password); + return createCommand(AUTH, new StatusOutput<>(codec), args); + } + + Command auth(String username, CharSequence password) { + LettuceAssert.notNull(username, "Username " + MUST_NOT_BE_NULL); + LettuceAssert.isTrue(!username.isEmpty(), "Username " + MUST_NOT_BE_EMPTY); + LettuceAssert.notNull(password, "Password " + MUST_NOT_BE_NULL); + LettuceAssert.notEmpty(password, "Password " + MUST_NOT_BE_EMPTY); + + char[] chars = new char[password.length()]; + for (int i = 0; i < password.length(); i++) { + chars[i] = password.charAt(i); + } + return auth(username,chars); + } + + Command auth(String username, char[] password) { + LettuceAssert.notNull(username, "Username " + MUST_NOT_BE_NULL); + LettuceAssert.isTrue(!username.isEmpty(), "Username " + MUST_NOT_BE_EMPTY); + LettuceAssert.notNull(password, "Password " + MUST_NOT_BE_NULL); + LettuceAssert.isTrue(password.length > 0, "Password " + MUST_NOT_BE_EMPTY); + + CommandArgs args = new CommandArgs<>(codec).add(username).add(password); + return createCommand(AUTH, new StatusOutput<>(codec), args); + } + + Command bgrewriteaof() { + return createCommand(BGREWRITEAOF, new StatusOutput<>(codec)); + } + + Command bgsave() { + return createCommand(BGSAVE, new StatusOutput<>(codec)); + } + + Command bitcount(K key) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec).addKey(key); + return createCommand(BITCOUNT, new IntegerOutput<>(codec), args); + } + + Command bitcount(K key, long start, long end) { + notNullKey(key); + + CommandArgs args = new 
CommandArgs<>(codec); + args.addKey(key).add(start).add(end); + return createCommand(BITCOUNT, new IntegerOutput<>(codec), args); + } + + Command> bitfield(K key, BitFieldArgs bitFieldArgs) { + notNullKey(key); + LettuceAssert.notNull(bitFieldArgs, "BitFieldArgs must not be null"); + + CommandArgs args = new CommandArgs<>(codec); + args.addKey(key); + + bitFieldArgs.build(args); + + return createCommand(BITFIELD, (CommandOutput) new ArrayOutput<>(codec), args); + } + + Command>> bitfieldValue(K key, BitFieldArgs bitFieldArgs) { + notNullKey(key); + LettuceAssert.notNull(bitFieldArgs, "BitFieldArgs must not be null"); + + CommandArgs args = new CommandArgs<>(codec); + args.addKey(key); + + bitFieldArgs.build(args); + + return createCommand(BITFIELD, (CommandOutput) new ValueValueListOutput<>(codec), args); + } + + Command bitopAnd(K destination, K... keys) { + LettuceAssert.notNull(destination, "Destination " + MUST_NOT_BE_NULL); + notEmpty(keys); + + CommandArgs args = new CommandArgs<>(codec); + args.add(AND).addKey(destination).addKeys(keys); + return createCommand(BITOP, new IntegerOutput<>(codec), args); + } + + Command bitopNot(K destination, K source) { + LettuceAssert.notNull(destination, "Destination " + MUST_NOT_BE_NULL); + LettuceAssert.notNull(source, "Source " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec); + args.add(NOT).addKey(destination).addKey(source); + return createCommand(BITOP, new IntegerOutput<>(codec), args); + } + + Command bitopOr(K destination, K... keys) { + LettuceAssert.notNull(destination, "Destination " + MUST_NOT_BE_NULL); + notEmpty(keys); + + CommandArgs args = new CommandArgs<>(codec); + args.add(OR).addKey(destination).addKeys(keys); + return createCommand(BITOP, new IntegerOutput<>(codec), args); + } + + Command bitopXor(K destination, K... keys) { + LettuceAssert.notNull(destination, "Destination " + MUST_NOT_BE_NULL); + notEmpty(keys); + + CommandArgs args = new CommandArgs<>(codec); + args.add(XOR).addKey(destination).addKeys(keys); + return createCommand(BITOP, new IntegerOutput<>(codec), args); + } + + Command bitpos(K key, boolean state) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec); + args.addKey(key).add(state ? 1 : 0); + return createCommand(BITPOS, new IntegerOutput<>(codec), args); + } + + Command bitpos(K key, boolean state, long start) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec); + args.addKey(key).add(state ? 1 : 0).add(start); + return createCommand(BITPOS, new IntegerOutput<>(codec), args); + } + + Command bitpos(K key, boolean state, long start, long end) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec); + args.addKey(key).add(state ? 1 : 0).add(start).add(end); + return createCommand(BITPOS, new IntegerOutput<>(codec), args); + } + + Command> blpop(long timeout, K... keys) { + notEmpty(keys); + + CommandArgs args = new CommandArgs<>(codec).addKeys(keys).add(timeout); + return createCommand(BLPOP, new KeyValueOutput<>(codec), args); + } + + Command> brpop(long timeout, K... 
keys) { + notEmpty(keys); + + CommandArgs args = new CommandArgs<>(codec).addKeys(keys).add(timeout); + return createCommand(BRPOP, new KeyValueOutput<>(codec), args); + } + + Command brpoplpush(long timeout, K source, K destination) { + LettuceAssert.notNull(source, "Source " + MUST_NOT_BE_NULL); + LettuceAssert.notNull(destination, "Destination " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec); + args.addKey(source).addKey(destination).add(timeout); + return createCommand(BRPOPLPUSH, new ValueOutput<>(codec), args); + } + + Command clientGetname() { + CommandArgs args = new CommandArgs<>(codec).add(GETNAME); + return createCommand(CLIENT, new KeyOutput<>(codec), args); + } + + Command clientKill(String addr) { + LettuceAssert.notNull(addr, "Addr " + MUST_NOT_BE_NULL); + LettuceAssert.notEmpty(addr, "Addr " + MUST_NOT_BE_EMPTY); + + CommandArgs args = new CommandArgs<>(codec).add(KILL).add(addr); + return createCommand(CLIENT, new StatusOutput<>(codec), args); + } + + Command clientKill(KillArgs killArgs) { + LettuceAssert.notNull(killArgs, "KillArgs " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec).add(KILL); + killArgs.build(args); + return createCommand(CLIENT, new IntegerOutput<>(codec), args); + } + + Command clientList() { + CommandArgs args = new CommandArgs<>(codec).add(LIST); + return createCommand(CLIENT, new StatusOutput<>(codec), args); + } + + Command clientId() { + CommandArgs args = new CommandArgs<>(codec).add(ID); + return createCommand(CLIENT, new IntegerOutput<>(codec), args); + } + + Command clientPause(long timeout) { + CommandArgs args = new CommandArgs<>(codec).add(PAUSE).add(timeout); + return createCommand(CLIENT, new StatusOutput<>(codec), args); + } + + Command clientSetname(K name) { + LettuceAssert.notNull(name, "Name " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec).add(SETNAME).addKey(name); + return createCommand(CLIENT, new StatusOutput<>(codec), args); + } + + Command clientUnblock(long id, UnblockType type) { + LettuceAssert.notNull(type, "UnblockType " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec).add(UNBLOCK).add(id).add(type); + return createCommand(CLIENT, new IntegerOutput<>(codec), args); + } + + Command clusterAddslots(int[] slots) { + notEmptySlots(slots); + + CommandArgs args = new CommandArgs<>(codec).add(ADDSLOTS); + + for (int slot : slots) { + args.add(slot); + } + return createCommand(CLUSTER, new StatusOutput<>(codec), args); + } + + Command clusterBumpepoch() { + CommandArgs args = new CommandArgs<>(codec).add(BUMPEPOCH); + return createCommand(CLUSTER, new StatusOutput<>(codec), args); + } + + Command clusterCountFailureReports(String nodeId) { + assertNodeId(nodeId); + + CommandArgs args = new CommandArgs<>(codec).add("COUNT-FAILURE-REPORTS").add(nodeId); + return createCommand(CLUSTER, new IntegerOutput<>(codec), args); + } + + Command clusterCountKeysInSlot(int slot) { + CommandArgs args = new CommandArgs<>(codec).add(COUNTKEYSINSLOT).add(slot); + return createCommand(CLUSTER, new IntegerOutput<>(codec), args); + } + + Command clusterDelslots(int[] slots) { + notEmptySlots(slots); + + CommandArgs args = new CommandArgs<>(codec).add(DELSLOTS); + + for (int slot : slots) { + args.add(slot); + } + return createCommand(CLUSTER, new StatusOutput<>(codec), args); + } + + Command clusterFailover(boolean force) { + + CommandArgs args = new CommandArgs<>(codec).add(FAILOVER); + if (force) { + args.add(FORCE); + } + return createCommand(CLUSTER, new 
StatusOutput<>(codec), args); + } + + Command clusterFlushslots() { + + CommandArgs args = new CommandArgs<>(codec).add(FLUSHSLOTS); + return createCommand(CLUSTER, new StatusOutput<>(codec), args); + } + + Command clusterForget(String nodeId) { + assertNodeId(nodeId); + + CommandArgs args = new CommandArgs<>(codec).add(FORGET).add(nodeId); + return createCommand(CLUSTER, new StatusOutput<>(codec), args); + } + + Command> clusterGetKeysInSlot(int slot, int count) { + CommandArgs args = new CommandArgs<>(codec).add(GETKEYSINSLOT).add(slot).add(count); + return createCommand(CLUSTER, new KeyListOutput<>(codec), args); + } + + Command clusterInfo() { + CommandArgs args = new CommandArgs<>(codec).add(INFO); + + return createCommand(CLUSTER, new StatusOutput<>(codec), args); + } + + Command clusterKeyslot(K key) { + CommandArgs args = new CommandArgs<>(codec).add(KEYSLOT).addKey(key); + return createCommand(CLUSTER, new IntegerOutput<>(codec), args); + } + + Command clusterMeet(String ip, int port) { + LettuceAssert.notNull(ip, "IP " + MUST_NOT_BE_NULL); + LettuceAssert.notEmpty(ip, "IP " + MUST_NOT_BE_EMPTY); + + CommandArgs args = new CommandArgs<>(codec).add(MEET).add(ip).add(port); + return createCommand(CLUSTER, new StatusOutput<>(codec), args); + } + + Command clusterMyId() { + CommandArgs args = new CommandArgs<>(codec).add(MYID); + + return createCommand(CLUSTER, new StatusOutput<>(codec), args); + } + + Command clusterNodes() { + CommandArgs args = new CommandArgs<>(codec).add(NODES); + + return createCommand(CLUSTER, new StatusOutput<>(codec), args); + } + + Command clusterReplicate(String nodeId) { + assertNodeId(nodeId); + + CommandArgs args = new CommandArgs<>(codec).add(REPLICATE).add(nodeId); + return createCommand(CLUSTER, new StatusOutput<>(codec), args); + } + + Command clusterReset(boolean hard) { + + CommandArgs args = new CommandArgs<>(codec).add(RESET); + if (hard) { + args.add(HARD); + } else { + args.add(SOFT); + } + return createCommand(CLUSTER, new StatusOutput<>(codec), args); + } + + Command clusterSaveconfig() { + CommandArgs args = new CommandArgs<>(codec).add(SAVECONFIG); + return createCommand(CLUSTER, new StatusOutput<>(codec), args); + } + + Command clusterSetConfigEpoch(long configEpoch) { + CommandArgs args = new CommandArgs<>(codec).add("SET-CONFIG-EPOCH").add(configEpoch); + return createCommand(CLUSTER, new StatusOutput<>(codec), args); + } + + Command clusterSetSlotImporting(int slot, String nodeId) { + assertNodeId(nodeId); + + CommandArgs args = new CommandArgs<>(codec).add(SETSLOT).add(slot).add(IMPORTING).add(nodeId); + return createCommand(CLUSTER, new StatusOutput<>(codec), args); + } + + Command clusterSetSlotMigrating(int slot, String nodeId) { + assertNodeId(nodeId); + + CommandArgs args = new CommandArgs<>(codec).add(SETSLOT).add(slot).add(MIGRATING).add(nodeId); + return createCommand(CLUSTER, new StatusOutput<>(codec), args); + } + + Command clusterSetSlotNode(int slot, String nodeId) { + assertNodeId(nodeId); + + CommandArgs args = new CommandArgs<>(codec).add(SETSLOT).add(slot).add(NODE).add(nodeId); + return createCommand(CLUSTER, new StatusOutput<>(codec), args); + } + + Command clusterSetSlotStable(int slot) { + + CommandArgs args = new CommandArgs<>(codec).add(SETSLOT).add(slot).add(STABLE); + return createCommand(CLUSTER, new StatusOutput<>(codec), args); + } + + Command> clusterSlaves(String nodeId) { + assertNodeId(nodeId); + + CommandArgs args = new CommandArgs<>(codec).add(SLAVES).add(nodeId); + return createCommand(CLUSTER, new 
StringListOutput<>(codec), args); + } + + Command> clusterSlots() { + CommandArgs args = new CommandArgs<>(codec).add(SLOTS); + return createCommand(CLUSTER, new ArrayOutput<>(codec), args); + } + + Command> command() { + CommandArgs args = new CommandArgs((RedisCodec) StringCodec.UTF8); + return createCommand(COMMAND, new ArrayOutput((RedisCodec) StringCodec.UTF8), args); + } + + Command commandCount() { + CommandArgs args = new CommandArgs<>(codec).add(COUNT); + return createCommand(COMMAND, new IntegerOutput<>(codec), args); + } + + Command> commandInfo(String... commands) { + LettuceAssert.notNull(commands, "Commands " + MUST_NOT_BE_NULL); + LettuceAssert.notEmpty(commands, "Commands " + MUST_NOT_BE_EMPTY); + LettuceAssert.noNullElements(commands, "Commands " + MUST_NOT_CONTAIN_NULL_ELEMENTS); + + CommandArgs args = new CommandArgs((RedisCodec) StringCodec.UTF8); + args.add(INFO); + + for (String command : commands) { + args.add(command); + } + + return createCommand(COMMAND, new ArrayOutput((RedisCodec) StringCodec.UTF8), args); + } + + Command> configGet(String parameter) { + LettuceAssert.notNull(parameter, "Parameter " + MUST_NOT_BE_NULL); + LettuceAssert.notEmpty(parameter, "Parameter " + MUST_NOT_BE_EMPTY); + + CommandArgs args = new CommandArgs<>((RedisCodec) StringCodec.UTF8).add(GET).add(parameter); + return createCommand(CONFIG, new MapOutput<>((RedisCodec) StringCodec.UTF8), args); + } + + Command configResetstat() { + CommandArgs args = new CommandArgs<>(codec).add(RESETSTAT); + return createCommand(CONFIG, new StatusOutput<>(codec), args); + } + + Command configRewrite() { + CommandArgs args = new CommandArgs<>(codec).add(REWRITE); + return createCommand(CONFIG, new StatusOutput<>(codec), args); + } + + Command configSet(String parameter, String value) { + LettuceAssert.notNull(parameter, "Parameter " + MUST_NOT_BE_NULL); + LettuceAssert.notEmpty(parameter, "Parameter " + MUST_NOT_BE_EMPTY); + LettuceAssert.notNull(value, "Value " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec).add(SET).add(parameter).add(value); + return createCommand(CONFIG, new StatusOutput<>(codec), args); + } + + Command dbsize() { + return createCommand(DBSIZE, new IntegerOutput<>(codec)); + } + + Command debugCrashAndRecover(Long delay) { + CommandArgs args = new CommandArgs<>(codec).add("CRASH-AND-RECOVER"); + if (delay != null) { + args.add(delay); + } + return createCommand(DEBUG, new StatusOutput<>(codec), args); + } + + Command debugHtstats(int db) { + CommandArgs args = new CommandArgs<>(codec).add(HTSTATS).add(db); + return createCommand(DEBUG, new StatusOutput<>(codec), args); + } + + Command debugObject(K key) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec).add(OBJECT).addKey(key); + return createCommand(DEBUG, new StatusOutput<>(codec), args); + } + + Command debugOom() { + return createCommand(DEBUG, null, new CommandArgs<>(codec).add("OOM")); + } + + Command debugReload() { + return createCommand(DEBUG, new StatusOutput<>(codec), new CommandArgs<>(codec).add(RELOAD)); + } + + Command debugRestart(Long delay) { + CommandArgs args = new CommandArgs<>(codec).add(RESTART); + if (delay != null) { + args.add(delay); + } + return createCommand(DEBUG, new StatusOutput<>(codec), args); + } + + Command debugSdslen(K key) { + notNullKey(key); + + return createCommand(DEBUG, new StatusOutput<>(codec), new CommandArgs<>(codec).add("SDSLEN").addKey(key)); + } + + Command debugSegfault() { + CommandArgs args = new CommandArgs<>(codec).add(SEGFAULT); + return 
createCommand(DEBUG, null, args); + } + + Command decr(K key) { + notNullKey(key); + + return createCommand(DECR, new IntegerOutput<>(codec), key); + } + + Command decrby(K key, long amount) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).add(amount); + return createCommand(DECRBY, new IntegerOutput<>(codec), args); + } + + Command del(K... keys) { + notEmpty(keys); + + CommandArgs args = new CommandArgs<>(codec).addKeys(keys); + return createCommand(DEL, new IntegerOutput<>(codec), args); + } + + Command del(Iterable keys) { + LettuceAssert.notNull(keys, "Keys " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec).addKeys(keys); + return createCommand(DEL, new IntegerOutput<>(codec), args); + } + + Command discard() { + return createCommand(DISCARD, new StatusOutput<>(codec)); + } + + Command dump(K key) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec).addKey(key); + return createCommand(DUMP, new ByteArrayOutput<>(codec), args); + } + + Command echo(V msg) { + LettuceAssert.notNull(msg, "message " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec).addValue(msg); + return createCommand(ECHO, new ValueOutput<>(codec), args); + } + + Command eval(byte[] script, ScriptOutputType type, K... keys) { + LettuceAssert.notNull(script, "Script " + MUST_NOT_BE_NULL); + LettuceAssert.notNull(type, "ScriptOutputType " + MUST_NOT_BE_NULL); + LettuceAssert.notNull(keys, "Keys " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec); + args.add(script).add(keys.length).addKeys(keys); + CommandOutput output = newScriptOutput(codec, type); + return createCommand(EVAL, output, args); + } + + Command eval(byte[] script, ScriptOutputType type, K[] keys, V... values) { + LettuceAssert.notNull(script, "Script " + MUST_NOT_BE_NULL); + LettuceAssert.notNull(type, "ScriptOutputType " + MUST_NOT_BE_NULL); + LettuceAssert.notNull(keys, "Keys " + MUST_NOT_BE_NULL); + LettuceAssert.notNull(values, "Values " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec); + args.add(script).add(keys.length).addKeys(keys).addValues(values); + CommandOutput output = newScriptOutput(codec, type); + return createCommand(EVAL, output, args); + } + + Command evalsha(String digest, ScriptOutputType type, K... keys) { + LettuceAssert.notNull(digest, "Digest " + MUST_NOT_BE_NULL); + LettuceAssert.notEmpty(digest, "Digest " + MUST_NOT_BE_EMPTY); + LettuceAssert.notNull(type, "ScriptOutputType " + MUST_NOT_BE_NULL); + LettuceAssert.notNull(keys, "Keys " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec); + args.add(digest).add(keys.length).addKeys(keys); + CommandOutput output = newScriptOutput(codec, type); + return createCommand(EVALSHA, output, args); + } + + Command evalsha(String digest, ScriptOutputType type, K[] keys, V... 
values) { + LettuceAssert.notNull(digest, "Digest " + MUST_NOT_BE_NULL); + LettuceAssert.notEmpty(digest, "Digest " + MUST_NOT_BE_EMPTY); + LettuceAssert.notNull(type, "ScriptOutputType " + MUST_NOT_BE_NULL); + LettuceAssert.notNull(keys, "Keys " + MUST_NOT_BE_NULL); + LettuceAssert.notNull(values, "Values " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec); + args.add(digest).add(keys.length).addKeys(keys).addValues(values); + CommandOutput output = newScriptOutput(codec, type); + return createCommand(EVALSHA, output, args); + } + + Command exists(K key) { + notNullKey(key); + + return createCommand(EXISTS, new BooleanOutput<>(codec), key); + } + + Command exists(K... keys) { + notEmpty(keys); + + return createCommand(EXISTS, new IntegerOutput<>(codec), new CommandArgs<>(codec).addKeys(keys)); + } + + Command exists(Iterable keys) { + LettuceAssert.notNull(keys, "Keys " + MUST_NOT_BE_NULL); + + return createCommand(EXISTS, new IntegerOutput<>(codec), new CommandArgs<>(codec).addKeys(keys)); + } + + Command expire(K key, long seconds) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).add(seconds); + return createCommand(EXPIRE, new BooleanOutput<>(codec), args); + } + + Command expireat(K key, long timestamp) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).add(timestamp); + return createCommand(EXPIREAT, new BooleanOutput<>(codec), args); + } + + Command flushall() { + return createCommand(FLUSHALL, new StatusOutput<>(codec)); + } + + Command flushallAsync() { + return createCommand(FLUSHALL, new StatusOutput<>(codec), new CommandArgs<>(codec).add(ASYNC)); + } + + Command flushdb() { + return createCommand(FLUSHDB, new StatusOutput<>(codec)); + } + + Command flushdbAsync() { + return createCommand(FLUSHDB, new StatusOutput<>(codec), new CommandArgs<>(codec).add(ASYNC)); + } + + Command geoadd(K key, double longitude, double latitude, V member) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).add(longitude).add(latitude).addValue(member); + return createCommand(GEOADD, new IntegerOutput<>(codec), args); + } + + Command geoadd(K key, Object[] lngLatMember) { + + notNullKey(key); + LettuceAssert.notNull(lngLatMember, "LngLatMember " + MUST_NOT_BE_NULL); + LettuceAssert.notEmpty(lngLatMember, "LngLatMember " + MUST_NOT_BE_EMPTY); + LettuceAssert.noNullElements(lngLatMember, "LngLatMember " + MUST_NOT_CONTAIN_NULL_ELEMENTS); + LettuceAssert.isTrue(lngLatMember.length % 3 == 0, "LngLatMember.length must be a multiple of 3 and contain a " + + "sequence of longitude1, latitude1, member1, longitude2, latitude2, member2, ... longitudeN, latitudeN, memberN"); + + CommandArgs args = new CommandArgs<>(codec).addKey(key); + + for (int i = 0; i < lngLatMember.length; i += 3) { + args.add((Double) lngLatMember[i]); + args.add((Double) lngLatMember[i + 1]); + args.addValue((V) lngLatMember[i + 2]); + } + + return createCommand(GEOADD, new IntegerOutput<>(codec), args); + } + + Command geodist(K key, V from, V to, GeoArgs.Unit unit) { + notNullKey(key); + LettuceAssert.notNull(from, "From " + MUST_NOT_BE_NULL); + LettuceAssert.notNull(from, "To " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).addValue(from).addValue(to); + + if (unit != null) { + args.add(unit.name()); + } + + return createCommand(GEODIST, new DoubleOutput<>(codec), args); + } + + Command>> geohash(K key, V... 
members) { + notNullKey(key); + LettuceAssert.notNull(members, "Members " + MUST_NOT_BE_NULL); + LettuceAssert.notEmpty(members, "Members " + MUST_NOT_BE_EMPTY); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).addValues(members); + return createCommand(GEOHASH, new StringValueListOutput<>(codec), args); + } + + Command> geopos(K key, V[] members) { + notNullKey(key); + LettuceAssert.notNull(members, "Members " + MUST_NOT_BE_NULL); + LettuceAssert.notEmpty(members, "Members " + MUST_NOT_BE_EMPTY); + CommandArgs args = new CommandArgs<>(codec).addKey(key).addValues(members); + + return createCommand(GEOPOS, new GeoCoordinatesListOutput<>(codec), args); + } + + Command>> geoposValues(K key, V[] members) { + notNullKey(key); + LettuceAssert.notNull(members, "Members " + MUST_NOT_BE_NULL); + LettuceAssert.notEmpty(members, "Members " + MUST_NOT_BE_EMPTY); + CommandArgs args = new CommandArgs<>(codec).addKey(key).addValues(members); + + return createCommand(GEOPOS, new GeoCoordinatesValueListOutput<>(codec), args); + } + + Command> georadius(CommandType commandType, K key, double longitude, double latitude, double distance, + String unit) { + notNullKey(key); + LettuceAssert.notNull(unit, "Unit " + MUST_NOT_BE_NULL); + LettuceAssert.notEmpty(unit, "Unit " + MUST_NOT_BE_EMPTY); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).add(longitude).add(latitude).add(distance).add(unit); + return createCommand(commandType, new ValueSetOutput<>(codec), args); + } + + Command>> georadius(CommandType commandType, K key, double longitude, double latitude, + double distance, String unit, GeoArgs geoArgs) { + + notNullKey(key); + LettuceAssert.notNull(unit, "Unit " + MUST_NOT_BE_NULL); + LettuceAssert.notEmpty(unit, "Unit " + MUST_NOT_BE_EMPTY); + LettuceAssert.notNull(geoArgs, "GeoArgs " + MUST_NOT_BE_NULL); + CommandArgs args = new CommandArgs<>(codec).addKey(key).add(longitude).add(latitude).add(distance).add(unit); + geoArgs.build(args); + + return createCommand(commandType, + new GeoWithinListOutput<>(codec, geoArgs.isWithDistance(), geoArgs.isWithHash(), geoArgs.isWithCoordinates()), + args); + } + + Command georadius(K key, double longitude, double latitude, double distance, String unit, + GeoRadiusStoreArgs geoRadiusStoreArgs) { + + notNullKey(key); + LettuceAssert.notNull(unit, "Unit " + MUST_NOT_BE_NULL); + LettuceAssert.notEmpty(unit, "Unit " + MUST_NOT_BE_EMPTY); + LettuceAssert.notNull(geoRadiusStoreArgs, "GeoRadiusStoreArgs " + MUST_NOT_BE_NULL); + LettuceAssert.isTrue(geoRadiusStoreArgs.getStoreKey() != null || geoRadiusStoreArgs.getStoreDistKey() != null, + "At least STORE key or STORDIST key is required"); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).add(longitude).add(latitude).add(distance).add(unit); + geoRadiusStoreArgs.build(args); + + return createCommand(GEORADIUS, new IntegerOutput<>(codec), args); + } + + Command> georadiusbymember(CommandType commandType, K key, V member, double distance, String unit) { + + notNullKey(key); + LettuceAssert.notNull(unit, "Unit " + MUST_NOT_BE_NULL); + LettuceAssert.notEmpty(unit, "Unit " + MUST_NOT_BE_EMPTY); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).addValue(member).add(distance).add(unit); + return createCommand(commandType, new ValueSetOutput<>(codec), args); + } + + Command>> georadiusbymember(CommandType commandType, K key, V member, double distance, String unit, + GeoArgs geoArgs) { + + notNullKey(key); + LettuceAssert.notNull(geoArgs, "GeoArgs " + MUST_NOT_BE_NULL); + 
LettuceAssert.notNull(unit, "Unit " + MUST_NOT_BE_NULL); + LettuceAssert.notEmpty(unit, "Unit " + MUST_NOT_BE_EMPTY); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).addValue(member).add(distance).add(unit); + geoArgs.build(args); + + return createCommand(commandType, + new GeoWithinListOutput<>(codec, geoArgs.isWithDistance(), geoArgs.isWithHash(), geoArgs.isWithCoordinates()), + args); + } + + Command georadiusbymember(K key, V member, double distance, String unit, + GeoRadiusStoreArgs geoRadiusStoreArgs) { + + notNullKey(key); + LettuceAssert.notNull(geoRadiusStoreArgs, "GeoRadiusStoreArgs " + MUST_NOT_BE_NULL); + LettuceAssert.notNull(unit, "Unit " + MUST_NOT_BE_NULL); + LettuceAssert.notEmpty(unit, "Unit " + MUST_NOT_BE_EMPTY); + LettuceAssert.isTrue(geoRadiusStoreArgs.getStoreKey() != null || geoRadiusStoreArgs.getStoreDistKey() != null, + "At least STORE key or STORDIST key is required"); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).addValue(member).add(distance).add(unit); + geoRadiusStoreArgs.build(args); + + return createCommand(GEORADIUSBYMEMBER, new IntegerOutput<>(codec), args); + } + + Command get(K key) { + notNullKey(key); + + return createCommand(GET, new ValueOutput<>(codec), key); + } + + Command getbit(K key, long offset) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).add(offset); + return createCommand(GETBIT, new IntegerOutput<>(codec), args); + } + + Command getrange(K key, long start, long end) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).add(start).add(end); + return createCommand(GETRANGE, new ValueOutput<>(codec), args); + } + + Command getset(K key, V value) { + notNullKey(key); + + return createCommand(GETSET, new ValueOutput<>(codec), key, value); + } + + Command hdel(K key, K... 
fields) { + notNullKey(key); + LettuceAssert.notNull(fields, "Fields " + MUST_NOT_BE_NULL); + LettuceAssert.notEmpty(fields, "Fields " + MUST_NOT_BE_EMPTY); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).addKeys(fields); + return createCommand(HDEL, new IntegerOutput<>(codec), args); + } + + Command> hello(int protocolVersion, String user, char[] password, String name) { + + CommandArgs args = new CommandArgs<>(StringCodec.ASCII).add(protocolVersion); + + if (user != null && password != null) { + args.add(AUTH).add(user).add(password); + } + + if (name != null) { + args.add(SETNAME).add(name); + } + + return new Command<>(HELLO, new GenericMapOutput<>(StringCodec.ASCII), args); + } + + Command hexists(K key, K field) { + notNullKey(key); + LettuceAssert.notNull(field, "Field " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).addKey(field); + return createCommand(HEXISTS, new BooleanOutput<>(codec), args); + } + + Command hget(K key, K field) { + notNullKey(key); + LettuceAssert.notNull(field, "Field " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).addKey(field); + return createCommand(HGET, new ValueOutput<>(codec), args); + } + + Command> hgetall(K key) { + notNullKey(key); + + return createCommand(HGETALL, new MapOutput<>(codec), key); + } + + Command hgetall(KeyValueStreamingChannel channel, K key) { + notNullKey(key); + notNull(channel); + + return createCommand(HGETALL, new KeyValueStreamingOutput<>(codec, channel), key); + } + + Command hincrby(K key, K field, long amount) { + notNullKey(key); + LettuceAssert.notNull(field, "Field " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).addKey(field).add(amount); + return createCommand(HINCRBY, new IntegerOutput<>(codec), args); + } + + Command hincrbyfloat(K key, K field, double amount) { + notNullKey(key); + LettuceAssert.notNull(field, "Field " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).addKey(field).add(amount); + return createCommand(HINCRBYFLOAT, new DoubleOutput<>(codec), args); + } + + Command> hkeys(K key) { + notNullKey(key); + + return createCommand(HKEYS, new KeyListOutput<>(codec), key); + } + + Command hkeys(KeyStreamingChannel channel, K key) { + notNullKey(key); + notNull(channel); + + return createCommand(HKEYS, new KeyStreamingOutput<>(codec, channel), key); + } + + Command hlen(K key) { + notNullKey(key); + + return createCommand(HLEN, new IntegerOutput<>(codec), key); + } + + Command> hmget(K key, K... fields) { + notNullKey(key); + LettuceAssert.notNull(fields, "Fields " + MUST_NOT_BE_NULL); + LettuceAssert.notEmpty(fields, "Fields " + MUST_NOT_BE_EMPTY); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).addKeys(fields); + return createCommand(HMGET, new ValueListOutput<>(codec), args); + } + + Command hmget(ValueStreamingChannel channel, K key, K... fields) { + notNullKey(key); + LettuceAssert.notNull(fields, "Fields " + MUST_NOT_BE_NULL); + LettuceAssert.notEmpty(fields, "Fields " + MUST_NOT_BE_EMPTY); + notNull(channel); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).addKeys(fields); + return createCommand(HMGET, new ValueStreamingOutput<>(codec, channel), args); + } + + Command hmget(KeyValueStreamingChannel channel, K key, K... 
fields) { + notNullKey(key); + LettuceAssert.notNull(fields, "Fields " + MUST_NOT_BE_NULL); + LettuceAssert.notEmpty(fields, "Fields " + MUST_NOT_BE_EMPTY); + notNull(channel); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).addKeys(fields); + return createCommand(HMGET, new KeyValueStreamingOutput<>(codec, channel, Arrays.asList(fields)), args); + } + + Command>> hmgetKeyValue(K key, K... fields) { + notNullKey(key); + LettuceAssert.notNull(fields, "Fields " + MUST_NOT_BE_NULL); + LettuceAssert.notEmpty(fields, "Fields " + MUST_NOT_BE_EMPTY); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).addKeys(fields); + return createCommand(HMGET, new KeyValueListOutput<>(codec, Arrays.asList(fields)), args); + } + + Command hmset(K key, Map map) { + notNullKey(key); + LettuceAssert.notNull(map, "Map " + MUST_NOT_BE_NULL); + LettuceAssert.isTrue(!map.isEmpty(), "Map " + MUST_NOT_BE_EMPTY); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).add(map); + return createCommand(HMSET, new StatusOutput<>(codec), args); + } + + Command> hscan(K key) { + notNullKey(key); + + return hscan(key, ScanCursor.INITIAL, null); + } + + Command> hscan(K key, ScanCursor scanCursor) { + notNullKey(key); + + return hscan(key, scanCursor, null); + } + + Command> hscan(K key, ScanArgs scanArgs) { + notNullKey(key); + + return hscan(key, ScanCursor.INITIAL, scanArgs); + } + + Command> hscan(K key, ScanCursor scanCursor, ScanArgs scanArgs) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec); + args.addKey(key); + + scanArgs(scanCursor, scanArgs, args); + + MapScanOutput output = new MapScanOutput<>(codec); + return createCommand(HSCAN, output, args); + } + + Command hscanStreaming(KeyValueStreamingChannel channel, K key) { + notNullKey(key); + notNull(channel); + + return hscanStreaming(channel, key, ScanCursor.INITIAL, null); + } + + Command hscanStreaming(KeyValueStreamingChannel channel, K key, ScanCursor scanCursor) { + notNullKey(key); + notNull(channel); + + return hscanStreaming(channel, key, scanCursor, null); + } + + Command hscanStreaming(KeyValueStreamingChannel channel, K key, ScanArgs scanArgs) { + notNullKey(key); + notNull(channel); + + return hscanStreaming(channel, key, ScanCursor.INITIAL, scanArgs); + } + + Command hscanStreaming(KeyValueStreamingChannel channel, K key, ScanCursor scanCursor, + ScanArgs scanArgs) { + notNullKey(key); + notNull(channel); + + CommandArgs args = new CommandArgs<>(codec); + + args.addKey(key); + scanArgs(scanCursor, scanArgs, args); + + KeyValueScanStreamingOutput output = new KeyValueScanStreamingOutput<>(codec, channel); + return createCommand(HSCAN, output, args); + } + + Command hset(K key, K field, V value) { + notNullKey(key); + LettuceAssert.notNull(field, "Field " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).addKey(field).addValue(value); + return createCommand(HSET, new BooleanOutput<>(codec), args); + } + + Command hset(K key, Map map) { + notNullKey(key); + LettuceAssert.notNull(map, "Map " + MUST_NOT_BE_NULL); + LettuceAssert.isTrue(!map.isEmpty(), "Map " + MUST_NOT_BE_EMPTY); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).add(map); + return createCommand(HSET, new IntegerOutput<>(codec), args); + } + + Command hsetnx(K key, K field, V value) { + notNullKey(key); + LettuceAssert.notNull(field, "Field " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).addKey(field).addValue(value); + return createCommand(HSETNX, new BooleanOutput<>(codec), 
args); + } + + Command hstrlen(K key, K field) { + notNullKey(key); + LettuceAssert.notNull(field, "Field " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).addKey(field); + return createCommand(HSTRLEN, new IntegerOutput<>(codec), args); + } + + Command> hvals(K key) { + notNullKey(key); + + return createCommand(HVALS, new ValueListOutput<>(codec), key); + } + + Command hvals(ValueStreamingChannel channel, K key) { + notNullKey(key); + notNull(channel); + + return createCommand(HVALS, new ValueStreamingOutput<>(codec, channel), key); + } + + Command incr(K key) { + notNullKey(key); + + return createCommand(INCR, new IntegerOutput<>(codec), key); + } + + Command incrby(K key, long amount) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).add(amount); + return createCommand(INCRBY, new IntegerOutput<>(codec), args); + } + + Command incrbyfloat(K key, double amount) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).add(amount); + return createCommand(INCRBYFLOAT, new DoubleOutput<>(codec), args); + } + + Command info() { + return createCommand(INFO, new StatusOutput<>(codec)); + } + + Command info(String section) { + LettuceAssert.notNull(section, "Section " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec).add(section); + return createCommand(INFO, new StatusOutput<>(codec), args); + } + + Command> keys(K pattern) { + LettuceAssert.notNull(pattern, "Pattern " + MUST_NOT_BE_NULL); + + return createCommand(KEYS, new KeyListOutput<>(codec), pattern); + } + + Command keys(KeyStreamingChannel channel, K pattern) { + LettuceAssert.notNull(pattern, "Pattern " + MUST_NOT_BE_NULL); + notNull(channel); + + return createCommand(KEYS, new KeyStreamingOutput<>(codec, channel), pattern); + } + + Command lastsave() { + return createCommand(LASTSAVE, new DateOutput<>(codec)); + } + + Command lindex(K key, long index) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).add(index); + return createCommand(LINDEX, new ValueOutput<>(codec), args); + } + + Command linsert(K key, boolean before, V pivot, V value) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec); + args.addKey(key).add(before ? BEFORE : AFTER).addValue(pivot).addValue(value); + return createCommand(LINSERT, new IntegerOutput<>(codec), args); + } + + Command llen(K key) { + notNullKey(key); + + return createCommand(LLEN, new IntegerOutput<>(codec), key); + } + + Command lpop(K key) { + notNullKey(key); + + return createCommand(LPOP, new ValueOutput<>(codec), key); + } + + Command lpush(K key, V... values) { + notNullKey(key); + notEmptyValues(values); + + return createCommand(LPUSH, new IntegerOutput<>(codec), key, values); + } + + Command lpushx(K key, V... 
values) { + notNullKey(key); + notEmptyValues(values); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).addValues(values); + + return createCommand(LPUSHX, new IntegerOutput<>(codec), args); + } + + Command> lrange(K key, long start, long stop) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).add(start).add(stop); + return createCommand(LRANGE, new ValueListOutput<>(codec), args); + } + + Command lrange(ValueStreamingChannel channel, K key, long start, long stop) { + notNullKey(key); + notNull(channel); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).add(start).add(stop); + return createCommand(LRANGE, new ValueStreamingOutput<>(codec, channel), args); + } + + Command lrem(K key, long count, V value) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).add(count).addValue(value); + return createCommand(LREM, new IntegerOutput<>(codec), args); + } + + Command lset(K key, long index, V value) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).add(index).addValue(value); + return createCommand(LSET, new StatusOutput<>(codec), args); + } + + Command ltrim(K key, long start, long stop) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).add(start).add(stop); + return createCommand(LTRIM, new StatusOutput<>(codec), args); + } + + Command memoryUsage(K key) { + return createCommand(MEMORY, new IntegerOutput<>(codec), new CommandArgs<>(codec).add(USAGE).add(key.toString())); + } + + Command> mget(K... keys) { + notEmpty(keys); + + CommandArgs args = new CommandArgs<>(codec).addKeys(keys); + return createCommand(MGET, new ValueListOutput<>(codec), args); + } + + Command> mget(Iterable keys) { + LettuceAssert.notNull(keys, "Keys " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec).addKeys(keys); + return createCommand(MGET, new ValueListOutput<>(codec), args); + } + + Command mget(ValueStreamingChannel channel, K... keys) { + notEmpty(keys); + notNull(channel); + + CommandArgs args = new CommandArgs<>(codec).addKeys(keys); + return createCommand(MGET, new ValueStreamingOutput<>(codec, channel), args); + } + + Command mget(KeyValueStreamingChannel channel, K... keys) { + notEmpty(keys); + notNull(channel); + + CommandArgs args = new CommandArgs<>(codec).addKeys(keys); + return createCommand(MGET, new KeyValueStreamingOutput<>(codec, channel, Arrays.asList(keys)), args); + } + + Command mget(ValueStreamingChannel channel, Iterable keys) { + LettuceAssert.notNull(keys, "Keys " + MUST_NOT_BE_NULL); + notNull(channel); + + CommandArgs args = new CommandArgs<>(codec).addKeys(keys); + return createCommand(MGET, new ValueStreamingOutput<>(codec, channel), args); + } + + Command mget(KeyValueStreamingChannel channel, Iterable keys) { + LettuceAssert.notNull(keys, "Keys " + MUST_NOT_BE_NULL); + notNull(channel); + + CommandArgs args = new CommandArgs<>(codec).addKeys(keys); + return createCommand(MGET, new KeyValueStreamingOutput<>(codec, channel, keys), args); + } + + Command>> mgetKeyValue(K... 
keys) { + notEmpty(keys); + + CommandArgs args = new CommandArgs<>(codec).addKeys(keys); + return createCommand(MGET, new KeyValueListOutput<>(codec, Arrays.asList(keys)), args); + } + + Command>> mgetKeyValue(Iterable keys) { + LettuceAssert.notNull(keys, "Keys " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec).addKeys(keys); + return createCommand(MGET, new KeyValueListOutput<>(codec, keys), args); + } + + Command migrate(String host, int port, K key, int db, long timeout) { + LettuceAssert.notNull(host, "Host " + MUST_NOT_BE_NULL); + LettuceAssert.notEmpty(host, "Host " + MUST_NOT_BE_EMPTY); + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec); + args.add(host).add(port).addKey(key).add(db).add(timeout); + return createCommand(MIGRATE, new StatusOutput<>(codec), args); + } + + Command migrate(String host, int port, int db, long timeout, MigrateArgs migrateArgs) { + LettuceAssert.notNull(host, "Host " + MUST_NOT_BE_NULL); + LettuceAssert.notEmpty(host, "Host " + MUST_NOT_BE_EMPTY); + LettuceAssert.notNull(migrateArgs, "migrateArgs " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec); + + args.add(host).add(port); + + if (migrateArgs.keys.size() == 1) { + args.addKey(migrateArgs.keys.get(0)); + } else { + args.add(""); + } + + args.add(db).add(timeout); + migrateArgs.build(args); + + return createCommand(MIGRATE, new StatusOutput<>(codec), args); + } + + Command move(K key, int db) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).add(db); + return createCommand(MOVE, new BooleanOutput<>(codec), args); + } + + Command mset(Map map) { + LettuceAssert.notNull(map, "Map " + MUST_NOT_BE_NULL); + LettuceAssert.isTrue(!map.isEmpty(), "Map " + MUST_NOT_BE_EMPTY); + + CommandArgs args = new CommandArgs<>(codec).add(map); + return createCommand(MSET, new StatusOutput<>(codec), args); + } + + Command msetnx(Map map) { + LettuceAssert.notNull(map, "Map " + MUST_NOT_BE_NULL); + LettuceAssert.isTrue(!map.isEmpty(), "Map " + MUST_NOT_BE_EMPTY); + + CommandArgs args = new CommandArgs<>(codec).add(map); + return createCommand(MSETNX, new BooleanOutput<>(codec), args); + } + + Command multi() { + return createCommand(MULTI, new StatusOutput<>(codec)); + } + + Command objectEncoding(K key) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec).add(ENCODING).addKey(key); + return createCommand(OBJECT, new StatusOutput<>(codec), args); + } + + Command objectIdletime(K key) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec).add(IDLETIME).addKey(key); + return createCommand(OBJECT, new IntegerOutput<>(codec), args); + } + + Command objectRefcount(K key) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec).add(REFCOUNT).addKey(key); + return createCommand(OBJECT, new IntegerOutput<>(codec), args); + } + + Command persist(K key) { + notNullKey(key); + + return createCommand(PERSIST, new BooleanOutput<>(codec), key); + } + + Command pexpire(K key, long milliseconds) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).add(milliseconds); + return createCommand(PEXPIRE, new BooleanOutput<>(codec), args); + } + + Command pexpireat(K key, long timestamp) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).add(timestamp); + return createCommand(PEXPIREAT, new BooleanOutput<>(codec), args); + } + + Command pfadd(K key, V value, V... 
moreValues) { + notNullKey(key); + LettuceAssert.notNull(value, "Value " + MUST_NOT_BE_NULL); + LettuceAssert.notNull(moreValues, "MoreValues " + MUST_NOT_BE_NULL); + LettuceAssert.noNullElements(moreValues, "MoreValues " + MUST_NOT_CONTAIN_NULL_ELEMENTS); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).addValue(value).addValues(moreValues); + return createCommand(PFADD, new IntegerOutput<>(codec), args); + } + + Command pfadd(K key, V... values) { + notNullKey(key); + notEmptyValues(values); + LettuceAssert.noNullElements(values, "Values " + MUST_NOT_CONTAIN_NULL_ELEMENTS); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).addValues(values); + return createCommand(PFADD, new IntegerOutput<>(codec), args); + } + + Command pfcount(K key, K... moreKeys) { + notNullKey(key); + LettuceAssert.notNull(moreKeys, "MoreKeys " + MUST_NOT_BE_NULL); + LettuceAssert.noNullElements(moreKeys, "MoreKeys " + MUST_NOT_CONTAIN_NULL_ELEMENTS); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).addKeys(moreKeys); + return createCommand(PFCOUNT, new IntegerOutput<>(codec), args); + } + + Command pfcount(K... keys) { + notEmpty(keys); + + CommandArgs args = new CommandArgs<>(codec).addKeys(keys); + return createCommand(PFCOUNT, new IntegerOutput<>(codec), args); + } + + @SuppressWarnings("unchecked") + Command pfmerge(K destkey, K sourcekey, K... moreSourceKeys) { + LettuceAssert.notNull(destkey, "Destkey " + MUST_NOT_BE_NULL); + LettuceAssert.notNull(sourcekey, "Sourcekey " + MUST_NOT_BE_NULL); + LettuceAssert.notNull(moreSourceKeys, "MoreSourceKeys " + MUST_NOT_BE_NULL); + LettuceAssert.noNullElements(moreSourceKeys, "MoreSourceKeys " + MUST_NOT_CONTAIN_NULL_ELEMENTS); + + CommandArgs args = new CommandArgs<>(codec).addKeys(destkey).addKey(sourcekey).addKeys(moreSourceKeys); + return createCommand(PFMERGE, new StatusOutput<>(codec), args); + } + + @SuppressWarnings("unchecked") + Command pfmerge(K destkey, K... 
sourcekeys) { + LettuceAssert.notNull(destkey, "Destkey " + MUST_NOT_BE_NULL); + LettuceAssert.notNull(sourcekeys, "Sourcekeys " + MUST_NOT_BE_NULL); + LettuceAssert.notEmpty(sourcekeys, "Sourcekeys " + MUST_NOT_BE_EMPTY); + LettuceAssert.noNullElements(sourcekeys, "Sourcekeys " + MUST_NOT_CONTAIN_NULL_ELEMENTS); + + CommandArgs args = new CommandArgs<>(codec).addKeys(destkey).addKeys(sourcekeys); + return createCommand(PFMERGE, new StatusOutput<>(codec), args); + } + + Command ping() { + return createCommand(PING, new StatusOutput<>(codec)); + } + + Command psetex(K key, long milliseconds, V value) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).add(milliseconds).addValue(value); + return createCommand(PSETEX, new StatusOutput<>(codec), args); + } + + Command pttl(K key) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec).addKey(key); + return createCommand(PTTL, new IntegerOutput<>(codec), args); + } + + Command publish(K channel, V message) { + LettuceAssert.notNull(channel, "Channel " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec).addKey(channel).addValue(message); + return createCommand(PUBLISH, new IntegerOutput<>(codec), args); + } + + Command> pubsubChannels() { + CommandArgs args = new CommandArgs<>(codec).add(CHANNELS); + return createCommand(PUBSUB, new KeyListOutput<>(codec), args); + } + + Command> pubsubChannels(K pattern) { + LettuceAssert.notNull(pattern, "Pattern " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec).add(CHANNELS).addKey(pattern); + return createCommand(PUBSUB, new KeyListOutput<>(codec), args); + } + + Command pubsubNumpat() { + CommandArgs args = new CommandArgs<>(codec).add(NUMPAT); + return createCommand(PUBSUB, new IntegerOutput<>(codec), args); + } + + @SuppressWarnings({ "unchecked", "rawtypes" }) + Command> pubsubNumsub(K... 
pattern) { + LettuceAssert.notNull(pattern, "Pattern " + MUST_NOT_BE_NULL); + LettuceAssert.notEmpty(pattern, "Pattern " + MUST_NOT_BE_EMPTY); + + CommandArgs args = new CommandArgs<>(codec).add(NUMSUB).addKeys(pattern); + return createCommand(PUBSUB, (MapOutput) new MapOutput((RedisCodec) codec), args); + } + + Command quit() { + return createCommand(QUIT, new StatusOutput<>(codec)); + } + + Command randomkey() { + return createCommand(RANDOMKEY, new KeyOutput<>(codec)); + } + + Command readOnly() { + return createCommand(READONLY, new StatusOutput<>(codec)); + } + + Command readWrite() { + return createCommand(READWRITE, new StatusOutput<>(codec)); + } + + Command rename(K key, K newKey) { + notNullKey(key); + LettuceAssert.notNull(newKey, "NewKey " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).addKey(newKey); + return createCommand(RENAME, new StatusOutput<>(codec), args); + } + + Command renamenx(K key, K newKey) { + notNullKey(key); + LettuceAssert.notNull(newKey, "NewKey " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).addKey(newKey); + return createCommand(RENAMENX, new BooleanOutput<>(codec), args); + } + + Command restore(K key, byte[] value, RestoreArgs restoreArgs) { + notNullKey(key); + LettuceAssert.notNull(value, "Value " + MUST_NOT_BE_NULL); + LettuceAssert.notNull(restoreArgs, "RestoreArgs " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).add(restoreArgs.ttl).add(value); + + if (restoreArgs.replace) { + args.add(REPLACE); + } + + return createCommand(RESTORE, new StatusOutput<>(codec), args); + } + + Command> role() { + return createCommand(ROLE, new ArrayOutput<>(codec)); + } + + Command rpop(K key) { + notNullKey(key); + + return createCommand(RPOP, new ValueOutput<>(codec), key); + } + + Command rpoplpush(K source, K destination) { + LettuceAssert.notNull(source, "Source " + MUST_NOT_BE_NULL); + LettuceAssert.notNull(destination, "Destination " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec).addKey(source).addKey(destination); + return createCommand(RPOPLPUSH, new ValueOutput<>(codec), args); + } + + Command rpush(K key, V... values) { + notNullKey(key); + notEmptyValues(values); + + return createCommand(RPUSH, new IntegerOutput<>(codec), key, values); + } + + Command rpushx(K key, V... values) { + notNullKey(key); + notEmptyValues(values); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).addValues(values); + return createCommand(RPUSHX, new IntegerOutput<>(codec), args); + } + + Command sadd(K key, V... 
members) { + notNullKey(key); + LettuceAssert.notNull(members, "Members " + MUST_NOT_BE_NULL); + LettuceAssert.notEmpty(members, "Members " + MUST_NOT_BE_EMPTY); + + return createCommand(SADD, new IntegerOutput<>(codec), key, members); + } + + Command save() { + return createCommand(SAVE, new StatusOutput<>(codec)); + } + + Command> scan() { + return scan(ScanCursor.INITIAL, null); + } + + Command> scan(ScanCursor scanCursor) { + return scan(scanCursor, null); + } + + Command> scan(ScanArgs scanArgs) { + return scan(ScanCursor.INITIAL, scanArgs); + } + + Command> scan(ScanCursor scanCursor, ScanArgs scanArgs) { + CommandArgs args = new CommandArgs<>(codec); + + scanArgs(scanCursor, scanArgs, args); + + KeyScanOutput output = new KeyScanOutput<>(codec); + return createCommand(SCAN, output, args); + } + + protected void scanArgs(ScanCursor scanCursor, ScanArgs scanArgs, CommandArgs args) { + LettuceAssert.notNull(scanCursor, "ScanCursor " + MUST_NOT_BE_NULL); + LettuceAssert.isTrue(!scanCursor.isFinished(), "ScanCursor must not be finished"); + + args.add(scanCursor.getCursor()); + + if (scanArgs != null) { + scanArgs.build(args); + } + } + + Command scanStreaming(KeyStreamingChannel channel) { + notNull(channel); + LettuceAssert.notNull(channel, "KeyStreamingChannel " + MUST_NOT_BE_NULL); + + return scanStreaming(channel, ScanCursor.INITIAL, null); + } + + Command scanStreaming(KeyStreamingChannel channel, ScanCursor scanCursor) { + notNull(channel); + LettuceAssert.notNull(channel, "KeyStreamingChannel " + MUST_NOT_BE_NULL); + + return scanStreaming(channel, scanCursor, null); + } + + Command scanStreaming(KeyStreamingChannel channel, ScanArgs scanArgs) { + notNull(channel); + LettuceAssert.notNull(channel, "KeyStreamingChannel " + MUST_NOT_BE_NULL); + + return scanStreaming(channel, ScanCursor.INITIAL, scanArgs); + } + + Command scanStreaming(KeyStreamingChannel channel, ScanCursor scanCursor, ScanArgs scanArgs) { + notNull(channel); + LettuceAssert.notNull(channel, "KeyStreamingChannel " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec); + scanArgs(scanCursor, scanArgs, args); + + KeyScanStreamingOutput output = new KeyScanStreamingOutput<>(codec, channel); + return createCommand(SCAN, output, args); + } + + Command scard(K key) { + notNullKey(key); + + return createCommand(SCARD, new IntegerOutput<>(codec), key); + } + + Command> scriptExists(String... digests) { + LettuceAssert.notNull(digests, "Digests " + MUST_NOT_BE_NULL); + LettuceAssert.notEmpty(digests, "Digests " + MUST_NOT_BE_EMPTY); + LettuceAssert.noNullElements(digests, "Digests " + MUST_NOT_CONTAIN_NULL_ELEMENTS); + + CommandArgs args = new CommandArgs<>(codec).add(EXISTS); + for (String sha : digests) { + args.add(sha); + } + return createCommand(SCRIPT, new BooleanListOutput<>(codec), args); + } + + Command scriptFlush() { + CommandArgs args = new CommandArgs<>(codec).add(FLUSH); + return createCommand(SCRIPT, new StatusOutput<>(codec), args); + } + + Command scriptKill() { + CommandArgs args = new CommandArgs<>(codec).add(KILL); + return createCommand(SCRIPT, new StatusOutput<>(codec), args); + } + + Command scriptLoad(byte[] script) { + LettuceAssert.notNull(script, "Script " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec).add(LOAD).add(script); + return createCommand(SCRIPT, new StatusOutput<>(codec), args); + } + + Command> sdiff(K... 
keys) { + notEmpty(keys); + + CommandArgs args = new CommandArgs<>(codec).addKeys(keys); + return createCommand(SDIFF, new ValueSetOutput<>(codec), args); + } + + Command sdiff(ValueStreamingChannel channel, K... keys) { + notEmpty(keys); + notNull(channel); + + CommandArgs args = new CommandArgs<>(codec).addKeys(keys); + return createCommand(SDIFF, new ValueStreamingOutput<>(codec, channel), args); + } + + Command sdiffstore(K destination, K... keys) { + notEmpty(keys); + LettuceAssert.notNull(destination, "Destination " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec).addKey(destination).addKeys(keys); + return createCommand(SDIFFSTORE, new IntegerOutput<>(codec), args); + } + + Command select(int db) { + CommandArgs args = new CommandArgs<>(codec).add(db); + return createCommand(SELECT, new StatusOutput<>(codec), args); + } + + Command set(K key, V value) { + notNullKey(key); + + return createCommand(SET, new StatusOutput<>(codec), key, value); + } + + Command set(K key, V value, SetArgs setArgs) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).addValue(value); + setArgs.build(args); + return createCommand(SET, new StatusOutput<>(codec), args); + } + + Command setbit(K key, long offset, int value) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).add(offset).add(value); + return createCommand(SETBIT, new IntegerOutput<>(codec), args); + } + + Command setex(K key, long seconds, V value) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).add(seconds).addValue(value); + return createCommand(SETEX, new StatusOutput<>(codec), args); + } + + Command setnx(K key, V value) { + notNullKey(key); + return createCommand(SETNX, new BooleanOutput<>(codec), key, value); + } + + Command setrange(K key, long offset, V value) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).add(offset).addValue(value); + return createCommand(SETRANGE, new IntegerOutput<>(codec), args); + } + + Command shutdown(boolean save) { + CommandArgs args = new CommandArgs<>(codec); + return createCommand(SHUTDOWN, new StatusOutput<>(codec), save ? args.add(SAVE) : args.add(NOSAVE)); + } + + Command> sinter(K... keys) { + notEmpty(keys); + + CommandArgs args = new CommandArgs<>(codec).addKeys(keys); + return createCommand(SINTER, new ValueSetOutput<>(codec), args); + } + + Command sinter(ValueStreamingChannel channel, K... keys) { + notEmpty(keys); + notNull(channel); + + CommandArgs args = new CommandArgs<>(codec).addKeys(keys); + return createCommand(SINTER, new ValueStreamingOutput<>(codec, channel), args); + } + + Command sinterstore(K destination, K... 
keys) { + LettuceAssert.notNull(destination, "Destination " + MUST_NOT_BE_NULL); + notEmpty(keys); + + CommandArgs args = new CommandArgs<>(codec).addKey(destination).addKeys(keys); + return createCommand(SINTERSTORE, new IntegerOutput<>(codec), args); + } + + Command sismember(K key, V member) { + notNullKey(key); + return createCommand(SISMEMBER, new BooleanOutput<>(codec), key, member); + } + + Command slaveof(String host, int port) { + LettuceAssert.notNull(host, "Host " + MUST_NOT_BE_NULL); + LettuceAssert.notEmpty(host, "Host " + MUST_NOT_BE_EMPTY); + + CommandArgs args = new CommandArgs<>(codec).add(host).add(port); + return createCommand(SLAVEOF, new StatusOutput<>(codec), args); + } + + Command slaveofNoOne() { + CommandArgs args = new CommandArgs<>(codec).add(NO).add(ONE); + return createCommand(SLAVEOF, new StatusOutput<>(codec), args); + } + + Command> slowlogGet() { + CommandArgs args = new CommandArgs<>(codec).add(GET); + return createCommand(SLOWLOG, new NestedMultiOutput<>(codec), args); + } + + Command> slowlogGet(int count) { + CommandArgs args = new CommandArgs<>(codec).add(GET).add(count); + return createCommand(SLOWLOG, new NestedMultiOutput<>(codec), args); + } + + Command slowlogLen() { + CommandArgs args = new CommandArgs<>(codec).add(LEN); + return createCommand(SLOWLOG, new IntegerOutput<>(codec), args); + } + + Command slowlogReset() { + CommandArgs args = new CommandArgs<>(codec).add(RESET); + return createCommand(SLOWLOG, new StatusOutput<>(codec), args); + } + + Command> smembers(K key) { + notNullKey(key); + + return createCommand(SMEMBERS, new ValueSetOutput<>(codec), key); + } + + Command smembers(ValueStreamingChannel channel, K key) { + notNullKey(key); + notNull(channel); + + return createCommand(SMEMBERS, new ValueStreamingOutput<>(codec, channel), key); + } + + Command smove(K source, K destination, V member) { + LettuceAssert.notNull(source, "Source " + MUST_NOT_BE_NULL); + LettuceAssert.notNull(destination, "Destination " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec).addKey(source).addKey(destination).addValue(member); + return createCommand(SMOVE, new BooleanOutput<>(codec), args); + } + + Command> sort(K key) { + notNullKey(key); + + return createCommand(SORT, new ValueListOutput<>(codec), key); + } + + Command> sort(K key, SortArgs sortArgs) { + notNullKey(key); + LettuceAssert.notNull(sortArgs, "SortArgs " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec).addKey(key); + sortArgs.build(args, null); + return createCommand(SORT, new ValueListOutput<>(codec), args); + } + + Command sort(ValueStreamingChannel channel, K key) { + notNullKey(key); + notNull(channel); + + return createCommand(SORT, new ValueStreamingOutput<>(codec, channel), key); + } + + Command sort(ValueStreamingChannel channel, K key, SortArgs sortArgs) { + notNullKey(key); + notNull(channel); + LettuceAssert.notNull(sortArgs, "SortArgs " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec).addKey(key); + sortArgs.build(args, null); + return createCommand(SORT, new ValueStreamingOutput<>(codec, channel), args); + } + + Command sortStore(K key, SortArgs sortArgs, K destination) { + notNullKey(key); + LettuceAssert.notNull(destination, "Destination " + MUST_NOT_BE_NULL); + LettuceAssert.notNull(sortArgs, "SortArgs " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec).addKey(key); + sortArgs.build(args, destination); + return createCommand(SORT, new IntegerOutput<>(codec), args); + } + + Command spop(K key) { + 
notNullKey(key); + + return createCommand(SPOP, new ValueOutput<>(codec), key); + } + + Command> spop(K key, long count) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).add(count); + return createCommand(SPOP, new ValueSetOutput<>(codec), args); + } + + Command srandmember(K key) { + notNullKey(key); + + return createCommand(SRANDMEMBER, new ValueOutput<>(codec), key); + } + + Command> srandmember(K key, long count) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).add(count); + return createCommand(SRANDMEMBER, new ValueListOutput<>(codec), args); + } + + Command srandmember(ValueStreamingChannel channel, K key, long count) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).add(count); + return createCommand(SRANDMEMBER, new ValueStreamingOutput<>(codec, channel), args); + } + + Command srem(K key, V... members) { + notNullKey(key); + LettuceAssert.notNull(members, "Members " + MUST_NOT_BE_NULL); + LettuceAssert.notEmpty(members, "Members " + MUST_NOT_BE_EMPTY); + + return createCommand(SREM, new IntegerOutput<>(codec), key, members); + } + + Command> sscan(K key) { + notNullKey(key); + + return sscan(key, ScanCursor.INITIAL, null); + } + + Command> sscan(K key, ScanCursor scanCursor) { + notNullKey(key); + + return sscan(key, scanCursor, null); + } + + Command> sscan(K key, ScanArgs scanArgs) { + notNullKey(key); + + return sscan(key, ScanCursor.INITIAL, scanArgs); + } + + Command> sscan(K key, ScanCursor scanCursor, ScanArgs scanArgs) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec); + args.addKey(key); + + scanArgs(scanCursor, scanArgs, args); + + ValueScanOutput output = new ValueScanOutput<>(codec); + return createCommand(SSCAN, output, args); + } + + Command sscanStreaming(ValueStreamingChannel channel, K key) { + notNullKey(key); + notNull(channel); + + return sscanStreaming(channel, key, ScanCursor.INITIAL, null); + } + + Command sscanStreaming(ValueStreamingChannel channel, K key, ScanCursor scanCursor) { + notNullKey(key); + notNull(channel); + + return sscanStreaming(channel, key, scanCursor, null); + } + + Command sscanStreaming(ValueStreamingChannel channel, K key, ScanArgs scanArgs) { + notNullKey(key); + notNull(channel); + + return sscanStreaming(channel, key, ScanCursor.INITIAL, scanArgs); + } + + Command sscanStreaming(ValueStreamingChannel channel, K key, ScanCursor scanCursor, + ScanArgs scanArgs) { + notNullKey(key); + notNull(channel); + + CommandArgs args = new CommandArgs<>(codec); + + args.addKey(key); + scanArgs(scanCursor, scanArgs, args); + + ValueScanStreamingOutput output = new ValueScanStreamingOutput<>(codec, channel); + return createCommand(SSCAN, output, args); + } + + Command strlen(K key) { + notNullKey(key); + + return createCommand(STRLEN, new IntegerOutput<>(codec), key); + } + + Command> sunion(K... keys) { + notEmpty(keys); + + CommandArgs args = new CommandArgs<>(codec).addKeys(keys); + return createCommand(SUNION, new ValueSetOutput<>(codec), args); + } + + Command sunion(ValueStreamingChannel channel, K... keys) { + notEmpty(keys); + notNull(channel); + + CommandArgs args = new CommandArgs<>(codec).addKeys(keys); + return createCommand(SUNION, new ValueStreamingOutput<>(codec, channel), args); + } + + Command sunionstore(K destination, K... 
keys) { + LettuceAssert.notNull(destination, "Destination " + MUST_NOT_BE_NULL); + notEmpty(keys); + + CommandArgs args = new CommandArgs<>(codec).addKey(destination).addKeys(keys); + return createCommand(SUNIONSTORE, new IntegerOutput<>(codec), args); + } + + Command swapdb(int db1, int db2) { + CommandArgs args = new CommandArgs<>(codec).add(db1).add(db2); + return createCommand(SWAPDB, new StatusOutput<>(codec), args); + } + + Command sync() { + return createCommand(SYNC, new StatusOutput<>(codec)); + } + + Command> time() { + CommandArgs args = new CommandArgs<>(codec); + return createCommand(TIME, new ValueListOutput<>(codec), args); + } + + Command touch(K... keys) { + notEmpty(keys); + + CommandArgs args = new CommandArgs<>(codec).addKeys(keys); + return createCommand(TOUCH, new IntegerOutput<>(codec), args); + } + + Command touch(Iterable keys) { + LettuceAssert.notNull(keys, "Keys " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec).addKeys(keys); + return createCommand(TOUCH, new IntegerOutput<>(codec), args); + } + + Command ttl(K key) { + notNullKey(key); + + return createCommand(TTL, new IntegerOutput<>(codec), key); + } + + Command type(K key) { + notNullKey(key); + + return createCommand(TYPE, new StatusOutput<>(codec), key); + } + + Command unlink(K... keys) { + notEmpty(keys); + + CommandArgs args = new CommandArgs<>(codec).addKeys(keys); + return createCommand(UNLINK, new IntegerOutput<>(codec), args); + } + + Command unlink(Iterable keys) { + LettuceAssert.notNull(keys, "Keys " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec).addKeys(keys); + return createCommand(UNLINK, new IntegerOutput<>(codec), args); + } + + Command unwatch() { + return createCommand(UNWATCH, new StatusOutput<>(codec)); + } + + Command wait(int replicas, long timeout) { + CommandArgs args = new CommandArgs<>(codec).add(replicas).add(timeout); + + return createCommand(WAIT, new IntegerOutput<>(codec), args); + } + + Command watch(K... 
keys) { + notEmpty(keys); + + CommandArgs args = new CommandArgs<>(codec).addKeys(keys); + return createCommand(WATCH, new StatusOutput<>(codec), args); + } + + public Command xack(K key, K group, String[] messageIds) { + notNullKey(key); + LettuceAssert.notNull(group, "Group " + MUST_NOT_BE_NULL); + LettuceAssert.notEmpty(messageIds, "MessageIds " + MUST_NOT_BE_EMPTY); + LettuceAssert.noNullElements(messageIds, "MessageIds " + MUST_NOT_CONTAIN_NULL_ELEMENTS); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).addKey(group); + + for (String messageId : messageIds) { + args.add(messageId); + } + + return createCommand(XACK, new IntegerOutput<>(codec), args); + } + + public Command>> xclaim(K key, Consumer consumer, XClaimArgs xClaimArgs, + String[] messageIds) { + + notNullKey(key); + LettuceAssert.notNull(consumer, "Consumer " + MUST_NOT_BE_NULL); + LettuceAssert.notEmpty(messageIds, "MessageIds " + MUST_NOT_BE_EMPTY); + LettuceAssert.noNullElements(messageIds, "MessageIds " + MUST_NOT_CONTAIN_NULL_ELEMENTS); + LettuceAssert.notNull(xClaimArgs, "XClaimArgs " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).addKey(consumer.group).addKey(consumer.name) + .add(xClaimArgs.minIdleTime); + + for (String messageId : messageIds) { + args.add(messageId); + } + + xClaimArgs.build(args); + + return createCommand(XCLAIM, new StreamMessageListOutput<>(codec, key), args); + } + + public Command xadd(K key, XAddArgs xAddArgs, Map map) { + notNullKey(key); + LettuceAssert.notNull(map, "Message body " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec).addKey(key); + + if (xAddArgs != null) { + xAddArgs.build(args); + } else { + args.add("*"); + } + + args.add(map); + + return createCommand(XADD, new StatusOutput<>(codec), args); + } + + public Command xadd(K key, XAddArgs xAddArgs, Object[] body) { + notNullKey(key); + LettuceAssert.notNull(body, "Message body " + MUST_NOT_BE_NULL); + LettuceAssert.notEmpty(body, "Message body " + MUST_NOT_BE_EMPTY); + LettuceAssert.noNullElements(body, "Message body " + MUST_NOT_CONTAIN_NULL_ELEMENTS); + LettuceAssert.isTrue(body.length % 2 == 0, "Message body.length must be a multiple of 2 and contain a " + + "sequence of field1, value1, field2, value2, fieldN, valueN"); + + CommandArgs args = new CommandArgs<>(codec).addKey(key); + + if (xAddArgs != null) { + xAddArgs.build(args); + } else { + args.add("*"); + } + + for (int i = 0; i < body.length; i += 2) { + args.addKey((K) body[i]); + args.addValue((V) body[i + 1]); + } + + return createCommand(XADD, new StatusOutput<>(codec), args); + } + + public Command xdel(K key, String[] messageIds) { + notNullKey(key); + LettuceAssert.notEmpty(messageIds, "MessageIds " + MUST_NOT_BE_EMPTY); + LettuceAssert.noNullElements(messageIds, "MessageIds " + MUST_NOT_CONTAIN_NULL_ELEMENTS); + + CommandArgs args = new CommandArgs<>(codec).addKey(key); + + for (String messageId : messageIds) { + args.add(messageId); + } + + return createCommand(XDEL, new IntegerOutput<>(codec), args); + } + + public Command xgroupCreate(StreamOffset offset, K group, XGroupCreateArgs commandArgs) { + LettuceAssert.notNull(offset, "StreamOffset " + MUST_NOT_BE_NULL); + LettuceAssert.notNull(group, "Group " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec).add(CREATE).addKey(offset.getName()).addKey(group) + .add(offset.getOffset()); + + if (commandArgs != null) { + commandArgs.build(args); + } + + return createCommand(XGROUP, new StatusOutput<>(codec), args); + } + + public 
Command xgroupDelconsumer(K key, Consumer consumer) { + notNullKey(key); + LettuceAssert.notNull(consumer, "Consumer " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec).add("DELCONSUMER").addKey(key).addKey(consumer.getGroup()) + .addKey(consumer.getName()); + + return createCommand(XGROUP, new BooleanOutput<>(codec), args); + } + + public Command xgroupDestroy(K key, K group) { + notNullKey(key); + LettuceAssert.notNull(group, "Group " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec).add("DESTROY").addKey(key).addKey(group); + + return createCommand(XGROUP, new BooleanOutput<>(codec), args); + } + + public Command xgroupSetid(StreamOffset offset, K group) { + LettuceAssert.notNull(offset, "StreamOffset " + MUST_NOT_BE_NULL); + LettuceAssert.notNull(group, "Group " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec).add("SETID").addKey(offset.getName()).addKey(group) + .add(offset.getOffset()); + + return createCommand(XGROUP, new StatusOutput<>(codec), args); + } + + public Command> xinfoStream(K key) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec).add(STREAM).addKey(key); + + return createCommand(XINFO, new ArrayOutput<>(codec), args); + } + + public Command> xinfoGroups(K key) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec).add(GROUPS).addKey(key); + + return createCommand(XINFO, new ArrayOutput<>(codec), args); + } + + public Command> xinfoConsumers(K key, K group) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec).add(CONSUMERS).addKey(key).addKey(group); + + return createCommand(XINFO, new ArrayOutput<>(codec), args); + } + + public Command xlen(K key) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec).addKey(key); + + return createCommand(XLEN, new IntegerOutput<>(codec), args); + } + + public Command> xpending(K key, K group, Range range, Limit limit) { + notNullKey(key); + LettuceAssert.notNull(group, "Group " + MUST_NOT_BE_NULL); + LettuceAssert.notNull(range, "Range " + MUST_NOT_BE_NULL); + LettuceAssert.notNull(limit, "Limit " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).addKey(group); + + if (limit.isLimited() || !range.getLower().equals(Boundary.unbounded()) + || !range.getUpper().equals(Boundary.unbounded())) { + args.add(getLowerValue(range)).add(getUpperValue(range)); + + if (!limit.isLimited()) { + throw new IllegalArgumentException("Limit must be set using Range queries with XPENDING"); + } + args.add(limit.getCount()); + } + + return createCommand(XPENDING, new NestedMultiOutput<>(codec), args); + } + + public Command> xpending(K key, Consumer consumer, Range range, Limit limit) { + notNullKey(key); + LettuceAssert.notNull(consumer, "Consumer " + MUST_NOT_BE_NULL); + LettuceAssert.notNull(range, "Range " + MUST_NOT_BE_NULL); + LettuceAssert.notNull(limit, "Limit " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).addKey(consumer.group); + + args.add(getLowerValue(range)).add(getUpperValue(range)); + + args.add(limit.isLimited() ? 
limit.getCount() : Long.MAX_VALUE); + args.addKey(consumer.name); + + return createCommand(XPENDING, new NestedMultiOutput<>(codec), args); + } + + public Command>> xrange(K key, Range range, Limit limit) { + notNullKey(key); + LettuceAssert.notNull(range, "Range " + MUST_NOT_BE_NULL); + LettuceAssert.notNull(limit, "Limit " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec).addKey(key); + + args.add(getLowerValue(range)).add(getUpperValue(range)); + + if (limit.isLimited()) { + args.add(COUNT).add(limit.getCount()); + } + + return createCommand(XRANGE, new StreamMessageListOutput<>(codec, key), args); + } + + public Command>> xrevrange(K key, Range range, Limit limit) { + notNullKey(key); + LettuceAssert.notNull(range, "Range " + MUST_NOT_BE_NULL); + LettuceAssert.notNull(limit, "Limit " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec).addKey(key); + + args.add(getUpperValue(range)).add(getLowerValue(range)); + + if (limit.isLimited()) { + args.add(COUNT).add(limit.getCount()); + } + + return createCommand(XREVRANGE, new StreamMessageListOutput<>(codec, key), args); + } + + public Command xtrim(K key, boolean approximateTrimming, long count) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).add(MAXLEN); + + if (approximateTrimming) { + args.add("~"); + } + + args.add(count); + + return createCommand(XTRIM, new IntegerOutput<>(codec), args); + } + + private static String getLowerValue(Range range) { + + if (range.getLower().equals(Boundary.unbounded())) { + return "-"; + } + + return range.getLower().getValue(); + } + + private static String getUpperValue(Range range) { + + if (range.getUpper().equals(Boundary.unbounded())) { + return "+"; + } + + return range.getUpper().getValue(); + } + + public Command>> xread(XReadArgs xReadArgs, StreamOffset[] streams) { + LettuceAssert.notNull(streams, "Streams " + MUST_NOT_BE_NULL); + LettuceAssert.isTrue(streams.length > 0, "Streams " + MUST_NOT_BE_EMPTY); + + CommandArgs args = new CommandArgs<>(codec); + + if (xReadArgs != null) { + xReadArgs.build(args); + } + + args.add("STREAMS"); + + for (StreamOffset stream : streams) { + args.addKey(stream.name); + } + + for (StreamOffset stream : streams) { + args.add(stream.offset); + } + + return createCommand(XREAD, new StreamReadOutput<>(codec), args); + } + + public Command>> xreadgroup(Consumer consumer, XReadArgs xReadArgs, + StreamOffset[] streams) { + LettuceAssert.notNull(streams, "Streams " + MUST_NOT_BE_NULL); + LettuceAssert.isTrue(streams.length > 0, "Streams " + MUST_NOT_BE_EMPTY); + LettuceAssert.notNull(consumer, "Consumer " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec); + + args.add("GROUP").addKeys(consumer.group).addKeys(consumer.name); + + if (xReadArgs != null) { + xReadArgs.build(args); + } + + args.add("STREAMS"); + + for (StreamOffset stream : streams) { + args.addKey(stream.name); + } + + for (XReadArgs.StreamOffset stream : streams) { + args.add(stream.offset); + } + + return createCommand(XREADGROUP, new StreamReadOutput<>(codec), args); + } + + Command>> bzpopmin(long timeout, K... keys) { + notEmpty(keys); + + CommandArgs args = new CommandArgs<>(codec).addKeys(keys).add(timeout); + + return createCommand(BZPOPMIN, new KeyValueScoredValueOutput<>(codec), args); + } + + Command>> bzpopmax(long timeout, K... 
keys) { + notEmpty(keys); + + CommandArgs args = new CommandArgs<>(codec).addKeys(keys).add(timeout); + + return createCommand(BZPOPMAX, new KeyValueScoredValueOutput<>(codec), args); + } + + Command zadd(K key, ZAddArgs zAddArgs, double score, V member) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec).addKey(key); + + if (zAddArgs != null) { + zAddArgs.build(args); + } + args.add(score).addValue(member); + + return createCommand(ZADD, new IntegerOutput<>(codec), args); + } + + @SuppressWarnings("unchecked") + Command zadd(K key, ZAddArgs zAddArgs, Object... scoresAndValues) { + notNullKey(key); + LettuceAssert.notNull(scoresAndValues, "ScoresAndValues " + MUST_NOT_BE_NULL); + LettuceAssert.notEmpty(scoresAndValues, "ScoresAndValues " + MUST_NOT_BE_EMPTY); + LettuceAssert.noNullElements(scoresAndValues, "ScoresAndValues " + MUST_NOT_CONTAIN_NULL_ELEMENTS); + + CommandArgs args = new CommandArgs<>(codec).addKey(key); + + if (zAddArgs != null) { + zAddArgs.build(args); + } + + if (allElementsInstanceOf(scoresAndValues, ScoredValue.class)) { + + for (Object o : scoresAndValues) { + ScoredValue scoredValue = (ScoredValue) o; + + args.add(scoredValue.getScore()); + args.addValue(scoredValue.getValue()); + } + + } else { + LettuceAssert.isTrue(scoresAndValues.length % 2 == 0, + "ScoresAndValues.length must be a multiple of 2 and contain a " + + "sequence of score1, value1, score2, value2, scoreN, valueN"); + + for (int i = 0; i < scoresAndValues.length; i += 2) { + args.add((Double) scoresAndValues[i]); + args.addValue((V) scoresAndValues[i + 1]); + } + } + + return createCommand(ZADD, new IntegerOutput<>(codec), args); + } + + Command zaddincr(K key, ZAddArgs zAddArgs, double score, V member) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec).addKey(key); + + if (zAddArgs != null) { + zAddArgs.build(args); + } + + args.add(INCR); + args.add(score).addValue(member); + + return createCommand(ZADD, new DoubleOutput<>(codec), args); + } + + Command zcard(K key) { + notNullKey(key); + + return createCommand(ZCARD, new IntegerOutput<>(codec), key); + } + + Command zcount(K key, double min, double max) { + return zcount(key, string(min), string(max)); + } + + Command zcount(K key, String min, String max) { + notNullKey(key); + notNullMinMax(min, max); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).add(min).add(max); + return createCommand(ZCOUNT, new IntegerOutput<>(codec), args); + } + + Command zcount(K key, Range range) { + notNullKey(key); + notNullRange(range); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).add(min(range)).add(max(range)); + return createCommand(ZCOUNT, new IntegerOutput<>(codec), args); + } + + Command zincrby(K key, double amount, V member) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).add(amount).addValue(member); + return createCommand(ZINCRBY, new DoubleOutput<>(codec), args); + } + + Command zinterstore(K destination, K... keys) { + notEmpty(keys); + + return zinterstore(destination, new ZStoreArgs(), keys); + } + + Command zinterstore(K destination, ZStoreArgs storeArgs, K... 
keys) { + LettuceAssert.notNull(destination, "Destination " + MUST_NOT_BE_NULL); + LettuceAssert.notNull(storeArgs, "ZStoreArgs " + MUST_NOT_BE_NULL); + notEmpty(keys); + + CommandArgs args = new CommandArgs<>(codec).addKey(destination).add(keys.length).addKeys(keys); + storeArgs.build(args); + return createCommand(ZINTERSTORE, new IntegerOutput<>(codec), args); + } + + RedisCommand zlexcount(K key, String min, String max) { + notNullKey(key); + notNullMinMax(min, max); + + CommandArgs args = new CommandArgs<>(codec); + args.addKey(key).add(min).add(max); + return createCommand(ZLEXCOUNT, new IntegerOutput<>(codec), args); + } + + RedisCommand zlexcount(K key, Range range) { + notNullKey(key); + notNullRange(range); + + CommandArgs args = new CommandArgs<>(codec); + args.addKey(key).add(minValue(range)).add(maxValue(range)); + return createCommand(ZLEXCOUNT, new IntegerOutput<>(codec), args); + } + + Command> zpopmin(K key) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec).addKeys(key); + + return createCommand(ZPOPMIN, new ScoredValueOutput<>(codec), args); + } + + Command>> zpopmin(K key, long count) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec).addKeys(key).add(count); + + return createCommand(ZPOPMIN, new ScoredValueListOutput<>(codec), args); + } + + Command> zpopmax(K key) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec).addKeys(key); + + return createCommand(ZPOPMAX, new ScoredValueOutput<>(codec), args); + } + + Command>> zpopmax(K key, long count) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec).addKeys(key).add(count); + + return createCommand(ZPOPMAX, new ScoredValueListOutput<>(codec), args); + } + + Command> zrange(K key, long start, long stop) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).add(start).add(stop); + return createCommand(ZRANGE, new ValueListOutput<>(codec), args); + } + + Command zrange(ValueStreamingChannel channel, K key, long start, long stop) { + CommandArgs args = new CommandArgs<>(codec).addKey(key).add(start).add(stop); + return createCommand(ZRANGE, new ValueStreamingOutput<>(codec, channel), args); + } + + Command>> zrangeWithScores(K key, long start, long stop) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec); + args.addKey(key).add(start).add(stop).add(WITHSCORES); + return createCommand(ZRANGE, new ScoredValueListOutput<>(codec), args); + } + + Command zrangeWithScores(ScoredValueStreamingChannel channel, K key, long start, long stop) { + notNullKey(key); + notNull(channel); + + CommandArgs args = new CommandArgs<>(codec); + args.addKey(key).add(start).add(stop).add(WITHSCORES); + return createCommand(ZRANGE, new ScoredValueStreamingOutput<>(codec, channel), args); + } + + RedisCommand> zrangebylex(K key, String min, String max) { + notNullKey(key); + notNullMinMax(min, max); + + CommandArgs args = new CommandArgs<>(codec); + args.addKey(key).add(min).add(max); + return createCommand(ZRANGEBYLEX, new ValueListOutput<>(codec), args); + } + + RedisCommand> zrangebylex(K key, String min, String max, long offset, long count) { + notNullKey(key); + notNullMinMax(min, max); + + CommandArgs args = new CommandArgs<>(codec); + addLimit(args.addKey(key).add(min).add(max), Limit.create(offset, count)); + return createCommand(ZRANGEBYLEX, new ValueListOutput<>(codec), args); + } + + RedisCommand> zrangebylex(K key, Range range, Limit limit) { + notNullKey(key); + notNullRange(range); + notNullLimit(limit); + + CommandArgs args = 
new CommandArgs<>(codec); + addLimit(args.addKey(key).add(minValue(range)).add(maxValue(range)), limit); + return createCommand(ZRANGEBYLEX, new ValueListOutput<>(codec), args); + } + + Command> zrangebyscore(K key, double min, double max) { + return zrangebyscore(key, string(min), string(max)); + } + + Command> zrangebyscore(K key, String min, String max) { + notNullKey(key); + notNullMinMax(min, max); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).add(min).add(max); + return createCommand(ZRANGEBYSCORE, new ValueListOutput<>(codec), args); + } + + Command> zrangebyscore(K key, double min, double max, long offset, long count) { + return zrangebyscore(key, string(min), string(max), offset, count); + } + + Command> zrangebyscore(K key, String min, String max, long offset, long count) { + notNullKey(key); + notNullMinMax(min, max); + + CommandArgs args = new CommandArgs<>(codec); + args.addKey(key).add(min).add(max).add(LIMIT).add(offset).add(count); + return createCommand(ZRANGEBYSCORE, new ValueListOutput<>(codec), args); + } + + Command> zrangebyscore(K key, Range range, Limit limit) { + notNullKey(key); + notNullRange(range); + + CommandArgs args = new CommandArgs<>(codec); + args.addKey(key).add(min(range)).add(max(range)); + + if (limit.isLimited()) { + args.add(LIMIT).add(limit.getOffset()).add(limit.getCount()); + } + return createCommand(ZRANGEBYSCORE, new ValueListOutput<>(codec), args); + } + + Command zrangebyscore(ValueStreamingChannel channel, K key, double min, double max) { + return zrangebyscore(channel, key, string(min), string(max)); + } + + Command zrangebyscore(ValueStreamingChannel channel, K key, String min, String max) { + notNullKey(key); + notNullMinMax(min, max); + LettuceAssert.notNull(channel, "ScoredValueStreamingChannel " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).add(min).add(max); + return createCommand(ZRANGEBYSCORE, new ValueStreamingOutput<>(codec, channel), args); + } + + Command zrangebyscore(ValueStreamingChannel channel, K key, double min, double max, long offset, + long count) { + return zrangebyscore(channel, key, string(min), string(max), offset, count); + } + + Command zrangebyscore(ValueStreamingChannel channel, K key, String min, String max, long offset, + long count) { + notNullKey(key); + notNullMinMax(min, max); + LettuceAssert.notNull(channel, "ScoredValueStreamingChannel " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec); + addLimit(args.addKey(key).add(min).add(max), Limit.create(offset, count)); + return createCommand(ZRANGEBYSCORE, new ValueStreamingOutput<>(codec, channel), args); + } + + Command zrangebyscore(ValueStreamingChannel channel, K key, Range range, Limit limit) { + notNullKey(key); + notNullRange(range); + notNullLimit(limit); + LettuceAssert.notNull(channel, "ScoredValueStreamingChannel " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec); + addLimit(args.addKey(key).add(min(range)).add(max(range)), limit); + return createCommand(ZRANGEBYSCORE, new ValueStreamingOutput<>(codec, channel), args); + } + + Command>> zrangebyscoreWithScores(K key, double min, double max) { + return zrangebyscoreWithScores(key, string(min), string(max)); + } + + Command>> zrangebyscoreWithScores(K key, String min, String max) { + notNullKey(key); + notNullMinMax(min, max); + + CommandArgs args = new CommandArgs<>(codec); + args.addKey(key).add(min).add(max).add(WITHSCORES); + return createCommand(ZRANGEBYSCORE, new ScoredValueListOutput<>(codec), args); + } + 
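+    /*
+     * Illustrative usage sketch (not part of this change set): roughly how the Range/Limit
+     * based ZRANGEBYSCORE command builders above are typically driven through the synchronous
+     * API. The connection URI, key name and values below are assumptions made for the example
+     * only, not taken from this patch.
+     *
+     *   RedisClient client = RedisClient.create("redis://localhost:6379");
+     *   StatefulRedisConnection<String, String> connection = client.connect();
+     *   RedisCommands<String, String> commands = connection.sync();
+     *
+     *   commands.zadd("ranking", 1.0, "a", 2.0, "b", 3.0, "c");
+     *
+     *   // members with scores in [1.0, 2.0], returning at most 10 elements starting at offset 0
+     *   List<String> page = commands.zrangebyscore("ranking",
+     *           Range.create(1.0, 2.0), Limit.create(0, 10));
+     *
+     *   connection.close();
+     *   client.shutdown();
+     */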
+ Command>> zrangebyscoreWithScores(K key, double min, double max, long offset, long count) { + return zrangebyscoreWithScores(key, string(min), string(max), offset, count); + } + + Command>> zrangebyscoreWithScores(K key, String min, String max, long offset, long count) { + notNullKey(key); + notNullMinMax(min, max); + + CommandArgs args = new CommandArgs<>(codec); + addLimit(args.addKey(key).add(min).add(max).add(WITHSCORES), Limit.create(offset, count)); + return createCommand(ZRANGEBYSCORE, new ScoredValueListOutput<>(codec), args); + } + + Command>> zrangebyscoreWithScores(K key, Range range, Limit limit) { + notNullKey(key); + notNullRange(range); + notNullLimit(limit); + + CommandArgs args = new CommandArgs<>(codec); + addLimit(args.addKey(key).add(min(range)).add(max(range)).add(WITHSCORES), limit); + return createCommand(ZRANGEBYSCORE, new ScoredValueListOutput<>(codec), args); + } + + Command zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double min, double max) { + return zrangebyscoreWithScores(channel, key, string(min), string(max)); + } + + Command zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String min, String max) { + notNullKey(key); + notNullMinMax(min, max); + notNull(channel); + + CommandArgs args = new CommandArgs<>(codec); + args.addKey(key).add(min).add(max).add(WITHSCORES); + return createCommand(ZRANGEBYSCORE, new ScoredValueStreamingOutput<>(codec, channel), args); + } + + Command zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double min, double max, + long offset, long count) { + return zrangebyscoreWithScores(channel, key, string(min), string(max), offset, count); + } + + Command zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String min, String max, + long offset, long count) { + notNullKey(key); + notNullMinMax(min, max); + notNull(channel); + + CommandArgs args = new CommandArgs<>(codec); + addLimit(args.addKey(key).add(min).add(max).add(WITHSCORES), Limit.create(offset, count)); + return createCommand(ZRANGEBYSCORE, new ScoredValueStreamingOutput<>(codec, channel), args); + } + + Command zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, Range range, + Limit limit) { + notNullKey(key); + notNullRange(range); + notNullLimit(limit); + notNull(channel); + + CommandArgs args = new CommandArgs<>(codec); + addLimit(args.addKey(key).add(min(range)).add(max(range)).add(WITHSCORES), limit); + return createCommand(ZRANGEBYSCORE, new ScoredValueStreamingOutput<>(codec, channel), args); + } + + Command zrank(K key, V member) { + notNullKey(key); + + return createCommand(ZRANK, new IntegerOutput<>(codec), key, member); + } + + Command zrem(K key, V... 
members) { + notNullKey(key); + LettuceAssert.notNull(members, "Members " + MUST_NOT_BE_NULL); + LettuceAssert.notEmpty(members, "Members " + MUST_NOT_BE_EMPTY); + + return createCommand(ZREM, new IntegerOutput<>(codec), key, members); + } + + RedisCommand zremrangebylex(K key, String min, String max) { + notNullKey(key); + notNullMinMax(min, max); + + CommandArgs args = new CommandArgs<>(codec); + args.addKey(key).add(min).add(max); + return createCommand(ZREMRANGEBYLEX, new IntegerOutput<>(codec), args); + } + + RedisCommand zremrangebylex(K key, Range range) { + notNullKey(key); + notNullRange(range); + + CommandArgs args = new CommandArgs<>(codec); + args.addKey(key).add(minValue(range)).add(maxValue(range)); + return createCommand(ZREMRANGEBYLEX, new IntegerOutput<>(codec), args); + } + + Command zremrangebyrank(K key, long start, long stop) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).add(start).add(stop); + return createCommand(ZREMRANGEBYRANK, new IntegerOutput<>(codec), args); + } + + Command zremrangebyscore(K key, double min, double max) { + return zremrangebyscore(key, string(min), string(max)); + } + + Command zremrangebyscore(K key, String min, String max) { + notNullKey(key); + notNullMinMax(min, max); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).add(min).add(max); + return createCommand(ZREMRANGEBYSCORE, new IntegerOutput<>(codec), args); + } + + Command zremrangebyscore(K key, Range range) { + notNullKey(key); + notNullRange(range); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).add(min(range)).add(max(range)); + return createCommand(ZREMRANGEBYSCORE, new IntegerOutput<>(codec), args); + } + + Command> zrevrange(K key, long start, long stop) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).add(start).add(stop); + return createCommand(ZREVRANGE, new ValueListOutput<>(codec), args); + } + + Command zrevrange(ValueStreamingChannel channel, K key, long start, long stop) { + notNullKey(key); + notNull(channel); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).add(start).add(stop); + return createCommand(ZREVRANGE, new ValueStreamingOutput<>(codec, channel), args); + } + + Command>> zrevrangeWithScores(K key, long start, long stop) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec); + args.addKey(key).add(start).add(stop).add(WITHSCORES); + return createCommand(ZREVRANGE, new ScoredValueListOutput<>(codec), args); + } + + Command zrevrangeWithScores(ScoredValueStreamingChannel channel, K key, long start, long stop) { + notNullKey(key); + LettuceAssert.notNull(channel, "ValueStreamingChannel " + MUST_NOT_BE_NULL); + + CommandArgs args = new CommandArgs<>(codec); + args.addKey(key).add(start).add(stop).add(WITHSCORES); + return createCommand(ZREVRANGE, new ScoredValueStreamingOutput<>(codec, channel), args); + } + + Command> zrevrangebylex(K key, Range range, Limit limit) { + notNullKey(key); + notNullRange(range); + notNullLimit(limit); + + CommandArgs args = new CommandArgs<>(codec); + addLimit(args.addKey(key).add(maxValue(range)).add(minValue(range)), limit); + return createCommand(ZREVRANGEBYLEX, new ValueListOutput<>(codec), args); + } + + Command> zrevrangebyscore(K key, double max, double min) { + return zrevrangebyscore(key, string(max), string(min)); + } + + Command> zrevrangebyscore(K key, String max, String min) { + notNullKey(key); + notNullMinMax(min, max); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).add(max).add(min); + return 
createCommand(ZREVRANGEBYSCORE, new ValueListOutput<>(codec), args); + } + + Command> zrevrangebyscore(K key, double max, double min, long offset, long count) { + return zrevrangebyscore(key, string(max), string(min), offset, count); + } + + Command> zrevrangebyscore(K key, String max, String min, long offset, long count) { + notNullKey(key); + notNullMinMax(min, max); + + CommandArgs args = new CommandArgs<>(codec); + addLimit(args.addKey(key).add(max).add(min), Limit.create(offset, count)); + return createCommand(ZREVRANGEBYSCORE, new ValueListOutput<>(codec), args); + } + + Command> zrevrangebyscore(K key, Range range, Limit limit) { + notNullKey(key); + notNullRange(range); + notNullLimit(limit); + + CommandArgs args = new CommandArgs<>(codec); + addLimit(args.addKey(key).add(max(range)).add(min(range)), limit); + return createCommand(ZREVRANGEBYSCORE, new ValueListOutput<>(codec), args); + } + + Command zrevrangebyscore(ValueStreamingChannel channel, K key, double max, double min) { + return zrevrangebyscore(channel, key, string(max), string(min)); + } + + Command zrevrangebyscore(ValueStreamingChannel channel, K key, String max, String min) { + notNullKey(key); + notNullMinMax(min, max); + notNull(channel); + + CommandArgs args = new CommandArgs<>(codec).addKey(key).add(max).add(min); + return createCommand(ZREVRANGEBYSCORE, new ValueStreamingOutput<>(codec, channel), args); + } + + Command zrevrangebyscore(ValueStreamingChannel channel, K key, double max, double min, long offset, + long count) { + return zrevrangebyscore(channel, key, string(max), string(min), offset, count); + } + + Command zrevrangebyscore(ValueStreamingChannel channel, K key, String max, String min, long offset, + long count) { + notNullKey(key); + notNullMinMax(min, max); + notNull(channel); + + CommandArgs args = new CommandArgs<>(codec); + addLimit(args.addKey(key).add(max).add(min), Limit.create(offset, count)); + return createCommand(ZREVRANGEBYSCORE, new ValueStreamingOutput<>(codec, channel), args); + } + + Command zrevrangebyscore(ValueStreamingChannel channel, K key, Range range, Limit limit) { + notNullKey(key); + notNullRange(range); + notNullLimit(limit); + notNull(channel); + + CommandArgs args = new CommandArgs<>(codec); + addLimit(args.addKey(key).add(max(range)).add(min(range)), limit); + return createCommand(ZREVRANGEBYSCORE, new ValueStreamingOutput<>(codec, channel), args); + } + + Command>> zrevrangebyscoreWithScores(K key, double max, double min) { + return zrevrangebyscoreWithScores(key, string(max), string(min)); + } + + Command>> zrevrangebyscoreWithScores(K key, String max, String min) { + notNullKey(key); + notNullMinMax(min, max); + + CommandArgs args = new CommandArgs<>(codec); + args.addKey(key).add(max).add(min).add(WITHSCORES); + return createCommand(ZREVRANGEBYSCORE, new ScoredValueListOutput<>(codec), args); + } + + Command>> zrevrangebyscoreWithScores(K key, double max, double min, long offset, long count) { + return zrevrangebyscoreWithScores(key, string(max), string(min), offset, count); + } + + Command>> zrevrangebyscoreWithScores(K key, String max, String min, long offset, long count) { + notNullKey(key); + notNullMinMax(min, max); + + CommandArgs args = new CommandArgs<>(codec); + addLimit(args.addKey(key).add(max).add(min).add(WITHSCORES), Limit.create(offset, count)); + return createCommand(ZREVRANGEBYSCORE, new ScoredValueListOutput<>(codec), args); + } + + Command>> zrevrangebyscoreWithScores(K key, Range range, Limit limit) { + notNullKey(key); + notNullRange(range); + 
notNullLimit(limit); + + CommandArgs args = new CommandArgs<>(codec); + addLimit(args.addKey(key).add(max(range)).add(min(range)).add(WITHSCORES), limit); + return createCommand(ZREVRANGEBYSCORE, new ScoredValueListOutput<>(codec), args); + } + + Command zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double max, double min) { + return zrevrangebyscoreWithScores(channel, key, string(max), string(min)); + } + + Command zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String max, String min) { + notNullKey(key); + notNullMinMax(min, max); + notNull(channel); + + CommandArgs args = new CommandArgs<>(codec); + args.addKey(key).add(max).add(min).add(WITHSCORES); + return createCommand(ZREVRANGEBYSCORE, new ScoredValueStreamingOutput<>(codec, channel), args); + } + + Command zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double max, double min, + long offset, long count) { + notNullKey(key); + LettuceAssert.notNull(min, "Min " + MUST_NOT_BE_NULL); + LettuceAssert.notNull(max, "Max " + MUST_NOT_BE_NULL); + notNull(channel); + return zrevrangebyscoreWithScores(channel, key, string(max), string(min), offset, count); + } + + Command zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String max, String min, + long offset, long count) { + notNullKey(key); + notNullMinMax(min, max); + notNull(channel); + + CommandArgs args = new CommandArgs<>(codec); + addLimit(args.addKey(key).add(max).add(min).add(WITHSCORES), Limit.create(offset, count)); + return createCommand(ZREVRANGEBYSCORE, new ScoredValueStreamingOutput<>(codec, channel), args); + } + + Command zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, Range range, + Limit limit) { + notNullKey(key); + notNullRange(range); + notNullLimit(limit); + notNull(channel); + + CommandArgs args = new CommandArgs<>(codec); + addLimit(args.addKey(key).add(max(range)).add(min(range)).add(WITHSCORES), limit); + return createCommand(ZREVRANGEBYSCORE, new ScoredValueStreamingOutput<>(codec, channel), args); + } + + Command zrevrank(K key, V member) { + notNullKey(key); + + return createCommand(ZREVRANK, new IntegerOutput<>(codec), key, member); + } + + Command> zscan(K key) { + notNullKey(key); + + return zscan(key, ScanCursor.INITIAL, null); + } + + Command> zscan(K key, ScanCursor scanCursor) { + notNullKey(key); + + return zscan(key, scanCursor, null); + } + + Command> zscan(K key, ScanArgs scanArgs) { + notNullKey(key); + + return zscan(key, ScanCursor.INITIAL, scanArgs); + } + + Command> zscan(K key, ScanCursor scanCursor, ScanArgs scanArgs) { + notNullKey(key); + + CommandArgs args = new CommandArgs<>(codec); + args.addKey(key); + + scanArgs(scanCursor, scanArgs, args); + + ScoredValueScanOutput output = new ScoredValueScanOutput<>(codec); + return createCommand(ZSCAN, output, args); + } + + Command zscanStreaming(ScoredValueStreamingChannel channel, K key) { + notNullKey(key); + notNull(channel); + + return zscanStreaming(channel, key, ScanCursor.INITIAL, null); + } + + Command zscanStreaming(ScoredValueStreamingChannel channel, K key, ScanCursor scanCursor) { + notNullKey(key); + notNull(channel); + + return zscanStreaming(channel, key, scanCursor, null); + } + + Command zscanStreaming(ScoredValueStreamingChannel channel, K key, ScanArgs scanArgs) { + notNullKey(key); + notNull(channel); + + return zscanStreaming(channel, key, ScanCursor.INITIAL, scanArgs); + } + + Command zscanStreaming(ScoredValueStreamingChannel channel, K key, ScanCursor 
scanCursor, + ScanArgs scanArgs) { + notNullKey(key); + notNull(channel); + + CommandArgs args = new CommandArgs<>(codec); + + args.addKey(key); + scanArgs(scanCursor, scanArgs, args); + + ScoredValueScanStreamingOutput output = new ScoredValueScanStreamingOutput<>(codec, channel); + return createCommand(ZSCAN, output, args); + } + + Command zscore(K key, V member) { + notNullKey(key); + + return createCommand(ZSCORE, new DoubleOutput<>(codec), key, member); + } + + Command zunionstore(K destination, K... keys) { + notEmpty(keys); + LettuceAssert.notNull(destination, "Destination " + MUST_NOT_BE_NULL); + + return zunionstore(destination, new ZStoreArgs(), keys); + } + + Command zunionstore(K destination, ZStoreArgs storeArgs, K... keys) { + notEmpty(keys); + + CommandArgs args = new CommandArgs<>(codec); + args.addKey(destination).add(keys.length).addKeys(keys); + storeArgs.build(args); + return createCommand(ZUNIONSTORE, new IntegerOutput<>(codec), args); + } + + private boolean allElementsInstanceOf(Object[] objects, Class expectedAssignableType) { + + for (Object object : objects) { + if (!expectedAssignableType.isAssignableFrom(object.getClass())) { + return false; + } + } + + return true; + } + + private byte[] maxValue(Range range) { + + Boundary upper = range.getUpper(); + + if (upper.getValue() == null) { + return PLUS_BYTES; + } + + ByteBuffer encoded = codec.encodeValue(upper.getValue()); + ByteBuffer allocated = ByteBuffer.allocate(encoded.remaining() + 1); + allocated.put(upper.isIncluding() ? (byte) '[' : (byte) '(').put(encoded); + + return allocated.array(); + } + + private byte[] minValue(Range range) { + + Boundary lower = range.getLower(); + + if (lower.getValue() == null) { + return MINUS_BYTES; + } + + ByteBuffer encoded = codec.encodeValue(lower.getValue()); + ByteBuffer allocated = ByteBuffer.allocate(encoded.remaining() + 1); + allocated.put(lower.isIncluding() ? 
(byte) '[' : (byte) '(').put(encoded); + + return allocated.array(); + } + + static void notNull(ScoredValueStreamingChannel channel) { + LettuceAssert.notNull(channel, "ScoredValueStreamingChannel " + MUST_NOT_BE_NULL); + } + + static void notNull(KeyStreamingChannel channel) { + LettuceAssert.notNull(channel, "KeyValueStreamingChannel " + MUST_NOT_BE_NULL); + } + + static void notNull(ValueStreamingChannel channel) { + LettuceAssert.notNull(channel, "ValueStreamingChannel " + MUST_NOT_BE_NULL); + } + + static void notNull(KeyValueStreamingChannel channel) { + LettuceAssert.notNull(channel, "KeyValueStreamingChannel " + MUST_NOT_BE_NULL); + } + + static void notNullMinMax(String min, String max) { + LettuceAssert.notNull(min, "Min " + MUST_NOT_BE_NULL); + LettuceAssert.notNull(max, "Max " + MUST_NOT_BE_NULL); + } + + private static void addLimit(CommandArgs args, Limit limit) { + + if (limit.isLimited()) { + args.add(LIMIT).add(limit.getOffset()).add(limit.getCount()); + } + } + + private static void assertNodeId(String nodeId) { + LettuceAssert.notNull(nodeId, "NodeId " + MUST_NOT_BE_NULL); + LettuceAssert.notEmpty(nodeId, "NodeId " + MUST_NOT_BE_EMPTY); + } + + private static String max(Range range) { + + Boundary upper = range.getUpper(); + + if (upper.getValue() == null + || upper.getValue() instanceof Double && upper.getValue().doubleValue() == Double.POSITIVE_INFINITY) { + return "+inf"; + } + + if (!upper.isIncluding()) { + return "(" + upper.getValue(); + } + + return upper.getValue().toString(); + } + + private static String min(Range range) { + + Boundary lower = range.getLower(); + + if (lower.getValue() == null + || lower.getValue() instanceof Double && lower.getValue().doubleValue() == Double.NEGATIVE_INFINITY) { + return "-inf"; + } + + if (!lower.isIncluding()) { + return "(" + lower.getValue(); + } + + return lower.getValue().toString(); + } + + private static void notEmpty(Object[] keys) { + LettuceAssert.notNull(keys, "Keys " + MUST_NOT_BE_NULL); + LettuceAssert.notEmpty(keys, "Keys " + MUST_NOT_BE_EMPTY); + } + + private static void notEmptySlots(int[] slots) { + LettuceAssert.notNull(slots, "Slots " + MUST_NOT_BE_NULL); + LettuceAssert.notEmpty(slots, "Slots " + MUST_NOT_BE_EMPTY); + } + + private static void notEmptyValues(Object[] values) { + LettuceAssert.notNull(values, "Values " + MUST_NOT_BE_NULL); + LettuceAssert.notEmpty(values, "Values " + MUST_NOT_BE_EMPTY); + } + + private static void notNullKey(Object key) { + LettuceAssert.notNull(key, "Key " + MUST_NOT_BE_NULL); + } + + private static void notNullLimit(Limit limit) { + LettuceAssert.notNull(limit, "Limit " + MUST_NOT_BE_NULL); + } + + private static void notNullRange(Range range) { + LettuceAssert.notNull(range, "Range " + MUST_NOT_BE_NULL); + } +} diff --git a/src/main/java/io/lettuce/core/RedisCommandExecutionException.java b/src/main/java/io/lettuce/core/RedisCommandExecutionException.java new file mode 100644 index 0000000000..23b3fd5f9f --- /dev/null +++ b/src/main/java/io/lettuce/core/RedisCommandExecutionException.java @@ -0,0 +1,54 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +/** + * Exception for errors states reported by Redis. + * + * @author Mark Paluch + */ +@SuppressWarnings("serial") +public class RedisCommandExecutionException extends RedisException { + + /** + * Create a {@code RedisCommandExecutionException} with the specified detail message. + * + * @param msg the detail message. + */ + public RedisCommandExecutionException(String msg) { + super(msg); + } + + /** + * Create a {@code RedisCommandExecutionException} with the specified detail message and nested exception. + * + * @param msg the detail message. + * @param cause the nested exception. + */ + public RedisCommandExecutionException(String msg, Throwable cause) { + super(msg, cause); + } + + /** + * Create a {@code RedisCommandExecutionException} with the specified nested exception. + * + * @param cause the nested exception. + */ + public RedisCommandExecutionException(Throwable cause) { + super(cause); + } + +} diff --git a/src/main/java/io/lettuce/core/RedisCommandInterruptedException.java b/src/main/java/io/lettuce/core/RedisCommandInterruptedException.java new file mode 100644 index 0000000000..fddd622430 --- /dev/null +++ b/src/main/java/io/lettuce/core/RedisCommandInterruptedException.java @@ -0,0 +1,35 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +/** + * Exception thrown when the thread executing a redis command is interrupted. + * + * @author Will Glozer + * @author Mark Paluch + */ +@SuppressWarnings("serial") +public class RedisCommandInterruptedException extends RedisException { + + /** + * Create a {@code RedisCommandInterruptedException} with the specified nested exception. + * + * @param cause the nested exception. + */ + public RedisCommandInterruptedException(Throwable cause) { + super("Command interrupted", cause); + } +} diff --git a/src/main/java/io/lettuce/core/RedisCommandTimeoutException.java b/src/main/java/io/lettuce/core/RedisCommandTimeoutException.java new file mode 100644 index 0000000000..0c6dee4ba4 --- /dev/null +++ b/src/main/java/io/lettuce/core/RedisCommandTimeoutException.java @@ -0,0 +1,50 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +/** + * Exception thrown when the command waiting timeout is exceeded. + * + * @author Mark Paluch + */ +@SuppressWarnings("serial") +public class RedisCommandTimeoutException extends RedisException { + + /** + * Create a {@code RedisCommandTimeoutException} with a default message. + */ + public RedisCommandTimeoutException() { + super("Command timed out"); + } + + /** + * Create a {@code RedisCommandTimeoutException} with the specified detail message. + * + * @param msg the detail message. + */ + public RedisCommandTimeoutException(String msg) { + super(msg); + } + + /** + * Create a {@code RedisException} with the specified nested exception. + * + * @param cause the nested exception. + */ + public RedisCommandTimeoutException(Throwable cause) { + super(cause); + } +} diff --git a/src/main/java/io/lettuce/core/RedisConnectionException.java b/src/main/java/io/lettuce/core/RedisConnectionException.java new file mode 100644 index 0000000000..a0f3457040 --- /dev/null +++ b/src/main/java/io/lettuce/core/RedisConnectionException.java @@ -0,0 +1,105 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.net.SocketAddress; + +/** + * Exception for connection failures. + * + * @author Mark Paluch + */ +@SuppressWarnings("serial") +public class RedisConnectionException extends RedisException { + + /** + * Create a {@code RedisConnectionException} with the specified detail message. + * + * @param msg the detail message. + */ + public RedisConnectionException(String msg) { + super(msg); + } + + /** + * Create a {@code RedisConnectionException} with the specified detail message and nested exception. + * + * @param msg the detail message. + * @param cause the nested exception. + */ + public RedisConnectionException(String msg, Throwable cause) { + super(msg, cause); + } + + /** + * Create a new {@link RedisConnectionException} given {@link SocketAddress} and the {@link Throwable cause}. + * + * @param remoteAddress remote socket address. + * @param cause the nested exception. + * @return the {@link RedisConnectionException}. + * @since 4.4 + */ + public static RedisConnectionException create(SocketAddress remoteAddress, Throwable cause) { + return create(remoteAddress == null ? null : remoteAddress.toString(), cause); + } + + /** + * Create a new {@link RedisConnectionException} given {@code remoteAddress} and the {@link Throwable cause}. + * + * @param remoteAddress remote address. + * @param cause the nested exception. 
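RedisCommandTimeoutException above is raised when the synchronous command-waiting timeout elapses before Redis answers. A short sketch of tuning that timeout per connection; setTimeout(Duration) and the blocking BLPOP call are regular Lettuce API, while the client and the concrete values are assumptions for illustration:

StatefulRedisConnection<String, String> connection = client.connect();
connection.setTimeout(Duration.ofMillis(500)); // sync calls now give up after 500 ms

try {
    connection.sync().blpop(1, "missing-key"); // the server-side block may outlive the client timeout
} catch (RedisCommandTimeoutException e) {
    System.err.println(e.getMessage()); // e.g. "Command timed out"
}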
+ * @return the {@link RedisConnectionException}. + * @since 5.1 + */ + public static RedisConnectionException create(String remoteAddress, Throwable cause) { + + if (remoteAddress == null) { + + if (cause instanceof RedisConnectionException) { + return new RedisConnectionException(cause.getMessage(), cause.getCause()); + } + + return new RedisConnectionException(null, cause); + } + + return new RedisConnectionException(String.format("Unable to connect to %s", remoteAddress), cause); + } + + /** + * Create a new {@link RedisConnectionException} given {@link Throwable cause}. + * + * @param cause the exception. + * @return the {@link RedisConnectionException}. + * @since 5.1 + */ + public static RedisConnectionException create(Throwable cause) { + + if (cause instanceof RedisConnectionException) { + return new RedisConnectionException(cause.getMessage(), cause.getCause()); + } + + return new RedisConnectionException("Unable to connect", cause); + } + + /** + * @param error the error message. + * @return {@literal true} if the {@code error} message indicates Redis protected mode. + * @since 5.0.1 + */ + public static boolean isProtectedMode(String error) { + return error != null && error.startsWith("DENIED"); + } +} diff --git a/src/main/java/io/lettuce/core/RedisConnectionStateAdapter.java b/src/main/java/io/lettuce/core/RedisConnectionStateAdapter.java new file mode 100644 index 0000000000..bf3e26a569 --- /dev/null +++ b/src/main/java/io/lettuce/core/RedisConnectionStateAdapter.java @@ -0,0 +1,42 @@ +/* + * Copyright 2016-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.net.SocketAddress; + +/** + * Convenience adapter with an empty implementation of all {@link RedisConnectionStateListener} callback methods. + * + * @author Mark Paluch + * @since 4.4 + */ +public class RedisConnectionStateAdapter implements RedisConnectionStateListener { + + @Override + public void onRedisConnected(RedisChannelHandler connection, SocketAddress socketAddress) { + // empty adapter method + } + + @Override + public void onRedisDisconnected(RedisChannelHandler connection) { + // empty adapter method + } + + @Override + public void onRedisExceptionCaught(RedisChannelHandler connection, Throwable cause) { + // empty adapter method + } +} diff --git a/src/main/java/io/lettuce/core/RedisConnectionStateListener.java b/src/main/java/io/lettuce/core/RedisConnectionStateListener.java new file mode 100644 index 0000000000..7ba22ddd77 --- /dev/null +++ b/src/main/java/io/lettuce/core/RedisConnectionStateListener.java @@ -0,0 +1,65 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.net.SocketAddress; + +/** + * Simple interface for Redis connection state monitoring. + * + * @author ze + * @author Mark Paluch + */ +public interface RedisConnectionStateListener { + + /** + * Event handler for successful connection event. + * + * @param connection Source connection. + * @deprecated since 4.4, use {@link RedisConnectionStateListener#onRedisConnected(RedisChannelHandler, SocketAddress)}. + */ + @Deprecated + default void onRedisConnected(RedisChannelHandler connection) { + } + + /** + * Event handler for successful connection event. Delegates by default to {@link #onRedisConnected(RedisChannelHandler)}. + * + * @param connection Source connection. + * @param socketAddress remote {@link SocketAddress}. + * @since 4.4 + */ + default void onRedisConnected(RedisChannelHandler connection, SocketAddress socketAddress) { + onRedisConnected(connection); + } + + /** + * Event handler for disconnection event. + * + * @param connection Source connection. + */ + void onRedisDisconnected(RedisChannelHandler connection); + + /** + * + * Event handler for exceptions. + * + * @param connection Source connection. + * + * @param cause Caught exception. + */ + void onRedisExceptionCaught(RedisChannelHandler connection, Throwable cause); +} diff --git a/src/main/java/io/lettuce/core/RedisException.java b/src/main/java/io/lettuce/core/RedisException.java new file mode 100644 index 0000000000..3cdef2212a --- /dev/null +++ b/src/main/java/io/lettuce/core/RedisException.java @@ -0,0 +1,54 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +/** + * Exception thrown when Redis returns an error message, or when the client fails for any reason. + * + * @author Will Glozer + * @author Mark Paluch + */ +@SuppressWarnings("serial") +public class RedisException extends RuntimeException { + + /** + * Create a {@code RedisException} with the specified detail message. + * + * @param msg the detail message. + */ + public RedisException(String msg) { + super(msg); + } + + /** + * Create a {@code RedisException} with the specified detail message and nested exception. + * + * @param msg the detail message. + * @param cause the nested exception. + */ + public RedisException(String msg, Throwable cause) { + super(msg, cause); + } + + /** + * Create a {@code RedisException} with the specified nested exception. + * + * @param cause the nested exception. 
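RedisConnectionStateAdapter above lets a listener override only the callbacks it needs. A minimal sketch of registering such a listener, assuming the addListener method exposed by RedisClient/AbstractRedisClient in current Lettuce releases:

RedisClient client = RedisClient.create("redis://localhost");

// Only the disconnect callback is overridden; the adapter supplies empty defaults for the rest.
client.addListener(new RedisConnectionStateAdapter() {
    @Override
    public void onRedisDisconnected(RedisChannelHandler<?, ?> connection) {
        System.out.println("Connection lost: " + connection);
    }
});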
+ */ + public RedisException(Throwable cause) { + super(cause); + } +} diff --git a/src/main/java/io/lettuce/core/RedisFuture.java b/src/main/java/io/lettuce/core/RedisFuture.java new file mode 100644 index 0000000000..4cee8bc443 --- /dev/null +++ b/src/main/java/io/lettuce/core/RedisFuture.java @@ -0,0 +1,47 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.util.concurrent.CompletionStage; +import java.util.concurrent.Future; +import java.util.concurrent.TimeUnit; + +/** + * A {@code RedisFuture} represents the result of an asynchronous computation, extending {@link CompletionStage}. The execution + * of the notification happens either on finish of the future execution or, if the future is completed already, immediately. + * + * @param Value type. + * @author Mark Paluch + * @since 3.0 + */ +public interface RedisFuture extends CompletionStage, Future { + + /** + * @return error text, if any error occurred. + */ + String getError(); + + /** + * Wait up to the specified time for the command output to become available. + * + * @param timeout Maximum time to wait for a result. + * @param unit Unit of time for the timeout. + * + * @return true if the output became available. + * @throws InterruptedException if the current thread is interrupted while waiting + */ + boolean await(long timeout, TimeUnit unit) throws InterruptedException; +} diff --git a/src/main/java/io/lettuce/core/RedisHandshake.java b/src/main/java/io/lettuce/core/RedisHandshake.java new file mode 100644 index 0000000000..b449355de7 --- /dev/null +++ b/src/main/java/io/lettuce/core/RedisHandshake.java @@ -0,0 +1,231 @@ +/* + * Copyright 2019-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.util.ArrayList; +import java.util.List; +import java.util.Map; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.CompletionStage; + +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.internal.Futures; +import io.lettuce.core.protocol.AsyncCommand; +import io.lettuce.core.protocol.Command; +import io.lettuce.core.protocol.ConnectionInitializer; +import io.lettuce.core.protocol.ProtocolVersion; +import io.netty.channel.Channel; + +/** + * Redis RESP2/RESP3 handshake using the configured {@link ProtocolVersion} and other options for connection initialization and + * connection state restoration. 
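RedisFuture above combines CompletionStage and Future, so a result can be consumed through non-blocking callbacks or awaited with a timeout. A brief usage sketch; the connection is assumed to come from a previously created client:

StatefulRedisConnection<String, String> connection = client.connect();
RedisFuture<String> future = connection.async().get("key");

// Non-blocking: react when the value arrives.
future.thenAccept(value -> System.out.println("Got: " + value));

// Or wait up to one second for completion without throwing on timeout.
if (future.await(1, TimeUnit.SECONDS)) {
    System.out.println("Error, if any: " + future.getError());
}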
This class is part of the internal API. + * + * @author Mark Paluch + * @author Tugdual Grall + * @since 6.0 + */ +class RedisHandshake implements ConnectionInitializer { + + private final RedisCommandBuilder commandBuilder = new RedisCommandBuilder<>(StringCodec.UTF8); + + private final ProtocolVersion requestedProtocolVersion; + private final boolean pingOnConnect; + private final ConnectionState connectionState; + + private volatile ProtocolVersion negotiatedProtocolVersion; + + RedisHandshake(ProtocolVersion requestedProtocolVersion, boolean pingOnConnect, ConnectionState connectionState) { + + this.requestedProtocolVersion = requestedProtocolVersion; + this.pingOnConnect = pingOnConnect; + this.connectionState = connectionState; + } + + /** + * @return the requested {@link ProtocolVersion}. May be {@literal null} if not configured. + */ + public ProtocolVersion getRequestedProtocolVersion() { + return requestedProtocolVersion; + } + + /** + * @return the negotiated {@link ProtocolVersion} once the handshake is done. + */ + public ProtocolVersion getNegotiatedProtocolVersion() { + return negotiatedProtocolVersion; + } + + @Override + public CompletionStage initialize(Channel channel) { + + CompletableFuture handshake; + + if (this.requestedProtocolVersion == ProtocolVersion.RESP2) { + handshake = initializeResp2(channel); + negotiatedProtocolVersion = ProtocolVersion.RESP2; + } else if (this.requestedProtocolVersion == ProtocolVersion.RESP3) { + handshake = initializeResp3(channel); + } else if (this.requestedProtocolVersion == null) { + handshake = tryHandshakeResp3(channel); + } else { + handshake = Futures.failed( + new RedisConnectionException("Protocol version" + this.requestedProtocolVersion + " not supported")); + } + + return handshake.thenCompose(ignore -> applyPostHandshake(channel, getNegotiatedProtocolVersion())); + } + + private CompletableFuture tryHandshakeResp3(Channel channel) { + + CompletableFuture handshake = new CompletableFuture<>(); + AsyncCommand> hello = initiateHandshakeResp3(channel); + + hello.whenComplete((settings, throwable) -> { + + if (throwable != null) { + if (isUnknownCommand(hello.getError())) { + fallbackToResp2(channel, handshake); + } else { + handshake.completeExceptionally(throwable); + } + } else { + handshake.complete(null); + } + }); + + return handshake; + } + + private void fallbackToResp2(Channel channel, CompletableFuture handshake) { + + initializeResp2(channel).whenComplete((o, nested) -> { + + if (nested != null) { + handshake.completeExceptionally(nested); + } else { + handshake.complete(null); + } + }); + } + + private CompletableFuture initializeResp2(Channel channel) { + return initiateHandshakeResp2(channel).thenRun(() -> { + negotiatedProtocolVersion = ProtocolVersion.RESP2; + + connectionState.setHandshakeResponse( + new ConnectionState.HandshakeResponse(negotiatedProtocolVersion, null, null, null, null)); + }); + } + + private CompletableFuture initializeResp3(Channel channel) { + return initiateHandshakeResp3(channel).thenAccept(response -> { + + Long id = (Long) response.get("id"); + String mode = (String) response.get("mode"); + String version = (String) response.get("version"); + String role = (String) response.get("role"); + + negotiatedProtocolVersion = ProtocolVersion.RESP3; + + connectionState.setHandshakeResponse( + new ConnectionState.HandshakeResponse(negotiatedProtocolVersion, id, version, mode, role)); + }); + } + + /** + * Perform a RESP2 Handshake: Issue a {@code PING} or {@code AUTH}. 
+ * + * @param channel + * @return + */ + private CompletableFuture initiateHandshakeResp2(Channel channel) { + + if (connectionState.hasUsername()) { + return dispatch(channel, this.commandBuilder.auth(connectionState.getUsername(), connectionState.getPassword())); + } else if (connectionState.hasPassword()) { + return dispatch(channel, this.commandBuilder.auth(connectionState.getPassword())); + } else if (this.pingOnConnect) { + return dispatch(channel, this.commandBuilder.ping()); + } + + return CompletableFuture.completedFuture(null); + } + + /** + * Perform a RESP3 Handshake: Issue a {@code HELLO}. + * + * @param channel + * @return + */ + private AsyncCommand> initiateHandshakeResp3(Channel channel) { + + if (connectionState.hasPassword()) { + + return dispatch(channel, this.commandBuilder.hello(3, + LettuceStrings.isNotEmpty(connectionState.getUsername()) ? connectionState.getUsername() : "default", + connectionState.getPassword(), connectionState.getClientName())); + } + + return dispatch(channel, this.commandBuilder.hello(3, null, null, connectionState.getClientName())); + } + + private CompletableFuture applyPostHandshake(Channel channel, ProtocolVersion negotiatedProtocolVersion) { + + List> postHandshake = new ArrayList<>(); + + if (connectionState.getClientName() != null && negotiatedProtocolVersion == ProtocolVersion.RESP2) { + postHandshake.add(new AsyncCommand<>(this.commandBuilder.clientSetname(connectionState.getClientName()))); + } + + if (connectionState.getDb() > 0) { + postHandshake.add(new AsyncCommand<>(this.commandBuilder.select(connectionState.getDb()))); + } + + if (connectionState.isReadOnly()) { + postHandshake.add(new AsyncCommand<>(this.commandBuilder.readOnly())); + } + + if (postHandshake.isEmpty()) { + return CompletableFuture.completedFuture(null); + } + + return dispatch(channel, postHandshake); + } + + private CompletableFuture dispatch(Channel channel, List> commands) { + + CompletionStage writeFuture = Futures.toCompletionStage(channel.writeAndFlush(commands)); + return CompletableFuture.allOf(Futures.allOf(commands), writeFuture.toCompletableFuture()); + } + + private AsyncCommand dispatch(Channel channel, Command command) { + + AsyncCommand future = new AsyncCommand<>(command); + + channel.writeAndFlush(future).addListener(writeFuture -> { + + if (!writeFuture.isSuccess()) { + future.completeExceptionally(writeFuture.cause()); + } + }); + + return future; + } + + private static boolean isUnknownCommand(String error) { + return LettuceStrings.isNotEmpty(error) && error.startsWith("ERR unknown command"); + } +} diff --git a/src/main/java/io/lettuce/core/RedisLoadingException.java b/src/main/java/io/lettuce/core/RedisLoadingException.java new file mode 100644 index 0000000000..d9ad1cc092 --- /dev/null +++ b/src/main/java/io/lettuce/core/RedisLoadingException.java @@ -0,0 +1,45 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
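RedisHandshake above performs the connection initialization: without an explicitly requested protocol version it attempts HELLO (RESP3) first and falls back to RESP2 when the server reports an unknown command. A hedged sketch of pinning the version up front; the protocolVersion builder option on ClientOptions is assumed from Lettuce 6:

ClientOptions options = ClientOptions.builder()
        .protocolVersion(ProtocolVersion.RESP2) // skip the HELLO attempt, negotiate RESP2 directly
        .build();

RedisClient client = RedisClient.create("redis://localhost");
client.setOptions(options);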
+ */ +package io.lettuce.core; + +/** + * Exception that gets thrown when Redis is loading a dataset into memory and replying with a {@code LOADING} error response. + * + * @author Mark Paluch + * @since 4.5 + */ +@SuppressWarnings("serial") +public class RedisLoadingException extends RedisCommandExecutionException { + + /** + * Create a {@code RedisLoadingException} with the specified detail message. + * + * @param msg the detail message. + */ + public RedisLoadingException(String msg) { + super(msg); + } + + /** + * Create a {@code RedisLoadingException} with the specified detail message and nested exception. + * + * @param msg the detail message. + * @param cause the nested exception. + */ + public RedisLoadingException(String msg, Throwable cause) { + super(msg, cause); + } +} diff --git a/src/main/java/io/lettuce/core/RedisNoScriptException.java b/src/main/java/io/lettuce/core/RedisNoScriptException.java new file mode 100644 index 0000000000..95f2b89784 --- /dev/null +++ b/src/main/java/io/lettuce/core/RedisNoScriptException.java @@ -0,0 +1,46 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +/** + * Exception that gets thrown when Redis indicates absence of a Lua script referenced by its SHA1 digest with a {@code NOSCRIPT} + * error response. + * + * @author Mark Paluch + * @since 4.5 + */ +@SuppressWarnings("serial") +public class RedisNoScriptException extends RedisCommandExecutionException { + + /** + * Create a {@code RedisNoScriptException} with the specified detail message. + * + * @param msg the detail message. + */ + public RedisNoScriptException(String msg) { + super(msg); + } + + /** + * Create a {@code RedisNoScriptException} with the specified detail message and nested exception. + * + * @param msg the detail message. + * @param cause the nested exception. + */ + public RedisNoScriptException(String msg, Throwable cause) { + super(msg, cause); + } +} diff --git a/src/main/java/io/lettuce/core/RedisPublisher.java b/src/main/java/io/lettuce/core/RedisPublisher.java new file mode 100644 index 0000000000..b7bb6eea9b --- /dev/null +++ b/src/main/java/io/lettuce/core/RedisPublisher.java @@ -0,0 +1,1068 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
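RedisNoScriptException above gives callers a typed signal that an EVALSHA referenced a script Redis no longer caches. A sketch of the usual reload-and-retry fallback; scriptLoad/evalsha are the regular synchronous scripting calls, and the script body is illustrative only:

RedisCommands<String, String> sync = connection.sync();
String script = "return 1";
String sha = sync.scriptLoad(script);

Long result;
try {
    result = sync.evalsha(sha, ScriptOutputType.INTEGER);
} catch (RedisNoScriptException e) {
    // Script cache was flushed (e.g. SCRIPT FLUSH or a restart): load again and retry once.
    sha = sync.scriptLoad(script);
    result = sync.evalsha(sha, ScriptOutputType.INTEGER);
}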
+ */ +package io.lettuce.core; + +import java.util.Collection; +import java.util.Objects; +import java.util.Queue; +import java.util.concurrent.Executor; +import java.util.concurrent.atomic.AtomicLongFieldUpdater; +import java.util.concurrent.atomic.AtomicReference; +import java.util.concurrent.atomic.AtomicReferenceFieldUpdater; +import java.util.function.Supplier; + +import org.reactivestreams.Publisher; +import org.reactivestreams.Subscriber; +import org.reactivestreams.Subscription; + +import io.lettuce.core.internal.ExceptionFactory; +import reactor.core.CoreSubscriber; +import reactor.core.Exceptions; +import reactor.util.context.Context; +import io.lettuce.core.api.StatefulConnection; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.output.StreamingOutput; +import io.lettuce.core.protocol.CommandWrapper; +import io.lettuce.core.protocol.DemandAware; +import io.lettuce.core.protocol.RedisCommand; +import io.netty.util.Recycler; +import io.netty.util.concurrent.ImmediateEventExecutor; +import io.netty.util.internal.logging.InternalLogger; +import io.netty.util.internal.logging.InternalLoggerFactory; + +/** + * Reactive command {@link Publisher} using ReactiveStreams. + * + * This publisher handles command execution and response propagation to a {@link Subscriber}. Collections can be dissolved into + * individual elements instead of emitting collections. This publisher allows multiple subscriptions if it's backed by a + * {@link Supplier command supplier}. + *
+ * When using streaming outputs ({@link io.lettuce.core.output.CommandOutput} that implement {@link StreamingOutput}) elements + * are emitted as they are decoded. Otherwise, results are processed at command completion. + * + * @author Mark Paluch + * @since 5.0 + */ +class RedisPublisher implements Publisher { + + private static final InternalLogger LOG = InternalLoggerFactory.getInstance(RedisPublisher.class); + + private final boolean traceEnabled = LOG.isTraceEnabled(); + + private final Supplier> commandSupplier; + private final AtomicReference> ref; + private final StatefulConnection connection; + private final boolean dissolve; + private final Executor executor; + + /** + * Creates a new {@link RedisPublisher} for a static command. + * + * @param staticCommand static command, must not be {@literal null}. + * @param connection the connection, must not be {@literal null}. + * @param dissolve dissolve collections into particular elements. + * @param publishOn executor to use for publishOn signals. + */ + public RedisPublisher(RedisCommand staticCommand, StatefulConnection connection, boolean dissolve, + Executor publishOn) { + this(() -> staticCommand, connection, dissolve, publishOn); + } + + /** + * Creates a new {@link RedisPublisher} for a command supplier. + * + * @param commandSupplier command supplier, must not be {@literal null}. + * @param connection the connection, must not be {@literal null}. + * @param dissolve dissolve collections into particular elements. + * @param publishOn executor to use for publishOn signals. + */ + public RedisPublisher(Supplier> commandSupplier, StatefulConnection connection, + boolean dissolve, Executor publishOn) { + + LettuceAssert.notNull(commandSupplier, "CommandSupplier must not be null"); + LettuceAssert.notNull(connection, "StatefulConnection must not be null"); + LettuceAssert.notNull(publishOn, "Executor must not be null"); + + this.commandSupplier = commandSupplier; + this.connection = connection; + this.dissolve = dissolve; + this.executor = publishOn; + this.ref = new AtomicReference<>(commandSupplier.get()); + } + + @Override + public void subscribe(Subscriber subscriber) { + + if (this.traceEnabled) { + LOG.trace("subscribe: {}@{}", subscriber.getClass().getName(), Objects.hashCode(subscriber)); + } + + // Reuse the first command but then discard it. + RedisCommand command = ref.get(); + + if (command != null) { + if (!ref.compareAndSet(command, null)) { + command = commandSupplier.get(); + } + } else { + command = commandSupplier.get(); + } + + RedisSubscription redisSubscription = new RedisSubscription<>(connection, command, dissolve, executor); + redisSubscription.subscribe(subscriber); + } + + /** + * Implementation of {@link Subscription}. This subscription can receive demand for data signals with {@link #request(long)} + * . It maintains a {@link State} to react on pull signals like demand for data or push signals as soon as data is + * available. Subscription behavior and state transitions are kept inside the {@link State}. 
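The dissolve flag in the constructors above is what turns a collection reply into a stream of individual elements on the reactive API. A consumer-side sketch, assuming a standalone connection; SMEMBERS arrives as one Set on the wire but is emitted element by element, and limitRate is plain Reactor that merely shapes the request(n) demand this subscription reacts to:

StatefulRedisConnection<String, String> connection = client.connect();

connection.reactive()
        .smembers("myset")
        .limitRate(100) // request demand in batches of 100
        .doOnNext(member -> System.out.println("member: " + member))
        .blockLast();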
+ * + * @param data element type + */ + static class RedisSubscription extends StreamingOutput.Subscriber implements Subscription { + + static final InternalLogger LOG = InternalLoggerFactory.getInstance(RedisPublisher.class); + + static final int ST_PROGRESS = 0; + static final int ST_COMPLETED = 1; + + @SuppressWarnings({ "rawtypes", "unchecked" }) + static final AtomicLongFieldUpdater DEMAND = AtomicLongFieldUpdater + .newUpdater(RedisSubscription.class, "demand"); + + @SuppressWarnings({ "rawtypes", "unchecked" }) + static final AtomicReferenceFieldUpdater STATE = AtomicReferenceFieldUpdater + .newUpdater(RedisSubscription.class, State.class, "state"); + + @SuppressWarnings({ "rawtypes", "unchecked" }) + static final AtomicReferenceFieldUpdater COMMAND_DISPATCH = AtomicReferenceFieldUpdater + .newUpdater(RedisSubscription.class, CommandDispatch.class, "commandDispatch"); + + private final SubscriptionCommand subscriptionCommand; + private final boolean traceEnabled = LOG.isTraceEnabled(); + + final Queue data = Operators.newQueue(); + final StatefulConnection connection; + final RedisCommand command; + final boolean dissolve; + private final Executor executor; + + // accessed via AtomicLongFieldUpdater + @SuppressWarnings("unused") + volatile long demand; + @SuppressWarnings("unused") + volatile State state = State.UNSUBSCRIBED; + @SuppressWarnings("unused") + volatile CommandDispatch commandDispatch = CommandDispatch.UNDISPATCHED; + + volatile boolean allDataRead = false; + + volatile RedisSubscriber subscriber; + + @SuppressWarnings("unchecked") + RedisSubscription(StatefulConnection connection, RedisCommand command, boolean dissolve, + Executor executor) { + + LettuceAssert.notNull(connection, "Connection must not be null"); + LettuceAssert.notNull(command, "RedisCommand must not be null"); + LettuceAssert.notNull(executor, "Executor must not be null"); + + this.connection = connection; + this.command = command; + this.dissolve = dissolve; + this.executor = executor; + + if (command.getOutput() instanceof StreamingOutput) { + StreamingOutput streamingOutput = (StreamingOutput) command.getOutput(); + + if (connection instanceof StatefulRedisConnection && ((StatefulRedisConnection) connection).isMulti()) { + streamingOutput.setSubscriber(new CompositeSubscriber<>(this, streamingOutput.getSubscriber())); + } else { + streamingOutput.setSubscriber(this); + } + } + + this.subscriptionCommand = new SubscriptionCommand<>(command, this, dissolve); + } + + /** + * Subscription procedure called by a {@link Publisher} + * + * @param subscriber the subscriber, must not be {@literal null}. + */ + void subscribe(Subscriber subscriber) { + + if (subscriber == null) { + throw new NullPointerException("Subscriber must not be null"); + } + + State state = state(); + + if (traceEnabled) { + LOG.trace("{} subscribe: {}@{}", state, subscriber.getClass().getName(), subscriber.hashCode()); + } + + state.subscribe(this, subscriber); + } + + /** + * Signal for data demand. + * + * @param n number of requested elements. + */ + @Override + public final void request(long n) { + + State state = state(); + + if (traceEnabled) { + LOG.trace("{} request: {}", state, n); + } + + state.request(this, n); + } + + /** + * Cancels a command. + */ + @Override + public final void cancel() { + + State state = state(); + + if (traceEnabled) { + LOG.trace("{} cancel", state); + } + + state.cancel(this); + } + + /** + * Called by {@link StreamingOutput} to dispatch data (push). 
+ * + * @param t element + */ + @Override + public void onNext(T t) { + + LettuceAssert.notNull(t, "Data must not be null"); + + State state = state(); + + if (state == State.COMPLETED) { + return; + } + + // Fast-path publishing, preserve ordering + if (data.isEmpty() && state() == State.DEMAND) { + + long initial = getDemand(); + + if (initial > 0) { + + try { + DEMAND.decrementAndGet(this); + this.subscriber.onNext(t); + } catch (Exception e) { + onError(e); + } + return; + } + } + + if (!data.offer(t)) { + + Subscriber subscriber = this.subscriber; + Context context = Context.empty(); + if (subscriber instanceof CoreSubscriber) { + context = ((CoreSubscriber) subscriber).currentContext(); + } + + Throwable e = Operators.onOperatorError(this, Exceptions.failWithOverflow(), t, context); + onError(e); + return; + } + + onDataAvailable(); + } + + /** + * Called via a listener interface to indicate that reading is possible. + */ + final void onDataAvailable() { + + State state = state(); + + if (traceEnabled) { + LOG.trace("{} onDataAvailable()", state); + } + + state.onDataAvailable(this); + } + + /** + * Called via a listener interface to indicate that all data has been read. + */ + final void onAllDataRead() { + + State state = state(); + + if (traceEnabled) { + LOG.trace("{} onAllDataRead()", state); + } + + allDataRead = true; + onDataAvailable(); + } + + /** + * Called by a listener interface to indicate that as error has occurred. + * + * @param t the error + */ + final void onError(Throwable t) { + + State state = state(); + + if (LOG.isErrorEnabled()) { + LOG.trace("{} onError(): {}", state, t.toString(), t); + } + + state.onError(this, t); + } + + /** + * Reads data from the input, if possible. + * + * @return the data that was read or {@literal null} + */ + protected T read() { + return data.poll(); + } + + boolean hasDemand() { + return getDemand() > 0; + } + + private long getDemand() { + return DEMAND.get(this); + } + + boolean changeState(State oldState, State newState) { + return STATE.compareAndSet(this, oldState, newState); + } + + boolean afterRead() { + return changeState(State.READING, getDemand() > 0 ? State.DEMAND : State.NO_DEMAND); + } + + public boolean complete() { + return changeState(State.READING, State.COMPLETED); + } + + void checkCommandDispatch() { + COMMAND_DISPATCH.get(this).dispatch(this); + } + + @SuppressWarnings({ "unchecked", "rawtypes" }) + void dispatchCommand() { + connection.dispatch((RedisCommand) subscriptionCommand); + } + + void checkOnDataAvailable() { + + if (data.isEmpty()) { + potentiallyReadMore(); + } + + if (!data.isEmpty()) { + onDataAvailable(); + } + } + + void potentiallyReadMore() { + + if ((getDemand() + 1) > data.size()) { + state().readData(this); + } + } + + /** + * Reads and publishes data from the input. Continues until either there is no more demand, or until there is no more + * data to be read. + */ + void readAndPublish() { + + while (hasDemand()) { + + T data = read(); + + if (data == null) { + return; + } + + DEMAND.decrementAndGet(this); + this.subscriber.onNext(data); + } + } + + RedisPublisher.State state() { + return STATE.get(this); + } + } + + /** + * Represents a state for command dispatch of the {@link Subscription}. The following figure indicates the two different + * states that exist, and the relationships between them. + * + *
+     * <pre>
+     *   UNDISPATCHED
+     *        |
+     *        v
+     *   DISPATCHED
+     * </pre>
+ * + * Refer to the individual states for more information. + */ + private enum CommandDispatch { + + /** + * Initial state. Will respond to {@link #dispatch(RedisSubscription)} by changing the state to {@link #DISPATCHED} and + * dispatch the command. + */ + UNDISPATCHED { + + @Override + void dispatch(RedisSubscription redisSubscription) { + + if (RedisSubscription.COMMAND_DISPATCH.compareAndSet(redisSubscription, this, DISPATCHED)) { + redisSubscription.dispatchCommand(); + } + } + }, + DISPATCHED; + + void dispatch(RedisSubscription redisSubscription) { + } + } + + /** + * Represents a state for the {@link Subscription} to be in. The following figure indicates the four different states that + * exist, and the relationships between them. + * + *
+     * <pre>
+     *       UNSUBSCRIBED
+     *        |
+     *        v
+     * NO_DEMAND -------------------> DEMAND
+     *    |    ^                      ^    |
+     *    |    |                      |    |
+     *    |    --------- READING <-----    |
+     *    |                 |              |
+     *    |                 v              |
+     *    ------------> COMPLETED <---------
+     * </pre>
+ * + * Refer to the individual states for more information. + */ + private enum State { + + /** + * The initial unsubscribed state. Will respond to {@link #subscribe(RedisSubscription, Subscriber)} by changing state + * to {@link #NO_DEMAND}. + */ + UNSUBSCRIBED { + @SuppressWarnings("unchecked") + @Override + void subscribe(RedisSubscription subscription, Subscriber subscriber) { + + LettuceAssert.notNull(subscriber, "Subscriber must not be null"); + + if (subscription.changeState(this, NO_DEMAND)) { + + subscription.subscriber = RedisSubscriber.create(subscriber, subscription.executor); + subscriber.onSubscribe(subscription); + } else { + throw new IllegalStateException(toString()); + } + } + }, + + /** + * State that gets entered when there is no demand. Responds to {@link #request(RedisSubscription, long)} + * (RedisPublisher, long)} by increasing the demand, changing state to {@link #DEMAND} and will check whether there is + * data available for reading. + */ + NO_DEMAND { + @Override + void request(RedisSubscription subscription, long n) { + + if (Operators.request(RedisSubscription.DEMAND, subscription, n)) { + + if (subscription.changeState(this, DEMAND)) { + + try { + subscription.checkCommandDispatch(); + } catch (Exception ex) { + subscription.onError(ex); + } + subscription.checkOnDataAvailable(); + } + + subscription.potentiallyReadMore(); + onDataAvailable(subscription); + } else { + onError(subscription, Exceptions.nullOrNegativeRequestException(n)); + } + } + }, + + /** + * State that gets entered when there is demand. Responds to {@link #onDataAvailable(RedisSubscription)} by reading the + * available data. The state will be changed to {@link #NO_DEMAND} if there is no demand. + */ + DEMAND { + @Override + void onDataAvailable(RedisSubscription subscription) { + + try { + do { + + if (!read(subscription)) { + return; + } + } while (subscription.hasDemand()); + } catch (Exception e) { + subscription.onError(e); + } + } + + @Override + void request(RedisSubscription subscription, long n) { + + if (Operators.request(RedisSubscription.DEMAND, subscription, n)) { + + onDataAvailable(subscription); + + subscription.potentiallyReadMore(); + } else { + onError(subscription, Exceptions.nullOrNegativeRequestException(n)); + } + } + + /** + * @param subscription + * @return {@literal true} if the {@code read()} call was able to perform a read and whether this method should be + * called again to emit remaining data. + */ + private boolean read(RedisSubscription subscription) { + + State state = subscription.state(); + + // concurrency/entry guard + if (state == NO_DEMAND || state == DEMAND) { + if (!subscription.changeState(state, READING)) { + return false; + } + } else { + return false; + } + + subscription.readAndPublish(); + + if (subscription.allDataRead && subscription.data.isEmpty()) { + state.onAllDataRead(subscription); + return false; + } + + // concurrency/leave guard + subscription.afterRead(); + + if (subscription.allDataRead || !subscription.data.isEmpty()) { + return true; + } + + return false; + } + }, + + READING { + @Override + void request(RedisSubscription subscription, long n) { + DEMAND.request(subscription, n); + } + }, + + /** + * The terminal completed state. Does not respond to any events. 
+ */ + COMPLETED { + + @Override + void request(RedisSubscription subscription, long n) { + // ignore + } + + @Override + void cancel(RedisSubscription subscription) { + // ignore + } + + @Override + void onAllDataRead(RedisSubscription subscription) { + // ignore + } + + @Override + void onError(RedisSubscription subscription, Throwable t) { + // ignore + } + }; + + void subscribe(RedisSubscription subscription, Subscriber subscriber) { + throw new IllegalStateException(toString()); + } + + void request(RedisSubscription subscription, long n) { + throw new IllegalStateException(toString()); + } + + void cancel(RedisSubscription subscription) { + + subscription.command.cancel(); + if (subscription.changeState(this, COMPLETED)) { + readData(subscription); + } + } + + void readData(RedisSubscription subscription) { + + DemandAware.Source source = subscription.subscriptionCommand.source; + + if (source != null) { + source.requestMore(); + } + } + + void onDataAvailable(RedisSubscription subscription) { + // ignore + } + + void onAllDataRead(RedisSubscription subscription) { + + if (subscription.data.isEmpty() && subscription.complete()) { + + readData(subscription); + + Subscriber subscriber = subscription.subscriber; + + if (subscriber != null) { + subscriber.onComplete(); + } + } + } + + void onError(RedisSubscription subscription, Throwable t) { + + State state; + while ((state = subscription.state()) != COMPLETED && subscription.changeState(state, COMPLETED)) { + + readData(subscription); + + Subscriber subscriber = subscription.subscriber; + if (subscriber != null) { + subscriber.onError(t); + return; + } + } + } + } + + /** + * Command that emits it data after completion to a {@link RedisSubscription}. + * + * @param key type + * @param value type + * @param response type + */ + private static class SubscriptionCommand extends CommandWrapper implements DemandAware.Sink { + + private final boolean dissolve; + private final RedisSubscription subscription; + private volatile boolean completed = false; + private volatile DemandAware.Source source; + + public SubscriptionCommand(RedisCommand command, RedisSubscription subscription, boolean dissolve) { + + super(command); + + this.subscription = subscription; + this.dissolve = dissolve; + } + + @Override + public boolean hasDemand() { + return completed || subscription.state() == State.COMPLETED || subscription.data.isEmpty(); + } + + @Override + @SuppressWarnings("unchecked") + public void complete() { + + if (completed) { + return; + } + + try { + super.complete(); + + if (getOutput() != null) { + Object result = getOutput().get(); + + if (getOutput().hasError()) { + onError(ExceptionFactory.createExecutionException(getOutput().getError())); + completed = true; + return; + } + + if (!(getOutput() instanceof StreamingOutput) && result != null) { + + if (dissolve && result instanceof Collection) { + + Collection collection = (Collection) result; + + for (T t : collection) { + if (t != null) { + subscription.onNext(t); + } + } + } else { + subscription.onNext((T) result); + } + } + } + + subscription.onAllDataRead(); + } finally { + completed = true; + } + } + + @Override + public void setSource(DemandAware.Source source) { + this.source = source; + } + + @Override + public void removeSource() { + this.source = null; + } + + @Override + public void cancel() { + + if (completed) { + return; + } + + super.cancel(); + + completed = true; + } + + @Override + public boolean completeExceptionally(Throwable throwable) { + + if (completed) { + return 
false; + } + + boolean b = super.completeExceptionally(throwable); + onError(throwable); + completed = true; + return b; + } + + private void onError(Throwable throwable) { + subscription.onError(throwable); + } + } + + /** + * Composite {@link io.lettuce.core.output.StreamingOutput.Subscriber} that can notify multiple nested subscribers. + * + * @param element type + */ + private static class CompositeSubscriber extends StreamingOutput.Subscriber { + + private final StreamingOutput.Subscriber first; + private final StreamingOutput.Subscriber second; + + public CompositeSubscriber(StreamingOutput.Subscriber first, StreamingOutput.Subscriber second) { + this.first = first; + this.second = second; + } + + @Override + public void onNext(T t) { + throw new UnsupportedOperationException(); + } + + @Override + public void onNext(Collection outputTarget, T t) { + + first.onNext(outputTarget, t); + second.onNext(outputTarget, t); + } + } + + /** + * Lettuce-specific interface. + * + * @param + */ + interface RedisSubscriber extends CoreSubscriber { + + /** + * Create a new {@link RedisSubscriber}. Optimizes for immediate executor usage. + * + * @param delegate + * @param executor + * @param + * @return + * @see ImmediateSubscriber + * @see PublishOnSubscriber + */ + @SuppressWarnings({ "unchecked", "rawtypes" }) + static RedisSubscriber create(Subscriber delegate, Executor executor) { + + if (executor == ImmediateEventExecutor.INSTANCE) { + return new ImmediateSubscriber(delegate); + } + + return new PublishOnSubscriber(delegate, executor); + } + } + + /** + * {@link RedisSubscriber} using immediate signal dispatch by calling directly {@link Subscriber} method. + * + * @param + */ + static class ImmediateSubscriber implements RedisSubscriber { + + private final CoreSubscriber delegate; + + public ImmediateSubscriber(Subscriber delegate) { + this.delegate = (CoreSubscriber) reactor.core.publisher.Operators.toCoreSubscriber(delegate); + } + + @Override + public Context currentContext() { + return delegate.currentContext(); + } + + @Override + public void onSubscribe(Subscription s) { + delegate.onSubscribe(s); + } + + @Override + public void onNext(T t) { + delegate.onNext(t); + } + + @Override + public void onError(Throwable t) { + delegate.onError(t); + } + + @Override + public void onComplete() { + delegate.onComplete(); + } + } + + /** + * {@link RedisSubscriber} dispatching subscriber signals on a {@link Executor}. + * + * @param + */ + static class PublishOnSubscriber implements RedisSubscriber { + + private final CoreSubscriber delegate; + private final Executor executor; + + public PublishOnSubscriber(Subscriber delegate, Executor executor) { + this.delegate = (CoreSubscriber) reactor.core.publisher.Operators.toCoreSubscriber(delegate); + this.executor = executor; + } + + @Override + public Context currentContext() { + return delegate.currentContext(); + } + + @Override + public void onSubscribe(Subscription s) { + delegate.onSubscribe(s); + } + + @Override + public void onNext(T t) { + executor.execute(OnNext.newInstance(t, delegate)); + } + + @Override + public void onError(Throwable t) { + executor.execute(OnComplete.newInstance(t, delegate)); + } + + @Override + public void onComplete() { + executor.execute(OnComplete.newInstance(delegate)); + } + } + + /** + * OnNext {@link Runnable}. This listener is pooled and must be {@link #recycle() recycled after usage}. 
+ */ + static class OnNext implements Runnable { + + private static final Recycler RECYCLER = new Recycler() { + @Override + protected OnNext newObject(Handle handle) { + return new OnNext(handle); + } + }; + + private final Recycler.Handle handle; + private Object signal; + private Subscriber subscriber; + + OnNext(Recycler.Handle handle) { + this.handle = handle; + } + + /** + * Allocate a new instance. + * + * @return + * @see Subscriber#onNext(Object) + */ + static OnNext newInstance(Object signal, Subscriber subscriber) { + + OnNext entry = RECYCLER.get(); + + entry.signal = signal; + entry.subscriber = (Subscriber) subscriber; + + return entry; + } + + @Override + public void run() { + try { + subscriber.onNext(signal); + } finally { + recycle(); + } + } + + private void recycle() { + + this.signal = null; + this.subscriber = null; + + handle.recycle(this); + } + } + + /** + * OnComplete {@link Runnable}. This listener is pooled and must be {@link #recycle() recycled after usage}. + */ + static class OnComplete implements Runnable { + + private static final Recycler RECYCLER = new Recycler() { + @Override + protected OnComplete newObject(Handle handle) { + return new OnComplete(handle); + } + }; + + private final Recycler.Handle handle; + private Throwable signal; + private Subscriber subscriber; + + OnComplete(Recycler.Handle handle) { + this.handle = handle; + } + + /** + * Allocate a new instance. + * + * @return + * @see Subscriber#onError(Throwable) + */ + static OnComplete newInstance(Throwable signal, Subscriber subscriber) { + + OnComplete entry = RECYCLER.get(); + + entry.signal = signal; + entry.subscriber = subscriber; + + return entry; + } + + /** + * Allocate a new instance. + * + * @return + * @see Subscriber#onComplete() + */ + static OnComplete newInstance(Subscriber subscriber) { + + OnComplete entry = RECYCLER.get(); + + entry.signal = null; + entry.subscriber = subscriber; + + return entry; + } + + @Override + public void run() { + try { + if (signal != null) { + subscriber.onError(signal); + } else { + subscriber.onComplete(); + } + } finally { + recycle(); + } + } + + private void recycle() { + + this.signal = null; + this.subscriber = null; + + handle.recycle(this); + } + } +} diff --git a/src/main/java/io/lettuce/core/RedisReactiveCommandsImpl.java b/src/main/java/io/lettuce/core/RedisReactiveCommandsImpl.java new file mode 100644 index 0000000000..d48dd7c5d1 --- /dev/null +++ b/src/main/java/io/lettuce/core/RedisReactiveCommandsImpl.java @@ -0,0 +1,48 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.reactive.RedisReactiveCommands; +import io.lettuce.core.cluster.api.reactive.RedisClusterReactiveCommands; +import io.lettuce.core.codec.RedisCodec; + +/** + * A reactive and thread-safe API for a Redis Sentinel connection. + * + * @param Key type. + * @param Value type. 
+ * @author Mark Paluch + */ +public class RedisReactiveCommandsImpl extends AbstractRedisReactiveCommands + implements RedisReactiveCommands, RedisClusterReactiveCommands { + + /** + * Initialize a new instance. + * + * @param connection the connection to operate on. + * @param codec the codec for command encoding. + * + */ + public RedisReactiveCommandsImpl(StatefulRedisConnection connection, RedisCodec codec) { + super(connection, codec); + } + + @Override + public StatefulRedisConnection getStatefulConnection() { + return (StatefulRedisConnection) super.getConnection(); + } +} diff --git a/src/main/java/io/lettuce/core/RedisURI.java b/src/main/java/io/lettuce/core/RedisURI.java new file mode 100644 index 0000000000..21417cf905 --- /dev/null +++ b/src/main/java/io/lettuce/core/RedisURI.java @@ -0,0 +1,1421 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import static io.lettuce.core.LettuceStrings.isEmpty; +import static io.lettuce.core.LettuceStrings.isNotEmpty; + +import java.io.Serializable; +import java.io.UnsupportedEncodingException; +import java.net.URI; +import java.net.URLEncoder; +import java.nio.charset.StandardCharsets; +import java.time.Duration; +import java.util.*; +import java.util.function.LongFunction; +import java.util.stream.Collectors; + +import io.lettuce.core.internal.HostAndPort; +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.internal.LettuceSets; + +/** + * Redis URI. Contains connection details for the Redis/Sentinel connections. You can provide the database, client name, + * password and timeouts within the RedisURI. + * + * You have the following possibilities to create a {@link RedisURI}: + * + *
+ * <ul>
+ * <li>Use an URI:
+ * <p>
+ * {@code RedisURI.create("redis://localhost/");}
+ * </p>
+ * See {@link #create(String)} for more options</li>
+ * <li>Use the Builder:
+ * <p>
+ * {@code RedisURI.Builder.redis("localhost", 6379).withPassword("password").withDatabase(1).build(); }
+ * </p>
+ * See {@link io.lettuce.core.RedisURI.Builder#redis(String)} and {@link io.lettuce.core.RedisURI.Builder#sentinel(String)} for
+ * more options.</li>
+ * <li>Construct your own instance:
+ * <p>
+ * {@code new RedisURI("localhost", 6379, Duration.ofSeconds(60));}
+ * </p>
+ * or
+ * <p>
+ * {@code RedisURI uri = new RedisURI(); uri.setHost("localhost");
+ * }
+ * </p>
+ * </li>
+ * </ul>
+ *
+ * <h3>URI syntax</h3>
+ *
+ * <b>Redis Standalone</b>
+ * <blockquote>
+ * redis{@code ://}[[username{@code :}]password@]host[{@code :}port][{@code /}database][{@code ?}
+ * [timeout=timeout[d|h|m|s|ms|us|ns]][&database=database][&clientName=clientName]]
+ * </blockquote>
+ *
+ * <b>Redis Standalone (SSL)</b>
+ * <blockquote>
+ * rediss{@code ://}[[username{@code :}]password@]host[{@code :}port][{@code /}database][{@code ?}
+ * [timeout=timeout[d|h|m|s|ms|us|ns]][&database=database][&clientName=clientName]]
+ * </blockquote>
+ *
+ * <b>Redis Standalone (Unix Domain Sockets)</b>
+ * <blockquote>
+ * redis-socket{@code ://}[[username{@code :}]password@]path[{@code ?}
+ * [timeout=timeout[d|h|m|s|ms|us|ns]][&database=database][&clientName=clientName]]
+ * </blockquote>
+ *
+ * <b>Redis Sentinel</b>
+ * <blockquote>
+ * redis-sentinel{@code ://}[[username{@code :}]password@]host1[{@code :}port1][, host2[{@code :}port2]][, hostN[{@code :}portN]][{@code /}database][{@code ?}
+ * [timeout=timeout[d|h|m|s|ms|us|ns]][&sentinelMasterId=sentinelMasterId][&database=database][&clientName=clientName]]
+ * </blockquote>
+ *
+ * <p>
+ * <b>Note:</b> When using Redis Sentinel, the password from the URI applies to the data nodes only. Sentinel authentication
+ * must be configured for each {@link #getSentinels() sentinel node}.
+ * </p>
+ * <p>
+ * <b>Note:</b> Usernames are supported as of Redis 6.
+ * </p>
+ *
+ * <p>
+ * <b>Schemes</b>
+ * </p>
+ * <ul>
+ * <li><b>redis</b> Redis Standalone</li>
+ * <li><b>rediss</b> Redis Standalone SSL</li>
+ * <li><b>redis-socket</b> Redis Standalone Unix Domain Socket</li>
+ * <li><b>redis-sentinel</b> Redis Sentinel</li>
+ * <li><b>rediss-sentinel</b> Redis Sentinel SSL</li>
+ * </ul>
+ *
+ * <p>
+ * <b>Timeout units</b>
+ * </p>
+ * <ul>
+ * <li><b>d</b> Days</li>
+ * <li><b>h</b> Hours</li>
+ * <li><b>m</b> Minutes</li>
+ * <li><b>s</b> Seconds</li>
+ * <li><b>ms</b> Milliseconds</li>
+ * <li><b>us</b> Microseconds</li>
+ * <li><b>ns</b> Nanoseconds</li>
+ * </ul>
+ *
+ * <p>
+ * <b>Hint:</b> The database parameter within the query part has higher precedence than the database in the path.
+ * </p>
+ * + * RedisURI supports Redis Standalone, Redis Sentinel and Redis Cluster with plain, SSL, TLS and unix domain socket connections. + * + * @author Mark Paluch + * @author Guy Korland + * @since 3.0 + */ +@SuppressWarnings("serial") +public class RedisURI implements Serializable, ConnectionPoint { + + public static final String URI_SCHEME_REDIS_SENTINEL = "redis-sentinel"; + public static final String URI_SCHEME_REDIS_SENTINEL_SECURE = "rediss-sentinel"; + public static final String URI_SCHEME_REDIS = "redis"; + public static final String URI_SCHEME_REDIS_SECURE = "rediss"; + public static final String URI_SCHEME_REDIS_SECURE_ALT = "redis+ssl"; + public static final String URI_SCHEME_REDIS_TLS_ALT = "redis+tls"; + public static final String URI_SCHEME_REDIS_SOCKET = "redis-socket"; + public static final String URI_SCHEME_REDIS_SOCKET_ALT = "redis+socket"; + public static final String PARAMETER_NAME_TIMEOUT = "timeout"; + public static final String PARAMETER_NAME_DATABASE = "database"; + public static final String PARAMETER_NAME_DATABASE_ALT = "db"; + public static final String PARAMETER_NAME_SENTINEL_MASTER_ID = "sentinelMasterId"; + public static final String PARAMETER_NAME_CLIENT_NAME = "clientName"; + + public static final Map> CONVERTER_MAP; + + static { + Map> unitMap = new HashMap<>(); + unitMap.put("ns", Duration::ofNanos); + unitMap.put("us", us -> Duration.ofNanos(us * 1000)); + unitMap.put("ms", Duration::ofMillis); + unitMap.put("s", Duration::ofSeconds); + unitMap.put("m", Duration::ofMinutes); + unitMap.put("h", Duration::ofHours); + unitMap.put("d", Duration::ofDays); + CONVERTER_MAP = Collections.unmodifiableMap(unitMap); + } + + /** + * The default sentinel port. + */ + public static final int DEFAULT_SENTINEL_PORT = 26379; + + /** + * The default redis port. + */ + public static final int DEFAULT_REDIS_PORT = 6379; + + /** + * Default timeout: 60 sec + */ + public static final long DEFAULT_TIMEOUT = 60; + public static final Duration DEFAULT_TIMEOUT_DURATION = Duration.ofSeconds(DEFAULT_TIMEOUT); + + private String host; + private String socket; + private String sentinelMasterId; + private int port; + private int database; + private String clientName; + private String username; + private char[] password; + private boolean ssl = false; + private boolean verifyPeer = true; + private boolean startTls = false; + private Duration timeout = DEFAULT_TIMEOUT_DURATION; + private final List sentinels = new ArrayList<>(); + + /** + * Default empty constructor. + */ + public RedisURI() { + } + + /** + * Constructor with host/port and timeout. + * + * @param host the host + * @param port the port + * @param timeout timeout value + * @param timeout unit of the timeout value + */ + public RedisURI(String host, int port, Duration timeout) { + + LettuceAssert.notEmpty(host, "Host must not be empty"); + LettuceAssert.notNull(timeout, "Timeout duration must not be null"); + LettuceAssert.isTrue(!timeout.isNegative(), "Timeout duration must be greater or equal to zero"); + + setHost(host); + setPort(port); + setTimeout(timeout); + } + + /** + * Returns a new {@link RedisURI.Builder} to construct a {@link RedisURI}. + * + * @return a new {@link RedisURI.Builder} to construct a {@link RedisURI}. + */ + public static RedisURI.Builder builder() { + return new Builder(); + } + + /** + * Create a Redis URI from host and port. + * + * @param host the host + * @param port the port + * @return An instance of {@link RedisURI} containing details from the {@code host} and {@code port}. 
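The factory methods above and the Builder give equivalent ways to describe the same target. A compact sketch combining the URI string form, including a timeout parameter as listed in the class Javadoc, with the builder form; the concrete host, password and values are examples only:

// URI string form: database 2, 10 second command timeout.
RedisURI fromString = RedisURI.create("redis://password@localhost:6379/2?timeout=10s");

// Builder form producing an equivalent configuration.
RedisURI fromBuilder = RedisURI.Builder.redis("localhost", 6379)
        .withPassword("password")
        .withDatabase(2)
        .withTimeout(Duration.ofSeconds(10))
        .build();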
+ */ + public static RedisURI create(String host, int port) { + return Builder.redis(host, port).build(); + } + + /** + * Create a Redis URI from an URI string. + * + * The uri must follow conventions of {@link java.net.URI} + * + * @param uri The URI string. + * @return An instance of {@link RedisURI} containing details from the URI. + */ + public static RedisURI create(String uri) { + LettuceAssert.notEmpty(uri, "URI must not be empty"); + return create(URI.create(uri)); + } + + /** + * Create a Redis URI from an URI string: + * + * The uri must follow conventions of {@link java.net.URI} + * + * @param uri The URI. + * @return An instance of {@link RedisURI} containing details from the URI. + */ + public static RedisURI create(URI uri) { + return buildRedisUriFromUri(uri); + } + + /** + * Returns the host. + * + * @return the host. + */ + public String getHost() { + return host; + } + + /** + * Sets the Redis host. + * + * @param host the host + */ + public void setHost(String host) { + this.host = host; + } + + /** + * Returns the Sentinel Master Id. + * + * @return the Sentinel Master Id. + */ + public String getSentinelMasterId() { + return sentinelMasterId; + } + + /** + * Sets the Sentinel Master Id. + * + * @param sentinelMasterId the Sentinel Master Id. + */ + public void setSentinelMasterId(String sentinelMasterId) { + this.sentinelMasterId = sentinelMasterId; + } + + /** + * Returns the Redis port. + * + * @return the Redis port + */ + public int getPort() { + return port; + } + + /** + * Sets the Redis port. Defaults to {@link #DEFAULT_REDIS_PORT}. + * + * @param port the Redis port + */ + public void setPort(int port) { + this.port = port; + } + + /** + * Returns the Unix Domain Socket path. + * + * @return the Unix Domain Socket path. + */ + public String getSocket() { + return socket; + } + + /** + * Sets the Unix Domain Socket path. + * + * @param socket the Unix Domain Socket path. + */ + public void setSocket(String socket) { + this.socket = socket; + } + + /** + * Returns the username. + * + * @return the username + * @since 6.0 + */ + public String getUsername() { + return username; + } + + /** + * Sets the username. + * + * @param username the username, must not be {@literal null}. + * @since 6.0 + */ + public void setUsername(String username) { + this.username = username; + } + + /** + * Returns the password. + * + * @return the password + */ + public char[] getPassword() { + return password; + } + + /** + * Sets the password. Use empty string to skip authentication. + * + * @param password the password, must not be {@literal null}. + * @deprecated since 6.0. Use {@link #setPassword(CharSequence)} or {@link #setPassword(char[])} avoid String caching. + */ + @Deprecated + public void setPassword(String password) { + setPassword((CharSequence) password); + } + + /** + * Sets the password. Use empty string to skip authentication. + * + * @param password the password, must not be {@literal null}. + * @since 5.2 + */ + public void setPassword(CharSequence password) { + + LettuceAssert.notNull(password, "Password must not be null"); + this.password = password.toString().toCharArray(); + } + + /** + * Sets the password. Use empty char array to skip authentication. + * + * @param password the password, must not be {@literal null}. 
+ * @since 4.4 + */ + public void setPassword(char[] password) { + + LettuceAssert.notNull(password, "Password must not be null"); + this.password = Arrays.copyOf(password, password.length); + } + + /** + * Returns the command timeout for synchronous command execution. + * + * @return the Timeout + * @since 5.0 + */ + public Duration getTimeout() { + return timeout; + } + + /** + * Sets the command timeout for synchronous command execution. A zero timeout value indicates to not time out. + * + * @param timeout the command timeout for synchronous command execution. + * @since 5.0 + */ + public void setTimeout(Duration timeout) { + + LettuceAssert.notNull(timeout, "Timeout must not be null"); + LettuceAssert.isTrue(!timeout.isNegative(), "Timeout must be greater or equal 0"); + + this.timeout = timeout; + } + + /** + * Returns the Redis database number. Databases are only available for Redis Standalone and Redis Master/Slave. + * + * @return + */ + public int getDatabase() { + return database; + } + + /** + * Sets the Redis database number. Databases are only available for Redis Standalone and Redis Master/Slave. + * + * @param database the Redis database number. + */ + public void setDatabase(int database) { + + LettuceAssert.isTrue(database >= 0, "Invalid database number: " + database); + + this.database = database; + } + + /** + * Returns the client name. + * + * @return + * @since 4.4 + */ + public String getClientName() { + return clientName; + } + + /** + * Sets the client name to be applied on Redis connections. + * + * @param clientName the client name. + * @since 4.4 + */ + public void setClientName(String clientName) { + this.clientName = clientName; + } + + /** + * Returns {@literal true} if SSL mode is enabled. + * + * @return {@literal true} if SSL mode is enabled. + */ + public boolean isSsl() { + return ssl; + } + + /** + * Sets whether to use SSL. Sets SSL also for already configured Redis Sentinel nodes. + * + * @param ssl + */ + public void setSsl(boolean ssl) { + this.ssl = ssl; + this.sentinels.forEach(it -> it.setSsl(ssl)); + } + + /** + * Sets whether to verify peers when using {@link #isSsl() SSL}. + * + * @return {@literal true} to verify peers when using {@link #isSsl() SSL}. + */ + public boolean isVerifyPeer() { + return verifyPeer; + } + + /** + * Sets whether to verify peers when using {@link #isSsl() SSL}. Sets peer verification also for already configured Redis + * Sentinel nodes. + * + * @param verifyPeer {@literal true} to verify peers when using {@link #isSsl() SSL}. + */ + public void setVerifyPeer(boolean verifyPeer) { + this.verifyPeer = verifyPeer; + this.sentinels.forEach(it -> it.setVerifyPeer(verifyPeer)); + } + + /** + * Returns {@literal true} if StartTLS is enabled. + * + * @return {@literal true} if StartTLS is enabled. + */ + public boolean isStartTls() { + return startTls; + } + + /** + * Returns whether StartTLS is enabled. Sets StartTLS also for already configured Redis Sentinel nodes. + * + * @param startTls {@literal true} if StartTLS is enabled. + */ + public void setStartTls(boolean startTls) { + this.startTls = startTls; + this.sentinels.forEach(it -> it.setStartTls(startTls)); + } + + /** + * + * @return the list of {@link RedisURI Redis Sentinel URIs}. + */ + public List getSentinels() { + return sentinels; + } + + /** + * Creates an URI based on the RedisURI if possible. + *
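As a rough sketch of the rendering described here (host, port, and credentials are placeholders), a `RedisURI` created from a string can be turned back into a `java.net.URI`:

```java
import java.net.URI;

import io.lettuce.core.RedisURI;

class RedisUriRendering {

    public static void main(String[] args) {

        RedisURI redisUri = RedisURI.create("rediss://user:secret@localhost:6380/1");

        // Standalone host/port addresses can be rendered; a Sentinel URI with
        // multiple Unix Domain Socket nodes cannot and throws IllegalStateException.
        URI rendered = redisUri.toURI();
        System.out.println(rendered);
        System.out.println(redisUri); // toString() uses the same URI-like form
    }
}
```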
+ * An URI an represent a Standalone address using host and port or socket addressing or a Redis Sentinel address using + * host/port. A Redis Sentinel URI with multiple nodes using Unix Domain Sockets cannot be rendered to a {@link URI}. + * + * @return URI based on the RedisURI. + * @throws IllegalStateException if the URI cannot be rendered. + */ + public URI toURI() { + try { + return URI.create(createUriString()); + } catch (Exception e) { + throw new IllegalStateException("Cannot render URI for " + toString(), e); + } + } + + private String createUriString() { + String scheme = getScheme(); + String authority = getAuthority(scheme); + String queryString = getQueryString(); + String uri = scheme + "://" + authority; + + if (!queryString.isEmpty()) { + uri += "?" + queryString; + } + return uri; + } + + private static RedisURI buildRedisUriFromUri(URI uri) { + + LettuceAssert.notNull(uri, "URI must not be null"); + LettuceAssert.notNull(uri.getScheme(), "URI scheme must not be null"); + + Builder builder; + if (isSentinel(uri.getScheme())) { + builder = configureSentinel(uri); + } else { + builder = configureStandalone(uri); + } + + String userInfo = uri.getUserInfo(); + + if (isEmpty(userInfo) && isNotEmpty(uri.getAuthority()) && uri.getAuthority().indexOf('@') > 0) { + userInfo = uri.getAuthority().substring(0, uri.getAuthority().indexOf('@')); + } + + if (isNotEmpty(userInfo)) { + String password = userInfo; + String username = null; + if (password.startsWith(":")) { + password = password.substring(1); + } else { + + int index = password.indexOf(':'); + if (index > 0) { + username = password.substring(0, index); + password = password.substring(index + 1); + } + } + if (LettuceStrings.isNotEmpty(password)) { + if (username == null) { + builder.withPassword(password); + } else { + builder.withAuthentication(username, password); + } + } + } + + if (isNotEmpty(uri.getPath()) && builder.socket == null) { + String pathSuffix = uri.getPath().substring(1); + + if (isNotEmpty(pathSuffix)) { + builder.withDatabase(Integer.parseInt(pathSuffix)); + } + } + + if (isNotEmpty(uri.getQuery())) { + StringTokenizer st = new StringTokenizer(uri.getQuery(), "&;"); + while (st.hasMoreTokens()) { + String queryParam = st.nextToken(); + String forStartWith = queryParam.toLowerCase(); + if (forStartWith.startsWith(PARAMETER_NAME_TIMEOUT + "=")) { + parseTimeout(builder, queryParam.toLowerCase()); + } + + if (forStartWith.startsWith(PARAMETER_NAME_DATABASE + "=") + || queryParam.startsWith(PARAMETER_NAME_DATABASE_ALT + "=")) { + parseDatabase(builder, queryParam); + } + + if (forStartWith.startsWith(PARAMETER_NAME_CLIENT_NAME.toLowerCase() + "=")) { + parseClientName(builder, queryParam); + } + + if (forStartWith.startsWith(PARAMETER_NAME_SENTINEL_MASTER_ID.toLowerCase() + "=")) { + parseSentinelMasterId(builder, queryParam); + } + } + } + + if (isSentinel(uri.getScheme())) { + LettuceAssert.notEmpty(builder.sentinelMasterId, "URI must contain the sentinelMasterId"); + } + + return builder.build(); + } + + private String getAuthority(String scheme) { + + String authority = null; + + if (host != null) { + if (host.contains(",")) { + authority = host; + } else { + authority = urlEncode(host) + getPortPart(port, scheme); + } + } + + if (sentinels.size() != 0) { + + authority = sentinels.stream().map(redisURI -> { + if (LettuceStrings.isNotEmpty(redisURI.getSocket())) { + return String.format("[Socket %s]", redisURI.getSocket()); + } + + return urlEncode(redisURI.getHost()) + getPortPart(redisURI.getPort(), 
scheme); + }).collect(Collectors.joining(",")); + } + + if (socket != null) { + authority = urlEncode(socket); + } else { + if (database != 0) { + authority += "/" + database; + } + } + + if (password != null && password.length != 0) { + authority = urlEncode(new String(password)) + "@" + authority; + } + if (username != null) { + authority = urlEncode(username) + ":" + authority; + } + return authority; + } + + private String getQueryString() { + + List queryPairs = new ArrayList<>(); + + if (database != 0 && LettuceStrings.isNotEmpty(socket)) { + queryPairs.add(PARAMETER_NAME_DATABASE + "=" + database); + } + + if (clientName != null) { + queryPairs.add(PARAMETER_NAME_CLIENT_NAME + "=" + urlEncode(clientName)); + } + + if (sentinelMasterId != null) { + queryPairs.add(PARAMETER_NAME_SENTINEL_MASTER_ID + "=" + urlEncode(sentinelMasterId)); + } + + if (timeout.getSeconds() != DEFAULT_TIMEOUT) { + + if (timeout.getNano() == 0) { + queryPairs.add(PARAMETER_NAME_TIMEOUT + "=" + timeout.getSeconds() + "s"); + } else { + queryPairs.add(PARAMETER_NAME_TIMEOUT + "=" + timeout.toMillis() + "ns"); + } + } + + return queryPairs.stream().collect(Collectors.joining("&")); + } + + private String getPortPart(int port, String scheme) { + + if (isSentinel(scheme) && port == DEFAULT_SENTINEL_PORT) { + return ""; + } + + if (URI_SCHEME_REDIS.equals(scheme) && port == DEFAULT_REDIS_PORT) { + return ""; + } + + return ":" + port; + } + + private String getScheme() { + String scheme = URI_SCHEME_REDIS; + + if (isSsl()) { + if (isStartTls()) { + scheme = URI_SCHEME_REDIS_TLS_ALT; + } else { + scheme = URI_SCHEME_REDIS_SECURE; + } + } + + if (socket != null) { + scheme = URI_SCHEME_REDIS_SOCKET; + } + + if (host == null && !sentinels.isEmpty()) { + if (isSsl()) { + scheme = URI_SCHEME_REDIS_SENTINEL_SECURE; + } else { + scheme = URI_SCHEME_REDIS_SENTINEL; + } + } + return scheme; + } + + /** + * URL encode the {@code str} without slash escaping {@code %2F}. + * + * @param str the string to encode. + * @return the URL-encoded string + */ + private static String urlEncode(String str) { + try { + return URLEncoder.encode(str, StandardCharsets.UTF_8.name()).replaceAll("%2F", "/"); + } catch (UnsupportedEncodingException e) { + throw new IllegalStateException(e); + } + } + + /** + * @return the RedisURL in a URI-like form. + */ + @Override + public String toString() { + return createUriString(); + } + + @Override + public boolean equals(Object o) { + if (this == o) + return true; + if (!(o instanceof RedisURI)) + return false; + + RedisURI redisURI = (RedisURI) o; + + if (port != redisURI.port) + return false; + if (database != redisURI.database) + return false; + if (host != null ? !host.equals(redisURI.host) : redisURI.host != null) + return false; + if (socket != null ? !socket.equals(redisURI.socket) : redisURI.socket != null) + return false; + if (sentinelMasterId != null ? !sentinelMasterId.equals(redisURI.sentinelMasterId) : redisURI.sentinelMasterId != null) + return false; + return !(sentinels != null ? !sentinels.equals(redisURI.sentinels) : redisURI.sentinels != null); + + } + + @Override + public int hashCode() { + int result = host != null ? host.hashCode() : 0; + result = 31 * result + (socket != null ? socket.hashCode() : 0); + result = 31 * result + (sentinelMasterId != null ? sentinelMasterId.hashCode() : 0); + result = 31 * result + port; + result = 31 * result + database; + result = 31 * result + (sentinels != null ? 
sentinels.hashCode() : 0); + return result; + } + + private static void parseTimeout(Builder builder, String queryParam) { + int index = queryParam.indexOf('='); + if (index < 0) { + return; + } + + String timeoutString = queryParam.substring(index + 1); + + int numbersEnd = 0; + while (numbersEnd < timeoutString.length() && Character.isDigit(timeoutString.charAt(numbersEnd))) { + numbersEnd++; + } + + if (numbersEnd == 0) { + if (timeoutString.startsWith("-")) { + builder.withTimeout(Duration.ZERO); + } else { + // no-op, leave defaults + } + } else { + String timeoutValueString = timeoutString.substring(0, numbersEnd); + long timeoutValue = Long.parseLong(timeoutValueString); + builder.withTimeout(Duration.ofMillis(timeoutValue)); + + String suffix = timeoutString.substring(numbersEnd); + LongFunction converter = CONVERTER_MAP.get(suffix); + if (converter == null) { + converter = Duration::ofMillis; + } + + builder.withTimeout(converter.apply(timeoutValue)); + } + } + + private static void parseDatabase(Builder builder, String queryParam) { + int index = queryParam.indexOf('='); + if (index < 0) { + return; + } + + String databaseString = queryParam.substring(index + 1); + + int numbersEnd = 0; + while (numbersEnd < databaseString.length() && Character.isDigit(databaseString.charAt(numbersEnd))) { + numbersEnd++; + } + + if (numbersEnd != 0) { + String databaseValueString = databaseString.substring(0, numbersEnd); + int value = Integer.parseInt(databaseValueString); + builder.withDatabase(value); + } + } + + private static void parseClientName(Builder builder, String queryParam) { + + String clientName = getValuePart(queryParam); + if (isNotEmpty(clientName)) { + builder.withClientName(clientName); + } + } + + private static void parseSentinelMasterId(Builder builder, String queryParam) { + + String masterIdString = getValuePart(queryParam); + if (isNotEmpty(masterIdString)) { + builder.withSentinelMasterId(masterIdString); + } + } + + private static String getValuePart(String queryParam) { + int index = queryParam.indexOf('='); + if (index < 0) { + return null; + } + + return queryParam.substring(index + 1); + } + + private static Builder configureStandalone(URI uri) { + + Builder builder = null; + Set allowedSchemes = LettuceSets.unmodifiableSet(URI_SCHEME_REDIS, URI_SCHEME_REDIS_SECURE, + URI_SCHEME_REDIS_SOCKET, URI_SCHEME_REDIS_SOCKET_ALT, URI_SCHEME_REDIS_SECURE_ALT, URI_SCHEME_REDIS_TLS_ALT); + + if (!allowedSchemes.contains(uri.getScheme())) { + throw new IllegalArgumentException("Scheme " + uri.getScheme() + " not supported"); + } + + if (URI_SCHEME_REDIS_SOCKET.equals(uri.getScheme()) || URI_SCHEME_REDIS_SOCKET_ALT.equals(uri.getScheme())) { + builder = Builder.socket(uri.getPath()); + } else { + + if (isNotEmpty(uri.getHost())) { + + if (uri.getPort() > 0) { + builder = Builder.redis(uri.getHost(), uri.getPort()); + } else { + builder = Builder.redis(uri.getHost()); + } + } else { + + if (isNotEmpty(uri.getAuthority())) { + String authority = uri.getAuthority(); + if (authority.indexOf('@') > -1) { + authority = authority.substring(authority.indexOf('@') + 1); + } + + builder = Builder.redis(authority); + } + } + } + + LettuceAssert.notNull(builder, "Invalid URI, cannot get host or socket part"); + + if (URI_SCHEME_REDIS_SECURE.equals(uri.getScheme()) || URI_SCHEME_REDIS_SECURE_ALT.equals(uri.getScheme())) { + builder.withSsl(true); + } + + if (URI_SCHEME_REDIS_TLS_ALT.equals(uri.getScheme())) { + builder.withSsl(true); + builder.withStartTls(true); + } + return builder; + } + 
+ private static RedisURI.Builder configureSentinel(URI uri) { + String masterId = uri.getFragment(); + + RedisURI.Builder builder = null; + + if (isNotEmpty(uri.getHost())) { + if (uri.getPort() != -1) { + builder = RedisURI.Builder.sentinel(uri.getHost(), uri.getPort()); + } else { + builder = RedisURI.Builder.sentinel(uri.getHost()); + } + } + + if (builder == null && isNotEmpty(uri.getAuthority())) { + String authority = uri.getAuthority(); + if (authority.indexOf('@') > -1) { + authority = authority.substring(authority.indexOf('@') + 1); + } + + String[] hosts = authority.split(","); + for (String host : hosts) { + HostAndPort hostAndPort = HostAndPort.parse(host); + if (builder == null) { + if (hostAndPort.hasPort()) { + builder = RedisURI.Builder.sentinel(hostAndPort.getHostText(), hostAndPort.getPort()); + } else { + builder = RedisURI.Builder.sentinel(hostAndPort.getHostText()); + } + } else { + if (hostAndPort.hasPort()) { + builder.withSentinel(hostAndPort.getHostText(), hostAndPort.getPort()); + } else { + builder.withSentinel(hostAndPort.getHostText()); + } + } + } + } + + LettuceAssert.notNull(builder, "Invalid URI, cannot get host part"); + + if (isNotEmpty(masterId)) { + builder.withSentinelMasterId(masterId); + } + + if (uri.getScheme().equals(URI_SCHEME_REDIS_SENTINEL_SECURE)) { + builder.withSsl(true); + } + + return builder; + } + + private static boolean isSentinel(String scheme) { + return URI_SCHEME_REDIS_SENTINEL.equals(scheme) || URI_SCHEME_REDIS_SENTINEL_SECURE.equals(scheme); + } + + /** + * Builder for Redis URI. + */ + public static class Builder { + + private String host; + private String socket; + private String sentinelMasterId; + private int port = DEFAULT_REDIS_PORT; + private int database; + private String clientName; + private String username; + private char[] password; + private char[] sentinelPassword; + private boolean ssl = false; + private boolean verifyPeer = true; + private boolean startTls = false; + private Duration timeout = DEFAULT_TIMEOUT_DURATION; + private final List sentinels = new ArrayList<>(); + + private Builder() { + } + + /** + * Set Redis socket. Creates a new builder. + * + * @param socket the host name + * @return new builder with Redis socket. + */ + public static Builder socket(String socket) { + + LettuceAssert.notNull(socket, "Socket must not be null"); + + Builder builder = RedisURI.builder(); + builder.socket = socket; + return builder; + } + + /** + * Set Redis host. Creates a new builder. + * + * @param host the host name + * @return new builder with Redis host/port. + */ + public static Builder redis(String host) { + return redis(host, DEFAULT_REDIS_PORT); + } + + /** + * Set Redis host and port. Creates a new builder + * + * @param host the host name + * @param port the port + * @return new builder with Redis host/port. + */ + public static Builder redis(String host, int port) { + + LettuceAssert.notEmpty(host, "Host must not be empty"); + LettuceAssert.isTrue(isValidPort(port), () -> String.format("Port out of range: %s", port)); + + Builder builder = RedisURI.builder(); + return builder.withHost(host).withPort(port); + } + + /** + * Set Sentinel host. Creates a new builder. + * + * @param host the host name + * @return new builder with Sentinel host/port. + */ + public static Builder sentinel(String host) { + + LettuceAssert.notEmpty(host, "Host must not be empty"); + + Builder builder = RedisURI.builder(); + return builder.withSentinel(host); + } + + /** + * Set Sentinel host and port. Creates a new builder. 
+ * + * @param host the host name + * @param port the port + * @return new builder with Sentinel host/port. + */ + public static Builder sentinel(String host, int port) { + + LettuceAssert.notEmpty(host, "Host must not be empty"); + LettuceAssert.isTrue(isValidPort(port), () -> String.format("Port out of range: %s", port)); + + Builder builder = RedisURI.builder(); + return builder.withSentinel(host, port); + } + + /** + * Set Sentinel host and master id. Creates a new builder. + * + * @param host the host name + * @param masterId sentinel master id + * @return new builder with Sentinel host/port. + */ + public static Builder sentinel(String host, String masterId) { + return sentinel(host, DEFAULT_SENTINEL_PORT, masterId); + } + + /** + * Set Sentinel host, port and master id. Creates a new builder. + * + * @param host the host name + * @param port the port + * @param masterId sentinel master id + * @return new builder with Sentinel host/port. + */ + public static Builder sentinel(String host, int port, String masterId) { + return sentinel(host, port, masterId, null); + } + + /** + * Set Sentinel host, port, master id and Sentinel authentication. Creates a new builder. + * + * @param host the host name + * @param port the port + * @param masterId sentinel master id + * @param password the Sentinel password (supported since Redis 5.0.1) + * @return new builder with Sentinel host/port. + */ + public static Builder sentinel(String host, int port, String masterId, CharSequence password) { + + LettuceAssert.notEmpty(host, "Host must not be empty"); + LettuceAssert.isTrue(isValidPort(port), () -> String.format("Port out of range: %s", port)); + + Builder builder = RedisURI.builder(); + if (password != null) { + builder.sentinelPassword = password.toString().toCharArray(); + } + return builder.withSentinelMasterId(masterId).withSentinel(host, port); + } + + /** + * Add a withSentinel host to the existing builder. + * + * @param host the host name + * @return the builder + */ + public Builder withSentinel(String host) { + return withSentinel(host, DEFAULT_SENTINEL_PORT); + } + + /** + * Add a withSentinel host/port to the existing builder. + * + * @param host the host name + * @param port the port + * @return the builder + */ + public Builder withSentinel(String host, int port) { + + if (this.sentinelPassword != null) { + return withSentinel(host, port, new String(this.sentinelPassword)); + } + + return withSentinel(host, port, null); + } + + /** + * Add a withSentinel host/port and Sentinel authentication to the existing builder. + * + * @param host the host name + * @param port the port + * @param password the Sentinel password (supported since Redis 5.0.1) + * @return the builder + * @since 5.2 + */ + public Builder withSentinel(String host, int port, CharSequence password) { + + LettuceAssert.assertState(this.host == null, "Cannot use with Redis mode."); + LettuceAssert.notEmpty(host, "Host must not be empty"); + LettuceAssert.isTrue(isValidPort(port), () -> String.format("Port out of range: %s", port)); + + RedisURI redisURI = RedisURI.create(host, port); + + if (password != null) { + redisURI.setPassword(password.toString()); + } + + return withSentinel(redisURI); + } + + /** + * Add a withSentinel RedisURI to the existing builder. 
+ * + * @param redisURI the sentinel URI + * @return the builder + * @since 5.2 + */ + public Builder withSentinel(RedisURI redisURI) { + + LettuceAssert.notNull(redisURI, "Redis URI must not be null"); + + sentinels.add(redisURI); + return this; + } + + /** + * Adds host information to the builder. Does only affect Redis URI, cannot be used with Sentinel connections. + * + * @param host the port + * @return the builder + */ + public Builder withHost(String host) { + + LettuceAssert.assertState(this.sentinels.isEmpty(), "Sentinels are non-empty. Cannot use in Sentinel mode."); + LettuceAssert.notEmpty(host, "Host must not be empty"); + + this.host = host; + return this; + } + + /** + * Adds port information to the builder. Does only affect Redis URI, cannot be used with Sentinel connections. + * + * @param port the port + * @return the builder + */ + public Builder withPort(int port) { + + LettuceAssert.assertState(this.host != null, "Host is null. Cannot use in Sentinel mode."); + LettuceAssert.isTrue(isValidPort(port), () -> String.format("Port out of range: %s", port)); + + this.port = port; + return this; + } + + /** + * Adds ssl information to the builder. Sets SSL also for already configured Redis Sentinel nodes. + * + * @param ssl {@literal true} if use SSL + * @return the builder + */ + public Builder withSsl(boolean ssl) { + + this.ssl = ssl; + this.sentinels.forEach(it -> it.setSsl(ssl)); + return this; + } + + /** + * Enables/disables StartTLS when using SSL. Sets StartTLS also for already configured Redis Sentinel nodes. + * + * @param startTls {@literal true} if use StartTLS + * @return the builder + */ + public Builder withStartTls(boolean startTls) { + + this.startTls = startTls; + this.sentinels.forEach(it -> it.setStartTls(startTls)); + return this; + } + + /** + * Enables/disables peer verification. Sets peer verification also for already configured Redis Sentinel nodes. + * + * @param verifyPeer {@literal true} to verify hosts when using SSL + * @return the builder + */ + public Builder withVerifyPeer(boolean verifyPeer) { + + this.verifyPeer = verifyPeer; + this.sentinels.forEach(it -> it.setVerifyPeer(verifyPeer)); + return this; + } + + /** + * Configures the database number. + * + * @param database the database number + * @return the builder + */ + public Builder withDatabase(int database) { + + LettuceAssert.isTrue(database >= 0, () -> "Invalid database number: " + database); + + this.database = database; + return this; + } + + /** + * Configures a client name. + * + * @param clientName the client name + * @return the builder + */ + public Builder withClientName(String clientName) { + + LettuceAssert.notNull(clientName, "Client name must not be null"); + + this.clientName = clientName; + return this; + } + + /** + * Configures authentication. + * + * @param username the user name + * @param password the password name + * @return the builder + */ + public Builder withAuthentication(String username, CharSequence password) { + + LettuceAssert.notNull(username, "User name must not be null"); + LettuceAssert.notNull(password, "Password must not be null"); + + this.username = username; + return withPassword(password); + } + + /** + * Configures authentication. + * + * @param password the password + * @return the builder + * @deprecated since 6.0. Use {@link #withPassword(CharSequence)} or {@link #withPassword(char[])} avoid String caching. 
+ */ + @Deprecated + public Builder withPassword(String password) { + + LettuceAssert.notNull(password, "Password must not be null"); + + return withPassword(password.toCharArray()); + } + + /** + * Configures authentication. + * + * @param password the password + * @return the builder + * @since 6.0 + */ + public Builder withPassword(CharSequence password) { + + LettuceAssert.notNull(password, "Password must not be null"); + + char[] chars = new char[password.length()]; + for (int i = 0; i < password.length(); i++) { + chars[i] = password.charAt(i); + } + + return withPassword(chars); + } + + /** + * Configures authentication. + * + * @param password the password + * @return the builder + * @since 4.4 + */ + public Builder withPassword(char[] password) { + + LettuceAssert.notNull(password, "Password must not be null"); + + this.password = Arrays.copyOf(password, password.length); + return this; + } + + /** + * Configures a timeout. + * + * @param timeout must not be {@literal null} or negative. + * @return the builder + */ + public Builder withTimeout(Duration timeout) { + + LettuceAssert.notNull(timeout, "Timeout must not be null"); + LettuceAssert.notNull(!timeout.isNegative(), "Timeout must be greater or equal 0"); + + this.timeout = timeout; + return this; + } + + /** + * Configures a sentinel master Id. + * + * @param sentinelMasterId sentinel master id, must not be empty or {@literal null} + * @return the builder + */ + public Builder withSentinelMasterId(String sentinelMasterId) { + + LettuceAssert.notEmpty(sentinelMasterId, "Sentinel master id must not empty"); + + this.sentinelMasterId = sentinelMasterId; + return this; + } + + /** + * @return the RedisURI. + */ + public RedisURI build() { + + if (sentinels.isEmpty() && LettuceStrings.isEmpty(host) && LettuceStrings.isEmpty(socket)) { + throw new IllegalStateException( + "Cannot build a RedisURI. One of the following must be provided Host, Socket or Sentinel"); + } + + RedisURI redisURI = new RedisURI(); + redisURI.setHost(host); + redisURI.setPort(port); + + if (username != null) { + redisURI.setUsername(username); + } + + if (password != null) { + redisURI.setPassword(password); + } + + redisURI.setDatabase(database); + redisURI.setClientName(clientName); + + redisURI.setSentinelMasterId(sentinelMasterId); + + for (RedisURI sentinel : sentinels) { + + sentinel.setTimeout(timeout); + redisURI.getSentinels().add(sentinel); + } + + redisURI.setSocket(socket); + redisURI.setSsl(ssl); + redisURI.setStartTls(startTls); + redisURI.setVerifyPeer(verifyPeer); + redisURI.setTimeout(timeout); + + return redisURI; + } + } + + /** Return true for valid port numbers. */ + private static boolean isValidPort(int port) { + return port >= 0 && port <= 65535; + } +} diff --git a/src/main/java/io/lettuce/core/RestoreArgs.java b/src/main/java/io/lettuce/core/RestoreArgs.java new file mode 100644 index 0000000000..aa4dcd929f --- /dev/null +++ b/src/main/java/io/lettuce/core/RestoreArgs.java @@ -0,0 +1,116 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.time.Duration; + +import io.lettuce.core.internal.LettuceAssert; + +/** + * Argument list builder for the Redis RESTORE command. Static import the methods + * from {@link RestoreArgs.Builder} and call the methods: {@code ttl(…)} . + *
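A small usage sketch (the surrounding class is illustrative; the `RESTORE` invocation itself is only hinted at in a comment):

```java
import java.time.Duration;

import io.lettuce.core.RestoreArgs;

class RestoreArgsExample {

    static RestoreArgs thirtySecondTtlReplacing() {
        // TTL of 30 seconds for the restored key, overwriting the target key if it already exists.
        // Typically passed to a RESTORE command together with the serialized value obtained from DUMP.
        return RestoreArgs.Builder.ttl(Duration.ofSeconds(30)).replace();
    }
}
```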
+ * {@link RestoreArgs} is a mutable object and instances should be used only once to avoid shared mutable state. + * + * @author Mark Paluch + * @since 5.1 + */ +public class RestoreArgs { + + long ttl; + boolean replace; + + /** + * Builder entry points for {@link XAddArgs}. + */ + public static class Builder { + + /** + * Utility constructor. + */ + private Builder() { + } + + /** + * Creates new {@link RestoreArgs} and set the TTL. + * + * @return new {@link RestoreArgs} with min idle time set. + * @see RestoreArgs#ttl(long) + */ + public static RestoreArgs ttl(long milliseconds) { + return new RestoreArgs().ttl(milliseconds); + } + + /** + * Creates new {@link RestoreArgs} and set the minimum idle time. + * + * @return new {@link RestoreArgs} with min idle time set. + * @see RestoreArgs#ttl(Duration) + */ + public static RestoreArgs ttl(Duration ttl) { + + LettuceAssert.notNull(ttl, "Time to live must not be null"); + + return ttl(ttl.toMillis()); + } + } + + /** + * Set TTL in {@code milliseconds} after restoring the key. + * + * @param milliseconds time to live. + * @return {@code this}. + */ + public RestoreArgs ttl(long milliseconds) { + + this.ttl = milliseconds; + return this; + } + + /** + * Set TTL in {@code milliseconds} after restoring the key. + * + * @param ttl time to live. + * @return {@code this}. + */ + public RestoreArgs ttl(Duration ttl) { + + LettuceAssert.notNull(ttl, "Time to live must not be null"); + + return ttl(ttl.toMillis()); + } + + /** + * Replaces existing keys if the target key already exists. + * + * @return {@code this}. + */ + public RestoreArgs replace() { + return replace(true); + } + + /** + * Replaces existing keys if the target key already exists. + * + * @param replace {@literal true} to enable replacing of existing keys. + * @return {@code this}. + */ + public RestoreArgs replace(boolean replace) { + + this.replace = replace; + return this; + } +} diff --git a/src/main/java/io/lettuce/core/ScanArgs.java b/src/main/java/io/lettuce/core/ScanArgs.java new file mode 100644 index 0000000000..66ee28c429 --- /dev/null +++ b/src/main/java/io/lettuce/core/ScanArgs.java @@ -0,0 +1,128 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import static io.lettuce.core.protocol.CommandKeyword.COUNT; +import static io.lettuce.core.protocol.CommandKeyword.MATCH; + +import java.nio.charset.Charset; +import java.nio.charset.StandardCharsets; + +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.protocol.CommandArgs; + +/** + * Argument list builder for the Redis scan commands ({@literal SCAN, HSCAN, SSCAN, ZSCAN}). Static import the methods from + * + * {@link Builder} and chain the method calls: {@code matches("weight_*").limit(0, 2)}. + *
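For example, a sketch of composing the argument chain mentioned above (the wrapping class and method name are placeholders):

```java
import io.lettuce.core.ScanArgs;

class ScanArgsExample {

    static ScanArgs weightsInSmallBatches() {
        // MATCH weight_* and hint the server to return roughly 100 elements per SCAN round trip.
        return ScanArgs.Builder.matches("weight_*").limit(100);
    }
}
```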
+ * {@link ScanArgs} is a mutable object and instances should be used only once to avoid shared mutable state. + * + * @author Mark Paluch + * @author Ge Jun + * @since 3.0 + */ +public class ScanArgs implements CompositeArgument { + + private Long count; + private String match; + private Charset charset; + + /** + * Builder entry points for {@link ScanArgs}. + */ + public static class Builder { + + /** + * Utility constructor. + */ + private Builder() { + } + + /** + * Creates new {@link ScanArgs} with {@literal LIMIT} set. + * + * @param count number of elements to scan + * @return new {@link ScanArgs} with {@literal LIMIT} set. + * @see ScanArgs#limit(long) + */ + public static ScanArgs limit(long count) { + return new ScanArgs().limit(count); + } + + /** + * Creates new {@link ScanArgs} with {@literal MATCH} set. + * + * @param matches the filter. + * @return new {@link ScanArgs} with {@literal MATCH} set. + * @see ScanArgs#match(String) + */ + public static ScanArgs matches(String matches) { + return new ScanArgs().match(matches); + } + } + + /** + * Set the match filter. Uses {@link StandardCharsets#UTF_8 UTF-8} to encode {@code match}. + * + * @param match the filter, must not be {@literal null}. + * @return {@literal this} {@link ScanArgs}. + */ + public ScanArgs match(String match) { + return match(match, StandardCharsets.UTF_8); + } + + /** + * Set the match filter along the given {@link Charset}. + * + * @param match the filter, must not be {@literal null}. + * @param charset the charset for match, must not be {@literal null}. + * @return {@literal this} {@link ScanArgs}. + * @since 6.0 + */ + public ScanArgs match(String match, Charset charset) { + + LettuceAssert.notNull(match, "Match must not be null"); + LettuceAssert.notNull(charset, "Charset must not be null"); + + this.match = match; + this.charset = charset; + return this; + } + + /** + * Limit the scan by count + * + * @param count number of elements to scan + * @return {@literal this} {@link ScanArgs}. + */ + public ScanArgs limit(long count) { + + this.count = count; + return this; + } + + public void build(CommandArgs args) { + + if (match != null) { + args.add(MATCH).add(match.getBytes(charset)); + } + + if (count != null) { + args.add(COUNT).add(count); + } + } +} diff --git a/src/main/java/io/lettuce/core/ScanCursor.java b/src/main/java/io/lettuce/core/ScanCursor.java new file mode 100644 index 0000000000..769ed2862e --- /dev/null +++ b/src/main/java/io/lettuce/core/ScanCursor.java @@ -0,0 +1,117 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import io.lettuce.core.internal.LettuceAssert; + +/** + * Generic Cursor data structure. + * + * @author Mark Paluch + * @since 3.0 + */ +public class ScanCursor { + + /** + * Finished cursor. + */ + public static final ScanCursor FINISHED = new ImmutableScanCursor("0", true); + + /** + * Initial cursor. 
+ */ + public static final ScanCursor INITIAL = new ImmutableScanCursor("0", false); + + private String cursor; + private boolean finished; + + /** + * Creates a new {@link ScanCursor}. + */ + public ScanCursor() { + } + + /** + * Creates a new {@link ScanCursor}. + * + * @param cursor + * @param finished + */ + public ScanCursor(String cursor, boolean finished) { + this.cursor = cursor; + this.finished = finished; + } + + /** + * + * @return cursor id + */ + public String getCursor() { + return cursor; + } + + /** + * Set the cursor + * + * @param cursor the cursor id + */ + public void setCursor(String cursor) { + LettuceAssert.notEmpty(cursor, "Cursor must not be empty"); + + this.cursor = cursor; + } + + /** + * + * @return true if the scan operation of this cursor is finished. + */ + public boolean isFinished() { + return finished; + } + + public void setFinished(boolean finished) { + this.finished = finished; + } + + /** + * Creates a Scan-Cursor reference. + * + * @param cursor the cursor id + * @return ScanCursor + */ + public static ScanCursor of(String cursor) { + ScanCursor scanCursor = new ScanCursor(); + scanCursor.setCursor(cursor); + return scanCursor; + } + + private static class ImmutableScanCursor extends ScanCursor { + + public ImmutableScanCursor(String cursor, boolean finished) { + super(cursor, finished); + } + + @Override + public void setCursor(String cursor) { + throw new UnsupportedOperationException("setCursor not supported on " + getClass().getSimpleName()); + } + + @Override + public void setFinished(boolean finished) { + throw new UnsupportedOperationException("setFinished not supported on " + getClass().getSimpleName()); + } + } +} diff --git a/src/main/java/io/lettuce/core/ScanIterator.java b/src/main/java/io/lettuce/core/ScanIterator.java new file mode 100644 index 0000000000..b650ec7fee --- /dev/null +++ b/src/main/java/io/lettuce/core/ScanIterator.java @@ -0,0 +1,338 @@ +/* + * Copyright 2016-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.util.Iterator; +import java.util.NoSuchElementException; +import java.util.Optional; +import java.util.Spliterators; +import java.util.stream.Stream; +import java.util.stream.StreamSupport; + +import io.lettuce.core.api.sync.RedisHashCommands; +import io.lettuce.core.api.sync.RedisKeyCommands; +import io.lettuce.core.api.sync.RedisSetCommands; +import io.lettuce.core.api.sync.RedisSortedSetCommands; +import io.lettuce.core.internal.LettuceAssert; + +/** + * Scan command support exposed through {@link Iterator}. + *
+ * {@link ScanIterator} uses synchronous command interfaces to scan over keys ({@code SCAN}), sets ({@code SSCAN}), sorted sets + * ({@code ZSCAN}), and hashes ({@code HSCAN}). A {@link ScanIterator} is stateful and not thread-safe. Instances can be used + * only once to iterate over results. + *
+ * Use {@link ScanArgs#limit(long)} to set the batch size. + *
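A usage sketch, assuming an already established `StatefulRedisConnection<String, String>` obtained elsewhere (connection setup is outside this excerpt):

```java
import io.lettuce.core.ScanArgs;
import io.lettuce.core.ScanIterator;
import io.lettuce.core.api.StatefulRedisConnection;

class ScanIteratorExample {

    static void printUserKeys(StatefulRedisConnection<String, String> connection) {

        ScanIterator<String> keys = ScanIterator.scan(connection.sync(),
                ScanArgs.Builder.matches("user:*").limit(200));

        // Follow-up SCAN commands are only issued while the iterator is being consumed.
        while (keys.hasNext()) {
            System.out.println(keys.next());
        }
    }
}
```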
+ * Data structure scanning is progressive and stateful and demand-aware. It supports full iterations (until all received cursors + * are exhausted) and premature termination. Subsequent scan commands to fetch the cursor data get only issued if the caller + * signals demand by consuming the {@link ScanIterator}. + * + * @param Element type + * @author Mark Paluch + * @since 4.4 + */ +public abstract class ScanIterator implements Iterator { + + private ScanIterator() { + } + + /** + * Sequentially iterate over keys in the keyspace. This method uses {@code SCAN} to perform an iterative scan. + * + * @param commands the commands interface, must not be {@literal null}. + * @param Key type. + * @param Value type. + * @return a new {@link ScanIterator}. + */ + public static ScanIterator scan(RedisKeyCommands commands) { + return scan(commands, Optional.empty()); + } + + /** + * Sequentially iterate over keys in the keyspace. This method uses {@code SCAN} to perform an iterative scan. + * + * @param commands the commands interface, must not be {@literal null}. + * @param scanArgs the scan arguments, must not be {@literal null}. + * @param Key type. + * @param Value type. + * @return a new {@link ScanIterator}. + */ + public static ScanIterator scan(RedisKeyCommands commands, ScanArgs scanArgs) { + + LettuceAssert.notNull(scanArgs, "ScanArgs must not be null"); + + return scan(commands, Optional.of(scanArgs)); + } + + private static ScanIterator scan(RedisKeyCommands commands, Optional scanArgs) { + + LettuceAssert.notNull(commands, "RedisKeyCommands must not be null"); + + return new SyncScanIterator() { + + @Override + protected ScanCursor nextScanCursor(ScanCursor scanCursor) { + + KeyScanCursor cursor = getNextScanCursor(scanCursor); + chunk = cursor.getKeys().iterator(); + return cursor; + } + + private KeyScanCursor getNextScanCursor(ScanCursor scanCursor) { + + if (scanCursor == null) { + return scanArgs.map(commands::scan).orElseGet(commands::scan); + } + + return scanArgs.map((scanArgs) -> commands.scan(scanCursor, scanArgs)).orElseGet( + () -> commands.scan(scanCursor)); + } + }; + } + + /** + * Sequentially iterate over entries in a hash identified by {@code key}. This method uses {@code HSCAN} to perform an + * iterative scan. + * + * @param commands the commands interface, must not be {@literal null}. + * @param key the hash to scan. + * @param Key type. + * @param Value type. + * @return a new {@link ScanIterator}. + */ + public static ScanIterator> hscan(RedisHashCommands commands, K key) { + return hscan(commands, key, Optional.empty()); + } + + /** + * Sequentially iterate over entries in a hash identified by {@code key}. This method uses {@code HSCAN} to perform an + * iterative scan. + * + * @param commands the commands interface, must not be {@literal null}. + * @param key the hash to scan. + * @param scanArgs the scan arguments, must not be {@literal null}. + * @param Key type. + * @param Value type. + * @return a new {@link ScanIterator}. 
+ */ + public static ScanIterator> hscan(RedisHashCommands commands, K key, ScanArgs scanArgs) { + + LettuceAssert.notNull(scanArgs, "ScanArgs must not be null"); + + return hscan(commands, key, Optional.of(scanArgs)); + } + + private static ScanIterator> hscan(RedisHashCommands commands, K key, + Optional scanArgs) { + + LettuceAssert.notNull(commands, "RedisKeyCommands must not be null"); + LettuceAssert.notNull(key, "Key must not be null"); + + return new SyncScanIterator>() { + + @Override + protected ScanCursor nextScanCursor(ScanCursor scanCursor) { + + MapScanCursor cursor = getNextScanCursor(scanCursor); + chunk = cursor.getMap().keySet().stream().map(k -> KeyValue.fromNullable(k, cursor.getMap().get(k))).iterator(); + return cursor; + } + + private MapScanCursor getNextScanCursor(ScanCursor scanCursor) { + + if (scanCursor == null) { + return scanArgs.map(scanArgs -> commands.hscan(key, scanArgs)).orElseGet(() -> commands.hscan(key)); + } + + return scanArgs.map((scanArgs) -> commands.hscan(key, scanCursor, scanArgs)).orElseGet( + () -> commands.hscan(key, scanCursor)); + } + }; + } + + /** + * Sequentially iterate over elements in a set identified by {@code key}. This method uses {@code SSCAN} to perform an + * iterative scan. + * + * @param commands the commands interface, must not be {@literal null}. + * @param key the set to scan. + * @param Key type. + * @param Value type. + * @return a new {@link ScanIterator}. + */ + public static ScanIterator sscan(RedisSetCommands commands, K key) { + return sscan(commands, key, Optional.empty()); + } + + /** + * Sequentially iterate over elements in a set identified by {@code key}. This method uses {@code SSCAN} to perform an + * iterative scan. + * + * @param commands the commands interface, must not be {@literal null}. + * @param key the set to scan. + * @param scanArgs the scan arguments, must not be {@literal null}. + * @param Key type. + * @param Value type. + * @return a new {@link ScanIterator}. + */ + public static ScanIterator sscan(RedisSetCommands commands, K key, ScanArgs scanArgs) { + + LettuceAssert.notNull(scanArgs, "ScanArgs must not be null"); + + return sscan(commands, key, Optional.of(scanArgs)); + } + + private static ScanIterator sscan(RedisSetCommands commands, K key, Optional scanArgs) { + + LettuceAssert.notNull(commands, "RedisKeyCommands must not be null"); + LettuceAssert.notNull(key, "Key must not be null"); + + return new SyncScanIterator() { + + @Override + protected ScanCursor nextScanCursor(ScanCursor scanCursor) { + + ValueScanCursor cursor = getNextScanCursor(scanCursor); + chunk = cursor.getValues().iterator(); + return cursor; + } + + private ValueScanCursor getNextScanCursor(ScanCursor scanCursor) { + + if (scanCursor == null) { + return scanArgs.map(scanArgs -> commands.sscan(key, scanArgs)).orElseGet(() -> commands.sscan(key)); + } + + return scanArgs.map((scanArgs) -> commands.sscan(key, scanCursor, scanArgs)).orElseGet( + () -> commands.sscan(key, scanCursor)); + } + }; + } + + /** + * Sequentially iterate over scored values in a sorted set identified by {@code key}. This method uses {@code ZSCAN} to + * perform an iterative scan. + * + * @param commands the commands interface, must not be {@literal null}. + * @param key the sorted set to scan. + * @param Key type. + * @param Value type. + * @return a new {@link ScanIterator}. 
+ */ + public static ScanIterator> zscan(RedisSortedSetCommands commands, K key) { + return zscan(commands, key, Optional.empty()); + } + + /** + * Sequentially iterate over scored values in a sorted set identified by {@code key}. This method uses {@code ZSCAN} to + * perform an iterative scan. + * + * @param commands the commands interface, must not be {@literal null}. + * @param key the sorted set to scan. + * @param scanArgs the scan arguments, must not be {@literal null}. + * @param Key type. + * @param Value type. + * @return a new {@link ScanIterator}. + */ + public static ScanIterator> zscan(RedisSortedSetCommands commands, K key, ScanArgs scanArgs) { + + LettuceAssert.notNull(scanArgs, "ScanArgs must not be null"); + + return zscan(commands, key, Optional.of(scanArgs)); + } + + private static ScanIterator> zscan(RedisSortedSetCommands commands, K key, + Optional scanArgs) { + + LettuceAssert.notNull(commands, "RedisKeyCommands must not be null"); + LettuceAssert.notNull(key, "Key must not be null"); + + return new SyncScanIterator>() { + + @Override + protected ScanCursor nextScanCursor(ScanCursor scanCursor) { + + ScoredValueScanCursor cursor = getNextScanCursor(scanCursor); + chunk = cursor.getValues().iterator(); + return cursor; + } + + private ScoredValueScanCursor getNextScanCursor(ScanCursor scanCursor) { + + if (scanCursor == null) { + return scanArgs.map(scanArgs -> commands.zscan(key, scanArgs)).orElseGet(() -> commands.zscan(key)); + } + + return scanArgs.map((scanArgs) -> commands.zscan(key, scanCursor, scanArgs)).orElseGet( + () -> commands.zscan(key, scanCursor)); + } + }; + } + + /** + * Returns a sequential {@code Stream} with this {@link ScanIterator} as its source. + * + * @return a {@link Stream} for this {@link ScanIterator}. + */ + public Stream stream() { + return StreamSupport.stream(Spliterators.spliterator(this, 0, 0), false); + } + + /** + * Synchronous {@link ScanIterator} implementation. + * + * @param + */ + private static abstract class SyncScanIterator extends ScanIterator { + + private ScanCursor scanCursor; + protected Iterator chunk = null; + + @Override + public boolean hasNext() { + + while (scanCursor == null || !scanCursor.isFinished()) { + + if (scanCursor == null || !hasChunkElements()) { + scanCursor = nextScanCursor(scanCursor); + } + + if (hasChunkElements()) { + return true; + } + } + + return hasChunkElements(); + } + + private boolean hasChunkElements() { + return chunk.hasNext(); + } + + @Override + public T next() { + + if (!hasNext()) { + throw new NoSuchElementException(); + } + + return chunk.next(); + } + + protected abstract ScanCursor nextScanCursor(ScanCursor scanCursor); + } +} diff --git a/src/main/java/io/lettuce/core/ScanStream.java b/src/main/java/io/lettuce/core/ScanStream.java new file mode 100644 index 0000000000..28a5dc9536 --- /dev/null +++ b/src/main/java/io/lettuce/core/ScanStream.java @@ -0,0 +1,558 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.util.*; +import java.util.concurrent.atomic.AtomicIntegerFieldUpdater; +import java.util.concurrent.atomic.AtomicLongFieldUpdater; +import java.util.concurrent.atomic.AtomicReferenceFieldUpdater; +import java.util.function.Function; + +import reactor.core.publisher.BaseSubscriber; +import reactor.core.publisher.Flux; +import reactor.core.publisher.FluxSink; +import reactor.core.publisher.Mono; +import reactor.util.context.Context; +import io.lettuce.core.api.reactive.RedisHashReactiveCommands; +import io.lettuce.core.api.reactive.RedisKeyReactiveCommands; +import io.lettuce.core.api.reactive.RedisSetReactiveCommands; +import io.lettuce.core.api.reactive.RedisSortedSetReactiveCommands; +import io.lettuce.core.internal.LettuceAssert; + +/** + * Scan command support exposed through {@link Flux}. + *
+ * {@link ScanStream} uses reactive command interfaces to scan over keys ({@code SCAN}), sets ({@code SSCAN}), sorted sets ( + * {@code ZSCAN}), and hashes ({@code HSCAN}). + *
+ * Use {@link ScanArgs#limit(long)} to set the batch size. + *
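A reactive usage sketch, again assuming an already established `StatefulRedisConnection<String, String>` (connection setup is outside this excerpt):

```java
import reactor.core.publisher.Flux;

import io.lettuce.core.ScanArgs;
import io.lettuce.core.ScanStream;
import io.lettuce.core.api.StatefulRedisConnection;

class ScanStreamExample {

    static Flux<String> sessionKeys(StatefulRedisConnection<String, String> connection) {

        // Further SCAN cursors are only fetched once the subscriber signals demand.
        return ScanStream.scan(connection.reactive(),
                ScanArgs.Builder.matches("session:*").limit(500));
    }
}
```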
+ * Data structure scanning is progressive and stateful and demand-aware. It supports full iterations (until all received cursors + * are exhausted) and premature termination. Subsequent scan commands to fetch the cursor data get only issued if the subscriber + * signals demand. + * + * @author Mark Paluch + * @since 5.1 + */ +public abstract class ScanStream { + + private ScanStream() { + } + + /** + * Sequentially iterate over keys in the keyspace. This method uses {@code SCAN} to perform an iterative scan. + * + * @param commands the commands interface, must not be {@literal null}. + * @param Key type. + * @param Value type. + * @return a new {@link Flux}. + */ + public static Flux scan(RedisKeyReactiveCommands commands) { + return scan(commands, Optional.empty()); + } + + /** + * Sequentially iterate over keys in the keyspace. This method uses {@code SCAN} to perform an iterative scan. + * + * @param commands the commands interface, must not be {@literal null}. + * @param scanArgs the scan arguments, must not be {@literal null}. + * @param Key type. + * @param Value type. + * @return a new {@link Flux}. + */ + public static Flux scan(RedisKeyReactiveCommands commands, ScanArgs scanArgs) { + + LettuceAssert.notNull(scanArgs, "ScanArgs must not be null"); + + return scan(commands, Optional.of(scanArgs)); + } + + private static Flux scan(RedisKeyReactiveCommands commands, Optional scanArgs) { + + LettuceAssert.notNull(commands, "RedisKeyCommands must not be null"); + + return Flux.create(sink -> { + + Mono> res = scanArgs.map(commands::scan).orElseGet(commands::scan); + + scan(sink, res, c -> scanArgs.map(it -> commands.scan(c, it)).orElseGet(() -> commands.scan(c)), // + KeyScanCursor::getKeys); + }); + } + + /** + * Sequentially iterate over entries in a hash identified by {@code key}. This method uses {@code HSCAN} to perform an + * iterative scan. + * + * @param commands the commands interface, must not be {@literal null}. + * @param key the hash to scan. + * @param Key type. + * @param Value type. + * @return a new {@link Flux}. + */ + public static Flux> hscan(RedisHashReactiveCommands commands, K key) { + return hscan(commands, key, Optional.empty()); + } + + /** + * Sequentially iterate over entries in a hash identified by {@code key}. This method uses {@code HSCAN} to perform an + * iterative scan. + * + * @param commands the commands interface, must not be {@literal null}. + * @param key the hash to scan. + * @param scanArgs the scan arguments, must not be {@literal null}. + * @param Key type. + * @param Value type. + * @return a new {@link Flux}. 
+ */ + public static Flux> hscan(RedisHashReactiveCommands commands, K key, ScanArgs scanArgs) { + + LettuceAssert.notNull(scanArgs, "ScanArgs must not be null"); + + return hscan(commands, key, Optional.of(scanArgs)); + } + + private static Flux> hscan(RedisHashReactiveCommands commands, K key, + Optional scanArgs) { + + LettuceAssert.notNull(commands, "RedisHashReactiveCommands must not be null"); + LettuceAssert.notNull(key, "Key must not be null"); + + return Flux.create(sink -> { + + Mono> res = scanArgs.map(it -> commands.hscan(key, it)).orElseGet(() -> commands.hscan(key)); + + scan(sink, res, c -> scanArgs.map(it -> commands.hscan(key, c, it)).orElseGet(() -> commands.hscan(key, c)), // + c -> { + + List> list = new ArrayList<>(c.getMap().size()); + + for (Map.Entry kvEntry : c.getMap().entrySet()) { + list.add(KeyValue.fromNullable(kvEntry.getKey(), kvEntry.getValue())); + } + return list; + }); + }); + + } + + /** + * Sequentially iterate over elements in a set identified by {@code key}. This method uses {@code SSCAN} to perform an + * iterative scan. + * + * @param commands the commands interface, must not be {@literal null}. + * @param key the set to scan. + * @param Key type. + * @param Value type. + * @return a new {@link Flux}. + */ + public static Flux sscan(RedisSetReactiveCommands commands, K key) { + return sscan(commands, key, Optional.empty()); + } + + /** + * Sequentially iterate over elements in a set identified by {@code key}. This method uses {@code SSCAN} to perform an + * iterative scan. + * + * @param commands the commands interface, must not be {@literal null}. + * @param key the set to scan. + * @param scanArgs the scan arguments, must not be {@literal null}. + * @param Key type. + * @param Value type. + * @return a new {@link Flux}. + */ + public static Flux sscan(RedisSetReactiveCommands commands, K key, ScanArgs scanArgs) { + + LettuceAssert.notNull(scanArgs, "ScanArgs must not be null"); + + return sscan(commands, key, Optional.of(scanArgs)); + } + + private static Flux sscan(RedisSetReactiveCommands commands, K key, Optional scanArgs) { + + LettuceAssert.notNull(commands, "RedisSetReactiveCommands must not be null"); + LettuceAssert.notNull(key, "Key must not be null"); + + return Flux.create(sink -> { + + Mono> res = scanArgs.map(it -> commands.sscan(key, it)).orElseGet(() -> commands.sscan(key)); + + scan(sink, res, c -> scanArgs.map(it -> commands.sscan(key, c, it)).orElseGet(() -> commands.sscan(key, c)), // + ValueScanCursor::getValues); + }); + } + + /** + * Sequentially iterate over elements in a set identified by {@code key}. This method uses {@code SSCAN} to perform an + * iterative scan. + * + * @param commands the commands interface, must not be {@literal null}. + * @param key the sorted set to scan. + * @param Key type. + * @param Value type. + * @return a new {@link Flux}. + */ + public static Flux> zscan(RedisSortedSetReactiveCommands commands, K key) { + return zscan(commands, key, Optional.empty()); + } + + /** + * Sequentially iterate over elements in a set identified by {@code key}. This method uses {@code SSCAN} to perform an + * iterative scan. + * + * @param commands the commands interface, must not be {@literal null}. + * @param key the sorted set to scan. + * @param scanArgs the scan arguments, must not be {@literal null}. + * @param Key type. + * @param Value type. + * @return a new {@link Flux}. 
+ */ + public static Flux> zscan(RedisSortedSetReactiveCommands commands, K key, ScanArgs scanArgs) { + + LettuceAssert.notNull(scanArgs, "ScanArgs must not be null"); + + return zscan(commands, key, Optional.of(scanArgs)); + } + + private static Flux> zscan(RedisSortedSetReactiveCommands commands, K key, + Optional scanArgs) { + + LettuceAssert.notNull(commands, "RedisSortedSetReactiveCommands must not be null"); + LettuceAssert.notNull(key, "Key must not be null"); + + return Flux.create(sink -> { + + Mono> res = scanArgs.map(it -> commands.zscan(key, it)).orElseGet( + () -> commands.zscan(key)); + + scan(sink, res, c -> scanArgs.map(it -> commands.zscan(key, c, it)).orElseGet(() -> commands.zscan(key, c)), // + ScoredValueScanCursor::getValues); + }); + } + + private static void scan(FluxSink sink, Mono initialCursor, + Function> scanFunction, Function> manyMapper) { + + new SubscriptionAdapter<>(sink, initialCursor, scanFunction, manyMapper).register(); + } + + /** + * Adapter for {@link FluxSink} to dispatch multiple {@link reactor.core.CoreSubscriber} considering subscription demand. + * + * @param item type. + * @param cursor type. + */ + static class SubscriptionAdapter implements Completable { + + @SuppressWarnings("rawtypes") + private static final AtomicReferenceFieldUpdater SUBSCRIBER = AtomicReferenceFieldUpdater + .newUpdater(SubscriptionAdapter.class, ScanSubscriber.class, "currentSubscription"); + + @SuppressWarnings("rawtypes") + private static final AtomicIntegerFieldUpdater STATUS = AtomicIntegerFieldUpdater.newUpdater( + SubscriptionAdapter.class, "status"); + + private static final int STATUS_ACTIVE = 0; + private static final int STATUS_TERMINATED = 0; + + // Access via SUBSCRIBER. + @SuppressWarnings("unused") + private volatile ScanSubscriber currentSubscription; + private volatile boolean canceled; + + // Access via STATUS. + @SuppressWarnings("unused") + private volatile int status = STATUS_ACTIVE; + + private final FluxSink sink; + private final Context context; + private final Mono initial; + private final Function> scanFunction; + private final Function> manyMapper; + + SubscriptionAdapter(FluxSink sink, Mono initial, Function> scanFunction, + Function> manyMapper) { + + this.sink = sink; + this.context = sink.currentContext(); + this.initial = initial; + this.scanFunction = scanFunction; + this.manyMapper = manyMapper; + } + + /** + * Register cancel and onDemand callbacks. 
+ */ + public void register() { + this.sink.onRequest(this::onDemand); + this.sink.onCancel(this::canceled); + } + + void onDemand(long n) { + + if (this.canceled) { + return; + } + + ScanSubscriber current = getCurrentSubscriber(); + + if (current == null) { + current = new ScanSubscriber<>(this, sink, context, manyMapper); + if (SUBSCRIBER.compareAndSet(this, null, current)) { + initial.subscribe(current); + } + + return; + } + + ScanCursor cursor = current.getCursor(); + + if (cursor == null) { + return; + } + + current.emitFromBuffer(); + if (!current.isExhausted() || current.canceled || sink.requestedFromDownstream() == 0) { + return; + } + + if (cursor.isFinished()) { + chunkCompleted(); + return; + } + + Mono next = scanFunction.apply(cursor); + + ScanSubscriber nextSubscriber = new ScanSubscriber<>(this, sink, context, manyMapper); + if (SUBSCRIBER.compareAndSet(this, current, nextSubscriber)) { + next.subscribe(nextSubscriber); + } + } + + private void canceled() { + + this.canceled = true; + + ScanSubscriber current = getCurrentSubscriber(); + if (current != null) { + current.cancel(); + } + } + + @Override + public void chunkCompleted() { + + if (canceled) { + return; + } + + ScanSubscriber current = getCurrentSubscriber(); + if (current == null) { + return; + } + + ScanCursor cursor = current.getCursor(); + + if (cursor == null) { + return; + } + + if (cursor.isFinished() && current.isExhausted()) { + if (terminate()) { + sink.complete(); + } + } else { + onDemand(0); + } + } + + ScanSubscriber getCurrentSubscriber() { + return SUBSCRIBER.get(this); + } + + @Override + public void onError(Throwable throwable) { + + if (!this.canceled && terminate()) { + sink.error(throwable); + } + } + + protected boolean terminate() { + return STATUS.compareAndSet(this, STATUS_ACTIVE, STATUS_TERMINATED); + } + } + + /** + * {@link reactor.core.CoreSubscriber} for a {@code SCAN} cursor. + * + * @param item type. + * @param cursor type. + */ + static class ScanSubscriber extends BaseSubscriber { + + @SuppressWarnings("rawtypes") + private static final AtomicReferenceFieldUpdater CURSOR = AtomicReferenceFieldUpdater + .newUpdater(ScanSubscriber.class, ScanCursor.class, "cursor"); + + @SuppressWarnings("rawtypes") + private static final AtomicLongFieldUpdater EMITTED = AtomicLongFieldUpdater.newUpdater( + ScanSubscriber.class, "emitted"); + + private final Completable completable; + private final FluxSink sink; + private final Queue buffer = Operators.newQueue(); + private final Context context; + private final Function> manyMapper; + + volatile boolean canceled; + // see CURSOR + @SuppressWarnings("unused") + private volatile C cursor; + // see EMITTED + @SuppressWarnings("unused") + private volatile long emitted; + private volatile long cursorSize; + + ScanSubscriber(Completable completable, FluxSink sink, Context context, Function> manyMapper) { + this.completable = completable; + this.sink = sink; + this.context = context; + this.manyMapper = manyMapper; + } + + @Override + public Context currentContext() { + return context; + } + + @Override + protected void hookOnNext(C cursor) { + + if (!CURSOR.compareAndSet(this, null, cursor)) { + Operators.onOperatorError(this, new IllegalStateException("Cannot propagate Cursor"), cursor, context); + return; + } + + Collection items = manyMapper.apply(cursor); + cursorSize = items.size(); + + emitDirect(items); + } + + /** + * Fast-path emission that emits items directly without using an intermediate buffer. 
Only overflow (more items + * available than requested) is stored in the buffer. + * + * @param iterable + */ + void emitDirect(Iterable iterable) { + + long demand = sink.requestedFromDownstream(); + long sent = 0; + + for (T value : iterable) { + + if (canceled) { + break; + } + + if (demand <= sent) { + buffer.add(value); + continue; + } + + sent++; + + next(value); + } + } + + /** + * Buffer-based emission polling items from the buffer and emitting until the demand is satisfied or the buffer is + * exhausted. + */ + void emitFromBuffer() { + + long demand = sink.requestedFromDownstream(); + long sent = 0; + + if (demand > 0) { + T value; + while ((value = buffer.poll()) != null) { + + if (canceled) { + break; + } + + sent++; + + next(value); + + if (demand <= sent) { + break; + } + } + } + } + + private void next(T value) { + EMITTED.incrementAndGet(this); + sink.next(value); + } + + @Override + protected void hookOnComplete() { + completable.chunkCompleted(); + } + + @Override + protected void hookOnError(Throwable throwable) { + completable.onError(throwable); + } + + @Override + protected void hookOnCancel() { + this.canceled = true; + } + + public ScanCursor getCursor() { + return CURSOR.get(this); + } + + public boolean isExhausted() { + return EMITTED.get(this) == cursorSize && getCursor() != null; + } + } + + /** + * Completion callback interface. + */ + interface Completable { + + /** + * Callback if a cursor chunk is completed. + */ + void chunkCompleted(); + + /** + * Error callback. + * + * @param throwable + */ + void onError(Throwable throwable); + } +} diff --git a/src/main/java/io/lettuce/core/ScoredValue.java b/src/main/java/io/lettuce/core/ScoredValue.java new file mode 100644 index 0000000000..6aa5c1b464 --- /dev/null +++ b/src/main/java/io/lettuce/core/ScoredValue.java @@ -0,0 +1,187 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.util.Optional; +import java.util.function.Function; + +import io.lettuce.core.internal.LettuceAssert; + +/** + * A scored-value extension to {@link Value}. + * + * @param Value type. + * @author Will Glozer + * @author Mark Paluch + */ +@SuppressWarnings("serial") +public class ScoredValue extends Value { + + private static final ScoredValue EMPTY = new ScoredValue<>(0, null); + + private final double score; + + /** + * Serializable constructor. + */ + protected ScoredValue() { + super(null); + this.score = 0; + } + + private ScoredValue(double score, V value) { + super(value); + this.score = score; + } + + /** + * Creates a {@link ScoredValue} from a {@code key} and an {@link Optional}. The resulting value contains the value from the + * {@link Optional} if a value is present. Value is empty if the {@link Optional} is empty. + * + * @param score the score + * @param optional the optional. May be empty but never {@literal null}. 
+ * @param + * @param + * @return the {@link ScoredValue} + */ + public static ScoredValue from(double score, Optional optional) { + + LettuceAssert.notNull(optional, "Optional must not be null"); + + if (optional.isPresent()) { + return new ScoredValue(score, optional.get()); + } + + return fromNullable(score, null); + } + + /** + * Creates a {@link ScoredValue} from a {@code score} and {@code value}. The resulting value contains the value if the + * {@code value} is not null. + * + * @param score the score + * @param value the value. May be {@literal null}. + * @param + * @param + * @return the {@link ScoredValue} + */ + public static ScoredValue fromNullable(double score, T value) { + + if (value == null) { + return new ScoredValue(score, null); + } + + return new ScoredValue(score, value); + } + + /** + * Returns an empty {@code ScoredValue} instance. No value is present for this instance. + * + * @param + * @return the {@link ScoredValue} + */ + public static ScoredValue empty() { + return (ScoredValue) EMPTY; + } + + /** + * Creates a {@link ScoredValue} from a {@code key} and {@code value}. The resulting value contains the value. + * + * @param score the score + * @param value the value. Must not be {@literal null}. + * @param + * @param + * @return the {@link ScoredValue} + */ + public static ScoredValue just(double score, T value) { + + LettuceAssert.notNull(value, "Value must not be null"); + + return new ScoredValue(score, value); + } + + public double getScore() { + return score; + } + + @Override + public boolean equals(Object o) { + if (this == o) + return true; + if (!(o instanceof ScoredValue)) + return false; + if (!super.equals(o)) + return false; + + ScoredValue that = (ScoredValue) o; + + return Double.compare(that.score, score) == 0; + } + + @Override + public int hashCode() { + + long temp = Double.doubleToLongBits(score); + int result = (int) (temp ^ (temp >>> 32)); + result = 31 * result + (hasValue() ? getValue().hashCode() : 0); + return result; + } + + @Override + public String toString() { + return hasValue() ? String.format("ScoredValue[%f, %s]", score, getValue()) + : String.format("ScoredValue[%f].empty", score); + } + + /** + * Returns a {@link ScoredValue} consisting of the results of applying the given function to the value of this element. + * Mapping is performed only if a {@link #hasValue() value is present}. + * + * @param The element type of the new stream + * @param mapper a stateless function to apply to each element + * @return the new {@link ScoredValue} + */ + @SuppressWarnings("unchecked") + public ScoredValue map(Function mapper) { + + LettuceAssert.notNull(mapper, "Mapper function must not be null"); + + if (hasValue()) { + return new ScoredValue<>(score, mapper.apply(getValue())); + } + + return (ScoredValue) this; + } + + /** + * Returns a {@link ScoredValue} consisting of the results of applying the given function to the score of this element. + * Mapping is performed only if a {@link #hasValue() value is present}. 
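Editor's note (not part of this change): a short sketch of how the ScoredValue factories and the two mapping methods compose; the member name and scores are illustrative:

    import io.lettuce.core.ScoredValue;

    public class ScoredValueExample {

        public static void main(String[] args) {

            ScoredValue<String> member = ScoredValue.just(42.0d, "player-1"); // score plus a non-null value
            ScoredValue<Integer> length = member.map(String::length); // maps the value, keeps the score
            ScoredValue<String> boosted = member.mapScore(score -> score.doubleValue() * 2); // maps the score, keeps the value
            ScoredValue<String> absent = ScoredValue.empty(); // no value present, hasValue() returns false

            System.out.println(member + " " + length + " " + boosted + " " + absent);
        }
    }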
+ * + * @param mapper a stateless function to apply to each element + * @return the new {@link ScoredValue} + */ + @SuppressWarnings("unchecked") + public ScoredValue mapScore(Function mapper) { + + LettuceAssert.notNull(mapper, "Mapper function must not be null"); + + if (hasValue()) { + return new ScoredValue(mapper.apply(score).doubleValue(), getValue()); + } + + return this; + } +} diff --git a/src/main/java/io/lettuce/core/ScoredValueScanCursor.java b/src/main/java/io/lettuce/core/ScoredValueScanCursor.java new file mode 100644 index 0000000000..8503986136 --- /dev/null +++ b/src/main/java/io/lettuce/core/ScoredValueScanCursor.java @@ -0,0 +1,38 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.util.ArrayList; +import java.util.List; + +/** + * Cursor providing a list of {@link io.lettuce.core.ScoredValue} + * + * @param Value type. + * @author Mark Paluch + * @since 3.0 + */ +public class ScoredValueScanCursor extends ScanCursor { + + private final List> values = new ArrayList<>(); + + public ScoredValueScanCursor() { + } + + public List> getValues() { + return values; + } +} diff --git a/src/main/java/io/lettuce/core/ScriptOutputType.java b/src/main/java/io/lettuce/core/ScriptOutputType.java new file mode 100644 index 0000000000..8b390cc29e --- /dev/null +++ b/src/main/java/io/lettuce/core/ScriptOutputType.java @@ -0,0 +1,53 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +/** + * A Lua script returns one of the following types: + * + *
+ * <ul>
+ * <li>{@link #BOOLEAN} boolean</li>
+ * <li>{@link #INTEGER} 64-bit integer</li>
+ * <li>{@link #STATUS} status string</li>
+ * <li>{@link #VALUE} value</li>
+ * <li>{@link #MULTI} of these types</li>
+ * </ul>
+ *
+ * Redis to Lua conversion table.
+ * <ul>
+ * <li>Redis integer reply -> Lua number</li>
+ * <li>Redis bulk reply -> Lua string</li>
+ * <li>Redis multi bulk reply -> Lua table (may have other Redis data types nested)</li>
+ * <li>Redis status reply -> Lua table with a single {@code ok} field containing the status</li>
+ * <li>Redis error reply -> Lua table with a single {@code err} field containing the error</li>
+ * <li>Redis Nil bulk reply and Nil multi bulk reply -> Lua false boolean type</li>
+ * </ul>
+ *
+ * Lua to Redis conversion table.
+ * <ul>
+ * <li>Lua number -> Redis integer reply (the number is converted into an integer)</li>
+ * <li>Lua string -> Redis bulk reply</li>
+ * <li>Lua table (array) -> Redis multi bulk reply (truncated to the first {@literal null} inside the Lua array if any)</li>
+ * <li>Lua table with a single {@code ok} field -> Redis status reply</li>
+ * <li>Lua table with a single {@code err} field -> Redis error reply</li>
+ * <li>Lua boolean false -> Redis Nil bulk reply.</li>
+ * </ul>
+ * + * @author Will Glozer + */ +public enum ScriptOutputType { + BOOLEAN, INTEGER, MULTI, STATUS, VALUE +} diff --git a/src/main/java/io/lettuce/core/SetArgs.java b/src/main/java/io/lettuce/core/SetArgs.java new file mode 100644 index 0000000000..d29d23a2ca --- /dev/null +++ b/src/main/java/io/lettuce/core/SetArgs.java @@ -0,0 +1,183 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import io.lettuce.core.protocol.CommandArgs; + +/** + * Argument list builder for the Redis SET command starting from Redis 2.6.12. Static + * import the methods from {@link Builder} and chain the method calls: {@code ex(10).nx()}. + *
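Editor's note (not part of this change): a minimal sketch of the chained builder style mentioned above; the key, value and timeout are illustrative and a local Redis is assumed:

    import io.lettuce.core.RedisClient;
    import io.lettuce.core.SetArgs;
    import io.lettuce.core.api.sync.RedisCommands;

    public class SetArgsExample {

        public static void main(String[] args) {

            RedisClient client = RedisClient.create("redis://localhost"); // assumed local Redis
            RedisCommands<String, String> commands = client.connect().sync();

            // SET lock owner-1 EX 10 NX -- write only if the key is absent, expire after 10 seconds
            String reply = commands.set("lock", "owner-1", SetArgs.Builder.ex(10).nx());
            System.out.println(reply); // "OK" when written, null when NX suppressed the write

            client.shutdown();
        }
    }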

+ * {@link SetArgs} is a mutable object and instances should be used only once to avoid shared mutable state. + * + * @author Will Glozer + * @author Vincent Rischmann + * @author Mark Paluch + */ +public class SetArgs implements CompositeArgument { + + private Long ex; + private Long px; + private boolean nx = false; + private boolean xx = false; + private boolean keepttl = false; + + /** + * Builder entry points for {@link SetArgs}. + */ + public static class Builder { + + /** + * Utility constructor. + */ + private Builder() { + } + + /** + * Creates new {@link SetArgs} and enabling {@literal EX}. + * + * @param timeout expire time in seconds. + * @return new {@link SetArgs} with {@literal EX} enabled. + * @see SetArgs#ex(long) + */ + public static SetArgs ex(long timeout) { + return new SetArgs().ex(timeout); + } + + /** + * Creates new {@link SetArgs} and enabling {@literal PX}. + * + * @param timeout expire time in milliseconds. + * @return new {@link SetArgs} with {@literal PX} enabled. + * @see SetArgs#px(long) + */ + public static SetArgs px(long timeout) { + return new SetArgs().px(timeout); + } + + /** + * Creates new {@link SetArgs} and enabling {@literal NX}. + * + * @return new {@link SetArgs} with {@literal NX} enabled. + * @see SetArgs#nx() + */ + public static SetArgs nx() { + return new SetArgs().nx(); + } + + /** + * Creates new {@link SetArgs} and enabling {@literal XX}. + * + * @return new {@link SetArgs} with {@literal XX} enabled. + * @see SetArgs#xx() + */ + public static SetArgs xx() { + return new SetArgs().xx(); + } + + /** + * Creates new {@link SetArgs} and enabling {@literal KEEPTTL}. + * + * @return new {@link SetArgs} with {@literal KEEPTTL} enabled. + * @see SetArgs#keepttl() + * @since 5.3 + */ + public static SetArgs keepttl() { + return new SetArgs().keepttl(); + } + } + + /** + * Set the specified expire time, in seconds. + * + * @param timeout expire time in seconds. + * @return {@code this} {@link SetArgs}. + */ + public SetArgs ex(long timeout) { + + this.ex = timeout; + return this; + } + + /** + * Set the specified expire time, in milliseconds. + * + * @param timeout expire time in milliseconds. + * @return {@code this} {@link SetArgs}. + */ + public SetArgs px(long timeout) { + + this.px = timeout; + return this; + } + + /** + * Only set the key if it does not already exist. + * + * @return {@code this} {@link SetArgs}. + */ + public SetArgs nx() { + + this.nx = true; + return this; + } + + /** + * Set the value and retain the existing TTL. + * + * @return {@code this} {@link SetArgs}. + * @since 5.3 + */ + public SetArgs keepttl() { + + this.keepttl = true; + return this; + } + + /** + * Only set the key if it already exists. + * + * @return {@code this} {@link SetArgs}. + */ + public SetArgs xx() { + + this.xx = true; + return this; + } + + public void build(CommandArgs args) { + + if (ex != null) { + args.add("EX").add(ex); + } + + if (px != null) { + args.add("PX").add(px); + } + + if (nx) { + args.add("NX"); + } + + if (xx) { + args.add("XX"); + } + + if (keepttl) { + args.add("KEEPTTL"); + } + } +} diff --git a/src/main/java/io/lettuce/core/SocketOptions.java b/src/main/java/io/lettuce/core/SocketOptions.java new file mode 100644 index 0000000000..a8d19e55f2 --- /dev/null +++ b/src/main/java/io/lettuce/core/SocketOptions.java @@ -0,0 +1,215 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.time.Duration; +import java.util.concurrent.TimeUnit; + +import io.lettuce.core.internal.LettuceAssert; + +/** + * Options to configure low-level socket options for the connections kept to Redis servers. + * + * @author Mark Paluch + * @since 4.3 + */ +public class SocketOptions { + + public static final long DEFAULT_CONNECT_TIMEOUT = 10; + public static final TimeUnit DEFAULT_CONNECT_TIMEOUT_UNIT = TimeUnit.SECONDS; + public static final Duration DEFAULT_CONNECT_TIMEOUT_DURATION = Duration.ofSeconds(DEFAULT_CONNECT_TIMEOUT); + + public static final boolean DEFAULT_SO_KEEPALIVE = false; + public static final boolean DEFAULT_SO_NO_DELAY = false; + + private final Duration connectTimeout; + private final boolean keepAlive; + private final boolean tcpNoDelay; + + protected SocketOptions(Builder builder) { + + this.connectTimeout = builder.connectTimeout; + this.keepAlive = builder.keepAlive; + this.tcpNoDelay = builder.tcpNoDelay; + } + + protected SocketOptions(SocketOptions original) { + this.connectTimeout = original.getConnectTimeout(); + this.keepAlive = original.isKeepAlive(); + this.tcpNoDelay = original.isTcpNoDelay(); + } + + /** + * Create a copy of {@literal options} + * + * @param options the original + * @return A new instance of {@link SocketOptions} containing the values of {@literal options} + */ + public static SocketOptions copyOf(SocketOptions options) { + return new SocketOptions(options); + } + + /** + * Returns a new {@link SocketOptions.Builder} to construct {@link SocketOptions}. + * + * @return a new {@link SocketOptions.Builder} to construct {@link SocketOptions}. + */ + public static SocketOptions.Builder builder() { + return new SocketOptions.Builder(); + } + + /** + * Create a new {@link SocketOptions} using default settings. + * + * @return a new instance of default cluster client client options. + */ + public static SocketOptions create() { + return builder().build(); + } + + /** + * Builder for {@link SocketOptions}. + */ + public static class Builder { + + private Duration connectTimeout = DEFAULT_CONNECT_TIMEOUT_DURATION; + private boolean keepAlive = DEFAULT_SO_KEEPALIVE; + private boolean tcpNoDelay = DEFAULT_SO_NO_DELAY; + + private Builder() { + } + + /** + * Set connection timeout. Defaults to {@literal 10 SECONDS}. See {@link #DEFAULT_CONNECT_TIMEOUT} and + * {@link #DEFAULT_CONNECT_TIMEOUT_UNIT}. + * + * @param connectTimeout connection timeout, must be greater {@literal 0}. + * @return {@code this} + * @since 5.0 + */ + public Builder connectTimeout(Duration connectTimeout) { + + LettuceAssert.notNull(connectTimeout, "Connection timeout must not be null"); + LettuceAssert.isTrue(connectTimeout.toNanos() > 0, "Connect timeout must be greater 0"); + + this.connectTimeout = connectTimeout; + return this; + } + + /** + * Set connection timeout. Defaults to {@literal 10 SECONDS}. See {@link #DEFAULT_CONNECT_TIMEOUT} and + * {@link #DEFAULT_CONNECT_TIMEOUT_UNIT}. + * + * @param connectTimeout connection timeout, must be greater {@literal 0}. 
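Editor's note (not part of this change): a minimal sketch of wiring SocketOptions into a client through ClientOptions; the timeout value and the local Redis URI are illustrative:

    import java.time.Duration;

    import io.lettuce.core.ClientOptions;
    import io.lettuce.core.RedisClient;
    import io.lettuce.core.SocketOptions;

    public class SocketOptionsExample {

        public static void main(String[] args) {

            SocketOptions socketOptions = SocketOptions.builder()
                    .connectTimeout(Duration.ofSeconds(5)) // fail fast if the server is unreachable
                    .keepAlive(true)
                    .tcpNoDelay(true)
                    .build();

            RedisClient client = RedisClient.create("redis://localhost"); // assumed local Redis
            client.setOptions(ClientOptions.builder().socketOptions(socketOptions).build());
        }
    }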
+ * @param connectTimeoutUnit unit for {@code connectTimeout}, must not be {@literal null}. + * @return {@code this} + * @deprecated since 5.0, use {@link #connectTimeout(Duration)} + */ + @Deprecated + public Builder connectTimeout(long connectTimeout, TimeUnit connectTimeoutUnit) { + + LettuceAssert.isTrue(connectTimeout > 0, "Connect timeout must be greater 0"); + LettuceAssert.notNull(connectTimeoutUnit, "TimeUnit must not be null"); + + return connectTimeout(Duration.ofNanos(connectTimeoutUnit.toNanos(connectTimeout))); + } + + /** + * Sets whether to enable TCP keepalive. Defaults to {@literal false}. See {@link #DEFAULT_SO_KEEPALIVE}. + * + * @param keepAlive whether to enable or disable the TCP keepalive. + * @return {@code this} + * @see java.net.SocketOptions#SO_KEEPALIVE + */ + public Builder keepAlive(boolean keepAlive) { + + this.keepAlive = keepAlive; + return this; + } + + /** + * Sets whether to disable Nagle's algorithm. Defaults to {@literal false} (Nagle enabled). See + * {@link #DEFAULT_SO_NO_DELAY}. + * + * @param tcpNoDelay {@literal true} to disable Nagle's algorithm, {@literal false} to enable Nagle's algorithm. + * @return {@code this} + * @see java.net.SocketOptions#TCP_NODELAY + */ + public Builder tcpNoDelay(boolean tcpNoDelay) { + + this.tcpNoDelay = tcpNoDelay; + return this; + } + + /** + * Create a new instance of {@link SocketOptions} + * + * @return new instance of {@link SocketOptions} + */ + public SocketOptions build() { + return new SocketOptions(this); + } + } + + /** + * Returns a builder to create new {@link SocketOptions} whose settings are replicated from the current + * {@link SocketOptions}. + * + * @return a {@link SocketOptions.Builder} to create new {@link SocketOptions} whose settings are replicated from the + * current {@link SocketOptions} + * + * @since 5.3 + */ + public SocketOptions.Builder mutate() { + + SocketOptions.Builder builder = builder(); + + builder.connectTimeout = this.getConnectTimeout(); + builder.keepAlive = this.isKeepAlive(); + builder.tcpNoDelay = this.isTcpNoDelay(); + + return builder; + } + + /** + * Returns the connection timeout. + * + * @return the connection timeout. + */ + public Duration getConnectTimeout() { + return connectTimeout; + } + + /** + * Returns whether to enable TCP keepalive. + * + * @return whether to enable TCP keepalive + * @see java.net.SocketOptions#SO_KEEPALIVE + */ + public boolean isKeepAlive() { + return keepAlive; + } + + /** + * Returns whether to use TCP NoDelay. + * + * @return {@literal true} to disable Nagle's algorithm, {@literal false} to enable Nagle's algorithm. + * @see java.net.SocketOptions#TCP_NODELAY + */ + public boolean isTcpNoDelay() { + return tcpNoDelay; + } +} diff --git a/src/main/java/io/lettuce/core/SortArgs.java b/src/main/java/io/lettuce/core/SortArgs.java new file mode 100644 index 0000000000..9c0ea7d88a --- /dev/null +++ b/src/main/java/io/lettuce/core/SortArgs.java @@ -0,0 +1,247 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import static io.lettuce.core.protocol.CommandKeyword.*; +import static io.lettuce.core.protocol.CommandType.GET; + +import java.util.ArrayList; +import java.util.List; + +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.protocol.CommandArgs; +import io.lettuce.core.protocol.CommandKeyword; + +/** + * Argument list builder for the Redis SORT command. Static import the methods from + * {@link Builder} and chain the method calls: {@code by("weight_*").desc().limit(0, 2)}. + *
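Editor's note (not part of this change): a minimal sketch of the chained builder style mentioned above; the list key and weight pattern are illustrative and a local Redis is assumed:

    import java.util.List;

    import io.lettuce.core.RedisClient;
    import io.lettuce.core.SortArgs;
    import io.lettuce.core.api.sync.RedisCommands;

    public class SortArgsExample {

        public static void main(String[] args) {

            RedisClient client = RedisClient.create("redis://localhost"); // assumed local Redis
            RedisCommands<String, String> commands = client.connect().sync();

            // SORT mylist BY weight_* LIMIT 0 2 DESC
            List<String> sorted = commands.sort("mylist", SortArgs.Builder.by("weight_*").desc().limit(0, 2));
            sorted.forEach(System.out::println);

            client.shutdown();
        }
    }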

+ * {@link ScanArgs} is a mutable object and instances should be used only once to avoid shared mutable state. + * + * @author Will Glozer + * @author Mark Paluch + */ +public class SortArgs implements CompositeArgument { + + private String by; + private Limit limit = Limit.unlimited(); + private List get; + private CommandKeyword order; + private boolean alpha; + + /** + * Builder entry points for {@link SortArgs}. + */ + public static class Builder { + /** + * Utility constructor. + */ + private Builder() { + } + + /** + * Creates new {@link SortArgs} setting {@literal PATTERN}. + * + * @param pattern must not be {@literal null}. + * @return new {@link SortArgs} with {@literal PATTERN} set. + * @see SortArgs#by(String) + */ + public static SortArgs by(String pattern) { + return new SortArgs().by(pattern); + } + + /** + * Creates new {@link SortArgs} setting {@literal LIMIT}. + * + * @param offset + * @param count + * @return new {@link SortArgs} with {@literal LIMIT} set. + * @see SortArgs#limit(long, long) + */ + public static SortArgs limit(long offset, long count) { + return new SortArgs().limit(offset, count); + } + + /** + * Creates new {@link SortArgs} setting {@literal GET}. + * + * @param pattern must not be {@literal null}. + * @return new {@link SortArgs} with {@literal GET} set. + * @see SortArgs#by(String) + */ + public static SortArgs get(String pattern) { + return new SortArgs().get(pattern); + } + + /** + * Creates new {@link SortArgs} setting {@literal ASC}. + * + * @return new {@link SortArgs} with {@literal ASC} set. + * @see SortArgs#asc() + */ + public static SortArgs asc() { + return new SortArgs().asc(); + } + + /** + * Creates new {@link SortArgs} setting {@literal DESC}. + * + * @return new {@link SortArgs} with {@literal DESC} set. + * @see SortArgs#desc() + */ + public static SortArgs desc() { + return new SortArgs().desc(); + } + + /** + * Creates new {@link SortArgs} setting {@literal ALPHA}. + * + * @return new {@link SortArgs} with {@literal ALPHA} set. + * @see SortArgs#alpha() + */ + public static SortArgs alpha() { + return new SortArgs().alpha(); + } + } + + /** + * Sort keys by an external list. Key names are obtained substituting the first occurrence of {@code *} with the actual + * value of the element in the list. + * + * @param pattern key name pattern. + * @return {@code this} {@link SortArgs}. + */ + public SortArgs by(String pattern) { + + LettuceAssert.notNull(pattern, "Pattern must not be null"); + + this.by = pattern; + return this; + } + + /** + * Limit the number of returned elements. + * + * @param offset + * @param count + * @return {@code this} {@link SortArgs}. + */ + public SortArgs limit(long offset, long count) { + return limit(Limit.create(offset, count)); + } + + /** + * Limit the number of returned elements. + * + * @param limit must not be {@literal null}. + * @return {@code this} {@link SortArgs}. + */ + public SortArgs limit(Limit limit) { + + LettuceAssert.notNull(limit, "Limit must not be null"); + + this.limit = limit; + return this; + } + + /** + * Retrieve external keys during sort. {@literal GET} supports {@code #} and {@code *} wildcards. + * + * @param pattern must not be {@literal null}. + * @return {@code this} {@link SortArgs}. + */ + public SortArgs get(String pattern) { + + LettuceAssert.notNull(pattern, "Pattern must not be null"); + + if (get == null) { + get = new ArrayList<>(); + } + get.add(pattern); + return this; + } + + /** + * Apply numeric sort in ascending order. 
+ * + * @return {@code this} {@link SortArgs}. + */ + public SortArgs asc() { + order = ASC; + return this; + } + + /** + * Apply numeric sort in descending order. + * + * @return {@code this} {@link SortArgs}. + */ + public SortArgs desc() { + order = DESC; + return this; + } + + /** + * Apply lexicographically sort. + * + * @return {@code this} {@link SortArgs}. + */ + public SortArgs alpha() { + alpha = true; + return this; + } + + @Override + public void build(CommandArgs args) { + + if (by != null) { + args.add(BY); + args.add(by); + } + + if (get != null) { + for (String pattern : get) { + args.add(GET); + args.add(pattern); + } + } + + if (limit != null && limit.isLimited()) { + args.add(LIMIT); + args.add(limit.getOffset()); + args.add(limit.getCount()); + } + + if (order != null) { + args.add(order); + } + + if (alpha) { + args.add(ALPHA); + } + + } + + void build(CommandArgs args, K store) { + + build(args); + + if (store != null) { + args.add(STORE); + args.addKey(store); + } + } +} diff --git a/src/main/java/io/lettuce/core/SslConnectionBuilder.java b/src/main/java/io/lettuce/core/SslConnectionBuilder.java new file mode 100644 index 0000000000..388ffc6bbf --- /dev/null +++ b/src/main/java/io/lettuce/core/SslConnectionBuilder.java @@ -0,0 +1,140 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.io.IOException; +import java.net.InetSocketAddress; +import java.net.SocketAddress; +import java.security.GeneralSecurityException; +import java.util.List; +import java.util.function.Supplier; + +import javax.net.ssl.SSLEngine; +import javax.net.ssl.SSLParameters; + +import io.lettuce.core.internal.HostAndPort; +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.resource.ClientResources; +import io.netty.buffer.ByteBufAllocator; +import io.netty.channel.Channel; +import io.netty.channel.ChannelHandler; +import io.netty.channel.ChannelInitializer; +import io.netty.handler.ssl.SslContext; +import io.netty.handler.ssl.SslContextBuilder; +import io.netty.handler.ssl.SslHandler; +import io.netty.handler.ssl.util.InsecureTrustManagerFactory; + +/** + * Connection builder for SSL connections. This class is part of the internal API. 
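Editor's note (not part of this change): SslConnectionBuilder is internal; applications normally reach the SSL path by enabling SSL on the RedisURI instead of using this class directly. A sketch with an illustrative host and port:

    import io.lettuce.core.RedisClient;
    import io.lettuce.core.RedisURI;

    public class SslUriExample {

        public static void main(String[] args) {

            RedisURI uri = RedisURI.Builder.redis("redis.example.com", 6380) // illustrative endpoint
                    .withSsl(true)
                    .withVerifyPeer(true)
                    .build();

            RedisClient client = RedisClient.create(uri); // connections for this URI are built via SslConnectionBuilder
        }
    }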
+ * + * @author Mark Paluch + * @author Amin Mohtashami + */ +public class SslConnectionBuilder extends ConnectionBuilder { + + private RedisURI redisURI; + + public SslConnectionBuilder ssl(RedisURI redisURI) { + this.redisURI = redisURI; + return this; + } + + public static SslConnectionBuilder sslConnectionBuilder() { + return new SslConnectionBuilder(); + } + + @Override + protected List buildHandlers() { + LettuceAssert.assertState(redisURI != null, "RedisURI must not be null"); + LettuceAssert.assertState(redisURI.isSsl(), "RedisURI is not configured for SSL (ssl is false)"); + + return super.buildHandlers(); + } + + @Override + public ChannelInitializer build(SocketAddress socketAddress) { + return new SslChannelInitializer(this::buildHandlers, toHostAndPort(socketAddress), redisURI.isVerifyPeer(), + redisURI.isStartTls(), clientResources(), clientOptions().getSslOptions()); + } + + static HostAndPort toHostAndPort(SocketAddress socketAddress) { + + if (socketAddress instanceof InetSocketAddress) { + + InetSocketAddress isa = (InetSocketAddress) socketAddress; + + return HostAndPort.of(isa.getHostString(), isa.getPort()); + } + + return null; + } + + static class SslChannelInitializer extends io.netty.channel.ChannelInitializer { + + private final Supplier> handlers; + private final HostAndPort hostAndPort; + private final boolean verifyPeer; + private final boolean startTls; + private final ClientResources clientResources; + private final SslOptions sslOptions; + + public SslChannelInitializer(Supplier> handlers, HostAndPort hostAndPort, boolean verifyPeer, + boolean startTls, ClientResources clientResources, SslOptions sslOptions) { + + this.handlers = handlers; + this.hostAndPort = hostAndPort; + this.verifyPeer = verifyPeer; + this.startTls = startTls; + this.clientResources = clientResources; + this.sslOptions = sslOptions; + } + + @Override + protected void initChannel(Channel channel) throws Exception { + + SSLEngine sslEngine = initializeSSLEngine(channel.alloc()); + SslHandler sslHandler = new SslHandler(sslEngine, startTls); + channel.pipeline().addLast(sslHandler); + + for (ChannelHandler handler : handlers.get()) { + channel.pipeline().addLast(handler); + } + + clientResources.nettyCustomizer().afterChannelInitialized(channel); + } + + private SSLEngine initializeSSLEngine(ByteBufAllocator alloc) throws IOException, GeneralSecurityException { + + SSLParameters sslParams = sslOptions.createSSLParameters(); + SslContextBuilder sslContextBuilder = sslOptions.createSslContextBuilder(); + + if (verifyPeer) { + sslParams.setEndpointIdentificationAlgorithm("HTTPS"); + } else { + sslContextBuilder.trustManager(InsecureTrustManagerFactory.INSTANCE); + } + + SslContext sslContext = sslContextBuilder.build(); + + SSLEngine sslEngine = hostAndPort != null + ? sslContext.newEngine(alloc, hostAndPort.getHostText(), hostAndPort.getPort()) + : sslContext.newEngine(alloc); + sslEngine.setSSLParameters(sslParams); + + return sslEngine; + } + } +} diff --git a/src/main/java/io/lettuce/core/SslOptions.java b/src/main/java/io/lettuce/core/SslOptions.java new file mode 100644 index 0000000000..470b8aec64 --- /dev/null +++ b/src/main/java/io/lettuce/core/SslOptions.java @@ -0,0 +1,805 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.io.File; +import java.io.FileInputStream; +import java.io.IOException; +import java.io.InputStream; +import java.net.URL; +import java.security.GeneralSecurityException; +import java.security.KeyStore; +import java.security.KeyStoreException; +import java.security.NoSuchAlgorithmException; +import java.security.cert.CertificateException; +import java.util.Arrays; +import java.util.function.Consumer; +import java.util.function.Supplier; + +import javax.net.ssl.KeyManagerFactory; +import javax.net.ssl.SSLParameters; +import javax.net.ssl.TrustManagerFactory; + +import io.lettuce.core.internal.LettuceAssert; +import io.netty.handler.ssl.OpenSsl; +import io.netty.handler.ssl.SslContextBuilder; +import io.netty.handler.ssl.SslProvider; + +/** + * Options to configure SSL options for the connections kept to Redis servers. + * + * @author Mark Paluch + * @author Amin Mohtashami + * @since 4.3 + */ +public class SslOptions { + + public static final SslProvider DEFAULT_SSL_PROVIDER = SslProvider.JDK; + + private final String keyStoreType; + private final SslProvider sslProvider; + private final URL keystore; + private final char[] keystorePassword; + private final URL truststore; + private final char[] truststorePassword; + private final String[] protocols; + private final String[] cipherSuites; + + private final Consumer sslContextBuilderCustomizer; + private final Supplier sslParametersSupplier; + + private final KeystoreAction keymanager; + private final KeystoreAction trustmanager; + + protected SslOptions(Builder builder) { + this.keyStoreType = builder.keyStoreType; + this.sslProvider = builder.sslProvider; + this.keystore = builder.keystore; + this.keystorePassword = builder.keystorePassword; + this.truststore = builder.truststore; + this.truststorePassword = builder.truststorePassword; + + this.protocols = builder.protocols; + this.cipherSuites = builder.cipherSuites; + + this.sslContextBuilderCustomizer = builder.sslContextBuilderCustomizer; + this.sslParametersSupplier = builder.sslParametersSupplier; + this.keymanager = builder.keymanager; + this.trustmanager = builder.trustmanager; + } + + protected SslOptions(SslOptions original) { + this.keyStoreType = original.keyStoreType; + this.sslProvider = original.getSslProvider(); + this.keystore = original.keystore; + this.keystorePassword = original.keystorePassword; + this.truststore = original.getTruststore(); + this.truststorePassword = original.getTruststorePassword(); + + this.protocols = original.protocols; + this.cipherSuites = original.cipherSuites; + + this.sslContextBuilderCustomizer = original.sslContextBuilderCustomizer; + this.sslParametersSupplier = original.sslParametersSupplier; + this.keymanager = original.keymanager; + this.trustmanager = original.trustmanager; + } + + /** + * Create a copy of {@literal options} + * + * @param options the original + * @return A new instance of {@link SslOptions} containing the values of {@literal options} + */ + public static SslOptions copyOf(SslOptions options) { + return new SslOptions(options); + } + + /** + * Returns a new 
{@link SslOptions.Builder} to construct {@link SslOptions}. + * + * @return a new {@link SslOptions.Builder} to construct {@link SslOptions}. + */ + public static SslOptions.Builder builder() { + return new SslOptions.Builder(); + } + + /** + * Create a new {@link SslOptions} using default settings. + * + * @return a new instance of default cluster client client options. + */ + public static SslOptions create() { + return builder().build(); + } + + /** + * Builder for {@link SslOptions}. + */ + public static class Builder { + + private SslProvider sslProvider = DEFAULT_SSL_PROVIDER; + + private String keyStoreType; + private URL keystore; + private char[] keystorePassword = new char[0]; + private URL truststore; + private char[] truststorePassword = new char[0]; + private String[] protocols = null; + private String[] cipherSuites = null; + private Consumer sslContextBuilderCustomizer = contextBuilder -> { + + }; + private Supplier sslParametersSupplier = SSLParameters::new; + + private KeystoreAction keymanager = KeystoreAction.NO_OP; + private KeystoreAction trustmanager = KeystoreAction.NO_OP; + + private Builder() { + } + + /** + * Sets the cipher suites to use. + * + * @param cipherSuites cipher suites to use. + * @return {@code this} + * @since 5.3 + */ + public Builder cipherSuites(String... cipherSuites) { + + LettuceAssert.notNull(cipherSuites, "Cipher suites must not be null"); + + this.cipherSuites = cipherSuites; + return this; + } + + /** + * Use the JDK SSL provider for SSL connections. + * + * @return {@code this} + */ + public Builder jdkSslProvider() { + return sslProvider(SslProvider.JDK); + } + + /** + * Use the OpenSSL provider for SSL connections. The OpenSSL provider requires the + * {@code netty-tcnative} dependency with the OpenSSL JNI + * binary. + * + * @return {@code this} + * @throws IllegalStateException if OpenSSL is not available + */ + public Builder openSslProvider() { + return sslProvider(SslProvider.OPENSSL); + } + + private Builder sslProvider(SslProvider sslProvider) { + + if (sslProvider == SslProvider.OPENSSL) { + if (!OpenSsl.isAvailable()) { + throw new IllegalStateException("OpenSSL SSL Provider is not available"); + } + } + + this.sslProvider = sslProvider; + + return this; + } + + /** + * Sets the KeyStore type. Defaults to {@link KeyStore#getDefaultType()} if not set. + * + * @param keyStoreType the keystore type to use, must not be {@literal null}. + * @return {@code this} + * @since 5.3 + */ + public Builder keyStoreType(String keyStoreType) { + + LettuceAssert.notNull(keyStoreType, "KeyStoreType must not be null"); + this.keyStoreType = keyStoreType; + return this; + } + + /** + * Sets the Keystore file to load client certificates. The key store file must be supported by + * {@link java.security.KeyStore} which is {@link KeyStore#getDefaultType()} by default. The keystore is reloaded on + * each connection attempt that allows to replace certificates during runtime. + * + * @param keystore the keystore file, must not be {@literal null}. + * @return {@code this} + * @since 4.4 + */ + public Builder keystore(File keystore) { + return keystore(keystore, new char[0]); + } + + /** + * Sets the Keystore file to load client certificates. The keystore file must be supported by + * {@link java.security.KeyStore} which is {@link KeyStore#getDefaultType()} by default. The keystore is reloaded on + * each connection attempt that allows to replace certificates during runtime. + * + * @param keystore the keystore file, must not be {@literal null}. 
+ * @param keystorePassword the keystore password. May be empty to omit password and the keystore integrity check. + * @return {@code this} + * @since 4.4 + */ + public Builder keystore(File keystore, char[] keystorePassword) { + + LettuceAssert.notNull(keystore, "Keystore must not be null"); + LettuceAssert.isTrue(keystore.exists(), () -> String.format("Keystore file %s does not exist", truststore)); + LettuceAssert.isTrue(keystore.isFile(), () -> String.format("Keystore %s is not a file", truststore)); + + return keystore(Resource.from(keystore), keystorePassword); + } + + /** + * Sets the Keystore resource to load client certificates. The keystore file must be supported by + * {@link java.security.KeyStore} which is {@link KeyStore#getDefaultType()} by default. The keystore is reloaded on + * each connection attempt that allows to replace certificates during runtime. + * + * @param keystore the keystore URL, must not be {@literal null}. + * @return {@code this} + * @since 4.4 + */ + public Builder keystore(URL keystore) { + return keystore(keystore, null); + } + + /** + * Sets the Keystore resource to load client certificates. The keystore file must be supported by + * {@link java.security.KeyStore} which is {@link KeyStore#getDefaultType()} by default. The keystore is reloaded on + * each connection attempt that allows to replace certificates during runtime. + * + * @param keystore the keystore file, must not be {@literal null}. + * @return {@code this} + * @since 4.4 + */ + public Builder keystore(URL keystore, char[] keystorePassword) { + + LettuceAssert.notNull(keystore, "Keystore must not be null"); + + this.keystore = keystore; + + return keystore(Resource.from(keystore), keystorePassword); + } + + /** + * Sets the key file and its certificate to use for client authentication. The key is reloaded on each connection + * attempt that allows to replace certificates during runtime. + * + * @param keyCertChainFile an X.509 certificate chain file in PEM format. + * @param keyFile a PKCS#8 private key file in PEM format. + * @param keyPassword the password of the {@code keyFile}, or {@literal null} if it's not password-protected. + * @return {@code this} + * @since 5.3 + */ + public Builder keyManager(File keyCertChainFile, File keyFile, char[] keyPassword) { + + LettuceAssert.notNull(keyCertChainFile, "Key certificate file must not be null"); + LettuceAssert.notNull(keyFile, "Key file must not be null"); + LettuceAssert.isTrue(keyCertChainFile.exists(), + () -> String.format("Key certificate file %s does not exist", keyCertChainFile)); + LettuceAssert.isTrue(keyCertChainFile.isFile(), + () -> String.format("Key certificate %s is not a file", keyCertChainFile)); + LettuceAssert.isTrue(keyFile.exists(), () -> String.format("Key file %s does not exist", keyFile)); + LettuceAssert.isTrue(keyFile.isFile(), () -> String.format("Key %s is not a file", keyFile)); + + return keyManager(Resource.from(keyCertChainFile), Resource.from(keyFile), keyPassword); + } + + /** + * Sets the key and its certificate to use for client authentication. The key is reloaded on each connection attempt + * that allows to replace certificates during runtime. + * + * @param keyCertChain an {@link Resource} for a X.509 certificate chain in PEM format. + * @param key an {@link Resource} for a PKCS#8 private key in PEM format. + * @param keyPassword the password of the {@code keyFile}, or {@literal null} if it's not password-protected. 
+ * @return {@code this} + * @since 5.3 + * @see Resource + */ + public Builder keyManager(Resource keyCertChain, Resource key, char[] keyPassword) { + + LettuceAssert.notNull(keyCertChain, "KeyChain InputStreamProvider must not be null"); + LettuceAssert.notNull(key, "Key InputStreamProvider must not be null"); + + char[] passwordToUse = getPassword(keyPassword); + this.keymanager = (builder, keyStoreType) -> { + + try (InputStream keyCertChainIs = keyCertChain.get(); InputStream keyIs = key.get()) { + builder.keyManager(keyCertChainIs, keyIs, + passwordToUse == null || passwordToUse.length == 0 ? null : new String(passwordToUse)); + } + }; + + return this; + } + + /** + * Sets the {@link KeyManagerFactory}. + * + * @param keyManagerFactory the {@link KeyManagerFactory} to use. + * @return {@code this} + * @since 5.3 + */ + public Builder keyManager(KeyManagerFactory keyManagerFactory) { + + LettuceAssert.notNull(keyManagerFactory, "KeyManagerFactory must not be null"); + + this.keymanager = (builder, keyStoreType) -> builder.keyManager(keyManagerFactory); + + return this; + } + + /** + * Sets the Java Keystore resource to load client certificates. The keystore file must be supported by + * {@link java.security.KeyStore} which is {@link KeyStore#getDefaultType()} by default. The keystore is reloaded on + * each connection attempt that allows to replace certificates during runtime. + * + * @param resource the provider that opens a {@link InputStream} to the keystore file, must not be {@literal null}. + * @param keystorePassword the keystore password. May be empty to omit password and the keystore integrity check. + * @return {@code this} + * @since 5.3 + */ + public Builder keystore(Resource resource, char[] keystorePassword) { + + LettuceAssert.notNull(resource, "Keystore InputStreamProvider must not be null"); + + char[] keystorePasswordToUse = getPassword(keystorePassword); + this.keystorePassword = keystorePasswordToUse; + this.keymanager = (builder, keyStoreType) -> { + + try (InputStream is = resource.get()) { + builder.keyManager(createKeyManagerFactory(is, keystorePasswordToUse, keyStoreType)); + } + }; + + return this; + } + + /** + * Sets the protocol used for the connection established to Redis Server, such as {@code TLSv1.2, TLSv1.1, TLSv1}. + * + * @param protocols list of desired protocols to use. + * @return {@code this} + * @since 5.3 + */ + public Builder protocols(String... protocols) { + + LettuceAssert.notNull(protocols, "Protocols must not be null"); + + this.protocols = protocols; + return this; + } + + /** + * Sets the Truststore file to load trusted certificates. The truststore file must be supported by + * {@link java.security.KeyStore} which is {@link KeyStore#getDefaultType()} by default. The truststore is reloaded on + * each connection attempt that allows to replace certificates during runtime. + * + * @param truststore the truststore file, must not be {@literal null}. + * @return {@code this} + */ + public Builder truststore(File truststore) { + return truststore(truststore, null); + } + + /** + * Sets the Truststore file to load trusted certificates. The truststore file must be supported by + * {@link java.security.KeyStore} which is {@link KeyStore#getDefaultType()} by default. The truststore is reloaded on + * each connection attempt that allows to replace certificates during runtime. + * + * @param truststore the truststore file, must not be {@literal null}. + * @param truststorePassword the truststore password. 
May be empty to omit password and the truststore integrity check. + * @return {@code this} + */ + public Builder truststore(File truststore, String truststorePassword) { + + LettuceAssert.notNull(truststore, "Truststore must not be null"); + LettuceAssert.isTrue(truststore.exists(), () -> String.format("Truststore file %s does not exist", truststore)); + LettuceAssert.isTrue(truststore.isFile(), () -> String.format("Truststore file %s is not a file", truststore)); + + return truststore(Resource.from(truststore), getPassword(truststorePassword)); + } + + /** + * Sets the Truststore resource to load trusted certificates. The truststore resource must be supported by + * {@link java.security.KeyStore} which is {@link KeyStore#getDefaultType()} by default. The truststore is reloaded on + * each connection attempt that allows to replace certificates during runtime. + * + * @param truststore the truststore file, must not be {@literal null}. + * @return {@code this} + */ + public Builder truststore(URL truststore) { + return truststore(truststore, null); + } + + /** + * Sets the Truststore resource to load trusted certificates. The truststore resource must be supported by + * {@link java.security.KeyStore} which is {@link KeyStore#getDefaultType()} by default. The truststore is reloaded on + * each connection attempt that allows to replace certificates during runtime. + * + * @param truststore the truststore file, must not be {@literal null}. + * @param truststorePassword the truststore password. May be empty to omit password and the truststore integrity check. + * @return {@code this} + */ + public Builder truststore(URL truststore, String truststorePassword) { + + LettuceAssert.notNull(truststore, "Truststore must not be null"); + + this.truststore = truststore; + + return truststore(Resource.from(truststore), getPassword(truststorePassword)); + } + + /** + * Sets the certificate file to load trusted certificates. The file must provide X.509 certificates in PEM format. + * Certificates are reloaded on each connection attempt that allows to replace certificates during runtime. + * + * @param certCollection the X.509 certificate collection in PEM format. + * @return {@code this} + * @since 5.3 + */ + public Builder trustManager(File certCollection) { + + LettuceAssert.notNull(certCollection, "Certificate collection must not be null"); + LettuceAssert.isTrue(certCollection.exists(), + () -> String.format("Certificate collection file %s does not exist", certCollection)); + LettuceAssert.isTrue(certCollection.isFile(), + () -> String.format("Certificate collection %s is not a file", certCollection)); + + return trustManager(Resource.from(certCollection)); + } + + /** + * Sets the certificate resource to load trusted certificates. The file must provide X.509 certificates in PEM format. + * Certificates are reloaded on each connection attempt that allows to replace certificates during runtime. + * + * @param certCollection the X.509 certificate collection in PEM format. + * @return {@code this} + * @since 5.3 + */ + public Builder trustManager(Resource certCollection) { + + LettuceAssert.notNull(certCollection, "Truststore must not be null"); + + this.trustmanager = (builder, keyStoreType) -> { + + try (InputStream is = certCollection.get()) { + builder.trustManager(is); + } + }; + + return this; + } + + /** + * Sets the {@link TrustManagerFactory}. + * + * @param trustManagerFactory the {@link TrustManagerFactory} to use. 
+ * @return {@code this} + * @since 5.3 + */ + public Builder trustManager(TrustManagerFactory trustManagerFactory) { + + LettuceAssert.notNull(trustManagerFactory, "TrustManagerFactory must not be null"); + + this.trustmanager = (builder, keyStoreType) -> { + builder.trustManager(trustManagerFactory); + }; + + return this; + } + + /** + * Sets the Truststore resource to load trusted certificates. The truststore resource must be supported by + * {@link java.security.KeyStore} which is {@link KeyStore#getDefaultType()} by default. The truststore is reloaded on + * each connection attempt that allows to replace certificates during runtime. + * + * @param resource the provider that opens a {@link InputStream} to the keystore file, must not be {@literal null}. + * @param truststorePassword the truststore password. May be empty to omit password and the truststore integrity check. + * @return {@code this} + */ + public Builder truststore(Resource resource, char[] truststorePassword) { + + LettuceAssert.notNull(resource, "Truststore InputStreamProvider must not be null"); + + char[] passwordToUse = getPassword(truststorePassword); + this.truststorePassword = passwordToUse; + this.trustmanager = (builder, keyStoreType) -> { + + try (InputStream is = resource.get()) { + builder.trustManager(createTrustManagerFactory(is, passwordToUse, keyStoreType)); + } + }; + + return this; + } + + /** + * Applies a {@link SslContextBuilder} customizer by calling {@link java.util.function.Consumer#accept(Object)} + * + * @param contextBuilderCustomizer builder callback to customize the {@link SslContextBuilder}. + * @return {@code this} + * @since 5.3 + */ + public Builder sslContext(Consumer contextBuilderCustomizer) { + + LettuceAssert.notNull(contextBuilderCustomizer, "SslContextBuilder customizer must not be null"); + + this.sslContextBuilderCustomizer = contextBuilderCustomizer; + return this; + } + + /** + * Configures a {@link Supplier} to create {@link SSLParameters}. + * + * @param sslParametersSupplier {@link Supplier} for {@link SSLParameters}. + * @return {@code this} + * @since 5.3 + */ + public Builder sslParameters(Supplier sslParametersSupplier) { + + LettuceAssert.notNull(sslParametersSupplier, "SSLParameters supplier must not be null"); + + this.sslParametersSupplier = sslParametersSupplier; + return this; + } + + /** + * Create a new instance of {@link SslOptions} + * + * @return new instance of {@link SslOptions} + */ + public SslOptions build() { + return new SslOptions(this); + } + } + + /** + * Creates a new {@link SslContextBuilder} object that is pre-configured with values from this {@link SslOptions} object. + * + * @return a new {@link SslContextBuilder}. + * @throws IOException thrown when loading the keystore or the truststore fails. + * @throws GeneralSecurityException thrown when loading the keystore or the truststore fails. 
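Editor's note (not part of this change): a minimal sketch of assembling SslOptions and handing them to a client via ClientOptions; the truststore path, password and endpoint are illustrative placeholders:

    import java.io.File;

    import io.lettuce.core.ClientOptions;
    import io.lettuce.core.RedisClient;
    import io.lettuce.core.SslOptions;

    public class SslOptionsExample {

        public static void main(String[] args) {

            SslOptions sslOptions = SslOptions.builder()
                    .jdkSslProvider()
                    .truststore(new File("/path/to/truststore.jks"), "changeit") // illustrative path and password
                    .protocols("TLSv1.2")
                    .build();

            RedisClient client = RedisClient.create("rediss://redis.example.com:6380"); // illustrative endpoint
            client.setOptions(ClientOptions.builder().sslOptions(sslOptions).build());
        }
    }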
+ * @since 5.3 + */ + public SslContextBuilder createSslContextBuilder() throws IOException, GeneralSecurityException { + + SslContextBuilder sslContextBuilder = SslContextBuilder.forClient().sslProvider(this.sslProvider) + .keyStoreType(keyStoreType); + + if (protocols != null && protocols.length > 0) { + sslContextBuilder.protocols(protocols); + } + + if (cipherSuites != null && cipherSuites.length > 0) { + sslContextBuilder.ciphers(Arrays.asList(cipherSuites)); + } + + keymanager.accept(sslContextBuilder, this.keyStoreType); + trustmanager.accept(sslContextBuilder, this.keyStoreType); + sslContextBuilderCustomizer.accept(sslContextBuilder); + + return sslContextBuilder; + } + + /** + * Creates a {@link SSLParameters} object that is pre-configured with values from this {@link SslOptions} object. + * + * @return a new a {@link SSLParameters} object. + * @since 5.3 + */ + public SSLParameters createSSLParameters() { + + SSLParameters sslParams = sslParametersSupplier.get(); + + if (protocols != null && protocols.length > 0) { + sslParams.setProtocols(protocols); + } + + if (cipherSuites != null && cipherSuites.length > 0) { + sslParams.setCipherSuites(cipherSuites); + } + + return sslParams; + } + + /** + * Returns a builder to create new {@link SslOptions} whose settings are replicated from the current {@link SslOptions}. + * + * @return a {@link SslOptions.Builder} to create new {@link SslOptions} whose settings are replicated from the current + * {@link SslOptions} + * + * @since 5.3 + */ + public SslOptions.Builder mutate() { + + Builder builder = builder(); + builder.keyStoreType = this.keyStoreType; + builder.sslProvider = this.getSslProvider(); + builder.keystore = this.keystore; + builder.keystorePassword = this.keystorePassword; + builder.truststore = this.getTruststore(); + builder.truststorePassword = this.getTruststorePassword(); + + builder.protocols = this.protocols; + builder.cipherSuites = this.cipherSuites; + + builder.sslContextBuilderCustomizer = this.sslContextBuilderCustomizer; + builder.sslParametersSupplier = this.sslParametersSupplier; + builder.keymanager = this.keymanager; + builder.trustmanager = this.trustmanager; + + return builder; + } + + /** + * @return the configured {@link SslProvider}. + */ + @Deprecated + public SslProvider getSslProvider() { + return sslProvider; + } + + /** + * @return the keystore {@link URL}. + * @deprecated since 5.3, {@link javax.net.ssl.KeyManager} is configured via {@link #createSslContextBuilder()}. + */ + @Deprecated + public URL getKeystore() { + return keystore; + } + + /** + * @return the set of protocols + */ + public String[] getProtocols() { + return protocols; + } + + /** + * @return the set of cipher suites + */ + public String[] getCipherSuites() { + return cipherSuites; + } + + /** + * @return the password for the keystore. May be empty. + * @deprecated since 5.3, {@link javax.net.ssl.KeyManager} is configured via {@link #createSslContextBuilder()}. + */ + @Deprecated + public char[] getKeystorePassword() { + return Arrays.copyOf(keystorePassword, keystorePassword.length); + } + + /** + * @return the truststore {@link URL}. + * @deprecated since 5.3, {@link javax.net.ssl.TrustManager} is configured via {@link #createSslContextBuilder()}. + */ + @Deprecated + public URL getTruststore() { + return truststore; + } + + /** + * @return the password for the truststore. May be empty. + * @deprecated since 5.3, {@link javax.net.ssl.TrustManager} is configured via {@link #createSslContextBuilder()}. 
+ */ + @Deprecated + public char[] getTruststorePassword() { + return Arrays.copyOf(truststorePassword, truststorePassword.length); + } + + private static KeyManagerFactory createKeyManagerFactory(InputStream inputStream, char[] storePassword, String keyStoreType) + throws GeneralSecurityException, IOException { + + KeyStore keyStore = getKeyStore(inputStream, storePassword, keyStoreType); + + KeyManagerFactory keyManagerFactory = KeyManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm()); + keyManagerFactory.init(keyStore, storePassword == null ? new char[0] : storePassword); + + return keyManagerFactory; + } + + private static KeyStore getKeyStore(InputStream inputStream, char[] storePassword, String keyStoreType) + throws KeyStoreException, IOException, NoSuchAlgorithmException, CertificateException { + + KeyStore keyStore = KeyStore + .getInstance(LettuceStrings.isEmpty(keyStoreType) ? KeyStore.getDefaultType() : keyStoreType); + + try { + keyStore.load(inputStream, storePassword); + } finally { + inputStream.close(); + } + return keyStore; + } + + private static TrustManagerFactory createTrustManagerFactory(InputStream inputStream, char[] storePassword, + String keystoreType) throws GeneralSecurityException, IOException { + + KeyStore trustStore = getKeyStore(inputStream, storePassword, keystoreType); + + TrustManagerFactory trustManagerFactory = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm()); + trustManagerFactory.init(trustStore); + + return trustManagerFactory; + } + + private static char[] getPassword(String truststorePassword) { + return LettuceStrings.isNotEmpty(truststorePassword) ? truststorePassword.toCharArray() : null; + } + + private static char[] getPassword(char[] chars) { + return chars != null ? Arrays.copyOf(chars, chars.length) : null; + } + + @FunctionalInterface + interface KeystoreAction { + + static KeystoreAction NO_OP = (builder, keyStoreType) -> { + }; + + void accept(SslContextBuilder sslContextBuilder, String keyStoreType) throws IOException, GeneralSecurityException; + } + + /** + * Supplier for a {@link InputStream} representing a resource. The resulting {@link InputStream} must be closed by the + * calling code. + * + * @since 5.3 + */ + @FunctionalInterface + public interface Resource { + + /** + * Create a {@link Resource} that obtains a {@link InputStream} from a {@link URL}. + * + * @param url the URL to obtain the {@link InputStream} from. + * @return a {@link Resource} that opens a connection to the URL and obtains the {@link InputStream} for it. + */ + public static Resource from(URL url) { + + LettuceAssert.notNull(url, "URL must not be null"); + + return () -> url.openConnection().getInputStream(); + } + + /** + * Create a {@link Resource} that obtains a {@link InputStream} from a {@link File}. + * + * @param file the File to obtain the {@link InputStream} from. + * @return a {@link Resource} that obtains the {@link FileInputStream} for the given {@link File}. + */ + public static Resource from(File file) { + + LettuceAssert.notNull(file, "File must not be null"); + + return () -> new FileInputStream(file); + } + + /** + * Obtains the {@link InputStream}. + * + * @return the {@link InputStream}. 
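Because `Resource` is a functional interface, any `InputStream` supplier can back a truststore, not only files and URLs. A hedged sketch that reads a truststore from the classpath (the resource path and password are hypothetical):

```java
import io.lettuce.core.SslOptions;
import io.lettuce.core.SslOptions.Resource;

public class ClasspathTruststoreExample {

    public static SslOptions fromClasspath() {

        // Any InputStream supplier qualifies as a Resource.
        Resource truststore = () -> ClasspathTruststoreExample.class.getResourceAsStream("/tls/truststore.p12");

        return SslOptions.builder()
                .truststore(truststore, "changeit".toCharArray())
                .build();
    }
}
```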
+ * @throws IOException + */ + InputStream get() throws IOException; + } +} diff --git a/src/main/java/io/lettuce/core/StatefulRedisConnectionImpl.java b/src/main/java/io/lettuce/core/StatefulRedisConnectionImpl.java new file mode 100644 index 0000000000..5f79e8d421 --- /dev/null +++ b/src/main/java/io/lettuce/core/StatefulRedisConnectionImpl.java @@ -0,0 +1,251 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import static io.lettuce.core.protocol.CommandType.*; + +import java.time.Duration; +import java.util.ArrayList; +import java.util.Collection; +import java.util.List; +import java.util.function.Consumer; +import java.util.stream.Collectors; + +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.async.RedisAsyncCommands; +import io.lettuce.core.api.reactive.RedisReactiveCommands; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.cluster.api.sync.RedisClusterCommands; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.output.MultiOutput; +import io.lettuce.core.output.StatusOutput; +import io.lettuce.core.protocol.*; + +/** + * A thread-safe connection to a Redis server. Multiple threads may share one {@link StatefulRedisConnectionImpl} + * + * A {@link ConnectionWatchdog} monitors each connection and reconnects automatically until {@link #close} is called. All + * pending commands will be (re)sent after successful reconnection. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + */ +public class StatefulRedisConnectionImpl extends RedisChannelHandler implements StatefulRedisConnection { + + protected final RedisCodec codec; + protected final RedisCommands sync; + protected final RedisAsyncCommandsImpl async; + protected final RedisReactiveCommandsImpl reactive; + private final ConnectionState state = new ConnectionState(); + + protected MultiOutput multi; + + /** + * Initialize a new connection. + * + * @param writer the channel writer + * @param codec Codec used to encode/decode keys and values. + * @param timeout Maximum time to wait for a response. + */ + public StatefulRedisConnectionImpl(RedisChannelWriter writer, RedisCodec codec, Duration timeout) { + + super(writer, timeout); + + this.codec = codec; + this.async = newRedisAsyncCommandsImpl(); + this.sync = newRedisSyncCommandsImpl(); + this.reactive = newRedisReactiveCommandsImpl(); + } + + @Override + public RedisAsyncCommands async() { + return async; + } + + /** + * Create a new instance of {@link RedisCommands}. Can be overriden to extend. + * + * @return a new instance + */ + protected RedisCommands newRedisSyncCommandsImpl() { + return syncHandler(async(), RedisCommands.class, RedisClusterCommands.class); + } + + /** + * Create a new instance of {@link RedisAsyncCommandsImpl}. Can be overriden to extend. 
+ * + * @return a new instance + */ + protected RedisAsyncCommandsImpl newRedisAsyncCommandsImpl() { + return new RedisAsyncCommandsImpl<>(this, codec); + } + + @Override + public RedisReactiveCommands reactive() { + return reactive; + } + + /** + * Create a new instance of {@link RedisReactiveCommandsImpl}. Can be overriden to extend. + * + * @return a new instance + */ + protected RedisReactiveCommandsImpl newRedisReactiveCommandsImpl() { + return new RedisReactiveCommandsImpl<>(this, codec); + } + + @Override + public RedisCommands sync() { + return sync; + } + + @Override + public boolean isMulti() { + return multi != null; + } + + @Override + public RedisCommand dispatch(RedisCommand command) { + + RedisCommand toSend = preProcessCommand(command); + + try { + return super.dispatch(toSend); + } finally { + if (command.getType().name().equals(MULTI.name())) { + multi = (multi == null ? new MultiOutput<>(codec) : multi); + } + } + } + + @Override + public Collection> dispatch(Collection> commands) { + + List> sentCommands = new ArrayList<>(commands.size()); + + commands.forEach(o -> { + RedisCommand command = preProcessCommand(o); + + sentCommands.add(command); + if (command.getType().name().equals(MULTI.name())) { + multi = (multi == null ? new MultiOutput<>(codec) : multi); + } + }); + + return super.dispatch(sentCommands); + } + + protected RedisCommand preProcessCommand(RedisCommand command) { + + RedisCommand local = command; + + if (local.getType().name().equals(AUTH.name())) { + local = attachOnComplete(local, status -> { + if ("OK".equals(status)) { + + List args = CommandArgsAccessor.getCharArrayArguments(command.getArgs()); + + if (!args.isEmpty()) { + state.setUserNamePassword(args); + } else { + + List strings = CommandArgsAccessor.getStringArguments(command.getArgs()); + state.setUserNamePassword(strings.stream().map(String::toCharArray).collect(Collectors.toList())); + } + } + }); + } + + if (local.getType().name().equals(SELECT.name())) { + local = attachOnComplete(local, status -> { + if ("OK".equals(status)) { + Long db = CommandArgsAccessor.getFirstInteger(command.getArgs()); + if (db != null) { + state.setDb(db.intValue()); + } + } + }); + } + + if (local.getType().name().equals(READONLY.name())) { + local = attachOnComplete(local, status -> { + if ("OK".equals(status)) { + state.setReadOnly(true); + } + }); + } + + if (local.getType().name().equals(READWRITE.name())) { + local = attachOnComplete(local, status -> { + if ("OK".equals(status)) { + state.setReadOnly(false); + } + }); + } + + if (local.getType().name().equals(DISCARD.name())) { + if (multi != null) { + multi.cancel(); + multi = null; + } + } + + if (local.getType().name().equals(EXEC.name())) { + MultiOutput multiOutput = this.multi; + this.multi = null; + if (multiOutput == null) { + multiOutput = new MultiOutput<>(codec); + } + local.setOutput((MultiOutput) multiOutput); + } + + if (multi != null && !local.getType().name().equals(MULTI.name())) { + local = new TransactionalCommand<>(local); + multi.add(local); + } + return local; + } + + private RedisCommand attachOnComplete(RedisCommand command, Consumer consumer) { + + if (command instanceof CompleteableCommand) { + CompleteableCommand completeable = (CompleteableCommand) command; + completeable.onComplete(consumer); + } + return command; + } + + /** + * @param clientName + * @deprecated since 6.0, use {@link RedisAsyncCommands#clientSetname(Object)}. 
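For context, a stateful connection created by `RedisClient` exposes the three API facets shown above. A minimal sketch assuming a Redis server reachable at `redis://localhost`:

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;

public class ConnectionFacetsExample {

    public static void main(String[] args) {

        RedisClient client = RedisClient.create("redis://localhost"); // placeholder URI
        StatefulRedisConnection<String, String> connection = client.connect();

        // One thread-safe connection, three synchronization styles.
        connection.sync().set("key", "value");
        connection.async().get("key").thenAccept(System.out::println);
        connection.reactive().get("key").subscribe(System.out::println);

        connection.close();
        client.shutdown();
    }
}
```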
+ */ + @Deprecated + public void setClientName(String clientName) { + + CommandArgs args = new CommandArgs<>(StringCodec.UTF8).add(CommandKeyword.SETNAME).addValue(clientName); + AsyncCommand async = new AsyncCommand<>( + new Command<>(CommandType.CLIENT, new StatusOutput<>(StringCodec.UTF8), args)); + state.setClientName(clientName); + + dispatch((RedisCommand) async); + } + + public ConnectionState getConnectionState() { + return state; + } +} diff --git a/src/main/java/io/lettuce/core/StreamMessage.java b/src/main/java/io/lettuce/core/StreamMessage.java new file mode 100644 index 0000000000..aa35ed66fe --- /dev/null +++ b/src/main/java/io/lettuce/core/StreamMessage.java @@ -0,0 +1,81 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.util.Map; +import java.util.Objects; + +/** + * A stream message and its id. + * + * @author Mark Paluch + * @since 5.1 + */ +public class StreamMessage { + + private final K stream; + private final String id; + private final Map body; + + /** + * Create a new {@link StreamMessage}. + * + * @param stream the stream. + * @param id the message id. + * @param body map containing the message body. + */ + public StreamMessage(K stream, String id, Map body) { + + this.stream = stream; + this.id = id; + this.body = body; + } + + public K getStream() { + return stream; + } + + public String getId() { + return id; + } + + /** + * @return the message body. Can be {@literal null} for commands that do not return the message body. + */ + public Map getBody() { + return body; + } + + @Override + public boolean equals(Object o) { + if (this == o) + return true; + if (!(o instanceof StreamMessage)) + return false; + StreamMessage that = (StreamMessage) o; + return Objects.equals(stream, that.stream) && Objects.equals(id, that.id) && Objects.equals(body, that.body); + } + + @Override + public int hashCode() { + return Objects.hash(stream, id, body); + } + + @Override + public String toString() { + return String.format("StreamMessage[%s:%s]%s", stream, id, body); + } +} diff --git a/src/main/java/io/lettuce/core/StreamScanCursor.java b/src/main/java/io/lettuce/core/StreamScanCursor.java new file mode 100644 index 0000000000..8ba69ac21d --- /dev/null +++ b/src/main/java/io/lettuce/core/StreamScanCursor.java @@ -0,0 +1,34 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
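A short sketch of how `StreamMessage` instances are consumed, assuming a local Redis and a stream named `my-stream` (both placeholders):

```java
import java.util.List;

import io.lettuce.core.Range;
import io.lettuce.core.RedisClient;
import io.lettuce.core.StreamMessage;
import io.lettuce.core.api.sync.RedisCommands;

public class StreamMessageExample {

    public static void main(String[] args) {

        RedisCommands<String, String> commands = RedisClient.create("redis://localhost").connect().sync();

        // Each message carries the stream key, the entry id and the field/value body.
        List<StreamMessage<String, String>> messages = commands.xrange("my-stream", Range.create("-", "+"));

        for (StreamMessage<String, String> message : messages) {
            System.out.printf("%s -> %s%n", message.getId(), message.getBody());
        }
    }
}
```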
+ */ +package io.lettuce.core; + +/** + * Cursor result using the Streaming API. Provides the count of retrieved elements. + * + * @author Mark Paluch + * @since 3.0 + */ +public class StreamScanCursor extends ScanCursor { + private long count; + + public long getCount() { + return count; + } + + public void setCount(long count) { + this.count = count; + } +} diff --git a/src/main/java/io/lettuce/core/TimeoutOptions.java b/src/main/java/io/lettuce/core/TimeoutOptions.java new file mode 100644 index 0000000000..3dec488aa5 --- /dev/null +++ b/src/main/java/io/lettuce/core/TimeoutOptions.java @@ -0,0 +1,255 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.io.Serializable; +import java.time.Duration; +import java.util.concurrent.TimeUnit; + +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.protocol.RedisCommand; + +/** + * Options for command timeouts. This options configure how and whether commands time out once they were dispatched. Command + * timeout begins: + *

+ * <ul>
+ * <li>When the command is sent successfully to the transport</li>
+ * <li>Queued while the connection was inactive</li>
+ * </ul>
+ * + * The timeout is canceled upon command completion/cancellation. Timeouts are not tied to a specific API and expire commands + * regardless of the synchronization method provided by the API that was used to enqueue the command. + * + * @author Mark Paluch + * @since 5.1 + */ +@SuppressWarnings("serial") +public class TimeoutOptions implements Serializable { + + public static final boolean DEFAULT_TIMEOUT_COMMANDS = false; + + private final boolean timeoutCommands; + private final boolean applyConnectionTimeout; + private final TimeoutSource source; + + private TimeoutOptions(boolean timeoutCommands, boolean applyConnectionTimeout, TimeoutSource source) { + + this.timeoutCommands = timeoutCommands; + this.applyConnectionTimeout = applyConnectionTimeout; + this.source = source; + } + + /** + * Returns a new {@link TimeoutOptions.Builder} to construct {@link TimeoutOptions}. + * + * @return a new {@link TimeoutOptions.Builder} to construct {@link TimeoutOptions}. + */ + public static Builder builder() { + return new Builder(); + } + + /** + * Create a new instance of {@link TimeoutOptions} with default settings. + * + * @return a new instance of {@link TimeoutOptions} with default settings. + */ + public static TimeoutOptions create() { + return builder().build(); + } + + /** + * Create a new instance of {@link TimeoutOptions} with enabled timeout applying default connection timeouts. + * + * @return a new instance of {@link TimeoutOptions} with enabled timeout applying default connection timeouts. + */ + public static TimeoutOptions enabled() { + return builder().timeoutCommands().connectionTimeout().build(); + } + + /** + * Create a new instance of {@link TimeoutOptions} with enabled timeout applying a fixed {@link Duration timeout}. + * + * @return a new instance of {@link TimeoutOptions} with enabled timeout applying a fixed {@link Duration timeout}. + */ + public static TimeoutOptions enabled(Duration timeout) { + return builder().timeoutCommands().fixedTimeout(timeout).build(); + } + + /** + * Builder for {@link TimeoutOptions}. + */ + public static class Builder { + + private boolean timeoutCommands = DEFAULT_TIMEOUT_COMMANDS; + private boolean applyConnectionTimeout = false; + private TimeoutSource source; + + /** + * Enable command timeouts. Disabled by default, see {@link #DEFAULT_TIMEOUT_COMMANDS}. + * + * @return {@code this} + */ + public Builder timeoutCommands() { + return timeoutCommands(true); + } + + /** + * Configure whether commands should timeout. Disabled by default, see {@link #DEFAULT_TIMEOUT_COMMANDS}. + * + * @param enabled {@literal true} to enable timeout; {@literal false} to disable timeouts. + * @return {@code this} + */ + public Builder timeoutCommands(boolean enabled) { + + this.timeoutCommands = enabled; + return this; + } + + /** + * Set a fixed timeout for all commands. + * + * @param duration the timeout {@link Duration}, must not be {@literal null}. + * @return {@code this} + */ + public Builder fixedTimeout(Duration duration) { + + LettuceAssert.notNull(duration, "Duration must not be null"); + + return timeoutSource(new FixedTimeoutSource(duration.toNanos(), TimeUnit.NANOSECONDS)); + } + + /** + * Configure a {@link TimeoutSource} that applies timeouts configured on the connection/client instance. + * + * @return {@code this} + */ + public Builder connectionTimeout() { + return timeoutSource(new DefaultTimeoutSource()); + } + + /** + * Set a {@link TimeoutSource} to obtain the timeout value per {@link RedisCommand}. 
+ * + * @param source the timeout source. + * @return {@code this} + */ + public Builder timeoutSource(TimeoutSource source) { + + LettuceAssert.notNull(source, "TimeoutSource must not be null"); + + timeoutCommands(true); + this.applyConnectionTimeout = source instanceof DefaultTimeoutSource; + this.source = source; + return this; + } + + /** + * Create a new instance of {@link TimeoutOptions}. + * + * @return new instance of {@link TimeoutOptions} + */ + public TimeoutOptions build() { + + if (timeoutCommands) { + if (source == null) { + throw new IllegalStateException("TimeoutSource is required for enabled timeouts"); + } + } + + return new TimeoutOptions(timeoutCommands, applyConnectionTimeout, source); + } + } + + /** + * @return {@literal true} if commands should time out. + */ + public boolean isTimeoutCommands() { + return timeoutCommands; + } + + /** + * @return {@literal true} to apply connection timeouts declared on connection level. + */ + public boolean isApplyConnectionTimeout() { + return applyConnectionTimeout; + } + + /** + * @return the timeout source to determine the timeout for a {@link RedisCommand}. Can be {@literal null} if + * {@link #isTimeoutCommands()} is {@literal false}. + */ + public TimeoutSource getSource() { + return source; + } + + private static class DefaultTimeoutSource extends TimeoutSource { + + private final long timeout = -1; + + @Override + public long getTimeout(RedisCommand command) { + return timeout; + } + } + + private static class FixedTimeoutSource extends TimeoutSource { + + private final long timeout; + private final TimeUnit timeUnit; + + FixedTimeoutSource(long timeout, TimeUnit timeUnit) { + + this.timeout = timeout; + this.timeUnit = timeUnit; + } + + @Override + public long getTimeout(RedisCommand command) { + return timeout; + } + + @Override + public TimeUnit getTimeUnit() { + return timeUnit; + } + } + + /** + * Source for the actual timeout to expire a particular {@link RedisCommand}. + */ + public static abstract class TimeoutSource { + + /** + * Obtains the timeout for a {@link RedisCommand}. All timeouts must be specified in {@link #getTimeUnit()}. Values + * greater zero will timeout the command. Values less or equal to zero do not timeout the command. + *
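A minimal sketch of wiring `TimeoutOptions` into `ClientOptions` with a fixed command timeout; the ten-second value and the URI are placeholders:

```java
import java.time.Duration;

import io.lettuce.core.ClientOptions;
import io.lettuce.core.RedisClient;
import io.lettuce.core.TimeoutOptions;

public class TimeoutOptionsExample {

    public static void main(String[] args) {

        // Expire dispatched commands after 10 seconds, independent of the calling API.
        ClientOptions options = ClientOptions.builder()
                .timeoutOptions(TimeoutOptions.enabled(Duration.ofSeconds(10)))
                .build();

        RedisClient client = RedisClient.create("redis://localhost"); // placeholder URI
        client.setOptions(options);
        client.shutdown();
    }
}
```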

+ * {@code command} may be null if a timeout is required but the command is not yet known, e.g. when the timeout is + * required but a connect did not finish yet. + * + * @param command can be {@literal null}. + * @return the timeout value. Zero disables the timeout. A value of {@code -1} applies the default timeout configured on + * the connection. + */ + public abstract long getTimeout(RedisCommand command); + + /** + * @return the {@link TimeUnit} for the timeout. + */ + public TimeUnit getTimeUnit() { + return TimeUnit.MILLISECONDS; + } + } +} diff --git a/src/main/java/io/lettuce/core/TransactionResult.java b/src/main/java/io/lettuce/core/TransactionResult.java new file mode 100644 index 0000000000..ffa3022836 --- /dev/null +++ b/src/main/java/io/lettuce/core/TransactionResult.java @@ -0,0 +1,76 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.util.List; +import java.util.stream.Stream; + +/** + * Value interface for a {@code MULTI} transaction result. {@link TransactionResult} contains whether the transaction was rolled + * back (i.e. conditional transaction using {@code WATCH}) and the {@link List} of transaction responses. + * + * @author Mark Paluch + * @since 5.0 + */ +public interface TransactionResult extends Iterable { + + /** + * @return {@literal true} if the transaction batch was discarded. + * @since 5.1 + */ + boolean wasDiscarded(); + + /** + * @return {@literal true} if the transaction batch was discarded. + * @deprecated use renamed method {@link #wasDiscarded()} as Redis has no notion of rollback. + */ + @Deprecated + default boolean wasRolledBack() { + return wasDiscarded(); + } + + /** + * Returns the number of elements in this collection. If this {@link TransactionResult} contains more than + * {@link Integer#MAX_VALUE} elements, returns {@link Integer#MAX_VALUE}. + * + * @return the number of elements in this collection + */ + int size(); + + /** + * Returns {@literal true} if this {@link TransactionResult} contains no elements. + * + * @return {@literal true} if this {@link TransactionResult} contains no elements + */ + boolean isEmpty(); + + /** + * Returns the element at the specified position in this {@link TransactionResult}. + * + * @param index index of the element to return + * @param inferred type + * @return the element at the specified position in this {@link TransactionResult} + * @throws IndexOutOfBoundsException if the index is out of range (index < 0 || index >= size()) + */ + T get(int index); + + /** + * Returns a sequential {@code Stream} with this {@link TransactionResult} as its source. 
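A sketch of consuming a `TransactionResult` from the synchronous API; the key names and the local URI are placeholders:

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.TransactionResult;
import io.lettuce.core.api.sync.RedisCommands;

public class TransactionResultExample {

    public static void main(String[] args) {

        RedisCommands<String, String> commands = RedisClient.create("redis://localhost").connect().sync();

        commands.multi();
        commands.set("key", "value");
        commands.incr("counter");

        TransactionResult result = commands.exec();

        if (!result.wasDiscarded()) {
            String setReply = result.get(0); // "OK"
            Long counter = result.get(1);    // counter value after INCR
            System.out.println(setReply + " / " + counter);
        }
    }
}
```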
+ * + * @return a sequential {@code Stream} over the elements in this {@link TransactionResult} + */ + Stream stream(); +} diff --git a/src/main/java/io/lettuce/core/Transports.java b/src/main/java/io/lettuce/core/Transports.java new file mode 100644 index 0000000000..f8b11429f6 --- /dev/null +++ b/src/main/java/io/lettuce/core/Transports.java @@ -0,0 +1,102 @@ +/* + * Copyright 2016-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.resource.EpollProvider; +import io.lettuce.core.resource.EventLoopResources; +import io.lettuce.core.resource.KqueueProvider; +import io.netty.channel.Channel; +import io.netty.channel.EventLoopGroup; +import io.netty.channel.nio.NioEventLoopGroup; +import io.netty.channel.socket.nio.NioSocketChannel; + +/** + * Transport infrastructure utility class. This class provides {@link EventLoopGroup} and {@link Channel} classes for socket and + * native socket transports. + * + * @author Mark Paluch + * @since 4.4 + */ +class Transports { + + /** + * @return the default {@link EventLoopGroup} for socket transport that is compatible with {@link #socketChannelClass()}. + */ + static Class eventLoopGroupClass() { + + if (NativeTransports.isSocketSupported()) { + return NativeTransports.eventLoopGroupClass(); + } + + return NioEventLoopGroup.class; + } + + /** + * @return the default {@link Channel} for socket (network/TCP) transport. + */ + static Class socketChannelClass() { + + if (NativeTransports.isSocketSupported()) { + return NativeTransports.socketChannelClass(); + } + + return NioSocketChannel.class; + } + + /** + * Native transport support. + */ + static class NativeTransports { + + static EventLoopResources RESOURCES = KqueueProvider.isAvailable() ? KqueueProvider.getResources() + : EpollProvider.getResources(); + + /** + * @return {@literal true} if a native transport is available. + */ + static boolean isSocketSupported() { + return EpollProvider.isAvailable() || KqueueProvider.isAvailable(); + } + + /** + * @return the native transport socket {@link Channel} class. + */ + static Class socketChannelClass() { + return RESOURCES.socketChannelClass(); + } + + /** + * @return the native transport domain socket {@link Channel} class. + */ + static Class domainSocketChannelClass() { + return RESOURCES.domainSocketChannelClass(); + } + + /** + * @return the native transport {@link EventLoopGroup} class. 
+ */ + static Class eventLoopGroupClass() { + return RESOURCES.eventLoopGroupClass(); + } + + static void assertAvailable() { + + LettuceAssert.assertState(NativeTransports.isSocketSupported(), + "A unix domain socket connections requires epoll or kqueue and neither is available"); + } + } +} diff --git a/src/main/java/io/lettuce/core/UnblockType.java b/src/main/java/io/lettuce/core/UnblockType.java new file mode 100644 index 0000000000..7950f601e5 --- /dev/null +++ b/src/main/java/io/lettuce/core/UnblockType.java @@ -0,0 +1,42 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.nio.charset.StandardCharsets; + +import io.lettuce.core.protocol.ProtocolKeyword; + +/** + * Unblock type for {@code CLIENT UNBLOCK} command. + * + * @author Mark Paluch + * @since 5.1 + */ +public enum UnblockType implements ProtocolKeyword { + + TIMEOUT, ERROR; + + private final byte[] bytes; + + UnblockType() { + bytes = name().getBytes(StandardCharsets.US_ASCII); + } + + @Override + public byte[] getBytes() { + return bytes; + } +} diff --git a/src/main/java/io/lettuce/core/Value.java b/src/main/java/io/lettuce/core/Value.java new file mode 100644 index 0000000000..b8775bbbf5 --- /dev/null +++ b/src/main/java/io/lettuce/core/Value.java @@ -0,0 +1,293 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.io.Serializable; +import java.util.NoSuchElementException; +import java.util.Optional; +import java.util.function.Consumer; +import java.util.function.Function; +import java.util.function.Supplier; +import java.util.stream.Stream; + +import io.lettuce.core.internal.LettuceAssert; + +/** + * A value container object which may or may not contain a non-null value. If a value is present, {@code isPresent()} will + * return {@code true} and {@code get()} will return the value. + * + *
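As a hedged sketch, `UnblockType` is used with `CLIENT UNBLOCK`; the client id below is a placeholder for the id of a connection blocked in a command such as `BLPOP`:

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.UnblockType;
import io.lettuce.core.api.sync.RedisCommands;

public class ClientUnblockExample {

    public static void main(String[] args) {

        RedisCommands<String, String> commands = RedisClient.create("redis://localhost").connect().sync();

        long blockedClientId = 42L; // placeholder id, e.g. obtained from CLIENT LIST

        // Unblock with a timeout-style reply, or use UnblockType.ERROR to raise an error instead.
        commands.clientUnblock(blockedClientId, UnblockType.TIMEOUT);
    }
}
```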

+ * Additional methods that depend on the presence or absence of a contained value are provided, such as + * {@link #getValueOrElse(java.lang.Object) getValueOrElse()} (return a default value if value not present). + * + * @param Value type. + * @author Mark Paluch + */ +@SuppressWarnings("serial") +public class Value implements Serializable { + + private static final Value EMPTY = new Value<>(null); + + private final V value; + + /** + * {@link Serializable} constructor. + */ + protected Value() { + this.value = null; + } + + /** + * + * @param value the value, may be {@literal null}. + */ + protected Value(V value) { + this.value = value; + } + + /** + * Creates a {@link Value} from an {@link Optional}. The resulting value contains the value from the {@link Optional} if a + * value is present. Value is empty if the {@link Optional} is empty. + * + * @param optional the optional. May be empty but never {@literal null}. + * @param + * @param + * @return the {@link Value} + */ + public static Value from(Optional optional) { + + LettuceAssert.notNull(optional, "Optional must not be null"); + + if (optional.isPresent()) { + return new Value(optional.get()); + } + + return (Value) EMPTY; + } + + /** + * Creates a {@link Value} from a {@code value}. The resulting value contains the value if the {@code value} is not null. + * + * @param value the value. May be {@literal null}. + * @param + * @param + * @return the {@link Value} + */ + public static Value fromNullable(T value) { + + if (value == null) { + return empty(); + } + + return new Value(value); + } + + /** + * Returns an empty {@code Value} instance. No value is present for this instance. + * + * @param + * @return the {@link Value} + */ + public static Value empty() { + return (Value) EMPTY; + } + + /** + * Creates a {@link Value} from a {@code value}. The resulting value contains the value. + * + * @param value the value. Must not be {@literal null}. + * @param + * @param + * @return the {@link Value} + */ + public static Value just(T value) { + + LettuceAssert.notNull(value, "Value must not be null"); + + return new Value(value); + } + + @Override + public boolean equals(Object o) { + if (this == o) + return true; + if (!(o instanceof Value)) + return false; + + Value value1 = (Value) o; + + return value != null ? value.equals(value1.value) : value1.value == null; + + } + + @Override + public int hashCode() { + return value != null ? value.hashCode() : 0; + } + + @Override + public String toString() { + return hasValue() ? String.format("Value[%s]", value) : "Value.empty"; + } + + /** + * If a value is present in this {@code Value}, returns the value, otherwise throws {@code NoSuchElementException}. + * + * @return the non-null value held by this {@code Optional} + * @throws NoSuchElementException if there is no value present + * + * @see Value#hasValue() + */ + public V getValue() { + + if (!hasValue()) { + throw new NoSuchElementException(); + } + + return value; + } + + /** + * Return {@code true} if there is a value present, otherwise {@code false}. + * + * @return {@code true} if there is a value present, otherwise {@code false} + */ + public boolean hasValue() { + return value != null; + } + + /** + * Return the value if present, otherwise invoke {@code other} and return the result of that invocation. + * + * @param otherSupplier a {@code Supplier} whose result is returned if no value is present. Must not be {@literal null}. 
+ * @return the value if present otherwise the result of {@code other.get()} + * @throws NullPointerException if value is not present and {@code other} is null + */ + public V getValueOrElseGet(Supplier otherSupplier) { + + LettuceAssert.notNull(otherSupplier, "Supplier must not be null"); + + if (hasValue()) { + return value; + } + return otherSupplier.get(); + } + + /** + * Return the value if present, otherwise return {@code other}. + * + * @param other the value to be returned if there is no value present, may be null + * @return the value, if present, otherwise {@code other} + */ + public V getValueOrElse(V other) { + + if (hasValue()) { + return this.value; + } + + return other; + } + + /** + * Return the contained value, if present, otherwise throw an exception to be created by the provided supplier. + * + * @param Type of the exception to be thrown + * @param exceptionSupplier The supplier which will return the exception to be thrown, must not be {@literal null}. + * @return the present value + * @throws X if there is no value present + */ + public V getValueOrElseThrow(Supplier exceptionSupplier) throws X { + + LettuceAssert.notNull(exceptionSupplier, "Supplier function must not be null"); + + if (hasValue()) { + return value; + } + + throw exceptionSupplier.get(); + } + + /** + * Returns a {@link Value} consisting of the results of applying the given function to the value of this element. Mapping is + * performed only if a {@link #hasValue() value is present}. + * + * @param The element type of the new value + * @param mapper a stateless function to apply to each element + * @return the new {@link Value} + */ + @SuppressWarnings("unchecked") + public Value map(Function mapper) { + + LettuceAssert.notNull(mapper, "Mapper function must not be null"); + + if (hasValue()) { + return new Value(mapper.apply(getValue())); + } + + return (Value) this; + } + + /** + * If a value is present, invoke the specified {@link java.util.function.Consumer} with the value, otherwise do nothing. + * + * @param consumer block to be executed if a value is present, must not be {@literal null}. + */ + public void ifHasValue(Consumer consumer) { + + LettuceAssert.notNull(consumer, "Consumer must not be null"); + + if (hasValue()) { + consumer.accept(getValue()); + } + } + + /** + * If no value is present, invoke the specified {@link Runnable}, otherwise do nothing. + * + * @param runnable block to be executed if no value value is present, must not be {@literal null}. + */ + public void ifEmpty(Runnable runnable) { + + LettuceAssert.notNull(runnable, "Runnable must not be null"); + + if (!hasValue()) { + runnable.run(); + } + } + + /** + * Returns an {@link Optional} wrapper for the value. + * + * @return {@link Optional} wrapper for the value. + */ + public Optional optional() { + return Optional.ofNullable(value); + } + + /** + * Returns a {@link Stream} wrapper for the value. The resulting stream contains either the value if a this value + * {@link #hasValue() has a value} or it is empty if the value is empty. + * + * @return {@link Stream} wrapper for the value. + */ + public Stream stream() { + + if (hasValue()) { + return Stream.of(value); + } + return Stream.empty(); + } +} diff --git a/src/main/java/io/lettuce/core/ValueScanCursor.java b/src/main/java/io/lettuce/core/ValueScanCursor.java new file mode 100644 index 0000000000..e42fb81442 --- /dev/null +++ b/src/main/java/io/lettuce/core/ValueScanCursor.java @@ -0,0 +1,38 @@ +/* + * Copyright 2011-2020 the original author or authors. 
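A small self-contained sketch of the `Value` container and its fallback and mapping methods:

```java
import io.lettuce.core.Value;

public class ValueExample {

    public static void main(String[] args) {

        Value<String> present = Value.just("hello");
        Value<String> absent = Value.fromNullable(null);

        // Transform and fall back without explicit null checks.
        int length = present.map(String::length).getValueOrElse(0);
        String fallback = absent.getValueOrElse("default");

        absent.ifEmpty(() -> System.out.println("no value present"));
        System.out.println(length + " / " + fallback);
    }
}
```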
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.util.ArrayList; +import java.util.List; + +/** + * Cursor providing a list of values. + * + * @param Value type. + * @author Mark Paluch + * @since 3.0 + */ +public class ValueScanCursor extends ScanCursor { + + private final List values = new ArrayList<>(); + + public ValueScanCursor() { + } + + public List getValues() { + return values; + } +} diff --git a/src/main/java/io/lettuce/core/XAddArgs.java b/src/main/java/io/lettuce/core/XAddArgs.java new file mode 100644 index 0000000000..09a5a4e079 --- /dev/null +++ b/src/main/java/io/lettuce/core/XAddArgs.java @@ -0,0 +1,127 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.protocol.CommandArgs; +import io.lettuce.core.protocol.CommandKeyword; + +/** + * Argument list builder for the Redis XADD command. Static import the methods from + * {@link Builder} and call the methods: {@code maxlen(…)} . + *

+ * {@link XAddArgs} is a mutable object and instances should be used only once to avoid shared mutable state. + * + * @author Mark Paluch + * @since 5.1 + */ +public class XAddArgs { + + private String id; + private Long maxlen; + private boolean approximateTrimming; + + /** + * Builder entry points for {@link XAddArgs}. + */ + public static class Builder { + + /** + * Utility constructor. + */ + private Builder() { + } + + /** + * Creates new {@link XAddArgs} and setting {@literal MAXLEN}. + * + * @return new {@link XAddArgs} with {@literal MAXLEN} set. + * @see XAddArgs#maxlen(long) + */ + public static XAddArgs maxlen(long count) { + return new XAddArgs().maxlen(count); + } + } + + /** + * Limit results to {@code maxlen} entries. + * + * @param id must not be {@literal null}. + * @return {@code this} + */ + public XAddArgs id(String id) { + + LettuceAssert.notNull(id, "Id must not be null"); + + this.id = id; + return this; + } + + /** + * Limit stream to {@code maxlen} entries. + * + * @param maxlen number greater 0. + * @return {@code this} + */ + public XAddArgs maxlen(long maxlen) { + + LettuceAssert.isTrue(maxlen > 0, "Maxlen must be greater 0"); + + this.maxlen = maxlen; + return this; + } + + /** + * Apply efficient trimming for capped streams using the {@code ~} flag. + * + * @return {@code this} + */ + public XAddArgs approximateTrimming() { + return approximateTrimming(true); + } + + /** + * Apply efficient trimming for capped streams using the {@code ~} flag. + * + * @param approximateTrimming {@literal true} to apply efficient radix node trimming. + * @return {@code this} + */ + public XAddArgs approximateTrimming(boolean approximateTrimming) { + + this.approximateTrimming = approximateTrimming; + return this; + } + + public void build(CommandArgs args) { + + if (maxlen != null) { + + args.add(CommandKeyword.MAXLEN); + + if (approximateTrimming) { + args.add("~"); + } + + args.add(maxlen); + } + + if (id != null) { + args.add(id); + } else { + args.add("*"); + } + } +} diff --git a/src/main/java/io/lettuce/core/XClaimArgs.java b/src/main/java/io/lettuce/core/XClaimArgs.java new file mode 100644 index 0000000000..d7af24e863 --- /dev/null +++ b/src/main/java/io/lettuce/core/XClaimArgs.java @@ -0,0 +1,240 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.time.Duration; +import java.time.Instant; +import java.time.temporal.TemporalAccessor; + +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.protocol.CommandArgs; +import io.lettuce.core.protocol.CommandKeyword; +import io.lettuce.core.protocol.CommandType; + +/** + * Argument list builder for the Redis XCLAIM command. Static import the methods + * from {@link XClaimArgs.Builder} and call the methods: {@code minIdleTime(…)} . + *
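A usage sketch for `XAddArgs` with a capped, approximately trimmed stream; the stream name, field values, and URI are placeholders:

```java
import java.util.Collections;

import io.lettuce.core.RedisClient;
import io.lettuce.core.XAddArgs;
import io.lettuce.core.api.sync.RedisCommands;

public class XAddArgsExample {

    public static void main(String[] args) {

        RedisCommands<String, String> commands = RedisClient.create("redis://localhost").connect().sync();

        // Cap the stream at roughly 1000 entries using approximate (~) trimming.
        String messageId = commands.xadd("my-stream",
                XAddArgs.Builder.maxlen(1000).approximateTrimming(),
                Collections.singletonMap("sensor", "21.5"));

        System.out.println(messageId);
    }
}
```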

+ * {@link XClaimArgs} is a mutable object and instances should be used only once to avoid shared mutable state. + * + * @author Mark Paluch + * @since 5.1 + */ +public class XClaimArgs { + + long minIdleTime; + private Long idle; + private Long time; + private Long retrycount; + private boolean force; + private boolean justid; + + /** + * Builder entry points for {@link XAddArgs}. + */ + public static class Builder { + + /** + * Utility constructor. + */ + private Builder() { + } + + /** + * Creates new {@link XClaimArgs} and set the {@code JUSTID} flag to return just the message id and do not increment the + * retry counter. The message body is not returned when calling {@code XCLAIM}. + * + * @return new {@link XClaimArgs} with min idle time set. + * @see XClaimArgs#justid() + * @since 5.3 + */ + public static XClaimArgs justid() { + return new XClaimArgs().justid(); + } + + public static XClaimArgs minIdleTime(long milliseconds) { + return new XClaimArgs().minIdleTime(milliseconds); + } + + /** + * Creates new {@link XClaimArgs} and set the minimum idle time. + * + * @return new {@link XClaimArgs} with min idle time set. + * @see XClaimArgs#minIdleTime(long) + */ + public static XClaimArgs minIdleTime(Duration minIdleTime) { + + LettuceAssert.notNull(minIdleTime, "Min idle time must not be null"); + + return minIdleTime(minIdleTime.toMillis()); + } + } + + /** + * Set the {@code JUSTID} flag to return just the message id and do not increment the retry counter. The message body is not + * returned when calling {@code XCLAIM}. + * + * @return {@code this}. + * @since 5.3 + */ + public XClaimArgs justid() { + + this.justid = true; + return this; + } + + /** + * Return only messages that are idle for at least {@code milliseconds}. + * + * @param milliseconds min idle time. + * @return {@code this}. + */ + public XClaimArgs minIdleTime(long milliseconds) { + + this.minIdleTime = milliseconds; + return this; + } + + /** + * Return only messages that are idle for at least {@code minIdleTime}. + * + * @param minIdleTime min idle time. + * @return {@code this}. + */ + public XClaimArgs minIdleTime(Duration minIdleTime) { + + LettuceAssert.notNull(minIdleTime, "Min idle time must not be null"); + + return minIdleTime(minIdleTime.toMillis()); + } + + /** + * Set the idle time (last time it was delivered) of the message. If IDLE is not specified, an IDLE of 0 is assumed, that + * is, the time count is reset because the message has now a new owner trying to process it + * + * @param milliseconds idle time. + * @return {@code this}. + */ + public XClaimArgs idle(long milliseconds) { + + this.idle = milliseconds; + return this; + } + + /** + * Set the idle time (last time it was delivered) of the message. If IDLE is not specified, an IDLE of 0 is assumed, that + * is, the time count is reset because the message has now a new owner trying to process it + * + * @param idleTime idle time. + * @return {@code this}. + */ + public XClaimArgs idle(Duration idleTime) { + + LettuceAssert.notNull(idleTime, "Idle time must not be null"); + + return idle(idleTime.toMillis()); + } + + /** + * This is the same as IDLE but instead of a relative amount of milliseconds, it sets the idle time to a specific unix time + * (in milliseconds). This is useful in order to rewrite the AOF file generating XCLAIM commands. + * + * @param millisecondsUnixTime idle time. + * @return {@code this}. 
+ */ + public XClaimArgs time(long millisecondsUnixTime) { + + this.time = millisecondsUnixTime; + return this; + } + + /** + * This is the same as IDLE but instead of a relative amount of milliseconds, it sets the idle time to a specific unix time + * (in milliseconds). This is useful in order to rewrite the AOF file generating XCLAIM commands. + * + * @param timestamp idle time. + * @return {@code this}. + */ + public XClaimArgs time(TemporalAccessor timestamp) { + + LettuceAssert.notNull(timestamp, "Timestamp must not be null"); + + return time(Instant.from(timestamp).toEpochMilli()); + } + + /** + * Set the retry counter to the specified value. This counter is incremented every time a message is delivered again. + * Normally {@code XCLAIM} does not alter this counter, which is just served to clients when the XPENDING command is called: + * this way clients can detect anomalies, like messages that are never processed for some reason after a big number of + * delivery attempts. + * + * @param retrycount number of retries. + * @return {@code this}. + */ + public XClaimArgs retryCount(long retrycount) { + + this.retrycount = retrycount; + return this; + } + + /** + * Creates the pending message entry in the PEL even if certain specified IDs are not already in the PEL assigned to a + * different client. However the message must be exist in the stream, otherwise the IDs of non existing messages are + * ignored. + * + * @return {@code this}. + */ + public XClaimArgs force() { + return force(true); + } + + /** + * Creates the pending message entry in the PEL even if certain specified IDs are not already in the PEL assigned to a + * different client. However the message must be exist in the stream, otherwise the IDs of non existing messages are + * ignored. + * + * @param force {@literal true} to enforce PEL creation. + * @return {@code this}. + */ + public XClaimArgs force(boolean force) { + + this.force = force; + return this; + } + + public void build(CommandArgs args) { + + if (idle != null) { + args.add(CommandKeyword.IDLE).add(idle); + } + + if (time != null) { + args.add(CommandType.TIME).add(time); + } + + if (retrycount != null) { + args.add(CommandKeyword.RETRYCOUNT).add(retrycount); + } + + if (force) { + args.add(CommandKeyword.FORCE); + } + + if (justid) { + args.add(CommandKeyword.JUSTID); + } + } +} diff --git a/src/main/java/io/lettuce/core/XGroupCreateArgs.java b/src/main/java/io/lettuce/core/XGroupCreateArgs.java new file mode 100644 index 0000000000..81b879f406 --- /dev/null +++ b/src/main/java/io/lettuce/core/XGroupCreateArgs.java @@ -0,0 +1,84 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import io.lettuce.core.protocol.CommandArgs; + +/** + * Argument list builder for the Redis XGROUP CREATE command. Static import the + * methods from {@link Builder} and call the methods: {@code mkstream(…)} . + *
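A hedged sketch of claiming pending entries with `XClaimArgs`; the group, consumer name, and message id are placeholders, and the exact command overload may vary between Lettuce versions:

```java
import java.time.Duration;
import java.util.List;

import io.lettuce.core.Consumer;
import io.lettuce.core.RedisClient;
import io.lettuce.core.StreamMessage;
import io.lettuce.core.XClaimArgs;
import io.lettuce.core.api.sync.RedisCommands;

public class XClaimExample {

    public static void main(String[] args) {

        RedisCommands<String, String> commands = RedisClient.create("redis://localhost").connect().sync();

        // Take over entries that have been pending for at least one minute.
        List<StreamMessage<String, String>> claimed = commands.xclaim("my-stream",
                Consumer.from("my-group", "consumer-2"),
                XClaimArgs.Builder.minIdleTime(Duration.ofMinutes(1)),
                "1526569495631-0"); // placeholder message id

        claimed.forEach(System.out::println);
    }
}
```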

+ * {@link XGroupCreateArgs} is a mutable object and instances should be used only once to avoid shared mutable state. + * + * @author Mark Paluch + * @since 5.2 + */ +public class XGroupCreateArgs { + + private boolean mkstream; + + /** + * Builder entry points for {@link XGroupCreateArgs}. + */ + public static class Builder { + + /** + * Utility constructor. + */ + private Builder() { + } + + /** + * Creates new {@link XGroupCreateArgs} and setting {@literal MKSTREAM}. + * + * @return new {@link XGroupCreateArgs} with {@literal MKSTREAM} set. + * @see XGroupCreateArgs#mkstream(boolean) + */ + public static XGroupCreateArgs mkstream() { + return mkstream(true); + } + + /** + * Creates new {@link XGroupCreateArgs} and setting {@literal MKSTREAM}. + * + * @param mkstream whether to apply {@literal MKSTREAM}. + * @return new {@link XGroupCreateArgs} with {@literal MKSTREAM} set. + * @see XGroupCreateArgs#mkstream(boolean) + */ + public static XGroupCreateArgs mkstream(boolean mkstream) { + return new XGroupCreateArgs().mkstream(mkstream); + } + } + + /** + * Make a stream if it does not exists. + * + * @param mkstream whether to apply {@literal MKSTREAM} + * @return {@code this} + */ + public XGroupCreateArgs mkstream(boolean mkstream) { + + this.mkstream = mkstream; + return this; + } + + public void build(CommandArgs args) { + + if (mkstream) { + args.add("MKSTREAM"); + } + } +} diff --git a/src/main/java/io/lettuce/core/XReadArgs.java b/src/main/java/io/lettuce/core/XReadArgs.java new file mode 100644 index 0000000000..dfb36df29c --- /dev/null +++ b/src/main/java/io/lettuce/core/XReadArgs.java @@ -0,0 +1,239 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.time.Duration; + +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.protocol.CommandArgs; +import io.lettuce.core.protocol.CommandKeyword; + +/** + * Argument list builder for the Redis XREAD and {@literal XREADGROUP} commands. + * Static import the methods from {@link XReadArgs.Builder} and call the methods: {@code block(…)} . + *
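A short sketch of creating a consumer group with `XGroupCreateArgs`, creating the stream on demand via `MKSTREAM`; stream and group names are placeholders:

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.XGroupCreateArgs;
import io.lettuce.core.XReadArgs.StreamOffset;
import io.lettuce.core.api.sync.RedisCommands;

public class XGroupCreateExample {

    public static void main(String[] args) {

        RedisCommands<String, String> commands = RedisClient.create("redis://localhost").connect().sync();

        // Create the group at the latest offset and create the stream if it does not exist yet.
        commands.xgroupCreate(StreamOffset.latest("my-stream"), "my-group", XGroupCreateArgs.Builder.mkstream());
    }
}
```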

+ * {@link XReadArgs} is a mutable object and instances should be used only once to avoid shared mutable state. + * + * @author Mark Paluch + * @since 5.1 + */ +public class XReadArgs { + + private Long block; + private Long count; + private boolean noack; + + /** + * Builder entry points for {@link XReadArgs}. + */ + public static class Builder { + + /** + * Utility constructor. + */ + private Builder() { + } + + /** + * Create a new {@link XReadArgs} and set {@literal BLOCK}. + * + * @param milliseconds time to block. + * @return new {@link XReadArgs} with {@literal BLOCK} set. + * @see XReadArgs#block(long) + */ + public static XReadArgs block(long milliseconds) { + return new XReadArgs().block(milliseconds); + } + + /** + * Create a new {@link XReadArgs} and set {@literal BLOCK}. + * + * @param timeout time to block. + * @return new {@link XReadArgs} with {@literal BLOCK} set. + * @see XReadArgs#block(Duration) + */ + public static XReadArgs block(Duration timeout) { + + LettuceAssert.notNull(timeout, "Block timeout must not be null"); + + return block(timeout.toMillis()); + } + + /** + * Create a new {@link XReadArgs} and set {@literal COUNT}. + * + * @param count + * @return new {@link XReadArgs} with {@literal COUNT} set. + */ + public static XReadArgs count(long count) { + return new XReadArgs().count(count); + } + + /** + * Create a new {@link XReadArgs} and set {@literal NOACK}. + * + * @return new {@link XReadArgs} with {@literal NOACK} set. + * @see XReadArgs#noack(boolean) + */ + public static XReadArgs noack() { + return noack(true); + } + + /** + * Create a new {@link XReadArgs} and set {@literal NOACK}. + * + * @param noack + * @return new {@link XReadArgs} with {@literal NOACK} set. + * @see XReadArgs#noack(boolean) + */ + public static XReadArgs noack(boolean noack) { + return new XReadArgs().noack(noack); + } + } + + /** + * Perform a blocking read and wait up to {@code milliseconds} for a new stream message. + * + * @param milliseconds max time to wait. + * @return {@code this}. + */ + public XReadArgs block(long milliseconds) { + + this.block = milliseconds; + return this; + } + + /** + * Perform a blocking read and wait up to a {@link Duration timeout} for a new stream message. + * + * @param timeout max time to wait. + * @return {@code this}. + */ + public XReadArgs block(Duration timeout) { + + LettuceAssert.notNull(timeout, "Block timeout must not be null"); + + return block(timeout.toMillis()); + } + + /** + * Limit read to {@code count} messages. + * + * @param count number of messages. + * @return {@code this}. + */ + public XReadArgs count(long count) { + + this.count = count; + return this; + } + + /** + * Use NOACK option to disable auto-acknowledgement. Only valid for {@literal XREADGROUP}. + * + * @param noack {@literal true} to disable auto-ack. + * @return {@code this}. + */ + public XReadArgs noack(boolean noack) { + + this.noack = noack; + return this; + } + + public void build(CommandArgs args) { + + if (block != null) { + args.add(CommandKeyword.BLOCK).add(block); + } + + if (count != null) { + args.add(CommandKeyword.COUNT).add(count); + } + + if (noack) { + args.add(CommandKeyword.NOACK); + } + } + + /** + * Value object representing a Stream with its offset. + */ + public static class StreamOffset { + + final K name; + final String offset; + + private StreamOffset(K name, String offset) { + this.name = name; + this.offset = offset; + } + + /** + * Read all new arriving elements from the stream identified by {@code name}. 
+ * + * @param name must not be {@literal null}. + * @return the {@link StreamOffset} object without a specific offset. + */ + public static StreamOffset latest(K name) { + + LettuceAssert.notNull(name, "Stream must not be null"); + + return new StreamOffset<>(name, "$"); + } + + /** + * Read all new arriving elements from the stream identified by {@code name} with ids greater than the last one consumed + * by the consumer group. + * + * @param name must not be {@literal null}. + * @return the {@link StreamOffset} object without a specific offset. + */ + public static StreamOffset lastConsumed(K name) { + + LettuceAssert.notNull(name, "Stream must not be null"); + + return new StreamOffset<>(name, ">"); + } + + /** + * Read all arriving elements from the stream identified by {@code name} starting at {@code offset}. + * + * @param name must not be {@literal null}. + * @param offset the stream offset. + * @return the {@link StreamOffset} object without a specific offset. + */ + public static StreamOffset from(K name, String offset) { + + LettuceAssert.notNull(name, "Stream must not be null"); + LettuceAssert.notEmpty(offset, "Offset must not be empty"); + + return new StreamOffset<>(name, offset); + } + + public K getName() { + return name; + } + + public String getOffset() { + return offset; + } + + @Override + public String toString() { + return String.format("%s:%s", name, offset); + } + } +} diff --git a/src/main/java/io/lettuce/core/ZAddArgs.java b/src/main/java/io/lettuce/core/ZAddArgs.java new file mode 100644 index 0000000000..54e9bb84bc --- /dev/null +++ b/src/main/java/io/lettuce/core/ZAddArgs.java @@ -0,0 +1,123 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import io.lettuce.core.protocol.CommandArgs; + +/** + * Argument list builder for the improved Redis ZADD command starting from Redis + * 3.0.2. Static import the methods from {@link Builder} and call the methods: {@code xx()} or {@code nx()} . + *
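// Illustrative usage sketch, not part of the diff above: combining the XGroupCreateArgs and XReadArgs
// builders introduced here. The xgroupCreate/xreadgroup command methods, the Consumer type, and the
// stream/group/consumer names are assumptions for the example, not declarations from this patch.
RedisCommands<String, String> commands = connection.sync();               // assumes an existing connection
commands.xgroupCreate(XReadArgs.StreamOffset.latest("my-stream"),         // start the group at the stream tail
        "my-group", XGroupCreateArgs.Builder.mkstream());                 // MKSTREAM: create the stream if missing
List<StreamMessage<String, String>> messages = commands.xreadgroup(
        Consumer.from("my-group", "consumer-1"),                          // group/consumer pair
        XReadArgs.Builder.block(Duration.ofSeconds(2)).count(10),         // block up to 2 s, at most 10 entries
        XReadArgs.StreamOffset.lastConsumed("my-stream"));                 // ">" offset for the consumer group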
+ * {@link ZAddArgs} is a mutable object and instances should be used only once to avoid shared mutable state. + * + * @author Mark Paluch + */ +public class ZAddArgs implements CompositeArgument { + + private boolean nx = false; + private boolean xx = false; + private boolean ch = false; + + /** + * Builder entry points for {@link ScanArgs}. + */ + public static class Builder { + + /** + * Utility constructor. + */ + private Builder() { + } + + /** + * Creates new {@link ZAddArgs} and enabling {@literal NX}. + * + * @return new {@link ZAddArgs} with {@literal NX} enabled. + * @see ZAddArgs#nx() + */ + public static ZAddArgs nx() { + return new ZAddArgs().nx(); + } + + /** + * Creates new {@link ZAddArgs} and enabling {@literal XX}. + * + * @return new {@link ZAddArgs} with {@literal XX} enabled. + * @see ZAddArgs#xx() + */ + public static ZAddArgs xx() { + return new ZAddArgs().xx(); + } + + /** + * Creates new {@link ZAddArgs} and enabling {@literal CH}. + * + * @return new {@link ZAddArgs} with {@literal CH} enabled. + * @see ZAddArgs#ch() + */ + public static ZAddArgs ch() { + return new ZAddArgs().ch(); + } + } + + /** + * Don't update already existing elements. Always add new elements. + * + * @return {@code this} {@link ZAddArgs}. + */ + public ZAddArgs nx() { + + this.nx = true; + return this; + } + + /** + * Only update elements that already exist. Never add elements. + * + * @return {@code this} {@link ZAddArgs}. + */ + public ZAddArgs xx() { + + this.xx = true; + return this; + } + + /** + * Modify the return value from the number of new elements added, to the total number of elements changed. + * + * @return {@code this} {@link ZAddArgs}. + */ + public ZAddArgs ch() { + + this.ch = true; + return this; + } + + public void build(CommandArgs args) { + + if (nx) { + args.add("NX"); + } + + if (xx) { + args.add("XX"); + } + + if (ch) { + args.add("CH"); + } + } +} diff --git a/src/main/java/io/lettuce/core/ZStoreArgs.java b/src/main/java/io/lettuce/core/ZStoreArgs.java new file mode 100644 index 0000000000..4dac84588d --- /dev/null +++ b/src/main/java/io/lettuce/core/ZStoreArgs.java @@ -0,0 +1,213 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import static io.lettuce.core.protocol.CommandKeyword.*; + +import java.util.ArrayList; +import java.util.List; + +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.protocol.CommandArgs; + +/** + * Argument list builder for the Redis ZUNIONSTORE and + * ZINTERSTORE commands. Static import the methods from {@link Builder} and + * chain the method calls: {@code weights(1, 2).max()}. + * + *
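// Illustrative usage sketch, not part of the diff above: applying the ZAddArgs builder to ZADD.
// The zadd(key, args, score, member) signature and the key/member names are assumptions for the example.
RedisCommands<String, String> commands = connection.sync();
commands.zadd("ranking", ZAddArgs.Builder.nx(), 10.0, "alice");             // add only if "alice" is absent
Long changed = commands.zadd("ranking", ZAddArgs.Builder.ch(), 12.0, "alice"); // CH: report changed elements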
+ * {@link ZAddArgs} is a mutable object and instances should be used only once to avoid shared mutable state. + * + * @author Will Glozer + * @author Xy Ma + * @author Mark Paluch + */ +public class ZStoreArgs implements CompositeArgument { + + private enum Aggregate { + SUM, MIN, MAX + } + + private List weights; + private Aggregate aggregate; + + /** + * Builder entry points for {@link ScanArgs}. + */ + public static class Builder { + + /** + * Utility constructor. + */ + private Builder() { + } + + /** + * Creates new {@link ZStoreArgs} setting {@literal WEIGHTS} using long. + * + * @return new {@link ZAddArgs} with {@literal WEIGHTS} set. + * @see ZStoreArgs#weights(long[]) + * @deprecated use {@link #weights(double...)}. + */ + @Deprecated + public static ZStoreArgs weights(long[] weights) { + return new ZStoreArgs().weights(toDoubleArray(weights)); + } + + /** + * Creates new {@link ZStoreArgs} setting {@literal WEIGHTS}. + * + * @return new {@link ZAddArgs} with {@literal WEIGHTS} set. + * @see ZStoreArgs#weights(double...) + */ + public static ZStoreArgs weights(double... weights) { + return new ZStoreArgs().weights(weights); + } + + /** + * Creates new {@link ZStoreArgs} setting {@literal AGGREGATE SUM}. + * + * @return new {@link ZAddArgs} with {@literal AGGREGATE SUM} set. + * @see ZStoreArgs#sum() + */ + public static ZStoreArgs sum() { + return new ZStoreArgs().sum(); + } + + /** + * Creates new {@link ZStoreArgs} setting {@literal AGGREGATE MIN}. + * + * @return new {@link ZAddArgs} with {@literal AGGREGATE MIN} set. + * @see ZStoreArgs#sum() + */ + public static ZStoreArgs min() { + return new ZStoreArgs().min(); + } + + /** + * Creates new {@link ZStoreArgs} setting {@literal AGGREGATE MAX}. + * + * @return new {@link ZAddArgs} with {@literal AGGREGATE MAX} set. + * @see ZStoreArgs#sum() + */ + public static ZStoreArgs max() { + return new ZStoreArgs().max(); + } + } + + /** + * Specify a multiplication factor for each input sorted set. + * + * @param weights must not be {@literal null}. + * @return {@code this} {@link ZStoreArgs}. + * @deprecated use {@link #weights(double...)} + */ + @Deprecated + public static ZStoreArgs weights(long[] weights) { + + LettuceAssert.notNull(weights, "Weights must not be null"); + + return new ZStoreArgs().weights(toDoubleArray(weights)); + } + + /** + * Specify a multiplication factor for each input sorted set. + * + * @param weights must not be {@literal null}. + * @return {@code this} {@link ZStoreArgs}. + */ + public ZStoreArgs weights(double... weights) { + + LettuceAssert.notNull(weights, "Weights must not be null"); + + this.weights = new ArrayList<>(weights.length); + + for (double weight : weights) { + this.weights.add(weight); + } + return this; + } + + /** + * Aggregate scores of elements existing across multiple sets by summing up. + * + * @return {@code this} {@link ZStoreArgs}. + */ + public ZStoreArgs sum() { + + this.aggregate = Aggregate.SUM; + return this; + } + + /** + * Aggregate scores of elements existing across multiple sets by using the lowest score. + * + * @return {@code this} {@link ZStoreArgs}. + */ + public ZStoreArgs min() { + + this.aggregate = Aggregate.MIN; + return this; + } + + /** + * Aggregate scores of elements existing across multiple sets by using the highest score. + * + * @return {@code this} {@link ZStoreArgs}. 
+ */ + public ZStoreArgs max() { + + this.aggregate = Aggregate.MAX; + return this; + } + + private static double[] toDoubleArray(long[] weights) { + + double result[] = new double[weights.length]; + for (int i = 0; i < weights.length; i++) { + result[i] = weights[i]; + } + return result; + } + + public void build(CommandArgs args) { + + if (weights != null && !weights.isEmpty()) { + + args.add(WEIGHTS); + for (double weight : weights) { + args.add(weight); + } + } + + if (aggregate != null) { + args.add(AGGREGATE); + switch (aggregate) { + case SUM: + args.add(SUM); + break; + case MIN: + args.add(MIN); + break; + case MAX: + args.add(MAX); + break; + default: + throw new IllegalArgumentException("Aggregation " + aggregate + " not supported"); + } + } + } +} diff --git a/src/main/java/io/lettuce/core/api/StatefulConnection.java b/src/main/java/io/lettuce/core/api/StatefulConnection.java new file mode 100644 index 0000000000..5d90c91686 --- /dev/null +++ b/src/main/java/io/lettuce/core/api/StatefulConnection.java @@ -0,0 +1,129 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api; + +import java.time.Duration; +import java.util.Collection; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.TimeUnit; + +import io.lettuce.core.ClientOptions; +import io.lettuce.core.internal.AsyncCloseable; +import io.lettuce.core.protocol.RedisCommand; +import io.lettuce.core.resource.ClientResources; + +/** + * A stateful connection providing command dispatching, timeouts and open/close methods. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.0 + */ +public interface StatefulConnection extends AutoCloseable, AsyncCloseable { + + /** + * Set the default command timeout for this connection. A zero timeout value indicates to not time out. + * + * @param timeout Command timeout. + * @since 5.0 + */ + void setTimeout(Duration timeout); + + /** + * @return the timeout. + */ + Duration getTimeout(); + + /** + * Dispatch a command. Write a command on the channel. The command may be changed/wrapped during write and the written + * instance is returned after the call. This command does not wait until the command completes and does not guarantee + * whether the command is executed successfully. + * + * @param command the Redis command. + * @param result type + * @return the written Redis command. + */ + RedisCommand dispatch(RedisCommand command); + + /** + * Dispatch multiple command in a single write on the channel. The commands may be changed/wrapped during write and the + * written instance is returned after the call. This command does not wait until the command completes and does not + * guarantee whether the command is executed successfully. + * + * @param commands the Redis commands. + * @return the written Redis commands. + * @since 5.0 + */ + Collection> dispatch(Collection> commands); + + /** + * Close the connection. 
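// Illustrative usage sketch, not part of the diff above: combining sorted sets with the ZStoreArgs
// builder completed above. The zunionstore(destination, args, keys...) signature and the key names are
// assumptions for the example.
RedisCommands<String, String> commands = connection.sync();
Long size = commands.zunionstore("scores:combined",
        ZStoreArgs.Builder.weights(2.0, 1.0).max(),                        // WEIGHTS 2 1 AGGREGATE MAX
        "scores:2019", "scores:2020");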
The connection will become not usable anymore as soon as this method was called. + */ + void close(); + + /** + * Request to close the connection and return the {@link CompletableFuture} that is notified about its progress. The + * connection will become not usable anymore as soon as this method was called. + * + * @return a {@link CompletableFuture} that is notified once the operation completes, either because the operation was + * successful or because of an error. + * @since 5.1 + */ + @Override + CompletableFuture closeAsync(); + + /** + * @return true if the connection is open (connected and not closed). + */ + boolean isOpen(); + + /** + * @return the client options valid for this connection. + */ + ClientOptions getOptions(); + + /** + * @return the client resources used for this connection. + */ + ClientResources getResources(); + + /** + * Reset the command state. Queued commands will be canceled and the internal state will be reset. This is useful when the + * internal state machine gets out of sync with the connection (e.g. errors during external SSL tunneling). Calling this + * method will reset the protocol state, therefore it is considered unsafe. + * + * @deprecated since 5.2. This method is unsafe and can cause protocol offsets (i.e. Redis commands are completed with + * previous command values). + */ + @Deprecated + void reset(); + + /** + * Disable or enable auto-flush behavior. Default is {@literal true}. If autoFlushCommands is disabled, multiple commands + * can be issued without writing them actually to the transport. Commands are buffered until a {@link #flushCommands()} is + * issued. After calling {@link #flushCommands()} commands are sent to the transport and executed by Redis. + * + * @param autoFlush state of autoFlush. + */ + void setAutoFlushCommands(boolean autoFlush); + + /** + * Flush pending commands. This commands forces a flush on the channel and can be used to buffer ("pipeline") commands to + * achieve batching. No-op if channel is not connected. + */ + void flushCommands(); +} diff --git a/src/main/java/io/lettuce/core/api/StatefulRedisConnection.java b/src/main/java/io/lettuce/core/api/StatefulRedisConnection.java new file mode 100644 index 0000000000..3e7b58e3f9 --- /dev/null +++ b/src/main/java/io/lettuce/core/api/StatefulRedisConnection.java @@ -0,0 +1,62 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api; + +import io.lettuce.core.api.async.RedisAsyncCommands; +import io.lettuce.core.api.reactive.RedisReactiveCommands; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.protocol.ConnectionWatchdog; + +/** + * A thread-safe connection to a redis server. Multiple threads may share one {@link StatefulRedisConnection}. + * + * A {@link ConnectionWatchdog} monitors each connection and reconnects automatically until {@link #close} is called. All + * pending commands will be (re)sent after successful reconnection. 
+ * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.0 + */ +public interface StatefulRedisConnection extends StatefulConnection { + + /** + * + * @return true, if the connection is within a transaction. + */ + boolean isMulti(); + + /** + * Returns the {@link RedisCommands} API for the current connection. Does not create a new connection. + * + * @return the synchronous API for the underlying connection. + */ + RedisCommands sync(); + + /** + * Returns the {@link RedisAsyncCommands} API for the current connection. Does not create a new connection. + * + * @return the asynchronous API for the underlying connection. + */ + RedisAsyncCommands async(); + + /** + * Returns the {@link RedisReactiveCommands} API for the current connection. Does not create a new connection. + * + * @return the reactive API for the underlying connection. + */ + RedisReactiveCommands reactive(); +} diff --git a/src/main/java/io/lettuce/core/api/async/BaseRedisAsyncCommands.java b/src/main/java/io/lettuce/core/api/async/BaseRedisAsyncCommands.java new file mode 100644 index 0000000000..4ff28a879f --- /dev/null +++ b/src/main/java/io/lettuce/core/api/async/BaseRedisAsyncCommands.java @@ -0,0 +1,177 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api.async; + +import java.util.List; +import java.util.Map; + +import io.lettuce.core.RedisFuture; +import io.lettuce.core.output.CommandOutput; +import io.lettuce.core.protocol.CommandArgs; +import io.lettuce.core.protocol.ProtocolKeyword; + +/** + * Asynchronous executed commands for basic commands. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.0 + * @generated by io.lettuce.apigenerator.CreateAsyncApi + */ +public interface BaseRedisAsyncCommands { + + /** + * Post a message to a channel. + * + * @param channel the channel type: key + * @param message the message type: value + * @return Long integer-reply the number of clients that received the message. + */ + RedisFuture publish(K channel, V message); + + /** + * Lists the currently *active channels*. + * + * @return List<K> array-reply a list of active channels, optionally matching the specified pattern. + */ + RedisFuture> pubsubChannels(); + + /** + * Lists the currently *active channels*. + * + * @param channel the key + * @return List<K> array-reply a list of active channels, optionally matching the specified pattern. + */ + RedisFuture> pubsubChannels(K channel); + + /** + * Returns the number of subscribers (not counting clients subscribed to patterns) for the specified channels. + * + * @param channels channel keys + * @return array-reply a list of channels and number of subscribers for every channel. + */ + RedisFuture> pubsubNumsub(K... channels); + + /** + * Returns the number of subscriptions to patterns. + * + * @return Long integer-reply the number of patterns all the clients are subscribed to. 
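// Illustrative usage sketch, not part of the diff above: obtaining the three API facades from a
// StatefulRedisConnection as documented above. RedisClient.create/connect and the URI are assumptions
// for the example.
RedisClient client = RedisClient.create("redis://localhost:6379");
StatefulRedisConnection<String, String> connection = client.connect();
RedisCommands<String, String> sync = connection.sync();                    // blocking API
RedisAsyncCommands<String, String> async = connection.async();             // RedisFuture-based API
RedisReactiveCommands<String, String> reactive = connection.reactive();    // Reactive Streams API
connection.close();                                                        // or closeAsync() for non-blocking shutdown
client.shutdown();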
+ */ + RedisFuture pubsubNumpat(); + + /** + * Echo the given string. + * + * @param msg the message type: value + * @return V bulk-string-reply + */ + RedisFuture echo(V msg); + + /** + * Return the role of the instance in the context of replication. + * + * @return List<Object> array-reply where the first element is one of master, slave, sentinel and the additional + * elements are role-specific. + */ + RedisFuture> role(); + + /** + * Ping the server. + * + * @return String simple-string-reply + */ + RedisFuture ping(); + + /** + * Switch connection to Read-Only mode when connecting to a cluster. + * + * @return String simple-string-reply. + */ + RedisFuture readOnly(); + + /** + * Switch connection to Read-Write mode (default) when connecting to a cluster. + * + * @return String simple-string-reply. + */ + RedisFuture readWrite(); + + /** + * Instructs Redis to disconnect the connection. Note that if auto-reconnect is enabled then Lettuce will auto-reconnect if + * the connection was disconnected. Use {@link io.lettuce.core.api.StatefulConnection#close} to close connections and + * release resources. + * + * @return String simple-string-reply always OK. + */ + RedisFuture quit(); + + /** + * Wait for replication. + * + * @param replicas minimum number of replicas + * @param timeout timeout in milliseconds + * @return number of replicas + */ + RedisFuture waitForReplication(int replicas, long timeout); + + /** + * Dispatch a command to the Redis Server. Please note the command output type must fit to the command response. + * + * @param type the command, must not be {@literal null}. + * @param output the command output, must not be {@literal null}. + * @param response type + * @return the command response + */ + RedisFuture dispatch(ProtocolKeyword type, CommandOutput output); + + /** + * Dispatch a command to the Redis Server. Please note the command output type must fit to the command response. + * + * @param type the command, must not be {@literal null}. + * @param output the command output, must not be {@literal null}. + * @param args the command arguments, must not be {@literal null}. + * @param response type + * @return the command response + */ + RedisFuture dispatch(ProtocolKeyword type, CommandOutput output, CommandArgs args); + + /** + * @return true if the connection is open (connected and not closed). + */ + boolean isOpen(); + + /** + * Reset the command state. Queued commands will be canceled and the internal state will be reset. This is useful when the + * internal state machine gets out of sync with the connection. + */ + void reset(); + + /** + * Disable or enable auto-flush behavior. Default is {@literal true}. If autoFlushCommands is disabled, multiple commands + * can be issued without writing them actually to the transport. Commands are buffered until a {@link #flushCommands()} is + * issued. After calling {@link #flushCommands()} commands are sent to the transport and executed by Redis. + * + * @param autoFlush state of autoFlush. + */ + void setAutoFlushCommands(boolean autoFlush); + + /** + * Flush pending commands. This commands forces a flush on the channel and can be used to buffer ("pipeline") commands to + * achieve batching. No-op if channel is not connected. 
+ */ + void flushCommands(); +} diff --git a/src/main/java/io/lettuce/core/api/async/RedisAsyncCommands.java b/src/main/java/io/lettuce/core/api/async/RedisAsyncCommands.java new file mode 100644 index 0000000000..0ff16b257f --- /dev/null +++ b/src/main/java/io/lettuce/core/api/async/RedisAsyncCommands.java @@ -0,0 +1,77 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api.async; + +import io.lettuce.core.RedisFuture; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.cluster.api.async.RedisClusterAsyncCommands; + +/** + * A complete asynchronous and thread-safe Redis API with 400+ Methods. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 3.0 + */ +public interface RedisAsyncCommands extends BaseRedisAsyncCommands, RedisClusterAsyncCommands, + RedisGeoAsyncCommands, RedisHashAsyncCommands, RedisHLLAsyncCommands, RedisKeyAsyncCommands, + RedisListAsyncCommands, RedisScriptingAsyncCommands, RedisServerAsyncCommands, + RedisSetAsyncCommands, RedisSortedSetAsyncCommands, RedisStreamAsyncCommands, + RedisStringAsyncCommands, RedisTransactionalAsyncCommands { + + /** + * Authenticate to the server. + * + * @param password the password + * @return String simple-string-reply + */ + RedisFuture auth(CharSequence password); + + /** + * Authenticate to the server with username and password. Requires Redis 6 or newer. + * + * @param username the username + * @param password the password + * @return String simple-string-reply + * @since 6.0 + */ + RedisFuture auth(String username, CharSequence password); + + /** + * Change the selected database for the current connection. + * + * @param db the database number + * @return String simple-string-reply + */ + RedisFuture select(int db); + + /** + * Swap two Redis databases, so that immediately all the clients connected to a given DB will see the data of the other DB, + * and the other way around + * + * @param db1 the first database number + * @param db2 the second database number + * @return String simple-string-reply + */ + RedisFuture swapdb(int db1, int db2); + + /** + * @return the underlying connection. + */ + StatefulRedisConnection getStatefulConnection(); + +} diff --git a/src/main/java/com/lambdaworks/redis/api/async/RedisGeoAsyncCommands.java b/src/main/java/io/lettuce/core/api/async/RedisGeoAsyncCommands.java similarity index 86% rename from src/main/java/com/lambdaworks/redis/api/async/RedisGeoAsyncCommands.java rename to src/main/java/io/lettuce/core/api/async/RedisGeoAsyncCommands.java index cdc9b54bba..0be326d565 100644 --- a/src/main/java/com/lambdaworks/redis/api/async/RedisGeoAsyncCommands.java +++ b/src/main/java/io/lettuce/core/api/async/RedisGeoAsyncCommands.java @@ -1,19 +1,31 @@ -package com.lambdaworks.redis.api.async; +/* + * Copyright 2017-2020 the original author or authors. 
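// Illustrative usage sketch, not part of the diff above: manual command flushing ("pipelining") with the
// setAutoFlushCommands/flushCommands methods declared above. LettuceFutures and the keys used are
// assumptions for the example.
RedisAsyncCommands<String, String> async = connection.async();
async.setAutoFlushCommands(false);                    // buffer commands instead of writing them immediately
RedisFuture<String> set = async.set("key:1", "value");
RedisFuture<String> get = async.get("key:1");
async.flushCommands();                                // write the buffered commands in a single flush
LettuceFutures.awaitAll(5, TimeUnit.SECONDS, set, get);
async.setAutoFlushCommands(true);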
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api.async; -import com.lambdaworks.redis.GeoArgs; -import com.lambdaworks.redis.GeoCoordinates; -import com.lambdaworks.redis.GeoRadiusStoreArgs; -import com.lambdaworks.redis.GeoWithin; import java.util.List; import java.util.Set; -import com.lambdaworks.redis.RedisFuture; + +import io.lettuce.core.*; /** * Asynchronous executed commands for the Geo-API. * * @author Mark Paluch * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateAsyncApi + * @generated by io.lettuce.apigenerator.CreateAsyncApi */ public interface RedisGeoAsyncCommands { @@ -44,7 +56,7 @@ public interface RedisGeoAsyncCommands { * @param members the members * @return bulk reply Geohash strings in the order of {@code members}. Returns {@literal null} if a member is not found. */ - RedisFuture> geohash(K key, V... members); + RedisFuture>> geohash(K key, V... members); /** * Retrieve members selected by distance with the center of {@code longitude} and {@code latitude}. @@ -72,7 +84,7 @@ public interface RedisGeoAsyncCommands { RedisFuture>> georadius(K key, double longitude, double latitude, double distance, GeoArgs.Unit unit, GeoArgs geoArgs); /** - * Perform a {@link #georadius(Object, double, double, double, Unit, GeoArgs)} query and store the results in a sorted set. + * Perform a {@link #georadius(Object, double, double, double, GeoArgs.Unit, GeoArgs)} query and store the results in a sorted set. * * @param key the key of the geo set * @param longitude the longitude coordinate according to WGS84 @@ -98,7 +110,6 @@ public interface RedisGeoAsyncCommands { RedisFuture> georadiusbymember(K key, V member, double distance, GeoArgs.Unit unit); /** - * * Retrieve members selected by distance with the center of {@code member}. The member itself is always contained in the * results. * @@ -112,7 +123,7 @@ public interface RedisGeoAsyncCommands { RedisFuture>> georadiusbymember(K key, V member, double distance, GeoArgs.Unit unit, GeoArgs geoArgs); /** - * Perform a {@link #georadiusbymember(Object, Object, double, Unit, GeoArgs)} query and store the results in a sorted set. + * Perform a {@link #georadiusbymember(Object, Object, double, GeoArgs.Unit, GeoArgs)} query and store the results in a sorted set. * * @param key the key of the geo set * @param member reference member @@ -136,7 +147,6 @@ public interface RedisGeoAsyncCommands { RedisFuture> geopos(K key, V... members); /** - * * Retrieve distance between points {@code from} and {@code to}. If one or more elements are missing {@literal null} is * returned. Default in meters by, otherwise according to {@code unit} * diff --git a/src/main/java/io/lettuce/core/api/async/RedisHLLAsyncCommands.java b/src/main/java/io/lettuce/core/api/async/RedisHLLAsyncCommands.java new file mode 100644 index 0000000000..b93ca3a807 --- /dev/null +++ b/src/main/java/io/lettuce/core/api/async/RedisHLLAsyncCommands.java @@ -0,0 +1,63 @@ +/* + * Copyright 2017-2020 the original author or authors. 
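// Illustrative usage sketch, not part of the diff above: a radius query with the Geo API declared above.
// The geoadd call, the key name and the coordinates are assumptions for the example.
RedisCommands<String, String> commands = connection.sync();
commands.geoadd("offices", 8.6638775, 49.5282537, "weinheim");             // longitude, latitude, member
Set<String> nearby = commands.georadius("offices", 8.66, 49.52, 5, GeoArgs.Unit.km);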
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api.async; + +import io.lettuce.core.RedisFuture; + +/** + * Asynchronous executed commands for HyperLogLog (PF* commands). + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 3.0 + * @generated by io.lettuce.apigenerator.CreateAsyncApi + */ +public interface RedisHLLAsyncCommands { + + /** + * Adds the specified elements to the specified HyperLogLog. + * + * @param key the key + * @param values the values + * + * @return Long integer-reply specifically: + * + * 1 if at least 1 HyperLogLog internal register was altered. 0 otherwise. + */ + RedisFuture pfadd(K key, V... values); + + /** + * Merge N different HyperLogLogs into a single one. + * + * @param destkey the destination key + * @param sourcekeys the source key + * + * @return String simple-string-reply The command just returns {@code OK}. + */ + RedisFuture pfmerge(K destkey, K... sourcekeys); + + /** + * Return the approximated cardinality of the set(s) observed by the HyperLogLog at key(s). + * + * @param keys the keys + * + * @return Long integer-reply specifically: + * + * The approximated number of unique elements observed via {@code PFADD}. + */ + RedisFuture pfcount(K... keys); +} diff --git a/src/main/java/com/lambdaworks/redis/api/async/RedisHashAsyncCommands.java b/src/main/java/io/lettuce/core/api/async/RedisHashAsyncCommands.java similarity index 84% rename from src/main/java/com/lambdaworks/redis/api/async/RedisHashAsyncCommands.java rename to src/main/java/io/lettuce/core/api/async/RedisHashAsyncCommands.java index 295be6ba7c..4f0aa562e1 100644 --- a/src/main/java/com/lambdaworks/redis/api/async/RedisHashAsyncCommands.java +++ b/src/main/java/io/lettuce/core/api/async/RedisHashAsyncCommands.java @@ -1,30 +1,42 @@ -package com.lambdaworks.redis.api.async; +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
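// Illustrative usage sketch, not part of the diff above: the PF* commands declared above, used through
// the asynchronous API. Key and value names are assumptions for the example.
RedisAsyncCommands<String, String> async = connection.async();
async.pfadd("visitors:today", "user-1", "user-2", "user-3");               // register observed elements
RedisFuture<Long> estimate = async.pfcount("visitors:today");              // approximated cardinality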
+ */ +package io.lettuce.core.api.async; import java.util.List; import java.util.Map; -import com.lambdaworks.redis.MapScanCursor; -import com.lambdaworks.redis.ScanArgs; -import com.lambdaworks.redis.ScanCursor; -import com.lambdaworks.redis.StreamScanCursor; -import com.lambdaworks.redis.output.KeyStreamingChannel; -import com.lambdaworks.redis.output.KeyValueStreamingChannel; -import com.lambdaworks.redis.output.ValueStreamingChannel; -import com.lambdaworks.redis.RedisFuture; + +import io.lettuce.core.*; +import io.lettuce.core.output.KeyStreamingChannel; +import io.lettuce.core.output.KeyValueStreamingChannel; +import io.lettuce.core.output.ValueStreamingChannel; /** * Asynchronous executed commands for Hashes (Key-Value pairs). - * + * * @param Key type. * @param Value type. * @author Mark Paluch * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateAsyncApi + * @generated by io.lettuce.apigenerator.CreateAsyncApi */ public interface RedisHashAsyncCommands { /** * Delete one or more hash fields. - * + * * @param key the key * @param fields the field type: key * @return Long integer-reply the number of fields that were removed from the hash, not including specified but non existing @@ -34,11 +46,11 @@ public interface RedisHashAsyncCommands { /** * Determine if a hash field exists. - * + * * @param key the key * @param field the field type: key * @return Boolean integer-reply specifically: - * + * * {@literal true} if the hash contains {@code field}. {@literal false} if the hash does not contain {@code field}, * or {@code key} does not exist. */ @@ -46,7 +58,7 @@ public interface RedisHashAsyncCommands { /** * Get the value of a hash field. - * + * * @param key the key * @param field the field type: key * @return V bulk-string-reply the value associated with {@code field}, or {@literal null} when {@code field} is not present @@ -56,7 +68,7 @@ public interface RedisHashAsyncCommands { /** * Increment the integer value of a hash field by the given number. - * + * * @param key the key * @param field the field type: key * @param amount the increment type: long @@ -66,7 +78,7 @@ public interface RedisHashAsyncCommands { /** * Increment the float value of a hash field by the given amount. - * + * * @param key the key * @param field the field type: key * @param amount the increment type: double @@ -76,7 +88,7 @@ public interface RedisHashAsyncCommands { /** * Get all the fields and values in a hash. - * + * * @param key the key * @return Map<K,V> array-reply list of fields and their values stored in the hash, or an empty list when {@code key} * does not exist. @@ -85,17 +97,17 @@ public interface RedisHashAsyncCommands { /** * Stream over all the fields and values in a hash. - * + * * @param channel the channel * @param key the key - * + * * @return Long count of the keys. */ RedisFuture hgetall(KeyValueStreamingChannel channel, K key); /** * Get all the fields in a hash. - * + * * @param key the key * @return List<K> array-reply list of fields in the hash, or an empty list when {@code key} does not exist. */ @@ -103,17 +115,17 @@ public interface RedisHashAsyncCommands { /** * Stream over all the fields in a hash. - * + * * @param channel the channel * @param key the key - * + * * @return Long count of the keys. */ RedisFuture hkeys(KeyStreamingChannel channel, K key); /** * Get the number of fields in a hash. - * + * * @param key the key * @return Long integer-reply number of fields in the hash, or {@code 0} when {@code key} does not exist. 
*/ @@ -121,27 +133,27 @@ public interface RedisHashAsyncCommands { /** * Get the values of all the given hash fields. - * + * * @param key the key * @param fields the field type: key * @return List<V> array-reply list of values associated with the given fields, in the same */ - RedisFuture> hmget(K key, K... fields); + RedisFuture>> hmget(K key, K... fields); /** * Stream over the values of all the given hash fields. - * + * * @param channel the channel * @param key the key * @param fields the fields - * + * * @return Long count of the keys */ - RedisFuture hmget(ValueStreamingChannel channel, K key, K... fields); + RedisFuture hmget(KeyValueStreamingChannel channel, K key, K... fields); /** * Set multiple hash fields to multiple values. - * + * * @param key the key * @param map the null * @return String simple-string-reply @@ -150,7 +162,7 @@ public interface RedisHashAsyncCommands { /** * Incrementally iterate hash fields and associated values. - * + * * @param key the key * @return MapScanCursor<K, V> map scan cursor. */ @@ -158,7 +170,7 @@ public interface RedisHashAsyncCommands { /** * Incrementally iterate hash fields and associated values. - * + * * @param key the key * @param scanArgs scan arguments * @return MapScanCursor<K, V> map scan cursor. @@ -167,7 +179,7 @@ public interface RedisHashAsyncCommands { /** * Incrementally iterate hash fields and associated values. - * + * * @param key the key * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} * @param scanArgs scan arguments @@ -177,7 +189,7 @@ public interface RedisHashAsyncCommands { /** * Incrementally iterate hash fields and associated values. - * + * * @param key the key * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} * @return MapScanCursor<K, V> map scan cursor. @@ -186,7 +198,7 @@ public interface RedisHashAsyncCommands { /** * Incrementally iterate hash fields and associated values. - * + * * @param channel streaming channel that receives a call for every key-value pair * @param key the key * @return StreamScanCursor scan cursor. @@ -195,7 +207,7 @@ public interface RedisHashAsyncCommands { /** * Incrementally iterate hash fields and associated values. - * + * * @param channel streaming channel that receives a call for every key-value pair * @param key the key * @param scanArgs scan arguments @@ -205,7 +217,7 @@ public interface RedisHashAsyncCommands { /** * Incrementally iterate hash fields and associated values. - * + * * @param channel streaming channel that receives a call for every key-value pair * @param key the key * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} @@ -216,7 +228,7 @@ public interface RedisHashAsyncCommands { /** * Incrementally iterate hash fields and associated values. - * + * * @param channel streaming channel that receives a call for every key-value pair * @param key the key * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} @@ -237,6 +249,16 @@ public interface RedisHashAsyncCommands { */ RedisFuture hset(K key, K field, V value); + /** + * Set multiple hash fields to multiple values. + * + * @param key the key of the hash + * @param map the field/value pairs to update + * @return Long integer-reply: the number of fields that were added. + * @since 5.3 + */ + RedisFuture hset(K key, Map map); + /** * Set the value of a hash field, only if the field does not exist. 
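// Illustrative usage sketch, not part of the diff above: the multi-field hset(key, map) variant added
// above, followed by hgetall. Key and field names are assumptions for the example.
RedisAsyncCommands<String, String> async = connection.async();
Map<String, String> fields = new HashMap<>();
fields.put("name", "lettuce");
fields.put("language", "java");
async.hset("project:1", fields);                                           // returns the number of added fields
RedisFuture<Map<String, String>> all = async.hgetall("project:1");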
* diff --git a/src/main/java/com/lambdaworks/redis/api/async/RedisKeyAsyncCommands.java b/src/main/java/io/lettuce/core/api/async/RedisKeyAsyncCommands.java similarity index 87% rename from src/main/java/com/lambdaworks/redis/api/async/RedisKeyAsyncCommands.java rename to src/main/java/io/lettuce/core/api/async/RedisKeyAsyncCommands.java index 252db0a0c8..8550b6b6a6 100644 --- a/src/main/java/com/lambdaworks/redis/api/async/RedisKeyAsyncCommands.java +++ b/src/main/java/io/lettuce/core/api/async/RedisKeyAsyncCommands.java @@ -1,25 +1,35 @@ -package com.lambdaworks.redis.api.async; +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api.async; import java.util.Date; import java.util.List; -import com.lambdaworks.redis.KeyScanCursor; -import com.lambdaworks.redis.MigrateArgs; -import com.lambdaworks.redis.ScanArgs; -import com.lambdaworks.redis.ScanCursor; -import com.lambdaworks.redis.SortArgs; -import com.lambdaworks.redis.StreamScanCursor; -import com.lambdaworks.redis.output.KeyStreamingChannel; -import com.lambdaworks.redis.output.ValueStreamingChannel; -import com.lambdaworks.redis.RedisFuture; + +import io.lettuce.core.*; +import io.lettuce.core.output.KeyStreamingChannel; +import io.lettuce.core.output.ValueStreamingChannel; /** * Asynchronous executed commands for Keys (Key manipulation/querying). - * + * * @param Key type. * @param Value type. * @author Mark Paluch * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateAsyncApi + * @generated by io.lettuce.apigenerator.CreateAsyncApi */ public interface RedisKeyAsyncCommands { @@ -41,7 +51,7 @@ public interface RedisKeyAsyncCommands { /** * Return a serialized version of the value stored at the specified key. - * + * * @param key the key * @return byte[] bulk-string-reply the serialized value. */ @@ -57,11 +67,11 @@ public interface RedisKeyAsyncCommands { /** * Set a key's time to live in seconds. - * + * * @param key the key * @param seconds the seconds type: long * @return Boolean integer-reply specifically: - * + * * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not * be set. */ @@ -69,11 +79,11 @@ public interface RedisKeyAsyncCommands { /** * Set the expiration for a key as a UNIX timestamp. - * + * * @param key the key * @param timestamp the timestamp type: posix time * @return Boolean integer-reply specifically: - * + * * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not * be set (see: {@code EXPIRE}). */ @@ -81,11 +91,11 @@ public interface RedisKeyAsyncCommands { /** * Set the expiration for a key as a UNIX timestamp. - * + * * @param key the key * @param timestamp the timestamp type: posix time * @return Boolean integer-reply specifically: - * + * * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not * be set (see: {@code EXPIRE}). 
*/ @@ -93,7 +103,7 @@ public interface RedisKeyAsyncCommands { /** * Find all keys matching the given pattern. - * + * * @param pattern the pattern type: patternkey (pattern) * @return List<K> array-reply list of keys matching {@code pattern}. */ @@ -101,7 +111,7 @@ public interface RedisKeyAsyncCommands { /** * Find all keys matching the given pattern. - * + * * @param channel the channel * @param pattern the pattern * @return Long array-reply list of keys matching {@code pattern}. @@ -110,7 +120,7 @@ public interface RedisKeyAsyncCommands { /** * Atomically transfer a key from a Redis instance to another one. - * + * * @param host the host * @param port the port * @param key the key @@ -134,7 +144,7 @@ public interface RedisKeyAsyncCommands { /** * Move a key to another database. - * + * * @param key the key * @param db the db type: long * @return Boolean integer-reply specifically: @@ -143,7 +153,7 @@ public interface RedisKeyAsyncCommands { /** * returns the kind of internal representation used in order to store the value associated with a key. - * + * * @param key the key * @return String */ @@ -152,7 +162,7 @@ public interface RedisKeyAsyncCommands { /** * returns the number of seconds since the object stored at the specified key is idle (not requested by read or write * operations). - * + * * @param key the key * @return number of seconds since the object stored at the specified key is idle. */ @@ -160,7 +170,7 @@ public interface RedisKeyAsyncCommands { /** * returns the number of references of the value associated with the specified key. - * + * * @param key the key * @return Long */ @@ -168,10 +178,10 @@ public interface RedisKeyAsyncCommands { /** * Remove the expiration from a key. - * + * * @param key the key * @return Boolean integer-reply specifically: - * + * * {@literal true} if the timeout was removed. {@literal false} if {@code key} does not exist or does not have an * associated timeout. */ @@ -179,11 +189,11 @@ public interface RedisKeyAsyncCommands { /** * Set a key's time to live in milliseconds. - * + * * @param key the key * @param milliseconds the milliseconds type: long * @return integer-reply, specifically: - * + * * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not * be set. */ @@ -191,11 +201,11 @@ public interface RedisKeyAsyncCommands { /** * Set the expiration for a key as a UNIX timestamp specified in milliseconds. - * + * * @param key the key * @param timestamp the milliseconds-timestamp type: posix time * @return Boolean integer-reply specifically: - * + * * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not * be set (see: {@code EXPIRE}). */ @@ -203,11 +213,11 @@ public interface RedisKeyAsyncCommands { /** * Set the expiration for a key as a UNIX timestamp specified in milliseconds. - * + * * @param key the key * @param timestamp the milliseconds-timestamp type: posix time * @return Boolean integer-reply specifically: - * + * * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not * be set (see: {@code EXPIRE}). */ @@ -215,7 +225,7 @@ public interface RedisKeyAsyncCommands { /** * Get the time to live for a key in milliseconds. - * + * * @param key the key * @return Long integer-reply TTL in milliseconds, or a negative value in order to signal an error (see the description * above). 
@@ -224,14 +234,14 @@ public interface RedisKeyAsyncCommands { /** * Return a random key from the keyspace. - * - * @return V bulk-string-reply the random key, or {@literal null} when the database is empty. + * + * @return K bulk-string-reply the random key, or {@literal null} when the database is empty. */ - RedisFuture randomkey(); + RedisFuture randomkey(); /** * Rename a key. - * + * * @param key the key * @param newKey the newkey type: key * @return String simple-string-reply @@ -240,18 +250,18 @@ public interface RedisKeyAsyncCommands { /** * Rename a key, only if the new key does not exist. - * + * * @param key the key * @param newKey the newkey type: key * @return Boolean integer-reply specifically: - * + * * {@literal true} if {@code key} was renamed to {@code newkey}. {@literal false} if {@code newkey} already exists. */ RedisFuture renamenx(K key, K newKey); /** * Create a key using the provided serialized value, previously obtained using DUMP. - * + * * @param key the key * @param ttl the ttl type: long * @param value the serialized-value type: string @@ -259,9 +269,20 @@ public interface RedisKeyAsyncCommands { */ RedisFuture restore(K key, long ttl, byte[] value); + /** + * Create a key using the provided serialized value, previously obtained using DUMP. + * + * @param key the key + * @param value the serialized-value type: string + * @param args the {@link RestoreArgs}, must not be {@literal null}. + * @return String simple-string-reply The command returns OK on success. + * @since 5.1 + */ + RedisFuture restore(K key, byte[] value, RestoreArgs args); + /** * Sort the elements in a list, set or sorted set. - * + * * @param key the key * @return List<V> array-reply list of sorted elements. */ @@ -269,7 +290,7 @@ public interface RedisKeyAsyncCommands { /** * Sort the elements in a list, set or sorted set. - * + * * @param channel streaming channel that receives a call for every value * @param key the key * @return Long number of values. @@ -278,7 +299,7 @@ public interface RedisKeyAsyncCommands { /** * Sort the elements in a list, set or sorted set. - * + * * @param key the key * @param sortArgs sort arguments * @return List<V> array-reply list of sorted elements. @@ -287,7 +308,7 @@ public interface RedisKeyAsyncCommands { /** * Sort the elements in a list, set or sorted set. - * + * * @param channel streaming channel that receives a call for every value * @param key the key * @param sortArgs sort arguments @@ -297,7 +318,7 @@ public interface RedisKeyAsyncCommands { /** * Sort the elements in a list, set or sorted set. - * + * * @param key the key * @param sortArgs sort arguments * @param destination the destination key to store sort results @@ -307,7 +328,7 @@ public interface RedisKeyAsyncCommands { /** * Touch one or more keys. Touch sets the last accessed time for a key. Non-exsitent keys wont get created. - * + * * @param keys the keys * @return Long integer-reply the number of found keys. */ @@ -323,7 +344,7 @@ public interface RedisKeyAsyncCommands { /** * Determine the type stored at key. - * + * * @param key the key * @return String simple-string-reply type of {@code key}, or {@code none} when {@code key} does not exist. */ @@ -331,14 +352,14 @@ public interface RedisKeyAsyncCommands { /** * Incrementally iterate the keys space. - * + * * @return KeyScanCursor<K> scan cursor. */ RedisFuture> scan(); /** * Incrementally iterate the keys space. - * + * * @param scanArgs scan arguments * @return KeyScanCursor<K> scan cursor. 
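// Illustrative usage sketch, not part of the diff above: cursor-based key iteration with the scan
// methods declared above. The match pattern and limit are assumptions for the example.
RedisCommands<String, String> commands = connection.sync();
KeyScanCursor<String> cursor = commands.scan(ScanArgs.Builder.matches("user:*").limit(100));
cursor.getKeys().forEach(System.out::println);
while (!cursor.isFinished()) {
    cursor = commands.scan(cursor, ScanArgs.Builder.matches("user:*").limit(100));
    cursor.getKeys().forEach(System.out::println);
}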
*/ @@ -346,7 +367,7 @@ public interface RedisKeyAsyncCommands { /** * Incrementally iterate the keys space. - * + * * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} * @param scanArgs scan arguments * @return KeyScanCursor<K> scan cursor. @@ -355,7 +376,7 @@ public interface RedisKeyAsyncCommands { /** * Incrementally iterate the keys space. - * + * * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} * @return KeyScanCursor<K> scan cursor. */ @@ -363,7 +384,7 @@ public interface RedisKeyAsyncCommands { /** * Incrementally iterate the keys space. - * + * * @param channel streaming channel that receives a call for every key * @return StreamScanCursor scan cursor. */ @@ -371,7 +392,7 @@ public interface RedisKeyAsyncCommands { /** * Incrementally iterate the keys space. - * + * * @param channel streaming channel that receives a call for every key * @param scanArgs scan arguments * @return StreamScanCursor scan cursor. @@ -380,7 +401,7 @@ public interface RedisKeyAsyncCommands { /** * Incrementally iterate the keys space. - * + * * @param channel streaming channel that receives a call for every key * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} * @param scanArgs scan arguments @@ -390,7 +411,7 @@ public interface RedisKeyAsyncCommands { /** * Incrementally iterate the keys space. - * + * * @param channel streaming channel that receives a call for every key * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} * @return StreamScanCursor scan cursor. diff --git a/src/main/java/com/lambdaworks/redis/api/async/RedisListAsyncCommands.java b/src/main/java/io/lettuce/core/api/async/RedisListAsyncCommands.java similarity index 85% rename from src/main/java/com/lambdaworks/redis/api/async/RedisListAsyncCommands.java rename to src/main/java/io/lettuce/core/api/async/RedisListAsyncCommands.java index 5627680465..1b77c9dc7c 100644 --- a/src/main/java/com/lambdaworks/redis/api/async/RedisListAsyncCommands.java +++ b/src/main/java/io/lettuce/core/api/async/RedisListAsyncCommands.java @@ -1,28 +1,44 @@ -package com.lambdaworks.redis.api.async; +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api.async; import java.util.List; -import com.lambdaworks.redis.KeyValue; -import com.lambdaworks.redis.output.ValueStreamingChannel; -import com.lambdaworks.redis.RedisFuture; + +import io.lettuce.core.KeyValue; +import io.lettuce.core.RedisFuture; +import io.lettuce.core.output.ValueStreamingChannel; /** * Asynchronous executed commands for Lists. - * + * * @param Key type. * @param Value type. * @author Mark Paluch * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateAsyncApi + * @generated by io.lettuce.apigenerator.CreateAsyncApi */ public interface RedisListAsyncCommands { /** * Remove and get the first element in a list, or block until one is available. 
- * + * * @param timeout the timeout in seconds * @param keys the keys * @return KeyValue<K,V> array-reply specifically: - * + * * A {@literal null} multi-bulk when no element could be popped and the timeout expired. A two-element multi-bulk * with the first element being the name of the key where an element was popped and the second element being the * value of the popped element. @@ -31,11 +47,11 @@ public interface RedisListAsyncCommands { /** * Remove and get the last element in a list, or block until one is available. - * + * * @param timeout the timeout in seconds * @param keys the keys * @return KeyValue<K,V> array-reply specifically: - * + * * A {@literal null} multi-bulk when no element could be popped and the timeout expired. A two-element multi-bulk * with the first element being the name of the key where an element was popped and the second element being the * value of the popped element. @@ -44,7 +60,7 @@ public interface RedisListAsyncCommands { /** * Pop a value from a list, push it to another list and return it; or block until one is available. - * + * * @param timeout the timeout in seconds * @param source the source key * @param destination the destination type: key @@ -55,7 +71,7 @@ public interface RedisListAsyncCommands { /** * Get an element from a list by its index. - * + * * @param key the key * @param index the index type: long * @return V bulk-string-reply the requested element, or {@literal null} when {@code index} is out of range. @@ -64,7 +80,7 @@ public interface RedisListAsyncCommands { /** * Insert an element before or after another element in a list. - * + * * @param key the key * @param before the before * @param pivot the pivot @@ -76,7 +92,7 @@ public interface RedisListAsyncCommands { /** * Get the length of a list. - * + * * @param key the key * @return Long integer-reply the length of the list at {@code key}. */ @@ -84,7 +100,7 @@ public interface RedisListAsyncCommands { /** * Remove and get the first element in a list. - * + * * @param key the key * @return V bulk-string-reply the value of the first element, or {@literal null} when {@code key} does not exist. */ @@ -92,23 +108,13 @@ public interface RedisListAsyncCommands { /** * Prepend one or multiple values to a list. - * + * * @param key the key * @param values the value * @return Long integer-reply the length of the list after the push operations. */ RedisFuture lpush(K key, V... values); - /** - * Prepend a value to a list, only if the list exists. - * - * @param key the key - * @param value the value - * @return Long integer-reply the length of the list after the push operation. - * @deprecated Use {@link #lpushx(Object, Object[])} - */ - RedisFuture lpushx(K key, V value); - /** * Prepend values to a list, only if the list exists. * @@ -120,7 +126,7 @@ public interface RedisListAsyncCommands { /** * Get a range of elements from a list. - * + * * @param key the key * @param start the start type: long * @param stop the stop type: long @@ -130,7 +136,7 @@ public interface RedisListAsyncCommands { /** * Get a range of elements from a list. - * + * * @param channel the channel * @param key the key * @param start the start type: long @@ -141,7 +147,7 @@ public interface RedisListAsyncCommands { /** * Remove elements from a list. - * + * * @param key the key * @param count the count type: long * @param value the value @@ -151,7 +157,7 @@ public interface RedisListAsyncCommands { /** * Set the value of an element in a list by its index. 
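// Illustrative usage sketch, not part of the diff above: a simple producer/consumer over a list using
// the commands declared above. Key names and the timeout are assumptions for the example.
RedisCommands<String, String> commands = connection.sync();
commands.rpush("jobs", "job-1", "job-2");                                  // append to the tail of the list
KeyValue<String, String> job = commands.blpop(1, "jobs");                  // block up to 1 second for the head element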
- * + * * @param key the key * @param index the index type: long * @param value the value @@ -161,7 +167,7 @@ public interface RedisListAsyncCommands { /** * Trim a list to the specified range. - * + * * @param key the key * @param start the start type: long * @param stop the stop type: long @@ -171,7 +177,7 @@ public interface RedisListAsyncCommands { /** * Remove and get the last element in a list. - * + * * @param key the key * @return V bulk-string-reply the value of the last element, or {@literal null} when {@code key} does not exist. */ @@ -179,7 +185,7 @@ public interface RedisListAsyncCommands { /** * Remove the last element in a list, append it to another list and return it. - * + * * @param source the source key * @param destination the destination type: key * @return V bulk-string-reply the element being popped and pushed. @@ -188,23 +194,13 @@ public interface RedisListAsyncCommands { /** * Append one or multiple values to a list. - * + * * @param key the key * @param values the value * @return Long integer-reply the length of the list after the push operation. */ RedisFuture rpush(K key, V... values); - /** - * Append a value to a list, only if the list exists. - * - * @param key the key - * @param value the value - * @return Long integer-reply the length of the list after the push operation. - * @deprecated Use {@link #rpushx(java.lang.Object, java.lang.Object[])} - */ - RedisFuture rpushx(K key, V value); - /** * Append values to a list, only if the list exists. * diff --git a/src/main/java/io/lettuce/core/api/async/RedisScriptingAsyncCommands.java b/src/main/java/io/lettuce/core/api/async/RedisScriptingAsyncCommands.java new file mode 100644 index 0000000000..33cb40c596 --- /dev/null +++ b/src/main/java/io/lettuce/core/api/async/RedisScriptingAsyncCommands.java @@ -0,0 +1,165 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api.async; + +import java.util.List; + +import io.lettuce.core.RedisFuture; +import io.lettuce.core.ScriptOutputType; + +/** + * Asynchronous executed commands for Scripting. {@link java.lang.String Lua scripts} are encoded by using the configured + * {@link io.lettuce.core.ClientOptions#getScriptCharset() charset}. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.0 + * @generated by io.lettuce.apigenerator.CreateAsyncApi + */ +public interface RedisScriptingAsyncCommands { + + /** + * Execute a Lua script server side. + * + * @param script Lua 5.1 script. + * @param type output type + * @param keys key names + * @param expected return type + * @return script result + */ + RedisFuture eval(String script, ScriptOutputType type, K... keys); + + /** + * Execute a Lua script server side. + * + * @param script Lua 5.1 script. + * @param type output type + * @param keys key names + * @param expected return type + * @return script result + * @since 6.0 + */ + RedisFuture eval(byte[] script, ScriptOutputType type, K... 
keys); + + /** + * Execute a Lua script server side. + * + * @param script Lua 5.1 script. + * @param type the type + * @param keys the keys + * @param values the values + * @param expected return type + * @return script result + */ + RedisFuture eval(String script, ScriptOutputType type, K[] keys, V... values); + + /** + * Execute a Lua script server side. + * + * @param script Lua 5.1 script. + * @param type the type + * @param keys the keys + * @param values the values + * @param expected return type + * @return script result + * @since 6.0 + */ + RedisFuture eval(byte[] script, ScriptOutputType type, K[] keys, V... values); + + /** + * Evaluates a script cached on the server side by its SHA1 digest + * + * @param digest SHA1 of the script + * @param type the type + * @param keys the keys + * @param expected return type + * @return script result + */ + RedisFuture evalsha(String digest, ScriptOutputType type, K... keys); + + /** + * Execute a Lua script server side. + * + * @param digest SHA1 of the script + * @param type the type + * @param keys the keys + * @param values the values + * @param expected return type + * @return script result + */ + RedisFuture evalsha(String digest, ScriptOutputType type, K[] keys, V... values); + + /** + * Check existence of scripts in the script cache. + * + * @param digests script digests + * @return List<Boolean> array-reply The command returns an array of integers that correspond to the specified SHA1 + * digest arguments. For every corresponding SHA1 digest of a script that actually exists in the script cache, an 1 + * is returned, otherwise 0 is returned. + */ + RedisFuture> scriptExists(String... digests); + + /** + * Remove all the scripts from the script cache. + * + * @return String simple-string-reply + */ + RedisFuture scriptFlush(); + + /** + * Kill the script currently in execution. + * + * @return String simple-string-reply + */ + RedisFuture scriptKill(); + + /** + * Load the specified Lua script into the script cache. + * + * @param script script content + * @return String bulk-string-reply This command returns the SHA1 digest of the script added into the script cache. + * @since 6.0 + */ + RedisFuture scriptLoad(String script); + + /** + * Load the specified Lua script into the script cache. + * + * @param script script content + * @return String bulk-string-reply This command returns the SHA1 digest of the script added into the script cache. + * @since 6.0 + */ + RedisFuture scriptLoad(byte[] script); + + /** + * Create a SHA1 digest from a Lua script. + * + * @param script script content + * @return the SHA1 value + * @since 6.0 + */ + String digest(String script); + + /** + * Create a SHA1 digest from a Lua script. + * + * @param script script content + * @return the SHA1 value + * @since 6.0 + */ + String digest(byte[] script); +} diff --git a/src/main/java/com/lambdaworks/redis/api/async/RedisServerAsyncCommands.java b/src/main/java/io/lettuce/core/api/async/RedisServerAsyncCommands.java similarity index 80% rename from src/main/java/com/lambdaworks/redis/api/async/RedisServerAsyncCommands.java rename to src/main/java/io/lettuce/core/api/async/RedisServerAsyncCommands.java index 9cfd71cedc..3e790a4d6f 100644 --- a/src/main/java/com/lambdaworks/redis/api/async/RedisServerAsyncCommands.java +++ b/src/main/java/io/lettuce/core/api/async/RedisServerAsyncCommands.java @@ -1,46 +1,64 @@ -package com.lambdaworks.redis.api.async; +/* + * Copyright 2017-2020 the original author or authors. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api.async; import java.util.Date; import java.util.List; -import com.lambdaworks.redis.KillArgs; -import com.lambdaworks.redis.protocol.CommandType; -import com.lambdaworks.redis.RedisFuture; +import java.util.Map; + +import io.lettuce.core.KillArgs; +import io.lettuce.core.RedisFuture; +import io.lettuce.core.UnblockType; +import io.lettuce.core.protocol.CommandType; /** * Asynchronous executed commands for Server Control. - * + * * @param Key type. * @param Value type. * @author Mark Paluch * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateAsyncApi + * @generated by io.lettuce.apigenerator.CreateAsyncApi */ public interface RedisServerAsyncCommands { /** * Asynchronously rewrite the append-only file. - * + * * @return String simple-string-reply always {@code OK}. */ RedisFuture bgrewriteaof(); /** * Asynchronously save the dataset to disk. - * + * * @return String simple-string-reply */ RedisFuture bgsave(); /** * Get the current connection name. - * + * * @return K bulk-string-reply The connection name, or a null bulk reply if no name is set. */ RedisFuture clientGetname(); /** * Set the current connection name. - * + * * @param name the client name * @return simple-string-reply {@code OK} if the connection name was successfully set. */ @@ -48,7 +66,7 @@ public interface RedisServerAsyncCommands { /** * Kill the connection of a client identified by ip:port. - * + * * @param addr ip:port * @return String simple-string-reply {@code OK} if the connection exists and has been closed */ @@ -62,9 +80,19 @@ public interface RedisServerAsyncCommands { */ RedisFuture clientKill(KillArgs killArgs); + /** + * Unblock the specified blocked client. + * + * @param id the client id. + * @param type unblock type. + * @return Long integer-reply number of unblocked connections. + * @since 5.1 + */ + RedisFuture clientUnblock(long id, UnblockType type); + /** * Stop processing commands from clients for some time. - * + * * @param timeout the timeout value in milliseconds * @return String simple-string-reply The command returns OK or an error if the timeout is invalid. */ @@ -72,22 +100,30 @@ public interface RedisServerAsyncCommands { /** * Get the list of client connections. - * + * * @return String bulk-string-reply a unique string, formatted as follows: One client connection per line (separated by LF), * each line is composed of a succession of property=value fields separated by a space character. */ RedisFuture clientList(); + /** + * Get the id of the current connection. + * + * @return Long The command just returns the ID of the current connection. + * @since 5.3 + */ + RedisFuture clientId(); + /** * Returns an array reply of details about all Redis commands. - * + * * @return List<Object> array-reply */ RedisFuture> command(); /** * Returns an array reply of details about the requested commands. 
- * + * * @param commands the commands to query for * @return List<Object> array-reply */ @@ -95,7 +131,7 @@ public interface RedisServerAsyncCommands { /** * Returns an array reply of details about the requested commands. - * + * * @param commands the commands to query for * @return List<Object> array-reply */ @@ -103,29 +139,29 @@ public interface RedisServerAsyncCommands { /** * Get total number of Redis commands. - * + * * @return Long integer-reply of number of total commands in this Redis server. */ RedisFuture commandCount(); /** * Get the value of a configuration parameter. - * + * * @param parameter name of the parameter - * @return List<String> bulk-string-reply + * @return Map<String, String> bulk-string-reply */ - RedisFuture> configGet(String parameter); + RedisFuture> configGet(String parameter); /** * Reset the stats returned by INFO. - * + * * @return String simple-string-reply always {@code OK}. */ RedisFuture configResetstat(); /** * Rewrite the configuration file with the in memory configuration. - * + * * @return String simple-string-reply {@code OK} when the configuration was rewritten properly. Otherwise an error is * returned. */ @@ -133,7 +169,7 @@ public interface RedisServerAsyncCommands { /** * Set a configuration parameter to the given value. - * + * * @param parameter the parameter name * @param value the parameter value * @return String simple-string-reply: {@code OK} when the configuration was set properly. Otherwise an error is returned. @@ -142,13 +178,14 @@ public interface RedisServerAsyncCommands { /** * Return the number of keys in the selected database. - * + * * @return Long integer-reply */ RedisFuture dbsize(); /** * Crash and recover + * * @param delay optional delay in milliseconds * @return String simple-string-reply */ @@ -164,7 +201,7 @@ public interface RedisServerAsyncCommands { /** * Get debugging information about a key. - * + * * @param key the key * @return String simple-string-reply */ @@ -172,13 +209,11 @@ public interface RedisServerAsyncCommands { /** * Make the server crash: Out of memory. - * */ void debugOom(); /** * Make the server crash: Invalid pointer access. - * */ void debugSegfault(); @@ -191,6 +226,7 @@ public interface RedisServerAsyncCommands { /** * Restart the server gracefully. + * * @param delay optional delay in milliseconds * @return String simple-string-reply */ @@ -206,7 +242,7 @@ public interface RedisServerAsyncCommands { /** * Remove all keys from all databases. - * + * * @return String simple-string-reply */ RedisFuture flushall(); @@ -220,7 +256,7 @@ public interface RedisServerAsyncCommands { /** * Remove all keys from the current database. - * + * * @return String simple-string-reply */ RedisFuture flushdb(); @@ -234,14 +270,14 @@ public interface RedisServerAsyncCommands { /** * Get information and statistics about the server. - * + * * @return String bulk-string-reply as a collection of text lines. */ RedisFuture info(); /** * Get information and statistics about the server. - * + * * @param section the section type: string * @return String bulk-string-reply as a collection of text lines. */ @@ -249,28 +285,36 @@ public interface RedisServerAsyncCommands { /** * Get the UNIX time stamp of the last successful save to disk. - * + * * @return Date integer-reply an UNIX time stamp. */ RedisFuture lastsave(); + /** + * Reports the number of bytes that a key and its value require to be stored in RAM. + * + * @return memory usage in bytes. 
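+ * @param key the key.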
+ * @since 5.2 + */ + RedisFuture memoryUsage(K key); + /** * Synchronously save the dataset to disk. - * + * * @return String simple-string-reply The commands returns OK on success. */ RedisFuture save(); /** * Synchronously save the dataset to disk and then shut down the server. - * + * * @param save {@literal true} force save operation */ void shutdown(boolean save); /** - * Make the server a slave of another instance, or promote it as master. - * + * Make the server a replica of another instance, or promote it as master. + * * @param host the host type: string * @param port the port type: string * @return String simple-string-reply @@ -279,21 +323,21 @@ public interface RedisServerAsyncCommands { /** * Promote server as master. - * + * * @return String simple-string-reply */ RedisFuture slaveofNoOne(); /** * Read the slow log. - * + * * @return List<Object> deeply nested multi bulk replies */ RedisFuture> slowlogGet(); /** * Read the slow log. - * + * * @param count the count * @return List<Object> deeply nested multi bulk replies */ @@ -301,33 +345,25 @@ public interface RedisServerAsyncCommands { /** * Obtaining the current length of the slow log. - * + * * @return Long length of the slow log. */ RedisFuture slowlogLen(); /** * Resetting the slow log. - * + * * @return String simple-string-reply The commands returns OK on success. */ RedisFuture slowlogReset(); - /** - * Internal command used for replication. - * - * @return String simple-string-reply - */ - @Deprecated - RedisFuture sync(); - /** * Return the current server time. - * + * * @return List<V> array-reply specifically: - * + * * A multi bulk reply containing two elements: - * + * * unix time in seconds. microseconds. */ RedisFuture> time(); diff --git a/src/main/java/com/lambdaworks/redis/api/async/RedisSetAsyncCommands.java b/src/main/java/io/lettuce/core/api/async/RedisSetAsyncCommands.java similarity index 89% rename from src/main/java/com/lambdaworks/redis/api/async/RedisSetAsyncCommands.java rename to src/main/java/io/lettuce/core/api/async/RedisSetAsyncCommands.java index 5f3125ad92..fa1d00e896 100644 --- a/src/main/java/com/lambdaworks/redis/api/async/RedisSetAsyncCommands.java +++ b/src/main/java/io/lettuce/core/api/async/RedisSetAsyncCommands.java @@ -1,28 +1,40 @@ -package com.lambdaworks.redis.api.async; +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api.async; import java.util.List; import java.util.Set; -import com.lambdaworks.redis.ScanArgs; -import com.lambdaworks.redis.ScanCursor; -import com.lambdaworks.redis.StreamScanCursor; -import com.lambdaworks.redis.ValueScanCursor; -import com.lambdaworks.redis.output.ValueStreamingChannel; -import com.lambdaworks.redis.RedisFuture; + +import io.lettuce.core.*; +import io.lettuce.core.output.ValueStreamingChannel; /** * Asynchronous executed commands for Sets. - * + * * @param Key type. * @param Value type. 
* @author Mark Paluch * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateAsyncApi + * @generated by io.lettuce.apigenerator.CreateAsyncApi */ public interface RedisSetAsyncCommands { /** * Add one or more members to a set. - * + * * @param key the key * @param members the member type: value * @return Long integer-reply the number of elements that were added to the set, not including all the elements already @@ -32,7 +44,7 @@ public interface RedisSetAsyncCommands { /** * Get the number of members in a set. - * + * * @param key the key * @return Long integer-reply the cardinality (number of elements) of the set, or {@literal false} if {@code key} does not * exist. @@ -41,7 +53,7 @@ public interface RedisSetAsyncCommands { /** * Subtract multiple sets. - * + * * @param keys the key * @return Set<V> array-reply list with members of the resulting set. */ @@ -49,7 +61,7 @@ public interface RedisSetAsyncCommands { /** * Subtract multiple sets. - * + * * @param channel the channel * @param keys the keys * @return Long count of members of the resulting set. @@ -58,7 +70,7 @@ public interface RedisSetAsyncCommands { /** * Subtract multiple sets and store the resulting set in a key. - * + * * @param destination the destination type: key * @param keys the key * @return Long integer-reply the number of elements in the resulting set. @@ -67,7 +79,7 @@ public interface RedisSetAsyncCommands { /** * Intersect multiple sets. - * + * * @param keys the key * @return Set<V> array-reply list with members of the resulting set. */ @@ -75,7 +87,7 @@ public interface RedisSetAsyncCommands { /** * Intersect multiple sets. - * + * * @param channel the channel * @param keys the keys * @return Long count of members of the resulting set. @@ -84,7 +96,7 @@ public interface RedisSetAsyncCommands { /** * Intersect multiple sets and store the resulting set in a key. - * + * * @param destination the destination type: key * @param keys the key * @return Long integer-reply the number of elements in the resulting set. @@ -93,11 +105,11 @@ public interface RedisSetAsyncCommands { /** * Determine if a given value is a member of a set. - * + * * @param key the key * @param member the member type: value * @return Boolean integer-reply specifically: - * + * * {@literal true} if the element is a member of the set. {@literal false} if the element is not a member of the * set, or if {@code key} does not exist. */ @@ -105,12 +117,12 @@ public interface RedisSetAsyncCommands { /** * Move a member from one set to another. - * + * * @param source the source key * @param destination the destination type: key * @param member the member type: value * @return Boolean integer-reply specifically: - * + * * {@literal true} if the element is moved. {@literal false} if the element is not a member of {@code source} and no * operation was performed. */ @@ -118,7 +130,7 @@ public interface RedisSetAsyncCommands { /** * Get all the members in a set. - * + * * @param key the key * @return Set<V> array-reply all elements of the set. */ @@ -126,7 +138,7 @@ public interface RedisSetAsyncCommands { /** * Get all the members in a set. - * + * * @param channel the channel * @param key the keys * @return Long count of members of the resulting set. @@ -135,7 +147,7 @@ public interface RedisSetAsyncCommands { /** * Remove and return a random member from a set. - * + * * @param key the key * @return V bulk-string-reply the removed element, or {@literal null} when {@code key} does not exist. 
*/ @@ -152,9 +164,9 @@ public interface RedisSetAsyncCommands { /** * Get one random member from a set. - * + * * @param key the key - * + * * @return V bulk-string-reply without the additional {@code count} argument the command returns a Bulk Reply with the * randomly selected element, or {@literal null} when {@code key} does not exist. */ @@ -162,7 +174,7 @@ public interface RedisSetAsyncCommands { /** * Get one or multiple random members from a set. - * + * * @param key the key * @param count the count type: long * @return Set<V> bulk-string-reply without the additional {@code count} argument the command returns a Bulk Reply @@ -172,7 +184,7 @@ public interface RedisSetAsyncCommands { /** * Get one or multiple random members from a set. - * + * * @param channel streaming channel that receives a call for every value * @param key the key * @param count the count @@ -182,7 +194,7 @@ public interface RedisSetAsyncCommands { /** * Remove one or more members from a set. - * + * * @param key the key * @param members the member type: value * @return Long integer-reply the number of members that were removed from the set, not including non existing members. @@ -191,7 +203,7 @@ public interface RedisSetAsyncCommands { /** * Add multiple sets. - * + * * @param keys the key * @return Set<V> array-reply list with members of the resulting set. */ @@ -199,7 +211,7 @@ public interface RedisSetAsyncCommands { /** * Add multiple sets. - * + * * @param channel streaming channel that receives a call for every value * @param keys the keys * @return Long count of members of the resulting set. @@ -208,7 +220,7 @@ public interface RedisSetAsyncCommands { /** * Add multiple sets and store the resulting set in a key. - * + * * @param destination the destination type: key * @param keys the key * @return Long integer-reply the number of elements in the resulting set. @@ -217,7 +229,7 @@ public interface RedisSetAsyncCommands { /** * Incrementally iterate Set elements. - * + * * @param key the key * @return ValueScanCursor<V> scan cursor. */ @@ -225,7 +237,7 @@ public interface RedisSetAsyncCommands { /** * Incrementally iterate Set elements. - * + * * @param key the key * @param scanArgs scan arguments * @return ValueScanCursor<V> scan cursor. @@ -234,7 +246,7 @@ public interface RedisSetAsyncCommands { /** * Incrementally iterate Set elements. - * + * * @param key the key * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} * @param scanArgs scan arguments @@ -244,7 +256,7 @@ public interface RedisSetAsyncCommands { /** * Incrementally iterate Set elements. - * + * * @param key the key * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} * @return ValueScanCursor<V> scan cursor. @@ -253,7 +265,7 @@ public interface RedisSetAsyncCommands { /** * Incrementally iterate Set elements. - * + * * @param channel streaming channel that receives a call for every value * @param key the key * @return StreamScanCursor scan cursor. @@ -262,7 +274,7 @@ public interface RedisSetAsyncCommands { /** * Incrementally iterate Set elements. - * + * * @param channel streaming channel that receives a call for every value * @param key the key * @param scanArgs scan arguments @@ -272,7 +284,7 @@ public interface RedisSetAsyncCommands { /** * Incrementally iterate Set elements. 
- * + * * @param channel streaming channel that receives a call for every value * @param key the key * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} @@ -283,7 +295,7 @@ public interface RedisSetAsyncCommands { /** * Incrementally iterate Set elements. - * + * * @param channel streaming channel that receives a call for every value * @param key the key * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} diff --git a/src/main/java/io/lettuce/core/api/async/RedisSortedSetAsyncCommands.java b/src/main/java/io/lettuce/core/api/async/RedisSortedSetAsyncCommands.java new file mode 100644 index 0000000000..ee7a6e7641 --- /dev/null +++ b/src/main/java/io/lettuce/core/api/async/RedisSortedSetAsyncCommands.java @@ -0,0 +1,1251 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api.async; + +import java.util.List; + +import io.lettuce.core.*; +import io.lettuce.core.output.ScoredValueStreamingChannel; +import io.lettuce.core.output.ValueStreamingChannel; + +/** + * Asynchronous executed commands for Sorted Sets. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.0 + * @generated by io.lettuce.apigenerator.CreateAsyncApi + */ +public interface RedisSortedSetAsyncCommands { + + /** + * Removes and returns a member with the lowest scores in the sorted set stored at one of the keys. + * + * @param timeout the timeout in seconds. + * @param keys the keys. + * @return KeyValue<K, ScoredValue<V>> multi-bulk containing the name of the key, the score and the popped member. + * @since 5.1 + */ + RedisFuture>> bzpopmin(long timeout, K... keys); + + /** + * Removes and returns a member with the highest scores in the sorted set stored at one of the keys. + * + * @param timeout the timeout in seconds. + * @param keys the keys. + * @return KeyValue<K, ScoredValue<V>> multi-bulk containing the name of the key, the score and the popped member. + * @since 5.1 + */ + RedisFuture>> bzpopmax(long timeout, K... keys); + + /** + * Add one or more members to a sorted set, or update its score if it already exists. + * + * @param key the key + * @param score the score + * @param member the member + * + * @return Long integer-reply specifically: + * + * The number of elements added to the sorted sets, not including elements already existing for which the score was + * updated. + */ + RedisFuture zadd(K key, double score, V member); + + /** + * Add one or more members to a sorted set, or update its score if it already exists. + * + * @param key the key + * @param scoresAndValues the scoresAndValue tuples (score,value,score,value,...) + * @return Long integer-reply specifically: + * + * The number of elements added to the sorted sets, not including elements already existing for which the score was + * updated. + */ + RedisFuture zadd(K key, Object... 
scoresAndValues); + + /** + * Add one or more members to a sorted set, or update its score if it already exists. + * + * @param key the key + * @param scoredValues the scored values + * @return Long integer-reply specifically: + * + * The number of elements added to the sorted sets, not including elements already existing for which the score was + * updated. + */ + RedisFuture zadd(K key, ScoredValue... scoredValues); + + /** + * Add one or more members to a sorted set, or update its score if it already exists. + * + * @param key the key + * @param zAddArgs arguments for zadd + * @param score the score + * @param member the member + * + * @return Long integer-reply specifically: + * + * The number of elements added to the sorted sets, not including elements already existing for which the score was + * updated. + */ + RedisFuture zadd(K key, ZAddArgs zAddArgs, double score, V member); + + /** + * Add one or more members to a sorted set, or update its score if it already exists. + * + * @param key the key + * @param zAddArgs arguments for zadd + * @param scoresAndValues the scoresAndValue tuples (score,value,score,value,...) + * @return Long integer-reply specifically: + * + * The number of elements added to the sorted sets, not including elements already existing for which the score was + * updated. + */ + RedisFuture zadd(K key, ZAddArgs zAddArgs, Object... scoresAndValues); + + /** + * Add one or more members to a sorted set, or update its score if it already exists. + * + * @param key the key + * @param zAddArgs arguments for zadd + * @param scoredValues the scored values + * @return Long integer-reply specifically: + * + * The number of elements added to the sorted sets, not including elements already existing for which the score was + * updated. + */ + RedisFuture zadd(K key, ZAddArgs zAddArgs, ScoredValue... scoredValues); + + /** + * Add one or more members to a sorted set, or update its score if it already exists applying the {@code INCR} option. ZADD + * acts like ZINCRBY. + * + * @param key the key + * @param score the score + * @param member the member + * @return Long integer-reply specifically: The total number of elements changed + */ + RedisFuture zaddincr(K key, double score, V member); + + /** + * Add one or more members to a sorted set, or update its score if it already exists applying the {@code INCR} option. ZADD + * acts like ZINCRBY. + * + * @param key the key + * @param zAddArgs arguments for zadd + * @param score the score + * @param member the member + * @return Long integer-reply specifically: The total number of elements changed + * @since 4.3 + */ + RedisFuture zaddincr(K key, ZAddArgs zAddArgs, double score, V member); + + /** + * Get the number of members in a sorted set. + * + * @param key the key + * @return Long integer-reply the cardinality (number of elements) of the sorted set, or {@literal false} if {@code key} + * does not exist. + */ + RedisFuture zcard(K key); + + /** + * Count the members in a sorted set with scores within the given values. + * + * @param key the key + * @param min min score + * @param max max score + * @return Long integer-reply the number of elements in the specified score range. + * @deprecated Use {@link #zcount(java.lang.Object, Range)} + */ + @Deprecated + RedisFuture zcount(K key, double min, double max); + + /** + * Count the members in a sorted set with scores within the given values. 
+ * + * @param key the key + * @param min min score + * @param max max score + * @return Long integer-reply the number of elements in the specified score range. + * @deprecated Use {@link #zcount(java.lang.Object, Range)} + */ + @Deprecated + RedisFuture zcount(K key, String min, String max); + + /** + * Count the members in a sorted set with scores within the given {@link Range}. + * + * @param key the key + * @param range the range + * @return Long integer-reply the number of elements in the specified score range. + * @since 4.3 + */ + RedisFuture zcount(K key, Range range); + + /** + * Increment the score of a member in a sorted set. + * + * @param key the key + * @param amount the increment type: long + * @param member the member type: value + * @return Double bulk-string-reply the new score of {@code member} (a double precision floating point number), represented + * as string. + */ + RedisFuture zincrby(K key, double amount, V member); + + /** + * Intersect multiple sorted sets and store the resulting sorted set in a new key. + * + * @param destination the destination + * @param keys the keys + * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. + */ + RedisFuture zinterstore(K destination, K... keys); + + /** + * Intersect multiple sorted sets and store the resulting sorted set in a new key. + * + * @param destination the destination + * @param storeArgs the storeArgs + * @param keys the keys + * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. + */ + RedisFuture zinterstore(K destination, ZStoreArgs storeArgs, K... keys); + + /** + * Count the number of members in a sorted set between a given lexicographical range. + * + * @param key the key + * @param min min score + * @param max max score + * @return Long integer-reply the number of elements in the specified score range. + * @deprecated Use {@link #zlexcount(java.lang.Object, Range)} + */ + @Deprecated + RedisFuture zlexcount(K key, String min, String max); + + /** + * Count the number of members in a sorted set between a given lexicographical range. + * + * @param key the key + * @param range the range + * @return Long integer-reply the number of elements in the specified score range. + * @since 4.3 + */ + RedisFuture zlexcount(K key, Range range); + + /** + * Removes and returns up to count members with the lowest scores in the sorted set stored at key. + * + * @param key the key + * @return ScoredValue<V> the removed element. + * @since 5.1 + */ + RedisFuture> zpopmin(K key); + + /** + * Removes and returns up to count members with the lowest scores in the sorted set stored at key. + * + * @param key the key. + * @param count the number of elements to return. + * @return List<ScoredValue<V>> array-reply list of popped scores and elements. + * @since 5.1 + */ + RedisFuture>> zpopmin(K key, long count); + + /** + * Removes and returns up to count members with the highest scores in the sorted set stored at key. + * + * @param key the key + * @return ScoredValue<V> the removed element. + * @since 5.1 + */ + RedisFuture> zpopmax(K key); + + /** + * Removes and returns up to count members with the highest scores in the sorted set stored at key. + * + * @param key the key. + * @param count the number of elements to return. + * @return List<ScoredValue<V>> array-reply list of popped scores and elements. + * @since 5.1 + */ + RedisFuture>> zpopmax(K key, long count); + + /** + * Return a range of members in a sorted set, by index. 
+ * + * @param key the key + * @param start the start + * @param stop the stop + * @return List<V> array-reply list of elements in the specified range. + */ + RedisFuture> zrange(K key, long start, long stop); + + /** + * Return a range of members in a sorted set, by index. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param start the start + * @param stop the stop + * @return Long count of elements in the specified range. + */ + RedisFuture zrange(ValueStreamingChannel channel, K key, long start, long stop); + + /** + * Return a range of members with scores in a sorted set, by index. + * + * @param key the key + * @param start the start + * @param stop the stop + * @return List<V> array-reply list of elements in the specified range. + */ + RedisFuture>> zrangeWithScores(K key, long start, long stop); + + /** + * Stream over a range of members with scores in a sorted set, by index. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param start the start + * @param stop the stop + * @return Long count of elements in the specified range. + */ + RedisFuture zrangeWithScores(ScoredValueStreamingChannel channel, K key, long start, long stop); + + /** + * Return a range of members in a sorted set, by lexicographical range. + * + * @param key the key + * @param min min score + * @param max max score + * @return List<V> array-reply list of elements in the specified range. + * @deprecated Use {@link #zrangebylex(java.lang.Object, Range)} + */ + @Deprecated + RedisFuture> zrangebylex(K key, String min, String max); + + /** + * Return a range of members in a sorted set, by lexicographical range. + * + * @param key the key + * @param range the range + * @return List<V> array-reply list of elements in the specified range. + * @since 4.3 + */ + RedisFuture> zrangebylex(K key, Range range); + + /** + * Return a range of members in a sorted set, by lexicographical range. + * + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return List<V> array-reply list of elements in the specified range. + * @deprecated Use {@link #zrangebylex(java.lang.Object, Range)} + */ + @Deprecated + RedisFuture> zrangebylex(K key, String min, String max, long offset, long count); + + /** + * Return a range of members in a sorted set, by lexicographical range. + * + * @param key the key + * @param range the range + * @param limit the limit + * @return List<V> array-reply list of elements in the specified range. + * @since 4.3 + */ + RedisFuture> zrangebylex(K key, Range range, Limit limit); + + /** + * Return a range of members in a sorted set, by score. + * + * @param key the key + * @param min min score + * @param max max score + * @return List<V> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrangebyscore(java.lang.Object, Range)} + */ + @Deprecated + RedisFuture> zrangebyscore(K key, double min, double max); + + /** + * Return a range of members in a sorted set, by score. + * + * @param key the key + * @param min min score + * @param max max score + * @return List<V> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrangebyscore(java.lang.Object, Range)} + */ + @Deprecated + RedisFuture> zrangebyscore(K key, String min, String max); + + /** + * Return a range of members in a sorted set, by score. 
+ * + * @param key the key + * @param range the range + * @return List<V> array-reply list of elements in the specified score range. + * @since 4.3 + */ + RedisFuture> zrangebyscore(K key, Range range); + + /** + * Return a range of members in a sorted set, by score. + * + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return List<V> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrangebyscore(java.lang.Object, Range, Limit)} + */ + @Deprecated + RedisFuture> zrangebyscore(K key, double min, double max, long offset, long count); + + /** + * Return a range of members in a sorted set, by score. + * + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return List<V> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrangebyscore(java.lang.Object, Range, Limit)} + */ + @Deprecated + RedisFuture> zrangebyscore(K key, String min, String max, long offset, long count); + + /** + * Return a range of members in a sorted set, by score. + * + * @param key the key + * @param range the range + * @param limit the limit + * @return List<V> array-reply list of elements in the specified score range. + * @since 4.3 + */ + RedisFuture> zrangebyscore(K key, Range range, Limit limit); + + /** + * Stream over a range of members in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param min min score + * @param max max score + * @return Long count of elements in the specified score range. + * @deprecated Use {@link #zrangebyscore(ValueStreamingChannel, java.lang.Object, Range)} + */ + @Deprecated + RedisFuture zrangebyscore(ValueStreamingChannel channel, K key, double min, double max); + + /** + * Stream over a range of members in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param min min score + * @param max max score + * @return Long count of elements in the specified score range. + * @deprecated Use {@link #zrangebyscore(ValueStreamingChannel, java.lang.Object, Range)} + */ + @Deprecated + RedisFuture zrangebyscore(ValueStreamingChannel channel, K key, String min, String max); + + /** + * Stream over a range of members in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param range the range + * @return Long count of elements in the specified score range. + * @since 4.3 + */ + RedisFuture zrangebyscore(ValueStreamingChannel channel, K key, Range range); + + /** + * Stream over range of members in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return Long count of elements in the specified score range. + * @deprecated Use {@link #zrangebyscore(ValueStreamingChannel, java.lang.Object, Range, Limit limit)} + */ + @Deprecated + RedisFuture zrangebyscore(ValueStreamingChannel channel, K key, double min, double max, long offset, long count); + + /** + * Stream over a range of members in a sorted set, by score. 
+ * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return Long count of elements in the specified score range. + * @deprecated Use {@link #zrangebyscore(ValueStreamingChannel, java.lang.Object, Range, Limit limit)} + */ + @Deprecated + RedisFuture zrangebyscore(ValueStreamingChannel channel, K key, String min, String max, long offset, long count); + + /** + * Stream over a range of members in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param range the range + * @param limit the limit + * @return Long count of elements in the specified score range. + * @since 4.3 + */ + RedisFuture zrangebyscore(ValueStreamingChannel channel, K key, Range range, Limit limit); + + /** + * Return a range of members with score in a sorted set, by score. + * + * @param key the key + * @param min min score + * @param max max score + * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrangebyscoreWithScores(java.lang.Object, Range)} + */ + @Deprecated + RedisFuture>> zrangebyscoreWithScores(K key, double min, double max); + + /** + * Return a range of members with score in a sorted set, by score. + * + * @param key the key + * @param min min score + * @param max max score + * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrangebyscoreWithScores(java.lang.Object, Range)} + */ + @Deprecated + RedisFuture>> zrangebyscoreWithScores(K key, String min, String max); + + /** + * Return a range of members with score in a sorted set, by score. + * + * @param key the key + * @param range the range + * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. + * @since 4.3 + */ + RedisFuture>> zrangebyscoreWithScores(K key, Range range); + + /** + * Return a range of members with score in a sorted set, by score. + * + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrangebyscoreWithScores(java.lang.Object, Range, Limit limit)} + */ + @Deprecated + RedisFuture>> zrangebyscoreWithScores(K key, double min, double max, long offset, long count); + + /** + * Return a range of members with score in a sorted set, by score. + * + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrangebyscoreWithScores(java.lang.Object, Range, Limit)} + */ + @Deprecated + RedisFuture>> zrangebyscoreWithScores(K key, String min, String max, long offset, long count); + + /** + * Return a range of members with score in a sorted set, by score. + * + * @param key the key + * @param range the range + * @param limit the limit + * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. + * @since 4.3 + */ + RedisFuture>> zrangebyscoreWithScores(K key, Range range, Limit limit); + + /** + * Stream over a range of members with scores in a sorted set, by score. 
+ * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param min min score + * @param max max score + * @return Long count of elements in the specified score range. + * @deprecated Use {@link #zrangebyscoreWithScores(ScoredValueStreamingChannel, java.lang.Object, Range)} + */ + @Deprecated + RedisFuture zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double min, double max); + + /** + * Stream over a range of members with scores in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param min min score + * @param max max score + * @return Long count of elements in the specified score range. + * @deprecated Use {@link #zrangebyscoreWithScores(ScoredValueStreamingChannel, java.lang.Object, Range)} + */ + @Deprecated + RedisFuture zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String min, String max); + + /** + * Stream over a range of members with scores in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param range the range + * @return Long count of elements in the specified score range. + * @since 4.3 + */ + RedisFuture zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, Range range); + + /** + * Stream over a range of members with scores in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return Long count of elements in the specified score range. + * @deprecated Use {@link #zrangebyscoreWithScores(ScoredValueStreamingChannel, java.lang.Object, Range, Limit limit)} + */ + @Deprecated + RedisFuture zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double min, double max, long offset, long count); + + /** + * Stream over a range of members with scores in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return Long count of elements in the specified score range. + * @deprecated Use {@link #zrangebyscoreWithScores(ScoredValueStreamingChannel, java.lang.Object, Range, Limit limit)} + */ + @Deprecated + RedisFuture zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String min, String max, long offset, long count); + + /** + * Stream over a range of members with scores in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param range the range + * @param limit the limit + * @return Long count of elements in the specified score range. + * @since 4.3 + */ + RedisFuture zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, Range range, Limit limit); + + /** + * Determine the index of a member in a sorted set. + * + * @param key the key + * @param member the member type: value + * @return Long integer-reply the rank of {@code member}. If {@code member} does not exist in the sorted set or {@code key} + * does not exist, + */ + RedisFuture zrank(K key, V member); + + /** + * Remove one or more members from a sorted set. 
+ * + * @param key the key + * @param members the member type: value + * @return Long integer-reply specifically: + * + * The number of members removed from the sorted set, not including non existing members. + */ + RedisFuture zrem(K key, V... members); + + /** + * Remove all members in a sorted set between the given lexicographical range. + * + * @param key the key + * @param min min score + * @param max max score + * @return Long integer-reply the number of elements removed. + * @deprecated Use {@link #zremrangebylex(java.lang.Object, Range)} + */ + @Deprecated + RedisFuture zremrangebylex(K key, String min, String max); + + /** + * Remove all members in a sorted set between the given lexicographical range. + * + * @param key the key + * @param range the range + * @return Long integer-reply the number of elements removed. + * @since 4.3 + */ + RedisFuture zremrangebylex(K key, Range range); + + /** + * Remove all members in a sorted set within the given indexes. + * + * @param key the key + * @param start the start type: long + * @param stop the stop type: long + * @return Long integer-reply the number of elements removed. + */ + RedisFuture zremrangebyrank(K key, long start, long stop); + + /** + * Remove all members in a sorted set within the given scores. + * + * @param key the key + * @param min min score + * @param max max score + * @return Long integer-reply the number of elements removed. + * @deprecated Use {@link #zremrangebyscore(java.lang.Object, Range)} + */ + @Deprecated + RedisFuture zremrangebyscore(K key, double min, double max); + + /** + * Remove all members in a sorted set within the given scores. + * + * @param key the key + * @param min min score + * @param max max score + * @return Long integer-reply the number of elements removed. + * @deprecated Use {@link #zremrangebyscore(java.lang.Object, Range)} + */ + @Deprecated + RedisFuture zremrangebyscore(K key, String min, String max); + + /** + * Remove all members in a sorted set within the given scores. + * + * @param key the key + * @param range the range + * @return Long integer-reply the number of elements removed. + * @since 4.3 + */ + RedisFuture zremrangebyscore(K key, Range range); + + /** + * Return a range of members in a sorted set, by index, with scores ordered from high to low. + * + * @param key the key + * @param start the start + * @param stop the stop + * @return List<V> array-reply list of elements in the specified range. + */ + RedisFuture> zrevrange(K key, long start, long stop); + + /** + * Stream over a range of members in a sorted set, by index, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param start the start + * @param stop the stop + * @return Long count of elements in the specified range. + */ + RedisFuture zrevrange(ValueStreamingChannel channel, K key, long start, long stop); + + /** + * Return a range of members with scores in a sorted set, by index, with scores ordered from high to low. + * + * @param key the key + * @param start the start + * @param stop the stop + * @return List<V> array-reply list of elements in the specified range. + */ + RedisFuture>> zrevrangeWithScores(K key, long start, long stop); + + /** + * Stream over a range of members with scores in a sorted set, by index, with scores ordered from high to low. 
+ * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param start the start + * @param stop the stop + * @return Long count of elements in the specified range. + */ + RedisFuture zrevrangeWithScores(ScoredValueStreamingChannel channel, K key, long start, long stop); + + /** + * Return a range of members in a sorted set, by lexicographical range ordered from high to low. + * + * @param key the key + * @param range the range + * @return List<V> array-reply list of elements in the specified score range. + * @since 4.3 + */ + RedisFuture> zrevrangebylex(K key, Range range); + + /** + * Return a range of members in a sorted set, by lexicographical range ordered from high to low. + * + * @param key the key + * @param range the range + * @param limit the limit + * @return List<V> array-reply list of elements in the specified score range. + * @since 4.3 + */ + RedisFuture> zrevrangebylex(K key, Range range, Limit limit); + + /** + * Return a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param min min score + * @param max max score + * @return List<V> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrevrangebyscore(java.lang.Object, Range)} + */ + @Deprecated + RedisFuture> zrevrangebyscore(K key, double max, double min); + + /** + * Return a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param min min score + * @param max max score + * @return List<V> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrevrangebyscore(java.lang.Object, Range)} + */ + @Deprecated + RedisFuture> zrevrangebyscore(K key, String max, String min); + + /** + * Return a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param range the range + * @return List<V> array-reply list of elements in the specified score range. + * @since 4.3 + */ + RedisFuture> zrevrangebyscore(K key, Range range); + + /** + * Return a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param max max score + * @param min min score + * @param offset the offset + * @param count the count + * @return List<V> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrevrangebyscore(java.lang.Object, Range, Limit)} + */ + @Deprecated + RedisFuture> zrevrangebyscore(K key, double max, double min, long offset, long count); + + /** + * Return a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param max max score + * @param min min score + * @param offset the offset + * @param count the count + * @return List<V> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrevrangebyscore(java.lang.Object, Range, Limit)} + */ + @Deprecated + RedisFuture> zrevrangebyscore(K key, String max, String min, long offset, long count); + + /** + * Return a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param range the range + * @param limit the limit + * @return List<V> array-reply list of elements in the specified score range. 
+ * @since 4.3 + */ + RedisFuture> zrevrangebyscore(K key, Range range, Limit limit); + + /** + * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param max max score + * @param min min score + * @return Long count of elements in the specified range. + * @deprecated Use {@link #zrevrangebyscore(java.lang.Object, Range)} + */ + @Deprecated + RedisFuture zrevrangebyscore(ValueStreamingChannel channel, K key, double max, double min); + + /** + * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param min min score + * @param max max score + * @return Long count of elements in the specified range. + * @deprecated Use {@link #zrevrangebyscore(java.lang.Object, Range)} + */ + @Deprecated + RedisFuture zrevrangebyscore(ValueStreamingChannel channel, K key, String max, String min); + + /** + * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param range the range + * @return Long count of elements in the specified range. + * @since 4.3 + */ + RedisFuture zrevrangebyscore(ValueStreamingChannel channel, K key, Range range); + + /** + * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return Long count of elements in the specified range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(java.lang.Object, Range, Limit)} + */ + @Deprecated + RedisFuture zrevrangebyscore(ValueStreamingChannel channel, K key, double max, double min, long offset, long count); + + /** + * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return Long count of elements in the specified range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(java.lang.Object, Range, Limit)} + */ + @Deprecated + RedisFuture zrevrangebyscore(ValueStreamingChannel channel, K key, String max, String min, long offset, long count); + + /** + * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param range the range + * @param limit the limit + * @return Long count of elements in the specified range. + * @since 4.3 + */ + RedisFuture zrevrangebyscore(ValueStreamingChannel channel, K key, Range range, Limit limit); + + /** + * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param max max score + * @param min min score + * @return List<V> array-reply list of elements in the specified score range. 
+ * @deprecated Use {@link #zrevrangebyscoreWithScores(java.lang.Object, Range)} + */ + @Deprecated + RedisFuture>> zrevrangebyscoreWithScores(K key, double max, double min); + + /** + * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param max max score + * @param min min score + * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(java.lang.Object, Range)} + */ + @Deprecated + RedisFuture>> zrevrangebyscoreWithScores(K key, String max, String min); + + /** + * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param range the range + * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. + * @since 4.3 + */ + RedisFuture>> zrevrangebyscoreWithScores(K key, Range range); + + /** + * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param max max score + * @param min min score + * @param offset the offset + * @param count the count + * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(java.lang.Object, Range, Limit)} + */ + @Deprecated + RedisFuture>> zrevrangebyscoreWithScores(K key, double max, double min, long offset, long count); + + /** + * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param max max score + * @param min min score + * @param offset the offset + * @param count the count + * @return List<V> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(java.lang.Object, Range, Limit)} + */ + @Deprecated + RedisFuture>> zrevrangebyscoreWithScores(K key, String max, String min, long offset, long count); + + /** + * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param range the range + * @param limit limit + * @return List<V> array-reply list of elements in the specified score range. + * @since 4.3 + */ + RedisFuture>> zrevrangebyscoreWithScores(K key, Range range, Limit limit); + + /** + * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param min min score + * @param max max score + * @return Long count of elements in the specified range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(ScoredValueStreamingChannel, java.lang.Object, Range)} + */ + @Deprecated + RedisFuture zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double max, double min); + + /** + * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param min min score + * @param max max score + * @return Long count of elements in the specified range. 
+ * @deprecated Use {@link #zrevrangebyscoreWithScores(ScoredValueStreamingChannel, java.lang.Object, Range)} + */ + @Deprecated + RedisFuture zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String max, String min); + + /** + * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param range the range + * @return Long count of elements in the specified range. + */ + RedisFuture zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, Range range); + + /** + * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return Long count of elements in the specified range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(ScoredValueStreamingChannel, java.lang.Object, Range, Limit)} + */ + @Deprecated + RedisFuture zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double max, double min, long offset, long count); + + /** + * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return Long count of elements in the specified range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(ScoredValueStreamingChannel, java.lang.Object, Range, Limit)} + */ + @Deprecated + RedisFuture zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String max, String min, long offset, long count); + + /** + * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param range the range + * @param limit the limit + * @return Long count of elements in the specified range. + * @since 4.3 + */ + RedisFuture zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, Range range, Limit limit); + + /** + * Determine the index of a member in a sorted set, with scores ordered from high to low. + * + * @param key the key + * @param member the member type: value + * @return Long integer-reply the rank of {@code member}. If {@code member} does not exist in the sorted set or {@code key} + * does not exist, + */ + RedisFuture zrevrank(K key, V member); + + /** + * Incrementally iterate sorted sets elements and associated scores. + * + * @param key the key + * @return ScoredValueScanCursor<V> scan cursor. + */ + RedisFuture> zscan(K key); + + /** + * Incrementally iterate sorted sets elements and associated scores. + * + * @param key the key + * @param scanArgs scan arguments + * @return ScoredValueScanCursor<V> scan cursor. + */ + RedisFuture> zscan(K key, ScanArgs scanArgs); + + /** + * Incrementally iterate sorted sets elements and associated scores. + * + * @param key the key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @param scanArgs scan arguments + * @return ScoredValueScanCursor<V> scan cursor. 
+ */ + RedisFuture> zscan(K key, ScanCursor scanCursor, ScanArgs scanArgs); + + /** + * Incrementally iterate sorted sets elements and associated scores. + * + * @param key the key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @return ScoredValueScanCursor<V> scan cursor. + */ + RedisFuture> zscan(K key, ScanCursor scanCursor); + + /** + * Incrementally iterate sorted sets elements and associated scores. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @return StreamScanCursor scan cursor. + */ + RedisFuture zscan(ScoredValueStreamingChannel channel, K key); + + /** + * Incrementally iterate sorted sets elements and associated scores. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param scanArgs scan arguments + * @return StreamScanCursor scan cursor. + */ + RedisFuture zscan(ScoredValueStreamingChannel channel, K key, ScanArgs scanArgs); + + /** + * Incrementally iterate sorted sets elements and associated scores. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @param scanArgs scan arguments + * @return StreamScanCursor scan cursor. + */ + RedisFuture zscan(ScoredValueStreamingChannel channel, K key, ScanCursor scanCursor, ScanArgs scanArgs); + + /** + * Incrementally iterate sorted sets elements and associated scores. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @return StreamScanCursor scan cursor. + */ + RedisFuture zscan(ScoredValueStreamingChannel channel, K key, ScanCursor scanCursor); + + /** + * Get the score associated with the given member in a sorted set. + * + * @param key the key + * @param member the member type: value + * @return Double bulk-string-reply the score of {@code member} (a double precision floating point number), represented as + * string. + */ + RedisFuture zscore(K key, V member); + + /** + * Add multiple sorted sets and store the resulting sorted set in a new key. + * + * @param destination destination key + * @param keys source keys + * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. + */ + RedisFuture zunionstore(K destination, K... keys); + + /** + * Add multiple sorted sets and store the resulting sorted set in a new key. + * + * @param destination the destination + * @param storeArgs the storeArgs + * @param keys the keys + * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. + */ + RedisFuture zunionstore(K destination, ZStoreArgs storeArgs, K... keys); +} diff --git a/src/main/java/io/lettuce/core/api/async/RedisStreamAsyncCommands.java b/src/main/java/io/lettuce/core/api/async/RedisStreamAsyncCommands.java new file mode 100644 index 0000000000..00af2dec7d --- /dev/null +++ b/src/main/java/io/lettuce/core/api/async/RedisStreamAsyncCommands.java @@ -0,0 +1,324 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
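/*
 * Illustrative usage sketch for the sorted-set commands documented above (not part of the patch).
 * Assumes Lettuce 5.x on the classpath and a Redis server reachable at redis://localhost; the key
 * "leaderboard" and the member names are made-up examples.
 */
import java.util.List;
import java.util.concurrent.ExecutionException;

import io.lettuce.core.Limit;
import io.lettuce.core.Range;
import io.lettuce.core.RedisClient;
import io.lettuce.core.ScoredValue;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.async.RedisAsyncCommands;

public class ZRevRangeByScoreExample {

    public static void main(String[] args) throws ExecutionException, InterruptedException {

        RedisClient client = RedisClient.create("redis://localhost");
        try (StatefulRedisConnection<String, String> connection = client.connect()) {

            RedisAsyncCommands<String, String> async = connection.async();

            async.zadd("leaderboard", 10.0, "alice");
            async.zadd("leaderboard", 25.0, "bob");
            async.zadd("leaderboard", 17.5, "carol");

            // The Range/Limit overloads replace the deprecated (max, min, offset, count) variants:
            // members scored between 5 and 30, highest score first, at most two results.
            List<ScoredValue<String>> top = async
                    .zrevrangebyscoreWithScores("leaderboard", Range.create(5.0, 30.0), Limit.create(0, 2))
                    .get();

            top.forEach(sv -> System.out.printf("%s -> %.1f%n", sv.getValue(), sv.getScore()));
        } finally {
            client.shutdown();
        }
    }
}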
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api.async; + +import java.util.List; +import java.util.Map; + +import io.lettuce.core.*; +import io.lettuce.core.XReadArgs.StreamOffset; + +/** + * Asynchronous executed commands for Streams. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 5.1 + * @generated by io.lettuce.apigenerator.CreateAsyncApi + */ +public interface RedisStreamAsyncCommands { + + /** + * Acknowledge one or more messages as processed. + * + * @param key the stream key. + * @param group name of the consumer group. + * @param messageIds message Id's to acknowledge. + * @return simple-reply the lenght of acknowledged messages. + */ + RedisFuture xack(K key, K group, String... messageIds); + + /** + * Append a message to the stream {@code key}. + * + * @param key the stream key. + * @param body message body. + * @return simple-reply the message Id. + */ + RedisFuture xadd(K key, Map body); + + /** + * Append a message to the stream {@code key}. + * + * @param key the stream key. + * @param args + * @param body message body. + * @return simple-reply the message Id. + */ + RedisFuture xadd(K key, XAddArgs args, Map body); + + /** + * Append a message to the stream {@code key}. + * + * @param key the stream key. + * @param keysAndValues message body. + * @return simple-reply the message Id. + */ + RedisFuture xadd(K key, Object... keysAndValues); + + /** + * Append a message to the stream {@code key}. + * + * @param key the stream key. + * @param args + * @param keysAndValues message body. + * @return simple-reply the message Id. + */ + RedisFuture xadd(K key, XAddArgs args, Object... keysAndValues); + + /** + * Gets ownership of one or multiple messages in the Pending Entries List of a given stream consumer group. + * + * @param key the stream key. + * @param consumer consumer identified by group name and consumer key. + * @param minIdleTime + * @param messageIds message Id's to claim. + * @return simple-reply the {@link StreamMessage} + */ + RedisFuture>> xclaim(K key, Consumer consumer, long minIdleTime, String... messageIds); + + /** + * Gets ownership of one or multiple messages in the Pending Entries List of a given stream consumer group. + *

+ * Note that setting the {@code JUSTID} flag (calling this method with {@link XClaimArgs#justid()}) suppresses the message
+ * body and {@link StreamMessage#getBody()} is {@code null}.
+ *
+ * @param key the stream key.
+ * @param consumer consumer identified by group name and consumer key.
+ * @param args
+ * @param messageIds message Id's to claim.
+ * @return simple-reply the {@link StreamMessage}
+ */
+ RedisFuture>> xclaim(K key, Consumer consumer, XClaimArgs args, String... messageIds);
+
+ /**
+ * Removes the specified entries from the stream. Returns the number of items deleted, that may be different from the number
+ * of IDs passed in case certain IDs do not exist.
+ *
+ * @param key the stream key.
+ * @param messageIds stream message Id's.
+ * @return simple-reply number of removed entries.
+ */
+ RedisFuture xdel(K key, String... messageIds);
+
+ /**
+ * Create a consumer group.
+ *
+ * @param streamOffset name of the stream containing the offset to set.
+ * @param group name of the consumer group.
+ * @return simple-reply {@literal true} if successful.
+ */
+ RedisFuture xgroupCreate(StreamOffset streamOffset, K group);
+
+ /**
+ * Create a consumer group.
+ *
+ * @param streamOffset name of the stream containing the offset to set.
+ * @param group name of the consumer group.
+ * @param args
+ * @return simple-reply {@literal true} if successful.
+ * @since 5.2
+ */
+ RedisFuture xgroupCreate(StreamOffset streamOffset, K group, XGroupCreateArgs args);
+
+ /**
+ * Delete a consumer from a consumer group.
+ *
+ * @param key the stream key.
+ * @param consumer consumer identified by group name and consumer key.
+ * @return simple-reply {@literal true} if successful.
+ */
+ RedisFuture xgroupDelconsumer(K key, Consumer consumer);
+
+ /**
+ * Destroy a consumer group.
+ *
+ * @param key the stream key.
+ * @param group name of the consumer group.
+ * @return simple-reply {@literal true} if successful.
+ */
+ RedisFuture xgroupDestroy(K key, K group);
+
+ /**
+ * Set the current {@code group} id.
+ *
+ * @param streamOffset name of the stream containing the offset to set.
+ * @param group name of the consumer group.
+ * @return simple-reply OK
+ */
+ RedisFuture xgroupSetid(StreamOffset streamOffset, K group);
+
+ /**
+ * Retrieve information about the stream at {@code key}.
+ *
+ * @param key the stream key.
+ * @return List<Object> array-reply.
+ * @since 5.2
+ */
+ RedisFuture> xinfoStream(K key);
+
+ /**
+ * Retrieve information about the stream consumer groups at {@code key}.
+ *
+ * @param key the stream key.
+ * @return List<Object> array-reply.
+ * @since 5.2
+ */
+ RedisFuture> xinfoGroups(K key);
+
+ /**
+ * Retrieve information about consumer groups of group {@code group} and stream at {@code key}.
+ *
+ * @param key the stream key.
+ * @param group name of the consumer group.
+ * @return List<Object> array-reply.
+ * @since 5.2
+ */
+ RedisFuture> xinfoConsumers(K key, K group);
+
+ /**
+ * Get the length of a stream.
+ *
+ * @param key the stream key.
+ * @return simple-reply the length of the stream.
+ */
+ RedisFuture xlen(K key);
+
+ /**
+ * Read pending messages from a stream for a {@code group}.
+ *
+ * @param key the stream key.
+ * @param group name of the consumer group.
+ * @return List<Object> array-reply list pending entries.
+ */
+ RedisFuture> xpending(K key, K group);
+
+ /**
+ * Read pending messages from a stream within a specific {@link Range}.
+ *
+ * @param key the stream key.
+ * @param group name of the consumer group.
+ * @param range must not be {@literal null}. + * @param limit must not be {@literal null}. + * @return List<Object> array-reply list with members of the resulting stream. + */ + RedisFuture> xpending(K key, K group, Range range, Limit limit); + + /** + * Read pending messages from a stream within a specific {@link Range}. + * + * @param key the stream key. + * @param consumer consumer identified by group name and consumer key. + * @param range must not be {@literal null}. + * @param limit must not be {@literal null}. + * @return List<Object> array-reply list with members of the resulting stream. + */ + RedisFuture> xpending(K key, Consumer consumer, Range range, Limit limit); + + /** + * Read messages from a stream within a specific {@link Range}. + * + * @param key the stream key. + * @param range must not be {@literal null}. + * @return List<StreamMessage> array-reply list with members of the resulting stream. + */ + RedisFuture>> xrange(K key, Range range); + + /** + * Read messages from a stream within a specific {@link Range} applying a {@link Limit}. + * + * @param key the stream key. + * @param range must not be {@literal null}. + * @param limit must not be {@literal null}. + * @return List<StreamMessage> array-reply list with members of the resulting stream. + */ + RedisFuture>> xrange(K key, Range range, Limit limit); + + /** + * Read messages from one or more {@link StreamOffset}s. + * + * @param streams the streams to read from. + * @return List<StreamMessage> array-reply list with members of the resulting stream. + */ + RedisFuture>> xread(StreamOffset... streams); + + /** + * Read messages from one or more {@link StreamOffset}s. + * + * @param args read arguments. + * @param streams the streams to read from. + * @return List<StreamMessage> array-reply list with members of the resulting stream. + */ + RedisFuture>> xread(XReadArgs args, StreamOffset... streams); + + /** + * Read messages from one or more {@link StreamOffset}s using a consumer group. + * + * @param consumer consumer/group. + * @param streams the streams to read from. + * @return List<StreamMessage> array-reply list with members of the resulting stream. + */ + RedisFuture>> xreadgroup(Consumer consumer, StreamOffset... streams); + + /** + * Read messages from one or more {@link StreamOffset}s using a consumer group. + * + * @param consumer consumer/group. + * @param args read arguments. + * @param streams the streams to read from. + * @return List<StreamMessage> array-reply list with members of the resulting stream. + */ + RedisFuture>> xreadgroup(Consumer consumer, XReadArgs args, StreamOffset... streams); + + /** + * Read messages from a stream within a specific {@link Range} in reverse order. + * + * @param key the stream key. + * @param range must not be {@literal null}. + * @return List<StreamMessage> array-reply list with members of the resulting stream. + */ + RedisFuture>> xrevrange(K key, Range range); + + /** + * Read messages from a stream within a specific {@link Range} applying a {@link Limit} in reverse order. + * + * @param key the stream key. + * @param range must not be {@literal null}. + * @param limit must not be {@literal null}. + * @return List<StreamMessage> array-reply list with members of the resulting stream. + */ + RedisFuture>> xrevrange(K key, Range range, Limit limit); + + /** + * Trims the stream to {@code count} elements. + * + * @param key the stream key. + * @param count length of the stream. + * @return simple-reply number of removed entries. 
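/*
 * Illustrative sketch of the Stream commands documented above (XADD, XGROUP CREATE, XREADGROUP,
 * XACK), not part of the patch. Assumes Lettuce 5.1+ and a local Redis 5+ server; the stream,
 * group and consumer names are made-up examples.
 */
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutionException;

import io.lettuce.core.Consumer;
import io.lettuce.core.RedisClient;
import io.lettuce.core.StreamMessage;
import io.lettuce.core.XReadArgs;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.async.RedisAsyncCommands;

public class StreamConsumerGroupExample {

    public static void main(String[] args) throws ExecutionException, InterruptedException {

        RedisClient client = RedisClient.create("redis://localhost");
        try (StatefulRedisConnection<String, String> connection = client.connect()) {

            RedisAsyncCommands<String, String> async = connection.async();

            // Append an entry, then create a group that starts reading at the beginning ("0").
            async.xadd("events", Collections.singletonMap("type", "login")).get();
            async.xgroupCreate(XReadArgs.StreamOffset.from("events", "0"), "analytics").get();

            // Read new entries on behalf of the group and acknowledge them once processed.
            List<StreamMessage<String, String>> messages = async
                    .xreadgroup(Consumer.from("analytics", "consumer-1"),
                            XReadArgs.StreamOffset.lastConsumed("events"))
                    .get();

            for (StreamMessage<String, String> message : messages) {
                async.xack("events", "analytics", message.getId());
            }
        } finally {
            client.shutdown();
        }
    }
}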
+ */ + RedisFuture xtrim(K key, long count); + + /** + * Trims the stream to {@code count} elements. + * + * @param key the stream key. + * @param approximateTrimming {@literal true} to trim approximately using the {@code ~} flag. + * @param count length of the stream. + * @return simple-reply number of removed entries. + */ + RedisFuture xtrim(K key, boolean approximateTrimming, long count); +} diff --git a/src/main/java/com/lambdaworks/redis/api/async/RedisStringAsyncCommands.java b/src/main/java/io/lettuce/core/api/async/RedisStringAsyncCommands.java similarity index 83% rename from src/main/java/com/lambdaworks/redis/api/async/RedisStringAsyncCommands.java rename to src/main/java/io/lettuce/core/api/async/RedisStringAsyncCommands.java index 4ef6e72599..8daee16124 100644 --- a/src/main/java/com/lambdaworks/redis/api/async/RedisStringAsyncCommands.java +++ b/src/main/java/io/lettuce/core/api/async/RedisStringAsyncCommands.java @@ -1,13 +1,29 @@ -package com.lambdaworks.redis.api.async; - -import com.lambdaworks.redis.BitFieldArgs; -import com.lambdaworks.redis.RedisFuture; -import com.lambdaworks.redis.SetArgs; -import com.lambdaworks.redis.output.ValueStreamingChannel; +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api.async; import java.util.List; import java.util.Map; +import io.lettuce.core.BitFieldArgs; +import io.lettuce.core.KeyValue; +import io.lettuce.core.RedisFuture; +import io.lettuce.core.SetArgs; +import io.lettuce.core.output.KeyValueStreamingChannel; + /** * Asynchronous executed commands for Strings. * @@ -15,7 +31,7 @@ * @param Value type. * @author Mark Paluch * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateAsyncApi + * @generated by io.lettuce.apigenerator.CreateAsyncApi */ public interface RedisStringAsyncCommands { @@ -75,13 +91,30 @@ public interface RedisStringAsyncCommands { * * Basically the function consider the right of the string as padded with zeros if you look for clear bits and * specify no range or the start argument only. - * - * However this behavior changes if you are looking for clear bits and specify a range with both - * start and end. If no clear bit is found in the specified range, the function - * returns -1 as the user specified a clear range and there are no 0 bits in that range. */ RedisFuture bitpos(K key, boolean state); + /** + * Find first bit set or clear in a string. + * + * @param key the key + * @param state the bit type: long + * @param start the start type: long + * @return Long integer-reply The command returns the position of the first bit set to 1 or 0 according to the request. + * + * If we look for set bits (the bit argument is 1) and the string is empty or composed of just zero bytes, -1 is + * returned. + * + * If we look for clear bits (the bit argument is 0) and the string only contains bit set to 1, the function returns + * the first bit not part of the string on the right. 
So if the string is tree bytes set to the value 0xff the + * command {@code BITPOS key 0} will return 24, since up to bit 23 all the bits are 1. + * + * Basically the function consider the right of the string as padded with zeros if you look for clear bits and + * specify no range or the start argument only. + * @since 5.0.1 + */ + RedisFuture bitpos(K key, boolean state, long start); + /** * Find first bit set or clear in a string. * @@ -232,7 +265,7 @@ public interface RedisStringAsyncCommands { * @param keys the key * @return List<V> array-reply list of values at the specified keys. */ - RedisFuture> mget(K... keys); + RedisFuture>> mget(K... keys); /** * Stream over the values of all the given keys. @@ -242,7 +275,7 @@ public interface RedisStringAsyncCommands { * * @return Long array-reply list of values at the specified keys. */ - RedisFuture mget(ValueStreamingChannel channel, K... keys); + RedisFuture mget(KeyValueStreamingChannel channel, K... keys); /** * Set multiple keys to multiple values. diff --git a/src/main/java/io/lettuce/core/api/async/RedisTransactionalAsyncCommands.java b/src/main/java/io/lettuce/core/api/async/RedisTransactionalAsyncCommands.java new file mode 100644 index 0000000000..3146c38b7c --- /dev/null +++ b/src/main/java/io/lettuce/core/api/async/RedisTransactionalAsyncCommands.java @@ -0,0 +1,71 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api.async; + +import io.lettuce.core.RedisFuture; +import io.lettuce.core.TransactionResult; + +/** + * Asynchronous executed commands for Transactions. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.0 + * @generated by io.lettuce.apigenerator.CreateAsyncApi + */ +public interface RedisTransactionalAsyncCommands { + + /** + * Discard all commands issued after MULTI. + * + * @return String simple-string-reply always {@code OK}. + */ + RedisFuture discard(); + + /** + * Execute all commands issued after MULTI. + * + * @return List<Object> array-reply each element being the reply to each of the commands in the atomic transaction. + * + * When using {@code WATCH}, {@code EXEC} can return a {@link TransactionResult#wasDiscarded discarded + * TransactionResult}. + * @see TransactionResult#wasDiscarded + */ + RedisFuture exec(); + + /** + * Mark the start of a transaction block. + * + * @return String simple-string-reply always {@code OK}. + */ + RedisFuture multi(); + + /** + * Watch the given keys to determine execution of the MULTI/EXEC block. + * + * @param keys the key + * @return String simple-string-reply always {@code OK}. + */ + RedisFuture watch(K... keys); + + /** + * Forget about all watched keys. + * + * @return String simple-string-reply always {@code OK}. 
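/*
 * Illustrative sketch (not part of the patch) showing the changed MGET signature (now a List of
 * KeyValue, so missing keys are represented explicitly) and the MULTI/EXEC commands documented
 * above. Assumes an already connected RedisAsyncCommands<String, String>, e.g. obtained via
 * RedisClient.connect().async(); key names are made-up examples.
 */
import java.util.List;
import java.util.concurrent.ExecutionException;

import io.lettuce.core.KeyValue;
import io.lettuce.core.TransactionResult;
import io.lettuce.core.api.async.RedisAsyncCommands;

class StringAndTransactionSketch {

    static void demo(RedisAsyncCommands<String, String> async) throws ExecutionException, InterruptedException {

        // Commands issued between MULTI and EXEC are queued and complete when EXEC finishes.
        async.multi();
        async.set("greeting", "hello");
        async.set("farewell", "goodbye");
        TransactionResult result = async.exec().get();
        System.out.println("discarded: " + result.wasDiscarded());

        // MGET now returns KeyValue elements, so absent keys can be detected via hasValue().
        List<KeyValue<String, String>> values = async.mget("greeting", "missing-key").get();
        values.forEach(kv -> System.out.println(kv.getKey() + " present: " + kv.hasValue()));
    }
}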
+ */ + RedisFuture unwatch(); +} diff --git a/src/main/java/io/lettuce/core/api/async/package-info.java b/src/main/java/io/lettuce/core/api/async/package-info.java new file mode 100644 index 0000000000..31acaeaeef --- /dev/null +++ b/src/main/java/io/lettuce/core/api/async/package-info.java @@ -0,0 +1,4 @@ +/** + * Standalone Redis API for asynchronous executed commands. + */ +package io.lettuce.core.api.async; diff --git a/src/main/java/io/lettuce/core/api/package-info.java b/src/main/java/io/lettuce/core/api/package-info.java new file mode 100644 index 0000000000..60e47de051 --- /dev/null +++ b/src/main/java/io/lettuce/core/api/package-info.java @@ -0,0 +1,4 @@ +/** + * Standalone Redis connection API. + */ +package io.lettuce.core.api; diff --git a/src/main/java/io/lettuce/core/api/reactive/BaseRedisReactiveCommands.java b/src/main/java/io/lettuce/core/api/reactive/BaseRedisReactiveCommands.java new file mode 100644 index 0000000000..ad272cd271 --- /dev/null +++ b/src/main/java/io/lettuce/core/api/reactive/BaseRedisReactiveCommands.java @@ -0,0 +1,177 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api.reactive; + +import java.util.Map; + +import reactor.core.publisher.Flux; +import reactor.core.publisher.Mono; +import io.lettuce.core.output.CommandOutput; +import io.lettuce.core.protocol.CommandArgs; +import io.lettuce.core.protocol.ProtocolKeyword; + +/** + * Reactive executed commands for basic commands. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.0 + * @generated by io.lettuce.apigenerator.CreateReactiveApi + */ +public interface BaseRedisReactiveCommands { + + /** + * Post a message to a channel. + * + * @param channel the channel type: key + * @param message the message type: value + * @return Long integer-reply the number of clients that received the message. + */ + Mono publish(K channel, V message); + + /** + * Lists the currently *active channels*. + * + * @return K array-reply a list of active channels, optionally matching the specified pattern. + */ + Flux pubsubChannels(); + + /** + * Lists the currently *active channels*. + * + * @param channel the key + * @return K array-reply a list of active channels, optionally matching the specified pattern. + */ + Flux pubsubChannels(K channel); + + /** + * Returns the number of subscribers (not counting clients subscribed to patterns) for the specified channels. + * + * @param channels channel keys + * @return array-reply a list of channels and number of subscribers for every channel. + */ + Mono> pubsubNumsub(K... channels); + + /** + * Returns the number of subscriptions to patterns. + * + * @return Long integer-reply the number of patterns all the clients are subscribed to. + */ + Mono pubsubNumpat(); + + /** + * Echo the given string. 
+ * + * @param msg the message type: value + * @return V bulk-string-reply + */ + Mono echo(V msg); + + /** + * Return the role of the instance in the context of replication. + * + * @return Object array-reply where the first element is one of master, slave, sentinel and the additional + * elements are role-specific. + */ + Flux role(); + + /** + * Ping the server. + * + * @return String simple-string-reply + */ + Mono ping(); + + /** + * Switch connection to Read-Only mode when connecting to a cluster. + * + * @return String simple-string-reply. + */ + Mono readOnly(); + + /** + * Switch connection to Read-Write mode (default) when connecting to a cluster. + * + * @return String simple-string-reply. + */ + Mono readWrite(); + + /** + * Instructs Redis to disconnect the connection. Note that if auto-reconnect is enabled then Lettuce will auto-reconnect if + * the connection was disconnected. Use {@link io.lettuce.core.api.StatefulConnection#close} to close connections and + * release resources. + * + * @return String simple-string-reply always OK. + */ + Mono quit(); + + /** + * Wait for replication. + * + * @param replicas minimum number of replicas + * @param timeout timeout in milliseconds + * @return number of replicas + */ + Mono waitForReplication(int replicas, long timeout); + + /** + * Dispatch a command to the Redis Server. Please note the command output type must fit to the command response. + * + * @param type the command, must not be {@literal null}. + * @param output the command output, must not be {@literal null}. + * @param response type + * @return the command response + */ + Flux dispatch(ProtocolKeyword type, CommandOutput output); + + /** + * Dispatch a command to the Redis Server. Please note the command output type must fit to the command response. + * + * @param type the command, must not be {@literal null}. + * @param output the command output, must not be {@literal null}. + * @param args the command arguments, must not be {@literal null}. + * @param response type + * @return the command response + */ + Flux dispatch(ProtocolKeyword type, CommandOutput output, CommandArgs args); + + /** + * @return true if the connection is open (connected and not closed). + */ + boolean isOpen(); + + /** + * Reset the command state. Queued commands will be canceled and the internal state will be reset. This is useful when the + * internal state machine gets out of sync with the connection. + */ + void reset(); + + /** + * Disable or enable auto-flush behavior. Default is {@literal true}. If autoFlushCommands is disabled, multiple commands + * can be issued without writing them actually to the transport. Commands are buffered until a {@link #flushCommands()} is + * issued. After calling {@link #flushCommands()} commands are sent to the transport and executed by Redis. + * + * @param autoFlush state of autoFlush. + */ + void setAutoFlushCommands(boolean autoFlush); + + /** + * Flush pending commands. This commands forces a flush on the channel and can be used to buffer ("pipeline") commands to + * achieve batching. No-op if channel is not connected. + */ + void flushCommands(); +} diff --git a/src/main/java/io/lettuce/core/api/reactive/RedisGeoReactiveCommands.java b/src/main/java/io/lettuce/core/api/reactive/RedisGeoReactiveCommands.java new file mode 100644 index 0000000000..e9244e9aad --- /dev/null +++ b/src/main/java/io/lettuce/core/api/reactive/RedisGeoReactiveCommands.java @@ -0,0 +1,161 @@ +/* + * Copyright 2017-2020 the original author or authors. 
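/*
 * Illustrative sketch for the reactive base commands documented above (PING and PUBLISH), not
 * part of the patch. Assumes an already connected RedisReactiveCommands<String, String>, e.g.
 * obtained via RedisClient.connect().reactive(); the channel name is an example and block() is
 * used only to keep the demo sequential.
 */
import io.lettuce.core.api.reactive.RedisReactiveCommands;

class ReactiveBaseSketch {

    static void demo(RedisReactiveCommands<String, String> reactive) {

        // Mono-returning commands emit at most one element.
        System.out.println("PING -> " + reactive.ping().block());

        // PUBLISH returns the number of subscribers that received the message.
        Long receivers = reactive.publish("notifications", "cache-invalidated").block();
        System.out.println("delivered to " + receivers + " subscribers");
    }
}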
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api.reactive; + +import reactor.core.publisher.Flux; +import reactor.core.publisher.Mono; +import io.lettuce.core.*; + +/** + * Reactive executed commands for the Geo-API. + * + * @author Mark Paluch + * @since 4.0 + * @generated by io.lettuce.apigenerator.CreateReactiveApi + */ +public interface RedisGeoReactiveCommands { + + /** + * Single geo add. + * + * @param key the key of the geo set + * @param longitude the longitude coordinate according to WGS84 + * @param latitude the latitude coordinate according to WGS84 + * @param member the member to add + * @return Long integer-reply the number of elements that were added to the set + */ + Mono geoadd(K key, double longitude, double latitude, V member); + + /** + * Multi geo add. + * + * @param key the key of the geo set + * @param lngLatMember triplets of double longitude, double latitude and V member + * @return Long integer-reply the number of elements that were added to the set + */ + Mono geoadd(K key, Object... lngLatMember); + + /** + * Retrieve Geohash strings representing the position of one or more elements in a sorted set value representing a geospatial index. + * + * @param key the key of the geo set + * @param members the members + * @return bulk reply Geohash strings in the order of {@code members}. Returns {@literal null} if a member is not found. + */ + Flux> geohash(K key, V... members); + + /** + * Retrieve members selected by distance with the center of {@code longitude} and {@code latitude}. + * + * @param key the key of the geo set + * @param longitude the longitude coordinate according to WGS84 + * @param latitude the latitude coordinate according to WGS84 + * @param distance radius distance + * @param unit distance unit + * @return bulk reply + */ + Flux georadius(K key, double longitude, double latitude, double distance, GeoArgs.Unit unit); + + /** + * Retrieve members selected by distance with the center of {@code longitude} and {@code latitude}. + * + * @param key the key of the geo set + * @param longitude the longitude coordinate according to WGS84 + * @param latitude the latitude coordinate according to WGS84 + * @param distance radius distance + * @param unit distance unit + * @param geoArgs args to control the result + * @return nested multi-bulk reply. The {@link GeoWithin} contains only fields which were requested by {@link GeoArgs} + */ + Flux> georadius(K key, double longitude, double latitude, double distance, GeoArgs.Unit unit, GeoArgs geoArgs); + + /** + * Perform a {@link #georadius(Object, double, double, double, GeoArgs.Unit, GeoArgs)} query and store the results in a sorted set. 
+ * + * @param key the key of the geo set + * @param longitude the longitude coordinate according to WGS84 + * @param latitude the latitude coordinate according to WGS84 + * @param distance radius distance + * @param unit distance unit + * @param geoRadiusStoreArgs args to store either the resulting elements with their distance or the resulting elements with + * their locations a sorted set. + * @return Long integer-reply the number of elements in the result + */ + Mono georadius(K key, double longitude, double latitude, double distance, GeoArgs.Unit unit, GeoRadiusStoreArgs geoRadiusStoreArgs); + + /** + * Retrieve members selected by distance with the center of {@code member}. The member itself is always contained in the + * results. + * + * @param key the key of the geo set + * @param member reference member + * @param distance radius distance + * @param unit distance unit + * @return set of members + */ + Flux georadiusbymember(K key, V member, double distance, GeoArgs.Unit unit); + + /** + * Retrieve members selected by distance with the center of {@code member}. The member itself is always contained in the + * results. + * + * @param key the key of the geo set + * @param member reference member + * @param distance radius distance + * @param unit distance unit + * @param geoArgs args to control the result + * @return nested multi-bulk reply. The {@link GeoWithin} contains only fields which were requested by {@link GeoArgs} + */ + Flux> georadiusbymember(K key, V member, double distance, GeoArgs.Unit unit, GeoArgs geoArgs); + + /** + * Perform a {@link #georadiusbymember(Object, Object, double, GeoArgs.Unit, GeoArgs)} query and store the results in a sorted set. + * + * @param key the key of the geo set + * @param member reference member + * @param distance radius distance + * @param unit distance unit + * @param geoRadiusStoreArgs args to store either the resulting elements with their distance or the resulting elements with + * their locations a sorted set. + * @return Long integer-reply the number of elements in the result + */ + Mono georadiusbymember(K key, V member, double distance, GeoArgs.Unit unit, GeoRadiusStoreArgs geoRadiusStoreArgs); + + /** + * Get geo coordinates for the {@code members}. + * + * @param key the key of the geo set + * @param members the members + * + * @return a list of {@link GeoCoordinates}s representing the x,y position of each element specified in the arguments. For + * missing elements {@literal null} is returned. + */ + Flux> geopos(K key, V... members); + + /** + * Retrieve distance between points {@code from} and {@code to}. If one or more elements are missing {@literal null} is + * returned. Default in meters by, otherwise according to {@code unit} + * + * @param key the key of the geo set + * @param from from member + * @param to to member + * @param unit distance unit + * + * @return distance between points {@code from} and {@code to}. If one or more elements are missing {@literal null} is + * returned. + */ + Mono geodist(K key, V from, V to, GeoArgs.Unit unit); +} diff --git a/src/main/java/io/lettuce/core/api/reactive/RedisHLLReactiveCommands.java b/src/main/java/io/lettuce/core/api/reactive/RedisHLLReactiveCommands.java new file mode 100644 index 0000000000..8d8d6eea62 --- /dev/null +++ b/src/main/java/io/lettuce/core/api/reactive/RedisHLLReactiveCommands.java @@ -0,0 +1,63 @@ +/* + * Copyright 2017-2020 the original author or authors. 
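/*
 * Illustrative sketch for the reactive Geo commands documented above, not part of the patch.
 * Assumes an already connected RedisReactiveCommands<String, String>; the "Sicily" key and the
 * coordinates follow the familiar Redis GEO documentation example.
 */
import io.lettuce.core.GeoArgs;
import io.lettuce.core.api.reactive.RedisReactiveCommands;

class GeoReactiveSketch {

    static void demo(RedisReactiveCommands<String, String> reactive) {

        // Longitude first, then latitude (WGS84), then the member name.
        reactive.geoadd("Sicily", 13.361389, 38.115556, "Palermo").block();
        reactive.geoadd("Sicily", 15.087269, 37.502669, "Catania").block();

        // Members within 200 km of the given point, emitted as a Flux of member names.
        reactive.georadius("Sicily", 15.0, 37.0, 200.0, GeoArgs.Unit.km)
                .doOnNext(member -> System.out.println("within radius: " + member))
                .blockLast();
    }
}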
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api.reactive; + +import reactor.core.publisher.Mono; + +/** + * Reactive executed commands for HyperLogLog (PF* commands). + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.0 + * @generated by io.lettuce.apigenerator.CreateReactiveApi + */ +public interface RedisHLLReactiveCommands { + + /** + * Adds the specified elements to the specified HyperLogLog. + * + * @param key the key + * @param values the values + * + * @return Long integer-reply specifically: + * + * 1 if at least 1 HyperLogLog internal register was altered. 0 otherwise. + */ + Mono pfadd(K key, V... values); + + /** + * Merge N different HyperLogLogs into a single one. + * + * @param destkey the destination key + * @param sourcekeys the source key + * + * @return String simple-string-reply The command just returns {@code OK}. + */ + Mono pfmerge(K destkey, K... sourcekeys); + + /** + * Return the approximated cardinality of the set(s) observed by the HyperLogLog at key(s). + * + * @param keys the keys + * + * @return Long integer-reply specifically: + * + * The approximated number of unique elements observed via {@code PFADD}. + */ + Mono pfcount(K... keys); +} diff --git a/src/main/java/io/lettuce/core/api/reactive/RedisHashReactiveCommands.java b/src/main/java/io/lettuce/core/api/reactive/RedisHashReactiveCommands.java new file mode 100644 index 0000000000..187a8249be --- /dev/null +++ b/src/main/java/io/lettuce/core/api/reactive/RedisHashReactiveCommands.java @@ -0,0 +1,303 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api.reactive; + +import java.util.Map; + +import reactor.core.publisher.Flux; +import reactor.core.publisher.Mono; +import io.lettuce.core.*; +import io.lettuce.core.output.KeyStreamingChannel; +import io.lettuce.core.output.KeyValueStreamingChannel; +import io.lettuce.core.output.ValueStreamingChannel; + +/** + * Reactive executed commands for Hashes (Key-Value pairs). + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.0 + * @generated by io.lettuce.apigenerator.CreateReactiveApi + */ +public interface RedisHashReactiveCommands { + + /** + * Delete one or more hash fields. 
+ *
+ * @param key the key
+ * @param fields the field type: key
+ * @return Long integer-reply the number of fields that were removed from the hash, not including specified but non existing
+ * fields.
+ */
+ Mono hdel(K key, K... fields);
+
+ /**
+ * Determine if a hash field exists.
+ *
+ * @param key the key
+ * @param field the field type: key
+ * @return Boolean integer-reply specifically:
+ *
+ * {@literal true} if the hash contains {@code field}. {@literal false} if the hash does not contain {@code field},
+ * or {@code key} does not exist.
+ */
+ Mono hexists(K key, K field);
+
+ /**
+ * Get the value of a hash field.
+ *
+ * @param key the key
+ * @param field the field type: key
+ * @return V bulk-string-reply the value associated with {@code field}, or {@literal null} when {@code field} is not present
+ * in the hash or {@code key} does not exist.
+ */
+ Mono hget(K key, K field);
+
+ /**
+ * Increment the integer value of a hash field by the given number.
+ *
+ * @param key the key
+ * @param field the field type: key
+ * @param amount the increment type: long
+ * @return Long integer-reply the value at {@code field} after the increment operation.
+ */
+ Mono hincrby(K key, K field, long amount);
+
+ /**
+ * Increment the float value of a hash field by the given amount.
+ *
+ * @param key the key
+ * @param field the field type: key
+ * @param amount the increment type: double
+ * @return Double bulk-string-reply the value of {@code field} after the increment.
+ */
+ Mono hincrbyfloat(K key, K field, double amount);
+
+ /**
+ * Get all the fields and values in a hash.
+ *
+ * @param key the key
+ * @return Map<K,V> array-reply list of fields and their values stored in the hash, or an empty list when {@code key}
+ * does not exist.
+ */
+ Mono> hgetall(K key);
+
+ /**
+ * Stream over all the fields and values in a hash.
+ *
+ * @param channel the channel
+ * @param key the key
+ *
+ * @return Long count of the keys.
+ */
+ Mono hgetall(KeyValueStreamingChannel channel, K key);
+
+ /**
+ * Get all the fields in a hash.
+ *
+ * @param key the key
+ * @return K array-reply list of fields in the hash, or an empty list when {@code key} does not exist.
+ */
+ Flux hkeys(K key);
+
+ /**
+ * Stream over all the fields in a hash.
+ *
+ * @param channel the channel
+ * @param key the key
+ *
+ * @return Long count of the keys.
+ */
+ Mono hkeys(KeyStreamingChannel channel, K key);
+
+ /**
+ * Get the number of fields in a hash.
+ *
+ * @param key the key
+ * @return Long integer-reply number of fields in the hash, or {@code 0} when {@code key} does not exist.
+ */
+ Mono hlen(K key);
+
+ /**
+ * Get the values of all the given hash fields.
+ *
+ * @param key the key
+ * @param fields the field type: key
+ * @return V array-reply list of values associated with the given fields, in the same order as they are requested.
+ */
+ Flux> hmget(K key, K... fields);
+
+ /**
+ * Stream over the values of all the given hash fields.
+ *
+ * @param channel the channel
+ * @param key the key
+ * @param fields the fields
+ *
+ * @return Long count of the keys
+ */
+ Mono hmget(KeyValueStreamingChannel channel, K key, K... fields);
+
+ /**
+ * Set multiple hash fields to multiple values.
+ *
+ * @param key the key
+ * @param map the map of field/value pairs to set
+ * @return String simple-string-reply
+ */
+ Mono hmset(K key, Map map);
+
+ /**
+ * Incrementally iterate hash fields and associated values.
+ *
+ * @param key the key
+ * @return MapScanCursor<K, V> map scan cursor.
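/*
 * Illustrative sketch for the reactive Hash commands documented above, not part of the patch.
 * Assumes an already connected RedisReactiveCommands<String, String>; the key "user:42" and the
 * field names are made-up examples.
 */
import java.util.Map;

import io.lettuce.core.api.reactive.RedisReactiveCommands;

class HashReactiveSketch {

    static void demo(RedisReactiveCommands<String, String> reactive) {

        reactive.hset("user:42", "name", "Ada").block();
        reactive.hset("user:42", "email", "ada@example.com").block();

        // HGETALL materializes the whole hash as a single Map emission.
        Map<String, String> fields = reactive.hgetall("user:42").block();
        System.out.println("fields: " + fields);

        // HKEYS streams each field name individually, which suits larger hashes better.
        reactive.hkeys("user:42")
                .doOnNext(field -> System.out.println("field: " + field))
                .blockLast();
    }
}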
+ */ + Mono> hscan(K key); + + /** + * Incrementally iterate hash fields and associated values. + * + * @param key the key + * @param scanArgs scan arguments + * @return MapScanCursor<K, V> map scan cursor. + */ + Mono> hscan(K key, ScanArgs scanArgs); + + /** + * Incrementally iterate hash fields and associated values. + * + * @param key the key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @param scanArgs scan arguments + * @return MapScanCursor<K, V> map scan cursor. + */ + Mono> hscan(K key, ScanCursor scanCursor, ScanArgs scanArgs); + + /** + * Incrementally iterate hash fields and associated values. + * + * @param key the key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @return MapScanCursor<K, V> map scan cursor. + */ + Mono> hscan(K key, ScanCursor scanCursor); + + /** + * Incrementally iterate hash fields and associated values. + * + * @param channel streaming channel that receives a call for every key-value pair + * @param key the key + * @return StreamScanCursor scan cursor. + */ + Mono hscan(KeyValueStreamingChannel channel, K key); + + /** + * Incrementally iterate hash fields and associated values. + * + * @param channel streaming channel that receives a call for every key-value pair + * @param key the key + * @param scanArgs scan arguments + * @return StreamScanCursor scan cursor. + */ + Mono hscan(KeyValueStreamingChannel channel, K key, ScanArgs scanArgs); + + /** + * Incrementally iterate hash fields and associated values. + * + * @param channel streaming channel that receives a call for every key-value pair + * @param key the key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @param scanArgs scan arguments + * @return StreamScanCursor scan cursor. + */ + Mono hscan(KeyValueStreamingChannel channel, K key, ScanCursor scanCursor, ScanArgs scanArgs); + + /** + * Incrementally iterate hash fields and associated values. + * + * @param channel streaming channel that receives a call for every key-value pair + * @param key the key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @return StreamScanCursor scan cursor. + */ + Mono hscan(KeyValueStreamingChannel channel, K key, ScanCursor scanCursor); + + /** + * Set the string value of a hash field. + * + * @param key the key + * @param field the field type: key + * @param value the value + * @return Boolean integer-reply specifically: + * + * {@literal true} if {@code field} is a new field in the hash and {@code value} was set. {@literal false} if + * {@code field} already exists in the hash and the value was updated. + */ + Mono hset(K key, K field, V value); + + /** + * Set multiple hash fields to multiple values. + * + * @param key the key of the hash + * @param map the field/value pairs to update + * @return Long integer-reply: the number of fields that were added. + * @since 5.3 + */ + Mono hset(K key, Map map); + + /** + * Set the value of a hash field, only if the field does not exist. + * + * @param key the key + * @param field the field type: key + * @param value the value + * @return Boolean integer-reply specifically: + * + * {@code 1} if {@code field} is a new field in the hash and {@code value} was set. {@code 0} if {@code field} + * already exists in the hash and no operation was performed. + */ + Mono hsetnx(K key, K field, V value); + + /** + * Get the string length of the field value in a hash. 
+ * + * @param key the key + * @param field the field type: key + * @return Long integer-reply the string length of the {@code field} value, or {@code 0} when {@code field} is not present + * in the hash or {@code key} does not exist at all. + */ + Mono hstrlen(K key, K field); + + /** + * Get all the values in a hash. + * + * @param key the key + * @return V array-reply list of values in the hash, or an empty list when {@code key} does not exist. + */ + Flux hvals(K key); + + /** + * Stream over all the values in a hash. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * + * @return Long count of the keys. + */ + Mono hvals(ValueStreamingChannel channel, K key); +} diff --git a/src/main/java/io/lettuce/core/api/reactive/RedisKeyReactiveCommands.java b/src/main/java/io/lettuce/core/api/reactive/RedisKeyReactiveCommands.java new file mode 100644 index 0000000000..0078c708c1 --- /dev/null +++ b/src/main/java/io/lettuce/core/api/reactive/RedisKeyReactiveCommands.java @@ -0,0 +1,421 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api.reactive; + +import java.util.Date; + +import reactor.core.publisher.Flux; +import reactor.core.publisher.Mono; +import io.lettuce.core.*; +import io.lettuce.core.output.KeyStreamingChannel; +import io.lettuce.core.output.ValueStreamingChannel; + +/** + * Reactive executed commands for Keys (Key manipulation/querying). + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.0 + * @generated by io.lettuce.apigenerator.CreateReactiveApi + */ +public interface RedisKeyReactiveCommands { + + /** + * Delete one or more keys. + * + * @param keys the keys + * @return Long integer-reply The number of keys that were removed. + */ + Mono del(K... keys); + + /** + * Unlink one or more keys (non blocking DEL). + * + * @param keys the keys + * @return Long integer-reply The number of keys that were removed. + */ + Mono unlink(K... keys); + + /** + * Return a serialized version of the value stored at the specified key. + * + * @param key the key + * @return byte[] bulk-string-reply the serialized value. + */ + Mono dump(K key); + + /** + * Determine how many keys exist. + * + * @param keys the keys + * @return Long integer-reply specifically: Number of existing keys + */ + Mono exists(K... keys); + + /** + * Set a key's time to live in seconds. + * + * @param key the key + * @param seconds the seconds type: long + * @return Boolean integer-reply specifically: + * + * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not + * be set. + */ + Mono expire(K key, long seconds); + + /** + * Set the expiration for a key as a UNIX timestamp. + * + * @param key the key + * @param timestamp the timestamp type: posix time + * @return Boolean integer-reply specifically: + * + * {@literal true} if the timeout was set. 
{@literal false} if {@code key} does not exist or the timeout could not + * be set (see: {@code EXPIRE}). + */ + Mono expireat(K key, Date timestamp); + + /** + * Set the expiration for a key as a UNIX timestamp. + * + * @param key the key + * @param timestamp the timestamp type: posix time + * @return Boolean integer-reply specifically: + * + * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not + * be set (see: {@code EXPIRE}). + */ + Mono expireat(K key, long timestamp); + + /** + * Find all keys matching the given pattern. + * + * @param pattern the pattern type: patternkey (pattern) + * @return K array-reply list of keys matching {@code pattern}. + */ + Flux keys(K pattern); + + /** + * Find all keys matching the given pattern. + * + * @param channel the channel + * @param pattern the pattern + * @return Long array-reply list of keys matching {@code pattern}. + */ + Mono keys(KeyStreamingChannel channel, K pattern); + + /** + * Atomically transfer a key from a Redis instance to another one. + * + * @param host the host + * @param port the port + * @param key the key + * @param db the database + * @param timeout the timeout in milliseconds + * @return String simple-string-reply The command returns OK on success. + */ + Mono migrate(String host, int port, K key, int db, long timeout); + + /** + * Atomically transfer one or more keys from a Redis instance to another one. + * + * @param host the host + * @param port the port + * @param db the database + * @param timeout the timeout in milliseconds + * @param migrateArgs migrate args that allow to configure further options + * @return String simple-string-reply The command returns OK on success. + */ + Mono migrate(String host, int port, int db, long timeout, MigrateArgs migrateArgs); + + /** + * Move a key to another database. + * + * @param key the key + * @param db the db type: long + * @return Boolean integer-reply specifically: + */ + Mono move(K key, int db); + + /** + * returns the kind of internal representation used in order to store the value associated with a key. + * + * @param key the key + * @return String + */ + Mono objectEncoding(K key); + + /** + * returns the number of seconds since the object stored at the specified key is idle (not requested by read or write + * operations). + * + * @param key the key + * @return number of seconds since the object stored at the specified key is idle. + */ + Mono objectIdletime(K key); + + /** + * returns the number of references of the value associated with the specified key. + * + * @param key the key + * @return Long + */ + Mono objectRefcount(K key); + + /** + * Remove the expiration from a key. + * + * @param key the key + * @return Boolean integer-reply specifically: + * + * {@literal true} if the timeout was removed. {@literal false} if {@code key} does not exist or does not have an + * associated timeout. + */ + Mono persist(K key); + + /** + * Set a key's time to live in milliseconds. + * + * @param key the key + * @param milliseconds the milliseconds type: long + * @return integer-reply, specifically: + * + * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not + * be set. + */ + Mono pexpire(K key, long milliseconds); + + /** + * Set the expiration for a key as a UNIX timestamp specified in milliseconds. 
+ * + * @param key the key + * @param timestamp the milliseconds-timestamp type: posix time + * @return Boolean integer-reply specifically: + * + * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not + * be set (see: {@code EXPIRE}). + */ + Mono pexpireat(K key, Date timestamp); + + /** + * Set the expiration for a key as a UNIX timestamp specified in milliseconds. + * + * @param key the key + * @param timestamp the milliseconds-timestamp type: posix time + * @return Boolean integer-reply specifically: + * + * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not + * be set (see: {@code EXPIRE}). + */ + Mono pexpireat(K key, long timestamp); + + /** + * Get the time to live for a key in milliseconds. + * + * @param key the key + * @return Long integer-reply TTL in milliseconds, or a negative value in order to signal an error (see the description + * above). + */ + Mono pttl(K key); + + /** + * Return a random key from the keyspace. + * + * @return K bulk-string-reply the random key, or {@literal null} when the database is empty. + */ + Mono randomkey(); + + /** + * Rename a key. + * + * @param key the key + * @param newKey the newkey type: key + * @return String simple-string-reply + */ + Mono rename(K key, K newKey); + + /** + * Rename a key, only if the new key does not exist. + * + * @param key the key + * @param newKey the newkey type: key + * @return Boolean integer-reply specifically: + * + * {@literal true} if {@code key} was renamed to {@code newkey}. {@literal false} if {@code newkey} already exists. + */ + Mono renamenx(K key, K newKey); + + /** + * Create a key using the provided serialized value, previously obtained using DUMP. + * + * @param key the key + * @param ttl the ttl type: long + * @param value the serialized-value type: string + * @return String simple-string-reply The command returns OK on success. + */ + Mono restore(K key, long ttl, byte[] value); + + /** + * Create a key using the provided serialized value, previously obtained using DUMP. + * + * @param key the key + * @param value the serialized-value type: string + * @param args the {@link RestoreArgs}, must not be {@literal null}. + * @return String simple-string-reply The command returns OK on success. + * @since 5.1 + */ + Mono restore(K key, byte[] value, RestoreArgs args); + + /** + * Sort the elements in a list, set or sorted set. + * + * @param key the key + * @return V array-reply list of sorted elements. + */ + Flux sort(K key); + + /** + * Sort the elements in a list, set or sorted set. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @return Long number of values. + */ + Mono sort(ValueStreamingChannel channel, K key); + + /** + * Sort the elements in a list, set or sorted set. + * + * @param key the key + * @param sortArgs sort arguments + * @return V array-reply list of sorted elements. + */ + Flux sort(K key, SortArgs sortArgs); + + /** + * Sort the elements in a list, set or sorted set. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param sortArgs sort arguments + * @return Long number of values. + */ + Mono sort(ValueStreamingChannel channel, K key, SortArgs sortArgs); + + /** + * Sort the elements in a list, set or sorted set. 
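As a further illustration of the key commands in this interface, the sketch below (hypothetical key names, not part of the diff) sorts a list with SortArgs and copies a key through DUMP/RESTORE.

    import io.lettuce.core.SortArgs;
    import io.lettuce.core.api.reactive.RedisKeyReactiveCommands;

    class SortAndDumpSketch {

        static void sortAndCopy(RedisKeyReactiveCommands<String, String> commands) {
            // SORT with arguments: treat elements as strings, descending order, first ten results only.
            commands.sort("recent:logins", SortArgs.Builder.alpha().desc().limit(0, 10))
                    .collectList()
                    .subscribe(values -> System.out.println("sorted: " + values));

            // Copy a key via its serialized form; ttl 0 means no expiry, and RESTORE fails if the destination exists.
            commands.dump("recent:logins")
                    .flatMap(bytes -> commands.restore("recent:logins:copy", 0, bytes))
                    .subscribe(status -> System.out.println("RESTORE: " + status));
        }
    }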
+ * + * @param key the key + * @param sortArgs sort arguments + * @param destination the destination key to store sort results + * @return Long number of values. + */ + Mono sortStore(K key, SortArgs sortArgs, K destination); + + /** + * Touch one or more keys. Touch sets the last accessed time for a key. Non-exsitent keys wont get created. + * + * @param keys the keys + * @return Long integer-reply the number of found keys. + */ + Mono touch(K... keys); + + /** + * Get the time to live for a key. + * + * @param key the key + * @return Long integer-reply TTL in seconds, or a negative value in order to signal an error (see the description above). + */ + Mono ttl(K key); + + /** + * Determine the type stored at key. + * + * @param key the key + * @return String simple-string-reply type of {@code key}, or {@code none} when {@code key} does not exist. + */ + Mono type(K key); + + /** + * Incrementally iterate the keys space. + * + * @return KeyScanCursor<K> scan cursor. + */ + Mono> scan(); + + /** + * Incrementally iterate the keys space. + * + * @param scanArgs scan arguments + * @return KeyScanCursor<K> scan cursor. + */ + Mono> scan(ScanArgs scanArgs); + + /** + * Incrementally iterate the keys space. + * + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @param scanArgs scan arguments + * @return KeyScanCursor<K> scan cursor. + */ + Mono> scan(ScanCursor scanCursor, ScanArgs scanArgs); + + /** + * Incrementally iterate the keys space. + * + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @return KeyScanCursor<K> scan cursor. + */ + Mono> scan(ScanCursor scanCursor); + + /** + * Incrementally iterate the keys space. + * + * @param channel streaming channel that receives a call for every key + * @return StreamScanCursor scan cursor. + */ + Mono scan(KeyStreamingChannel channel); + + /** + * Incrementally iterate the keys space. + * + * @param channel streaming channel that receives a call for every key + * @param scanArgs scan arguments + * @return StreamScanCursor scan cursor. + */ + Mono scan(KeyStreamingChannel channel, ScanArgs scanArgs); + + /** + * Incrementally iterate the keys space. + * + * @param channel streaming channel that receives a call for every key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @param scanArgs scan arguments + * @return StreamScanCursor scan cursor. + */ + Mono scan(KeyStreamingChannel channel, ScanCursor scanCursor, ScanArgs scanArgs); + + /** + * Incrementally iterate the keys space. + * + * @param channel streaming channel that receives a call for every key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @return StreamScanCursor scan cursor. + */ + Mono scan(KeyStreamingChannel channel, ScanCursor scanCursor); +} diff --git a/src/main/java/io/lettuce/core/api/reactive/RedisListReactiveCommands.java b/src/main/java/io/lettuce/core/api/reactive/RedisListReactiveCommands.java new file mode 100644 index 0000000000..bc4eaffae9 --- /dev/null +++ b/src/main/java/io/lettuce/core/api/reactive/RedisListReactiveCommands.java @@ -0,0 +1,211 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api.reactive; + +import reactor.core.publisher.Flux; +import reactor.core.publisher.Mono; +import io.lettuce.core.KeyValue; +import io.lettuce.core.output.ValueStreamingChannel; + +/** + * Reactive executed commands for Lists. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.0 + * @generated by io.lettuce.apigenerator.CreateReactiveApi + */ +public interface RedisListReactiveCommands { + + /** + * Remove and get the first element in a list, or block until one is available. + * + * @param timeout the timeout in seconds + * @param keys the keys + * @return KeyValue<K,V> array-reply specifically: + * + * A {@literal null} multi-bulk when no element could be popped and the timeout expired. A two-element multi-bulk + * with the first element being the name of the key where an element was popped and the second element being the + * value of the popped element. + */ + Mono> blpop(long timeout, K... keys); + + /** + * Remove and get the last element in a list, or block until one is available. + * + * @param timeout the timeout in seconds + * @param keys the keys + * @return KeyValue<K,V> array-reply specifically: + * + * A {@literal null} multi-bulk when no element could be popped and the timeout expired. A two-element multi-bulk + * with the first element being the name of the key where an element was popped and the second element being the + * value of the popped element. + */ + Mono> brpop(long timeout, K... keys); + + /** + * Pop a value from a list, push it to another list and return it; or block until one is available. + * + * @param timeout the timeout in seconds + * @param source the source key + * @param destination the destination type: key + * @return V bulk-string-reply the element being popped from {@code source} and pushed to {@code destination}. If + * {@code timeout} is reached, a + */ + Mono brpoplpush(long timeout, K source, K destination); + + /** + * Get an element from a list by its index. + * + * @param key the key + * @param index the index type: long + * @return V bulk-string-reply the requested element, or {@literal null} when {@code index} is out of range. + */ + Mono lindex(K key, long index); + + /** + * Insert an element before or after another element in a list. + * + * @param key the key + * @param before the before + * @param pivot the pivot + * @param value the value + * @return Long integer-reply the length of the list after the insert operation, or {@code -1} when the value {@code pivot} + * was not found. + */ + Mono linsert(K key, boolean before, V pivot, V value); + + /** + * Get the length of a list. + * + * @param key the key + * @return Long integer-reply the length of the list at {@code key}. + */ + Mono llen(K key); + + /** + * Remove and get the first element in a list. + * + * @param key the key + * @return V bulk-string-reply the value of the first element, or {@literal null} when {@code key} does not exist. + */ + Mono lpop(K key); + + /** + * Prepend one or multiple values to a list. 
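A small usage sketch for the list commands declared here; the queue key names are invented for the example and the snippet is illustrative only.

    import io.lettuce.core.api.reactive.RedisListReactiveCommands;

    class ListCommandsSketch {

        static void produceAndDrain(RedisListReactiveCommands<String, String> commands) {
            commands.lpush("queue:in", "a", "b", "c")                        // Mono<Long>: length after the push
                    .thenMany(commands.lrange("queue:in", 0, -1))            // Flux<String>: every element
                    .doOnNext(element -> System.out.println("queued: " + element))
                    .then(commands.rpoplpush("queue:in", "queue:work"))      // atomically move the tail element
                    .subscribe(moved -> System.out.println("moved: " + moved));
        }
    }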
+ * + * @param key the key + * @param values the value + * @return Long integer-reply the length of the list after the push operations. + */ + Mono lpush(K key, V... values); + + /** + * Prepend values to a list, only if the list exists. + * + * @param key the key + * @param values the values + * @return Long integer-reply the length of the list after the push operation. + */ + Mono lpushx(K key, V... values); + + /** + * Get a range of elements from a list. + * + * @param key the key + * @param start the start type: long + * @param stop the stop type: long + * @return V array-reply list of elements in the specified range. + */ + Flux lrange(K key, long start, long stop); + + /** + * Get a range of elements from a list. + * + * @param channel the channel + * @param key the key + * @param start the start type: long + * @param stop the stop type: long + * @return Long count of elements in the specified range. + */ + Mono lrange(ValueStreamingChannel channel, K key, long start, long stop); + + /** + * Remove elements from a list. + * + * @param key the key + * @param count the count type: long + * @param value the value + * @return Long integer-reply the number of removed elements. + */ + Mono lrem(K key, long count, V value); + + /** + * Set the value of an element in a list by its index. + * + * @param key the key + * @param index the index type: long + * @param value the value + * @return String simple-string-reply + */ + Mono lset(K key, long index, V value); + + /** + * Trim a list to the specified range. + * + * @param key the key + * @param start the start type: long + * @param stop the stop type: long + * @return String simple-string-reply + */ + Mono ltrim(K key, long start, long stop); + + /** + * Remove and get the last element in a list. + * + * @param key the key + * @return V bulk-string-reply the value of the last element, or {@literal null} when {@code key} does not exist. + */ + Mono rpop(K key); + + /** + * Remove the last element in a list, append it to another list and return it. + * + * @param source the source key + * @param destination the destination type: key + * @return V bulk-string-reply the element being popped and pushed. + */ + Mono rpoplpush(K source, K destination); + + /** + * Append one or multiple values to a list. + * + * @param key the key + * @param values the value + * @return Long integer-reply the length of the list after the push operation. + */ + Mono rpush(K key, V... values); + + /** + * Append values to a list, only if the list exists. + * + * @param key the key + * @param values the values + * @return Long integer-reply the length of the list after the push operation. + */ + Mono rpushx(K key, V... values); +} diff --git a/src/main/java/io/lettuce/core/api/reactive/RedisReactiveCommands.java b/src/main/java/io/lettuce/core/api/reactive/RedisReactiveCommands.java new file mode 100644 index 0000000000..0c999add37 --- /dev/null +++ b/src/main/java/io/lettuce/core/api/reactive/RedisReactiveCommands.java @@ -0,0 +1,77 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api.reactive; + +import reactor.core.publisher.Mono; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.cluster.api.reactive.RedisClusterReactiveCommands; + +/** + * A complete reactive and thread-safe Redis API with 400+ Methods. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 5.0 + */ +public interface RedisReactiveCommands extends BaseRedisReactiveCommands, RedisClusterReactiveCommands, + RedisGeoReactiveCommands, RedisHashReactiveCommands, RedisHLLReactiveCommands, + RedisKeyReactiveCommands, RedisListReactiveCommands, RedisScriptingReactiveCommands, + RedisServerReactiveCommands, RedisSetReactiveCommands, RedisSortedSetReactiveCommands, + RedisStreamReactiveCommands, RedisStringReactiveCommands, RedisTransactionalReactiveCommands { + + /** + * Authenticate to the server. + * + * @param password the password + * @return String simple-string-reply + */ + Mono auth(CharSequence password); + + /** + * Authenticate to the server with username and password. Requires Redis 6 or newer. + * + * @param username the username + * @param password the password + * @return String simple-string-reply + * @since 6.0 + */ + Mono auth(String username, CharSequence password); + + /** + * Change the selected database for the current connection. + * + * @param db the database number + * @return String simple-string-reply + */ + Mono select(int db); + + /** + * Swap two Redis databases, so that immediately all the clients connected to a given DB will see the data of the other DB, + * and the other way around + * + * @param db1 the first database number + * @param db2 the second database number + * @return String simple-string-reply + */ + Mono swapdb(int db1, int db2); + + /** + * @return the underlying connection. + */ + StatefulRedisConnection getStatefulConnection(); + +} diff --git a/src/main/java/io/lettuce/core/api/reactive/RedisScriptingReactiveCommands.java b/src/main/java/io/lettuce/core/api/reactive/RedisScriptingReactiveCommands.java new file mode 100644 index 0000000000..d9a1b92893 --- /dev/null +++ b/src/main/java/io/lettuce/core/api/reactive/RedisScriptingReactiveCommands.java @@ -0,0 +1,164 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api.reactive; + +import reactor.core.publisher.Flux; +import reactor.core.publisher.Mono; +import io.lettuce.core.ScriptOutputType; + +/** + * Reactive executed commands for Scripting. {@link java.lang.String Lua scripts} are encoded by using the configured + * {@link io.lettuce.core.ClientOptions#getScriptCharset() charset}. + * + * @param Key type. + * @param Value type. 
+ * @author Mark Paluch + * @since 4.0 + * @generated by io.lettuce.apigenerator.CreateReactiveApi + */ +public interface RedisScriptingReactiveCommands { + + /** + * Execute a Lua script server side. + * + * @param script Lua 5.1 script. + * @param type output type + * @param keys key names + * @param expected return type + * @return script result + */ + Flux eval(String script, ScriptOutputType type, K... keys); + + /** + * Execute a Lua script server side. + * + * @param script Lua 5.1 script. + * @param type output type + * @param keys key names + * @param expected return type + * @return script result + * @since 6.0 + */ + Flux eval(byte[] script, ScriptOutputType type, K... keys); + + /** + * Execute a Lua script server side. + * + * @param script Lua 5.1 script. + * @param type the type + * @param keys the keys + * @param values the values + * @param expected return type + * @return script result + */ + Flux eval(String script, ScriptOutputType type, K[] keys, V... values); + + /** + * Execute a Lua script server side. + * + * @param script Lua 5.1 script. + * @param type the type + * @param keys the keys + * @param values the values + * @param expected return type + * @return script result + * @since 6.0 + */ + Flux eval(byte[] script, ScriptOutputType type, K[] keys, V... values); + + /** + * Evaluates a script cached on the server side by its SHA1 digest + * + * @param digest SHA1 of the script + * @param type the type + * @param keys the keys + * @param expected return type + * @return script result + */ + Flux evalsha(String digest, ScriptOutputType type, K... keys); + + /** + * Execute a Lua script server side. + * + * @param digest SHA1 of the script + * @param type the type + * @param keys the keys + * @param values the values + * @param expected return type + * @return script result + */ + Flux evalsha(String digest, ScriptOutputType type, K[] keys, V... values); + + /** + * Check existence of scripts in the script cache. + * + * @param digests script digests + * @return Boolean array-reply The command returns an array of integers that correspond to the specified SHA1 + * digest arguments. For every corresponding SHA1 digest of a script that actually exists in the script cache, an 1 + * is returned, otherwise 0 is returned. + */ + Flux scriptExists(String... digests); + + /** + * Remove all the scripts from the script cache. + * + * @return String simple-string-reply + */ + Mono scriptFlush(); + + /** + * Kill the script currently in execution. + * + * @return String simple-string-reply + */ + Mono scriptKill(); + + /** + * Load the specified Lua script into the script cache. + * + * @param script script content + * @return String bulk-string-reply This command returns the SHA1 digest of the script added into the script cache. + * @since 6.0 + */ + Mono scriptLoad(String script); + + /** + * Load the specified Lua script into the script cache. + * + * @param script script content + * @return String bulk-string-reply This command returns the SHA1 digest of the script added into the script cache. + * @since 6.0 + */ + Mono scriptLoad(byte[] script); + + /** + * Create a SHA1 digest from a Lua script. + * + * @param script script content + * @return the SHA1 value + * @since 6.0 + */ + String digest(String script); + + /** + * Create a SHA1 digest from a Lua script. 
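To make the EVAL/EVALSHA/SCRIPT LOAD contract above concrete, here is a hedged sketch (the key name "counter" and the increment script are assumptions) that caches a script and invokes it by digest.

    import io.lettuce.core.ScriptOutputType;
    import io.lettuce.core.api.reactive.RedisScriptingReactiveCommands;

    class ScriptingSketch {

        static void loadAndRun(RedisScriptingReactiveCommands<String, String> commands) {
            String script = "return redis.call('incrby', KEYS[1], ARGV[1])";

            // Load the script into the server-side cache, then call it by SHA1 digest.
            commands.scriptLoad(script)
                    .flatMapMany(sha -> commands.<Long> evalsha(sha, ScriptOutputType.INTEGER,
                            new String[] { "counter" }, "5"))
                    .subscribe(value -> System.out.println("counter is now " + value));
        }
    }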
+ * + * @param script script content + * @return the SHA1 value + * @since 6.0 + */ + String digest(byte[] script); +} diff --git a/src/main/java/io/lettuce/core/api/reactive/RedisServerReactiveCommands.java b/src/main/java/io/lettuce/core/api/reactive/RedisServerReactiveCommands.java new file mode 100644 index 0000000000..cb645b4a01 --- /dev/null +++ b/src/main/java/io/lettuce/core/api/reactive/RedisServerReactiveCommands.java @@ -0,0 +1,374 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api.reactive; + +import java.util.Date; +import java.util.Map; + +import reactor.core.publisher.Flux; +import reactor.core.publisher.Mono; +import io.lettuce.core.KillArgs; +import io.lettuce.core.UnblockType; +import io.lettuce.core.protocol.CommandType; + +/** + * Reactive executed commands for Server Control. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.0 + * @generated by io.lettuce.apigenerator.CreateReactiveApi + */ +public interface RedisServerReactiveCommands { + + /** + * Asynchronously rewrite the append-only file. + * + * @return String simple-string-reply always {@code OK}. + */ + Mono bgrewriteaof(); + + /** + * Asynchronously save the dataset to disk. + * + * @return String simple-string-reply + */ + Mono bgsave(); + + /** + * Get the current connection name. + * + * @return K bulk-string-reply The connection name, or a null bulk reply if no name is set. + */ + Mono clientGetname(); + + /** + * Set the current connection name. + * + * @param name the client name + * @return simple-string-reply {@code OK} if the connection name was successfully set. + */ + Mono clientSetname(K name); + + /** + * Kill the connection of a client identified by ip:port. + * + * @param addr ip:port + * @return String simple-string-reply {@code OK} if the connection exists and has been closed + */ + Mono clientKill(String addr); + + /** + * Kill connections of clients which are filtered by {@code killArgs} + * + * @param killArgs args for the kill operation + * @return Long integer-reply number of killed connections + */ + Mono clientKill(KillArgs killArgs); + + /** + * Unblock the specified blocked client. + * + * @param id the client id. + * @param type unblock type. + * @return Long integer-reply number of unblocked connections. + * @since 5.1 + */ + Mono clientUnblock(long id, UnblockType type); + + /** + * Stop processing commands from clients for some time. + * + * @param timeout the timeout value in milliseconds + * @return String simple-string-reply The command returns OK or an error if the timeout is invalid. + */ + Mono clientPause(long timeout); + + /** + * Get the list of client connections. + * + * @return String bulk-string-reply a unique string, formatted as follows: One client connection per line (separated by LF), + * each line is composed of a succession of property=value fields separated by a space character. 
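The connection-introspection commands in this interface (CLIENT SETNAME/GETNAME/LIST) can be chained as in this illustrative sketch; the connection name is an arbitrary example value.

    import io.lettuce.core.api.reactive.RedisServerReactiveCommands;

    class ClientInfoSketch {

        static void nameAndList(RedisServerReactiveCommands<String, String> commands) {
            commands.clientSetname("billing-worker-1")           // Mono<String>: OK once the name is set
                    .then(commands.clientGetname())              // Mono<String>: the name assigned above
                    .doOnNext(name -> System.out.println("connection name: " + name))
                    .then(commands.clientList())                 // Mono<String>: one line per connected client
                    .subscribe(System.out::println);
        }
    }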
+ */ + Mono clientList(); + + /** + * Get the id of the current connection. + * + * @return Long The command just returns the ID of the current connection. + * @since 5.3 + */ + Mono clientId(); + + /** + * Returns an array reply of details about all Redis commands. + * + * @return Object array-reply + */ + Flux command(); + + /** + * Returns an array reply of details about the requested commands. + * + * @param commands the commands to query for + * @return Object array-reply + */ + Flux commandInfo(String... commands); + + /** + * Returns an array reply of details about the requested commands. + * + * @param commands the commands to query for + * @return Object array-reply + */ + Flux commandInfo(CommandType... commands); + + /** + * Get total number of Redis commands. + * + * @return Long integer-reply of number of total commands in this Redis server. + */ + Mono commandCount(); + + /** + * Get the value of a configuration parameter. + * + * @param parameter name of the parameter + * @return Map<String, String> bulk-string-reply + */ + Mono> configGet(String parameter); + + /** + * Reset the stats returned by INFO. + * + * @return String simple-string-reply always {@code OK}. + */ + Mono configResetstat(); + + /** + * Rewrite the configuration file with the in memory configuration. + * + * @return String simple-string-reply {@code OK} when the configuration was rewritten properly. Otherwise an error is + * returned. + */ + Mono configRewrite(); + + /** + * Set a configuration parameter to the given value. + * + * @param parameter the parameter name + * @param value the parameter value + * @return String simple-string-reply: {@code OK} when the configuration was set properly. Otherwise an error is returned. + */ + Mono configSet(String parameter, String value); + + /** + * Return the number of keys in the selected database. + * + * @return Long integer-reply + */ + Mono dbsize(); + + /** + * Crash and recover + * + * @param delay optional delay in milliseconds + * @return String simple-string-reply + */ + Mono debugCrashAndRecover(Long delay); + + /** + * Get debugging information about the internal hash-table state. + * + * @param db the database number + * @return String simple-string-reply + */ + Mono debugHtstats(int db); + + /** + * Get debugging information about a key. + * + * @param key the key + * @return String simple-string-reply + */ + Mono debugObject(K key); + + /** + * Make the server crash: Out of memory. + * + * @return nothing, because the server crashes before returning. + */ + Mono debugOom(); + + /** + * Make the server crash: Invalid pointer access. + * + * @return nothing, because the server crashes before returning. + */ + Mono debugSegfault(); + + /** + * Save RDB, clear the database and reload RDB. + * + * @return String simple-string-reply The commands returns OK on success. + */ + Mono debugReload(); + + /** + * Restart the server gracefully. + * + * @param delay optional delay in milliseconds + * @return String simple-string-reply + */ + Mono debugRestart(Long delay); + + /** + * Get debugging information about the internal SDS length. + * + * @param key the key + * @return String simple-string-reply + */ + Mono debugSdslen(K key); + + /** + * Remove all keys from all databases. + * + * @return String simple-string-reply + */ + Mono flushall(); + + /** + * Remove all keys asynchronously from all databases. + * + * @return String simple-string-reply + */ + Mono flushallAsync(); + + /** + * Remove all keys from the current database. 
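A short, illustrative sketch of the configuration and statistics commands in this interface; the parameter and section names are common examples, not requirements.

    import io.lettuce.core.api.reactive.RedisServerReactiveCommands;

    class ServerStatsSketch {

        static void printStats(RedisServerReactiveCommands<String, String> commands) {
            commands.dbsize()                                     // Mono<Long>: keys in the selected database
                    .doOnNext(keys -> System.out.println("keys: " + keys))
                    .then(commands.configGet("maxmemory"))        // Mono<Map<String, String>>
                    .doOnNext(config -> System.out.println("config: " + config))
                    .then(commands.info("memory"))                // Mono<String>: one INFO section
                    .subscribe(System.out::println);
        }
    }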
+ * + * @return String simple-string-reply + */ + Mono flushdb(); + + /** + * Remove all keys asynchronously from the current database. + * + * @return String simple-string-reply + */ + Mono flushdbAsync(); + + /** + * Get information and statistics about the server. + * + * @return String bulk-string-reply as a collection of text lines. + */ + Mono info(); + + /** + * Get information and statistics about the server. + * + * @param section the section type: string + * @return String bulk-string-reply as a collection of text lines. + */ + Mono info(String section); + + /** + * Get the UNIX time stamp of the last successful save to disk. + * + * @return Date integer-reply an UNIX time stamp. + */ + Mono lastsave(); + + /** + * Reports the number of bytes that a key and its value require to be stored in RAM. + * + * @return memory usage in bytes. + * @since 5.2 + */ + Mono memoryUsage(K key); + + /** + * Synchronously save the dataset to disk. + * + * @return String simple-string-reply The commands returns OK on success. + */ + Mono save(); + + /** + * Synchronously save the dataset to disk and then shut down the server. + * + * @param save {@literal true} force save operation + */ + Mono shutdown(boolean save); + + /** + * Make the server a replica of another instance, or promote it as master. + * + * @param host the host type: string + * @param port the port type: string + * @return String simple-string-reply + */ + Mono slaveof(String host, int port); + + /** + * Promote server as master. + * + * @return String simple-string-reply + */ + Mono slaveofNoOne(); + + /** + * Read the slow log. + * + * @return Object deeply nested multi bulk replies + */ + Flux slowlogGet(); + + /** + * Read the slow log. + * + * @param count the count + * @return Object deeply nested multi bulk replies + */ + Flux slowlogGet(int count); + + /** + * Obtaining the current length of the slow log. + * + * @return Long length of the slow log. + */ + Mono slowlogLen(); + + /** + * Resetting the slow log. + * + * @return String simple-string-reply The commands returns OK on success. + */ + Mono slowlogReset(); + + /** + * Return the current server time. + * + * @return V array-reply specifically: + * + * A multi bulk reply containing two elements: + * + * unix time in seconds. microseconds. + */ + Flux time(); +} diff --git a/src/main/java/io/lettuce/core/api/reactive/RedisSetReactiveCommands.java b/src/main/java/io/lettuce/core/api/reactive/RedisSetReactiveCommands.java new file mode 100644 index 0000000000..0dbe038a67 --- /dev/null +++ b/src/main/java/io/lettuce/core/api/reactive/RedisSetReactiveCommands.java @@ -0,0 +1,307 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.api.reactive; + +import reactor.core.publisher.Flux; +import reactor.core.publisher.Mono; +import io.lettuce.core.ScanArgs; +import io.lettuce.core.ScanCursor; +import io.lettuce.core.StreamScanCursor; +import io.lettuce.core.ValueScanCursor; +import io.lettuce.core.output.ValueStreamingChannel; + +/** + * Reactive executed commands for Sets. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.0 + * @generated by io.lettuce.apigenerator.CreateReactiveApi + */ +public interface RedisSetReactiveCommands { + + /** + * Add one or more members to a set. + * + * @param key the key + * @param members the member type: value + * @return Long integer-reply the number of elements that were added to the set, not including all the elements already + * present into the set. + */ + Mono sadd(K key, V... members); + + /** + * Get the number of members in a set. + * + * @param key the key + * @return Long integer-reply the cardinality (number of elements) of the set, or {@literal false} if {@code key} does not + * exist. + */ + Mono scard(K key); + + /** + * Subtract multiple sets. + * + * @param keys the key + * @return V array-reply list with members of the resulting set. + */ + Flux sdiff(K... keys); + + /** + * Subtract multiple sets. + * + * @param channel the channel + * @param keys the keys + * @return Long count of members of the resulting set. + */ + Mono sdiff(ValueStreamingChannel channel, K... keys); + + /** + * Subtract multiple sets and store the resulting set in a key. + * + * @param destination the destination type: key + * @param keys the key + * @return Long integer-reply the number of elements in the resulting set. + */ + Mono sdiffstore(K destination, K... keys); + + /** + * Intersect multiple sets. + * + * @param keys the key + * @return V array-reply list with members of the resulting set. + */ + Flux sinter(K... keys); + + /** + * Intersect multiple sets. + * + * @param channel the channel + * @param keys the keys + * @return Long count of members of the resulting set. + */ + Mono sinter(ValueStreamingChannel channel, K... keys); + + /** + * Intersect multiple sets and store the resulting set in a key. + * + * @param destination the destination type: key + * @param keys the key + * @return Long integer-reply the number of elements in the resulting set. + */ + Mono sinterstore(K destination, K... keys); + + /** + * Determine if a given value is a member of a set. + * + * @param key the key + * @param member the member type: value + * @return Boolean integer-reply specifically: + * + * {@literal true} if the element is a member of the set. {@literal false} if the element is not a member of the + * set, or if {@code key} does not exist. + */ + Mono sismember(K key, V member); + + /** + * Move a member from one set to another. + * + * @param source the source key + * @param destination the destination type: key + * @param member the member type: value + * @return Boolean integer-reply specifically: + * + * {@literal true} if the element is moved. {@literal false} if the element is not a member of {@code source} and no + * operation was performed. + */ + Mono smove(K source, K destination, V member); + + /** + * Get all the members in a set. + * + * @param key the key + * @return V array-reply all elements of the set. + */ + Flux smembers(K key); + + /** + * Get all the members in a set. + * + * @param channel the channel + * @param key the keys + * @return Long count of members of the resulting set. 
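For the set commands declared here, a minimal sketch (hypothetical tag keys) that adds members and intersects two sets:

    import io.lettuce.core.api.reactive.RedisSetReactiveCommands;

    class SetCommandsSketch {

        static void tagIntersection(RedisSetReactiveCommands<String, String> commands) {
            commands.sadd("tags:article:1", "redis", "reactive", "java")           // Mono<Long>: members added
                    .then(commands.sadd("tags:article:2", "redis", "kotlin"))
                    .thenMany(commands.sinter("tags:article:1", "tags:article:2")) // Flux<String>: common members
                    .subscribe(tag -> System.out.println("shared tag: " + tag));
        }
    }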
+ */ + Mono smembers(ValueStreamingChannel channel, K key); + + /** + * Remove and return a random member from a set. + * + * @param key the key + * @return V bulk-string-reply the removed element, or {@literal null} when {@code key} does not exist. + */ + Mono spop(K key); + + /** + * Remove and return one or multiple random members from a set. + * + * @param key the key + * @param count number of members to pop + * @return V bulk-string-reply the removed element, or {@literal null} when {@code key} does not exist. + */ + Flux spop(K key, long count); + + /** + * Get one random member from a set. + * + * @param key the key + * + * @return V bulk-string-reply without the additional {@code count} argument the command returns a Bulk Reply with the + * randomly selected element, or {@literal null} when {@code key} does not exist. + */ + Mono srandmember(K key); + + /** + * Get one or multiple random members from a set. + * + * @param key the key + * @param count the count type: long + * @return V bulk-string-reply without the additional {@code count} argument the command returns a Bulk Reply + * with the randomly selected element, or {@literal null} when {@code key} does not exist. + */ + Flux srandmember(K key, long count); + + /** + * Get one or multiple random members from a set. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param count the count + * @return Long count of members of the resulting set. + */ + Mono srandmember(ValueStreamingChannel channel, K key, long count); + + /** + * Remove one or more members from a set. + * + * @param key the key + * @param members the member type: value + * @return Long integer-reply the number of members that were removed from the set, not including non existing members. + */ + Mono srem(K key, V... members); + + /** + * Add multiple sets. + * + * @param keys the key + * @return V array-reply list with members of the resulting set. + */ + Flux sunion(K... keys); + + /** + * Add multiple sets. + * + * @param channel streaming channel that receives a call for every value + * @param keys the keys + * @return Long count of members of the resulting set. + */ + Mono sunion(ValueStreamingChannel channel, K... keys); + + /** + * Add multiple sets and store the resulting set in a key. + * + * @param destination the destination type: key + * @param keys the key + * @return Long integer-reply the number of elements in the resulting set. + */ + Mono sunionstore(K destination, K... keys); + + /** + * Incrementally iterate Set elements. + * + * @param key the key + * @return ValueScanCursor<V> scan cursor. + */ + Mono> sscan(K key); + + /** + * Incrementally iterate Set elements. + * + * @param key the key + * @param scanArgs scan arguments + * @return ValueScanCursor<V> scan cursor. + */ + Mono> sscan(K key, ScanArgs scanArgs); + + /** + * Incrementally iterate Set elements. + * + * @param key the key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @param scanArgs scan arguments + * @return ValueScanCursor<V> scan cursor. + */ + Mono> sscan(K key, ScanCursor scanCursor, ScanArgs scanArgs); + + /** + * Incrementally iterate Set elements. + * + * @param key the key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @return ValueScanCursor<V> scan cursor. + */ + Mono> sscan(K key, ScanCursor scanCursor); + + /** + * Incrementally iterate Set elements. 
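The SSCAN variants above return cursor objects rather than streaming the whole set at once. A hedged sketch of a single scan pass (the pattern and key are example values):

    import io.lettuce.core.ScanArgs;
    import io.lettuce.core.ValueScanCursor;
    import io.lettuce.core.api.reactive.RedisSetReactiveCommands;

    class SetScanSketch {

        static void scanOnce(RedisSetReactiveCommands<String, String> commands) {
            commands.sscan("tags:article:1", ScanArgs.Builder.matches("re*").limit(100))
                    .subscribe((ValueScanCursor<String> cursor) -> {
                        System.out.println("batch: " + cursor.getValues());
                        System.out.println("finished: " + cursor.isFinished());
                        // To continue, feed the returned cursor into sscan(key, cursor, args) until isFinished() is true.
                    });
        }
    }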
+ * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @return StreamScanCursor scan cursor. + */ + Mono sscan(ValueStreamingChannel channel, K key); + + /** + * Incrementally iterate Set elements. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param scanArgs scan arguments + * @return StreamScanCursor scan cursor. + */ + Mono sscan(ValueStreamingChannel channel, K key, ScanArgs scanArgs); + + /** + * Incrementally iterate Set elements. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @param scanArgs scan arguments + * @return StreamScanCursor scan cursor. + */ + Mono sscan(ValueStreamingChannel channel, K key, ScanCursor scanCursor, ScanArgs scanArgs); + + /** + * Incrementally iterate Set elements. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @return StreamScanCursor scan cursor. + */ + Mono sscan(ValueStreamingChannel channel, K key, ScanCursor scanCursor); +} diff --git a/src/main/java/io/lettuce/core/api/reactive/RedisSortedSetReactiveCommands.java b/src/main/java/io/lettuce/core/api/reactive/RedisSortedSetReactiveCommands.java new file mode 100644 index 0000000000..6e12c0dbda --- /dev/null +++ b/src/main/java/io/lettuce/core/api/reactive/RedisSortedSetReactiveCommands.java @@ -0,0 +1,1251 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api.reactive; + +import reactor.core.publisher.Flux; +import reactor.core.publisher.Mono; +import io.lettuce.core.*; +import io.lettuce.core.output.ScoredValueStreamingChannel; +import io.lettuce.core.output.ValueStreamingChannel; + +/** + * Reactive executed commands for Sorted Sets. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.0 + * @generated by io.lettuce.apigenerator.CreateReactiveApi + */ +public interface RedisSortedSetReactiveCommands { + + /** + * Removes and returns a member with the lowest scores in the sorted set stored at one of the keys. + * + * @param timeout the timeout in seconds. + * @param keys the keys. + * @return KeyValue<K, ScoredValue<V>> multi-bulk containing the name of the key, the score and the popped member. + * @since 5.1 + */ + Mono>> bzpopmin(long timeout, K... keys); + + /** + * Removes and returns a member with the highest scores in the sorted set stored at one of the keys. + * + * @param timeout the timeout in seconds. + * @param keys the keys. + * @return KeyValue<K, ScoredValue<V>> multi-bulk containing the name of the key, the score and the popped member. + * @since 5.1 + */ + Mono>> bzpopmax(long timeout, K... 
keys); + + /** + * Add one or more members to a sorted set, or update its score if it already exists. + * + * @param key the key + * @param score the score + * @param member the member + * + * @return Long integer-reply specifically: + * + * The number of elements added to the sorted sets, not including elements already existing for which the score was + * updated. + */ + Mono zadd(K key, double score, V member); + + /** + * Add one or more members to a sorted set, or update its score if it already exists. + * + * @param key the key + * @param scoresAndValues the scoresAndValue tuples (score,value,score,value,...) + * @return Long integer-reply specifically: + * + * The number of elements added to the sorted sets, not including elements already existing for which the score was + * updated. + */ + Mono zadd(K key, Object... scoresAndValues); + + /** + * Add one or more members to a sorted set, or update its score if it already exists. + * + * @param key the key + * @param scoredValues the scored values + * @return Long integer-reply specifically: + * + * The number of elements added to the sorted sets, not including elements already existing for which the score was + * updated. + */ + Mono zadd(K key, ScoredValue... scoredValues); + + /** + * Add one or more members to a sorted set, or update its score if it already exists. + * + * @param key the key + * @param zAddArgs arguments for zadd + * @param score the score + * @param member the member + * + * @return Long integer-reply specifically: + * + * The number of elements added to the sorted sets, not including elements already existing for which the score was + * updated. + */ + Mono zadd(K key, ZAddArgs zAddArgs, double score, V member); + + /** + * Add one or more members to a sorted set, or update its score if it already exists. + * + * @param key the key + * @param zAddArgs arguments for zadd + * @param scoresAndValues the scoresAndValue tuples (score,value,score,value,...) + * @return Long integer-reply specifically: + * + * The number of elements added to the sorted sets, not including elements already existing for which the score was + * updated. + */ + Mono zadd(K key, ZAddArgs zAddArgs, Object... scoresAndValues); + + /** + * Add one or more members to a sorted set, or update its score if it already exists. + * + * @param key the ke + * @param zAddArgs arguments for zadd + * @param scoredValues the scored values + * @return Long integer-reply specifically: + * + * The number of elements added to the sorted sets, not including elements already existing for which the score was + * updated. + */ + Mono zadd(K key, ZAddArgs zAddArgs, ScoredValue... scoredValues); + + /** + * Add one or more members to a sorted set, or update its score if it already exists applying the {@code INCR} option. ZADD + * acts like ZINCRBY. + * + * @param key the key + * @param score the score + * @param member the member + * @return Long integer-reply specifically: The total number of elements changed + */ + Mono zaddincr(K key, double score, V member); + + /** + * Add one or more members to a sorted set, or update its score if it already exists applying the {@code INCR} option. ZADD + * acts like ZINCRBY. + * + * @param key the key + * @param zAddArgs arguments for zadd + * @param score the score + * @param member the member + * @return Long integer-reply specifically: The total number of elements changed + * @since 4.3 + */ + Mono zaddincr(K key, ZAddArgs zAddArgs, double score, V member); + + /** + * Get the number of members in a sorted set. 
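An illustrative sketch of the ZADD/ZCARD declarations around here, using ScoredValue to add members with scores; the leaderboard key and scores are invented for the example.

    import io.lettuce.core.ScoredValue;
    import io.lettuce.core.api.reactive.RedisSortedSetReactiveCommands;

    class SortedSetAddSketch {

        static void buildLeaderboard(RedisSortedSetReactiveCommands<String, String> commands) {
            commands.zadd("leaderboard", ScoredValue.just(120.0, "alice"), ScoredValue.just(95.5, "bob"))
                    .then(commands.zcard("leaderboard"))           // Mono<Long>: number of members
                    .subscribe(size -> System.out.println("players: " + size));
        }
    }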
+ * + * @param key the key + * @return Long integer-reply the cardinality (number of elements) of the sorted set, or {@literal false} if {@code key} + * does not exist. + */ + Mono zcard(K key); + + /** + * Count the members in a sorted set with scores within the given values. + * + * @param key the key + * @param min min score + * @param max max score + * @return Long integer-reply the number of elements in the specified score range. + * @deprecated Use {@link #zcount(java.lang.Object, Range)} + */ + @Deprecated + Mono zcount(K key, double min, double max); + + /** + * Count the members in a sorted set with scores within the given values. + * + * @param key the key + * @param min min score + * @param max max score + * @return Long integer-reply the number of elements in the specified score range. + * @deprecated Use {@link #zcount(java.lang.Object, Range)} + */ + @Deprecated + Mono zcount(K key, String min, String max); + + /** + * Count the members in a sorted set with scores within the given {@link Range}. + * + * @param key the key + * @param range the range + * @return Long integer-reply the number of elements in the specified score range. + * @since 4.3 + */ + Mono zcount(K key, Range range); + + /** + * Increment the score of a member in a sorted set. + * + * @param key the key + * @param amount the increment type: long + * @param member the member type: value + * @return Double bulk-string-reply the new score of {@code member} (a double precision floating point number), represented + * as string. + */ + Mono zincrby(K key, double amount, V member); + + /** + * Intersect multiple sorted sets and store the resulting sorted set in a new key. + * + * @param destination the destination + * @param keys the keys + * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. + */ + Mono zinterstore(K destination, K... keys); + + /** + * Intersect multiple sorted sets and store the resulting sorted set in a new key. + * + * @param destination the destination + * @param storeArgs the storeArgs + * @param keys the keys + * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. + */ + Mono zinterstore(K destination, ZStoreArgs storeArgs, K... keys); + + /** + * Count the number of members in a sorted set between a given lexicographical range. + * + * @param key the key + * @param min min score + * @param max max score + * @return Long integer-reply the number of elements in the specified score range. + * @deprecated Use {@link #zlexcount(java.lang.Object, Range)} + */ + @Deprecated + Mono zlexcount(K key, String min, String max); + + /** + * Count the number of members in a sorted set between a given lexicographical range. + * + * @param key the key + * @param range the range + * @return Long integer-reply the number of elements in the specified score range. + * @since 4.3 + */ + Mono zlexcount(K key, Range range); + + /** + * Removes and returns up to count members with the lowest scores in the sorted set stored at key. + * + * @param key the key + * @return ScoredValue<V> the removed element. + * @since 5.1 + */ + Mono> zpopmin(K key); + + /** + * Removes and returns up to count members with the lowest scores in the sorted set stored at key. + * + * @param key the key. + * @param count the number of elements to return. + * @return ScoredValue<V> array-reply list of popped scores and elements. 
+ * @since 5.1 + */ + Flux> zpopmin(K key, long count); + + /** + * Removes and returns up to count members with the highest scores in the sorted set stored at key. + * + * @param key the key + * @return ScoredValue<V> the removed element. + * @since 5.1 + */ + Mono> zpopmax(K key); + + /** + * Removes and returns up to count members with the highest scores in the sorted set stored at key. + * + * @param key the key. + * @param count the number of elements to return. + * @return ScoredValue<V> array-reply list of popped scores and elements. + * @since 5.1 + */ + Flux> zpopmax(K key, long count); + + /** + * Return a range of members in a sorted set, by index. + * + * @param key the key + * @param start the start + * @param stop the stop + * @return V array-reply list of elements in the specified range. + */ + Flux zrange(K key, long start, long stop); + + /** + * Return a range of members in a sorted set, by index. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param start the start + * @param stop the stop + * @return Long count of elements in the specified range. + */ + Mono zrange(ValueStreamingChannel channel, K key, long start, long stop); + + /** + * Return a range of members with scores in a sorted set, by index. + * + * @param key the key + * @param start the start + * @param stop the stop + * @return V array-reply list of elements in the specified range. + */ + Flux> zrangeWithScores(K key, long start, long stop); + + /** + * Stream over a range of members with scores in a sorted set, by index. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param start the start + * @param stop the stop + * @return Long count of elements in the specified range. + */ + Mono zrangeWithScores(ScoredValueStreamingChannel channel, K key, long start, long stop); + + /** + * Return a range of members in a sorted set, by lexicographical range. + * + * @param key the key + * @param min min score + * @param max max score + * @return V array-reply list of elements in the specified range. + * @deprecated Use {@link #zrangebylex(java.lang.Object, Range)} + */ + @Deprecated + Flux zrangebylex(K key, String min, String max); + + /** + * Return a range of members in a sorted set, by lexicographical range. + * + * @param key the key + * @param range the range + * @return V array-reply list of elements in the specified range. + * @since 4.3 + */ + Flux zrangebylex(K key, Range range); + + /** + * Return a range of members in a sorted set, by lexicographical range. + * + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return V array-reply list of elements in the specified range. + * @deprecated Use {@link #zrangebylex(java.lang.Object, Range)} + */ + @Deprecated + Flux zrangebylex(K key, String min, String max, long offset, long count); + + /** + * Return a range of members in a sorted set, by lexicographical range. + * + * @param key the key + * @param range the range + * @param limit the limit + * @return V array-reply list of elements in the specified range. + * @since 4.3 + */ + Flux zrangebylex(K key, Range range, Limit limit); + + /** + * Return a range of members in a sorted set, by score. + * + * @param key the key + * @param min min score + * @param max max score + * @return V array-reply list of elements in the specified score range. 
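ZRANGE with the WITHSCORES modifier maps to zrangeWithScores and emits ScoredValue elements; a minimal sketch (assumed leaderboard key) follows.

    import io.lettuce.core.api.reactive.RedisSortedSetReactiveCommands;

    class SortedSetWithScoresSketch {

        static void printLowestThree(RedisSortedSetReactiveCommands<String, String> commands) {
            // Indexes are zero-based; 0..2 selects the three members with the lowest scores.
            commands.zrangeWithScores("leaderboard", 0, 2)
                    .subscribe(scored -> System.out.println(scored.getValue() + " -> " + scored.getScore()));
        }
    }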
+ * @deprecated Use {@link #zrangebyscore(java.lang.Object, Range)} + */ + @Deprecated + Flux zrangebyscore(K key, double min, double max); + + /** + * Return a range of members in a sorted set, by score. + * + * @param key the key + * @param min min score + * @param max max score + * @return V array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrangebyscore(java.lang.Object, Range)} + */ + @Deprecated + Flux zrangebyscore(K key, String min, String max); + + /** + * Return a range of members in a sorted set, by score. + * + * @param key the key + * @param range the range + * @return V array-reply list of elements in the specified score range. + * @since 4.3 + */ + Flux zrangebyscore(K key, Range range); + + /** + * Return a range of members in a sorted set, by score. + * + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return V array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrangebyscore(java.lang.Object, Range, Limit)} + */ + @Deprecated + Flux zrangebyscore(K key, double min, double max, long offset, long count); + + /** + * Return a range of members in a sorted set, by score. + * + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return V array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrangebyscore(java.lang.Object, Range, Limit)} + */ + @Deprecated + Flux zrangebyscore(K key, String min, String max, long offset, long count); + + /** + * Return a range of members in a sorted set, by score. + * + * @param key the key + * @param range the range + * @param limit the limit + * @return V array-reply list of elements in the specified score range. + * @since 4.3 + */ + Flux zrangebyscore(K key, Range range, Limit limit); + + /** + * Stream over a range of members in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param min min score + * @param max max score + * @return Long count of elements in the specified score range. + * @deprecated Use {@link #zrangebyscore(ValueStreamingChannel, java.lang.Object, Range)} + */ + @Deprecated + Mono zrangebyscore(ValueStreamingChannel channel, K key, double min, double max); + + /** + * Stream over a range of members in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param min min score + * @param max max score + * @return Long count of elements in the specified score range. + * @deprecated Use {@link #zrangebyscore(ValueStreamingChannel, java.lang.Object, Range)} + */ + @Deprecated + Mono zrangebyscore(ValueStreamingChannel channel, K key, String min, String max); + + /** + * Stream over a range of members in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param range the range + * @return Long count of elements in the specified score range. + * @since 4.3 + */ + Mono zrangebyscore(ValueStreamingChannel channel, K key, Range range); + + /** + * Stream over range of members in a sorted set, by score. 
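The deprecation notes above point to the Range and Limit overloads; this hedged sketch shows one such call, with the key and score bounds chosen purely as example values.

    import io.lettuce.core.Limit;
    import io.lettuce.core.Range;
    import io.lettuce.core.api.reactive.RedisSortedSetReactiveCommands;

    class SortedSetRangeSketch {

        static void scoresBetween(RedisSortedSetReactiveCommands<String, String> commands) {
            // Members scoring between 100 and 200 inclusive, limited to the first five matches.
            commands.zrangebyscore("leaderboard", Range.create(100, 200), Limit.create(0, 5))
                    .subscribe(member -> System.out.println("in range: " + member));
        }
    }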
+ * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return Long count of elements in the specified score range. + * @deprecated Use {@link #zrangebyscore(ValueStreamingChannel, java.lang.Object, Range, Limit limit)} + */ + @Deprecated + Mono zrangebyscore(ValueStreamingChannel channel, K key, double min, double max, long offset, long count); + + /** + * Stream over a range of members in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return Long count of elements in the specified score range. + * @deprecated Use {@link #zrangebyscore(ValueStreamingChannel, java.lang.Object, Range, Limit limit)} + */ + @Deprecated + Mono zrangebyscore(ValueStreamingChannel channel, K key, String min, String max, long offset, long count); + + /** + * Stream over a range of members in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param range the range + * @param limit the limit + * @return Long count of elements in the specified score range. + * @since 4.3 + */ + Mono zrangebyscore(ValueStreamingChannel channel, K key, Range range, Limit limit); + + /** + * Return a range of members with score in a sorted set, by score. + * + * @param key the key + * @param min min score + * @param max max score + * @return ScoredValue<V> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrangebyscoreWithScores(java.lang.Object, Range)} + */ + @Deprecated + Flux> zrangebyscoreWithScores(K key, double min, double max); + + /** + * Return a range of members with score in a sorted set, by score. + * + * @param key the key + * @param min min score + * @param max max score + * @return ScoredValue<V> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrangebyscoreWithScores(java.lang.Object, Range)} + */ + @Deprecated + Flux> zrangebyscoreWithScores(K key, String min, String max); + + /** + * Return a range of members with score in a sorted set, by score. + * + * @param key the key + * @param range the range + * @return ScoredValue<V> array-reply list of elements in the specified score range. + * @since 4.3 + */ + Flux> zrangebyscoreWithScores(K key, Range range); + + /** + * Return a range of members with score in a sorted set, by score. + * + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return ScoredValue<V> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrangebyscoreWithScores(java.lang.Object, Range, Limit limit)} + */ + @Deprecated + Flux> zrangebyscoreWithScores(K key, double min, double max, long offset, long count); + + /** + * Return a range of members with score in a sorted set, by score. + * + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return ScoredValue<V> array-reply list of elements in the specified score range. 
+ * @deprecated Use {@link #zrangebyscoreWithScores(java.lang.Object, Range, Limit)} + */ + @Deprecated + Flux> zrangebyscoreWithScores(K key, String min, String max, long offset, long count); + + /** + * Return a range of members with score in a sorted set, by score. + * + * @param key the key + * @param range the range + * @param limit the limit + * @return ScoredValue<V> array-reply list of elements in the specified score range. + * @since 4.3 + */ + Flux> zrangebyscoreWithScores(K key, Range range, Limit limit); + + /** + * Stream over a range of members with scores in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param min min score + * @param max max score + * @return Long count of elements in the specified score range. + * @deprecated Use {@link #zrangebyscoreWithScores(ScoredValueStreamingChannel, java.lang.Object, Range)} + */ + @Deprecated + Mono zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double min, double max); + + /** + * Stream over a range of members with scores in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param min min score + * @param max max score + * @return Long count of elements in the specified score range. + * @deprecated Use {@link #zrangebyscoreWithScores(ScoredValueStreamingChannel, java.lang.Object, Range)} + */ + @Deprecated + Mono zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String min, String max); + + /** + * Stream over a range of members with scores in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param range the range + * @return Long count of elements in the specified score range. + * @since 4.3 + */ + Mono zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, Range range); + + /** + * Stream over a range of members with scores in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return Long count of elements in the specified score range. + * @deprecated Use {@link #zrangebyscoreWithScores(ScoredValueStreamingChannel, java.lang.Object, Range, Limit limit)} + */ + @Deprecated + Mono zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double min, double max, long offset, long count); + + /** + * Stream over a range of members with scores in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return Long count of elements in the specified score range. + * @deprecated Use {@link #zrangebyscoreWithScores(ScoredValueStreamingChannel, java.lang.Object, Range, Limit limit)} + */ + @Deprecated + Mono zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String min, String max, long offset, long count); + + /** + * Stream over a range of members with scores in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param range the range + * @param limit the limit + * @return Long count of elements in the specified score range. 
+ * @since 4.3 + */ + Mono zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, Range range, Limit limit); + + /** + * Determine the index of a member in a sorted set. + * + * @param key the key + * @param member the member type: value + * @return Long integer-reply the rank of {@code member}. If {@code member} does not exist in the sorted set or {@code key} + * does not exist, + */ + Mono zrank(K key, V member); + + /** + * Remove one or more members from a sorted set. + * + * @param key the key + * @param members the member type: value + * @return Long integer-reply specifically: + * + * The number of members removed from the sorted set, not including non existing members. + */ + Mono zrem(K key, V... members); + + /** + * Remove all members in a sorted set between the given lexicographical range. + * + * @param key the key + * @param min min score + * @param max max score + * @return Long integer-reply the number of elements removed. + * @deprecated Use {@link #zremrangebylex(java.lang.Object, Range)} + */ + @Deprecated + Mono zremrangebylex(K key, String min, String max); + + /** + * Remove all members in a sorted set between the given lexicographical range. + * + * @param key the key + * @param range the range + * @return Long integer-reply the number of elements removed. + * @since 4.3 + */ + Mono zremrangebylex(K key, Range range); + + /** + * Remove all members in a sorted set within the given indexes. + * + * @param key the key + * @param start the start type: long + * @param stop the stop type: long + * @return Long integer-reply the number of elements removed. + */ + Mono zremrangebyrank(K key, long start, long stop); + + /** + * Remove all members in a sorted set within the given scores. + * + * @param key the key + * @param min min score + * @param max max score + * @return Long integer-reply the number of elements removed. + * @deprecated Use {@link #zremrangebyscore(java.lang.Object, Range)} + */ + @Deprecated + Mono zremrangebyscore(K key, double min, double max); + + /** + * Remove all members in a sorted set within the given scores. + * + * @param key the key + * @param min min score + * @param max max score + * @return Long integer-reply the number of elements removed. + * @deprecated Use {@link #zremrangebyscore(java.lang.Object, Range)} + */ + @Deprecated + Mono zremrangebyscore(K key, String min, String max); + + /** + * Remove all members in a sorted set within the given scores. + * + * @param key the key + * @param range the range + * @return Long integer-reply the number of elements removed. + * @since 4.3 + */ + Mono zremrangebyscore(K key, Range range); + + /** + * Return a range of members in a sorted set, by index, with scores ordered from high to low. + * + * @param key the key + * @param start the start + * @param stop the stop + * @return V array-reply list of elements in the specified range. + */ + Flux zrevrange(K key, long start, long stop); + + /** + * Stream over a range of members in a sorted set, by index, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param start the start + * @param stop the stop + * @return Long count of elements in the specified range. + */ + Mono zrevrange(ValueStreamingChannel channel, K key, long start, long stop); + + /** + * Return a range of members with scores in a sorted set, by index, with scores ordered from high to low. 
+ *
+ * @param key the key
+ * @param start the start
+ * @param stop the stop
+ * @return V array-reply list of elements in the specified range.
+ */
+ Flux> zrevrangeWithScores(K key, long start, long stop);
+
+ /**
+ * Stream over a range of members with scores in a sorted set, by index, with scores ordered from high to low.
+ *
+ * @param channel streaming channel that receives a call for every scored value
+ * @param key the key
+ * @param start the start
+ * @param stop the stop
+ * @return Long count of elements in the specified range.
+ */
+ Mono zrevrangeWithScores(ScoredValueStreamingChannel channel, K key, long start, long stop);
+
+ /**
+ * Return a range of members in a sorted set, by lexicographical range ordered from high to low.
+ *
+ * @param key the key
+ * @param range the range
+ * @return V array-reply list of elements in the specified score range.
+ * @since 4.3
+ */
+ Flux zrevrangebylex(K key, Range range);
+
+ /**
+ * Return a range of members in a sorted set, by lexicographical range ordered from high to low.
+ *
+ * @param key the key
+ * @param range the range
+ * @param limit the limit
+ * @return V array-reply list of elements in the specified score range.
+ * @since 4.3
+ */
+ Flux zrevrangebylex(K key, Range range, Limit limit);
+
+ /**
+ * Return a range of members in a sorted set, by score, with scores ordered from high to low.
+ *
+ * @param key the key
+ * @param max max score
+ * @param min min score
+ * @return V array-reply list of elements in the specified score range.
+ * @deprecated Use {@link #zrevrangebyscore(java.lang.Object, Range)}
+ */
+ @Deprecated
+ Flux zrevrangebyscore(K key, double max, double min);
+
+ /**
+ * Return a range of members in a sorted set, by score, with scores ordered from high to low.
+ *
+ * @param key the key
+ * @param max max score
+ * @param min min score
+ * @return V array-reply list of elements in the specified score range.
+ * @deprecated Use {@link #zrevrangebyscore(java.lang.Object, Range)}
+ */
+ @Deprecated
+ Flux zrevrangebyscore(K key, String max, String min);
+
+ /**
+ * Return a range of members in a sorted set, by score, with scores ordered from high to low.
+ *
+ * @param key the key
+ * @param range the range
+ * @return V array-reply list of elements in the specified score range.
+ * @since 4.3
+ */
+ Flux zrevrangebyscore(K key, Range range);
+
+ /**
+ * Return a range of members in a sorted set, by score, with scores ordered from high to low.
+ *
+ * @param key the key
+ * @param max max score
+ * @param min min score
+ * @param offset the offset
+ * @param count the count
+ * @return V array-reply list of elements in the specified score range.
+ * @deprecated Use {@link #zrevrangebyscore(java.lang.Object, Range, Limit)}
+ */
+ @Deprecated
+ Flux zrevrangebyscore(K key, double max, double min, long offset, long count);
+
+ /**
+ * Return a range of members in a sorted set, by score, with scores ordered from high to low.
+ *
+ * @param key the key
+ * @param max max score
+ * @param min min score
+ * @param offset the offset
+ * @param count the count
+ * @return V array-reply list of elements in the specified score range.
+ * @deprecated Use {@link #zrevrangebyscore(java.lang.Object, Range, Limit)}
+ */
+ @Deprecated
+ Flux zrevrangebyscore(K key, String max, String min, long offset, long count);
+
+ /**
+ * Return a range of members in a sorted set, by score, with scores ordered from high to low.
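+ * <p>
+ * A minimal usage sketch (the {@code reactive} handle, key name and bounds are illustrative assumptions, not part of this API):
+ * <pre>{@code
+ * // highest-scoring members with score between 0 and 100, limited to the first 10 results
+ * reactive.zrevrangebyscore("leaderboard", Range.create(0, 100), Limit.create(0, 10)).subscribe(System.out::println);
+ * }</pre>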
+ * + * @param key the key + * @param range the range + * @param limit the limit + * @return V array-reply list of elements in the specified score range. + * @since 4.3 + */ + Flux zrevrangebyscore(K key, Range range, Limit limit); + + /** + * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param max max score + * @param min min score + * @return Long count of elements in the specified range. + * @deprecated Use {@link #zrevrangebyscore(java.lang.Object, Range)} + */ + @Deprecated + Mono zrevrangebyscore(ValueStreamingChannel channel, K key, double max, double min); + + /** + * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param min min score + * @param max max score + * @return Long count of elements in the specified range. + * @deprecated Use {@link #zrevrangebyscore(java.lang.Object, Range)} + */ + @Deprecated + Mono zrevrangebyscore(ValueStreamingChannel channel, K key, String max, String min); + + /** + * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param range the range + * @return Long count of elements in the specified range. + * @since 4.3 + */ + Mono zrevrangebyscore(ValueStreamingChannel channel, K key, Range range); + + /** + * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return Long count of elements in the specified range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(java.lang.Object, Range, Limit)} + */ + @Deprecated + Mono zrevrangebyscore(ValueStreamingChannel channel, K key, double max, double min, long offset, long count); + + /** + * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return Long count of elements in the specified range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(java.lang.Object, Range, Limit)} + */ + @Deprecated + Mono zrevrangebyscore(ValueStreamingChannel channel, K key, String max, String min, long offset, long count); + + /** + * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param range the range + * @param limit the limit + * @return Long count of elements in the specified range. + * @since 4.3 + */ + Mono zrevrangebyscore(ValueStreamingChannel channel, K key, Range range, Limit limit); + + /** + * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param max max score + * @param min min score + * @return V array-reply list of elements in the specified score range. 
+ * @deprecated Use {@link #zrevrangebyscoreWithScores(java.lang.Object, Range)} + */ + @Deprecated + Flux> zrevrangebyscoreWithScores(K key, double max, double min); + + /** + * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param max max score + * @param min min score + * @return ScoredValue<V> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(java.lang.Object, Range)} + */ + @Deprecated + Flux> zrevrangebyscoreWithScores(K key, String max, String min); + + /** + * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param range the range + * @return ScoredValue<V> array-reply list of elements in the specified score range. + * @since 4.3 + */ + Flux> zrevrangebyscoreWithScores(K key, Range range); + + /** + * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param max max score + * @param min min score + * @param offset the offset + * @param count the count + * @return ScoredValue<V> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(java.lang.Object, Range, Limit)} + */ + @Deprecated + Flux> zrevrangebyscoreWithScores(K key, double max, double min, long offset, long count); + + /** + * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param max max score + * @param min min score + * @param offset the offset + * @param count the count + * @return V array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(java.lang.Object, Range, Limit)} + */ + @Deprecated + Flux> zrevrangebyscoreWithScores(K key, String max, String min, long offset, long count); + + /** + * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param range the range + * @param limit limit + * @return V array-reply list of elements in the specified score range. + * @since 4.3 + */ + Flux> zrevrangebyscoreWithScores(K key, Range range, Limit limit); + + /** + * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param min min score + * @param max max score + * @return Long count of elements in the specified range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(ScoredValueStreamingChannel, java.lang.Object, Range)} + */ + @Deprecated + Mono zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double max, double min); + + /** + * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param min min score + * @param max max score + * @return Long count of elements in the specified range. 
+ * @deprecated Use {@link #zrevrangebyscoreWithScores(ScoredValueStreamingChannel, java.lang.Object, Range)} + */ + @Deprecated + Mono zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String max, String min); + + /** + * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param range the range + * @return Long count of elements in the specified range. + */ + Mono zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, Range range); + + /** + * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return Long count of elements in the specified range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(ScoredValueStreamingChannel, java.lang.Object, Range, Limit)} + */ + @Deprecated + Mono zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double max, double min, long offset, long count); + + /** + * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return Long count of elements in the specified range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(ScoredValueStreamingChannel, java.lang.Object, Range, Limit)} + */ + @Deprecated + Mono zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String max, String min, long offset, long count); + + /** + * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param range the range + * @param limit the limit + * @return Long count of elements in the specified range. + * @since 4.3 + */ + Mono zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, Range range, Limit limit); + + /** + * Determine the index of a member in a sorted set, with scores ordered from high to low. + * + * @param key the key + * @param member the member type: value + * @return Long integer-reply the rank of {@code member}. If {@code member} does not exist in the sorted set or {@code key} + * does not exist, + */ + Mono zrevrank(K key, V member); + + /** + * Incrementally iterate sorted sets elements and associated scores. + * + * @param key the key + * @return ScoredValueScanCursor<V> scan cursor. + */ + Mono> zscan(K key); + + /** + * Incrementally iterate sorted sets elements and associated scores. + * + * @param key the key + * @param scanArgs scan arguments + * @return ScoredValueScanCursor<V> scan cursor. + */ + Mono> zscan(K key, ScanArgs scanArgs); + + /** + * Incrementally iterate sorted sets elements and associated scores. + * + * @param key the key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @param scanArgs scan arguments + * @return ScoredValueScanCursor<V> scan cursor. 
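+ * <p>
+ * A hedged cursor sketch (key name, page size and the {@code reactive} handle are assumptions for illustration):
+ * <pre>{@code
+ * reactive.zscan("myset", ScanCursor.INITIAL, ScanArgs.Builder.limit(50))
+ *         .subscribe(cursor -> cursor.getValues().forEach(System.out::println));
+ * }</pre>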
+ */ + Mono> zscan(K key, ScanCursor scanCursor, ScanArgs scanArgs); + + /** + * Incrementally iterate sorted sets elements and associated scores. + * + * @param key the key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @return ScoredValueScanCursor<V> scan cursor. + */ + Mono> zscan(K key, ScanCursor scanCursor); + + /** + * Incrementally iterate sorted sets elements and associated scores. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @return StreamScanCursor scan cursor. + */ + Mono zscan(ScoredValueStreamingChannel channel, K key); + + /** + * Incrementally iterate sorted sets elements and associated scores. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param scanArgs scan arguments + * @return StreamScanCursor scan cursor. + */ + Mono zscan(ScoredValueStreamingChannel channel, K key, ScanArgs scanArgs); + + /** + * Incrementally iterate sorted sets elements and associated scores. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @param scanArgs scan arguments + * @return StreamScanCursor scan cursor. + */ + Mono zscan(ScoredValueStreamingChannel channel, K key, ScanCursor scanCursor, ScanArgs scanArgs); + + /** + * Incrementally iterate sorted sets elements and associated scores. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @return StreamScanCursor scan cursor. + */ + Mono zscan(ScoredValueStreamingChannel channel, K key, ScanCursor scanCursor); + + /** + * Get the score associated with the given member in a sorted set. + * + * @param key the key + * @param member the member type: value + * @return Double bulk-string-reply the score of {@code member} (a double precision floating point number), represented as + * string. + */ + Mono zscore(K key, V member); + + /** + * Add multiple sorted sets and store the resulting sorted set in a new key. + * + * @param destination destination key + * @param keys source keys + * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. + */ + Mono zunionstore(K destination, K... keys); + + /** + * Add multiple sorted sets and store the resulting sorted set in a new key. + * + * @param destination the destination + * @param storeArgs the storeArgs + * @param keys the keys + * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. + */ + Mono zunionstore(K destination, ZStoreArgs storeArgs, K... keys); +} diff --git a/src/main/java/io/lettuce/core/api/reactive/RedisStreamReactiveCommands.java b/src/main/java/io/lettuce/core/api/reactive/RedisStreamReactiveCommands.java new file mode 100644 index 0000000000..76c22a7704 --- /dev/null +++ b/src/main/java/io/lettuce/core/api/reactive/RedisStreamReactiveCommands.java @@ -0,0 +1,325 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api.reactive; + +import java.util.Map; + +import reactor.core.publisher.Flux; +import reactor.core.publisher.Mono; +import io.lettuce.core.*; +import io.lettuce.core.XReadArgs.StreamOffset; + +/** + * Reactive executed commands for Streams. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 5.1 + * @generated by io.lettuce.apigenerator.CreateReactiveApi + */ +public interface RedisStreamReactiveCommands { + + /** + * Acknowledge one or more messages as processed. + * + * @param key the stream key. + * @param group name of the consumer group. + * @param messageIds message Id's to acknowledge. + * @return simple-reply the lenght of acknowledged messages. + */ + Mono xack(K key, K group, String... messageIds); + + /** + * Append a message to the stream {@code key}. + * + * @param key the stream key. + * @param body message body. + * @return simple-reply the message Id. + */ + Mono xadd(K key, Map body); + + /** + * Append a message to the stream {@code key}. + * + * @param key the stream key. + * @param args + * @param body message body. + * @return simple-reply the message Id. + */ + Mono xadd(K key, XAddArgs args, Map body); + + /** + * Append a message to the stream {@code key}. + * + * @param key the stream key. + * @param keysAndValues message body. + * @return simple-reply the message Id. + */ + Mono xadd(K key, Object... keysAndValues); + + /** + * Append a message to the stream {@code key}. + * + * @param key the stream key. + * @param args + * @param keysAndValues message body. + * @return simple-reply the message Id. + */ + Mono xadd(K key, XAddArgs args, Object... keysAndValues); + + /** + * Gets ownership of one or multiple messages in the Pending Entries List of a given stream consumer group. + * + * @param key the stream key. + * @param consumer consumer identified by group name and consumer key. + * @param minIdleTime + * @param messageIds message Id's to claim. + * @return simple-reply the {@link StreamMessage} + */ + Flux> xclaim(K key, Consumer consumer, long minIdleTime, String... messageIds); + + /** + * Gets ownership of one or multiple messages in the Pending Entries List of a given stream consumer group. + *
+ * <p>
+ * Note that setting the {@code JUSTID} flag (calling this method with {@link XClaimArgs#justid()}) suppresses the message
+ * body and {@link StreamMessage#getBody()} is {@code null}.
+ *
+ * @param key the stream key.
+ * @param consumer consumer identified by group name and consumer key.
+ * @param args
+ * @param messageIds message Id's to claim.
+ * @return simple-reply the {@link StreamMessage}
+ */
+ Flux> xclaim(K key, Consumer consumer, XClaimArgs args, String... messageIds);
+
+ /**
+ * Removes the specified entries from the stream. Returns the number of items deleted, that may be different from the number
+ * of IDs passed in case certain IDs do not exist.
+ *
+ * @param key the stream key.
+ * @param messageIds stream message Id's.
+ * @return simple-reply number of removed entries.
+ */
+ Mono xdel(K key, String... messageIds);
+
+ /**
+ * Create a consumer group.
+ *
+ * @param streamOffset name of the stream containing the offset to set.
+ * @param group name of the consumer group.
+ * @return simple-reply {@literal true} if successful.
+ */
+ Mono xgroupCreate(StreamOffset streamOffset, K group);
+
+ /**
+ * Create a consumer group.
+ *
+ * @param streamOffset name of the stream containing the offset to set.
+ * @param group name of the consumer group.
+ * @param args
+ * @return simple-reply {@literal true} if successful.
+ * @since 5.2
+ */
+ Mono xgroupCreate(StreamOffset streamOffset, K group, XGroupCreateArgs args);
+
+ /**
+ * Delete a consumer from a consumer group.
+ *
+ * @param key the stream key.
+ * @param consumer consumer identified by group name and consumer key.
+ * @return simple-reply {@literal true} if successful.
+ */
+ Mono xgroupDelconsumer(K key, Consumer consumer);
+
+ /**
+ * Destroy a consumer group.
+ *
+ * @param key the stream key.
+ * @param group name of the consumer group.
+ * @return simple-reply {@literal true} if successful.
+ */
+ Mono xgroupDestroy(K key, K group);
+
+ /**
+ * Set the current {@code group} id.
+ *
+ * @param streamOffset name of the stream containing the offset to set.
+ * @param group name of the consumer group.
+ * @return simple-reply OK
+ */
+ Mono xgroupSetid(StreamOffset streamOffset, K group);
+
+ /**
+ * Retrieve information about the stream at {@code key}.
+ *
+ * @param key the stream key.
+ * @return Object array-reply.
+ * @since 5.2
+ */
+ Flux xinfoStream(K key);
+
+ /**
+ * Retrieve information about the stream consumer groups at {@code key}.
+ *
+ * @param key the stream key.
+ * @return Object array-reply.
+ * @since 5.2
+ */
+ Flux xinfoGroups(K key);
+
+ /**
+ * Retrieve information about consumer groups of group {@code group} and stream at {@code key}.
+ *
+ * @param key the stream key.
+ * @param group name of the consumer group.
+ * @return Object array-reply.
+ * @since 5.2
+ */
+ Flux xinfoConsumers(K key, K group);
+
+ /**
+ * Get the length of a stream.
+ *
+ * @param key the stream key.
+ * @return simple-reply the length of the stream.
+ */
+ Mono xlen(K key);
+
+ /**
+ * Read pending messages from a stream for a {@code group}.
+ *
+ * @param key the stream key.
+ * @param group name of the consumer group.
+ * @return Object array-reply list pending entries.
+ */
+ Flux xpending(K key, K group);
+
+ /**
+ * Read pending messages from a stream within a specific {@link Range}.
+ *
+ * @param key the stream key.
+ * @param group name of the consumer group.
+ * @param range must not be {@literal null}.
+ * @param limit must not be {@literal null}.
+ * @return Object array-reply list with members of the resulting stream. + */ + Flux xpending(K key, K group, Range range, Limit limit); + + /** + * Read pending messages from a stream within a specific {@link Range}. + * + * @param key the stream key. + * @param consumer consumer identified by group name and consumer key. + * @param range must not be {@literal null}. + * @param limit must not be {@literal null}. + * @return Object array-reply list with members of the resulting stream. + */ + Flux xpending(K key, Consumer consumer, Range range, Limit limit); + + /** + * Read messages from a stream within a specific {@link Range}. + * + * @param key the stream key. + * @param range must not be {@literal null}. + * @return StreamMessage array-reply list with members of the resulting stream. + */ + Flux> xrange(K key, Range range); + + /** + * Read messages from a stream within a specific {@link Range} applying a {@link Limit}. + * + * @param key the stream key. + * @param range must not be {@literal null}. + * @param limit must not be {@literal null}. + * @return StreamMessage array-reply list with members of the resulting stream. + */ + Flux> xrange(K key, Range range, Limit limit); + + /** + * Read messages from one or more {@link StreamOffset}s. + * + * @param streams the streams to read from. + * @return StreamMessage array-reply list with members of the resulting stream. + */ + Flux> xread(StreamOffset... streams); + + /** + * Read messages from one or more {@link StreamOffset}s. + * + * @param args read arguments. + * @param streams the streams to read from. + * @return StreamMessage array-reply list with members of the resulting stream. + */ + Flux> xread(XReadArgs args, StreamOffset... streams); + + /** + * Read messages from one or more {@link StreamOffset}s using a consumer group. + * + * @param consumer consumer/group. + * @param streams the streams to read from. + * @return StreamMessage array-reply list with members of the resulting stream. + */ + Flux> xreadgroup(Consumer consumer, StreamOffset... streams); + + /** + * Read messages from one or more {@link StreamOffset}s using a consumer group. + * + * @param consumer consumer/group. + * @param args read arguments. + * @param streams the streams to read from. + * @return StreamMessage array-reply list with members of the resulting stream. + */ + Flux> xreadgroup(Consumer consumer, XReadArgs args, StreamOffset... streams); + + /** + * Read messages from a stream within a specific {@link Range} in reverse order. + * + * @param key the stream key. + * @param range must not be {@literal null}. + * @return StreamMessage array-reply list with members of the resulting stream. + */ + Flux> xrevrange(K key, Range range); + + /** + * Read messages from a stream within a specific {@link Range} applying a {@link Limit} in reverse order. + * + * @param key the stream key. + * @param range must not be {@literal null}. + * @param limit must not be {@literal null}. + * @return StreamMessage array-reply list with members of the resulting stream. + */ + Flux> xrevrange(K key, Range range, Limit limit); + + /** + * Trims the stream to {@code count} elements. + * + * @param key the stream key. + * @param count length of the stream. + * @return simple-reply number of removed entries. + */ + Mono xtrim(K key, long count); + + /** + * Trims the stream to {@code count} elements. + * + * @param key the stream key. + * @param approximateTrimming {@literal true} to trim approximately using the {@code ~} flag. + * @param count length of the stream. 
+ * @return simple-reply number of removed entries. + */ + Mono xtrim(K key, boolean approximateTrimming, long count); +} diff --git a/src/main/java/io/lettuce/core/api/reactive/RedisStringReactiveCommands.java b/src/main/java/io/lettuce/core/api/reactive/RedisStringReactiveCommands.java new file mode 100644 index 0000000000..718d12cee6 --- /dev/null +++ b/src/main/java/io/lettuce/core/api/reactive/RedisStringReactiveCommands.java @@ -0,0 +1,378 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api.reactive; + +import java.util.Map; + +import reactor.core.publisher.Flux; +import reactor.core.publisher.Mono; +import io.lettuce.core.BitFieldArgs; +import io.lettuce.core.KeyValue; +import io.lettuce.core.SetArgs; +import io.lettuce.core.Value; +import io.lettuce.core.output.KeyValueStreamingChannel; + +/** + * Reactive executed commands for Strings. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.0 + * @generated by io.lettuce.apigenerator.CreateReactiveApi + */ +public interface RedisStringReactiveCommands { + + /** + * Append a value to a key. + * + * @param key the key + * @param value the value + * @return Long integer-reply the length of the string after the append operation. + */ + Mono append(K key, V value); + + /** + * Count set bits in a string. + * + * @param key the key + * + * @return Long integer-reply The number of bits set to 1. + */ + Mono bitcount(K key); + + /** + * Count set bits in a string. + * + * @param key the key + * @param start the start + * @param end the end + * + * @return Long integer-reply The number of bits set to 1. + */ + Mono bitcount(K key, long start, long end); + + /** + * Execute {@code BITFIELD} with its subcommands. + * + * @param key the key + * @param bitFieldArgs the args containing subcommands, must not be {@literal null}. + * + * @return Long bulk-reply the results from the bitfield commands. + */ + Flux> bitfield(K key, BitFieldArgs bitFieldArgs); + + /** + * Find first bit set or clear in a string. + * + * @param key the key + * @param state the state + * + * @return Long integer-reply The command returns the position of the first bit set to 1 or 0 according to the request. + * + * If we look for set bits (the bit argument is 1) and the string is empty or composed of just zero bytes, -1 is + * returned. + * + * If we look for clear bits (the bit argument is 0) and the string only contains bit set to 1, the function returns + * the first bit not part of the string on the right. So if the string is tree bytes set to the value 0xff the + * command {@code BITPOS key 0} will return 24, since up to bit 23 all the bits are 1. + * + * Basically the function consider the right of the string as padded with zeros if you look for clear bits and + * specify no range or the start argument only. + */ + Mono bitpos(K key, boolean state); + + /** + * Find first bit set or clear in a string. 
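+ * <p>
+ * A minimal usage sketch (key name, offset and the {@code reactive} handle are illustrative assumptions):
+ * <pre>{@code
+ * // position of the first set bit, starting the search at byte offset 2
+ * reactive.bitpos("mykey", true, 2).subscribe(System.out::println);
+ * }</pre>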
+ * + * @param key the key + * @param state the bit type: long + * @param start the start type: long + * @return Long integer-reply The command returns the position of the first bit set to 1 or 0 according to the request. + * + * If we look for set bits (the bit argument is 1) and the string is empty or composed of just zero bytes, -1 is + * returned. + * + * If we look for clear bits (the bit argument is 0) and the string only contains bit set to 1, the function returns + * the first bit not part of the string on the right. So if the string is tree bytes set to the value 0xff the + * command {@code BITPOS key 0} will return 24, since up to bit 23 all the bits are 1. + * + * Basically the function consider the right of the string as padded with zeros if you look for clear bits and + * specify no range or the start argument only. + * @since 5.0.1 + */ + Mono bitpos(K key, boolean state, long start); + + /** + * Find first bit set or clear in a string. + * + * @param key the key + * @param state the bit type: long + * @param start the start type: long + * @param end the end type: long + * @return Long integer-reply The command returns the position of the first bit set to 1 or 0 according to the request. + * + * If we look for set bits (the bit argument is 1) and the string is empty or composed of just zero bytes, -1 is + * returned. + * + * If we look for clear bits (the bit argument is 0) and the string only contains bit set to 1, the function returns + * the first bit not part of the string on the right. So if the string is tree bytes set to the value 0xff the + * command {@code BITPOS key 0} will return 24, since up to bit 23 all the bits are 1. + * + * Basically the function consider the right of the string as padded with zeros if you look for clear bits and + * specify no range or the start argument only. + * + * However this behavior changes if you are looking for clear bits and specify a range with both + * start and end. If no clear bit is found in the specified range, the function + * returns -1 as the user specified a clear range and there are no 0 bits in that range. + */ + Mono bitpos(K key, boolean state, long start, long end); + + /** + * Perform bitwise AND between strings. + * + * @param destination result key of the operation + * @param keys operation input key names + * @return Long integer-reply The size of the string stored in the destination key, that is equal to the size of the longest + * input string. + */ + Mono bitopAnd(K destination, K... keys); + + /** + * Perform bitwise NOT between strings. + * + * @param destination result key of the operation + * @param source operation input key names + * @return Long integer-reply The size of the string stored in the destination key, that is equal to the size of the longest + * input string. + */ + Mono bitopNot(K destination, K source); + + /** + * Perform bitwise OR between strings. + * + * @param destination result key of the operation + * @param keys operation input key names + * @return Long integer-reply The size of the string stored in the destination key, that is equal to the size of the longest + * input string. + */ + Mono bitopOr(K destination, K... keys); + + /** + * Perform bitwise XOR between strings. + * + * @param destination result key of the operation + * @param keys operation input key names + * @return Long integer-reply The size of the string stored in the destination key, that is equal to the size of the longest + * input string. + */ + Mono bitopXor(K destination, K... 
keys);
+
+ /**
+ * Decrement the integer value of a key by one.
+ *
+ * @param key the key
+ * @return Long integer-reply the value of {@code key} after the decrement
+ */
+ Mono decr(K key);
+
+ /**
+ * Decrement the integer value of a key by the given number.
+ *
+ * @param key the key
+ * @param amount the decrement type: long
+ * @return Long integer-reply the value of {@code key} after the decrement
+ */
+ Mono decrby(K key, long amount);
+
+ /**
+ * Get the value of a key.
+ *
+ * @param key the key
+ * @return V bulk-string-reply the value of {@code key}, or {@literal null} when {@code key} does not exist.
+ */
+ Mono get(K key);
+
+ /**
+ * Returns the bit value at offset in the string value stored at key.
+ *
+ * @param key the key
+ * @param offset the offset type: long
+ * @return Long integer-reply the bit value stored at offset.
+ */
+ Mono getbit(K key, long offset);
+
+ /**
+ * Get a substring of the string stored at a key.
+ *
+ * @param key the key
+ * @param start the start type: long
+ * @param end the end type: long
+ * @return V bulk-string-reply
+ */
+ Mono getrange(K key, long start, long end);
+
+ /**
+ * Set the string value of a key and return its old value.
+ *
+ * @param key the key
+ * @param value the value
+ * @return V bulk-string-reply the old value stored at {@code key}, or {@literal null} when {@code key} did not exist.
+ */
+ Mono getset(K key, V value);
+
+ /**
+ * Increment the integer value of a key by one.
+ *
+ * @param key the key
+ * @return Long integer-reply the value of {@code key} after the increment
+ */
+ Mono incr(K key);
+
+ /**
+ * Increment the integer value of a key by the given amount.
+ *
+ * @param key the key
+ * @param amount the increment type: long
+ * @return Long integer-reply the value of {@code key} after the increment
+ */
+ Mono incrby(K key, long amount);
+
+ /**
+ * Increment the float value of a key by the given amount.
+ *
+ * @param key the key
+ * @param amount the increment type: double
+ * @return Double bulk-string-reply the value of {@code key} after the increment.
+ */
+ Mono incrbyfloat(K key, double amount);
+
+ /**
+ * Get the values of all the given keys.
+ *
+ * @param keys the key
+ * @return V array-reply list of values at the specified keys.
+ */
+ Flux> mget(K... keys);
+
+ /**
+ * Stream over the values of all the given keys.
+ *
+ * @param channel the channel
+ * @param keys the keys
+ *
+ * @return Long array-reply list of values at the specified keys.
+ */
+ Mono mget(KeyValueStreamingChannel channel, K... keys);
+
+ /**
+ * Set multiple keys to multiple values.
+ *
+ * @param map the map
+ * @return String simple-string-reply always {@code OK} since {@code MSET} can't fail.
+ */
+ Mono mset(Map map);
+
+ /**
+ * Set multiple keys to multiple values, only if none of the keys exist.
+ *
+ * @param map the map
+ * @return Boolean integer-reply specifically:
+ *
+ * {@code 1} if all the keys were set. {@code 0} if no key was set (at least one key already existed).
+ */
+ Mono msetnx(Map map);
+
+ /**
+ * Set the string value of a key.
+ *
+ * @param key the key
+ * @param value the value
+ *
+ * @return String simple-string-reply {@code OK} if {@code SET} was executed correctly.
+ */
+ Mono set(K key, V value);
+
+ /**
+ * Set the string value of a key.
+ *
+ * @param key the key
+ * @param value the value
+ * @param setArgs the setArgs
+ *
+ * @return String simple-string-reply {@code OK} if {@code SET} was executed correctly.
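+ * <p>
+ * A hedged usage sketch (key, value, expiry and the {@code reactive} handle are illustrative assumptions):
+ * <pre>{@code
+ * // set the key only if absent and expire it after 30 seconds
+ * reactive.set("session:42", "payload", SetArgs.Builder.nx().ex(30)).subscribe(System.out::println);
+ * }</pre>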
+ */ + Mono set(K key, V value, SetArgs setArgs); + + /** + * Sets or clears the bit at offset in the string value stored at key. + * + * @param key the key + * @param offset the offset type: long + * @param value the value type: string + * @return Long integer-reply the original bit value stored at offset. + */ + Mono setbit(K key, long offset, int value); + + /** + * Set the value and expiration of a key. + * + * @param key the key + * @param seconds the seconds type: long + * @param value the value + * @return String simple-string-reply + */ + Mono setex(K key, long seconds, V value); + + /** + * Set the value and expiration in milliseconds of a key. + * + * @param key the key + * @param milliseconds the milliseconds type: long + * @param value the value + * @return String simple-string-reply + */ + Mono psetex(K key, long milliseconds, V value); + + /** + * Set the value of a key, only if the key does not exist. + * + * @param key the key + * @param value the value + * @return Boolean integer-reply specifically: + * + * {@code 1} if the key was set {@code 0} if the key was not set + */ + Mono setnx(K key, V value); + + /** + * Overwrite part of a string at key starting at the specified offset. + * + * @param key the key + * @param offset the offset type: long + * @param value the value + * @return Long integer-reply the length of the string after it was modified by the command. + */ + Mono setrange(K key, long offset, V value); + + /** + * Get the length of the value stored in a key. + * + * @param key the key + * @return Long integer-reply the length of the string at {@code key}, or {@code 0} when {@code key} does not exist. + */ + Mono strlen(K key); +} diff --git a/src/main/java/io/lettuce/core/api/reactive/RedisTransactionalReactiveCommands.java b/src/main/java/io/lettuce/core/api/reactive/RedisTransactionalReactiveCommands.java new file mode 100644 index 0000000000..6259c70fb1 --- /dev/null +++ b/src/main/java/io/lettuce/core/api/reactive/RedisTransactionalReactiveCommands.java @@ -0,0 +1,71 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api.reactive; + +import reactor.core.publisher.Mono; +import io.lettuce.core.TransactionResult; + +/** + * Reactive executed commands for Transactions. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.0 + * @generated by io.lettuce.apigenerator.CreateReactiveApi + */ +public interface RedisTransactionalReactiveCommands { + + /** + * Discard all commands issued after MULTI. + * + * @return String simple-string-reply always {@code OK}. + */ + Mono discard(); + + /** + * Execute all commands issued after MULTI. + * + * @return Object array-reply each element being the reply to each of the commands in the atomic transaction. + * + * When using {@code WATCH}, {@code EXEC} can return a {@link TransactionResult#wasDiscarded discarded + * TransactionResult}. 
+ * @see TransactionResult#wasDiscarded + */ + Mono exec(); + + /** + * Mark the start of a transaction block. + * + * @return String simple-string-reply always {@code OK}. + */ + Mono multi(); + + /** + * Watch the given keys to determine execution of the MULTI/EXEC block. + * + * @param keys the key + * @return String simple-string-reply always {@code OK}. + */ + Mono watch(K... keys); + + /** + * Forget about all watched keys. + * + * @return String simple-string-reply always {@code OK}. + */ + Mono unwatch(); +} diff --git a/src/main/java/io/lettuce/core/api/reactive/package-info.java b/src/main/java/io/lettuce/core/api/reactive/package-info.java new file mode 100644 index 0000000000..0aab03df8b --- /dev/null +++ b/src/main/java/io/lettuce/core/api/reactive/package-info.java @@ -0,0 +1,4 @@ +/** + * Standalone Redis API for reactive command execution. + */ +package io.lettuce.core.api.reactive; diff --git a/src/main/java/io/lettuce/core/api/sync/BaseRedisCommands.java b/src/main/java/io/lettuce/core/api/sync/BaseRedisCommands.java new file mode 100644 index 0000000000..651a2a893a --- /dev/null +++ b/src/main/java/io/lettuce/core/api/sync/BaseRedisCommands.java @@ -0,0 +1,161 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api.sync; + +import java.util.List; +import java.util.Map; + +import io.lettuce.core.output.CommandOutput; +import io.lettuce.core.protocol.CommandArgs; +import io.lettuce.core.protocol.ProtocolKeyword; + +/** + * Synchronous executed commands for basic commands. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.0 + * @generated by io.lettuce.apigenerator.CreateSyncApi + */ +public interface BaseRedisCommands { + + /** + * Post a message to a channel. + * + * @param channel the channel type: key + * @param message the message type: value + * @return Long integer-reply the number of clients that received the message. + */ + Long publish(K channel, V message); + + /** + * Lists the currently *active channels*. + * + * @return List<K> array-reply a list of active channels, optionally matching the specified pattern. + */ + List pubsubChannels(); + + /** + * Lists the currently *active channels*. + * + * @param channel the key + * @return List<K> array-reply a list of active channels, optionally matching the specified pattern. + */ + List pubsubChannels(K channel); + + /** + * Returns the number of subscribers (not counting clients subscribed to patterns) for the specified channels. + * + * @param channels channel keys + * @return array-reply a list of channels and number of subscribers for every channel. + */ + Map pubsubNumsub(K... channels); + + /** + * Returns the number of subscriptions to patterns. + * + * @return Long integer-reply the number of patterns all the clients are subscribed to. + */ + Long pubsubNumpat(); + + /** + * Echo the given string. 
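+ * <p>
+ * A minimal usage sketch (assumes a String codec and a {@code commands} handle obtained elsewhere):
+ * <pre>{@code
+ * String reply = commands.echo("ping-payload"); // the server echoes the value back unchanged
+ * }</pre>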
+ * + * @param msg the message type: value + * @return V bulk-string-reply + */ + V echo(V msg); + + /** + * Return the role of the instance in the context of replication. + * + * @return List<Object> array-reply where the first element is one of master, slave, sentinel and the additional + * elements are role-specific. + */ + List role(); + + /** + * Ping the server. + * + * @return String simple-string-reply + */ + String ping(); + + /** + * Switch connection to Read-Only mode when connecting to a cluster. + * + * @return String simple-string-reply. + */ + String readOnly(); + + /** + * Switch connection to Read-Write mode (default) when connecting to a cluster. + * + * @return String simple-string-reply. + */ + String readWrite(); + + /** + * Instructs Redis to disconnect the connection. Note that if auto-reconnect is enabled then Lettuce will auto-reconnect if + * the connection was disconnected. Use {@link io.lettuce.core.api.StatefulConnection#close} to close connections and + * release resources. + * + * @return String simple-string-reply always OK. + */ + String quit(); + + /** + * Wait for replication. + * + * @param replicas minimum number of replicas + * @param timeout timeout in milliseconds + * @return number of replicas + */ + Long waitForReplication(int replicas, long timeout); + + /** + * Dispatch a command to the Redis Server. Please note the command output type must fit to the command response. + * + * @param type the command, must not be {@literal null}. + * @param output the command output, must not be {@literal null}. + * @param response type + * @return the command response + */ + T dispatch(ProtocolKeyword type, CommandOutput output); + + /** + * Dispatch a command to the Redis Server. Please note the command output type must fit to the command response. + * + * @param type the command, must not be {@literal null}. + * @param output the command output, must not be {@literal null}. + * @param args the command arguments, must not be {@literal null}. + * @param response type + * @return the command response + */ + T dispatch(ProtocolKeyword type, CommandOutput output, CommandArgs args); + + /** + * @return true if the connection is open (connected and not closed). + */ + boolean isOpen(); + + /** + * Reset the command state. Queued commands will be canceled and the internal state will be reset. This is useful when the + * internal state machine gets out of sync with the connection. + */ + void reset(); +} diff --git a/src/main/java/io/lettuce/core/api/sync/RedisCommands.java b/src/main/java/io/lettuce/core/api/sync/RedisCommands.java new file mode 100644 index 0000000000..5c5113efd7 --- /dev/null +++ b/src/main/java/io/lettuce/core/api/sync/RedisCommands.java @@ -0,0 +1,75 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.api.sync; + +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.cluster.api.sync.RedisClusterCommands; + +/** + * + * A complete synchronous and thread-safe Redis API with 400+ Methods. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 3.0 + */ +public interface RedisCommands extends BaseRedisCommands, RedisClusterCommands, RedisGeoCommands, + RedisHashCommands, RedisHLLCommands, RedisKeyCommands, RedisListCommands, + RedisScriptingCommands, RedisServerCommands, RedisSetCommands, RedisSortedSetCommands, + RedisStreamCommands, RedisStringCommands, RedisTransactionalCommands { + + /** + * Authenticate to the server. + * + * @param password the password + * @return String simple-string-reply + */ + String auth(CharSequence password); + + /** + * Authenticate to the server with username and password. Requires Redis 6 or newer. + * + * @param username the username + * @param password the password + * @return String simple-string-reply + * @since 6.0 + */ + String auth(String username, CharSequence password); + + /** + * Change the selected database for the current Commands. + * + * @param db the database number + * @return String simple-string-reply + */ + String select(int db); + + /** + * Swap two Redis databases, so that immediately all the clients connected to a given DB will see the data of the other DB, + * and the other way around + * + * @param db1 the first database number + * @param db2 the second database number + * @return String simple-string-reply + */ + String swapdb(int db1, int db2); + + /** + * @return the underlying connection. + */ + StatefulRedisConnection getStatefulConnection(); +} diff --git a/src/main/java/io/lettuce/core/api/sync/RedisGeoCommands.java b/src/main/java/io/lettuce/core/api/sync/RedisGeoCommands.java new file mode 100644 index 0000000000..f36ee30b10 --- /dev/null +++ b/src/main/java/io/lettuce/core/api/sync/RedisGeoCommands.java @@ -0,0 +1,162 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api.sync; + +import java.util.List; +import java.util.Set; + +import io.lettuce.core.*; + +/** + * Synchronous executed commands for the Geo-API. + * + * @author Mark Paluch + * @since 4.0 + * @generated by io.lettuce.apigenerator.CreateSyncApi + */ +public interface RedisGeoCommands { + + /** + * Single geo add. + * + * @param key the key of the geo set + * @param longitude the longitude coordinate according to WGS84 + * @param latitude the latitude coordinate according to WGS84 + * @param member the member to add + * @return Long integer-reply the number of elements that were added to the set + */ + Long geoadd(K key, double longitude, double latitude, V member); + + /** + * Multi geo add. 
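+ * <p>
+ * A hedged usage sketch (key, coordinates and member names are made-up illustration data):
+ * <pre>{@code
+ * // longitude, latitude, member - repeated once per entry
+ * Long added = commands.geoadd("Sicily", 13.361389, 38.115556, "Palermo", 15.087269, 37.502669, "Catania");
+ * }</pre>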
+ * + * @param key the key of the geo set + * @param lngLatMember triplets of double longitude, double latitude and V member + * @return Long integer-reply the number of elements that were added to the set + */ + Long geoadd(K key, Object... lngLatMember); + + /** + * Retrieve Geohash strings representing the position of one or more elements in a sorted set value representing a geospatial index. + * + * @param key the key of the geo set + * @param members the members + * @return bulk reply Geohash strings in the order of {@code members}. Returns {@literal null} if a member is not found. + */ + List> geohash(K key, V... members); + + /** + * Retrieve members selected by distance with the center of {@code longitude} and {@code latitude}. + * + * @param key the key of the geo set + * @param longitude the longitude coordinate according to WGS84 + * @param latitude the latitude coordinate according to WGS84 + * @param distance radius distance + * @param unit distance unit + * @return bulk reply + */ + Set georadius(K key, double longitude, double latitude, double distance, GeoArgs.Unit unit); + + /** + * Retrieve members selected by distance with the center of {@code longitude} and {@code latitude}. + * + * @param key the key of the geo set + * @param longitude the longitude coordinate according to WGS84 + * @param latitude the latitude coordinate according to WGS84 + * @param distance radius distance + * @param unit distance unit + * @param geoArgs args to control the result + * @return nested multi-bulk reply. The {@link GeoWithin} contains only fields which were requested by {@link GeoArgs} + */ + List> georadius(K key, double longitude, double latitude, double distance, GeoArgs.Unit unit, GeoArgs geoArgs); + + /** + * Perform a {@link #georadius(Object, double, double, double, GeoArgs.Unit, GeoArgs)} query and store the results in a sorted set. + * + * @param key the key of the geo set + * @param longitude the longitude coordinate according to WGS84 + * @param latitude the latitude coordinate according to WGS84 + * @param distance radius distance + * @param unit distance unit + * @param geoRadiusStoreArgs args to store either the resulting elements with their distance or the resulting elements with + * their locations a sorted set. + * @return Long integer-reply the number of elements in the result + */ + Long georadius(K key, double longitude, double latitude, double distance, GeoArgs.Unit unit, GeoRadiusStoreArgs geoRadiusStoreArgs); + + /** + * Retrieve members selected by distance with the center of {@code member}. The member itself is always contained in the + * results. + * + * @param key the key of the geo set + * @param member reference member + * @param distance radius distance + * @param unit distance unit + * @return set of members + */ + Set georadiusbymember(K key, V member, double distance, GeoArgs.Unit unit); + + /** + * Retrieve members selected by distance with the center of {@code member}. The member itself is always contained in the + * results. + * + * @param key the key of the geo set + * @param member reference member + * @param distance radius distance + * @param unit distance unit + * @param geoArgs args to control the result + * @return nested multi-bulk reply. 
The {@link GeoWithin} contains only fields which were requested by {@link GeoArgs} + */ + List> georadiusbymember(K key, V member, double distance, GeoArgs.Unit unit, GeoArgs geoArgs); + + /** + * Perform a {@link #georadiusbymember(Object, Object, double, GeoArgs.Unit, GeoArgs)} query and store the results in a sorted set. + * + * @param key the key of the geo set + * @param member reference member + * @param distance radius distance + * @param unit distance unit + * @param geoRadiusStoreArgs args to store either the resulting elements with their distance or the resulting elements with + * their locations a sorted set. + * @return Long integer-reply the number of elements in the result + */ + Long georadiusbymember(K key, V member, double distance, GeoArgs.Unit unit, GeoRadiusStoreArgs geoRadiusStoreArgs); + + /** + * Get geo coordinates for the {@code members}. + * + * @param key the key of the geo set + * @param members the members + * + * @return a list of {@link GeoCoordinates}s representing the x,y position of each element specified in the arguments. For + * missing elements {@literal null} is returned. + */ + List geopos(K key, V... members); + + /** + * Retrieve distance between points {@code from} and {@code to}. If one or more elements are missing {@literal null} is + * returned. Default in meters by, otherwise according to {@code unit} + * + * @param key the key of the geo set + * @param from from member + * @param to to member + * @param unit distance unit + * + * @return distance between points {@code from} and {@code to}. If one or more elements are missing {@literal null} is + * returned. + */ + Double geodist(K key, V from, V to, GeoArgs.Unit unit); +} diff --git a/src/main/java/io/lettuce/core/api/sync/RedisHLLCommands.java b/src/main/java/io/lettuce/core/api/sync/RedisHLLCommands.java new file mode 100644 index 0000000000..4a73f28dc8 --- /dev/null +++ b/src/main/java/io/lettuce/core/api/sync/RedisHLLCommands.java @@ -0,0 +1,61 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api.sync; + +/** + * Synchronous executed commands for HyperLogLog (PF* commands). + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 3.0 + * @generated by io.lettuce.apigenerator.CreateSyncApi + */ +public interface RedisHLLCommands { + + /** + * Adds the specified elements to the specified HyperLogLog. + * + * @param key the key + * @param values the values + * + * @return Long integer-reply specifically: + * + * 1 if at least 1 HyperLogLog internal register was altered. 0 otherwise. + */ + Long pfadd(K key, V... values); + + /** + * Merge N different HyperLogLogs into a single one. + * + * @param destkey the destination key + * @param sourcekeys the source key + * + * @return String simple-string-reply The command just returns {@code OK}. + */ + String pfmerge(K destkey, K... 
sourcekeys); + + /** + * Return the approximated cardinality of the set(s) observed by the HyperLogLog at key(s). + * + * @param keys the keys + * + * @return Long integer-reply specifically: + * + * The approximated number of unique elements observed via {@code PFADD}. + */ + Long pfcount(K... keys); +} diff --git a/src/main/java/io/lettuce/core/api/sync/RedisHashCommands.java b/src/main/java/io/lettuce/core/api/sync/RedisHashCommands.java new file mode 100644 index 0000000000..562c3479b2 --- /dev/null +++ b/src/main/java/io/lettuce/core/api/sync/RedisHashCommands.java @@ -0,0 +1,302 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api.sync; + +import java.util.List; +import java.util.Map; + +import io.lettuce.core.*; +import io.lettuce.core.output.KeyStreamingChannel; +import io.lettuce.core.output.KeyValueStreamingChannel; +import io.lettuce.core.output.ValueStreamingChannel; + +/** + * Synchronous executed commands for Hashes (Key-Value pairs). + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.0 + * @generated by io.lettuce.apigenerator.CreateSyncApi + */ +public interface RedisHashCommands { + + /** + * Delete one or more hash fields. + * + * @param key the key + * @param fields the field type: key + * @return Long integer-reply the number of fields that were removed from the hash, not including specified but non existing + * fields. + */ + Long hdel(K key, K... fields); + + /** + * Determine if a hash field exists. + * + * @param key the key + * @param field the field type: key + * @return Boolean integer-reply specifically: + * + * {@literal true} if the hash contains {@code field}. {@literal false} if the hash does not contain {@code field}, + * or {@code key} does not exist. + */ + Boolean hexists(K key, K field); + + /** + * Get the value of a hash field. + * + * @param key the key + * @param field the field type: key + * @return V bulk-string-reply the value associated with {@code field}, or {@literal null} when {@code field} is not present + * in the hash or {@code key} does not exist. + */ + V hget(K key, K field); + + /** + * Increment the integer value of a hash field by the given number. + * + * @param key the key + * @param field the field type: key + * @param amount the increment type: long + * @return Long integer-reply the value at {@code field} after the increment operation. + */ + Long hincrby(K key, K field, long amount); + + /** + * Increment the float value of a hash field by the given amount. + * + * @param key the key + * @param field the field type: key + * @param amount the increment type: double + * @return Double bulk-string-reply the value of {@code field} after the increment. + */ + Double hincrbyfloat(K key, K field, double amount); + + /** + * Get all the fields and values in a hash. 
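A brief usage sketch for the HyperLogLog commands above; key and member names are made up for the example:

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.api.sync.RedisCommands;

public class HyperLogLogExample {

    public static void main(String[] args) {
        RedisCommands<String, String> redis = RedisClient.create("redis://localhost:6379") // assumed URI
                .connect().sync();

        // pfadd returns 1 if at least one internal register changed.
        redis.pfadd("visitors:today", "alice", "bob", "carol");
        redis.pfadd("visitors:yesterday", "bob", "dave");

        // Approximate cardinality of a single HyperLogLog.
        System.out.println(redis.pfcount("visitors:today")); // ~3

        // Merge both days into one structure and count the union.
        redis.pfmerge("visitors:week", "visitors:today", "visitors:yesterday");
        System.out.println(redis.pfcount("visitors:week")); // ~4
    }
}
```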
+ * + * @param key the key + * @return Map<K,V> array-reply list of fields and their values stored in the hash, or an empty list when {@code key} + * does not exist. + */ + Map hgetall(K key); + + /** + * Stream over all the fields and values in a hash. + * + * @param channel the channel + * @param key the key + * + * @return Long count of the keys. + */ + Long hgetall(KeyValueStreamingChannel channel, K key); + + /** + * Get all the fields in a hash. + * + * @param key the key + * @return List<K> array-reply list of fields in the hash, or an empty list when {@code key} does not exist. + */ + List hkeys(K key); + + /** + * Stream over all the fields in a hash. + * + * @param channel the channel + * @param key the key + * + * @return Long count of the keys. + */ + Long hkeys(KeyStreamingChannel channel, K key); + + /** + * Get the number of fields in a hash. + * + * @param key the key + * @return Long integer-reply number of fields in the hash, or {@code 0} when {@code key} does not exist. + */ + Long hlen(K key); + + /** + * Get the values of all the given hash fields. + * + * @param key the key + * @param fields the field type: key + * @return List<V> array-reply list of values associated with the given fields, in the same + */ + List> hmget(K key, K... fields); + + /** + * Stream over the values of all the given hash fields. + * + * @param channel the channel + * @param key the key + * @param fields the fields + * + * @return Long count of the keys + */ + Long hmget(KeyValueStreamingChannel channel, K key, K... fields); + + /** + * Set multiple hash fields to multiple values. + * + * @param key the key + * @param map the null + * @return String simple-string-reply + */ + String hmset(K key, Map map); + + /** + * Incrementally iterate hash fields and associated values. + * + * @param key the key + * @return MapScanCursor<K, V> map scan cursor. + */ + MapScanCursor hscan(K key); + + /** + * Incrementally iterate hash fields and associated values. + * + * @param key the key + * @param scanArgs scan arguments + * @return MapScanCursor<K, V> map scan cursor. + */ + MapScanCursor hscan(K key, ScanArgs scanArgs); + + /** + * Incrementally iterate hash fields and associated values. + * + * @param key the key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @param scanArgs scan arguments + * @return MapScanCursor<K, V> map scan cursor. + */ + MapScanCursor hscan(K key, ScanCursor scanCursor, ScanArgs scanArgs); + + /** + * Incrementally iterate hash fields and associated values. + * + * @param key the key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @return MapScanCursor<K, V> map scan cursor. + */ + MapScanCursor hscan(K key, ScanCursor scanCursor); + + /** + * Incrementally iterate hash fields and associated values. + * + * @param channel streaming channel that receives a call for every key-value pair + * @param key the key + * @return StreamScanCursor scan cursor. + */ + StreamScanCursor hscan(KeyValueStreamingChannel channel, K key); + + /** + * Incrementally iterate hash fields and associated values. + * + * @param channel streaming channel that receives a call for every key-value pair + * @param key the key + * @param scanArgs scan arguments + * @return StreamScanCursor scan cursor. + */ + StreamScanCursor hscan(KeyValueStreamingChannel channel, K key, ScanArgs scanArgs); + + /** + * Incrementally iterate hash fields and associated values. 
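The hash commands might be combined as in the following sketch (hset is declared a little further below); key and field names are illustrative assumptions:

```java
import java.util.Map;

import io.lettuce.core.RedisClient;
import io.lettuce.core.api.sync.RedisCommands;

public class HashExample {

    public static void main(String[] args) {
        RedisCommands<String, String> redis = RedisClient.create("redis://localhost:6379") // assumed URI
                .connect().sync();

        // hset writes individual fields of the hash.
        redis.hset("user:1", "name", "Alice");
        redis.hset("user:1", "email", "alice@example.com");
        redis.hincrby("user:1", "logins", 1);

        // hgetall returns every field/value pair of the hash.
        Map<String, String> user = redis.hgetall("user:1");
        user.forEach((field, value) -> System.out.println(field + " = " + value));
    }
}
```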
+ * + * @param channel streaming channel that receives a call for every key-value pair + * @param key the key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @param scanArgs scan arguments + * @return StreamScanCursor scan cursor. + */ + StreamScanCursor hscan(KeyValueStreamingChannel channel, K key, ScanCursor scanCursor, ScanArgs scanArgs); + + /** + * Incrementally iterate hash fields and associated values. + * + * @param channel streaming channel that receives a call for every key-value pair + * @param key the key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @return StreamScanCursor scan cursor. + */ + StreamScanCursor hscan(KeyValueStreamingChannel channel, K key, ScanCursor scanCursor); + + /** + * Set the string value of a hash field. + * + * @param key the key + * @param field the field type: key + * @param value the value + * @return Boolean integer-reply specifically: + * + * {@literal true} if {@code field} is a new field in the hash and {@code value} was set. {@literal false} if + * {@code field} already exists in the hash and the value was updated. + */ + Boolean hset(K key, K field, V value); + + /** + * Set multiple hash fields to multiple values. + * + * @param key the key of the hash + * @param map the field/value pairs to update + * @return Long integer-reply: the number of fields that were added. + * @since 5.3 + */ + Long hset(K key, Map map); + + /** + * Set the value of a hash field, only if the field does not exist. + * + * @param key the key + * @param field the field type: key + * @param value the value + * @return Boolean integer-reply specifically: + * + * {@code 1} if {@code field} is a new field in the hash and {@code value} was set. {@code 0} if {@code field} + * already exists in the hash and no operation was performed. + */ + Boolean hsetnx(K key, K field, V value); + + /** + * Get the string length of the field value in a hash. + * + * @param key the key + * @param field the field type: key + * @return Long integer-reply the string length of the {@code field} value, or {@code 0} when {@code field} is not present + * in the hash or {@code key} does not exist at all. + */ + Long hstrlen(K key, K field); + + /** + * Get all the values in a hash. + * + * @param key the key + * @return List<V> array-reply list of values in the hash, or an empty list when {@code key} does not exist. + */ + List hvals(K key); + + /** + * Stream over all the values in a hash. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * + * @return Long count of the keys. + */ + Long hvals(ValueStreamingChannel channel, K key); +} diff --git a/src/main/java/io/lettuce/core/api/sync/RedisKeyCommands.java b/src/main/java/io/lettuce/core/api/sync/RedisKeyCommands.java new file mode 100644 index 0000000000..609964f9eb --- /dev/null +++ b/src/main/java/io/lettuce/core/api/sync/RedisKeyCommands.java @@ -0,0 +1,420 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api.sync; + +import java.util.Date; +import java.util.List; + +import io.lettuce.core.*; +import io.lettuce.core.output.KeyStreamingChannel; +import io.lettuce.core.output.ValueStreamingChannel; + +/** + * Synchronous executed commands for Keys (Key manipulation/querying). + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.0 + * @generated by io.lettuce.apigenerator.CreateSyncApi + */ +public interface RedisKeyCommands { + + /** + * Delete one or more keys. + * + * @param keys the keys + * @return Long integer-reply The number of keys that were removed. + */ + Long del(K... keys); + + /** + * Unlink one or more keys (non blocking DEL). + * + * @param keys the keys + * @return Long integer-reply The number of keys that were removed. + */ + Long unlink(K... keys); + + /** + * Return a serialized version of the value stored at the specified key. + * + * @param key the key + * @return byte[] bulk-string-reply the serialized value. + */ + byte[] dump(K key); + + /** + * Determine how many keys exist. + * + * @param keys the keys + * @return Long integer-reply specifically: Number of existing keys + */ + Long exists(K... keys); + + /** + * Set a key's time to live in seconds. + * + * @param key the key + * @param seconds the seconds type: long + * @return Boolean integer-reply specifically: + * + * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not + * be set. + */ + Boolean expire(K key, long seconds); + + /** + * Set the expiration for a key as a UNIX timestamp. + * + * @param key the key + * @param timestamp the timestamp type: posix time + * @return Boolean integer-reply specifically: + * + * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not + * be set (see: {@code EXPIRE}). + */ + Boolean expireat(K key, Date timestamp); + + /** + * Set the expiration for a key as a UNIX timestamp. + * + * @param key the key + * @param timestamp the timestamp type: posix time + * @return Boolean integer-reply specifically: + * + * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not + * be set (see: {@code EXPIRE}). + */ + Boolean expireat(K key, long timestamp); + + /** + * Find all keys matching the given pattern. + * + * @param pattern the pattern type: patternkey (pattern) + * @return List<K> array-reply list of keys matching {@code pattern}. + */ + List keys(K pattern); + + /** + * Find all keys matching the given pattern. + * + * @param channel the channel + * @param pattern the pattern + * @return Long array-reply list of keys matching {@code pattern}. + */ + Long keys(KeyStreamingChannel channel, K pattern); + + /** + * Atomically transfer a key from a Redis instance to another one. + * + * @param host the host + * @param port the port + * @param key the key + * @param db the database + * @param timeout the timeout in milliseconds + * @return String simple-string-reply The command returns OK on success. + */ + String migrate(String host, int port, K key, int db, long timeout); + + /** + * Atomically transfer one or more keys from a Redis instance to another one. 
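A short sketch of the expiry-related key commands above (ttl is declared further below in this interface); the key name and timeout are example values:

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.api.sync.RedisCommands;

public class KeyExpiryExample {

    public static void main(String[] args) {
        RedisCommands<String, String> redis = RedisClient.create("redis://localhost:6379") // assumed URI
                .connect().sync();

        redis.set("session:42", "payload");

        // Expire after 30 seconds; returns false if the key does not exist.
        boolean timeoutSet = redis.expire("session:42", 30);
        System.out.println("timeout set: " + timeoutSet);

        // exists counts how many of the given keys are present.
        System.out.println(redis.exists("session:42", "session:43")); // 1

        // ttl reports the remaining lifetime in seconds.
        System.out.println(redis.ttl("session:42"));
    }
}
```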
+ * + * @param host the host + * @param port the port + * @param db the database + * @param timeout the timeout in milliseconds + * @param migrateArgs migrate args that allow to configure further options + * @return String simple-string-reply The command returns OK on success. + */ + String migrate(String host, int port, int db, long timeout, MigrateArgs migrateArgs); + + /** + * Move a key to another database. + * + * @param key the key + * @param db the db type: long + * @return Boolean integer-reply specifically: + */ + Boolean move(K key, int db); + + /** + * returns the kind of internal representation used in order to store the value associated with a key. + * + * @param key the key + * @return String + */ + String objectEncoding(K key); + + /** + * returns the number of seconds since the object stored at the specified key is idle (not requested by read or write + * operations). + * + * @param key the key + * @return number of seconds since the object stored at the specified key is idle. + */ + Long objectIdletime(K key); + + /** + * returns the number of references of the value associated with the specified key. + * + * @param key the key + * @return Long + */ + Long objectRefcount(K key); + + /** + * Remove the expiration from a key. + * + * @param key the key + * @return Boolean integer-reply specifically: + * + * {@literal true} if the timeout was removed. {@literal false} if {@code key} does not exist or does not have an + * associated timeout. + */ + Boolean persist(K key); + + /** + * Set a key's time to live in milliseconds. + * + * @param key the key + * @param milliseconds the milliseconds type: long + * @return integer-reply, specifically: + * + * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not + * be set. + */ + Boolean pexpire(K key, long milliseconds); + + /** + * Set the expiration for a key as a UNIX timestamp specified in milliseconds. + * + * @param key the key + * @param timestamp the milliseconds-timestamp type: posix time + * @return Boolean integer-reply specifically: + * + * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not + * be set (see: {@code EXPIRE}). + */ + Boolean pexpireat(K key, Date timestamp); + + /** + * Set the expiration for a key as a UNIX timestamp specified in milliseconds. + * + * @param key the key + * @param timestamp the milliseconds-timestamp type: posix time + * @return Boolean integer-reply specifically: + * + * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not + * be set (see: {@code EXPIRE}). + */ + Boolean pexpireat(K key, long timestamp); + + /** + * Get the time to live for a key in milliseconds. + * + * @param key the key + * @return Long integer-reply TTL in milliseconds, or a negative value in order to signal an error (see the description + * above). + */ + Long pttl(K key); + + /** + * Return a random key from the keyspace. + * + * @return K bulk-string-reply the random key, or {@literal null} when the database is empty. + */ + K randomkey(); + + /** + * Rename a key. + * + * @param key the key + * @param newKey the newkey type: key + * @return String simple-string-reply + */ + String rename(K key, K newKey); + + /** + * Rename a key, only if the new key does not exist. 
+ * + * @param key the key + * @param newKey the newkey type: key + * @return Boolean integer-reply specifically: + * + * {@literal true} if {@code key} was renamed to {@code newkey}. {@literal false} if {@code newkey} already exists. + */ + Boolean renamenx(K key, K newKey); + + /** + * Create a key using the provided serialized value, previously obtained using DUMP. + * + * @param key the key + * @param ttl the ttl type: long + * @param value the serialized-value type: string + * @return String simple-string-reply The command returns OK on success. + */ + String restore(K key, long ttl, byte[] value); + + /** + * Create a key using the provided serialized value, previously obtained using DUMP. + * + * @param key the key + * @param value the serialized-value type: string + * @param args the {@link RestoreArgs}, must not be {@literal null}. + * @return String simple-string-reply The command returns OK on success. + * @since 5.1 + */ + String restore(K key, byte[] value, RestoreArgs args); + + /** + * Sort the elements in a list, set or sorted set. + * + * @param key the key + * @return List<V> array-reply list of sorted elements. + */ + List sort(K key); + + /** + * Sort the elements in a list, set or sorted set. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @return Long number of values. + */ + Long sort(ValueStreamingChannel channel, K key); + + /** + * Sort the elements in a list, set or sorted set. + * + * @param key the key + * @param sortArgs sort arguments + * @return List<V> array-reply list of sorted elements. + */ + List sort(K key, SortArgs sortArgs); + + /** + * Sort the elements in a list, set or sorted set. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param sortArgs sort arguments + * @return Long number of values. + */ + Long sort(ValueStreamingChannel channel, K key, SortArgs sortArgs); + + /** + * Sort the elements in a list, set or sorted set. + * + * @param key the key + * @param sortArgs sort arguments + * @param destination the destination key to store sort results + * @return Long number of values. + */ + Long sortStore(K key, SortArgs sortArgs, K destination); + + /** + * Touch one or more keys. Touch sets the last accessed time for a key. Non-exsitent keys wont get created. + * + * @param keys the keys + * @return Long integer-reply the number of found keys. + */ + Long touch(K... keys); + + /** + * Get the time to live for a key. + * + * @param key the key + * @return Long integer-reply TTL in seconds, or a negative value in order to signal an error (see the description above). + */ + Long ttl(K key); + + /** + * Determine the type stored at key. + * + * @param key the key + * @return String simple-string-reply type of {@code key}, or {@code none} when {@code key} does not exist. + */ + String type(K key); + + /** + * Incrementally iterate the keys space. + * + * @return KeyScanCursor<K> scan cursor. + */ + KeyScanCursor scan(); + + /** + * Incrementally iterate the keys space. + * + * @param scanArgs scan arguments + * @return KeyScanCursor<K> scan cursor. + */ + KeyScanCursor scan(ScanArgs scanArgs); + + /** + * Incrementally iterate the keys space. + * + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @param scanArgs scan arguments + * @return KeyScanCursor<K> scan cursor. + */ + KeyScanCursor scan(ScanCursor scanCursor, ScanArgs scanArgs); + + /** + * Incrementally iterate the keys space. 
+ * + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @return KeyScanCursor<K> scan cursor. + */ + KeyScanCursor scan(ScanCursor scanCursor); + + /** + * Incrementally iterate the keys space. + * + * @param channel streaming channel that receives a call for every key + * @return StreamScanCursor scan cursor. + */ + StreamScanCursor scan(KeyStreamingChannel channel); + + /** + * Incrementally iterate the keys space. + * + * @param channel streaming channel that receives a call for every key + * @param scanArgs scan arguments + * @return StreamScanCursor scan cursor. + */ + StreamScanCursor scan(KeyStreamingChannel channel, ScanArgs scanArgs); + + /** + * Incrementally iterate the keys space. + * + * @param channel streaming channel that receives a call for every key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @param scanArgs scan arguments + * @return StreamScanCursor scan cursor. + */ + StreamScanCursor scan(KeyStreamingChannel channel, ScanCursor scanCursor, ScanArgs scanArgs); + + /** + * Incrementally iterate the keys space. + * + * @param channel streaming channel that receives a call for every key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @return StreamScanCursor scan cursor. + */ + StreamScanCursor scan(KeyStreamingChannel channel, ScanCursor scanCursor); +} diff --git a/src/main/java/io/lettuce/core/api/sync/RedisListCommands.java b/src/main/java/io/lettuce/core/api/sync/RedisListCommands.java new file mode 100644 index 0000000000..c2977dfb49 --- /dev/null +++ b/src/main/java/io/lettuce/core/api/sync/RedisListCommands.java @@ -0,0 +1,211 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api.sync; + +import java.util.List; + +import io.lettuce.core.KeyValue; +import io.lettuce.core.output.ValueStreamingChannel; + +/** + * Synchronous executed commands for Lists. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.0 + * @generated by io.lettuce.apigenerator.CreateSyncApi + */ +public interface RedisListCommands { + + /** + * Remove and get the first element in a list, or block until one is available. + * + * @param timeout the timeout in seconds + * @param keys the keys + * @return KeyValue<K,V> array-reply specifically: + * + * A {@literal null} multi-bulk when no element could be popped and the timeout expired. A two-element multi-bulk + * with the first element being the name of the key where an element was popped and the second element being the + * value of the popped element. + */ + KeyValue blpop(long timeout, K... keys); + + /** + * Remove and get the last element in a list, or block until one is available. 
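The cursor-based scan methods above are typically driven in a loop until the server reports the cursor as finished. A sketch, assuming a local server and a `user:*` key pattern chosen for illustration:

```java
import io.lettuce.core.KeyScanCursor;
import io.lettuce.core.RedisClient;
import io.lettuce.core.ScanArgs;
import io.lettuce.core.api.sync.RedisCommands;

public class ScanExample {

    public static void main(String[] args) {
        RedisCommands<String, String> redis = RedisClient.create("redis://localhost:6379") // assumed URI
                .connect().sync();

        ScanArgs filter = ScanArgs.Builder.limit(100).match("user:*"); // example pattern
        KeyScanCursor<String> cursor = redis.scan(filter);

        // SCAN is cursor-based: keep resuming until the cursor is finished.
        while (true) {
            cursor.getKeys().forEach(System.out::println);
            if (cursor.isFinished()) {
                break;
            }
            cursor = redis.scan(cursor, filter);
        }
    }
}
```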
+ * + * @param timeout the timeout in seconds + * @param keys the keys + * @return KeyValue<K,V> array-reply specifically: + * + * A {@literal null} multi-bulk when no element could be popped and the timeout expired. A two-element multi-bulk + * with the first element being the name of the key where an element was popped and the second element being the + * value of the popped element. + */ + KeyValue brpop(long timeout, K... keys); + + /** + * Pop a value from a list, push it to another list and return it; or block until one is available. + * + * @param timeout the timeout in seconds + * @param source the source key + * @param destination the destination type: key + * @return V bulk-string-reply the element being popped from {@code source} and pushed to {@code destination}. If + * {@code timeout} is reached, a + */ + V brpoplpush(long timeout, K source, K destination); + + /** + * Get an element from a list by its index. + * + * @param key the key + * @param index the index type: long + * @return V bulk-string-reply the requested element, or {@literal null} when {@code index} is out of range. + */ + V lindex(K key, long index); + + /** + * Insert an element before or after another element in a list. + * + * @param key the key + * @param before the before + * @param pivot the pivot + * @param value the value + * @return Long integer-reply the length of the list after the insert operation, or {@code -1} when the value {@code pivot} + * was not found. + */ + Long linsert(K key, boolean before, V pivot, V value); + + /** + * Get the length of a list. + * + * @param key the key + * @return Long integer-reply the length of the list at {@code key}. + */ + Long llen(K key); + + /** + * Remove and get the first element in a list. + * + * @param key the key + * @return V bulk-string-reply the value of the first element, or {@literal null} when {@code key} does not exist. + */ + V lpop(K key); + + /** + * Prepend one or multiple values to a list. + * + * @param key the key + * @param values the value + * @return Long integer-reply the length of the list after the push operations. + */ + Long lpush(K key, V... values); + + /** + * Prepend values to a list, only if the list exists. + * + * @param key the key + * @param values the values + * @return Long integer-reply the length of the list after the push operation. + */ + Long lpushx(K key, V... values); + + /** + * Get a range of elements from a list. + * + * @param key the key + * @param start the start type: long + * @param stop the stop type: long + * @return List<V> array-reply list of elements in the specified range. + */ + List lrange(K key, long start, long stop); + + /** + * Get a range of elements from a list. + * + * @param channel the channel + * @param key the key + * @param start the start type: long + * @param stop the stop type: long + * @return Long count of elements in the specified range. + */ + Long lrange(ValueStreamingChannel channel, K key, long start, long stop); + + /** + * Remove elements from a list. + * + * @param key the key + * @param count the count type: long + * @param value the value + * @return Long integer-reply the number of removed elements. + */ + Long lrem(K key, long count, V value); + + /** + * Set the value of an element in a list by its index. + * + * @param key the key + * @param index the index type: long + * @param value the value + * @return String simple-string-reply + */ + String lset(K key, long index, V value); + + /** + * Trim a list to the specified range. 
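A minimal sketch combining the list commands above; the key and values are examples:

```java
import java.util.List;

import io.lettuce.core.RedisClient;
import io.lettuce.core.api.sync.RedisCommands;

public class ListExample {

    public static void main(String[] args) {
        RedisCommands<String, String> redis = RedisClient.create("redis://localhost:6379") // assumed URI
                .connect().sync();

        // lpush prepends, so the last value pushed ends up at index 0.
        redis.lpush("tasks", "low", "medium", "high");

        // The range 0..-1 addresses the whole list.
        List<String> all = redis.lrange("tasks", 0, -1);
        System.out.println(all); // [high, medium, low]

        // lpop removes and returns the head of the list.
        System.out.println(redis.lpop("tasks")); // high
        System.out.println(redis.llen("tasks")); // 2
    }
}
```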
+ * + * @param key the key + * @param start the start type: long + * @param stop the stop type: long + * @return String simple-string-reply + */ + String ltrim(K key, long start, long stop); + + /** + * Remove and get the last element in a list. + * + * @param key the key + * @return V bulk-string-reply the value of the last element, or {@literal null} when {@code key} does not exist. + */ + V rpop(K key); + + /** + * Remove the last element in a list, append it to another list and return it. + * + * @param source the source key + * @param destination the destination type: key + * @return V bulk-string-reply the element being popped and pushed. + */ + V rpoplpush(K source, K destination); + + /** + * Append one or multiple values to a list. + * + * @param key the key + * @param values the value + * @return Long integer-reply the length of the list after the push operation. + */ + Long rpush(K key, V... values); + + /** + * Append values to a list, only if the list exists. + * + * @param key the key + * @param values the values + * @return Long integer-reply the length of the list after the push operation. + */ + Long rpushx(K key, V... values); +} diff --git a/src/main/java/io/lettuce/core/api/sync/RedisScriptingCommands.java b/src/main/java/io/lettuce/core/api/sync/RedisScriptingCommands.java new file mode 100644 index 0000000000..9a3b974823 --- /dev/null +++ b/src/main/java/io/lettuce/core/api/sync/RedisScriptingCommands.java @@ -0,0 +1,164 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api.sync; + +import java.util.List; + +import io.lettuce.core.ScriptOutputType; + +/** + * Synchronous executed commands for Scripting. {@link java.lang.String Lua scripts} are encoded by using the configured + * {@link io.lettuce.core.ClientOptions#getScriptCharset() charset}. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.0 + * @generated by io.lettuce.apigenerator.CreateSyncApi + */ +public interface RedisScriptingCommands { + + /** + * Execute a Lua script server side. + * + * @param script Lua 5.1 script. + * @param type output type + * @param keys key names + * @param expected return type + * @return script result + */ + T eval(String script, ScriptOutputType type, K... keys); + + /** + * Execute a Lua script server side. + * + * @param script Lua 5.1 script. + * @param type output type + * @param keys key names + * @param expected return type + * @return script result + * @since 6.0 + */ + T eval(byte[] script, ScriptOutputType type, K... keys); + + /** + * Execute a Lua script server side. + * + * @param script Lua 5.1 script. + * @param type the type + * @param keys the keys + * @param values the values + * @param expected return type + * @return script result + */ + T eval(String script, ScriptOutputType type, K[] keys, V... values); + + /** + * Execute a Lua script server side. + * + * @param script Lua 5.1 script. 
+ * @param type the type + * @param keys the keys + * @param values the values + * @param expected return type + * @return script result + * @since 6.0 + */ + T eval(byte[] script, ScriptOutputType type, K[] keys, V... values); + + /** + * Evaluates a script cached on the server side by its SHA1 digest + * + * @param digest SHA1 of the script + * @param type the type + * @param keys the keys + * @param expected return type + * @return script result + */ + T evalsha(String digest, ScriptOutputType type, K... keys); + + /** + * Execute a Lua script server side. + * + * @param digest SHA1 of the script + * @param type the type + * @param keys the keys + * @param values the values + * @param expected return type + * @return script result + */ + T evalsha(String digest, ScriptOutputType type, K[] keys, V... values); + + /** + * Check existence of scripts in the script cache. + * + * @param digests script digests + * @return List<Boolean> array-reply The command returns an array of integers that correspond to the specified SHA1 + * digest arguments. For every corresponding SHA1 digest of a script that actually exists in the script cache, an 1 + * is returned, otherwise 0 is returned. + */ + List scriptExists(String... digests); + + /** + * Remove all the scripts from the script cache. + * + * @return String simple-string-reply + */ + String scriptFlush(); + + /** + * Kill the script currently in execution. + * + * @return String simple-string-reply + */ + String scriptKill(); + + /** + * Load the specified Lua script into the script cache. + * + * @param script script content + * @return String bulk-string-reply This command returns the SHA1 digest of the script added into the script cache. + * @since 6.0 + */ + String scriptLoad(String script); + + /** + * Load the specified Lua script into the script cache. + * + * @param script script content + * @return String bulk-string-reply This command returns the SHA1 digest of the script added into the script cache. + * @since 6.0 + */ + String scriptLoad(byte[] script); + + /** + * Create a SHA1 digest from a Lua script. + * + * @param script script content + * @return the SHA1 value + * @since 6.0 + */ + String digest(String script); + + /** + * Create a SHA1 digest from a Lua script. + * + * @param script script content + * @return the SHA1 value + * @since 6.0 + */ + String digest(byte[] script); +} diff --git a/src/main/java/io/lettuce/core/api/sync/RedisServerCommands.java b/src/main/java/io/lettuce/core/api/sync/RedisServerCommands.java new file mode 100644 index 0000000000..3254ddbf0b --- /dev/null +++ b/src/main/java/io/lettuce/core/api/sync/RedisServerCommands.java @@ -0,0 +1,373 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
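The scripting commands above are commonly used by loading a script once and then invoking it by digest. A sketch with an illustrative Lua counter script (key and argument values are assumptions):

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.ScriptOutputType;
import io.lettuce.core.api.sync.RedisCommands;

public class ScriptingExample {

    public static void main(String[] args) {
        RedisCommands<String, String> redis = RedisClient.create("redis://localhost:6379") // assumed URI
                .connect().sync();

        // A tiny Lua script that increments a key by a given amount.
        String script = "return redis.call('INCRBY', KEYS[1], ARGV[1])";

        // eval ships the script body with every call.
        Long counter = redis.eval(script, ScriptOutputType.INTEGER, new String[] { "counter" }, "5");
        System.out.println(counter); // 5

        // scriptLoad caches the script server-side and returns its SHA1 digest ...
        String sha = redis.scriptLoad(script);

        // ... which evalsha can reference without resending the script body.
        counter = redis.evalsha(sha, ScriptOutputType.INTEGER, new String[] { "counter" }, "3");
        System.out.println(counter); // 8
    }
}
```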
+ */ +package io.lettuce.core.api.sync; + +import java.util.Date; +import java.util.List; +import java.util.Map; + +import io.lettuce.core.KillArgs; +import io.lettuce.core.UnblockType; +import io.lettuce.core.protocol.CommandType; + +/** + * Synchronous executed commands for Server Control. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.0 + * @generated by io.lettuce.apigenerator.CreateSyncApi + */ +public interface RedisServerCommands { + + /** + * Asynchronously rewrite the append-only file. + * + * @return String simple-string-reply always {@code OK}. + */ + String bgrewriteaof(); + + /** + * Asynchronously save the dataset to disk. + * + * @return String simple-string-reply + */ + String bgsave(); + + /** + * Get the current connection name. + * + * @return K bulk-string-reply The connection name, or a null bulk reply if no name is set. + */ + K clientGetname(); + + /** + * Set the current connection name. + * + * @param name the client name + * @return simple-string-reply {@code OK} if the connection name was successfully set. + */ + String clientSetname(K name); + + /** + * Kill the connection of a client identified by ip:port. + * + * @param addr ip:port + * @return String simple-string-reply {@code OK} if the connection exists and has been closed + */ + String clientKill(String addr); + + /** + * Kill connections of clients which are filtered by {@code killArgs} + * + * @param killArgs args for the kill operation + * @return Long integer-reply number of killed connections + */ + Long clientKill(KillArgs killArgs); + + /** + * Unblock the specified blocked client. + * + * @param id the client id. + * @param type unblock type. + * @return Long integer-reply number of unblocked connections. + * @since 5.1 + */ + Long clientUnblock(long id, UnblockType type); + + /** + * Stop processing commands from clients for some time. + * + * @param timeout the timeout value in milliseconds + * @return String simple-string-reply The command returns OK or an error if the timeout is invalid. + */ + String clientPause(long timeout); + + /** + * Get the list of client connections. + * + * @return String bulk-string-reply a unique string, formatted as follows: One client connection per line (separated by LF), + * each line is composed of a succession of property=value fields separated by a space character. + */ + String clientList(); + + /** + * Get the id of the current connection. + * + * @return Long The command just returns the ID of the current connection. + * @since 5.3 + */ + Long clientId(); + + /** + * Returns an array reply of details about all Redis commands. + * + * @return List<Object> array-reply + */ + List command(); + + /** + * Returns an array reply of details about the requested commands. + * + * @param commands the commands to query for + * @return List<Object> array-reply + */ + List commandInfo(String... commands); + + /** + * Returns an array reply of details about the requested commands. + * + * @param commands the commands to query for + * @return List<Object> array-reply + */ + List commandInfo(CommandType... commands); + + /** + * Get total number of Redis commands. + * + * @return Long integer-reply of number of total commands in this Redis server. + */ + Long commandCount(); + + /** + * Get the value of a configuration parameter. + * + * @param parameter name of the parameter + * @return Map<String, String> bulk-string-reply + */ + Map configGet(String parameter); + + /** + * Reset the stats returned by INFO. 
+ * + * @return String simple-string-reply always {@code OK}. + */ + String configResetstat(); + + /** + * Rewrite the configuration file with the in memory configuration. + * + * @return String simple-string-reply {@code OK} when the configuration was rewritten properly. Otherwise an error is + * returned. + */ + String configRewrite(); + + /** + * Set a configuration parameter to the given value. + * + * @param parameter the parameter name + * @param value the parameter value + * @return String simple-string-reply: {@code OK} when the configuration was set properly. Otherwise an error is returned. + */ + String configSet(String parameter, String value); + + /** + * Return the number of keys in the selected database. + * + * @return Long integer-reply + */ + Long dbsize(); + + /** + * Crash and recover + * + * @param delay optional delay in milliseconds + * @return String simple-string-reply + */ + String debugCrashAndRecover(Long delay); + + /** + * Get debugging information about the internal hash-table state. + * + * @param db the database number + * @return String simple-string-reply + */ + String debugHtstats(int db); + + /** + * Get debugging information about a key. + * + * @param key the key + * @return String simple-string-reply + */ + String debugObject(K key); + + /** + * Make the server crash: Out of memory. + * + * @return nothing, because the server crashes before returning. + */ + void debugOom(); + + /** + * Make the server crash: Invalid pointer access. + * + * @return nothing, because the server crashes before returning. + */ + void debugSegfault(); + + /** + * Save RDB, clear the database and reload RDB. + * + * @return String simple-string-reply The commands returns OK on success. + */ + String debugReload(); + + /** + * Restart the server gracefully. + * + * @param delay optional delay in milliseconds + * @return String simple-string-reply + */ + String debugRestart(Long delay); + + /** + * Get debugging information about the internal SDS length. + * + * @param key the key + * @return String simple-string-reply + */ + String debugSdslen(K key); + + /** + * Remove all keys from all databases. + * + * @return String simple-string-reply + */ + String flushall(); + + /** + * Remove all keys asynchronously from all databases. + * + * @return String simple-string-reply + */ + String flushallAsync(); + + /** + * Remove all keys from the current database. + * + * @return String simple-string-reply + */ + String flushdb(); + + /** + * Remove all keys asynchronously from the current database. + * + * @return String simple-string-reply + */ + String flushdbAsync(); + + /** + * Get information and statistics about the server. + * + * @return String bulk-string-reply as a collection of text lines. + */ + String info(); + + /** + * Get information and statistics about the server. + * + * @param section the section type: string + * @return String bulk-string-reply as a collection of text lines. + */ + String info(String section); + + /** + * Get the UNIX time stamp of the last successful save to disk. + * + * @return Date integer-reply an UNIX time stamp. + */ + Date lastsave(); + + /** + * Reports the number of bytes that a key and its value require to be stored in RAM. + * + * @return memory usage in bytes. + * @since 5.2 + */ + Long memoryUsage(K key); + + /** + * Synchronously save the dataset to disk. + * + * @return String simple-string-reply The commands returns OK on success. 
+ */ + String save(); + + /** + * Synchronously save the dataset to disk and then shut down the server. + * + * @param save {@literal true} force save operation + */ + void shutdown(boolean save); + + /** + * Make the server a replica of another instance, or promote it as master. + * + * @param host the host type: string + * @param port the port type: string + * @return String simple-string-reply + */ + String slaveof(String host, int port); + + /** + * Promote server as master. + * + * @return String simple-string-reply + */ + String slaveofNoOne(); + + /** + * Read the slow log. + * + * @return List<Object> deeply nested multi bulk replies + */ + List slowlogGet(); + + /** + * Read the slow log. + * + * @param count the count + * @return List<Object> deeply nested multi bulk replies + */ + List slowlogGet(int count); + + /** + * Obtaining the current length of the slow log. + * + * @return Long length of the slow log. + */ + Long slowlogLen(); + + /** + * Resetting the slow log. + * + * @return String simple-string-reply The commands returns OK on success. + */ + String slowlogReset(); + + /** + * Return the current server time. + * + * @return List<V> array-reply specifically: + * + * A multi bulk reply containing two elements: + * + * unix time in seconds. microseconds. + */ + List time(); +} diff --git a/src/main/java/io/lettuce/core/api/sync/RedisSetCommands.java b/src/main/java/io/lettuce/core/api/sync/RedisSetCommands.java new file mode 100644 index 0000000000..29837eb91c --- /dev/null +++ b/src/main/java/io/lettuce/core/api/sync/RedisSetCommands.java @@ -0,0 +1,308 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api.sync; + +import java.util.List; +import java.util.Set; + +import io.lettuce.core.ScanArgs; +import io.lettuce.core.ScanCursor; +import io.lettuce.core.StreamScanCursor; +import io.lettuce.core.ValueScanCursor; +import io.lettuce.core.output.ValueStreamingChannel; + +/** + * Synchronous executed commands for Sets. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.0 + * @generated by io.lettuce.apigenerator.CreateSyncApi + */ +public interface RedisSetCommands { + + /** + * Add one or more members to a set. + * + * @param key the key + * @param members the member type: value + * @return Long integer-reply the number of elements that were added to the set, not including all the elements already + * present into the set. + */ + Long sadd(K key, V... members); + + /** + * Get the number of members in a set. + * + * @param key the key + * @return Long integer-reply the cardinality (number of elements) of the set, or {@literal false} if {@code key} does not + * exist. + */ + Long scard(K key); + + /** + * Subtract multiple sets. + * + * @param keys the key + * @return Set<V> array-reply list with members of the resulting set. + */ + Set sdiff(K... keys); + + /** + * Subtract multiple sets. 
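A small sketch exercising some of the read-only server commands above; the `maxmemory` parameter is just an example:

```java
import java.util.List;
import java.util.Map;

import io.lettuce.core.RedisClient;
import io.lettuce.core.api.sync.RedisCommands;

public class ServerInfoExample {

    public static void main(String[] args) {
        RedisCommands<String, String> redis = RedisClient.create("redis://localhost:6379") // assumed URI
                .connect().sync();

        // Number of keys in the currently selected database.
        System.out.println("dbsize: " + redis.dbsize());

        // CONFIG GET returns parameter/value pairs.
        Map<String, String> maxmemory = redis.configGet("maxmemory");
        System.out.println(maxmemory);

        // The memory section of INFO, as plain text lines.
        System.out.println(redis.info("memory"));

        // Server time: element 0 is the UNIX timestamp in seconds, element 1 the microseconds part.
        List<String> time = redis.time();
        System.out.println(time.get(0) + "." + time.get(1));
    }
}
```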
+ * + * @param channel the channel + * @param keys the keys + * @return Long count of members of the resulting set. + */ + Long sdiff(ValueStreamingChannel channel, K... keys); + + /** + * Subtract multiple sets and store the resulting set in a key. + * + * @param destination the destination type: key + * @param keys the key + * @return Long integer-reply the number of elements in the resulting set. + */ + Long sdiffstore(K destination, K... keys); + + /** + * Intersect multiple sets. + * + * @param keys the key + * @return Set<V> array-reply list with members of the resulting set. + */ + Set sinter(K... keys); + + /** + * Intersect multiple sets. + * + * @param channel the channel + * @param keys the keys + * @return Long count of members of the resulting set. + */ + Long sinter(ValueStreamingChannel channel, K... keys); + + /** + * Intersect multiple sets and store the resulting set in a key. + * + * @param destination the destination type: key + * @param keys the key + * @return Long integer-reply the number of elements in the resulting set. + */ + Long sinterstore(K destination, K... keys); + + /** + * Determine if a given value is a member of a set. + * + * @param key the key + * @param member the member type: value + * @return Boolean integer-reply specifically: + * + * {@literal true} if the element is a member of the set. {@literal false} if the element is not a member of the + * set, or if {@code key} does not exist. + */ + Boolean sismember(K key, V member); + + /** + * Move a member from one set to another. + * + * @param source the source key + * @param destination the destination type: key + * @param member the member type: value + * @return Boolean integer-reply specifically: + * + * {@literal true} if the element is moved. {@literal false} if the element is not a member of {@code source} and no + * operation was performed. + */ + Boolean smove(K source, K destination, V member); + + /** + * Get all the members in a set. + * + * @param key the key + * @return Set<V> array-reply all elements of the set. + */ + Set smembers(K key); + + /** + * Get all the members in a set. + * + * @param channel the channel + * @param key the keys + * @return Long count of members of the resulting set. + */ + Long smembers(ValueStreamingChannel channel, K key); + + /** + * Remove and return a random member from a set. + * + * @param key the key + * @return V bulk-string-reply the removed element, or {@literal null} when {@code key} does not exist. + */ + V spop(K key); + + /** + * Remove and return one or multiple random members from a set. + * + * @param key the key + * @param count number of members to pop + * @return V bulk-string-reply the removed element, or {@literal null} when {@code key} does not exist. + */ + Set spop(K key, long count); + + /** + * Get one random member from a set. + * + * @param key the key + * + * @return V bulk-string-reply without the additional {@code count} argument the command returns a Bulk Reply with the + * randomly selected element, or {@literal null} when {@code key} does not exist. + */ + V srandmember(K key); + + /** + * Get one or multiple random members from a set. + * + * @param key the key + * @param count the count type: long + * @return Set<V> bulk-string-reply without the additional {@code count} argument the command returns a Bulk Reply + * with the randomly selected element, or {@literal null} when {@code key} does not exist. + */ + List srandmember(K key, long count); + + /** + * Get one or multiple random members from a set. 
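The set commands above might be used as follows; keys and members are illustrative:

```java
import java.util.Set;

import io.lettuce.core.RedisClient;
import io.lettuce.core.api.sync.RedisCommands;

public class SetExample {

    public static void main(String[] args) {
        RedisCommands<String, String> redis = RedisClient.create("redis://localhost:6379") // assumed URI
                .connect().sync();

        redis.sadd("likes:alice", "redis", "java", "netty");
        redis.sadd("likes:bob", "redis", "go");

        // Membership test.
        System.out.println(redis.sismember("likes:alice", "redis")); // true

        // Intersection of both sets.
        Set<String> common = redis.sinter("likes:alice", "likes:bob");
        System.out.println(common); // [redis]

        // Difference: members of the first set that are not in the second (order not guaranteed).
        System.out.println(redis.sdiff("likes:alice", "likes:bob"));
    }
}
```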
+ * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param count the count + * @return Long count of members of the resulting set. + */ + Long srandmember(ValueStreamingChannel channel, K key, long count); + + /** + * Remove one or more members from a set. + * + * @param key the key + * @param members the member type: value + * @return Long integer-reply the number of members that were removed from the set, not including non existing members. + */ + Long srem(K key, V... members); + + /** + * Add multiple sets. + * + * @param keys the key + * @return Set<V> array-reply list with members of the resulting set. + */ + Set sunion(K... keys); + + /** + * Add multiple sets. + * + * @param channel streaming channel that receives a call for every value + * @param keys the keys + * @return Long count of members of the resulting set. + */ + Long sunion(ValueStreamingChannel channel, K... keys); + + /** + * Add multiple sets and store the resulting set in a key. + * + * @param destination the destination type: key + * @param keys the key + * @return Long integer-reply the number of elements in the resulting set. + */ + Long sunionstore(K destination, K... keys); + + /** + * Incrementally iterate Set elements. + * + * @param key the key + * @return ValueScanCursor<V> scan cursor. + */ + ValueScanCursor sscan(K key); + + /** + * Incrementally iterate Set elements. + * + * @param key the key + * @param scanArgs scan arguments + * @return ValueScanCursor<V> scan cursor. + */ + ValueScanCursor sscan(K key, ScanArgs scanArgs); + + /** + * Incrementally iterate Set elements. + * + * @param key the key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @param scanArgs scan arguments + * @return ValueScanCursor<V> scan cursor. + */ + ValueScanCursor sscan(K key, ScanCursor scanCursor, ScanArgs scanArgs); + + /** + * Incrementally iterate Set elements. + * + * @param key the key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @return ValueScanCursor<V> scan cursor. + */ + ValueScanCursor sscan(K key, ScanCursor scanCursor); + + /** + * Incrementally iterate Set elements. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @return StreamScanCursor scan cursor. + */ + StreamScanCursor sscan(ValueStreamingChannel channel, K key); + + /** + * Incrementally iterate Set elements. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param scanArgs scan arguments + * @return StreamScanCursor scan cursor. + */ + StreamScanCursor sscan(ValueStreamingChannel channel, K key, ScanArgs scanArgs); + + /** + * Incrementally iterate Set elements. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @param scanArgs scan arguments + * @return StreamScanCursor scan cursor. + */ + StreamScanCursor sscan(ValueStreamingChannel channel, K key, ScanCursor scanCursor, ScanArgs scanArgs); + + /** + * Incrementally iterate Set elements. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @return StreamScanCursor scan cursor. 
+ */ + StreamScanCursor sscan(ValueStreamingChannel channel, K key, ScanCursor scanCursor); +} diff --git a/src/main/java/io/lettuce/core/api/sync/RedisSortedSetCommands.java b/src/main/java/io/lettuce/core/api/sync/RedisSortedSetCommands.java new file mode 100644 index 0000000000..f87a38dae1 --- /dev/null +++ b/src/main/java/io/lettuce/core/api/sync/RedisSortedSetCommands.java @@ -0,0 +1,1251 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api.sync; + +import java.util.List; + +import io.lettuce.core.*; +import io.lettuce.core.output.ScoredValueStreamingChannel; +import io.lettuce.core.output.ValueStreamingChannel; + +/** + * Synchronous executed commands for Sorted Sets. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.0 + * @generated by io.lettuce.apigenerator.CreateSyncApi + */ +public interface RedisSortedSetCommands { + + /** + * Removes and returns a member with the lowest scores in the sorted set stored at one of the keys. + * + * @param timeout the timeout in seconds. + * @param keys the keys. + * @return KeyValue<K, ScoredValue<V>> multi-bulk containing the name of the key, the score and the popped member. + * @since 5.1 + */ + KeyValue> bzpopmin(long timeout, K... keys); + + /** + * Removes and returns a member with the highest scores in the sorted set stored at one of the keys. + * + * @param timeout the timeout in seconds. + * @param keys the keys. + * @return KeyValue<K, ScoredValue<V>> multi-bulk containing the name of the key, the score and the popped member. + * @since 5.1 + */ + KeyValue> bzpopmax(long timeout, K... keys); + + /** + * Add one or more members to a sorted set, or update its score if it already exists. + * + * @param key the key + * @param score the score + * @param member the member + * + * @return Long integer-reply specifically: + * + * The number of elements added to the sorted sets, not including elements already existing for which the score was + * updated. + */ + Long zadd(K key, double score, V member); + + /** + * Add one or more members to a sorted set, or update its score if it already exists. + * + * @param key the key + * @param scoresAndValues the scoresAndValue tuples (score,value,score,value,...) + * @return Long integer-reply specifically: + * + * The number of elements added to the sorted sets, not including elements already existing for which the score was + * updated. + */ + Long zadd(K key, Object... scoresAndValues); + + /** + * Add one or more members to a sorted set, or update its score if it already exists. + * + * @param key the key + * @param scoredValues the scored values + * @return Long integer-reply specifically: + * + * The number of elements added to the sorted sets, not including elements already existing for which the score was + * updated. + */ + Long zadd(K key, ScoredValue... scoredValues); + + /** + * Add one or more members to a sorted set, or update its score if it already exists. 
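+ * <p>
+ * A usage sketch (assumes a {@code RedisCommands<String, String>} instance named {@code commands}; key and member are
+ * illustrative, not part of the generated contract):
+ * <pre>{@code
+ * // add the member only if it is not already present (NX semantics)
+ * Long added = commands.zadd("ranking", ZAddArgs.Builder.nx(), 1.0, "member-a");
+ * }</pre>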
+ * + * @param key the key + * @param zAddArgs arguments for zadd + * @param score the score + * @param member the member + * + * @return Long integer-reply specifically: + * + * The number of elements added to the sorted sets, not including elements already existing for which the score was + * updated. + */ + Long zadd(K key, ZAddArgs zAddArgs, double score, V member); + + /** + * Add one or more members to a sorted set, or update its score if it already exists. + * + * @param key the key + * @param zAddArgs arguments for zadd + * @param scoresAndValues the scoresAndValue tuples (score,value,score,value,...) + * @return Long integer-reply specifically: + * + * The number of elements added to the sorted sets, not including elements already existing for which the score was + * updated. + */ + Long zadd(K key, ZAddArgs zAddArgs, Object... scoresAndValues); + + /** + * Add one or more members to a sorted set, or update its score if it already exists. + * + * @param key the ke + * @param zAddArgs arguments for zadd + * @param scoredValues the scored values + * @return Long integer-reply specifically: + * + * The number of elements added to the sorted sets, not including elements already existing for which the score was + * updated. + */ + Long zadd(K key, ZAddArgs zAddArgs, ScoredValue... scoredValues); + + /** + * Add one or more members to a sorted set, or update its score if it already exists applying the {@code INCR} option. ZADD + * acts like ZINCRBY. + * + * @param key the key + * @param score the score + * @param member the member + * @return Long integer-reply specifically: The total number of elements changed + */ + Double zaddincr(K key, double score, V member); + + /** + * Add one or more members to a sorted set, or update its score if it already exists applying the {@code INCR} option. ZADD + * acts like ZINCRBY. + * + * @param key the key + * @param zAddArgs arguments for zadd + * @param score the score + * @param member the member + * @return Long integer-reply specifically: The total number of elements changed + * @since 4.3 + */ + Double zaddincr(K key, ZAddArgs zAddArgs, double score, V member); + + /** + * Get the number of members in a sorted set. + * + * @param key the key + * @return Long integer-reply the cardinality (number of elements) of the sorted set, or {@literal false} if {@code key} + * does not exist. + */ + Long zcard(K key); + + /** + * Count the members in a sorted set with scores within the given values. + * + * @param key the key + * @param min min score + * @param max max score + * @return Long integer-reply the number of elements in the specified score range. + * @deprecated Use {@link #zcount(java.lang.Object, Range)} + */ + @Deprecated + Long zcount(K key, double min, double max); + + /** + * Count the members in a sorted set with scores within the given values. + * + * @param key the key + * @param min min score + * @param max max score + * @return Long integer-reply the number of elements in the specified score range. + * @deprecated Use {@link #zcount(java.lang.Object, Range)} + */ + @Deprecated + Long zcount(K key, String min, String max); + + /** + * Count the members in a sorted set with scores within the given {@link Range}. + * + * @param key the key + * @param range the range + * @return Long integer-reply the number of elements in the specified score range. + * @since 4.3 + */ + Long zcount(K key, Range range); + + /** + * Increment the score of a member in a sorted set. 
+ * + * @param key the key + * @param amount the increment type: long + * @param member the member type: value + * @return Double bulk-string-reply the new score of {@code member} (a double precision floating point number), represented + * as string. + */ + Double zincrby(K key, double amount, V member); + + /** + * Intersect multiple sorted sets and store the resulting sorted set in a new key. + * + * @param destination the destination + * @param keys the keys + * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. + */ + Long zinterstore(K destination, K... keys); + + /** + * Intersect multiple sorted sets and store the resulting sorted set in a new key. + * + * @param destination the destination + * @param storeArgs the storeArgs + * @param keys the keys + * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. + */ + Long zinterstore(K destination, ZStoreArgs storeArgs, K... keys); + + /** + * Count the number of members in a sorted set between a given lexicographical range. + * + * @param key the key + * @param min min score + * @param max max score + * @return Long integer-reply the number of elements in the specified score range. + * @deprecated Use {@link #zlexcount(java.lang.Object, Range)} + */ + @Deprecated + Long zlexcount(K key, String min, String max); + + /** + * Count the number of members in a sorted set between a given lexicographical range. + * + * @param key the key + * @param range the range + * @return Long integer-reply the number of elements in the specified score range. + * @since 4.3 + */ + Long zlexcount(K key, Range range); + + /** + * Removes and returns up to count members with the lowest scores in the sorted set stored at key. + * + * @param key the key + * @return ScoredValue<V> the removed element. + * @since 5.1 + */ + ScoredValue zpopmin(K key); + + /** + * Removes and returns up to count members with the lowest scores in the sorted set stored at key. + * + * @param key the key. + * @param count the number of elements to return. + * @return List<ScoredValue<V>> array-reply list of popped scores and elements. + * @since 5.1 + */ + List> zpopmin(K key, long count); + + /** + * Removes and returns up to count members with the highest scores in the sorted set stored at key. + * + * @param key the key + * @return ScoredValue<V> the removed element. + * @since 5.1 + */ + ScoredValue zpopmax(K key); + + /** + * Removes and returns up to count members with the highest scores in the sorted set stored at key. + * + * @param key the key. + * @param count the number of elements to return. + * @return List<ScoredValue<V>> array-reply list of popped scores and elements. + * @since 5.1 + */ + List> zpopmax(K key, long count); + + /** + * Return a range of members in a sorted set, by index. + * + * @param key the key + * @param start the start + * @param stop the stop + * @return List<V> array-reply list of elements in the specified range. + */ + List zrange(K key, long start, long stop); + + /** + * Return a range of members in a sorted set, by index. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param start the start + * @param stop the stop + * @return Long count of elements in the specified range. + */ + Long zrange(ValueStreamingChannel channel, K key, long start, long stop); + + /** + * Return a range of members with scores in a sorted set, by index. 
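+ * <p>
+ * A usage sketch (assumes a {@code RedisCommands<String, String>} instance named {@code commands}; the key is
+ * illustrative, not part of the generated contract):
+ * <pre>{@code
+ * // first three entries together with their scores, lowest score first
+ * List<ScoredValue<String>> top = commands.zrangeWithScores("ranking", 0, 2);
+ * }</pre>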
+ * + * @param key the key + * @param start the start + * @param stop the stop + * @return List<V> array-reply list of elements in the specified range. + */ + List> zrangeWithScores(K key, long start, long stop); + + /** + * Stream over a range of members with scores in a sorted set, by index. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param start the start + * @param stop the stop + * @return Long count of elements in the specified range. + */ + Long zrangeWithScores(ScoredValueStreamingChannel channel, K key, long start, long stop); + + /** + * Return a range of members in a sorted set, by lexicographical range. + * + * @param key the key + * @param min min score + * @param max max score + * @return List<V> array-reply list of elements in the specified range. + * @deprecated Use {@link #zrangebylex(java.lang.Object, Range)} + */ + @Deprecated + List zrangebylex(K key, String min, String max); + + /** + * Return a range of members in a sorted set, by lexicographical range. + * + * @param key the key + * @param range the range + * @return List<V> array-reply list of elements in the specified range. + * @since 4.3 + */ + List zrangebylex(K key, Range range); + + /** + * Return a range of members in a sorted set, by lexicographical range. + * + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return List<V> array-reply list of elements in the specified range. + * @deprecated Use {@link #zrangebylex(java.lang.Object, Range)} + */ + @Deprecated + List zrangebylex(K key, String min, String max, long offset, long count); + + /** + * Return a range of members in a sorted set, by lexicographical range. + * + * @param key the key + * @param range the range + * @param limit the limit + * @return List<V> array-reply list of elements in the specified range. + * @since 4.3 + */ + List zrangebylex(K key, Range range, Limit limit); + + /** + * Return a range of members in a sorted set, by score. + * + * @param key the key + * @param min min score + * @param max max score + * @return List<V> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrangebyscore(java.lang.Object, Range)} + */ + @Deprecated + List zrangebyscore(K key, double min, double max); + + /** + * Return a range of members in a sorted set, by score. + * + * @param key the key + * @param min min score + * @param max max score + * @return List<V> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrangebyscore(java.lang.Object, Range)} + */ + @Deprecated + List zrangebyscore(K key, String min, String max); + + /** + * Return a range of members in a sorted set, by score. + * + * @param key the key + * @param range the range + * @return List<V> array-reply list of elements in the specified score range. + * @since 4.3 + */ + List zrangebyscore(K key, Range range); + + /** + * Return a range of members in a sorted set, by score. + * + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return List<V> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrangebyscore(java.lang.Object, Range, Limit)} + */ + @Deprecated + List zrangebyscore(K key, double min, double max, long offset, long count); + + /** + * Return a range of members in a sorted set, by score. 
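+ * <p>
+ * A usage sketch of the preferred {@link Range}/{@link Limit} variant (assumes a {@code RedisCommands<String, String>}
+ * instance named {@code commands}; key and bounds are illustrative, not part of the generated contract):
+ * <pre>{@code
+ * // up to ten members with a score between 1.0 and 5.0 (inclusive)
+ * List<String> members = commands.zrangebyscore("ranking", Range.create(1.0, 5.0), Limit.create(0, 10));
+ * }</pre>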
+ * + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return List<V> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrangebyscore(java.lang.Object, Range, Limit)} + */ + @Deprecated + List zrangebyscore(K key, String min, String max, long offset, long count); + + /** + * Return a range of members in a sorted set, by score. + * + * @param key the key + * @param range the range + * @param limit the limit + * @return List<V> array-reply list of elements in the specified score range. + * @since 4.3 + */ + List zrangebyscore(K key, Range range, Limit limit); + + /** + * Stream over a range of members in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param min min score + * @param max max score + * @return Long count of elements in the specified score range. + * @deprecated Use {@link #zrangebyscore(ValueStreamingChannel, java.lang.Object, Range)} + */ + @Deprecated + Long zrangebyscore(ValueStreamingChannel channel, K key, double min, double max); + + /** + * Stream over a range of members in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param min min score + * @param max max score + * @return Long count of elements in the specified score range. + * @deprecated Use {@link #zrangebyscore(ValueStreamingChannel, java.lang.Object, Range)} + */ + @Deprecated + Long zrangebyscore(ValueStreamingChannel channel, K key, String min, String max); + + /** + * Stream over a range of members in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param range the range + * @return Long count of elements in the specified score range. + * @since 4.3 + */ + Long zrangebyscore(ValueStreamingChannel channel, K key, Range range); + + /** + * Stream over range of members in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return Long count of elements in the specified score range. + * @deprecated Use {@link #zrangebyscore(ValueStreamingChannel, java.lang.Object, Range, Limit limit)} + */ + @Deprecated + Long zrangebyscore(ValueStreamingChannel channel, K key, double min, double max, long offset, long count); + + /** + * Stream over a range of members in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return Long count of elements in the specified score range. + * @deprecated Use {@link #zrangebyscore(ValueStreamingChannel, java.lang.Object, Range, Limit limit)} + */ + @Deprecated + Long zrangebyscore(ValueStreamingChannel channel, K key, String min, String max, long offset, long count); + + /** + * Stream over a range of members in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param range the range + * @param limit the limit + * @return Long count of elements in the specified score range. 
+ * @since 4.3 + */ + Long zrangebyscore(ValueStreamingChannel channel, K key, Range range, Limit limit); + + /** + * Return a range of members with score in a sorted set, by score. + * + * @param key the key + * @param min min score + * @param max max score + * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrangebyscoreWithScores(java.lang.Object, Range)} + */ + @Deprecated + List> zrangebyscoreWithScores(K key, double min, double max); + + /** + * Return a range of members with score in a sorted set, by score. + * + * @param key the key + * @param min min score + * @param max max score + * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrangebyscoreWithScores(java.lang.Object, Range)} + */ + @Deprecated + List> zrangebyscoreWithScores(K key, String min, String max); + + /** + * Return a range of members with score in a sorted set, by score. + * + * @param key the key + * @param range the range + * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. + * @since 4.3 + */ + List> zrangebyscoreWithScores(K key, Range range); + + /** + * Return a range of members with score in a sorted set, by score. + * + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrangebyscoreWithScores(java.lang.Object, Range, Limit limit)} + */ + @Deprecated + List> zrangebyscoreWithScores(K key, double min, double max, long offset, long count); + + /** + * Return a range of members with score in a sorted set, by score. + * + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrangebyscoreWithScores(java.lang.Object, Range, Limit)} + */ + @Deprecated + List> zrangebyscoreWithScores(K key, String min, String max, long offset, long count); + + /** + * Return a range of members with score in a sorted set, by score. + * + * @param key the key + * @param range the range + * @param limit the limit + * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. + * @since 4.3 + */ + List> zrangebyscoreWithScores(K key, Range range, Limit limit); + + /** + * Stream over a range of members with scores in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param min min score + * @param max max score + * @return Long count of elements in the specified score range. + * @deprecated Use {@link #zrangebyscoreWithScores(ScoredValueStreamingChannel, java.lang.Object, Range)} + */ + @Deprecated + Long zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double min, double max); + + /** + * Stream over a range of members with scores in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param min min score + * @param max max score + * @return Long count of elements in the specified score range. 
+ * @deprecated Use {@link #zrangebyscoreWithScores(ScoredValueStreamingChannel, java.lang.Object, Range)} + */ + @Deprecated + Long zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String min, String max); + + /** + * Stream over a range of members with scores in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param range the range + * @return Long count of elements in the specified score range. + * @since 4.3 + */ + Long zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, Range range); + + /** + * Stream over a range of members with scores in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return Long count of elements in the specified score range. + * @deprecated Use {@link #zrangebyscoreWithScores(ScoredValueStreamingChannel, java.lang.Object, Range, Limit limit)} + */ + @Deprecated + Long zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double min, double max, long offset, long count); + + /** + * Stream over a range of members with scores in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return Long count of elements in the specified score range. + * @deprecated Use {@link #zrangebyscoreWithScores(ScoredValueStreamingChannel, java.lang.Object, Range, Limit limit)} + */ + @Deprecated + Long zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String min, String max, long offset, long count); + + /** + * Stream over a range of members with scores in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param range the range + * @param limit the limit + * @return Long count of elements in the specified score range. + * @since 4.3 + */ + Long zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, Range range, Limit limit); + + /** + * Determine the index of a member in a sorted set. + * + * @param key the key + * @param member the member type: value + * @return Long integer-reply the rank of {@code member}. If {@code member} does not exist in the sorted set or {@code key} + * does not exist, + */ + Long zrank(K key, V member); + + /** + * Remove one or more members from a sorted set. + * + * @param key the key + * @param members the member type: value + * @return Long integer-reply specifically: + * + * The number of members removed from the sorted set, not including non existing members. + */ + Long zrem(K key, V... members); + + /** + * Remove all members in a sorted set between the given lexicographical range. + * + * @param key the key + * @param min min score + * @param max max score + * @return Long integer-reply the number of elements removed. + * @deprecated Use {@link #zremrangebylex(java.lang.Object, Range)} + */ + @Deprecated + Long zremrangebylex(K key, String min, String max); + + /** + * Remove all members in a sorted set between the given lexicographical range. + * + * @param key the key + * @param range the range + * @return Long integer-reply the number of elements removed. 
+ * @since 4.3 + */ + Long zremrangebylex(K key, Range range); + + /** + * Remove all members in a sorted set within the given indexes. + * + * @param key the key + * @param start the start type: long + * @param stop the stop type: long + * @return Long integer-reply the number of elements removed. + */ + Long zremrangebyrank(K key, long start, long stop); + + /** + * Remove all members in a sorted set within the given scores. + * + * @param key the key + * @param min min score + * @param max max score + * @return Long integer-reply the number of elements removed. + * @deprecated Use {@link #zremrangebyscore(java.lang.Object, Range)} + */ + @Deprecated + Long zremrangebyscore(K key, double min, double max); + + /** + * Remove all members in a sorted set within the given scores. + * + * @param key the key + * @param min min score + * @param max max score + * @return Long integer-reply the number of elements removed. + * @deprecated Use {@link #zremrangebyscore(java.lang.Object, Range)} + */ + @Deprecated + Long zremrangebyscore(K key, String min, String max); + + /** + * Remove all members in a sorted set within the given scores. + * + * @param key the key + * @param range the range + * @return Long integer-reply the number of elements removed. + * @since 4.3 + */ + Long zremrangebyscore(K key, Range range); + + /** + * Return a range of members in a sorted set, by index, with scores ordered from high to low. + * + * @param key the key + * @param start the start + * @param stop the stop + * @return List<V> array-reply list of elements in the specified range. + */ + List zrevrange(K key, long start, long stop); + + /** + * Stream over a range of members in a sorted set, by index, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param start the start + * @param stop the stop + * @return Long count of elements in the specified range. + */ + Long zrevrange(ValueStreamingChannel channel, K key, long start, long stop); + + /** + * Return a range of members with scores in a sorted set, by index, with scores ordered from high to low. + * + * @param key the key + * @param start the start + * @param stop the stop + * @return List<V> array-reply list of elements in the specified range. + */ + List> zrevrangeWithScores(K key, long start, long stop); + + /** + * Stream over a range of members with scores in a sorted set, by index, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param start the start + * @param stop the stop + * @return Long count of elements in the specified range. + */ + Long zrevrangeWithScores(ScoredValueStreamingChannel channel, K key, long start, long stop); + + /** + * Return a range of members in a sorted set, by lexicographical range ordered from high to low. + * + * @param key the key + * @param range the range + * @return List<V> array-reply list of elements in the specified score range. + * @since 4.3 + */ + List zrevrangebylex(K key, Range range); + + /** + * Return a range of members in a sorted set, by lexicographical range ordered from high to low. + * + * @param key the key + * @param range the range + * @param limit the limit + * @return List<V> array-reply list of elements in the specified score range. 
+ * @since 4.3
+ */
+ List zrevrangebylex(K key, Range range, Limit limit);
+
+ /**
+ * Return a range of members in a sorted set, by score, with scores ordered from high to low.
+ *
+ * @param key the key
+ * @param min min score
+ * @param max max score
+ * @return List<V> array-reply list of elements in the specified score range.
+ * @deprecated Use {@link #zrevrangebyscore(java.lang.Object, Range)}
+ */
+ @Deprecated
+ List zrevrangebyscore(K key, double max, double min);
+
+ /**
+ * Return a range of members in a sorted set, by score, with scores ordered from high to low.
+ *
+ * @param key the key
+ * @param min min score
+ * @param max max score
+ * @return List<V> array-reply list of elements in the specified score range.
+ * @deprecated Use {@link #zrevrangebyscore(java.lang.Object, Range)}
+ */
+ @Deprecated
+ List zrevrangebyscore(K key, String max, String min);
+
+ /**
+ * Return a range of members in a sorted set, by score, with scores ordered from high to low.
+ *
+ * @param key the key
+ * @param range the range
+ * @return List<V> array-reply list of elements in the specified score range.
+ * @since 4.3
+ */
+ List zrevrangebyscore(K key, Range range);
+
+ /**
+ * Return a range of members in a sorted set, by score, with scores ordered from high to low.
+ *
+ * @param key the key
+ * @param max max score
+ * @param min min score
+ * @param offset the offset
+ * @param count the count
+ * @return List<V> array-reply list of elements in the specified score range.
+ * @deprecated Use {@link #zrevrangebyscore(java.lang.Object, Range, Limit)}
+ */
+ @Deprecated
+ List zrevrangebyscore(K key, double max, double min, long offset, long count);
+
+ /**
+ * Return a range of members in a sorted set, by score, with scores ordered from high to low.
+ *
+ * @param key the key
+ * @param max max score
+ * @param min min score
+ * @param offset the offset
+ * @param count the count
+ * @return List<V> array-reply list of elements in the specified score range.
+ * @deprecated Use {@link #zrevrangebyscore(java.lang.Object, Range, Limit)}
+ */
+ @Deprecated
+ List zrevrangebyscore(K key, String max, String min, long offset, long count);
+
+ /**
+ * Return a range of members in a sorted set, by score, with scores ordered from high to low.
+ *
+ * @param key the key
+ * @param range the range
+ * @param limit the limit
+ * @return List<V> array-reply list of elements in the specified score range.
+ * @since 4.3
+ */
+ List zrevrangebyscore(K key, Range range, Limit limit);
+
+ /**
+ * Stream over a range of members in a sorted set, by score, with scores ordered from high to low.
+ *
+ * @param channel streaming channel that receives a call for every value
+ * @param key the key
+ * @param max max score
+ * @param min min score
+ * @return Long count of elements in the specified range.
+ * @deprecated Use {@link #zrevrangebyscore(java.lang.Object, Range)}
+ */
+ @Deprecated
+ Long zrevrangebyscore(ValueStreamingChannel channel, K key, double max, double min);
+
+ /**
+ * Stream over a range of members in a sorted set, by score, with scores ordered from high to low.
+ *
+ * @param channel streaming channel that receives a call for every value
+ * @param key the key
+ * @param min min score
+ * @param max max score
+ * @return Long count of elements in the specified range.
+ * @deprecated Use {@link #zrevrangebyscore(java.lang.Object, Range)} + */ + @Deprecated + Long zrevrangebyscore(ValueStreamingChannel channel, K key, String max, String min); + + /** + * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param range the range + * @return Long count of elements in the specified range. + * @since 4.3 + */ + Long zrevrangebyscore(ValueStreamingChannel channel, K key, Range range); + + /** + * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return Long count of elements in the specified range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(java.lang.Object, Range, Limit)} + */ + @Deprecated + Long zrevrangebyscore(ValueStreamingChannel channel, K key, double max, double min, long offset, long count); + + /** + * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return Long count of elements in the specified range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(java.lang.Object, Range, Limit)} + */ + @Deprecated + Long zrevrangebyscore(ValueStreamingChannel channel, K key, String max, String min, long offset, long count); + + /** + * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param range the range + * @param limit the limit + * @return Long count of elements in the specified range. + * @since 4.3 + */ + Long zrevrangebyscore(ValueStreamingChannel channel, K key, Range range, Limit limit); + + /** + * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param max max score + * @param min min score + * @return List<V> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(java.lang.Object, Range)} + */ + @Deprecated + List> zrevrangebyscoreWithScores(K key, double max, double min); + + /** + * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param max max score + * @param min min score + * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(java.lang.Object, Range)} + */ + @Deprecated + List> zrevrangebyscoreWithScores(K key, String max, String min); + + /** + * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param range the range + * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. + * @since 4.3 + */ + List> zrevrangebyscoreWithScores(K key, Range range); + + /** + * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. 
+ * + * @param key the key + * @param max max score + * @param min min score + * @param offset the offset + * @param count the count + * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(java.lang.Object, Range, Limit)} + */ + @Deprecated + List> zrevrangebyscoreWithScores(K key, double max, double min, long offset, long count); + + /** + * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param max max score + * @param min min score + * @param offset the offset + * @param count the count + * @return List<V> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(java.lang.Object, Range, Limit)} + */ + @Deprecated + List> zrevrangebyscoreWithScores(K key, String max, String min, long offset, long count); + + /** + * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param range the range + * @param limit limit + * @return List<V> array-reply list of elements in the specified score range. + * @since 4.3 + */ + List> zrevrangebyscoreWithScores(K key, Range range, Limit limit); + + /** + * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param min min score + * @param max max score + * @return Long count of elements in the specified range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(ScoredValueStreamingChannel, java.lang.Object, Range)} + */ + @Deprecated + Long zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double max, double min); + + /** + * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param min min score + * @param max max score + * @return Long count of elements in the specified range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(ScoredValueStreamingChannel, java.lang.Object, Range)} + */ + @Deprecated + Long zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String max, String min); + + /** + * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param range the range + * @return Long count of elements in the specified range. + */ + Long zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, Range range); + + /** + * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return Long count of elements in the specified range. 
+ * @deprecated Use {@link #zrevrangebyscoreWithScores(ScoredValueStreamingChannel, java.lang.Object, Range, Limit)} + */ + @Deprecated + Long zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double max, double min, long offset, long count); + + /** + * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return Long count of elements in the specified range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(ScoredValueStreamingChannel, java.lang.Object, Range, Limit)} + */ + @Deprecated + Long zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String max, String min, long offset, long count); + + /** + * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param range the range + * @param limit the limit + * @return Long count of elements in the specified range. + * @since 4.3 + */ + Long zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, Range range, Limit limit); + + /** + * Determine the index of a member in a sorted set, with scores ordered from high to low. + * + * @param key the key + * @param member the member type: value + * @return Long integer-reply the rank of {@code member}. If {@code member} does not exist in the sorted set or {@code key} + * does not exist, + */ + Long zrevrank(K key, V member); + + /** + * Incrementally iterate sorted sets elements and associated scores. + * + * @param key the key + * @return ScoredValueScanCursor<V> scan cursor. + */ + ScoredValueScanCursor zscan(K key); + + /** + * Incrementally iterate sorted sets elements and associated scores. + * + * @param key the key + * @param scanArgs scan arguments + * @return ScoredValueScanCursor<V> scan cursor. + */ + ScoredValueScanCursor zscan(K key, ScanArgs scanArgs); + + /** + * Incrementally iterate sorted sets elements and associated scores. + * + * @param key the key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @param scanArgs scan arguments + * @return ScoredValueScanCursor<V> scan cursor. + */ + ScoredValueScanCursor zscan(K key, ScanCursor scanCursor, ScanArgs scanArgs); + + /** + * Incrementally iterate sorted sets elements and associated scores. + * + * @param key the key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @return ScoredValueScanCursor<V> scan cursor. + */ + ScoredValueScanCursor zscan(K key, ScanCursor scanCursor); + + /** + * Incrementally iterate sorted sets elements and associated scores. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @return StreamScanCursor scan cursor. + */ + StreamScanCursor zscan(ScoredValueStreamingChannel channel, K key); + + /** + * Incrementally iterate sorted sets elements and associated scores. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param scanArgs scan arguments + * @return StreamScanCursor scan cursor. 
+ */ + StreamScanCursor zscan(ScoredValueStreamingChannel channel, K key, ScanArgs scanArgs); + + /** + * Incrementally iterate sorted sets elements and associated scores. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @param scanArgs scan arguments + * @return StreamScanCursor scan cursor. + */ + StreamScanCursor zscan(ScoredValueStreamingChannel channel, K key, ScanCursor scanCursor, ScanArgs scanArgs); + + /** + * Incrementally iterate sorted sets elements and associated scores. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @return StreamScanCursor scan cursor. + */ + StreamScanCursor zscan(ScoredValueStreamingChannel channel, K key, ScanCursor scanCursor); + + /** + * Get the score associated with the given member in a sorted set. + * + * @param key the key + * @param member the member type: value + * @return Double bulk-string-reply the score of {@code member} (a double precision floating point number), represented as + * string. + */ + Double zscore(K key, V member); + + /** + * Add multiple sorted sets and store the resulting sorted set in a new key. + * + * @param destination destination key + * @param keys source keys + * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. + */ + Long zunionstore(K destination, K... keys); + + /** + * Add multiple sorted sets and store the resulting sorted set in a new key. + * + * @param destination the destination + * @param storeArgs the storeArgs + * @param keys the keys + * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. + */ + Long zunionstore(K destination, ZStoreArgs storeArgs, K... keys); +} diff --git a/src/main/java/io/lettuce/core/api/sync/RedisStreamCommands.java b/src/main/java/io/lettuce/core/api/sync/RedisStreamCommands.java new file mode 100644 index 0000000000..428d95e64b --- /dev/null +++ b/src/main/java/io/lettuce/core/api/sync/RedisStreamCommands.java @@ -0,0 +1,324 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api.sync; + +import java.util.List; +import java.util.Map; + +import io.lettuce.core.*; +import io.lettuce.core.XReadArgs.StreamOffset; + +/** + * Synchronous executed commands for Streams. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 5.1 + * @generated by io.lettuce.apigenerator.CreateSyncApi + */ +public interface RedisStreamCommands { + + /** + * Acknowledge one or more messages as processed. + * + * @param key the stream key. + * @param group name of the consumer group. + * @param messageIds message Id's to acknowledge. 
+ * @return simple-reply the length of acknowledged messages.
+ */
+ Long xack(K key, K group, String... messageIds);
+
+ /**
+ * Append a message to the stream {@code key}.
+ *
+ * @param key the stream key.
+ * @param body message body.
+ * @return simple-reply the message Id.
+ */
+ String xadd(K key, Map body);
+
+ /**
+ * Append a message to the stream {@code key}.
+ *
+ * @param key the stream key.
+ * @param args
+ * @param body message body.
+ * @return simple-reply the message Id.
+ */
+ String xadd(K key, XAddArgs args, Map body);
+
+ /**
+ * Append a message to the stream {@code key}.
+ *
+ * @param key the stream key.
+ * @param keysAndValues message body.
+ * @return simple-reply the message Id.
+ */
+ String xadd(K key, Object... keysAndValues);
+
+ /**
+ * Append a message to the stream {@code key}.
+ *
+ * @param key the stream key.
+ * @param args
+ * @param keysAndValues message body.
+ * @return simple-reply the message Id.
+ */
+ String xadd(K key, XAddArgs args, Object... keysAndValues);
+
+ /**
+ * Gets ownership of one or multiple messages in the Pending Entries List of a given stream consumer group.
+ *
+ * @param key the stream key.
+ * @param consumer consumer identified by group name and consumer key.
+ * @param minIdleTime
+ * @param messageIds message Id's to claim.
+ * @return simple-reply the {@link StreamMessage}
+ */
+ List> xclaim(K key, Consumer consumer, long minIdleTime, String... messageIds);
+
+ /**
+ * Gets ownership of one or multiple messages in the Pending Entries List of a given stream consumer group.
+ * <p>
+ * Note that setting the {@code JUSTID} flag (calling this method with {@link XClaimArgs#justid()}) suppresses the message
+ * body and {@link StreamMessage#getBody()} is {@code null}.
+ *
+ * @param key the stream key.
+ * @param consumer consumer identified by group name and consumer key.
+ * @param args
+ * @param messageIds message Id's to claim.
+ * @return simple-reply the {@link StreamMessage}
+ */
+ List> xclaim(K key, Consumer consumer, XClaimArgs args, String... messageIds);
+
+ /**
+ * Removes the specified entries from the stream. Returns the number of items deleted, which may differ from the number
+ * of IDs passed in case certain IDs do not exist.
+ *
+ * @param key the stream key.
+ * @param messageIds stream message Id's.
+ * @return simple-reply number of removed entries.
+ */
+ Long xdel(K key, String... messageIds);
+
+ /**
+ * Create a consumer group.
+ *
+ * @param streamOffset name of the stream containing the offset to set.
+ * @param group name of the consumer group.
+ * @return simple-reply {@literal true} if successful.
+ */
+ String xgroupCreate(StreamOffset streamOffset, K group);
+
+ /**
+ * Create a consumer group.
+ *
+ * @param streamOffset name of the stream containing the offset to set.
+ * @param group name of the consumer group.
+ * @param args
+ * @return simple-reply {@literal true} if successful.
+ * @since 5.2
+ */
+ String xgroupCreate(StreamOffset streamOffset, K group, XGroupCreateArgs args);
+
+ /**
+ * Delete a consumer from a consumer group.
+ *
+ * @param key the stream key.
+ * @param consumer consumer identified by group name and consumer key.
+ * @return simple-reply {@literal true} if successful.
+ */
+ Boolean xgroupDelconsumer(K key, Consumer consumer);
+
+ /**
+ * Destroy a consumer group.
+ *
+ * @param key the stream key.
+ * @param group name of the consumer group.
+ * @return simple-reply {@literal true} if successful.
+ */
+ Boolean xgroupDestroy(K key, K group);
+
+ /**
+ * Set the current {@code group} id.
+ *
+ * @param streamOffset name of the stream containing the offset to set.
+ * @param group name of the consumer group.
+ * @return simple-reply OK
+ */
+ String xgroupSetid(StreamOffset streamOffset, K group);
+
+ /**
+ * Retrieve information about the stream at {@code key}.
+ *
+ * @param key the stream key.
+ * @return List<Object> array-reply.
+ * @since 5.2
+ */
+ List xinfoStream(K key);
+
+ /**
+ * Retrieve information about the stream consumer groups at {@code key}.
+ *
+ * @param key the stream key.
+ * @return List<Object> array-reply.
+ * @since 5.2
+ */
+ List xinfoGroups(K key);
+
+ /**
+ * Retrieve information about consumer groups of group {@code group} and stream at {@code key}.
+ *
+ * @param key the stream key.
+ * @param group name of the consumer group.
+ * @return List<Object> array-reply.
+ * @since 5.2
+ */
+ List xinfoConsumers(K key, K group);
+
+ /**
+ * Get the length of a stream.
+ *
+ * @param key the stream key.
+ * @return simple-reply the length of the stream.
+ */
+ Long xlen(K key);
+
+ /**
+ * Read pending messages from a stream for a {@code group}.
+ *
+ * @param key the stream key.
+ * @param group name of the consumer group.
+ * @return List<Object> array-reply list pending entries.
+ */
+ List xpending(K key, K group);
+
+ /**
+ * Read pending messages from a stream within a specific {@link Range}.
+ *
+ * @param key the stream key.
+ * @param group name of the consumer group.
+ * @param range must not be {@literal null}.
+ * @param limit must not be {@literal null}.
+ * @return List<Object> array-reply list with members of the resulting stream. + */ + List xpending(K key, K group, Range range, Limit limit); + + /** + * Read pending messages from a stream within a specific {@link Range}. + * + * @param key the stream key. + * @param consumer consumer identified by group name and consumer key. + * @param range must not be {@literal null}. + * @param limit must not be {@literal null}. + * @return List<Object> array-reply list with members of the resulting stream. + */ + List xpending(K key, Consumer consumer, Range range, Limit limit); + + /** + * Read messages from a stream within a specific {@link Range}. + * + * @param key the stream key. + * @param range must not be {@literal null}. + * @return List<StreamMessage> array-reply list with members of the resulting stream. + */ + List> xrange(K key, Range range); + + /** + * Read messages from a stream within a specific {@link Range} applying a {@link Limit}. + * + * @param key the stream key. + * @param range must not be {@literal null}. + * @param limit must not be {@literal null}. + * @return List<StreamMessage> array-reply list with members of the resulting stream. + */ + List> xrange(K key, Range range, Limit limit); + + /** + * Read messages from one or more {@link StreamOffset}s. + * + * @param streams the streams to read from. + * @return List<StreamMessage> array-reply list with members of the resulting stream. + */ + List> xread(StreamOffset... streams); + + /** + * Read messages from one or more {@link StreamOffset}s. + * + * @param args read arguments. + * @param streams the streams to read from. + * @return List<StreamMessage> array-reply list with members of the resulting stream. + */ + List> xread(XReadArgs args, StreamOffset... streams); + + /** + * Read messages from one or more {@link StreamOffset}s using a consumer group. + * + * @param consumer consumer/group. + * @param streams the streams to read from. + * @return List<StreamMessage> array-reply list with members of the resulting stream. + */ + List> xreadgroup(Consumer consumer, StreamOffset... streams); + + /** + * Read messages from one or more {@link StreamOffset}s using a consumer group. + * + * @param consumer consumer/group. + * @param args read arguments. + * @param streams the streams to read from. + * @return List<StreamMessage> array-reply list with members of the resulting stream. + */ + List> xreadgroup(Consumer consumer, XReadArgs args, StreamOffset... streams); + + /** + * Read messages from a stream within a specific {@link Range} in reverse order. + * + * @param key the stream key. + * @param range must not be {@literal null}. + * @return List<StreamMessage> array-reply list with members of the resulting stream. + */ + List> xrevrange(K key, Range range); + + /** + * Read messages from a stream within a specific {@link Range} applying a {@link Limit} in reverse order. + * + * @param key the stream key. + * @param range must not be {@literal null}. + * @param limit must not be {@literal null}. + * @return List<StreamMessage> array-reply list with members of the resulting stream. + */ + List> xrevrange(K key, Range range, Limit limit); + + /** + * Trims the stream to {@code count} elements. + * + * @param key the stream key. + * @param count length of the stream. + * @return simple-reply number of removed entries. + */ + Long xtrim(K key, long count); + + /** + * Trims the stream to {@code count} elements. + * + * @param key the stream key. 
+ * @param approximateTrimming {@literal true} to trim approximately using the {@code ~} flag. + * @param count length of the stream. + * @return simple-reply number of removed entries. + */ + Long xtrim(K key, boolean approximateTrimming, long count); +} diff --git a/src/main/java/io/lettuce/core/api/sync/RedisStringCommands.java b/src/main/java/io/lettuce/core/api/sync/RedisStringCommands.java new file mode 100644 index 0000000000..b176d07273 --- /dev/null +++ b/src/main/java/io/lettuce/core/api/sync/RedisStringCommands.java @@ -0,0 +1,376 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api.sync; + +import java.util.List; +import java.util.Map; + +import io.lettuce.core.BitFieldArgs; +import io.lettuce.core.KeyValue; +import io.lettuce.core.SetArgs; +import io.lettuce.core.output.KeyValueStreamingChannel; + +/** + * Synchronous executed commands for Strings. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.0 + * @generated by io.lettuce.apigenerator.CreateSyncApi + */ +public interface RedisStringCommands { + + /** + * Append a value to a key. + * + * @param key the key + * @param value the value + * @return Long integer-reply the length of the string after the append operation. + */ + Long append(K key, V value); + + /** + * Count set bits in a string. + * + * @param key the key + * + * @return Long integer-reply The number of bits set to 1. + */ + Long bitcount(K key); + + /** + * Count set bits in a string. + * + * @param key the key + * @param start the start + * @param end the end + * + * @return Long integer-reply The number of bits set to 1. + */ + Long bitcount(K key, long start, long end); + + /** + * Execute {@code BITFIELD} with its subcommands. + * + * @param key the key + * @param bitFieldArgs the args containing subcommands, must not be {@literal null}. + * + * @return Long bulk-reply the results from the bitfield commands. + */ + List bitfield(K key, BitFieldArgs bitFieldArgs); + + /** + * Find first bit set or clear in a string. + * + * @param key the key + * @param state the state + * + * @return Long integer-reply The command returns the position of the first bit set to 1 or 0 according to the request. + * + * If we look for set bits (the bit argument is 1) and the string is empty or composed of just zero bytes, -1 is + * returned. + * + * If we look for clear bits (the bit argument is 0) and the string only contains bit set to 1, the function returns + * the first bit not part of the string on the right. So if the string is tree bytes set to the value 0xff the + * command {@code BITPOS key 0} will return 24, since up to bit 23 all the bits are 1. + * + * Basically the function consider the right of the string as padded with zeros if you look for clear bits and + * specify no range or the start argument only. + */ + Long bitpos(K key, boolean state); + + /** + * Find first bit set or clear in a string. 
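+ * <p>
+ * Worked example (illustrative): for a key holding the three bytes {@code 0xff 0xf0 0x00}, {@code bitpos(key, false, 0)}
+ * returns {@code 12}, the offset of the first clear bit.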
+ * + * @param key the key + * @param state the bit type: long + * @param start the start type: long + * @return Long integer-reply The command returns the position of the first bit set to 1 or 0 according to the request. + * + * If we look for set bits (the bit argument is 1) and the string is empty or composed of just zero bytes, -1 is + * returned. + * + * If we look for clear bits (the bit argument is 0) and the string only contains bit set to 1, the function returns + * the first bit not part of the string on the right. So if the string is tree bytes set to the value 0xff the + * command {@code BITPOS key 0} will return 24, since up to bit 23 all the bits are 1. + * + * Basically the function consider the right of the string as padded with zeros if you look for clear bits and + * specify no range or the start argument only. + * @since 5.0.1 + */ + Long bitpos(K key, boolean state, long start); + + /** + * Find first bit set or clear in a string. + * + * @param key the key + * @param state the bit type: long + * @param start the start type: long + * @param end the end type: long + * @return Long integer-reply The command returns the position of the first bit set to 1 or 0 according to the request. + * + * If we look for set bits (the bit argument is 1) and the string is empty or composed of just zero bytes, -1 is + * returned. + * + * If we look for clear bits (the bit argument is 0) and the string only contains bit set to 1, the function returns + * the first bit not part of the string on the right. So if the string is tree bytes set to the value 0xff the + * command {@code BITPOS key 0} will return 24, since up to bit 23 all the bits are 1. + * + * Basically the function consider the right of the string as padded with zeros if you look for clear bits and + * specify no range or the start argument only. + * + * However this behavior changes if you are looking for clear bits and specify a range with both + * start and end. If no clear bit is found in the specified range, the function + * returns -1 as the user specified a clear range and there are no 0 bits in that range. + */ + Long bitpos(K key, boolean state, long start, long end); + + /** + * Perform bitwise AND between strings. + * + * @param destination result key of the operation + * @param keys operation input key names + * @return Long integer-reply The size of the string stored in the destination key, that is equal to the size of the longest + * input string. + */ + Long bitopAnd(K destination, K... keys); + + /** + * Perform bitwise NOT between strings. + * + * @param destination result key of the operation + * @param source operation input key names + * @return Long integer-reply The size of the string stored in the destination key, that is equal to the size of the longest + * input string. + */ + Long bitopNot(K destination, K source); + + /** + * Perform bitwise OR between strings. + * + * @param destination result key of the operation + * @param keys operation input key names + * @return Long integer-reply The size of the string stored in the destination key, that is equal to the size of the longest + * input string. + */ + Long bitopOr(K destination, K... keys); + + /** + * Perform bitwise XOR between strings. + * + * @param destination result key of the operation + * @param keys operation input key names + * @return Long integer-reply The size of the string stored in the destination key, that is equal to the size of the longest + * input string. + */ + Long bitopXor(K destination, K... 
keys); + + /** + * Decrement the integer value of a key by one. + * + * @param key the key + * @return Long integer-reply the value of {@code key} after the decrement + */ + Long decr(K key); + + /** + * Decrement the integer value of a key by the given number. + * + * @param key the key + * @param amount the decrement type: long + * @return Long integer-reply the value of {@code key} after the decrement + */ + Long decrby(K key, long amount); + + /** + * Get the value of a key. + * + * @param key the key + * @return V bulk-string-reply the value of {@code key}, or {@literal null} when {@code key} does not exist. + */ + V get(K key); + + /** + * Returns the bit value at offset in the string value stored at key. + * + * @param key the key + * @param offset the offset type: long + * @return Long integer-reply the bit value stored at offset. + */ + Long getbit(K key, long offset); + + /** + * Get a substring of the string stored at a key. + * + * @param key the key + * @param start the start type: long + * @param end the end type: long + * @return V bulk-string-reply + */ + V getrange(K key, long start, long end); + + /** + * Set the string value of a key and return its old value. + * + * @param key the key + * @param value the value + * @return V bulk-string-reply the old value stored at {@code key}, or {@literal null} when {@code key} did not exist. + */ + V getset(K key, V value); + + /** + * Increment the integer value of a key by one. + * + * @param key the key + * @return Long integer-reply the value of {@code key} after the increment + */ + Long incr(K key); + + /** + * Increment the integer value of a key by the given amount. + * + * @param key the key + * @param amount the increment type: long + * @return Long integer-reply the value of {@code key} after the increment + */ + Long incrby(K key, long amount); + + /** + * Increment the float value of a key by the given amount. + * + * @param key the key + * @param amount the increment type: double + * @return Double bulk-string-reply the value of {@code key} after the increment. + */ + Double incrbyfloat(K key, double amount); + + /** + * Get the values of all the given keys. + * + * @param keys the key + * @return List<V> array-reply list of values at the specified keys. + */ + List> mget(K... keys); + + /** + * Stream over the values of all the given keys. + * + * @param channel the channel + * @param keys the keys + * + * @return Long array-reply list of values at the specified keys. + */ + Long mget(KeyValueStreamingChannel channel, K... keys); + + /** + * Set multiple keys to multiple values. + * + * @param map the null + * @return String simple-string-reply always {@code OK} since {@code MSET} can't fail. + */ + String mset(Map map); + + /** + * Set multiple keys to multiple values, only if none of the keys exist. + * + * @param map the null + * @return Boolean integer-reply specifically: + * + * {@code 1} if the all the keys were set. {@code 0} if no key was set (at least one key already existed). + */ + Boolean msetnx(Map map); + + /** + * Set the string value of a key. + * + * @param key the key + * @param value the value + * + * @return String simple-string-reply {@code OK} if {@code SET} was executed correctly. + */ + String set(K key, V value); + + /** + * Set the string value of a key. + * + * @param key the key + * @param value the value + * @param setArgs the setArgs + * + * @return String simple-string-reply {@code OK} if {@code SET} was executed correctly. 
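`SET` with `SetArgs` covers the conditional and expiring variants (NX/XX, EX/PX) in a single method. A minimal sketch, assuming `commands` is a `RedisCommands<String, String>` and the key names are illustrative only:

```java
// SET session:42 payload NX EX 60 - set only if the key is absent, with a 60-second expiry.
String reply = commands.set("session:42", "payload", SetArgs.Builder.nx().ex(60));  // "OK", or null if the key exists

// MGET returns absent keys as empty KeyValue entries rather than nulls.
List<KeyValue<String, String>> values = commands.mget("session:42", "missing-key");
values.forEach(kv -> System.out.println(kv.getKey() + " -> " + kv.getValueOrElse("<absent>")));
```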
+ */ + String set(K key, V value, SetArgs setArgs); + + /** + * Sets or clears the bit at offset in the string value stored at key. + * + * @param key the key + * @param offset the offset type: long + * @param value the value type: string + * @return Long integer-reply the original bit value stored at offset. + */ + Long setbit(K key, long offset, int value); + + /** + * Set the value and expiration of a key. + * + * @param key the key + * @param seconds the seconds type: long + * @param value the value + * @return String simple-string-reply + */ + String setex(K key, long seconds, V value); + + /** + * Set the value and expiration in milliseconds of a key. + * + * @param key the key + * @param milliseconds the milliseconds type: long + * @param value the value + * @return String simple-string-reply + */ + String psetex(K key, long milliseconds, V value); + + /** + * Set the value of a key, only if the key does not exist. + * + * @param key the key + * @param value the value + * @return Boolean integer-reply specifically: + * + * {@code 1} if the key was set {@code 0} if the key was not set + */ + Boolean setnx(K key, V value); + + /** + * Overwrite part of a string at key starting at the specified offset. + * + * @param key the key + * @param offset the offset type: long + * @param value the value + * @return Long integer-reply the length of the string after it was modified by the command. + */ + Long setrange(K key, long offset, V value); + + /** + * Get the length of the value stored in a key. + * + * @param key the key + * @return Long integer-reply the length of the string at {@code key}, or {@code 0} when {@code key} does not exist. + */ + Long strlen(K key); +} diff --git a/src/main/java/io/lettuce/core/api/sync/RedisTransactionalCommands.java b/src/main/java/io/lettuce/core/api/sync/RedisTransactionalCommands.java new file mode 100644 index 0000000000..9e84d052a8 --- /dev/null +++ b/src/main/java/io/lettuce/core/api/sync/RedisTransactionalCommands.java @@ -0,0 +1,70 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api.sync; + +import io.lettuce.core.TransactionResult; + +/** + * Synchronous executed commands for Transactions. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.0 + * @generated by io.lettuce.apigenerator.CreateSyncApi + */ +public interface RedisTransactionalCommands { + + /** + * Discard all commands issued after MULTI. + * + * @return String simple-string-reply always {@code OK}. + */ + String discard(); + + /** + * Execute all commands issued after MULTI. + * + * @return List<Object> array-reply each element being the reply to each of the commands in the atomic transaction. + * + * When using {@code WATCH}, {@code EXEC} can return a {@link TransactionResult#wasDiscarded discarded + * TransactionResult}. + * @see TransactionResult#wasDiscarded + */ + TransactionResult exec(); + + /** + * Mark the start of a transaction block. 
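The transactional commands map directly onto MULTI/EXEC/DISCARD/WATCH. A minimal sketch of a transaction on the synchronous API, assuming `commands` is a `RedisCommands<String, String>`:

```java
commands.multi();                      // "OK" - start queueing commands
commands.set("key", "value");          // queued; returns null while inside MULTI
commands.incr("counter");              // queued

TransactionResult result = commands.exec();
if (!result.wasDiscarded()) {          // discarded only if a WATCHed key changed
    String setReply = result.get(0);   // "OK"
    Long counter = result.get(1);
}
```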
+ * + * @return String simple-string-reply always {@code OK}. + */ + String multi(); + + /** + * Watch the given keys to determine execution of the MULTI/EXEC block. + * + * @param keys the key + * @return String simple-string-reply always {@code OK}. + */ + String watch(K... keys); + + /** + * Forget about all watched keys. + * + * @return String simple-string-reply always {@code OK}. + */ + String unwatch(); +} diff --git a/src/main/java/io/lettuce/core/api/sync/package-info.java b/src/main/java/io/lettuce/core/api/sync/package-info.java new file mode 100644 index 0000000000..a1a443a8d7 --- /dev/null +++ b/src/main/java/io/lettuce/core/api/sync/package-info.java @@ -0,0 +1,4 @@ +/** + * Standalone Redis API for synchronous executed commands. + */ +package io.lettuce.core.api.sync; diff --git a/src/main/java/io/lettuce/core/cluster/AbstractClusterNodeConnectionFactory.java b/src/main/java/io/lettuce/core/cluster/AbstractClusterNodeConnectionFactory.java new file mode 100644 index 0000000000..d4a0fb1300 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/AbstractClusterNodeConnectionFactory.java @@ -0,0 +1,111 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import java.net.SocketAddress; +import java.util.function.Supplier; + +import reactor.core.publisher.Mono; +import io.lettuce.core.RedisURI; +import io.lettuce.core.cluster.models.partitions.Partitions; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; +import io.lettuce.core.resource.ClientResources; +import io.netty.util.internal.logging.InternalLogger; +import io.netty.util.internal.logging.InternalLoggerFactory; + +/** + * Supporting class for {@link ClusterNodeConnectionFactory} implementations. + *

+ * Provides utility methods to resolve {@link SocketAddress} and {@link Partitions}. + * + * @author Mark Paluch + * @since 4.4 + */ +abstract class AbstractClusterNodeConnectionFactory implements ClusterNodeConnectionFactory { + + private static final InternalLogger logger = InternalLoggerFactory + .getInstance(PooledClusterConnectionProvider.DefaultClusterNodeConnectionFactory.class); + + private final ClientResources clientResources; + + private volatile Partitions partitions; + + /** + * Create a new {@link AbstractClusterNodeConnectionFactory} given {@link ClientResources}. + * + * @param clientResources must not be {@literal null}. + */ + public AbstractClusterNodeConnectionFactory(ClientResources clientResources) { + this.clientResources = clientResources; + } + + public void setPartitions(Partitions partitions) { + this.partitions = partitions; + } + + public Partitions getPartitions() { + return partitions; + } + + /** + * Get a {@link Mono} of {@link SocketAddress} for a + * {@link io.lettuce.core.cluster.ClusterNodeConnectionFactory.ConnectionKey}. + *

+ * This {@link Supplier} resolves the requested endpoint on each {@link Supplier#get()}. + * + * @param connectionKey must not be {@literal null}. + * @return + */ + Mono getSocketAddressSupplier(ConnectionKey connectionKey) { + + return Mono.fromCallable(() -> { + + if (connectionKey.nodeId != null) { + + SocketAddress socketAddress = getSocketAddress(connectionKey.nodeId); + logger.debug("Resolved SocketAddress {} using for Cluster node {}", socketAddress, connectionKey.nodeId); + return socketAddress; + } + + SocketAddress socketAddress = resolve(RedisURI.create(connectionKey.host, connectionKey.port)); + logger.debug("Resolved SocketAddress {} using for Cluster node at {}:{}", socketAddress, connectionKey.host, + connectionKey.port); + return socketAddress; + }); + } + + /** + * Get the {@link SocketAddress} for a {@code nodeId} from {@link Partitions}. + * + * @param nodeId + * @return the {@link SocketAddress}. + * @throws IllegalArgumentException if {@code nodeId} cannot be looked up. + */ + private SocketAddress getSocketAddress(String nodeId) { + + for (RedisClusterNode partition : partitions) { + if (partition.getNodeId().equals(nodeId)) { + return resolve(partition.getUri()); + } + } + + throw new IllegalArgumentException(String.format("Cannot resolve a RedisClusterNode for nodeId %s", nodeId)); + } + + private SocketAddress resolve(RedisURI redisURI) { + return clientResources.socketAddressResolver().resolve(redisURI); + } +} diff --git a/src/main/java/io/lettuce/core/cluster/AbstractNodeSelection.java b/src/main/java/io/lettuce/core/cluster/AbstractNodeSelection.java new file mode 100644 index 0000000000..b8126d38d2 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/AbstractNodeSelection.java @@ -0,0 +1,103 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import java.util.ArrayList; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.concurrent.CompletableFuture; +import java.util.stream.Collectors; + +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.cluster.api.NodeSelectionSupport; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; + +/** + * Abstract base class to support node selections. A node selection represents a set of Redis Cluster nodes and allows command + * execution on the selected cluster nodes. + * + * @param API type. + * @param Command command interface type to invoke multi-node operations. + * @param Key type. + * @param Value type. 
+ * @since 4.1 + * @author Mark Paluch + */ +abstract class AbstractNodeSelection implements NodeSelectionSupport { + + @Override + public Map asMap() { + + List list = new ArrayList<>(nodes()); + Map map = new HashMap<>(list.size(), 1); + + list.forEach((key) -> map.put(key, getApi(key).join())); + + return map; + } + + @Override + public int size() { + return nodes().size(); + } + + @Override + public RedisClusterNode node(int index) { + return nodes().get(index); + } + + // This method is never called, the value is supplied by AOP magic. + @Override + public CMD commands() { + return null; + } + + @Override + public API commands(int index) { + return getApi(node(index)).join(); + } + + /** + * + * @return {@link Map} between a {@link RedisClusterNode} to its actual {@link StatefulRedisConnection}. + */ + protected Map>> statefulMap() { + return nodes().stream().collect(Collectors.toMap(redisClusterNode -> redisClusterNode, this::getConnection)); + } + + /** + * Template method to be implemented by implementing classes to obtain a {@link StatefulRedisConnection}. + * + * @param redisClusterNode must not be {@literal null}. + * @return + */ + protected abstract CompletableFuture> getConnection( + RedisClusterNode redisClusterNode); + + /** + * Template method to be implemented by implementing classes to obtain a the API object given a {@link RedisClusterNode}. + * + * @param redisClusterNode must not be {@literal null}. + * @return + */ + protected abstract CompletableFuture getApi(RedisClusterNode redisClusterNode); + + /** + * @return List of involved nodes + */ + protected abstract List nodes(); +} diff --git a/src/main/java/io/lettuce/core/cluster/AsyncClusterConnectionProvider.java b/src/main/java/io/lettuce/core/cluster/AsyncClusterConnectionProvider.java new file mode 100644 index 0000000000..72d5956dc9 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/AsyncClusterConnectionProvider.java @@ -0,0 +1,75 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import java.io.Closeable; +import java.util.concurrent.CompletableFuture; + +import io.lettuce.core.RedisException; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.cluster.ClusterConnectionProvider.Intent; + +/** + * Asynchronous connection provider for cluster operations. + * + * @author Mark Paluch + * @since 4.4 + */ +interface AsyncClusterConnectionProvider extends Closeable { + + /** + * Provide a connection for the intent and cluster slot. The underlying connection is bound to the nodeId. If the slot + * responsibility changes, the connection will not point to the updated nodeId. + * + * @param intent {@link Intent#READ} or {@link Intent#WRITE}. {@literal READ} connections will be provided with + * {@literal READONLY} mode set. + * @param slot the slot-hash of the key, see {@link SlotHash}. + * @return a valid connection which handles the slot. 
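Node selections are consumed through the advanced cluster API rather than instantiated directly. A minimal sketch, assuming `connection` is a `StatefulRedisClusterConnection<String, String>`:

```java
RedisAdvancedClusterCommands<String, String> sync = connection.sync();

NodeSelection<String, String> masters = sync.masters();   // select all upstream nodes
Executions<String> pings = masters.commands().ping();     // run PING on each selected node

pings.forEach(System.out::println);                       // one reply per node
```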
+ * @throws RedisException if no know node can be found for the slot + */ + CompletableFuture> getConnectionAsync(Intent intent, int slot); + + /** + * Provide a connection for the intent and host/port. The connection can survive cluster topology updates. The connection + * will be closed if the node identified by {@code host} and {@code port} is no longer part of the cluster. + * + * @param intent {@link Intent#READ} or {@link Intent#WRITE}. {@literal READ} connections will be provided with + * {@literal READONLY} mode set. + * @param host host of the node. + * @param port port of the node. + * @return a valid connection to the given host. + * @throws RedisException if the host is not part of the cluster + */ + CompletableFuture> getConnectionAsync(Intent intent, String host, int port); + + /** + * Provide a connection for the intent and nodeId. The connection can survive cluster topology updates. The connection will + * be closed if the node identified by {@code nodeId} is no longer part of the cluster. + * + * @param intent {@link Intent#READ} or {@link Intent#WRITE}. {@literal READ} connections will be provided with + * {@literal READONLY} mode set. + * @param nodeId the nodeId of the cluster node. + * @return a valid connection to the given nodeId. + * @throws RedisException if the {@code nodeId} is not part of the cluster + */ + CompletableFuture> getConnectionAsync(Intent intent, String nodeId); + + /** + * Close the connections and free all resources. + */ + @Override + void close(); +} diff --git a/src/main/java/io/lettuce/core/cluster/AsyncExecutionsImpl.java b/src/main/java/io/lettuce/core/cluster/AsyncExecutionsImpl.java new file mode 100644 index 0000000000..143f34e73f --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/AsyncExecutionsImpl.java @@ -0,0 +1,324 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.cluster; + +import java.util.*; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.CompletionStage; +import java.util.concurrent.Executor; +import java.util.concurrent.atomic.AtomicReferenceFieldUpdater; +import java.util.function.BiConsumer; +import java.util.function.BiFunction; +import java.util.function.Consumer; +import java.util.function.Function; +import java.util.stream.Collector; + +import io.lettuce.core.cluster.api.async.AsyncExecutions; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; + +/** + * @author Mark Paluch + */ +class AsyncExecutionsImpl implements AsyncExecutions { + + @SuppressWarnings({ "unchecked", "rawtypes" }) + private static final AtomicReferenceFieldUpdater, CompletionStage> UPDATER = (AtomicReferenceFieldUpdater) AtomicReferenceFieldUpdater + .newUpdater(AsyncExecutionsImpl.class, CompletionStage.class, "publicStage"); + + private final Map> executions; + + private volatile CompletionStage> publicStage; + + @SuppressWarnings("unchecked") + public AsyncExecutionsImpl(Map> executions) { + + Map> map = new HashMap<>(executions); + this.executions = Collections.unmodifiableMap((Map) map); + } + + @Override + public Map> asMap() { + return executions; + } + + @Override + public Iterator> iterator() { + return asMap().values().iterator(); + } + + @Override + public Collection nodes() { + return executions.keySet(); + } + + @Override + public CompletableFuture get(RedisClusterNode redisClusterNode) { + return executions.get(redisClusterNode); + } + + @Override + @SuppressWarnings({ "rawtypes", "unchecked" }) + public CompletableFuture[] futures() { + return executions.values().toArray(new CompletableFuture[0]); + } + + @Override + public CompletionStage thenCollect(Collector collector) { + + return publicStage().thenApply(items -> { + + A container = collector.supplier().get(); + + BiConsumer accumulator = collector.accumulator(); + items.forEach(item -> accumulator.accept(container, item)); + + if (collector.characteristics().contains(Collector.Characteristics.IDENTITY_FINISH)) { + return (R) container; + } + + return collector.finisher().apply(container); + }); + } + + @SuppressWarnings({ "unchecked", "rawtypes" }) + private CompletionStage> publicStage() { + + CompletionStage stage = UPDATER.get(this); + + if (stage == null) { + stage = createPublicStage(this.executions); + UPDATER.compareAndSet(this, null, stage); + } + + return stage; + } + + @SuppressWarnings("rawtypes") + private CompletableFuture> createPublicStage(Map> map) { + + return CompletableFuture.allOf(map.values().toArray(new CompletableFuture[0])).thenApply(ignore -> { + + List results = new ArrayList<>(map.size()); + for (CompletionStage value : map.values()) { + results.add(value.toCompletableFuture().join()); + } + + return results; + }); + } + + // -------------------------------- + // delegate methods. + // -------------------------------- + + @Override + public CompletionStage thenApply(Function, ? extends U> fn) { + return publicStage().thenApply(fn); + } + + @Override + public CompletionStage thenApplyAsync(Function, ? extends U> fn) { + return publicStage().thenApplyAsync(fn); + } + + @Override + public CompletionStage thenApplyAsync(Function, ? 
extends U> fn, Executor executor) { + return publicStage().thenApplyAsync(fn, executor); + } + + @Override + public CompletionStage thenAccept(Consumer> action) { + return publicStage().thenAccept(action); + } + + @Override + public CompletionStage thenAcceptAsync(Consumer> action) { + return publicStage().thenAcceptAsync(action); + } + + @Override + public CompletionStage thenAcceptAsync(Consumer> action, Executor executor) { + return publicStage().thenAcceptAsync(action, executor); + } + + @Override + public CompletionStage thenRun(Runnable action) { + return publicStage().thenRun(action); + } + + @Override + public CompletionStage thenRunAsync(Runnable action) { + return publicStage().thenRunAsync(action); + } + + @Override + public CompletionStage thenRunAsync(Runnable action, Executor executor) { + return publicStage().thenRunAsync(action, executor); + } + + @Override + public CompletionStage thenCombine(CompletionStage other, + BiFunction, ? super U, ? extends V> fn) { + return publicStage().thenCombine(other, fn); + } + + @Override + public CompletionStage thenCombineAsync(CompletionStage other, + BiFunction, ? super U, ? extends V> fn) { + return publicStage().thenCombineAsync(other, fn); + } + + @Override + public CompletionStage thenCombineAsync(CompletionStage other, + BiFunction, ? super U, ? extends V> fn, Executor executor) { + return publicStage().thenCombineAsync(other, fn, executor); + } + + @Override + public CompletionStage thenAcceptBoth(CompletionStage other, + BiConsumer, ? super U> action) { + return publicStage().thenAcceptBoth(other, action); + } + + @Override + public CompletionStage thenAcceptBothAsync(CompletionStage other, + BiConsumer, ? super U> action) { + return publicStage().thenAcceptBothAsync(other, action); + } + + @Override + public CompletionStage thenAcceptBothAsync(CompletionStage other, + BiConsumer, ? 
super U> action, Executor executor) { + return publicStage().thenAcceptBothAsync(other, action, executor); + } + + @Override + public CompletionStage runAfterBoth(CompletionStage other, Runnable action) { + return publicStage().runAfterBoth(other, action); + } + + @Override + public CompletionStage runAfterBothAsync(CompletionStage other, Runnable action) { + return publicStage().runAfterBothAsync(other, action); + } + + @Override + public CompletionStage runAfterBothAsync(CompletionStage other, Runnable action, Executor executor) { + return publicStage().runAfterBothAsync(other, action, executor); + } + + @Override + public CompletionStage applyToEither(CompletionStage> other, Function, U> fn) { + return publicStage().applyToEither(other, fn); + } + + @Override + public CompletionStage applyToEitherAsync(CompletionStage> other, Function, U> fn) { + return publicStage().applyToEitherAsync(other, fn); + } + + @Override + public CompletionStage applyToEitherAsync(CompletionStage> other, Function, U> fn, + Executor executor) { + return publicStage().applyToEitherAsync(other, fn, executor); + } + + @Override + public CompletionStage acceptEither(CompletionStage> other, Consumer> action) { + return publicStage().acceptEither(other, action); + } + + @Override + public CompletionStage acceptEitherAsync(CompletionStage> other, Consumer> action) { + return publicStage().acceptEitherAsync(other, action); + } + + @Override + public CompletionStage acceptEitherAsync(CompletionStage> other, Consumer> action, + Executor executor) { + return publicStage().acceptEitherAsync(other, action, executor); + } + + @Override + public CompletionStage runAfterEither(CompletionStage other, Runnable action) { + return publicStage().runAfterEither(other, action); + } + + @Override + public CompletionStage runAfterEitherAsync(CompletionStage other, Runnable action) { + return publicStage().runAfterEitherAsync(other, action); + } + + @Override + public CompletionStage runAfterEitherAsync(CompletionStage other, Runnable action, Executor executor) { + return publicStage().runAfterEitherAsync(other, action, executor); + } + + @Override + public CompletionStage thenCompose(Function, ? extends CompletionStage> fn) { + return publicStage().thenCompose(fn); + } + + @Override + public CompletionStage thenComposeAsync(Function, ? extends CompletionStage> fn) { + return publicStage().thenComposeAsync(fn); + } + + @Override + public CompletionStage thenComposeAsync(Function, ? extends CompletionStage> fn, Executor executor) { + return publicStage().thenComposeAsync(fn, executor); + } + + @Override + public CompletionStage> exceptionally(Function> fn) { + return publicStage().exceptionally(fn); + } + + @Override + public CompletionStage> whenComplete(BiConsumer, ? super Throwable> action) { + return publicStage().whenComplete(action); + } + + @Override + public CompletionStage> whenCompleteAsync(BiConsumer, ? super Throwable> action) { + return publicStage().whenCompleteAsync(action); + } + + @Override + public CompletionStage> whenCompleteAsync(BiConsumer, ? super Throwable> action, Executor executor) { + return publicStage().whenCompleteAsync(action, executor); + } + + @Override + public CompletionStage handle(BiFunction, Throwable, ? extends U> fn) { + return publicStage().handle(fn); + } + + @Override + public CompletionStage handleAsync(BiFunction, Throwable, ? extends U> fn) { + return publicStage().handleAsync(fn); + } + + @Override + public CompletionStage handleAsync(BiFunction, Throwable, ? 
extends U> fn, Executor executor) { + return publicStage().handleAsync(fn, executor); + } + + @Override + public CompletableFuture> toCompletableFuture() { + return publicStage().toCompletableFuture(); + } +} diff --git a/src/main/java/io/lettuce/core/cluster/ClusterClientOptions.java b/src/main/java/io/lettuce/core/cluster/ClusterClientOptions.java new file mode 100644 index 0000000000..300626ca4e --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/ClusterClientOptions.java @@ -0,0 +1,355 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import java.nio.charset.Charset; +import java.time.Duration; + +import io.lettuce.core.ClientOptions; +import io.lettuce.core.SocketOptions; +import io.lettuce.core.SslOptions; +import io.lettuce.core.TimeoutOptions; +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.protocol.ProtocolVersion; + +/** + * Client Options to control the behavior of {@link RedisClusterClient}. + * + * @author Mark Paluch + */ +@SuppressWarnings("serial") +public class ClusterClientOptions extends ClientOptions { + + public static final boolean DEFAULT_REFRESH_CLUSTER_VIEW = false; + public static final long DEFAULT_REFRESH_PERIOD = 60; + public static final Duration DEFAULT_REFRESH_PERIOD_DURATION = Duration.ofSeconds(DEFAULT_REFRESH_PERIOD); + public static final boolean DEFAULT_CLOSE_STALE_CONNECTIONS = true; + public static final boolean DEFAULT_VALIDATE_CLUSTER_MEMBERSHIP = true; + public static final int DEFAULT_MAX_REDIRECTS = 5; + + private final boolean validateClusterNodeMembership; + private final int maxRedirects; + private final ClusterTopologyRefreshOptions topologyRefreshOptions; + + protected ClusterClientOptions(Builder builder) { + + super(builder); + + this.validateClusterNodeMembership = builder.validateClusterNodeMembership; + this.maxRedirects = builder.maxRedirects; + + ClusterTopologyRefreshOptions refreshOptions = builder.topologyRefreshOptions; + + if (refreshOptions == null) { + refreshOptions = ClusterTopologyRefreshOptions.builder() // + .enablePeriodicRefresh(DEFAULT_REFRESH_CLUSTER_VIEW) // + .refreshPeriod(DEFAULT_REFRESH_PERIOD_DURATION) // + .closeStaleConnections(builder.closeStaleConnections) // + .build(); + } + + this.topologyRefreshOptions = refreshOptions; + } + + protected ClusterClientOptions(ClusterClientOptions original) { + + super(original); + + this.validateClusterNodeMembership = original.validateClusterNodeMembership; + this.maxRedirects = original.maxRedirects; + this.topologyRefreshOptions = original.topologyRefreshOptions; + } + + /** + * Create a copy of {@literal options}. 
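`AsyncExecutions` exposes the multi-node result both as one `CompletionStage` over all replies and as individual per-node futures. A minimal sketch, assuming `async` is a `RedisAdvancedClusterAsyncCommands<String, String>`:

```java
AsyncExecutions<String> executions = async.masters().commands().ping();

// Composed view: a single stage over the list of per-node replies.
executions.thenAccept(replies -> replies.forEach(System.out::println));

// Per-node view: individual futures, e.g. for custom error handling.
CompletableFuture<String>[] futures = executions.futures();
```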
+ * + * @param options the original + * @return A new instance of {@link ClusterClientOptions} containing the values of {@literal options} + */ + public static ClusterClientOptions copyOf(ClusterClientOptions options) { + return new ClusterClientOptions(options); + } + + /** + * Returns a new {@link ClusterClientOptions.Builder} to construct {@link ClusterClientOptions}. + * + * @return a new {@link ClusterClientOptions.Builder} to construct {@link ClusterClientOptions}. + */ + public static ClusterClientOptions.Builder builder() { + return new ClusterClientOptions.Builder(); + } + + /** + * Returns a new {@link ClusterClientOptions.Builder} initialized from {@link ClientOptions} to construct + * {@link ClusterClientOptions}. + * + * @return a new {@link ClusterClientOptions.Builder} to construct {@link ClusterClientOptions}. + * @since 5.1.6 + */ + public static ClusterClientOptions.Builder builder(ClientOptions clientOptions) { + + LettuceAssert.notNull(clientOptions, "ClientOptions must not be null"); + + if (clientOptions instanceof ClusterClientOptions) { + return ((ClusterClientOptions) clientOptions).mutate(); + } + + Builder builder = new Builder(); + builder.autoReconnect(clientOptions.isAutoReconnect()).bufferUsageRatio(clientOptions.getBufferUsageRatio()) + .cancelCommandsOnReconnectFailure(clientOptions.isCancelCommandsOnReconnectFailure()) + .disconnectedBehavior(clientOptions.getDisconnectedBehavior()).scriptCharset(clientOptions.getScriptCharset()) + .publishOnScheduler(clientOptions.isPublishOnScheduler()) + .protocolVersion(clientOptions.getConfiguredProtocolVersion()) + .requestQueueSize(clientOptions.getRequestQueueSize()).socketOptions(clientOptions.getSocketOptions()) + .sslOptions(clientOptions.getSslOptions()) + .suspendReconnectOnProtocolFailure(clientOptions.isSuspendReconnectOnProtocolFailure()) + .timeoutOptions(clientOptions.getTimeoutOptions()); + + return builder; + } + + /** + * Create a new {@link ClusterClientOptions} using default settings. + * + * @return a new instance of default cluster client client options. + */ + public static ClusterClientOptions create() { + return builder().build(); + } + + /** + * Builder for {@link ClusterClientOptions}. + */ + public static class Builder extends ClientOptions.Builder { + + private boolean closeStaleConnections = DEFAULT_CLOSE_STALE_CONNECTIONS; + private boolean validateClusterNodeMembership = DEFAULT_VALIDATE_CLUSTER_MEMBERSHIP; + private int maxRedirects = DEFAULT_MAX_REDIRECTS; + private ClusterTopologyRefreshOptions topologyRefreshOptions = null; + + protected Builder() { + } + + /** + * Validate the cluster node membership before allowing connections to a cluster node. Defaults to {@literal true}. See + * {@link ClusterClientOptions#DEFAULT_VALIDATE_CLUSTER_MEMBERSHIP}. + * + * @param validateClusterNodeMembership {@literal true} if validation is enabled. + * @return {@code this} + */ + public Builder validateClusterNodeMembership(boolean validateClusterNodeMembership) { + this.validateClusterNodeMembership = validateClusterNodeMembership; + return this; + } + + /** + * Number of maximal cluster redirects ({@literal -MOVED} and {@literal -ASK}) to follow in case a key was moved from + * one node to another node. Defaults to {@literal 5}. See {@link ClusterClientOptions#DEFAULT_MAX_REDIRECTS}. 
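A minimal sketch of wiring these options together, assuming `clusterClient` is an existing `RedisClusterClient`; the concrete values are illustrative only:

```java
ClusterTopologyRefreshOptions refresh = ClusterTopologyRefreshOptions.builder()
        .enablePeriodicRefresh(Duration.ofSeconds(30))
        .enableAllAdaptiveRefreshTriggers()
        .build();

ClusterClientOptions options = ClusterClientOptions.builder()
        .maxRedirects(3)                        // follow at most 3 -MOVED/-ASK redirects
        .validateClusterNodeMembership(true)    // reject connections to unknown nodes
        .topologyRefreshOptions(refresh)
        .build();

clusterClient.setOptions(options);
```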
+ * + * @param maxRedirects the limit of maximal cluster redirects + * @return {@code this} + */ + public Builder maxRedirects(int maxRedirects) { + this.maxRedirects = maxRedirects; + return this; + } + + /** + * Sets the {@link ClusterTopologyRefreshOptions} for detailed control of topology updates. + * + * @param topologyRefreshOptions the {@link ClusterTopologyRefreshOptions} + * @return {@code this} + */ + public Builder topologyRefreshOptions(ClusterTopologyRefreshOptions topologyRefreshOptions) { + this.topologyRefreshOptions = topologyRefreshOptions; + return this; + } + + @Override + public Builder pingBeforeActivateConnection(boolean pingBeforeActivateConnection) { + super.pingBeforeActivateConnection(pingBeforeActivateConnection); + return this; + } + + @Override + public Builder protocolVersion(ProtocolVersion protocolVersion) { + super.protocolVersion(protocolVersion); + return this; + } + + @Override + public Builder autoReconnect(boolean autoReconnect) { + super.autoReconnect(autoReconnect); + return this; + } + + @Override + public Builder suspendReconnectOnProtocolFailure(boolean suspendReconnectOnProtocolFailure) { + super.suspendReconnectOnProtocolFailure(suspendReconnectOnProtocolFailure); + return this; + } + + @Override + public Builder cancelCommandsOnReconnectFailure(boolean cancelCommandsOnReconnectFailure) { + super.cancelCommandsOnReconnectFailure(cancelCommandsOnReconnectFailure); + return this; + } + + @Override + public Builder publishOnScheduler(boolean publishOnScheduler) { + super.publishOnScheduler(publishOnScheduler); + return this; + } + + @Override + public Builder requestQueueSize(int requestQueueSize) { + super.requestQueueSize(requestQueueSize); + return this; + } + + @Override + public Builder disconnectedBehavior(DisconnectedBehavior disconnectedBehavior) { + super.disconnectedBehavior(disconnectedBehavior); + return this; + } + + @Override + public Builder scriptCharset(Charset scriptCharset) { + super.scriptCharset(scriptCharset); + return this; + } + + @Override + public Builder socketOptions(SocketOptions socketOptions) { + super.socketOptions(socketOptions); + return this; + } + + @Override + public Builder sslOptions(SslOptions sslOptions) { + super.sslOptions(sslOptions); + return this; + } + + @Override + public Builder timeoutOptions(TimeoutOptions timeoutOptions) { + super.timeoutOptions(timeoutOptions); + return this; + } + + @Override + public Builder bufferUsageRatio(int bufferUsageRatio) { + super.bufferUsageRatio(bufferUsageRatio); + return this; + } + + /** + * Create a new instance of {@link ClusterClientOptions} + * + * @return new instance of {@link ClusterClientOptions} + */ + public ClusterClientOptions build() { + return new ClusterClientOptions(this); + } + } + + /** + * Returns a builder to create new {@link ClusterClientOptions} whose settings are replicated from the current + * {@link ClusterClientOptions}. + * + * @return a {@link ClusterClientOptions.Builder} to create new {@link ClusterClientOptions} whose settings are replicated + * from the current {@link ClusterClientOptions}. 
+ * + * @since 5.1 + */ + public ClusterClientOptions.Builder mutate() { + + Builder builder = new Builder(); + + builder.autoReconnect(isAutoReconnect()).bufferUsageRatio(getBufferUsageRatio()) + .cancelCommandsOnReconnectFailure(isCancelCommandsOnReconnectFailure()) + .disconnectedBehavior(getDisconnectedBehavior()).scriptCharset(getScriptCharset()) + .publishOnScheduler(isPublishOnScheduler()).pingBeforeActivateConnection(isPingBeforeActivateConnection()) + .protocolVersion(getConfiguredProtocolVersion()).requestQueueSize(getRequestQueueSize()) + .socketOptions(getSocketOptions()).sslOptions(getSslOptions()) + .suspendReconnectOnProtocolFailure(isSuspendReconnectOnProtocolFailure()).timeoutOptions(getTimeoutOptions()) + .validateClusterNodeMembership(isValidateClusterNodeMembership()).maxRedirects(getMaxRedirects()) + .topologyRefreshOptions(getTopologyRefreshOptions()); + + return builder; + } + + /** + * Flag, whether regular cluster topology updates are updated. The client starts updating the cluster topology in the + * intervals of {@link #getRefreshPeriod()}. Defaults to {@literal false}. Returns the value from + * {@link ClusterTopologyRefreshOptions} if provided. + * + * @return {@literal true} it the cluster topology view is updated periodically + */ + public boolean isRefreshClusterView() { + return topologyRefreshOptions.isPeriodicRefreshEnabled(); + } + + /** + * Period between the regular cluster topology updates. Defaults to {@literal 60}. Returns the value from + * {@link ClusterTopologyRefreshOptions} if provided. + * + * @return the period between the regular cluster topology updates + */ + public Duration getRefreshPeriod() { + return topologyRefreshOptions.getRefreshPeriod(); + } + + /** + * Flag, whether to close stale connections when refreshing the cluster topology. Defaults to {@literal true}. Comes only + * into effect if {@link #isRefreshClusterView()} is {@literal true}. Returns the value from + * {@link ClusterTopologyRefreshOptions} if provided. + * + * @return {@literal true} if stale connections are cleaned up after cluster topology updates + */ + public boolean isCloseStaleConnections() { + return topologyRefreshOptions.isCloseStaleConnections(); + } + + /** + * Validate the cluster node membership before allowing connections to a cluster node. Defaults to {@literal true}. + * + * @return {@literal true} if validation is enabled. + */ + public boolean isValidateClusterNodeMembership() { + return validateClusterNodeMembership; + } + + /** + * Number of maximal of cluster redirects ({@literal -MOVED} and {@literal -ASK}) to follow in case a key was moved from one + * node to another node. Defaults to {@literal 5}. See {@link ClusterClientOptions#DEFAULT_MAX_REDIRECTS}. + * + * @return the maximal number of followed cluster redirects + */ + public int getMaxRedirects() { + return maxRedirects; + } + + /** + * The {@link ClusterTopologyRefreshOptions} for detailed control of topology updates. + * + * @return the {@link ClusterTopologyRefreshOptions}. + */ + public ClusterTopologyRefreshOptions getTopologyRefreshOptions() { + return topologyRefreshOptions; + } + +} diff --git a/src/main/java/io/lettuce/core/cluster/ClusterCommand.java b/src/main/java/io/lettuce/core/cluster/ClusterCommand.java new file mode 100644 index 0000000000..ff2ecb0712 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/ClusterCommand.java @@ -0,0 +1,133 @@ +/* + * Copyright 2011-2020 the original author or authors. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import io.lettuce.core.RedisChannelWriter; +import io.lettuce.core.protocol.*; +import io.netty.buffer.ByteBuf; + +/** + * @author Mark Paluch + * @since 3.0 + */ +class ClusterCommand extends CommandWrapper implements RedisCommand { + + private int redirections; + private final int maxRedirections; + + private final RedisChannelWriter retry; + private boolean completed; + + /** + * + * @param command + * @param retry + * @param maxRedirections + */ + ClusterCommand(RedisCommand command, RedisChannelWriter retry, int maxRedirections) { + super(command); + this.retry = retry; + this.maxRedirections = maxRedirections; + } + + @Override + public void complete() { + + if (isMoved() || isAsk()) { + + boolean retryCommand = maxRedirections > redirections; + redirections++; + + if (retryCommand) { + try { + retry.write(this); + } catch (Exception e) { + completeExceptionally(e); + } + return; + } + } + super.complete(); + completed = true; + } + + public boolean isMoved() { + + if (getError() != null && getError().startsWith(CommandKeyword.MOVED.name())) { + return true; + } + + return false; + } + + public boolean isAsk() { + + if (getError() != null && getError().startsWith(CommandKeyword.ASK.name())) { + return true; + } + + return false; + } + + @Override + public CommandArgs getArgs() { + return command.getArgs(); + } + + @Override + public void encode(ByteBuf buf) { + command.encode(buf); + } + + @Override + public boolean completeExceptionally(Throwable ex) { + boolean result = command.completeExceptionally(ex); + completed = true; + return result; + } + + @Override + public ProtocolKeyword getType() { + return command.getType(); + } + + public boolean isCompleted() { + return completed; + } + + @Override + public boolean isDone() { + return isCompleted(); + } + + public String getError() { + if (command.getOutput() != null) { + return command.getOutput().getError(); + } + return null; + } + + @Override + public String toString() { + final StringBuilder sb = new StringBuilder(); + sb.append(getClass().getSimpleName()); + sb.append(" [command=").append(command); + sb.append(", redirections=").append(redirections); + sb.append(", maxRedirections=").append(maxRedirections); + sb.append(']'); + return sb.toString(); + } +} diff --git a/src/main/java/io/lettuce/core/cluster/ClusterConnectionProvider.java b/src/main/java/io/lettuce/core/cluster/ClusterConnectionProvider.java new file mode 100644 index 0000000000..84e8b3e32a --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/ClusterConnectionProvider.java @@ -0,0 +1,135 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import java.io.Closeable; +import java.util.concurrent.CompletableFuture; + +import io.lettuce.core.ReadFrom; +import io.lettuce.core.RedisException; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.cluster.models.partitions.Partitions; + +/** + * Connection provider for cluster operations. + * + * @author Mark Paluch + * @since 3.0 + */ +interface ClusterConnectionProvider extends Closeable { + + /** + * Provide a connection for the intent and cluster slot. The underlying connection is bound to the nodeId. If the slot + * responsibility changes, the connection will not point to the updated nodeId. + * + * @param intent {@link Intent#READ} or {@link ClusterConnectionProvider.Intent#WRITE}. {@literal READ} connections will be + * provided with {@literal READONLY} mode set. + * @param slot the slot-hash of the key, see {@link SlotHash}. + * @return a valid connection which handles the slot. + * @throws RedisException if no know node can be found for the slot + */ + StatefulRedisConnection getConnection(Intent intent, int slot); + + /** + * Provide a connection for the intent and host/port. The connection can survive cluster topology updates. The connection + * will be closed if the node identified by {@code host} and {@code port} is no longer part of the cluster. + * + * @param intent {@link Intent#READ} or {@link Intent#WRITE}. {@literal READ} connections will be provided with + * {@literal READONLY} mode set. + * @param host host of the node. + * @param port port of the node. + * @return a valid connection to the given host. + * @throws RedisException if the host is not part of the cluster + */ + StatefulRedisConnection getConnection(Intent intent, String host, int port); + + /** + * Provide a connection for the intent and nodeId. The connection can survive cluster topology updates. The connection will + * be closed if the node identified by {@code nodeId} is no longer part of the cluster. + * + * @param intent {@link Intent#READ} or {@link Intent#WRITE}. {@literal READ} connections will be provided with + * {@literal READONLY} mode set. + * @param nodeId the nodeId of the cluster node. + * @return a valid connection to the given nodeId. + * @throws RedisException if the {@code nodeId} is not part of the cluster + */ + StatefulRedisConnection getConnection(Intent intent, String nodeId); + + /** + * Close the connections and free all resources. + */ + @Override + void close(); + + /** + * Close the connections and free all resources asynchronously. + * + * @since 5.1 + */ + CompletableFuture closeAsync(); + + /** + * Reset the writer state. Queued commands will be canceled and the internal state will be reset. This is useful when the + * internal state machine gets out of sync with the connection. + */ + void reset(); + + /** + * Close connections that are not in use anymore/not part of the cluster. + */ + void closeStaleConnections(); + + /** + * Update partitions. 
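Routing by slot uses the same CRC16-based hashing as Redis Cluster itself, available through `SlotHash`. A minimal sketch; the key names are illustrative:

```java
int slot = SlotHash.getSlot("user:1000".getBytes(StandardCharsets.UTF_8));              // 0..16383
int tagged = SlotHash.getSlot("{user:1000}.followers".getBytes(StandardCharsets.UTF_8));
assert slot == tagged;  // hash tags route related keys to the same slot
```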
+ * + * @param partitions the new partitions + */ + void setPartitions(Partitions partitions); + + /** + * Disable or enable auto-flush behavior. Default is {@literal true}. If autoFlushCommands is disabled, multiple commands + * can be issued without writing them actually to the transport. Commands are buffered until a {@link #flushCommands()} is + * issued. After calling {@link #flushCommands()} commands are sent to the transport and executed by Redis. + * + * @param autoFlush state of autoFlush. + */ + void setAutoFlushCommands(boolean autoFlush); + + /** + * Flush pending commands. This commands forces a flush on the channel and can be used to buffer ("pipeline") commands to + * achieve batching. No-op if channel is not connected. + */ + void flushCommands(); + + /** + * Set from which nodes data is read. The setting is used as default for read operations on this connection. See the + * documentation for {@link ReadFrom} for more information. + * + * @param readFrom the read from setting, must not be {@literal null} + */ + void setReadFrom(ReadFrom readFrom); + + /** + * Gets the {@link ReadFrom} setting for this connection. Defaults to {@link ReadFrom#MASTER} if not set. + * + * @return the read from setting + */ + ReadFrom getReadFrom(); + + enum Intent { + READ, WRITE; + } +} diff --git a/src/main/java/io/lettuce/core/cluster/ClusterDistributionChannelWriter.java b/src/main/java/io/lettuce/core/cluster/ClusterDistributionChannelWriter.java new file mode 100644 index 0000000000..25e709bc41 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/ClusterDistributionChannelWriter.java @@ -0,0 +1,499 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import static io.lettuce.core.cluster.SlotHash.getSlot; + +import java.nio.ByteBuffer; +import java.util.*; +import java.util.concurrent.CompletableFuture; +import java.util.stream.IntStream; + +import io.lettuce.core.*; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.cluster.ClusterConnectionProvider.Intent; +import io.lettuce.core.cluster.models.partitions.Partitions; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.internal.Futures; +import io.lettuce.core.internal.HostAndPort; +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.output.StatusOutput; +import io.lettuce.core.protocol.*; +import io.lettuce.core.resource.ClientResources; + +/** + * Channel writer for cluster operation. This writer looks up the right partition by hash/slot for the operation. 
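Auto-flush control and `ReadFrom` are surfaced on the cluster connection itself. A minimal pipelining sketch, assuming `connection` is a `StatefulRedisClusterConnection<String, String>`:

```java
connection.setReadFrom(ReadFrom.NEAREST);      // route reads to the nearest node

connection.setAutoFlushCommands(false);        // buffer commands instead of writing them immediately
RedisAdvancedClusterAsyncCommands<String, String> async = connection.async();

List<RedisFuture<String>> futures = new ArrayList<>();
for (int i = 0; i < 100; i++) {
    futures.add(async.set("key-" + i, "value-" + i));
}

connection.flushCommands();                    // write the buffered commands in one batch
LettuceFutures.awaitAll(Duration.ofSeconds(5), futures.toArray(new RedisFuture[0]));
connection.setAutoFlushCommands(true);
```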
+ * + * @author Mark Paluch + * @since 3.0 + */ +class ClusterDistributionChannelWriter implements RedisChannelWriter { + + private final RedisChannelWriter defaultWriter; + private final ClusterEventListener clusterEventListener; + private final int executionLimit; + + private ClusterConnectionProvider clusterConnectionProvider; + private AsyncClusterConnectionProvider asyncClusterConnectionProvider; + private boolean closed = false; + private volatile Partitions partitions; + + ClusterDistributionChannelWriter(ClientOptions clientOptions, RedisChannelWriter defaultWriter, + ClusterEventListener clusterEventListener) { + + if (clientOptions instanceof ClusterClientOptions) { + this.executionLimit = ((ClusterClientOptions) clientOptions).getMaxRedirects(); + } else { + this.executionLimit = 5; + } + + this.defaultWriter = defaultWriter; + this.clusterEventListener = clusterEventListener; + } + + @Override + public RedisCommand write(RedisCommand command) { + + LettuceAssert.notNull(command, "Command must not be null"); + + if (closed) { + command.completeExceptionally(new RedisException("Connection is closed")); + return command; + } + + return doWrite(command); + } + + private RedisCommand doWrite(RedisCommand command) { + + if (command instanceof ClusterCommand && !command.isDone()) { + + ClusterCommand clusterCommand = (ClusterCommand) command; + if (clusterCommand.isMoved() || clusterCommand.isAsk()) { + + HostAndPort target; + boolean asking; + if (clusterCommand.isMoved()) { + target = getMoveTarget(clusterCommand.getError()); + clusterEventListener.onMovedRedirection(); + asking = false; + } else { + target = getAskTarget(clusterCommand.getError()); + asking = true; + clusterEventListener.onAskRedirection(); + } + + command.getOutput().setError((String) null); + + CompletableFuture> connectFuture = asyncClusterConnectionProvider + .getConnectionAsync(Intent.WRITE, target.getHostText(), target.getPort()); + + if (isSuccessfullyCompleted(connectFuture)) { + writeCommand(command, asking, connectFuture.join(), null); + } else { + connectFuture.whenComplete((connection, throwable) -> writeCommand(command, asking, connection, throwable)); + } + + return command; + } + } + + ClusterCommand commandToSend = getCommandToSend(command); + CommandArgs args = command.getArgs(); + + // exclude CLIENT commands from cluster routing + if (args != null && !CommandType.CLIENT.equals(commandToSend.getType())) { + + ByteBuffer encodedKey = args.getFirstEncodedKey(); + if (encodedKey != null) { + + int hash = getSlot(encodedKey); + Intent intent = getIntent(command.getType()); + + CompletableFuture> connectFuture = ((AsyncClusterConnectionProvider) clusterConnectionProvider) + .getConnectionAsync(intent, hash); + + if (isSuccessfullyCompleted(connectFuture)) { + writeCommand(commandToSend, false, connectFuture.join(), null); + } else { + connectFuture + .whenComplete((connection, throwable) -> writeCommand(commandToSend, false, connection, throwable)); + } + + return commandToSend; + } + } + + writeCommand(commandToSend, defaultWriter); + + return commandToSend; + } + + private static boolean isSuccessfullyCompleted(CompletableFuture connectFuture) { + return connectFuture.isDone() && !connectFuture.isCompletedExceptionally(); + } + + @SuppressWarnings("unchecked") + private ClusterCommand getCommandToSend(RedisCommand command) { + + if (command instanceof ClusterCommand) { + return (ClusterCommand) command; + } + + return new ClusterCommand<>(command, this, executionLimit); + } + + 
@SuppressWarnings("unchecked") + private static void writeCommand(RedisCommand command, boolean asking, + StatefulRedisConnection connection, Throwable throwable) { + + if (throwable != null) { + command.completeExceptionally(throwable); + return; + } + + try { + + if (asking) { // set asking bit + writeCommands(Arrays.asList(asking(), command), ((RedisChannelHandler) connection).getChannelWriter()); + } else { + writeCommand(command, ((RedisChannelHandler) connection).getChannelWriter()); + } + } catch (Exception e) { + command.completeExceptionally(e); + } + } + + private static RedisCommand asking() { + return new Command(CommandType.ASKING, new StatusOutput<>(StringCodec.ASCII), new CommandArgs<>(StringCodec.ASCII)); + } + + private static void writeCommand(RedisCommand command, RedisChannelWriter writer) { + + try { + getWriterToUse(writer).write(command); + } catch (Exception e) { + command.completeExceptionally(e); + } + } + + private static void writeCommands(Collection> commands, RedisChannelWriter writer) { + + try { + getWriterToUse(writer).write(commands); + } catch (Exception e) { + commands.forEach(command -> command.completeExceptionally(e)); + } + } + + private static RedisChannelWriter getWriterToUse(RedisChannelWriter writer) { + + RedisChannelWriter writerToUse = writer; + + if (writer instanceof ClusterDistributionChannelWriter) { + writerToUse = ((ClusterDistributionChannelWriter) writer).defaultWriter; + } + return writerToUse; + } + + @SuppressWarnings("unchecked") + @Override + public Collection> write(Collection> commands) { + + LettuceAssert.notNull(commands, "Commands must not be null"); + + if (closed) { + + commands.forEach(it -> it.completeExceptionally(new RedisException("Connection is closed"))); + return (Collection>) commands; + } + + List> clusterCommands = new ArrayList<>(commands.size()); + List> defaultCommands = new ArrayList<>(commands.size()); + Map>> partitions = new HashMap<>(); + + // TODO: Retain order or retain Intent preference? + // Currently: Retain order + Intent intent = getIntent(commands); + + for (RedisCommand cmd : commands) { + + if (cmd instanceof ClusterCommand) { + clusterCommands.add((ClusterCommand) cmd); + continue; + } + + CommandArgs args = cmd.getArgs(); + ByteBuffer firstEncodedKey = args != null ? 
args.getFirstEncodedKey() : null; + + if (firstEncodedKey == null) { + defaultCommands.add(new ClusterCommand<>(cmd, this, executionLimit)); + continue; + } + + int hash = getSlot(args.getFirstEncodedKey()); + + List> commandPartition = partitions.computeIfAbsent(SlotIntent.of(intent, hash), + slotIntent -> new ArrayList<>()); + + commandPartition.add(new ClusterCommand<>(cmd, this, executionLimit)); + } + + for (Map.Entry>> entry : partitions.entrySet()) { + + SlotIntent slotIntent = entry.getKey(); + RedisChannelHandler connection = (RedisChannelHandler) clusterConnectionProvider + .getConnection(slotIntent.intent, slotIntent.slotHash); + + RedisChannelWriter channelWriter = connection.getChannelWriter(); + if (channelWriter instanceof ClusterDistributionChannelWriter) { + ClusterDistributionChannelWriter writer = (ClusterDistributionChannelWriter) channelWriter; + channelWriter = writer.defaultWriter; + } + + if (channelWriter != null && channelWriter != this && channelWriter != defaultWriter) { + channelWriter.write(entry.getValue()); + } + } + + clusterCommands.forEach(this::write); + defaultCommands.forEach(defaultWriter::write); + + return (Collection) commands; + } + + /** + * Optimization: Determine command intents and optimize for bulk execution preferring one node. + *
+ * If there is only one intent, then we take the intent derived from the commands. If there is more than one intent, then + * use {@link Intent#WRITE}. + * + * @param commands {@link Collection} of {@link RedisCommand commands}. + * @return the intent. + */ + static Intent getIntent(Collection> commands) { + + boolean w = false; + boolean r = false; + Intent singleIntent = Intent.WRITE; + + for (RedisCommand command : commands) { + + if (command instanceof ClusterCommand) { + continue; + } + + singleIntent = getIntent(command.getType()); + if (singleIntent == Intent.READ) { + r = true; + } + + if (singleIntent == Intent.WRITE) { + w = true; + } + + if (r && w) { + return Intent.WRITE; + } + } + + return singleIntent; + } + + private static Intent getIntent(ProtocolKeyword type) { + return ReadOnlyCommands.isReadOnlyCommand(type) ? Intent.READ : Intent.WRITE; + } + + static HostAndPort getMoveTarget(String errorMessage) { + + LettuceAssert.notEmpty(errorMessage, "ErrorMessage must not be empty"); + LettuceAssert.isTrue(errorMessage.startsWith(CommandKeyword.MOVED.name()), + "ErrorMessage must start with " + CommandKeyword.MOVED); + + String[] movedMessageParts = errorMessage.split(" "); + LettuceAssert.isTrue(movedMessageParts.length >= 3, "ErrorMessage must consist of 3 tokens (" + errorMessage + ")"); + + return HostAndPort.parseCompat(movedMessageParts[2]); + } + + static HostAndPort getAskTarget(String errorMessage) { + + LettuceAssert.notEmpty(errorMessage, "ErrorMessage must not be empty"); + LettuceAssert.isTrue(errorMessage.startsWith(CommandKeyword.ASK.name()), + "ErrorMessage must start with " + CommandKeyword.ASK); + + String[] movedMessageParts = errorMessage.split(" "); + LettuceAssert.isTrue(movedMessageParts.length >= 3, "ErrorMessage must consist of 3 tokens (" + errorMessage + ")"); + + return HostAndPort.parseCompat(movedMessageParts[2]); + } + + @Override + public void close() { + + if (closed) { + return; + } + + closeAsync().join(); + } + + @Override + @SuppressWarnings("rawtypes") + public CompletableFuture closeAsync() { + + if (closed) { + return CompletableFuture.completedFuture(null); + } + + closed = true; + + List> futures = new ArrayList<>(); + + if (defaultWriter != null) { + futures.add(defaultWriter.closeAsync()); + } + + if (clusterConnectionProvider != null) { + futures.add(clusterConnectionProvider.closeAsync()); + clusterConnectionProvider = null; + } + + return Futures.allOf(futures); + } + + @Override + public void setConnectionFacade(ConnectionFacade redisChannelHandler) { + defaultWriter.setConnectionFacade(redisChannelHandler); + } + + @Override + public ClientResources getClientResources() { + return defaultWriter.getClientResources(); + } + + @Override + public void setAutoFlushCommands(boolean autoFlush) { + getClusterConnectionProvider().setAutoFlushCommands(autoFlush); + } + + @Override + public void flushCommands() { + getClusterConnectionProvider().flushCommands(); + } + + public ClusterConnectionProvider getClusterConnectionProvider() { + return clusterConnectionProvider; + } + + @Override + public void reset() { + defaultWriter.reset(); + clusterConnectionProvider.reset(); + } + + public void setClusterConnectionProvider(ClusterConnectionProvider clusterConnectionProvider) { + this.clusterConnectionProvider = clusterConnectionProvider; + this.asyncClusterConnectionProvider = (AsyncClusterConnectionProvider) clusterConnectionProvider; + } + + public void setPartitions(Partitions partitions) { + + this.partitions = partitions; + + if 
(clusterConnectionProvider != null) { + clusterConnectionProvider.setPartitions(partitions); + } + } + + public Partitions getPartitions() { + return partitions; + } + + /** + * Set from which nodes data is read. The setting is used as default for read operations on this connection. See the + * documentation for {@link ReadFrom} for more information. + * + * @param readFrom the read from setting, must not be {@literal null} + */ + public void setReadFrom(ReadFrom readFrom) { + clusterConnectionProvider.setReadFrom(readFrom); + } + + /** + * Gets the {@link ReadFrom} setting for this connection. Defaults to {@link ReadFrom#MASTER} if not set. + * + * @return the read from setting + */ + public ReadFrom getReadFrom() { + return clusterConnectionProvider.getReadFrom(); + } + + static class SlotIntent { + + final int slotHash; + final Intent intent; + private static final SlotIntent[] READ; + private static final SlotIntent[] WRITE; + + static { + READ = new SlotIntent[SlotHash.SLOT_COUNT]; + WRITE = new SlotIntent[SlotHash.SLOT_COUNT]; + + IntStream.range(0, SlotHash.SLOT_COUNT).forEach(i -> { + + READ[i] = new SlotIntent(i, Intent.READ); + WRITE[i] = new SlotIntent(i, Intent.WRITE); + }); + + } + + private SlotIntent(int slotHash, Intent intent) { + this.slotHash = slotHash; + this.intent = intent; + } + + public static SlotIntent of(Intent intent, int slot) { + + if (intent == Intent.READ) { + return READ[slot]; + } + + return WRITE[slot]; + } + + @Override + public boolean equals(Object o) { + if (this == o) + return true; + if (!(o instanceof SlotIntent)) + return false; + + SlotIntent that = (SlotIntent) o; + + if (slotHash != that.slotHash) + return false; + return intent == that.intent; + } + + @Override + public int hashCode() { + int result = slotHash; + result = 31 * result + intent.hashCode(); + return result; + } + } +} diff --git a/src/main/java/io/lettuce/core/cluster/ClusterEventListener.java b/src/main/java/io/lettuce/core/cluster/ClusterEventListener.java new file mode 100644 index 0000000000..1161a6cb64 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/ClusterEventListener.java @@ -0,0 +1,61 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +/** + * Event listener for cluster state/cluster node events. + * + * @author Mark Paluch + */ +interface ClusterEventListener { + + /** + * Event callback if a command receives a {@literal ASK} redirection. + */ + default void onAskRedirection() { + } + + /** + * Event callback if a command receives a {@literal MOVED} redirection. + */ + default void onMovedRedirection() { + } + + /** + * Event callback if a connection tries to reconnect. + */ + default void onReconnectAttempt(int attempt) { + } + + /** + * Event callback if a command should be routed to a slot that is not covered. 
+ * + * @since 5.2 + */ + default void onUncoveredSlot(int slot) { + } + + /** + * Event callback if a connection is attempted to an unknown node. + * + * @since 5.1 + */ + default void onUnknownNode() { + } + + ClusterEventListener NO_OP = new ClusterEventListener() { + }; +} diff --git a/src/main/java/io/lettuce/core/cluster/ClusterFutureSyncInvocationHandler.java b/src/main/java/io/lettuce/core/cluster/ClusterFutureSyncInvocationHandler.java new file mode 100644 index 0000000000..3d779276bc --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/ClusterFutureSyncInvocationHandler.java @@ -0,0 +1,207 @@ +/* + * Copyright 2016-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import java.lang.invoke.MethodHandle; +import java.lang.reflect.InvocationTargetException; +import java.lang.reflect.Method; +import java.lang.reflect.Proxy; +import java.util.Map; +import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.TimeUnit; +import java.util.function.Predicate; + +import io.lettuce.core.RedisFuture; +import io.lettuce.core.api.StatefulConnection; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.cluster.api.NodeSelectionSupport; +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; +import io.lettuce.core.cluster.api.sync.RedisClusterCommands; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; +import io.lettuce.core.internal.AbstractInvocationHandler; +import io.lettuce.core.internal.DefaultMethods; +import io.lettuce.core.internal.Futures; +import io.lettuce.core.internal.TimeoutProvider; +import io.lettuce.core.protocol.RedisCommand; + +/** + * Invocation-handler to synchronize API calls which use Futures as backend. This class leverages the need to implement a full + * sync class which just delegates every request. + * + * @param Key type. + * @param Value type. 
+ * @author Mark Paluch + * @since 3.0 + */ +@SuppressWarnings("unchecked") +class ClusterFutureSyncInvocationHandler extends AbstractInvocationHandler { + + private final StatefulConnection connection; + private final TimeoutProvider timeoutProvider; + private final Class asyncCommandsInterface; + private final Class nodeSelectionInterface; + private final Class nodeSelectionCommandsInterface; + private final Object asyncApi; + + private final Map apiMethodCache = new ConcurrentHashMap<>(RedisClusterCommands.class.getMethods().length, + 1); + private final Map connectionMethodCache = new ConcurrentHashMap<>(5, 1); + private final Map methodHandleCache = new ConcurrentHashMap<>(5, 1); + + ClusterFutureSyncInvocationHandler(StatefulConnection connection, Class asyncCommandsInterface, + Class nodeSelectionInterface, Class nodeSelectionCommandsInterface, Object asyncApi) { + this.connection = connection; + this.timeoutProvider = new TimeoutProvider(() -> connection.getOptions().getTimeoutOptions(), + () -> connection.getTimeout().toNanos()); + this.asyncCommandsInterface = asyncCommandsInterface; + this.nodeSelectionInterface = nodeSelectionInterface; + this.nodeSelectionCommandsInterface = nodeSelectionCommandsInterface; + this.asyncApi = asyncApi; + } + + /** + * @see AbstractInvocationHandler#handleInvocation(Object, Method, Object[]) + */ + @Override + protected Object handleInvocation(Object proxy, Method method, Object[] args) throws Throwable { + + try { + + if (method.isDefault()) { + return methodHandleCache.computeIfAbsent(method, ClusterFutureSyncInvocationHandler::lookupDefaultMethod) + .bindTo(proxy).invokeWithArguments(args); + } + + if (method.getName().equals("getConnection") && args.length > 0) { + return getConnection(method, args); + } + + if (method.getName().equals("readonly") && args.length == 1) { + return nodes((Predicate) args[0], ClusterConnectionProvider.Intent.READ, false); + } + + if (method.getName().equals("nodes") && args.length == 1) { + return nodes((Predicate) args[0], ClusterConnectionProvider.Intent.WRITE, false); + } + + if (method.getName().equals("nodes") && args.length == 2) { + return nodes((Predicate) args[0], ClusterConnectionProvider.Intent.WRITE, (Boolean) args[1]); + } + + Method targetMethod = apiMethodCache.computeIfAbsent(method, key -> { + + try { + return asyncApi.getClass().getMethod(key.getName(), key.getParameterTypes()); + } catch (NoSuchMethodException e) { + throw new IllegalStateException(e); + } + }); + + Object result = targetMethod.invoke(asyncApi, args); + + if (result instanceof RedisFuture) { + RedisFuture command = (RedisFuture) result; + if (!method.getName().equals("exec") && !method.getName().equals("multi")) { + if (connection instanceof StatefulRedisConnection && ((StatefulRedisConnection) connection).isMulti()) { + return null; + } + } + return Futures.awaitOrCancel(command, getTimeoutNs(command), TimeUnit.NANOSECONDS); + } + + return result; + + } catch (InvocationTargetException e) { + throw e.getTargetException(); + } + } + + private long getTimeoutNs(RedisFuture command) { + + if (command instanceof RedisCommand) { + return timeoutProvider.getTimeoutNs((RedisCommand) command); + } + + return connection.getTimeout().toNanos(); + } + + private Object getConnection(Method method, Object[] args) throws Exception { + + Method targetMethod = connectionMethodCache.computeIfAbsent(method, this::lookupMethod); + + Object result = targetMethod.invoke(connection, args); + if (result instanceof StatefulRedisClusterConnection) { + 
StatefulRedisClusterConnection connection = (StatefulRedisClusterConnection) result; + return connection.sync(); + } + + if (result instanceof StatefulRedisConnection) { + StatefulRedisConnection connection = (StatefulRedisConnection) result; + return connection.sync(); + } + + throw new IllegalArgumentException("Cannot call method " + method); + } + + private Method lookupMethod(Method key) { + try { + return connection.getClass().getMethod(key.getName(), key.getParameterTypes()); + } catch (NoSuchMethodException e) { + throw new IllegalArgumentException(e); + } + } + + protected Object nodes(Predicate predicate, ClusterConnectionProvider.Intent intent, boolean dynamic) { + + NodeSelectionSupport, ?> selection = null; + + if (connection instanceof StatefulRedisClusterConnectionImpl) { + + StatefulRedisClusterConnectionImpl impl = (StatefulRedisClusterConnectionImpl) connection; + + if (dynamic) { + selection = new DynamicNodeSelection, Object, K, V>( + impl.getClusterDistributionChannelWriter(), predicate, intent, StatefulRedisConnection::sync); + } else { + + selection = new StaticNodeSelection, Object, K, V>( + impl.getClusterDistributionChannelWriter(), predicate, intent, StatefulRedisConnection::sync); + } + } + + if (connection instanceof StatefulRedisClusterPubSubConnectionImpl) { + + StatefulRedisClusterPubSubConnectionImpl impl = (StatefulRedisClusterPubSubConnectionImpl) connection; + selection = new StaticNodeSelection, Object, K, V>(impl.getClusterDistributionChannelWriter(), + predicate, intent, StatefulRedisConnection::sync); + } + + NodeSelectionInvocationHandler h = new NodeSelectionInvocationHandler((AbstractNodeSelection) selection, + asyncCommandsInterface, timeoutProvider); + return Proxy.newProxyInstance(NodeSelectionSupport.class.getClassLoader(), + new Class[] { nodeSelectionCommandsInterface, nodeSelectionInterface }, h); + } + + private static MethodHandle lookupDefaultMethod(Method method) { + + try { + return DefaultMethods.lookupMethodHandle(method); + } catch (ReflectiveOperationException e) { + throw new IllegalArgumentException(e); + } + } +} diff --git a/src/main/java/io/lettuce/core/cluster/ClusterNodeConnectionFactory.java b/src/main/java/io/lettuce/core/cluster/ClusterNodeConnectionFactory.java new file mode 100644 index 0000000000..7b82412be3 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/ClusterNodeConnectionFactory.java @@ -0,0 +1,109 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import java.util.concurrent.CompletableFuture; +import java.util.function.Function; + +import io.lettuce.core.ConnectionFuture; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.cluster.ClusterConnectionProvider.Intent; +import io.lettuce.core.cluster.models.partitions.Partitions; + +/** + * Specialized {@link Function} to obtain connections for Redis Cluster nodes. 
Connecting to a node returns a + * {@link CompletableFuture} for asynchronous connection and late synchronization. + * + * @author Mark Paluch + * @since 5.0 + */ +interface ClusterNodeConnectionFactory extends + Function>> { + + /** + * Set the {@link Partitions}. + * + * @param partitions + */ + void setPartitions(Partitions partitions); + + /** + * Connection to identify a connection either by nodeId or host/port. + */ + class ConnectionKey { + + final Intent intent; + final String nodeId; + final String host; + final int port; + + public ConnectionKey(Intent intent, String nodeId) { + this.intent = intent; + this.nodeId = nodeId; + this.host = null; + this.port = 0; + } + + public ConnectionKey(Intent intent, String host, int port) { + this.intent = intent; + this.host = host; + this.port = port; + this.nodeId = null; + } + + @Override + public boolean equals(Object o) { + + if (this == o) + return true; + if (!(o instanceof ConnectionKey)) + return false; + + ConnectionKey key = (ConnectionKey) o; + + if (port != key.port) + return false; + if (intent != key.intent) + return false; + if (nodeId != null ? !nodeId.equals(key.nodeId) : key.nodeId != null) + return false; + return !(host != null ? !host.equals(key.host) : key.host != null); + } + + @Override + public int hashCode() { + + int result = intent != null ? intent.name().hashCode() : 0; + result = 31 * result + (nodeId != null ? nodeId.hashCode() : 0); + result = 31 * result + (host != null ? host.hashCode() : 0); + result = 31 * result + port; + return result; + } + + @Override + public String toString() { + + StringBuffer sb = new StringBuffer(); + sb.append(getClass().getSimpleName()); + sb.append(" [intent=").append(intent); + sb.append(", nodeId='").append(nodeId).append('\''); + sb.append(", host='").append(host).append('\''); + sb.append(", port=").append(port); + sb.append(']'); + return sb.toString(); + } + } +} diff --git a/src/main/java/io/lettuce/core/cluster/ClusterNodeEndpoint.java b/src/main/java/io/lettuce/core/cluster/ClusterNodeEndpoint.java new file mode 100644 index 0000000000..d030eb21cb --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/ClusterNodeEndpoint.java @@ -0,0 +1,90 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import java.util.Collection; + +import io.lettuce.core.ClientOptions; +import io.lettuce.core.RedisChannelWriter; +import io.lettuce.core.RedisException; +import io.lettuce.core.protocol.DefaultEndpoint; +import io.lettuce.core.protocol.RedisCommand; +import io.lettuce.core.resource.ClientResources; +import io.netty.util.internal.logging.InternalLogger; +import io.netty.util.internal.logging.InternalLoggerFactory; + +/** + * Command handler for node connections within the Redis Cluster context. This handler can requeue commands if it is + * disconnected and closed but has commands in the queue. 
If the handler was connected it would retry commands using the + * {@literal MOVED} or {@literal ASK} redirection. + * + * @author Mark Paluch + */ +class ClusterNodeEndpoint extends DefaultEndpoint { + + private static final InternalLogger logger = InternalLoggerFactory.getInstance(ClusterNodeEndpoint.class); + + private final RedisChannelWriter clusterChannelWriter; + + /** + * Initialize a new instance that handles commands from the supplied queue. + * + * @param clientOptions client options for this connection. + * @param clientResources client resources for this connection. + * @param clusterChannelWriter top-most channel writer. + */ + public ClusterNodeEndpoint(ClientOptions clientOptions, ClientResources clientResources, + RedisChannelWriter clusterChannelWriter) { + + super(clientOptions, clientResources); + + this.clusterChannelWriter = clusterChannelWriter; + } + + /** + * Move queued and buffered commands from the inactive connection to the master command writer. This is done only if the + * current connection is disconnected and auto-reconnect is enabled (command-retries). If the connection would be open, we + * could get into a race that the commands we're moving are right now in processing. Alive connections can handle redirects + * and retries on their own. + */ + @Override + public void close() { + + logger.debug("{} close()", logPrefix()); + + if (clusterChannelWriter != null) { + retriggerCommands(doExclusive(this::drainCommands)); + } + + super.close(); + } + + protected void retriggerCommands(Collection> commands) { + + for (RedisCommand queuedCommand : commands) { + if (queuedCommand == null || queuedCommand.isCancelled()) { + continue; + } + + try { + clusterChannelWriter.write(queuedCommand); + } catch (RedisException e) { + queuedCommand.completeExceptionally(e); + } + } + } + +} diff --git a/src/main/java/io/lettuce/core/cluster/ClusterPubSubConnectionProvider.java b/src/main/java/io/lettuce/core/cluster/ClusterPubSubConnectionProvider.java new file mode 100644 index 0000000000..1f20ff3e41 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/ClusterPubSubConnectionProvider.java @@ -0,0 +1,183 @@ +/* + * Copyright 2015-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import io.lettuce.core.ConnectionFuture; +import io.lettuce.core.RedisChannelWriter; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.cluster.models.partitions.Partitions; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; +import io.lettuce.core.cluster.pubsub.RedisClusterPubSubListener; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.pubsub.RedisPubSubAdapter; +import io.lettuce.core.pubsub.RedisPubSubListener; +import io.lettuce.core.pubsub.StatefulRedisPubSubConnection; +import io.lettuce.core.resource.ClientResources; + +/** + * {@link ClusterConnectionProvider} to provide {@link StatefulRedisPubSubConnection}s for Redis Cluster use. + *
+ * {@link StatefulRedisPubSubConnection}s provided by this {@link ClusterConnectionProvider} get a {@link RedisPubSubListener} + * registered that propagates received events to an upstream {@link RedisClusterPubSubListener} to provide message propagation. + * Message propagation performs a {@link RedisClusterNode} lookup to distinguish notifications between cluster nodes. + * + * @author Mark Paluch + * @since 4.4 + */ +class ClusterPubSubConnectionProvider extends PooledClusterConnectionProvider { + + private final RedisClusterClient redisClusterClient; + private final RedisCodec redisCodec; + private final RedisClusterPubSubListener notifications; + + /** + * Creates a new {@link ClusterPubSubConnectionProvider}. + * + * @param redisClusterClient must not be {@literal null}. + * @param clusterWriter must not be {@literal null}. + * @param redisCodec must not be {@literal null}. + * @param notificationTarget must not be {@literal null}. + * @param clusterEventListener must not be {@literal null}. + */ + ClusterPubSubConnectionProvider(RedisClusterClient redisClusterClient, RedisChannelWriter clusterWriter, + RedisCodec redisCodec, RedisClusterPubSubListener notificationTarget, + ClusterEventListener clusterEventListener) { + + super(redisClusterClient, clusterWriter, redisCodec, clusterEventListener); + + this.redisClusterClient = redisClusterClient; + this.redisCodec = redisCodec; + this.notifications = notificationTarget; + } + + @Override + protected ClusterNodeConnectionFactory getConnectionFactory(RedisClusterClient redisClusterClient) { + return new DecoratingClusterNodeConnectionFactory(new PubSubNodeConnectionFactory(redisClusterClient.getResources())); + } + + @SuppressWarnings("unchecked") + class PubSubNodeConnectionFactory extends AbstractClusterNodeConnectionFactory { + + PubSubNodeConnectionFactory(ClientResources clientResources) { + super(clientResources); + } + + @Override + public ConnectionFuture> apply(ConnectionKey key) { + + if (key.nodeId != null) { + + // NodeId connections do not provide command recovery due to cluster reconfiguration + return redisClusterClient.connectPubSubToNodeAsync((RedisCodec) redisCodec, key.nodeId, + getSocketAddressSupplier(key)); + } + + // Host and port connections do provide command recovery due to cluster reconfiguration + return redisClusterClient.connectPubSubToNodeAsync((RedisCodec) redisCodec, key.host + ":" + key.port, + getSocketAddressSupplier(key)); + } + } + + @SuppressWarnings("unchecked") + class DecoratingClusterNodeConnectionFactory implements ClusterNodeConnectionFactory { + + private final ClusterNodeConnectionFactory delegate; + + DecoratingClusterNodeConnectionFactory(ClusterNodeConnectionFactory delegate) { + this.delegate = delegate; + } + + @Override + public void setPartitions(Partitions partitions) { + delegate.setPartitions(partitions); + } + + @Override + public ConnectionFuture> apply(ConnectionKey key) { + + ConnectionFuture> future = delegate.apply(key); + if (key.nodeId != null) { + return future.thenApply(connection -> { + ((StatefulRedisPubSubConnection) connection).addListener(new DelegatingRedisClusterPubSubListener( + key.nodeId)); + return connection; + }); + } + + return future.thenApply(connection -> { + ((StatefulRedisPubSubConnection) connection).addListener(new DelegatingRedisClusterPubSubListener(key.host, + key.port)); + + return connection; + }); + } + } + + class DelegatingRedisClusterPubSubListener extends RedisPubSubAdapter { + + private final String nodeId; + private final String host; + 
private final int port; + + DelegatingRedisClusterPubSubListener(String nodeId) { + + this.nodeId = nodeId; + this.host = null; + this.port = 0; + } + + DelegatingRedisClusterPubSubListener(String host, int port) { + + this.nodeId = null; + this.host = host; + this.port = port; + } + + @Override + public void message(K channel, V message) { + notifications.message(getNode(), channel, message); + } + + @Override + public void message(K pattern, K channel, V message) { + notifications.message(getNode(), pattern, channel, message); + } + + @Override + public void subscribed(K channel, long count) { + notifications.subscribed(getNode(), channel, count); + } + + @Override + public void psubscribed(K pattern, long count) { + notifications.psubscribed(getNode(), pattern, count); + } + + @Override + public void unsubscribed(K channel, long count) { + notifications.unsubscribed(getNode(), channel, count); + } + + @Override + public void punsubscribed(K pattern, long count) { + notifications.punsubscribed(getNode(), pattern, count); + } + + private RedisClusterNode getNode() { + return nodeId != null ? getPartitions().getPartitionByNodeId(nodeId) : getPartitions().getPartition(host, port); + } + } +} diff --git a/src/main/java/com/lambdaworks/redis/cluster/ClusterScanSupport.java b/src/main/java/io/lettuce/core/cluster/ClusterScanSupport.java similarity index 79% rename from src/main/java/com/lambdaworks/redis/cluster/ClusterScanSupport.java rename to src/main/java/io/lettuce/core/cluster/ClusterScanSupport.java index 991b9076c3..4d6c0fabdc 100644 --- a/src/main/java/com/lambdaworks/redis/cluster/ClusterScanSupport.java +++ b/src/main/java/io/lettuce/core/cluster/ClusterScanSupport.java @@ -1,21 +1,35 @@ -package com.lambdaworks.redis.cluster; +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; import java.util.ArrayList; import java.util.Iterator; import java.util.List; +import java.util.concurrent.ThreadLocalRandom; import java.util.function.Function; -import rx.Observable; -import rx.functions.Func1; - -import com.lambdaworks.redis.*; -import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection; -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; -import com.lambdaworks.redis.models.role.RedisNodeDescription; +import reactor.core.publisher.Mono; +import io.lettuce.core.*; +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; +import io.lettuce.core.models.role.RedisNodeDescription; /** * Methods to support a Cluster-wide SCAN operation over multiple hosts. - * + * * @author Mark Paluch */ class ClusterScanSupport { @@ -23,7 +37,7 @@ class ClusterScanSupport { /** * Map a {@link RedisFuture} of {@link KeyScanCursor} to a {@link RedisFuture} of {@link ClusterKeyScanCursor}. 
*/ - final static ScanCursorMapper>> futureKeyScanCursorMapper = new ScanCursorMapper>>() { + static final ScanCursorMapper>> futureKeyScanCursorMapper = new ScanCursorMapper>>() { @Override public RedisFuture> map(List nodeIds, String currentNodeId, RedisFuture> cursor) { @@ -39,7 +53,7 @@ public KeyScanCursor apply(KeyScanCursor result) { /** * Map a {@link RedisFuture} of {@link StreamScanCursor} to a {@link RedisFuture} of {@link ClusterStreamScanCursor}. */ - final static ScanCursorMapper> futureStreamScanCursorMapper = new ScanCursorMapper>() { + static final ScanCursorMapper> futureStreamScanCursorMapper = new ScanCursorMapper>() { @Override public RedisFuture map(List nodeIds, String currentNodeId, RedisFuture cursor) { @@ -53,38 +67,25 @@ public StreamScanCursor apply(StreamScanCursor result) { }; /** - * Map a {@link Observable} of {@link KeyScanCursor} to a {@link Observable} of {@link ClusterKeyScanCursor}. + * Map a {@link Mono} of {@link KeyScanCursor} to a {@link Mono} of {@link ClusterKeyScanCursor}. */ - final static ScanCursorMapper>> reactiveKeyScanCursorMapper = new ScanCursorMapper>>() { - @Override - public Observable> map(List nodeIds, String currentNodeId, Observable> cursor) { - return cursor.map(new Func1, KeyScanCursor>() { - @Override - public KeyScanCursor call(KeyScanCursor keyScanCursor) { - return new ClusterKeyScanCursor<>(nodeIds, currentNodeId, keyScanCursor); - } - }); - } - }; + static final ScanCursorMapper>> reactiveKeyScanCursorMapper = (nodeIds, currentNodeId, + cursor) -> cursor.map(keyScanCursor -> new ClusterKeyScanCursor<>(nodeIds, currentNodeId, keyScanCursor)); /** - * Map a {@link Observable} of {@link StreamScanCursor} to a {@link Observable} of {@link ClusterStreamScanCursor}. + * Map a {@link Mono} of {@link StreamScanCursor} to a {@link Mono} of {@link ClusterStreamScanCursor}. */ - final static ScanCursorMapper> reactiveStreamScanCursorMapper = new ScanCursorMapper>() { - @Override - public Observable map(List nodeIds, String currentNodeId, Observable cursor) { - return cursor.map(new Func1() { + static final ScanCursorMapper> reactiveStreamScanCursorMapper = (nodeIds, currentNodeId, + cursor) -> cursor.map(new Function() { @Override - public StreamScanCursor call(StreamScanCursor streamScanCursor) { + public StreamScanCursor apply(StreamScanCursor streamScanCursor) { return new ClusterStreamScanCursor(nodeIds, currentNodeId, streamScanCursor); } }); - } - }; /** * Retrieve the cursor to continue the scan. - * + * * @param scanCursor can be {@literal null}. * @return */ @@ -131,7 +132,7 @@ static String getCurrentNodeId(ScanCursor cursor, List nodeIds) { /** * Retrieve a list of node Ids to use for the SCAN operation. 
- * + * * @param connection * @return */ @@ -158,7 +159,13 @@ public Iterator iterator() { }); if (!selection.isEmpty()) { - RedisClusterNode selectedNode = (RedisClusterNode) selection.get(0); + + int indexToUse = 0; + if (!OrderingReadFromAccessor.isOrderSensitive(connection.getReadFrom())) { + indexToUse = ThreadLocalRandom.current().nextInt(selection.size()); + } + + RedisClusterNode selectedNode = (RedisClusterNode) selection.get(indexToUse); nodeIds.add(selectedNode.getNodeId()); continue; } @@ -202,11 +209,11 @@ static ScanCursorMapper> asyncClusterStreamScanCur return futureStreamScanCursorMapper; } - static ScanCursorMapper>> reactiveClusterKeyScanCursorMapper() { + static ScanCursorMapper>> reactiveClusterKeyScanCursorMapper() { return (ScanCursorMapper) reactiveKeyScanCursorMapper; } - static ScanCursorMapper> reactiveClusterStreamScanCursorMapper() { + static ScanCursorMapper> reactiveClusterStreamScanCursorMapper() { return reactiveStreamScanCursorMapper; } @@ -234,7 +241,7 @@ interface ClusterScanCursor { /** * State object for a cluster-wide SCAN using Key results. - * + * * @param */ private static class ClusterKeyScanCursor extends KeyScanCursor implements ClusterScanCursor { diff --git a/src/main/java/com/lambdaworks/redis/cluster/ClusterTopologyRefreshOptions.java b/src/main/java/io/lettuce/core/cluster/ClusterTopologyRefreshOptions.java similarity index 77% rename from src/main/java/com/lambdaworks/redis/cluster/ClusterTopologyRefreshOptions.java rename to src/main/java/io/lettuce/core/cluster/ClusterTopologyRefreshOptions.java index ccd113e3ce..665ff1b4ca 100644 --- a/src/main/java/com/lambdaworks/redis/cluster/ClusterTopologyRefreshOptions.java +++ b/src/main/java/io/lettuce/core/cluster/ClusterTopologyRefreshOptions.java @@ -1,13 +1,29 @@ -package com.lambdaworks.redis.cluster; +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; +import java.time.Duration; import java.util.*; import java.util.concurrent.TimeUnit; -import com.lambdaworks.redis.internal.LettuceAssert; +import io.lettuce.core.internal.LettuceAssert; /** * Options to control the Cluster topology refreshing of {@link RedisClusterClient}. 
- * + * * @author Mark Paluch * @since 4.2 */ @@ -16,33 +32,32 @@ public class ClusterTopologyRefreshOptions { public static final boolean DEFAULT_PERIODIC_REFRESH_ENABLED = false; public static final long DEFAULT_REFRESH_PERIOD = 60; public static final TimeUnit DEFAULT_REFRESH_PERIOD_UNIT = TimeUnit.SECONDS; + public static final Duration DEFAULT_REFRESH_PERIOD_DURATION = Duration.ofSeconds(DEFAULT_REFRESH_PERIOD); public static final boolean DEFAULT_DYNAMIC_REFRESH_SOURCES = true; public static final Set DEFAULT_ADAPTIVE_REFRESH_TRIGGERS = Collections.emptySet(); public static final long DEFAULT_ADAPTIVE_REFRESH_TIMEOUT = 30; public static final TimeUnit DEFAULT_ADAPTIVE_REFRESH_TIMEOUT_UNIT = TimeUnit.SECONDS; + public static final Duration DEFAULT_ADAPTIVE_REFRESH_TIMEOUT_DURATION = Duration + .ofSeconds(DEFAULT_ADAPTIVE_REFRESH_TIMEOUT); public static final int DEFAULT_REFRESH_TRIGGERS_RECONNECT_ATTEMPTS = 5; public static final boolean DEFAULT_CLOSE_STALE_CONNECTIONS = true; private final boolean periodicRefreshEnabled; - private final long refreshPeriod; - private final TimeUnit refreshPeriodUnit; + private final Duration refreshPeriod; private final boolean closeStaleConnections; private final boolean dynamicRefreshSources; private final Set adaptiveRefreshTriggers; - private final long adaptiveRefreshTimeout; - private final TimeUnit adaptiveRefreshTimeoutUnit; + private final Duration adaptiveRefreshTimeout; private final int refreshTriggersReconnectAttempts; protected ClusterTopologyRefreshOptions(Builder builder) { this.periodicRefreshEnabled = builder.periodicRefreshEnabled; this.refreshPeriod = builder.refreshPeriod; - this.refreshPeriodUnit = builder.refreshPeriodUnit; this.closeStaleConnections = builder.closeStaleConnections; this.dynamicRefreshSources = builder.dynamicRefreshSources; this.adaptiveRefreshTriggers = Collections.unmodifiableSet(new HashSet<>(builder.adaptiveRefreshTriggers)); this.adaptiveRefreshTimeout = builder.adaptiveRefreshTimeout; - this.adaptiveRefreshTimeoutUnit = builder.adaptiveRefreshTimeoutUnit; this.refreshTriggersReconnectAttempts = builder.refreshTriggersReconnectAttempts; } @@ -50,12 +65,10 @@ protected ClusterTopologyRefreshOptions(ClusterTopologyRefreshOptions original) this.periodicRefreshEnabled = original.periodicRefreshEnabled; this.refreshPeriod = original.refreshPeriod; - this.refreshPeriodUnit = original.refreshPeriodUnit; this.closeStaleConnections = original.closeStaleConnections; this.dynamicRefreshSources = original.dynamicRefreshSources; this.adaptiveRefreshTriggers = Collections.unmodifiableSet(new HashSet<>(original.adaptiveRefreshTriggers)); this.adaptiveRefreshTimeout = original.adaptiveRefreshTimeout; - this.adaptiveRefreshTimeoutUnit = original.adaptiveRefreshTimeoutUnit; this.refreshTriggersReconnectAttempts = original.refreshTriggersReconnectAttempts; } @@ -102,20 +115,14 @@ public static ClusterTopologyRefreshOptions enabled() { public static class Builder { private boolean periodicRefreshEnabled = DEFAULT_PERIODIC_REFRESH_ENABLED; - private long refreshPeriod = DEFAULT_REFRESH_PERIOD; - private TimeUnit refreshPeriodUnit = DEFAULT_REFRESH_PERIOD_UNIT; + private Duration refreshPeriod = DEFAULT_REFRESH_PERIOD_DURATION; private boolean closeStaleConnections = DEFAULT_CLOSE_STALE_CONNECTIONS; private boolean dynamicRefreshSources = DEFAULT_DYNAMIC_REFRESH_SOURCES; private Set adaptiveRefreshTriggers = new HashSet<>(DEFAULT_ADAPTIVE_REFRESH_TRIGGERS); - private long adaptiveRefreshTimeout = 
DEFAULT_ADAPTIVE_REFRESH_TIMEOUT; - private TimeUnit adaptiveRefreshTimeoutUnit = DEFAULT_ADAPTIVE_REFRESH_TIMEOUT_UNIT; + private Duration adaptiveRefreshTimeout = DEFAULT_ADAPTIVE_REFRESH_TIMEOUT_DURATION; private int refreshTriggersReconnectAttempts = DEFAULT_REFRESH_TRIGGERS_RECONNECT_ATTEMPTS; - /** - * @deprecated Use {@link ClusterTopologyRefreshOptions#builder()} - */ - @Deprecated - public Builder() { + private Builder() { } /** @@ -140,6 +147,19 @@ public Builder enablePeriodicRefresh(boolean enabled) { return this; } + /** + * Enables periodic refresh and sets the refresh period. Defaults to {@literal 60 SECONDS}. See + * {@link #DEFAULT_REFRESH_PERIOD} and {@link #DEFAULT_REFRESH_PERIOD_UNIT}. This method is a shortcut for + * {@link #refreshPeriod(long, TimeUnit)} and {@link #enablePeriodicRefresh()}. + * + * @param refreshPeriod period for triggering topology updates, must be greater {@literal 0} + * @return {@code this} + * @since 5.0 + */ + public Builder enablePeriodicRefresh(Duration refreshPeriod) { + return refreshPeriod(refreshPeriod).enablePeriodicRefresh(); + } + /** * Enables periodic refresh and sets the refresh period. Defaults to {@literal 60 SECONDS}. See * {@link #DEFAULT_REFRESH_PERIOD} and {@link #DEFAULT_REFRESH_PERIOD_UNIT}. This method is a shortcut for @@ -148,11 +168,30 @@ public Builder enablePeriodicRefresh(boolean enabled) { * @param refreshPeriod period for triggering topology updates, must be greater {@literal 0} * @param refreshPeriodUnit unit for {@code refreshPeriod}, must not be {@literal null} * @return {@code this} + * @deprecated since 5.0, use {@link #enablePeriodicRefresh(Duration)}. */ + @Deprecated public Builder enablePeriodicRefresh(long refreshPeriod, TimeUnit refreshPeriodUnit) { return refreshPeriod(refreshPeriod, refreshPeriodUnit).enablePeriodicRefresh(); } + /** + * Set the refresh period. Defaults to {@literal 60 SECONDS}. See {@link #DEFAULT_REFRESH_PERIOD} and + * {@link #DEFAULT_REFRESH_PERIOD_UNIT}. + * + * @param refreshPeriod period for triggering topology updates, must be greater {@literal 0} + * @return {@code this} + * @since 5.0 + */ + public Builder refreshPeriod(Duration refreshPeriod) { + + LettuceAssert.notNull(refreshPeriod, "RefreshPeriod duration must not be null"); + LettuceAssert.isTrue(refreshPeriod.toNanos() > 0, "RefreshPeriod must be greater 0"); + + this.refreshPeriod = refreshPeriod; + return this; + } + /** * Set the refresh period. Defaults to {@literal 60 SECONDS}. See {@link #DEFAULT_REFRESH_PERIOD} and * {@link #DEFAULT_REFRESH_PERIOD_UNIT}. @@ -160,15 +199,15 @@ public Builder enablePeriodicRefresh(long refreshPeriod, TimeUnit refreshPeriodU * @param refreshPeriod period for triggering topology updates, must be greater {@literal 0} * @param refreshPeriodUnit unit for {@code refreshPeriod}, must not be {@literal null} * @return {@code this} + * @deprecated since 5.0, use {@link #refreshPeriod(Duration)}. 
*/ + @Deprecated public Builder refreshPeriod(long refreshPeriod, TimeUnit refreshPeriodUnit) { LettuceAssert.isTrue(refreshPeriod > 0, "RefreshPeriod must be greater 0"); LettuceAssert.notNull(refreshPeriodUnit, "TimeUnit must not be null"); - this.refreshPeriod = refreshPeriod; - this.refreshPeriodUnit = refreshPeriodUnit; - return this; + return refreshPeriod(Duration.ofNanos(refreshPeriodUnit.toNanos(refreshPeriod))); } /** @@ -211,8 +250,10 @@ public Builder dynamicRefreshSources(boolean dynamicRefreshSources) { * @return {@code this} */ public Builder enableAdaptiveRefreshTrigger(RefreshTrigger... refreshTrigger) { + LettuceAssert.notNull(refreshTrigger, "RefreshTriggers must not be null"); LettuceAssert.noNullElements(refreshTrigger, "RefreshTriggers must not contain null elements"); + adaptiveRefreshTriggers.addAll(Arrays.asList(refreshTrigger)); return this; } @@ -231,6 +272,24 @@ public Builder enableAllAdaptiveRefreshTriggers() { return this; } + /** + * Set the timeout for adaptive topology updates. This timeout is to rate-limit topology updates initiated by refresh + * triggers to one topology refresh per timeout. Defaults to {@literal 30 SECONDS}. See {@link #DEFAULT_REFRESH_PERIOD} + * and {@link #DEFAULT_REFRESH_PERIOD_UNIT}. + * + * @param timeout timeout for rate-limit adaptive topology updates, must be greater than {@literal 0}. + * @return {@code this} + * @since 5.0 + */ + public Builder adaptiveRefreshTriggersTimeout(Duration timeout) { + + LettuceAssert.notNull(refreshPeriod, "Adaptive refresh triggers timeout must not be null"); + LettuceAssert.isTrue(refreshPeriod.toNanos() > 0, "Adaptive refresh triggers timeout must be greater 0"); + + this.adaptiveRefreshTimeout = timeout; + return this; + } + /** * Set the timeout for adaptive topology updates. This timeout is to rate-limit topology updates initiated by refresh * triggers to one topology refresh per timeout. Defaults to {@literal 30 SECONDS}. See {@link #DEFAULT_REFRESH_PERIOD} @@ -239,11 +298,15 @@ public Builder enableAllAdaptiveRefreshTriggers() { * @param timeout timeout for rate-limit adaptive topology updates * @param unit unit for {@code timeout} * @return {@code this} + * @deprecated since 5.0, use {@link #adaptiveRefreshTriggersTimeout(Duration)}. */ + @Deprecated public Builder adaptiveRefreshTriggersTimeout(long timeout, TimeUnit unit) { - this.adaptiveRefreshTimeout = timeout; - this.adaptiveRefreshTimeoutUnit = unit; - return this; + + LettuceAssert.isTrue(timeout > 0, "Triggers timeout must be greater 0"); + LettuceAssert.notNull(unit, "TimeUnit must not be null"); + + return adaptiveRefreshTriggersTimeout(Duration.ofNanos(unit.toNanos(timeout))); } /** @@ -272,8 +335,8 @@ public ClusterTopologyRefreshOptions build() { /** * Flag, whether regular cluster topology updates are updated. The client starts updating the cluster topology in the - * intervals of {@link #getRefreshPeriod()} /{@link #getRefreshPeriodUnit()}. Defaults to {@literal false}. - * + * intervals of {@link #getRefreshPeriod()}. Defaults to {@literal false}. + * * @return {@literal true} it the cluster topology view is updated periodically */ public boolean isPeriodicRefreshEnabled() { @@ -282,26 +345,17 @@ public boolean isPeriodicRefreshEnabled() { /** * Period between the regular cluster topology updates. Defaults to {@literal 60}. 
- * + * * @return the period between the regular cluster topology updates */ - public long getRefreshPeriod() { + public Duration getRefreshPeriod() { return refreshPeriod; } - /** - * Unit for the {@link #getRefreshPeriod()}. Defaults to {@link TimeUnit#SECONDS}. - * - * @return unit for the {@link #getRefreshPeriod()} - */ - public TimeUnit getRefreshPeriodUnit() { - return refreshPeriodUnit; - } - /** * Flag, whether to close stale connections when refreshing the cluster topology. Defaults to {@literal true}. Comes only * into effect if {@link #isPeriodicRefreshEnabled()} is {@literal true}. - * + * * @return {@literal true} if stale connections are cleaned up after cluster topology updates */ public boolean isCloseStaleConnections() { @@ -313,8 +367,8 @@ public boolean isCloseStaleConnections() { * refresh will query all discovered nodes for the cluster topology and calculate the number of clients for each node.If set * to {@literal false}, only the initial seed nodes will be used as sources for topology discovery and the number of clients * will be obtained only for the initial seed nodes. This can be useful when using Redis Cluster with many nodes. - * - * @return {@link true} if dynamic refresh sources are enabled + * + * @return {@literal true} if dynamic refresh sources are enabled */ public boolean useDynamicRefreshSources() { return dynamicRefreshSources; @@ -337,19 +391,10 @@ public Set getAdaptiveRefreshTriggers() { * * @return the period between the regular cluster topology updates */ - public long getAdaptiveRefreshTimeout() { + public Duration getAdaptiveRefreshTimeout() { return adaptiveRefreshTimeout; } - /** - * Unit for the {@link #getAdaptiveRefreshTimeout()}. Defaults to {@link TimeUnit#SECONDS}. - * - * @return unit for the {@link #getRefreshPeriod()} - */ - public TimeUnit getAdaptiveRefreshTimeoutUnit() { - return adaptiveRefreshTimeoutUnit; - } - /** * Threshold for {@link RefreshTrigger#PERSISTENT_RECONNECTS}. Topology updates based on persistent reconnects lead only to * a refresh if the reconnect process tries at least {@code refreshTriggersReconnectAttempts}. See @@ -380,5 +425,19 @@ public enum RefreshTrigger { * Connections to a particular host run into persistent reconnects (more than one attempt). */ PERSISTENT_RECONNECTS, + + /** + * Attempts to use a slot that is not covered by a known node. + * + * @since 5.2 + */ + UNCOVERED_SLOT, + + /** + * Connection attempts to unknown nodes. + * + * @since 5.1 + */ + UNKNOWN_NODE } } diff --git a/src/main/java/io/lettuce/core/cluster/ClusterTopologyRefreshScheduler.java b/src/main/java/io/lettuce/core/cluster/ClusterTopologyRefreshScheduler.java new file mode 100644 index 0000000000..3f341533a9 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/ClusterTopologyRefreshScheduler.java @@ -0,0 +1,313 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.cluster; + +import java.time.Duration; +import java.util.concurrent.CompletionStage; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicBoolean; +import java.util.concurrent.atomic.AtomicReference; +import java.util.function.Supplier; + +import io.lettuce.core.ClientOptions; +import io.lettuce.core.cluster.models.partitions.Partitions; +import io.lettuce.core.event.cluster.AdaptiveRefreshTriggeredEvent; +import io.lettuce.core.resource.ClientResources; +import io.netty.util.concurrent.EventExecutorGroup; +import io.netty.util.concurrent.ScheduledFuture; +import io.netty.util.internal.logging.InternalLogger; +import io.netty.util.internal.logging.InternalLoggerFactory; + +/** + * Scheduler utility to schedule and initiate cluster topology refresh. + * + * @author Mark Paluch + */ +class ClusterTopologyRefreshScheduler implements Runnable, ClusterEventListener { + + private static final InternalLogger logger = InternalLoggerFactory.getInstance(ClusterTopologyRefreshScheduler.class); + private static final ClusterTopologyRefreshOptions FALLBACK_OPTIONS = ClusterTopologyRefreshOptions.create(); + + private final Supplier clientOptions; + private final Supplier partitions; + private final ClientResources clientResources; + private final ClusterTopologyRefreshTask clusterTopologyRefreshTask; + private final AtomicReference timeoutRef = new AtomicReference<>(); + + private final AtomicBoolean clusterTopologyRefreshActivated = new AtomicBoolean(false); + private final AtomicReference> clusterTopologyRefreshFuture = new AtomicReference<>(); + private final EventExecutorGroup genericWorkerPool; + + ClusterTopologyRefreshScheduler(Supplier clientOptions, Supplier partitions, + Supplier> refreshTopology, ClientResources clientResources) { + + this.clientOptions = clientOptions; + this.partitions = partitions; + this.clientResources = clientResources; + this.genericWorkerPool = this.clientResources.eventExecutorGroup(); + this.clusterTopologyRefreshTask = new ClusterTopologyRefreshTask(refreshTopology); + } + + protected void activateTopologyRefreshIfNeeded() { + + ClusterClientOptions options = clientOptions.get(); + ClusterTopologyRefreshOptions topologyRefreshOptions = options.getTopologyRefreshOptions(); + + if (!topologyRefreshOptions.isPeriodicRefreshEnabled() || clusterTopologyRefreshActivated.get()) { + return; + } + + if (clusterTopologyRefreshActivated.compareAndSet(false, true)) { + ScheduledFuture scheduledFuture = genericWorkerPool.scheduleAtFixedRate(this, + options.getRefreshPeriod().toNanos(), options.getRefreshPeriod().toNanos(), TimeUnit.NANOSECONDS); + clusterTopologyRefreshFuture.set(scheduledFuture); + } + } + + /** + * Disable periodic topology refresh. 
+ */ + public void shutdown() { + + if (clusterTopologyRefreshActivated.compareAndSet(true, false)) { + + ScheduledFuture scheduledFuture = clusterTopologyRefreshFuture.get(); + + try { + scheduledFuture.cancel(false); + clusterTopologyRefreshFuture.set(null); + } catch (Exception e) { + logger.debug("Could not cancel Cluster topology refresh", e); + } + } + } + + @Override + public void run() { + + logger.debug("ClusterTopologyRefreshScheduler.run()"); + + if (isEventLoopActive()) { + + if (!clientOptions.get().isRefreshClusterView()) { + logger.debug("Periodic ClusterTopologyRefresh is disabled"); + return; + } + } else { + logger.debug("Periodic ClusterTopologyRefresh is disabled"); + return; + } + + clientResources.eventExecutorGroup().submit(clusterTopologyRefreshTask); + } + + @Override + public void onAskRedirection() { + + if (isEnabled(ClusterTopologyRefreshOptions.RefreshTrigger.ASK_REDIRECT)) { + indicateTopologyRefreshSignal(); + } + } + + @Override + public void onMovedRedirection() { + + if (isEnabled(ClusterTopologyRefreshOptions.RefreshTrigger.MOVED_REDIRECT)) { + if (indicateTopologyRefreshSignal()) { + emitAdaptiveRefreshScheduledEvent(); + } + } + } + + @Override + public void onReconnectAttempt(int attempt) { + + if (isEnabled(ClusterTopologyRefreshOptions.RefreshTrigger.PERSISTENT_RECONNECTS) + && attempt >= getClusterTopologyRefreshOptions().getRefreshTriggersReconnectAttempts()) { + if (indicateTopologyRefreshSignal()) { + emitAdaptiveRefreshScheduledEvent(); + } + } + } + + @Override + public void onUncoveredSlot(int slot) { + + if (isEnabled(ClusterTopologyRefreshOptions.RefreshTrigger.UNCOVERED_SLOT)) { + if (indicateTopologyRefreshSignal()) { + emitAdaptiveRefreshScheduledEvent(); + } + } + } + + @Override + public void onUnknownNode() { + + if (isEnabled(ClusterTopologyRefreshOptions.RefreshTrigger.UNKNOWN_NODE)) { + if (indicateTopologyRefreshSignal()) { + emitAdaptiveRefreshScheduledEvent(); + } + } + } + + private void emitAdaptiveRefreshScheduledEvent() { + + AdaptiveRefreshTriggeredEvent event = new AdaptiveRefreshTriggeredEvent(partitions, this::scheduleRefresh); + + clientResources.eventBus().publish(event); + } + + private boolean indicateTopologyRefreshSignal() { + + logger.debug("ClusterTopologyRefreshScheduler.indicateTopologyRefreshSignal()"); + + if (!acquireTimeout()) { + return false; + } + + return scheduleRefresh(); + } + + private boolean scheduleRefresh() { + + if (isEventLoopActive()) { + clientResources.eventExecutorGroup().submit(clusterTopologyRefreshTask); + return true; + } + + logger.debug("ClusterTopologyRefresh is disabled"); + return false; + } + + /** + * Check if the {@link EventExecutorGroup} is active + * + * @return false if the worker pool is terminating, shutdown or terminated + */ + private boolean isEventLoopActive() { + + EventExecutorGroup eventExecutors = clientResources.eventExecutorGroup(); + if (eventExecutors.isShuttingDown() || eventExecutors.isShutdown() || eventExecutors.isTerminated()) { + return false; + } + + return true; + } + + private boolean acquireTimeout() { + + Timeout existingTimeout = timeoutRef.get(); + + if (existingTimeout != null) { + if (!existingTimeout.isExpired()) { + return false; + } + } + + ClusterTopologyRefreshOptions refreshOptions = getClusterTopologyRefreshOptions(); + Timeout timeout = new Timeout(refreshOptions.getAdaptiveRefreshTimeout()); + + if (timeoutRef.compareAndSet(existingTimeout, timeout)) { + return true; + } + + return false; + } + + private ClusterTopologyRefreshOptions 
getClusterTopologyRefreshOptions() { + + ClientOptions clientOptions = this.clientOptions.get(); + + if (clientOptions instanceof ClusterClientOptions) { + return ((ClusterClientOptions) clientOptions).getTopologyRefreshOptions(); + } + + return FALLBACK_OPTIONS; + } + + private boolean isEnabled(ClusterTopologyRefreshOptions.RefreshTrigger refreshTrigger) { + return getClusterTopologyRefreshOptions().getAdaptiveRefreshTriggers().contains(refreshTrigger); + } + + /** + * Value object to represent a timeout. + * + * @author Mark Paluch + * @since 4.2 + */ + private class Timeout { + + private final long expiresMs; + + public Timeout(Duration duration) { + this.expiresMs = System.currentTimeMillis() + duration.toMillis(); + } + + public boolean isExpired() { + return expiresMs < System.currentTimeMillis(); + } + + public long remaining() { + + long diff = expiresMs - System.currentTimeMillis(); + if (diff > 0) { + return diff; + } + return 0; + } + } + + private static class ClusterTopologyRefreshTask extends AtomicBoolean implements Runnable { + + private static final long serialVersionUID = -1337731371220365694L; + private final Supplier> reloadTopologyAsync; + + ClusterTopologyRefreshTask(Supplier> reloadTopologyAsync) { + this.reloadTopologyAsync = reloadTopologyAsync; + } + + public void run() { + + if (compareAndSet(false, true)) { + doRun(); + return; + } + + if (logger.isDebugEnabled()) { + logger.debug("ClusterTopologyRefreshTask already in progress"); + } + } + + void doRun() { + + if (logger.isDebugEnabled()) { + logger.debug("ClusterTopologyRefreshTask requesting partitions"); + } + try { + reloadTopologyAsync.get().whenComplete((ignore, throwable) -> { + + if (throwable != null) { + logger.warn("Cannot refresh Redis Cluster topology", throwable); + } + + set(false); + }); + } catch (Exception e) { + logger.warn("Cannot refresh Redis Cluster topology", e); + } + } + } +} diff --git a/src/main/java/io/lettuce/core/cluster/CommandSet.java b/src/main/java/io/lettuce/core/cluster/CommandSet.java new file mode 100644 index 0000000000..d4ead66271 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/CommandSet.java @@ -0,0 +1,78 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import java.util.*; + +import io.lettuce.core.models.command.CommandDetail; +import io.lettuce.core.protocol.CommandType; +import io.lettuce.core.protocol.ProtocolKeyword; + +/** + * Value object representing the current Redis state regarding its commands. + *
+ * {@link CommandSet} caches command details and uses {@link CommandType}. + * + * @author Mark Paluch + * @since 4.4 + */ +class CommandSet { + + private final Map commands; + private final EnumSet availableCommands = EnumSet.noneOf(CommandType.class); + + public CommandSet(Collection commands) { + + Map map = new HashMap<>(); + + for (CommandDetail command : commands) { + + map.put(command.getName().toLowerCase(), command); + + CommandType commandType = getCommandType(command); + if (commandType != null) { + availableCommands.add(commandType); + } + } + + this.commands = map; + } + + private static CommandType getCommandType(CommandDetail command) { + + try { + return CommandType.valueOf(command.getName().toUpperCase(Locale.US)); + } catch (IllegalArgumentException e) { + return null; + } + } + + /** + * Check whether Redis supports a particular command given a {@link ProtocolKeyword}. Querying commands using + * {@link CommandType} yields a better performance than other subtypes of {@link ProtocolKeyword}. + * + * @param commandName the command name, must not be {@literal null}. + * @return {@literal true} if the command is supported/available. + */ + public boolean hasCommand(ProtocolKeyword commandName) { + + if (commandName instanceof CommandType) { + return availableCommands.contains(commandName); + } + + return commands.containsKey(commandName.name().toLowerCase()); + } +} diff --git a/src/main/java/io/lettuce/core/cluster/DynamicNodeSelection.java b/src/main/java/io/lettuce/core/cluster/DynamicNodeSelection.java new file mode 100644 index 0000000000..1a33a072a3 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/DynamicNodeSelection.java @@ -0,0 +1,71 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import java.util.List; +import java.util.concurrent.CompletableFuture; +import java.util.function.Function; +import java.util.function.Predicate; +import java.util.stream.Collectors; + +import io.lettuce.core.RedisURI; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; + +/** + * Dynamic selection of nodes. + * + * @param API type. + * @param Command command interface type to invoke multi-node operations. + * @param Key type. + * @param Value type. 
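`DynamicNodeSelection` re-reads the `Partitions` for every dispatched command, so nodes added or removed by a topology refresh are picked up without re-creating the selection. A sketch of the corresponding public selection API, assuming a started `RedisClusterClient`; the predicate and class name are illustrative:

    import java.util.function.Predicate;

    import io.lettuce.core.cluster.RedisClusterClient;
    import io.lettuce.core.cluster.api.StatefulRedisClusterConnection;
    import io.lettuce.core.cluster.api.async.AsyncNodeSelection;
    import io.lettuce.core.cluster.api.async.RedisAdvancedClusterAsyncCommands;
    import io.lettuce.core.cluster.models.partitions.RedisClusterNode;

    class NodeSelections {

        void selectReplicas(RedisClusterClient clusterClient) {

            StatefulRedisClusterConnection<String, String> connection = clusterClient.connect();
            RedisAdvancedClusterAsyncCommands<String, String> async = connection.async();

            Predicate<RedisClusterNode> replicasOnly = node -> node.is(RedisClusterNode.NodeFlag.SLAVE);

            // Static selection: the matching nodes are captured once when the selection is created.
            AsyncNodeSelection<String, String> snapshot = async.nodes(replicasOnly);

            // Dynamic selection: the predicate is re-applied against the current Partitions
            // for every command dispatch, so topology changes take effect immediately.
            AsyncNodeSelection<String, String> live = async.nodes(replicasOnly, true);
        }
    }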
+ * @author Mark Paluch + */ +class DynamicNodeSelection extends AbstractNodeSelection { + + private final ClusterDistributionChannelWriter writer; + private final Predicate selector; + private final ClusterConnectionProvider.Intent intent; + private final Function, API> apiExtractor; + + public DynamicNodeSelection(ClusterDistributionChannelWriter writer, Predicate selector, + ClusterConnectionProvider.Intent intent, Function, API> apiExtractor) { + + this.selector = selector; + this.intent = intent; + this.writer = writer; + this.apiExtractor = apiExtractor; + } + + @Override + protected CompletableFuture> getConnection(RedisClusterNode redisClusterNode) { + + RedisURI uri = redisClusterNode.getUri(); + AsyncClusterConnectionProvider async = (AsyncClusterConnectionProvider) writer.getClusterConnectionProvider(); + + return async.getConnectionAsync(intent, uri.getHost(), uri.getPort()); + } + + @Override + protected CompletableFuture getApi(RedisClusterNode redisClusterNode) { + return getConnection(redisClusterNode).thenApply(apiExtractor); + } + + @Override + protected List nodes() { + return writer.getPartitions().stream().filter(selector).collect(Collectors.toList()); + } +} diff --git a/src/main/java/io/lettuce/core/cluster/MultiNodeExecution.java b/src/main/java/io/lettuce/core/cluster/MultiNodeExecution.java new file mode 100644 index 0000000000..87ef643947 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/MultiNodeExecution.java @@ -0,0 +1,134 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import java.util.Map; +import java.util.concurrent.Callable; +import java.util.concurrent.CompletionStage; +import java.util.concurrent.ExecutionException; +import java.util.concurrent.atomic.AtomicLong; + +import io.lettuce.core.RedisCommandInterruptedException; +import io.lettuce.core.RedisFuture; +import io.lettuce.core.internal.Exceptions; + +/** + * Utility to perform and synchronize command executions on multiple cluster nodes. + * + * @author Mark Paluch + */ +class MultiNodeExecution { + + static T execute(Callable function) { + try { + return function.call(); + } catch (Exception e) { + throw Exceptions.bubble(e); + } + } + + /** + * Aggregate (sum) results of the {@link RedisFuture}s. + * + * @param executions mapping of a key to the future + * @return future producing an aggregation result + */ + protected static RedisFuture aggregateAsync(Map> executions) { + + return new PipelinedRedisFuture<>(executions, objectPipelinedRedisFuture -> { + AtomicLong result = new AtomicLong(); + for (CompletionStage future : executions.values()) { + Long value = execute(() -> future.toCompletableFuture().get()); + if (value != null) { + result.getAndAdd(value); + } + } + + return result.get(); + }); + } + + /** + * Returns the result of the first {@link RedisFuture} and guarantee that all futures are finished. 
+ * + * @param executions mapping of a key to the future + * @param result type + * @return future returning the first result. + */ + protected static RedisFuture firstOfAsync(Map> executions) { + + return new PipelinedRedisFuture<>(executions, objectPipelinedRedisFuture -> { + // make sure, that all futures are executed before returning the result. + for (CompletionStage future : executions.values()) { + execute(() -> future.toCompletableFuture().get()); + } + for (CompletionStage future : executions.values()) { + return execute(() -> future.toCompletableFuture().get()); + } + return null; + }); + } + + /** + * Returns the result of the last {@link RedisFuture} and guarantee that all futures are finished. + * + * @param executions mapping of a key to the future + * @param result type + * @return future returning the first result. + */ + static RedisFuture lastOfAsync(Map> executions) { + + return new PipelinedRedisFuture<>(executions, objectPipelinedRedisFuture -> { + // make sure, that all futures are executed before returning the result. + T result = null; + for (CompletionStage future : executions.values()) { + result = execute(() -> future.toCompletableFuture().get()); + } + return result; + }); + } + + /** + * Returns always {@literal OK} and guarantee that all futures are finished. + * + * @param executions mapping of a key to the future + * @return future returning the first result. + */ + static RedisFuture alwaysOkOfAsync(Map> executions) { + + return new PipelinedRedisFuture<>(executions, objectPipelinedRedisFuture -> { + + synchronize(executions); + + return "OK"; + }); + } + + private static void synchronize(Map> executions) { + + // make sure, that all futures are executed before returning the result. + for (CompletionStage future : executions.values()) { + try { + future.toCompletableFuture().get(); + } catch (InterruptedException e) { + Thread.currentThread().interrupt(); + throw new RedisCommandInterruptedException(e); + } catch (ExecutionException e) { + // swallow exceptions + } + } + } +} diff --git a/src/main/java/io/lettuce/core/cluster/NodeSelectionInvocationHandler.java b/src/main/java/io/lettuce/core/cluster/NodeSelectionInvocationHandler.java new file mode 100644 index 0000000000..20ea689307 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/NodeSelectionInvocationHandler.java @@ -0,0 +1,312 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
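As I read the surrounding code, these helpers back the multi-node overloads of the advanced cluster API: `DBSIZE`, for instance, is dispatched to all masters and summed, while `FLUSHALL` resolves to `OK` once every node has answered. A sketch of the user-visible effect, assuming a running cluster; the class name is illustrative:

    import io.lettuce.core.cluster.RedisClusterClient;
    import io.lettuce.core.cluster.api.StatefulRedisClusterConnection;
    import io.lettuce.core.cluster.api.sync.RedisAdvancedClusterCommands;

    class MultiNodeResults {

        void aggregate(RedisClusterClient clusterClient) {

            StatefulRedisClusterConnection<String, String> connection = clusterClient.connect();
            RedisAdvancedClusterCommands<String, String> sync = connection.sync();

            // Routed to all masters; the per-node counts are summed (cf. aggregateAsync above).
            Long totalKeys = sync.dbsize();

            // Routed to all masters; resolves to OK once every node has completed (cf. alwaysOkOfAsync).
            String reply = sync.flushall();

            System.out.printf("%d keys, FLUSHALL -> %s%n", totalKeys, reply);
            connection.close();
        }
    }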
+ */ +package io.lettuce.core.cluster; + +import java.lang.reflect.InvocationTargetException; +import java.lang.reflect.Method; +import java.time.Duration; +import java.util.*; +import java.util.concurrent.*; +import java.util.concurrent.atomic.AtomicLong; +import java.util.stream.Collectors; + +import org.reactivestreams.Publisher; + +import io.lettuce.core.RedisCommandExecutionException; +import io.lettuce.core.RedisCommandTimeoutException; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.cluster.api.NodeSelectionSupport; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; +import io.lettuce.core.internal.*; +import io.lettuce.core.protocol.RedisCommand; + +/** + * Invocation handler to trigger commands on multiple connections and return a holder for the values. + * + * @author Mark Paluch + * @since 4.4 + */ +class NodeSelectionInvocationHandler extends AbstractInvocationHandler { + + private static final Method NULL_MARKER_METHOD; + + private final Map nodeSelectionMethods = new ConcurrentHashMap<>(); + private final Map connectionMethod = new ConcurrentHashMap<>(); + private final Class commandsInterface; + + private final AbstractNodeSelection selection; + private final ExecutionModel executionModel; + private final TimeoutProvider timeoutProvider; + + static { + try { + NULL_MARKER_METHOD = NodeSelectionInvocationHandler.class.getDeclaredMethod("handleInvocation", Object.class, + Method.class, Object[].class); + } catch (NoSuchMethodException e) { + throw new IllegalStateException(e); + } + } + + NodeSelectionInvocationHandler(AbstractNodeSelection selection, Class commandsInterface, + ExecutionModel executionModel) { + this(selection, commandsInterface, null, executionModel); + } + + NodeSelectionInvocationHandler(AbstractNodeSelection selection, Class commandsInterface, + TimeoutProvider timeoutProvider) { + this(selection, commandsInterface, timeoutProvider, ExecutionModel.SYNC); + } + + private NodeSelectionInvocationHandler(AbstractNodeSelection selection, Class commandsInterface, + TimeoutProvider timeoutProvider, ExecutionModel executionModel) { + + if (executionModel == ExecutionModel.SYNC) { + LettuceAssert.notNull(timeoutProvider, "TimeoutProvider must not be null"); + } + + LettuceAssert.notNull(executionModel, "ExecutionModel must not be null"); + + this.selection = selection; + this.commandsInterface = commandsInterface; + this.timeoutProvider = timeoutProvider; + this.executionModel = executionModel; + } + + @Override + @SuppressWarnings({ "rawtypes", "unchecked" }) + protected Object handleInvocation(Object proxy, Method method, Object[] args) throws Throwable { + + try { + + if (method.getName().equals("commands") && args.length == 0) { + return proxy; + } + + Method targetMethod = findMethod(commandsInterface, method, connectionMethod); + + if (targetMethod == null) { + + Method nodeSelectionMethod = findMethod(NodeSelectionSupport.class, method, nodeSelectionMethods); + return nodeSelectionMethod.invoke(selection, args); + } + + Map>> connections = new LinkedHashMap<>( + selection.size(), 1); + connections.putAll(selection.statefulMap()); + Map executions = new LinkedHashMap<>(selection.size(), 1); + + AtomicLong timeout = new AtomicLong(); + + for (Map.Entry>> entry : connections + .entrySet()) { + + CompletableFuture> connection = entry.getValue(); + + CompletableFuture result = connection.thenCompose(it -> { + + try { + + Object resultValue = targetMethod + .invoke(executionModel == ExecutionModel.REACTIVE ? 
it.reactive() : it.async(), args); + + if (timeoutProvider != null && resultValue instanceof RedisCommand && timeout.get() == 0) { + timeout.set(timeoutProvider.getTimeoutNs((RedisCommand) resultValue)); + } + + if (resultValue instanceof CompletionStage) { + return (CompletionStage) resultValue; + } + + return CompletableFuture.completedFuture(resultValue); + } catch (InvocationTargetException e) { + + CompletableFuture future = new CompletableFuture<>(); + future.completeExceptionally(e.getTargetException()); + return future; + } catch (Exception e) { + + CompletableFuture future = new CompletableFuture<>(); + future.completeExceptionally(e); + return future; + } + }); + + executions.put(entry.getKey(), result); + } + + return getExecutions(executions, timeout.get()); + } catch (InvocationTargetException e) { + throw e.getTargetException(); + } + } + + @SuppressWarnings("unchecked") + private Object getExecutions(Map executions, long timeoutNs) + throws ExecutionException, InterruptedException { + + if (executionModel == ExecutionModel.REACTIVE) { + Map>> reactiveExecutions = (Map) executions; + return new ReactiveExecutionsImpl<>(reactiveExecutions); + } + + Map> asyncExecutions = (Map) executions; + + if (executionModel == ExecutionModel.SYNC) { + + long timeoutToUse = timeoutNs >= 0 ? timeoutNs : timeoutProvider.getTimeoutNs(null); + + if (!awaitAll(timeoutToUse, TimeUnit.NANOSECONDS, asyncExecutions.values())) { + throw createTimeoutException(asyncExecutions, Duration.ofNanos(timeoutToUse)); + } + + if (atLeastOneFailed(asyncExecutions)) { + throw createExecutionException(asyncExecutions); + } + + return new SyncExecutionsImpl<>(asyncExecutions); + } + + return new AsyncExecutionsImpl<>(asyncExecutions); + } + + private static boolean awaitAll(long timeout, TimeUnit unit, Collection> futures) { + + boolean complete; + + try { + long nanos = unit.toNanos(timeout); + long time = System.nanoTime(); + + for (CompletionStage f : futures) { + + if (nanos == 0) { + f.toCompletableFuture().get(); + } else { + if (nanos < 0) { + return false; + } + try { + f.toCompletableFuture().get(nanos, TimeUnit.NANOSECONDS); + } catch (ExecutionException e) { + // ignore + } + long now = System.nanoTime(); + nanos -= now - time; + time = now; + } + } + complete = true; + } catch (TimeoutException e) { + complete = false; + } catch (Exception e) { + throw Exceptions.bubble(e); + } + + return complete; + } + + private boolean atLeastOneFailed(Map> executions) { + return executions.values().stream() + .anyMatch(completionStage -> completionStage.toCompletableFuture().isCompletedExceptionally()); + } + + private RedisCommandTimeoutException createTimeoutException(Map> executions, + Duration timeout) { + + List notFinished = new ArrayList<>(); + executions.forEach((redisClusterNode, completionStage) -> { + if (!completionStage.toCompletableFuture().isDone()) { + notFinished.add(redisClusterNode); + } + }); + + String description = getNodeDescription(notFinished); + return ExceptionFactory.createTimeoutException("Command timed out for node(s): " + description, timeout); + } + + private RedisCommandExecutionException createExecutionException(Map> executions) { + + List failed = new ArrayList<>(); + executions.forEach((redisClusterNode, completionStage) -> { + if (!completionStage.toCompletableFuture().isCompletedExceptionally()) { + failed.add(redisClusterNode); + } + }); + + RedisCommandExecutionException e = ExceptionFactory + .createExecutionException("Multi-node command execution failed on node(s): " + 
getNodeDescription(failed)); + + executions.forEach((redisClusterNode, completionStage) -> { + CompletableFuture completableFuture = completionStage.toCompletableFuture(); + if (completableFuture.isCompletedExceptionally()) { + try { + completableFuture.get(); + } catch (Exception innerException) { + + if (innerException instanceof ExecutionException) { + e.addSuppressed(innerException.getCause()); + } else { + e.addSuppressed(innerException); + } + } + } + }); + return e; + } + + private String getNodeDescription(List notFinished) { + return String.join(", ", notFinished.stream().map(this::getDescriptor).collect(Collectors.toList())); + } + + private String getDescriptor(RedisClusterNode redisClusterNode) { + + StringBuilder buffer = new StringBuilder(redisClusterNode.getNodeId()); + buffer.append(" ("); + + if (redisClusterNode.getUri() != null) { + buffer.append(redisClusterNode.getUri().getHost()).append(':').append(redisClusterNode.getUri().getPort()); + } + + buffer.append(')'); + return buffer.toString(); + } + + private Method findMethod(Class type, Method method, Map cache) { + + Method result = cache.get(method); + if (result != null && result != NULL_MARKER_METHOD) { + return result; + } + + for (Method typeMethod : type.getMethods()) { + if (!typeMethod.getName().equals(method.getName()) + || !Arrays.equals(typeMethod.getParameterTypes(), method.getParameterTypes())) { + continue; + } + + cache.put(method, typeMethod); + return typeMethod; + } + + // Null-marker to avoid full class method scans. + cache.put(method, NULL_MARKER_METHOD); + return null; + } + + enum ExecutionModel { + SYNC, ASYNC, REACTIVE + } +} diff --git a/src/main/java/io/lettuce/core/cluster/PartitionAccessor.java b/src/main/java/io/lettuce/core/cluster/PartitionAccessor.java new file mode 100644 index 0000000000..3664e21a4e --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/PartitionAccessor.java @@ -0,0 +1,69 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import java.util.ArrayList; +import java.util.Collection; +import java.util.List; +import java.util.function.Predicate; + +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; + +/** + * Accessor for Partitions. 
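The invocation handler above fans a single method call out to every selected node and collects the per-node futures into an executions holder. A sketch of how that surfaces through the selection API, assuming an existing cluster connection and the `AsyncExecutions` accessors as I understand them (`nodes()`, `get(node)`, `futures()`):

    import java.util.concurrent.CompletableFuture;

    import io.lettuce.core.cluster.api.StatefulRedisClusterConnection;
    import io.lettuce.core.cluster.api.async.AsyncExecutions;
    import io.lettuce.core.cluster.api.async.AsyncNodeSelection;

    class PingAllMasters {

        void pingMasters(StatefulRedisClusterConnection<String, String> connection) {

            AsyncNodeSelection<String, String> masters = connection.async().masters();

            // One PING per selected node; the result holder keeps the node association.
            AsyncExecutions<String> executions = masters.commands().ping();

            executions.nodes().forEach(node ->
                    executions.get(node).thenAccept(pong ->
                            System.out.println(node.getNodeId() + " -> " + pong)));

            // Optionally wait until all responses arrived.
            CompletableFuture.allOf(executions.futures()).join();
        }
    }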
+ * + * @author Mark Paluch + */ +class PartitionAccessor { + + private final Collection partitions; + + PartitionAccessor(Collection partitions) { + this.partitions = partitions; + } + + List getMasters() { + return get(redisClusterNode -> redisClusterNode.is(RedisClusterNode.NodeFlag.MASTER)); + } + + List getReplicas() { + return get(redisClusterNode -> redisClusterNode.is(RedisClusterNode.NodeFlag.SLAVE)); + + } + + List getReplicas(RedisClusterNode master) { + return get(redisClusterNode -> redisClusterNode.is(RedisClusterNode.NodeFlag.SLAVE) + && master.getNodeId().equals(redisClusterNode.getSlaveOf())); + } + + List getReadCandidates(RedisClusterNode master) { + return get(redisClusterNode -> redisClusterNode.getNodeId().equals(master.getNodeId()) + || (redisClusterNode.is(RedisClusterNode.NodeFlag.SLAVE) && master.getNodeId().equals( + redisClusterNode.getSlaveOf()))); + } + + List get(Predicate test) { + + List result = new ArrayList<>(partitions.size()); + for (RedisClusterNode partition : partitions) { + if (test.test(partition)) { + result.add(partition); + } + } + return result; + } + +} diff --git a/src/main/java/io/lettuce/core/cluster/PartitionException.java b/src/main/java/io/lettuce/core/cluster/PartitionException.java new file mode 100644 index 0000000000..66c1065738 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/PartitionException.java @@ -0,0 +1,56 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import io.lettuce.core.RedisException; + +/** + * Partition access exception thrown when a partition-specific operations fails. + * + * @author Mark Paluch + * @since 5.1 + */ +@SuppressWarnings("serial") +public class PartitionException extends RedisException { + + /** + * Create a {@code PartitionException} with the specified detail message. + * + * @param msg the detail message. + */ + public PartitionException(String msg) { + super(msg); + } + + /** + * Create a {@code PartitionException} with the specified detail message and nested exception. + * + * @param msg the detail message. + * @param cause the nested exception. + */ + public PartitionException(String msg, Throwable cause) { + super(msg, cause); + } + + /** + * Create a {@code PartitionException} with the specified nested exception. + * + * @param cause the nested exception. + */ + public PartitionException(Throwable cause) { + super(cause); + } +} diff --git a/src/main/java/io/lettuce/core/cluster/PartitionSelectorException.java b/src/main/java/io/lettuce/core/cluster/PartitionSelectorException.java new file mode 100644 index 0000000000..e8c7dec59a --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/PartitionSelectorException.java @@ -0,0 +1,46 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import io.lettuce.core.cluster.models.partitions.Partitions; + +/** + * Exception thrown when a partition selection fails (slot not covered, no read candidates available). + * + * @author Mark Paluch + * @since 5.1 + */ +@SuppressWarnings("serial") +public class PartitionSelectorException extends PartitionException { + + private final Partitions partitions; + + /** + * Create a {@code UnknownPartitionException} with the specified detail message. + * + * @param msg the detail message. + * @param partitions read-only view of the current topology view. + */ + public PartitionSelectorException(String msg, Partitions partitions) { + + super(msg); + this.partitions = partitions; + } + + public Partitions getPartitions() { + return partitions; + } +} diff --git a/src/main/java/io/lettuce/core/cluster/PartitionsConsensus.java b/src/main/java/io/lettuce/core/cluster/PartitionsConsensus.java new file mode 100644 index 0000000000..b4f9d28a7b --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/PartitionsConsensus.java @@ -0,0 +1,56 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import java.util.Map; + +import io.lettuce.core.RedisURI; +import io.lettuce.core.cluster.models.partitions.Partitions; + +/** + * Consensus API to decide on the {@link io.lettuce.core.cluster.models.partitions.Partitions topology view} to be used by + * {@link RedisClusterClient}. + *
+ * {@link PartitionsConsensus} takes the current {@link Partitions} and a {@link java.util.Map} of newly retrieved + * {@link Partitions} to determine a view that shall be used. Implementing classes may reuse {@link Partitions} from input + * arguments or construct a new {@link Partitions} object. + * + * @author Mark Paluch + * @since 4.2 + * @see io.lettuce.core.cluster.models.partitions.Partitions + * @see RedisClusterClient + */ +abstract class PartitionsConsensus { + + /** + * Consensus algorithm to select a partition containing the most previously known nodes. + */ + public static final PartitionsConsensus KNOWN_MAJORITY = new PartitionsConsensusImpl.KnownMajority(); + + /** + * Consensus algorithm to select a topology view containing the most active nodes. + */ + public static final PartitionsConsensus HEALTHY_MAJORITY = new PartitionsConsensusImpl.HealthyMajority(); + + /** + * Determine the {@link Partitions} to be used by {@link RedisClusterClient}. + * + * @param current the currently used topology view, must not be {@literal null}. + * @param topologyViews the newly retrieved views, must not be {@literal null}. + * @return the resulting {@link Partitions} to be used by {@link RedisClusterClient}. + */ + abstract Partitions getPartitions(Partitions current, Map topologyViews); +} diff --git a/src/main/java/io/lettuce/core/cluster/PartitionsConsensusImpl.java b/src/main/java/io/lettuce/core/cluster/PartitionsConsensusImpl.java new file mode 100644 index 0000000000..5a05d767c3 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/PartitionsConsensusImpl.java @@ -0,0 +1,118 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; +import java.util.Map; + +import io.lettuce.core.RedisURI; +import io.lettuce.core.cluster.models.partitions.Partitions; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; + +/** + * Implementations for {@link PartitionsConsensus}. + * + * @author Mark Paluch + * @since 4.2 + */ +class PartitionsConsensusImpl { + + /** + * Votes for {@link Partitions} that contains the most known (previously existing) nodes. 
+ */ + static final class KnownMajority extends PartitionsConsensus { + + @Override + Partitions getPartitions(Partitions current, Map topologyViews) { + + if (topologyViews.isEmpty()) { + return current; + } + + List votedList = new ArrayList<>(); + + for (Partitions partitions : topologyViews.values()) { + + int knownNodes = 0; + for (RedisClusterNode knownNode : current) { + + if (partitions.getPartitionByNodeId(knownNode.getNodeId()) != null) { + knownNodes++; + } + } + + votedList.add(new VotedPartitions(knownNodes, partitions)); + } + + Collections.shuffle(votedList); + Collections.sort(votedList, (o1, o2) -> Integer.compare(o2.votes, o1.votes)); + + return votedList.get(0).partitions; + } + } + + /** + * Votes for {@link Partitions} that contains the most active (in total) nodes. + */ + static final class HealthyMajority extends PartitionsConsensus { + + @Override + Partitions getPartitions(Partitions current, Map topologyViews) { + + if (topologyViews.isEmpty()) { + return current; + } + + List votedList = new ArrayList<>(); + + for (Partitions partitions : topologyViews.values()) { + + int votes = 0; + + for (RedisClusterNode node : partitions) { + + if (node.is(RedisClusterNode.NodeFlag.FAIL) || node.is(RedisClusterNode.NodeFlag.EVENTUAL_FAIL) + || node.is(RedisClusterNode.NodeFlag.NOADDR)) { + continue; + } + + votes++; + + } + + votedList.add(new VotedPartitions(votes, partitions)); + } + + Collections.shuffle(votedList); + Collections.sort(votedList, (o1, o2) -> Integer.compare(o2.votes, o1.votes)); + + return votedList.get(0).partitions; + } + } + + static final class VotedPartitions { + + final int votes; + final Partitions partitions; + + public VotedPartitions(int votes, Partitions partitions) { + this.votes = votes; + this.partitions = partitions; + } + } +} diff --git a/src/main/java/io/lettuce/core/cluster/PipelinedRedisFuture.java b/src/main/java/io/lettuce/core/cluster/PipelinedRedisFuture.java new file mode 100644 index 0000000000..eff0d18967 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/PipelinedRedisFuture.java @@ -0,0 +1,82 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import java.util.Map; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.CompletionStage; +import java.util.concurrent.CountDownLatch; +import java.util.concurrent.TimeUnit; +import java.util.function.Function; + +import io.lettuce.core.RedisFuture; + +/** + * Pipelining for commands that are executed on multiple cluster nodes. Merges results and emits one composite result. 
+ * + * @author Mark Paluch + */ +class PipelinedRedisFuture extends CompletableFuture implements RedisFuture { + + private final CountDownLatch latch = new CountDownLatch(1); + + public PipelinedRedisFuture(CompletionStage completionStage) { + this(completionStage, v -> v); + } + + public PipelinedRedisFuture(CompletionStage completionStage, Function converter) { + completionStage.thenAccept(v -> complete(converter.apply(v))) + .exceptionally(throwable -> { + completeExceptionally(throwable); + return null; + }); + } + + public PipelinedRedisFuture(Map> executions, Function, V> converter) { + + CompletableFuture.allOf(executions.values().toArray(new CompletableFuture[0])) + .thenRun(() -> complete(converter.apply(this))).exceptionally(throwable -> { + completeExceptionally(throwable); + return null; + }); + } + + @Override + public boolean complete(V value) { + boolean result = super.complete(value); + latch.countDown(); + return result; + } + + @Override + public boolean completeExceptionally(Throwable ex) { + + boolean value = super.completeExceptionally(ex); + latch.countDown(); + return value; + } + + @Override + public String getError() { + return null; + } + + @Override + public boolean await(long timeout, TimeUnit unit) throws InterruptedException { + return latch.await(timeout, unit); + } + +} diff --git a/src/main/java/io/lettuce/core/cluster/PooledClusterConnectionProvider.java b/src/main/java/io/lettuce/core/cluster/PooledClusterConnectionProvider.java new file mode 100644 index 0000000000..4e9f119551 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/PooledClusterConnectionProvider.java @@ -0,0 +1,686 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Iterator; +import java.util.List; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.CompletionException; +import java.util.concurrent.ExecutionException; +import java.util.concurrent.ThreadLocalRandom; +import java.util.function.Function; +import java.util.stream.Collectors; + +import io.lettuce.core.*; +import io.lettuce.core.api.StatefulConnection; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.cluster.ClusterNodeConnectionFactory.ConnectionKey; +import io.lettuce.core.cluster.models.partitions.Partitions; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.internal.*; +import io.lettuce.core.models.role.RedisInstance; +import io.lettuce.core.models.role.RedisNodeDescription; +import io.netty.util.internal.logging.InternalLogger; +import io.netty.util.internal.logging.InternalLoggerFactory; + +/** + * Connection provider with built-in connection caching. + * + * @param Key type. + * @param Value type. 
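The provider below only builds reader candidates when a `ReadFrom` setting other than master is active. A sketch of enabling replica reads on the public API; the key is a placeholder, and depending on the release the constant is `REPLICA_PREFERRED` or the older `SLAVE_PREFERRED`:

    import io.lettuce.core.ReadFrom;
    import io.lettuce.core.cluster.RedisClusterClient;
    import io.lettuce.core.cluster.api.StatefulRedisClusterConnection;

    class ReplicaReads {

        void readFromReplicas(RedisClusterClient clusterClient) {

            StatefulRedisClusterConnection<String, String> connection = clusterClient.connect();

            // Writes keep going to the slot owner; reads may be served by one of its replicas.
            connection.setReadFrom(ReadFrom.REPLICA_PREFERRED);

            connection.sync().set("user:1", "alice");
            String value = connection.sync().get("user:1");

            System.out.println(value);
            connection.close();
        }
    }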
+ * @author Mark Paluch + * @since 3.0 + */ +@SuppressWarnings({ "unchecked", "rawtypes" }) +class PooledClusterConnectionProvider implements ClusterConnectionProvider, AsyncClusterConnectionProvider { + + private static final InternalLogger logger = InternalLoggerFactory.getInstance(PooledClusterConnectionProvider.class); + + // Contains NodeId-identified and HostAndPort-identified connections. + private final Object stateLock = new Object(); + private final boolean debugEnabled = logger.isDebugEnabled(); + private final CompletableFuture> writers[] = new CompletableFuture[SlotHash.SLOT_COUNT]; + private final CompletableFuture> readers[][] = new CompletableFuture[SlotHash.SLOT_COUNT][]; + private final RedisClusterClient redisClusterClient; + private final ClusterNodeConnectionFactory connectionFactory; + private final RedisChannelWriter clusterWriter; + private final ClusterEventListener clusterEventListener; + private final RedisCodec redisCodec; + private final AsyncConnectionProvider, ConnectionFuture>> connectionProvider; + + private Partitions partitions; + private boolean autoFlushCommands = true; + private ReadFrom readFrom; + + public PooledClusterConnectionProvider(RedisClusterClient redisClusterClient, RedisChannelWriter clusterWriter, + RedisCodec redisCodec, ClusterEventListener clusterEventListener) { + + this.redisCodec = redisCodec; + this.redisClusterClient = redisClusterClient; + this.clusterWriter = clusterWriter; + this.clusterEventListener = clusterEventListener; + this.connectionFactory = new NodeConnectionPostProcessor(getConnectionFactory(redisClusterClient)); + this.connectionProvider = new AsyncConnectionProvider<>(this.connectionFactory); + } + + @Override + public StatefulRedisConnection getConnection(Intent intent, int slot) { + + try { + return getConnectionAsync(intent, slot).get(); + } catch (Exception e) { + throw Exceptions.bubble(e); + } + } + + @Override + public CompletableFuture> getConnectionAsync(Intent intent, int slot) { + + if (debugEnabled) { + logger.debug("getConnection(" + intent + ", " + slot + ")"); + } + + if (intent == Intent.READ && readFrom != null && readFrom != ReadFrom.MASTER) { + return getReadConnection(slot); + } + + return getWriteConnection(slot).toCompletableFuture(); + } + + private CompletableFuture> getWriteConnection(int slot) { + + CompletableFuture> writer;// avoid races when reconfiguring partitions. + synchronized (stateLock) { + writer = writers[slot]; + } + + if (writer == null) { + RedisClusterNode partition = partitions.getPartitionBySlot(slot); + if (partition == null) { + clusterEventListener.onUncoveredSlot(slot); + return Futures.failed(new PartitionSelectorException("Cannot determine a partition for slot " + slot + ".", + partitions.clone())); + } + + // Use always host and port for slot-oriented operations. We don't want to get reconnected on a different + // host because the nodeId can be handled by a different host. + RedisURI uri = partition.getUri(); + ConnectionKey key = new ConnectionKey(Intent.WRITE, uri.getHost(), uri.getPort()); + + ConnectionFuture> future = getConnectionAsync(key); + + return future.thenApply(connection -> { + + synchronized (stateLock) { + if (writers[slot] == null) { + writers[slot] = CompletableFuture.completedFuture(connection); + } + } + + return connection; + }).toCompletableFuture(); + } + + return writer; + } + + private CompletableFuture> getReadConnection(int slot) { + + CompletableFuture> readerCandidates[];// avoid races when reconfiguring partitions. 
+ + boolean cached = true; + + synchronized (stateLock) { + readerCandidates = readers[slot]; + } + + if (readerCandidates == null) { + + RedisClusterNode master = partitions.getPartitionBySlot(slot); + if (master == null) { + clusterEventListener.onUncoveredSlot(slot); + return Futures.failed(new PartitionSelectorException(String.format( + "Cannot determine a partition to read for slot %d.", slot), partitions.clone())); + } + + List candidates = getReadCandidates(master); + List selection = readFrom.select(new ReadFrom.Nodes() { + @Override + public List getNodes() { + return candidates; + } + + @Override + public Iterator iterator() { + return candidates.iterator(); + } + }); + + if (selection.isEmpty()) { + clusterEventListener.onUncoveredSlot(slot); + return Futures.failed(new PartitionSelectorException(String.format( + "Cannot determine a partition to read for slot %d with setting %s.", slot, readFrom), partitions + .clone())); + } + + readerCandidates = getReadFromConnections(selection); + cached = false; + } + + CompletableFuture> selectedReaderCandidates[] = readerCandidates; + + if (cached) { + + return CompletableFuture.allOf(readerCandidates).thenCompose( + v -> { + + boolean orderSensitive = isOrderSensitive(selectedReaderCandidates); + + if (!orderSensitive) { + + CompletableFuture> candidate = findRandomActiveConnection( + selectedReaderCandidates, Function.identity()); + + if (candidate != null) { + return candidate; + } + } + + for (CompletableFuture> candidate : selectedReaderCandidates) { + + if (candidate.join().isOpen()) { + return candidate; + } + } + + return selectedReaderCandidates[0]; + }); + } + + CompletableFuture[]> filteredReaderCandidates = new CompletableFuture<>(); + + CompletableFuture.allOf(readerCandidates).thenApply(v -> selectedReaderCandidates) + .whenComplete((candidates, throwable) -> { + + if (throwable == null) { + filteredReaderCandidates.complete(getConnections(candidates)); + return; + } + + StatefulRedisConnection[] connections = getConnections(selectedReaderCandidates); + + if (connections.length == 0) { + filteredReaderCandidates.completeExceptionally(throwable); + return; + } + + filteredReaderCandidates.complete(connections); + }); + + return filteredReaderCandidates + .thenApply(statefulRedisConnections -> { + + boolean orderSensitive = isOrderSensitive(statefulRedisConnections); + + CompletableFuture> toCache[] = new CompletableFuture[statefulRedisConnections.length]; + + for (int i = 0; i < toCache.length; i++) { + toCache[i] = CompletableFuture.completedFuture(statefulRedisConnections[i]); + } + synchronized (stateLock) { + readers[slot] = toCache; + } + + if (!orderSensitive) { + + StatefulRedisConnection candidate = findRandomActiveConnection(selectedReaderCandidates, + CompletableFuture::join); + + if (candidate != null) { + return candidate; + } + } + + for (StatefulRedisConnection candidate : statefulRedisConnections) { + if (candidate.isOpen()) { + return candidate; + } + } + + return statefulRedisConnections[0]; + }); + } + + private boolean isOrderSensitive(Object[] connections) { + return OrderingReadFromAccessor.isOrderSensitive(readFrom) || connections.length == 1; + } + + private static > T findRandomActiveConnection( + CompletableFuture[] selectedReaderCandidates, Function, T> mappingFunction) { + + // Perform up to two attempts for random nodes. 
+ for (int i = 0; i < Math.min(2, selectedReaderCandidates.length); i++) { + + int index = ThreadLocalRandom.current().nextInt(selectedReaderCandidates.length); + CompletableFuture candidateFuture = selectedReaderCandidates[index]; + + if (candidateFuture.isDone() && !candidateFuture.isCompletedExceptionally()) { + + E candidate = candidateFuture.join(); + + if (candidate.isOpen()) { + return mappingFunction.apply(candidateFuture); + } + } + } + return null; + } + + private StatefulRedisConnection[] getConnections( + CompletableFuture>[] selectedReaderCandidates) { + + List> connections = new ArrayList<>(selectedReaderCandidates.length); + + for (CompletableFuture> candidate : selectedReaderCandidates) { + + try { + connections.add(candidate.join()); + } catch (Exception o_O) { + } + } + + StatefulRedisConnection[] result = new StatefulRedisConnection[connections.size()]; + connections.toArray(result); + return result; + } + + private CompletableFuture>[] getReadFromConnections(List selection) { + + // Use always host and port for slot-oriented operations. We don't want to get reconnected on a different + // host because the nodeId can be handled by a different host. + + CompletableFuture>[] readerCandidates = new CompletableFuture[selection.size()]; + + for (int i = 0; i < selection.size(); i++) { + + RedisNodeDescription redisClusterNode = selection.get(i); + + RedisURI uri = redisClusterNode.getUri(); + ConnectionKey key = new ConnectionKey(redisClusterNode.getRole() == RedisInstance.Role.MASTER ? Intent.WRITE + : Intent.READ, uri.getHost(), uri.getPort()); + + readerCandidates[i] = getConnectionAsync(key).toCompletableFuture(); + } + + return readerCandidates; + } + + private List getReadCandidates(RedisClusterNode master) { + + return partitions.stream() // + .filter(partition -> isReadCandidate(master, partition)) // + .collect(Collectors.toList()); + } + + private boolean isReadCandidate(RedisClusterNode master, RedisClusterNode partition) { + return master.getNodeId().equals(partition.getNodeId()) || master.getNodeId().equals(partition.getSlaveOf()); + } + + @Override + public StatefulRedisConnection getConnection(Intent intent, String nodeId) { + + if (debugEnabled) { + logger.debug("getConnection(" + intent + ", " + nodeId + ")"); + } + + return getConnection(new ConnectionKey(intent, nodeId)); + } + + @Override + public CompletableFuture> getConnectionAsync(Intent intent, String nodeId) { + + if (debugEnabled) { + logger.debug("getConnection(" + intent + ", " + nodeId + ")"); + } + + return getConnectionAsync(new ConnectionKey(intent, nodeId)).toCompletableFuture(); + } + + protected ConnectionFuture> getConnectionAsync(ConnectionKey key) { + + ConnectionFuture> connectionFuture = connectionProvider.getConnection(key); + CompletableFuture> result = new CompletableFuture<>(); + + connectionFuture.handle((connection, throwable) -> { + + if (throwable != null) { + + result.completeExceptionally( + RedisConnectionException.create(connectionFuture.getRemoteAddress(), Exceptions.bubble(throwable))); + } else { + result.complete(connection); + } + + return null; + }); + + return ConnectionFuture.from(connectionFuture.getRemoteAddress(), result); + } + + @Override + @SuppressWarnings({ "unchecked", "hiding", "rawtypes" }) + public StatefulRedisConnection getConnection(Intent intent, String host, int port) { + + try { + beforeGetConnection(intent, host, port); + + return getConnection(new ConnectionKey(intent, host, port)); + } catch (RedisException e) { + throw e; + } catch 
(RuntimeException e) { + throw new RedisException(e); + } + } + + private StatefulRedisConnection getConnection(ConnectionKey key) { + + ConnectionFuture> future = getConnectionAsync(key); + + try { + return future.join(); + } catch (CompletionException e) { + throw RedisConnectionException.create(future.getRemoteAddress(), e.getCause()); + } + } + + @Override + public CompletableFuture> getConnectionAsync(Intent intent, String host, int port) { + + try { + beforeGetConnection(intent, host, port); + + return connectionProvider.getConnection(new ConnectionKey(intent, host, port)).toCompletableFuture(); + } catch (RedisException e) { + throw e; + } catch (RuntimeException e) { + throw new RedisException(e); + } + } + + private void beforeGetConnection(Intent intent, String host, int port) { + + if (debugEnabled) { + logger.debug("getConnection(" + intent + ", " + host + ", " + port + ")"); + } + + RedisClusterNode redisClusterNode = partitions.getPartition(host, port); + + if (redisClusterNode == null) { + clusterEventListener.onUnknownNode(); + + if (validateClusterNodeMembership()) { + HostAndPort hostAndPort = HostAndPort.of(host, port); + throw connectionAttemptRejected(hostAndPort.toString()); + } + } + } + + @Override + public void close() { + closeAsync().join(); + } + + @Override + public CompletableFuture closeAsync() { + + resetFastConnectionCache(); + + return connectionProvider.close(); + } + + @Override + public void reset() { + connectionProvider.forEach(StatefulRedisConnection::reset); + } + + /** + * Synchronize on {@code stateLock} to initiate a happens-before relation and clear the thread caches of other threads. + * + * @param partitions the new partitions. + */ + @Override + public void setPartitions(Partitions partitions) { + + boolean reconfigurePartitions = false; + + synchronized (stateLock) { + if (this.partitions != null) { + reconfigurePartitions = true; + } + this.partitions = partitions; + this.connectionFactory.setPartitions(partitions); + } + + if (reconfigurePartitions) { + reconfigurePartitions(); + } + } + + protected Partitions getPartitions() { + return partitions; + } + + private void reconfigurePartitions() { + + resetFastConnectionCache(); + + if (redisClusterClient.expireStaleConnections()) { + closeStaleConnections(); + } + } + + /** + * Close stale connections. + */ + @Override + public void closeStaleConnections() { + + logger.debug("closeStaleConnections() count before expiring: {}", getConnectionCount()); + + connectionProvider.forEach((key, connection) -> { + if (isStale(key)) { + connectionProvider.close(key); + } + }); + + logger.debug("closeStaleConnections() count after expiring: {}", getConnectionCount()); + } + + private boolean isStale(ConnectionKey connectionKey) { + + if (connectionKey.nodeId != null && partitions.getPartitionByNodeId(connectionKey.nodeId) != null) { + return false; + } + + if (connectionKey.host != null && partitions.getPartition(connectionKey.host, connectionKey.port) != null) { + return false; + } + + return true; + } + + /** + * Set auto-flush on all commands. Synchronize on {@code stateLock} to initiate a happens-before relation and clear the + * thread caches of other threads. + * + * @param autoFlush state of autoFlush. 
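Disabling auto-flush lets callers buffer commands and write them out in one batch; the provider propagates the flag to every cached node connection under `stateLock`. A usage sketch; the key names, batch size and timeout are arbitrary:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.TimeUnit;

    import io.lettuce.core.LettuceFutures;
    import io.lettuce.core.RedisFuture;
    import io.lettuce.core.cluster.api.StatefulRedisClusterConnection;
    import io.lettuce.core.cluster.api.async.RedisAdvancedClusterAsyncCommands;

    class ManualFlush {

        void batched(StatefulRedisClusterConnection<String, String> connection) {

            RedisAdvancedClusterAsyncCommands<String, String> async = connection.async();
            connection.setAutoFlushCommands(false);

            List<RedisFuture<?>> futures = new ArrayList<>();
            for (int i = 0; i < 1000; i++) {
                futures.add(async.set("key-" + i, "value-" + i));
            }

            // Nothing has been written so far; flush the buffered commands to all involved nodes.
            connection.flushCommands();

            LettuceFutures.awaitAll(10, TimeUnit.SECONDS, futures.toArray(new RedisFuture[0]));
            connection.setAutoFlushCommands(true);
        }
    }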
+ */ + @Override + public void setAutoFlushCommands(boolean autoFlush) { + + synchronized (stateLock) { + this.autoFlushCommands = autoFlush; + } + + connectionProvider.forEach(connection -> connection.setAutoFlushCommands(autoFlush)); + } + + @Override + public void flushCommands() { + connectionProvider.forEach(StatefulConnection::flushCommands); + } + + @Override + public void setReadFrom(ReadFrom readFrom) { + + synchronized (stateLock) { + this.readFrom = readFrom; + Arrays.fill(readers, null); + } + } + + @Override + public ReadFrom getReadFrom() { + return this.readFrom; + } + + /** + * + * @return number of connections. + */ + long getConnectionCount() { + return connectionProvider.getConnectionCount(); + } + + /** + * Reset the internal connection cache. This is necessary because the {@link Partitions} have no reference to the connection + * cache. + * + * Synchronize on {@code stateLock} to initiate a happens-before relation and clear the thread caches of other threads. + */ + private void resetFastConnectionCache() { + + synchronized (stateLock) { + Arrays.fill(writers, null); + Arrays.fill(readers, null); + } + } + + private static RuntimeException connectionAttemptRejected(String message) { + + return new UnknownPartitionException("Connection to " + message + + " not allowed. This partition is not known in the cluster view."); + } + + private boolean validateClusterNodeMembership() { + return redisClusterClient.getClusterClientOptions() == null + || redisClusterClient.getClusterClientOptions().isValidateClusterNodeMembership(); + } + + /** + * @return a factory {@link Function} + */ + protected ClusterNodeConnectionFactory getConnectionFactory(RedisClusterClient redisClusterClient) { + return new DefaultClusterNodeConnectionFactory<>(redisClusterClient, redisCodec, clusterWriter); + } + + class NodeConnectionPostProcessor implements ClusterNodeConnectionFactory { + private final ClusterNodeConnectionFactory delegate; + + NodeConnectionPostProcessor(ClusterNodeConnectionFactory delegate) { + this.delegate = delegate; + } + + @Override + public void setPartitions(Partitions partitions) { + this.delegate.setPartitions(partitions); + } + + @Override + public ConnectionFuture> apply(ConnectionKey key) { + + if (key.nodeId != null && getPartitions().getPartitionByNodeId(key.nodeId) == null) { + clusterEventListener.onUnknownNode(); + throw connectionAttemptRejected("node id " + key.nodeId); + } + + if (key.host != null && partitions.getPartition(key.host, key.port) == null) { + clusterEventListener.onUnknownNode(); + if (validateClusterNodeMembership()) { + throw connectionAttemptRejected(key.host + ":" + key.port); + } + } + + ConnectionFuture> connection = delegate.apply(key); + + LettuceAssert.notNull(connection, "Connection is null. 
Check ConnectionKey because host and nodeId are null."); + + if (key.intent == Intent.READ) { + + connection = connection.thenCompose(c -> { + + RedisFuture stringRedisFuture = c.async().readOnly(); + return stringRedisFuture.thenApply(s -> c).whenCompleteAsync((s, throwable) -> { + if (throwable != null) { + c.close(); + } + }); + }); + } + + connection = connection.thenApply(c -> { + synchronized (stateLock) { + c.setAutoFlushCommands(autoFlushCommands); + } + return c; + }); + + return connection; + } + } + + static class DefaultClusterNodeConnectionFactory extends AbstractClusterNodeConnectionFactory { + + private final RedisClusterClient redisClusterClient; + private final RedisCodec redisCodec; + private final RedisChannelWriter clusterWriter; + + DefaultClusterNodeConnectionFactory(RedisClusterClient redisClusterClient, RedisCodec redisCodec, + RedisChannelWriter clusterWriter) { + + super(redisClusterClient.getResources()); + this.redisClusterClient = redisClusterClient; + this.redisCodec = redisCodec; + this.clusterWriter = clusterWriter; + } + + @Override + public ConnectionFuture> apply(ConnectionKey key) { + + if (key.nodeId != null) { + // NodeId connections do not provide command recovery due to cluster reconfiguration + return redisClusterClient.connectToNodeAsync(redisCodec, key.nodeId, null, getSocketAddressSupplier(key)); + } + + // Host and port connections do provide command recovery due to cluster reconfiguration + return redisClusterClient.connectToNodeAsync(redisCodec, key.host + ":" + key.port, clusterWriter, + getSocketAddressSupplier(key)); + } + } +} diff --git a/src/main/java/io/lettuce/core/cluster/PubSubClusterEndpoint.java b/src/main/java/io/lettuce/core/cluster/PubSubClusterEndpoint.java new file mode 100644 index 0000000000..0b4f881b44 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/PubSubClusterEndpoint.java @@ -0,0 +1,202 @@ +/* + * Copyright 2016-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
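The factory above distinguishes node-id addressed from host-and-port addressed connections; both are reachable from the cluster connection when a command must target one specific node. A sketch, assuming the connection already holds a populated topology view; the class name is illustrative:

    import io.lettuce.core.api.sync.RedisCommands;
    import io.lettuce.core.cluster.api.StatefulRedisClusterConnection;
    import io.lettuce.core.cluster.models.partitions.RedisClusterNode;

    class DirectNodeAccess {

        void pingSingleNode(StatefulRedisClusterConnection<String, String> connection) {

            RedisClusterNode anyNode = connection.getPartitions().iterator().next();

            // Node-id addressed connection: no command recovery across topology changes (see factory above).
            RedisCommands<String, String> byId = connection.getConnection(anyNode.getNodeId()).sync();
            System.out.println(byId.ping());

            // Host-and-port addressed connection: commands can be recovered after cluster reconfiguration.
            RedisCommands<String, String> byHost = connection
                    .getConnection(anyNode.getUri().getHost(), anyNode.getUri().getPort()).sync();
            System.out.println(byHost.ping());
        }
    }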
+ */ +package io.lettuce.core.cluster; + +import java.util.List; +import java.util.concurrent.CopyOnWriteArrayList; + +import io.lettuce.core.ClientOptions; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; +import io.lettuce.core.cluster.pubsub.RedisClusterPubSubAdapter; +import io.lettuce.core.cluster.pubsub.RedisClusterPubSubListener; +import io.lettuce.core.pubsub.PubSubEndpoint; +import io.lettuce.core.pubsub.PubSubOutput; +import io.lettuce.core.resource.ClientResources; + +/** + * @author Mark Paluch + */ +public class PubSubClusterEndpoint extends PubSubEndpoint { + + private final List> clusterListeners = new CopyOnWriteArrayList<>(); + private final NotifyingMessageListener multicast = new NotifyingMessageListener(); + private final UpstreamMessageListener upstream = new UpstreamMessageListener(); + + private volatile boolean nodeMessagePropagation = false; + private volatile RedisClusterNode clusterNode; + + /** + * Initialize a new instance that handles commands from the supplied queue. + * + * @param clientOptions client options for this connection, must not be {@literal null} + * @param clientResources client resources for this connection, must not be {@literal null}. + */ + public PubSubClusterEndpoint(ClientOptions clientOptions, ClientResources clientResources) { + super(clientOptions, clientResources); + } + + /** + * Add a new {@link RedisClusterPubSubListener listener}. + * + * @param listener the listener, must not be {@literal null}. + */ + public void addListener(RedisClusterPubSubListener listener) { + clusterListeners.add(listener); + } + + public RedisClusterPubSubListener getUpstreamListener() { + return upstream; + } + + /** + * Remove an existing {@link RedisClusterPubSubListener listener}. + * + * @param listener the listener, must not be {@literal null}. 
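`PubSubClusterEndpoint` notifies both the plain listeners and the node-aware cluster listeners; node message propagation controls whether messages received on per-node subscriptions are forwarded as well. A usage sketch; the channel name is a placeholder:

    import io.lettuce.core.cluster.RedisClusterClient;
    import io.lettuce.core.cluster.models.partitions.RedisClusterNode;
    import io.lettuce.core.cluster.pubsub.RedisClusterPubSubAdapter;
    import io.lettuce.core.cluster.pubsub.StatefulRedisClusterPubSubConnection;

    class ClusterPubSub {

        void listen(RedisClusterClient clusterClient) {

            StatefulRedisClusterPubSubConnection<String, String> pubSub = clusterClient.connectPubSub();

            // Also receive messages that arrive on node-specific subscriptions.
            pubSub.setNodeMessagePropagation(true);

            pubSub.addListener(new RedisClusterPubSubAdapter<String, String>() {

                @Override
                public void message(RedisClusterNode node, String channel, String message) {
                    System.out.printf("%s received %s on %s%n", node.getNodeId(), message, channel);
                }
            });

            pubSub.sync().subscribe("events");
        }
    }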
+ */ + public void removeListener(RedisClusterPubSubListener listener) { + clusterListeners.remove(listener); + } + + public void setNodeMessagePropagation(boolean nodeMessagePropagation) { + this.nodeMessagePropagation = nodeMessagePropagation; + } + + void setClusterNode(RedisClusterNode clusterNode) { + this.clusterNode = clusterNode; + } + + protected void notifyListeners(PubSubOutput output) { + // update listeners + switch (output.type()) { + case message: + multicast.message(clusterNode, output.channel(), output.get()); + break; + case pmessage: + multicast.message(clusterNode, output.pattern(), output.channel(), output.get()); + break; + case psubscribe: + multicast.psubscribed(clusterNode, output.pattern(), output.count()); + break; + case punsubscribe: + multicast.punsubscribed(clusterNode, output.pattern(), output.count()); + break; + case subscribe: + multicast.subscribed(clusterNode, output.channel(), output.count()); + break; + case unsubscribe: + multicast.unsubscribed(clusterNode, output.channel(), output.count()); + break; + default: + throw new UnsupportedOperationException("Operation " + output.type() + " not supported"); + } + } + + private class UpstreamMessageListener extends NotifyingMessageListener { + + @Override + public void message(RedisClusterNode node, K channel, V message) { + + if (nodeMessagePropagation) { + super.message(node, channel, message); + } + } + + @Override + public void message(RedisClusterNode node, K pattern, K channel, V message) { + + if (nodeMessagePropagation) { + super.message(node, pattern, channel, message); + } + } + + @Override + public void subscribed(RedisClusterNode node, K channel, long count) { + + if (nodeMessagePropagation) { + super.subscribed(node, channel, count); + } + } + + @Override + public void psubscribed(RedisClusterNode node, K pattern, long count) { + + if (nodeMessagePropagation) { + super.psubscribed(node, pattern, count); + } + } + + @Override + public void unsubscribed(RedisClusterNode node, K channel, long count) { + + if (nodeMessagePropagation) { + super.unsubscribed(node, channel, count); + } + } + + @Override + public void punsubscribed(RedisClusterNode node, K pattern, long count) { + + if (nodeMessagePropagation) { + super.punsubscribed(node, pattern, count); + } + } + } + + private class NotifyingMessageListener extends RedisClusterPubSubAdapter { + + @Override + public void message(RedisClusterNode node, K channel, V message) { + + getListeners().forEach(listener -> listener.message(channel, message)); + clusterListeners.forEach(listener -> listener.message(node, channel, message)); + } + + @Override + public void message(RedisClusterNode node, K pattern, K channel, V message) { + + getListeners().forEach(listener -> listener.message(pattern, channel, message)); + clusterListeners.forEach(listener -> listener.message(node, pattern, channel, message)); + } + + @Override + public void subscribed(RedisClusterNode node, K channel, long count) { + + getListeners().forEach(listener -> listener.subscribed(channel, count)); + clusterListeners.forEach(listener -> listener.subscribed(node, channel, count)); + } + + @Override + public void psubscribed(RedisClusterNode node, K pattern, long count) { + + getListeners().forEach(listener -> listener.psubscribed(pattern, count)); + clusterListeners.forEach(listener -> listener.psubscribed(node, pattern, count)); + } + + @Override + public void unsubscribed(RedisClusterNode node, K channel, long count) { + + getListeners().forEach(listener -> 
listener.unsubscribed(channel, count)); + clusterListeners.forEach(listener -> listener.unsubscribed(node, channel, count)); + } + + @Override + public void punsubscribed(RedisClusterNode node, K pattern, long count) { + + getListeners().forEach(listener -> listener.punsubscribed(pattern, count)); + clusterListeners.forEach(listener -> listener.punsubscribed(node, pattern, count)); + } + } +} diff --git a/src/main/java/io/lettuce/core/cluster/ReactiveExecutionsImpl.java b/src/main/java/io/lettuce/core/cluster/ReactiveExecutionsImpl.java new file mode 100644 index 0000000000..dc8afdc44c --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/ReactiveExecutionsImpl.java @@ -0,0 +1,53 @@ +/* + * Copyright 2016-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import java.util.Collection; +import java.util.Map; +import java.util.concurrent.CompletionStage; + +import org.reactivestreams.Publisher; + +import reactor.core.publisher.Flux; +import reactor.core.publisher.Mono; +import io.lettuce.core.cluster.api.reactive.ReactiveExecutions; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; + +/** + * Default implementation of {@link ReactiveExecutions}. + * + * @author Mark Paluch + * @since 4.4 + */ +class ReactiveExecutionsImpl implements ReactiveExecutions { + + private Map>> executions; + + public ReactiveExecutionsImpl(Map>> executions) { + this.executions = executions; + } + + @Override + @SuppressWarnings("unchecked") + public Flux flux() { + return Flux.fromIterable(executions.values()).flatMap(Mono::fromCompletionStage).flatMap(f -> f); + } + + @Override + public Collection nodes() { + return executions.keySet(); + } +} diff --git a/src/main/java/io/lettuce/core/cluster/ReadOnlyCommands.java b/src/main/java/io/lettuce/core/cluster/ReadOnlyCommands.java new file mode 100644 index 0000000000..579860c9df --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/ReadOnlyCommands.java @@ -0,0 +1,70 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import java.util.Collections; +import java.util.EnumSet; +import java.util.Set; + +import io.lettuce.core.protocol.CommandType; +import io.lettuce.core.protocol.ProtocolKeyword; + +/** + * Contains all command names that are read-only commands. 
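The read-only command set collected here is the kind of information a routing layer can use to decide whether a command may be served by a replica. A minimal sketch of such a decision, assuming a ReadFrom setting that permits replica reads (the helper method is hypothetical):

    // Hypothetical helper: map a command keyword to a routing intent (sketch only).
    static ClusterConnectionProvider.Intent intentFor(ProtocolKeyword commandType) {
        return ReadOnlyCommands.isReadOnlyCommand(commandType)
                ? ClusterConnectionProvider.Intent.READ
                : ClusterConnectionProvider.Intent.WRITE;
    }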
+ * + * @author Mark Paluch + */ +class ReadOnlyCommands { + + private static final Set READ_ONLY_COMMANDS = EnumSet.noneOf(CommandType.class); + + static { + for (CommandName commandNames : CommandName.values()) { + READ_ONLY_COMMANDS.add(CommandType.valueOf(commandNames.name())); + } + } + + /** + * @param protocolKeyword must not be {@literal null}. + * @return {@literal true} if {@link ProtocolKeyword} is a read-only command. + */ + public static boolean isReadOnlyCommand(ProtocolKeyword protocolKeyword) { + return READ_ONLY_COMMANDS.contains(protocolKeyword); + } + + /** + * @return an unmodifiable {@link Set} of {@link CommandType read-only} commands. + */ + public static Set getReadOnlyCommands() { + return Collections.unmodifiableSet(READ_ONLY_COMMANDS); + } + + enum CommandName { + ASKING, BITCOUNT, BITPOS, CLIENT, COMMAND, DUMP, ECHO, EVAL, EVALSHA, EXISTS, // + GEODIST, GEOPOS, GEORADIUS, GEORADIUS_RO, GEORADIUSBYMEMBER, GEORADIUSBYMEMBER_RO, GEOHASH, GET, GETBIT, // + GETRANGE, HEXISTS, HGET, HGETALL, HKEYS, HLEN, HMGET, HSCAN, HSTRLEN, // + HVALS, INFO, KEYS, LINDEX, LLEN, LRANGE, MGET, PFCOUNT, PTTL, // + RANDOMKEY, READWRITE, SCAN, SCARD, SCRIPT, // + SDIFF, SINTER, SISMEMBER, SMEMBERS, SRANDMEMBER, SSCAN, STRLEN, // + SUNION, TIME, TTL, TYPE, // + XINFO, XLEN, XPENDING, XRANGE, XREVRANGE, XREAD, // + ZCARD, ZCOUNT, ZLEXCOUNT, ZRANGE, // + ZRANGEBYLEX, ZRANGEBYSCORE, ZRANK, ZREVRANGE, ZREVRANGEBYLEX, ZREVRANGEBYSCORE, ZREVRANK, ZSCAN, ZSCORE, // + + // Pub/Sub commands are no key-space commands so they are safe to execute on replica nodes + PUBLISH, PUBSUB, PSUBSCRIBE, PUNSUBSCRIBE, SUBSCRIBE, UNSUBSCRIBE + } +} diff --git a/src/main/java/io/lettuce/core/cluster/ReconnectEventListener.java b/src/main/java/io/lettuce/core/cluster/ReconnectEventListener.java new file mode 100644 index 0000000000..6d28a45195 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/ReconnectEventListener.java @@ -0,0 +1,36 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.cluster; + +import io.lettuce.core.ConnectionEvents.Reconnect; +import io.lettuce.core.protocol.ReconnectionListener; + +/** + * @author Mark Paluch + */ +class ReconnectEventListener implements ReconnectionListener { + + private final ClusterEventListener clusterEventListener; + + public ReconnectEventListener(ClusterEventListener clusterEventListener) { + this.clusterEventListener = clusterEventListener; + } + + @Override + public void onReconnectAttempt(Reconnect reconnect) { + clusterEventListener.onReconnectAttempt(reconnect.getAttempt()); + } +} diff --git a/src/main/java/io/lettuce/core/cluster/RedisAdvancedClusterAsyncCommandsImpl.java b/src/main/java/io/lettuce/core/cluster/RedisAdvancedClusterAsyncCommandsImpl.java new file mode 100644 index 0000000000..7f96928d86 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/RedisAdvancedClusterAsyncCommandsImpl.java @@ -0,0 +1,681 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import static io.lettuce.core.cluster.ClusterScanSupport.asyncClusterKeyScanCursorMapper; +import static io.lettuce.core.cluster.ClusterScanSupport.asyncClusterStreamScanCursorMapper; +import static io.lettuce.core.cluster.NodeSelectionInvocationHandler.ExecutionModel.ASYNC; +import static io.lettuce.core.cluster.models.partitions.RedisClusterNode.NodeFlag.MASTER; + +import java.lang.reflect.Proxy; +import java.util.*; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.CompletionStage; +import java.util.concurrent.ThreadLocalRandom; +import java.util.function.BiFunction; +import java.util.function.Function; +import java.util.function.Predicate; + +import io.lettuce.core.*; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.async.RedisAsyncCommands; +import io.lettuce.core.api.async.RedisKeyAsyncCommands; +import io.lettuce.core.api.async.RedisScriptingAsyncCommands; +import io.lettuce.core.api.async.RedisServerAsyncCommands; +import io.lettuce.core.cluster.ClusterScanSupport.ScanCursorMapper; +import io.lettuce.core.cluster.api.NodeSelectionSupport; +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; +import io.lettuce.core.cluster.api.async.AsyncNodeSelection; +import io.lettuce.core.cluster.api.async.NodeSelectionAsyncCommands; +import io.lettuce.core.cluster.api.async.RedisAdvancedClusterAsyncCommands; +import io.lettuce.core.cluster.api.async.RedisClusterAsyncCommands; +import io.lettuce.core.cluster.models.partitions.Partitions; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.output.IntegerOutput; +import io.lettuce.core.output.KeyStreamingChannel; +import io.lettuce.core.output.KeyValueStreamingChannel; +import io.lettuce.core.protocol.AsyncCommand; +import io.lettuce.core.protocol.Command; +import io.lettuce.core.protocol.CommandType; + +/** + * An 
advanced asynchronous and thread-safe API for a Redis Cluster connection. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 3.3 + */ +@SuppressWarnings("unchecked") +public class RedisAdvancedClusterAsyncCommandsImpl extends AbstractRedisAsyncCommands + implements RedisAdvancedClusterAsyncCommands { + + private final RedisCodec codec; + + /** + * Initialize a new connection. + * + * @param connection the stateful connection + * @param codec Codec used to encode/decode keys and values. + * @deprecated since 5.1, use {@link #RedisAdvancedClusterAsyncCommandsImpl(StatefulRedisClusterConnection, RedisCodec)}. + */ + @Deprecated + public RedisAdvancedClusterAsyncCommandsImpl(StatefulRedisClusterConnectionImpl connection, RedisCodec codec) { + super(connection, codec); + this.codec = codec; + } + + /** + * Initialize a new connection. + * + * @param connection the stateful connection + * @param codec Codec used to encode/decode keys and values. + */ + public RedisAdvancedClusterAsyncCommandsImpl(StatefulRedisClusterConnection connection, RedisCodec codec) { + super(connection, codec); + this.codec = codec; + } + + @Override + public RedisFuture clientSetname(K name) { + + Map> executions = new HashMap<>(); + + CompletableFuture ok = CompletableFuture.completedFuture("OK"); + + executions.put("Default", super.clientSetname(name).toCompletableFuture()); + + for (RedisClusterNode redisClusterNode : getStatefulConnection().getPartitions()) { + + RedisURI uri = redisClusterNode.getUri(); + + CompletableFuture> byNodeId = getConnectionAsync(redisClusterNode.getNodeId()); + + executions.put("NodeId: " + redisClusterNode.getNodeId(), byNodeId.thenCompose(c -> { + + if (c.isOpen()) { + return c.clientSetname(name); + } + return ok; + })); + + CompletableFuture> byHost = getConnectionAsync(uri.getHost(), uri.getPort()); + + executions.put("HostAndPort: " + redisClusterNode.getNodeId(), byHost.thenCompose(c -> { + + if (c.isOpen()) { + return c.clientSetname(name); + } + return ok; + })); + } + + return MultiNodeExecution.firstOfAsync(executions); + } + + @Override + public RedisFuture clusterCountKeysInSlot(int slot) { + + RedisClusterAsyncCommands connectionBySlot = findConnectionBySlot(slot); + + if (connectionBySlot != null) { + return connectionBySlot.clusterCountKeysInSlot(slot); + } + + return super.clusterCountKeysInSlot(slot); + } + + @Override + public RedisFuture> clusterGetKeysInSlot(int slot, int count) { + + RedisClusterAsyncCommands connectionBySlot = findConnectionBySlot(slot); + + if (connectionBySlot != null) { + return connectionBySlot.clusterGetKeysInSlot(slot, count); + } + + return super.clusterGetKeysInSlot(slot, count); + } + + @Override + public RedisFuture dbsize() { + return MultiNodeExecution.aggregateAsync(executeOnMasters(RedisServerAsyncCommands::dbsize)); + } + + @Override + public RedisFuture del(K... keys) { + return del(Arrays.asList(keys)); + } + + @Override + public RedisFuture del(Iterable keys) { + + Map> partitioned = SlotHash.partition(codec, keys); + + if (partitioned.size() < 2) { + return super.del(keys); + } + + Map> executions = new HashMap<>(); + + for (Map.Entry> entry : partitioned.entrySet()) { + RedisFuture del = super.del(entry.getValue()); + executions.put(entry.getKey(), del); + } + + return MultiNodeExecution.aggregateAsync(executions); + } + + @Override + public RedisFuture exists(K... 
keys) { + return exists(Arrays.asList(keys)); + } + + public RedisFuture exists(Iterable keys) { + + Map> partitioned = SlotHash.partition(codec, keys); + + if (partitioned.size() < 2) { + return super.exists(keys); + } + + Map> executions = new HashMap<>(); + + for (Map.Entry> entry : partitioned.entrySet()) { + RedisFuture exists = super.exists(entry.getValue()); + executions.put(entry.getKey(), exists); + } + + return MultiNodeExecution.aggregateAsync(executions); + } + + @Override + public RedisFuture flushall() { + return MultiNodeExecution.firstOfAsync(executeOnMasters(RedisServerAsyncCommands::flushall)); + } + + @Override + public RedisFuture flushdb() { + return MultiNodeExecution.firstOfAsync(executeOnMasters(RedisServerAsyncCommands::flushdb)); + } + + @Override + public RedisFuture> georadius(K key, double longitude, double latitude, double distance, GeoArgs.Unit unit) { + + if (hasRedisState() && getRedisState().hasCommand(CommandType.GEORADIUS_RO)) { + return super.georadius_ro(key, longitude, latitude, distance, unit); + } + + return super.georadius(key, longitude, latitude, distance, unit); + } + + @Override + public RedisFuture>> georadius(K key, double longitude, double latitude, double distance, + GeoArgs.Unit unit, GeoArgs geoArgs) { + + if (hasRedisState() && getRedisState().hasCommand(CommandType.GEORADIUS_RO)) { + return super.georadius_ro(key, longitude, latitude, distance, unit, geoArgs); + } + + return super.georadius(key, longitude, latitude, distance, unit, geoArgs); + } + + @Override + public RedisFuture> georadiusbymember(K key, V member, double distance, GeoArgs.Unit unit) { + + if (hasRedisState() && getRedisState().hasCommand(CommandType.GEORADIUSBYMEMBER_RO)) { + return super.georadiusbymember_ro(key, member, distance, unit); + } + + return super.georadiusbymember(key, member, distance, unit); + } + + @Override + public RedisFuture>> georadiusbymember(K key, V member, double distance, GeoArgs.Unit unit, + GeoArgs geoArgs) { + + if (hasRedisState() && getRedisState().hasCommand(CommandType.GEORADIUSBYMEMBER_RO)) { + return super.georadiusbymember_ro(key, member, distance, unit, geoArgs); + } + + return super.georadiusbymember(key, member, distance, unit, geoArgs); + } + + @Override + public RedisFuture> keys(K pattern) { + + Map>> executions = executeOnMasters(commands -> commands.keys(pattern)); + + return new PipelinedRedisFuture<>(executions, objectPipelinedRedisFuture -> { + List result = new ArrayList<>(); + for (CompletableFuture> future : executions.values()) { + result.addAll(MultiNodeExecution.execute(future::get)); + } + return result; + }); + } + + @Override + public RedisFuture keys(KeyStreamingChannel channel, K pattern) { + + Map> executions = executeOnMasters(commands -> commands.keys(channel, pattern)); + return MultiNodeExecution.aggregateAsync(executions); + } + + @Override + public RedisFuture>> mget(K... 
keys) { + return mget(Arrays.asList(keys)); + } + + @Override + public RedisFuture>> mget(Iterable keys) { + Map> partitioned = SlotHash.partition(codec, keys); + + if (partitioned.size() < 2) { + return super.mget(keys); + } + + Map slots = SlotHash.getSlots(partitioned); + Map>>> executions = new HashMap<>(); + + for (Map.Entry> entry : partitioned.entrySet()) { + RedisFuture>> mget = super.mget(entry.getValue()); + executions.put(entry.getKey(), mget); + } + + // restore order of key + return new PipelinedRedisFuture<>(executions, objectPipelinedRedisFuture -> { + List> result = new ArrayList<>(); + for (K opKey : keys) { + int slot = slots.get(opKey); + + int position = partitioned.get(slot).indexOf(opKey); + RedisFuture>> listRedisFuture = executions.get(slot); + result.add(MultiNodeExecution.execute(() -> listRedisFuture.get().get(position))); + } + + return result; + }); + } + + @Override + public RedisFuture mget(KeyValueStreamingChannel channel, K... keys) { + return mget(channel, Arrays.asList(keys)); + } + + @Override + public RedisFuture mget(KeyValueStreamingChannel channel, Iterable keys) { + Map> partitioned = SlotHash.partition(codec, keys); + + if (partitioned.size() < 2) { + return super.mget(channel, keys); + } + + Map> executions = new HashMap<>(); + + for (Map.Entry> entry : partitioned.entrySet()) { + RedisFuture del = super.mget(channel, entry.getValue()); + executions.put(entry.getKey(), del); + } + + return MultiNodeExecution.aggregateAsync(executions); + } + + @Override + public RedisFuture mset(Map map) { + + Map> partitioned = SlotHash.partition(codec, map.keySet()); + + if (partitioned.size() < 2) { + return super.mset(map); + } + + Map> executions = new HashMap<>(); + + for (Map.Entry> entry : partitioned.entrySet()) { + + Map op = new HashMap<>(); + entry.getValue().forEach(k -> op.put(k, map.get(k))); + + RedisFuture mset = super.mset(op); + executions.put(entry.getKey(), mset); + } + + return MultiNodeExecution.firstOfAsync(executions); + } + + @Override + public RedisFuture msetnx(Map map) { + + Map> partitioned = SlotHash.partition(codec, map.keySet()); + + if (partitioned.size() < 2) { + return super.msetnx(map); + } + + Map> executions = new HashMap<>(); + + for (Map.Entry> entry : partitioned.entrySet()) { + + Map op = new HashMap<>(); + entry.getValue().forEach(k -> op.put(k, map.get(k))); + + RedisFuture msetnx = super.msetnx(op); + executions.put(entry.getKey(), msetnx); + } + + return new PipelinedRedisFuture<>(executions, objectPipelinedRedisFuture -> { + + for (RedisFuture listRedisFuture : executions.values()) { + Boolean b = MultiNodeExecution.execute(() -> listRedisFuture.get()); + if (b == null || !b) { + return false; + } + } + + return !executions.isEmpty(); + }); + } + + @Override + public RedisFuture randomkey() { + + Partitions partitions = getStatefulConnection().getPartitions(); + int index = ThreadLocalRandom.current().nextInt(partitions.size()); + RedisClusterNode partition = partitions.getPartition(index); + + CompletableFuture future = getConnectionAsync(partition.getUri().getHost(), partition.getUri().getPort()) + .thenCompose(RedisKeyAsyncCommands::randomkey); + + return new PipelinedRedisFuture<>(future); + } + + @Override + public RedisFuture scriptFlush() { + + Map> executions = executeOnNodes(RedisScriptingAsyncCommands::scriptFlush, + redisClusterNode -> true); + return MultiNodeExecution.firstOfAsync(executions); + } + + @Override + public RedisFuture scriptKill() { + + Map> executions = 
executeOnNodes(RedisScriptingAsyncCommands::scriptFlush, + redisClusterNode -> true); + return MultiNodeExecution.alwaysOkOfAsync(executions); + } + + @Override + public RedisFuture scriptLoad(byte[] script) { + + Map> executions = executeOnNodes(cmd -> cmd.scriptLoad(script), + redisClusterNode -> true); + return MultiNodeExecution.lastOfAsync(executions); + } + + @Override + public void shutdown(boolean save) { + + executeOnNodes(commands -> { + commands.shutdown(save); + + Command command = new Command<>(CommandType.SHUTDOWN, new IntegerOutput<>(codec), null); + AsyncCommand async = new AsyncCommand(command); + async.complete(); + return async; + }, redisClusterNode -> true); + } + + @Override + public RedisFuture touch(K... keys) { + return touch(Arrays.asList(keys)); + } + + public RedisFuture touch(Iterable keys) { + Map> partitioned = SlotHash.partition(codec, keys); + + if (partitioned.size() < 2) { + return super.touch(keys); + } + + Map> executions = new HashMap<>(); + + for (Map.Entry> entry : partitioned.entrySet()) { + RedisFuture touch = super.touch(entry.getValue()); + executions.put(entry.getKey(), touch); + } + + return MultiNodeExecution.aggregateAsync(executions); + } + + @Override + public RedisFuture unlink(K... keys) { + return unlink(Arrays.asList(keys)); + } + + @Override + public RedisFuture unlink(Iterable keys) { + + Map> partitioned = SlotHash.partition(codec, keys); + + if (partitioned.size() < 2) { + return super.unlink(keys); + } + + Map> executions = new HashMap<>(); + + for (Map.Entry> entry : partitioned.entrySet()) { + RedisFuture unlink = super.unlink(entry.getValue()); + executions.put(entry.getKey(), unlink); + } + + return MultiNodeExecution.aggregateAsync(executions); + } + + @Override + public RedisClusterAsyncCommands getConnection(String nodeId) { + return getStatefulConnection().getConnection(nodeId).async(); + } + + @Override + public RedisClusterAsyncCommands getConnection(String host, int port) { + return getStatefulConnection().getConnection(host, port).async(); + } + + private CompletableFuture> getConnectionAsync(String nodeId) { + return getConnectionProvider(). getConnectionAsync(ClusterConnectionProvider.Intent.WRITE, nodeId) + .thenApply(StatefulRedisConnection::async); + } + + private CompletableFuture> getConnectionAsync(String host, int port) { + return getConnectionProvider(). 
getConnectionAsync(ClusterConnectionProvider.Intent.WRITE, host, port) + .thenApply(StatefulRedisConnection::async); + } + + @Override + public StatefulRedisClusterConnection getStatefulConnection() { + return (StatefulRedisClusterConnection) super.getConnection(); + } + + @Override + public AsyncNodeSelection nodes(Predicate predicate) { + return nodes(predicate, false); + } + + @Override + public AsyncNodeSelection readonly(Predicate predicate) { + return nodes(predicate, ClusterConnectionProvider.Intent.READ, false); + } + + @Override + public AsyncNodeSelection nodes(Predicate predicate, boolean dynamic) { + return nodes(predicate, ClusterConnectionProvider.Intent.WRITE, dynamic); + } + + @SuppressWarnings("unchecked") + protected AsyncNodeSelection nodes(Predicate predicate, ClusterConnectionProvider.Intent intent, + boolean dynamic) { + + NodeSelectionSupport, ?> selection; + + StatefulRedisClusterConnectionImpl impl = (StatefulRedisClusterConnectionImpl) getConnection(); + if (dynamic) { + selection = new DynamicNodeSelection, Object, K, V>( + impl.getClusterDistributionChannelWriter(), predicate, intent, StatefulRedisConnection::async); + } else { + selection = new StaticNodeSelection, Object, K, V>( + impl.getClusterDistributionChannelWriter(), predicate, intent, StatefulRedisConnection::async); + } + + NodeSelectionInvocationHandler h = new NodeSelectionInvocationHandler((AbstractNodeSelection) selection, + RedisClusterAsyncCommands.class, ASYNC); + return (AsyncNodeSelection) Proxy.newProxyInstance(NodeSelectionSupport.class.getClassLoader(), + new Class[] { NodeSelectionAsyncCommands.class, AsyncNodeSelection.class }, h); + } + + @Override + public RedisFuture> scan() { + return clusterScan(ScanCursor.INITIAL, (connection, cursor) -> connection.scan(), asyncClusterKeyScanCursorMapper()); + } + + @Override + public RedisFuture> scan(ScanArgs scanArgs) { + return clusterScan(ScanCursor.INITIAL, (connection, cursor) -> connection.scan(scanArgs), + asyncClusterKeyScanCursorMapper()); + } + + @Override + public RedisFuture> scan(ScanCursor scanCursor, ScanArgs scanArgs) { + return clusterScan(scanCursor, (connection, cursor) -> connection.scan(cursor, scanArgs), + asyncClusterKeyScanCursorMapper()); + } + + @Override + public RedisFuture> scan(ScanCursor scanCursor) { + return clusterScan(scanCursor, RedisKeyAsyncCommands::scan, asyncClusterKeyScanCursorMapper()); + } + + @Override + public RedisFuture scan(KeyStreamingChannel channel) { + return clusterScan(ScanCursor.INITIAL, (connection, cursor) -> connection.scan(channel), + asyncClusterStreamScanCursorMapper()); + } + + @Override + public RedisFuture scan(KeyStreamingChannel channel, ScanArgs scanArgs) { + return clusterScan(ScanCursor.INITIAL, (connection, cursor) -> connection.scan(channel, scanArgs), + asyncClusterStreamScanCursorMapper()); + } + + @Override + public RedisFuture scan(KeyStreamingChannel channel, ScanCursor scanCursor, ScanArgs scanArgs) { + return clusterScan(scanCursor, (connection, cursor) -> connection.scan(channel, cursor, scanArgs), + asyncClusterStreamScanCursorMapper()); + } + + @Override + public RedisFuture scan(KeyStreamingChannel channel, ScanCursor scanCursor) { + return clusterScan(scanCursor, (connection, cursor) -> connection.scan(channel, cursor), + asyncClusterStreamScanCursorMapper()); + } + + private RedisFuture clusterScan(ScanCursor cursor, + BiFunction, ScanCursor, RedisFuture> scanFunction, + ScanCursorMapper> resultMapper) { + + return clusterScan(getStatefulConnection(), cursor, 
scanFunction, resultMapper); + } + + /** + * Run a command on all available masters, + * + * @param function function producing the command + * @param result type + * @return map of a key (counter) and commands. + */ + protected Map> executeOnMasters( + Function, RedisFuture> function) { + return executeOnNodes(function, redisClusterNode -> redisClusterNode.is(MASTER)); + } + + /** + * Run a command on all available nodes that match {@code filter}. + * + * @param function function producing the command + * @param filter filter function for the node selection + * @param result type + * @return map of a key (counter) and commands. + */ + protected Map> executeOnNodes( + Function, RedisFuture> function, Function filter) { + Map> executions = new HashMap<>(); + + for (RedisClusterNode redisClusterNode : getStatefulConnection().getPartitions()) { + + if (!filter.apply(redisClusterNode)) { + continue; + } + + RedisURI uri = redisClusterNode.getUri(); + CompletableFuture> connection = getConnectionAsync(uri.getHost(), uri.getPort()); + + executions.put(redisClusterNode.getNodeId(), connection.thenCompose(function::apply)); + + } + return executions; + } + + private RedisClusterAsyncCommands findConnectionBySlot(int slot) { + RedisClusterNode node = getStatefulConnection().getPartitions().getPartitionBySlot(slot); + if (node != null) { + return getConnection(node.getUri().getHost(), node.getUri().getPort()); + } + + return null; + } + + private CommandSet getRedisState() { + return ((StatefulRedisClusterConnectionImpl) super.getConnection()).getCommandSet(); + } + + private boolean hasRedisState() { + return super.getConnection() instanceof StatefulRedisClusterConnectionImpl; + } + + private AsyncClusterConnectionProvider getConnectionProvider() { + + ClusterDistributionChannelWriter writer = (ClusterDistributionChannelWriter) getStatefulConnection().getChannelWriter(); + return (AsyncClusterConnectionProvider) writer.getClusterConnectionProvider(); + } + + /** + * Perform a SCAN in the cluster. + * + */ + static RedisFuture clusterScan(StatefulRedisClusterConnection connection, + ScanCursor cursor, BiFunction, ScanCursor, RedisFuture> scanFunction, + ScanCursorMapper> mapper) { + + List nodeIds = ClusterScanSupport.getNodeIds(connection, cursor); + String currentNodeId = ClusterScanSupport.getCurrentNodeId(cursor, nodeIds); + ScanCursor continuationCursor = ClusterScanSupport.getContinuationCursor(cursor); + + RedisFuture scanCursor = scanFunction.apply(connection.getConnection(currentNodeId).async(), continuationCursor); + return mapper.map(nodeIds, currentNodeId, scanCursor); + } +} diff --git a/src/main/java/io/lettuce/core/cluster/RedisAdvancedClusterReactiveCommandsImpl.java b/src/main/java/io/lettuce/core/cluster/RedisAdvancedClusterReactiveCommandsImpl.java new file mode 100644 index 0000000000..bcadbd85c1 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/RedisAdvancedClusterReactiveCommandsImpl.java @@ -0,0 +1,611 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import static io.lettuce.core.cluster.ClusterScanSupport.reactiveClusterKeyScanCursorMapper; +import static io.lettuce.core.cluster.ClusterScanSupport.reactiveClusterStreamScanCursorMapper; +import static io.lettuce.core.cluster.models.partitions.RedisClusterNode.NodeFlag.MASTER; +import static io.lettuce.core.protocol.CommandType.GEORADIUSBYMEMBER_RO; +import static io.lettuce.core.protocol.CommandType.GEORADIUS_RO; + +import java.util.*; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.ThreadLocalRandom; +import java.util.function.BiFunction; +import java.util.function.Function; +import java.util.function.Predicate; +import java.util.stream.Collectors; + +import org.reactivestreams.Publisher; + +import reactor.core.publisher.Flux; +import reactor.core.publisher.Mono; +import io.lettuce.core.*; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.reactive.RedisKeyReactiveCommands; +import io.lettuce.core.api.reactive.RedisScriptingReactiveCommands; +import io.lettuce.core.api.reactive.RedisServerReactiveCommands; +import io.lettuce.core.cluster.ClusterConnectionProvider.Intent; +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; +import io.lettuce.core.cluster.api.reactive.RedisAdvancedClusterReactiveCommands; +import io.lettuce.core.cluster.api.reactive.RedisClusterReactiveCommands; +import io.lettuce.core.cluster.models.partitions.Partitions; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.internal.LettuceLists; +import io.lettuce.core.output.KeyStreamingChannel; +import io.lettuce.core.output.KeyValueStreamingChannel; + +/** + * An advanced reactive and thread-safe API to a Redis Cluster connection. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.0 + */ +public class RedisAdvancedClusterReactiveCommandsImpl extends AbstractRedisReactiveCommands + implements RedisAdvancedClusterReactiveCommands { + + private static final Predicate ALL_NODES = node -> true; + + private final RedisCodec codec; + + /** + * Initialize a new connection. + * + * @param connection the stateful connection. + * @param codec Codec used to encode/decode keys and values. + * @deprecated since 5.2, use {@link #RedisAdvancedClusterReactiveCommandsImpl(StatefulRedisClusterConnection, RedisCodec)}. + */ + @Deprecated + public RedisAdvancedClusterReactiveCommandsImpl(StatefulRedisClusterConnectionImpl connection, + RedisCodec codec) { + super(connection, codec); + this.codec = codec; + } + + /** + * Initialize a new connection. + * + * @param connection the stateful connection. + * @param codec Codec used to encode/decode keys and values. 
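As a usage sketch of the reactive multi-key handling implemented by this class: keys that hash to different slots are partitioned per slot and the merged result keeps the requested key order. The connection variable is assumed to come from RedisClusterClient#connect:

    // MGET across slots; partitioned per slot, merged back in request order (sketch).
    RedisAdvancedClusterReactiveCommands<String, String> reactive = connection.reactive();
    reactive.mget("user:1", "user:2", "user:3")
            .filter(Value::hasValue)
            .map(Value::getValue)
            .collectList()
            .subscribe(values -> System.out.println("loaded " + values.size() + " values"));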
+ */ + public RedisAdvancedClusterReactiveCommandsImpl(StatefulRedisClusterConnection connection, RedisCodec codec) { + super(connection, codec); + this.codec = codec; + } + + @Override + public Mono clientSetname(K name) { + + List> publishers = new ArrayList<>(); + + publishers.add(super.clientSetname(name)); + + for (RedisClusterNode redisClusterNode : getStatefulConnection().getPartitions()) { + + Mono> byNodeId = getConnectionReactive(redisClusterNode.getNodeId()); + + publishers.add(byNodeId.flatMap(conn -> { + + if (conn.isOpen()) { + return conn.clientSetname(name); + } + return Mono.empty(); + })); + + Mono> byHost = getConnectionReactive(redisClusterNode.getUri().getHost(), + redisClusterNode.getUri().getPort()); + + publishers.add(byHost.flatMap(conn -> { + + if (conn.isOpen()) { + return conn.clientSetname(name); + } + return Mono.empty(); + })); + } + + return Flux.merge(publishers).last(); + } + + @Override + public Mono clusterCountKeysInSlot(int slot) { + + Mono> connectionBySlot = findConnectionBySlotReactive(slot); + return connectionBySlot.flatMap(cmd -> cmd.clusterCountKeysInSlot(slot)); + } + + @Override + public Flux clusterGetKeysInSlot(int slot, int count) { + + Mono> connectionBySlot = findConnectionBySlotReactive(slot); + return connectionBySlot.flatMapMany(conn -> conn.clusterGetKeysInSlot(slot, count)); + } + + @Override + public Mono dbsize() { + + Map> publishers = executeOnMasters(RedisServerReactiveCommands::dbsize); + return Flux.merge(publishers.values()).reduce((accu, next) -> accu + next); + } + + @Override + public Mono del(K... keys) { + return del(Arrays.asList(keys)); + } + + @Override + public Mono del(Iterable keys) { + + Map> partitioned = SlotHash.partition(codec, keys); + + if (partitioned.size() < 2) { + return super.del(keys); + } + + List> publishers = new ArrayList<>(); + + for (Map.Entry> entry : partitioned.entrySet()) { + publishers.add(super.del(entry.getValue())); + } + + return Flux.merge(publishers).reduce((accu, next) -> accu + next); + } + + @Override + public Mono exists(K... 
keys) { + return exists(Arrays.asList(keys)); + } + + public Mono exists(Iterable keys) { + + List keyList = LettuceLists.newList(keys); + + Map> partitioned = SlotHash.partition(codec, keyList); + + if (partitioned.size() < 2) { + return super.exists(keyList); + } + + List> publishers = new ArrayList<>(); + + for (Map.Entry> entry : partitioned.entrySet()) { + publishers.add(super.exists(entry.getValue())); + } + + return Flux.merge(publishers).reduce((accu, next) -> accu + next); + } + + @Override + public Mono flushall() { + + Map> publishers = executeOnMasters(RedisServerReactiveCommands::flushall); + return Flux.merge(publishers.values()).last(); + } + + @Override + public Mono flushdb() { + + Map> publishers = executeOnMasters(RedisServerReactiveCommands::flushdb); + return Flux.merge(publishers.values()).last(); + } + + @Override + public Flux georadius(K key, double longitude, double latitude, double distance, GeoArgs.Unit unit) { + + if (hasRedisState() && getRedisState().hasCommand(GEORADIUS_RO)) { + return super.georadius_ro(key, longitude, latitude, distance, unit); + } + + return super.georadius(key, longitude, latitude, distance, unit); + } + + @Override + public Flux> georadius(K key, double longitude, double latitude, double distance, GeoArgs.Unit unit, + GeoArgs geoArgs) { + + if (hasRedisState() && getRedisState().hasCommand(GEORADIUS_RO)) { + return super.georadius_ro(key, longitude, latitude, distance, unit, geoArgs); + } + + return super.georadius(key, longitude, latitude, distance, unit, geoArgs); + } + + @Override + public Flux georadiusbymember(K key, V member, double distance, GeoArgs.Unit unit) { + + if (hasRedisState() && getRedisState().hasCommand(GEORADIUSBYMEMBER_RO)) { + return super.georadiusbymember_ro(key, member, distance, unit); + } + + return super.georadiusbymember(key, member, distance, unit); + } + + @Override + public Flux> georadiusbymember(K key, V member, double distance, GeoArgs.Unit unit, GeoArgs geoArgs) { + + if (hasRedisState() && getRedisState().hasCommand(GEORADIUSBYMEMBER_RO)) { + return super.georadiusbymember_ro(key, member, distance, unit, geoArgs); + } + + return super.georadiusbymember(key, member, distance, unit, geoArgs); + } + + @Override + public Flux keys(K pattern) { + + Map> publishers = executeOnMasters(commands -> commands.keys(pattern)); + return Flux.merge(publishers.values()); + } + + @Override + public Mono keys(KeyStreamingChannel channel, K pattern) { + + Map> publishers = executeOnMasters(commands -> commands.keys(channel, pattern)); + return Flux.merge(publishers.values()).reduce((accu, next) -> accu + next); + } + + @Override + public Flux> mget(K... 
keys) { + return mget(Arrays.asList(keys)); + } + + @SuppressWarnings({ "unchecked", "rawtypes" }) + public Flux> mget(Iterable keys) { + + List keyList = LettuceLists.newList(keys); + Map> partitioned = SlotHash.partition(codec, keyList); + + if (partitioned.size() < 2) { + return super.mget(keyList); + } + + List>> publishers = new ArrayList<>(); + + for (Map.Entry> entry : partitioned.entrySet()) { + publishers.add(super.mget(entry.getValue())); + } + + Flux> fluxes = Flux.concat(publishers); + + Mono>> map = fluxes.collectList().map(vs -> { + + KeyValue[] values = new KeyValue[vs.size()]; + int offset = 0; + for (Map.Entry> entry : partitioned.entrySet()) { + + for (int i = 0; i < keyList.size(); i++) { + + int index = entry.getValue().indexOf(keyList.get(i)); + if (index == -1) { + continue; + } + + values[i] = vs.get(offset + index); + } + + offset += entry.getValue().size(); + } + + return Arrays.asList(values); + }); + + return map.flatMapIterable(keyValues -> keyValues); + } + + @Override + public Mono mget(KeyValueStreamingChannel channel, K... keys) { + return mget(channel, Arrays.asList(keys)); + } + + @Override + public Mono mget(KeyValueStreamingChannel channel, Iterable keys) { + + List keyList = LettuceLists.newList(keys); + + Map> partitioned = SlotHash.partition(codec, keyList); + + if (partitioned.size() < 2) { + return super.mget(channel, keyList); + } + + List> publishers = new ArrayList<>(); + + for (Map.Entry> entry : partitioned.entrySet()) { + publishers.add(super.mget(channel, entry.getValue())); + } + + return Flux.merge(publishers).reduce((accu, next) -> accu + next); + } + + @Override + public Mono msetnx(Map map) { + + return pipeliningWithMap(map, kvMap -> RedisAdvancedClusterReactiveCommandsImpl.super.msetnx(kvMap).flux(), + booleanFlux -> booleanFlux).reduce((accu, next) -> accu && next); + } + + @Override + public Mono mset(Map map) { + return pipeliningWithMap(map, kvMap -> RedisAdvancedClusterReactiveCommandsImpl.super.mset(kvMap).flux(), + booleanFlux -> booleanFlux).last(); + } + + @Override + public Mono randomkey() { + + Partitions partitions = getStatefulConnection().getPartitions(); + int index = ThreadLocalRandom.current().nextInt(partitions.size()); + + Mono> connection = getConnectionReactive(partitions.getPartition(index).getNodeId()); + return connection.flatMap(RedisKeyReactiveCommands::randomkey); + } + + @Override + public Mono scriptFlush() { + Map> publishers = executeOnNodes(RedisScriptingReactiveCommands::scriptFlush, ALL_NODES); + return Flux.merge(publishers.values()).last(); + } + + @Override + public Mono scriptKill() { + Map> publishers = executeOnNodes(RedisScriptingReactiveCommands::scriptFlush, ALL_NODES); + return Flux.merge(publishers.values()).onErrorReturn("OK").last(); + } + + @Override + public Mono scriptLoad(byte[] script) { + Map> publishers = executeOnNodes((commands) -> commands.scriptLoad(script), ALL_NODES); + return Flux.merge(publishers.values()).last(); + } + + @Override + public Mono shutdown(boolean save) { + Map> publishers = executeOnNodes(commands -> commands.shutdown(save), ALL_NODES); + return Flux.merge(publishers.values()).then(); + } + + @Override + public Mono touch(K... 
keys) { + return touch(Arrays.asList(keys)); + } + + public Mono touch(Iterable keys) { + + List keyList = LettuceLists.newList(keys); + Map> partitioned = SlotHash.partition(codec, keyList); + + if (partitioned.size() < 2) { + return super.touch(keyList); + } + + List> publishers = new ArrayList<>(); + + for (Map.Entry> entry : partitioned.entrySet()) { + publishers.add(super.touch(entry.getValue())); + } + + return Flux.merge(publishers).reduce((accu, next) -> accu + next); + } + + @Override + public Mono unlink(K... keys) { + return unlink(Arrays.asList(keys)); + } + + @Override + public Mono unlink(Iterable keys) { + + Map> partitioned = SlotHash.partition(codec, keys); + + if (partitioned.size() < 2) { + return super.unlink(keys); + } + + List> publishers = new ArrayList<>(); + + for (Map.Entry> entry : partitioned.entrySet()) { + publishers.add(super.unlink(entry.getValue())); + } + + return Flux.merge(publishers).reduce((accu, next) -> accu + next); + } + + @Override + public RedisClusterReactiveCommands getConnection(String nodeId) { + return getStatefulConnection().getConnection(nodeId).reactive(); + } + + private Mono> getConnectionReactive(String nodeId) { + return getMono(getConnectionProvider(). getConnectionAsync(Intent.WRITE, nodeId)) + .map(StatefulRedisConnection::reactive); + } + + @Override + public RedisClusterReactiveCommands getConnection(String host, int port) { + return getStatefulConnection().getConnection(host, port).reactive(); + } + + private Mono> getConnectionReactive(String host, int port) { + return getMono(getConnectionProvider(). getConnectionAsync(Intent.WRITE, host, port)) + .map(StatefulRedisConnection::reactive); + } + + @Override + public StatefulRedisClusterConnection getStatefulConnection() { + return (StatefulRedisClusterConnection) super.getConnection(); + } + + @Override + public Mono> scan() { + return clusterScan(ScanCursor.INITIAL, (connection, cursor) -> connection.scan(), reactiveClusterKeyScanCursorMapper()); + } + + @Override + public Mono> scan(ScanArgs scanArgs) { + return clusterScan(ScanCursor.INITIAL, (connection, cursor) -> connection.scan(scanArgs), + reactiveClusterKeyScanCursorMapper()); + } + + @Override + public Mono> scan(ScanCursor scanCursor, ScanArgs scanArgs) { + return clusterScan(scanCursor, (connection, cursor) -> connection.scan(cursor, scanArgs), + reactiveClusterKeyScanCursorMapper()); + } + + @Override + public Mono> scan(ScanCursor scanCursor) { + return clusterScan(scanCursor, RedisKeyReactiveCommands::scan, reactiveClusterKeyScanCursorMapper()); + } + + @Override + public Mono scan(KeyStreamingChannel channel) { + return clusterScan(ScanCursor.INITIAL, (connection, cursor) -> connection.scan(channel), + reactiveClusterStreamScanCursorMapper()); + } + + @Override + public Mono scan(KeyStreamingChannel channel, ScanArgs scanArgs) { + return clusterScan(ScanCursor.INITIAL, (connection, cursor) -> connection.scan(channel, scanArgs), + reactiveClusterStreamScanCursorMapper()); + } + + @Override + public Mono scan(KeyStreamingChannel channel, ScanCursor scanCursor, ScanArgs scanArgs) { + return clusterScan(scanCursor, (connection, cursor) -> connection.scan(channel, cursor, scanArgs), + reactiveClusterStreamScanCursorMapper()); + } + + @Override + public Mono scan(KeyStreamingChannel channel, ScanCursor scanCursor) { + return clusterScan(scanCursor, (connection, cursor) -> connection.scan(channel, cursor), + reactiveClusterStreamScanCursorMapper()); + } + + @SuppressWarnings("unchecked") + private Mono 
clusterScan(ScanCursor cursor, + BiFunction, ScanCursor, Mono> scanFunction, + ClusterScanSupport.ScanCursorMapper> resultMapper) { + + return clusterScan(getStatefulConnection(), getConnectionProvider(), cursor, scanFunction, + (ClusterScanSupport.ScanCursorMapper) resultMapper); + } + + private Flux pipeliningWithMap(Map map, Function, Flux> function, + Function, Flux> resultFunction) { + + Map> partitioned = SlotHash.partition(codec, map.keySet()); + + if (partitioned.size() < 2) { + return function.apply(map); + } + + List> publishers = partitioned.values().stream().map(ks -> { + Map op = new HashMap<>(); + ks.forEach(k -> op.put(k, map.get(k))); + return function.apply(op); + }).collect(Collectors.toList()); + + return resultFunction.apply(Flux.merge(publishers)); + } + + /** + * Run a command on all available masters, + * + * @param function function producing the command + * @param result type + * @return map of a key (counter) and commands. + */ + protected Map> executeOnMasters( + Function, ? extends Publisher> function) { + return executeOnNodes(function, redisClusterNode -> redisClusterNode.is(MASTER)); + } + + /** + * Run a command on all available nodes that match {@code filter}. + * + * @param function function producing the command + * @param filter filter function for the node selection + * @param result type + * @return map of a key (counter) and commands. + */ + protected Map> executeOnNodes( + Function, ? extends Publisher> function, Predicate filter) { + + Map> executions = new HashMap<>(); + + for (RedisClusterNode redisClusterNode : getStatefulConnection().getPartitions()) { + + if (!filter.test(redisClusterNode)) { + continue; + } + + RedisURI uri = redisClusterNode.getUri(); + Mono> connection = getConnectionReactive(uri.getHost(), uri.getPort()); + + executions.put(redisClusterNode.getNodeId(), connection.flatMapMany(function::apply)); + } + return executions; + } + + private Mono> findConnectionBySlotReactive(int slot) { + + RedisClusterNode node = getStatefulConnection().getPartitions().getPartitionBySlot(slot); + if (node != null) { + return getConnectionReactive(node.getUri().getHost(), node.getUri().getPort()); + } + + return Mono.error(new RedisException("No partition for slot " + slot)); + } + + private CommandSet getRedisState() { + return ((StatefulRedisClusterConnectionImpl) super.getConnection()).getCommandSet(); + } + + private boolean hasRedisState() { + return super.getConnection() instanceof StatefulRedisClusterConnectionImpl; + } + + private AsyncClusterConnectionProvider getConnectionProvider() { + + ClusterDistributionChannelWriter writer = (ClusterDistributionChannelWriter) getStatefulConnection().getChannelWriter(); + return (AsyncClusterConnectionProvider) writer.getClusterConnectionProvider(); + } + + /** + * Perform a SCAN in the cluster. + * + */ + static Mono clusterScan(StatefulRedisClusterConnection connection, + AsyncClusterConnectionProvider connectionProvider, ScanCursor cursor, + BiFunction, ScanCursor, Mono> scanFunction, + ClusterScanSupport.ScanCursorMapper> mapper) { + + List nodeIds = ClusterScanSupport.getNodeIds(connection, cursor); + String currentNodeId = ClusterScanSupport.getCurrentNodeId(cursor, nodeIds); + ScanCursor continuationCursor = ClusterScanSupport.getContinuationCursor(cursor); + + Mono scanCursor = getMono(connectionProvider. 
getConnectionAsync(Intent.WRITE, currentNodeId)) + .flatMap(conn -> scanFunction.apply(conn.reactive(), continuationCursor)); + return mapper.map(nodeIds, currentNodeId, scanCursor); + } + + private static Mono getMono(CompletableFuture future) { + return Mono.fromCompletionStage(future); + } +} diff --git a/src/main/java/io/lettuce/core/cluster/RedisClusterClient.java b/src/main/java/io/lettuce/core/cluster/RedisClusterClient.java new file mode 100644 index 0000000000..340d5ab28b --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/RedisClusterClient.java @@ -0,0 +1,1186 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import java.io.Closeable; +import java.net.SocketAddress; +import java.net.URI; +import java.util.*; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.CompletionStage; +import java.util.concurrent.ExecutionException; +import java.util.concurrent.TimeUnit; +import java.util.function.Consumer; +import java.util.function.Function; +import java.util.function.Predicate; +import java.util.function.Supplier; + +import reactor.core.publisher.Mono; +import io.lettuce.core.*; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.cluster.api.NodeSelectionSupport; +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; +import io.lettuce.core.cluster.api.async.RedisAdvancedClusterAsyncCommands; +import io.lettuce.core.cluster.api.sync.RedisAdvancedClusterCommands; +import io.lettuce.core.cluster.event.ClusterTopologyChangedEvent; +import io.lettuce.core.cluster.models.partitions.Partitions; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; +import io.lettuce.core.cluster.pubsub.StatefulRedisClusterPubSubConnection; +import io.lettuce.core.cluster.topology.ClusterTopologyRefresh; +import io.lettuce.core.cluster.topology.NodeConnectionFactory; +import io.lettuce.core.cluster.topology.TopologyComparators; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.internal.Exceptions; +import io.lettuce.core.internal.Futures; +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.internal.LettuceLists; +import io.lettuce.core.models.command.CommandDetailParser; +import io.lettuce.core.output.KeyValueStreamingChannel; +import io.lettuce.core.protocol.CommandExpiryWriter; +import io.lettuce.core.protocol.CommandHandler; +import io.lettuce.core.protocol.DefaultEndpoint; +import io.lettuce.core.pubsub.PubSubCommandHandler; +import io.lettuce.core.pubsub.PubSubEndpoint; +import io.lettuce.core.pubsub.StatefulRedisPubSubConnection; +import io.lettuce.core.pubsub.StatefulRedisPubSubConnectionImpl; +import io.lettuce.core.resource.ClientResources; +import io.netty.util.internal.logging.InternalLogger; +import io.netty.util.internal.logging.InternalLoggerFactory; + +/** + * A scalable and thread-safe Redis cluster client supporting 
synchronous, asynchronous and + * reactive execution models. Multiple threads may share one connection. The cluster client handles command routing based on the + * first key of the command and maintains a view of the cluster that is available when calling the {@link #getPartitions()} + * method. + * + *

+ * Connections to the cluster members are opened on the first access to the cluster node and managed by the + * {@link StatefulRedisClusterConnection}. You should not use transactional commands on cluster connections since {@code MULTI}, + * {@code EXEC} and {@code DISCARD} have no key and cannot be assigned to a particular node. A cluster connection uses a default + * connection to run non-keyed commands. + *

+ *

+ * The Redis cluster client provides a {@link RedisAdvancedClusterCommands sync}, {@link RedisAdvancedClusterAsyncCommands + * async} and {@link io.lettuce.core.cluster.api.reactive.RedisAdvancedClusterReactiveCommands reactive} API. + *

+ * + *

+ * Connections to particular nodes can be obtained by {@link StatefulRedisClusterConnection#getConnection(String)} providing the + * node id or {@link StatefulRedisClusterConnection#getConnection(String, int)} by host and port. + *

+ * + *

+ * Multiple keys operations have to operate on a key + * that hashes to the same slot. Following commands do not need to follow that rule since they are pipelined according to its + * hash value to multiple nodes in parallel on the sync, async and, reactive API: + *

+ *
    + *
  • {@link RedisAdvancedClusterAsyncCommands#del(Object[]) DEL}
  • + *
  • {@link RedisAdvancedClusterAsyncCommands#unlink(Object[]) UNLINK}
  • + *
  • {@link RedisAdvancedClusterAsyncCommands#mget(Object[]) MGET}
  • + *
  • {@link RedisAdvancedClusterAsyncCommands#mget(KeyValueStreamingChannel, Object[])} ) MGET with streaming}
  • + *
  • {@link RedisAdvancedClusterAsyncCommands#mset(Map) MSET}
  • + *
  • {@link RedisAdvancedClusterAsyncCommands#msetnx(Map) MSETNX}
  • + *
+ * + *

+ * The following commands on the cluster sync, async and reactive API are implemented with a Cluster-flavor:
+ * <ul>
+ * <li>{@link RedisAdvancedClusterAsyncCommands#clientSetname(Object)} Executes {@code CLIENT SETNAME} on all connections and
+ * initializes new connections with the {@code clientName}.</li>
+ * <li>{@link RedisAdvancedClusterAsyncCommands#flushall()} Executes {@code FLUSHALL} on all master nodes.</li>
+ * <li>{@link RedisAdvancedClusterAsyncCommands#flushdb()} Executes {@code FLUSHDB} on all master nodes.</li>
+ * <li>{@link RedisAdvancedClusterAsyncCommands#keys(Object)} Executes {@code KEYS} on all master nodes.</li>
+ * <li>{@link RedisAdvancedClusterAsyncCommands#randomkey()} Returns a random key from a random master node.</li>
+ * <li>{@link RedisAdvancedClusterAsyncCommands#scriptFlush()} Executes {@code SCRIPT FLUSH} on all nodes.</li>
+ * <li>{@link RedisAdvancedClusterAsyncCommands#scriptKill()} Executes {@code SCRIPT KILL} on all nodes.</li>
+ * <li>{@link RedisAdvancedClusterAsyncCommands#shutdown(boolean)} Executes {@code SHUTDOWN} on all nodes.</li>
+ * <li>{@link RedisAdvancedClusterAsyncCommands#scan()} Executes a {@code SCAN} on all nodes according to {@link ReadFrom}. The
+ * resulting cursor must be reused across the {@code SCAN} calls to scan iteratively across the whole cluster.</li>
+ * </ul>
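+ * <p>
+ * A minimal sketch of iterating the whole cluster keyspace by reusing the cursor (illustrative, assuming a synchronous command
+ * API named {@code sync}):
+ *
+ * <pre>
+ * KeyScanCursor<String> cursor = sync.scan();
+ * cursor.getKeys().forEach(System.out::println);
+ * while (!cursor.isFinished()) {
+ *     cursor = sync.scan(cursor);
+ *     cursor.getKeys().forEach(System.out::println);
+ * }
+ * </pre>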

+ * Cluster commands can be issued to multiple hosts in parallel by using the {@link NodeSelectionSupport} API. A set of nodes is
+ * selected using a {@link java.util.function.Predicate} and commands can be issued to the node selection:
+ *
+ * <pre>
+ * AsyncExecutions<String> ping = commands.masters().commands().ping();
+ * Collection<RedisClusterNode> nodes = ping.nodes();
+ * nodes.stream().forEach(redisClusterNode -> ping.get(redisClusterNode));
+ * </pre>

+ * + * {@link RedisClusterClient} is an expensive resource. Reuse this instance or share external {@link ClientResources} as much as + * possible. + * + * @author Mark Paluch + * @since 3.0 + * @see RedisURI + * @see StatefulRedisClusterConnection + * @see RedisCodec + * @see ClusterClientOptions + * @see ClientResources + */ +public class RedisClusterClient extends AbstractRedisClient { + + private static final InternalLogger logger = InternalLoggerFactory.getInstance(RedisClusterClient.class); + + private final ClusterTopologyRefresh refresh = ClusterTopologyRefresh.create(new NodeConnectionFactoryImpl(), + getResources()); + private final ClusterTopologyRefreshScheduler topologyRefreshScheduler = new ClusterTopologyRefreshScheduler( + this::getClusterClientOptions, this::getPartitions, this::refreshPartitionsAsync, getResources()); + private final Iterable initialUris; + + private volatile Partitions partitions; + + /** + * Non-private constructor to make {@link RedisClusterClient} proxyable. + */ + protected RedisClusterClient() { + + super(null); + + initialUris = Collections.emptyList(); + } + + /** + * Initialize the client with a list of cluster URI's. All uris are tried in sequence for connecting initially to the + * cluster. If any uri is successful for connection, the others are not tried anymore. The initial uri is needed to discover + * the cluster structure for distributing the requests. + * + * @param clientResources the client resources. If {@literal null}, the client will create a new dedicated instance of + * client resources and keep track of them. + * @param redisURIs iterable of initial {@link RedisURI cluster URIs}. Must not be {@literal null} and not empty. + */ + protected RedisClusterClient(ClientResources clientResources, Iterable redisURIs) { + + super(clientResources); + + assertNotEmpty(redisURIs); + assertSameOptions(redisURIs); + + this.initialUris = Collections.unmodifiableList(LettuceLists.newList(redisURIs)); + + setDefaultTimeout(getFirstUri().getTimeout()); + setOptions(ClusterClientOptions.create()); + } + + private static void assertSameOptions(Iterable redisURIs) { + + Boolean ssl = null; + Boolean startTls = null; + Boolean verifyPeer = null; + + for (RedisURI redisURI : redisURIs) { + + if (ssl == null) { + ssl = redisURI.isSsl(); + } + if (startTls == null) { + startTls = redisURI.isStartTls(); + } + if (verifyPeer == null) { + verifyPeer = redisURI.isVerifyPeer(); + } + + if (ssl.booleanValue() != redisURI.isSsl()) { + throw new IllegalArgumentException( + "RedisURI " + redisURI + " SSL is not consistent with the other seed URI SSL settings"); + } + + if (startTls.booleanValue() != redisURI.isStartTls()) { + throw new IllegalArgumentException( + "RedisURI " + redisURI + " StartTLS is not consistent with the other seed URI StartTLS settings"); + } + + if (verifyPeer.booleanValue() != redisURI.isVerifyPeer()) { + throw new IllegalArgumentException( + "RedisURI " + redisURI + " VerifyPeer is not consistent with the other seed URI VerifyPeer settings"); + } + } + } + + /** + * Create a new client that connects to the supplied {@link RedisURI uri} with default {@link ClientResources}. You can + * connect to different Redis servers but you must supply a {@link RedisURI} on connecting. 
+ * + * @param redisURI the Redis URI, must not be {@literal null} + * @return a new instance of {@link RedisClusterClient} + */ + public static RedisClusterClient create(RedisURI redisURI) { + assertNotNull(redisURI); + return create(Collections.singleton(redisURI)); + } + + /** + * Create a new client that connects to the supplied {@link RedisURI uri} with default {@link ClientResources}. You can + * connect to different Redis servers but you must supply a {@link RedisURI} on connecting. + * + * @param redisURIs one or more Redis URI, must not be {@literal null} and not empty. + * @return a new instance of {@link RedisClusterClient} + */ + public static RedisClusterClient create(Iterable redisURIs) { + assertNotEmpty(redisURIs); + assertSameOptions(redisURIs); + return new RedisClusterClient(null, redisURIs); + } + + /** + * Create a new client that connects to the supplied uri with default {@link ClientResources}. You can connect to different + * Redis servers but you must supply a {@link RedisURI} on connecting. + * + * @param uri the Redis URI, must not be empty or {@literal null}. + * @return a new instance of {@link RedisClusterClient} + */ + public static RedisClusterClient create(String uri) { + LettuceAssert.notEmpty(uri, "URI must not be empty"); + return create(RedisClusterURIUtil.toRedisURIs(URI.create(uri))); + } + + /** + * Create a new client that connects to the supplied {@link RedisURI uri} with shared {@link ClientResources}. You need to + * shut down the {@link ClientResources} upon shutting down your application.You can connect to different Redis servers but + * you must supply a {@link RedisURI} on connecting. + * + * @param clientResources the client resources, must not be {@literal null} + * @param redisURI the Redis URI, must not be {@literal null} + * @return a new instance of {@link RedisClusterClient} + */ + public static RedisClusterClient create(ClientResources clientResources, RedisURI redisURI) { + assertNotNull(clientResources); + assertNotNull(redisURI); + return create(clientResources, Collections.singleton(redisURI)); + } + + /** + * Create a new client that connects to the supplied uri with shared {@link ClientResources}.You need to shut down the + * {@link ClientResources} upon shutting down your application. You can connect to different Redis servers but you must + * supply a {@link RedisURI} on connecting. + * + * @param clientResources the client resources, must not be {@literal null} + * @param uri the Redis URI, must not be empty or {@literal null}. + * @return a new instance of {@link RedisClusterClient} + */ + public static RedisClusterClient create(ClientResources clientResources, String uri) { + assertNotNull(clientResources); + LettuceAssert.notEmpty(uri, "URI must not be empty"); + return create(clientResources, RedisClusterURIUtil.toRedisURIs(URI.create(uri))); + } + + /** + * Create a new client that connects to the supplied {@link RedisURI uri} with shared {@link ClientResources}. You need to + * shut down the {@link ClientResources} upon shutting down your application.You can connect to different Redis servers but + * you must supply a {@link RedisURI} on connecting. 
+ * + * @param clientResources the client resources, must not be {@literal null} + * @param redisURIs one or more Redis URI, must not be {@literal null} and not empty + * @return a new instance of {@link RedisClusterClient} + */ + public static RedisClusterClient create(ClientResources clientResources, Iterable redisURIs) { + assertNotNull(clientResources); + assertNotEmpty(redisURIs); + assertSameOptions(redisURIs); + return new RedisClusterClient(clientResources, redisURIs); + } + + /** + * Set the {@link ClusterClientOptions} for the client. + * + * @param clientOptions client options for the client and connections that are created after setting the options + */ + public void setOptions(ClusterClientOptions clientOptions) { + super.setOptions(clientOptions); + } + + /** + * Retrieve the cluster view. Partitions are shared amongst all connections opened by this client instance. + * + * @return the partitions. + */ + public Partitions getPartitions() { + if (partitions == null) { + get(initializePartitions(), e -> new RedisException("Cannot obtain initial Redis Cluster topology", e)); + } + return partitions; + } + + /** + * Returns the seed {@link RedisURI} for the topology refreshing. This method is called before each topology refresh to + * provide an {@link Iterable} of {@link RedisURI} that is used to perform the next topology refresh. + *

+ * Subclasses of {@link RedisClusterClient} may override that method. + * + * @return {@link Iterable} of {@link RedisURI} for the next topology refresh. + */ + protected Iterable getTopologyRefreshSource() { + + boolean initialSeedNodes = !useDynamicRefreshSources(); + + Iterable seed; + if (initialSeedNodes || partitions == null || partitions.isEmpty()) { + seed = this.initialUris; + } else { + List uris = new ArrayList<>(); + for (RedisClusterNode partition : TopologyComparators.sortByUri(partitions)) { + uris.add(partition.getUri()); + } + seed = uris; + } + return seed; + } + + /** + * Connect to a Redis Cluster and treat keys and values as UTF-8 strings. + *

+ * What to expect from this connection:
+ * <ul>
+ * <li>A default connection is created to the node with the lowest latency</li>
+ * <li>Keyless commands are sent to the default connection</li>
+ * <li>Single-key keyspace commands are routed to the appropriate node</li>
+ * <li>Multi-key keyspace commands require the same slot-hash and are routed to the appropriate node</li>
+ * <li>Pub/sub commands are sent to the node that handles the slot derived from the pub/sub channel</li>
+ * </ul>
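+ * <p>
+ * A minimal usage sketch (illustrative, assuming a {@link RedisClusterClient} named {@code clusterClient}):
+ *
+ * <pre>
+ * StatefulRedisClusterConnection<String, String> connection = clusterClient.connect();
+ * connection.sync().set("key", "value");
+ * connection.close();
+ * </pre>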
+ * + * @return A new stateful Redis Cluster connection + */ + public StatefulRedisClusterConnection connect() { + return connect(newStringStringCodec()); + } + + /** + * Connect to a Redis Cluster. Use the supplied {@link RedisCodec codec} to encode/decode keys and values. + *

+ * What to expect from this connection:
+ * <ul>
+ * <li>A default connection is created to the node with the lowest latency</li>
+ * <li>Keyless commands are sent to the default connection</li>
+ * <li>Single-key keyspace commands are routed to the appropriate node</li>
+ * <li>Multi-key keyspace commands require the same slot-hash and are routed to the appropriate node</li>
+ * <li>Pub/sub commands are sent to the node that handles the slot derived from the pub/sub channel</li>
+ * </ul>
+ * + * @param codec Use this codec to encode/decode keys and values, must not be {@literal null} + * @param Key type + * @param Value type + * @return A new stateful Redis Cluster connection + */ + public StatefulRedisClusterConnection connect(RedisCodec codec) { + + assertInitialPartitions(); + + return getConnection(connectClusterAsync(codec)); + } + + /** + * Connect asynchronously to a Redis Cluster. Use the supplied {@link RedisCodec codec} to encode/decode keys and values. + * Connecting asynchronously requires an initialized topology. Call {@link #getPartitions()} first, otherwise the connect + * will fail with a{@link IllegalStateException}. + *

+ * What to expect from this connection:
+ * <ul>
+ * <li>A default connection is created to the node with the lowest latency</li>
+ * <li>Keyless commands are sent to the default connection</li>
+ * <li>Single-key keyspace commands are routed to the appropriate node</li>
+ * <li>Multi-key keyspace commands require the same slot-hash and are routed to the appropriate node</li>
+ * <li>Pub/sub commands are sent to the node that handles the slot derived from the pub/sub channel</li>
+ * </ul>
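+ * <p>
+ * A minimal usage sketch (illustrative, assuming a {@link RedisClusterClient} named {@code clusterClient} with an initialized
+ * topology):
+ *
+ * <pre>
+ * CompletableFuture<StatefulRedisClusterConnection<String, String>> future = clusterClient.connectAsync(StringCodec.UTF8);
+ * future.thenAccept(connection -> connection.async().set("key", "value"));
+ * </pre>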
+ * + * @param codec Use this codec to encode/decode keys and values, must not be {@literal null} + * @param Key type + * @param Value type + * @return a {@link CompletableFuture} that is notified with the connection progress. + * @since 5.1 + */ + public CompletableFuture> connectAsync(RedisCodec codec) { + return transformAsyncConnectionException(connectClusterAsync(codec), getInitialUris()); + } + + /** + * Connect to a Redis Cluster using pub/sub connections and treat keys and values as UTF-8 strings. + *

+ * What to expect from this connection:
+ * <ul>
+ * <li>A default connection is created to the node with the least number of clients</li>
+ * <li>Pub/sub commands are sent to the node with the least number of clients</li>
+ * <li>Keyless commands are sent to the default connection</li>
+ * <li>Single-key keyspace commands are routed to the appropriate node</li>
+ * <li>Multi-key keyspace commands require the same slot-hash and are routed to the appropriate node</li>
+ * </ul>
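+ * <p>
+ * A minimal usage sketch (illustrative, assuming a {@link RedisClusterClient} named {@code clusterClient}):
+ *
+ * <pre>
+ * StatefulRedisClusterPubSubConnection<String, String> pubSub = clusterClient.connectPubSub();
+ * pubSub.async().subscribe("channel");
+ * </pre>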
+ * + * @return A new stateful Redis Cluster connection + */ + public StatefulRedisClusterPubSubConnection connectPubSub() { + return connectPubSub(newStringStringCodec()); + } + + /** + * Connect to a Redis Cluster using pub/sub connections. Use the supplied {@link RedisCodec codec} to encode/decode keys and + * values. + *

+ * What to expect from this connection:
+ * <ul>
+ * <li>A default connection is created to the node with the least number of clients</li>
+ * <li>Pub/sub commands are sent to the node with the least number of clients</li>
+ * <li>Keyless commands are sent to the default connection</li>
+ * <li>Single-key keyspace commands are routed to the appropriate node</li>
+ * <li>Multi-key keyspace commands require the same slot-hash and are routed to the appropriate node</li>
+ * </ul>
+ * + * @param codec Use this codec to encode/decode keys and values, must not be {@literal null} + * @param Key type + * @param Value type + * @return A new stateful Redis Cluster connection + */ + public StatefulRedisClusterPubSubConnection connectPubSub(RedisCodec codec) { + + assertInitialPartitions(); + + return getConnection(connectClusterPubSubAsync(codec)); + } + + /** + * Connect asynchronously to a Redis Cluster using pub/sub connections. Use the supplied {@link RedisCodec codec} to + * encode/decode keys and values. Connecting asynchronously requires an initialized topology. Call {@link #getPartitions()} + * first, otherwise the connect will fail with a{@link IllegalStateException}. + *

+ * What to expect from this connection:
+ * <ul>
+ * <li>A default connection is created to the node with the least number of clients</li>
+ * <li>Pub/sub commands are sent to the node with the least number of clients</li>
+ * <li>Keyless commands are sent to the default connection</li>
+ * <li>Single-key keyspace commands are routed to the appropriate node</li>
+ * <li>Multi-key keyspace commands require the same slot-hash and are routed to the appropriate node</li>
+ * </ul>
+ * + * @param codec Use this codec to encode/decode keys and values, must not be {@literal null} + * @param Key type + * @param Value type + * @return a {@link CompletableFuture} that is notified with the connection progress. + * @since 5.1 + */ + public CompletableFuture> connectPubSubAsync(RedisCodec codec) { + return transformAsyncConnectionException(connectClusterPubSubAsync(codec), getInitialUris()); + } + + StatefulRedisConnection connectToNode(SocketAddress socketAddress) { + return connectToNode(newStringStringCodec(), socketAddress.toString(), null, Mono.just(socketAddress)); + } + + /** + * Create a connection to a redis socket address. + * + * @param codec Use this codec to encode/decode keys and values, must not be {@literal null} + * @param nodeId the nodeId + * @param clusterWriter global cluster writer + * @param socketAddressSupplier supplier for the socket address + * @param Key type + * @param Value type + * @return A new connection + */ + StatefulRedisConnection connectToNode(RedisCodec codec, String nodeId, RedisChannelWriter clusterWriter, + Mono socketAddressSupplier) { + return getConnection(connectToNodeAsync(codec, nodeId, clusterWriter, socketAddressSupplier)); + } + + /** + * Create a connection to a redis socket address. + * + * @param codec Use this codec to encode/decode keys and values, must not be {@literal null} + * @param nodeId the nodeId + * @param clusterWriter global cluster writer + * @param socketAddressSupplier supplier for the socket address + * @param Key type + * @param Value type + * @return A new connection + */ + ConnectionFuture> connectToNodeAsync(RedisCodec codec, String nodeId, + RedisChannelWriter clusterWriter, Mono socketAddressSupplier) { + + assertNotNull(codec); + assertNotEmpty(initialUris); + LettuceAssert.notNull(socketAddressSupplier, "SocketAddressSupplier must not be null"); + + ClusterNodeEndpoint endpoint = new ClusterNodeEndpoint(getClusterClientOptions(), getResources(), clusterWriter); + + RedisChannelWriter writer = endpoint; + + if (CommandExpiryWriter.isSupported(getClusterClientOptions())) { + writer = new CommandExpiryWriter(writer, getClusterClientOptions(), getResources()); + } + + StatefulRedisConnectionImpl connection = new StatefulRedisConnectionImpl<>(writer, codec, getDefaultTimeout()); + + ConnectionFuture> connectionFuture = connectStatefulAsync(connection, endpoint, + getFirstUri(), socketAddressSupplier, + () -> new CommandHandler(getClusterClientOptions(), getResources(), endpoint)); + + return connectionFuture.whenComplete((conn, throwable) -> { + if (throwable != null) { + connection.close(); + } + }); + } + + /** + * Create a pub/sub connection to a redis socket address. 
+ * + * @param codec Use this codec to encode/decode keys and values, must not be {@literal null} + * @param nodeId the nodeId + * @param socketAddressSupplier supplier for the socket address + * @param Key type + * @param Value type + * @return A new connection + */ + ConnectionFuture> connectPubSubToNodeAsync(RedisCodec codec, String nodeId, + Mono socketAddressSupplier) { + + assertNotNull(codec); + assertNotEmpty(initialUris); + + LettuceAssert.notNull(socketAddressSupplier, "SocketAddressSupplier must not be null"); + + logger.debug("connectPubSubToNode(" + nodeId + ")"); + + PubSubEndpoint endpoint = new PubSubEndpoint<>(getClusterClientOptions(), getResources()); + + RedisChannelWriter writer = endpoint; + + if (CommandExpiryWriter.isSupported(getClusterClientOptions())) { + writer = new CommandExpiryWriter(writer, getClusterClientOptions(), getResources()); + } + + StatefulRedisPubSubConnectionImpl connection = new StatefulRedisPubSubConnectionImpl<>(endpoint, writer, codec, + getDefaultTimeout()); + + ConnectionFuture> connectionFuture = connectStatefulAsync(connection, endpoint, + getFirstUri(), socketAddressSupplier, + () -> new PubSubCommandHandler<>(getClusterClientOptions(), getResources(), codec, endpoint)); + return connectionFuture.whenComplete((conn, throwable) -> { + if (throwable != null) { + connection.close(); + } + }); + } + + /** + * Create a clustered pub/sub connection with command distributor. + * + * @param codec Use this codec to encode/decode keys and values, must not be {@literal null} + * @param Key type + * @param Value type + * @return a new connection + */ + private CompletableFuture> connectClusterAsync(RedisCodec codec) { + + if (partitions == null) { + return Futures.failed(new IllegalStateException( + "Partitions not initialized. 
Initialize via RedisClusterClient.getPartitions().")); + } + + topologyRefreshScheduler.activateTopologyRefreshIfNeeded(); + + logger.debug("connectCluster(" + initialUris + ")"); + + Mono socketAddressSupplier = getSocketAddressSupplier(TopologyComparators::sortByClientCount); + + DefaultEndpoint endpoint = new DefaultEndpoint(getClusterClientOptions(), getResources()); + RedisChannelWriter writer = endpoint; + + if (CommandExpiryWriter.isSupported(getClusterClientOptions())) { + writer = new CommandExpiryWriter(writer, getClusterClientOptions(), getResources()); + } + + ClusterDistributionChannelWriter clusterWriter = new ClusterDistributionChannelWriter(getClusterClientOptions(), writer, + topologyRefreshScheduler); + PooledClusterConnectionProvider pooledClusterConnectionProvider = new PooledClusterConnectionProvider<>(this, + clusterWriter, codec, topologyRefreshScheduler); + + clusterWriter.setClusterConnectionProvider(pooledClusterConnectionProvider); + + StatefulRedisClusterConnectionImpl connection = new StatefulRedisClusterConnectionImpl<>(clusterWriter, codec, + getDefaultTimeout()); + + connection.setReadFrom(ReadFrom.MASTER); + connection.setPartitions(partitions); + + Supplier commandHandlerSupplier = () -> new CommandHandler(getClusterClientOptions(), getResources(), + endpoint); + + Mono> connectionMono = Mono + .defer(() -> connect(socketAddressSupplier, endpoint, connection, commandHandlerSupplier)); + + for (int i = 1; i < getConnectionAttempts(); i++) { + connectionMono = connectionMono + .onErrorResume(t -> connect(socketAddressSupplier, endpoint, connection, commandHandlerSupplier)); + } + + return connectionMono.flatMap(c -> c.reactive().command().collectList() + // + .map(CommandDetailParser::parse) + // + .doOnNext(detail -> c.setCommandSet(new CommandSet(detail))) + // + .doOnError(e -> c.setCommandSet(new CommandSet(Collections.emptyList()))).then(Mono.just(c)) + .onErrorResume(RedisCommandExecutionException.class, e -> Mono.just(c))) + .doOnNext( + c -> connection.registerCloseables(closeableResources, clusterWriter, pooledClusterConnectionProvider)) + .map(it -> (StatefulRedisClusterConnection) it).toFuture(); + } + + private Mono connect(Mono socketAddressSupplier, DefaultEndpoint endpoint, + StatefulRedisClusterConnectionImpl connection, Supplier commandHandlerSupplier) { + + ConnectionFuture future = connectStatefulAsync(connection, endpoint, getFirstUri(), socketAddressSupplier, + commandHandlerSupplier); + + return Mono.fromCompletionStage(future).doOnError(t -> logger.warn(t.getMessage())); + } + + private Mono connect(Mono socketAddressSupplier, DefaultEndpoint endpoint, + StatefulRedisConnectionImpl connection, Supplier commandHandlerSupplier) { + + ConnectionFuture future = connectStatefulAsync(connection, endpoint, getFirstUri(), socketAddressSupplier, + commandHandlerSupplier); + + return Mono.fromCompletionStage(future).doOnError(t -> logger.warn(t.getMessage())); + } + + /** + * Create a clustered connection with command distributor. + * + * @param codec Use this codec to encode/decode keys and values, must not be {@literal null} + * @param Key type + * @param Value type + * @return a new connection + */ + private CompletableFuture> connectClusterPubSubAsync( + RedisCodec codec) { + + if (partitions == null) { + return Futures.failed(new IllegalStateException( + "Partitions not initialized. 
Initialize via RedisClusterClient.getPartitions().")); + } + + topologyRefreshScheduler.activateTopologyRefreshIfNeeded(); + + logger.debug("connectClusterPubSub(" + initialUris + ")"); + + Mono socketAddressSupplier = getSocketAddressSupplier(TopologyComparators::sortByClientCount); + + PubSubClusterEndpoint endpoint = new PubSubClusterEndpoint<>(getClusterClientOptions(), getResources()); + RedisChannelWriter writer = endpoint; + + if (CommandExpiryWriter.isSupported(getClusterClientOptions())) { + writer = new CommandExpiryWriter(writer, getClusterClientOptions(), getResources()); + } + + ClusterDistributionChannelWriter clusterWriter = new ClusterDistributionChannelWriter(getClusterClientOptions(), writer, + topologyRefreshScheduler); + + StatefulRedisClusterPubSubConnectionImpl connection = new StatefulRedisClusterPubSubConnectionImpl<>(endpoint, + clusterWriter, codec, getDefaultTimeout()); + + ClusterPubSubConnectionProvider pooledClusterConnectionProvider = new ClusterPubSubConnectionProvider<>(this, + clusterWriter, codec, connection.getUpstreamListener(), topologyRefreshScheduler); + + clusterWriter.setClusterConnectionProvider(pooledClusterConnectionProvider); + connection.setPartitions(partitions); + + Supplier commandHandlerSupplier = () -> new PubSubCommandHandler<>(getClusterClientOptions(), + getResources(), codec, endpoint); + + Mono> connectionMono = Mono + .defer(() -> connect(socketAddressSupplier, endpoint, connection, commandHandlerSupplier)); + + for (int i = 1; i < getConnectionAttempts(); i++) { + connectionMono = connectionMono + .onErrorResume(t -> connect(socketAddressSupplier, endpoint, connection, commandHandlerSupplier)); + } + + return connectionMono.flatMap(c -> c.reactive().command().collectList() + // + .map(CommandDetailParser::parse) + // + .doOnNext(detail -> c.setCommandSet(new CommandSet(detail))) + .doOnError(e -> c.setCommandSet(new CommandSet(Collections.emptyList()))).then(Mono.just(c)) + .onErrorResume(RedisCommandExecutionException.class, e -> Mono.just(c))) + .doOnNext( + c -> connection.registerCloseables(closeableResources, clusterWriter, pooledClusterConnectionProvider)) + .map(it -> (StatefulRedisClusterPubSubConnection) it).toFuture(); + } + + private int getConnectionAttempts() { + return Math.max(1, partitions.size()); + } + + /** + * Initiates a channel connection considering {@link ClientOptions} initialization options, authentication and client name + * options. + */ + @SuppressWarnings("unchecked") + private , S> ConnectionFuture connectStatefulAsync(T connection, + DefaultEndpoint endpoint, RedisURI connectionSettings, Mono socketAddressSupplier, + Supplier commandHandlerSupplier) { + + ConnectionBuilder connectionBuilder = createConnectionBuilder(connection, connection.getConnectionState(), endpoint, + connectionSettings, socketAddressSupplier, commandHandlerSupplier); + + ConnectionFuture> future = initializeChannelAsync(connectionBuilder); + + return future.thenApply(channelHandler -> (S) connection); + } + + /** + * Initiates a channel connection considering {@link ClientOptions} initialization options, authentication and client name + * options. 
+ */ + @SuppressWarnings("unchecked") + private , S> ConnectionFuture connectStatefulAsync(T connection, + DefaultEndpoint endpoint, RedisURI connectionSettings, Mono socketAddressSupplier, + Supplier commandHandlerSupplier) { + + ConnectionBuilder connectionBuilder = createConnectionBuilder(connection, connection.getConnectionState(), endpoint, + connectionSettings, socketAddressSupplier, commandHandlerSupplier); + + ConnectionFuture> future = initializeChannelAsync(connectionBuilder); + + return future.thenApply(channelHandler -> (S) connection); + } + + private ConnectionBuilder createConnectionBuilder(RedisChannelHandler connection, ConnectionState state, + DefaultEndpoint endpoint, RedisURI connectionSettings, Mono socketAddressSupplier, + Supplier commandHandlerSupplier) { + + ConnectionBuilder connectionBuilder; + if (connectionSettings.isSsl()) { + SslConnectionBuilder sslConnectionBuilder = SslConnectionBuilder.sslConnectionBuilder(); + sslConnectionBuilder.ssl(connectionSettings); + connectionBuilder = sslConnectionBuilder; + } else { + connectionBuilder = ConnectionBuilder.connectionBuilder(); + } + + state.apply(connectionSettings); + connectionBuilder.connectionInitializer(createHandshake(state)); + + connectionBuilder.reconnectionListener(new ReconnectEventListener(topologyRefreshScheduler)); + connectionBuilder.clientOptions(getClusterClientOptions()); + connectionBuilder.connection(connection); + connectionBuilder.clientResources(getResources()); + connectionBuilder.endpoint(endpoint); + connectionBuilder.commandHandler(commandHandlerSupplier); + connectionBuilder(socketAddressSupplier, connectionBuilder, connectionSettings); + channelType(connectionBuilder, connectionSettings); + + return connectionBuilder; + } + + /** + * Refresh partitions and re-initialize the routing table. + * + * @deprecated since 6.0. Renamed to {@link #refreshPartitions()}. + */ + @Deprecated + public void reloadPartitions() { + refreshPartitions(); + } + + /** + * Refresh partitions and re-initialize the routing table. + * + * @since 6.0 + */ + public void refreshPartitions() { + get(refreshPartitionsAsync().toCompletableFuture(), e -> new RedisException("Cannot reload Redis Cluster topology", e)); + } + + /** + * Asynchronously reload partitions and re-initialize the distribution table. + * + * @return a {@link CompletionStage} that signals completion. 
+ * @since 6.0 + */ + public CompletionStage refreshPartitionsAsync() { + + if (partitions == null) { + return initializePartitions().thenAccept(Partitions::updateCache); + } + + return loadPartitionsAsync().thenAccept(loadedPartitions -> { + + if (TopologyComparators.isChanged(getPartitions(), loadedPartitions)) { + + logger.debug("Using a new cluster topology"); + + List before = new ArrayList<>(getPartitions()); + List after = new ArrayList<>(loadedPartitions); + + getResources().eventBus().publish(new ClusterTopologyChangedEvent(before, after)); + } + + this.partitions.reload(loadedPartitions.getPartitions()); + updatePartitionsInConnections(); + }); + } + + protected void updatePartitionsInConnections() { + + forEachClusterConnection(input -> { + input.setPartitions(partitions); + }); + + forEachClusterPubSubConnection(input -> { + input.setPartitions(partitions); + }); + } + + protected CompletableFuture initializePartitions() { + return loadPartitionsAsync().thenApply(it -> this.partitions = it); + } + + private void assertInitialPartitions() { + if (partitions == null) { + get(initializePartitions(), + e -> new RedisConnectionException("Unable to establish a connection to Redis Cluster", e)); + } + } + + /** + * Retrieve partitions. Nodes within {@link Partitions} are ordered by latency. Lower latency nodes come first. + * + * @return Partitions + */ + protected Partitions loadPartitions() { + return get(loadPartitionsAsync(), Function.identity()); + } + + private static T get(CompletableFuture future, Function mapper) { + try { + return future.get(); + } catch (ExecutionException e) { + + if (e.getCause() instanceof RedisException) { + throw mapper.apply((RedisException) e.getCause()); + } + + throw Exceptions.bubble(e); + } catch (Exception e) { + throw Exceptions.bubble(e); + } + } + + /** + * Retrieve partitions. Nodes within {@link Partitions} are ordered by latency. Lower latency nodes come first. + * + * @return future that emits {@link Partitions} upon a successful topology lookup. 
+ * @since 6.0 + */ + protected CompletableFuture loadPartitionsAsync() { + + Iterable topologyRefreshSource = getTopologyRefreshSource(); + CompletableFuture future = new CompletableFuture<>(); + + fetchPartitions(topologyRefreshSource).whenComplete((nodes, throwable) -> { + + if (throwable == null) { + future.complete(nodes); + return; + } + + // Attempt recovery using initial seed nodes + if (useDynamicRefreshSources() && topologyRefreshSource != initialUris) { + + fetchPartitions(initialUris).whenComplete((nextNodes, nextThrowable) -> { + + if (nextThrowable != null) { + Throwable exception = Exceptions.unwrap(nextThrowable); + exception.addSuppressed(Exceptions.unwrap(throwable)); + + future.completeExceptionally(exception); + } else { + future.complete(nextNodes); + } + }); + } else { + future.completeExceptionally(Exceptions.unwrap(throwable)); + } + }); + + return future; + } + + private CompletionStage fetchPartitions(Iterable topologyRefreshSource) { + + CompletionStage> topology = refresh.loadViews(topologyRefreshSource, + getClusterClientOptions().getSocketOptions().getConnectTimeout(), useDynamicRefreshSources()); + + return topology.thenApply(partitions -> { + + if (partitions.isEmpty()) { + throw new RedisException(String.format("Cannot retrieve initial cluster partitions from initial URIs %s", + topologyRefreshSource)); + } + + Partitions loadedPartitions = determinePartitions(this.partitions, partitions); + RedisURI viewedBy = getViewedBy(partitions, loadedPartitions); + + for (RedisClusterNode partition : loadedPartitions) { + if (viewedBy != null) { + RedisURI uri = partition.getUri(); + RedisClusterURIUtil.applyUriConnectionSettings(viewedBy, uri); + } + } + + topologyRefreshScheduler.activateTopologyRefreshIfNeeded(); + + return loadedPartitions; + }); + } + + /** + * Determines a {@link Partitions topology view} based on the current and the obtain topology views. + * + * @param current the current topology view. May be {@literal null} if {@link RedisClusterClient} has no topology view yet. + * @param topologyViews the obtain topology views + * @return the {@link Partitions topology view} to use. + */ + protected Partitions determinePartitions(Partitions current, Map topologyViews) { + + if (current == null) { + return PartitionsConsensus.HEALTHY_MAJORITY.getPartitions(null, topologyViews); + } + + return PartitionsConsensus.KNOWN_MAJORITY.getPartitions(current, topologyViews); + } + + /** + * Sets the new cluster topology. The partitions are not applied to existing connections. + * + * @param partitions partitions object + */ + public void setPartitions(Partitions partitions) { + this.partitions = partitions; + } + + /** + * Shutdown this client and close all open connections asynchronously. The client should be discarded after calling + * shutdown. 
+ * + * @param quietPeriod the quiet period as described in the documentation + * @param timeout the maximum amount of time to wait until the executor is shutdown regardless if a task was submitted + * during the quiet period + * @param timeUnit the unit of {@code quietPeriod} and {@code timeout} + * @since 4.4 + */ + @Override + public CompletableFuture shutdownAsync(long quietPeriod, long timeout, TimeUnit timeUnit) { + + topologyRefreshScheduler.shutdown(); + + return super.shutdownAsync(quietPeriod, timeout, timeUnit); + } + + // ------------------------------------------------------------------------- + // Implementation hooks and helper methods + // ------------------------------------------------------------------------- + + /** + * Returns the first {@link RedisURI} configured with this {@link RedisClusterClient} instance. + * + * @return the first {@link RedisURI}. + */ + protected RedisURI getFirstUri() { + assertNotEmpty(initialUris); + Iterator iterator = initialUris.iterator(); + return iterator.next(); + } + + /** + * Returns a {@link Supplier} for {@link SocketAddress connection points}. + * + * @param sortFunction Sort function to enforce a specific order. The sort function must not change the order or the input + * parameter but create a new collection with the desired order, must not be {@literal null}. + * @return {@link Supplier} for {@link SocketAddress connection points}. + */ + protected Mono getSocketAddressSupplier(Function> sortFunction) { + + LettuceAssert.notNull(sortFunction, "Sort function must not be null"); + + final RoundRobinSocketAddressSupplier socketAddressSupplier = new RoundRobinSocketAddressSupplier(partitions, + sortFunction, getResources()); + + return Mono.defer(() -> { + + if (partitions.isEmpty()) { + return Mono.fromCallable(() -> { + SocketAddress socketAddress = getResources().socketAddressResolver().resolve(getFirstUri()); + logger.debug("Resolved SocketAddress {} using {}", socketAddress, getFirstUri()); + return socketAddress; + }); + } + + return Mono.fromCallable(socketAddressSupplier::get); + }); + } + + /** + * Returns an {@link Iterable} of the initial {@link RedisURI URIs}. + * + * @return the initial {@link RedisURI URIs} + */ + protected Iterable getInitialUris() { + return initialUris; + } + + /** + * Apply a {@link Consumer} of {@link StatefulRedisClusterConnectionImpl} to all active connections. + * + * @param function the {@link Consumer}. + */ + protected void forEachClusterConnection(Consumer> function) { + forEachCloseable(input -> input instanceof StatefulRedisClusterConnectionImpl, function); + } + + /** + * Apply a {@link Consumer} of {@link StatefulRedisClusterPubSubConnectionImpl} to all active connections. + * + * @param function the {@link Consumer}. + */ + protected void forEachClusterPubSubConnection(Consumer> function) { + forEachCloseable(input -> input instanceof StatefulRedisClusterPubSubConnectionImpl, function); + } + + /** + * Apply a {@link Consumer} of {@link Closeable} to all active connections. + * + * @param + * @param function the {@link Consumer}. + */ + @SuppressWarnings("unchecked") + protected void forEachCloseable(Predicate selector, Consumer function) { + for (Closeable c : closeableResources) { + if (selector.test(c)) { + function.accept((T) c); + } + } + } + + /** + * Returns {@literal true} if {@link ClusterTopologyRefreshOptions#useDynamicRefreshSources() dynamic refresh sources} are + * enabled. + *

+ * Subclasses of {@link RedisClusterClient} may override that method. + * + * @return {@literal true} if dynamic refresh sources are used. + * @see ClusterTopologyRefreshOptions#useDynamicRefreshSources() + */ + protected boolean useDynamicRefreshSources() { + + ClusterTopologyRefreshOptions topologyRefreshOptions = getClusterClientOptions().getTopologyRefreshOptions(); + + return topologyRefreshOptions.useDynamicRefreshSources(); + } + + /** + * Returns a {@link String} {@link RedisCodec codec}. + * + * @return a {@link String} {@link RedisCodec codec}. + * @see StringCodec#UTF8 + */ + protected RedisCodec newStringStringCodec() { + return StringCodec.UTF8; + } + + /** + * Resolve a {@link RedisURI} from a map of cluster views by {@link Partitions} as key + * + * @param map the map + * @param partitions the key + * @return a {@link RedisURI} or null + */ + private static RedisURI getViewedBy(Map map, Partitions partitions) { + + for (Map.Entry entry : map.entrySet()) { + if (entry.getValue() == partitions) { + return entry.getKey(); + } + } + + return null; + } + + ClusterClientOptions getClusterClientOptions() { + return (ClusterClientOptions) getOptions(); + } + + boolean expireStaleConnections() { + return getClusterClientOptions() == null || getClusterClientOptions().isCloseStaleConnections(); + } + + protected static CompletableFuture transformAsyncConnectionException(CompletionStage future, + Iterable target) { + + return ConnectionFuture.from(null, future.toCompletableFuture()).thenCompose((v, e) -> { + + if (e != null) { + return Futures.failed(RedisConnectionException.create(target.toString(), e)); + } + + return CompletableFuture.completedFuture(v); + }).toCompletableFuture(); + } + + private static void assertNotNull(RedisCodec codec) { + LettuceAssert.notNull(codec, "RedisCodec must not be null"); + } + + private static void assertNotEmpty(Iterable redisURIs) { + LettuceAssert.notNull(redisURIs, "RedisURIs must not be null"); + LettuceAssert.isTrue(redisURIs.iterator().hasNext(), "RedisURIs must not be empty"); + } + + private static RedisURI assertNotNull(RedisURI redisURI) { + LettuceAssert.notNull(redisURI, "RedisURI must not be null"); + return redisURI; + } + + private static void assertNotNull(ClientResources clientResources) { + LettuceAssert.notNull(clientResources, "ClientResources must not be null"); + } + + private class NodeConnectionFactoryImpl implements NodeConnectionFactory { + + @Override + public StatefulRedisConnection connectToNode(RedisCodec codec, SocketAddress socketAddress) { + return RedisClusterClient.this.connectToNode(codec, socketAddress.toString(), null, Mono.just(socketAddress)); + } + + @Override + public ConnectionFuture> connectToNodeAsync(RedisCodec codec, + SocketAddress socketAddress) { + return RedisClusterClient.this.connectToNodeAsync(codec, socketAddress.toString(), null, Mono.just(socketAddress)); + } + } +} diff --git a/src/main/java/io/lettuce/core/cluster/RedisClusterPubSubAsyncCommandsImpl.java b/src/main/java/io/lettuce/core/cluster/RedisClusterPubSubAsyncCommandsImpl.java new file mode 100644 index 0000000000..959e30ef35 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/RedisClusterPubSubAsyncCommandsImpl.java @@ -0,0 +1,158 @@ +/* + * Copyright 2016-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import static io.lettuce.core.cluster.NodeSelectionInvocationHandler.ExecutionModel.ASYNC; + +import java.lang.reflect.Proxy; +import java.util.List; +import java.util.Set; +import java.util.concurrent.CompletableFuture; +import java.util.function.Predicate; +import java.util.stream.Collectors; + +import io.lettuce.core.GeoArgs; +import io.lettuce.core.GeoWithin; +import io.lettuce.core.RedisFuture; +import io.lettuce.core.RedisURI; +import io.lettuce.core.cluster.api.NodeSelectionSupport; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; +import io.lettuce.core.cluster.pubsub.StatefulRedisClusterPubSubConnection; +import io.lettuce.core.cluster.pubsub.api.async.NodeSelectionPubSubAsyncCommands; +import io.lettuce.core.cluster.pubsub.api.async.PubSubAsyncNodeSelection; +import io.lettuce.core.cluster.pubsub.api.async.RedisClusterPubSubAsyncCommands; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.protocol.CommandType; +import io.lettuce.core.pubsub.RedisPubSubAsyncCommandsImpl; +import io.lettuce.core.pubsub.StatefulRedisPubSubConnection; +import io.lettuce.core.pubsub.api.async.RedisPubSubAsyncCommands; + +/** + * An asynchronous and thread-safe API for a Redis pub/sub connection. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 5.0 + */ +public class RedisClusterPubSubAsyncCommandsImpl extends RedisPubSubAsyncCommandsImpl + implements RedisClusterPubSubAsyncCommands { + + /** + * Initialize a new connection. + * + * @param connection the connection . + * @param codec Codec used to encode/decode keys and values. 
+ */ + public RedisClusterPubSubAsyncCommandsImpl(StatefulRedisPubSubConnection connection, RedisCodec codec) { + super(connection, codec); + } + + @Override + public RedisFuture> georadius(K key, double longitude, double latitude, double distance, GeoArgs.Unit unit) { + + if (getStatefulConnection().getCommandSet().hasCommand(CommandType.GEORADIUS_RO)) { + return super.georadius_ro(key, longitude, latitude, distance, unit); + } + + return super.georadius(key, longitude, latitude, distance, unit); + } + + @Override + public RedisFuture>> georadius(K key, double longitude, double latitude, double distance, + GeoArgs.Unit unit, GeoArgs geoArgs) { + + if (getStatefulConnection().getCommandSet().hasCommand(CommandType.GEORADIUS_RO)) { + return super.georadius_ro(key, longitude, latitude, distance, unit, geoArgs); + } + + return super.georadius(key, longitude, latitude, distance, unit, geoArgs); + } + + @Override + public RedisFuture> georadiusbymember(K key, V member, double distance, GeoArgs.Unit unit) { + + if (getStatefulConnection().getCommandSet().hasCommand(CommandType.GEORADIUSBYMEMBER_RO)) { + return super.georadiusbymember_ro(key, member, distance, unit); + } + + return super.georadiusbymember(key, member, distance, unit); + } + + @Override + public RedisFuture>> georadiusbymember(K key, V member, double distance, GeoArgs.Unit unit, + GeoArgs geoArgs) { + + if (getStatefulConnection().getCommandSet().hasCommand(CommandType.GEORADIUSBYMEMBER_RO)) { + return super.georadiusbymember_ro(key, member, distance, unit, geoArgs); + } + + return super.georadiusbymember(key, member, distance, unit, geoArgs); + } + + @Override + public StatefulRedisClusterPubSubConnectionImpl getStatefulConnection() { + return (StatefulRedisClusterPubSubConnectionImpl) super.getStatefulConnection(); + } + + @SuppressWarnings("unchecked") + @Override + public PubSubAsyncNodeSelection nodes(Predicate predicate) { + + PubSubAsyncNodeSelection selection = new StaticPubSubAsyncNodeSelection<>(getStatefulConnection(), predicate); + + NodeSelectionInvocationHandler h = new NodeSelectionInvocationHandler((AbstractNodeSelection) selection, + RedisPubSubAsyncCommands.class, ASYNC); + return (PubSubAsyncNodeSelection) Proxy.newProxyInstance(NodeSelectionSupport.class.getClassLoader(), + new Class[] { NodeSelectionPubSubAsyncCommands.class, PubSubAsyncNodeSelection.class }, h); + } + + private static class StaticPubSubAsyncNodeSelection + extends AbstractNodeSelection, NodeSelectionPubSubAsyncCommands, K, V> + implements PubSubAsyncNodeSelection { + + private final List redisClusterNodes; + private final ClusterDistributionChannelWriter writer; + + @SuppressWarnings("unchecked") + public StaticPubSubAsyncNodeSelection(StatefulRedisClusterPubSubConnection globalConnection, + Predicate selector) { + + this.redisClusterNodes = globalConnection.getPartitions().stream().filter(selector) + .collect(Collectors.toList()); + writer = ((StatefulRedisClusterPubSubConnectionImpl) globalConnection).getClusterDistributionChannelWriter(); + } + + @Override + protected CompletableFuture> getApi(RedisClusterNode redisClusterNode) { + return getConnection(redisClusterNode).thenApply(StatefulRedisPubSubConnection::async); + } + + protected List nodes() { + return redisClusterNodes; + } + + @SuppressWarnings("unchecked") + protected CompletableFuture> getConnection(RedisClusterNode redisClusterNode) { + + RedisURI uri = redisClusterNode.getUri(); + AsyncClusterConnectionProvider async = (AsyncClusterConnectionProvider) 
writer.getClusterConnectionProvider(); + + return async.getConnectionAsync(ClusterConnectionProvider.Intent.WRITE, uri.getHost(), uri.getPort()) + .thenApply(it -> (StatefulRedisPubSubConnection) it); + } + } +} diff --git a/src/main/java/io/lettuce/core/cluster/RedisClusterPubSubReactiveCommandsImpl.java b/src/main/java/io/lettuce/core/cluster/RedisClusterPubSubReactiveCommandsImpl.java new file mode 100644 index 0000000000..218b538270 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/RedisClusterPubSubReactiveCommandsImpl.java @@ -0,0 +1,157 @@ +/* + * Copyright 2016-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import static io.lettuce.core.cluster.NodeSelectionInvocationHandler.ExecutionModel.REACTIVE; + +import java.lang.reflect.Proxy; +import java.util.List; +import java.util.concurrent.CompletableFuture; +import java.util.function.Predicate; +import java.util.stream.Collectors; + +import reactor.core.publisher.Flux; +import io.lettuce.core.GeoArgs; +import io.lettuce.core.GeoWithin; +import io.lettuce.core.RedisURI; +import io.lettuce.core.cluster.api.NodeSelectionSupport; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; +import io.lettuce.core.cluster.pubsub.StatefulRedisClusterPubSubConnection; +import io.lettuce.core.cluster.pubsub.api.reactive.NodeSelectionPubSubReactiveCommands; +import io.lettuce.core.cluster.pubsub.api.reactive.PubSubReactiveNodeSelection; +import io.lettuce.core.cluster.pubsub.api.reactive.RedisClusterPubSubReactiveCommands; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.protocol.CommandType; +import io.lettuce.core.pubsub.RedisPubSubReactiveCommandsImpl; +import io.lettuce.core.pubsub.StatefulRedisPubSubConnection; +import io.lettuce.core.pubsub.api.reactive.RedisPubSubReactiveCommands; + +/** + * A reactive and thread-safe API for a Redis pub/sub connection. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 5.0 + */ +public class RedisClusterPubSubReactiveCommandsImpl extends RedisPubSubReactiveCommandsImpl + implements RedisClusterPubSubReactiveCommands { + + /** + * Initialize a new connection. + * + * @param connection the connection. + * @param codec Codec used to encode/decode keys and values. 
+ */ + public RedisClusterPubSubReactiveCommandsImpl(StatefulRedisPubSubConnection connection, RedisCodec codec) { + super(connection, codec); + } + + @Override + public Flux georadius(K key, double longitude, double latitude, double distance, GeoArgs.Unit unit) { + + if (getStatefulConnection().getCommandSet().hasCommand(CommandType.GEORADIUS_RO)) { + return super.georadius_ro(key, longitude, latitude, distance, unit); + } + + return super.georadius(key, longitude, latitude, distance, unit); + } + + @Override + public Flux> georadius(K key, double longitude, double latitude, double distance, GeoArgs.Unit unit, + GeoArgs geoArgs) { + + if (getStatefulConnection().getCommandSet().hasCommand(CommandType.GEORADIUS_RO)) { + return super.georadius_ro(key, longitude, latitude, distance, unit, geoArgs); + } + + return super.georadius(key, longitude, latitude, distance, unit, geoArgs); + } + + @Override + public Flux georadiusbymember(K key, V member, double distance, GeoArgs.Unit unit) { + + if (getStatefulConnection().getCommandSet().hasCommand(CommandType.GEORADIUS_RO)) { + return super.georadiusbymember_ro(key, member, distance, unit); + } + + return super.georadiusbymember(key, member, distance, unit); + } + + @Override + public Flux> georadiusbymember(K key, V member, double distance, GeoArgs.Unit unit, GeoArgs geoArgs) { + + if (getStatefulConnection().getCommandSet().hasCommand(CommandType.GEORADIUS_RO)) { + return super.georadiusbymember_ro(key, member, distance, unit, geoArgs); + } + + return super.georadiusbymember(key, member, distance, unit, geoArgs); + } + + @Override + public StatefulRedisClusterPubSubConnectionImpl getStatefulConnection() { + return (StatefulRedisClusterPubSubConnectionImpl) super.getStatefulConnection(); + } + + @SuppressWarnings("unchecked") + @Override + public PubSubReactiveNodeSelection nodes(Predicate predicate) { + + PubSubReactiveNodeSelection selection = new StaticPubSubReactiveNodeSelection(getStatefulConnection(), + predicate); + + NodeSelectionInvocationHandler h = new NodeSelectionInvocationHandler((AbstractNodeSelection) selection, + RedisPubSubReactiveCommands.class, REACTIVE); + return (PubSubReactiveNodeSelection) Proxy.newProxyInstance(NodeSelectionSupport.class.getClassLoader(), + new Class[] { NodeSelectionPubSubReactiveCommands.class, PubSubReactiveNodeSelection.class }, h); + } + + private static class StaticPubSubReactiveNodeSelection + extends AbstractNodeSelection, NodeSelectionPubSubReactiveCommands, K, V> + implements PubSubReactiveNodeSelection { + + private final List redisClusterNodes; + private final ClusterDistributionChannelWriter writer; + + @SuppressWarnings("unchecked") + public StaticPubSubReactiveNodeSelection(StatefulRedisClusterPubSubConnection globalConnection, + Predicate selector) { + + this.redisClusterNodes = globalConnection.getPartitions().stream().filter(selector) + .collect(Collectors.toList()); + writer = ((StatefulRedisClusterPubSubConnectionImpl) globalConnection).getClusterDistributionChannelWriter(); + } + + @Override + protected CompletableFuture> getApi(RedisClusterNode redisClusterNode) { + return getConnection(redisClusterNode).thenApply(StatefulRedisPubSubConnection::reactive); + } + + protected List nodes() { + return redisClusterNodes; + } + + @SuppressWarnings("unchecked") + protected CompletableFuture> getConnection(RedisClusterNode redisClusterNode) { + RedisURI uri = redisClusterNode.getUri(); + + AsyncClusterConnectionProvider async = (AsyncClusterConnectionProvider) 
writer.getClusterConnectionProvider(); + + return async.getConnectionAsync(ClusterConnectionProvider.Intent.WRITE, uri.getHost(), uri.getPort()) + .thenApply(it -> (StatefulRedisPubSubConnection) it); + } + } +} diff --git a/src/main/java/io/lettuce/core/cluster/RedisClusterURIUtil.java b/src/main/java/io/lettuce/core/cluster/RedisClusterURIUtil.java new file mode 100644 index 0000000000..4ccdf3e8a3 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/RedisClusterURIUtil.java @@ -0,0 +1,83 @@ +/* + * Copyright 2016-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import java.net.URI; +import java.util.ArrayList; +import java.util.List; + +import io.lettuce.core.RedisURI; +import io.lettuce.core.internal.HostAndPort; + +/** + * {@link RedisClusterURIUtil} is a collection of {@link RedisURI}-based utility methods for {@link RedisClusterClient} use. + * + * @author Mark Paluch + * @since 4.4 + */ +public abstract class RedisClusterURIUtil { + + private RedisClusterURIUtil() { + } + + /** + * Parse a Redis Cluster URI with potentially multiple hosts into a {@link List} of {@link RedisURI}. + * + * An URI follows the syntax: {@code redis://[password@]host[:port][,host2[:port2]]} + * + * @param uri must not be empty or {@literal null}. + * @return {@link List} of {@link RedisURI}. + */ + public static List toRedisURIs(URI uri) { + + RedisURI redisURI = RedisURI.create(uri); + + String[] parts = redisURI.getHost().split("\\,"); + + List redisURIs = new ArrayList<>(parts.length); + + for (String part : parts) { + HostAndPort hostAndPort = HostAndPort.parse(part); + + RedisURI nodeUri = RedisURI.create(hostAndPort.getHostText(), hostAndPort.hasPort() ? hostAndPort.getPort() + : redisURI.getPort()); + + applyUriConnectionSettings(redisURI, nodeUri); + + redisURIs.add(nodeUri); + } + + return redisURIs; + } + + /** + * Apply {@link RedisURI} settings such as SSL/Timeout/password. + * + * @param from from {@link RedisURI}. + * @param to from {@link RedisURI}. + */ + static void applyUriConnectionSettings(RedisURI from, RedisURI to) { + + if (from.getPassword() != null && from.getPassword().length != 0) { + to.setPassword(new String(from.getPassword())); + } + + to.setTimeout(from.getTimeout()); + to.setSsl(from.isSsl()); + to.setStartTls(from.isStartTls()); + to.setVerifyPeer(from.isVerifyPeer()); + } +} diff --git a/src/main/java/io/lettuce/core/cluster/RoundRobin.java b/src/main/java/io/lettuce/core/cluster/RoundRobin.java new file mode 100644 index 0000000000..883c2ad972 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/RoundRobin.java @@ -0,0 +1,85 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import java.util.ArrayList; +import java.util.Collection; +import java.util.Collections; + +/** + * Circular element provider. This class allows infinite scrolling over a collection with the possibility to provide an initial + * offset. + * + * @author Mark Paluch + */ +class RoundRobin { + + protected volatile Collection collection = Collections.emptyList(); + + protected volatile V offset; + + /** + * Return whether this {@link RoundRobin} is still consistent and contains all items from the master {@link Collection} and + * vice versa. + * + * @param master the master collection containing source elements for this {@link RoundRobin}. + * @return {@literal true} if this {@link RoundRobin} is consistent with the master {@link Collection}. + */ + public boolean isConsistent(Collection master) { + + Collection collection = this.collection; + + return collection.containsAll(master) && master.containsAll(collection); + } + + /** + * Rebuild the {@link RoundRobin} from the master {@link Collection}. + * + * @param master the master collection containing source elements for this {@link RoundRobin}. + */ + public void rebuild(Collection master) { + + this.collection = new ArrayList<>(master); + this.offset = null; + } + + /** + * Returns the next item. + * + * @return the next item + */ + public V next() { + + Collection collection = this.collection; + V offset = this.offset; + + if (offset != null) { + boolean accept = false; + for (V element : collection) { + if (element == offset) { + accept = true; + continue; + } + + if (accept) { + return this.offset = element; + } + } + } + + return this.offset = collection.iterator().next(); + } +} diff --git a/src/main/java/io/lettuce/core/cluster/RoundRobinSocketAddressSupplier.java b/src/main/java/io/lettuce/core/cluster/RoundRobinSocketAddressSupplier.java new file mode 100644 index 0000000000..0fe228930e --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/RoundRobinSocketAddressSupplier.java @@ -0,0 +1,79 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.cluster; + +import java.net.SocketAddress; +import java.util.Collection; +import java.util.function.Function; +import java.util.function.Supplier; + +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.resource.ClientResources; +import io.netty.util.internal.logging.InternalLogger; +import io.netty.util.internal.logging.InternalLoggerFactory; + +/** + * Round-Robin socket address supplier. Cluster nodes are iterated circular/infinitely. + * + * @author Mark Paluch + */ +class RoundRobinSocketAddressSupplier implements Supplier { + + private static final InternalLogger logger = InternalLoggerFactory.getInstance(RoundRobinSocketAddressSupplier.class); + + private final Collection partitions; + private final Function, Collection> sortFunction; + private final ClientResources clientResources; + + private RoundRobin roundRobin; + + public RoundRobinSocketAddressSupplier(Collection partitions, + Function, Collection> sortFunction, + ClientResources clientResources) { + + LettuceAssert.notNull(partitions, "Partitions must not be null"); + LettuceAssert.notNull(sortFunction, "Sort-Function must not be null"); + + this.partitions = partitions; + this.roundRobin = new RoundRobin<>(); + this.sortFunction = (Function) sortFunction; + this.clientResources = clientResources; + resetRoundRobin(); + } + + @Override + public SocketAddress get() { + + if (!roundRobin.isConsistent(partitions)) { + resetRoundRobin(); + } + + RedisClusterNode redisClusterNode = roundRobin.next(); + return getSocketAddress(redisClusterNode); + } + + protected void resetRoundRobin() { + roundRobin.rebuild(sortFunction.apply(partitions)); + } + + protected SocketAddress getSocketAddress(RedisClusterNode redisClusterNode) { + + SocketAddress resolvedAddress = clientResources.socketAddressResolver().resolve(redisClusterNode.getUri()); + logger.debug("Resolved SocketAddress {} using for Cluster node {}", resolvedAddress, redisClusterNode.getNodeId()); + return resolvedAddress; + } +} diff --git a/src/main/java/io/lettuce/core/cluster/SlotHash.java b/src/main/java/io/lettuce/core/cluster/SlotHash.java new file mode 100644 index 0000000000..b475c01035 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/SlotHash.java @@ -0,0 +1,157 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import java.nio.ByteBuffer; +import java.util.*; + +import io.lettuce.core.codec.CRC16; +import io.lettuce.core.codec.RedisCodec; + +/** + * Utility to calculate the slot from a key. + * + * @author Mark Paluch + * @since 3.0 + */ +public class SlotHash { + + /** + * Constant for a subkey start. + */ + public static final byte SUBKEY_START = (byte) '{'; + + /** + * Constant for a subkey end. + */ + public static final byte SUBKEY_END = (byte) '}'; + + /** + * Number of redis cluster slot hashes. 
+ */ + public static final int SLOT_COUNT = 16384; + + private SlotHash() { + + } + + /** + * Calculate the slot from the given key. + * + * @param key the key + * @return slot + */ + public static final int getSlot(String key) { + return getSlot(key.getBytes()); + } + + /** + * Calculate the slot from the given key. + * + * @param key the key + * @return slot + */ + public static int getSlot(byte[] key) { + return getSlot(ByteBuffer.wrap(key)); + } + + /** + * Calculate the slot from the given key. + * + * @param key the key + * @return slot + */ + public static int getSlot(ByteBuffer key) { + + int limit = key.limit(); + int position = key.position(); + + int start = indexOf(key, SUBKEY_START); + if (start != -1) { + int end = indexOf(key, start + 1, SUBKEY_END); + if (end != -1 && end != start + 1) { + key.position(start + 1).limit(end); + } + } + + try { + if (key.hasArray()) { + return CRC16.crc16(key.array(), key.position(), key.limit() - key.position()) % SLOT_COUNT; + } + return CRC16.crc16(key) % SLOT_COUNT; + } finally { + key.position(position).limit(limit); + } + } + + private static int indexOf(ByteBuffer haystack, byte needle) { + return indexOf(haystack, haystack.position(), needle); + } + + private static int indexOf(ByteBuffer haystack, int start, byte needle) { + + for (int i = start; i < haystack.remaining(); i++) { + + if (haystack.get(i) == needle) { + return i; + } + } + + return -1; + } + + /** + * Partition keys by slot-hash. The resulting map honors order of the keys. + * + * @param codec codec to encode the key + * @param keys iterable of keys + * @param Key type. + * @param Value type. + * @return map between slot-hash and an ordered list of keys. + * + */ + static Map> partition(RedisCodec codec, Iterable keys) { + + Map> partitioned = new HashMap<>(); + for (K key : keys) { + int slot = getSlot(codec.encodeKey(key)); + if (!partitioned.containsKey(slot)) { + partitioned.put(slot, new ArrayList<>()); + } + Collection list = partitioned.get(slot); + list.add(key); + } + return partitioned; + } + + /** + * Create mapping between the Key and hash slot. + * + * @param partitioned map partitioned by slot-hash and keys + * @return map between each key and its slot-hash. + */ + static Map getSlots(Map> partitioned) { + + Map result = new HashMap<>(); + for (Map.Entry> entry : partitioned.entrySet()) { + for (K key : entry.getValue()) { + result.put(key, entry.getKey()); + } + } + + return result; + } +} diff --git a/src/main/java/io/lettuce/core/cluster/StatefulRedisClusterConnectionImpl.java b/src/main/java/io/lettuce/core/cluster/StatefulRedisClusterConnectionImpl.java new file mode 100644 index 0000000000..dac67f5b9f --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/StatefulRedisClusterConnectionImpl.java @@ -0,0 +1,281 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
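A small sketch of the public SlotHash API above; the key names are arbitrary. It illustrates the hash-tag handling in getSlot(ByteBuffer), where only the text between '{' and '}' is hashed.

import io.lettuce.core.cluster.SlotHash;

class SlotHashSketch {

    public static void main(String[] args) {

        // A plain key hashes to some slot in the range [0, SLOT_COUNT).
        System.out.println(SlotHash.getSlot("user:1000"));

        // Keys sharing the same {hash tag} land on the same slot, so they can be
        // used together in multi-key commands on a cluster.
        System.out.println(SlotHash.getSlot("{user:1000}.profile"));
        System.out.println(SlotHash.getSlot("{user:1000}.followers"));
    }
}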
+ */ +package io.lettuce.core.cluster; + +import static io.lettuce.core.protocol.CommandType.AUTH; +import static io.lettuce.core.protocol.CommandType.READONLY; +import static io.lettuce.core.protocol.CommandType.READWRITE; + +import java.lang.reflect.InvocationHandler; +import java.lang.reflect.Proxy; +import java.time.Duration; +import java.util.ArrayList; +import java.util.Collection; +import java.util.List; +import java.util.concurrent.CompletableFuture; +import java.util.function.Consumer; +import java.util.stream.Collectors; + +import io.lettuce.core.*; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; +import io.lettuce.core.cluster.api.async.RedisAdvancedClusterAsyncCommands; +import io.lettuce.core.cluster.api.async.RedisClusterAsyncCommands; +import io.lettuce.core.cluster.api.reactive.RedisAdvancedClusterReactiveCommands; +import io.lettuce.core.cluster.api.sync.NodeSelection; +import io.lettuce.core.cluster.api.sync.NodeSelectionCommands; +import io.lettuce.core.cluster.api.sync.RedisAdvancedClusterCommands; +import io.lettuce.core.cluster.models.partitions.Partitions; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.protocol.CommandArgsAccessor; +import io.lettuce.core.protocol.CompleteableCommand; +import io.lettuce.core.protocol.ConnectionWatchdog; +import io.lettuce.core.protocol.RedisCommand; + +/** + * A thread-safe connection to a Redis Cluster. Multiple threads may share one {@link StatefulRedisClusterConnectionImpl} + * + * A {@link ConnectionWatchdog} monitors each connection and reconnects automatically until {@link #close} is called. All + * pending commands will be (re)sent after successful reconnection. + * + * @author Mark Paluch + * @since 4.0 + */ +public class StatefulRedisClusterConnectionImpl extends RedisChannelHandler + implements StatefulRedisClusterConnection { + + protected final RedisCodec codec; + protected final RedisAdvancedClusterCommands sync; + protected final RedisAdvancedClusterAsyncCommandsImpl async; + protected final RedisAdvancedClusterReactiveCommandsImpl reactive; + + private final ClusterConnectionState connectionState = new ClusterConnectionState(); + + private Partitions partitions; + private volatile CommandSet commandSet; + + /** + * Initialize a new connection. + * + * @param writer the channel writer + * @param codec Codec used to encode/decode keys and values. + * @param timeout Maximum time to wait for a response. 
+ */ + public StatefulRedisClusterConnectionImpl(RedisChannelWriter writer, RedisCodec codec, Duration timeout) { + + super(writer, timeout); + this.codec = codec; + + this.async = new RedisAdvancedClusterAsyncCommandsImpl<>(this, codec); + this.sync = (RedisAdvancedClusterCommands) Proxy.newProxyInstance(AbstractRedisClient.class.getClassLoader(), + new Class[] { RedisAdvancedClusterCommands.class }, syncInvocationHandler()); + this.reactive = new RedisAdvancedClusterReactiveCommandsImpl<>(this, codec); + } + + @Override + public RedisAdvancedClusterCommands sync() { + return sync; + } + + protected InvocationHandler syncInvocationHandler() { + return new ClusterFutureSyncInvocationHandler<>(this, RedisClusterAsyncCommands.class, NodeSelection.class, + NodeSelectionCommands.class, async()); + } + + @Override + public RedisAdvancedClusterAsyncCommands async() { + return async; + } + + @Override + public RedisAdvancedClusterReactiveCommands reactive() { + return reactive; + } + + CommandSet getCommandSet() { + return commandSet; + } + + void setCommandSet(CommandSet commandSet) { + this.commandSet = commandSet; + } + + private RedisURI lookup(String nodeId) { + + for (RedisClusterNode partition : partitions) { + if (partition.getNodeId().equals(nodeId)) { + return partition.getUri(); + } + } + return null; + } + + @Override + public StatefulRedisConnection getConnection(String nodeId) { + + RedisURI redisURI = lookup(nodeId); + + if (redisURI == null) { + throw new RedisException("NodeId " + nodeId + " does not belong to the cluster"); + } + + return getClusterDistributionChannelWriter().getClusterConnectionProvider() + .getConnection(ClusterConnectionProvider.Intent.WRITE, nodeId); + } + + @Override + public CompletableFuture> getConnectionAsync(String nodeId) { + + RedisURI redisURI = lookup(nodeId); + + if (redisURI == null) { + throw new RedisException("NodeId " + nodeId + " does not belong to the cluster"); + } + + AsyncClusterConnectionProvider provider = (AsyncClusterConnectionProvider) getClusterDistributionChannelWriter() + .getClusterConnectionProvider(); + + return provider.getConnectionAsync(ClusterConnectionProvider.Intent.WRITE, nodeId); + } + + @Override + public StatefulRedisConnection getConnection(String host, int port) { + + return getClusterDistributionChannelWriter().getClusterConnectionProvider() + .getConnection(ClusterConnectionProvider.Intent.WRITE, host, port); + } + + @Override + public CompletableFuture> getConnectionAsync(String host, int port) { + + AsyncClusterConnectionProvider provider = (AsyncClusterConnectionProvider) getClusterDistributionChannelWriter() + .getClusterConnectionProvider(); + + return provider.getConnectionAsync(ClusterConnectionProvider.Intent.WRITE, host, port); + } + + ClusterDistributionChannelWriter getClusterDistributionChannelWriter() { + return (ClusterDistributionChannelWriter) super.getChannelWriter(); + } + + @Override + public RedisCommand dispatch(RedisCommand command) { + return super.dispatch(preProcessCommand(command)); + } + + @Override + public Collection> dispatch(Collection> commands) { + + List> commandsToSend = new ArrayList<>(commands.size()); + for (RedisCommand command : commands) { + commandsToSend.add(preProcessCommand(command)); + } + + return super.dispatch(commandsToSend); + } + + private RedisCommand preProcessCommand(RedisCommand command) { + + RedisCommand local = command; + + if (local.getType().name().equals(AUTH.name())) { + local = attachOnComplete(local, status -> { + if (status.equals("OK")) { + List 
args = CommandArgsAccessor.getCharArrayArguments(command.getArgs()); + + if (!args.isEmpty()) { + this.connectionState.setUserNamePassword(args); + } else { + + List stringArgs = CommandArgsAccessor.getStringArguments(command.getArgs()); + this.connectionState + .setUserNamePassword(stringArgs.stream().map(String::toCharArray).collect(Collectors.toList())); + } + } + }); + } + + if (local.getType().name().equals(READONLY.name())) { + local = attachOnComplete(local, status -> { + if (status.equals("OK")) { + this.connectionState.setReadOnly(true); + } + }); + } + + if (local.getType().name().equals(READWRITE.name())) { + local = attachOnComplete(local, status -> { + if (status.equals("OK")) { + this.connectionState.setReadOnly(false); + } + }); + } + return local; + } + + private RedisCommand attachOnComplete(RedisCommand command, Consumer consumer) { + + if (command instanceof CompleteableCommand) { + CompleteableCommand completeable = (CompleteableCommand) command; + completeable.onComplete(consumer); + } + return command; + } + + public void setPartitions(Partitions partitions) { + this.partitions = partitions; + getClusterDistributionChannelWriter().setPartitions(partitions); + } + + public Partitions getPartitions() { + return partitions; + } + + @Override + public void setReadFrom(ReadFrom readFrom) { + LettuceAssert.notNull(readFrom, "ReadFrom must not be null"); + getClusterDistributionChannelWriter().setReadFrom(readFrom); + } + + @Override + public ReadFrom getReadFrom() { + return getClusterDistributionChannelWriter().getReadFrom(); + } + + ConnectionState getConnectionState() { + return connectionState; + } + + static class ClusterConnectionState extends ConnectionState { + + @Override + protected void setUserNamePassword(List args) { + super.setUserNamePassword(args); + } + + @Override + protected void setDb(int db) { + super.setDb(db); + } + + @Override + protected void setReadOnly(boolean readOnly) { + super.setReadOnly(readOnly); + } + } +} diff --git a/src/main/java/io/lettuce/core/cluster/StatefulRedisClusterPubSubConnectionImpl.java b/src/main/java/io/lettuce/core/cluster/StatefulRedisClusterPubSubConnectionImpl.java new file mode 100644 index 0000000000..96704a7253 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/StatefulRedisClusterPubSubConnectionImpl.java @@ -0,0 +1,217 @@ +/* + * Copyright 2016-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.cluster; + +import java.lang.reflect.InvocationHandler; +import java.lang.reflect.Proxy; +import java.time.Duration; +import java.util.List; +import java.util.concurrent.CompletableFuture; + +import io.lettuce.core.*; +import io.lettuce.core.cluster.models.partitions.Partitions; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; +import io.lettuce.core.cluster.pubsub.RedisClusterPubSubListener; +import io.lettuce.core.cluster.pubsub.StatefulRedisClusterPubSubConnection; +import io.lettuce.core.cluster.pubsub.api.async.RedisClusterPubSubAsyncCommands; +import io.lettuce.core.cluster.pubsub.api.reactive.RedisClusterPubSubReactiveCommands; +import io.lettuce.core.cluster.pubsub.api.sync.NodeSelectionPubSubCommands; +import io.lettuce.core.cluster.pubsub.api.sync.PubSubNodeSelection; +import io.lettuce.core.cluster.pubsub.api.sync.RedisClusterPubSubCommands; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.pubsub.RedisPubSubAsyncCommandsImpl; +import io.lettuce.core.pubsub.RedisPubSubReactiveCommandsImpl; +import io.lettuce.core.pubsub.StatefulRedisPubSubConnection; +import io.lettuce.core.pubsub.StatefulRedisPubSubConnectionImpl; +import io.lettuce.core.pubsub.api.async.RedisPubSubAsyncCommands; +import io.lettuce.core.pubsub.api.sync.RedisPubSubCommands; + +/** + * @author Mark Paluch + */ +class StatefulRedisClusterPubSubConnectionImpl extends StatefulRedisPubSubConnectionImpl + implements StatefulRedisClusterPubSubConnection { + + private final PubSubClusterEndpoint endpoint; + private volatile Partitions partitions; + private volatile CommandSet commandSet; + + /** + * Initialize a new connection. + * + * @param writer the channel writer + * @param codec Codec used to encode/decode keys and values. + * @param timeout Maximum time to wait for a response. 
+ */ + public StatefulRedisClusterPubSubConnectionImpl(PubSubClusterEndpoint endpoint, RedisChannelWriter writer, + RedisCodec codec, Duration timeout) { + + super(endpoint, writer, codec, timeout); + + this.endpoint = endpoint; + } + + @Override + public RedisClusterPubSubAsyncCommands async() { + return (RedisClusterPubSubAsyncCommands) super.async(); + } + + @Override + protected RedisPubSubAsyncCommandsImpl newRedisAsyncCommandsImpl() { + return new RedisClusterPubSubAsyncCommandsImpl<>(this, codec); + } + + @Override + public RedisClusterPubSubCommands sync() { + return (RedisClusterPubSubCommands) super.sync(); + } + + @SuppressWarnings("unchecked") + @Override + protected RedisPubSubCommands newRedisSyncCommandsImpl() { + + return (RedisPubSubCommands) Proxy.newProxyInstance(AbstractRedisClient.class.getClassLoader(), + new Class[] { RedisClusterPubSubCommands.class, RedisPubSubCommands.class }, syncInvocationHandler()); + } + + private InvocationHandler syncInvocationHandler() { + return new ClusterFutureSyncInvocationHandler(this, RedisPubSubAsyncCommands.class, PubSubNodeSelection.class, + NodeSelectionPubSubCommands.class, async()); + } + + @Override + public RedisClusterPubSubReactiveCommands reactive() { + return (RedisClusterPubSubReactiveCommands) super.reactive(); + } + + @Override + protected RedisPubSubReactiveCommandsImpl newRedisReactiveCommandsImpl() { + return new RedisClusterPubSubReactiveCommandsImpl(this, codec); + } + + CommandSet getCommandSet() { + return commandSet; + } + + void setCommandSet(CommandSet commandSet) { + this.commandSet = commandSet; + } + + @Override + protected List> resubscribe() { + + async().clusterMyId().thenAccept(nodeId -> endpoint.setClusterNode(partitions.getPartitionByNodeId(nodeId))); + + return super.resubscribe(); + } + + @Override + public StatefulRedisPubSubConnection getConnection(String nodeId) { + + RedisURI redisURI = lookup(nodeId); + + if (redisURI == null) { + throw new RedisException("NodeId " + nodeId + " does not belong to the cluster"); + } + + return (StatefulRedisPubSubConnection) getClusterDistributionChannelWriter().getClusterConnectionProvider() + .getConnection(ClusterConnectionProvider.Intent.WRITE, nodeId); + } + + @Override + @SuppressWarnings("unchecked") + public CompletableFuture> getConnectionAsync(String nodeId) { + + RedisURI redisURI = lookup(nodeId); + + if (redisURI == null) { + throw new RedisException("NodeId " + nodeId + " does not belong to the cluster"); + } + + AsyncClusterConnectionProvider provider = (AsyncClusterConnectionProvider) getClusterDistributionChannelWriter() + .getClusterConnectionProvider(); + return (CompletableFuture) provider.getConnectionAsync(ClusterConnectionProvider.Intent.WRITE, nodeId); + + } + + @Override + public StatefulRedisPubSubConnection getConnection(String host, int port) { + + return (StatefulRedisPubSubConnection) getClusterDistributionChannelWriter().getClusterConnectionProvider() + .getConnection(ClusterConnectionProvider.Intent.WRITE, host, port); + } + + @Override + public CompletableFuture> getConnectionAsync(String host, int port) { + + AsyncClusterConnectionProvider provider = (AsyncClusterConnectionProvider) getClusterDistributionChannelWriter() + .getClusterConnectionProvider(); + + return (CompletableFuture) provider.getConnectionAsync(ClusterConnectionProvider.Intent.WRITE, host, port); + } + + public void setPartitions(Partitions partitions) { + this.partitions = partitions; + getClusterDistributionChannelWriter().setPartitions(partitions); + } + + 
public Partitions getPartitions() { + return partitions; + } + + @Override + public void setNodeMessagePropagation(boolean enabled) { + this.endpoint.setNodeMessagePropagation(enabled); + } + + /** + * Add a new {@link RedisClusterPubSubListener listener}. + * + * @param listener the listener, must not be {@literal null}. + */ + @Override + public void addListener(RedisClusterPubSubListener listener) { + endpoint.addListener(listener); + } + + /** + * Remove an existing {@link RedisClusterPubSubListener listener}. + * + * @param listener the listener, must not be {@literal null}. + */ + @Override + public void removeListener(RedisClusterPubSubListener listener) { + endpoint.removeListener(listener); + } + + RedisClusterPubSubListener getUpstreamListener() { + return endpoint.getUpstreamListener(); + } + + protected ClusterDistributionChannelWriter getClusterDistributionChannelWriter() { + return (ClusterDistributionChannelWriter) super.getChannelWriter(); + } + + private RedisURI lookup(String nodeId) { + + for (RedisClusterNode partition : partitions) { + if (partition.getNodeId().equals(nodeId)) { + return partition.getUri(); + } + } + return null; + } +} diff --git a/src/main/java/io/lettuce/core/cluster/StaticNodeSelection.java b/src/main/java/io/lettuce/core/cluster/StaticNodeSelection.java new file mode 100644 index 0000000000..b3c6f5c0d7 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/StaticNodeSelection.java @@ -0,0 +1,72 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import java.util.List; +import java.util.concurrent.CompletableFuture; +import java.util.function.Function; +import java.util.function.Predicate; +import java.util.stream.Collectors; + +import io.lettuce.core.RedisURI; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; + +/** + * Static selection of nodes. + * + * @param API type. + * @param Command command interface type to invoke multi-node operations. + * @param Key type. + * @param Value type. 
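A hypothetical usage sketch for the cluster pub/sub connection implemented above. It assumes the usual RedisClusterClient entry point, a reachable cluster node on localhost:7379, and a made-up channel name; only methods shown in this patch or in the regular pub/sub API are used.

import io.lettuce.core.RedisURI;
import io.lettuce.core.cluster.RedisClusterClient;
import io.lettuce.core.cluster.pubsub.StatefulRedisClusterPubSubConnection;

class ClusterPubSubSketch {

    public static void main(String[] args) {

        RedisClusterClient client = RedisClusterClient.create(RedisURI.create("redis://localhost:7379"));
        StatefulRedisClusterPubSubConnection<String, String> connection = client.connectPubSub();

        // Forward messages received on node-specific subscriptions to the listeners
        // registered on this connection (see setNodeMessagePropagation above).
        connection.setNodeMessagePropagation(true);

        // Register a RedisClusterPubSubListener via connection.addListener(...) before
        // subscribing, then subscribe through the synchronous API.
        connection.sync().subscribe("notifications");

        // ... consume messages, then release resources.
        connection.close();
        client.shutdown();
    }
}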
+ * @author Mark Paluch + */ +class StaticNodeSelection extends AbstractNodeSelection { + + private final ClusterDistributionChannelWriter writer; + private final ClusterConnectionProvider.Intent intent; + private final List redisClusterNodes; + private final Function, API> apiExtractor; + + public StaticNodeSelection(ClusterDistributionChannelWriter writer, Predicate selector, + ClusterConnectionProvider.Intent intent, Function, API> apiExtractor) { + + this.writer = writer; + this.intent = intent; + this.apiExtractor = apiExtractor; + + this.redisClusterNodes = writer.getPartitions().stream().filter(selector).collect(Collectors.toList()); + } + + @Override + protected CompletableFuture> getConnection(RedisClusterNode redisClusterNode) { + + RedisURI uri = redisClusterNode.getUri(); + + AsyncClusterConnectionProvider async = (AsyncClusterConnectionProvider) writer.getClusterConnectionProvider(); + return async.getConnectionAsync(intent, uri.getHost(), uri.getPort()); + } + + @Override + protected CompletableFuture getApi(RedisClusterNode redisClusterNode) { + return getConnection(redisClusterNode).thenApply(apiExtractor); + } + + @Override + protected List nodes() { + return redisClusterNodes; + } +} diff --git a/src/main/java/io/lettuce/core/cluster/SyncExecutionsImpl.java b/src/main/java/io/lettuce/core/cluster/SyncExecutionsImpl.java new file mode 100644 index 0000000000..38c42caae7 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/SyncExecutionsImpl.java @@ -0,0 +1,61 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.cluster; + +import java.util.Collection; +import java.util.HashMap; +import java.util.Map; +import java.util.concurrent.CompletionStage; +import java.util.concurrent.ExecutionException; + +import io.lettuce.core.cluster.api.sync.Executions; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; + +/** + * @author Mark Paluch + * TODO: Add timeout handling + */ +class SyncExecutionsImpl implements Executions { + + private Map executions; + + public SyncExecutionsImpl(Map> executions) throws ExecutionException, + InterruptedException { + + Map result = new HashMap<>(executions.size(), 1); + for (Map.Entry> entry : executions.entrySet()) { + result.put(entry.getKey(), entry.getValue().toCompletableFuture().get()); + } + + this.executions = result; + } + + @Override + public Map asMap() { + return executions; + } + + @Override + public Collection nodes() { + return executions.keySet(); + } + + @Override + public T get(RedisClusterNode redisClusterNode) { + return executions.get(redisClusterNode); + } + +} diff --git a/src/main/java/io/lettuce/core/cluster/UnknownPartitionException.java b/src/main/java/io/lettuce/core/cluster/UnknownPartitionException.java new file mode 100644 index 0000000000..30cf9235dd --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/UnknownPartitionException.java @@ -0,0 +1,35 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +/** + * Exception thrown when an unknown partition is requested. + * + * @author Mark Paluch + * @since 5.1 + */ +@SuppressWarnings("serial") +public class UnknownPartitionException extends PartitionException { + + /** + * Create a {@code UnknownPartitionException} with the specified detail message. + * + * @param msg the detail message. + */ + public UnknownPartitionException(String msg) { + super(msg); + } +} diff --git a/src/main/java/io/lettuce/core/cluster/api/NodeSelectionSupport.java b/src/main/java/io/lettuce/core/cluster/api/NodeSelectionSupport.java new file mode 100644 index 0000000000..5f704c2158 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/api/NodeSelectionSupport.java @@ -0,0 +1,63 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.cluster.api; + +import java.util.Map; + +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; + +/** + * A node selection represents a set of Redis Cluster nodes. Provides access to particular node connection APIs and allows the + * execution of commands on the selected cluster nodes. + * + * @param API type. + * @param Command command interface type to invoke multi-node operations. + * @author Mark Paluch + * @since 4.0 + */ +public interface NodeSelectionSupport { + + /** + * @return number of nodes. + */ + int size(); + + /** + * @return commands API to run on this node selection. + */ + CMD commands(); + + /** + * Obtain the connection/commands to a particular node. + * + * @param index index of the node + * @return the connection/commands object + */ + API commands(int index); + + /** + * Get the {@link RedisClusterNode}. + * + * @param index index of the cluster node + * @return the cluster node + */ + RedisClusterNode node(int index); + + /** + * @return map of {@link RedisClusterNode} and the connection/commands objects + */ + Map asMap(); +} diff --git a/src/main/java/io/lettuce/core/cluster/api/StatefulRedisClusterConnection.java b/src/main/java/io/lettuce/core/cluster/api/StatefulRedisClusterConnection.java new file mode 100644 index 0000000000..c81eeef717 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/api/StatefulRedisClusterConnection.java @@ -0,0 +1,160 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.api; + +import java.util.concurrent.CompletableFuture; + +import io.lettuce.core.ReadFrom; +import io.lettuce.core.RedisChannelWriter; +import io.lettuce.core.RedisException; +import io.lettuce.core.api.StatefulConnection; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.cluster.ClusterClientOptions; +import io.lettuce.core.cluster.api.async.RedisAdvancedClusterAsyncCommands; +import io.lettuce.core.cluster.api.reactive.RedisAdvancedClusterReactiveCommands; +import io.lettuce.core.cluster.api.sync.RedisAdvancedClusterCommands; +import io.lettuce.core.cluster.models.partitions.Partitions; + +/** + * A stateful cluster connection. Advanced cluster connections provide transparent command routing based on the first command + * key. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.0 + */ +public interface StatefulRedisClusterConnection extends StatefulConnection { + + /** + * Returns the {@link RedisAdvancedClusterCommands} API for the current connection. Does not create a new connection. + * + * @return the synchronous API for the underlying connection. + */ + RedisAdvancedClusterCommands sync(); + + /** + * Returns the {@link RedisAdvancedClusterAsyncCommands} API for the current connection. Does not create a new connection. + * + * @return the asynchronous API for the underlying connection. 
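A hedged sketch of how the NodeSelectionSupport abstraction above is typically consumed through the synchronous cluster API. The masters() selector and ping() command are assumed to exist on the advanced cluster commands; connection setup mirrors the earlier sketches.

import io.lettuce.core.RedisURI;
import io.lettuce.core.cluster.RedisClusterClient;
import io.lettuce.core.cluster.api.StatefulRedisClusterConnection;
import io.lettuce.core.cluster.api.sync.Executions;
import io.lettuce.core.cluster.api.sync.NodeSelection;

class NodeSelectionSketch {

    public static void main(String[] args) {

        RedisClusterClient client = RedisClusterClient.create(RedisURI.create("redis://localhost:7379"));
        StatefulRedisClusterConnection<String, String> connection = client.connect();

        // Select all master nodes; size() and node(int) come from NodeSelectionSupport.
        NodeSelection<String, String> masters = connection.sync().masters();
        System.out.println("Selected nodes: " + masters.size());

        // commands() executes PING on every selected node and returns per-node results.
        Executions<String> pings = masters.commands().ping();
        pings.asMap().forEach((node, reply) -> System.out.println(node.getNodeId() + " -> " + reply));

        connection.close();
        client.shutdown();
    }
}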
+ */ + RedisAdvancedClusterAsyncCommands async(); + + /** + * Returns the {@link RedisAdvancedClusterReactiveCommands} API for the current connection. Does not create a new + * connection. + * + * @return the reactive API for the underlying connection. + */ + RedisAdvancedClusterReactiveCommands reactive(); + + /** + * Retrieve a connection to the specified cluster node using the nodeId. Host and port are looked up in the node list. This + * connection is bound to the node id. Once the cluster topology view is updated, the connection will try to reconnect + * to the node with the specified {@code nodeId}; that behavior can also lead to a closed connection once the node with the + * specified {@code nodeId} is no longer part of the cluster. + * + * Do not close the connections. Otherwise, unpredictable behavior will occur. The nodeId must be part of the cluster and is + * validated against the current topology view in {@link io.lettuce.core.cluster.models.partitions.Partitions}. + * + * + * In contrast to the {@link StatefulRedisClusterConnection}, node-connections do not route commands to other cluster nodes. + * + * @param nodeId the node Id + * @return a connection to the requested cluster node + * @throws RedisException if the requested node identified by {@code nodeId} is not part of the cluster + */ + StatefulRedisConnection getConnection(String nodeId); + + /** + * Retrieve asynchronously a connection to the specified cluster node using the nodeId. Host and port are looked up in the + * node list. This connection is bound to the node id. Once the cluster topology view is updated, the connection will try to + * reconnect to the node with the specified {@code nodeId}; that behavior can also lead to a closed connection once the + * node with the specified {@code nodeId} is no longer part of the cluster. + * + * Do not close the connections. Otherwise, unpredictable behavior will occur. The nodeId must be part of the cluster and is + * validated against the current topology view in {@link io.lettuce.core.cluster.models.partitions.Partitions}. + * + * + * In contrast to the {@link StatefulRedisClusterConnection}, node-connections do not route commands to other cluster nodes. + * + * @param nodeId the node Id + * @return {@link CompletableFuture} to indicate success or failure to connect to the requested cluster node. + * @throws RedisException if the requested node identified by {@code nodeId} is not part of the cluster + * @since 5.0 + */ + CompletableFuture> getConnectionAsync(String nodeId); + + /** + * Retrieve a connection to the specified cluster node using host and port. This connection is bound to a host and port. + * Updates to the cluster topology view can close the connection once the host, identified by {@code host} and {@code port}, + * are no longer part of the cluster. + *

+ * Do not close the connections. Otherwise, unpredictable behavior will occur. Host and port connections are verified by + * default for cluster membership, see {@link ClusterClientOptions#isValidateClusterNodeMembership()}. + *

+ * In contrast to the {@link StatefulRedisClusterConnection}, node-connections do not route commands to other cluster nodes. + * + * @param host the host + * @param port the port + * @return a connection to the requested cluster node + * @throws RedisException if the requested node identified by {@code host} and {@code port} is not part of the cluster + */ + StatefulRedisConnection getConnection(String host, int port); + + /** + * Retrieve asynchronously a connection to the specified cluster node using host and port. This connection is bound to a + * host and port. Updates to the cluster topology view can close the connection once the host, identified by {@code host} + * and {@code port}, are no longer part of the cluster. + *

+ * Do not close the connections. Otherwise, unpredictable behavior will occur. Host and port connections are verified by + * default for cluster membership, see {@link ClusterClientOptions#isValidateClusterNodeMembership()}. + *

+ * In contrast to the {@link StatefulRedisClusterConnection}, node-connections do not route commands to other cluster nodes. + * + * @param host the host + * @param port the port + * @return {@link CompletableFuture} to indicate success or failure to connect to the requested cluster node. + * @throws RedisException if the requested node identified by {@code host} and {@code port} is not part of the cluster + * @since 5.0 + */ + CompletableFuture> getConnectionAsync(String host, int port); + + /** + * Set from which nodes data is read. The setting is used as default for read operations on this connection. See the + * documentation for {@link ReadFrom} for more information. + * + * @param readFrom the read from setting, must not be {@literal null} + */ + void setReadFrom(ReadFrom readFrom); + + /** + * Gets the {@link ReadFrom} setting for this connection. Defaults to {@link ReadFrom#MASTER} if not set. + * + * @return the read from setting + */ + ReadFrom getReadFrom(); + + /** + * @return Known partitions for this connection. + */ + Partitions getPartitions(); + + /** + * @return the underlying {@link RedisChannelWriter}. + */ + RedisChannelWriter getChannelWriter(); +} diff --git a/src/main/java/io/lettuce/core/cluster/api/async/AsyncExecutions.java b/src/main/java/io/lettuce/core/cluster/api/async/AsyncExecutions.java new file mode 100644 index 0000000000..f6d36b4278 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/api/async/AsyncExecutions.java @@ -0,0 +1,79 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.api.async; + +import java.util.Collection; +import java.util.List; +import java.util.Map; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.CompletionStage; +import java.util.stream.Collector; +import java.util.stream.Stream; +import java.util.stream.StreamSupport; + +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; + +/** + * Result holder for a command that was executed asynchronously on multiple nodes. + * + * @author Mark Paluch + * @since 4.0 + */ +public interface AsyncExecutions extends Iterable>, CompletionStage> { + + /** + * + * @return map between {@link RedisClusterNode} and the {@link CompletionStage} + */ + Map> asMap(); + + /** + * + * @return collection of nodes on which the command was executed. + */ + Collection nodes(); + + /** + * + * @param redisClusterNode the node + * @return the completion stage for this node + */ + CompletionStage get(RedisClusterNode redisClusterNode); + + /** + * @return array of futures. + */ + CompletableFuture[] futures(); + + /** + * Returns a new {@link CompletionStage} that, when this stage completes normally, is executed with this stage's results as + * the argument to the {@link Collector}. + * + * See the {@link CompletionStage} documentation for rules covering exceptional completion. + * + * @param collector the {@link Collector} to collect this stages results. 
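An end-to-end sketch for the StatefulRedisClusterConnection API defined above. Host, port and key names are placeholders, and ReadFrom.SLAVE_PREFERRED is assumed to be one of the predefined ReadFrom settings in this version.

import io.lettuce.core.ReadFrom;
import io.lettuce.core.RedisURI;
import io.lettuce.core.cluster.RedisClusterClient;
import io.lettuce.core.cluster.api.StatefulRedisClusterConnection;

class ClusterConnectionSketch {

    public static void main(String[] args) {

        RedisClusterClient client = RedisClusterClient.create(RedisURI.create("redis://localhost:7379"));
        StatefulRedisClusterConnection<String, String> connection = client.connect();

        // Route reads to replicas where possible; writes still go to the slot owner.
        connection.setReadFrom(ReadFrom.SLAVE_PREFERRED);

        connection.sync().set("key", "value");
        System.out.println(connection.sync().get("key"));

        // Partitions expose the currently known topology backing this connection.
        connection.getPartitions().forEach(node -> System.out.println(node.getNodeId()));

        connection.close();
        client.shutdown();
    }
}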
+ * @return the new {@link CompletionStage}. + * @see 5.2 + */ + CompletionStage thenCollect(Collector collector); + + /** + * @return a sequential {@code Stream} over the {@link CompletionStage CompletionStages} in this collection + */ + default Stream> stream() { + return StreamSupport.stream(spliterator(), false); + } +} diff --git a/src/main/java/io/lettuce/core/cluster/api/async/AsyncNodeSelection.java b/src/main/java/io/lettuce/core/cluster/api/async/AsyncNodeSelection.java new file mode 100644 index 0000000000..edb08d7842 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/api/async/AsyncNodeSelection.java @@ -0,0 +1,30 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.api.async; + +import io.lettuce.core.api.async.RedisAsyncCommands; +import io.lettuce.core.cluster.api.NodeSelectionSupport; + +/** + * Node selection with access to asynchronous executed commands on the set. + * + * @author Mark Paluch + * @since 4.0 + */ +public interface AsyncNodeSelection + extends NodeSelectionSupport, NodeSelectionAsyncCommands> { + +} diff --git a/src/main/java/io/lettuce/core/cluster/api/async/BaseNodeSelectionAsyncCommands.java b/src/main/java/io/lettuce/core/cluster/api/async/BaseNodeSelectionAsyncCommands.java new file mode 100644 index 0000000000..62256f0b89 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/api/async/BaseNodeSelectionAsyncCommands.java @@ -0,0 +1,136 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.api.async; + +import java.util.List; +import java.util.Map; + +import io.lettuce.core.output.CommandOutput; +import io.lettuce.core.protocol.CommandArgs; +import io.lettuce.core.protocol.ProtocolKeyword; + +/** + * Asynchronous executed commands on a node selection for basic commands. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.0 + * @generated by io.lettuce.apigenerator.CreateAsyncNodeSelectionClusterApi + */ +public interface BaseNodeSelectionAsyncCommands { + + /** + * Post a message to a channel. + * + * @param channel the channel type: key + * @param message the message type: value + * @return Long integer-reply the number of clients that received the message. + */ + AsyncExecutions publish(K channel, V message); + + /** + * Lists the currently *active channels*. 
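A sketch of consuming AsyncExecutions from an asynchronous node selection. The masters() selector on the async cluster commands is assumed; everything else uses only the interfaces introduced above.

import java.util.concurrent.CompletableFuture;

import io.lettuce.core.RedisURI;
import io.lettuce.core.cluster.RedisClusterClient;
import io.lettuce.core.cluster.api.StatefulRedisClusterConnection;
import io.lettuce.core.cluster.api.async.AsyncExecutions;
import io.lettuce.core.cluster.api.async.AsyncNodeSelection;

class AsyncExecutionsSketch {

    public static void main(String[] args) {

        RedisClusterClient client = RedisClusterClient.create(RedisURI.create("redis://localhost:7379"));
        StatefulRedisClusterConnection<String, String> connection = client.connect();

        // Fire PING against every master node without blocking the caller.
        AsyncNodeSelection<String, String> masters = connection.async().masters();
        AsyncExecutions<String> executions = masters.commands().ping();

        // futures() exposes one CompletableFuture per node; wait for all of them here.
        CompletableFuture.allOf(executions.futures()).join();
        executions.asMap().forEach((node, stage) ->
                System.out.println(node.getNodeId() + " -> " + stage.toCompletableFuture().join()));

        connection.close();
        client.shutdown();
    }
}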
+ * + * @return List<K> array-reply a list of active channels, optionally matching the specified pattern. + */ + AsyncExecutions> pubsubChannels(); + + /** + * Lists the currently *active channels*. + * + * @param channel the key + * @return List<K> array-reply a list of active channels, optionally matching the specified pattern. + */ + AsyncExecutions> pubsubChannels(K channel); + + /** + * Returns the number of subscribers (not counting clients subscribed to patterns) for the specified channels. + * + * @param channels channel keys + * @return array-reply a list of channels and number of subscribers for every channel. + */ + AsyncExecutions> pubsubNumsub(K... channels); + + /** + * Returns the number of subscriptions to patterns. + * + * @return Long integer-reply the number of patterns all the clients are subscribed to. + */ + AsyncExecutions pubsubNumpat(); + + /** + * Echo the given string. + * + * @param msg the message type: value + * @return V bulk-string-reply + */ + AsyncExecutions echo(V msg); + + /** + * Return the role of the instance in the context of replication. + * + * @return List<Object> array-reply where the first element is one of master, slave, sentinel and the additional + * elements are role-specific. + */ + AsyncExecutions> role(); + + /** + * Ping the server. + * + * @return String simple-string-reply + */ + AsyncExecutions ping(); + + /** + * Instructs Redis to disconnect the connection. Note that if auto-reconnect is enabled then Lettuce will auto-reconnect if + * the connection was disconnected. Use {@link io.lettuce.core.api.StatefulConnection#close} to close connections and + * release resources. + * + * @return String simple-string-reply always OK. + */ + AsyncExecutions quit(); + + /** + * Wait for replication. + * + * @param replicas minimum number of replicas + * @param timeout timeout in milliseconds + * @return number of replicas + */ + AsyncExecutions waitForReplication(int replicas, long timeout); + + /** + * Dispatch a command to the Redis Server. Please note the command output type must fit to the command response. + * + * @param type the command, must not be {@literal null}. + * @param output the command output, must not be {@literal null}. + * @param response type + * @return the command response + */ + AsyncExecutions dispatch(ProtocolKeyword type, CommandOutput output); + + /** + * Dispatch a command to the Redis Server. Please note the command output type must fit to the command response. + * + * @param type the command, must not be {@literal null}. + * @param output the command output, must not be {@literal null}. + * @param args the command arguments, must not be {@literal null}. + * @param response type + * @return the command response + */ + AsyncExecutions dispatch(ProtocolKeyword type, CommandOutput output, CommandArgs args); +} diff --git a/src/main/java/io/lettuce/core/cluster/api/async/NodeSelectionAsyncCommands.java b/src/main/java/io/lettuce/core/cluster/api/async/NodeSelectionAsyncCommands.java new file mode 100644 index 0000000000..4f2b6f533a --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/api/async/NodeSelectionAsyncCommands.java @@ -0,0 +1,31 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.api.async; + +import io.lettuce.core.cluster.api.NodeSelectionSupport; +import io.lettuce.core.cluster.api.sync.NodeSelectionStreamCommands; + +/** + * Asynchronous and thread-safe Redis API to execute commands on a {@link NodeSelectionSupport}. + * + * @author Mark Paluch + */ +public interface NodeSelectionAsyncCommands extends BaseNodeSelectionAsyncCommands, + NodeSelectionGeoAsyncCommands, NodeSelectionHashAsyncCommands, NodeSelectionHLLAsyncCommands, + NodeSelectionKeyAsyncCommands, NodeSelectionListAsyncCommands, NodeSelectionScriptingAsyncCommands, + NodeSelectionServerAsyncCommands, NodeSelectionSetAsyncCommands, NodeSelectionSortedSetAsyncCommands, + NodeSelectionStreamCommands, NodeSelectionStringAsyncCommands { +} diff --git a/src/main/java/com/lambdaworks/redis/cluster/api/async/NodeSelectionGeoAsyncCommands.java b/src/main/java/io/lettuce/core/cluster/api/async/NodeSelectionGeoAsyncCommands.java similarity index 85% rename from src/main/java/com/lambdaworks/redis/cluster/api/async/NodeSelectionGeoAsyncCommands.java rename to src/main/java/io/lettuce/core/cluster/api/async/NodeSelectionGeoAsyncCommands.java index e303c012df..d14b0e8852 100644 --- a/src/main/java/com/lambdaworks/redis/cluster/api/async/NodeSelectionGeoAsyncCommands.java +++ b/src/main/java/io/lettuce/core/cluster/api/async/NodeSelectionGeoAsyncCommands.java @@ -1,19 +1,31 @@ -package com.lambdaworks.redis.cluster.api.async; +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.api.async; -import com.lambdaworks.redis.GeoArgs; -import com.lambdaworks.redis.GeoCoordinates; -import com.lambdaworks.redis.GeoRadiusStoreArgs; -import com.lambdaworks.redis.GeoWithin; import java.util.List; import java.util.Set; -import com.lambdaworks.redis.RedisFuture; + +import io.lettuce.core.*; /** * Asynchronous executed commands on a node selection for the Geo-API. * * @author Mark Paluch * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateAsyncNodeSelectionClusterApi + * @generated by io.lettuce.apigenerator.CreateAsyncNodeSelectionClusterApi */ public interface NodeSelectionGeoAsyncCommands { @@ -44,7 +56,7 @@ public interface NodeSelectionGeoAsyncCommands { * @param members the members * @return bulk reply Geohash strings in the order of {@code members}. Returns {@literal null} if a member is not found. */ - AsyncExecutions> geohash(K key, V... members); + AsyncExecutions>> geohash(K key, V... 
members); /** * Retrieve members selected by distance with the center of {@code longitude} and {@code latitude}. @@ -72,7 +84,7 @@ public interface NodeSelectionGeoAsyncCommands { AsyncExecutions>> georadius(K key, double longitude, double latitude, double distance, GeoArgs.Unit unit, GeoArgs geoArgs); /** - * Perform a {@link #georadius(Object, double, double, double, Unit, GeoArgs)} query and store the results in a sorted set. + * Perform a {@link #georadius(Object, double, double, double, GeoArgs.Unit, GeoArgs)} query and store the results in a sorted set. * * @param key the key of the geo set * @param longitude the longitude coordinate according to WGS84 @@ -98,7 +110,6 @@ public interface NodeSelectionGeoAsyncCommands { AsyncExecutions> georadiusbymember(K key, V member, double distance, GeoArgs.Unit unit); /** - * * Retrieve members selected by distance with the center of {@code member}. The member itself is always contained in the * results. * @@ -112,7 +123,7 @@ public interface NodeSelectionGeoAsyncCommands { AsyncExecutions>> georadiusbymember(K key, V member, double distance, GeoArgs.Unit unit, GeoArgs geoArgs); /** - * Perform a {@link #georadiusbymember(Object, Object, double, Unit, GeoArgs)} query and store the results in a sorted set. + * Perform a {@link #georadiusbymember(Object, Object, double, GeoArgs.Unit, GeoArgs)} query and store the results in a sorted set. * * @param key the key of the geo set * @param member reference member @@ -136,7 +147,6 @@ public interface NodeSelectionGeoAsyncCommands { AsyncExecutions> geopos(K key, V... members); /** - * * Retrieve distance between points {@code from} and {@code to}. If one or more elements are missing {@literal null} is * returned. Default in meters by, otherwise according to {@code unit} * diff --git a/src/main/java/io/lettuce/core/cluster/api/async/NodeSelectionHLLAsyncCommands.java b/src/main/java/io/lettuce/core/cluster/api/async/NodeSelectionHLLAsyncCommands.java new file mode 100644 index 0000000000..cc652a05fd --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/api/async/NodeSelectionHLLAsyncCommands.java @@ -0,0 +1,61 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.api.async; + +/** + * Asynchronous executed commands on a node selection for HyperLogLog (PF* commands). + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 3.0 + * @generated by io.lettuce.apigenerator.CreateAsyncNodeSelectionClusterApi + */ +public interface NodeSelectionHLLAsyncCommands { + + /** + * Adds the specified elements to the specified HyperLogLog. + * + * @param key the key + * @param values the values + * + * @return Long integer-reply specifically: + * + * 1 if at least 1 HyperLogLog internal register was altered. 0 otherwise. + */ + AsyncExecutions pfadd(K key, V... values); + + /** + * Merge N different HyperLogLogs into a single one. 
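The geo commands above mirror the single-node API, so this sketch exercises them through the plain synchronous cluster commands; the coordinates, key and member names are illustrative only, and the connection is assumed to be created as in the earlier sketches.

import java.util.Set;

import io.lettuce.core.GeoArgs;
import io.lettuce.core.cluster.api.StatefulRedisClusterConnection;

class GeoCommandsSketch {

    static void run(StatefulRedisClusterConnection<String, String> connection) {

        connection.sync().geoadd("sicily", 13.361389, 38.115556, "Palermo");
        connection.sync().geoadd("sicily", 15.087269, 37.502669, "Catania");

        // Members within 200 km of the given point; the unit enum comes from GeoArgs.
        Set<String> nearby = connection.sync().georadius("sicily", 15.0, 37.5, 200, GeoArgs.Unit.km);
        System.out.println(nearby);
    }
}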
+ * + * @param destkey the destination key + * @param sourcekeys the source key + * + * @return String simple-string-reply The command just returns {@code OK}. + */ + AsyncExecutions pfmerge(K destkey, K... sourcekeys); + + /** + * Return the approximated cardinality of the set(s) observed by the HyperLogLog at key(s). + * + * @param keys the keys + * + * @return Long integer-reply specifically: + * + * The approximated number of unique elements observed via {@code PFADD}. + */ + AsyncExecutions pfcount(K... keys); +} diff --git a/src/main/java/com/lambdaworks/redis/cluster/api/async/NodeSelectionHashAsyncCommands.java b/src/main/java/io/lettuce/core/cluster/api/async/NodeSelectionHashAsyncCommands.java similarity index 84% rename from src/main/java/com/lambdaworks/redis/cluster/api/async/NodeSelectionHashAsyncCommands.java rename to src/main/java/io/lettuce/core/cluster/api/async/NodeSelectionHashAsyncCommands.java index dec72c511f..41a6ee1d2c 100644 --- a/src/main/java/com/lambdaworks/redis/cluster/api/async/NodeSelectionHashAsyncCommands.java +++ b/src/main/java/io/lettuce/core/cluster/api/async/NodeSelectionHashAsyncCommands.java @@ -1,30 +1,42 @@ -package com.lambdaworks.redis.cluster.api.async; +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.api.async; import java.util.List; import java.util.Map; -import com.lambdaworks.redis.MapScanCursor; -import com.lambdaworks.redis.ScanArgs; -import com.lambdaworks.redis.ScanCursor; -import com.lambdaworks.redis.StreamScanCursor; -import com.lambdaworks.redis.output.KeyStreamingChannel; -import com.lambdaworks.redis.output.KeyValueStreamingChannel; -import com.lambdaworks.redis.output.ValueStreamingChannel; -import com.lambdaworks.redis.RedisFuture; + +import io.lettuce.core.*; +import io.lettuce.core.output.KeyStreamingChannel; +import io.lettuce.core.output.KeyValueStreamingChannel; +import io.lettuce.core.output.ValueStreamingChannel; /** * Asynchronous executed commands on a node selection for Hashes (Key-Value pairs). - * + * * @param Key type. * @param Value type. * @author Mark Paluch * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateAsyncNodeSelectionClusterApi + * @generated by io.lettuce.apigenerator.CreateAsyncNodeSelectionClusterApi */ public interface NodeSelectionHashAsyncCommands { /** * Delete one or more hash fields. - * + * * @param key the key * @param fields the field type: key * @return Long integer-reply the number of fields that were removed from the hash, not including specified but non existing @@ -34,11 +46,11 @@ public interface NodeSelectionHashAsyncCommands { /** * Determine if a hash field exists. - * + * * @param key the key * @param field the field type: key * @return Boolean integer-reply specifically: - * + * * {@literal true} if the hash contains {@code field}. {@literal false} if the hash does not contain {@code field}, * or {@code key} does not exist. 
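A short sketch of the PFADD/PFMERGE/PFCOUNT semantics documented above, again through the synchronous cluster API. Key names are arbitrary; the hash tags keep all keys in one slot, which the multi-key PFMERGE and PFCOUNT calls require on a cluster. The connection is assumed to be created as in the earlier sketches.

import io.lettuce.core.cluster.api.StatefulRedisClusterConnection;

class HyperLogLogSketch {

    static void run(StatefulRedisClusterConnection<String, String> connection) {

        // Hash tags keep all three keys in the same slot (see SlotHash above).
        connection.sync().pfadd("{visitors}.2020-01-01", "alice", "bob");
        connection.sync().pfadd("{visitors}.2020-01-02", "bob", "carol");

        connection.sync().pfmerge("{visitors}.total", "{visitors}.2020-01-01", "{visitors}.2020-01-02");

        // Approximated cardinality of the union: 3 in this example.
        System.out.println(connection.sync().pfcount("{visitors}.total"));
    }
}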
*/ @@ -46,7 +58,7 @@ public interface NodeSelectionHashAsyncCommands { /** * Get the value of a hash field. - * + * * @param key the key * @param field the field type: key * @return V bulk-string-reply the value associated with {@code field}, or {@literal null} when {@code field} is not present @@ -56,7 +68,7 @@ public interface NodeSelectionHashAsyncCommands { /** * Increment the integer value of a hash field by the given number. - * + * * @param key the key * @param field the field type: key * @param amount the increment type: long @@ -66,7 +78,7 @@ public interface NodeSelectionHashAsyncCommands { /** * Increment the float value of a hash field by the given amount. - * + * * @param key the key * @param field the field type: key * @param amount the increment type: double @@ -76,7 +88,7 @@ public interface NodeSelectionHashAsyncCommands { /** * Get all the fields and values in a hash. - * + * * @param key the key * @return Map<K,V> array-reply list of fields and their values stored in the hash, or an empty list when {@code key} * does not exist. @@ -85,17 +97,17 @@ public interface NodeSelectionHashAsyncCommands { /** * Stream over all the fields and values in a hash. - * + * * @param channel the channel * @param key the key - * + * * @return Long count of the keys. */ AsyncExecutions hgetall(KeyValueStreamingChannel channel, K key); /** * Get all the fields in a hash. - * + * * @param key the key * @return List<K> array-reply list of fields in the hash, or an empty list when {@code key} does not exist. */ @@ -103,17 +115,17 @@ public interface NodeSelectionHashAsyncCommands { /** * Stream over all the fields in a hash. - * + * * @param channel the channel * @param key the key - * + * * @return Long count of the keys. */ AsyncExecutions hkeys(KeyStreamingChannel channel, K key); /** * Get the number of fields in a hash. - * + * * @param key the key * @return Long integer-reply number of fields in the hash, or {@code 0} when {@code key} does not exist. */ @@ -121,27 +133,27 @@ public interface NodeSelectionHashAsyncCommands { /** * Get the values of all the given hash fields. - * + * * @param key the key * @param fields the field type: key * @return List<V> array-reply list of values associated with the given fields, in the same */ - AsyncExecutions> hmget(K key, K... fields); + AsyncExecutions>> hmget(K key, K... fields); /** * Stream over the values of all the given hash fields. - * + * * @param channel the channel * @param key the key * @param fields the fields - * + * * @return Long count of the keys */ - AsyncExecutions hmget(ValueStreamingChannel channel, K key, K... fields); + AsyncExecutions hmget(KeyValueStreamingChannel channel, K key, K... fields); /** * Set multiple hash fields to multiple values. - * + * * @param key the key * @param map the null * @return String simple-string-reply @@ -150,7 +162,7 @@ public interface NodeSelectionHashAsyncCommands { /** * Incrementally iterate hash fields and associated values. - * + * * @param key the key * @return MapScanCursor<K, V> map scan cursor. */ @@ -158,7 +170,7 @@ public interface NodeSelectionHashAsyncCommands { /** * Incrementally iterate hash fields and associated values. - * + * * @param key the key * @param scanArgs scan arguments * @return MapScanCursor<K, V> map scan cursor. @@ -167,7 +179,7 @@ public interface NodeSelectionHashAsyncCommands { /** * Incrementally iterate hash fields and associated values. 
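Two behavioral changes stand out in the hash interface: hmget now returns KeyValue elements, so absent fields are represented explicitly instead of shifting the result list, and a multi-field hset(key, map) overload was added. A short sketch, reusing the async handle from the HyperLogLog example and assuming java.util.* plus io.lettuce.core.KeyValue on the import list:

    Map<String, String> fields = new HashMap<>();
    fields.put("name", "lettuce");
    fields.put("lang", "java");

    // New overload: returns the number of fields that were added (updates of existing fields do not count).
    Long added = async.hset("project", fields).get();

    // hmget now yields KeyValue elements, so a missing field shows up as an empty KeyValue.
    List<KeyValue<String, String>> values = async.hmget("project", "name", "missing").get();
    for (KeyValue<String, String> kv : values) {
        System.out.println(kv.getKey() + " -> " + (kv.hasValue() ? kv.getValue() : "<absent>"));
    }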
- * + * * @param key the key * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} * @param scanArgs scan arguments @@ -177,7 +189,7 @@ public interface NodeSelectionHashAsyncCommands { /** * Incrementally iterate hash fields and associated values. - * + * * @param key the key * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} * @return MapScanCursor<K, V> map scan cursor. @@ -186,7 +198,7 @@ public interface NodeSelectionHashAsyncCommands { /** * Incrementally iterate hash fields and associated values. - * + * * @param channel streaming channel that receives a call for every key-value pair * @param key the key * @return StreamScanCursor scan cursor. @@ -195,7 +207,7 @@ public interface NodeSelectionHashAsyncCommands { /** * Incrementally iterate hash fields and associated values. - * + * * @param channel streaming channel that receives a call for every key-value pair * @param key the key * @param scanArgs scan arguments @@ -205,7 +217,7 @@ public interface NodeSelectionHashAsyncCommands { /** * Incrementally iterate hash fields and associated values. - * + * * @param channel streaming channel that receives a call for every key-value pair * @param key the key * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} @@ -216,7 +228,7 @@ public interface NodeSelectionHashAsyncCommands { /** * Incrementally iterate hash fields and associated values. - * + * * @param channel streaming channel that receives a call for every key-value pair * @param key the key * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} @@ -237,6 +249,16 @@ public interface NodeSelectionHashAsyncCommands { */ AsyncExecutions hset(K key, K field, V value); + /** + * Set multiple hash fields to multiple values. + * + * @param key the key of the hash + * @param map the field/value pairs to update + * @return Long integer-reply: the number of fields that were added. + * @since 5.3 + */ + AsyncExecutions hset(K key, Map map); + /** * Set the value of a hash field, only if the field does not exist. * diff --git a/src/main/java/com/lambdaworks/redis/cluster/api/async/NodeSelectionKeyAsyncCommands.java b/src/main/java/io/lettuce/core/cluster/api/async/NodeSelectionKeyAsyncCommands.java similarity index 87% rename from src/main/java/com/lambdaworks/redis/cluster/api/async/NodeSelectionKeyAsyncCommands.java rename to src/main/java/io/lettuce/core/cluster/api/async/NodeSelectionKeyAsyncCommands.java index 184e953757..635c6adeb8 100644 --- a/src/main/java/com/lambdaworks/redis/cluster/api/async/NodeSelectionKeyAsyncCommands.java +++ b/src/main/java/io/lettuce/core/cluster/api/async/NodeSelectionKeyAsyncCommands.java @@ -1,25 +1,35 @@ -package com.lambdaworks.redis.cluster.api.async; +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.cluster.api.async; import java.util.Date; import java.util.List; -import com.lambdaworks.redis.KeyScanCursor; -import com.lambdaworks.redis.MigrateArgs; -import com.lambdaworks.redis.ScanArgs; -import com.lambdaworks.redis.ScanCursor; -import com.lambdaworks.redis.SortArgs; -import com.lambdaworks.redis.StreamScanCursor; -import com.lambdaworks.redis.output.KeyStreamingChannel; -import com.lambdaworks.redis.output.ValueStreamingChannel; -import com.lambdaworks.redis.RedisFuture; + +import io.lettuce.core.*; +import io.lettuce.core.output.KeyStreamingChannel; +import io.lettuce.core.output.ValueStreamingChannel; /** * Asynchronous executed commands on a node selection for Keys (Key manipulation/querying). - * + * * @param Key type. * @param Value type. * @author Mark Paluch * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateAsyncNodeSelectionClusterApi + * @generated by io.lettuce.apigenerator.CreateAsyncNodeSelectionClusterApi */ public interface NodeSelectionKeyAsyncCommands { @@ -41,7 +51,7 @@ public interface NodeSelectionKeyAsyncCommands { /** * Return a serialized version of the value stored at the specified key. - * + * * @param key the key * @return byte[] bulk-string-reply the serialized value. */ @@ -57,11 +67,11 @@ public interface NodeSelectionKeyAsyncCommands { /** * Set a key's time to live in seconds. - * + * * @param key the key * @param seconds the seconds type: long * @return Boolean integer-reply specifically: - * + * * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not * be set. */ @@ -69,11 +79,11 @@ public interface NodeSelectionKeyAsyncCommands { /** * Set the expiration for a key as a UNIX timestamp. - * + * * @param key the key * @param timestamp the timestamp type: posix time * @return Boolean integer-reply specifically: - * + * * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not * be set (see: {@code EXPIRE}). */ @@ -81,11 +91,11 @@ public interface NodeSelectionKeyAsyncCommands { /** * Set the expiration for a key as a UNIX timestamp. - * + * * @param key the key * @param timestamp the timestamp type: posix time * @return Boolean integer-reply specifically: - * + * * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not * be set (see: {@code EXPIRE}). */ @@ -93,7 +103,7 @@ public interface NodeSelectionKeyAsyncCommands { /** * Find all keys matching the given pattern. - * + * * @param pattern the pattern type: patternkey (pattern) * @return List<K> array-reply list of keys matching {@code pattern}. */ @@ -101,7 +111,7 @@ public interface NodeSelectionKeyAsyncCommands { /** * Find all keys matching the given pattern. - * + * * @param channel the channel * @param pattern the pattern * @return Long array-reply list of keys matching {@code pattern}. @@ -110,7 +120,7 @@ public interface NodeSelectionKeyAsyncCommands { /** * Atomically transfer a key from a Redis instance to another one. - * + * * @param host the host * @param port the port * @param key the key @@ -134,7 +144,7 @@ public interface NodeSelectionKeyAsyncCommands { /** * Move a key to another database. - * + * * @param key the key * @param db the db type: long * @return Boolean integer-reply specifically: @@ -143,7 +153,7 @@ public interface NodeSelectionKeyAsyncCommands { /** * returns the kind of internal representation used in order to store the value associated with a key. 
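KEYS executes per node, which makes it a natural candidate for the node-selection API, while key-addressed calls such as EXISTS and EXPIRE stay on the routed API. In the sketch below, masters(), commands() and futures() come from the surrounding node-selection machinery and are assumptions of this example rather than part of this diff; the setup is the same as in the HyperLogLog sketch:

    // Collect keys matching a pattern from every master node.
    AsyncExecutions<List<String>> perNodeKeys = async.masters().commands().keys("session:*");
    for (CompletableFuture<List<String>> node : perNodeKeys.futures()) {
        node.join().forEach(System.out::println);
    }

    // Slot-routed calls for individual keys.
    boolean present = async.exists("session:42").get() > 0;
    Boolean expires = async.expire("session:42", 30).get(); // true if the TTL was set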
- * + * * @param key the key * @return String */ @@ -152,7 +162,7 @@ public interface NodeSelectionKeyAsyncCommands { /** * returns the number of seconds since the object stored at the specified key is idle (not requested by read or write * operations). - * + * * @param key the key * @return number of seconds since the object stored at the specified key is idle. */ @@ -160,7 +170,7 @@ public interface NodeSelectionKeyAsyncCommands { /** * returns the number of references of the value associated with the specified key. - * + * * @param key the key * @return Long */ @@ -168,10 +178,10 @@ public interface NodeSelectionKeyAsyncCommands { /** * Remove the expiration from a key. - * + * * @param key the key * @return Boolean integer-reply specifically: - * + * * {@literal true} if the timeout was removed. {@literal false} if {@code key} does not exist or does not have an * associated timeout. */ @@ -179,11 +189,11 @@ public interface NodeSelectionKeyAsyncCommands { /** * Set a key's time to live in milliseconds. - * + * * @param key the key * @param milliseconds the milliseconds type: long * @return integer-reply, specifically: - * + * * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not * be set. */ @@ -191,11 +201,11 @@ public interface NodeSelectionKeyAsyncCommands { /** * Set the expiration for a key as a UNIX timestamp specified in milliseconds. - * + * * @param key the key * @param timestamp the milliseconds-timestamp type: posix time * @return Boolean integer-reply specifically: - * + * * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not * be set (see: {@code EXPIRE}). */ @@ -203,11 +213,11 @@ public interface NodeSelectionKeyAsyncCommands { /** * Set the expiration for a key as a UNIX timestamp specified in milliseconds. - * + * * @param key the key * @param timestamp the milliseconds-timestamp type: posix time * @return Boolean integer-reply specifically: - * + * * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not * be set (see: {@code EXPIRE}). */ @@ -215,7 +225,7 @@ public interface NodeSelectionKeyAsyncCommands { /** * Get the time to live for a key in milliseconds. - * + * * @param key the key * @return Long integer-reply TTL in milliseconds, or a negative value in order to signal an error (see the description * above). @@ -224,14 +234,14 @@ public interface NodeSelectionKeyAsyncCommands { /** * Return a random key from the keyspace. - * - * @return V bulk-string-reply the random key, or {@literal null} when the database is empty. + * + * @return K bulk-string-reply the random key, or {@literal null} when the database is empty. */ - AsyncExecutions randomkey(); + AsyncExecutions randomkey(); /** * Rename a key. - * + * * @param key the key * @param newKey the newkey type: key * @return String simple-string-reply @@ -240,18 +250,18 @@ public interface NodeSelectionKeyAsyncCommands { /** * Rename a key, only if the new key does not exist. - * + * * @param key the key * @param newKey the newkey type: key * @return Boolean integer-reply specifically: - * + * * {@literal true} if {@code key} was renamed to {@code newkey}. {@literal false} if {@code newkey} already exists. */ AsyncExecutions renamenx(K key, K newKey); /** * Create a key using the provided serialized value, previously obtained using DUMP. 
- * + * * @param key the key * @param ttl the ttl type: long * @param value the serialized-value type: string @@ -259,9 +269,20 @@ public interface NodeSelectionKeyAsyncCommands { */ AsyncExecutions restore(K key, long ttl, byte[] value); + /** + * Create a key using the provided serialized value, previously obtained using DUMP. + * + * @param key the key + * @param value the serialized-value type: string + * @param args the {@link RestoreArgs}, must not be {@literal null}. + * @return String simple-string-reply The command returns OK on success. + * @since 5.1 + */ + AsyncExecutions restore(K key, byte[] value, RestoreArgs args); + /** * Sort the elements in a list, set or sorted set. - * + * * @param key the key * @return List<V> array-reply list of sorted elements. */ @@ -269,7 +290,7 @@ public interface NodeSelectionKeyAsyncCommands { /** * Sort the elements in a list, set or sorted set. - * + * * @param channel streaming channel that receives a call for every value * @param key the key * @return Long number of values. @@ -278,7 +299,7 @@ public interface NodeSelectionKeyAsyncCommands { /** * Sort the elements in a list, set or sorted set. - * + * * @param key the key * @param sortArgs sort arguments * @return List<V> array-reply list of sorted elements. @@ -287,7 +308,7 @@ public interface NodeSelectionKeyAsyncCommands { /** * Sort the elements in a list, set or sorted set. - * + * * @param channel streaming channel that receives a call for every value * @param key the key * @param sortArgs sort arguments @@ -297,7 +318,7 @@ public interface NodeSelectionKeyAsyncCommands { /** * Sort the elements in a list, set or sorted set. - * + * * @param key the key * @param sortArgs sort arguments * @param destination the destination key to store sort results @@ -307,7 +328,7 @@ public interface NodeSelectionKeyAsyncCommands { /** * Touch one or more keys. Touch sets the last accessed time for a key. Non-exsitent keys wont get created. - * + * * @param keys the keys * @return Long integer-reply the number of found keys. */ @@ -323,7 +344,7 @@ public interface NodeSelectionKeyAsyncCommands { /** * Determine the type stored at key. - * + * * @param key the key * @return String simple-string-reply type of {@code key}, or {@code none} when {@code key} does not exist. */ @@ -331,14 +352,14 @@ public interface NodeSelectionKeyAsyncCommands { /** * Incrementally iterate the keys space. - * + * * @return KeyScanCursor<K> scan cursor. */ AsyncExecutions> scan(); /** * Incrementally iterate the keys space. - * + * * @param scanArgs scan arguments * @return KeyScanCursor<K> scan cursor. */ @@ -346,7 +367,7 @@ public interface NodeSelectionKeyAsyncCommands { /** * Incrementally iterate the keys space. - * + * * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} * @param scanArgs scan arguments * @return KeyScanCursor<K> scan cursor. @@ -355,7 +376,7 @@ public interface NodeSelectionKeyAsyncCommands { /** * Incrementally iterate the keys space. - * + * * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} * @return KeyScanCursor<K> scan cursor. */ @@ -363,7 +384,7 @@ public interface NodeSelectionKeyAsyncCommands { /** * Incrementally iterate the keys space. - * + * * @param channel streaming channel that receives a call for every key * @return StreamScanCursor scan cursor. */ @@ -371,7 +392,7 @@ public interface NodeSelectionKeyAsyncCommands { /** * Incrementally iterate the keys space. 
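DUMP and RESTORE copy a value byte for byte; the new restore(key, value, RestoreArgs) overload adds REPLACE and TTL options, while the plain three-argument form shown below was already available. A minimal sketch with the same setup as above (key names are hypothetical):

    byte[] payload = async.dump("config:source").get();          // serialized value, or null if the key is absent
    String ok = async.restore("config:copy", 0, payload).get();  // ttl = 0 stores the copy without expiry
    Long touched = async.touch("config:copy").get();             // refreshes last-access time; missing keys are not created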
- * + * * @param channel streaming channel that receives a call for every key * @param scanArgs scan arguments * @return StreamScanCursor scan cursor. @@ -380,7 +401,7 @@ public interface NodeSelectionKeyAsyncCommands { /** * Incrementally iterate the keys space. - * + * * @param channel streaming channel that receives a call for every key * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} * @param scanArgs scan arguments @@ -390,7 +411,7 @@ public interface NodeSelectionKeyAsyncCommands { /** * Incrementally iterate the keys space. - * + * * @param channel streaming channel that receives a call for every key * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} * @return StreamScanCursor scan cursor. diff --git a/src/main/java/com/lambdaworks/redis/cluster/api/async/NodeSelectionListAsyncCommands.java b/src/main/java/io/lettuce/core/cluster/api/async/NodeSelectionListAsyncCommands.java similarity index 85% rename from src/main/java/com/lambdaworks/redis/cluster/api/async/NodeSelectionListAsyncCommands.java rename to src/main/java/io/lettuce/core/cluster/api/async/NodeSelectionListAsyncCommands.java index c99450d89b..c8ae954211 100644 --- a/src/main/java/com/lambdaworks/redis/cluster/api/async/NodeSelectionListAsyncCommands.java +++ b/src/main/java/io/lettuce/core/cluster/api/async/NodeSelectionListAsyncCommands.java @@ -1,28 +1,43 @@ -package com.lambdaworks.redis.cluster.api.async; +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.api.async; import java.util.List; -import com.lambdaworks.redis.KeyValue; -import com.lambdaworks.redis.output.ValueStreamingChannel; -import com.lambdaworks.redis.RedisFuture; + +import io.lettuce.core.KeyValue; +import io.lettuce.core.output.ValueStreamingChannel; /** * Asynchronous executed commands on a node selection for Lists. - * + * * @param Key type. * @param Value type. * @author Mark Paluch * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateAsyncNodeSelectionClusterApi + * @generated by io.lettuce.apigenerator.CreateAsyncNodeSelectionClusterApi */ public interface NodeSelectionListAsyncCommands { /** * Remove and get the first element in a list, or block until one is available. - * + * * @param timeout the timeout in seconds * @param keys the keys * @return KeyValue<K,V> array-reply specifically: - * + * * A {@literal null} multi-bulk when no element could be popped and the timeout expired. A two-element multi-bulk * with the first element being the name of the key where an element was popped and the second element being the * value of the popped element. @@ -31,11 +46,11 @@ public interface NodeSelectionListAsyncCommands { /** * Remove and get the last element in a list, or block until one is available. 
- * + * * @param timeout the timeout in seconds * @param keys the keys * @return KeyValue<K,V> array-reply specifically: - * + * * A {@literal null} multi-bulk when no element could be popped and the timeout expired. A two-element multi-bulk * with the first element being the name of the key where an element was popped and the second element being the * value of the popped element. @@ -44,7 +59,7 @@ public interface NodeSelectionListAsyncCommands { /** * Pop a value from a list, push it to another list and return it; or block until one is available. - * + * * @param timeout the timeout in seconds * @param source the source key * @param destination the destination type: key @@ -55,7 +70,7 @@ public interface NodeSelectionListAsyncCommands { /** * Get an element from a list by its index. - * + * * @param key the key * @param index the index type: long * @return V bulk-string-reply the requested element, or {@literal null} when {@code index} is out of range. @@ -64,7 +79,7 @@ public interface NodeSelectionListAsyncCommands { /** * Insert an element before or after another element in a list. - * + * * @param key the key * @param before the before * @param pivot the pivot @@ -76,7 +91,7 @@ public interface NodeSelectionListAsyncCommands { /** * Get the length of a list. - * + * * @param key the key * @return Long integer-reply the length of the list at {@code key}. */ @@ -84,7 +99,7 @@ public interface NodeSelectionListAsyncCommands { /** * Remove and get the first element in a list. - * + * * @param key the key * @return V bulk-string-reply the value of the first element, or {@literal null} when {@code key} does not exist. */ @@ -92,23 +107,13 @@ public interface NodeSelectionListAsyncCommands { /** * Prepend one or multiple values to a list. - * + * * @param key the key * @param values the value * @return Long integer-reply the length of the list after the push operations. */ AsyncExecutions lpush(K key, V... values); - /** - * Prepend a value to a list, only if the list exists. - * - * @param key the key - * @param value the value - * @return Long integer-reply the length of the list after the push operation. - * @deprecated Use {@link #lpushx(Object, Object[])} - */ - AsyncExecutions lpushx(K key, V value); - /** * Prepend values to a list, only if the list exists. * @@ -120,7 +125,7 @@ public interface NodeSelectionListAsyncCommands { /** * Get a range of elements from a list. - * + * * @param key the key * @param start the start type: long * @param stop the stop type: long @@ -130,7 +135,7 @@ public interface NodeSelectionListAsyncCommands { /** * Get a range of elements from a list. - * + * * @param channel the channel * @param key the key * @param start the start type: long @@ -141,7 +146,7 @@ public interface NodeSelectionListAsyncCommands { /** * Remove elements from a list. - * + * * @param key the key * @param count the count type: long * @param value the value @@ -151,7 +156,7 @@ public interface NodeSelectionListAsyncCommands { /** * Set the value of an element in a list by its index. - * + * * @param key the key * @param index the index type: long * @param value the value @@ -161,7 +166,7 @@ public interface NodeSelectionListAsyncCommands { /** * Trim a list to the specified range. - * + * * @param key the key * @param start the start type: long * @param stop the stop type: long @@ -171,7 +176,7 @@ public interface NodeSelectionListAsyncCommands { /** * Remove and get the last element in a list. 
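The list interface drops the deprecated single-value lpushx/rpushx methods in favor of the var-args forms. A brief sketch of the remaining push/pop/range calls, same setup as above:

    async.rpush("jobs", "a", "b", "c").get();               // length of the list after the push
    async.lpushx("jobs", "urgent").get();                   // pushes only because "jobs" already exists

    List<String> all = async.lrange("jobs", 0, -1).get();   // full contents of the list
    String head = async.lpop("jobs").get();                 // removes and returns the first element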
- * + * * @param key the key * @return V bulk-string-reply the value of the last element, or {@literal null} when {@code key} does not exist. */ @@ -179,7 +184,7 @@ public interface NodeSelectionListAsyncCommands { /** * Remove the last element in a list, append it to another list and return it. - * + * * @param source the source key * @param destination the destination type: key * @return V bulk-string-reply the element being popped and pushed. @@ -188,23 +193,13 @@ public interface NodeSelectionListAsyncCommands { /** * Append one or multiple values to a list. - * + * * @param key the key * @param values the value * @return Long integer-reply the length of the list after the push operation. */ AsyncExecutions rpush(K key, V... values); - /** - * Append a value to a list, only if the list exists. - * - * @param key the key - * @param value the value - * @return Long integer-reply the length of the list after the push operation. - * @deprecated Use {@link #rpushx(java.lang.Object, java.lang.Object[])} - */ - AsyncExecutions rpushx(K key, V value); - /** * Append values to a list, only if the list exists. * diff --git a/src/main/java/io/lettuce/core/cluster/api/async/NodeSelectionScriptingAsyncCommands.java b/src/main/java/io/lettuce/core/cluster/api/async/NodeSelectionScriptingAsyncCommands.java new file mode 100644 index 0000000000..e10759e907 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/api/async/NodeSelectionScriptingAsyncCommands.java @@ -0,0 +1,146 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.api.async; + +import java.util.List; + +import io.lettuce.core.ScriptOutputType; + +/** + * Asynchronous executed commands on a node selection for Scripting. {@link java.lang.String Lua scripts} are encoded by using + * the configured {@link io.lettuce.core.ClientOptions#getScriptCharset() charset}. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.0 + * @generated by io.lettuce.apigenerator.CreateAsyncNodeSelectionClusterApi + */ +public interface NodeSelectionScriptingAsyncCommands { + + /** + * Execute a Lua script server side. + * + * @param script Lua 5.1 script. + * @param type output type + * @param keys key names + * @param expected return type + * @return script result + */ + AsyncExecutions eval(String script, ScriptOutputType type, K... keys); + + /** + * Execute a Lua script server side. + * + * @param script Lua 5.1 script. + * @param type output type + * @param keys key names + * @param expected return type + * @return script result + * @since 6.0 + */ + AsyncExecutions eval(byte[] script, ScriptOutputType type, K... keys); + + /** + * Execute a Lua script server side. + * + * @param script Lua 5.1 script. + * @param type the type + * @param keys the keys + * @param values the values + * @param expected return type + * @return script result + */ + AsyncExecutions eval(String script, ScriptOutputType type, K[] keys, V... 
values); + + /** + * Execute a Lua script server side. + * + * @param script Lua 5.1 script. + * @param type the type + * @param keys the keys + * @param values the values + * @param expected return type + * @return script result + * @since 6.0 + */ + AsyncExecutions eval(byte[] script, ScriptOutputType type, K[] keys, V... values); + + /** + * Evaluates a script cached on the server side by its SHA1 digest + * + * @param digest SHA1 of the script + * @param type the type + * @param keys the keys + * @param expected return type + * @return script result + */ + AsyncExecutions evalsha(String digest, ScriptOutputType type, K... keys); + + /** + * Execute a Lua script server side. + * + * @param digest SHA1 of the script + * @param type the type + * @param keys the keys + * @param values the values + * @param expected return type + * @return script result + */ + AsyncExecutions evalsha(String digest, ScriptOutputType type, K[] keys, V... values); + + /** + * Check existence of scripts in the script cache. + * + * @param digests script digests + * @return List<Boolean> array-reply The command returns an array of integers that correspond to the specified SHA1 + * digest arguments. For every corresponding SHA1 digest of a script that actually exists in the script cache, an 1 + * is returned, otherwise 0 is returned. + */ + AsyncExecutions> scriptExists(String... digests); + + /** + * Remove all the scripts from the script cache. + * + * @return String simple-string-reply + */ + AsyncExecutions scriptFlush(); + + /** + * Kill the script currently in execution. + * + * @return String simple-string-reply + */ + AsyncExecutions scriptKill(); + + /** + * Load the specified Lua script into the script cache. + * + * @param script script content + * @return String bulk-string-reply This command returns the SHA1 digest of the script added into the script cache. + * @since 6.0 + */ + AsyncExecutions scriptLoad(String script); + + /** + * Load the specified Lua script into the script cache. + * + * @param script script content + * @return String bulk-string-reply This command returns the SHA1 digest of the script added into the script cache. + * @since 6.0 + */ + AsyncExecutions scriptLoad(byte[] script); +} diff --git a/src/main/java/com/lambdaworks/redis/cluster/api/async/NodeSelectionServerAsyncCommands.java b/src/main/java/io/lettuce/core/cluster/api/async/NodeSelectionServerAsyncCommands.java similarity index 79% rename from src/main/java/com/lambdaworks/redis/cluster/api/async/NodeSelectionServerAsyncCommands.java rename to src/main/java/io/lettuce/core/cluster/api/async/NodeSelectionServerAsyncCommands.java index bf47db28d0..9f6c2b8333 100644 --- a/src/main/java/com/lambdaworks/redis/cluster/api/async/NodeSelectionServerAsyncCommands.java +++ b/src/main/java/io/lettuce/core/cluster/api/async/NodeSelectionServerAsyncCommands.java @@ -1,46 +1,63 @@ -package com.lambdaworks.redis.cluster.api.async; +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
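Loading a script on every node up front is the typical reason to route scripting commands through a node selection: a later EVALSHA then cannot miss the script cache, whichever node the key hashes to. Sketch with the same setup as above; masters(), commands() and the futures() array are assumptions taken from the surrounding node-selection API, and ScriptOutputType is the existing output-type enum:

    String script = "return redis.call('incrby', KEYS[1], ARGV[1])";

    // Load the script into the cache of every master node.
    AsyncExecutions<String> digests = async.masters().commands().scriptLoad(script);
    CompletableFuture.allOf(digests.futures()).join();
    String sha = digests.futures()[0].join(); // every node returns the same SHA1

    // Execute by digest; the call is routed by its key as usual.
    Long counter = async.<Long> evalsha(sha, ScriptOutputType.INTEGER, new String[] { "counter" }, "5").get();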
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.api.async; import java.util.Date; import java.util.List; -import com.lambdaworks.redis.KillArgs; -import com.lambdaworks.redis.protocol.CommandType; -import com.lambdaworks.redis.RedisFuture; +import java.util.Map; + +import io.lettuce.core.KillArgs; +import io.lettuce.core.UnblockType; +import io.lettuce.core.protocol.CommandType; /** * Asynchronous executed commands on a node selection for Server Control. - * + * * @param Key type. * @param Value type. * @author Mark Paluch * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateAsyncNodeSelectionClusterApi + * @generated by io.lettuce.apigenerator.CreateAsyncNodeSelectionClusterApi */ public interface NodeSelectionServerAsyncCommands { /** * Asynchronously rewrite the append-only file. - * + * * @return String simple-string-reply always {@code OK}. */ AsyncExecutions bgrewriteaof(); /** * Asynchronously save the dataset to disk. - * + * * @return String simple-string-reply */ AsyncExecutions bgsave(); /** * Get the current connection name. - * + * * @return K bulk-string-reply The connection name, or a null bulk reply if no name is set. */ AsyncExecutions clientGetname(); /** * Set the current connection name. - * + * * @param name the client name * @return simple-string-reply {@code OK} if the connection name was successfully set. */ @@ -48,7 +65,7 @@ public interface NodeSelectionServerAsyncCommands { /** * Kill the connection of a client identified by ip:port. - * + * * @param addr ip:port * @return String simple-string-reply {@code OK} if the connection exists and has been closed */ @@ -62,9 +79,19 @@ public interface NodeSelectionServerAsyncCommands { */ AsyncExecutions clientKill(KillArgs killArgs); + /** + * Unblock the specified blocked client. + * + * @param id the client id. + * @param type unblock type. + * @return Long integer-reply number of unblocked connections. + * @since 5.1 + */ + AsyncExecutions clientUnblock(long id, UnblockType type); + /** * Stop processing commands from clients for some time. - * + * * @param timeout the timeout value in milliseconds * @return String simple-string-reply The command returns OK or an error if the timeout is invalid. */ @@ -72,22 +99,30 @@ public interface NodeSelectionServerAsyncCommands { /** * Get the list of client connections. - * + * * @return String bulk-string-reply a unique string, formatted as follows: One client connection per line (separated by LF), * each line is composed of a succession of property=value fields separated by a space character. */ AsyncExecutions clientList(); + /** + * Get the id of the current connection. + * + * @return Long The command just returns the ID of the current connection. + * @since 5.3 + */ + AsyncExecutions clientId(); + /** * Returns an array reply of details about all Redis commands. - * + * * @return List<Object> array-reply */ AsyncExecutions> command(); /** * Returns an array reply of details about the requested commands. - * + * * @param commands the commands to query for * @return List<Object> array-reply */ @@ -95,7 +130,7 @@ public interface NodeSelectionServerAsyncCommands { /** * Returns an array reply of details about the requested commands. - * + * * @param commands the commands to query for * @return List<Object> array-reply */ @@ -103,29 +138,29 @@ public interface NodeSelectionServerAsyncCommands { /** * Get total number of Redis commands. 
- * + * * @return Long integer-reply of number of total commands in this Redis server. */ AsyncExecutions commandCount(); /** * Get the value of a configuration parameter. - * + * * @param parameter name of the parameter - * @return List<String> bulk-string-reply + * @return Map<String, String> bulk-string-reply */ - AsyncExecutions> configGet(String parameter); + AsyncExecutions> configGet(String parameter); /** * Reset the stats returned by INFO. - * + * * @return String simple-string-reply always {@code OK}. */ AsyncExecutions configResetstat(); /** * Rewrite the configuration file with the in memory configuration. - * + * * @return String simple-string-reply {@code OK} when the configuration was rewritten properly. Otherwise an error is * returned. */ @@ -133,7 +168,7 @@ public interface NodeSelectionServerAsyncCommands { /** * Set a configuration parameter to the given value. - * + * * @param parameter the parameter name * @param value the parameter value * @return String simple-string-reply: {@code OK} when the configuration was set properly. Otherwise an error is returned. @@ -142,13 +177,14 @@ public interface NodeSelectionServerAsyncCommands { /** * Return the number of keys in the selected database. - * + * * @return Long integer-reply */ AsyncExecutions dbsize(); /** * Crash and recover + * * @param delay optional delay in milliseconds * @return String simple-string-reply */ @@ -164,7 +200,7 @@ public interface NodeSelectionServerAsyncCommands { /** * Get debugging information about a key. - * + * * @param key the key * @return String simple-string-reply */ @@ -179,6 +215,7 @@ public interface NodeSelectionServerAsyncCommands { /** * Restart the server gracefully. + * * @param delay optional delay in milliseconds * @return String simple-string-reply */ @@ -194,7 +231,7 @@ public interface NodeSelectionServerAsyncCommands { /** * Remove all keys from all databases. - * + * * @return String simple-string-reply */ AsyncExecutions flushall(); @@ -208,7 +245,7 @@ public interface NodeSelectionServerAsyncCommands { /** * Remove all keys from the current database. - * + * * @return String simple-string-reply */ AsyncExecutions flushdb(); @@ -222,14 +259,14 @@ public interface NodeSelectionServerAsyncCommands { /** * Get information and statistics about the server. - * + * * @return String bulk-string-reply as a collection of text lines. */ AsyncExecutions info(); /** * Get information and statistics about the server. - * + * * @param section the section type: string * @return String bulk-string-reply as a collection of text lines. */ @@ -237,21 +274,29 @@ public interface NodeSelectionServerAsyncCommands { /** * Get the UNIX time stamp of the last successful save to disk. - * + * * @return Date integer-reply an UNIX time stamp. */ AsyncExecutions lastsave(); + /** + * Reports the number of bytes that a key and its value require to be stored in RAM. + * + * @return memory usage in bytes. + * @since 5.2 + */ + AsyncExecutions memoryUsage(K key); + /** * Synchronously save the dataset to disk. - * + * * @return String simple-string-reply The commands returns OK on success. */ AsyncExecutions save(); /** - * Make the server a slave of another instance, or promote it as master. - * + * Make the server a replica of another instance, or promote it as master. + * * @param host the host type: string * @param port the port type: string * @return String simple-string-reply @@ -260,21 +305,21 @@ public interface NodeSelectionServerAsyncCommands { /** * Promote server as master. 
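configGet now returns a Map instead of a flat name/value list, which reads more naturally when a glob matches several parameters. A per-node sketch using the node selection, with the same setup and the same assumptions about masters()/commands()/futures() as above:

    AsyncExecutions<Map<String, String>> policy = async.masters().commands().configGet("maxmemory-policy");
    for (CompletableFuture<Map<String, String>> node : policy.futures()) {
        node.join().forEach((parameter, value) -> System.out.println(parameter + " = " + value));
    }

    // Additions in this change: connection id and per-key memory usage.
    for (CompletableFuture<Long> id : async.masters().commands().clientId().futures()) {
        System.out.println("connection id: " + id.join());
    }
    Long bytes = async.memoryUsage("project").get();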
- * + * * @return String simple-string-reply */ AsyncExecutions slaveofNoOne(); /** * Read the slow log. - * + * * @return List<Object> deeply nested multi bulk replies */ AsyncExecutions> slowlogGet(); /** * Read the slow log. - * + * * @param count the count * @return List<Object> deeply nested multi bulk replies */ @@ -282,33 +327,25 @@ public interface NodeSelectionServerAsyncCommands { /** * Obtaining the current length of the slow log. - * + * * @return Long length of the slow log. */ AsyncExecutions slowlogLen(); /** * Resetting the slow log. - * + * * @return String simple-string-reply The commands returns OK on success. */ AsyncExecutions slowlogReset(); - /** - * Internal command used for replication. - * - * @return String simple-string-reply - */ - @Deprecated - AsyncExecutions sync(); - /** * Return the current server time. - * + * * @return List<V> array-reply specifically: - * + * * A multi bulk reply containing two elements: - * + * * unix time in seconds. microseconds. */ AsyncExecutions> time(); diff --git a/src/main/java/com/lambdaworks/redis/cluster/api/async/NodeSelectionSetAsyncCommands.java b/src/main/java/io/lettuce/core/cluster/api/async/NodeSelectionSetAsyncCommands.java similarity index 88% rename from src/main/java/com/lambdaworks/redis/cluster/api/async/NodeSelectionSetAsyncCommands.java rename to src/main/java/io/lettuce/core/cluster/api/async/NodeSelectionSetAsyncCommands.java index 6e9d1c6337..7b2211f630 100644 --- a/src/main/java/com/lambdaworks/redis/cluster/api/async/NodeSelectionSetAsyncCommands.java +++ b/src/main/java/io/lettuce/core/cluster/api/async/NodeSelectionSetAsyncCommands.java @@ -1,28 +1,43 @@ -package com.lambdaworks.redis.cluster.api.async; +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.api.async; import java.util.List; import java.util.Set; -import com.lambdaworks.redis.ScanArgs; -import com.lambdaworks.redis.ScanCursor; -import com.lambdaworks.redis.StreamScanCursor; -import com.lambdaworks.redis.ValueScanCursor; -import com.lambdaworks.redis.output.ValueStreamingChannel; -import com.lambdaworks.redis.RedisFuture; + +import io.lettuce.core.ScanArgs; +import io.lettuce.core.ScanCursor; +import io.lettuce.core.StreamScanCursor; +import io.lettuce.core.ValueScanCursor; +import io.lettuce.core.output.ValueStreamingChannel; /** * Asynchronous executed commands on a node selection for Sets. - * + * * @param Key type. * @param Value type. * @author Mark Paluch * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateAsyncNodeSelectionClusterApi + * @generated by io.lettuce.apigenerator.CreateAsyncNodeSelectionClusterApi */ public interface NodeSelectionSetAsyncCommands { /** * Add one or more members to a set. 
- * + * * @param key the key * @param members the member type: value * @return Long integer-reply the number of elements that were added to the set, not including all the elements already @@ -32,7 +47,7 @@ public interface NodeSelectionSetAsyncCommands { /** * Get the number of members in a set. - * + * * @param key the key * @return Long integer-reply the cardinality (number of elements) of the set, or {@literal false} if {@code key} does not * exist. @@ -41,7 +56,7 @@ public interface NodeSelectionSetAsyncCommands { /** * Subtract multiple sets. - * + * * @param keys the key * @return Set<V> array-reply list with members of the resulting set. */ @@ -49,7 +64,7 @@ public interface NodeSelectionSetAsyncCommands { /** * Subtract multiple sets. - * + * * @param channel the channel * @param keys the keys * @return Long count of members of the resulting set. @@ -58,7 +73,7 @@ public interface NodeSelectionSetAsyncCommands { /** * Subtract multiple sets and store the resulting set in a key. - * + * * @param destination the destination type: key * @param keys the key * @return Long integer-reply the number of elements in the resulting set. @@ -67,7 +82,7 @@ public interface NodeSelectionSetAsyncCommands { /** * Intersect multiple sets. - * + * * @param keys the key * @return Set<V> array-reply list with members of the resulting set. */ @@ -75,7 +90,7 @@ public interface NodeSelectionSetAsyncCommands { /** * Intersect multiple sets. - * + * * @param channel the channel * @param keys the keys * @return Long count of members of the resulting set. @@ -84,7 +99,7 @@ public interface NodeSelectionSetAsyncCommands { /** * Intersect multiple sets and store the resulting set in a key. - * + * * @param destination the destination type: key * @param keys the key * @return Long integer-reply the number of elements in the resulting set. @@ -93,11 +108,11 @@ public interface NodeSelectionSetAsyncCommands { /** * Determine if a given value is a member of a set. - * + * * @param key the key * @param member the member type: value * @return Boolean integer-reply specifically: - * + * * {@literal true} if the element is a member of the set. {@literal false} if the element is not a member of the * set, or if {@code key} does not exist. */ @@ -105,12 +120,12 @@ public interface NodeSelectionSetAsyncCommands { /** * Move a member from one set to another. - * + * * @param source the source key * @param destination the destination type: key * @param member the member type: value * @return Boolean integer-reply specifically: - * + * * {@literal true} if the element is moved. {@literal false} if the element is not a member of {@code source} and no * operation was performed. */ @@ -118,7 +133,7 @@ public interface NodeSelectionSetAsyncCommands { /** * Get all the members in a set. - * + * * @param key the key * @return Set<V> array-reply all elements of the set. */ @@ -126,7 +141,7 @@ public interface NodeSelectionSetAsyncCommands { /** * Get all the members in a set. - * + * * @param channel the channel * @param key the keys * @return Long count of members of the resulting set. @@ -135,7 +150,7 @@ public interface NodeSelectionSetAsyncCommands { /** * Remove and return a random member from a set. - * + * * @param key the key * @return V bulk-string-reply the removed element, or {@literal null} when {@code key} does not exist. */ @@ -152,9 +167,9 @@ public interface NodeSelectionSetAsyncCommands { /** * Get one random member from a set. 
- * + * * @param key the key - * + * * @return V bulk-string-reply without the additional {@code count} argument the command returns a Bulk Reply with the * randomly selected element, or {@literal null} when {@code key} does not exist. */ @@ -162,7 +177,7 @@ public interface NodeSelectionSetAsyncCommands { /** * Get one or multiple random members from a set. - * + * * @param key the key * @param count the count type: long * @return Set<V> bulk-string-reply without the additional {@code count} argument the command returns a Bulk Reply @@ -172,7 +187,7 @@ public interface NodeSelectionSetAsyncCommands { /** * Get one or multiple random members from a set. - * + * * @param channel streaming channel that receives a call for every value * @param key the key * @param count the count @@ -182,7 +197,7 @@ public interface NodeSelectionSetAsyncCommands { /** * Remove one or more members from a set. - * + * * @param key the key * @param members the member type: value * @return Long integer-reply the number of members that were removed from the set, not including non existing members. @@ -191,7 +206,7 @@ public interface NodeSelectionSetAsyncCommands { /** * Add multiple sets. - * + * * @param keys the key * @return Set<V> array-reply list with members of the resulting set. */ @@ -199,7 +214,7 @@ public interface NodeSelectionSetAsyncCommands { /** * Add multiple sets. - * + * * @param channel streaming channel that receives a call for every value * @param keys the keys * @return Long count of members of the resulting set. @@ -208,7 +223,7 @@ public interface NodeSelectionSetAsyncCommands { /** * Add multiple sets and store the resulting set in a key. - * + * * @param destination the destination type: key * @param keys the key * @return Long integer-reply the number of elements in the resulting set. @@ -217,7 +232,7 @@ public interface NodeSelectionSetAsyncCommands { /** * Incrementally iterate Set elements. - * + * * @param key the key * @return ValueScanCursor<V> scan cursor. */ @@ -225,7 +240,7 @@ public interface NodeSelectionSetAsyncCommands { /** * Incrementally iterate Set elements. - * + * * @param key the key * @param scanArgs scan arguments * @return ValueScanCursor<V> scan cursor. @@ -234,7 +249,7 @@ public interface NodeSelectionSetAsyncCommands { /** * Incrementally iterate Set elements. - * + * * @param key the key * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} * @param scanArgs scan arguments @@ -244,7 +259,7 @@ public interface NodeSelectionSetAsyncCommands { /** * Incrementally iterate Set elements. - * + * * @param key the key * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} * @return ValueScanCursor<V> scan cursor. @@ -253,7 +268,7 @@ public interface NodeSelectionSetAsyncCommands { /** * Incrementally iterate Set elements. - * + * * @param channel streaming channel that receives a call for every value * @param key the key * @return StreamScanCursor scan cursor. @@ -262,7 +277,7 @@ public interface NodeSelectionSetAsyncCommands { /** * Incrementally iterate Set elements. - * + * * @param channel streaming channel that receives a call for every value * @param key the key * @param scanArgs scan arguments @@ -272,7 +287,7 @@ public interface NodeSelectionSetAsyncCommands { /** * Incrementally iterate Set elements. 
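The SSCAN overloads documented above follow the usual cursor protocol: each round trip returns a page of members plus a cursor that is fed back in until the server reports completion. A compact sketch, same setup as above, with io.lettuce.core.ScanArgs and io.lettuce.core.ValueScanCursor on the import list:

    async.sadd("tags", "redis", "cluster", "async").get();

    ValueScanCursor<String> cursor = async.sscan("tags", ScanArgs.Builder.limit(2)).get();
    cursor.getValues().forEach(System.out::println);
    while (!cursor.isFinished()) {
        cursor = async.sscan("tags", cursor).get(); // resume from the previous cursor
        cursor.getValues().forEach(System.out::println);
    }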
- * + * * @param channel streaming channel that receives a call for every value * @param key the key * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} @@ -283,7 +298,7 @@ public interface NodeSelectionSetAsyncCommands { /** * Incrementally iterate Set elements. - * + * * @param channel streaming channel that receives a call for every value * @param key the key * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} diff --git a/src/main/java/io/lettuce/core/cluster/api/async/NodeSelectionSortedSetAsyncCommands.java b/src/main/java/io/lettuce/core/cluster/api/async/NodeSelectionSortedSetAsyncCommands.java new file mode 100644 index 0000000000..bc8efc95db --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/api/async/NodeSelectionSortedSetAsyncCommands.java @@ -0,0 +1,1251 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.api.async; + +import java.util.List; + +import io.lettuce.core.*; +import io.lettuce.core.output.ScoredValueStreamingChannel; +import io.lettuce.core.output.ValueStreamingChannel; + +/** + * Asynchronous executed commands on a node selection for Sorted Sets. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.0 + * @generated by io.lettuce.apigenerator.CreateAsyncNodeSelectionClusterApi + */ +public interface NodeSelectionSortedSetAsyncCommands { + + /** + * Removes and returns a member with the lowest scores in the sorted set stored at one of the keys. + * + * @param timeout the timeout in seconds. + * @param keys the keys. + * @return KeyValue<K, ScoredValue<V>> multi-bulk containing the name of the key, the score and the popped member. + * @since 5.1 + */ + AsyncExecutions>> bzpopmin(long timeout, K... keys); + + /** + * Removes and returns a member with the highest scores in the sorted set stored at one of the keys. + * + * @param timeout the timeout in seconds. + * @param keys the keys. + * @return KeyValue<K, ScoredValue<V>> multi-bulk containing the name of the key, the score and the popped member. + * @since 5.1 + */ + AsyncExecutions>> bzpopmax(long timeout, K... keys); + + /** + * Add one or more members to a sorted set, or update its score if it already exists. + * + * @param key the key + * @param score the score + * @param member the member + * + * @return Long integer-reply specifically: + * + * The number of elements added to the sorted sets, not including elements already existing for which the score was + * updated. + */ + AsyncExecutions zadd(K key, double score, V member); + + /** + * Add one or more members to a sorted set, or update its score if it already exists. + * + * @param key the key + * @param scoresAndValues the scoresAndValue tuples (score,value,score,value,...) 
+ * @return Long integer-reply specifically: + * + * The number of elements added to the sorted sets, not including elements already existing for which the score was + * updated. + */ + AsyncExecutions zadd(K key, Object... scoresAndValues); + + /** + * Add one or more members to a sorted set, or update its score if it already exists. + * + * @param key the key + * @param scoredValues the scored values + * @return Long integer-reply specifically: + * + * The number of elements added to the sorted sets, not including elements already existing for which the score was + * updated. + */ + AsyncExecutions zadd(K key, ScoredValue... scoredValues); + + /** + * Add one or more members to a sorted set, or update its score if it already exists. + * + * @param key the key + * @param zAddArgs arguments for zadd + * @param score the score + * @param member the member + * + * @return Long integer-reply specifically: + * + * The number of elements added to the sorted sets, not including elements already existing for which the score was + * updated. + */ + AsyncExecutions zadd(K key, ZAddArgs zAddArgs, double score, V member); + + /** + * Add one or more members to a sorted set, or update its score if it already exists. + * + * @param key the key + * @param zAddArgs arguments for zadd + * @param scoresAndValues the scoresAndValue tuples (score,value,score,value,...) + * @return Long integer-reply specifically: + * + * The number of elements added to the sorted sets, not including elements already existing for which the score was + * updated. + */ + AsyncExecutions zadd(K key, ZAddArgs zAddArgs, Object... scoresAndValues); + + /** + * Add one or more members to a sorted set, or update its score if it already exists. + * + * @param key the ke + * @param zAddArgs arguments for zadd + * @param scoredValues the scored values + * @return Long integer-reply specifically: + * + * The number of elements added to the sorted sets, not including elements already existing for which the score was + * updated. + */ + AsyncExecutions zadd(K key, ZAddArgs zAddArgs, ScoredValue... scoredValues); + + /** + * Add one or more members to a sorted set, or update its score if it already exists applying the {@code INCR} option. ZADD + * acts like ZINCRBY. + * + * @param key the key + * @param score the score + * @param member the member + * @return Long integer-reply specifically: The total number of elements changed + */ + AsyncExecutions zaddincr(K key, double score, V member); + + /** + * Add one or more members to a sorted set, or update its score if it already exists applying the {@code INCR} option. ZADD + * acts like ZINCRBY. + * + * @param key the key + * @param zAddArgs arguments for zadd + * @param score the score + * @param member the member + * @return Long integer-reply specifically: The total number of elements changed + * @since 4.3 + */ + AsyncExecutions zaddincr(K key, ZAddArgs zAddArgs, double score, V member); + + /** + * Get the number of members in a sorted set. + * + * @param key the key + * @return Long integer-reply the cardinality (number of elements) of the sorted set, or {@literal false} if {@code key} + * does not exist. + */ + AsyncExecutions zcard(K key); + + /** + * Count the members in a sorted set with scores within the given values. + * + * @param key the key + * @param min min score + * @param max max score + * @return Long integer-reply the number of elements in the specified score range. 
+ * @deprecated Use {@link #zcount(java.lang.Object, Range)} + */ + @Deprecated + AsyncExecutions zcount(K key, double min, double max); + + /** + * Count the members in a sorted set with scores within the given values. + * + * @param key the key + * @param min min score + * @param max max score + * @return Long integer-reply the number of elements in the specified score range. + * @deprecated Use {@link #zcount(java.lang.Object, Range)} + */ + @Deprecated + AsyncExecutions zcount(K key, String min, String max); + + /** + * Count the members in a sorted set with scores within the given {@link Range}. + * + * @param key the key + * @param range the range + * @return Long integer-reply the number of elements in the specified score range. + * @since 4.3 + */ + AsyncExecutions zcount(K key, Range range); + + /** + * Increment the score of a member in a sorted set. + * + * @param key the key + * @param amount the increment type: long + * @param member the member type: value + * @return Double bulk-string-reply the new score of {@code member} (a double precision floating point number), represented + * as string. + */ + AsyncExecutions zincrby(K key, double amount, V member); + + /** + * Intersect multiple sorted sets and store the resulting sorted set in a new key. + * + * @param destination the destination + * @param keys the keys + * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. + */ + AsyncExecutions zinterstore(K destination, K... keys); + + /** + * Intersect multiple sorted sets and store the resulting sorted set in a new key. + * + * @param destination the destination + * @param storeArgs the storeArgs + * @param keys the keys + * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. + */ + AsyncExecutions zinterstore(K destination, ZStoreArgs storeArgs, K... keys); + + /** + * Count the number of members in a sorted set between a given lexicographical range. + * + * @param key the key + * @param min min score + * @param max max score + * @return Long integer-reply the number of elements in the specified score range. + * @deprecated Use {@link #zlexcount(java.lang.Object, Range)} + */ + @Deprecated + AsyncExecutions zlexcount(K key, String min, String max); + + /** + * Count the number of members in a sorted set between a given lexicographical range. + * + * @param key the key + * @param range the range + * @return Long integer-reply the number of elements in the specified score range. + * @since 4.3 + */ + AsyncExecutions zlexcount(K key, Range range); + + /** + * Removes and returns up to count members with the lowest scores in the sorted set stored at key. + * + * @param key the key + * @return ScoredValue<V> the removed element. + * @since 5.1 + */ + AsyncExecutions> zpopmin(K key); + + /** + * Removes and returns up to count members with the lowest scores in the sorted set stored at key. + * + * @param key the key. + * @param count the number of elements to return. + * @return List<ScoredValue<V>> array-reply list of popped scores and elements. + * @since 5.1 + */ + AsyncExecutions>> zpopmin(K key, long count); + + /** + * Removes and returns up to count members with the highest scores in the sorted set stored at key. + * + * @param key the key + * @return ScoredValue<V> the removed element. + * @since 5.1 + */ + AsyncExecutions> zpopmax(K key); + + /** + * Removes and returns up to count members with the highest scores in the sorted set stored at key. + * + * @param key the key. 
+ * @param count the number of elements to return. + * @return List<ScoredValue<V>> array-reply list of popped scores and elements. + * @since 5.1 + */ + AsyncExecutions>> zpopmax(K key, long count); + + /** + * Return a range of members in a sorted set, by index. + * + * @param key the key + * @param start the start + * @param stop the stop + * @return List<V> array-reply list of elements in the specified range. + */ + AsyncExecutions> zrange(K key, long start, long stop); + + /** + * Return a range of members in a sorted set, by index. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param start the start + * @param stop the stop + * @return Long count of elements in the specified range. + */ + AsyncExecutions zrange(ValueStreamingChannel channel, K key, long start, long stop); + + /** + * Return a range of members with scores in a sorted set, by index. + * + * @param key the key + * @param start the start + * @param stop the stop + * @return List<V> array-reply list of elements in the specified range. + */ + AsyncExecutions>> zrangeWithScores(K key, long start, long stop); + + /** + * Stream over a range of members with scores in a sorted set, by index. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param start the start + * @param stop the stop + * @return Long count of elements in the specified range. + */ + AsyncExecutions zrangeWithScores(ScoredValueStreamingChannel channel, K key, long start, long stop); + + /** + * Return a range of members in a sorted set, by lexicographical range. + * + * @param key the key + * @param min min score + * @param max max score + * @return List<V> array-reply list of elements in the specified range. + * @deprecated Use {@link #zrangebylex(java.lang.Object, Range)} + */ + @Deprecated + AsyncExecutions> zrangebylex(K key, String min, String max); + + /** + * Return a range of members in a sorted set, by lexicographical range. + * + * @param key the key + * @param range the range + * @return List<V> array-reply list of elements in the specified range. + * @since 4.3 + */ + AsyncExecutions> zrangebylex(K key, Range range); + + /** + * Return a range of members in a sorted set, by lexicographical range. + * + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return List<V> array-reply list of elements in the specified range. + * @deprecated Use {@link #zrangebylex(java.lang.Object, Range)} + */ + @Deprecated + AsyncExecutions> zrangebylex(K key, String min, String max, long offset, long count); + + /** + * Return a range of members in a sorted set, by lexicographical range. + * + * @param key the key + * @param range the range + * @param limit the limit + * @return List<V> array-reply list of elements in the specified range. + * @since 4.3 + */ + AsyncExecutions> zrangebylex(K key, Range range, Limit limit); + + /** + * Return a range of members in a sorted set, by score. + * + * @param key the key + * @param min min score + * @param max max score + * @return List<V> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrangebyscore(java.lang.Object, Range)} + */ + @Deprecated + AsyncExecutions> zrangebyscore(K key, double min, double max); + + /** + * Return a range of members in a sorted set, by score. 
+ * + * @param key the key + * @param min min score + * @param max max score + * @return List<V> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrangebyscore(java.lang.Object, Range)} + */ + @Deprecated + AsyncExecutions> zrangebyscore(K key, String min, String max); + + /** + * Return a range of members in a sorted set, by score. + * + * @param key the key + * @param range the range + * @return List<V> array-reply list of elements in the specified score range. + * @since 4.3 + */ + AsyncExecutions> zrangebyscore(K key, Range range); + + /** + * Return a range of members in a sorted set, by score. + * + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return List<V> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrangebyscore(java.lang.Object, Range, Limit)} + */ + @Deprecated + AsyncExecutions> zrangebyscore(K key, double min, double max, long offset, long count); + + /** + * Return a range of members in a sorted set, by score. + * + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return List<V> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrangebyscore(java.lang.Object, Range, Limit)} + */ + @Deprecated + AsyncExecutions> zrangebyscore(K key, String min, String max, long offset, long count); + + /** + * Return a range of members in a sorted set, by score. + * + * @param key the key + * @param range the range + * @param limit the limit + * @return List<V> array-reply list of elements in the specified score range. + * @since 4.3 + */ + AsyncExecutions> zrangebyscore(K key, Range range, Limit limit); + + /** + * Stream over a range of members in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param min min score + * @param max max score + * @return Long count of elements in the specified score range. + * @deprecated Use {@link #zrangebyscore(ValueStreamingChannel, java.lang.Object, Range)} + */ + @Deprecated + AsyncExecutions zrangebyscore(ValueStreamingChannel channel, K key, double min, double max); + + /** + * Stream over a range of members in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param min min score + * @param max max score + * @return Long count of elements in the specified score range. + * @deprecated Use {@link #zrangebyscore(ValueStreamingChannel, java.lang.Object, Range)} + */ + @Deprecated + AsyncExecutions zrangebyscore(ValueStreamingChannel channel, K key, String min, String max); + + /** + * Stream over a range of members in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param range the range + * @return Long count of elements in the specified score range. + * @since 4.3 + */ + AsyncExecutions zrangebyscore(ValueStreamingChannel channel, K key, Range range); + + /** + * Stream over range of members in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return Long count of elements in the specified score range. 
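The Range and Limit overloads introduced in 4.3 replace the deprecated min/max signatures; a hedged sketch reusing the masters selection and key name from the previous example:

    // Fetch at most ten members with scores between 1 and 100 (inclusive) from every selected node.
    AsyncExecutions<List<String>> byScore = masters.commands()
            .zrangebyscore("leaderboard", Range.create(1.0, 100.0), Limit.create(0, 10));
    byScore.forEach(future -> future.thenAccept(values -> System.out.println("page: " + values)));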
+ * @deprecated Use {@link #zrangebyscore(ValueStreamingChannel, java.lang.Object, Range, Limit limit)} + */ + @Deprecated + AsyncExecutions zrangebyscore(ValueStreamingChannel channel, K key, double min, double max, long offset, long count); + + /** + * Stream over a range of members in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return Long count of elements in the specified score range. + * @deprecated Use {@link #zrangebyscore(ValueStreamingChannel, java.lang.Object, Range, Limit limit)} + */ + @Deprecated + AsyncExecutions zrangebyscore(ValueStreamingChannel channel, K key, String min, String max, long offset, long count); + + /** + * Stream over a range of members in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param range the range + * @param limit the limit + * @return Long count of elements in the specified score range. + * @since 4.3 + */ + AsyncExecutions zrangebyscore(ValueStreamingChannel channel, K key, Range range, Limit limit); + + /** + * Return a range of members with score in a sorted set, by score. + * + * @param key the key + * @param min min score + * @param max max score + * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrangebyscoreWithScores(java.lang.Object, Range)} + */ + @Deprecated + AsyncExecutions>> zrangebyscoreWithScores(K key, double min, double max); + + /** + * Return a range of members with score in a sorted set, by score. + * + * @param key the key + * @param min min score + * @param max max score + * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrangebyscoreWithScores(java.lang.Object, Range)} + */ + @Deprecated + AsyncExecutions>> zrangebyscoreWithScores(K key, String min, String max); + + /** + * Return a range of members with score in a sorted set, by score. + * + * @param key the key + * @param range the range + * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. + * @since 4.3 + */ + AsyncExecutions>> zrangebyscoreWithScores(K key, Range range); + + /** + * Return a range of members with score in a sorted set, by score. + * + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrangebyscoreWithScores(java.lang.Object, Range, Limit limit)} + */ + @Deprecated + AsyncExecutions>> zrangebyscoreWithScores(K key, double min, double max, long offset, long count); + + /** + * Return a range of members with score in a sorted set, by score. + * + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrangebyscoreWithScores(java.lang.Object, Range, Limit)} + */ + @Deprecated + AsyncExecutions>> zrangebyscoreWithScores(K key, String min, String max, long offset, long count); + + /** + * Return a range of members with score in a sorted set, by score. 
+ * + * @param key the key + * @param range the range + * @param limit the limit + * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. + * @since 4.3 + */ + AsyncExecutions>> zrangebyscoreWithScores(K key, Range range, Limit limit); + + /** + * Stream over a range of members with scores in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param min min score + * @param max max score + * @return Long count of elements in the specified score range. + * @deprecated Use {@link #zrangebyscoreWithScores(ScoredValueStreamingChannel, java.lang.Object, Range)} + */ + @Deprecated + AsyncExecutions zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double min, double max); + + /** + * Stream over a range of members with scores in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param min min score + * @param max max score + * @return Long count of elements in the specified score range. + * @deprecated Use {@link #zrangebyscoreWithScores(ScoredValueStreamingChannel, java.lang.Object, Range)} + */ + @Deprecated + AsyncExecutions zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String min, String max); + + /** + * Stream over a range of members with scores in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param range the range + * @return Long count of elements in the specified score range. + * @since 4.3 + */ + AsyncExecutions zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, Range range); + + /** + * Stream over a range of members with scores in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return Long count of elements in the specified score range. + * @deprecated Use {@link #zrangebyscoreWithScores(ScoredValueStreamingChannel, java.lang.Object, Range, Limit limit)} + */ + @Deprecated + AsyncExecutions zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double min, double max, long offset, long count); + + /** + * Stream over a range of members with scores in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return Long count of elements in the specified score range. + * @deprecated Use {@link #zrangebyscoreWithScores(ScoredValueStreamingChannel, java.lang.Object, Range, Limit limit)} + */ + @Deprecated + AsyncExecutions zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String min, String max, long offset, long count); + + /** + * Stream over a range of members with scores in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param range the range + * @param limit the limit + * @return Long count of elements in the specified score range. + * @since 4.3 + */ + AsyncExecutions zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, Range range, Limit limit); + + /** + * Determine the index of a member in a sorted set. 
+ * + * @param key the key + * @param member the member type: value + * @return Long integer-reply the rank of {@code member}. If {@code member} does not exist in the sorted set or {@code key} + * does not exist, + */ + AsyncExecutions zrank(K key, V member); + + /** + * Remove one or more members from a sorted set. + * + * @param key the key + * @param members the member type: value + * @return Long integer-reply specifically: + * + * The number of members removed from the sorted set, not including non existing members. + */ + AsyncExecutions zrem(K key, V... members); + + /** + * Remove all members in a sorted set between the given lexicographical range. + * + * @param key the key + * @param min min score + * @param max max score + * @return Long integer-reply the number of elements removed. + * @deprecated Use {@link #zremrangebylex(java.lang.Object, Range)} + */ + @Deprecated + AsyncExecutions zremrangebylex(K key, String min, String max); + + /** + * Remove all members in a sorted set between the given lexicographical range. + * + * @param key the key + * @param range the range + * @return Long integer-reply the number of elements removed. + * @since 4.3 + */ + AsyncExecutions zremrangebylex(K key, Range range); + + /** + * Remove all members in a sorted set within the given indexes. + * + * @param key the key + * @param start the start type: long + * @param stop the stop type: long + * @return Long integer-reply the number of elements removed. + */ + AsyncExecutions zremrangebyrank(K key, long start, long stop); + + /** + * Remove all members in a sorted set within the given scores. + * + * @param key the key + * @param min min score + * @param max max score + * @return Long integer-reply the number of elements removed. + * @deprecated Use {@link #zremrangebyscore(java.lang.Object, Range)} + */ + @Deprecated + AsyncExecutions zremrangebyscore(K key, double min, double max); + + /** + * Remove all members in a sorted set within the given scores. + * + * @param key the key + * @param min min score + * @param max max score + * @return Long integer-reply the number of elements removed. + * @deprecated Use {@link #zremrangebyscore(java.lang.Object, Range)} + */ + @Deprecated + AsyncExecutions zremrangebyscore(K key, String min, String max); + + /** + * Remove all members in a sorted set within the given scores. + * + * @param key the key + * @param range the range + * @return Long integer-reply the number of elements removed. + * @since 4.3 + */ + AsyncExecutions zremrangebyscore(K key, Range range); + + /** + * Return a range of members in a sorted set, by index, with scores ordered from high to low. + * + * @param key the key + * @param start the start + * @param stop the stop + * @return List<V> array-reply list of elements in the specified range. + */ + AsyncExecutions> zrevrange(K key, long start, long stop); + + /** + * Stream over a range of members in a sorted set, by index, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param start the start + * @param stop the stop + * @return Long count of elements in the specified range. + */ + AsyncExecutions zrevrange(ValueStreamingChannel channel, K key, long start, long stop); + + /** + * Return a range of members with scores in a sorted set, by index, with scores ordered from high to low. 
+ * + * @param key the key + * @param start the start + * @param stop the stop + * @return List<V> array-reply list of elements in the specified range. + */ + AsyncExecutions>> zrevrangeWithScores(K key, long start, long stop); + + /** + * Stream over a range of members with scores in a sorted set, by index, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param start the start + * @param stop the stop + * @return Long count of elements in the specified range. + */ + AsyncExecutions zrevrangeWithScores(ScoredValueStreamingChannel channel, K key, long start, long stop); + + /** + * Return a range of members in a sorted set, by lexicographical range ordered from high to low. + * + * @param key the key + * @param range the range + * @return List<V> array-reply list of elements in the specified score range. + * @since 4.3 + */ + AsyncExecutions> zrevrangebylex(K key, Range range); + + /** + * Return a range of members in a sorted set, by lexicographical range ordered from high to low. + * + * @param key the key + * @param range the range + * @param limit the limit + * @return List<V> array-reply list of elements in the specified score range. + * @since 4.3 + */ + AsyncExecutions> zrevrangebylex(K key, Range range, Limit limit); + + /** + * Return a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param min min score + * @param max max score + * @return List<V> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrevrangebyscore(java.lang.Object, Range)} + */ + @Deprecated + AsyncExecutions> zrevrangebyscore(K key, double max, double min); + + /** + * Return a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param min min score + * @param max max score + * @return List<V> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrevrangebyscore(java.lang.Object, Range)} + */ + @Deprecated + AsyncExecutions> zrevrangebyscore(K key, String max, String min); + + /** + * Return a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param range the range + * @return List<V> array-reply list of elements in the specified score range. + * @since 4.3 + */ + AsyncExecutions> zrevrangebyscore(K key, Range range); + + /** + * Return a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param max max score + * @param min min score + * @param offset the withscores + * @param count the null + * @return List<V> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrevrangebyscore(java.lang.Object, Range, Limit)} + */ + @Deprecated + AsyncExecutions> zrevrangebyscore(K key, double max, double min, long offset, long count); + + /** + * Return a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param max max score + * @param min min score + * @param offset the offset + * @param count the count + * @return List<V> array-reply list of elements in the specified score range. 
+ * @deprecated Use {@link #zrevrangebyscore(java.lang.Object, Range, Limit)} + */ + @Deprecated + AsyncExecutions> zrevrangebyscore(K key, String max, String min, long offset, long count); + + /** + * Return a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param range the range + * @param limit the limit + * @return List<V> array-reply list of elements in the specified score range. + * @since 4.3 + */ + AsyncExecutions> zrevrangebyscore(K key, Range range, Limit limit); + + /** + * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param max max score + * @param min min score + * @return Long count of elements in the specified range. + * @deprecated Use {@link #zrevrangebyscore(java.lang.Object, Range)} + */ + @Deprecated + AsyncExecutions zrevrangebyscore(ValueStreamingChannel channel, K key, double max, double min); + + /** + * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param min min score + * @param max max score + * @return Long count of elements in the specified range. + * @deprecated Use {@link #zrevrangebyscore(java.lang.Object, Range)} + */ + @Deprecated + AsyncExecutions zrevrangebyscore(ValueStreamingChannel channel, K key, String max, String min); + + /** + * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param range the range + * @return Long count of elements in the specified range. + * @since 4.3 + */ + AsyncExecutions zrevrangebyscore(ValueStreamingChannel channel, K key, Range range); + + /** + * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return Long count of elements in the specified range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(java.lang.Object, Range, Limit)} + */ + @Deprecated + AsyncExecutions zrevrangebyscore(ValueStreamingChannel channel, K key, double max, double min, long offset, long count); + + /** + * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return Long count of elements in the specified range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(java.lang.Object, Range, Limit)} + */ + @Deprecated + AsyncExecutions zrevrangebyscore(ValueStreamingChannel channel, K key, String max, String min, long offset, long count); + + /** + * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param range the range + * @param limit the limit + * @return Long count of elements in the specified range. 
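The streaming variants push each element to a channel callback instead of materializing a list; a small sketch, again assuming the masters selection from above:

    // Stream values ordered from high to low; the returned count is the number of emitted elements per node.
    ValueStreamingChannel<String> channel = value -> System.out.println("member: " + value);
    AsyncExecutions<Long> streamed = masters.commands()
            .zrevrangebyscore(channel, "leaderboard", Range.create(0.0, 100.0));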
+ * @since 4.3 + */ + AsyncExecutions zrevrangebyscore(ValueStreamingChannel channel, K key, Range range, Limit limit); + + /** + * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param max max score + * @param min min score + * @return List<V> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(java.lang.Object, Range)} + */ + @Deprecated + AsyncExecutions>> zrevrangebyscoreWithScores(K key, double max, double min); + + /** + * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param max max score + * @param min min score + * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(java.lang.Object, Range)} + */ + @Deprecated + AsyncExecutions>> zrevrangebyscoreWithScores(K key, String max, String min); + + /** + * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param range the range + * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. + * @since 4.3 + */ + AsyncExecutions>> zrevrangebyscoreWithScores(K key, Range range); + + /** + * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param max max score + * @param min min score + * @param offset the offset + * @param count the count + * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(java.lang.Object, Range, Limit)} + */ + @Deprecated + AsyncExecutions>> zrevrangebyscoreWithScores(K key, double max, double min, long offset, long count); + + /** + * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param max max score + * @param min min score + * @param offset the offset + * @param count the count + * @return List<V> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(java.lang.Object, Range, Limit)} + */ + @Deprecated + AsyncExecutions>> zrevrangebyscoreWithScores(K key, String max, String min, long offset, long count); + + /** + * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param range the range + * @param limit limit + * @return List<V> array-reply list of elements in the specified score range. + * @since 4.3 + */ + AsyncExecutions>> zrevrangebyscoreWithScores(K key, Range range, Limit limit); + + /** + * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param min min score + * @param max max score + * @return Long count of elements in the specified range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(ScoredValueStreamingChannel, java.lang.Object, Range)} + */ + @Deprecated + AsyncExecutions zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double max, double min); + + /** + * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. 
+ * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param min min score + * @param max max score + * @return Long count of elements in the specified range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(ScoredValueStreamingChannel, java.lang.Object, Range)} + */ + @Deprecated + AsyncExecutions zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String max, String min); + + /** + * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param range the range + * @return Long count of elements in the specified range. + */ + AsyncExecutions zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, Range range); + + /** + * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return Long count of elements in the specified range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(ScoredValueStreamingChannel, java.lang.Object, Range, Limit)} + */ + @Deprecated + AsyncExecutions zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double max, double min, long offset, long count); + + /** + * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return Long count of elements in the specified range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(ScoredValueStreamingChannel, java.lang.Object, Range, Limit)} + */ + @Deprecated + AsyncExecutions zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String max, String min, long offset, long count); + + /** + * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param range the range + * @param limit the limit + * @return Long count of elements in the specified range. + * @since 4.3 + */ + AsyncExecutions zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, Range range, Limit limit); + + /** + * Determine the index of a member in a sorted set, with scores ordered from high to low. + * + * @param key the key + * @param member the member type: value + * @return Long integer-reply the rank of {@code member}. If {@code member} does not exist in the sorted set or {@code key} + * does not exist, + */ + AsyncExecutions zrevrank(K key, V member); + + /** + * Incrementally iterate sorted sets elements and associated scores. + * + * @param key the key + * @return ScoredValueScanCursor<V> scan cursor. + */ + AsyncExecutions> zscan(K key); + + /** + * Incrementally iterate sorted sets elements and associated scores. + * + * @param key the key + * @param scanArgs scan arguments + * @return ScoredValueScanCursor<V> scan cursor. 
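ZSCAN iterates the sorted set incrementally and hands back a per-node cursor; a hedged sketch of a single scan step (resuming would feed each node's cursor back into zscan until it reports being finished):

    // One scan step per node; ScanArgs.Builder.limit() only hints at the batch size.
    AsyncExecutions<ScoredValueScanCursor<String>> scan = masters.commands().zscan("leaderboard", ScanArgs.Builder.limit(50));
    scan.forEach(future -> future.thenAccept(cursor ->
            cursor.getValues().forEach(sv -> System.out.println(sv.getValue() + " scored " + sv.getScore()))));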
+ */ + AsyncExecutions> zscan(K key, ScanArgs scanArgs); + + /** + * Incrementally iterate sorted sets elements and associated scores. + * + * @param key the key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @param scanArgs scan arguments + * @return ScoredValueScanCursor<V> scan cursor. + */ + AsyncExecutions> zscan(K key, ScanCursor scanCursor, ScanArgs scanArgs); + + /** + * Incrementally iterate sorted sets elements and associated scores. + * + * @param key the key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @return ScoredValueScanCursor<V> scan cursor. + */ + AsyncExecutions> zscan(K key, ScanCursor scanCursor); + + /** + * Incrementally iterate sorted sets elements and associated scores. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @return StreamScanCursor scan cursor. + */ + AsyncExecutions zscan(ScoredValueStreamingChannel channel, K key); + + /** + * Incrementally iterate sorted sets elements and associated scores. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param scanArgs scan arguments + * @return StreamScanCursor scan cursor. + */ + AsyncExecutions zscan(ScoredValueStreamingChannel channel, K key, ScanArgs scanArgs); + + /** + * Incrementally iterate sorted sets elements and associated scores. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @param scanArgs scan arguments + * @return StreamScanCursor scan cursor. + */ + AsyncExecutions zscan(ScoredValueStreamingChannel channel, K key, ScanCursor scanCursor, ScanArgs scanArgs); + + /** + * Incrementally iterate sorted sets elements and associated scores. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @return StreamScanCursor scan cursor. + */ + AsyncExecutions zscan(ScoredValueStreamingChannel channel, K key, ScanCursor scanCursor); + + /** + * Get the score associated with the given member in a sorted set. + * + * @param key the key + * @param member the member type: value + * @return Double bulk-string-reply the score of {@code member} (a double precision floating point number), represented as + * string. + */ + AsyncExecutions zscore(K key, V member); + + /** + * Add multiple sorted sets and store the resulting sorted set in a new key. + * + * @param destination destination key + * @param keys source keys + * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. + */ + AsyncExecutions zunionstore(K destination, K... keys); + + /** + * Add multiple sorted sets and store the resulting sorted set in a new key. + * + * @param destination the destination + * @param storeArgs the storeArgs + * @param keys the keys + * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. + */ + AsyncExecutions zunionstore(K destination, ZStoreArgs storeArgs, K... 
keys); +} diff --git a/src/main/java/io/lettuce/core/cluster/api/async/NodeSelectionStreamAsyncCommands.java b/src/main/java/io/lettuce/core/cluster/api/async/NodeSelectionStreamAsyncCommands.java new file mode 100644 index 0000000000..865288853e --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/api/async/NodeSelectionStreamAsyncCommands.java @@ -0,0 +1,324 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.api.async; + +import java.util.List; +import java.util.Map; + +import io.lettuce.core.*; +import io.lettuce.core.XReadArgs.StreamOffset; + +/** + * Asynchronous executed commands on a node selection for Streams. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 5.1 + * @generated by io.lettuce.apigenerator.CreateAsyncNodeSelectionClusterApi + */ +public interface NodeSelectionStreamAsyncCommands { + + /** + * Acknowledge one or more messages as processed. + * + * @param key the stream key. + * @param group name of the consumer group. + * @param messageIds message Id's to acknowledge. + * @return simple-reply the lenght of acknowledged messages. + */ + AsyncExecutions xack(K key, K group, String... messageIds); + + /** + * Append a message to the stream {@code key}. + * + * @param key the stream key. + * @param body message body. + * @return simple-reply the message Id. + */ + AsyncExecutions xadd(K key, Map body); + + /** + * Append a message to the stream {@code key}. + * + * @param key the stream key. + * @param args + * @param body message body. + * @return simple-reply the message Id. + */ + AsyncExecutions xadd(K key, XAddArgs args, Map body); + + /** + * Append a message to the stream {@code key}. + * + * @param key the stream key. + * @param keysAndValues message body. + * @return simple-reply the message Id. + */ + AsyncExecutions xadd(K key, Object... keysAndValues); + + /** + * Append a message to the stream {@code key}. + * + * @param key the stream key. + * @param args + * @param keysAndValues message body. + * @return simple-reply the message Id. + */ + AsyncExecutions xadd(K key, XAddArgs args, Object... keysAndValues); + + /** + * Gets ownership of one or multiple messages in the Pending Entries List of a given stream consumer group. + * + * @param key the stream key. + * @param consumer consumer identified by group name and consumer key. + * @param minIdleTime + * @param messageIds message Id's to claim. + * @return simple-reply the {@link StreamMessage} + */ + AsyncExecutions>> xclaim(K key, Consumer consumer, long minIdleTime, String... messageIds); + + /** + * Gets ownership of one or multiple messages in the Pending Entries List of a given stream consumer group. + *

+ * Note that setting the {@code JUSTID} flag (calling this method with {@link XClaimArgs#justid()}) suppresses the message + * body and {@link StreamMessage#getBody()} is {@code null}. + * + * @param key the stream key. + * @param consumer consumer identified by group name and consumer key. + * @param args + * @param messageIds message Id's to claim. + * @return simple-reply the {@link StreamMessage} + */ + AsyncExecutions>> xclaim(K key, Consumer consumer, XClaimArgs args, String... messageIds); + + /** + * Removes the specified entries from the stream. Returns the number of items deleted, that may be different from the number + * of IDs passed in case certain IDs do not exist. + * + * @param key the stream key. + * @param messageIds stream message Id's. + * @return simple-reply number of removed entries. + */ + AsyncExecutions xdel(K key, String... messageIds); + + /** + * Create a consumer group. + * + * @param streamOffset name of the stream containing the offset to set. + * @param group name of the consumer group. + * @return simple-reply {@literal true} if successful. + */ + AsyncExecutions xgroupCreate(StreamOffset streamOffset, K group); + + /** + * Create a consumer group. + * + * @param streamOffset name of the stream containing the offset to set. + * @param group name of the consumer group. + * @param args + * @return simple-reply {@literal true} if successful. + * @since 5.2 + */ + AsyncExecutions xgroupCreate(StreamOffset streamOffset, K group, XGroupCreateArgs args); + + /** + * Delete a consumer from a consumer group. + * + * @param key the stream key. + * @param consumer consumer identified by group name and consumer key. + * @return simple-reply {@literal true} if successful. + */ + AsyncExecutions xgroupDelconsumer(K key, Consumer consumer); + + /** + * Destroy a consumer group. + * + * @param key the stream key. + * @param group name of the consumer group. + * @return simple-reply {@literal true} if successful. + */ + AsyncExecutions xgroupDestroy(K key, K group); + + /** + * Set the current {@code group} id. + * + * @param streamOffset name of the stream containing the offset to set. + * @param group name of the consumer group. + * @return simple-reply OK + */ + AsyncExecutions xgroupSetid(StreamOffset streamOffset, K group); + + /** + * Retrieve information about the stream at {@code key}. + * + * @param key the stream key. + * @return List<Object> array-reply. + * @since 5.2 + */ + AsyncExecutions> xinfoStream(K key); + + /** + * Retrieve information about the stream consumer groups at {@code key}. + * + * @param key the stream key. + * @return List<Object> array-reply. + * @since 5.2 + */ + AsyncExecutions> xinfoGroups(K key); + + /** + * Retrieve information about consumer groups of group {@code group} and stream at {@code key}. + * + * @param key the stream key. + * @param group name of the consumer group. + * @return List<Object> array-reply. + * @since 5.2 + */ + AsyncExecutions> xinfoConsumers(K key, K group); + + /** + * Get the length of a stream. + * + * @param key the stream key. + * @return simple-reply the length of the stream. + */ + AsyncExecutions xlen(K key); + + /** + * Read pending messages from a stream for a {@code group}. + * + * @param key the stream key. + * @param group name of the consumer group. + * @return List<Object> array-reply list of pending entries. + */ + AsyncExecutions> xpending(K key, K group); + + /** + * Read pending messages from a stream within a specific {@link Range}. + * + * @param key the stream key. 
+ * @param group name of the consumer group. + * @param range must not be {@literal null}. + * @param limit must not be {@literal null}. + * @return List<Object> array-reply list with members of the resulting stream. + */ + AsyncExecutions> xpending(K key, K group, Range range, Limit limit); + + /** + * Read pending messages from a stream within a specific {@link Range}. + * + * @param key the stream key. + * @param consumer consumer identified by group name and consumer key. + * @param range must not be {@literal null}. + * @param limit must not be {@literal null}. + * @return List<Object> array-reply list with members of the resulting stream. + */ + AsyncExecutions> xpending(K key, Consumer consumer, Range range, Limit limit); + + /** + * Read messages from a stream within a specific {@link Range}. + * + * @param key the stream key. + * @param range must not be {@literal null}. + * @return List<StreamMessage> array-reply list with members of the resulting stream. + */ + AsyncExecutions>> xrange(K key, Range range); + + /** + * Read messages from a stream within a specific {@link Range} applying a {@link Limit}. + * + * @param key the stream key. + * @param range must not be {@literal null}. + * @param limit must not be {@literal null}. + * @return List<StreamMessage> array-reply list with members of the resulting stream. + */ + AsyncExecutions>> xrange(K key, Range range, Limit limit); + + /** + * Read messages from one or more {@link StreamOffset}s. + * + * @param streams the streams to read from. + * @return List<StreamMessage> array-reply list with members of the resulting stream. + */ + AsyncExecutions>> xread(StreamOffset... streams); + + /** + * Read messages from one or more {@link StreamOffset}s. + * + * @param args read arguments. + * @param streams the streams to read from. + * @return List<StreamMessage> array-reply list with members of the resulting stream. + */ + AsyncExecutions>> xread(XReadArgs args, StreamOffset... streams); + + /** + * Read messages from one or more {@link StreamOffset}s using a consumer group. + * + * @param consumer consumer/group. + * @param streams the streams to read from. + * @return List<StreamMessage> array-reply list with members of the resulting stream. + */ + AsyncExecutions>> xreadgroup(Consumer consumer, StreamOffset... streams); + + /** + * Read messages from one or more {@link StreamOffset}s using a consumer group. + * + * @param consumer consumer/group. + * @param args read arguments. + * @param streams the streams to read from. + * @return List<StreamMessage> array-reply list with members of the resulting stream. + */ + AsyncExecutions>> xreadgroup(Consumer consumer, XReadArgs args, StreamOffset... streams); + + /** + * Read messages from a stream within a specific {@link Range} in reverse order. + * + * @param key the stream key. + * @param range must not be {@literal null}. + * @return List<StreamMessage> array-reply list with members of the resulting stream. + */ + AsyncExecutions>> xrevrange(K key, Range range); + + /** + * Read messages from a stream within a specific {@link Range} applying a {@link Limit} in reverse order. + * + * @param key the stream key. + * @param range must not be {@literal null}. + * @param limit must not be {@literal null}. + * @return List<StreamMessage> array-reply list with members of the resulting stream. + */ + AsyncExecutions>> xrevrange(K key, Range range, Limit limit); + + /** + * Trims the stream to {@code count} elements. + * + * @param key the stream key. + * @param count length of the stream. 
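A consumer-group round trip with the stream commands above, sketched against the plain cluster async API for brevity; the stream, group, and consumer names are invented and error handling is omitted:

    // Append an entry, create the group at offset 0 so existing entries are delivered, read, then acknowledge.
    cluster.xadd("events", Collections.singletonMap("type", "signup"));
    cluster.xgroupCreate(XReadArgs.StreamOffset.from("events", "0"), "event-workers");
    RedisFuture<List<StreamMessage<String, String>>> read =
            cluster.xreadgroup(Consumer.from("event-workers", "worker-1"), XReadArgs.StreamOffset.lastConsumed("events"));
    read.thenAccept(messages -> messages.forEach(message -> cluster.xack("events", "event-workers", message.getId())));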
+ * @return simple-reply number of removed entries. + */ + AsyncExecutions xtrim(K key, long count); + + /** + * Trims the stream to {@code count} elements. + * + * @param key the stream key. + * @param approximateTrimming {@literal true} to trim approximately using the {@code ~} flag. + * @param count length of the stream. + * @return simple-reply number of removed entries. + */ + AsyncExecutions xtrim(K key, boolean approximateTrimming, long count); +} diff --git a/src/main/java/com/lambdaworks/redis/cluster/api/async/NodeSelectionStringAsyncCommands.java b/src/main/java/io/lettuce/core/cluster/api/async/NodeSelectionStringAsyncCommands.java similarity index 84% rename from src/main/java/com/lambdaworks/redis/cluster/api/async/NodeSelectionStringAsyncCommands.java rename to src/main/java/io/lettuce/core/cluster/api/async/NodeSelectionStringAsyncCommands.java index 21228dbf78..00114bd641 100644 --- a/src/main/java/com/lambdaworks/redis/cluster/api/async/NodeSelectionStringAsyncCommands.java +++ b/src/main/java/io/lettuce/core/cluster/api/async/NodeSelectionStringAsyncCommands.java @@ -1,12 +1,28 @@ -package com.lambdaworks.redis.cluster.api.async; - -import com.lambdaworks.redis.BitFieldArgs; -import com.lambdaworks.redis.SetArgs; -import com.lambdaworks.redis.output.ValueStreamingChannel; +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.api.async; import java.util.List; import java.util.Map; +import io.lettuce.core.BitFieldArgs; +import io.lettuce.core.KeyValue; +import io.lettuce.core.SetArgs; +import io.lettuce.core.output.KeyValueStreamingChannel; + /** * Asynchronous executed commands on a node selection for Strings. * @@ -14,7 +30,7 @@ * @param Value type. * @author Mark Paluch * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateAsyncNodeSelectionClusterApi + * @generated by io.lettuce.apigenerator.CreateAsyncNodeSelectionClusterApi */ public interface NodeSelectionStringAsyncCommands { @@ -74,13 +90,30 @@ public interface NodeSelectionStringAsyncCommands { * * Basically the function consider the right of the string as padded with zeros if you look for clear bits and * specify no range or the start argument only. - * - * However this behavior changes if you are looking for clear bits and specify a range with both - * start and end. If no clear bit is found in the specified range, the function - * returns -1 as the user specified a clear range and there are no 0 bits in that range. */ AsyncExecutions bitpos(K key, boolean state); + /** + * Find first bit set or clear in a string. + * + * @param key the key + * @param state the bit type: long + * @param start the start type: long + * @return Long integer-reply The command returns the position of the first bit set to 1 or 0 according to the request. + * + * If we look for set bits (the bit argument is 1) and the string is empty or composed of just zero bytes, -1 is + * returned. 
+ * + * If we look for clear bits (the bit argument is 0) and the string only contains bits set to 1, the function returns + * the first bit not part of the string on the right. So if the string is three bytes set to the value 0xff the + * command {@code BITPOS key 0} will return 24, since up to bit 23 all the bits are 1. + * + * Basically the function considers the right of the string as padded with zeros if you look for clear bits and + * specify no range or the start argument only. + * @since 5.0.1 + */ + AsyncExecutions bitpos(K key, boolean state, long start); + /** * Find first bit set or clear in a string. * @@ -231,7 +264,7 @@ public interface NodeSelectionStringAsyncCommands { * @param keys the key * @return List<V> array-reply list of values at the specified keys. */ - AsyncExecutions> mget(K... keys); + AsyncExecutions>> mget(K... keys); /** * Stream over the values of all the given keys. @@ -241,7 +274,7 @@ * * @return Long array-reply list of values at the specified keys. */ - AsyncExecutions mget(ValueStreamingChannel channel, K... keys); + AsyncExecutions mget(KeyValueStreamingChannel channel, K... keys); /** * Set multiple keys to multiple values. diff --git a/src/main/java/com/lambdaworks/redis/cluster/api/async/RedisAdvancedClusterAsyncCommands.java b/src/main/java/io/lettuce/core/cluster/api/async/RedisAdvancedClusterAsyncCommands.java similarity index 76% rename from src/main/java/com/lambdaworks/redis/cluster/api/async/RedisAdvancedClusterAsyncCommands.java rename to src/main/java/io/lettuce/core/cluster/api/async/RedisAdvancedClusterAsyncCommands.java index 4ed24b0adf..48720453ea 100644 --- a/src/main/java/com/lambdaworks/redis/cluster/api/async/RedisAdvancedClusterAsyncCommands.java +++ b/src/main/java/io/lettuce/core/cluster/api/async/RedisAdvancedClusterAsyncCommands.java @@ -1,20 +1,34 @@ -package com.lambdaworks.redis.cluster.api.async; +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.cluster.api.async; import java.util.List; import java.util.Map; import java.util.function.Predicate; -import com.lambdaworks.redis.*; -import com.lambdaworks.redis.api.async.RedisKeyAsyncCommands; -import com.lambdaworks.redis.api.async.RedisScriptingAsyncCommands; -import com.lambdaworks.redis.api.async.RedisServerAsyncCommands; -import com.lambdaworks.redis.api.async.RedisStringAsyncCommands; -import com.lambdaworks.redis.cluster.ClusterClientOptions; -import com.lambdaworks.redis.cluster.RedisAdvancedClusterAsyncConnection; -import com.lambdaworks.redis.cluster.api.NodeSelectionSupport; -import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection; -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; -import com.lambdaworks.redis.output.KeyStreamingChannel; +import io.lettuce.core.*; +import io.lettuce.core.api.async.RedisKeyAsyncCommands; +import io.lettuce.core.api.async.RedisScriptingAsyncCommands; +import io.lettuce.core.api.async.RedisServerAsyncCommands; +import io.lettuce.core.api.async.RedisStringAsyncCommands; +import io.lettuce.core.cluster.ClusterClientOptions; +import io.lettuce.core.cluster.api.NodeSelectionSupport; +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; +import io.lettuce.core.output.KeyStreamingChannel; /** * Advanced asynchronous and thread-safe Redis Cluster API. @@ -22,8 +36,7 @@ * @author Mark Paluch * @since 4.0 */ -public interface RedisAdvancedClusterAsyncCommands - extends RedisClusterAsyncCommands, RedisAdvancedClusterAsyncConnection { +public interface RedisAdvancedClusterAsyncCommands extends RedisClusterAsyncCommands { /** * Retrieve a connection to the specified cluster node using the nodeId. Host and port are looked up in the node list. @@ -63,23 +76,49 @@ default AsyncNodeSelection masters() { } /** - * Select all slaves. + * Select all replicas. * - * @return API with asynchronous executed commands on a selection of slave cluster nodes. + * @return API with asynchronous executed commands on a selection of replica cluster nodes. + * @deprecated since 5.2, use {@link #replicas()} */ + @Deprecated default AsyncNodeSelection slaves() { return readonly(redisClusterNode -> redisClusterNode.is(RedisClusterNode.NodeFlag.SLAVE)); } /** - * Select all slaves. + * Select all replicas. * * @param predicate Predicate to filter nodes - * @return API with asynchronous executed commands on a selection of slave cluster nodes. + * @return API with asynchronous executed commands on a selection of replica cluster nodes. + * @deprecated use {@link #replicas(Predicate)} */ + @Deprecated default AsyncNodeSelection slaves(Predicate predicate) { - return readonly( - redisClusterNode -> predicate.test(redisClusterNode) && redisClusterNode.is(RedisClusterNode.NodeFlag.SLAVE)); + return readonly(redisClusterNode -> predicate.test(redisClusterNode) + && redisClusterNode.is(RedisClusterNode.NodeFlag.SLAVE)); + } + + /** + * Select all replicas. + * + * @return API with asynchronous executed commands on a selection of replica cluster nodes. + * @since 5.2 + */ + default AsyncNodeSelection replicas() { + return readonly(redisClusterNode -> redisClusterNode.is(RedisClusterNode.NodeFlag.REPLICA)); + } + + /** + * Select all replicas. + * + * @param predicate Predicate to filter nodes + * @return API with asynchronous executed commands on a selection of replica cluster nodes. 
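A sketch of the new replica selection, assuming a StatefulRedisClusterConnection<String, String> named connection; the read-only selection routes reads to replica nodes:

    // Select every replica in the current topology and ask each one for its key count.
    AsyncNodeSelection<String, String> replicas = connection.async().replicas();
    AsyncExecutions<Long> sizes = replicas.commands().dbsize();
    sizes.forEach(future -> future.thenAccept(size -> System.out.println("keys on replica: " + size)));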
+ * @since 5.2 + */ + default AsyncNodeSelection replicas(Predicate predicate) { + return readonly(redisClusterNode -> predicate.test(redisClusterNode) + && redisClusterNode.is(RedisClusterNode.NodeFlag.REPLICA)); } /** @@ -92,8 +131,8 @@ default AsyncNodeSelection all() { } /** - * Select slave nodes by a predicate and keeps a static selection. Slave connections operate in {@literal READONLY} mode. - * The set of nodes within the {@link NodeSelectionSupport} does not change when the cluster view changes. + * Select replica nodes by a predicate and keeps a static selection. Replica connections operate in {@literal READONLY} + * mode. The set of nodes within the {@link NodeSelectionSupport} does not change when the cluster view changes. * * @param predicate Predicate to filter nodes * @return API with asynchronous executed commands on a selection of cluster nodes matching {@code predicate} @@ -138,7 +177,8 @@ default AsyncNodeSelection all() { RedisFuture unlink(K... keys); /** - * Determine how many keys exist with pipelining. Cross-slot keys will result in multiple calls to the particular cluster nodes. + * Determine how many keys exist with pipelining. Cross-slot keys will result in multiple calls to the particular cluster + * nodes. * * @param keys the keys * @return Long integer-reply specifically: Number of existing keys @@ -153,7 +193,7 @@ default AsyncNodeSelection all() { * @return List<V> array-reply list of values at the specified keys. * @see RedisStringAsyncCommands#mget(Object[]) */ - RedisFuture> mget(K... keys); + RedisFuture>> mget(K... keys); /** * Set multiple keys to multiple values with pipelining. Cross-slot keys will result in multiple calls to the particular @@ -232,10 +272,10 @@ default AsyncNodeSelection all() { /** * Return a random key from the keyspace on a random master. * - * @return V bulk-string-reply the random key, or {@literal null} when the database is empty. + * @return K bulk-string-reply the random key, or {@literal null} when the database is empty. * @see RedisKeyAsyncCommands#randomkey() */ - RedisFuture randomkey(); + RedisFuture randomkey(); /** * Remove all the scripts from the script cache on all cluster nodes. @@ -253,6 +293,24 @@ default AsyncNodeSelection all() { */ RedisFuture scriptKill(); + /** + * Load the specified Lua script into the script cache on all cluster nodes. + * + * @param script script content + * @return String bulk-string-reply This command returns the SHA1 digest of the script added into the script cache. + * @since 6.0 + */ + RedisFuture scriptLoad(String script); + + /** + * Load the specified Lua script into the script cache on all cluster nodes. + * + * @param script script content + * @return String bulk-string-reply This command returns the SHA1 digest of the script added into the script cache. + * @since 6.0 + */ + RedisFuture scriptLoad(byte[] script); + /** * Synchronously save the dataset to disk and then shut down all nodes of the cluster. 
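Returning to the cross-slot mget above: it now returns KeyValue entries, so missing keys can be detected per key instead of by position; a small sketch with invented key names, again against the assumed cluster commands instance:

    // Keys may hash to different slots; Lettuce splits the call per slot and merges the replies in request order.
    RedisFuture<List<KeyValue<String, String>>> values = cluster.mget("user:1", "user:2", "user:3");
    values.thenAccept(list -> list.forEach(kv ->
            System.out.println(kv.getKey() + " = " + (kv.hasValue() ? kv.getValue() : "<absent>"))));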
* diff --git a/src/main/java/com/lambdaworks/redis/cluster/api/async/RedisClusterAsyncCommands.java b/src/main/java/io/lettuce/core/cluster/api/async/RedisClusterAsyncCommands.java similarity index 78% rename from src/main/java/com/lambdaworks/redis/cluster/api/async/RedisClusterAsyncCommands.java rename to src/main/java/io/lettuce/core/cluster/api/async/RedisClusterAsyncCommands.java index afeec58229..33d909ad60 100644 --- a/src/main/java/com/lambdaworks/redis/cluster/api/async/RedisClusterAsyncCommands.java +++ b/src/main/java/io/lettuce/core/cluster/api/async/RedisClusterAsyncCommands.java @@ -1,42 +1,67 @@ -package com.lambdaworks.redis.cluster.api.async; +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.api.async; +import java.time.Duration; import java.util.List; import java.util.Map; import java.util.concurrent.TimeUnit; -import com.lambdaworks.redis.RedisClusterAsyncConnection; -import com.lambdaworks.redis.RedisFuture; -import com.lambdaworks.redis.api.async.*; +import io.lettuce.core.KeyValue; +import io.lettuce.core.RedisFuture; +import io.lettuce.core.api.async.*; /** * A complete asynchronous and thread-safe cluster Redis API with 400+ Methods. - * + * * @param Key type. * @param Value type. * @author Mark Paluch * @since 4.0 */ -public interface RedisClusterAsyncCommands - extends RedisHashAsyncCommands, RedisKeyAsyncCommands, RedisStringAsyncCommands, - RedisListAsyncCommands, RedisSetAsyncCommands, RedisSortedSetAsyncCommands, - RedisScriptingAsyncCommands, RedisServerAsyncCommands, RedisHLLAsyncCommands, - RedisGeoAsyncCommands, BaseRedisAsyncCommands, RedisClusterAsyncConnection { +public interface RedisClusterAsyncCommands extends BaseRedisAsyncCommands, RedisGeoAsyncCommands, + RedisHashAsyncCommands, RedisHLLAsyncCommands, RedisKeyAsyncCommands, RedisListAsyncCommands, + RedisScriptingAsyncCommands, RedisServerAsyncCommands, RedisSetAsyncCommands, + RedisSortedSetAsyncCommands, RedisStreamAsyncCommands, RedisStringAsyncCommands { /** - * Set the default timeout for operations. - * + * Set the default timeout for operations. A zero timeout value indicates to not time out. + * * @param timeout the timeout value - * @param unit the unit of the timeout value + * @since 5.0 */ - void setTimeout(long timeout, TimeUnit unit); + void setTimeout(Duration timeout); /** * Authenticate to the server. - * + * + * @param password the password + * @return String simple-string-reply + */ + RedisFuture auth(CharSequence password); + + /** + * Authenticate to the server with username and password. Requires Redis 6 or newer. 
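The two API changes called out here, Duration-based timeouts and Redis 6 username/password AUTH, can be combined as in the following sketch; the method parameter, credentials, and names are assumed for illustration:

import java.time.Duration;

import io.lettuce.core.RedisFuture;
import io.lettuce.core.cluster.api.async.RedisClusterAsyncCommands;

class TimeoutAndAuthSketch {

    static void configure(RedisClusterAsyncCommands<String, String> commands) {

        // setTimeout(long, TimeUnit) is replaced by a Duration overload; a zero Duration means "do not time out".
        commands.setTimeout(Duration.ofSeconds(10));

        // AUTH with username and password, available since Redis 6; the async variant returns a future.
        RedisFuture<String> reply = commands.auth("appuser", "secret");
    }
}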
+ * + * @param username the username * @param password the password * @return String simple-string-reply + * @since 6.0 */ - String auth(String password); + RedisFuture auth(String username, CharSequence password); /** * Generate a new config epoch, incrementing the current epoch, assign the new epoch to this node, WITHOUT any consensus and @@ -134,19 +159,19 @@ public interface RedisClusterAsyncCommands /** * Obtain details about all cluster nodes. Can be parsed using - * {@link com.lambdaworks.redis.cluster.models.partitions.ClusterPartitionParser#parse} + * {@link io.lettuce.core.cluster.models.partitions.ClusterPartitionParser#parse} * * @return String bulk-string-reply as a collection of text lines */ RedisFuture clusterNodes(); /** - * List slaves for a certain node identified by its {@code nodeId}. Can be parsed using - * {@link com.lambdaworks.redis.cluster.models.partitions.ClusterPartitionParser#parse} + * List replicas for a certain node identified by its {@code nodeId}. Can be parsed using + * {@link io.lettuce.core.cluster.models.partitions.ClusterPartitionParser#parse} * * @param nodeId node id of the master node - * @return List<String> array-reply list of slaves. The command returns data in the same format as - * {@link #clusterNodes()} but one line per slave. + * @return List<String> array-reply list of replicas. The command returns data in the same format as + * {@link #clusterNodes()} but one line per replica. */ RedisFuture> clusterSlaves(String nodeId); @@ -180,7 +205,7 @@ public interface RedisClusterAsyncCommands /** * Returns an integer identifying the hash slot the specified key hashes to. This command is mainly useful for debugging and * testing, since it exposes via an API the underlying Redis implementation of the hashing algorithm. Basically the same as - * {@link com.lambdaworks.redis.cluster.SlotHash#getSlot(byte[])}. If not, call Houston and report that we've got a problem. + * {@link io.lettuce.core.cluster.SlotHash#getSlot(byte[])}. If not, call Houston and report that we've got a problem. * * @param key the key. * @return Integer reply: The hash slot number. @@ -208,7 +233,7 @@ public interface RedisClusterAsyncCommands /** * Get array of cluster slots to node mappings. - * + * * @return RedisFuture<List<Object>> array-reply nested list of slot ranges with IP/Port mappings. */ RedisFuture> clusterSlots(); @@ -222,7 +247,7 @@ public interface RedisClusterAsyncCommands RedisFuture asking(); /** - * Turn this node into a slave of the node with the id {@code nodeId}. + * Turn this node into a replica of the node with the id {@code nodeId}. * * @param nodeId master node id * @return String simple-string-reply @@ -230,7 +255,7 @@ public interface RedisClusterAsyncCommands RedisFuture clusterReplicate(String nodeId); /** - * Failover a cluster node. Turns the currently connected node into a master and the master into its slave. + * Failover a cluster node. Turns the currently connected node into a master and the master into its replica. * * @param force do not coordinate with master if {@literal true} * @return String simple-string-reply @@ -242,11 +267,11 @@ public interface RedisClusterAsyncCommands *

    *
  • All other nodes are forgotten
  • *
  • All the assigned / open slots are released
  • - *
  • If the node is a slave, it turns into a master
  • + *
  • If the node is a replica, it turns into a master
  • *
  • Only for hard reset: a new Node ID is generated
  • *
  • Only for hard reset: currentEpoch and configEpoch are set to 0
  • *
  • The new configuration is saved and the cluster state updated
  • - *
  • If the node was a slave, the whole data set is flushed away
  • + *
  • If the node was a replica, the whole data set is flushed away
  • *
* * @param hard {@literal true} for hard reset. Generates a new nodeId and currentEpoch/configEpoch are set to 0 @@ -262,7 +287,7 @@ public interface RedisClusterAsyncCommands RedisFuture clusterFlushslots(); /** - * Tells a Redis cluster slave node that the client is ok reading possibly stale data and is not interested in running write + * Tells a Redis cluster replica node that the client is ok reading possibly stale data and is not interested in running write * queries. * * @return String simple-string-reply @@ -291,7 +316,7 @@ public interface RedisClusterAsyncCommands * @param keys the key * @return RedisFuture<List<V>> array-reply list of values at the specified keys. */ - RedisFuture> mget(K... keys); + RedisFuture>> mget(K... keys); /** * Set multiple keys to multiple values with pipelining. Cross-slot keys will result in multiple calls to the particular diff --git a/src/main/java/io/lettuce/core/cluster/api/async/package-info.java b/src/main/java/io/lettuce/core/cluster/api/async/package-info.java new file mode 100644 index 0000000000..c1297ebb83 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/api/async/package-info.java @@ -0,0 +1,4 @@ +/** + * Redis Cluster API for asynchronous executed commands. + */ +package io.lettuce.core.cluster.api.async; diff --git a/src/main/java/io/lettuce/core/cluster/api/package-info.java b/src/main/java/io/lettuce/core/cluster/api/package-info.java new file mode 100644 index 0000000000..3d6f8eb663 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/api/package-info.java @@ -0,0 +1,4 @@ +/** + * Redis Cluster connection API. + */ +package io.lettuce.core.cluster.api; diff --git a/src/main/java/io/lettuce/core/cluster/api/reactive/ReactiveExecutions.java b/src/main/java/io/lettuce/core/cluster/api/reactive/ReactiveExecutions.java new file mode 100644 index 0000000000..20c2a4edde --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/api/reactive/ReactiveExecutions.java @@ -0,0 +1,42 @@ +/* + * Copyright 2016-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.api.reactive; + +import java.util.Collection; + +import reactor.core.publisher.Flux; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; + +/** + * Execution holder for a reactive command to be executed on multiple nodes. + * + * @author Mark Paluch + * @since 4.4 + */ +public interface ReactiveExecutions { + + /** + * Return a {@link Flux} that contains a combined stream of the multi-node execution. + * + * @return + */ + Flux flux(); + + /** + * @return collection of nodes on which the command was executed. 
+ */ + Collection nodes(); +} diff --git a/src/main/java/io/lettuce/core/cluster/api/reactive/RedisAdvancedClusterReactiveCommands.java b/src/main/java/io/lettuce/core/cluster/api/reactive/RedisAdvancedClusterReactiveCommands.java new file mode 100644 index 0000000000..f1299232f5 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/api/reactive/RedisAdvancedClusterReactiveCommands.java @@ -0,0 +1,317 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.api.reactive; + +import java.util.Map; + +import reactor.core.publisher.Flux; +import reactor.core.publisher.Mono; +import io.lettuce.core.*; +import io.lettuce.core.api.reactive.RedisKeyReactiveCommands; +import io.lettuce.core.api.reactive.RedisScriptingReactiveCommands; +import io.lettuce.core.api.reactive.RedisServerReactiveCommands; +import io.lettuce.core.api.reactive.RedisStringReactiveCommands; +import io.lettuce.core.cluster.ClusterClientOptions; +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; +import io.lettuce.core.output.KeyStreamingChannel; + +/** + * Advanced reactive and thread-safe Redis Cluster API. + * + * @author Mark Paluch + * @since 5.0 + */ +public interface RedisAdvancedClusterReactiveCommands extends RedisClusterReactiveCommands { + + /** + * Retrieve a connection to the specified cluster node using the nodeId. Host and port are looked up in the node list. In + * contrast to the {@link RedisAdvancedClusterReactiveCommands}, node-connections do not route commands to other cluster + * nodes + * + * @param nodeId the node Id + * @return a connection to the requested cluster node + */ + RedisClusterReactiveCommands getConnection(String nodeId); + + /** + * Retrieve a connection to the specified cluster node using host and port. In contrast to the + * {@link RedisAdvancedClusterReactiveCommands}, node-connections do not route commands to other cluster nodes. Host and + * port connections are verified by default for cluster membership, see + * {@link ClusterClientOptions#isValidateClusterNodeMembership()}. + * + * @param host the host + * @param port the port + * @return a connection to the requested cluster node + */ + RedisClusterReactiveCommands getConnection(String host, int port); + + /** + * @return the underlying connection. + */ + StatefulRedisClusterConnection getStatefulConnection(); + + /** + * Delete one or more keys with pipelining. Cross-slot keys will result in multiple calls to the particular cluster nodes. + * + * @param keys the keys + * @return Long integer-reply The number of keys that were removed. + * @see RedisKeyReactiveCommands#del(Object[]) + */ + Mono del(K... keys); + + /** + * Unlink one or more keys with pipelining. Cross-slot keys will result in multiple calls to the particular cluster nodes. + * + * @param keys the keys + * @return Long integer-reply The number of keys that were removed. + * @see RedisKeyReactiveCommands#unlink(Object[]) + */ + Mono unlink(K... 
keys); + + /** + * Determine how many keys exist with pipelining. Cross-slot keys will result in multiple calls to the particular cluster + * nodes. + * + * @param keys the keys + * @return Long integer-reply specifically: Number of existing keys + */ + Mono exists(K... keys); + + /** + * Get the values of all the given keys with pipelining. Cross-slot keys will result in multiple calls to the particular + * cluster nodes. + * + * @param keys the key + * @return V array-reply list of values at the specified keys. + * @see RedisStringReactiveCommands#mget(Object[]) + */ + Flux> mget(K... keys); + + /** + * Set multiple keys to multiple values with pipelining. Cross-slot keys will result in multiple calls to the particular + * cluster nodes. + * + * @param map the map + * @return String simple-string-reply always {@code OK} since {@code MSET} can't fail. + * @see RedisStringReactiveCommands#mset(Map) + */ + Mono mset(Map map); + + /** + * Set multiple keys to multiple values, only if none of the keys exist with pipelining. Cross-slot keys will result in + * multiple calls to the particular cluster nodes. + * + * @param map the null + * @return Boolean integer-reply specifically: + * + * {@code 1} if the all the keys were set. {@code 0} if no key was set (at least one key already existed). + * + * @see RedisStringReactiveCommands#msetnx(Map) + */ + Mono msetnx(Map map); + + /** + * Set the current connection name on all cluster nodes with pipelining. + * + * @param name the client name + * @return simple-string-reply {@code OK} if the connection name was successfully set. + * @see RedisServerReactiveCommands#clientSetname(Object) + */ + Mono clientSetname(K name); + + /** + * Remove all keys from all databases on all cluster masters with pipelining. + * + * @return String simple-string-reply + * @see RedisServerReactiveCommands#flushall() + */ + Mono flushall(); + + /** + * Remove all keys from the current database on all cluster masters with pipelining. + * + * @return String simple-string-reply + * @see RedisServerReactiveCommands#flushdb() + */ + Mono flushdb(); + + /** + * Return the number of keys in the selected database on all cluster masters. + * + * @return Long integer-reply + * @see RedisServerReactiveCommands#dbsize() + */ + Mono dbsize(); + + /** + * Find all keys matching the given pattern on all cluster masters. + * + * @param pattern the pattern type: patternkey (pattern) + * @return List<K> array-reply list of keys matching {@code pattern}. + * @see RedisKeyReactiveCommands#keys(Object) + */ + Flux keys(K pattern); + + /** + * Find all keys matching the given pattern on all cluster masters. + * + * @param channel the channel + * @param pattern the pattern + * @return Long array-reply list of keys matching {@code pattern}. + * @see RedisKeyReactiveCommands#keys(KeyStreamingChannel, Object) + */ + Mono keys(KeyStreamingChannel channel, K pattern); + + /** + * Return a random key from the keyspace on a random master. + * + * @return K bulk-string-reply the random key, or a {@link Mono} that completes empty when the database is empty. + * @see RedisKeyReactiveCommands#randomkey() + */ + Mono randomkey(); + + /** + * Remove all the scripts from the script cache on all cluster nodes. + * + * @return String simple-string-reply + * @see RedisScriptingReactiveCommands#scriptFlush() + */ + Mono scriptFlush(); + + /** + * Kill the script currently in execution on all cluster nodes. This call does not fail even if no scripts are running. 
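A brief sketch of the reactive multi-key methods documented above; the reactive commands object and the key names are assumptions:

import reactor.core.publisher.Flux;
import io.lettuce.core.KeyValue;
import io.lettuce.core.cluster.api.reactive.RedisAdvancedClusterReactiveCommands;

class ReactiveClusterSketch {

    static void readAcrossSlots(RedisAdvancedClusterReactiveCommands<String, String> reactive) {

        // Cross-slot MGET emits one KeyValue per requested key, merged from the involved nodes.
        Flux<KeyValue<String, String>> values = reactive.mget("user:1", "user:2", "user:3");

        // KEYS fans out to all masters and merges the matching keys into a single Flux.
        Flux<String> sessionKeys = reactive.keys("session:*");

        values.filter(KeyValue::hasValue).subscribe(kv -> System.out.println(kv.getKey() + "=" + kv.getValue()));
    }
}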
+ * + * @return String simple-string-reply, always {@literal OK}. + * @see RedisScriptingReactiveCommands#scriptKill() + */ + Mono scriptKill(); + + /** + * Load the specified Lua script into the script cache on all cluster nodes. + * + * @param script script content + * @return String bulk-string-reply This command returns the SHA1 digest of the script added into the script cache. + * @since 6.0 + */ + Mono scriptLoad(String script); + + /** + * Load the specified Lua script into the script cache on all cluster nodes. + * + * @param script script content + * @return String bulk-string-reply This command returns the SHA1 digest of the script added into the script cache. + * @since 6.0 + */ + Mono scriptLoad(byte[] script); + + /** + * Synchronously save the dataset to disk and then shut down all nodes of the cluster. + * + * @param save {@literal true} force save operation + * @see RedisServerReactiveCommands#shutdown(boolean) + */ + Mono shutdown(boolean save); + + /** + * Incrementally iterate the keys space over the whole Cluster. + * + * @return KeyScanCursor<K> scan cursor. + * @see RedisKeyReactiveCommands#scan(ScanArgs) + */ + Mono> scan(); + + /** + * Incrementally iterate the keys space over the whole Cluster. + * + * @param scanArgs scan arguments + * @return KeyScanCursor<K> scan cursor. + * @see RedisKeyReactiveCommands#scan(ScanArgs) + */ + Mono> scan(ScanArgs scanArgs); + + /** + * Incrementally iterate the keys space over the whole Cluster. + * + * @param scanCursor cursor to resume the scan. It's required to reuse the {@code scanCursor} instance from the previous + * {@link #scan()} call. + * @param scanArgs scan arguments + * @return KeyScanCursor<K> scan cursor. + * @see RedisKeyReactiveCommands#scan(ScanCursor, ScanArgs) + */ + Mono> scan(ScanCursor scanCursor, ScanArgs scanArgs); + + /** + * Incrementally iterate the keys space over the whole Cluster. + * + * @param scanCursor cursor to resume the scan. It's required to reuse the {@code scanCursor} instance from the previous + * {@link #scan()} call. + * @return KeyScanCursor<K> scan cursor. + * @see RedisKeyReactiveCommands#scan(ScanCursor) + */ + Mono> scan(ScanCursor scanCursor); + + /** + * Incrementally iterate the keys space over the whole Cluster. + * + * @param channel streaming channel that receives a call for every key + * @return StreamScanCursor scan cursor. + * @see RedisKeyReactiveCommands#scan(KeyStreamingChannel) + */ + Mono scan(KeyStreamingChannel channel); + + /** + * Incrementally iterate the keys space over the whole Cluster. + * + * @param channel streaming channel that receives a call for every key + * @param scanArgs scan arguments + * @return StreamScanCursor scan cursor. + * @see RedisKeyReactiveCommands#scan(KeyStreamingChannel, ScanArgs) + */ + Mono scan(KeyStreamingChannel channel, ScanArgs scanArgs); + + /** + * Incrementally iterate the keys space over the whole Cluster. + * + * @param channel streaming channel that receives a call for every key + * @param scanCursor cursor to resume the scan. It's required to reuse the {@code scanCursor} instance from the previous + * {@link #scan()} call. + * @param scanArgs scan arguments + * @return StreamScanCursor scan cursor. + * @see RedisKeyReactiveCommands#scan(KeyStreamingChannel, ScanCursor, ScanArgs) + */ + Mono scan(KeyStreamingChannel channel, ScanCursor scanCursor, ScanArgs scanArgs); + + /** + * Incrementally iterate the keys space over the whole Cluster. 
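One way to drive the cursor-based, cluster-wide SCAN described above until completion is to resume with the returned cursor; a minimal sketch assuming a reactive commands object and a page size of 500:

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import io.lettuce.core.KeyScanCursor;
import io.lettuce.core.ScanArgs;
import io.lettuce.core.cluster.api.reactive.RedisAdvancedClusterReactiveCommands;

class ClusterScanSketch {

    static Flux<String> allKeys(RedisAdvancedClusterReactiveCommands<String, String> reactive) {

        ScanArgs args = ScanArgs.Builder.limit(500);

        // Start a cluster-wide scan and keep resuming with the previously returned cursor until finished.
        return reactive.scan(args)
                .expand(cursor -> cursor.isFinished() ? Mono.empty() : reactive.scan(cursor, args))
                .concatMapIterable(KeyScanCursor::getKeys);
    }
}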
+ * + * @param channel streaming channel that receives a call for every key + * @param scanCursor cursor to resume the scan. It's required to reuse the {@code scanCursor} instance from the previous + * {@link #scan()} call. + * @return StreamScanCursor scan cursor. + * @see RedisKeyReactiveCommands#scan(ScanCursor, ScanArgs) + */ + Mono scan(KeyStreamingChannel channel, ScanCursor scanCursor); + + /** + * Touch one or more keys with pipelining. Touch sets the last accessed time for a key. Non-exsitent keys wont get created. + * Cross-slot keys will result in multiple calls to the particular cluster nodes. + * + * @param keys the keys + * @return Long integer-reply the number of found keys. + */ + Mono touch(K... keys); + +} diff --git a/src/main/java/io/lettuce/core/cluster/api/reactive/RedisClusterReactiveCommands.java b/src/main/java/io/lettuce/core/cluster/api/reactive/RedisClusterReactiveCommands.java new file mode 100644 index 0000000000..2747e0259c --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/api/reactive/RedisClusterReactiveCommands.java @@ -0,0 +1,341 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.api.reactive; + +import java.time.Duration; +import java.util.Map; +import java.util.concurrent.TimeUnit; + +import reactor.core.publisher.Flux; +import reactor.core.publisher.Mono; +import io.lettuce.core.KeyValue; +import io.lettuce.core.api.reactive.*; + +/** + * A complete reactive and thread-safe cluster Redis API with 400+ Methods. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 5.0 + */ +public interface RedisClusterReactiveCommands extends BaseRedisReactiveCommands, RedisGeoReactiveCommands, + RedisHashReactiveCommands, RedisHLLReactiveCommands, RedisKeyReactiveCommands, + RedisListReactiveCommands, RedisScriptingReactiveCommands, RedisServerReactiveCommands, + RedisSetReactiveCommands, RedisSortedSetReactiveCommands, RedisStreamReactiveCommands, + RedisStringReactiveCommands { + + /** + * Set the default timeout for operations. A zero timeout value indicates to not time out. + * + * @param timeout the timeout value + * @since 5.0 + */ + void setTimeout(Duration timeout); + + /** + * Authenticate to the server. + * + * @param password the password + * @return String simple-string-reply + */ + Mono auth(CharSequence password); + + /** + * Authenticate to the server with username and password. Requires Redis 6 or newer. + * + * @param username the username + * @param password the password + * @return String simple-string-reply + * @since 6.0 + */ + Mono auth(String username, CharSequence password); + + /** + * Generate a new config epoch, incrementing the current epoch, assign the new epoch to this node, WITHOUT any consensus and + * persist the configuration on disk before sending packets with the new configuration. 
+ * + * @return String simple-string-reply If the new config epoch is generated and assigned either BUMPED (epoch) or STILL + * (epoch) are returned. + */ + Mono clusterBumpepoch(); + + /** + * Meet another cluster node to include the node into the cluster. The command starts the cluster handshake and returns with + * {@literal OK} when the node was added to the cluster. + * + * @param ip IP address of the host + * @param port port number. + * @return String simple-string-reply + */ + Mono clusterMeet(String ip, int port); + + /** + * Blacklist and remove the cluster node from the cluster. + * + * @param nodeId the node Id + * @return String simple-string-reply + */ + Mono clusterForget(String nodeId); + + /** + * Adds slots to the cluster node. The current node will become the master for the specified slots. + * + * @param slots one or more slots from {@literal 0} to {@literal 16384} + * @return String simple-string-reply + */ + Mono clusterAddSlots(int... slots); + + /** + * Removes slots from the cluster node. + * + * @param slots one or more slots from {@literal 0} to {@literal 16384} + * @return String simple-string-reply + */ + Mono clusterDelSlots(int... slots); + + /** + * Assign a slot to a node. The command migrates the specified slot from the current node to the specified node in + * {@code nodeId} + * + * @param slot the slot + * @param nodeId the id of the node that will become the master for the slot + * @return String simple-string-reply + */ + Mono clusterSetSlotNode(int slot, String nodeId); + + /** + * Clears migrating / importing state from the slot. + * + * @param slot the slot + * @return String simple-string-reply + */ + Mono clusterSetSlotStable(int slot); + + /** + * Flag a slot as {@literal MIGRATING} (outgoing) towards the node specified in {@code nodeId}. The slot must be handled by + * the current node in order to be migrated. + * + * @param slot the slot + * @param nodeId the id of the node is targeted to become the master for the slot + * @return String simple-string-reply + */ + Mono clusterSetSlotMigrating(int slot, String nodeId); + + /** + * Flag a slot as {@literal IMPORTING} (incoming) from the node specified in {@code nodeId}. + * + * @param slot the slot + * @param nodeId the id of the node is the master of the slot + * @return String simple-string-reply + */ + Mono clusterSetSlotImporting(int slot, String nodeId); + + /** + * Get information and statistics about the cluster viewed by the current node. + * + * @return String bulk-string-reply as a collection of text lines. + */ + Mono clusterInfo(); + + /** + * Obtain the nodeId for the currently connected node. + * + * @return String simple-string-reply + */ + Mono clusterMyId(); + + /** + * Obtain details about all cluster nodes. Can be parsed using + * {@link io.lettuce.core.cluster.models.partitions.ClusterPartitionParser#parse} + * + * @return String bulk-string-reply as a collection of text lines + */ + Mono clusterNodes(); + + /** + * List replicas for a certain node identified by its {@code nodeId}. Can be parsed using + * {@link io.lettuce.core.cluster.models.partitions.ClusterPartitionParser#parse} + * + * @param nodeId node id of the master node + * @return List<String> array-reply list of replicas. The command returns data in the same format as + * {@link #clusterNodes()} but one line per replica. + */ + Flux clusterSlaves(String nodeId); + + /** + * Retrieve the list of keys within the {@code slot}. 
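The slot-management commands above map onto the usual Redis Cluster resharding sequence. A hedged sketch follows, assuming source and target are handles to the two involved nodes and that the node ids and slot number are illustrative:

import io.lettuce.core.cluster.api.reactive.RedisClusterReactiveCommands;

class SlotMigrationSketch {

    static void moveSlot(RedisClusterReactiveCommands<String, String> source,
            RedisClusterReactiveCommands<String, String> target,
            String sourceNodeId, String targetNodeId, int slot) {

        // 1. Mark the slot as importing on the target and migrating on the source.
        target.clusterSetSlotImporting(slot, sourceNodeId).block();
        source.clusterSetSlotMigrating(slot, targetNodeId).block();

        // 2. The keys in the slot would be moved here (CLUSTER GETKEYSINSLOT + MIGRATE), omitted in this sketch.

        // 3. Assign the slot to the target so the new ownership propagates through the cluster.
        source.clusterSetSlotNode(slot, targetNodeId).block();
        target.clusterSetSlotNode(slot, targetNodeId).block();
    }
}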
+ * + * @param slot the slot + * @param count maximal number of keys + * @return List<K> array-reply list of keys + */ + Flux clusterGetKeysInSlot(int slot, int count); + + /** + * Returns the number of keys in the specified Redis Cluster hash {@code slot}. + * + * @param slot the slot + * @return Integer reply: The number of keys in the specified hash slot, or an error if the hash slot is invalid. + */ + Mono clusterCountKeysInSlot(int slot); + + /** + * Returns the number of failure reports for the specified node. Failure reports are the way Redis Cluster uses in order to + * promote a {@literal PFAIL} state, that means a node is not reachable, to a {@literal FAIL} state, that means that the + * majority of masters in the cluster agreed within a window of time that the node is not reachable. + * + * @param nodeId the node id + * @return Integer reply: The number of active failure reports for the node. + */ + Mono clusterCountFailureReports(String nodeId); + + /** + * Returns an integer identifying the hash slot the specified key hashes to. This command is mainly useful for debugging and + * testing, since it exposes via an API the underlying Redis implementation of the hashing algorithm. Basically the same as + * {@link io.lettuce.core.cluster.SlotHash#getSlot(byte[])}. If not, call Houston and report that we've got a problem. + * + * @param key the key. + * @return Integer reply: The hash slot number. + */ + Mono clusterKeyslot(K key); + + /** + * Forces a node to save the nodes.conf configuration on disk. + * + * @return String simple-string-reply: {@code OK} or an error if the operation fails. + */ + Mono clusterSaveconfig(); + + /** + * This command sets a specific config epoch in a fresh node. It only works when: + *
+     * <ul>
+     * <li>The nodes table of the node is empty.</li>
+     * <li>The node current config epoch is zero.</li>
+     * </ul>
+ * + * @param configEpoch the config epoch + * @return String simple-string-reply: {@code OK} or an error if the operation fails. + */ + Mono clusterSetConfigEpoch(long configEpoch); + + /** + * Get array of cluster slots to node mappings. + * + * @return List<Object> array-reply nested list of slot ranges with IP/Port mappings. + */ + Flux clusterSlots(); + + /** + * The asking command is required after a {@code -ASK} redirection. The client should issue {@code ASKING} before to + * actually send the command to the target instance. See the Redis Cluster specification for more information. + * + * @return String simple-string-reply + */ + Mono asking(); + + /** + * Turn this node into a replica of the node with the id {@code nodeId}. + * + * @param nodeId master node id + * @return String simple-string-reply + */ + Mono clusterReplicate(String nodeId); + + /** + * Failover a cluster node. Turns the currently connected node into a master and the master into its replica. + * + * @param force do not coordinate with master if {@literal true} + * @return String simple-string-reply + */ + Mono clusterFailover(boolean force); + + /** + * Reset a node performing a soft or hard reset: + *
+     * <ul>
+     * <li>All other nodes are forgotten</li>
+     * <li>All the assigned / open slots are released</li>
+     * <li>If the node is a replica, it turns into a master</li>
+     * <li>Only for hard reset: a new Node ID is generated</li>
+     * <li>Only for hard reset: currentEpoch and configEpoch are set to 0</li>
+     * <li>The new configuration is saved and the cluster state updated</li>
+     * <li>If the node was a replica, the whole data set is flushed away</li>
+     * </ul>
+ * + * @param hard {@literal true} for hard reset. Generates a new nodeId and currentEpoch/configEpoch are set to 0 + * @return String simple-string-reply + */ + Mono clusterReset(boolean hard); + + /** + * Delete all the slots associated with the specified node. The number of deleted slots is returned. + * + * @return String simple-string-reply + */ + Mono clusterFlushslots(); + + /** + * Tells a Redis cluster replica node that the client is ok reading possibly stale data and is not interested in running write + * queries. + * + * @return String simple-string-reply + */ + Mono readOnly(); + + /** + * Resets readOnly flag. + * + * @return String simple-string-reply + */ + Mono readWrite(); + + /** + * Delete a key with pipelining. Cross-slot keys will result in multiple calls to the particular cluster nodes. + * + * @param keys the key + * @return Flux<Long> integer-reply The number of keys that were removed. + */ + Mono del(K... keys); + + /** + * Get the values of all the given keys with pipelining. Cross-slot keys will result in multiple calls to the particular + * cluster nodes. + * + * @param keys the key + * @return Flux<List<V>> array-reply list of values at the specified keys. + */ + Flux> mget(K... keys); + + /** + * Set multiple keys to multiple values with pipelining. Cross-slot keys will result in multiple calls to the particular + * cluster nodes. + * + * @param map the null + * @return Flux<String> simple-string-reply always {@code OK} since {@code MSET} can't fail. + */ + Mono mset(Map map); + + /** + * Set multiple keys to multiple values, only if none of the keys exist with pipelining. Cross-slot keys will result in + * multiple calls to the particular cluster nodes. + * + * @param map the null + * @return Flux<Boolean> integer-reply specifically: + * + * {@code 1} if the all the keys were set. {@code 0} if no key was set (at least one key already existed). + */ + Mono msetnx(Map map); +} diff --git a/src/main/java/io/lettuce/core/cluster/api/reactive/package-info.java b/src/main/java/io/lettuce/core/cluster/api/reactive/package-info.java new file mode 100644 index 0000000000..70344a7387 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/api/reactive/package-info.java @@ -0,0 +1,4 @@ +/** + * Redis Cluster API for reactive command execution. + */ +package io.lettuce.core.cluster.api.reactive; diff --git a/src/main/java/io/lettuce/core/cluster/api/sync/BaseNodeSelectionCommands.java b/src/main/java/io/lettuce/core/cluster/api/sync/BaseNodeSelectionCommands.java new file mode 100644 index 0000000000..dd945539f7 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/api/sync/BaseNodeSelectionCommands.java @@ -0,0 +1,111 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.api.sync; + +import java.util.List; +import java.util.Map; + +/** + * Synchronous executed commands on a node selection for basic commands. + * + * @param Key type. + * @param Value type. 
+ * @author Mark Paluch + * @since 4.0 + * @generated by io.lettuce.apigenerator.CreateSyncNodeSelectionClusterApi + */ +public interface BaseNodeSelectionCommands { + + /** + * Post a message to a channel. + * + * @param channel the channel type: key + * @param message the message type: value + * @return Long integer-reply the number of clients that received the message. + */ + Executions publish(K channel, V message); + + /** + * Lists the currently *active channels*. + * + * @return List<K> array-reply a list of active channels, optionally matching the specified pattern. + */ + Executions> pubsubChannels(); + + /** + * Lists the currently *active channels*. + * + * @param channel the key + * @return List<K> array-reply a list of active channels, optionally matching the specified pattern. + */ + Executions> pubsubChannels(K channel); + + /** + * Returns the number of subscribers (not counting clients subscribed to patterns) for the specified channels. + * + * @param channels channel keys + * @return array-reply a list of channels and number of subscribers for every channel. + */ + Executions> pubsubNumsub(K... channels); + + /** + * Returns the number of subscriptions to patterns. + * + * @return Long integer-reply the number of patterns all the clients are subscribed to. + */ + Executions pubsubNumpat(); + + /** + * Echo the given string. + * + * @param msg the message type: value + * @return V bulk-string-reply + */ + Executions echo(V msg); + + /** + * Return the role of the instance in the context of replication. + * + * @return List<Object> array-reply where the first element is one of master, slave, sentinel and the additional + * elements are role-specific. + */ + Executions> role(); + + /** + * Ping the server. + * + * @return String simple-string-reply + */ + Executions ping(); + + /** + * Instructs Redis to disconnect the connection. Note that if auto-reconnect is enabled then Lettuce will auto-reconnect if + * the connection was disconnected. Use {@link io.lettuce.core.api.StatefulConnection#close} to close connection and release + * resources. + * + * @return String simple-string-reply always OK. + */ + Executions quit(); + + /** + * Wait for replication. + * + * @param replicas minimum number of replicas + * @param timeout timeout in milliseconds + * @return number of replicas + */ + Executions waitForReplication(int replicas, long timeout); +} diff --git a/src/main/java/io/lettuce/core/cluster/api/sync/Executions.java b/src/main/java/io/lettuce/core/cluster/api/sync/Executions.java new file mode 100644 index 0000000000..7f70e012bf --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/api/sync/Executions.java @@ -0,0 +1,76 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
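A short sketch of driving the basic node-selection commands above from the synchronous cluster API; the commands parameter and the replica/timeout values are assumptions for illustration:

import io.lettuce.core.cluster.api.sync.Executions;
import io.lettuce.core.cluster.api.sync.NodeSelection;
import io.lettuce.core.cluster.api.sync.RedisAdvancedClusterCommands;

class NodeSelectionSketch {

    static void pingEverything(RedisAdvancedClusterCommands<String, String> commands) {

        // PING every known cluster node and print each node's reply.
        NodeSelection<String, String> all = commands.all();
        Executions<String> pongs = all.commands().ping();
        pongs.forEach(System.out::println);

        // Wait until at least one replica acknowledged replication, per selected master.
        Executions<Long> acked = commands.masters().commands().waitForReplication(1, 500);
    }
}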
+ */ +package io.lettuce.core.cluster.api.sync; + +import java.util.*; +import java.util.concurrent.CompletionStage; +import java.util.stream.Stream; +import java.util.stream.StreamSupport; + +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; + +/** + * Result holder for a command that was executed synchronously on multiple nodes. + * + * @author Mark Paluch + * @since 4.0 + */ +public interface Executions extends Iterable { + + /** + * + * @return map between {@link RedisClusterNode} and the {@link CompletionStage} + */ + Map asMap(); + + /** + * + * @return collection of nodes on which the command was executed. + */ + Collection nodes(); + + /** + * + * @param redisClusterNode the node + * @return the completion stage for this node + */ + T get(RedisClusterNode redisClusterNode); + + /** + * + * @return iterator over the {@link CompletionStage}s + */ + @Override + default Iterator iterator() { + return asMap().values().iterator(); + } + + /** + * + * @return a {@code Spliterator} over the elements in this collection + */ + @Override + default Spliterator spliterator() { + return Spliterators.spliterator(iterator(), nodes().size(), 0); + } + + /** + * @return a sequential {@code Stream} over the elements in this collection + */ + default Stream stream() { + return StreamSupport.stream(spliterator(), false); + } +} diff --git a/src/main/java/io/lettuce/core/cluster/api/sync/NodeSelection.java b/src/main/java/io/lettuce/core/cluster/api/sync/NodeSelection.java new file mode 100644 index 0000000000..39b89614b6 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/api/sync/NodeSelection.java @@ -0,0 +1,30 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.api.sync; + +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.cluster.api.NodeSelectionSupport; + +/** + * Node selection with access to synchronous executed commands on the set. Commands are triggered concurrently to the selected + * nodes and synchronized afterwards. + * + * @author Mark Paluch + * @since 4.0 + */ +public interface NodeSelection extends NodeSelectionSupport, NodeSelectionCommands> { + +} diff --git a/src/main/java/io/lettuce/core/cluster/api/sync/NodeSelectionCommands.java b/src/main/java/io/lettuce/core/cluster/api/sync/NodeSelectionCommands.java new file mode 100644 index 0000000000..f7ac54704e --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/api/sync/NodeSelectionCommands.java @@ -0,0 +1,30 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
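Since Executions exposes the per-node results both as a map and as a Stream, aggregating multi-node results is straightforward; a minimal sketch, again assuming a synchronous commands object:

import io.lettuce.core.cluster.api.sync.Executions;
import io.lettuce.core.cluster.api.sync.RedisAdvancedClusterCommands;
import io.lettuce.core.cluster.models.partitions.RedisClusterNode;

class ExecutionsSketch {

    static long totalKeyCount(RedisAdvancedClusterCommands<String, String> commands) {

        // DBSIZE on every master, then sum the per-node counts via the Stream view.
        Executions<Long> sizes = commands.masters().commands().dbsize();

        for (RedisClusterNode node : sizes.nodes()) {
            System.out.println(node.getNodeId() + " holds " + sizes.get(node) + " keys");
        }

        return sizes.stream().mapToLong(Long::longValue).sum();
    }
}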
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.api.sync; + +import io.lettuce.core.cluster.api.NodeSelectionSupport; + +/** + * Synchronous and thread-safe Redis API to execute commands on a {@link NodeSelectionSupport}. + * + * @author Mark Paluch + */ +public interface NodeSelectionCommands extends BaseNodeSelectionCommands, NodeSelectionGeoCommands, + NodeSelectionHashCommands, NodeSelectionHLLCommands, NodeSelectionKeyCommands, + NodeSelectionListCommands, NodeSelectionScriptingCommands, NodeSelectionServerCommands, + NodeSelectionSetCommands, NodeSelectionSortedSetCommands, NodeSelectionStreamCommands, + NodeSelectionStringCommands { +} diff --git a/src/main/java/com/lambdaworks/redis/cluster/api/sync/NodeSelectionGeoCommands.java b/src/main/java/io/lettuce/core/cluster/api/sync/NodeSelectionGeoCommands.java similarity index 85% rename from src/main/java/com/lambdaworks/redis/cluster/api/sync/NodeSelectionGeoCommands.java rename to src/main/java/io/lettuce/core/cluster/api/sync/NodeSelectionGeoCommands.java index 950ef1d1fb..41a06190e8 100644 --- a/src/main/java/com/lambdaworks/redis/cluster/api/sync/NodeSelectionGeoCommands.java +++ b/src/main/java/io/lettuce/core/cluster/api/sync/NodeSelectionGeoCommands.java @@ -1,18 +1,31 @@ -package com.lambdaworks.redis.cluster.api.sync; +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.api.sync; -import com.lambdaworks.redis.GeoArgs; -import com.lambdaworks.redis.GeoCoordinates; -import com.lambdaworks.redis.GeoRadiusStoreArgs; -import com.lambdaworks.redis.GeoWithin; import java.util.List; import java.util.Set; +import io.lettuce.core.*; + /** * Synchronous executed commands on a node selection for the Geo-API. * * @author Mark Paluch * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateSyncNodeSelectionClusterApi + * @generated by io.lettuce.apigenerator.CreateSyncNodeSelectionClusterApi */ public interface NodeSelectionGeoCommands { @@ -43,7 +56,7 @@ public interface NodeSelectionGeoCommands { * @param members the members * @return bulk reply Geohash strings in the order of {@code members}. Returns {@literal null} if a member is not found. */ - Executions> geohash(K key, V... members); + Executions>> geohash(K key, V... members); /** * Retrieve members selected by distance with the center of {@code longitude} and {@code latitude}. 
@@ -71,7 +84,7 @@ public interface NodeSelectionGeoCommands { Executions>> georadius(K key, double longitude, double latitude, double distance, GeoArgs.Unit unit, GeoArgs geoArgs); /** - * Perform a {@link #georadius(Object, double, double, double, Unit, GeoArgs)} query and store the results in a sorted set. + * Perform a {@link #georadius(Object, double, double, double, GeoArgs.Unit, GeoArgs)} query and store the results in a sorted set. * * @param key the key of the geo set * @param longitude the longitude coordinate according to WGS84 @@ -97,7 +110,6 @@ public interface NodeSelectionGeoCommands { Executions> georadiusbymember(K key, V member, double distance, GeoArgs.Unit unit); /** - * * Retrieve members selected by distance with the center of {@code member}. The member itself is always contained in the * results. * @@ -111,7 +123,7 @@ public interface NodeSelectionGeoCommands { Executions>> georadiusbymember(K key, V member, double distance, GeoArgs.Unit unit, GeoArgs geoArgs); /** - * Perform a {@link #georadiusbymember(Object, Object, double, Unit, GeoArgs)} query and store the results in a sorted set. + * Perform a {@link #georadiusbymember(Object, Object, double, GeoArgs.Unit, GeoArgs)} query and store the results in a sorted set. * * @param key the key of the geo set * @param member reference member @@ -135,7 +147,6 @@ public interface NodeSelectionGeoCommands { Executions> geopos(K key, V... members); /** - * * Retrieve distance between points {@code from} and {@code to}. If one or more elements are missing {@literal null} is * returned. Default in meters by, otherwise according to {@code unit} * diff --git a/src/main/java/io/lettuce/core/cluster/api/sync/NodeSelectionHLLCommands.java b/src/main/java/io/lettuce/core/cluster/api/sync/NodeSelectionHLLCommands.java new file mode 100644 index 0000000000..6421d0a19c --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/api/sync/NodeSelectionHLLCommands.java @@ -0,0 +1,61 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.api.sync; + +/** + * Synchronous executed commands on a node selection for HyperLogLog (PF* commands). + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 3.0 + * @generated by io.lettuce.apigenerator.CreateSyncNodeSelectionClusterApi + */ +public interface NodeSelectionHLLCommands { + + /** + * Adds the specified elements to the specified HyperLogLog. + * + * @param key the key + * @param values the values + * + * @return Long integer-reply specifically: + * + * 1 if at least 1 HyperLogLog internal register was altered. 0 otherwise. + */ + Executions pfadd(K key, V... values); + + /** + * Merge N different HyperLogLogs into a single one. + * + * @param destkey the destination key + * @param sourcekeys the source key + * + * @return String simple-string-reply The command just returns {@code OK}. + */ + Executions pfmerge(K destkey, K... 
sourcekeys); + + /** + * Return the approximated cardinality of the set(s) observed by the HyperLogLog at key(s). + * + * @param keys the keys + * + * @return Long integer-reply specifically: + * + * The approximated number of unique elements observed via {@code PFADD}. + */ + Executions pfcount(K... keys); +} diff --git a/src/main/java/com/lambdaworks/redis/cluster/api/sync/NodeSelectionHashCommands.java b/src/main/java/io/lettuce/core/cluster/api/sync/NodeSelectionHashCommands.java similarity index 84% rename from src/main/java/com/lambdaworks/redis/cluster/api/sync/NodeSelectionHashCommands.java rename to src/main/java/io/lettuce/core/cluster/api/sync/NodeSelectionHashCommands.java index 924acb1aaf..57dea1cc7f 100644 --- a/src/main/java/com/lambdaworks/redis/cluster/api/sync/NodeSelectionHashCommands.java +++ b/src/main/java/io/lettuce/core/cluster/api/sync/NodeSelectionHashCommands.java @@ -1,29 +1,42 @@ -package com.lambdaworks.redis.cluster.api.sync; +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.api.sync; import java.util.List; import java.util.Map; -import com.lambdaworks.redis.MapScanCursor; -import com.lambdaworks.redis.ScanArgs; -import com.lambdaworks.redis.ScanCursor; -import com.lambdaworks.redis.StreamScanCursor; -import com.lambdaworks.redis.output.KeyStreamingChannel; -import com.lambdaworks.redis.output.KeyValueStreamingChannel; -import com.lambdaworks.redis.output.ValueStreamingChannel; + +import io.lettuce.core.*; +import io.lettuce.core.output.KeyStreamingChannel; +import io.lettuce.core.output.KeyValueStreamingChannel; +import io.lettuce.core.output.ValueStreamingChannel; /** * Synchronous executed commands on a node selection for Hashes (Key-Value pairs). - * + * * @param Key type. * @param Value type. * @author Mark Paluch * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateSyncNodeSelectionClusterApi + * @generated by io.lettuce.apigenerator.CreateSyncNodeSelectionClusterApi */ public interface NodeSelectionHashCommands { /** * Delete one or more hash fields. - * + * * @param key the key * @param fields the field type: key * @return Long integer-reply the number of fields that were removed from the hash, not including specified but non existing @@ -33,11 +46,11 @@ public interface NodeSelectionHashCommands { /** * Determine if a hash field exists. - * + * * @param key the key * @param field the field type: key * @return Boolean integer-reply specifically: - * + * * {@literal true} if the hash contains {@code field}. {@literal false} if the hash does not contain {@code field}, * or {@code key} does not exist. */ @@ -45,7 +58,7 @@ public interface NodeSelectionHashCommands { /** * Get the value of a hash field. 
- * + * * @param key the key * @param field the field type: key * @return V bulk-string-reply the value associated with {@code field}, or {@literal null} when {@code field} is not present @@ -55,7 +68,7 @@ public interface NodeSelectionHashCommands { /** * Increment the integer value of a hash field by the given number. - * + * * @param key the key * @param field the field type: key * @param amount the increment type: long @@ -65,7 +78,7 @@ public interface NodeSelectionHashCommands { /** * Increment the float value of a hash field by the given amount. - * + * * @param key the key * @param field the field type: key * @param amount the increment type: double @@ -75,7 +88,7 @@ public interface NodeSelectionHashCommands { /** * Get all the fields and values in a hash. - * + * * @param key the key * @return Map<K,V> array-reply list of fields and their values stored in the hash, or an empty list when {@code key} * does not exist. @@ -84,17 +97,17 @@ public interface NodeSelectionHashCommands { /** * Stream over all the fields and values in a hash. - * + * * @param channel the channel * @param key the key - * + * * @return Long count of the keys. */ Executions hgetall(KeyValueStreamingChannel channel, K key); /** * Get all the fields in a hash. - * + * * @param key the key * @return List<K> array-reply list of fields in the hash, or an empty list when {@code key} does not exist. */ @@ -102,17 +115,17 @@ public interface NodeSelectionHashCommands { /** * Stream over all the fields in a hash. - * + * * @param channel the channel * @param key the key - * + * * @return Long count of the keys. */ Executions hkeys(KeyStreamingChannel channel, K key); /** * Get the number of fields in a hash. - * + * * @param key the key * @return Long integer-reply number of fields in the hash, or {@code 0} when {@code key} does not exist. */ @@ -120,27 +133,27 @@ public interface NodeSelectionHashCommands { /** * Get the values of all the given hash fields. - * + * * @param key the key * @param fields the field type: key * @return List<V> array-reply list of values associated with the given fields, in the same */ - Executions> hmget(K key, K... fields); + Executions>> hmget(K key, K... fields); /** * Stream over the values of all the given hash fields. - * + * * @param channel the channel * @param key the key * @param fields the fields - * + * * @return Long count of the keys */ - Executions hmget(ValueStreamingChannel channel, K key, K... fields); + Executions hmget(KeyValueStreamingChannel channel, K key, K... fields); /** * Set multiple hash fields to multiple values. - * + * * @param key the key * @param map the null * @return String simple-string-reply @@ -149,7 +162,7 @@ public interface NodeSelectionHashCommands { /** * Incrementally iterate hash fields and associated values. - * + * * @param key the key * @return MapScanCursor<K, V> map scan cursor. */ @@ -157,7 +170,7 @@ public interface NodeSelectionHashCommands { /** * Incrementally iterate hash fields and associated values. - * + * * @param key the key * @param scanArgs scan arguments * @return MapScanCursor<K, V> map scan cursor. @@ -166,7 +179,7 @@ public interface NodeSelectionHashCommands { /** * Incrementally iterate hash fields and associated values. - * + * * @param key the key * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} * @param scanArgs scan arguments @@ -176,7 +189,7 @@ public interface NodeSelectionHashCommands { /** * Incrementally iterate hash fields and associated values. 
- * + * * @param key the key * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} * @return MapScanCursor<K, V> map scan cursor. @@ -185,7 +198,7 @@ public interface NodeSelectionHashCommands { /** * Incrementally iterate hash fields and associated values. - * + * * @param channel streaming channel that receives a call for every key-value pair * @param key the key * @return StreamScanCursor scan cursor. @@ -194,7 +207,7 @@ public interface NodeSelectionHashCommands { /** * Incrementally iterate hash fields and associated values. - * + * * @param channel streaming channel that receives a call for every key-value pair * @param key the key * @param scanArgs scan arguments @@ -204,7 +217,7 @@ public interface NodeSelectionHashCommands { /** * Incrementally iterate hash fields and associated values. - * + * * @param channel streaming channel that receives a call for every key-value pair * @param key the key * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} @@ -215,7 +228,7 @@ public interface NodeSelectionHashCommands { /** * Incrementally iterate hash fields and associated values. - * + * * @param channel streaming channel that receives a call for every key-value pair * @param key the key * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} @@ -236,6 +249,16 @@ public interface NodeSelectionHashCommands { */ Executions hset(K key, K field, V value); + /** + * Set multiple hash fields to multiple values. + * + * @param key the key of the hash + * @param map the field/value pairs to update + * @return Long integer-reply: the number of fields that were added. + * @since 5.3 + */ + Executions hset(K key, Map map); + /** * Set the value of a hash field, only if the field does not exist. * diff --git a/src/main/java/com/lambdaworks/redis/cluster/api/sync/NodeSelectionKeyCommands.java b/src/main/java/io/lettuce/core/cluster/api/sync/NodeSelectionKeyCommands.java similarity index 87% rename from src/main/java/com/lambdaworks/redis/cluster/api/sync/NodeSelectionKeyCommands.java rename to src/main/java/io/lettuce/core/cluster/api/sync/NodeSelectionKeyCommands.java index d43d141f9e..27ee2a7bfc 100644 --- a/src/main/java/com/lambdaworks/redis/cluster/api/sync/NodeSelectionKeyCommands.java +++ b/src/main/java/io/lettuce/core/cluster/api/sync/NodeSelectionKeyCommands.java @@ -1,24 +1,35 @@ -package com.lambdaworks.redis.cluster.api.sync; +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.cluster.api.sync; import java.util.Date; import java.util.List; -import com.lambdaworks.redis.KeyScanCursor; -import com.lambdaworks.redis.MigrateArgs; -import com.lambdaworks.redis.ScanArgs; -import com.lambdaworks.redis.ScanCursor; -import com.lambdaworks.redis.SortArgs; -import com.lambdaworks.redis.StreamScanCursor; -import com.lambdaworks.redis.output.KeyStreamingChannel; -import com.lambdaworks.redis.output.ValueStreamingChannel; + +import io.lettuce.core.*; +import io.lettuce.core.output.KeyStreamingChannel; +import io.lettuce.core.output.ValueStreamingChannel; /** * Synchronous executed commands on a node selection for Keys (Key manipulation/querying). - * + * * @param Key type. * @param Value type. * @author Mark Paluch * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateSyncNodeSelectionClusterApi + * @generated by io.lettuce.apigenerator.CreateSyncNodeSelectionClusterApi */ public interface NodeSelectionKeyCommands { @@ -40,7 +51,7 @@ public interface NodeSelectionKeyCommands { /** * Return a serialized version of the value stored at the specified key. - * + * * @param key the key * @return byte[] bulk-string-reply the serialized value. */ @@ -56,11 +67,11 @@ public interface NodeSelectionKeyCommands { /** * Set a key's time to live in seconds. - * + * * @param key the key * @param seconds the seconds type: long * @return Boolean integer-reply specifically: - * + * * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not * be set. */ @@ -68,11 +79,11 @@ public interface NodeSelectionKeyCommands { /** * Set the expiration for a key as a UNIX timestamp. - * + * * @param key the key * @param timestamp the timestamp type: posix time * @return Boolean integer-reply specifically: - * + * * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not * be set (see: {@code EXPIRE}). */ @@ -80,11 +91,11 @@ public interface NodeSelectionKeyCommands { /** * Set the expiration for a key as a UNIX timestamp. - * + * * @param key the key * @param timestamp the timestamp type: posix time * @return Boolean integer-reply specifically: - * + * * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not * be set (see: {@code EXPIRE}). */ @@ -92,7 +103,7 @@ public interface NodeSelectionKeyCommands { /** * Find all keys matching the given pattern. - * + * * @param pattern the pattern type: patternkey (pattern) * @return List<K> array-reply list of keys matching {@code pattern}. */ @@ -100,7 +111,7 @@ public interface NodeSelectionKeyCommands { /** * Find all keys matching the given pattern. - * + * * @param channel the channel * @param pattern the pattern * @return Long array-reply list of keys matching {@code pattern}. @@ -109,7 +120,7 @@ public interface NodeSelectionKeyCommands { /** * Atomically transfer a key from a Redis instance to another one. - * + * * @param host the host * @param port the port * @param key the key @@ -133,7 +144,7 @@ public interface NodeSelectionKeyCommands { /** * Move a key to another database. - * + * * @param key the key * @param db the db type: long * @return Boolean integer-reply specifically: @@ -142,7 +153,7 @@ public interface NodeSelectionKeyCommands { /** * returns the kind of internal representation used in order to store the value associated with a key. 
- * + * * @param key the key * @return String */ @@ -151,7 +162,7 @@ public interface NodeSelectionKeyCommands { /** * returns the number of seconds since the object stored at the specified key is idle (not requested by read or write * operations). - * + * * @param key the key * @return number of seconds since the object stored at the specified key is idle. */ @@ -159,7 +170,7 @@ public interface NodeSelectionKeyCommands { /** * returns the number of references of the value associated with the specified key. - * + * * @param key the key * @return Long */ @@ -167,10 +178,10 @@ public interface NodeSelectionKeyCommands { /** * Remove the expiration from a key. - * + * * @param key the key * @return Boolean integer-reply specifically: - * + * * {@literal true} if the timeout was removed. {@literal false} if {@code key} does not exist or does not have an * associated timeout. */ @@ -178,11 +189,11 @@ public interface NodeSelectionKeyCommands { /** * Set a key's time to live in milliseconds. - * + * * @param key the key * @param milliseconds the milliseconds type: long * @return integer-reply, specifically: - * + * * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not * be set. */ @@ -190,11 +201,11 @@ public interface NodeSelectionKeyCommands { /** * Set the expiration for a key as a UNIX timestamp specified in milliseconds. - * + * * @param key the key * @param timestamp the milliseconds-timestamp type: posix time * @return Boolean integer-reply specifically: - * + * * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not * be set (see: {@code EXPIRE}). */ @@ -202,11 +213,11 @@ public interface NodeSelectionKeyCommands { /** * Set the expiration for a key as a UNIX timestamp specified in milliseconds. - * + * * @param key the key * @param timestamp the milliseconds-timestamp type: posix time * @return Boolean integer-reply specifically: - * + * * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not * be set (see: {@code EXPIRE}). */ @@ -214,7 +225,7 @@ public interface NodeSelectionKeyCommands { /** * Get the time to live for a key in milliseconds. - * + * * @param key the key * @return Long integer-reply TTL in milliseconds, or a negative value in order to signal an error (see the description * above). @@ -223,14 +234,14 @@ public interface NodeSelectionKeyCommands { /** * Return a random key from the keyspace. - * - * @return V bulk-string-reply the random key, or {@literal null} when the database is empty. + * + * @return K bulk-string-reply the random key, or {@literal null} when the database is empty. */ - Executions randomkey(); + Executions randomkey(); /** * Rename a key. - * + * * @param key the key * @param newKey the newkey type: key * @return String simple-string-reply @@ -239,18 +250,18 @@ public interface NodeSelectionKeyCommands { /** * Rename a key, only if the new key does not exist. - * + * * @param key the key * @param newKey the newkey type: key * @return Boolean integer-reply specifically: - * + * * {@literal true} if {@code key} was renamed to {@code newkey}. {@literal false} if {@code newkey} already exists. */ Executions renamenx(K key, K newKey); /** * Create a key using the provided serialized value, previously obtained using DUMP. 
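As a small illustration of the millisecond TTL commands declared above, a hedged sketch; the key name, the `keys` parameter, and the reconstructed generics are assumptions.

```java
import io.lettuce.core.cluster.api.sync.Executions;
import io.lettuce.core.cluster.api.sync.NodeSelectionKeyCommands;

class TtlSketch {
    static void expireSession(NodeSelectionKeyCommands<String, String> keys) {
        // PEXPIRE reports true per node when the timeout was set on an existing key.
        Executions<Boolean> set = keys.pexpire("session:1", 30_000);
        // PTTL reports the remaining time to live in milliseconds; negative values signal an error.
        Executions<Long> remaining = keys.pttl("session:1");
        remaining.forEach(ms -> System.out.println("ttl ms: " + ms));
    }
}
```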
- * + * * @param key the key * @param ttl the ttl type: long * @param value the serialized-value type: string @@ -258,9 +269,20 @@ public interface NodeSelectionKeyCommands { */ Executions restore(K key, long ttl, byte[] value); + /** + * Create a key using the provided serialized value, previously obtained using DUMP. + * + * @param key the key + * @param value the serialized-value type: string + * @param args the {@link RestoreArgs}, must not be {@literal null}. + * @return String simple-string-reply The command returns OK on success. + * @since 5.1 + */ + Executions restore(K key, byte[] value, RestoreArgs args); + /** * Sort the elements in a list, set or sorted set. - * + * * @param key the key * @return List<V> array-reply list of sorted elements. */ @@ -268,7 +290,7 @@ public interface NodeSelectionKeyCommands { /** * Sort the elements in a list, set or sorted set. - * + * * @param channel streaming channel that receives a call for every value * @param key the key * @return Long number of values. @@ -277,7 +299,7 @@ public interface NodeSelectionKeyCommands { /** * Sort the elements in a list, set or sorted set. - * + * * @param key the key * @param sortArgs sort arguments * @return List<V> array-reply list of sorted elements. @@ -286,7 +308,7 @@ public interface NodeSelectionKeyCommands { /** * Sort the elements in a list, set or sorted set. - * + * * @param channel streaming channel that receives a call for every value * @param key the key * @param sortArgs sort arguments @@ -296,7 +318,7 @@ public interface NodeSelectionKeyCommands { /** * Sort the elements in a list, set or sorted set. - * + * * @param key the key * @param sortArgs sort arguments * @param destination the destination key to store sort results @@ -306,7 +328,7 @@ public interface NodeSelectionKeyCommands { /** * Touch one or more keys. Touch sets the last accessed time for a key. Non-exsitent keys wont get created. - * + * * @param keys the keys * @return Long integer-reply the number of found keys. */ @@ -322,7 +344,7 @@ public interface NodeSelectionKeyCommands { /** * Determine the type stored at key. - * + * * @param key the key * @return String simple-string-reply type of {@code key}, or {@code none} when {@code key} does not exist. */ @@ -330,14 +352,14 @@ public interface NodeSelectionKeyCommands { /** * Incrementally iterate the keys space. - * + * * @return KeyScanCursor<K> scan cursor. */ Executions> scan(); /** * Incrementally iterate the keys space. - * + * * @param scanArgs scan arguments * @return KeyScanCursor<K> scan cursor. */ @@ -345,7 +367,7 @@ public interface NodeSelectionKeyCommands { /** * Incrementally iterate the keys space. - * + * * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} * @param scanArgs scan arguments * @return KeyScanCursor<K> scan cursor. @@ -354,7 +376,7 @@ public interface NodeSelectionKeyCommands { /** * Incrementally iterate the keys space. - * + * * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} * @return KeyScanCursor<K> scan cursor. */ @@ -362,7 +384,7 @@ public interface NodeSelectionKeyCommands { /** * Incrementally iterate the keys space. - * + * * @param channel streaming channel that receives a call for every key * @return StreamScanCursor scan cursor. */ @@ -370,7 +392,7 @@ public interface NodeSelectionKeyCommands { /** * Incrementally iterate the keys space. 
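The SCAN overloads above are the typical node-selection use case, since every selected node owns its own cursor. A sketch of reading the first page per node follows; the pattern, the `keys` parameter, and the `ScanArgs.Builder` call chain are assumptions based on the common Lettuce builder style.

```java
import io.lettuce.core.KeyScanCursor;
import io.lettuce.core.ScanArgs;
import io.lettuce.core.cluster.api.sync.Executions;
import io.lettuce.core.cluster.api.sync.NodeSelectionKeyCommands;

class ScanSketch {
    static void listUserKeys(NodeSelectionKeyCommands<String, String> keys) {
        // Each selected node returns its own scan cursor; only the first page is read here.
        Executions<KeyScanCursor<String>> cursors = keys.scan(ScanArgs.Builder.limit(100).match("user:*"));
        cursors.forEach(cursor -> cursor.getKeys().forEach(System.out::println));
    }
}
```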
- * + * * @param channel streaming channel that receives a call for every key * @param scanArgs scan arguments * @return StreamScanCursor scan cursor. @@ -379,7 +401,7 @@ public interface NodeSelectionKeyCommands { /** * Incrementally iterate the keys space. - * + * * @param channel streaming channel that receives a call for every key * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} * @param scanArgs scan arguments @@ -389,7 +411,7 @@ public interface NodeSelectionKeyCommands { /** * Incrementally iterate the keys space. - * + * * @param channel streaming channel that receives a call for every key * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} * @return StreamScanCursor scan cursor. diff --git a/src/main/java/com/lambdaworks/redis/cluster/api/sync/NodeSelectionListCommands.java b/src/main/java/io/lettuce/core/cluster/api/sync/NodeSelectionListCommands.java similarity index 85% rename from src/main/java/com/lambdaworks/redis/cluster/api/sync/NodeSelectionListCommands.java rename to src/main/java/io/lettuce/core/cluster/api/sync/NodeSelectionListCommands.java index a8a1dafe1d..18cda20d3f 100644 --- a/src/main/java/com/lambdaworks/redis/cluster/api/sync/NodeSelectionListCommands.java +++ b/src/main/java/io/lettuce/core/cluster/api/sync/NodeSelectionListCommands.java @@ -1,27 +1,43 @@ -package com.lambdaworks.redis.cluster.api.sync; +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.api.sync; import java.util.List; -import com.lambdaworks.redis.KeyValue; -import com.lambdaworks.redis.output.ValueStreamingChannel; + +import io.lettuce.core.KeyValue; +import io.lettuce.core.output.ValueStreamingChannel; /** * Synchronous executed commands on a node selection for Lists. - * + * * @param Key type. * @param Value type. * @author Mark Paluch * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateSyncNodeSelectionClusterApi + * @generated by io.lettuce.apigenerator.CreateSyncNodeSelectionClusterApi */ public interface NodeSelectionListCommands { /** * Remove and get the first element in a list, or block until one is available. - * + * * @param timeout the timeout in seconds * @param keys the keys * @return KeyValue<K,V> array-reply specifically: - * + * * A {@literal null} multi-bulk when no element could be popped and the timeout expired. A two-element multi-bulk * with the first element being the name of the key where an element was popped and the second element being the * value of the popped element. @@ -30,11 +46,11 @@ public interface NodeSelectionListCommands { /** * Remove and get the last element in a list, or block until one is available. - * + * * @param timeout the timeout in seconds * @param keys the keys * @return KeyValue<K,V> array-reply specifically: - * + * * A {@literal null} multi-bulk when no element could be popped and the timeout expired. 
A two-element multi-bulk * with the first element being the name of the key where an element was popped and the second element being the * value of the popped element. @@ -43,7 +59,7 @@ public interface NodeSelectionListCommands { /** * Pop a value from a list, push it to another list and return it; or block until one is available. - * + * * @param timeout the timeout in seconds * @param source the source key * @param destination the destination type: key @@ -54,7 +70,7 @@ public interface NodeSelectionListCommands { /** * Get an element from a list by its index. - * + * * @param key the key * @param index the index type: long * @return V bulk-string-reply the requested element, or {@literal null} when {@code index} is out of range. @@ -63,7 +79,7 @@ public interface NodeSelectionListCommands { /** * Insert an element before or after another element in a list. - * + * * @param key the key * @param before the before * @param pivot the pivot @@ -75,7 +91,7 @@ public interface NodeSelectionListCommands { /** * Get the length of a list. - * + * * @param key the key * @return Long integer-reply the length of the list at {@code key}. */ @@ -83,7 +99,7 @@ public interface NodeSelectionListCommands { /** * Remove and get the first element in a list. - * + * * @param key the key * @return V bulk-string-reply the value of the first element, or {@literal null} when {@code key} does not exist. */ @@ -91,23 +107,13 @@ public interface NodeSelectionListCommands { /** * Prepend one or multiple values to a list. - * + * * @param key the key * @param values the value * @return Long integer-reply the length of the list after the push operations. */ Executions lpush(K key, V... values); - /** - * Prepend a value to a list, only if the list exists. - * - * @param key the key - * @param value the value - * @return Long integer-reply the length of the list after the push operation. - * @deprecated Use {@link #lpushx(Object, Object[])} - */ - Executions lpushx(K key, V value); - /** * Prepend values to a list, only if the list exists. * @@ -119,7 +125,7 @@ public interface NodeSelectionListCommands { /** * Get a range of elements from a list. - * + * * @param key the key * @param start the start type: long * @param stop the stop type: long @@ -129,7 +135,7 @@ public interface NodeSelectionListCommands { /** * Get a range of elements from a list. - * + * * @param channel the channel * @param key the key * @param start the start type: long @@ -140,7 +146,7 @@ public interface NodeSelectionListCommands { /** * Remove elements from a list. - * + * * @param key the key * @param count the count type: long * @param value the value @@ -150,7 +156,7 @@ public interface NodeSelectionListCommands { /** * Set the value of an element in a list by its index. - * + * * @param key the key * @param index the index type: long * @param value the value @@ -160,7 +166,7 @@ public interface NodeSelectionListCommands { /** * Trim a list to the specified range. - * + * * @param key the key * @param start the start type: long * @param stop the stop type: long @@ -170,7 +176,7 @@ public interface NodeSelectionListCommands { /** * Remove and get the last element in a list. - * + * * @param key the key * @return V bulk-string-reply the value of the last element, or {@literal null} when {@code key} does not exist. */ @@ -178,7 +184,7 @@ public interface NodeSelectionListCommands { /** * Remove the last element in a list, append it to another list and return it. 
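A hedged sketch of the blocking pop contract described above; the queue names and the `lists` parameter are assumptions, and the null check mirrors the documented null multi-bulk on timeout.

```java
import io.lettuce.core.KeyValue;
import io.lettuce.core.cluster.api.sync.Executions;
import io.lettuce.core.cluster.api.sync.NodeSelectionListCommands;

class BlpopSketch {
    static void drainOne(NodeSelectionListCommands<String, String> lists) {
        // Block for up to 5 seconds until either list has an element to pop.
        Executions<KeyValue<String, String>> popped = lists.blpop(5, "jobs:high", "jobs:low");
        popped.forEach(kv -> {
            if (kv != null) { // null multi-bulk when the timeout expired
                System.out.println(kv.getKey() + " -> " + kv.getValue());
            }
        });
    }
}
```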
- * + * * @param source the source key * @param destination the destination type: key * @return V bulk-string-reply the element being popped and pushed. @@ -187,23 +193,13 @@ public interface NodeSelectionListCommands { /** * Append one or multiple values to a list. - * + * * @param key the key * @param values the value * @return Long integer-reply the length of the list after the push operation. */ Executions rpush(K key, V... values); - /** - * Append a value to a list, only if the list exists. - * - * @param key the key - * @param value the value - * @return Long integer-reply the length of the list after the push operation. - * @deprecated Use {@link #rpushx(java.lang.Object, java.lang.Object[])} - */ - Executions rpushx(K key, V value); - /** * Append values to a list, only if the list exists. * diff --git a/src/main/java/io/lettuce/core/cluster/api/sync/NodeSelectionScriptingCommands.java b/src/main/java/io/lettuce/core/cluster/api/sync/NodeSelectionScriptingCommands.java new file mode 100644 index 0000000000..a3f40922c6 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/api/sync/NodeSelectionScriptingCommands.java @@ -0,0 +1,146 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.api.sync; + +import java.util.List; + +import io.lettuce.core.ScriptOutputType; + +/** + * Synchronous executed commands on a node selection for Scripting. {@link java.lang.String Lua scripts} are encoded by using + * the configured {@link io.lettuce.core.ClientOptions#getScriptCharset() charset}. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.0 + * @generated by io.lettuce.apigenerator.CreateSyncNodeSelectionClusterApi + */ +public interface NodeSelectionScriptingCommands { + + /** + * Execute a Lua script server side. + * + * @param script Lua 5.1 script. + * @param type output type + * @param keys key names + * @param expected return type + * @return script result + */ + Executions eval(String script, ScriptOutputType type, K... keys); + + /** + * Execute a Lua script server side. + * + * @param script Lua 5.1 script. + * @param type output type + * @param keys key names + * @param expected return type + * @return script result + * @since 6.0 + */ + Executions eval(byte[] script, ScriptOutputType type, K... keys); + + /** + * Execute a Lua script server side. + * + * @param script Lua 5.1 script. + * @param type the type + * @param keys the keys + * @param values the values + * @param expected return type + * @return script result + */ + Executions eval(String script, ScriptOutputType type, K[] keys, V... values); + + /** + * Execute a Lua script server side. + * + * @param script Lua 5.1 script. + * @param type the type + * @param keys the keys + * @param values the values + * @param expected return type + * @return script result + * @since 6.0 + */ + Executions eval(byte[] script, ScriptOutputType type, K[] keys, V... 
values); + + /** + * Evaluates a script cached on the server side by its SHA1 digest + * + * @param digest SHA1 of the script + * @param type the type + * @param keys the keys + * @param expected return type + * @return script result + */ + Executions evalsha(String digest, ScriptOutputType type, K... keys); + + /** + * Execute a Lua script server side. + * + * @param digest SHA1 of the script + * @param type the type + * @param keys the keys + * @param values the values + * @param expected return type + * @return script result + */ + Executions evalsha(String digest, ScriptOutputType type, K[] keys, V... values); + + /** + * Check existence of scripts in the script cache. + * + * @param digests script digests + * @return List<Boolean> array-reply The command returns an array of integers that correspond to the specified SHA1 + * digest arguments. For every corresponding SHA1 digest of a script that actually exists in the script cache, an 1 + * is returned, otherwise 0 is returned. + */ + Executions> scriptExists(String... digests); + + /** + * Remove all the scripts from the script cache. + * + * @return String simple-string-reply + */ + Executions scriptFlush(); + + /** + * Kill the script currently in execution. + * + * @return String simple-string-reply + */ + Executions scriptKill(); + + /** + * Load the specified Lua script into the script cache. + * + * @param script script content + * @return String bulk-string-reply This command returns the SHA1 digest of the script added into the script cache. + * @since 6.0 + */ + Executions scriptLoad(String script); + + /** + * Load the specified Lua script into the script cache. + * + * @param script script content + * @return String bulk-string-reply This command returns the SHA1 digest of the script added into the script cache. + * @since 6.0 + */ + Executions scriptLoad(byte[] script); +} diff --git a/src/main/java/com/lambdaworks/redis/cluster/api/sync/NodeSelectionServerCommands.java b/src/main/java/io/lettuce/core/cluster/api/sync/NodeSelectionServerCommands.java similarity index 79% rename from src/main/java/com/lambdaworks/redis/cluster/api/sync/NodeSelectionServerCommands.java rename to src/main/java/io/lettuce/core/cluster/api/sync/NodeSelectionServerCommands.java index 7599ee4809..2d718a060f 100644 --- a/src/main/java/com/lambdaworks/redis/cluster/api/sync/NodeSelectionServerCommands.java +++ b/src/main/java/io/lettuce/core/cluster/api/sync/NodeSelectionServerCommands.java @@ -1,45 +1,63 @@ -package com.lambdaworks.redis.cluster.api.sync; +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
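To make the scripting selection interface above concrete, a sketch that caches a Lua script and executes it with a status reply; the script body, key name, and the `scripting` parameter are assumptions.

```java
import io.lettuce.core.ScriptOutputType;
import io.lettuce.core.cluster.api.sync.Executions;
import io.lettuce.core.cluster.api.sync.NodeSelectionScriptingCommands;

class EvalSketch {
    static void setViaScript(NodeSelectionScriptingCommands<String, String> scripting) {
        String script = "return redis.call('set', KEYS[1], ARGV[1])";
        // Cache the script on every selected node and remember its SHA1 digest.
        Executions<String> digests = scripting.scriptLoad(script);
        digests.forEach(sha -> System.out.println("cached as " + sha));
        // STATUS maps the Lua status reply ("OK") to a String result per node.
        Executions<String> status = scripting.eval(script, ScriptOutputType.STATUS,
                new String[] { "greeting" }, "hello");
        status.forEach(System.out::println);
    }
}
```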
+ */ +package io.lettuce.core.cluster.api.sync; import java.util.Date; import java.util.List; -import com.lambdaworks.redis.KillArgs; -import com.lambdaworks.redis.protocol.CommandType; +import java.util.Map; + +import io.lettuce.core.KillArgs; +import io.lettuce.core.UnblockType; +import io.lettuce.core.protocol.CommandType; /** * Synchronous executed commands on a node selection for Server Control. - * + * * @param Key type. * @param Value type. * @author Mark Paluch * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateSyncNodeSelectionClusterApi + * @generated by io.lettuce.apigenerator.CreateSyncNodeSelectionClusterApi */ public interface NodeSelectionServerCommands { /** * Asynchronously rewrite the append-only file. - * + * * @return String simple-string-reply always {@code OK}. */ Executions bgrewriteaof(); /** * Asynchronously save the dataset to disk. - * + * * @return String simple-string-reply */ Executions bgsave(); /** * Get the current connection name. - * + * * @return K bulk-string-reply The connection name, or a null bulk reply if no name is set. */ Executions clientGetname(); /** * Set the current connection name. - * + * * @param name the client name * @return simple-string-reply {@code OK} if the connection name was successfully set. */ @@ -47,7 +65,7 @@ public interface NodeSelectionServerCommands { /** * Kill the connection of a client identified by ip:port. - * + * * @param addr ip:port * @return String simple-string-reply {@code OK} if the connection exists and has been closed */ @@ -61,9 +79,19 @@ public interface NodeSelectionServerCommands { */ Executions clientKill(KillArgs killArgs); + /** + * Unblock the specified blocked client. + * + * @param id the client id. + * @param type unblock type. + * @return Long integer-reply number of unblocked connections. + * @since 5.1 + */ + Executions clientUnblock(long id, UnblockType type); + /** * Stop processing commands from clients for some time. - * + * * @param timeout the timeout value in milliseconds * @return String simple-string-reply The command returns OK or an error if the timeout is invalid. */ @@ -71,22 +99,30 @@ public interface NodeSelectionServerCommands { /** * Get the list of client connections. - * + * * @return String bulk-string-reply a unique string, formatted as follows: One client connection per line (separated by LF), * each line is composed of a succession of property=value fields separated by a space character. */ Executions clientList(); + /** + * Get the id of the current connection. + * + * @return Long The command just returns the ID of the current connection. + * @since 5.3 + */ + Executions clientId(); + /** * Returns an array reply of details about all Redis commands. - * + * * @return List<Object> array-reply */ Executions> command(); /** * Returns an array reply of details about the requested commands. - * + * * @param commands the commands to query for * @return List<Object> array-reply */ @@ -94,7 +130,7 @@ public interface NodeSelectionServerCommands { /** * Returns an array reply of details about the requested commands. - * + * * @param commands the commands to query for * @return List<Object> array-reply */ @@ -102,29 +138,29 @@ public interface NodeSelectionServerCommands { /** * Get total number of Redis commands. - * + * * @return Long integer-reply of number of total commands in this Redis server. */ Executions commandCount(); /** * Get the value of a configuration parameter. 
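A sketch of the new connection-management commands declared above (`clientId`, `clientUnblock`); the `server` parameter and the choice of `UnblockType.TIMEOUT` are assumptions for illustration.

```java
import io.lettuce.core.UnblockType;
import io.lettuce.core.cluster.api.sync.Executions;
import io.lettuce.core.cluster.api.sync.NodeSelectionServerCommands;

class ClientSketch {
    static void unblockOwnConnections(NodeSelectionServerCommands<String, String> server) {
        // CLIENT ID reports the id of the connection used against each selected node.
        Executions<Long> ids = server.clientId();
        // CLIENT UNBLOCK releases a blocked client; TIMEOUT makes the blocking call return empty-handed.
        ids.forEach(id -> server.clientUnblock(id, UnblockType.TIMEOUT));
    }
}
```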
- * + * * @param parameter name of the parameter - * @return List<String> bulk-string-reply + * @return Map<String, String> bulk-string-reply */ - Executions> configGet(String parameter); + Executions> configGet(String parameter); /** * Reset the stats returned by INFO. - * + * * @return String simple-string-reply always {@code OK}. */ Executions configResetstat(); /** * Rewrite the configuration file with the in memory configuration. - * + * * @return String simple-string-reply {@code OK} when the configuration was rewritten properly. Otherwise an error is * returned. */ @@ -132,7 +168,7 @@ public interface NodeSelectionServerCommands { /** * Set a configuration parameter to the given value. - * + * * @param parameter the parameter name * @param value the parameter value * @return String simple-string-reply: {@code OK} when the configuration was set properly. Otherwise an error is returned. @@ -141,13 +177,14 @@ public interface NodeSelectionServerCommands { /** * Return the number of keys in the selected database. - * + * * @return Long integer-reply */ Executions dbsize(); /** * Crash and recover + * * @param delay optional delay in milliseconds * @return String simple-string-reply */ @@ -163,7 +200,7 @@ public interface NodeSelectionServerCommands { /** * Get debugging information about a key. - * + * * @param key the key * @return String simple-string-reply */ @@ -178,6 +215,7 @@ public interface NodeSelectionServerCommands { /** * Restart the server gracefully. + * * @param delay optional delay in milliseconds * @return String simple-string-reply */ @@ -193,7 +231,7 @@ public interface NodeSelectionServerCommands { /** * Remove all keys from all databases. - * + * * @return String simple-string-reply */ Executions flushall(); @@ -207,7 +245,7 @@ public interface NodeSelectionServerCommands { /** * Remove all keys from the current database. - * + * * @return String simple-string-reply */ Executions flushdb(); @@ -221,14 +259,14 @@ public interface NodeSelectionServerCommands { /** * Get information and statistics about the server. - * + * * @return String bulk-string-reply as a collection of text lines. */ Executions info(); /** * Get information and statistics about the server. - * + * * @param section the section type: string * @return String bulk-string-reply as a collection of text lines. */ @@ -236,21 +274,29 @@ public interface NodeSelectionServerCommands { /** * Get the UNIX time stamp of the last successful save to disk. - * + * * @return Date integer-reply an UNIX time stamp. */ Executions lastsave(); + /** + * Reports the number of bytes that a key and its value require to be stored in RAM. + * + * @return memory usage in bytes. + * @since 5.2 + */ + Executions memoryUsage(K key); + /** * Synchronously save the dataset to disk. - * + * * @return String simple-string-reply The commands returns OK on success. */ Executions save(); /** - * Make the server a slave of another instance, or promote it as master. - * + * Make the server a replica of another instance, or promote it as master. + * * @param host the host type: string * @param port the port type: string * @return String simple-string-reply @@ -259,21 +305,21 @@ public interface NodeSelectionServerCommands { /** * Promote server as master. - * + * * @return String simple-string-reply */ Executions slaveofNoOne(); /** * Read the slow log. - * + * * @return List<Object> deeply nested multi bulk replies */ Executions> slowlogGet(); /** * Read the slow log. 
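The `configGet` return-type change and the `memoryUsage` addition above can be exercised as follows; the parameter name, the key, and the `server` variable are assumptions.

```java
import java.util.Map;
import io.lettuce.core.cluster.api.sync.Executions;
import io.lettuce.core.cluster.api.sync.NodeSelectionServerCommands;

class ConfigSketch {
    static void inspect(NodeSelectionServerCommands<String, String> server) {
        // CONFIG GET now returns parameter/value pairs as a Map per node.
        Executions<Map<String, String>> settings = server.configGet("maxmemory");
        settings.forEach(cfg -> cfg.forEach((name, value) -> System.out.println(name + " = " + value)));
        // MEMORY USAGE reports the bytes a key and its value occupy in RAM (since 5.2).
        Executions<Long> bytes = server.memoryUsage("user:42");
        bytes.forEach(b -> System.out.println("bytes: " + b));
    }
}
```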
- * + * * @param count the count * @return List<Object> deeply nested multi bulk replies */ @@ -281,33 +327,25 @@ public interface NodeSelectionServerCommands { /** * Obtaining the current length of the slow log. - * + * * @return Long length of the slow log. */ Executions slowlogLen(); /** * Resetting the slow log. - * + * * @return String simple-string-reply The commands returns OK on success. */ Executions slowlogReset(); - /** - * Internal command used for replication. - * - * @return String simple-string-reply - */ - @Deprecated - Executions sync(); - /** * Return the current server time. - * + * * @return List<V> array-reply specifically: - * + * * A multi bulk reply containing two elements: - * + * * unix time in seconds. microseconds. */ Executions> time(); diff --git a/src/main/java/com/lambdaworks/redis/cluster/api/sync/NodeSelectionSetCommands.java b/src/main/java/io/lettuce/core/cluster/api/sync/NodeSelectionSetCommands.java similarity index 88% rename from src/main/java/com/lambdaworks/redis/cluster/api/sync/NodeSelectionSetCommands.java rename to src/main/java/io/lettuce/core/cluster/api/sync/NodeSelectionSetCommands.java index ff8ade03ca..d222788047 100644 --- a/src/main/java/com/lambdaworks/redis/cluster/api/sync/NodeSelectionSetCommands.java +++ b/src/main/java/io/lettuce/core/cluster/api/sync/NodeSelectionSetCommands.java @@ -1,27 +1,43 @@ -package com.lambdaworks.redis.cluster.api.sync; +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.api.sync; import java.util.List; import java.util.Set; -import com.lambdaworks.redis.ScanArgs; -import com.lambdaworks.redis.ScanCursor; -import com.lambdaworks.redis.StreamScanCursor; -import com.lambdaworks.redis.ValueScanCursor; -import com.lambdaworks.redis.output.ValueStreamingChannel; + +import io.lettuce.core.ScanArgs; +import io.lettuce.core.ScanCursor; +import io.lettuce.core.StreamScanCursor; +import io.lettuce.core.ValueScanCursor; +import io.lettuce.core.output.ValueStreamingChannel; /** * Synchronous executed commands on a node selection for Sets. - * + * * @param Key type. * @param Value type. * @author Mark Paluch * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateSyncNodeSelectionClusterApi + * @generated by io.lettuce.apigenerator.CreateSyncNodeSelectionClusterApi */ public interface NodeSelectionSetCommands { /** * Add one or more members to a set. - * + * * @param key the key * @param members the member type: value * @return Long integer-reply the number of elements that were added to the set, not including all the elements already @@ -31,7 +47,7 @@ public interface NodeSelectionSetCommands { /** * Get the number of members in a set. - * + * * @param key the key * @return Long integer-reply the cardinality (number of elements) of the set, or {@literal false} if {@code key} does not * exist. @@ -40,7 +56,7 @@ public interface NodeSelectionSetCommands { /** * Subtract multiple sets. 
- * + * * @param keys the key * @return Set<V> array-reply list with members of the resulting set. */ @@ -48,7 +64,7 @@ public interface NodeSelectionSetCommands { /** * Subtract multiple sets. - * + * * @param channel the channel * @param keys the keys * @return Long count of members of the resulting set. @@ -57,7 +73,7 @@ public interface NodeSelectionSetCommands { /** * Subtract multiple sets and store the resulting set in a key. - * + * * @param destination the destination type: key * @param keys the key * @return Long integer-reply the number of elements in the resulting set. @@ -66,7 +82,7 @@ public interface NodeSelectionSetCommands { /** * Intersect multiple sets. - * + * * @param keys the key * @return Set<V> array-reply list with members of the resulting set. */ @@ -74,7 +90,7 @@ public interface NodeSelectionSetCommands { /** * Intersect multiple sets. - * + * * @param channel the channel * @param keys the keys * @return Long count of members of the resulting set. @@ -83,7 +99,7 @@ public interface NodeSelectionSetCommands { /** * Intersect multiple sets and store the resulting set in a key. - * + * * @param destination the destination type: key * @param keys the key * @return Long integer-reply the number of elements in the resulting set. @@ -92,11 +108,11 @@ public interface NodeSelectionSetCommands { /** * Determine if a given value is a member of a set. - * + * * @param key the key * @param member the member type: value * @return Boolean integer-reply specifically: - * + * * {@literal true} if the element is a member of the set. {@literal false} if the element is not a member of the * set, or if {@code key} does not exist. */ @@ -104,12 +120,12 @@ public interface NodeSelectionSetCommands { /** * Move a member from one set to another. - * + * * @param source the source key * @param destination the destination type: key * @param member the member type: value * @return Boolean integer-reply specifically: - * + * * {@literal true} if the element is moved. {@literal false} if the element is not a member of {@code source} and no * operation was performed. */ @@ -117,7 +133,7 @@ public interface NodeSelectionSetCommands { /** * Get all the members in a set. - * + * * @param key the key * @return Set<V> array-reply all elements of the set. */ @@ -125,7 +141,7 @@ public interface NodeSelectionSetCommands { /** * Get all the members in a set. - * + * * @param channel the channel * @param key the keys * @return Long count of members of the resulting set. @@ -134,7 +150,7 @@ public interface NodeSelectionSetCommands { /** * Remove and return a random member from a set. - * + * * @param key the key * @return V bulk-string-reply the removed element, or {@literal null} when {@code key} does not exist. */ @@ -151,9 +167,9 @@ public interface NodeSelectionSetCommands { /** * Get one random member from a set. - * + * * @param key the key - * + * * @return V bulk-string-reply without the additional {@code count} argument the command returns a Bulk Reply with the * randomly selected element, or {@literal null} when {@code key} does not exist. */ @@ -161,7 +177,7 @@ public interface NodeSelectionSetCommands { /** * Get one or multiple random members from a set. - * + * * @param key the key * @param count the count type: long * @return Set<V> bulk-string-reply without the additional {@code count} argument the command returns a Bulk Reply @@ -171,7 +187,7 @@ public interface NodeSelectionSetCommands { /** * Get one or multiple random members from a set. 
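A short sketch of the set commands covered above (`sadd`, `sismember`); the key, members, and the `sets` parameter are illustrative assumptions.

```java
import io.lettuce.core.cluster.api.sync.Executions;
import io.lettuce.core.cluster.api.sync.NodeSelectionSetCommands;

class SetSketch {
    static void tag(NodeSelectionSetCommands<String, String> sets) {
        // SADD reports how many of the given members were newly added per node.
        Executions<Long> added = sets.sadd("tags", "redis", "cluster");
        added.forEach(n -> System.out.println("newly added: " + n));
        // SISMEMBER answers membership with a boolean per node.
        Executions<Boolean> present = sets.sismember("tags", "redis");
        present.forEach(p -> System.out.println("redis tag present: " + p));
    }
}
```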
- * + * * @param channel streaming channel that receives a call for every value * @param key the key * @param count the count @@ -181,7 +197,7 @@ public interface NodeSelectionSetCommands { /** * Remove one or more members from a set. - * + * * @param key the key * @param members the member type: value * @return Long integer-reply the number of members that were removed from the set, not including non existing members. @@ -190,7 +206,7 @@ public interface NodeSelectionSetCommands { /** * Add multiple sets. - * + * * @param keys the key * @return Set<V> array-reply list with members of the resulting set. */ @@ -198,7 +214,7 @@ public interface NodeSelectionSetCommands { /** * Add multiple sets. - * + * * @param channel streaming channel that receives a call for every value * @param keys the keys * @return Long count of members of the resulting set. @@ -207,7 +223,7 @@ public interface NodeSelectionSetCommands { /** * Add multiple sets and store the resulting set in a key. - * + * * @param destination the destination type: key * @param keys the key * @return Long integer-reply the number of elements in the resulting set. @@ -216,7 +232,7 @@ public interface NodeSelectionSetCommands { /** * Incrementally iterate Set elements. - * + * * @param key the key * @return ValueScanCursor<V> scan cursor. */ @@ -224,7 +240,7 @@ public interface NodeSelectionSetCommands { /** * Incrementally iterate Set elements. - * + * * @param key the key * @param scanArgs scan arguments * @return ValueScanCursor<V> scan cursor. @@ -233,7 +249,7 @@ public interface NodeSelectionSetCommands { /** * Incrementally iterate Set elements. - * + * * @param key the key * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} * @param scanArgs scan arguments @@ -243,7 +259,7 @@ public interface NodeSelectionSetCommands { /** * Incrementally iterate Set elements. - * + * * @param key the key * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} * @return ValueScanCursor<V> scan cursor. @@ -252,7 +268,7 @@ public interface NodeSelectionSetCommands { /** * Incrementally iterate Set elements. - * + * * @param channel streaming channel that receives a call for every value * @param key the key * @return StreamScanCursor scan cursor. @@ -261,7 +277,7 @@ public interface NodeSelectionSetCommands { /** * Incrementally iterate Set elements. - * + * * @param channel streaming channel that receives a call for every value * @param key the key * @param scanArgs scan arguments @@ -271,7 +287,7 @@ public interface NodeSelectionSetCommands { /** * Incrementally iterate Set elements. - * + * * @param channel streaming channel that receives a call for every value * @param key the key * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} @@ -282,7 +298,7 @@ public interface NodeSelectionSetCommands { /** * Incrementally iterate Set elements. - * + * * @param channel streaming channel that receives a call for every value * @param key the key * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} diff --git a/src/main/java/io/lettuce/core/cluster/api/sync/NodeSelectionSortedSetCommands.java b/src/main/java/io/lettuce/core/cluster/api/sync/NodeSelectionSortedSetCommands.java new file mode 100644 index 0000000000..9f65f9456a --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/api/sync/NodeSelectionSortedSetCommands.java @@ -0,0 +1,1251 @@ +/* + * Copyright 2017-2020 the original author or authors. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.api.sync; + +import java.util.List; + +import io.lettuce.core.*; +import io.lettuce.core.output.ScoredValueStreamingChannel; +import io.lettuce.core.output.ValueStreamingChannel; + +/** + * Synchronous executed commands on a node selection for Sorted Sets. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.0 + * @generated by io.lettuce.apigenerator.CreateSyncNodeSelectionClusterApi + */ +public interface NodeSelectionSortedSetCommands { + + /** + * Removes and returns a member with the lowest scores in the sorted set stored at one of the keys. + * + * @param timeout the timeout in seconds. + * @param keys the keys. + * @return KeyValue<K, ScoredValue<V>> multi-bulk containing the name of the key, the score and the popped member. + * @since 5.1 + */ + Executions>> bzpopmin(long timeout, K... keys); + + /** + * Removes and returns a member with the highest scores in the sorted set stored at one of the keys. + * + * @param timeout the timeout in seconds. + * @param keys the keys. + * @return KeyValue<K, ScoredValue<V>> multi-bulk containing the name of the key, the score and the popped member. + * @since 5.1 + */ + Executions>> bzpopmax(long timeout, K... keys); + + /** + * Add one or more members to a sorted set, or update its score if it already exists. + * + * @param key the key + * @param score the score + * @param member the member + * + * @return Long integer-reply specifically: + * + * The number of elements added to the sorted sets, not including elements already existing for which the score was + * updated. + */ + Executions zadd(K key, double score, V member); + + /** + * Add one or more members to a sorted set, or update its score if it already exists. + * + * @param key the key + * @param scoresAndValues the scoresAndValue tuples (score,value,score,value,...) + * @return Long integer-reply specifically: + * + * The number of elements added to the sorted sets, not including elements already existing for which the score was + * updated. + */ + Executions zadd(K key, Object... scoresAndValues); + + /** + * Add one or more members to a sorted set, or update its score if it already exists. + * + * @param key the key + * @param scoredValues the scored values + * @return Long integer-reply specifically: + * + * The number of elements added to the sorted sets, not including elements already existing for which the score was + * updated. + */ + Executions zadd(K key, ScoredValue... scoredValues); + + /** + * Add one or more members to a sorted set, or update its score if it already exists. + * + * @param key the key + * @param zAddArgs arguments for zadd + * @param score the score + * @param member the member + * + * @return Long integer-reply specifically: + * + * The number of elements added to the sorted sets, not including elements already existing for which the score was + * updated. 
+ */ + Executions zadd(K key, ZAddArgs zAddArgs, double score, V member); + + /** + * Add one or more members to a sorted set, or update its score if it already exists. + * + * @param key the key + * @param zAddArgs arguments for zadd + * @param scoresAndValues the scoresAndValue tuples (score,value,score,value,...) + * @return Long integer-reply specifically: + * + * The number of elements added to the sorted sets, not including elements already existing for which the score was + * updated. + */ + Executions zadd(K key, ZAddArgs zAddArgs, Object... scoresAndValues); + + /** + * Add one or more members to a sorted set, or update its score if it already exists. + * + * @param key the ke + * @param zAddArgs arguments for zadd + * @param scoredValues the scored values + * @return Long integer-reply specifically: + * + * The number of elements added to the sorted sets, not including elements already existing for which the score was + * updated. + */ + Executions zadd(K key, ZAddArgs zAddArgs, ScoredValue... scoredValues); + + /** + * Add one or more members to a sorted set, or update its score if it already exists applying the {@code INCR} option. ZADD + * acts like ZINCRBY. + * + * @param key the key + * @param score the score + * @param member the member + * @return Long integer-reply specifically: The total number of elements changed + */ + Executions zaddincr(K key, double score, V member); + + /** + * Add one or more members to a sorted set, or update its score if it already exists applying the {@code INCR} option. ZADD + * acts like ZINCRBY. + * + * @param key the key + * @param zAddArgs arguments for zadd + * @param score the score + * @param member the member + * @return Long integer-reply specifically: The total number of elements changed + * @since 4.3 + */ + Executions zaddincr(K key, ZAddArgs zAddArgs, double score, V member); + + /** + * Get the number of members in a sorted set. + * + * @param key the key + * @return Long integer-reply the cardinality (number of elements) of the sorted set, or {@literal false} if {@code key} + * does not exist. + */ + Executions zcard(K key); + + /** + * Count the members in a sorted set with scores within the given values. + * + * @param key the key + * @param min min score + * @param max max score + * @return Long integer-reply the number of elements in the specified score range. + * @deprecated Use {@link #zcount(java.lang.Object, Range)} + */ + @Deprecated + Executions zcount(K key, double min, double max); + + /** + * Count the members in a sorted set with scores within the given values. + * + * @param key the key + * @param min min score + * @param max max score + * @return Long integer-reply the number of elements in the specified score range. + * @deprecated Use {@link #zcount(java.lang.Object, Range)} + */ + @Deprecated + Executions zcount(K key, String min, String max); + + /** + * Count the members in a sorted set with scores within the given {@link Range}. + * + * @param key the key + * @param range the range + * @return Long integer-reply the number of elements in the specified score range. + * @since 4.3 + */ + Executions zcount(K key, Range range); + + /** + * Increment the score of a member in a sorted set. + * + * @param key the key + * @param amount the increment type: long + * @param member the member type: value + * @return Double bulk-string-reply the new score of {@code member} (a double precision floating point number), represented + * as string. 
+ */ + Executions zincrby(K key, double amount, V member); + + /** + * Intersect multiple sorted sets and store the resulting sorted set in a new key. + * + * @param destination the destination + * @param keys the keys + * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. + */ + Executions zinterstore(K destination, K... keys); + + /** + * Intersect multiple sorted sets and store the resulting sorted set in a new key. + * + * @param destination the destination + * @param storeArgs the storeArgs + * @param keys the keys + * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. + */ + Executions zinterstore(K destination, ZStoreArgs storeArgs, K... keys); + + /** + * Count the number of members in a sorted set between a given lexicographical range. + * + * @param key the key + * @param min min score + * @param max max score + * @return Long integer-reply the number of elements in the specified score range. + * @deprecated Use {@link #zlexcount(java.lang.Object, Range)} + */ + @Deprecated + Executions zlexcount(K key, String min, String max); + + /** + * Count the number of members in a sorted set between a given lexicographical range. + * + * @param key the key + * @param range the range + * @return Long integer-reply the number of elements in the specified score range. + * @since 4.3 + */ + Executions zlexcount(K key, Range range); + + /** + * Removes and returns up to count members with the lowest scores in the sorted set stored at key. + * + * @param key the key + * @return ScoredValue<V> the removed element. + * @since 5.1 + */ + Executions> zpopmin(K key); + + /** + * Removes and returns up to count members with the lowest scores in the sorted set stored at key. + * + * @param key the key. + * @param count the number of elements to return. + * @return List<ScoredValue<V>> array-reply list of popped scores and elements. + * @since 5.1 + */ + Executions>> zpopmin(K key, long count); + + /** + * Removes and returns up to count members with the highest scores in the sorted set stored at key. + * + * @param key the key + * @return ScoredValue<V> the removed element. + * @since 5.1 + */ + Executions> zpopmax(K key); + + /** + * Removes and returns up to count members with the highest scores in the sorted set stored at key. + * + * @param key the key. + * @param count the number of elements to return. + * @return List<ScoredValue<V>> array-reply list of popped scores and elements. + * @since 5.1 + */ + Executions>> zpopmax(K key, long count); + + /** + * Return a range of members in a sorted set, by index. + * + * @param key the key + * @param start the start + * @param stop the stop + * @return List<V> array-reply list of elements in the specified range. + */ + Executions> zrange(K key, long start, long stop); + + /** + * Return a range of members in a sorted set, by index. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param start the start + * @param stop the stop + * @return Long count of elements in the specified range. + */ + Executions zrange(ValueStreamingChannel channel, K key, long start, long stop); + + /** + * Return a range of members with scores in a sorted set, by index. + * + * @param key the key + * @param start the start + * @param stop the stop + * @return List<V> array-reply list of elements in the specified range. 
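Tying the sorted-set additions above together, a sketch that builds a small leaderboard with `zadd`, reads it back with `zrange`, and pops the lowest score with `zpopmin` (since 5.1); the key, members, and the `zsets` parameter are assumptions.

```java
import java.util.List;
import io.lettuce.core.ScoredValue;
import io.lettuce.core.cluster.api.sync.Executions;
import io.lettuce.core.cluster.api.sync.NodeSelectionSortedSetCommands;

class LeaderboardSketch {
    static void leaderboard(NodeSelectionSortedSetCommands<String, String> zsets) {
        zsets.zadd("scores", 42.0, "alice");
        zsets.zadd("scores", 17.0, "bob");
        // ZRANGE by rank: 0..-1 returns every member in score order.
        Executions<List<String>> all = zsets.zrange("scores", 0, -1);
        all.forEach(members -> System.out.println("members: " + members));
        // ZPOPMIN removes and returns the member with the lowest score.
        Executions<ScoredValue<String>> lowest = zsets.zpopmin("scores");
        lowest.forEach(sv -> System.out.println(sv.getValue() + " @ " + sv.getScore()));
    }
}
```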
+ */ + Executions>> zrangeWithScores(K key, long start, long stop); + + /** + * Stream over a range of members with scores in a sorted set, by index. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param start the start + * @param stop the stop + * @return Long count of elements in the specified range. + */ + Executions zrangeWithScores(ScoredValueStreamingChannel channel, K key, long start, long stop); + + /** + * Return a range of members in a sorted set, by lexicographical range. + * + * @param key the key + * @param min min score + * @param max max score + * @return List<V> array-reply list of elements in the specified range. + * @deprecated Use {@link #zrangebylex(java.lang.Object, Range)} + */ + @Deprecated + Executions> zrangebylex(K key, String min, String max); + + /** + * Return a range of members in a sorted set, by lexicographical range. + * + * @param key the key + * @param range the range + * @return List<V> array-reply list of elements in the specified range. + * @since 4.3 + */ + Executions> zrangebylex(K key, Range range); + + /** + * Return a range of members in a sorted set, by lexicographical range. + * + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return List<V> array-reply list of elements in the specified range. + * @deprecated Use {@link #zrangebylex(java.lang.Object, Range)} + */ + @Deprecated + Executions> zrangebylex(K key, String min, String max, long offset, long count); + + /** + * Return a range of members in a sorted set, by lexicographical range. + * + * @param key the key + * @param range the range + * @param limit the limit + * @return List<V> array-reply list of elements in the specified range. + * @since 4.3 + */ + Executions> zrangebylex(K key, Range range, Limit limit); + + /** + * Return a range of members in a sorted set, by score. + * + * @param key the key + * @param min min score + * @param max max score + * @return List<V> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrangebyscore(java.lang.Object, Range)} + */ + @Deprecated + Executions> zrangebyscore(K key, double min, double max); + + /** + * Return a range of members in a sorted set, by score. + * + * @param key the key + * @param min min score + * @param max max score + * @return List<V> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrangebyscore(java.lang.Object, Range)} + */ + @Deprecated + Executions> zrangebyscore(K key, String min, String max); + + /** + * Return a range of members in a sorted set, by score. + * + * @param key the key + * @param range the range + * @return List<V> array-reply list of elements in the specified score range. + * @since 4.3 + */ + Executions> zrangebyscore(K key, Range range); + + /** + * Return a range of members in a sorted set, by score. + * + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return List<V> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrangebyscore(java.lang.Object, Range, Limit)} + */ + @Deprecated + Executions> zrangebyscore(K key, double min, double max, long offset, long count); + + /** + * Return a range of members in a sorted set, by score. 
+ * + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return List<V> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrangebyscore(java.lang.Object, Range, Limit)} + */ + @Deprecated + Executions> zrangebyscore(K key, String min, String max, long offset, long count); + + /** + * Return a range of members in a sorted set, by score. + * + * @param key the key + * @param range the range + * @param limit the limit + * @return List<V> array-reply list of elements in the specified score range. + * @since 4.3 + */ + Executions> zrangebyscore(K key, Range range, Limit limit); + + /** + * Stream over a range of members in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param min min score + * @param max max score + * @return Long count of elements in the specified score range. + * @deprecated Use {@link #zrangebyscore(ValueStreamingChannel, java.lang.Object, Range)} + */ + @Deprecated + Executions zrangebyscore(ValueStreamingChannel channel, K key, double min, double max); + + /** + * Stream over a range of members in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param min min score + * @param max max score + * @return Long count of elements in the specified score range. + * @deprecated Use {@link #zrangebyscore(ValueStreamingChannel, java.lang.Object, Range)} + */ + @Deprecated + Executions zrangebyscore(ValueStreamingChannel channel, K key, String min, String max); + + /** + * Stream over a range of members in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param range the range + * @return Long count of elements in the specified score range. + * @since 4.3 + */ + Executions zrangebyscore(ValueStreamingChannel channel, K key, Range range); + + /** + * Stream over range of members in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return Long count of elements in the specified score range. + * @deprecated Use {@link #zrangebyscore(ValueStreamingChannel, java.lang.Object, Range, Limit limit)} + */ + @Deprecated + Executions zrangebyscore(ValueStreamingChannel channel, K key, double min, double max, long offset, long count); + + /** + * Stream over a range of members in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return Long count of elements in the specified score range. + * @deprecated Use {@link #zrangebyscore(ValueStreamingChannel, java.lang.Object, Range, Limit limit)} + */ + @Deprecated + Executions zrangebyscore(ValueStreamingChannel channel, K key, String min, String max, long offset, long count); + + /** + * Stream over a range of members in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param range the range + * @param limit the limit + * @return Long count of elements in the specified score range. 
+ * @since 4.3 + */ + Executions zrangebyscore(ValueStreamingChannel channel, K key, Range range, Limit limit); + + /** + * Return a range of members with score in a sorted set, by score. + * + * @param key the key + * @param min min score + * @param max max score + * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrangebyscoreWithScores(java.lang.Object, Range)} + */ + @Deprecated + Executions>> zrangebyscoreWithScores(K key, double min, double max); + + /** + * Return a range of members with score in a sorted set, by score. + * + * @param key the key + * @param min min score + * @param max max score + * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrangebyscoreWithScores(java.lang.Object, Range)} + */ + @Deprecated + Executions>> zrangebyscoreWithScores(K key, String min, String max); + + /** + * Return a range of members with score in a sorted set, by score. + * + * @param key the key + * @param range the range + * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. + * @since 4.3 + */ + Executions>> zrangebyscoreWithScores(K key, Range range); + + /** + * Return a range of members with score in a sorted set, by score. + * + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrangebyscoreWithScores(java.lang.Object, Range, Limit limit)} + */ + @Deprecated + Executions>> zrangebyscoreWithScores(K key, double min, double max, long offset, long count); + + /** + * Return a range of members with score in a sorted set, by score. + * + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrangebyscoreWithScores(java.lang.Object, Range, Limit)} + */ + @Deprecated + Executions>> zrangebyscoreWithScores(K key, String min, String max, long offset, long count); + + /** + * Return a range of members with score in a sorted set, by score. + * + * @param key the key + * @param range the range + * @param limit the limit + * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. + * @since 4.3 + */ + Executions>> zrangebyscoreWithScores(K key, Range range, Limit limit); + + /** + * Stream over a range of members with scores in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param min min score + * @param max max score + * @return Long count of elements in the specified score range. + * @deprecated Use {@link #zrangebyscoreWithScores(ScoredValueStreamingChannel, java.lang.Object, Range)} + */ + @Deprecated + Executions zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double min, double max); + + /** + * Stream over a range of members with scores in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param min min score + * @param max max score + * @return Long count of elements in the specified score range. 
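// Illustrative sketch (hypothetical key and scores): querying by score through the 4.3
// Range/Limit overloads instead of the deprecated double/String min/max forms. Assumes
// `sync` is a RedisAdvancedClusterCommands<String, String>; requires io.lettuce.core.Range,
// io.lettuce.core.Limit and io.lettuce.core.ScoredValue.
List<ScoredValue<String>> page =
        sync.zrangebyscoreWithScores("events", Range.create(1000, 2000), Limit.create(0, 10));

for (ScoredValue<String> scoredValue : page) {
    System.out.println(scoredValue.getValue() + " scored " + scoredValue.getScore());
}

// Open or exclusive bounds are expressed via Range.Boundary, e.g. scores strictly above 1000:
// Range.from(Range.Boundary.excluding(1000), Range.Boundary.unbounded())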
+ * @deprecated Use {@link #zrangebyscoreWithScores(ScoredValueStreamingChannel, java.lang.Object, Range)} + */ + @Deprecated + Executions zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String min, String max); + + /** + * Stream over a range of members with scores in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param range the range + * @return Long count of elements in the specified score range. + * @since 4.3 + */ + Executions zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, Range range); + + /** + * Stream over a range of members with scores in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return Long count of elements in the specified score range. + * @deprecated Use {@link #zrangebyscoreWithScores(ScoredValueStreamingChannel, java.lang.Object, Range, Limit limit)} + */ + @Deprecated + Executions zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double min, double max, long offset, long count); + + /** + * Stream over a range of members with scores in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return Long count of elements in the specified score range. + * @deprecated Use {@link #zrangebyscoreWithScores(ScoredValueStreamingChannel, java.lang.Object, Range, Limit limit)} + */ + @Deprecated + Executions zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String min, String max, long offset, long count); + + /** + * Stream over a range of members with scores in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param range the range + * @param limit the limit + * @return Long count of elements in the specified score range. + * @since 4.3 + */ + Executions zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, Range range, Limit limit); + + /** + * Determine the index of a member in a sorted set. + * + * @param key the key + * @param member the member type: value + * @return Long integer-reply the rank of {@code member}. If {@code member} does not exist in the sorted set or {@code key} + * does not exist, + */ + Executions zrank(K key, V member); + + /** + * Remove one or more members from a sorted set. + * + * @param key the key + * @param members the member type: value + * @return Long integer-reply specifically: + * + * The number of members removed from the sorted set, not including non existing members. + */ + Executions zrem(K key, V... members); + + /** + * Remove all members in a sorted set between the given lexicographical range. + * + * @param key the key + * @param min min score + * @param max max score + * @return Long integer-reply the number of elements removed. + * @deprecated Use {@link #zremrangebylex(java.lang.Object, Range)} + */ + @Deprecated + Executions zremrangebylex(K key, String min, String max); + + /** + * Remove all members in a sorted set between the given lexicographical range. + * + * @param key the key + * @param range the range + * @return Long integer-reply the number of elements removed. 
+ * @since 4.3 + */ + Executions zremrangebylex(K key, Range range); + + /** + * Remove all members in a sorted set within the given indexes. + * + * @param key the key + * @param start the start type: long + * @param stop the stop type: long + * @return Long integer-reply the number of elements removed. + */ + Executions zremrangebyrank(K key, long start, long stop); + + /** + * Remove all members in a sorted set within the given scores. + * + * @param key the key + * @param min min score + * @param max max score + * @return Long integer-reply the number of elements removed. + * @deprecated Use {@link #zremrangebyscore(java.lang.Object, Range)} + */ + @Deprecated + Executions zremrangebyscore(K key, double min, double max); + + /** + * Remove all members in a sorted set within the given scores. + * + * @param key the key + * @param min min score + * @param max max score + * @return Long integer-reply the number of elements removed. + * @deprecated Use {@link #zremrangebyscore(java.lang.Object, Range)} + */ + @Deprecated + Executions zremrangebyscore(K key, String min, String max); + + /** + * Remove all members in a sorted set within the given scores. + * + * @param key the key + * @param range the range + * @return Long integer-reply the number of elements removed. + * @since 4.3 + */ + Executions zremrangebyscore(K key, Range range); + + /** + * Return a range of members in a sorted set, by index, with scores ordered from high to low. + * + * @param key the key + * @param start the start + * @param stop the stop + * @return List<V> array-reply list of elements in the specified range. + */ + Executions> zrevrange(K key, long start, long stop); + + /** + * Stream over a range of members in a sorted set, by index, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param start the start + * @param stop the stop + * @return Long count of elements in the specified range. + */ + Executions zrevrange(ValueStreamingChannel channel, K key, long start, long stop); + + /** + * Return a range of members with scores in a sorted set, by index, with scores ordered from high to low. + * + * @param key the key + * @param start the start + * @param stop the stop + * @return List<V> array-reply list of elements in the specified range. + */ + Executions>> zrevrangeWithScores(K key, long start, long stop); + + /** + * Stream over a range of members with scores in a sorted set, by index, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param start the start + * @param stop the stop + * @return Long count of elements in the specified range. + */ + Executions zrevrangeWithScores(ScoredValueStreamingChannel channel, K key, long start, long stop); + + /** + * Return a range of members in a sorted set, by lexicographical range ordered from high to low. + * + * @param key the key + * @param range the range + * @return List<V> array-reply list of elements in the specified score range. + * @since 4.3 + */ + Executions> zrevrangebylex(K key, Range range); + + /** + * Return a range of members in a sorted set, by lexicographical range ordered from high to low. + * + * @param key the key + * @param range the range + * @param limit the limit + * @return List<V> array-reply list of elements in the specified score range. 
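// Illustrative sketch: removing every member whose score falls inside a range, using the
// Range overload that supersedes the deprecated min/max variants. The key name
// "sessions:expiry" and the use of a Unix timestamp as score are hypothetical.
long nowSeconds = System.currentTimeMillis() / 1000;
Long removed = sync.zremrangebyscore("sessions:expiry", Range.create(0L, nowSeconds));
System.out.println(removed + " expired sessions removed");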
+ * @since 4.3 + */ + Executions> zrevrangebylex(K key, Range range, Limit limit); + + /** + * Return a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param min min score + * @param max max score + * @return List<V> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrevrangebyscore(java.lang.Object, Range)} + */ + @Deprecated + Executions> zrevrangebyscore(K key, double max, double min); + + /** + * Return a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param min min score + * @param max max score + * @return List<V> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrevrangebyscore(java.lang.Object, Range)} + */ + @Deprecated + Executions> zrevrangebyscore(K key, String max, String min); + + /** + * Return a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param range the range + * @return List<V> array-reply list of elements in the specified score range. + * @since 4.3 + */ + Executions> zrevrangebyscore(K key, Range range); + + /** + * Return a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param max max score + * @param min min score + * @param offset the withscores + * @param count the null + * @return List<V> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrevrangebyscore(java.lang.Object, Range, Limit)} + */ + @Deprecated + Executions> zrevrangebyscore(K key, double max, double min, long offset, long count); + + /** + * Return a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param max max score + * @param min min score + * @param offset the offset + * @param count the count + * @return List<V> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrevrangebyscore(java.lang.Object, Range, Limit)} + */ + @Deprecated + Executions> zrevrangebyscore(K key, String max, String min, long offset, long count); + + /** + * Return a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param range the range + * @param limit the limit + * @return List<V> array-reply list of elements in the specified score range. + * @since 4.3 + */ + Executions> zrevrangebyscore(K key, Range range, Limit limit); + + /** + * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param max max score + * @param min min score + * @return Long count of elements in the specified range. + * @deprecated Use {@link #zrevrangebyscore(java.lang.Object, Range)} + */ + @Deprecated + Executions zrevrangebyscore(ValueStreamingChannel channel, K key, double max, double min); + + /** + * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param min min score + * @param max max score + * @return Long count of elements in the specified range. 
+ * @deprecated Use {@link #zrevrangebyscore(java.lang.Object, Range)} + */ + @Deprecated + Executions zrevrangebyscore(ValueStreamingChannel channel, K key, String max, String min); + + /** + * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param range the range + * @return Long count of elements in the specified range. + * @since 4.3 + */ + Executions zrevrangebyscore(ValueStreamingChannel channel, K key, Range range); + + /** + * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return Long count of elements in the specified range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(java.lang.Object, Range, Limit)} + */ + @Deprecated + Executions zrevrangebyscore(ValueStreamingChannel channel, K key, double max, double min, long offset, long count); + + /** + * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return Long count of elements in the specified range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(java.lang.Object, Range, Limit)} + */ + @Deprecated + Executions zrevrangebyscore(ValueStreamingChannel channel, K key, String max, String min, long offset, long count); + + /** + * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param range the range + * @param limit the limit + * @return Long count of elements in the specified range. + * @since 4.3 + */ + Executions zrevrangebyscore(ValueStreamingChannel channel, K key, Range range, Limit limit); + + /** + * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param max max score + * @param min min score + * @return List<V> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(java.lang.Object, Range)} + */ + @Deprecated + Executions>> zrevrangebyscoreWithScores(K key, double max, double min); + + /** + * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param max max score + * @param min min score + * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(java.lang.Object, Range)} + */ + @Deprecated + Executions>> zrevrangebyscoreWithScores(K key, String max, String min); + + /** + * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param range the range + * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. 
+ * @since 4.3 + */ + Executions>> zrevrangebyscoreWithScores(K key, Range range); + + /** + * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param max max score + * @param min min score + * @param offset the offset + * @param count the count + * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(java.lang.Object, Range, Limit)} + */ + @Deprecated + Executions>> zrevrangebyscoreWithScores(K key, double max, double min, long offset, long count); + + /** + * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param max max score + * @param min min score + * @param offset the offset + * @param count the count + * @return List<V> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(java.lang.Object, Range, Limit)} + */ + @Deprecated + Executions>> zrevrangebyscoreWithScores(K key, String max, String min, long offset, long count); + + /** + * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param range the range + * @param limit limit + * @return List<V> array-reply list of elements in the specified score range. + * @since 4.3 + */ + Executions>> zrevrangebyscoreWithScores(K key, Range range, Limit limit); + + /** + * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param min min score + * @param max max score + * @return Long count of elements in the specified range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(ScoredValueStreamingChannel, java.lang.Object, Range)} + */ + @Deprecated + Executions zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double max, double min); + + /** + * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param min min score + * @param max max score + * @return Long count of elements in the specified range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(ScoredValueStreamingChannel, java.lang.Object, Range)} + */ + @Deprecated + Executions zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String max, String min); + + /** + * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param range the range + * @return Long count of elements in the specified range. + */ + Executions zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, Range range); + + /** + * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return Long count of elements in the specified range. 
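// Illustrative sketch of a "top N" query with ZREVRANGEBYSCORE ... WITHSCORES: results are
// ordered from high to low, so Range.unbounded() plus a Limit yields the ten best scores.
// The leaderboard key is hypothetical; assumes `sync` as in the previous sketches.
List<ScoredValue<String>> topTen =
        sync.zrevrangebyscoreWithScores("leaderboard", Range.unbounded(), Limit.create(0, 10));

topTen.forEach(sv -> System.out.println(sv.getValue() + " -> " + sv.getScore()));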
+ * @deprecated Use {@link #zrevrangebyscoreWithScores(ScoredValueStreamingChannel, java.lang.Object, Range, Limit)} + */ + @Deprecated + Executions zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double max, double min, long offset, long count); + + /** + * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return Long count of elements in the specified range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(ScoredValueStreamingChannel, java.lang.Object, Range, Limit)} + */ + @Deprecated + Executions zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String max, String min, long offset, long count); + + /** + * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param range the range + * @param limit the limit + * @return Long count of elements in the specified range. + * @since 4.3 + */ + Executions zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, Range range, Limit limit); + + /** + * Determine the index of a member in a sorted set, with scores ordered from high to low. + * + * @param key the key + * @param member the member type: value + * @return Long integer-reply the rank of {@code member}. If {@code member} does not exist in the sorted set or {@code key} + * does not exist, + */ + Executions zrevrank(K key, V member); + + /** + * Incrementally iterate sorted sets elements and associated scores. + * + * @param key the key + * @return ScoredValueScanCursor<V> scan cursor. + */ + Executions> zscan(K key); + + /** + * Incrementally iterate sorted sets elements and associated scores. + * + * @param key the key + * @param scanArgs scan arguments + * @return ScoredValueScanCursor<V> scan cursor. + */ + Executions> zscan(K key, ScanArgs scanArgs); + + /** + * Incrementally iterate sorted sets elements and associated scores. + * + * @param key the key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @param scanArgs scan arguments + * @return ScoredValueScanCursor<V> scan cursor. + */ + Executions> zscan(K key, ScanCursor scanCursor, ScanArgs scanArgs); + + /** + * Incrementally iterate sorted sets elements and associated scores. + * + * @param key the key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @return ScoredValueScanCursor<V> scan cursor. + */ + Executions> zscan(K key, ScanCursor scanCursor); + + /** + * Incrementally iterate sorted sets elements and associated scores. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @return StreamScanCursor scan cursor. + */ + Executions zscan(ScoredValueStreamingChannel channel, K key); + + /** + * Incrementally iterate sorted sets elements and associated scores. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param scanArgs scan arguments + * @return StreamScanCursor scan cursor. 
+ */ + Executions zscan(ScoredValueStreamingChannel channel, K key, ScanArgs scanArgs); + + /** + * Incrementally iterate sorted sets elements and associated scores. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @param scanArgs scan arguments + * @return StreamScanCursor scan cursor. + */ + Executions zscan(ScoredValueStreamingChannel channel, K key, ScanCursor scanCursor, ScanArgs scanArgs); + + /** + * Incrementally iterate sorted sets elements and associated scores. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @return StreamScanCursor scan cursor. + */ + Executions zscan(ScoredValueStreamingChannel channel, K key, ScanCursor scanCursor); + + /** + * Get the score associated with the given member in a sorted set. + * + * @param key the key + * @param member the member type: value + * @return Double bulk-string-reply the score of {@code member} (a double precision floating point number), represented as + * string. + */ + Executions zscore(K key, V member); + + /** + * Add multiple sorted sets and store the resulting sorted set in a new key. + * + * @param destination destination key + * @param keys source keys + * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. + */ + Executions zunionstore(K destination, K... keys); + + /** + * Add multiple sorted sets and store the resulting sorted set in a new key. + * + * @param destination the destination + * @param storeArgs the storeArgs + * @param keys the keys + * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. + */ + Executions zunionstore(K destination, ZStoreArgs storeArgs, K... keys); +} diff --git a/src/main/java/io/lettuce/core/cluster/api/sync/NodeSelectionStreamCommands.java b/src/main/java/io/lettuce/core/cluster/api/sync/NodeSelectionStreamCommands.java new file mode 100644 index 0000000000..eb0f2d5f5b --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/api/sync/NodeSelectionStreamCommands.java @@ -0,0 +1,324 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.api.sync; + +import java.util.List; +import java.util.Map; + +import io.lettuce.core.*; +import io.lettuce.core.XReadArgs.StreamOffset; + +/** + * Synchronous executed commands on a node selection for Streams. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 5.1 + * @generated by io.lettuce.apigenerator.CreateSyncNodeSelectionClusterApi + */ +public interface NodeSelectionStreamCommands { + + /** + * Acknowledge one or more messages as processed. + * + * @param key the stream key. + * @param group name of the consumer group. 
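// Illustrative sketch for the ZSCAN overloads documented above: iterate a sorted set
// incrementally, resuming from the returned cursor until the server reports completion.
// Key name and the COUNT hint are hypothetical; requires io.lettuce.core.ScanArgs and
// io.lettuce.core.ScoredValueScanCursor.
ScoredValueScanCursor<String> cursor = sync.zscan("myset", ScanArgs.Builder.limit(100));
cursor.getValues().forEach(sv -> System.out.println(sv.getValue()));

while (!cursor.isFinished()) {
    cursor = sync.zscan("myset", cursor, ScanArgs.Builder.limit(100));
    cursor.getValues().forEach(sv -> System.out.println(sv.getValue()));
}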
+ * @param messageIds message Id's to acknowledge. + * @return simple-reply the lenght of acknowledged messages. + */ + Executions xack(K key, K group, String... messageIds); + + /** + * Append a message to the stream {@code key}. + * + * @param key the stream key. + * @param body message body. + * @return simple-reply the message Id. + */ + Executions xadd(K key, Map body); + + /** + * Append a message to the stream {@code key}. + * + * @param key the stream key. + * @param args + * @param body message body. + * @return simple-reply the message Id. + */ + Executions xadd(K key, XAddArgs args, Map body); + + /** + * Append a message to the stream {@code key}. + * + * @param key the stream key. + * @param keysAndValues message body. + * @return simple-reply the message Id. + */ + Executions xadd(K key, Object... keysAndValues); + + /** + * Append a message to the stream {@code key}. + * + * @param key the stream key. + * @param args + * @param keysAndValues message body. + * @return simple-reply the message Id. + */ + Executions xadd(K key, XAddArgs args, Object... keysAndValues); + + /** + * Gets ownership of one or multiple messages in the Pending Entries List of a given stream consumer group. + * + * @param key the stream key. + * @param consumer consumer identified by group name and consumer key. + * @param minIdleTime + * @param messageIds message Id's to claim. + * @return simple-reply the {@link StreamMessage} + */ + Executions>> xclaim(K key, Consumer consumer, long minIdleTime, String... messageIds); + + /** + * Gets ownership of one or multiple messages in the Pending Entries List of a given stream consumer group. + *
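// Illustrative sketch for XADD: append an entry to a stream, optionally capping its length
// with MAXLEN through XAddArgs. Stream key and field names are hypothetical; requires
// java.util.Map/HashMap and io.lettuce.core.XAddArgs.
Map<String, String> body = new HashMap<>();
body.put("sensor", "temp-1");
body.put("value", "21.5");

String id = sync.xadd("weather", body);
String cappedId = sync.xadd("weather", XAddArgs.Builder.maxlen(10_000), body);
System.out.println("appended " + id + " and " + cappedId);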
+ * <p>
+ * Note that setting the {@code JUSTID} flag (calling this method with {@link XClaimArgs#justid()}) suppresses the message + * bode and {@link StreamMessage#getBody()} is {@code null}. + * + * @param key the stream key. + * @param consumer consumer identified by group name and consumer key. + * @param args + * @param messageIds message Id's to claim. + * @return simple-reply the {@link StreamMessage} + */ + Executions>> xclaim(K key, Consumer consumer, XClaimArgs args, String... messageIds); + + /** + * Removes the specified entries from the stream. Returns the number of items deleted, that may be different from the number + * of IDs passed in case certain IDs do not exist. + * + * @param key the stream key. + * @param messageIds stream message Id's. + * @return simple-reply number of removed entries. + */ + Executions xdel(K key, String... messageIds); + + /** + * Create a consumer group. + * + * @param streamOffset name of the stream containing the offset to set. + * @param group name of the consumer group. + * @return simple-reply {@literal true} if successful. + */ + Executions xgroupCreate(StreamOffset streamOffset, K group); + + /** + * Create a consumer group. + * + * @param streamOffset name of the stream containing the offset to set. + * @param group name of the consumer group. + * @param args + * @return simple-reply {@literal true} if successful. + * @since 5.2 + */ + Executions xgroupCreate(StreamOffset streamOffset, K group, XGroupCreateArgs args); + + /** + * Delete a consumer from a consumer group. + * + * @param key the stream key. + * @param consumer consumer identified by group name and consumer key. + * @return simple-reply {@literal true} if successful. + */ + Executions xgroupDelconsumer(K key, Consumer consumer); + + /** + * Destroy a consumer group. + * + * @param key the stream key. + * @param group name of the consumer group. + * @return simple-reply {@literal true} if successful. + */ + Executions xgroupDestroy(K key, K group); + + /** + * Set the current {@code group} id. + * + * @param streamOffset name of the stream containing the offset to set. + * @param group name of the consumer group. + * @return simple-reply OK + */ + Executions xgroupSetid(StreamOffset streamOffset, K group); + + /** + * Retrieve information about the stream at {@code key}. + * + * @param key the stream key. + * @return List<Object> array-reply. + * @since 5.2 + */ + Executions> xinfoStream(K key); + + /** + * Retrieve information about the stream consumer groups at {@code key}. + * + * @param key the stream key. + * @return List<Object> array-reply. + * @since 5.2 + */ + Executions> xinfoGroups(K key); + + /** + * Retrieve information about consumer groups of group {@code group} and stream at {@code key}. + * + * @param key the stream key. + * @param group name of the consumer group. + * @return List<Object> array-reply. + * @since 5.2 + */ + Executions> xinfoConsumers(K key, K group); + + /** + * Get the length of a steam. + * + * @param key the stream key. + * @return simple-reply the lenght of the stream. + */ + Executions xlen(K key); + + /** + * Read pending messages from a stream for a {@code group}. + * + * @param key the stream key. + * @param group name of the consumer group. + * @return List<Object> array-reply list pending entries. + */ + Executions> xpending(K key, K group); + + /** + * Read pending messages from a stream within a specific {@link Range}. + * + * @param key the stream key. + * @param group name of the consumer group. 
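// Illustrative sketch of the consumer-group lifecycle commands described above. Creating a
// group on a stream that may not exist yet uses the 5.2 XGroupCreateArgs overload with
// MKSTREAM; stream, group and consumer names are hypothetical.
sync.xgroupCreate(XReadArgs.StreamOffset.latest("orders"), "billing",
        XGroupCreateArgs.Builder.mkstream(true));

// Later on, a single consumer can be removed or the whole group destroyed:
// sync.xgroupDelconsumer("orders", Consumer.from("billing", "worker-1"));
// sync.xgroupDestroy("orders", "billing");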
+ * @param range must not be {@literal null}. + * @param limit must not be {@literal null}. + * @return List<Object> array-reply list with members of the resulting stream. + */ + Executions> xpending(K key, K group, Range range, Limit limit); + + /** + * Read pending messages from a stream within a specific {@link Range}. + * + * @param key the stream key. + * @param consumer consumer identified by group name and consumer key. + * @param range must not be {@literal null}. + * @param limit must not be {@literal null}. + * @return List<Object> array-reply list with members of the resulting stream. + */ + Executions> xpending(K key, Consumer consumer, Range range, Limit limit); + + /** + * Read messages from a stream within a specific {@link Range}. + * + * @param key the stream key. + * @param range must not be {@literal null}. + * @return List<StreamMessage> array-reply list with members of the resulting stream. + */ + Executions>> xrange(K key, Range range); + + /** + * Read messages from a stream within a specific {@link Range} applying a {@link Limit}. + * + * @param key the stream key. + * @param range must not be {@literal null}. + * @param limit must not be {@literal null}. + * @return List<StreamMessage> array-reply list with members of the resulting stream. + */ + Executions>> xrange(K key, Range range, Limit limit); + + /** + * Read messages from one or more {@link StreamOffset}s. + * + * @param streams the streams to read from. + * @return List<StreamMessage> array-reply list with members of the resulting stream. + */ + Executions>> xread(StreamOffset... streams); + + /** + * Read messages from one or more {@link StreamOffset}s. + * + * @param args read arguments. + * @param streams the streams to read from. + * @return List<StreamMessage> array-reply list with members of the resulting stream. + */ + Executions>> xread(XReadArgs args, StreamOffset... streams); + + /** + * Read messages from one or more {@link StreamOffset}s using a consumer group. + * + * @param consumer consumer/group. + * @param streams the streams to read from. + * @return List<StreamMessage> array-reply list with members of the resulting stream. + */ + Executions>> xreadgroup(Consumer consumer, StreamOffset... streams); + + /** + * Read messages from one or more {@link StreamOffset}s using a consumer group. + * + * @param consumer consumer/group. + * @param args read arguments. + * @param streams the streams to read from. + * @return List<StreamMessage> array-reply list with members of the resulting stream. + */ + Executions>> xreadgroup(Consumer consumer, XReadArgs args, StreamOffset... streams); + + /** + * Read messages from a stream within a specific {@link Range} in reverse order. + * + * @param key the stream key. + * @param range must not be {@literal null}. + * @return List<StreamMessage> array-reply list with members of the resulting stream. + */ + Executions>> xrevrange(K key, Range range); + + /** + * Read messages from a stream within a specific {@link Range} applying a {@link Limit} in reverse order. + * + * @param key the stream key. + * @param range must not be {@literal null}. + * @param limit must not be {@literal null}. + * @return List<StreamMessage> array-reply list with members of the resulting stream. + */ + Executions>> xrevrange(K key, Range range, Limit limit); + + /** + * Trims the stream to {@code count} elements. + * + * @param key the stream key. + * @param count length of the stream. + * @return simple-reply number of removed entries. 
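// Illustrative read/acknowledge cycle for a consumer group (names reuse the hypothetical
// "orders"/"billing" example above; handleOrder is a placeholder for application logic).
List<StreamMessage<String, String>> messages = sync.xreadgroup(
        Consumer.from("billing", "worker-1"),
        XReadArgs.StreamOffset.lastConsumed("orders"));

for (StreamMessage<String, String> message : messages) {
    handleOrder(message.getBody());                   // hypothetical application callback
    sync.xack("orders", "billing", message.getId());  // acknowledge once processed
}

// Entries delivered but not yet acknowledged remain in the Pending Entries List:
// List<Object> pending = sync.xpending("orders", "billing");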
+ */ + Executions xtrim(K key, long count); + + /** + * Trims the stream to {@code count} elements. + * + * @param key the stream key. + * @param approximateTrimming {@literal true} to trim approximately using the {@code ~} flag. + * @param count length of the stream. + * @return simple-reply number of removed entries. + */ + Executions xtrim(K key, boolean approximateTrimming, long count); +} diff --git a/src/main/java/com/lambdaworks/redis/cluster/api/sync/NodeSelectionStringCommands.java b/src/main/java/io/lettuce/core/cluster/api/sync/NodeSelectionStringCommands.java similarity index 81% rename from src/main/java/com/lambdaworks/redis/cluster/api/sync/NodeSelectionStringCommands.java rename to src/main/java/io/lettuce/core/cluster/api/sync/NodeSelectionStringCommands.java index f85d35899e..3e0e9b437e 100644 --- a/src/main/java/com/lambdaworks/redis/cluster/api/sync/NodeSelectionStringCommands.java +++ b/src/main/java/io/lettuce/core/cluster/api/sync/NodeSelectionStringCommands.java @@ -1,25 +1,42 @@ -package com.lambdaworks.redis.cluster.api.sync; +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.api.sync; import java.util.List; import java.util.Map; -import com.lambdaworks.redis.output.ValueStreamingChannel; -import com.lambdaworks.redis.BitFieldArgs; -import com.lambdaworks.redis.SetArgs; + +import io.lettuce.core.BitFieldArgs; +import io.lettuce.core.KeyValue; +import io.lettuce.core.SetArgs; +import io.lettuce.core.output.KeyValueStreamingChannel; /** * Synchronous executed commands on a node selection for Strings. - * + * * @param Key type. * @param Value type. * @author Mark Paluch * @since 4.0 - * @generated by com.lambdaworks.apigenerator.CreateSyncNodeSelectionClusterApi + * @generated by io.lettuce.apigenerator.CreateSyncNodeSelectionClusterApi */ public interface NodeSelectionStringCommands { /** * Append a value to a key. - * + * * @param key the key * @param value the value * @return Long integer-reply the length of the string after the append operation. @@ -28,77 +45,94 @@ public interface NodeSelectionStringCommands { /** * Count set bits in a string. - * + * * @param key the key - * + * * @return Long integer-reply The number of bits set to 1. */ Executions bitcount(K key); /** * Count set bits in a string. - * + * * @param key the key * @param start the start * @param end the end - * + * * @return Long integer-reply The number of bits set to 1. */ Executions bitcount(K key, long start, long end); /** * Execute {@code BITFIELD} with its subcommands. - * + * * @param key the key * @param bitFieldArgs the args containing subcommands, must not be {@literal null}. - * + * * @return Long bulk-reply the results from the bitfield commands. */ Executions> bitfield(K key, BitFieldArgs bitFieldArgs); /** * Find first bit set or clear in a string. 
- * + * * @param key the key * @param state the state - * + * * @return Long integer-reply The command returns the position of the first bit set to 1 or 0 according to the request. - * + * * If we look for set bits (the bit argument is 1) and the string is empty or composed of just zero bytes, -1 is * returned. - * + * * If we look for clear bits (the bit argument is 0) and the string only contains bit set to 1, the function returns * the first bit not part of the string on the right. So if the string is tree bytes set to the value 0xff the * command {@code BITPOS key 0} will return 24, since up to bit 23 all the bits are 1. - * + * * Basically the function consider the right of the string as padded with zeros if you look for clear bits and * specify no range or the start argument only. - * - * However this behavior changes if you are looking for clear bits and specify a range with both - * start and end. If no clear bit is found in the specified range, the function - * returns -1 as the user specified a clear range and there are no 0 bits in that range. */ Executions bitpos(K key, boolean state); /** * Find first bit set or clear in a string. - * + * + * @param key the key + * @param state the bit type: long + * @param start the start type: long + * @return Long integer-reply The command returns the position of the first bit set to 1 or 0 according to the request. + * + * If we look for set bits (the bit argument is 1) and the string is empty or composed of just zero bytes, -1 is + * returned. + * + * If we look for clear bits (the bit argument is 0) and the string only contains bit set to 1, the function returns + * the first bit not part of the string on the right. So if the string is tree bytes set to the value 0xff the + * command {@code BITPOS key 0} will return 24, since up to bit 23 all the bits are 1. + * + * Basically the function consider the right of the string as padded with zeros if you look for clear bits and + * specify no range or the start argument only. + * @since 5.0.1 + */ + Executions bitpos(K key, boolean state, long start); + + /** + * Find first bit set or clear in a string. + * * @param key the key * @param state the bit type: long * @param start the start type: long * @param end the end type: long * @return Long integer-reply The command returns the position of the first bit set to 1 or 0 according to the request. - * + * * If we look for set bits (the bit argument is 1) and the string is empty or composed of just zero bytes, -1 is * returned. - * + * * If we look for clear bits (the bit argument is 0) and the string only contains bit set to 1, the function returns * the first bit not part of the string on the right. So if the string is tree bytes set to the value 0xff the * command {@code BITPOS key 0} will return 24, since up to bit 23 all the bits are 1. - * + * * Basically the function consider the right of the string as padded with zeros if you look for clear bits and * specify no range or the start argument only. - * + * * However this behavior changes if you are looking for clear bits and specify a range with both * start and end. If no clear bit is found in the specified range, the function * returns -1 as the user specified a clear range and there are no 0 bits in that range. @@ -107,7 +141,7 @@ public interface NodeSelectionStringCommands { /** * Perform bitwise AND between strings. 
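// Illustrative sketch for BITPOS with a start offset (the single-start overload is new in
// 5.0.1). Setting only bit 100 means the first set bit is found when the search begins at
// byte 12, but not when it begins at byte 13. The key name is hypothetical.
sync.setbit("bits", 100, 1);

Long first = sync.bitpos("bits", true);          // -> 100
Long fromByte12 = sync.bitpos("bits", true, 12); // -> 100 (bit 100 lives in byte 12)
Long fromByte13 = sync.bitpos("bits", true, 13); // -> -1, no set bit at or after byte 13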
- * + * * @param destination result key of the operation * @param keys operation input key names * @return Long integer-reply The size of the string stored in the destination key, that is equal to the size of the longest @@ -117,7 +151,7 @@ public interface NodeSelectionStringCommands { /** * Perform bitwise NOT between strings. - * + * * @param destination result key of the operation * @param source operation input key names * @return Long integer-reply The size of the string stored in the destination key, that is equal to the size of the longest @@ -127,7 +161,7 @@ public interface NodeSelectionStringCommands { /** * Perform bitwise OR between strings. - * + * * @param destination result key of the operation * @param keys operation input key names * @return Long integer-reply The size of the string stored in the destination key, that is equal to the size of the longest @@ -137,7 +171,7 @@ public interface NodeSelectionStringCommands { /** * Perform bitwise XOR between strings. - * + * * @param destination result key of the operation * @param keys operation input key names * @return Long integer-reply The size of the string stored in the destination key, that is equal to the size of the longest @@ -147,7 +181,7 @@ public interface NodeSelectionStringCommands { /** * Decrement the integer value of a key by one. - * + * * @param key the key * @return Long integer-reply the value of {@code key} after the decrement */ @@ -155,7 +189,7 @@ public interface NodeSelectionStringCommands { /** * Decrement the integer value of a key by the given number. - * + * * @param key the key * @param amount the decrement type: long * @return Long integer-reply the value of {@code key} after the decrement @@ -164,7 +198,7 @@ public interface NodeSelectionStringCommands { /** * Get the value of a key. - * + * * @param key the key * @return V bulk-string-reply the value of {@code key}, or {@literal null} when {@code key} does not exist. */ @@ -172,7 +206,7 @@ public interface NodeSelectionStringCommands { /** * Returns the bit value at offset in the string value stored at key. - * + * * @param key the key * @param offset the offset type: long * @return Long integer-reply the bit value stored at offset. @@ -181,7 +215,7 @@ public interface NodeSelectionStringCommands { /** * Get a substring of the string stored at a key. - * + * * @param key the key * @param start the start type: long * @param end the end type: long @@ -191,7 +225,7 @@ public interface NodeSelectionStringCommands { /** * Set the string value of a key and return its old value. - * + * * @param key the key * @param value the value * @return V bulk-string-reply the old value stored at {@code key}, or {@literal null} when {@code key} did not exist. @@ -200,7 +234,7 @@ public interface NodeSelectionStringCommands { /** * Increment the integer value of a key by one. - * + * * @param key the key * @return Long integer-reply the value of {@code key} after the increment */ @@ -208,7 +242,7 @@ public interface NodeSelectionStringCommands { /** * Increment the integer value of a key by the given amount. - * + * * @param key the key * @param amount the increment type: long * @return Long integer-reply the value of {@code key} after the increment @@ -217,7 +251,7 @@ public interface NodeSelectionStringCommands { /** * Increment the float value of a key by the given amount. - * + * * @param key the key * @param amount the increment type: double * @return Double bulk-string-reply the value of {@code key} after the increment. 
@@ -226,25 +260,25 @@ public interface NodeSelectionStringCommands { /** * Get the values of all the given keys. - * + * * @param keys the key * @return List<V> array-reply list of values at the specified keys. */ - Executions> mget(K... keys); + Executions>> mget(K... keys); /** * Stream over the values of all the given keys. - * + * * @param channel the channel * @param keys the keys - * + * * @return Long array-reply list of values at the specified keys. */ - Executions mget(ValueStreamingChannel channel, K... keys); + Executions mget(KeyValueStreamingChannel channel, K... keys); /** * Set multiple keys to multiple values. - * + * * @param map the null * @return String simple-string-reply always {@code OK} since {@code MSET} can't fail. */ @@ -252,38 +286,38 @@ public interface NodeSelectionStringCommands { /** * Set multiple keys to multiple values, only if none of the keys exist. - * + * * @param map the null * @return Boolean integer-reply specifically: - * + * * {@code 1} if the all the keys were set. {@code 0} if no key was set (at least one key already existed). */ Executions msetnx(Map map); /** * Set the string value of a key. - * + * * @param key the key * @param value the value - * + * * @return String simple-string-reply {@code OK} if {@code SET} was executed correctly. */ Executions set(K key, V value); /** * Set the string value of a key. - * + * * @param key the key * @param value the value * @param setArgs the setArgs - * + * * @return String simple-string-reply {@code OK} if {@code SET} was executed correctly. */ Executions set(K key, V value, SetArgs setArgs); /** * Sets or clears the bit at offset in the string value stored at key. - * + * * @param key the key * @param offset the offset type: long * @param value the value type: string @@ -293,7 +327,7 @@ public interface NodeSelectionStringCommands { /** * Set the value and expiration of a key. - * + * * @param key the key * @param seconds the seconds type: long * @param value the value @@ -303,7 +337,7 @@ public interface NodeSelectionStringCommands { /** * Set the value and expiration in milliseconds of a key. - * + * * @param key the key * @param milliseconds the milliseconds type: long * @param value the value @@ -313,18 +347,18 @@ public interface NodeSelectionStringCommands { /** * Set the value of a key, only if the key does not exist. - * + * * @param key the key * @param value the value * @return Boolean integer-reply specifically: - * + * * {@code 1} if the key was set {@code 0} if the key was not set */ Executions setnx(K key, V value); /** * Overwrite part of a string at key starting at the specified offset. - * + * * @param key the key * @param offset the offset type: long * @param value the value @@ -334,7 +368,7 @@ public interface NodeSelectionStringCommands { /** * Get the length of the value stored in a key. - * + * * @param key the key * @return Long integer-reply the length of the string at {@code key}, or {@code 0} when {@code key} does not exist. 
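// Illustrative sketch of the changed MGET signature: results now come back as KeyValue
// elements, so absent keys are represented explicitly instead of as null list entries.
// Keys are hypothetical and "missing" is assumed not to exist; requires
// io.lettuce.core.KeyValue and java.util.Collections.
sync.mset(Collections.singletonMap("present", "42"));

List<KeyValue<String, String>> values = sync.mget("present", "missing");
for (KeyValue<String, String> kv : values) {
    System.out.println(kv.getKey() + " -> " + (kv.hasValue() ? kv.getValue() : "<absent>"));
}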
*/ diff --git a/src/main/java/com/lambdaworks/redis/cluster/api/sync/RedisAdvancedClusterCommands.java b/src/main/java/io/lettuce/core/cluster/api/sync/RedisAdvancedClusterCommands.java similarity index 80% rename from src/main/java/com/lambdaworks/redis/cluster/api/sync/RedisAdvancedClusterCommands.java rename to src/main/java/io/lettuce/core/cluster/api/sync/RedisAdvancedClusterCommands.java index 5bf7a14e68..65b180cee3 100644 --- a/src/main/java/com/lambdaworks/redis/cluster/api/sync/RedisAdvancedClusterCommands.java +++ b/src/main/java/io/lettuce/core/cluster/api/sync/RedisAdvancedClusterCommands.java @@ -1,33 +1,47 @@ -package com.lambdaworks.redis.cluster.api.sync; +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.api.sync; import java.util.List; import java.util.Map; import java.util.function.Predicate; -import com.lambdaworks.redis.*; -import com.lambdaworks.redis.api.sync.RedisKeyCommands; -import com.lambdaworks.redis.api.sync.RedisScriptingCommands; -import com.lambdaworks.redis.api.sync.RedisServerCommands; -import com.lambdaworks.redis.api.sync.RedisStringCommands; -import com.lambdaworks.redis.cluster.ClusterClientOptions; -import com.lambdaworks.redis.cluster.RedisAdvancedClusterConnection; -import com.lambdaworks.redis.cluster.api.NodeSelectionSupport; -import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection; -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; -import com.lambdaworks.redis.output.KeyStreamingChannel; +import io.lettuce.core.*; +import io.lettuce.core.api.sync.RedisKeyCommands; +import io.lettuce.core.api.sync.RedisScriptingCommands; +import io.lettuce.core.api.sync.RedisServerCommands; +import io.lettuce.core.api.sync.RedisStringCommands; +import io.lettuce.core.cluster.ClusterClientOptions; +import io.lettuce.core.cluster.api.NodeSelectionSupport; +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; +import io.lettuce.core.output.KeyStreamingChannel; /** * Advanced synchronous and thread-safe Redis Cluster API. - * + * * @author Mark Paluch * @since 4.0 */ -public interface RedisAdvancedClusterCommands extends RedisClusterCommands, RedisAdvancedClusterConnection { +public interface RedisAdvancedClusterCommands extends RedisClusterCommands { /** * Retrieve a connection to the specified cluster node using the nodeId. Host and port are looked up in the node list. In * contrast to the {@link RedisAdvancedClusterCommands}, node-connections do not route commands to other cluster nodes - * + * * @param nodeId the node Id * @return a connection to the requested cluster node */ @@ -38,7 +52,7 @@ public interface RedisAdvancedClusterCommands extends RedisClusterCommands * {@link RedisAdvancedClusterCommands}, node-connections do not route commands to other cluster nodes. 
Host and port * connections are verified by default for cluster membership, see * {@link ClusterClientOptions#isValidateClusterNodeMembership()}. - * + * * @param host the host * @param port the port * @return a connection to the requested cluster node @@ -60,25 +74,51 @@ default NodeSelection masters() { } /** - * Select all slaves. + * Select all replicas. * - * @return API with synchronous executed commands on a selection of slave cluster nodes. + * @return API with synchronous executed commands on a selection of replica cluster nodes. + * @deprecated since 5.2, use {@link #replicas()} */ + @Deprecated default NodeSelection slaves() { return readonly(redisClusterNode -> redisClusterNode.is(RedisClusterNode.NodeFlag.SLAVE)); } /** - * Select all slaves. + * Select all replicas. * * @param predicate Predicate to filter nodes - * @return API with synchronous executed commands on a selection of slave cluster nodes. + * @return API with synchronous executed commands on a selection of replica cluster nodes. + * @deprecated since 5.2, use {@link #replicas(Predicate)} */ + @Deprecated default NodeSelection slaves(Predicate predicate) { return readonly( redisClusterNode -> predicate.test(redisClusterNode) && redisClusterNode.is(RedisClusterNode.NodeFlag.SLAVE)); } + /** + * Select all replicas. + * + * @return API with synchronous executed commands on a selection of replica cluster nodes. + * @since 5.2 + */ + default NodeSelection replicas() { + return readonly(redisClusterNode -> redisClusterNode.is(RedisClusterNode.NodeFlag.REPLICA)); + } + + /** + * Select all replicas. + * + * @param predicate Predicate to filter nodes + * @return API with synchronous executed commands on a selection of replica cluster nodes. + * @since 5.2 + */ + default NodeSelection replicas(Predicate predicate) { + return readonly( + redisClusterNode -> predicate.test(redisClusterNode) && redisClusterNode.is(RedisClusterNode.NodeFlag.REPLICA)); + } + /** * Select all known cluster nodes. * @@ -89,8 +129,8 @@ default NodeSelection all() { } /** - * Select slave nodes by a predicate and keeps a static selection. Slave connections operate in {@literal READONLY} mode. - * The set of nodes within the {@link NodeSelectionSupport} does not change when the cluster view changes. + * Select replica nodes by a predicate and keeps a static selection. Replica connections operate in {@literal READONLY} + * mode. The set of nodes within the {@link NodeSelectionSupport} does not change when the cluster view changes. * * @param predicate Predicate to filter nodes * @return API with synchronous executed commands on a selection of cluster nodes matching {@code predicate} @@ -135,7 +175,8 @@ default NodeSelection all() { Long unlink(K... keys); /** - * Determine how many keys exist with pipelining. Cross-slot keys will result in multiple calls to the particular cluster nodes. + * Determine how many keys exist with pipelining. Cross-slot keys will result in multiple calls to the particular cluster + * nodes. * * @param keys the keys * @return Long integer-reply specifically: Number of existing keys @@ -145,17 +186,17 @@ default NodeSelection all() { /** * Get the values of all the given keys with pipelining. Cross-slot keys will result in multiple calls to the particular * cluster nodes. - * + * * @param keys the key * @return List<V> array-reply list of values at the specified keys. * @see RedisStringCommands#mget(Object[]) */ - List mget(K... keys); + List> mget(K... keys); /** * Set multiple keys to multiple values with pipelining. 
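// Illustrative sketch of the node-selection API with the replicas() methods that replace
// the deprecated slaves() variants in 5.2: run one command on every selected node and
// inspect the per-node results. Connection and variable names are hypothetical.
RedisAdvancedClusterCommands<String, String> sync = connection.sync();

NodeSelection<String, String> replicas = sync.replicas(node -> node.getUri().getPort() != 7382);
Executions<List<String>> keysPerNode = replicas.commands().keys("*");

for (List<String> nodeKeys : keysPerNode) {
    System.out.println("node returned " + nodeKeys.size() + " keys");
}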
Cross-slot keys will result in multiple calls to the particular * cluster nodes. - * + * * @param map the map * @return String simple-string-reply always {@code OK} since {@code MSET} can't fail. * @see RedisStringCommands#mset(Map) @@ -165,10 +206,10 @@ default NodeSelection all() { /** * Set multiple keys to multiple values, only if none of the keys exist with pipelining. Cross-slot keys will result in * multiple calls to the particular cluster nodes. - * + * * @param map the null * @return Boolean integer-reply specifically: - * + * * {@code 1} if the all the keys were set. {@code 0} if no key was set (at least one key already existed). * @see RedisStringCommands#msetnx(Map) */ @@ -229,10 +270,10 @@ default NodeSelection all() { /** * Return a random key from the keyspace on a random master. * - * @return V bulk-string-reply the random key, or {@literal null} when the database is empty. + * @return K bulk-string-reply the random key, or {@literal null} when the database is empty. * @see RedisKeyCommands#randomkey() */ - V randomkey(); + K randomkey(); /** * Remove all the scripts from the script cache on all cluster nodes. @@ -252,7 +293,7 @@ default NodeSelection all() { /** * Synchronously save the dataset to disk and then shut down all nodes of the cluster. - * + * * @param save {@literal true} force save operation * @see RedisServerCommands#shutdown(boolean) */ diff --git a/src/main/java/io/lettuce/core/cluster/api/sync/RedisClusterCommands.java b/src/main/java/io/lettuce/core/cluster/api/sync/RedisClusterCommands.java new file mode 100644 index 0000000000..0313e3a205 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/api/sync/RedisClusterCommands.java @@ -0,0 +1,300 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.api.sync; + +import java.time.Duration; +import java.util.List; +import java.util.concurrent.TimeUnit; + +import io.lettuce.core.api.sync.*; + +/** + * A complete synchronous and thread-safe Redis Cluster API with 400+ Methods. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.0 + */ +public interface RedisClusterCommands + extends BaseRedisCommands, RedisGeoCommands, RedisHashCommands, RedisHLLCommands, + RedisKeyCommands, RedisListCommands, RedisScriptingCommands, RedisServerCommands, + RedisSetCommands, RedisSortedSetCommands, RedisStreamCommands, RedisStringCommands { + + /** + * Set the default timeout for operations. A zero timeout value indicates to not time out. + * + * @param timeout the timeout value + * @since 5.0 + */ + void setTimeout(Duration timeout); + + /** + * Authenticate to the server. + * + * @param password the password + * @return String simple-string-reply + */ + String auth(CharSequence password); + + /** + * Authenticate to the server with username and password. Requires Redis 6 or newer. 
+ * + * @param username the username + * @param password the password + * @return String simple-string-reply + * @since 6.0 + */ + String auth(String username, CharSequence password); + + /** + * Generate a new config epoch, incrementing the current epoch, assign the new epoch to this node, WITHOUT any consensus and + * persist the configuration on disk before sending packets with the new configuration. + * + * @return String simple-string-reply If the new config epoch is generated and assigned either BUMPED (epoch) or STILL + * (epoch) are returned. + */ + String clusterBumpepoch(); + + /** + * Meet another cluster node to include the node into the cluster. The command starts the cluster handshake and returns with + * {@literal OK} when the node was added to the cluster. + * + * @param ip IP address of the host + * @param port port number. + * @return String simple-string-reply + */ + String clusterMeet(String ip, int port); + + /** + * Blacklist and remove the cluster node from the cluster. + * + * @param nodeId the node Id + * @return String simple-string-reply + */ + String clusterForget(String nodeId); + + /** + * Adds slots to the cluster node. The current node will become the master for the specified slots. + * + * @param slots one or more slots from {@literal 0} to {@literal 16384} + * @return String simple-string-reply + */ + String clusterAddSlots(int... slots); + + /** + * Removes slots from the cluster node. + * + * @param slots one or more slots from {@literal 0} to {@literal 16384} + * @return String simple-string-reply + */ + String clusterDelSlots(int... slots); + + /** + * Assign a slot to a node. The command migrates the specified slot from the current node to the specified node in + * {@code nodeId} + * + * @param slot the slot + * @param nodeId the id of the node that will become the master for the slot + * @return String simple-string-reply + */ + String clusterSetSlotNode(int slot, String nodeId); + + /** + * Clears migrating / importing state from the slot. + * + * @param slot the slot + * @return String simple-string-reply + */ + String clusterSetSlotStable(int slot); + + /** + * Flag a slot as {@literal MIGRATING} (outgoing) towards the node specified in {@code nodeId}. The slot must be handled by + * the current node in order to be migrated. + * + * @param slot the slot + * @param nodeId the id of the node is targeted to become the master for the slot + * @return String simple-string-reply + */ + String clusterSetSlotMigrating(int slot, String nodeId); + + /** + * Flag a slot as {@literal IMPORTING} (incoming) from the node specified in {@code nodeId}. + * + * @param slot the slot + * @param nodeId the id of the node is the master of the slot + * @return String simple-string-reply + */ + String clusterSetSlotImporting(int slot, String nodeId); + + /** + * Get information and statistics about the cluster viewed by the current node. + * + * @return String bulk-string-reply as a collection of text lines. + */ + String clusterInfo(); + + /** + * Obtain the nodeId for the currently connected node. + * + * @return String simple-string-reply + */ + String clusterMyId(); + + /** + * Obtain details about all cluster nodes. Can be parsed using + * {@link io.lettuce.core.cluster.models.partitions.ClusterPartitionParser#parse} + * + * @return String bulk-string-reply as a collection of text lines + */ + String clusterNodes(); + + /** + * List replicas for a certain node identified by its {@code nodeId}. 
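// Usage sketch (hypothetical handles): the typical CLUSTER SETSLOT resharding sequence composed from the
// commands declared above. "source" and "target" stand for RedisClusterCommands<String, String> bound to the
// source and target masters, "sourceId"/"targetId" for their node ids, and "slot" for the slot being moved.
target.clusterSetSlotImporting(slot, sourceId);                    // open the slot for import on the target
source.clusterSetSlotMigrating(slot, targetId);                    // mark the slot as migrating on the source
List<String> remaining = source.clusterGetKeysInSlot(slot, 100);   // keys still in the slot (to be MIGRATEd)
source.clusterSetSlotNode(slot, targetId);                         // finally assign the slot to the target
target.clusterSetSlotNode(slot, targetId);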
Can be parsed using + * {@link io.lettuce.core.cluster.models.partitions.ClusterPartitionParser#parse} + * + * @param nodeId node id of the master node + * @return List<String> array-reply list of replicas. The command returns data in the same format as + * {@link #clusterNodes()} but one line per replica. + */ + List clusterSlaves(String nodeId); + + /** + * Retrieve the list of keys within the {@code slot}. + * + * @param slot the slot + * @param count maximal number of keys + * @return List<K> array-reply list of keys + */ + List clusterGetKeysInSlot(int slot, int count); + + /** + * Returns the number of keys in the specified Redis Cluster hash {@code slot}. + * + * @param slot the slot + * @return Integer reply: The number of keys in the specified hash slot, or an error if the hash slot is invalid. + */ + Long clusterCountKeysInSlot(int slot); + + /** + * Returns the number of failure reports for the specified node. Failure reports are the way Redis Cluster uses in order to + * promote a {@literal PFAIL} state, that means a node is not reachable, to a {@literal FAIL} state, that means that the + * majority of masters in the cluster agreed within a window of time that the node is not reachable. + * + * @param nodeId the node id + * @return Integer reply: The number of active failure reports for the node. + */ + Long clusterCountFailureReports(String nodeId); + + /** + * Returns an integer identifying the hash slot the specified key hashes to. This command is mainly useful for debugging and + * testing, since it exposes via an API the underlying Redis implementation of the hashing algorithm. Basically the same as + * {@link io.lettuce.core.cluster.SlotHash#getSlot(byte[])}. If not, call Houston and report that we've got a problem. + * + * @param key the key. + * @return Integer reply: The hash slot number. + */ + Long clusterKeyslot(K key); + + /** + * Forces a node to save the nodes.conf configuration on disk. + * + * @return String simple-string-reply: {@code OK} or an error if the operation fails. + */ + String clusterSaveconfig(); + + /** + * This command sets a specific config epoch in a fresh node. It only works when: + *

+ * <ul>
+ * <li>The nodes table of the node is empty.</li>
+ * <li>The node current config epoch is zero.</li>
+ * </ul>
+ * + * @param configEpoch the config epoch + * @return String simple-string-reply: {@code OK} or an error if the operation fails. + */ + String clusterSetConfigEpoch(long configEpoch); + + /** + * Get array of cluster slots to node mappings. + * + * @return List<Object> array-reply nested list of slot ranges with IP/Port mappings. + */ + List clusterSlots(); + + /** + * The asking command is required after a {@code -ASK} redirection. The client should issue {@code ASKING} before to + * actually send the command to the target instance. See the Redis Cluster specification for more information. + * + * @return String simple-string-reply + */ + String asking(); + + /** + * Turn this node into a replica of the node with the id {@code nodeId}. + * + * @param nodeId master node id + * @return String simple-string-reply + */ + String clusterReplicate(String nodeId); + + /** + * Failover a cluster node. Turns the currently connected node into a master and the master into its replica. + * + * @param force do not coordinate with master if {@literal true} + * @return String simple-string-reply + */ + String clusterFailover(boolean force); + + /** + * Reset a node performing a soft or hard reset: + *
+ * <ul>
+ * <li>All other nodes are forgotten</li>
+ * <li>All the assigned / open slots are released</li>
+ * <li>If the node is a replica, it turns into a master</li>
+ * <li>Only for hard reset: a new Node ID is generated</li>
+ * <li>Only for hard reset: currentEpoch and configEpoch are set to 0</li>
+ * <li>The new configuration is saved and the cluster state updated</li>
+ * <li>If the node was a replica, the whole data set is flushed away</li>
+ * </ul>
+ * + * @param hard {@literal true} for hard reset. Generates a new nodeId and currentEpoch/configEpoch are set to 0 + * @return String simple-string-reply + */ + String clusterReset(boolean hard); + + /** + * Delete all the slots associated with the specified node. The number of deleted slots is returned. + * + * @return String simple-string-reply + */ + String clusterFlushslots(); + + /** + * Tells a Redis cluster replica node that the client is ok reading possibly stale data and is not interested in running + * write queries. + * + * @return String simple-string-reply + */ + String readOnly(); + + /** + * Resets readOnly flag. + * + * @return String simple-string-reply + */ + String readWrite(); +} diff --git a/src/main/java/io/lettuce/core/cluster/api/sync/package-info.java b/src/main/java/io/lettuce/core/cluster/api/sync/package-info.java new file mode 100644 index 0000000000..12e9559f96 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/api/sync/package-info.java @@ -0,0 +1,4 @@ +/** + * Redis Cluster API for synchronous executed commands. + */ +package io.lettuce.core.cluster.api.sync; diff --git a/src/main/java/io/lettuce/core/cluster/event/ClusterTopologyChangedEvent.java b/src/main/java/io/lettuce/core/cluster/event/ClusterTopologyChangedEvent.java new file mode 100644 index 0000000000..80b8932610 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/event/ClusterTopologyChangedEvent.java @@ -0,0 +1,73 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.event; + +import java.util.Collections; +import java.util.List; + +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; +import io.lettuce.core.event.Event; + +/** + * Signals a discovered cluster topology change. The event carries the view {@link #before()} and {@link #after} the change. + * + * @author Mark Paluch + * @since 3.4 + */ +public class ClusterTopologyChangedEvent implements Event { + + private final List before; + private final List after; + + /** + * Creates a new {@link ClusterTopologyChangedEvent}. + * + * @param before the cluster topology view before the topology changed, must not be {@literal null} + * @param after the cluster topology view after the topology changed, must not be {@literal null} + */ + public ClusterTopologyChangedEvent(List before, List after) { + this.before = Collections.unmodifiableList(before); + this.after = Collections.unmodifiableList(after); + } + + /** + * Returns the cluster topology view before the topology changed. + * + * @return the cluster topology view before the topology changed. + */ + public List before() { + return before; + } + + /** + * Returns the cluster topology view after the topology changed. + * + * @return the cluster topology view after the topology changed. 
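// Usage sketch (assumed names): cluster topology changes are surfaced on the client's event bus, so they can be
// observed without polling. "clusterClient" stands for an existing RedisClusterClient instance; the reactive
// subscription below is illustrative only.
clusterClient.getResources().eventBus().get()
        .ofType(ClusterTopologyChangedEvent.class)
        .subscribe(event -> System.out.println("Topology changed, node count now " + event.after().size()));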
+ */ + public List after() { + return after; + } + + @Override + public String toString() { + StringBuilder sb = new StringBuilder(); + sb.append(getClass().getSimpleName()); + sb.append(" [before=").append(before.size()); + sb.append(", after=").append(after.size()); + sb.append(']'); + return sb.toString(); + } +} diff --git a/src/main/java/io/lettuce/core/cluster/event/package-info.java b/src/main/java/io/lettuce/core/cluster/event/package-info.java new file mode 100644 index 0000000000..1b6ed4555a --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/event/package-info.java @@ -0,0 +1,5 @@ +/** + * Cluster event types. + */ +package io.lettuce.core.cluster.event; + diff --git a/src/main/java/io/lettuce/core/cluster/models/partitions/ClusterPartitionParser.java b/src/main/java/io/lettuce/core/cluster/models/partitions/ClusterPartitionParser.java new file mode 100644 index 0000000000..84e4b244d9 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/models/partitions/ClusterPartitionParser.java @@ -0,0 +1,196 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.models.partitions; + +import java.util.*; + +import io.lettuce.core.LettuceStrings; +import io.lettuce.core.RedisException; +import io.lettuce.core.RedisURI; +import io.lettuce.core.cluster.SlotHash; +import io.lettuce.core.internal.HostAndPort; +import io.lettuce.core.internal.LettuceLists; + +/** + * Parser for node information output of {@code CLUSTER NODES} and {@code CLUSTER SLAVES}. + * + * @author Mark Paluch + * @since 3.0 + */ +public class ClusterPartitionParser { + + public static final String CONNECTED = "connected"; + + private static final String TOKEN_SLOT_IN_TRANSITION = "["; + private static final char TOKEN_NODE_SEPARATOR = '\n'; + private static final Map FLAG_MAPPING; + + static { + Map map = new HashMap<>(); + + map.put("noflags", RedisClusterNode.NodeFlag.NOFLAGS); + map.put("myself", RedisClusterNode.NodeFlag.MYSELF); + map.put("master", RedisClusterNode.NodeFlag.MASTER); + map.put("slave", RedisClusterNode.NodeFlag.SLAVE); + map.put("replica", RedisClusterNode.NodeFlag.REPLICA); + map.put("fail?", RedisClusterNode.NodeFlag.EVENTUAL_FAIL); + map.put("fail", RedisClusterNode.NodeFlag.FAIL); + map.put("handshake", RedisClusterNode.NodeFlag.HANDSHAKE); + map.put("noaddr", RedisClusterNode.NodeFlag.NOADDR); + FLAG_MAPPING = Collections.unmodifiableMap(map); + } + + /** + * Utility constructor. + */ + private ClusterPartitionParser() { + + } + + /** + * Parse partition lines into Partitions object. + * + * @param nodes output of CLUSTER NODES + * @return the partitions object. 
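// Usage sketch: parsing a single (shortened) CLUSTER NODES line into a topology view. The node id, timestamps
// and slot range below are made-up sample values.
String nodes = "c37ab8396be428403d4e55c0d317348be27ed973 127.0.0.1:7381@17381 myself,master - 0 1572510316141 2 connected 0-5460\n";
Partitions partitions = ClusterPartitionParser.parse(nodes);
RedisClusterNode myself = partitions.getPartitionByNodeId("c37ab8396be428403d4e55c0d317348be27ed973");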
+ */ + public static Partitions parse(String nodes) { + Partitions result = new Partitions(); + + try { + + String[] lines = nodes.split(Character.toString(TOKEN_NODE_SEPARATOR)); + List mappedNodes = new ArrayList<>(lines.length); + + for (String line : lines) { + + if (line.isEmpty()) { + continue; + + } + mappedNodes.add(ClusterPartitionParser.parseNode(line)); + } + result.addAll(mappedNodes); + } catch (Exception e) { + throw new RedisException("Cannot parse " + nodes, e); + } + + return result; + } + + private static RedisClusterNode parseNode(String nodeInformation) { + + Iterator iterator = Arrays.asList(nodeInformation.split(" ")).iterator(); + + String nodeId = iterator.next(); + boolean connected = false; + RedisURI uri = null; + + String hostAndPortPart = iterator.next(); + if (hostAndPortPart.contains("@")) { + hostAndPortPart = hostAndPortPart.substring(0, hostAndPortPart.indexOf('@')); + } + + HostAndPort hostAndPort = HostAndPort.parseCompat(hostAndPortPart); + + if (LettuceStrings.isNotEmpty(hostAndPort.getHostText())) { + uri = RedisURI.Builder.redis(hostAndPort.getHostText(), hostAndPort.getPort()).build(); + } + + String flags = iterator.next(); + List flagStrings = LettuceLists.newList(flags.split("\\,")); + + Set nodeFlags = readFlags(flagStrings); + + String replicaOfString = iterator.next(); // (nodeId or -) + String replicaOf = "-".equals(replicaOfString) ? null : replicaOfString; + + long pingSentTs = getLongFromIterator(iterator, 0); + long pongReceivedTs = getLongFromIterator(iterator, 0); + long configEpoch = getLongFromIterator(iterator, 0); + + String connectedFlags = iterator.next(); // "connected" : "disconnected" + + if (CONNECTED.equals(connectedFlags)) { + connected = true; + } + + List slotStrings = LettuceLists.newList(iterator); // slot, from-to [slot->-nodeID] [slot-<-nodeID] + BitSet slots = readSlots(slotStrings); + + RedisClusterNode partition = new RedisClusterNode(uri, nodeId, connected, replicaOf, pingSentTs, pongReceivedTs, + configEpoch, slots, nodeFlags); + + return partition; + + } + + private static Set readFlags(List flagStrings) { + + Set flags = new HashSet<>(); + for (String flagString : flagStrings) { + if (FLAG_MAPPING.containsKey(flagString)) { + flags.add(FLAG_MAPPING.get(flagString)); + } + } + + if (flags.contains(RedisClusterNode.NodeFlag.SLAVE)) { + flags.add(RedisClusterNode.NodeFlag.REPLICA); + } + + return Collections.unmodifiableSet(flags); + } + + private static BitSet readSlots(List slotStrings) { + + BitSet slots = new BitSet(SlotHash.SLOT_COUNT); + for (String slotString : slotStrings) { + + if (slotString.startsWith(TOKEN_SLOT_IN_TRANSITION)) { + // not interesting + continue; + + } + + if (slotString.contains("-")) { + // slot range + Iterator it = Arrays.asList(slotString.split("\\-")).iterator(); + int from = Integer.parseInt(it.next()); + int to = Integer.parseInt(it.next()); + + for (int slot = from; slot <= to; slot++) { + slots.set(slot); + + } + continue; + } + + slots.set(Integer.parseInt(slotString)); + } + + return slots; + } + + private static long getLongFromIterator(Iterator iterator, long defaultValue) { + if (iterator.hasNext()) { + Object object = iterator.next(); + if (object instanceof String) { + return Long.parseLong((String) object); + } + } + return defaultValue; + } + +} diff --git a/src/main/java/com/lambdaworks/redis/cluster/models/partitions/Partitions.java b/src/main/java/io/lettuce/core/cluster/models/partitions/Partitions.java similarity index 78% rename from 
src/main/java/com/lambdaworks/redis/cluster/models/partitions/Partitions.java rename to src/main/java/io/lettuce/core/cluster/models/partitions/Partitions.java index 1312eb7ce0..04a74fe0e8 100644 --- a/src/main/java/com/lambdaworks/redis/cluster/models/partitions/Partitions.java +++ b/src/main/java/io/lettuce/core/cluster/models/partitions/Partitions.java @@ -1,9 +1,25 @@ -package com.lambdaworks.redis.cluster.models.partitions; +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.models.partitions; import java.util.*; -import com.lambdaworks.redis.cluster.SlotHash; -import com.lambdaworks.redis.internal.LettuceAssert; +import io.lettuce.core.RedisURI; +import io.lettuce.core.cluster.SlotHash; +import io.lettuce.core.internal.LettuceAssert; /** * Cluster topology view. An instance of {@link Partitions} provides access to the partitions of a Redis Cluster. A partition is @@ -18,11 +34,11 @@ * Topology changes are: * *
  * <ul>
- * <li>Changes in {@link com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode.NodeFlag#MASTER}/
- * {@link com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode.NodeFlag#SLAVE} state</li>
+ * <li>Changes in {@link io.lettuce.core.cluster.models.partitions.RedisClusterNode.NodeFlag#MASTER}/
+ * {@link io.lettuce.core.cluster.models.partitions.RedisClusterNode.NodeFlag#SLAVE} state</li>
  * <li>Newly added or removed nodes to/from the Redis Cluster</li>
  * <li>Changes in {@link RedisClusterNode#getSlots()} responsibility</li>
- * <li>Changes to the {@link RedisClusterNode#getSlaveOf() slave replication source} (the master of a slave)</li>
+ * <li>Changes to the {@link RedisClusterNode#getSlaveOf() replication source} (the master of a replica)</li>
  * <li>Changes to the {@link RedisClusterNode#getUri()} () connection point}</li>
  * </ul>
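// Usage sketch (assumed variables): resolving the node responsible for a key. "partitions" stands for a
// Partitions instance, e.g. the one exposed by a cluster connection's getPartitions().
int slot = SlotHash.getSlot("user:1000".getBytes());
RedisClusterNode bySlot = partitions.getPartitionBySlot(slot);           // null if the slot is currently unassigned
RedisClusterNode byAddress = partitions.getPartition("127.0.0.1", 7381); // host/port lookup, including aliases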
* @@ -36,17 +52,39 @@ */ public class Partitions implements Collection { + private static final RedisClusterNode[] EMPTY = new RedisClusterNode[SlotHash.SLOT_COUNT]; + private final List partitions = new ArrayList<>(); - private final static RedisClusterNode[] EMPTY = new RedisClusterNode[SlotHash.SLOT_COUNT]; private volatile RedisClusterNode slotCache[] = EMPTY; private volatile Collection nodeReadView = Collections.emptyList(); + /** + * Create a deep copy of this {@link Partitions} object. + * + * @return a deep copy of this {@link Partitions} object. + */ + @Override + public Partitions clone() { + + Collection readView = new ArrayList<>(nodeReadView); + + Partitions copy = new Partitions(); + + for (RedisClusterNode node : readView) { + copy.addPartition(node.clone()); + } + + copy.updateCache(); + + return copy; + } + /** * Retrieve a {@link RedisClusterNode} by its slot number. This method does not distinguish between masters and slaves. * - * @param slot the slot - * @return RedisClusterNode or {@literal null} + * @param slot the slot hash. + * @return the {@link RedisClusterNode} or {@literal null} if not found. */ public RedisClusterNode getPartitionBySlot(int slot) { return slotCache[slot]; @@ -55,8 +93,8 @@ public RedisClusterNode getPartitionBySlot(int slot) { /** * Retrieve a {@link RedisClusterNode} by its node id. * - * @param nodeId the nodeId - * @return RedisClusterNode or {@literal null} + * @param nodeId the nodeId. + * @return the {@link RedisClusterNode} or {@literal null} if not found. */ public RedisClusterNode getPartitionByNodeId(String nodeId) { @@ -65,9 +103,42 @@ public RedisClusterNode getPartitionByNodeId(String nodeId) { return partition; } } + return null; } + /** + * Retrieve a {@link RedisClusterNode} by its hostname/port considering node aliases. + * + * @param host hostname. + * @param port port number. + * @return the {@link RedisClusterNode} or {@literal null} if not found. + */ + public RedisClusterNode getPartition(String host, int port) { + + for (RedisClusterNode partition : nodeReadView) { + + RedisURI uri = partition.getUri(); + + if (matches(uri, host, port)) { + return partition; + } + + for (RedisURI redisURI : partition.getAliases()) { + + if (matches(redisURI, host, port)) { + return partition; + } + } + } + + return null; + } + + private static boolean matches(RedisURI uri, String host, int port) { + return uri.getPort() == port && host.equals(uri.getHost()); + } + /** * Update the partition cache. Updates are necessary after the partition details have changed. 
*/ @@ -87,9 +158,7 @@ public void updateCache() { for (RedisClusterNode partition : partitions) { readView.add(partition); - for (Integer integer : partition.getSlots()) { - slotCache[integer.intValue()] = partition; - } + partition.forEachSlot(i -> slotCache[i] = partition); } this.slotCache = slotCache; @@ -130,7 +199,7 @@ public void addPartition(RedisClusterNode partition) { LettuceAssert.notNull(partition, "Partition must not be null"); - synchronized (this) { + synchronized (partitions) { slotCache = EMPTY; partitions.add(partition); } @@ -163,7 +232,7 @@ public void reload(List partitions) { LettuceAssert.noNullElements(partitions, "Partitions must not contain null elements"); - synchronized (partitions) { + synchronized (this.partitions) { this.partitions.clear(); this.partitions.addAll(partitions); updateCache(); diff --git a/src/main/java/io/lettuce/core/cluster/models/partitions/RedisClusterNode.java b/src/main/java/io/lettuce/core/cluster/models/partitions/RedisClusterNode.java new file mode 100644 index 0000000000..43d0ce00eb --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/models/partitions/RedisClusterNode.java @@ -0,0 +1,438 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.models.partitions; + +import java.io.Serializable; +import java.util.*; +import java.util.function.IntConsumer; + +import io.lettuce.core.RedisURI; +import io.lettuce.core.cluster.SlotHash; +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.models.role.RedisNodeDescription; + +/** + * Representation of a Redis Cluster node. A {@link RedisClusterNode} is identified by its {@code nodeId}. + *

+ * A {@link RedisClusterNode} can be a {@link #getRole() responsible master} or replica. Masters can be responsible for zero to + * {@link io.lettuce.core.cluster.SlotHash#SLOT_COUNT 16384} slots. Each replica refers to exactly one {@link #getSlaveOf() + * master}. Nodes can have different {@link io.lettuce.core.cluster.models.partitions.RedisClusterNode.NodeFlag flags} assigned. + *

+ * This class is mutable and not thread-safe if mutated by multiple threads concurrently. + * + * @author Mark Paluch + * @author Alessandro Simi + * @since 3.0 + */ +@SuppressWarnings("serial") +public class RedisClusterNode implements Serializable, RedisNodeDescription { + + private RedisURI uri; + private String nodeId; + + private boolean connected; + private String slaveOf; + private long pingSentTimestamp; + private long pongReceivedTimestamp; + private long configEpoch; + + private BitSet slots; + private final Set flags = EnumSet.noneOf(NodeFlag.class); + private final List aliases = new ArrayList<>(); + + public RedisClusterNode() { + } + + public RedisClusterNode(RedisURI uri, String nodeId, boolean connected, String slaveOf, long pingSentTimestamp, + long pongReceivedTimestamp, long configEpoch, List slots, Set flags) { + + this.uri = uri; + this.nodeId = nodeId; + this.connected = connected; + this.slaveOf = slaveOf; + this.pingSentTimestamp = pingSentTimestamp; + this.pongReceivedTimestamp = pongReceivedTimestamp; + this.configEpoch = configEpoch; + + setSlotBits(slots); + setFlags(flags); + } + + RedisClusterNode(RedisURI uri, String nodeId, boolean connected, String slaveOf, long pingSentTimestamp, + long pongReceivedTimestamp, long configEpoch, BitSet slots, Set flags) { + + this.uri = uri; + this.nodeId = nodeId; + this.connected = connected; + this.slaveOf = slaveOf; + this.pingSentTimestamp = pingSentTimestamp; + this.pongReceivedTimestamp = pongReceivedTimestamp; + this.configEpoch = configEpoch; + + this.slots = new BitSet(slots.length()); + this.slots.or(slots); + + setFlags(flags); + } + + public RedisClusterNode(RedisClusterNode redisClusterNode) { + + LettuceAssert.notNull(redisClusterNode, "RedisClusterNode must not be null"); + + this.uri = redisClusterNode.uri; + this.nodeId = redisClusterNode.nodeId; + this.connected = redisClusterNode.connected; + this.slaveOf = redisClusterNode.slaveOf; + this.pingSentTimestamp = redisClusterNode.pingSentTimestamp; + this.pongReceivedTimestamp = redisClusterNode.pongReceivedTimestamp; + this.configEpoch = redisClusterNode.configEpoch; + this.aliases.addAll(redisClusterNode.aliases); + + if (redisClusterNode.slots != null && !redisClusterNode.slots.isEmpty()) { + this.slots = new BitSet(SlotHash.SLOT_COUNT); + this.slots.or(redisClusterNode.slots); + } + + setFlags(redisClusterNode.flags); + } + + /** + * Create a new instance of {@link RedisClusterNode} by passing the {@code nodeId} + * + * @param nodeId the nodeId + * @return a new instance of {@link RedisClusterNode} + */ + public static RedisClusterNode of(String nodeId) { + + LettuceAssert.notNull(nodeId, "NodeId must not be null"); + + RedisClusterNode redisClusterNode = new RedisClusterNode(); + redisClusterNode.setNodeId(nodeId); + + return redisClusterNode; + } + + /** + * Clone {@code this} {@link RedisClusterNode}. + * + * @return a copy of {@code this} {@link RedisClusterNode}. + */ + @Override + public RedisClusterNode clone() { + return new RedisClusterNode(this); + } + + public RedisURI getUri() { + return uri; + } + + /** + * Sets the connection point details. Usually the host/ip/port where a particular Redis Cluster node server is running. + * + * @param uri the {@link RedisURI}, must not be {@literal null} + */ + public void setUri(RedisURI uri) { + + LettuceAssert.notNull(uri, "RedisURI must not be null"); + this.uri = uri; + } + + public String getNodeId() { + return nodeId; + } + + /** + * Sets {@code nodeId}. 
+ * + * @param nodeId the {@code nodeId} + */ + public void setNodeId(String nodeId) { + LettuceAssert.notNull(nodeId, "NodeId must not be null"); + this.nodeId = nodeId; + } + + public boolean isConnected() { + return connected; + } + + /** + * Sets the {@code connected} flag. The {@code connected} flag describes whether the node which provided details about the + * node is connected to the particular {@link RedisClusterNode}. + * + * @param connected the {@code connected} flag + */ + public void setConnected(boolean connected) { + this.connected = connected; + } + + public String getSlaveOf() { + return slaveOf; + } + + /** + * Sets the replication source. + * + * @param slaveOf the replication source, can be {@literal null} + */ + public void setSlaveOf(String slaveOf) { + this.slaveOf = slaveOf; + } + + public long getPingSentTimestamp() { + return pingSentTimestamp; + } + + /** + * Sets the last {@code pingSentTimestamp}. + * + * @param pingSentTimestamp the last {@code pingSentTimestamp} + */ + public void setPingSentTimestamp(long pingSentTimestamp) { + this.pingSentTimestamp = pingSentTimestamp; + } + + public long getPongReceivedTimestamp() { + return pongReceivedTimestamp; + } + + /** + * Sets the last {@code pongReceivedTimestamp}. + * + * @param pongReceivedTimestamp the last {@code pongReceivedTimestamp} + */ + public void setPongReceivedTimestamp(long pongReceivedTimestamp) { + this.pongReceivedTimestamp = pongReceivedTimestamp; + } + + public long getConfigEpoch() { + return configEpoch; + } + + /** + * Sets the {@code configEpoch}. + * + * @param configEpoch the {@code configEpoch} + */ + public void setConfigEpoch(long configEpoch) { + this.configEpoch = configEpoch; + } + + /** + * Return the slots as {@link List}. Note that this method creates a new {@link List} for each time it gets called. + * + * @return the slots as {@link List}. + */ + public List getSlots() { + + if (slots == null || slots.isEmpty()) { + return Collections.emptyList(); + } + + List slots = new ArrayList<>(); + + for (int i = 0; i < SlotHash.SLOT_COUNT; i++) { + + if (this.slots.get(i)) { + slots.add(i); + } + } + + return slots; + } + + /** + * Performs the given action for each slot of this {@link RedisClusterNode} until all elements have been processed or the + * action throws an exception. Unless otherwise specified by the implementing class, actions are performed in the order of + * iteration (if an iteration order is specified). Exceptions thrown by the action are relayed to the caller. + * + * @param consumer + * @since 5.2 + */ + public void forEachSlot(IntConsumer consumer) { + + if (slots == null || slots.isEmpty()) { + return; + } + + for (int i = 0; i < this.slots.length(); i++) { + + if (this.slots.get(i)) { + consumer.accept(i); + } + } + } + + /** + * Sets the list of slots for which this {@link RedisClusterNode} is the + * {@link io.lettuce.core.cluster.models.partitions.RedisClusterNode.NodeFlag#MASTER}. The list is empty if this node is not + * a master or the node is not responsible for any slots at all. 
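// Usage sketch: slot bookkeeping on a node. The node id below is a made-up sample value.
RedisClusterNode node = RedisClusterNode.of("c37ab8396be428403d4e55c0d317348be27ed973");
node.setSlots(java.util.Arrays.asList(100, 101, 102));
node.forEachSlot(slot -> System.out.println("serving slot " + slot)); // avoids the List copy created by getSlots()
boolean serves = node.hasSlot(101);                                   // true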
+ * + * @param slots list of slots, must not be {@literal null} but may be empty + */ + public void setSlots(List slots) { + + LettuceAssert.notNull(slots, "Slots must not be null"); + + setSlotBits(slots); + } + + private void setSlotBits(List slots) { + + if (slots.isEmpty() && this.slots == null) { + return; + } + + if (this.slots == null) { + this.slots = new BitSet(SlotHash.SLOT_COUNT); + } + + this.slots.clear(); + + for (Integer slot : slots) { + this.slots.set(slot); + } + } + + /** + * Return {@literal true} if {@link RedisClusterNode the other node} contains the same slots as {@code this node}. + * + * @param other the node to compare with. + * @return {@literal true} if {@link RedisClusterNode the other node} contains the same slots as {@code this node}. + */ + public boolean hasSameSlotsAs(RedisClusterNode other) { + + if (this.slots == null && other.slots == null) { + return true; + } + + if (this.slots == null || other.slots == null) { + return false; + } + + return this.slots.equals(other.slots); + } + + /** + * Return the {@link NodeFlag NodeFlags}. + * + * @return the {@link NodeFlag NodeFlags}. + */ + public Set getFlags() { + return flags; + } + + /** + * Set of {@link io.lettuce.core.cluster.models.partitions.RedisClusterNode.NodeFlag node flags}. + * + * @param flags the set of node flags. + */ + public void setFlags(Set flags) { + + this.flags.clear(); + this.flags.addAll(flags); + } + + /** + * @param nodeFlag the node flag + * @return true if the {@linkplain NodeFlag} is contained within the flags. + */ + public boolean is(NodeFlag nodeFlag) { + return getFlags().contains(nodeFlag); + } + + /** + * Add an alias to {@link RedisClusterNode}. + * + * @param alias must not be {@literal null}. + */ + public void addAlias(RedisURI alias) { + + LettuceAssert.notNull(alias, "Alias URI must not be null"); + this.aliases.add(alias); + } + + public List getAliases() { + return aliases; + } + + /** + * @param slot the slot hash + * @return true if the slot is contained within the handled slots. + */ + public boolean hasSlot(int slot) { + return slot <= SlotHash.SLOT_COUNT && this.slots != null && this.slots.get(slot); + } + + /** + * Returns the {@link Role} of the Redis Cluster node based on the {@link #getFlags() flags}. + * + * @return the Redis Cluster node role + */ + @Override + public Role getRole() { + return is(NodeFlag.MASTER) ? Role.MASTER : Role.SLAVE; + } + + @Override + public boolean equals(Object o) { + if (this == o) { + return true; + } + if (!(o instanceof RedisClusterNode)) { + return false; + } + + RedisClusterNode that = (RedisClusterNode) o; + + if (nodeId != null ? !nodeId.equals(that.nodeId) : that.nodeId != null) { + return false; + } + + return true; + } + + @Override + public int hashCode() { + return 31 * (nodeId != null ? 
nodeId.hashCode() : 0); + } + + @Override + public String toString() { + StringBuilder sb = new StringBuilder(); + sb.append(getClass().getSimpleName()); + sb.append(" [uri=").append(uri); + sb.append(", nodeId='").append(nodeId).append('\''); + sb.append(", connected=").append(connected); + sb.append(", slaveOf='").append(slaveOf).append('\''); + sb.append(", pingSentTimestamp=").append(pingSentTimestamp); + sb.append(", pongReceivedTimestamp=").append(pongReceivedTimestamp); + sb.append(", configEpoch=").append(configEpoch); + sb.append(", flags=").append(flags); + sb.append(", aliases=").append(aliases); + if (slots != null) { + sb.append(", slot count=").append(slots.cardinality()); + } + sb.append(']'); + return sb.toString(); + } + + /** + * Redis Cluster node flags. + */ + public enum NodeFlag { + NOFLAGS, MYSELF, SLAVE, REPLICA, MASTER, EVENTUAL_FAIL, FAIL, HANDSHAKE, NOADDR; + } +} diff --git a/src/main/java/io/lettuce/core/cluster/models/partitions/package-info.java b/src/main/java/io/lettuce/core/cluster/models/partitions/package-info.java new file mode 100644 index 0000000000..46df050ab5 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/models/partitions/package-info.java @@ -0,0 +1,4 @@ +/** + * Model and parser for the {@code CLUSTER NODES} and {@code CLUSTER SLAVES} output. + */ +package io.lettuce.core.cluster.models.partitions; diff --git a/src/main/java/io/lettuce/core/cluster/models/slots/ClusterSlotRange.java b/src/main/java/io/lettuce/core/cluster/models/slots/ClusterSlotRange.java new file mode 100644 index 0000000000..78f4ecab76 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/models/slots/ClusterSlotRange.java @@ -0,0 +1,138 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.models.slots; + +import java.io.Serializable; +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; +import java.util.Set; + +import io.lettuce.core.RedisURI; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; +import io.lettuce.core.internal.HostAndPort; +import io.lettuce.core.internal.LettuceAssert; + +/** + * Represents a range of slots together with its master and replicas. 
+ * + * @author Mark Paluch + * @since 3.0 + */ +@SuppressWarnings("serial") +public class ClusterSlotRange implements Serializable { + + private int from; + private int to; + + private RedisClusterNode masterNode; + private List replicaNodes = Collections.emptyList(); + + public ClusterSlotRange() { + } + + /** + * Constructs a {@link ClusterSlotRange} + * + * @param from from slot + * @param to to slot + * @param masterNode master for the slots, may be {@literal null} + * @param replicaNodes list of replicas must not be {@literal null} but may be empty + */ + public ClusterSlotRange(int from, int to, RedisClusterNode masterNode, List replicaNodes) { + + LettuceAssert.notNull(masterNode, "MasterNode must not be null"); + LettuceAssert.notNull(replicaNodes, "ReplicaNodes must not be null"); + + this.from = from; + this.to = to; + this.masterNode = masterNode; + this.replicaNodes = replicaNodes; + } + + private RedisClusterNode toRedisClusterNode(HostAndPort hostAndPort, String slaveOf, Set flags) { + + int port = hostAndPort.hasPort() ? hostAndPort.getPort() : RedisURI.DEFAULT_REDIS_PORT; + RedisClusterNode redisClusterNode = new RedisClusterNode(); + redisClusterNode.setUri(RedisURI.create(hostAndPort.getHostText(), port)); + redisClusterNode.setSlaveOf(slaveOf); + redisClusterNode.setFlags(flags); + return redisClusterNode; + } + + private List toRedisClusterNodes(List hostAndPorts, String slaveOf, + Set flags) { + List result = new ArrayList<>(); + for (HostAndPort hostAndPort : hostAndPorts) { + result.add(toRedisClusterNode(hostAndPort, slaveOf, flags)); + } + return result; + } + + public int getFrom() { + return from; + } + + public int getTo() { + return to; + } + + public RedisClusterNode getMasterNode() { + return masterNode; + } + + public void setMasterNode(RedisClusterNode masterNode) { + this.masterNode = masterNode; + } + + @Deprecated + public List getSlaveNodes() { + return replicaNodes; + } + + @Deprecated + public void setSlaveNodes(List slaveNodes) { + this.replicaNodes = slaveNodes; + } + + public List getReplicaNodes() { + return replicaNodes; + } + + public void setReplicaNodes(List replicaNodes) { + this.replicaNodes = replicaNodes; + } + + public void setFrom(int from) { + this.from = from; + } + + public void setTo(int to) { + this.to = to; + } + + @Override + public String toString() { + final StringBuilder sb = new StringBuilder(); + sb.append(getClass().getSimpleName()); + sb.append(" [from=").append(from); + sb.append(", to=").append(to); + sb.append(", masterNode=").append(masterNode); + sb.append(", replicaNodes=").append(replicaNodes); + sb.append(']'); + return sb.toString(); + } +} diff --git a/src/main/java/io/lettuce/core/cluster/models/slots/ClusterSlotsParser.java b/src/main/java/io/lettuce/core/cluster/models/slots/ClusterSlotsParser.java new file mode 100644 index 0000000000..0eab7dc25d --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/models/slots/ClusterSlotsParser.java @@ -0,0 +1,167 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.models.slots; + +import java.util.*; + +import io.lettuce.core.RedisURI; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; + +/** + * Parser for Redis CLUSTER SLOTS command output. + * + * @author Mark Paluch + * @since 3.0 + */ +public class ClusterSlotsParser { + + /** + * Utility constructor. + */ + private ClusterSlotsParser() { + + } + + /** + * Parse the output of the Redis CLUSTER SLOTS command and convert it to a list of + * {@link io.lettuce.core.cluster.models.slots.ClusterSlotRange} + * + * @param clusterSlotsOutput output of CLUSTER SLOTS command + * @return List>ClusterSlotRange> + */ + public static List parse(List clusterSlotsOutput) { + List result = new ArrayList<>(); + Map nodeCache = new HashMap<>(); + + for (Object o : clusterSlotsOutput) { + + if (!(o instanceof List)) { + continue; + } + + List range = (List) o; + if (range.size() < 2) { + continue; + } + + ClusterSlotRange clusterSlotRange = parseRange(range, nodeCache); + result.add(clusterSlotRange); + } + + Collections.sort(result, new Comparator() { + @Override + public int compare(ClusterSlotRange o1, ClusterSlotRange o2) { + return o1.getFrom() - o2.getFrom(); + } + }); + + return Collections.unmodifiableList(result); + } + + private static ClusterSlotRange parseRange(List range, Map nodeCache) { + Iterator iterator = range.iterator(); + + int from = Math.toIntExact(getLongFromIterator(iterator, 0)); + int to = Math.toIntExact(getLongFromIterator(iterator, 0)); + RedisClusterNode master = null; + + List replicas = new ArrayList<>(); + if (iterator.hasNext()) { + master = getRedisClusterNode(iterator, nodeCache); + if (master != null) { + master.setFlags(Collections.singleton(RedisClusterNode.NodeFlag.MASTER)); + Set slots = new TreeSet<>(master.getSlots()); + slots.addAll(createSlots(from, to)); + master.setSlots(new ArrayList<>(slots)); + } + } + + while (iterator.hasNext()) { + RedisClusterNode replica = getRedisClusterNode(iterator, nodeCache); + if (replica != null && master != null) { + replica.setSlaveOf(master.getNodeId()); + replica.setFlags(Collections.singleton(RedisClusterNode.NodeFlag.SLAVE)); + replicas.add(replica); + } + } + + return new ClusterSlotRange(from, to, master, Collections.unmodifiableList(replicas)); + } + + private static List createSlots(int from, int to) { + List slots = new ArrayList<>(); + for (int i = from; i < to + 1; i++) { + slots.add(i); + } + return slots; + } + + private static RedisClusterNode getRedisClusterNode(Iterator iterator, Map nodeCache) { + Object element = iterator.next(); + RedisClusterNode redisClusterNode = null; + if (element instanceof List) { + List hostAndPortList = (List) element; + if (hostAndPortList.size() < 2) { + return null; + } + + Iterator hostAndPortIterator = hostAndPortList.iterator(); + String host = (String) hostAndPortIterator.next(); + int port = Math.toIntExact(getLongFromIterator(hostAndPortIterator, 0)); + String nodeId; + + if (hostAndPortIterator.hasNext()) { + nodeId = (String) hostAndPortIterator.next(); + + redisClusterNode = nodeCache.get(nodeId); + if (redisClusterNode == null) { + redisClusterNode = createNode(host, port); + nodeCache.put(nodeId, redisClusterNode); + redisClusterNode.setNodeId(nodeId); + } + } else { + String key = host + ":" + port; + redisClusterNode = nodeCache.get(key); + if (redisClusterNode == null) { + redisClusterNode = createNode(host, port); 
+ nodeCache.put(key, redisClusterNode); + } + } + } + return redisClusterNode; + } + + private static RedisClusterNode createNode(String host, int port) { + RedisClusterNode redisClusterNode = new RedisClusterNode(); + redisClusterNode.setUri(RedisURI.create(host, port)); + redisClusterNode.setSlots(new ArrayList<>()); + return redisClusterNode; + } + + private static long getLongFromIterator(Iterator iterator, long defaultValue) { + if (iterator.hasNext()) { + Object object = iterator.next(); + if (object instanceof String) { + return Long.parseLong((String) object); + } + + if (object instanceof Number) { + return ((Number) object).longValue(); + } + } + return defaultValue; + } +} diff --git a/src/main/java/io/lettuce/core/cluster/models/slots/package-info.java b/src/main/java/io/lettuce/core/cluster/models/slots/package-info.java new file mode 100644 index 0000000000..005245a04b --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/models/slots/package-info.java @@ -0,0 +1,4 @@ +/** + * Model and parser for the {@code CLUSTER SLOTS} output. + */ +package io.lettuce.core.cluster.models.slots; diff --git a/src/main/java/io/lettuce/core/cluster/package-info.java b/src/main/java/io/lettuce/core/cluster/package-info.java new file mode 100644 index 0000000000..26a38ed9b1 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/package-info.java @@ -0,0 +1,4 @@ +/** + * Client for Redis Cluster, see {@link io.lettuce.core.cluster.RedisClusterClient}. + */ +package io.lettuce.core.cluster; diff --git a/src/main/java/io/lettuce/core/cluster/pubsub/RedisClusterPubSubAdapter.java b/src/main/java/io/lettuce/core/cluster/pubsub/RedisClusterPubSubAdapter.java new file mode 100644 index 0000000000..afb21c1831 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/pubsub/RedisClusterPubSubAdapter.java @@ -0,0 +1,59 @@ +/* + * Copyright 2016-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.pubsub; + +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; + +/** + * Convenience adapter with an empty implementation of all {@link RedisClusterPubSubListener} callback methods. + * + * @param Key type. + * @param Value type. 
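// Usage sketch: extend the adapter and override only the callbacks of interest.
RedisClusterPubSubListener<String, String> listener = new RedisClusterPubSubAdapter<String, String>() {
    @Override
    public void message(RedisClusterNode node, String channel, String message) {
        System.out.printf("[%s] %s -> %s%n", node.getNodeId(), channel, message);
    }
};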
+ * @author Mark Paluch + * @since 4.4 + */ +public class RedisClusterPubSubAdapter implements RedisClusterPubSubListener { + + @Override + public void message(RedisClusterNode node, K channel, V message) { + // empty adapter method + } + + @Override + public void message(RedisClusterNode node, K pattern, K channel, V message) { + // empty adapter method + } + + @Override + public void subscribed(RedisClusterNode node, K channel, long count) { + // empty adapter method + } + + @Override + public void psubscribed(RedisClusterNode node, K pattern, long count) { + // empty adapter method + } + + @Override + public void unsubscribed(RedisClusterNode node, K channel, long count) { + // empty adapter method + } + + @Override + public void punsubscribed(RedisClusterNode node, K pattern, long count) { + // empty adapter method + } +} diff --git a/src/main/java/io/lettuce/core/cluster/pubsub/RedisClusterPubSubListener.java b/src/main/java/io/lettuce/core/cluster/pubsub/RedisClusterPubSubListener.java new file mode 100644 index 0000000000..890da24de0 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/pubsub/RedisClusterPubSubListener.java @@ -0,0 +1,83 @@ +/* + * Copyright 2016-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.pubsub; + +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; + +/** + * Interface for Redis Cluster Pub/Sub listeners. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.4 + */ +public interface RedisClusterPubSubListener { + + /** + * Message received from a channel subscription. + * + * @param node the {@link RedisClusterNode} where the {@literal message} originates. + * @param channel Channel. + * @param message Message. + */ + void message(RedisClusterNode node, K channel, V message); + + /** + * Message received from a pattern subscription. + * + * @param node the {@link RedisClusterNode} where the {@literal message} originates. + * @param pattern Pattern + * @param channel Channel + * @param message Message + */ + void message(RedisClusterNode node, K pattern, K channel, V message); + + /** + * Subscribed to a channel. + * + * @param node the {@link RedisClusterNode} where the {@literal message} originates. + * @param channel Channel + * @param count Subscription count. + */ + void subscribed(RedisClusterNode node, K channel, long count); + + /** + * Subscribed to a pattern. + * + * @param pattern Pattern. + * @param count Subscription count. + */ + void psubscribed(RedisClusterNode node, K pattern, long count); + + /** + * Unsubscribed from a channel. + * + * @param node the {@link RedisClusterNode} where the {@literal message} originates. + * @param channel Channel + * @param count Subscription count. + */ + void unsubscribed(RedisClusterNode node, K channel, long count); + + /** + * Unsubscribed from a pattern. + * + * @param node the {@link RedisClusterNode} where the {@literal message} originates. 
+ * @param pattern Channel + * @param count Subscription count. + */ + void punsubscribed(RedisClusterNode node, K pattern, long count); +} diff --git a/src/main/java/io/lettuce/core/cluster/pubsub/StatefulRedisClusterPubSubConnection.java b/src/main/java/io/lettuce/core/cluster/pubsub/StatefulRedisClusterPubSubConnection.java new file mode 100644 index 0000000000..d95f4c37fe --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/pubsub/StatefulRedisClusterPubSubConnection.java @@ -0,0 +1,190 @@ +/* + * Copyright 2016-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.pubsub; + +import java.util.concurrent.CompletableFuture; +import java.util.function.Predicate; + +import io.lettuce.core.RedisException; +import io.lettuce.core.cluster.ClusterClientOptions; +import io.lettuce.core.cluster.api.sync.NodeSelection; +import io.lettuce.core.cluster.models.partitions.Partitions; +import io.lettuce.core.cluster.pubsub.api.async.RedisClusterPubSubAsyncCommands; +import io.lettuce.core.cluster.pubsub.api.reactive.RedisClusterPubSubReactiveCommands; +import io.lettuce.core.cluster.pubsub.api.sync.RedisClusterPubSubCommands; +import io.lettuce.core.pubsub.RedisPubSubListener; +import io.lettuce.core.pubsub.StatefulRedisPubSubConnection; + +/** + * A stateful Pub/Sub connection for Redis Cluster use. This connection type is intended for Pub/Sub messaging with Redis + * Cluster. The connection provides transparent command routing based on the first command key. + *

+ * This connection allows publishing and subscription to Pub/Sub messages within a Redis Cluster. Due to Redis Cluster's nature, + * messages are broadcasted across the cluster and a client can connect to any arbitrary node to participate with a + * subscription. + * + *

+ *  StatefulRedisClusterPubSubConnection<String, String> connection = clusterClient.connectPubSub();
+ *  connection.addListener(…);
+ *
+ *  RedisClusterPubSubCommands<String, String> sync = connection.sync();
+ *  sync.subscribe("channel");
+ *  sync.publish("channel", "message");
+ * 
+ * + *

+ * <b>Keyspace notifications</b>
+ * Redis clients can subscribe to user-space Pub/Sub messages and Redis keyspace notifications. Unlike user-space Pub/Sub
+ * messages, keyspace notifications are not broadcasted to the whole cluster: they remain local to the node that publishes
+ * them. Receiving keyspace notifications therefore requires subscriptions to the individual nodes that publish them.
+ *

+ * {@link StatefulRedisClusterPubSubConnection} allows node-specific subscriptions and {@link #setNodeMessagePropagation message + * propagation}. {@link #setNodeMessagePropagation} can notify a {@link RedisPubSubListener} that requires a single registration + * with {@link #addListener(RedisPubSubListener) this connection}. Node-subscriptions are supported on + * {@link #getConnection(String, int) connection} and {@link NodeSelection} levels through + * {@link RedisClusterPubSubAsyncCommands#nodes(Predicate) asynchronous}, {@link RedisClusterPubSubCommands#nodes(Predicate) + * synchronous}, and {@link RedisClusterPubSubReactiveCommands#nodes(Predicate) reactive} APIs. + * + *

+ *     
+ *  StatefulRedisClusterPubSubConnection<String, String> connection = clusterClient.connectPubSub();
+ *  connection.addListener(…);
+ *
+ *  RedisClusterPubSubCommands<String, String> sync = connection.sync();
+ *  sync.replicas().commands().psubscribe("__key*__:*");
+ *     
+ * 
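// Usage sketch (assumed names): receiving node-local keyspace notifications through this connection facade.
// "clusterClient" stands for an existing RedisClusterClient; the keyspace pattern targets database 0.
StatefulRedisClusterPubSubConnection<String, String> connection = clusterClient.connectPubSub();
connection.setNodeMessagePropagation(true); // forward messages from node subscriptions to listeners on this facade
connection.addListener(new RedisClusterPubSubAdapter<String, String>() {
    @Override
    public void message(RedisClusterNode node, String pattern, String channel, String message) {
        System.out.printf("keyspace event from %s on %s: %s%n", node.getNodeId(), channel, message);
    }
});
connection.sync().nodes(node -> node.is(RedisClusterNode.NodeFlag.MASTER)).commands().psubscribe("__keyspace@0__:*");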
+ * + * @author Mark Paluch + * @since 4.4 + */ +public interface StatefulRedisClusterPubSubConnection extends StatefulRedisPubSubConnection { + + /** + * Returns the {@link RedisClusterPubSubCommands} API for the current connection. Does not create a new connection. + * + * @return the synchronous API for the underlying connection. + */ + RedisClusterPubSubCommands sync(); + + /** + * Returns the {@link RedisClusterPubSubAsyncCommands} API for the current connection. Does not create a new connection. + * + * @return the asynchronous API for the underlying connection. + */ + RedisClusterPubSubAsyncCommands async(); + + /** + * Returns the {@link RedisClusterPubSubReactiveCommands} API for the current connection. Does not create a new connection. + * + * @return the reactive API for the underlying connection. + */ + RedisClusterPubSubReactiveCommands reactive(); + + /** + * Retrieve a connection to the specified cluster node using the nodeId. Host and port are looked up in the node list. This + * connection is bound to the node id. Once the cluster topology view is updated, the connection will try to reconnect the + * to the node with the specified {@code nodeId}, that behavior can also lead to a closed connection once the node with the + * specified {@code nodeId} is no longer part of the cluster. + *

+ * Do not close the connections. Otherwise, unpredictable behavior will occur. The nodeId must be part of the cluster and is + * validated against the current topology view in {@link io.lettuce.core.cluster.models.partitions.Partitions}. + * + * @param nodeId the node Id + * @return a connection to the requested cluster node + * @throws RedisException if the requested node identified by {@code nodeId} is not part of the cluster + */ + StatefulRedisPubSubConnection getConnection(String nodeId); + + /** + * Retrieve asynchronously a connection to the specified cluster node using the nodeId. Host and port are looked up in the + * node list. This connection is bound to the node id. Once the cluster topology view is updated, the connection will try to + * reconnect the to the node with the specified {@code nodeId}, that behavior can also lead to a closed connection once the + * node with the specified {@code nodeId} is no longer part of the cluster. + *

+ * Do not close the connections. Otherwise, unpredictable behavior will occur. The nodeId must be part of the cluster and is + * validated against the current topology view in {@link io.lettuce.core.cluster.models.partitions.Partitions}. + * + * @param nodeId the node Id + * @return {@link CompletableFuture} to indicate success or failure to connect to the requested cluster node. + * @throws RedisException if the requested node identified by {@code nodeId} is not part of the cluster + * @since 5.0 + */ + CompletableFuture> getConnectionAsync(String nodeId); + + /** + * Retrieve a connection to the specified cluster node using host and port. This connection is bound to a host and port. + * Updates to the cluster topology view can close the connection once the node, identified by {@code host} and {@code port}, + * is no longer part of the cluster. + *

+ * Do not close the connections. Otherwise, unpredictable behavior will occur. Host and port connections are verified by + * default for cluster membership; see {@link ClusterClientOptions#isValidateClusterNodeMembership()}. + * + * @param host the host + * @param port the port + * @return a connection to the requested cluster node + * @throws RedisException if the requested node identified by {@code host} and {@code port} is not part of the cluster + */ + StatefulRedisPubSubConnection getConnection(String host, int port); + + /** + * Retrieve asynchronously a connection to the specified cluster node using host and port. This connection is bound to a host and port. + * Updates to the cluster topology view can close the connection once the node, identified by {@code host} and {@code port}, + * is no longer part of the cluster. + *

+ * Do not close the connections. Otherwise, unpredictable behavior will occur. Host and port connections are verified by + * default for cluster membership; see {@link ClusterClientOptions#isValidateClusterNodeMembership()}. + * + * @param host the host + * @param port the port + * @return {@link CompletableFuture} to indicate success or failure to connect to the requested cluster node. + * @throws RedisException if the requested node identified by {@code host} and {@code port} is not part of the cluster + * @since 5.0 + */ + CompletableFuture> getConnectionAsync(String host, int port); + + /** + * @return Known partitions for this connection. + */ + Partitions getPartitions(); + + /** + * Enables/disables node message propagation to {@code this} {@link StatefulRedisClusterPubSubConnection connection's} + * {@link RedisPubSubListener listeners}. + *

+ * If {@code enabled} is {@literal true}, then Pub/Sub messages received on node-specific connections are propagated to this + * connection facade. Registered {@link RedisPubSubListener}s will receive messages from individual node subscriptions. + *
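Where the publishing node matters, a node-aware listener can be registered instead of a plain RedisPubSubListener. A minimal sketch, assuming the RedisClusterPubSubAdapter convenience base class that accompanies RedisClusterPubSubListener in the same package:

```java
import io.lettuce.core.cluster.models.partitions.RedisClusterNode;
import io.lettuce.core.cluster.pubsub.RedisClusterPubSubAdapter;
import io.lettuce.core.cluster.pubsub.StatefulRedisClusterPubSubConnection;

// Sketch: a node-aware listener also reports which cluster node delivered each message.
// Assumes the RedisClusterPubSubAdapter base class; adjust if your version differs.
class NodeAwareListenerSketch {

    static void register(StatefulRedisClusterPubSubConnection<String, String> connection) {

        // Required so that messages from node-specific subscriptions reach this facade's listeners.
        connection.setNodeMessagePropagation(true);

        connection.addListener(new RedisClusterPubSubAdapter<String, String>() {

            @Override
            public void message(RedisClusterNode node, String channel, String message) {
                System.out.printf("node %s, channel %s: %s%n", node.getNodeId(), channel, message);
            }
        });
    }
}
```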

+ * Node event propagation is disabled by default. + * + * @param enabled {@literal true} to enable node message propagation; {@literal false} (default) to disable message + * propagation. + */ + void setNodeMessagePropagation(boolean enabled); + + /** + * Add a new {@link RedisClusterPubSubListener listener}. + * + * @param listener the listener, must not be {@literal null}. + */ + void addListener(RedisClusterPubSubListener listener); + + /** + * Remove an existing {@link RedisClusterPubSubListener listener}. + * + * @param listener the listener, must not be {@literal null}. + */ + void removeListener(RedisClusterPubSubListener listener); +} diff --git a/src/main/java/io/lettuce/core/cluster/pubsub/api/async/NodeSelectionPubSubAsyncCommands.java b/src/main/java/io/lettuce/core/cluster/pubsub/api/async/NodeSelectionPubSubAsyncCommands.java new file mode 100644 index 0000000000..d489ef4484 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/pubsub/api/async/NodeSelectionPubSubAsyncCommands.java @@ -0,0 +1,59 @@ +/* + * Copyright 2016-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.pubsub.api.async; + +import io.lettuce.core.cluster.api.async.AsyncExecutions; + +/** + * Asynchronous executed commands on a node selection for Pub/Sub. + * + * @author Mark Paluch + * @since 4.4 + */ +public interface NodeSelectionPubSubAsyncCommands { + + /** + * Listen for messages published to channels matching the given patterns. + * + * @param patterns the patterns + * @return RedisFuture<Void> Future to synchronize {@code psubscribe} completion + */ + AsyncExecutions psubscribe(K... patterns); + + /** + * Stop listening for messages posted to channels matching the given patterns. + * + * @param patterns the patterns + * @return RedisFuture<Void> Future to synchronize {@code punsubscribe} completion + */ + AsyncExecutions punsubscribe(K... patterns); + + /** + * Listen for messages published to the given channels. + * + * @param channels the channels + * @return RedisFuture<Void> Future to synchronize {@code subscribe} completion + */ + AsyncExecutions subscribe(K... channels); + + /** + * Stop listening for messages posted to the given channels. + * + * @param channels the channels + * @return RedisFuture<Void> Future to synchronize {@code unsubscribe} completion. + */ + AsyncExecutions unsubscribe(K... channels); +} diff --git a/src/main/java/io/lettuce/core/cluster/pubsub/api/async/PubSubAsyncNodeSelection.java b/src/main/java/io/lettuce/core/cluster/pubsub/api/async/PubSubAsyncNodeSelection.java new file mode 100644 index 0000000000..06b446c36c --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/pubsub/api/async/PubSubAsyncNodeSelection.java @@ -0,0 +1,30 @@ +/* + * Copyright 2016-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.pubsub.api.async; + +import io.lettuce.core.cluster.api.NodeSelectionSupport; +import io.lettuce.core.pubsub.api.async.RedisPubSubAsyncCommands; + +/** + * Node selection with access to asynchronous executed commands on the set. + * + * @author Mark Paluch + * @since 4.4 + */ +public interface PubSubAsyncNodeSelection + extends NodeSelectionSupport, NodeSelectionPubSubAsyncCommands> { + +} diff --git a/src/main/java/io/lettuce/core/cluster/pubsub/api/async/RedisClusterPubSubAsyncCommands.java b/src/main/java/io/lettuce/core/cluster/pubsub/api/async/RedisClusterPubSubAsyncCommands.java new file mode 100644 index 0000000000..f7a28ec250 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/pubsub/api/async/RedisClusterPubSubAsyncCommands.java @@ -0,0 +1,112 @@ +/* + * Copyright 2016-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.pubsub.api.async; + +import java.util.function.Predicate; + +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; +import io.lettuce.core.cluster.pubsub.StatefulRedisClusterPubSubConnection; +import io.lettuce.core.pubsub.api.async.RedisPubSubAsyncCommands; + +/** + * Asynchronous and thread-safe Redis Cluster PubSub API. Operations are executed either on the main connection or a + * {@link PubSubAsyncNodeSelection}. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.4 + */ +public interface RedisClusterPubSubAsyncCommands extends RedisPubSubAsyncCommands { + + /** + * @return the underlying connection. + */ + StatefulRedisClusterPubSubConnection getStatefulConnection(); + + /** + * Select all masters. + * + * @return API with asynchronous executed commands on a selection of master cluster nodes. + */ + default PubSubAsyncNodeSelection masters() { + return nodes(redisClusterNode -> redisClusterNode.is(RedisClusterNode.NodeFlag.MASTER)); + } + + /** + * Select all replicas. + * + * @return API with asynchronous executed commands on a selection of replica cluster nodes. + * @deprecated since 5.2, use {@link #replicas()} + */ + @Deprecated + default PubSubAsyncNodeSelection slaves() { + return nodes(redisClusterNode -> redisClusterNode.is(RedisClusterNode.NodeFlag.SLAVE)); + } + + /** + * Select all replicas. + * + * @param predicate Predicate to filter nodes + * @return API with asynchronous executed commands on a selection of replica cluster nodes. 
+ * @deprecated since 5.2, use {@link #replicas(Predicate)} + */ + @Deprecated + default PubSubAsyncNodeSelection slaves(Predicate predicate) { + return nodes(redisClusterNode -> predicate.test(redisClusterNode) + && redisClusterNode.is(RedisClusterNode.NodeFlag.SLAVE)); + } + + /** + * Select all replicas. + * + * @return API with asynchronous executed commands on a selection of replica cluster nodes. + * @since 5.2 + */ + @Deprecated + default PubSubAsyncNodeSelection replicas() { + return nodes(redisClusterNode -> redisClusterNode.is(RedisClusterNode.NodeFlag.REPLICA)); + } + + /** + * Select all replicas. + * + * @param predicate Predicate to filter nodes + * @return API with asynchronous executed commands on a selection of replica cluster nodes. + * @since 5.2 + */ + default PubSubAsyncNodeSelection replicas(Predicate predicate) { + return nodes(redisClusterNode -> predicate.test(redisClusterNode) + && redisClusterNode.is(RedisClusterNode.NodeFlag.REPLICA)); + } + + /** + * Select all known cluster nodes. + * + * @return API with asynchronous executed commands on a selection of all cluster nodes. + */ + default PubSubAsyncNodeSelection all() { + return nodes(redisClusterNode -> true); + } + + /** + * Select nodes by a predicate. + * + * @param predicate Predicate to filter nodes + * @return API with asynchronous executed commands on a selection of cluster nodes matching {@code predicate} + */ + PubSubAsyncNodeSelection nodes(Predicate predicate); +} diff --git a/src/main/java/io/lettuce/core/cluster/pubsub/api/async/package-info.java b/src/main/java/io/lettuce/core/cluster/pubsub/api/async/package-info.java new file mode 100644 index 0000000000..f6267411e9 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/pubsub/api/async/package-info.java @@ -0,0 +1,5 @@ +/** + * Redis Cluster Pub/Sub API for asynchronous executed commands. + */ +package io.lettuce.core.cluster.pubsub.api.async; + diff --git a/src/main/java/io/lettuce/core/cluster/pubsub/api/reactive/NodeSelectionPubSubReactiveCommands.java b/src/main/java/io/lettuce/core/cluster/pubsub/api/reactive/NodeSelectionPubSubReactiveCommands.java new file mode 100644 index 0000000000..fa872bc430 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/pubsub/api/reactive/NodeSelectionPubSubReactiveCommands.java @@ -0,0 +1,59 @@ +/* + * Copyright 2016-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.pubsub.api.reactive; + +import io.lettuce.core.cluster.api.reactive.ReactiveExecutions; + +/** + * Reactive executed commands on a node selection for Pub/Sub. + * + * @author Mark Paluch + * @since 4.4 + */ +public interface NodeSelectionPubSubReactiveCommands { + + /** + * Listen for messages published to channels matching the given patterns. + * + * @param patterns the patterns + * @return RedisFuture<Void> Future to synchronize {@code psubscribe} completion + */ + ReactiveExecutions psubscribe(K... 
patterns); + + /** + * Stop listening for messages posted to channels matching the given patterns. + * + * @param patterns the patterns + * @return RedisFuture<Void> Future to synchronize {@code punsubscribe} completion + */ + ReactiveExecutions punsubscribe(K... patterns); + + /** + * Listen for messages published to the given channels. + * + * @param channels the channels + * @return RedisFuture<Void> Future to synchronize {@code subscribe} completion + */ + ReactiveExecutions subscribe(K... channels); + + /** + * Stop listening for messages posted to the given channels. + * + * @param channels the channels + * @return RedisFuture<Void> Future to synchronize {@code unsubscribe} completion. + */ + ReactiveExecutions unsubscribe(K... channels); +} diff --git a/src/main/java/io/lettuce/core/cluster/pubsub/api/reactive/PubSubReactiveNodeSelection.java b/src/main/java/io/lettuce/core/cluster/pubsub/api/reactive/PubSubReactiveNodeSelection.java new file mode 100644 index 0000000000..b4d2b21beb --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/pubsub/api/reactive/PubSubReactiveNodeSelection.java @@ -0,0 +1,31 @@ +/* + * Copyright 2016-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.pubsub.api.reactive; + +import io.lettuce.core.cluster.api.NodeSelectionSupport; +import io.lettuce.core.pubsub.api.reactive.RedisPubSubReactiveCommands; +import io.lettuce.core.pubsub.api.sync.RedisPubSubCommands; + +/** + * Node selection with access to {@link RedisPubSubCommands}. + * + * @author Mark Paluch + * @since 4.4 + */ +public interface PubSubReactiveNodeSelection + extends NodeSelectionSupport, NodeSelectionPubSubReactiveCommands> { + +} diff --git a/src/main/java/io/lettuce/core/cluster/pubsub/api/reactive/RedisClusterPubSubReactiveCommands.java b/src/main/java/io/lettuce/core/cluster/pubsub/api/reactive/RedisClusterPubSubReactiveCommands.java new file mode 100644 index 0000000000..372228e2a1 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/pubsub/api/reactive/RedisClusterPubSubReactiveCommands.java @@ -0,0 +1,111 @@ +/* + * Copyright 2016-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.cluster.pubsub.api.reactive; + +import java.util.function.Predicate; + +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; +import io.lettuce.core.cluster.pubsub.StatefulRedisClusterPubSubConnection; +import io.lettuce.core.pubsub.api.reactive.RedisPubSubReactiveCommands; + +/** + * Reactive and thread-safe Redis Cluster PubSub API. Operations are executed either on the main connection or a + * {@link PubSubReactiveNodeSelection}. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.4 + */ +public interface RedisClusterPubSubReactiveCommands extends RedisPubSubReactiveCommands { + + /** + * @return the underlying connection. + */ + StatefulRedisClusterPubSubConnection getStatefulConnection(); + + /** + * Select all masters. + * + * @return API with asynchronous executed commands on a selection of master cluster nodes. + */ + default PubSubReactiveNodeSelection masters() { + return nodes(redisClusterNode -> redisClusterNode.is(RedisClusterNode.NodeFlag.MASTER)); + } + + /** + * Select all replicas. + * + * @return API with asynchronous executed commands on a selection of replica cluster nodes. + * @deprecated since 5.2, use {@link #replicas()}. + */ + @Deprecated + default PubSubReactiveNodeSelection slaves() { + return nodes(redisClusterNode -> redisClusterNode.is(RedisClusterNode.NodeFlag.SLAVE)); + } + + /** + * Select all replicas. + * + * @param predicate Predicate to filter nodes + * @return API with asynchronous executed commands on a selection of replica cluster nodes. + * @deprecated since 5.2, use {@link #replicas()}. + */ + @Deprecated + default PubSubReactiveNodeSelection slaves(Predicate predicate) { + return nodes(redisClusterNode -> predicate.test(redisClusterNode) + && redisClusterNode.is(RedisClusterNode.NodeFlag.SLAVE)); + } + + /** + * Select all replicas. + * + * @return API with asynchronous executed commands on a selection of replica cluster nodes. + * @since 5.2 + */ + default PubSubReactiveNodeSelection replicas() { + return nodes(redisClusterNode -> redisClusterNode.is(RedisClusterNode.NodeFlag.REPLICA)); + } + + /** + * Select all replicas. + * + * @param predicate Predicate to filter nodes + * @return API with asynchronous executed commands on a selection of replica cluster nodes. + * @since 5.2 + */ + default PubSubReactiveNodeSelection replicas(Predicate predicate) { + return nodes(redisClusterNode -> predicate.test(redisClusterNode) + && redisClusterNode.is(RedisClusterNode.NodeFlag.REPLICA)); + } + + /** + * Select all known cluster nodes. + * + * @return API with asynchronous executed commands on a selection of all cluster nodes. + */ + default PubSubReactiveNodeSelection all() { + return nodes(redisClusterNode -> true); + } + + /** + * Select nodes by a predicate. + * + * @param predicate Predicate to filter nodes + * @return API with asynchronous executed commands on a selection of cluster nodes matching {@code predicate} + */ + PubSubReactiveNodeSelection nodes(Predicate predicate); +} diff --git a/src/main/java/io/lettuce/core/cluster/pubsub/api/reactive/package-info.java b/src/main/java/io/lettuce/core/cluster/pubsub/api/reactive/package-info.java new file mode 100644 index 0000000000..07677fe01b --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/pubsub/api/reactive/package-info.java @@ -0,0 +1,5 @@ +/** + * Redis Cluster Pub/Sub API for reactive command execution. 
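A hedged sketch of how the reactive facade pairs with node selections: the subscription is issued per node through a predicate-based selection, while messages are consumed on the connection facade. It assumes node message propagation is enabled and that observeChannels() from the base reactive Pub/Sub API is available.

```java
import io.lettuce.core.cluster.models.partitions.RedisClusterNode;
import io.lettuce.core.cluster.pubsub.StatefulRedisClusterPubSubConnection;

// Sketch: subscribe only on masters that actually serve slots, then consume the propagated
// messages reactively. Assumes setNodeMessagePropagation(true) and the observeChannels()
// operator inherited from the base reactive Pub/Sub API.
class ReactiveClusterPubSubSketch {

    static void listen(StatefulRedisClusterPubSubConnection<String, String> connection) {

        connection.setNodeMessagePropagation(true);

        // Issue the subscription on a predicate-based node selection (asynchronous dispatch).
        connection.async()
                .nodes(node -> node.is(RedisClusterNode.NodeFlag.MASTER) && !node.getSlots().isEmpty())
                .commands()
                .subscribe("notifications");

        // Consume the propagated messages as a reactive stream on the connection facade.
        connection.reactive().observeChannels()
                .doOnNext(msg -> System.out.printf("%s -> %s%n", msg.getChannel(), msg.getMessage()))
                .subscribe();
    }
}
```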
+ */ +package io.lettuce.core.cluster.pubsub.api.reactive; + diff --git a/src/main/java/io/lettuce/core/cluster/pubsub/api/sync/NodeSelectionPubSubCommands.java b/src/main/java/io/lettuce/core/cluster/pubsub/api/sync/NodeSelectionPubSubCommands.java new file mode 100644 index 0000000000..b0c37ad649 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/pubsub/api/sync/NodeSelectionPubSubCommands.java @@ -0,0 +1,59 @@ +/* + * Copyright 2016-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.pubsub.api.sync; + +import io.lettuce.core.cluster.api.sync.Executions; + +/** + * Synchronous executed commands on a node selection for Pub/Sub. + * + * @author Mark Paluch + * @since 4.4 + */ +public interface NodeSelectionPubSubCommands { + + /** + * Listen for messages published to channels matching the given patterns. + * + * @param patterns the patterns + * @return Executions to synchronize {@code psubscribe} completion + */ + Executions psubscribe(K... patterns); + + /** + * Stop listening for messages posted to channels matching the given patterns. + * + * @param patterns the patterns + * @return Executions Future to synchronize {@code punsubscribe} completion + */ + Executions punsubscribe(K... patterns); + + /** + * Listen for messages published to the given channels. + * + * @param channels the channels + * @return Executions Future to synchronize {@code subscribe} completion + */ + Executions subscribe(K... channels); + + /** + * Stop listening for messages posted to the given channels. + * + * @param channels the channels + * @return Executions Future to synchronize {@code unsubscribe} completion. + */ + Executions unsubscribe(K... channels); +} diff --git a/src/main/java/io/lettuce/core/cluster/pubsub/api/sync/PubSubNodeSelection.java b/src/main/java/io/lettuce/core/cluster/pubsub/api/sync/PubSubNodeSelection.java new file mode 100644 index 0000000000..a043f131f8 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/pubsub/api/sync/PubSubNodeSelection.java @@ -0,0 +1,30 @@ +/* + * Copyright 2016-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.pubsub.api.sync; + +import io.lettuce.core.cluster.api.NodeSelectionSupport; +import io.lettuce.core.pubsub.api.sync.RedisPubSubCommands; + +/** + * Node selection with access to {@link RedisPubSubCommands}. 
+ * + * @author Mark Paluch + * @since 4.4 + */ +public interface PubSubNodeSelection + extends NodeSelectionSupport, NodeSelectionPubSubCommands> { + +} diff --git a/src/main/java/io/lettuce/core/cluster/pubsub/api/sync/RedisClusterPubSubCommands.java b/src/main/java/io/lettuce/core/cluster/pubsub/api/sync/RedisClusterPubSubCommands.java new file mode 100644 index 0000000000..e37d30cc43 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/pubsub/api/sync/RedisClusterPubSubCommands.java @@ -0,0 +1,111 @@ +/* + * Copyright 2016-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.pubsub.api.sync; + +import java.util.function.Predicate; + +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; +import io.lettuce.core.cluster.pubsub.StatefulRedisClusterPubSubConnection; +import io.lettuce.core.pubsub.api.sync.RedisPubSubCommands; + +/** + * Synchronous and thread-safe Redis Cluster PubSub API. Operations are executed either on the main connection or a + * {@link PubSubNodeSelection}. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.4 + */ +public interface RedisClusterPubSubCommands extends RedisPubSubCommands { + + /** + * @return the underlying connection. + */ + StatefulRedisClusterPubSubConnection getStatefulConnection(); + + /** + * Select all masters. + * + * @return API with asynchronous executed commands on a selection of master cluster nodes. + */ + default PubSubNodeSelection masters() { + return nodes(redisClusterNode -> redisClusterNode.is(RedisClusterNode.NodeFlag.MASTER)); + } + + /** + * Select all replicas. + * + * @return API with asynchronous executed commands on a selection of replica cluster nodes. + * @deprecated since 5.2, use {@link #replicas()} + */ + @Deprecated + default PubSubNodeSelection slaves() { + return nodes(redisClusterNode -> redisClusterNode.is(RedisClusterNode.NodeFlag.SLAVE)); + } + + /** + * Select all replicas. + * + * @param predicate Predicate to filter nodes + * @return API with asynchronous executed commands on a selection of replica cluster nodes. + * @deprecated since 5.2, use {@link #replicas(Predicate)} + */ + @Deprecated + default PubSubNodeSelection slaves(Predicate predicate) { + return nodes(redisClusterNode -> predicate.test(redisClusterNode) + && redisClusterNode.is(RedisClusterNode.NodeFlag.SLAVE)); + } + + /** + * Select all replicas. + * + * @return API with asynchronous executed commands on a selection of replica cluster nodes. + * @since 5.2 + */ + default PubSubNodeSelection replicas() { + return nodes(redisClusterNode -> redisClusterNode.is(RedisClusterNode.NodeFlag.REPLICA)); + } + + /** + * Select all replicas. + * + * @param predicate Predicate to filter nodes + * @return API with asynchronous executed commands on a selection of replica cluster nodes. 
+ * @since 5.2 + */ + default PubSubNodeSelection replicas(Predicate predicate) { + return nodes(redisClusterNode -> predicate.test(redisClusterNode) + && redisClusterNode.is(RedisClusterNode.NodeFlag.REPLICA)); + } + + /** + * Select all known cluster nodes. + * + * @return API with asynchronous executed commands on a selection of all cluster nodes. + */ + default PubSubNodeSelection all() { + return nodes(redisClusterNode -> true); + } + + /** + * Select nodes by a predicate. + * + * @param predicate Predicate to filter nodes + * @return API with asynchronous executed commands on a selection of cluster nodes matching {@code predicate} + */ + PubSubNodeSelection nodes(Predicate predicate); +} diff --git a/src/main/java/io/lettuce/core/cluster/pubsub/api/sync/package-info.java b/src/main/java/io/lettuce/core/cluster/pubsub/api/sync/package-info.java new file mode 100644 index 0000000000..1f420acda0 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/pubsub/api/sync/package-info.java @@ -0,0 +1,5 @@ +/** + * Redis Cluster Pub/Sub API for synchronous executed commands. + */ +package io.lettuce.core.cluster.pubsub.api.sync; + diff --git a/src/main/java/io/lettuce/core/cluster/pubsub/package-info.java b/src/main/java/io/lettuce/core/cluster/pubsub/package-info.java new file mode 100644 index 0000000000..897d3f2252 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/pubsub/package-info.java @@ -0,0 +1,5 @@ +/** + * Redis Cluster Pub/Sub support. + */ +package io.lettuce.core.cluster.pubsub; + diff --git a/src/main/java/io/lettuce/core/cluster/topology/ClusterTopologyRefresh.java b/src/main/java/io/lettuce/core/cluster/topology/ClusterTopologyRefresh.java new file mode 100644 index 0000000000..4ce29b290a --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/topology/ClusterTopologyRefresh.java @@ -0,0 +1,55 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.topology; + +import java.time.Duration; +import java.util.Map; +import java.util.concurrent.CompletionStage; + +import io.lettuce.core.RedisURI; +import io.lettuce.core.cluster.models.partitions.Partitions; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; +import io.lettuce.core.resource.ClientResources; + +/** + * Utility to refresh the cluster topology view based on {@link Partitions}. + * + * @author Mark Paluch + */ +public interface ClusterTopologyRefresh { + + /** + * Create a new {@link ClusterTopologyRefresh} instance. + * + * @param nodeConnectionFactory + * @param clientResources + * @return + */ + static ClusterTopologyRefresh create(NodeConnectionFactory nodeConnectionFactory, ClientResources clientResources) { + return new DefaultClusterTopologyRefresh(nodeConnectionFactory, clientResources); + } + + /** + * Load topology views from a collection of {@link RedisURI}s and return the view per {@link RedisURI}. Partitions contain + * an ordered list of {@link RedisClusterNode}s. 
The sort key is latency. Nodes with lower latency come first. + * + * @param seed collection of {@link RedisURI}s + * @param connectTimeout connect timeout + * @param discovery {@literal true} to discover additional nodes + * @return mapping between {@link RedisURI} and {@link Partitions} + */ + CompletionStage> loadViews(Iterable seed, Duration connectTimeout, boolean discovery); +} diff --git a/src/main/java/io/lettuce/core/cluster/topology/Connections.java b/src/main/java/io/lettuce/core/cluster/topology/Connections.java new file mode 100644 index 0000000000..bb94288a84 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/topology/Connections.java @@ -0,0 +1,154 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.topology; + +import java.time.Duration; +import java.util.*; +import java.util.concurrent.TimeUnit; +import java.util.function.Supplier; + +import io.lettuce.core.internal.ExceptionFactory; +import io.lettuce.core.RedisURI; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.output.StatusOutput; +import io.lettuce.core.protocol.Command; +import io.lettuce.core.protocol.CommandArgs; +import io.lettuce.core.protocol.CommandKeyword; +import io.lettuce.core.protocol.CommandType; +import io.lettuce.core.resource.ClientResources; + +/** + * @author Mark Paluch + * @author Christian Weitendorf + * @author Xujs + */ +class Connections { + + private final ClientResources clientResources; + private final Map> connections; + private volatile boolean closed = false; + + public Connections(ClientResources clientResources, Map> connections) { + this.clientResources = clientResources; + this.connections = connections; + } + + /** + * Add a connection for a {@link RedisURI} + * + * @param redisURI + * @param connection + */ + public void addConnection(RedisURI redisURI, StatefulRedisConnection connection) { + + if (this.closed) { // fastpath + connection.closeAsync(); + return; + } + + synchronized (this.connections) { + + if (this.closed) { + connection.closeAsync(); + return; + } + + this.connections.put(redisURI, connection); + } + } + + /** + * @return {@literal true} if no connections present. + */ + public boolean isEmpty() { + synchronized (this.connections) { + return this.connections.isEmpty(); + } + } + + /* + * Initiate {@code CLUSTER NODES} on all connections and return the {@link Requests}. + * + * @return the {@link Requests}. + */ + public Requests requestTopology(long timeout, TimeUnit timeUnit) { + + return doRequest(() -> { + + CommandArgs args = new CommandArgs<>(StringCodec.UTF8).add(CommandKeyword.NODES); + Command command = new Command<>(CommandType.CLUSTER, new StatusOutput<>(StringCodec.UTF8), + args); + return new TimedAsyncCommand<>(command); + }, timeout, timeUnit); + } + + /* + * Initiate {@code INFO CLIENTS} on all connections and return the {@link Requests}. 
+ * + * @return the {@link Requests}. + */ + public Requests requestClients(long timeout, TimeUnit timeUnit) { + + return doRequest(() -> { + + Command command = new Command<>(CommandType.INFO, new StatusOutput<>(StringCodec.UTF8), + new CommandArgs<>(StringCodec.UTF8).add("CLIENTS")); + return new TimedAsyncCommand<>(command); + }, timeout, timeUnit); + } + + /* + * Initiate {@code CLUSTER NODES} on all connections and return the {@link Requests}. + * + * @return the {@link Requests}. + */ + private Requests doRequest(Supplier> commandFactory, long timeout, + TimeUnit timeUnit) { + + Requests requests = new Requests(); + Duration timeoutDuration = Duration.ofNanos(timeUnit.toNanos(timeout)); + + synchronized (this.connections) { + for (Map.Entry> entry : this.connections.entrySet()) { + + TimedAsyncCommand timedCommand = commandFactory.get(); + + clientResources.timer().newTimeout(it -> { + timedCommand.completeExceptionally(ExceptionFactory.createTimeoutException(timeoutDuration)); + }, timeout, timeUnit); + + entry.getValue().dispatch(timedCommand); + requests.addRequest(entry.getKey(), timedCommand); + } + } + + return requests; + } + + public Connections retainAll(Set connectionsToRetain) { + + Set keys = new LinkedHashSet<>(connections.keySet()); + + for (RedisURI key : keys) { + if (!connectionsToRetain.contains(key)) { + this.connections.remove(key); + } + } + + return this; + } +} diff --git a/src/main/java/io/lettuce/core/cluster/topology/DefaultClusterTopologyRefresh.java b/src/main/java/io/lettuce/core/cluster/topology/DefaultClusterTopologyRefresh.java new file mode 100644 index 0000000000..43e79e1eca --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/topology/DefaultClusterTopologyRefresh.java @@ -0,0 +1,505 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.cluster.topology; + +import java.io.IOException; +import java.net.SocketAddress; +import java.time.Duration; +import java.util.*; +import java.util.concurrent.*; +import java.util.concurrent.atomic.AtomicInteger; +import java.util.function.Function; +import java.util.stream.Collectors; +import java.util.stream.StreamSupport; + +import io.lettuce.core.*; +import io.lettuce.core.api.StatefulConnection; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.cluster.models.partitions.Partitions; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; +import io.lettuce.core.cluster.topology.TopologyComparators.SortAction; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.internal.ExceptionFactory; +import io.lettuce.core.internal.Exceptions; +import io.lettuce.core.internal.Futures; +import io.lettuce.core.resource.ClientResources; +import io.netty.util.Timeout; +import io.netty.util.concurrent.EventExecutorGroup; +import io.netty.util.internal.SystemPropertyUtil; +import io.netty.util.internal.logging.InternalLogger; +import io.netty.util.internal.logging.InternalLoggerFactory; + +/** + * Utility to refresh the cluster topology view based on {@link Partitions}. + * + * @author Mark Paluch + */ +class DefaultClusterTopologyRefresh implements ClusterTopologyRefresh { + + private static final InternalLogger logger = InternalLoggerFactory.getInstance(DefaultClusterTopologyRefresh.class); + + private final NodeConnectionFactory nodeConnectionFactory; + private final ClientResources clientResources; + + public DefaultClusterTopologyRefresh(NodeConnectionFactory nodeConnectionFactory, ClientResources clientResources) { + this.nodeConnectionFactory = nodeConnectionFactory; + this.clientResources = clientResources; + } + + /** + * Load partition views from a collection of {@link RedisURI}s and return the view per {@link RedisURI}. Partitions contain + * an ordered list of {@link RedisClusterNode}s. The sort key is latency. Nodes with lower latency come first. 
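For context only, and not part of the refresh internals added here: the refresh implemented above is typically driven through the public client options. A sketch, assuming the ClusterTopologyRefreshOptions and ClusterClientOptions builder API of this Lettuce version:

```java
import java.time.Duration;

import io.lettuce.core.cluster.ClusterClientOptions;
import io.lettuce.core.cluster.ClusterTopologyRefreshOptions;
import io.lettuce.core.cluster.RedisClusterClient;

// Context sketch: periodic refresh re-runs the topology query on a schedule, adaptive
// triggers refresh it on events such as MOVED/ASK redirects or reconnects.
class TopologyRefreshConfigSketch {

    static void configure(RedisClusterClient clusterClient) {

        ClusterTopologyRefreshOptions topologyRefreshOptions = ClusterTopologyRefreshOptions.builder()
                .enablePeriodicRefresh(Duration.ofSeconds(30))
                .enableAllAdaptiveRefreshTriggers()
                .build();

        clusterClient.setOptions(ClusterClientOptions.builder()
                .topologyRefreshOptions(topologyRefreshOptions)
                .build());
    }
}
```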
+ * + * @param seed collection of {@link RedisURI}s + * @param connectTimeout connect timeout + * @param discovery {@literal true} to discover additional nodes + * @return mapping between {@link RedisURI} and {@link Partitions} + */ + public CompletionStage> loadViews(Iterable seed, Duration connectTimeout, + boolean discovery) { + + if (!isEventLoopActive()) { + return CompletableFuture.completedFuture(Collections.emptyMap()); + } + + long commandTimeoutNs = getCommandTimeoutNs(seed); + ConnectionTracker tracker = new ConnectionTracker(); + long connectionTimeout = commandTimeoutNs + connectTimeout.toNanos(); + openConnections(tracker, seed, connectionTimeout, TimeUnit.NANOSECONDS); + + CompletableFuture composition = tracker.whenComplete(map -> { + return new Connections(clientResources, map); + }).thenCompose(connections -> { + + Requests requestedTopology = connections.requestTopology(commandTimeoutNs, TimeUnit.NANOSECONDS); + Requests requestedClients = connections.requestClients(commandTimeoutNs, TimeUnit.NANOSECONDS); + return CompletableFuture.allOf(requestedTopology.allCompleted(), requestedClients.allCompleted()) + .thenCompose(ignore -> { + + NodeTopologyViews views = getNodeSpecificViews(requestedTopology, requestedClients); + + if (discovery && isEventLoopActive()) { + + Set allKnownUris = views.getClusterNodes(); + Set discoveredNodes = difference(allKnownUris, toSet(seed)); + + if (discoveredNodes.isEmpty()) { + return CompletableFuture.completedFuture(views); + } + + openConnections(tracker, discoveredNodes, connectionTimeout, TimeUnit.NANOSECONDS); + + return tracker.whenComplete(map -> { + return new Connections(clientResources, map).retainAll(discoveredNodes); + }).thenCompose(newConnections -> { + + Requests additionalTopology = newConnections + .requestTopology(commandTimeoutNs, TimeUnit.NANOSECONDS).mergeWith(requestedTopology); + Requests additionalClients = newConnections + .requestClients(commandTimeoutNs, TimeUnit.NANOSECONDS).mergeWith(requestedClients); + return CompletableFuture + .allOf(additionalTopology.allCompleted(), additionalClients.allCompleted()) + .thenApply(ignore2 -> { + + return getNodeSpecificViews(additionalTopology, additionalClients); + }); + }); + } + + return CompletableFuture.completedFuture(views); + }).whenComplete((ignore, throwable) -> { + + if (throwable != null) { + try { + tracker.close(); + } catch (Exception e) { + logger.debug("Cannot close ClusterTopologyRefresh connections", e); + } + } + }).thenCompose((it) -> tracker.close().thenApply(ignore -> it)).thenCompose(it -> { + + if (it.isEmpty()) { + Exception exception = tryFail(requestedTopology, tracker, seed); + return Futures.failed(exception); + } + + return CompletableFuture.completedFuture(it); + }); + }); + + return composition.thenApply(NodeTopologyViews::toMap); + } + + private Exception tryFail(Requests requestedTopology, ConnectionTracker tracker, Iterable seed) { + + Map failures = new LinkedHashMap<>(); + CannotRetrieveClusterPartitions exception = new CannotRetrieveClusterPartitions(seed, failures); + + for (RedisURI node : requestedTopology.nodes()) { + + TimedAsyncCommand request = requestedTopology.getRequest(node); + if (request == null || !request.isCompletedExceptionally()) { + continue; + } + + Throwable cause = getException(request); + if (cause != null) { + failures.put(node, getExceptionDetail(cause)); + exception.addSuppressed(cause); + } + } + + for (Map.Entry>> entry : tracker.connections + .entrySet()) { + + CompletableFuture> future = entry.getValue(); 
+ + if (!future.isDone() || !future.isCompletedExceptionally()) { + continue; + } + + try { + future.join(); + } catch (CompletionException e) { + + Throwable cause = e.getCause(); + if (cause != null) { + failures.put(entry.getKey(), getExceptionDetail(cause)); + exception.addSuppressed(cause); + } + } + } + + return exception; + } + + private static String getExceptionDetail(Throwable exception) { + + if (exception instanceof RedisConnectionException && exception.getCause() instanceof IOException) { + exception = exception.getCause(); + } + + return LettuceStrings.isNotEmpty(exception.getMessage()) ? exception.getMessage() : exception.toString(); + } + + private Set toSet(Iterable seed) { + return StreamSupport.stream(seed.spliterator(), false).collect(Collectors.toCollection(HashSet::new)); + } + + NodeTopologyViews getNodeSpecificViews(Requests requestedTopology, Requests requestedClients) { + + List allNodes = new ArrayList<>(); + + Map latencies = new HashMap<>(); + Map clientCountByNodeId = new HashMap<>(); + + Set nodes = requestedTopology.nodes(); + + List views = new ArrayList<>(); + for (RedisURI nodeUri : nodes) { + + try { + NodeTopologyView nodeTopologyView = NodeTopologyView.from(nodeUri, requestedTopology, requestedClients); + + if (!nodeTopologyView.isAvailable()) { + continue; + } + + RedisClusterNode node = nodeTopologyView.getOwnPartition(); + if (node.getUri() == null) { + node.setUri(nodeUri); + } else { + node.addAlias(nodeUri); + } + + List nodeWithStats = new ArrayList<>(nodeTopologyView.getPartitions().size()); + + for (RedisClusterNode partition : nodeTopologyView.getPartitions()) { + + if (validNode(partition)) { + RedisClusterNodeSnapshot redisClusterNodeSnapshot = new RedisClusterNodeSnapshot(partition); + nodeWithStats.add(redisClusterNodeSnapshot); + + if (partition.is(RedisClusterNode.NodeFlag.MYSELF)) { + + // record latency for later partition ordering + latencies.put(partition.getNodeId(), nodeTopologyView.getLatency()); + clientCountByNodeId.put(partition.getNodeId(), nodeTopologyView.getConnectedClients()); + } + } + } + + allNodes.addAll(nodeWithStats); + + Partitions partitions = new Partitions(); + partitions.addAll(nodeWithStats); + + nodeTopologyView.setPartitions(partitions); + + views.add(nodeTopologyView); + } catch (CompletionException e) { + logger.warn(String.format("Cannot retrieve partition view from %s, error: %s", nodeUri, e)); + } + } + + for (RedisClusterNodeSnapshot node : allNodes) { + node.setConnectedClients(clientCountByNodeId.get(node.getNodeId())); + node.setLatencyNs(latencies.get(node.getNodeId())); + } + + SortAction sortAction = SortAction.getSortAction(); + for (NodeTopologyView view : views) { + + sortAction.sort(view.getPartitions()); + view.getPartitions().updateCache(); + } + + return new NodeTopologyViews(views); + } + + private static boolean validNode(RedisClusterNode redisClusterNode) { + + if (redisClusterNode.is(RedisClusterNode.NodeFlag.NOADDR)) { + return false; + } + + if (redisClusterNode.getUri() == null || redisClusterNode.getUri().getPort() == 0 + || LettuceStrings.isEmpty(redisClusterNode.getUri().getHost())) { + return false; + } + + return true; + } + + /* + * Open connections where an address can be resolved. 
+ */ + private void openConnections(ConnectionTracker tracker, Iterable redisURIs, long timeout, TimeUnit timeUnit) { + + for (RedisURI redisURI : redisURIs) { + + if (redisURI.getHost() == null || tracker.contains(redisURI) || !isEventLoopActive()) { + continue; + } + + try { + SocketAddress socketAddress = clientResources.socketAddressResolver().resolve(redisURI); + + ConnectionFuture> connectionFuture = nodeConnectionFactory + .connectToNodeAsync(StringCodec.UTF8, socketAddress); + + // Note: timeout skew due to potential socket address resolution and connection work possible. + + CompletableFuture> sync = new CompletableFuture<>(); + Timeout cancelTimeout = clientResources.timer().newTimeout(it -> { + + String message = String.format("Unable to connect to [%s]: Timeout after %s", socketAddress, + ExceptionFactory.formatTimeout(Duration.ofNanos(timeUnit.toNanos(timeout)))); + sync.completeExceptionally(new RedisConnectionException(message)); + }, timeout, timeUnit); + + connectionFuture.whenComplete((connection, throwable) -> { + + cancelTimeout.cancel(); + + if (throwable != null) { + + Throwable throwableToUse = Exceptions.unwrap(throwable); + + String message = String.format("Unable to connect to [%s]: %s", socketAddress, + throwableToUse.getMessage() != null ? throwableToUse.getMessage() : throwableToUse.toString()); + if (throwableToUse instanceof RedisConnectionException || throwableToUse instanceof IOException) { + if (logger.isDebugEnabled()) { + logger.debug(message, throwableToUse); + } else { + logger.warn(message); + } + } else { + logger.warn(message, throwableToUse); + } + + sync.completeExceptionally(new RedisConnectionException(message, throwableToUse)); + } else { + connection.async().clientSetname("lettuce#ClusterTopologyRefresh"); + + // avoid leaking resources + if (!sync.complete(connection)) { + connection.close(); + } + } + }); + + tracker.addConnection(redisURI, sync); + } catch (RuntimeException e) { + logger.warn(String.format("Unable to connect to [%s]", redisURI), e); + } + } + } + + private boolean isEventLoopActive() { + + EventExecutorGroup eventExecutors = clientResources.eventExecutorGroup(); + + return !eventExecutors.isShuttingDown(); + } + + private static Set difference(Set set1, Set set2) { + + Set result = new HashSet<>(set1.size()); + + for (E e1 : set1) { + if (!set2.contains(e1)) { + result.add(e1); + } + } + + List list = new ArrayList<>(set2.size()); + for (E e : set2) { + if (!set1.contains(e)) { + list.add(e); + } + } + + result.addAll(list); + + return result; + } + + private static long getCommandTimeoutNs(Iterable redisURIs) { + + RedisURI redisURI = redisURIs.iterator().next(); + return redisURI.getTimeout().toNanos(); + } + + /** + * Retrieve the exception from a {@link Future}. 
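The connect-timeout handling in openConnections follows a general "race a future against a timer and close the loser" pattern. A standalone sketch, using a plain ScheduledExecutorService in place of the Netty timer provided by ClientResources:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Sketch of the timeout race: complete the caller-visible future either with the resource or
// with a timeout error, and close a resource that arrives after losing the race.
class TimeoutRaceSketch {

    static <T extends AutoCloseable> CompletableFuture<T> withTimeout(CompletionStage<T> pending, long timeout,
            TimeUnit unit, ScheduledExecutorService scheduler) {

        CompletableFuture<T> sync = new CompletableFuture<>();

        // Arm the timeout: if it fires first, the caller sees a TimeoutException.
        ScheduledFuture<?> cancelTimeout = scheduler.schedule(
                () -> sync.completeExceptionally(new TimeoutException("connect timed out")), timeout, unit);

        pending.whenComplete((resource, throwable) -> {

            cancelTimeout.cancel(false);

            if (throwable != null) {
                sync.completeExceptionally(throwable);
            } else if (!sync.complete(resource)) {
                // The timeout won the race: close the late arrival to avoid leaking it.
                try {
                    resource.close();
                } catch (Exception ignore) {
                }
            }
        });

        return sync;
    }
}
```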
+ * + * @param future + * @return + */ + private static Throwable getException(Future future) { + + try { + future.get(); + } catch (Exception e) { + return Exceptions.bubble(e); + } + + return null; + } + + static class ConnectionTracker { + + private Map>> connections = new LinkedHashMap<>(); + + public void addConnection(RedisURI uri, CompletableFuture> future) { + connections.put(uri, future); + } + + @SuppressWarnings("rawtypes") + public CompletableFuture close() { + + CompletableFuture[] futures = connections.values().stream() + .map(it -> it.thenCompose(StatefulConnection::closeAsync).exceptionally(ignore -> null)) + .toArray(CompletableFuture[]::new); + + return CompletableFuture.allOf(futures); + } + + public boolean contains(RedisURI uri) { + return connections.containsKey(uri); + } + + public CompletableFuture whenComplete( + Function>, ? extends T> mappingFunction) { + + int expectedCount = connections.size(); + AtomicInteger latch = new AtomicInteger(); + CompletableFuture continuation = new CompletableFuture<>(); + + for (Map.Entry>> entry : connections + .entrySet()) { + + CompletableFuture> future = entry.getValue(); + + future.whenComplete((it, ex) -> { + + if (latch.incrementAndGet() == expectedCount) { + + try { + continuation.complete(mappingFunction.apply(collectConnections())); + } catch (RuntimeException e) { + continuation.completeExceptionally(e); + } + } + }); + } + + return continuation; + } + + protected Map> collectConnections() { + + Map> activeConnections = new LinkedHashMap<>(); + + for (Map.Entry>> entry : connections + .entrySet()) { + + CompletableFuture> future = entry.getValue(); + if (future.isDone() && !future.isCompletedExceptionally()) { + activeConnections.put(entry.getKey(), future.join()); + } + } + return activeConnections; + } + } + + @SuppressWarnings("serial") + static class CannotRetrieveClusterPartitions extends RedisException { + + private final Map failure; + + public CannotRetrieveClusterPartitions(Iterable seedNodes, Map failure) { + super(String.format("Cannot retrieve cluster partitions from %s", seedNodes)); + this.failure = failure; + } + + @Override + public String getMessage() { + + StringJoiner joiner = new StringJoiner(SystemPropertyUtil.get("line.separator", "\n")); + + if (!failure.isEmpty()) { + + joiner.add(super.getMessage()).add(""); + joiner.add("Details:"); + + for (Map.Entry entry : failure.entrySet()) { + joiner.add(String.format("\t[%s]: %s", entry.getKey(), entry.getValue())); + } + + joiner.add(""); + } + + return joiner.toString(); + } + + @Override + public synchronized Throwable fillInStackTrace() { + return this; + } + } +} diff --git a/src/main/java/io/lettuce/core/cluster/topology/NodeConnectionFactory.java b/src/main/java/io/lettuce/core/cluster/topology/NodeConnectionFactory.java new file mode 100644 index 0000000000..d17165aab7 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/topology/NodeConnectionFactory.java @@ -0,0 +1,55 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.topology; + +import java.net.SocketAddress; + +import io.lettuce.core.ConnectionFuture; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.codec.RedisCodec; + +/** + * Factory interface to obtain {@link StatefulRedisConnection connections} to Redis cluster nodes. + * + * @author Mark Paluch + * @since 4.2 + */ +public interface NodeConnectionFactory { + + /** + * Connects to a {@link SocketAddress} with the given {@link RedisCodec}. + * + * @param codec must not be {@literal null}. + * @param socketAddress must not be {@literal null}. + * @param Key type. + * @param Value type. + * @return a new {@link StatefulRedisConnection} + */ + StatefulRedisConnection connectToNode(RedisCodec codec, SocketAddress socketAddress); + + /** + * Connects to a {@link SocketAddress} with the given {@link RedisCodec} asynchronously. + * + * @param codec must not be {@literal null}. + * @param socketAddress must not be {@literal null}. + * @param Key type. + * @param Value type. + * @return a new {@link StatefulRedisConnection} + * @since 4.4 + */ + ConnectionFuture> connectToNodeAsync(RedisCodec codec, + SocketAddress socketAddress); +} diff --git a/src/main/java/io/lettuce/core/cluster/topology/NodeTopologyView.java b/src/main/java/io/lettuce/core/cluster/topology/NodeTopologyView.java new file mode 100644 index 0000000000..53fa5f4665 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/topology/NodeTopologyView.java @@ -0,0 +1,158 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.topology; + +import java.util.regex.Matcher; +import java.util.regex.Pattern; + +import io.lettuce.core.RedisFuture; +import io.lettuce.core.RedisURI; +import io.lettuce.core.cluster.models.partitions.ClusterPartitionParser; +import io.lettuce.core.cluster.models.partitions.Partitions; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; + +/** + * @author Mark Paluch + * @author Xujs + */ +class NodeTopologyView { + + private static final Pattern NUMBER = Pattern.compile("(\\d+)"); + private final boolean available; + private final RedisURI redisURI; + + private Partitions partitions; + private final int connectedClients; + + private final long latency; + private final String clusterNodes; + + private final String clientList; + + private NodeTopologyView(RedisURI redisURI) { + + this.available = false; + this.redisURI = redisURI; + this.partitions = new Partitions(); + this.connectedClients = 0; + this.clusterNodes = null; + this.clientList = null; + this.latency = 0; + } + + NodeTopologyView(RedisURI redisURI, String clusterNodes, String clientList, long latency) { + + this.available = true; + this.redisURI = redisURI; + + this.partitions = ClusterPartitionParser.parse(clusterNodes); + this.connectedClients = clientList != null ? 
getClients(clientList) : 0; + this.clusterNodes = clusterNodes; + this.clientList = clientList; + this.latency = latency; + } + + static NodeTopologyView from(RedisURI redisURI, Requests clusterNodesRequests, Requests clientListRequests) { + + TimedAsyncCommand nodes = clusterNodesRequests.getRequest(redisURI); + TimedAsyncCommand clients = clientListRequests.getRequest(redisURI); + + if (resultAvailable(nodes) && resultAvailable(clients)) { + return new NodeTopologyView(redisURI, nodes.join(), optionallyGet(clients), nodes.duration()); + } + return new NodeTopologyView(redisURI); + } + + private static T optionallyGet(TimedAsyncCommand command) { + + if (command.isCompletedExceptionally()) { + return null; + } + return command.join(); + } + + private static boolean resultAvailable(RedisFuture redisFuture) { + + if (redisFuture != null && redisFuture.isDone() && !redisFuture.isCancelled()) { + return true; + } + + return false; + } + + private int getClients(String rawClientsOutput) { + String[] rows = rawClientsOutput.trim().split("\\n"); + for (String row : rows) { + + Matcher matcher = NUMBER.matcher(row); + if (matcher.find()) { + return Integer.parseInt(matcher.group(1)); + } + } + return 0; + } + + long getLatency() { + return latency; + } + + boolean isAvailable() { + return available; + } + + Partitions getPartitions() { + return partitions; + } + + int getConnectedClients() { + return connectedClients; + } + + String getNodeId() { + return getOwnPartition().getNodeId(); + } + + RedisURI getRedisURI() { + + if (partitions.isEmpty()) { + return redisURI; + } + + return getOwnPartition().getUri(); + } + + RedisClusterNode getOwnPartition() { + for (RedisClusterNode partition : partitions) { + if (partition.is(RedisClusterNode.NodeFlag.MYSELF)) { + return partition; + } + } + + throw new IllegalStateException("Cannot determine own partition"); + } + + String getClientList() { + return clientList; + } + + String getClusterNodes() { + return clusterNodes; + } + + void setPartitions(Partitions partitions) { + this.partitions = partitions; + } +} diff --git a/src/main/java/io/lettuce/core/cluster/topology/NodeTopologyViews.java b/src/main/java/io/lettuce/core/cluster/topology/NodeTopologyViews.java new file mode 100644 index 0000000000..b95840f705 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/topology/NodeTopologyViews.java @@ -0,0 +1,79 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.topology; + +import java.util.*; + +import io.lettuce.core.RedisURI; +import io.lettuce.core.cluster.models.partitions.Partitions; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; + +/** + * @author Mark Paluch + */ +class NodeTopologyViews { + + private List views; + + public NodeTopologyViews(List views) { + this.views = views; + } + + /** + * Return cluster node URI's using the topology query sources and partitions. 
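The connected-client count above is parsed from the raw INFO CLIENTS reply by taking the first number found on any line. A standalone sketch of the same parse; the sample reply is illustrative only:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of the INFO CLIENTS parsing: the first number found in any line is used as the count.
class ClientCountSketch {

    private static final Pattern NUMBER = Pattern.compile("(\\d+)");

    static int getClients(String rawClientsOutput) {
        for (String row : rawClientsOutput.trim().split("\\n")) {
            Matcher matcher = NUMBER.matcher(row);
            if (matcher.find()) {
                return Integer.parseInt(matcher.group(1));
            }
        }
        return 0;
    }

    public static void main(String[] args) {
        String info = "# Clients\nconnected_clients:42\nblocked_clients:0\n"; // illustrative sample
        System.out.println(getClients(info)); // prints 42
    }
}
```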
+ * + * @return + */ + public Set getClusterNodes() { + + Set result = new HashSet<>(); + + Map knownUris = new HashMap<>(); + for (NodeTopologyView view : views) { + knownUris.put(view.getNodeId(), view.getRedisURI()); + } + + for (NodeTopologyView view : views) { + for (RedisClusterNode redisClusterNode : view.getPartitions()) { + if (knownUris.containsKey(redisClusterNode.getNodeId())) { + result.add(knownUris.get(redisClusterNode.getNodeId())); + } else { + result.add(redisClusterNode.getUri()); + } + } + } + + return result; + } + + /** + * @return {@literal true} if no views are present. + */ + public boolean isEmpty() { + return views.isEmpty(); + } + + public Map toMap() { + + Map nodeSpecificViews = new TreeMap<>(TopologyComparators.RedisURIComparator.INSTANCE); + + for (NodeTopologyView view : views) { + nodeSpecificViews.put(view.getRedisURI(), view.getPartitions()); + } + + return nodeSpecificViews; + } +} diff --git a/src/main/java/io/lettuce/core/cluster/topology/RedisClusterNodeSnapshot.java b/src/main/java/io/lettuce/core/cluster/topology/RedisClusterNodeSnapshot.java new file mode 100644 index 0000000000..6cf643477e --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/topology/RedisClusterNodeSnapshot.java @@ -0,0 +1,51 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.topology; + +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; + +/** + * @author Mark Paluch + */ +@SuppressWarnings("serial") +class RedisClusterNodeSnapshot extends RedisClusterNode { + + private Long latencyNs; + private Integer connectedClients; + + public RedisClusterNodeSnapshot() { + } + + public RedisClusterNodeSnapshot(RedisClusterNode redisClusterNode) { + super(redisClusterNode); + } + + Long getLatencyNs() { + return latencyNs; + } + + void setLatencyNs(Long latencyNs) { + this.latencyNs = latencyNs; + } + + Integer getConnectedClients() { + return connectedClients; + } + + void setConnectedClients(Integer connectedClients) { + this.connectedClients = connectedClients; + } +} diff --git a/src/main/java/io/lettuce/core/cluster/topology/Requests.java b/src/main/java/io/lettuce/core/cluster/topology/Requests.java new file mode 100644 index 0000000000..7c860f9c07 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/topology/Requests.java @@ -0,0 +1,74 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.topology; + +import java.util.Map; +import java.util.Set; +import java.util.TreeMap; +import java.util.concurrent.CompletableFuture; + +import io.lettuce.core.RedisURI; + +/** + * Encapsulates asynchronously executed commands to multiple {@link RedisURI nodes}. + * + * @author Mark Paluch + */ +class Requests { + + private final Map> rawViews; + + protected Requests() { + rawViews = new TreeMap<>(TopologyComparators.RedisURIComparator.INSTANCE); + } + + private Requests(Map> rawViews) { + this.rawViews = rawViews; + } + + protected void addRequest(RedisURI redisURI, TimedAsyncCommand command) { + rawViews.put(redisURI, command); + } + + /** + * Returns a marker future that completes when all of the futures in this {@link Requests} complete. The marker never fails + * exceptionally but signals completion only. + * + * @return + */ + public CompletableFuture allCompleted() { + return CompletableFuture.allOf(rawViews.values().stream().map(it -> it.exceptionally(throwable -> "ignore")) + .toArray(CompletableFuture[]::new)); + } + + protected Set nodes() { + return rawViews.keySet(); + } + + protected TimedAsyncCommand getRequest(RedisURI redisURI) { + return rawViews.get(redisURI); + } + + protected Requests mergeWith(Requests requests) { + + Map> result = new TreeMap<>( + TopologyComparators.RedisURIComparator.INSTANCE); + result.putAll(this.rawViews); + result.putAll(requests.rawViews); + + return new Requests(result); + } +} diff --git a/src/main/java/io/lettuce/core/cluster/topology/TimedAsyncCommand.java b/src/main/java/io/lettuce/core/cluster/topology/TimedAsyncCommand.java new file mode 100644 index 0000000000..82f3ccc2f3 --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/topology/TimedAsyncCommand.java @@ -0,0 +1,60 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.topology; + +import io.lettuce.core.protocol.AsyncCommand; +import io.lettuce.core.protocol.RedisCommand; +import io.netty.buffer.ByteBuf; + +/** + * Timed command that records the time at which the command was encoded and completed. 
+ * + * @param Key type + * @param Value type + * @param Result type + * @author Mark Paluch + */ +class TimedAsyncCommand extends AsyncCommand { + + long encodedAtNs = -1; + long completedAtNs = -1; + + public TimedAsyncCommand(RedisCommand command) { + super(command); + } + + @Override + public void encode(ByteBuf buf) { + completedAtNs = -1; + encodedAtNs = -1; + + super.encode(buf); + encodedAtNs = System.nanoTime(); + } + + @Override + public void complete() { + completedAtNs = System.nanoTime(); + super.complete(); + } + + public long duration() { + if (completedAtNs == -1 || encodedAtNs == -1) { + return -1; + } + return completedAtNs - encodedAtNs; + } +} diff --git a/src/main/java/io/lettuce/core/cluster/topology/TopologyComparators.java b/src/main/java/io/lettuce/core/cluster/topology/TopologyComparators.java new file mode 100644 index 0000000000..f11e8a710b --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/topology/TopologyComparators.java @@ -0,0 +1,336 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.topology; + +import java.util.Collections; +import java.util.Comparator; +import java.util.List; +import java.util.stream.Collectors; + +import io.lettuce.core.RedisURI; +import io.lettuce.core.cluster.models.partitions.Partitions; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.internal.LettuceLists; + +/** + * Comparators for {@link RedisClusterNode} and {@link RedisURI}. + * + * @author Mark Paluch + * @author Alessandro Simi + */ +public class TopologyComparators { + + /** + * Sort partitions by a {@code fixedOrder} and by {@link RedisURI}. Nodes are sorted as provided in {@code fixedOrder}. + * {@link RedisURI RedisURIs}s not contained in {@code fixedOrder} are ordered after the fixed sorting and sorted wihin the + * block by comparing {@link RedisURI}. 
+ * + * @param clusterNodes the sorting input + * @param fixedOrder the fixed order part + * @return List containing {@link RedisClusterNode}s ordered by {@code fixedOrder} and {@link RedisURI} + * @see #sortByUri(Iterable) + */ + public static List predefinedSort(Iterable clusterNodes, + Iterable fixedOrder) { + + LettuceAssert.notNull(clusterNodes, "Cluster nodes must not be null"); + LettuceAssert.notNull(fixedOrder, "Fixed order must not be null"); + + List fixedOrderList = LettuceLists.newList(fixedOrder); + List withOrderSpecification = LettuceLists.newList(clusterNodes)// + .stream()// + .filter(redisClusterNode -> fixedOrderList.contains(redisClusterNode.getUri()))// + .collect(Collectors.toList()); + + List withoutSpecification = LettuceLists.newList(clusterNodes)// + .stream()// + .filter(redisClusterNode -> !fixedOrderList.contains(redisClusterNode.getUri()))// + .collect(Collectors.toList()); + + withOrderSpecification.sort(new PredefinedRedisClusterNodeComparator(fixedOrderList)); + withoutSpecification.sort((o1, o2) -> RedisURIComparator.INSTANCE.compare(o1.getUri(), o2.getUri())); + + withOrderSpecification.addAll(withoutSpecification); + + return withOrderSpecification; + } + + /** + * Sort partitions by RedisURI. + * + * @param clusterNodes + * @return List containing {@link RedisClusterNode}s ordered by {@link RedisURI} + */ + public static List sortByUri(Iterable clusterNodes) { + + LettuceAssert.notNull(clusterNodes, "Cluster nodes must not be null"); + + List ordered = LettuceLists.newList(clusterNodes); + ordered.sort((o1, o2) -> RedisURIComparator.INSTANCE.compare(o1.getUri(), o2.getUri())); + return ordered; + } + + /** + * Sort partitions by client count. + * + * @param clusterNodes + * @return List containing {@link RedisClusterNode}s ordered by client count + */ + public static List sortByClientCount(Iterable clusterNodes) { + + LettuceAssert.notNull(clusterNodes, "Cluster nodes must not be null"); + + List ordered = LettuceLists.newList(clusterNodes); + ordered.sort(ClientCountComparator.INSTANCE); + return ordered; + } + + /** + * Sort partitions by latency. + * + * @param clusterNodes + * @return List containing {@link RedisClusterNode}s ordered by latency + */ + public static List sortByLatency(Iterable clusterNodes) { + + List ordered = LettuceLists.newList(clusterNodes); + ordered.sort(LatencyComparator.INSTANCE); + return ordered; + } + + /** + * Check if properties changed which are essential for cluster operations. + * + * @param o1 the first object to be compared. + * @param o2 the second object to be compared. + * @return {@literal true} if {@code MASTER} or {@code SLAVE} flags changed or the responsible slots changed. + */ + public static boolean isChanged(Partitions o1, Partitions o2) { + + if (o1.size() != o2.size()) { + return true; + } + + for (RedisClusterNode base : o2) { + if (!essentiallyEqualsTo(base, o1.getPartitionByNodeId(base.getNodeId()))) { + return true; + } + } + + return false; + } + + /** + * Check for {@code MASTER} or {@code SLAVE} flags and whether the responsible slots changed. + * + * @param o1 the first object to be compared. + * @param o2 the second object to be compared. + * @return {@literal true} if {@code MASTER} or {@code SLAVE} flags changed or the responsible slots changed. 
+ */ + static boolean essentiallyEqualsTo(RedisClusterNode o1, RedisClusterNode o2) { + + if (o2 == null) { + return false; + } + + if (!sameFlags(o1, o2, RedisClusterNode.NodeFlag.MASTER)) { + return false; + } + + if (!sameFlags(o1, o2, RedisClusterNode.NodeFlag.SLAVE)) { + return false; + } + + if (!o1.hasSameSlotsAs(o2)) { + return false; + } + + return true; + } + + private static boolean sameFlags(RedisClusterNode base, RedisClusterNode other, RedisClusterNode.NodeFlag flag) { + + if (base.getFlags().contains(flag)) { + return other.getFlags().contains(flag); + } + + return !other.getFlags().contains(flag); + } + + static class PredefinedRedisClusterNodeComparator implements Comparator { + private final List fixedOrder; + + public PredefinedRedisClusterNodeComparator(List fixedOrder) { + this.fixedOrder = fixedOrder; + } + + @Override + public int compare(RedisClusterNode o1, RedisClusterNode o2) { + + int index1 = fixedOrder.indexOf(o1.getUri()); + int index2 = fixedOrder.indexOf(o2.getUri()); + + return Integer.compare(index1, index2); + } + } + + /** + * Compare {@link RedisClusterNodeSnapshot} based on their latency. Lowest comes first. Objects of type + * {@link RedisClusterNode} cannot be compared and yield to a result of {@literal 0}. + */ + enum LatencyComparator implements Comparator { + + INSTANCE; + + @Override + public int compare(RedisClusterNode o1, RedisClusterNode o2) { + if (o1 instanceof RedisClusterNodeSnapshot && o2 instanceof RedisClusterNodeSnapshot) { + + RedisClusterNodeSnapshot w1 = (RedisClusterNodeSnapshot) o1; + RedisClusterNodeSnapshot w2 = (RedisClusterNodeSnapshot) o2; + + if (w1.getLatencyNs() != null && w2.getLatencyNs() != null) { + return w1.getLatencyNs().compareTo(w2.getLatencyNs()); + } + + if (w1.getLatencyNs() != null && w2.getLatencyNs() == null) { + return -1; + } + + if (w1.getLatencyNs() == null && w2.getLatencyNs() != null) { + return 1; + } + } + + return 0; + } + } + + /** + * Compare {@link RedisClusterNodeSnapshot} based on their client count. Lowest comes first. Objects of type + * {@link RedisClusterNode} cannot be compared and yield to a result of {@literal 0}. + */ + enum ClientCountComparator implements Comparator { + + INSTANCE; + + @Override + public int compare(RedisClusterNode o1, RedisClusterNode o2) { + if (o1 instanceof RedisClusterNodeSnapshot && o2 instanceof RedisClusterNodeSnapshot) { + + RedisClusterNodeSnapshot w1 = (RedisClusterNodeSnapshot) o1; + RedisClusterNodeSnapshot w2 = (RedisClusterNodeSnapshot) o2; + + if (w1.getConnectedClients() != null && w2.getConnectedClients() != null) { + return w1.getConnectedClients().compareTo(w2.getConnectedClients()); + } + + if (w1.getConnectedClients() == null && w2.getConnectedClients() != null) { + return 1; + } + + if (w1.getConnectedClients() != null && w2.getConnectedClients() == null) { + return -1; + } + } + + return 0; + } + } + + /** + * Compare {@link RedisURI} based on their host and port representation. + */ + enum RedisURIComparator implements Comparator { + + INSTANCE; + + @Override + public int compare(RedisURI o1, RedisURI o2) { + String h1 = ""; + String h2 = ""; + + if (o1 != null) { + h1 = o1.getHost() + ":" + o1.getPort(); + } + + if (o2 != null) { + h2 = o2.getHost() + ":" + o2.getPort(); + } + + return h1.compareToIgnoreCase(h2); + } + } + + /** + * Sort action for topology. Defaults to sort by latency. Can be set via {@code io.lettuce.core.topology.sort} system + * property. + * + * @since 4.5 + */ + enum SortAction { + + /** + * Sort by latency. 
+ */ + BY_LATENCY { + @Override + void sort(Partitions partitions) { + partitions.getPartitions().sort(TopologyComparators.LatencyComparator.INSTANCE); + } + }, + + /** + * Do not sort. + */ + NONE { + @Override + void sort(Partitions partitions) { + + } + }, + + /** + * Randomize nodes. + */ + RANDOMIZE { + @Override + void sort(Partitions partitions) { + Collections.shuffle(partitions.getPartitions()); + } + }; + + abstract void sort(Partitions partitions); + + /** + * @return determine {@link SortAction} and fall back to {@link SortAction#BY_LATENCY} if sort action cannot be + * resolved. + */ + static SortAction getSortAction() { + + String sortAction = System.getProperty("io.lettuce.core.topology.sort", BY_LATENCY.name()); + + for (SortAction action : values()) { + if (sortAction.equalsIgnoreCase(action.name())) { + return action; + } + } + + return BY_LATENCY; + } + } +} diff --git a/src/main/java/io/lettuce/core/cluster/topology/package-info.java b/src/main/java/io/lettuce/core/cluster/topology/package-info.java new file mode 100644 index 0000000000..c83ce64b7e --- /dev/null +++ b/src/main/java/io/lettuce/core/cluster/topology/package-info.java @@ -0,0 +1,5 @@ +/** + * Support for cluster topology refresh. + */ +package io.lettuce.core.cluster.topology; + diff --git a/src/main/java/io/lettuce/core/codec/Base16.java b/src/main/java/io/lettuce/core/codec/Base16.java new file mode 100644 index 0000000000..ac315c1445 --- /dev/null +++ b/src/main/java/io/lettuce/core/codec/Base16.java @@ -0,0 +1,63 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.codec; + +/** + * High-performance base16 (AKA hex) codec. + * + * @author Will Glozer + */ +public class Base16 { + private static final char[] upper = "0123456789ABCDEF".toCharArray(); + private static final char[] lower = "0123456789abcdef".toCharArray(); + private static final byte[] decode = new byte[128]; + + static { + for (int i = 0; i < 10; i++) { + decode['0' + i] = (byte) i; + decode['A' + i] = (byte) (10 + i); + decode['a' + i] = (byte) (10 + i); + } + } + + /** + * Utility constructor. + */ + private Base16() { + + } + + /** + * Encode bytes to base16 chars. + * + * @param src Bytes to encode. + * @param upper Use upper or lowercase chars. + * + * @return Encoded chars. + */ + public static char[] encode(byte[] src, boolean upper) { + char[] table = upper ? Base16.upper : Base16.lower; + char[] dst = new char[src.length * 2]; + + for (int si = 0, di = 0; si < src.length; si++) { + byte b = src[si]; + dst[di++] = table[(b & 0xf0) >>> 4]; + dst[di++] = table[(b & 0x0f)]; + } + + return dst; + } +} diff --git a/src/main/java/io/lettuce/core/codec/ByteArrayCodec.java b/src/main/java/io/lettuce/core/codec/ByteArrayCodec.java new file mode 100644 index 0000000000..24d90f6352 --- /dev/null +++ b/src/main/java/io/lettuce/core/codec/ByteArrayCodec.java @@ -0,0 +1,93 @@ +/* + * Copyright 2011-2020 the original author or authors. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.codec; + +import java.nio.ByteBuffer; + +import io.netty.buffer.ByteBuf; + +/** + * A {@link RedisCodec} that uses plain byte arrays without further transformations. + * + * @author Mark Paluch + * @since 3.3 + */ +public class ByteArrayCodec implements RedisCodec, ToByteBufEncoder { + + public static final ByteArrayCodec INSTANCE = new ByteArrayCodec(); + private static final byte[] EMPTY = new byte[0]; + + @Override + public void encodeKey(byte[] key, ByteBuf target) { + + if (key != null) { + target.writeBytes(key); + } + } + + @Override + public void encodeValue(byte[] value, ByteBuf target) { + encodeKey(value, target); + } + + @Override + public int estimateSize(Object keyOrValue) { + + if (keyOrValue == null) { + return 0; + } + + return ((byte[]) keyOrValue).length; + } + + @Override + public byte[] decodeKey(ByteBuffer bytes) { + return getBytes(bytes); + } + + @Override + public byte[] decodeValue(ByteBuffer bytes) { + return getBytes(bytes); + } + + @Override + public ByteBuffer encodeKey(byte[] key) { + + if (key == null) { + return ByteBuffer.wrap(EMPTY); + } + + return ByteBuffer.wrap(key); + } + + @Override + public ByteBuffer encodeValue(byte[] value) { + return encodeKey(value); + } + + private static byte[] getBytes(ByteBuffer buffer) { + + int remaining = buffer.remaining(); + + if (remaining == 0) { + return EMPTY; + } + + byte[] b = new byte[remaining]; + buffer.get(b); + return b; + } +} diff --git a/src/main/java/io/lettuce/core/codec/ByteBufferInputStream.java b/src/main/java/io/lettuce/core/codec/ByteBufferInputStream.java new file mode 100644 index 0000000000..b61d10b199 --- /dev/null +++ b/src/main/java/io/lettuce/core/codec/ByteBufferInputStream.java @@ -0,0 +1,42 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
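Usage note (illustrative, not part of this changeset): a minimal sketch of connecting with the ByteArrayCodec above for binary keys and values; the URI, key, and class name are placeholders.

import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.codec.ByteArrayCodec;

public class ByteArrayCodecExample {

    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379"); // placeholder URI

        // ByteArrayCodec is stateless, so the shared INSTANCE can be reused across connections.
        StatefulRedisConnection<byte[], byte[]> connection = client.connect(ByteArrayCodec.INSTANCE);

        connection.sync().set("binary-key".getBytes(), new byte[] { 1, 2, 3 });

        connection.close();
        client.shutdown();
    }
}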
+ */ +package io.lettuce.core.codec; + +import java.io.IOException; +import java.io.InputStream; +import java.nio.ByteBuffer; + +class ByteBufferInputStream extends InputStream { + + private final ByteBuffer buffer; + + public ByteBufferInputStream(ByteBuffer b) { + this.buffer = b; + } + + @Override + public int available() throws IOException { + return buffer.remaining(); + } + + @Override + public int read() throws IOException { + if (buffer.remaining() > 0) { + return (buffer.get() & 0xFF); + } + return -1; + } +} diff --git a/src/main/java/io/lettuce/core/codec/CRC16.java b/src/main/java/io/lettuce/core/codec/CRC16.java new file mode 100644 index 0000000000..f927e1580a --- /dev/null +++ b/src/main/java/io/lettuce/core/codec/CRC16.java @@ -0,0 +1,111 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.codec; + +import java.nio.ByteBuffer; + +/** + * @author Mark Paluch + *

    + *
+ * <ul>
+ * <li>Name: XMODEM (also known as ZMODEM or CRC-16/ACORN)</li>
+ * <li>Width: 16 bit</li>
+ * <li>Poly: 1021 (that is, x^16 + x^12 + x^5 + 1)</li>
+ * <li>Initialization: 0000</li>
+ * <li>Reflect Input byte: False</li>
+ * <li>Reflect Output CRC: False</li>
+ * <li>Xor constant to output CRC: 0000</li>
+ * </ul>
+ * @since 3.0 + */ +public class CRC16 { + + private static final int[] LOOKUP_TABLE = { 0x0000, 0x1021, 0x2042, 0x3063, 0x4084, 0x50A5, 0x60C6, 0x70E7, 0x8108, 0x9129, + 0xA14A, 0xB16B, 0xC18C, 0xD1AD, 0xE1CE, 0xF1EF, 0x1231, 0x0210, 0x3273, 0x2252, 0x52B5, 0x4294, 0x72F7, 0x62D6, + 0x9339, 0x8318, 0xB37B, 0xA35A, 0xD3BD, 0xC39C, 0xF3FF, 0xE3DE, 0x2462, 0x3443, 0x0420, 0x1401, 0x64E6, 0x74C7, + 0x44A4, 0x5485, 0xA56A, 0xB54B, 0x8528, 0x9509, 0xE5EE, 0xF5CF, 0xC5AC, 0xD58D, 0x3653, 0x2672, 0x1611, 0x0630, + 0x76D7, 0x66F6, 0x5695, 0x46B4, 0xB75B, 0xA77A, 0x9719, 0x8738, 0xF7DF, 0xE7FE, 0xD79D, 0xC7BC, 0x48C4, 0x58E5, + 0x6886, 0x78A7, 0x0840, 0x1861, 0x2802, 0x3823, 0xC9CC, 0xD9ED, 0xE98E, 0xF9AF, 0x8948, 0x9969, 0xA90A, 0xB92B, + 0x5AF5, 0x4AD4, 0x7AB7, 0x6A96, 0x1A71, 0x0A50, 0x3A33, 0x2A12, 0xDBFD, 0xCBDC, 0xFBBF, 0xEB9E, 0x9B79, 0x8B58, + 0xBB3B, 0xAB1A, 0x6CA6, 0x7C87, 0x4CE4, 0x5CC5, 0x2C22, 0x3C03, 0x0C60, 0x1C41, 0xEDAE, 0xFD8F, 0xCDEC, 0xDDCD, + 0xAD2A, 0xBD0B, 0x8D68, 0x9D49, 0x7E97, 0x6EB6, 0x5ED5, 0x4EF4, 0x3E13, 0x2E32, 0x1E51, 0x0E70, 0xFF9F, 0xEFBE, + 0xDFDD, 0xCFFC, 0xBF1B, 0xAF3A, 0x9F59, 0x8F78, 0x9188, 0x81A9, 0xB1CA, 0xA1EB, 0xD10C, 0xC12D, 0xF14E, 0xE16F, + 0x1080, 0x00A1, 0x30C2, 0x20E3, 0x5004, 0x4025, 0x7046, 0x6067, 0x83B9, 0x9398, 0xA3FB, 0xB3DA, 0xC33D, 0xD31C, + 0xE37F, 0xF35E, 0x02B1, 0x1290, 0x22F3, 0x32D2, 0x4235, 0x5214, 0x6277, 0x7256, 0xB5EA, 0xA5CB, 0x95A8, 0x8589, + 0xF56E, 0xE54F, 0xD52C, 0xC50D, 0x34E2, 0x24C3, 0x14A0, 0x0481, 0x7466, 0x6447, 0x5424, 0x4405, 0xA7DB, 0xB7FA, + 0x8799, 0x97B8, 0xE75F, 0xF77E, 0xC71D, 0xD73C, 0x26D3, 0x36F2, 0x0691, 0x16B0, 0x6657, 0x7676, 0x4615, 0x5634, + 0xD94C, 0xC96D, 0xF90E, 0xE92F, 0x99C8, 0x89E9, 0xB98A, 0xA9AB, 0x5844, 0x4865, 0x7806, 0x6827, 0x18C0, 0x08E1, + 0x3882, 0x28A3, 0xCB7D, 0xDB5C, 0xEB3F, 0xFB1E, 0x8BF9, 0x9BD8, 0xABBB, 0xBB9A, 0x4A75, 0x5A54, 0x6A37, 0x7A16, + 0x0AF1, 0x1AD0, 0x2AB3, 0x3A92, 0xFD2E, 0xED0F, 0xDD6C, 0xCD4D, 0xBDAA, 0xAD8B, 0x9DE8, 0x8DC9, 0x7C26, 0x6C07, + 0x5C64, 0x4C45, 0x3CA2, 0x2C83, 0x1CE0, 0x0CC1, 0xEF1F, 0xFF3E, 0xCF5D, 0xDF7C, 0xAF9B, 0xBFBA, 0x8FD9, 0x9FF8, + 0x6E17, 0x7E36, 0x4E55, 0x5E74, 0x2E93, 0x3EB2, 0x0ED1, 0x1EF0 }; + + /** + * Utility constructor. + */ + private CRC16() { + + } + + /** + * Create a CRC16 checksum from the bytes. + * + * @param bytes input bytes + * @return CRC16 as integer value + */ + public static int crc16(byte[] bytes) { + return crc16(bytes, 0, bytes.length); + } + + /** + * Create a CRC16 checksum from the bytes. + * + * @param bytes input bytes + * @return CRC16 as integer value + */ + public static int crc16(byte[] bytes, int off, int len) { + + int crc = 0x0000; + int end = off + len; + + for (int i = off; i < end; i++) { + crc = doCrc(bytes[i], crc); + } + + return crc & 0xFFFF; + } + + /** + * Create a CRC16 checksum from the bytes. + * + * @param bytes input bytes + * @return CRC16 as integer value + * @since 4.4 + */ + public static int crc16(ByteBuffer bytes) { + + int crc = 0x0000; + + while (bytes.hasRemaining()) { + crc = doCrc(bytes.get(), crc); + } + + return crc & 0xFFFF; + } + + private static int doCrc(byte b, int crc) { + return ((crc << 8) ^ LOOKUP_TABLE[((crc >>> 8) ^ (b & 0xFF)) & 0xFF]); + } +} diff --git a/src/main/java/io/lettuce/core/codec/CipherCodec.java b/src/main/java/io/lettuce/core/codec/CipherCodec.java new file mode 100644 index 0000000000..089e68c256 --- /dev/null +++ b/src/main/java/io/lettuce/core/codec/CipherCodec.java @@ -0,0 +1,376 @@ +/* + * Copyright 2018-2020 the original author or authors. 
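Usage note (illustrative, not part of this changeset): a small sketch of the CRC16 class above. The XMODEM check value of the ASCII string "123456789" is 0x31C3, and Redis Cluster derives hash slots from this checksum modulo 16384 (hash tags are ignored in this sketch).

import java.nio.charset.StandardCharsets;

import io.lettuce.core.codec.CRC16;

public class Crc16Example {

    public static void main(String[] args) {
        // Well-known XMODEM check value: prints "crc16=31C3".
        byte[] check = "123456789".getBytes(StandardCharsets.US_ASCII);
        System.out.printf("crc16=%04X%n", CRC16.crc16(check));

        // Redis Cluster assigns a key to one of 16384 slots via CRC16 modulo 16384.
        int slot = CRC16.crc16("user:1000".getBytes(StandardCharsets.UTF_8)) % 16384;
        System.out.println("slot=" + slot);
    }
}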
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.codec; + +import java.nio.ByteBuffer; +import java.nio.charset.Charset; +import java.nio.charset.StandardCharsets; +import java.security.GeneralSecurityException; + +import javax.crypto.Cipher; + +import io.lettuce.core.internal.LettuceAssert; +import io.netty.buffer.ByteBuf; + +/** + * A crypto {@link RedisCodec} that that allows transparent encryption/decryption of values. This codec uses {@link Cipher} + * instances provided by {@link CipherSupplier} to process encryption and decryption. + *

+ * This codec supports multiple encryption keys by encoding the key name and the used key version into the value that is
+ * stored in Redis. The message format for encryption is:
+ *
+ *     $<key name>+<key version>$<cipher text>
+ *
+ * Each value is prefixed with a key descriptor that is enclosed in dollar ({@code $}) signs and uses the plus sign
+ * ({@code +}) to separate the key name from the key version. Decryption decodes the key descriptor and requests a
+ * {@link Cipher} from {@link CipherSupplier} to decrypt values with the appropriate key/{@link Cipher}.
+ *

+ * This {@link RedisCodec codec} does not provide re-wrapping or key rotation features. + * + * @author Mark Paluch + * @since 5.2 + * @see CipherSupplier + * @see KeyDescriptor + */ +public abstract class CipherCodec { + + private CipherCodec() { + } + + /** + * A {@link RedisCodec} that compresses values from a delegating {@link RedisCodec}. + * + * @param delegate codec used for key-value encoding/decoding, must not be {@literal null}. + * @param encrypt the {@link CipherSupplier} of encryption {@link Cipher} to use. + * @param decrypt the {@link CipherSupplier} of decryption {@link Cipher} to use. + * @param Key type. + * @param Value type. + * @return Cipher codec. + */ + @SuppressWarnings({ "rawtypes", "unchecked" }) + public static RedisCodec forValues(RedisCodec delegate, CipherSupplier encrypt, CipherSupplier decrypt) { + LettuceAssert.notNull(delegate, "RedisCodec must not be null"); + LettuceAssert.notNull(encrypt, "Encryption Supplier must not be null"); + LettuceAssert.notNull(decrypt, "Decryption Supplier must not be null"); + return (RedisCodec) new CipherCodecWrapper((RedisCodec) delegate, encrypt, decrypt); + } + + @SuppressWarnings("unchecked") + private static class CipherCodecWrapper implements RedisCodec, ToByteBufEncoder { + + private RedisCodec delegate; + private CipherSupplier encrypt; + private CipherSupplier decrypt; + + CipherCodecWrapper(RedisCodec delegate, CipherSupplier encrypt, CipherSupplier decrypt) { + + this.delegate = delegate; + this.encrypt = encrypt; + this.decrypt = decrypt; + } + + @Override + public Object decodeKey(ByteBuffer bytes) { + return delegate.decodeKey(bytes); + } + + @Override + public Object decodeValue(ByteBuffer bytes) { + + KeyDescriptor keyDescriptor = KeyDescriptor.from(bytes); + + try { + return delegate.decodeValue(doWithCipher(this.decrypt.get(keyDescriptor), bytes)); + } catch (GeneralSecurityException e) { + throw new IllegalStateException(e); + } + } + + @Override + public void encodeKey(Object key, ByteBuf target) { + + if (delegate instanceof ToByteBufEncoder) { + ((ToByteBufEncoder) delegate).encodeKey(key, target); + return; + } + + target.writeBytes(delegate.encodeKey(key)); + } + + @Override + public void encodeValue(Object value, ByteBuf target) { + + ByteBuf serialized; + if (delegate instanceof ToByteBufEncoder) { + serialized = target.alloc().buffer(estimateSize(value)); + ((ToByteBufEncoder) delegate).encodeKey(value, serialized); + } else { + ByteBuffer byteBuffer = delegate.encodeValue(value); + serialized = target.alloc().buffer(byteBuffer.remaining()); + serialized.writeBytes(byteBuffer); + } + + try { + KeyDescriptor keyDescriptor = this.encrypt.encryptionKey(); + Cipher cipher = this.encrypt.get(keyDescriptor); + + keyDescriptor.writeTo(target); + doWithCipher(cipher, serialized, target); + } catch (GeneralSecurityException e) { + throw new IllegalStateException(e); + } finally { + serialized.release(); + } + } + + @Override + public int estimateSize(Object keyOrValue) { + + if (delegate instanceof ToByteBufEncoder) { + return ((ToByteBufEncoder) delegate).estimateSize(keyOrValue); + } + + return /* blocksize */16 + /* avg key descriptor size */8; + } + + @Override + public ByteBuffer encodeKey(Object key) { + return delegate.encodeKey(key); + } + + @Override + public ByteBuffer encodeValue(Object value) { + try { + + ByteBuffer serialized = delegate.encodeValue(value); + KeyDescriptor keyDescriptor = this.encrypt.encryptionKey(); + Cipher cipher = this.encrypt.get(keyDescriptor); + + ByteBuffer 
intermediate = ByteBuffer + .allocate(cipher.getOutputSize(serialized.remaining() + 3 + keyDescriptor.name.length + 10)); + + keyDescriptor.writeTo(intermediate); + intermediate.put(doWithCipher(cipher, serialized)); + intermediate.flip(); + + return intermediate; + } catch (GeneralSecurityException e) { + throw new IllegalStateException(e); + } + } + + private void doWithCipher(Cipher cipher, ByteBuf serialized, ByteBuf target) throws GeneralSecurityException { + + ByteBuffer intermediate = ByteBuffer.allocate(cipher.getOutputSize(serialized.readableBytes())); + + ByteBuffer buffer = serialized.nioBuffer(); + + cipher.update(buffer, intermediate); + cipher.doFinal(buffer, intermediate); + + intermediate.flip(); + target.writeBytes(intermediate); + + serialized.readerIndex(serialized.writerIndex()); + } + + private ByteBuffer doWithCipher(Cipher cipher, ByteBuffer source) throws GeneralSecurityException { + + byte[] encrypted = new byte[source.remaining()]; + source.get(encrypted); + + byte[] update = cipher.update(encrypted); + byte[] finalBytes = cipher.doFinal(); + + ByteBuffer buffer = ByteBuffer.allocate(update.length + finalBytes.length); + buffer.put(update).put(finalBytes).flip(); + + return buffer; + } + } + + /** + * Represents a supplier of {@link Cipher}. Requires to return a new {@link Cipher} instance as ciphers are one-time use + * only. + */ + @FunctionalInterface + public interface CipherSupplier { + + /** + * Creates a new {@link Cipher}. + * + * @return a new {@link Cipher}. + * @throws GeneralSecurityException + * @param keyDescriptor the key to use for the returned {@link Cipher}. + */ + Cipher get(KeyDescriptor keyDescriptor) throws GeneralSecurityException; + + /** + * Returns the latest {@link KeyDescriptor} to use for encryption. + * + * @return the {@link KeyDescriptor} to use for encryption. + */ + default KeyDescriptor encryptionKey() { + return KeyDescriptor.unnamed(); + } + } + + /** + * Descriptor to determine which crypto key to use. Allows versioning and usage of named keys. Key names must not contain + * dollar {@code $} or plus {@code +} characters as these characters are used within the message format to encode key name + * and key version. + */ + public static class KeyDescriptor { + + private static final KeyDescriptor UNNAMED = new KeyDescriptor("".getBytes(StandardCharsets.US_ASCII), 0); + + private final byte[] name; + private final int version; + + private KeyDescriptor(byte[] name, int version) { + + for (byte b : name) { + if (b == '+' || b == '$') { + throw new IllegalArgumentException( + String.format("Key name %s must not contain plus (+) or dollar ($) characters", new String(name))); + } + } + this.name = name; + this.version = version; + } + + /** + * Returns the default {@link KeyDescriptor} that has no specified name. + * + * @return the default {@link KeyDescriptor}. + */ + public static KeyDescriptor unnamed() { + return UNNAMED; + } + + /** + * Create a named {@link KeyDescriptor} without version. Version defaults to zero. + * + * @param name the key name. Must not contain plus or dollar character. + * @return the {@link KeyDescriptor} for {@code name}. + */ + public static KeyDescriptor create(String name) { + return create(name, 0); + } + + /** + * Create a named and versioned {@link KeyDescriptor}. + * + * @param name the key name. Must not contain plus or dollar character. + * @param version the key version. + * @return the {@link KeyDescriptor} for {@code name}. 
+ */ + public static KeyDescriptor create(String name, int version) { + return create(name, version, Charset.defaultCharset()); + } + + /** + * Create a named and versioned {@link KeyDescriptor} using {@link Charset} to encode {@code name} to its binary + * representation. + * + * @param name the key name. Must not contain plus or dollar character. + * @param version the key version. + * @param charset must not be {@literal null}. + * @return the {@link KeyDescriptor} for {@code name}. + */ + public static KeyDescriptor create(String name, int version, Charset charset) { + + LettuceAssert.notNull(name, "Name must not be null"); + LettuceAssert.notNull(charset, "Charset must not be null"); + + return new KeyDescriptor(name.getBytes(charset), version); + } + + static KeyDescriptor from(ByteBuffer bytes) { + + int end = -1; + int version = -1; + + if (bytes.get() != '$') { + throw new IllegalArgumentException("Cannot extract KeyDescriptor. Malformed message header."); + } + + int startPosition = bytes.position(); + + for (int i = 0; i < bytes.remaining(); i++) { + if (bytes.get(bytes.position() + i) == '$') { + end = (bytes.position() - startPosition) + i; + break; + } + + if (bytes.get(bytes.position() + i) == '+') { + version = (bytes.position() - startPosition) + i; + } + } + + if (end == -1 || version == -1) { + throw new IllegalArgumentException("Cannot extract KeyDescriptor"); + } + + byte[] name = new byte[version]; + bytes.get(name); + bytes.get(); + + byte[] versionBytes = new byte[end - version - 1]; + bytes.get(versionBytes); + bytes.get(); // skip last char + + return new KeyDescriptor(name, Integer.parseInt(new String(versionBytes))); + } + + public int getVersion() { + return version; + } + + /** + * Returns the key {@code name} by decoding name bytes using the {@link Charset#defaultCharset() default charset}. + * + * @return the key name. + */ + public String getName() { + return getName(Charset.defaultCharset()); + } + + /** + * Returns the key {@code name} by decoding name bytes using the given {@link Charset}. + * + * @param charset the {@link Charset} to use to decode the key name, must not be {@literal null}. + * @return the key name. + */ + public String getName(Charset charset) { + + LettuceAssert.notNull(charset, "Charset must not be null"); + return new String(name, charset); + } + + void writeTo(ByteBuf target) { + target.writeByte('$').writeBytes(this.name).writeByte('+').writeBytes(Integer.toString(this.version).getBytes()) + .writeByte('$'); + } + + void writeTo(ByteBuffer target) { + target.put((byte) '$').put(this.name).put((byte) '+').put(Integer.toString(this.version).getBytes()) + .put((byte) '$'); + } + } +} diff --git a/src/main/java/io/lettuce/core/codec/ComposedRedisCodec.java b/src/main/java/io/lettuce/core/codec/ComposedRedisCodec.java new file mode 100644 index 0000000000..5a50ce5ab8 --- /dev/null +++ b/src/main/java/io/lettuce/core/codec/ComposedRedisCodec.java @@ -0,0 +1,59 @@ +/* + * Copyright 2019-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.codec; + +import java.nio.ByteBuffer; + +import io.lettuce.core.internal.LettuceAssert; + +/** + * A {@link ComposedRedisCodec} combines two {@link RedisCodec cdecs} to encode/decode key and value to the command output. + * + * @author Dimitris Mandalidis + * @since 5.2 + */ +class ComposedRedisCodec implements RedisCodec { + + private final RedisCodec keyCodec; + private final RedisCodec valueCodec; + + ComposedRedisCodec(RedisCodec keyCodec, RedisCodec valueCodec) { + LettuceAssert.notNull(keyCodec, "Key codec must not be null"); + LettuceAssert.notNull(valueCodec, "Value codec must not be null"); + this.keyCodec = keyCodec; + this.valueCodec = valueCodec; + } + + @Override + public K decodeKey(ByteBuffer bytes) { + return keyCodec.decodeKey(bytes); + } + + @Override + public V decodeValue(ByteBuffer bytes) { + return valueCodec.decodeValue(bytes); + } + + @Override + public ByteBuffer encodeKey(K key) { + return keyCodec.encodeKey(key); + } + + @Override + public ByteBuffer encodeValue(V value) { + return valueCodec.encodeValue(value); + } +} diff --git a/src/main/java/io/lettuce/core/codec/CompressionCodec.java b/src/main/java/io/lettuce/core/codec/CompressionCodec.java new file mode 100644 index 0000000000..2e752b2b69 --- /dev/null +++ b/src/main/java/io/lettuce/core/codec/CompressionCodec.java @@ -0,0 +1,187 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.codec; + +import java.io.ByteArrayOutputStream; +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import java.nio.ByteBuffer; +import java.util.zip.DeflaterOutputStream; +import java.util.zip.GZIPInputStream; +import java.util.zip.GZIPOutputStream; +import java.util.zip.InflaterInputStream; + +import io.lettuce.core.internal.LettuceAssert; + +/** + * A compressing/decompressing {@link RedisCodec} that wraps a typed {@link RedisCodec codec} and compresses values using GZIP + * or Deflate. See {@link io.lettuce.core.codec.CompressionCodec.CompressionType} for supported compression types. + * + * @author Mark Paluch + */ +public abstract class CompressionCodec { + + private CompressionCodec() { + } + + /** + * A {@link RedisCodec} that compresses values from a delegating {@link RedisCodec}. + * + * @param delegate codec used for key-value encoding/decoding, must not be {@literal null}. + * @param compressionType the compression type, must not be {@literal null}. + * @param Key type. + * @param Value type. + * @return Value-compressing codec. 
+ */ + @SuppressWarnings({ "rawtypes", "unchecked" }) + public static RedisCodec valueCompressor(RedisCodec delegate, CompressionType compressionType) { + LettuceAssert.notNull(delegate, "RedisCodec must not be null"); + LettuceAssert.notNull(compressionType, "CompressionType must not be null"); + return (RedisCodec) new CompressingValueCodecWrapper((RedisCodec) delegate, compressionType); + } + + private static class CompressingValueCodecWrapper implements RedisCodec { + + private RedisCodec delegate; + private CompressionType compressionType; + + public CompressingValueCodecWrapper(RedisCodec delegate, CompressionType compressionType) { + this.delegate = delegate; + this.compressionType = compressionType; + } + + @Override + public Object decodeKey(ByteBuffer bytes) { + return delegate.decodeKey(bytes); + } + + @Override + public Object decodeValue(ByteBuffer bytes) { + try { + return delegate.decodeValue(decompress(bytes)); + } catch (IOException e) { + throw new IllegalStateException(e); + } + } + + @Override + public ByteBuffer encodeKey(Object key) { + return delegate.encodeKey(key); + } + + @Override + public ByteBuffer encodeValue(Object value) { + try { + return compress(delegate.encodeValue(value)); + } catch (IOException e) { + throw new IllegalStateException(e); + } + } + + private ByteBuffer compress(ByteBuffer source) throws IOException { + if (source.remaining() == 0) { + return source; + } + + ByteArrayOutputStream outputStream = new ByteArrayOutputStream(source.remaining() / 2); + OutputStream compressor = null; + + try { + try (ByteBufferInputStream sourceStream = new ByteBufferInputStream(source)) { + if (compressionType == CompressionType.GZIP) { + compressor = new GZIPOutputStream(outputStream); + } + + if (compressionType == CompressionType.DEFLATE) { + compressor = new DeflaterOutputStream(outputStream); + } + copy(sourceStream, compressor); + } finally { + + if (compressor != null) { + compressor.close(); + } + } + + return ByteBuffer.wrap(outputStream.toByteArray()); + } finally { + outputStream.close(); + } + } + + private ByteBuffer decompress(ByteBuffer source) throws IOException { + if (source.remaining() == 0) { + return source; + } + + InputStream decompressor = null; + ByteArrayOutputStream outputStream = new ByteArrayOutputStream(source.remaining() * 2); + + try { + try (ByteBufferInputStream sourceStream = new ByteBufferInputStream(source);) { + + if (compressionType == CompressionType.GZIP) { + decompressor = new GZIPInputStream(sourceStream); + } + + if (compressionType == CompressionType.DEFLATE) { + decompressor = new InflaterInputStream(sourceStream); + } + + copy(decompressor, outputStream); + } finally { + if (decompressor != null) { + decompressor.close(); + } + } + + return ByteBuffer.wrap(outputStream.toByteArray()); + } finally { + outputStream.close(); + } + } + } + + /** + * Copies all bytes from the input stream to the output stream. Does not close or flush either stream. 
+ * + * @param from the input stream to read from + * @param to the output stream to write to + * @return the number of bytes copied + * @throws IOException if an I/O error occurs + */ + private static long copy(InputStream from, OutputStream to) throws IOException { + LettuceAssert.notNull(from, "From must not be null"); + LettuceAssert.notNull(to, "From must not be null"); + byte[] buf = new byte[4096]; + long total = 0; + while (true) { + int r = from.read(buf); + if (r == -1) { + break; + } + to.write(buf, 0, r); + total += r; + } + return total; + } + + public enum CompressionType { + GZIP, DEFLATE; + } + +} diff --git a/src/main/java/io/lettuce/core/codec/RedisCodec.java b/src/main/java/io/lettuce/core/codec/RedisCodec.java new file mode 100644 index 0000000000..8bef466de8 --- /dev/null +++ b/src/main/java/io/lettuce/core/codec/RedisCodec.java @@ -0,0 +1,84 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.codec; + +import java.nio.ByteBuffer; + +/** + * A {@link RedisCodec} encodes keys and values sent to Redis, and decodes keys and values in the command output. + * + * The methods are called by multiple threads and must be thread-safe. + * + * @param Key type. + * @param Value type. + * + * @author Will Glozer + * @author Mark Paluch + * @author Dimitris Mandalidis + */ +public interface RedisCodec { + + /** + * Returns a composite {@link RedisCodec} that uses {@code keyCodec} for keys and {@code valueCodec} for values. + * + * @param the type of the key + * @param the type of the value + * @param keyCodec the codec to encode/decode the keys. + * @param valueCodec the codec to encode/decode the values. + * @return a composite {@link RedisCodec}. + * @since 5.2 + */ + static RedisCodec of(RedisCodec keyCodec, RedisCodec valueCodec) { + return new ComposedRedisCodec<>(keyCodec, valueCodec); + } + + /** + * Decode the key output by redis. + * + * @param bytes Raw bytes of the key, must not be {@literal null}. + * + * @return The decoded key, may be {@literal null}. + */ + K decodeKey(ByteBuffer bytes); + + /** + * Decode the value output by redis. + * + * @param bytes Raw bytes of the value, must not be {@literal null}. + * + * @return The decoded value, may be {@literal null}. + */ + V decodeValue(ByteBuffer bytes); + + /** + * Encode the key for output to redis. + * + * @param key the key, may be {@literal null}. + * + * @return The encoded key, never {@literal null}. + */ + ByteBuffer encodeKey(K key); + + /** + * Encode the value for output to redis. + * + * @param value the value, may be {@literal null}. + * + * @return The encoded value, never {@literal null}. 
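Usage note (illustrative, not part of this changeset): a minimal sketch of the CompressionCodec above, wrapping a String codec so values are GZIP-compressed transparently while keys stay uncompressed; the URI and class name are placeholders.

import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.codec.CompressionCodec;
import io.lettuce.core.codec.RedisCodec;
import io.lettuce.core.codec.StringCodec;

public class CompressionCodecExample {

    public static void main(String[] args) {
        // Keys remain plain UTF-8 strings; values are GZIP-compressed before they are sent to Redis.
        RedisCodec<String, String> codec = CompressionCodec.valueCompressor(StringCodec.UTF8,
                CompressionCodec.CompressionType.GZIP);

        RedisClient client = RedisClient.create("redis://localhost:6379"); // placeholder URI
        StatefulRedisConnection<String, String> connection = client.connect(codec);

        connection.sync().set("large-value", "some repetitive payload that compresses well ...");

        connection.close();
        client.shutdown();
    }
}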
+ */ + ByteBuffer encodeValue(V value); + +} diff --git a/src/main/java/com/lambdaworks/redis/codec/StringCodec.java b/src/main/java/io/lettuce/core/codec/StringCodec.java similarity index 81% rename from src/main/java/com/lambdaworks/redis/codec/StringCodec.java rename to src/main/java/io/lettuce/core/codec/StringCodec.java index 4121559729..573e595d86 100644 --- a/src/main/java/com/lambdaworks/redis/codec/StringCodec.java +++ b/src/main/java/io/lettuce/core/codec/StringCodec.java @@ -1,15 +1,25 @@ -package com.lambdaworks.redis.codec; +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.codec; import java.nio.ByteBuffer; import java.nio.CharBuffer; -import java.nio.charset.CharacterCodingException; -import java.nio.charset.Charset; -import java.nio.charset.CharsetEncoder; -import java.nio.charset.CoderResult; - -import com.lambdaworks.redis.internal.LettuceAssert; -import com.lambdaworks.redis.protocol.LettuceCharsets; +import java.nio.charset.*; +import io.lettuce.core.internal.LettuceAssert; import io.netty.buffer.ByteBuf; import io.netty.buffer.ByteBufUtil; import io.netty.buffer.Unpooled; @@ -24,10 +34,10 @@ */ public class StringCodec implements RedisCodec, ToByteBufEncoder { - public final static StringCodec UTF8 = new StringCodec(LettuceCharsets.UTF8); - public final static StringCodec ASCII = new StringCodec(LettuceCharsets.ASCII); + public static final StringCodec UTF8 = new StringCodec(StandardCharsets.UTF_8); + public static final StringCodec ASCII = new StringCodec(StandardCharsets.US_ASCII); - private final static byte[] EMPTY = new byte[0]; + private static final byte[] EMPTY = new byte[0]; private final Charset charset; private final boolean ascii; @@ -43,7 +53,7 @@ public StringCodec() { /** * Creates a new {@link StringCodec} for the given {@link Charset} that encodes and decodes keys and values. - * + * * @param charset must not be {@literal null}. */ public StringCodec(Charset charset) { @@ -142,7 +152,7 @@ public ByteBuffer encodeValue(String value) { /** * Compatibility implementation. - * + * * @param key * @return */ diff --git a/src/main/java/io/lettuce/core/codec/ToByteBufEncoder.java b/src/main/java/io/lettuce/core/codec/ToByteBufEncoder.java new file mode 100644 index 0000000000..5ddc795faa --- /dev/null +++ b/src/main/java/io/lettuce/core/codec/ToByteBufEncoder.java @@ -0,0 +1,58 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
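Usage note (illustrative, not part of this changeset): a short sketch of the composite codec factory introduced above. RedisCodec.of combines a key codec with a value codec, here UTF-8 String keys and raw byte[] values; the class name is a placeholder.

import java.nio.charset.StandardCharsets;

import io.lettuce.core.codec.ByteArrayCodec;
import io.lettuce.core.codec.RedisCodec;
import io.lettuce.core.codec.StringCodec;

public class ComposedCodecExample {

    public static void main(String[] args) {
        // String keys combined with binary values.
        RedisCodec<String, byte[]> codec = RedisCodec.of(StringCodec.UTF8, ByteArrayCodec.INSTANCE);

        // StringCodec can also be created for a specific charset.
        StringCodec ascii = new StringCodec(StandardCharsets.US_ASCII);

        System.out.println(codec + " / " + ascii);
    }
}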
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.codec; + +import io.netty.buffer.ByteBuf; + +/** + * Optimized encoder that encodes keys and values directly on a {@link ByteBuf}. This encoder does not allocate buffers, it just + * encodes data to existing buffers. + *

+ * Classes implementing {@link ToByteBufEncoder} are required to implement {@link RedisCodec} as well. You should implement also + * the {@link RedisCodec#encodeKey(Object)} and {@link RedisCodec#encodeValue(Object)} methods to ensure compatibility for users + * that access the {@link RedisCodec} API only. + *

+ * + * @author Mark Paluch + * @since 4.3 + */ +public interface ToByteBufEncoder { + + /** + * Encode the key for output to redis. + * + * @param key the key, may be {@literal null}. + * @param target the target buffer, must not be {@literal null}. + */ + void encodeKey(K key, ByteBuf target); + + /** + * Encode the value for output to redis. + * + * @param value the value, may be {@literal null}. + * @param target the target buffer, must not be {@literal null}. + */ + void encodeValue(V value, ByteBuf target); + + /** + * Estimates the size of the resulting byte stream. This method is called for keys and values to estimate the size for the + * temporary buffer to allocate. + * + * @param keyOrValue the key or value, may be {@literal null}. + * @return the estimated number of bytes in the encoded representation. + */ + int estimateSize(Object keyOrValue); +} diff --git a/src/main/java/io/lettuce/core/codec/Utf8StringCodec.java b/src/main/java/io/lettuce/core/codec/Utf8StringCodec.java new file mode 100644 index 0000000000..aafdc8373e --- /dev/null +++ b/src/main/java/io/lettuce/core/codec/Utf8StringCodec.java @@ -0,0 +1,38 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.codec; + +import java.nio.charset.StandardCharsets; + +/** + * A {@link RedisCodec} that handles UTF-8 encoded keys and values. + * + * @author Will Glozer + * @author Mark Paluch + * @see StringCodec + * @see StandardCharsets#UTF_8 + * @deprecated since 5.2, use {@link StringCodec#UTF8} instead. + */ +@Deprecated +public class Utf8StringCodec extends StringCodec implements RedisCodec { + + /** + * Initialize a new instance that encodes and decodes strings using the UTF-8 charset; + */ + public Utf8StringCodec() { + super(StandardCharsets.UTF_8); + } +} diff --git a/src/main/java/io/lettuce/core/codec/package-info.java b/src/main/java/io/lettuce/core/codec/package-info.java new file mode 100644 index 0000000000..62a717318d --- /dev/null +++ b/src/main/java/io/lettuce/core/codec/package-info.java @@ -0,0 +1,4 @@ +/** + * Codecs for key/value type conversion. + */ +package io.lettuce.core.codec; diff --git a/src/main/java/io/lettuce/core/dynamic/AsyncExecutableCommand.java b/src/main/java/io/lettuce/core/dynamic/AsyncExecutableCommand.java new file mode 100644 index 0000000000..584c1d8d81 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/AsyncExecutableCommand.java @@ -0,0 +1,99 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
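Usage note (illustrative, not part of this changeset): since Utf8StringCodec above is deprecated in favor of StringCodec.UTF8, a small migration sketch; the class name is a placeholder.

import io.lettuce.core.codec.RedisCodec;
import io.lettuce.core.codec.StringCodec;
import io.lettuce.core.codec.Utf8StringCodec;

public class Utf8CodecMigration {

    @SuppressWarnings("deprecation")
    public static void main(String[] args) {
        // Before: a new codec instance per call site.
        RedisCodec<String, String> legacy = new Utf8StringCodec();

        // After: reuse the shared, stateless UTF-8 codec.
        RedisCodec<String, String> preferred = StringCodec.UTF8;

        System.out.println(legacy + " -> " + preferred);
    }
}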
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic; + +import java.time.Duration; +import java.util.concurrent.ExecutionException; +import java.util.concurrent.Future; + +import io.lettuce.core.LettuceFutures; +import io.lettuce.core.api.StatefulConnection; +import io.lettuce.core.dynamic.domain.Timeout; +import io.lettuce.core.dynamic.parameter.ExecutionSpecificParameters; +import io.lettuce.core.internal.Futures; +import io.lettuce.core.protocol.AsyncCommand; +import io.lettuce.core.protocol.RedisCommand; + +/** + * An {@link ExecutableCommand} that is executed asynchronously or synchronously. + * + * @author Mark Paluch + * @since 5.0 + */ +class AsyncExecutableCommand implements ExecutableCommand { + + private final CommandMethod commandMethod; + private final CommandFactory commandFactory; + private final StatefulConnection connection; + + AsyncExecutableCommand(CommandMethod commandMethod, CommandFactory commandFactory, + StatefulConnection connection) { + + this.commandMethod = commandMethod; + this.commandFactory = commandFactory; + this.connection = connection; + } + + @Override + public Object execute(Object[] parameters) throws ExecutionException, InterruptedException { + + RedisCommand command = commandFactory.createCommand(parameters); + + return dispatchCommand(parameters, command); + } + + protected Object dispatchCommand(Object[] arguments, RedisCommand command) + throws InterruptedException, java.util.concurrent.ExecutionException { + + AsyncCommand asyncCommand = new AsyncCommand<>(command); + + if (commandMethod.isFutureExecution()) { + + RedisCommand dispatched = connection.dispatch(asyncCommand); + + if (dispatched instanceof AsyncCommand) { + return dispatched; + } + + return asyncCommand; + } + + connection.dispatch(asyncCommand); + + Duration timeout = connection.getTimeout(); + + if (commandMethod.getParameters() instanceof ExecutionSpecificParameters) { + ExecutionSpecificParameters executionSpecificParameters = (ExecutionSpecificParameters) commandMethod + .getParameters(); + + if (executionSpecificParameters.hasTimeoutIndex()) { + Timeout timeoutArg = (Timeout) arguments[executionSpecificParameters.getTimeoutIndex()]; + if (timeoutArg != null) { + timeout = timeoutArg.getTimeout(); + } + } + } + + Futures.await(timeout, asyncCommand); + + return asyncCommand.get(); + } + + @Override + public CommandMethod getCommandMethod() { + return commandMethod; + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/AsyncExecutableCommandLookupStrategy.java b/src/main/java/io/lettuce/core/dynamic/AsyncExecutableCommandLookupStrategy.java new file mode 100644 index 0000000000..8293554f49 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/AsyncExecutableCommandLookupStrategy.java @@ -0,0 +1,51 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.dynamic; + +import java.util.List; + +import io.lettuce.core.api.StatefulConnection; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.dynamic.output.CommandOutputFactoryResolver; +import io.lettuce.core.internal.LettuceAssert; + +/** + * @author Mark Paluch + * @since 5.0 + */ +class AsyncExecutableCommandLookupStrategy extends ExecutableCommandLookupStrategySupport { + + private final StatefulConnection connection; + + public AsyncExecutableCommandLookupStrategy(List> redisCodecs, + CommandOutputFactoryResolver commandOutputFactoryResolver, CommandMethodVerifier commandMethodVerifier, + StatefulConnection connection) { + + super(redisCodecs, commandOutputFactoryResolver, commandMethodVerifier); + this.connection = connection; + } + + @Override + public ExecutableCommand resolveCommandMethod(CommandMethod method, RedisCommandsMetadata metadata) { + + LettuceAssert.isTrue(!method.isReactiveExecution(), + () -> String.format("Command method %s not supported by this command lookup strategy", method)); + + CommandFactory commandFactory = super.resolveCommandFactory(method, metadata); + + return new AsyncExecutableCommand(method, commandFactory, connection); + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/BatchExecutableCommand.java b/src/main/java/io/lettuce/core/dynamic/BatchExecutableCommand.java new file mode 100644 index 0000000000..bdcb567671 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/BatchExecutableCommand.java @@ -0,0 +1,117 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic; + +import java.time.Duration; +import java.util.ArrayList; +import java.util.List; +import java.util.concurrent.ExecutionException; + +import io.lettuce.core.LettuceFutures; +import io.lettuce.core.RedisFuture; +import io.lettuce.core.api.StatefulConnection; +import io.lettuce.core.dynamic.batch.BatchException; +import io.lettuce.core.dynamic.batch.CommandBatching; +import io.lettuce.core.dynamic.parameter.ExecutionSpecificParameters; +import io.lettuce.core.internal.Futures; +import io.lettuce.core.protocol.AsyncCommand; +import io.lettuce.core.protocol.RedisCommand; + +/** + * Executable command that uses a {@link Batcher} for command execution. 
+ * + * @author Mark Paluch + * @since 5.0 + */ +class BatchExecutableCommand implements ExecutableCommand { + + private final CommandMethod commandMethod; + private final CommandFactory commandFactory; + private final Batcher batcher; + private final StatefulConnection connection; + private final ExecutionSpecificParameters parameters; + private final boolean async; + + BatchExecutableCommand(CommandMethod commandMethod, CommandFactory commandFactory, Batcher batcher, + StatefulConnection connection) { + + this.commandMethod = commandMethod; + this.commandFactory = commandFactory; + this.batcher = batcher; + this.parameters = (ExecutionSpecificParameters) commandMethod.getParameters(); + this.async = commandMethod.isFutureExecution(); + this.connection = connection; + } + + @Override + public Object execute(Object[] parameters) throws ExecutionException, InterruptedException { + + RedisCommand command = commandFactory.createCommand(parameters); + + CommandBatching batching = null; + if (this.parameters.hasCommandBatchingIndex()) { + batching = (CommandBatching) parameters[this.parameters.getCommandBatchingIndex()]; + } + + AsyncCommand asyncCommand = new AsyncCommand<>(command); + + if (async) { + batcher.batch(asyncCommand, batching); + return asyncCommand; + } + + BatchTasks batchTasks = batcher.batch(asyncCommand, batching); + + return synchronize(batchTasks, connection); + } + + protected static Object synchronize(BatchTasks batchTasks, StatefulConnection connection) { + + if (batchTasks == BatchTasks.EMPTY) { + return null; + } + + Duration timeout = connection.getTimeout(); + + BatchException exception = null; + List> failures = null; + for (RedisCommand batchTask : batchTasks) { + + try { + Futures.await(timeout, (RedisFuture) batchTask); + } catch (Exception e) { + if (exception == null) { + failures = new ArrayList<>(); + exception = new BatchException(failures); + } + + failures.add(batchTask); + exception.addSuppressed(e); + } + } + + if (exception != null) { + throw exception; + } + + return null; + } + + @Override + public CommandMethod getCommandMethod() { + return commandMethod; + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/BatchExecutableCommandLookupStrategy.java b/src/main/java/io/lettuce/core/dynamic/BatchExecutableCommandLookupStrategy.java new file mode 100644 index 0000000000..5702e63264 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/BatchExecutableCommandLookupStrategy.java @@ -0,0 +1,98 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
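[Editorial note, not part of the diff] As a usage sketch (interface and method names are hypothetical, the batch annotations are the existing io.lettuce.core.dynamic.batch API), batching queues commands and completes them when the batch is flushed, either automatically via @BatchSize, explicitly via BatchExecutor.flush(), or per call via CommandBatching:

import io.lettuce.core.RedisFuture;
import io.lettuce.core.dynamic.Commands;
import io.lettuce.core.dynamic.annotation.Command;
import io.lettuce.core.dynamic.batch.BatchExecutor;
import io.lettuce.core.dynamic.batch.BatchSize;
import io.lettuce.core.dynamic.batch.CommandBatching;

@BatchSize(25) // flush automatically once 25 commands are queued
interface StringBatchCommands extends Commands, BatchExecutor {

    // Queued; failures of the flushed batch surface as BatchException.
    @Command("SET")
    void set(String key, String value);

    // Queued; the future completes once the batch is flushed.
    @Command("GET")
    RedisFuture<String> get(String key);

    // Per-invocation control: CommandBatching.queue() or CommandBatching.flush().
    @Command("GET")
    RedisFuture<String> get(String key, CommandBatching batching);
}

Calling batch.get("key", CommandBatching.flush()) would force an immediate flush of everything queued so far, which is the path that ends up in synchronize(...) above for non-Future methods.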
+ */ +package io.lettuce.core.dynamic; + +import java.util.Arrays; +import java.util.HashSet; +import java.util.List; +import java.util.Set; +import java.util.concurrent.ExecutionException; + +import io.lettuce.core.api.StatefulConnection; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.dynamic.batch.BatchExecutor; +import io.lettuce.core.dynamic.output.CommandOutputFactoryResolver; +import io.lettuce.core.dynamic.parameter.ExecutionSpecificParameters; +import io.lettuce.core.internal.LettuceAssert; + +/** + * @author Mark Paluch + * @since 5.0 + */ +class BatchExecutableCommandLookupStrategy extends ExecutableCommandLookupStrategySupport { + + private final Set> SYNCHRONOUS_RETURN_TYPES = new HashSet>(Arrays.asList(Void.class, Void.TYPE)); + + private final Batcher batcher; + private final StatefulConnection connection; + + public BatchExecutableCommandLookupStrategy(List> redisCodecs, + CommandOutputFactoryResolver commandOutputFactoryResolver, CommandMethodVerifier commandMethodVerifier, + Batcher batcher, StatefulConnection connection) { + + super(redisCodecs, commandOutputFactoryResolver, commandMethodVerifier); + this.batcher = batcher; + this.connection = connection; + } + + public static boolean supports(CommandMethod method) { + return method.isBatchExecution() || isForceFlush(method); + } + + private static boolean isForceFlush(CommandMethod method) { + return method.getName().equals("flush") && method.getMethod().getDeclaringClass().equals(BatchExecutor.class); + } + + @Override + public ExecutableCommand resolveCommandMethod(CommandMethod method, RedisCommandsMetadata metadata) { + + LettuceAssert.isTrue(!method.isReactiveExecution(), + () -> String.format("Command method %s not supported by this command lookup strategy", method)); + + ExecutionSpecificParameters parameters = (ExecutionSpecificParameters) method.getParameters(); + + if (parameters.hasTimeoutIndex()) { + throw new IllegalArgumentException(String.format( + "Timeout and batching is not supported, offending command method %s ", method)); + } + + if (isForceFlush(method)) { + + return new ExecutableCommand() { + + @Override + public Object execute(Object[] parameters) throws ExecutionException, InterruptedException { + BatchExecutableCommand.synchronize(batcher.flush(), connection); + return null; + } + + @Override + public CommandMethod getCommandMethod() { + return method; + } + }; + } + + if (method.isFutureExecution() || SYNCHRONOUS_RETURN_TYPES.contains(method.getReturnType().getRawClass())) { + + CommandFactory commandFactory = super.resolveCommandFactory(method, metadata); + return new BatchExecutableCommand(method, commandFactory, batcher, connection); + } + + throw new IllegalArgumentException(String.format( + "Batching command method %s must declare either a Future or void return type", method)); + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/BatchTasks.java b/src/main/java/io/lettuce/core/dynamic/BatchTasks.java new file mode 100644 index 0000000000..c4ea2c6696 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/BatchTasks.java @@ -0,0 +1,49 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
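[Editorial note, not part of the diff] For illustration of the constraints enforced by this lookup strategy (hypothetical declarations, compiling but rejected at lookup time):

import io.lettuce.core.RedisFuture;
import io.lettuce.core.dynamic.Commands;
import io.lettuce.core.dynamic.annotation.Command;
import io.lettuce.core.dynamic.batch.CommandBatching;
import io.lettuce.core.dynamic.domain.Timeout;

interface InvalidBatchCommands extends Commands {

    // Rejected: batching command methods must declare a Future or void return type.
    @Command("GET")
    String get(String key, CommandBatching batching);

    // Rejected: a Timeout parameter cannot be combined with batching.
    @Command("GET")
    RedisFuture<String> get(String key, Timeout timeout, CommandBatching batching);
}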
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic; + +import java.util.Collections; +import java.util.Iterator; +import java.util.List; + +import io.lettuce.core.protocol.RedisCommand; + +/** + * Result of a batching request. Contains references to the batched {@link RedisCommand}s. + * + * @author Mark Paluch + * @since 5.0 + */ +class BatchTasks implements Iterable> { + + public static final BatchTasks EMPTY = new BatchTasks(Collections.emptyList()); + + private final List> futures; + + BatchTasks(List> futures) { + this.futures = futures; + } + + @Override + public Iterator> iterator() { + return futures.iterator(); + } + + @SuppressWarnings("rawtypes") + public RedisCommand[] toArray() { + return futures.toArray(new RedisCommand[0]); + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/Batcher.java b/src/main/java/io/lettuce/core/dynamic/Batcher.java new file mode 100644 index 0000000000..89ebdedf5f --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/Batcher.java @@ -0,0 +1,54 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic; + +import io.lettuce.core.dynamic.batch.CommandBatching; +import io.lettuce.core.protocol.RedisCommand; + +/** + * Command batcher to enqueue commands and flush a batch once a flush is requested or a configured command threshold is reached. + * + * @author Mark Paluch + * @since 5.0 + * @see SimpleBatcher + */ +public interface Batcher { + + /** + * Batcher that does not support batching. + */ + Batcher NONE = (command, batching) -> { + throw new UnsupportedOperationException(); + }; + + /** + * Add command to the {@link Batcher}. + * + * @param command the command to batch. + * @param batching invocation-specific {@link CommandBatching} control. May be {@literal null} to use default batching + * settings. + * @return result of the batching. Either an {@link BatchTasks#EMPTY empty} result or a result containing the batched + * commands. + */ + BatchTasks batch(RedisCommand command, CommandBatching batching); + + /** + * Force-flush the batch. Has no effect if the queue is empty. + */ + default BatchTasks flush() { + return BatchTasks.EMPTY; + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/CodecAwareMethodParametersAccessor.java b/src/main/java/io/lettuce/core/dynamic/CodecAwareMethodParametersAccessor.java new file mode 100644 index 0000000000..c87ffc8eeb --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/CodecAwareMethodParametersAccessor.java @@ -0,0 +1,140 @@ +/* + * Copyright 2011-2020 the original author or authors. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic; + +import java.util.Iterator; + +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.dynamic.parameter.MethodParametersAccessor; +import io.lettuce.core.dynamic.support.ClassTypeInformation; +import io.lettuce.core.dynamic.support.TypeInformation; +import io.lettuce.core.internal.LettuceAssert; + +/** + * Codec-aware {@link MethodParametersAccessor}. Identifies key and value types by checking value compatibility with + * {@link RedisCodec} types. + * + * @author Mark Paluch + * @since 5.0 + */ +class CodecAwareMethodParametersAccessor implements MethodParametersAccessor { + + private final MethodParametersAccessor delegate; + private final TypeContext typeContext; + + public CodecAwareMethodParametersAccessor(MethodParametersAccessor delegate, RedisCodec redisCodec) { + + LettuceAssert.notNull(delegate, "MethodParametersAccessor must not be null"); + LettuceAssert.notNull(redisCodec, "RedisCodec must not be null"); + + this.delegate = delegate; + this.typeContext = new TypeContext(redisCodec); + } + + public CodecAwareMethodParametersAccessor(MethodParametersAccessor delegate, TypeContext typeContext) { + + LettuceAssert.notNull(delegate, "MethodParametersAccessor must not be null"); + LettuceAssert.notNull(typeContext, "TypeContext must not be null"); + + this.delegate = delegate; + this.typeContext = typeContext; + } + + @Override + public int getParameterCount() { + return delegate.getParameterCount(); + } + + @Override + public Object getBindableValue(int index) { + return delegate.getBindableValue(index); + } + + @Override + public boolean isKey(int index) { + + if (delegate.isValue(index)) { + return false; + } + + if (delegate.isKey(index)) { + return true; + } + + Object bindableValue = getBindableValue(index); + + if (bindableValue != null && typeContext.keyType.getType().isAssignableFrom(bindableValue.getClass())) { + return true; + } + + return false; + } + + @Override + public boolean isValue(int index) { + + if (delegate.isKey(index)) { + return false; + } + + if (delegate.isValue(index)) { + return true; + } + + Object bindableValue = getBindableValue(index); + + if (bindableValue != null && typeContext.valueType.getType().isAssignableFrom(bindableValue.getClass())) { + return true; + } + + return false; + } + + @Override + public Iterator iterator() { + return delegate.iterator(); + } + + @Override + public int resolveParameterIndex(String name) { + return delegate.resolveParameterIndex(name); + } + + @Override + public boolean isBindableNullValue(int index) { + return delegate.isBindableNullValue(index); + } + + /** + * Cacheable type context for a {@link RedisCodec}. 
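[Editorial note, not part of the diff] As the accessor above shows, explicit @Key/@Value annotations take precedence over codec-type-based detection, which matters when the codec's key and value types are identical (for example String/String). An illustrative declaration using the existing annotations:

import io.lettuce.core.dynamic.Commands;
import io.lettuce.core.dynamic.annotation.Command;
import io.lettuce.core.dynamic.annotation.Key;
import io.lettuce.core.dynamic.annotation.Value;

interface AnnotatedCommands extends Commands {

    // With a String/String codec both parameters are assignable to the key and the value
    // type, so the annotations decide which argument is encoded via encodeKey and which
    // via encodeValue.
    @Command("SET")
    String set(@Key String key, @Value String value);
}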
+ */ + public static class TypeContext { + + final TypeInformation keyType; + final TypeInformation valueType; + + @SuppressWarnings("rawtypes") + public TypeContext(RedisCodec redisCodec) { + + LettuceAssert.notNull(redisCodec, "RedisCodec must not be null"); + + ClassTypeInformation typeInformation = ClassTypeInformation.from(redisCodec.getClass()); + + this.keyType = typeInformation.getTypeArgument(RedisCodec.class, 0); + this.valueType = typeInformation.getTypeArgument(RedisCodec.class, 1); + } + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/CommandCreationException.java b/src/main/java/io/lettuce/core/dynamic/CommandCreationException.java new file mode 100644 index 0000000000..659ac8ef1d --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/CommandCreationException.java @@ -0,0 +1,49 @@ +/* + * Copyright 2016-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic; + +import io.lettuce.core.RedisException; + +/** + * Exception thrown if a command cannot be constructed from a {@link CommandMethod}. + * + * @author Mark Paluch + * @since 5.0 + */ +@SuppressWarnings("serial") +public class CommandCreationException extends RedisException { + + private final CommandMethod commandMethod; + + /** + * Create a new {@link CommandCreationException} given {@link CommandMethod} and a message. + * + * @param commandMethod must not be {@literal null}. + * @param msg must not be {@literal null}. + */ + public CommandCreationException(CommandMethod commandMethod, String msg) { + + super(String.format("%s Offending method: %s", msg, commandMethod)); + this.commandMethod = commandMethod; + } + + /** + * @return the offending {@link CommandMethod}. + */ + public CommandMethod getCommandMethod() { + return commandMethod; + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/CommandFactory.java b/src/main/java/io/lettuce/core/dynamic/CommandFactory.java new file mode 100644 index 0000000000..0999c79402 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/CommandFactory.java @@ -0,0 +1,38 @@ +/* + * Copyright 2016-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic; + +import io.lettuce.core.protocol.RedisCommand; + +/** + * Strategy interface to create {@link RedisCommand}s. + *

+ * Implementing classes are required to construct {@link RedisCommand}s given an array of parameters for command execution. + * + * @author Mark Paluch + * @since 5.0 + */ +@FunctionalInterface +interface CommandFactory { + + /** + * Create a new {@link RedisCommand} given {@code parameters}. + * + * @param parameters must not be {@literal null}. + * @return the {@link RedisCommand}. + */ + RedisCommand createCommand(Object[] parameters); +} diff --git a/src/main/java/io/lettuce/core/dynamic/CommandFactoryResolver.java b/src/main/java/io/lettuce/core/dynamic/CommandFactoryResolver.java new file mode 100644 index 0000000000..1cdc6d5769 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/CommandFactoryResolver.java @@ -0,0 +1,34 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic; + +/** + * Strategy interface to resolve a {@link CommandFactory}. + * + * @since 5.0 + */ +@FunctionalInterface +interface CommandFactoryResolver { + + /** + * Resolve a {@link CommandFactory} given a{@link DeclaredCommandMethod} and {@link RedisCommandsMetadata}. + * + * @param method must not be {@literal null}. + * @param redisCommandsMetadata must not be {@literal null}. + * @return the {@link CommandFactory}. + */ + CommandFactory resolveRedisCommandFactory(CommandMethod method, RedisCommandsMetadata redisCommandsMetadata); +} diff --git a/src/main/java/io/lettuce/core/dynamic/CommandMethod.java b/src/main/java/io/lettuce/core/dynamic/CommandMethod.java new file mode 100644 index 0000000000..7793ecd87d --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/CommandMethod.java @@ -0,0 +1,89 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic; + +import java.lang.annotation.Annotation; +import java.lang.reflect.Method; +import java.util.concurrent.Future; + +import io.lettuce.core.dynamic.parameter.Parameter; +import io.lettuce.core.dynamic.parameter.Parameters; +import io.lettuce.core.dynamic.support.ResolvableType; + +/** + * Abstraction of a method that is designated to execute a Redis command method. Enriches the standard {@link Method} interface + * with specific information that is necessary to construct {@link io.lettuce.core.protocol.RedisCommand} for the method. 
+ * + * @author Mark Paluch + * @since 5.0 + */ +public interface CommandMethod { + + /** + * @return the method {@link Parameters}. + */ + Parameters getParameters(); + + /** + * @return the {@link Method}. + */ + Method getMethod(); + + /** + * @return declared {@link Method} return {@link io.lettuce.core.dynamic.support.TypeInformation}. + */ + ResolvableType getReturnType(); + + /** + * @return the actual {@link Method} return {@link io.lettuce.core.dynamic.support.TypeInformation} after unwrapping. + */ + ResolvableType getActualReturnType(); + + /** + * Lookup a method annotation. + * + * @param annotationClass the annotation class. + * @return the annotation object or {@literal null} if not found. + */ + A getAnnotation(Class annotationClass); + + /** + * @param annotationClass the annotation class. + * @return {@literal true} if the method is annotated with {@code annotationClass}. + */ + boolean hasAnnotation(Class annotationClass); + + /** + * @return the method name. + */ + String getName(); + + /** + * @return {@literal true} if the method uses asynchronous execution declaring {@link Future} as result type. + */ + boolean isFutureExecution(); + + /** + * @return {@literal true} if the method uses reactive execution declaring {@link org.reactivestreams.Publisher} as result + * type. + */ + boolean isReactiveExecution(); + + /** + * @return {@literal true} if the method defines a {@link io.lettuce.core.dynamic.batch.CommandBatching} argument. + */ + boolean isBatchExecution(); +} diff --git a/src/main/java/io/lettuce/core/dynamic/CommandMethodSyntaxException.java b/src/main/java/io/lettuce/core/dynamic/CommandMethodSyntaxException.java new file mode 100644 index 0000000000..03cf60b4df --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/CommandMethodSyntaxException.java @@ -0,0 +1,36 @@ +/* + * Copyright 2016-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic; + +/** + * Exception thrown if the command syntax is invalid. + * + * @author Mark Paluch + * @since 5.0 + */ +@SuppressWarnings("serial") +public class CommandMethodSyntaxException extends CommandCreationException { + + /** + * Create a new {@link CommandMethodSyntaxException} given {@link CommandMethod} and a message. + * + * @param commandMethod must not be {@literal null}. + * @param msg must not be {@literal null}. + */ + public CommandMethodSyntaxException(CommandMethod commandMethod, String msg) { + super(commandMethod, msg); + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/CommandMethodVerifier.java b/src/main/java/io/lettuce/core/dynamic/CommandMethodVerifier.java new file mode 100644 index 0000000000..7e2988512b --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/CommandMethodVerifier.java @@ -0,0 +1,43 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
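[Editorial note, not part of the diff] The declared return type alone selects the execution model described by isFutureExecution() and isReactiveExecution(). The declarations below are illustrative; the reactive variant assumes Project Reactor's Flux, which implements org.reactivestreams.Publisher:

import java.util.List;

import io.lettuce.core.RedisFuture;
import io.lettuce.core.dynamic.Commands;
import io.lettuce.core.dynamic.annotation.Command;
import reactor.core.publisher.Flux;

interface MultiModelCommands extends Commands {

    // Synchronous: neither isFutureExecution() nor isReactiveExecution() is true.
    @Command("SMEMBERS")
    List<String> smembers(String key);

    // isFutureExecution() == true: the dispatched command is returned as a future.
    @Command("SMEMBERS")
    RedisFuture<List<String>> smembersAsync(String key);

    // isReactiveExecution() == true: Publisher-based execution.
    @Command("SMEMBERS")
    Flux<String> smembersReactive(String key);
}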
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic; + +import io.lettuce.core.dynamic.segment.CommandSegments; + +/** + * Verifies {@link CommandMethod} declarations by checking available Redis commands. + * + * @author Mark Paluch + * @since 5.0 + */ +@FunctionalInterface +interface CommandMethodVerifier { + + /** + * Default instance that does not verify commands. + */ + CommandMethodVerifier NONE = (commandSegments, commandMethod) -> { + }; + + /** + * Verify a {@link CommandMethod} with its {@link CommandSegments}. This method verifies that the command exists and that + * the required number of arguments is declared. + * + * @param commandSegments must not be {@literal null}. + * @param commandMethod must not be {@literal null}. + */ + void validate(CommandSegments commandSegments, CommandMethod commandMethod) throws CommandMethodSyntaxException; +} diff --git a/src/main/java/io/lettuce/core/dynamic/CommandSegmentCommandFactory.java b/src/main/java/io/lettuce/core/dynamic/CommandSegmentCommandFactory.java new file mode 100644 index 0000000000..3c64cd3a13 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/CommandSegmentCommandFactory.java @@ -0,0 +1,101 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic; + +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.dynamic.CodecAwareMethodParametersAccessor.TypeContext; +import io.lettuce.core.dynamic.output.CommandOutputFactory; +import io.lettuce.core.dynamic.output.CommandOutputFactoryResolver; +import io.lettuce.core.dynamic.output.OutputSelector; +import io.lettuce.core.dynamic.parameter.ExecutionSpecificParameters; +import io.lettuce.core.dynamic.parameter.MethodParametersAccessor; +import io.lettuce.core.dynamic.segment.CommandSegments; +import io.lettuce.core.output.CommandOutput; +import io.lettuce.core.protocol.Command; +import io.lettuce.core.protocol.CommandArgs; +import io.lettuce.core.protocol.RedisCommand; + +/** + * {@link CommandFactory} based on {@link CommandSegments}. 
+ * + * @author Mark Paluch + * @since 5.0 + */ +class CommandSegmentCommandFactory implements CommandFactory { + + private final CommandMethod commandMethod; + private final CommandSegments segments; + private final CommandOutputFactoryResolver outputResolver; + private final RedisCodec redisCodec; + private final ParameterBinder parameterBinder = new ParameterBinder(); + private final CommandOutputFactory outputFactory; + private final TypeContext typeContext; + + public CommandSegmentCommandFactory(CommandSegments commandSegments, CommandMethod commandMethod, + RedisCodec redisCodec, CommandOutputFactoryResolver outputResolver) { + + this.segments = commandSegments; + this.commandMethod = commandMethod; + this.redisCodec = (RedisCodec) redisCodec; + this.outputResolver = outputResolver; + this.typeContext = new TypeContext(redisCodec); + + OutputSelector outputSelector = new OutputSelector(commandMethod.getActualReturnType(), redisCodec); + CommandOutputFactory factory = resolveCommandOutputFactory(outputSelector); + + if (factory == null) { + throw new IllegalArgumentException(String.format("Cannot resolve CommandOutput for result type %s on method %s", + commandMethod.getActualReturnType(), commandMethod.getMethod())); + } + + if (commandMethod.getParameters() instanceof ExecutionSpecificParameters) { + + ExecutionSpecificParameters executionAwareParameters = (ExecutionSpecificParameters) commandMethod.getParameters(); + + if (commandMethod.isFutureExecution() && executionAwareParameters.hasTimeoutIndex()) { + throw new CommandCreationException(commandMethod, + "Asynchronous command methods do not support Timeout parameters"); + } + } + + this.outputFactory = factory; + } + + protected CommandOutputFactoryResolver getOutputResolver() { + return outputResolver; + } + + protected CommandOutputFactory resolveCommandOutputFactory(OutputSelector outputSelector) { + return outputResolver.resolveCommandOutput(outputSelector); + } + + @Override + public RedisCommand createCommand(Object[] parameters) { + + MethodParametersAccessor parametersAccessor = new CodecAwareMethodParametersAccessor( + new DefaultMethodParametersAccessor(commandMethod.getParameters(), parameters), typeContext); + + CommandArgs args = new CommandArgs<>(redisCodec); + + CommandOutput output = outputFactory.create(redisCodec); + Command command = new Command<>(this.segments.getCommandType(), + output, args); + + parameterBinder.bind(args, redisCodec, segments, parametersAccessor); + + return (Command) command; + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/Commands.java b/src/main/java/io/lettuce/core/dynamic/Commands.java new file mode 100644 index 0000000000..10d6151b4e --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/Commands.java @@ -0,0 +1,26 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic; + +/** + * Marker interface for dynamic Redis commands. 
Typically used by Redis Command Interfaces as extension point to discover + * interface declarations. + * + * @author Mark Paluch + * @since 5.0 + */ +public interface Commands { +} diff --git a/src/main/java/io/lettuce/core/dynamic/ConversionService.java b/src/main/java/io/lettuce/core/dynamic/ConversionService.java new file mode 100644 index 0000000000..e1318a3691 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/ConversionService.java @@ -0,0 +1,137 @@ +/* + * Copyright 2016-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic; + +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.Optional; +import java.util.function.Function; + +import io.lettuce.core.dynamic.support.ClassTypeInformation; +import io.lettuce.core.dynamic.support.TypeInformation; +import io.lettuce.core.internal.LettuceAssert; + +/** + * @author Mark Paluch + */ +class ConversionService { + + private Map> converterMap = new HashMap<>(10); + + /** + * Register a converter {@link Function}. + * + * @param converter the converter. + */ + @SuppressWarnings("rawtypes") + public void addConverter(Function converter) { + + LettuceAssert.notNull(converter, "Converter must not be null"); + + ClassTypeInformation classTypeInformation = ClassTypeInformation.from(converter.getClass()); + TypeInformation typeInformation = classTypeInformation.getSuperTypeInformation(Function.class); + List> typeArguments = typeInformation.getTypeArguments(); + + ConvertiblePair pair = new ConvertiblePair(typeArguments.get(0).getType(), typeArguments.get(1).getType()); + converterMap.put(pair, converter); + } + + @SuppressWarnings("unchecked") + public T convert(S source, Class targetType) { + + LettuceAssert.notNull(source, "Source must not be null"); + + return (T) getConverter(source.getClass(), targetType).apply(source); + } + + public boolean canConvert(Class sourceType, Class targetType) { + return findConverter(sourceType, targetType).isPresent(); + } + + @SuppressWarnings("unchecked") + Function getConverter(Class source, Class target) { + return findConverter(source, target).orElseThrow(() -> new IllegalArgumentException( + String.format("No converter found for %s to %s conversion", source.getName(), target.getName()))); + } + + private Optional> findConverter(Class source, Class target) { + LettuceAssert.notNull(source, "Source type must not be null"); + LettuceAssert.notNull(target, "Target type must not be null"); + + for (ConvertiblePair pair : converterMap.keySet()) { + + if (pair.getSourceType().isAssignableFrom(source) && target.isAssignableFrom(pair.getTargetType())) { + return Optional.of((Function) converterMap.get(pair)); + } + } + return Optional.empty(); + } + + /** + * Holder for a source-to-target class pair. + */ + final class ConvertiblePair { + + private final Class sourceType; + + private final Class targetType; + + /** + * Create a new source-to-target pair. 
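[Editorial note, not part of the diff] ConversionService is package-private, so the following is only a sketch of the internal contract and would have to live in the io.lettuce.core.dynamic package: converters are registered as concrete Function implementations (not lambdas, so the generic type arguments remain resolvable by reflection) and are looked up by source/target assignability.

import java.util.function.Function;

class ConversionServiceSketch {

    // Hypothetical converter; a concrete class so Function<Long, String> is reflectively visible.
    static class LongToStringConverter implements Function<Long, String> {

        @Override
        public String apply(Long source) {
            return Long.toString(source);
        }
    }

    void demo() {
        ConversionService conversionService = new ConversionService();
        conversionService.addConverter(new LongToStringConverter());

        boolean possible = conversionService.canConvert(Long.class, String.class); // true
        String converted = conversionService.convert(42L, String.class);           // "42"
    }
}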
+ * + * @param sourceType the source type + * @param targetType the target type + */ + public ConvertiblePair(Class sourceType, Class targetType) { + + LettuceAssert.notNull(sourceType, "Source type must not be null"); + LettuceAssert.notNull(targetType, "Target type must not be null"); + this.sourceType = sourceType; + this.targetType = targetType; + } + + public Class getSourceType() { + return this.sourceType; + } + + public Class getTargetType() { + return this.targetType; + } + + @Override + public boolean equals(Object other) { + if (this == other) { + return true; + } + if (other == null || other.getClass() != ConvertiblePair.class) { + return false; + } + ConvertiblePair otherPair = (ConvertiblePair) other; + return (this.sourceType == otherPair.sourceType && this.targetType == otherPair.targetType); + } + + @Override + public int hashCode() { + return (this.sourceType.hashCode() * 31 + this.targetType.hashCode()); + } + + @Override + public String toString() { + return (this.sourceType.getName() + " -> " + this.targetType.getName()); + } + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/ConvertingCommand.java b/src/main/java/io/lettuce/core/dynamic/ConvertingCommand.java new file mode 100644 index 0000000000..ae709f0d68 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/ConvertingCommand.java @@ -0,0 +1,53 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic; + +import java.util.concurrent.ExecutionException; + +/** + * A {@link ExecutableCommand} that uses {@link ConversionService} to convert the result of a decorated + * {@link ExecutableCommand}. + * + * @author Mark Paluch + * @since 5.0 + */ +class ConvertingCommand implements ExecutableCommand { + + private final ConversionService conversionService; + private final ExecutableCommand delegate; + + public ConvertingCommand(ConversionService conversionService, ExecutableCommand delegate) { + this.conversionService = conversionService; + this.delegate = delegate; + } + + @Override + public Object execute(Object[] parameters) throws ExecutionException, InterruptedException { + + Object result = delegate.execute(parameters); + + if (delegate.getCommandMethod().getReturnType().isAssignableFrom(result.getClass())) { + return result; + } + + return conversionService.convert(result, delegate.getCommandMethod().getReturnType().getRawClass()); + } + + @Override + public CommandMethod getCommandMethod() { + return delegate.getCommandMethod(); + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/DeclaredCommandMethod.java b/src/main/java/io/lettuce/core/dynamic/DeclaredCommandMethod.java new file mode 100644 index 0000000000..64b9533cab --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/DeclaredCommandMethod.java @@ -0,0 +1,218 @@ +/* + * Copyright 2011-2020 the original author or authors. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic; + +import java.lang.annotation.Annotation; +import java.lang.reflect.Method; +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; +import java.util.concurrent.Future; + +import org.reactivestreams.Publisher; + +import io.lettuce.core.dynamic.batch.BatchExecutor; +import io.lettuce.core.dynamic.parameter.ExecutionSpecificParameters; +import io.lettuce.core.dynamic.parameter.Parameter; +import io.lettuce.core.dynamic.parameter.Parameters; +import io.lettuce.core.dynamic.support.ResolvableType; +import io.lettuce.core.dynamic.support.TypeInformation; +import io.lettuce.core.internal.LettuceAssert; + +/** + * Abstraction of a method that is designated to execute a Redis command method. Enriches the standard {@link Method} interface + * with specific information that is necessary to construct {@link io.lettuce.core.protocol.RedisCommand} for the method. + * + * @author Mark Paluch + * @since 5.0 + */ +public class DeclaredCommandMethod implements CommandMethod { + + private final Method method; + private final ResolvableType returnType; + private final List> arguments = new ArrayList<>(); + private final ExecutionSpecificParameters parameters; + private final ResolvableType actualReturnType; + private final boolean futureExecution; + private final boolean reactiveExecution; + + /** + * Create a new {@link DeclaredCommandMethod} given a {@link Method}. + * + * @param method must not be null. + */ + private DeclaredCommandMethod(Method method) { + this(method, new ExecutionSpecificParameters(method)); + } + + /** + * Create a new {@link DeclaredCommandMethod} given a {@link Method} and {@link Parameters}. + * + * @param method must not be null. + * @param parameters must not be null. + */ + private DeclaredCommandMethod(Method method, ExecutionSpecificParameters parameters) { + + LettuceAssert.notNull(method, "Method must not be null"); + LettuceAssert.notNull(parameters, "Parameters must not be null"); + + this.method = method; + this.returnType = ResolvableType.forMethodReturnType(method); + this.parameters = parameters; + this.futureExecution = Future.class.isAssignableFrom(getReturnType().getRawClass()); + this.reactiveExecution = ReactiveTypes.supports(getReturnType().getRawClass()); + + Collections.addAll(arguments, method.getParameterTypes()); + + ResolvableType actualReturnType = this.returnType; + + while (Future.class.isAssignableFrom(actualReturnType.getRawClass())) { + ResolvableType[] generics = actualReturnType.getGenerics(); + + if (generics.length != 1) { + break; + } + + actualReturnType = generics[0]; + } + + this.actualReturnType = actualReturnType; + } + + /** + * Create a new {@link DeclaredCommandMethod} given a {@link Method}. + * + * @param method must not be null. + */ + public static CommandMethod create(Method method) { + return new DeclaredCommandMethod(method); + } + + /** + * @return the method {@link Parameters}. 
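[Editorial note, not part of the diff] The Future-unwrapping loop in the constructor is what distinguishes the declared from the actual return type. As an illustrative example:

import java.util.List;

import io.lettuce.core.RedisFuture;
import io.lettuce.core.dynamic.Commands;
import io.lettuce.core.dynamic.annotation.Command;

interface ListCommands extends Commands {

    // getReturnType()       -> RedisFuture<List<String>>  (drives isFutureExecution())
    // getActualReturnType() -> List<String>               (drives CommandOutput resolution)
    @Command("LRANGE")
    RedisFuture<List<String>> lrange(String key, long start, long stop);
}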
+ */ + @Override + public Parameters getParameters() { + return parameters; + } + + /** + * @return the {@link Method}. + */ + @Override + public Method getMethod() { + return method; + } + + /** + * @return declared {@link Method} return {@link TypeInformation}. + */ + @Override + public ResolvableType getReturnType() { + return returnType; + } + + /** + * @return the actual {@link Method} return {@link TypeInformation} after unwrapping. + */ + @Override + public ResolvableType getActualReturnType() { + return actualReturnType; + } + + /** + * Lookup a method annotation. + * + * @param annotationClass the annotation class. + * @return the annotation object or {@literal null} if not found. + */ + @Override + public A getAnnotation(Class annotationClass) { + return method.getAnnotation(annotationClass); + } + + /** + * @param annotationClass the annotation class. + * @return {@literal true} if the method is annotated with {@code annotationClass}. + */ + @Override + public boolean hasAnnotation(Class annotationClass) { + return method.getAnnotation(annotationClass) != null; + } + + /** + * @return the method name. + */ + @Override + public String getName() { + return method.getName(); + } + + /** + * @return {@literal true} if the method uses asynchronous execution declaring {@link Future} as result type. + */ + @Override + public boolean isFutureExecution() { + return futureExecution; + } + + /** + * @return {@literal true} if the method uses reactive execution declaring {@link Publisher} as result type. + */ + @Override + public boolean isReactiveExecution() { + return reactiveExecution; + } + + /** + * @return {@literal true} if the method defines a {@link io.lettuce.core.dynamic.batch.CommandBatching} argument. + */ + @Override + public boolean isBatchExecution() { + return parameters.hasCommandBatchingIndex() + || (method.getName().equals("flush") && method.getDeclaringClass().equals(BatchExecutor.class)); + } + + @Override + public boolean equals(Object o) { + if (this == o) + return true; + if (!(o instanceof DeclaredCommandMethod)) + return false; + + DeclaredCommandMethod that = (DeclaredCommandMethod) o; + + if (method != null ? !method.equals(that.method) : that.method != null) + return false; + if (returnType != null ? !returnType.equals(that.returnType) : that.returnType != null) + return false; + return arguments != null ? arguments.equals(that.arguments) : that.arguments == null; + + } + + @Override + public int hashCode() { + int result = method != null ? method.hashCode() : 0; + result = 31 * result + (returnType != null ? returnType.hashCode() : 0); + result = 31 * result + (arguments != null ? arguments.hashCode() : 0); + return result; + } + + @Override + public String toString() { + return method.toGenericString(); + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/DefaultCommandMethodVerifier.java b/src/main/java/io/lettuce/core/dynamic/DefaultCommandMethodVerifier.java new file mode 100644 index 0000000000..a01b7a2ece --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/DefaultCommandMethodVerifier.java @@ -0,0 +1,251 @@ +/* + * Copyright 2016-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic; + +import java.util.ArrayList; +import java.util.List; +import java.util.Optional; +import java.util.stream.Collectors; + +import io.lettuce.core.*; +import io.lettuce.core.dynamic.parameter.Parameter; +import io.lettuce.core.dynamic.segment.CommandSegment; +import io.lettuce.core.dynamic.segment.CommandSegments; +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.internal.LettuceLists; +import io.lettuce.core.models.command.CommandDetail; + +/** + * Verifies {@link CommandMethod} declarations by checking available Redis commands. + * + * @author Mark Paluch + * @since 5.0 + */ +class DefaultCommandMethodVerifier implements CommandMethodVerifier { + + /** + * Default maximum property distance: 2 + */ + public static final int DEFAULT_MAX_DISTANCE = 2; + + private List commandDetails; + + /** + * Create a new {@link DefaultCommandMethodVerifier} given a {@link List} of {@link CommandDetail} + * + * @param commandDetails must not be {@literal null}. + */ + public DefaultCommandMethodVerifier(List commandDetails) { + + LettuceAssert.notNull(commandDetails, "Command details must not be null"); + + this.commandDetails = LettuceLists.newList(commandDetails); + } + + /** + * Verify a {@link CommandMethod} with its {@link CommandSegments}. This method verifies that the command exists and that + * the required number of arguments is declared. + * + * @param commandSegments + * @param commandMethod + */ + public void validate(CommandSegments commandSegments, CommandMethod commandMethod) throws CommandMethodSyntaxException { + + LettuceAssert.notEmpty(commandSegments.getCommandType().name(), "Command name must not be empty"); + + CommandDetail commandDetail = findCommandDetail(commandSegments.getCommandType().name()).orElseThrow( + () -> syntaxException(commandSegments.getCommandType().name(), commandMethod)); + + validateParameters(commandDetail, commandSegments, commandMethod); + } + + private void validateParameters(CommandDetail commandDetail, CommandSegments commandSegments, CommandMethod commandMethod) { + + List bindableParameters = commandMethod.getParameters().getBindableParameters(); + + int availableParameterCount = calculateAvailableParameterCount(commandSegments, bindableParameters); + + // exact parameter count + if (commandDetail.getArity() - 1 == availableParameterCount) { + return; + } + + // more or same parameter cound for dynamic arg count commands + if (0 > commandDetail.getArity() && availableParameterCount >= -(commandDetail.getArity() + 1)) { + return; + } + + for (Parameter bindableParameter : bindableParameters) { + + // Can't verify collection-like arguments as they may contain multiple elements. 
+ if (bindableParameter.getTypeInformation().isCollectionLike()) { + return; + } + } + + String message; + if (commandDetail.getArity() == 1) { + message = String.format("Command %s accepts no parameters.", commandDetail.getName().toUpperCase()); + } else if (commandDetail.getArity() < -1) { + message = String.format("Command %s requires at least %d parameters but method declares %d parameter(s).", + commandDetail.getName().toUpperCase(), Math.abs(commandDetail.getArity()) - 1, availableParameterCount); + } else { + message = String.format("Command %s accepts %d parameters but method declares %d parameter(s).", commandDetail + .getName().toUpperCase(), commandDetail.getArity() - 1, availableParameterCount); + } + + throw new CommandMethodSyntaxException(commandMethod, message); + } + + private int calculateAvailableParameterCount(CommandSegments commandSegments, List bindableParameters) { + + int count = commandSegments.size(); + + for (int i = 0; i < bindableParameters.size(); i++) { + + Parameter bindableParameter = bindableParameters.get(i); + + boolean consumed = isConsumed(commandSegments, bindableParameter); + + if (consumed) { + continue; + } + + if (bindableParameter.isAssignableTo(KeyValue.class) || bindableParameter.isAssignableTo(ScoredValue.class)) { + count++; + } + + if (bindableParameter.isAssignableTo(GeoCoordinates.class) || bindableParameter.isAssignableTo(Range.class)) { + count++; + } + + if (bindableParameter.isAssignableTo(Limit.class)) { + count += 2; + } + + count++; + } + + return count; + } + + private boolean isConsumed(CommandSegments commandSegments, Parameter bindableParameter) { + + for (CommandSegment commandSegment : commandSegments) { + if (commandSegment.canConsume(bindableParameter)) { + return true; + } + } + + return false; + } + + private CommandMethodSyntaxException syntaxException(String commandName, CommandMethod commandMethod) { + + CommandMatches commandMatches = CommandMatches.forCommand(commandName, commandDetails); + + if (commandMatches.hasMatches()) { + return new CommandMethodSyntaxException(commandMethod, String.format( + "Command %s does not exist. Did you mean: %s?", commandName, commandMatches)); + } + + return new CommandMethodSyntaxException(commandMethod, String.format("Command %s does not exist", commandName)); + + } + + private Optional findCommandDetail(String commandName) { + return commandDetails.stream().filter(commandDetail -> commandDetail.getName().equalsIgnoreCase(commandName)) + .findFirst(); + } + + static class CommandMatches { + + private final List matches = new ArrayList<>(); + + private CommandMatches(List matches) { + this.matches.addAll(matches); + } + + public static CommandMatches forCommand(String command, List commandDetails) { + return new CommandMatches(calculateMatches(command, commandDetails)); + } + + private static List calculateMatches(String command, List commandDetails) { + + return commandDetails + .stream() + // + .filter(commandDetail -> calculateStringDistance(commandDetail.getName().toLowerCase(), + command.toLowerCase()) <= DEFAULT_MAX_DISTANCE).map(CommandDetail::getName) // + .map(String::toUpperCase) // + .sorted(CommandMatches::calculateStringDistance).collect(Collectors.toList()); + } + + public boolean hasMatches() { + return !matches.isEmpty(); + } + + @Override + public String toString() { + return LettuceStrings.collectionToDelimitedString(matches, ", ", "", ""); + } + + /** + * Calculate the distance between the given two Strings according to the Levenshtein algorithm. 
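[Editorial note, not part of the diff] A brief worked example of how this distance feeds the "Did you mean" suggestion (values follow directly from the algorithm below):

// calculateStringDistance("gett", "get")  == 1   (one deletion)
// calculateStringDistance("zadd", "sadd") == 1   (one substitution)
// calculateStringDistance("hgetal", "hgetall") == 1 (one insertion)
//
// With DEFAULT_MAX_DISTANCE = 2, a method declared as @Command("GETT") is rejected with a
// CommandMethodSyntaxException whose message suggests close matches such as GET.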
+ * + * @param s1 the first String + * @param s2 the second String + * @return the distance value + */ + private static int calculateStringDistance(String s1, String s2) { + + if (s1.length() == 0) { + return s2.length(); + } + + if (s2.length() == 0) { + return s1.length(); + } + + int d[][] = new int[s1.length() + 1][s2.length() + 1]; + + for (int i = 0; i <= s1.length(); i++) { + d[i][0] = i; + } + + for (int j = 0; j <= s2.length(); j++) { + d[0][j] = j; + } + + for (int i = 1; i <= s1.length(); i++) { + char s_i = s1.charAt(i - 1); + for (int j = 1; j <= s2.length(); j++) { + int cost; + char t_j = s2.charAt(j - 1); + if (s_i == t_j) { + cost = 0; + } else { + cost = 1; + } + d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1), d[i - 1][j - 1] + cost); + } + } + + return d[s1.length()][s2.length()]; + } + } + +} diff --git a/src/main/java/io/lettuce/core/dynamic/DefaultMethodParametersAccessor.java b/src/main/java/io/lettuce/core/dynamic/DefaultMethodParametersAccessor.java new file mode 100644 index 0000000000..2ae5bdcde6 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/DefaultMethodParametersAccessor.java @@ -0,0 +1,147 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic; + +import java.util.Arrays; +import java.util.Iterator; +import java.util.List; + +import io.lettuce.core.*; +import io.lettuce.core.dynamic.annotation.Key; +import io.lettuce.core.dynamic.annotation.Value; +import io.lettuce.core.dynamic.parameter.MethodParametersAccessor; +import io.lettuce.core.dynamic.parameter.Parameter; +import io.lettuce.core.dynamic.parameter.Parameters; +import io.lettuce.core.internal.LettuceAssert; + +/** + * Default {@link MethodParametersAccessor} implementation. + * + * @author Mark Paluch + * @since 5.0 + */ +class DefaultMethodParametersAccessor implements MethodParametersAccessor { + + private final Parameters parameters; + private final List values; + + DefaultMethodParametersAccessor(Parameters parameters, Object... 
values) { + + LettuceAssert.notNull(parameters, "Parameters must not be null"); + LettuceAssert.notNull(values, "Values must not be null"); + + this.parameters = parameters; + this.values = Arrays.asList(values); + } + + public int getParameterCount() { + return parameters.getBindableParameters().size(); + } + + @Override + public Object getBindableValue(int index) { + return values.get(parameters.getBindableParameter(index).getParameterIndex()); + } + + @Override + public boolean isKey(int index) { + return parameters.getBindableParameter(index).findAnnotation(Key.class) != null; + } + + @Override + public boolean isValue(int index) { + return parameters.getBindableParameter(index).findAnnotation(Value.class) != null; + } + + @Override + public Iterator iterator() { + return new BindableParameterIterator(this); + } + + @Override + public int resolveParameterIndex(String name) { + + List bindableParameters = parameters.getBindableParameters(); + + for (int i = 0; i < bindableParameters.size(); i++) { + + if (name.equals(bindableParameters.get(i).getName())) { + return i; + } + } + + throw new IllegalArgumentException(String.format("Cannot resolve named parameter %s", name)); + } + + public Parameters getParameters() { + return parameters; + } + + @Override + public boolean isBindableNullValue(int index) { + + Parameter bindableParameter = parameters.getBindableParameter(index); + + if (bindableParameter.isAssignableTo(Limit.class) || bindableParameter.isAssignableTo(io.lettuce.core.Value.class) + || bindableParameter.isAssignableTo(KeyValue.class) || bindableParameter.isAssignableTo(ScoredValue.class) + || bindableParameter.isAssignableTo(GeoCoordinates.class) || bindableParameter.isAssignableTo(Range.class)) { + return false; + } + + return true; + } + + /** + * Iterator class to allow traversing all bindable parameters inside the accessor. + */ + static class BindableParameterIterator implements Iterator { + + private final int bindableParameterCount; + private final DefaultMethodParametersAccessor accessor; + + private int currentIndex = 0; + + /** + * Creates a new {@link BindableParameterIterator}. + * + * @param accessor must not be {@literal null}. + */ + BindableParameterIterator(DefaultMethodParametersAccessor accessor) { + + LettuceAssert.notNull(accessor, "ParametersParameterAccessor must not be null!"); + + this.accessor = accessor; + this.bindableParameterCount = accessor.getParameters().getBindableParameters().size(); + } + + /** + * Return the next bindable parameter. + * + * @return + */ + public Object next() { + return accessor.getBindableValue(currentIndex++); + } + + public boolean hasNext() { + return bindableParameterCount > currentIndex; + } + + public void remove() { + throw new UnsupportedOperationException(); + } + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/DefaultRedisCommandsMetadata.java b/src/main/java/io/lettuce/core/dynamic/DefaultRedisCommandsMetadata.java new file mode 100644 index 0000000000..260d077ccb --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/DefaultRedisCommandsMetadata.java @@ -0,0 +1,160 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic; + +import java.lang.annotation.Annotation; +import java.lang.reflect.Method; +import java.lang.reflect.Modifier; +import java.util.Collection; +import java.util.Collections; +import java.util.HashSet; +import java.util.Set; + +import io.lettuce.core.internal.LettuceAssert; + +/** + * Default implementation of {@link RedisCommandsMetadata}. + * + * @author Mark Paluch + * @since 5.0 + */ +class DefaultRedisCommandsMetadata implements RedisCommandsMetadata { + + /** The package separator character: '.' */ + private static final char PACKAGE_SEPARATOR = '.'; + + private final Class apiInterface; + + /** + * Create {@link DefaultRedisCommandsMetadata} given a {@link Class command interface}. + * + * @param apiInterface must not be {@literal null}. + */ + DefaultRedisCommandsMetadata(Class apiInterface) { + this.apiInterface = apiInterface; + } + + @Override + public Class getCommandsInterface() { + return apiInterface; + } + + @Override + public Collection getMethods() { + + Set result = new HashSet(); + + for (Method method : getCommandsInterface().getMethods()) { + method = getMostSpecificMethod(method, getCommandsInterface()); + if (isQueryMethodCandidate(method)) { + result.add(method); + } + } + + return Collections.unmodifiableSet(result); + } + + /** + * Checks whether the given method is a query method candidate. + * + * @param method + * @return + */ + private boolean isQueryMethodCandidate(Method method) { + return !method.isBridge() && !method.isDefault(); + } + + @Override + public A getAnnotation(Class annotationClass) { + return getCommandsInterface().getAnnotation(annotationClass); + } + + @Override + public boolean hasAnnotation(Class annotationClass) { + return getCommandsInterface().getAnnotation(annotationClass) != null; + } + + /** + * Given a method, which may come from an interface, and a target class used in the current reflective invocation, find the + * corresponding target method if there is one. E.g. the method may be {@code IFoo.bar()} and the target class may be + * {@code DefaultFoo}. In this case, the method may be {@code DefaultFoo.bar()}. This enables attributes on that method to + * be found. + * + * @param method the method to be invoked, which may come from an interface + * @param targetClass the target class for the current invocation. May be {@code null} or may not even implement the method. + * @return the specific target method, or the original method if the {@code targetClass} doesn't implement it or is + * {@code null} + */ + public static Method getMostSpecificMethod(Method method, Class targetClass) { + + if (method != null && isOverridable(method, targetClass) && targetClass != null + && targetClass != method.getDeclaringClass()) { + try { + try { + return targetClass.getMethod(method.getName(), method.getParameterTypes()); + } catch (NoSuchMethodException ex) { + return method; + } + } catch (SecurityException ex) { + } + } + return method; + } + + /** + * Determine whether the given method is overridable in the given target class. 
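For orientation, the methods collected by this metadata scan come from user-declared command interfaces. A minimal sketch of such an interface, assuming the RedisCommandFactory entry point and the @Command, @Key and @Value annotations of the dynamic API (interface and key names are purely illustrative), might look like this:

import io.lettuce.core.dynamic.Commands;
import io.lettuce.core.dynamic.RedisCommandFactory;
import io.lettuce.core.dynamic.annotation.Command;
import io.lettuce.core.dynamic.annotation.Key;
import io.lettuce.core.dynamic.annotation.Value;

// Illustrative command interface: every non-bridge, non-default method is a
// query method candidate and is resolved to an ExecutableCommand.
interface PersonCommands extends Commands {

    @Command("SET")
    String set(@Key String key, @Value String value);

    @Command("GET")
    String get(@Key String key);
}

// Typical resolution (connection creation omitted):
// RedisCommandFactory factory = new RedisCommandFactory(connection);
// PersonCommands persons = factory.getCommands(PersonCommands.class);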
+ * + * @param method the method to check + * @param targetClass the target class to check against + */ + private static boolean isOverridable(Method method, Class targetClass) { + + if (Modifier.isPrivate(method.getModifiers())) { + return false; + } + if (Modifier.isPublic(method.getModifiers()) || Modifier.isProtected(method.getModifiers())) { + return true; + } + return getPackageName(method.getDeclaringClass()).equals(getPackageName(targetClass)); + } + + /** + * Determine the name of the package of the given class, e.g. "java.lang" for the {@code java.lang.String} class. + * + * @param clazz the class + * @return the package name, or the empty String if the class is defined in the default package + */ + private static String getPackageName(Class clazz) { + + LettuceAssert.notNull(clazz, "Class must not be null"); + return getPackageName(clazz.getName()); + } + + /** + * Determine the name of the package of the given fully-qualified class name, e.g. "java.lang" for the + * {@code java.lang.String} class name. + * + * @param fqClassName the fully-qualified class name + * @return the package name, or the empty String if the class is defined in the default package + */ + private static String getPackageName(String fqClassName) { + + LettuceAssert.notNull(fqClassName, "Class name must not be null"); + int lastDotIndex = fqClassName.lastIndexOf(PACKAGE_SEPARATOR); + return (lastDotIndex != -1 ? fqClassName.substring(0, lastDotIndex) : ""); + } + +} diff --git a/src/main/java/io/lettuce/core/dynamic/ExecutableCommand.java b/src/main/java/io/lettuce/core/dynamic/ExecutableCommand.java new file mode 100644 index 0000000000..52893757e9 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/ExecutableCommand.java @@ -0,0 +1,42 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic; + +import java.util.concurrent.ExecutionException; + +/** + * An executable command that can be executed calling {@link #execute(Object[])}. + * + * @author Mark Paluch + * @since 5.0 + */ +interface ExecutableCommand { + + /** + * Executes the {@link ExecutableCommand} with the given parameters. + * + * @param parameters + * @return + */ + Object execute(Object[] parameters) throws ExecutionException, InterruptedException; + + /** + * Returns the {@link CommandMethod}. + * + * @return + */ + CommandMethod getCommandMethod(); +} diff --git a/src/main/java/io/lettuce/core/dynamic/ExecutableCommandLookupStrategy.java b/src/main/java/io/lettuce/core/dynamic/ExecutableCommandLookupStrategy.java new file mode 100644 index 0000000000..80901cb8e9 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/ExecutableCommandLookupStrategy.java @@ -0,0 +1,37 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic; + +import java.lang.reflect.Method; + +/** + * Strategy interface to resolve {@link ExecutableCommand} from a {@link Method} and {@link RedisCommandsMetadata}. + * + * @author Mark Paluch + * @since 5.0 + */ +@FunctionalInterface +interface ExecutableCommandLookupStrategy { + + /** + * Resolve a {@link ExecutableCommand} given the {@link Method} and {@link RedisCommandsMetadata}. + * + * @param method must not be {@literal null}. + * @param metadata must not be {@literal null}. + * @return the {@link ExecutableCommand}. + */ + ExecutableCommand resolveCommandMethod(CommandMethod method, RedisCommandsMetadata metadata); +} diff --git a/src/main/java/io/lettuce/core/dynamic/ExecutableCommandLookupStrategySupport.java b/src/main/java/io/lettuce/core/dynamic/ExecutableCommandLookupStrategySupport.java new file mode 100644 index 0000000000..d4061f3619 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/ExecutableCommandLookupStrategySupport.java @@ -0,0 +1,79 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.dynamic; + +import java.util.List; + +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.dynamic.codec.AnnotationRedisCodecResolver; +import io.lettuce.core.dynamic.output.CodecAwareOutputFactoryResolver; +import io.lettuce.core.dynamic.output.CommandOutputFactoryResolver; +import io.lettuce.core.dynamic.segment.AnnotationCommandSegmentFactory; +import io.lettuce.core.dynamic.segment.CommandSegments; + +/** + * @author Mark Paluch + * @since 5.0 + */ +abstract class ExecutableCommandLookupStrategySupport implements ExecutableCommandLookupStrategy { + + private final List> redisCodecs; + private final CommandOutputFactoryResolver commandOutputFactoryResolver; + private final CommandFactoryResolver commandFactoryResolver; + private final CommandMethodVerifier commandMethodVerifier; + + public ExecutableCommandLookupStrategySupport(List> redisCodecs, + CommandOutputFactoryResolver commandOutputFactoryResolver, CommandMethodVerifier commandMethodVerifier) { + + this.redisCodecs = redisCodecs; + this.commandOutputFactoryResolver = commandOutputFactoryResolver; + this.commandMethodVerifier = commandMethodVerifier; + this.commandFactoryResolver = new DefaultCommandFactoryResolver(); + } + + protected CommandFactory resolveCommandFactory(CommandMethod commandMethod, RedisCommandsMetadata commandsMetadata) { + return commandFactoryResolver.resolveRedisCommandFactory(commandMethod, commandsMetadata); + } + + @SuppressWarnings("unchecked") + class DefaultCommandFactoryResolver implements CommandFactoryResolver { + + final AnnotationCommandSegmentFactory commandSegmentFactory = new AnnotationCommandSegmentFactory(); + final AnnotationRedisCodecResolver codecResolver; + + DefaultCommandFactoryResolver() { + codecResolver = new AnnotationRedisCodecResolver(redisCodecs); + } + + @Override + public CommandFactory resolveRedisCommandFactory(CommandMethod commandMethod, RedisCommandsMetadata commandsMetadata) { + + RedisCodec codec = codecResolver.resolve(commandMethod); + + if (codec == null) { + throw new CommandCreationException(commandMethod, "Cannot resolve RedisCodec"); + } + + CodecAwareOutputFactoryResolver outputFactoryResolver = new CodecAwareOutputFactoryResolver( + commandOutputFactoryResolver, codec); + CommandSegments commandSegments = commandSegmentFactory.createCommandSegments(commandMethod); + + commandMethodVerifier.validate(commandSegments, commandMethod); + + return new CommandSegmentCommandFactory(commandSegments, commandMethod, codec, outputFactoryResolver); + } + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/ParameterBinder.java b/src/main/java/io/lettuce/core/dynamic/ParameterBinder.java new file mode 100644 index 0000000000..6a26d3eea6 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/ParameterBinder.java @@ -0,0 +1,346 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.dynamic; + +import static io.lettuce.core.protocol.CommandKeyword.LIMIT; + +import java.lang.reflect.Array; +import java.nio.ByteBuffer; +import java.util.*; + +import io.lettuce.core.*; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.dynamic.parameter.MethodParametersAccessor; +import io.lettuce.core.dynamic.segment.CommandSegment; +import io.lettuce.core.dynamic.segment.CommandSegments; +import io.lettuce.core.dynamic.segment.CommandSegment.ArgumentContribution; +import io.lettuce.core.protocol.CommandArgs; +import io.lettuce.core.protocol.ProtocolKeyword; + +/** + * Parameter binder for {@link CommandSegments}-based Redis Commands. + * + * @author Mark Paluch + * @since 5.0 + */ +class ParameterBinder { + + private static final byte[] MINUS_BYTES = { '-' }; + private static final byte[] PLUS_BYTES = { '+' }; + + /** + * Bind {@link CommandSegments} and method parameters to {@link CommandArgs}. + * + * @param args the command arguments. + * @param codec the codec. + * @param commandSegments the command segments. + * @param accessor the parameter accessor. + * @return + */ + CommandArgs bind(CommandArgs args, RedisCodec codec, CommandSegments commandSegments, + MethodParametersAccessor accessor) { + + int parameterCount = accessor.getParameterCount(); + + BitSet set = new BitSet(parameterCount); + + for (CommandSegment commandSegment : commandSegments) { + + ArgumentContribution argumentContribution = commandSegment.contribute(accessor); + bind(args, codec, argumentContribution.getValue(), argumentContribution.getParameterIndex(), accessor); + + if (argumentContribution.getParameterIndex() != -1) { + set.set(argumentContribution.getParameterIndex()); + } + } + + for (int i = 0; i < parameterCount; i++) { + + if (set.get(i)) { + continue; + } + + Object bindableValue = accessor.getBindableValue(i); + bind(args, codec, bindableValue, i, accessor); + + set.set(i); + } + + return args; + } + + /* + * Bind key/value/byte[] arguments. Other arguments are unwound, if applicable, and bound according to their type. + */ + @SuppressWarnings("unchecked") + private void bind(CommandArgs args, RedisCodec codec, Object argument, int index, + MethodParametersAccessor accessor) { + + if (argument == null) { + + if (accessor.isBindableNullValue(index)) { + args.add(new byte[0]); + } + + return; + } + + if (argument instanceof byte[]) { + if (index != -1 && accessor.isKey(index)) { + args.addKey((K) argument); + } else { + args.add((byte[]) argument); + } + return; + } + + if (argument.getClass().isArray()) { + argument = asIterable(argument); + } + + if (index != -1) { + + if (accessor.isKey(index)) { + + if (argument instanceof Iterable) { + args.addKeys((Iterable) argument); + } else { + args.addKey((K) argument); + } + return; + } + + if (accessor.isValue(index)) { + + if (argument instanceof Range) { + bindValueRange(args, codec, (Range) argument); + return; + } + + if (argument instanceof Iterable) { + args.addValues((Iterable) argument); + } else { + args.addValue((V) argument); + } + return; + } + } + + if (argument instanceof Iterable) { + + for (Object argumentElement : (Iterable) argument) { + bindArgument(args, argumentElement); + } + + return; + } + + bindArgument(args, argument); + } + + /* + * Bind generic-handled arguments (String, ProtocolKeyword, Double, Map, Value hierarchy, Limit, Range, GeoCoordinates, + * Composite Arguments). 
+ */ + @SuppressWarnings("unchecked") + private static void bindArgument(CommandArgs args, Object argument) { + + if (argument instanceof byte[]) { + args.add((byte[]) argument); + return; + } + + if (argument instanceof String) { + args.add((String) argument); + return; + } + + if (argument instanceof Double) { + args.add(((Double) argument)); + return; + } else if (argument instanceof Number) { + args.add(((Number) argument).longValue()); + return; + } + + if (argument instanceof ProtocolKeyword) { + args.add((ProtocolKeyword) argument); + return; + } + + if (argument instanceof Map) { + args.add((Map) argument); + return; + } + + if (argument instanceof ScoredValue) { + + ScoredValue scoredValue = (ScoredValue) argument; + V value = scoredValue.getValueOrElseThrow(() -> new IllegalArgumentException( + "Cannot bind empty ScoredValue to a Redis command.")); + + args.add(scoredValue.getScore()); + args.addValue(value); + return; + } + + if (argument instanceof KeyValue) { + + KeyValue keyValue = (KeyValue) argument; + V value = keyValue.getValueOrElseThrow(() -> new IllegalArgumentException( + "Cannot bind empty KeyValue to a Redis command.")); + + args.addKey(keyValue.getKey()); + args.addValue(value); + return; + } + + if (argument instanceof Value) { + + Value valueWrapper = (Value) argument; + V value = valueWrapper.getValueOrElseThrow(() -> new IllegalArgumentException( + "Cannot bind empty Value to a Redis command.")); + + args.addValue(value); + return; + } + + if (argument instanceof Limit) { + + Limit limit = (Limit) argument; + args.add(LIMIT); + args.add(limit.getOffset()); + args.add(limit.getCount()); + return; + } + + if (argument instanceof Range) { + + Range range = (Range) argument; + bindNumericRange(args, range); + return; + } + + if (argument instanceof GeoCoordinates) { + + GeoCoordinates coordinates = (GeoCoordinates) argument; + args.add(coordinates.getX().doubleValue()); + args.add(coordinates.getY().doubleValue()); + return; + } + + if (argument instanceof CompositeArgument) { + ((CompositeArgument) argument).build(args); + return; + } + + throw new IllegalArgumentException("Cannot bind unsupported command argument " + args); + } + + private static void bindValueRange(CommandArgs args, RedisCodec codec, Range range) { + + args.add(minValue(codec, range)); + args.add(maxValue(codec, range)); + } + + private static void bindNumericRange(CommandArgs args, Range range) { + + if (range.getLower().getValue() != null && !(range.getLower().getValue() instanceof Number)) { + throw new IllegalArgumentException( + "Cannot bind non-numeric lower range value for a numeric Range. Annotate with @Value if the Range contains a value range."); + } + + if (range.getUpper().getValue() != null && !(range.getUpper().getValue() instanceof Number)) { + throw new IllegalArgumentException( + "Cannot bind non-numeric upper range value for a numeric Range. 
Annotate with @Value if the Range contains a value range."); + } + + args.add(minNumeric(range)); + args.add(maxNumeric(range)); + } + + private static String minNumeric(Range range) { + + Range.Boundary lower = range.getLower(); + + if (lower.getValue() == null || lower.getValue() instanceof Double + && lower.getValue().doubleValue() == Double.NEGATIVE_INFINITY) { + return "-inf"; + } + + if (!lower.isIncluding()) { + return "(" + lower.getValue(); + } + + return lower.getValue().toString(); + } + + private static String maxNumeric(Range range) { + + Range.Boundary upper = range.getUpper(); + + if (upper.getValue() == null || upper.getValue() instanceof Double + && upper.getValue().doubleValue() == Double.POSITIVE_INFINITY) { + return "+inf"; + } + + if (!upper.isIncluding()) { + return "(" + upper.getValue(); + } + + return upper.getValue().toString(); + } + + private static byte[] minValue(RedisCodec codec, Range range) { + return valueRange(range.getLower(), MINUS_BYTES, codec); + } + + private static byte[] maxValue(RedisCodec codec, Range range) { + return valueRange(range.getUpper(), PLUS_BYTES, codec); + } + + private static byte[] valueRange(Range.Boundary boundary, byte[] unbounded, RedisCodec codec) { + + if (boundary.getValue() == null) { + return unbounded; + } + + ByteBuffer encodeValue = codec.encodeValue(boundary.getValue()); + byte[] argument = new byte[encodeValue.remaining() + 1]; + + argument[0] = (byte) (boundary.isIncluding() ? '[' : '('); + + encodeValue.get(argument, 1, argument.length - 1); + + return argument; + } + + private static Object asIterable(Object argument) { + + if (argument.getClass().getComponentType().isPrimitive()) { + + int length = Array.getLength(argument); + + List elements = new ArrayList<>(length); + for (int i = 0; i < length; i++) { + elements.add(Array.get(argument, i)); + } + + return elements; + } + return Arrays.asList((Object[]) argument); + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/ReactiveCommandSegmentCommandFactory.java b/src/main/java/io/lettuce/core/dynamic/ReactiveCommandSegmentCommandFactory.java new file mode 100644 index 0000000000..3d75e121ec --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/ReactiveCommandSegmentCommandFactory.java @@ -0,0 +1,75 @@ +/* + * Copyright 2016-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic; + +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.dynamic.output.CommandOutputFactory; +import io.lettuce.core.dynamic.output.CommandOutputFactoryResolver; +import io.lettuce.core.dynamic.output.OutputSelector; +import io.lettuce.core.dynamic.parameter.ExecutionSpecificParameters; +import io.lettuce.core.dynamic.segment.CommandSegments; + +/** + * {@link CommandSegmentCommandFactory} for Reactive Command execution. 
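To illustrate the range encoding handled by bindNumericRange and valueRange above: a lower-exclusive, upper-unbounded numeric Range is rendered as "(1" and "+inf", while value-range bounds are codec-encoded and prefixed with '[' or '(' (or '-'/'+' when unbounded). A small sketch, assuming the Range and Limit factory methods from io.lettuce.core:

import io.lettuce.core.Limit;
import io.lettuce.core.Range;

class RangeBindingExamples {

    // Numeric range: lower bound exclusive, upper unbounded, rendered as "(1" and "+inf"
    static final Range<Integer> SCORES = Range.from(Range.Boundary.excluding(1), Range.Boundary.unbounded());

    // Value range: bounds are encoded by the codec and prefixed with '[' (inclusive) or '(' (exclusive)
    static final Range<String> MEMBERS = Range.create("a", "z");

    // Appended as LIMIT 0 10 through the Limit branch of bindArgument
    static final Limit FIRST_TEN = Limit.create(0, 10);
}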
+ * + * @author Mark Paluch + */ +class ReactiveCommandSegmentCommandFactory extends CommandSegmentCommandFactory { + + private boolean streamingExecution; + + ReactiveCommandSegmentCommandFactory(CommandSegments commandSegments, CommandMethod commandMethod, + RedisCodec redisCodec, CommandOutputFactoryResolver outputResolver) { + + super(commandSegments, commandMethod, redisCodec, outputResolver); + + if (commandMethod.getParameters() instanceof ExecutionSpecificParameters) { + + ExecutionSpecificParameters executionAwareParameters = (ExecutionSpecificParameters) commandMethod.getParameters(); + + if (executionAwareParameters.hasTimeoutIndex()) { + throw new CommandCreationException(commandMethod, "Reactive command methods do not support Timeout parameters"); + } + } + } + + @Override + protected CommandOutputFactory resolveCommandOutputFactory(OutputSelector outputSelector) { + + streamingExecution = ReactiveTypes.isMultiValueType(outputSelector.getOutputType().getRawClass()); + + OutputSelector componentType = new OutputSelector(outputSelector.getOutputType().getGeneric(0), + outputSelector.getRedisCodec()); + + if (streamingExecution) { + + CommandOutputFactory streamingFactory = getOutputResolver().resolveStreamingCommandOutput(componentType); + + if (streamingExecution && streamingFactory != null) { + return streamingFactory; + } + } + + return super.resolveCommandOutputFactory(componentType); + } + + /** + * @return {@literal true} if the resolved {@link io.lettuce.core.output.CommandOutput} should use streaming. + */ + boolean isStreamingExecution() { + return streamingExecution; + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/ReactiveExecutableCommand.java b/src/main/java/io/lettuce/core/dynamic/ReactiveExecutableCommand.java new file mode 100644 index 0000000000..190d7962c5 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/ReactiveExecutableCommand.java @@ -0,0 +1,62 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic; + +import io.lettuce.core.AbstractRedisReactiveCommands; + +/** + * An {@link ExecutableCommand} that is executed using reactive infrastructure. 
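For context, the factory above determines how a reactive command method executes: single-value return types are dispatched as a Mono, multi-value types as a (dissolving) Flux backed by a streaming output. A hypothetical interface exercising both paths:

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

import io.lettuce.core.dynamic.Commands;
import io.lettuce.core.dynamic.annotation.Command;

// Illustrative reactive command interface: Mono<String> takes the createMono path,
// Flux<String> resolves a streaming CommandOutput and uses the dissolving Flux path.
interface ReactivePersonCommands extends Commands {

    @Command("GET")
    Mono<String> get(String key);

    @Command("SMEMBERS")
    Flux<String> members(String key);
}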
+ * + * @author Mark Paluch + * @since 5.0 + */ +class ReactiveExecutableCommand implements ExecutableCommand { + + private final CommandMethod commandMethod; + private final ReactiveCommandSegmentCommandFactory commandFactory; + private final AbstractRedisReactiveCommands redisReactiveCommands; + + ReactiveExecutableCommand(CommandMethod commandMethod, ReactiveCommandSegmentCommandFactory commandFactory, + AbstractRedisReactiveCommands redisReactiveCommands) { + + this.commandMethod = commandMethod; + this.commandFactory = commandFactory; + this.redisReactiveCommands = redisReactiveCommands; + } + + @Override + public Object execute(Object[] parameters) { + return dispatch(parameters); + } + + protected Object dispatch(Object[] arguments) { + + if (ReactiveTypes.isSingleValueType(commandMethod.getReturnType().getRawClass())) { + return redisReactiveCommands.createMono(() -> commandFactory.createCommand(arguments)); + } + + if (commandFactory.isStreamingExecution()) { + return redisReactiveCommands.createDissolvingFlux(() -> commandFactory.createCommand(arguments)); + } + + return redisReactiveCommands.createFlux(() -> commandFactory.createCommand(arguments)); + } + + @Override + public CommandMethod getCommandMethod() { + return commandMethod; + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/ReactiveExecutableCommandLookupStrategy.java b/src/main/java/io/lettuce/core/dynamic/ReactiveExecutableCommandLookupStrategy.java new file mode 100644 index 0000000000..070f79b863 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/ReactiveExecutableCommandLookupStrategy.java @@ -0,0 +1,99 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.dynamic; + +import java.util.List; + +import io.lettuce.core.AbstractRedisReactiveCommands; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.dynamic.codec.AnnotationRedisCodecResolver; +import io.lettuce.core.dynamic.output.CodecAwareOutputFactoryResolver; +import io.lettuce.core.dynamic.output.CommandOutputFactoryResolver; +import io.lettuce.core.dynamic.segment.AnnotationCommandSegmentFactory; +import io.lettuce.core.dynamic.segment.CommandSegments; +import io.lettuce.core.internal.LettuceAssert; + +/** + * @author Mark Paluch + * @since 5.0 + */ +class ReactiveExecutableCommandLookupStrategy implements ExecutableCommandLookupStrategy { + + private final AbstractRedisReactiveCommands redisReactiveCommands; + private final ConversionService conversionService = new ConversionService(); + private final List> redisCodecs; + private final CommandOutputFactoryResolver outputFactoryResolver; + private final ReactiveCommandFactoryResolver commandFactoryResolver; + private final CommandMethodVerifier commandMethodVerifier; + + ReactiveExecutableCommandLookupStrategy(List> redisCodecs, + CommandOutputFactoryResolver outputFactoryResolver, CommandMethodVerifier commandMethodVerifier, + AbstractRedisReactiveCommands redisReactiveCommands) { + + this.redisReactiveCommands = redisReactiveCommands; + this.redisCodecs = redisCodecs; + this.outputFactoryResolver = outputFactoryResolver; + this.commandMethodVerifier = commandMethodVerifier; + + ReactiveTypeAdapters.registerIn(this.conversionService); + this.commandFactoryResolver = new ReactiveCommandFactoryResolver(); + } + + @Override + public ExecutableCommand resolveCommandMethod(CommandMethod method, RedisCommandsMetadata commandsMetadata) { + + LettuceAssert.isTrue(!method.isBatchExecution(), + () -> String.format("Command batching %s not supported with ReactiveExecutableCommandLookupStrategy", method)); + + LettuceAssert.isTrue(method.isReactiveExecution(), + () -> String.format("Command method %s not supported by ReactiveExecutableCommandLookupStrategy", method)); + + ReactiveCommandSegmentCommandFactory commandFactory = commandFactoryResolver.resolveRedisCommandFactory(method, + commandsMetadata); + + return new ConvertingCommand(conversionService, new ReactiveExecutableCommand(method, commandFactory, + redisReactiveCommands)); + } + + class ReactiveCommandFactoryResolver implements CommandFactoryResolver { + + final AnnotationCommandSegmentFactory commandSegmentFactory = new AnnotationCommandSegmentFactory(); + final AnnotationRedisCodecResolver codecResolver; + + ReactiveCommandFactoryResolver() { + codecResolver = new AnnotationRedisCodecResolver(redisCodecs); + } + + public ReactiveCommandSegmentCommandFactory resolveRedisCommandFactory(CommandMethod commandMethod, + RedisCommandsMetadata redisCommandsMetadata) { + + RedisCodec codec = codecResolver.resolve(commandMethod); + + if (codec == null) { + throw new CommandCreationException(commandMethod, "Cannot resolve RedisCodec"); + } + + CommandSegments commandSegments = commandSegmentFactory.createCommandSegments(commandMethod); + + commandMethodVerifier.validate(commandSegments, commandMethod); + + CodecAwareOutputFactoryResolver outputFactoryResolver = new CodecAwareOutputFactoryResolver( + ReactiveExecutableCommandLookupStrategy.this.outputFactoryResolver, codec); + + return new ReactiveCommandSegmentCommandFactory(commandSegments, commandMethod, codec, outputFactoryResolver); + } + } +} diff --git 
a/src/main/java/io/lettuce/core/dynamic/ReactiveTypeAdapters.java b/src/main/java/io/lettuce/core/dynamic/ReactiveTypeAdapters.java new file mode 100644 index 0000000000..f7f13f86a0 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/ReactiveTypeAdapters.java @@ -0,0 +1,874 @@ +/* + * Copyright 2016-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic; + +import java.util.function.Function; + +import org.reactivestreams.Publisher; + +import reactor.core.publisher.Flux; +import reactor.core.publisher.Mono; +import rx.Completable; +import rx.Observable; +import rx.RxReactiveStreams; +import rx.Single; +import rx.internal.reactivestreams.PublisherAdapter; +import io.lettuce.core.dynamic.ReactiveTypes.ReactiveLibrary; +import io.lettuce.core.internal.LettuceAssert; +import io.reactivex.BackpressureStrategy; +import io.reactivex.Flowable; +import io.reactivex.Maybe; + +/** + * @author Mark Paluch + * @since 5.0 + */ +class ReactiveTypeAdapters { + + /** + * Register adapters in the conversion service. + * + * @param conversionService + */ + static void registerIn(ConversionService conversionService) { + + LettuceAssert.notNull(conversionService, "ConversionService must not be null!"); + + if (ReactiveTypes.isAvailable(ReactiveLibrary.PROJECT_REACTOR)) { + + if (ReactiveTypes.isAvailable(ReactiveLibrary.RXJAVA1)) { + + conversionService.addConverter(PublisherToRxJava1CompletableAdapter.INSTANCE); + conversionService.addConverter(RxJava1CompletableToPublisherAdapter.INSTANCE); + conversionService.addConverter(RxJava1CompletableToMonoAdapter.INSTANCE); + + conversionService.addConverter(PublisherToRxJava1SingleAdapter.INSTANCE); + conversionService.addConverter(RxJava1SingleToPublisherAdapter.INSTANCE); + conversionService.addConverter(RxJava1SingleToMonoAdapter.INSTANCE); + conversionService.addConverter(RxJava1SingleToFluxAdapter.INSTANCE); + + conversionService.addConverter(PublisherToRxJava1ObservableAdapter.INSTANCE); + conversionService.addConverter(RxJava1ObservableToPublisherAdapter.INSTANCE); + conversionService.addConverter(RxJava1ObservableToMonoAdapter.INSTANCE); + conversionService.addConverter(RxJava1ObservableToFluxAdapter.INSTANCE); + } + + if (ReactiveTypes.isAvailable(ReactiveLibrary.RXJAVA2)) { + + conversionService.addConverter(PublisherToRxJava2CompletableAdapter.INSTANCE); + conversionService.addConverter(RxJava2CompletableToPublisherAdapter.INSTANCE); + conversionService.addConverter(RxJava2CompletableToMonoAdapter.INSTANCE); + + conversionService.addConverter(PublisherToRxJava2SingleAdapter.INSTANCE); + conversionService.addConverter(RxJava2SingleToPublisherAdapter.INSTANCE); + conversionService.addConverter(RxJava2SingleToMonoAdapter.INSTANCE); + conversionService.addConverter(RxJava2SingleToFluxAdapter.INSTANCE); + + conversionService.addConverter(PublisherToRxJava2ObservableAdapter.INSTANCE); + conversionService.addConverter(RxJava2ObservableToPublisherAdapter.INSTANCE); + 
conversionService.addConverter(RxJava2ObservableToMonoAdapter.INSTANCE); + conversionService.addConverter(RxJava2ObservableToFluxAdapter.INSTANCE); + + conversionService.addConverter(PublisherToRxJava2FlowableAdapter.INSTANCE); + conversionService.addConverter(RxJava2FlowableToPublisherAdapter.INSTANCE); + + conversionService.addConverter(PublisherToRxJava2MaybeAdapter.INSTANCE); + conversionService.addConverter(RxJava2MaybeToPublisherAdapter.INSTANCE); + conversionService.addConverter(RxJava2MaybeToMonoAdapter.INSTANCE); + conversionService.addConverter(RxJava2MaybeToFluxAdapter.INSTANCE); + } + + if (ReactiveTypes.isAvailable(ReactiveLibrary.RXJAVA3)) { + + conversionService.addConverter(PublisherToRxJava3CompletableAdapter.INSTANCE); + conversionService.addConverter(RxJava3CompletableToPublisherAdapter.INSTANCE); + conversionService.addConverter(RxJava3CompletableToMonoAdapter.INSTANCE); + + conversionService.addConverter(PublisherToRxJava3SingleAdapter.INSTANCE); + conversionService.addConverter(RxJava3SingleToPublisherAdapter.INSTANCE); + conversionService.addConverter(RxJava3SingleToMonoAdapter.INSTANCE); + conversionService.addConverter(RxJava3SingleToFluxAdapter.INSTANCE); + + conversionService.addConverter(PublisherToRxJava3ObservableAdapter.INSTANCE); + conversionService.addConverter(RxJava3ObservableToPublisherAdapter.INSTANCE); + conversionService.addConverter(RxJava3ObservableToMonoAdapter.INSTANCE); + conversionService.addConverter(RxJava3ObservableToFluxAdapter.INSTANCE); + + conversionService.addConverter(PublisherToRxJava3FlowableAdapter.INSTANCE); + conversionService.addConverter(RxJava3FlowableToPublisherAdapter.INSTANCE); + + conversionService.addConverter(PublisherToRxJava3MaybeAdapter.INSTANCE); + conversionService.addConverter(RxJava3MaybeToPublisherAdapter.INSTANCE); + conversionService.addConverter(RxJava3MaybeToMonoAdapter.INSTANCE); + conversionService.addConverter(RxJava3MaybeToFluxAdapter.INSTANCE); + } + + conversionService.addConverter(PublisherToMonoAdapter.INSTANCE); + conversionService.addConverter(PublisherToFluxAdapter.INSTANCE); + + if (ReactiveTypes.isAvailable(ReactiveLibrary.RXJAVA1)) { + conversionService.addConverter(RxJava1SingleToObservableAdapter.INSTANCE); + conversionService.addConverter(RxJava1ObservableToSingleAdapter.INSTANCE); + } + + if (ReactiveTypes.isAvailable(ReactiveLibrary.RXJAVA2)) { + conversionService.addConverter(RxJava2SingleToObservableAdapter.INSTANCE); + conversionService.addConverter(RxJava2ObservableToSingleAdapter.INSTANCE); + conversionService.addConverter(RxJava2ObservableToMaybeAdapter.INSTANCE); + } + + if (ReactiveTypes.isAvailable(ReactiveLibrary.RXJAVA3)) { + conversionService.addConverter(RxJava3SingleToObservableAdapter.INSTANCE); + conversionService.addConverter(RxJava3ObservableToSingleAdapter.INSTANCE); + conversionService.addConverter(RxJava3ObservableToMaybeAdapter.INSTANCE); + } + } + } + + // ------------------------------------------------------------------------- + // ReactiveStreams adapters + // ------------------------------------------------------------------------- + + /** + * An adapter {@link Function} to adopt a {@link Publisher} to {@link Flux}. + */ + public enum PublisherToFluxAdapter implements Function, Flux> { + + INSTANCE; + + @Override + public Flux apply(Publisher source) { + return Flux.from(source); + } + } + + /** + * An adapter {@link Function} to adopt a {@link Publisher} to {@link Mono}. 
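The practical effect of these registrations, roughly, is that a command method may declare an RxJava return type and the Reactor result produced internally is adapted through the matching converter. A hypothetical sketch, assuming RxJava 2 is on the class path:

import io.reactivex.Maybe;
import io.reactivex.Single;

import io.lettuce.core.dynamic.Commands;
import io.lettuce.core.dynamic.annotation.Command;

// Illustrative interface: results are produced as Reactor types internally and
// adapted to RxJava 2 wrappers through the converters registered above.
interface RxPersonCommands extends Commands {

    @Command("GET")
    Maybe<String> get(String key);

    @Command("STRLEN")
    Single<Long> length(String key);
}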
+ */ + public enum PublisherToMonoAdapter implements Function, Mono> { + + INSTANCE; + + @Override + public Mono apply(Publisher source) { + return Mono.from(source); + } + } + + // ------------------------------------------------------------------------- + // RxJava 1 adapters + // ------------------------------------------------------------------------- + + /** + * An adapter {@link Function} to adopt a {@link Publisher} to {@link Single}. + */ + public enum PublisherToRxJava1SingleAdapter implements Function, Single> { + + INSTANCE; + + @Override + public Single apply(Publisher source) { + return RxReactiveStreams.toSingle(source); + } + } + + /** + * An adapter {@link Function} to adopt a {@link Publisher} to {@link Completable}. + */ + public enum PublisherToRxJava1CompletableAdapter implements Function, Completable> { + + INSTANCE; + + @Override + public Completable apply(Publisher source) { + return RxReactiveStreams.toCompletable(source); + } + } + + /** + * An adapter {@link Function} to adopt a {@link Publisher} to {@link Observable}. + */ + public enum PublisherToRxJava1ObservableAdapter implements Function, Observable> { + + INSTANCE; + + @Override + public Observable apply(Publisher source) { + return RxReactiveStreams.toObservable(source); + } + } + + /** + * An adapter {@link Function} to adopt a {@link Single} to {@link Publisher}. + */ + public enum RxJava1SingleToPublisherAdapter implements Function, Publisher> { + + INSTANCE; + + @Override + public Publisher apply(Single source) { + return Flux.defer(() -> RxReactiveStreams.toPublisher(source)); + } + } + + /** + * An adapter {@link Function} to adopt a {@link Single} to {@link Mono}. + */ + public enum RxJava1SingleToMonoAdapter implements Function, Mono> { + + INSTANCE; + + @Override + public Mono apply(Single source) { + return Mono.defer(() -> Mono.from((Publisher) RxReactiveStreams.toPublisher(source))); + } + } + + /** + * An adapter {@link Function} to adopt a {@link Single} to {@link Publisher}. + */ + public enum RxJava1SingleToFluxAdapter implements Function, Flux> { + + INSTANCE; + + @Override + public Flux apply(Single source) { + return Flux.defer(() -> RxReactiveStreams.toPublisher(source)); + } + } + + /** + * An adapter {@link Function} to adopt a {@link Completable} to {@link Publisher}. + */ + public enum RxJava1CompletableToPublisherAdapter implements Function> { + + INSTANCE; + + @Override + public Publisher apply(Completable source) { + return Flux.defer(() -> RxReactiveStreams.toPublisher(source)); + } + } + + /** + * An adapter {@link Function} to adopt a {@link Completable} to {@link Mono}. + */ + public enum RxJava1CompletableToMonoAdapter implements Function> { + + INSTANCE; + + @Override + public Mono apply(Completable source) { + return Mono.from(RxJava1CompletableToPublisherAdapter.INSTANCE.apply(source)); + } + } + + /** + * An adapter {@link Function} to adopt an {@link Observable} to {@link Publisher}. + */ + public enum RxJava1ObservableToPublisherAdapter implements Function, Publisher> { + + INSTANCE; + + @Override + public Publisher apply(Observable source) { + return Flux.defer(() -> new PublisherAdapter<>(source)); + } + } + + /** + * An adapter {@link Function} to adopt a {@link Observable} to {@link Mono}. 
+ */ + public enum RxJava1ObservableToMonoAdapter implements Function, Mono> { + + INSTANCE; + + @Override + public Mono apply(Observable source) { + return Mono.defer(() -> Mono.from((Publisher) RxReactiveStreams.toPublisher(source))); + } + } + + /** + * An adapter {@link Function} to adopt a {@link Observable} to {@link Flux}. + */ + public enum RxJava1ObservableToFluxAdapter implements Function, Flux> { + + INSTANCE; + + @Override + public Flux apply(Observable source) { + return Flux.defer(() -> Flux.from((Publisher) RxReactiveStreams.toPublisher(source))); + } + } + + /** + * An adapter {@link Function} to adopt a {@link Observable} to {@link Single}. + */ + public enum RxJava1ObservableToSingleAdapter implements Function, Single> { + + INSTANCE; + + @Override + public Single apply(Observable source) { + return source.toSingle(); + } + } + + /** + * An adapter {@link Function} to adopt a {@link Single} to {@link Single}. + */ + public enum RxJava1SingleToObservableAdapter implements Function, Observable> { + + INSTANCE; + + @Override + public Observable apply(Single source) { + return source.toObservable(); + } + } + + // ------------------------------------------------------------------------- + // RxJava 2 adapters + // ------------------------------------------------------------------------- + + /** + * An adapter {@link Function} to adopt a {@link Publisher} to {@link io.reactivex.Single}. + */ + public enum PublisherToRxJava2SingleAdapter implements Function, io.reactivex.Single> { + + INSTANCE; + + @Override + public io.reactivex.Single apply(Publisher source) { + return io.reactivex.Single.fromPublisher(source); + } + } + + /** + * An adapter {@link Function} to adopt a {@link Publisher} to {@link io.reactivex.Completable}. + */ + public enum PublisherToRxJava2CompletableAdapter implements Function, io.reactivex.Completable> { + + INSTANCE; + + @Override + public io.reactivex.Completable apply(Publisher source) { + return io.reactivex.Completable.fromPublisher(source); + } + } + + /** + * An adapter {@link Function} to adopt a {@link Publisher} to {@link io.reactivex.Observable}. + */ + public enum PublisherToRxJava2ObservableAdapter implements Function, io.reactivex.Observable> { + + INSTANCE; + + @Override + public io.reactivex.Observable apply(Publisher source) { + return io.reactivex.Observable.fromPublisher(source); + } + } + + /** + * An adapter {@link Function} to adopt a {@link io.reactivex.Single} to {@link Publisher}. + */ + public enum RxJava2SingleToPublisherAdapter implements Function, Publisher> { + + INSTANCE; + + @Override + public Publisher apply(io.reactivex.Single source) { + return source.toFlowable(); + } + } + + /** + * An adapter {@link Function} to adopt a {@link io.reactivex.Single} to {@link Mono}. + */ + public enum RxJava2SingleToMonoAdapter implements Function, Mono> { + + INSTANCE; + + @Override + public Mono apply(io.reactivex.Single source) { + return Mono.from(source.toFlowable()); + } + } + + /** + * An adapter {@link Function} to adopt a {@link io.reactivex.Single} to {@link Publisher}. + */ + public enum RxJava2SingleToFluxAdapter implements Function, Flux> { + + INSTANCE; + + @Override + public Flux apply(io.reactivex.Single source) { + return Flux.from(source.toFlowable()); + } + } + + /** + * An adapter {@link Function} to adopt a {@link io.reactivex.Completable} to {@link Publisher}. 
+ */ + public enum RxJava2CompletableToPublisherAdapter implements Function> { + + INSTANCE; + + @Override + public Publisher apply(io.reactivex.Completable source) { + return source.toFlowable(); + } + } + + /** + * An adapter {@link Function} to adopt a {@link io.reactivex.Completable} to {@link Mono}. + */ + public enum RxJava2CompletableToMonoAdapter implements Function> { + + INSTANCE; + + @Override + public Mono apply(io.reactivex.Completable source) { + return Mono.from(RxJava2CompletableToPublisherAdapter.INSTANCE.apply(source)); + } + } + + /** + * An adapter {@link Function} to adopt an {@link io.reactivex.Observable} to {@link Publisher}. + */ + public enum RxJava2ObservableToPublisherAdapter implements Function, Publisher> { + + INSTANCE; + + @Override + public Publisher apply(io.reactivex.Observable source) { + return source.toFlowable(BackpressureStrategy.BUFFER); + } + } + + /** + * An adapter {@link Function} to adopt a {@link io.reactivex.Observable} to {@link Mono}. + */ + public enum RxJava2ObservableToMonoAdapter implements Function, Mono> { + + INSTANCE; + + @Override + public Mono apply(io.reactivex.Observable source) { + return Mono.from(source.toFlowable(BackpressureStrategy.BUFFER)); + } + } + + /** + * An adapter {@link Function} to adopt a {@link io.reactivex.Observable} to {@link Flux}. + */ + public enum RxJava2ObservableToFluxAdapter implements Function, Flux> { + + INSTANCE; + + @Override + public Flux apply(io.reactivex.Observable source) { + return Flux.from(source.toFlowable(BackpressureStrategy.BUFFER)); + } + } + + /** + * An adapter {@link Function} to adopt a {@link Publisher} to {@link io.reactivex.Flowable}. + */ + public enum PublisherToRxJava2FlowableAdapter implements Function, io.reactivex.Flowable> { + + INSTANCE; + + @Override + public io.reactivex.Flowable apply(Publisher source) { + return Flowable.fromPublisher(source); + } + } + + /** + * An adapter {@link Function} to adopt a {@link io.reactivex.Flowable} to {@link Publisher}. + */ + public enum RxJava2FlowableToPublisherAdapter implements Function, Publisher> { + + INSTANCE; + + @Override + public Publisher apply(io.reactivex.Flowable source) { + return source; + } + } + + /** + * An adapter {@link Function} to adopt a {@link Publisher} to {@link io.reactivex.Flowable}. + */ + public enum PublisherToRxJava2MaybeAdapter implements Function, io.reactivex.Maybe> { + + INSTANCE; + + @Override + public io.reactivex.Maybe apply(Publisher source) { + return Flowable.fromPublisher(source).singleElement(); + } + } + + /** + * An adapter {@link Function} to adopt a {@link io.reactivex.Maybe} to {@link Publisher}. + */ + public enum RxJava2MaybeToPublisherAdapter implements Function, Publisher> { + + INSTANCE; + + @Override + public Publisher apply(io.reactivex.Maybe source) { + return source.toFlowable(); + } + } + + /** + * An adapter {@link Function} to adopt a {@link io.reactivex.Maybe} to {@link Mono}. + */ + public enum RxJava2MaybeToMonoAdapter implements Function, Mono> { + + INSTANCE; + + @Override + public Mono apply(io.reactivex.Maybe source) { + return Mono.from(source.toFlowable()); + } + } + + /** + * An adapter {@link Function} to adopt a {@link io.reactivex.Maybe} to {@link Flux}. + */ + public enum RxJava2MaybeToFluxAdapter implements Function, Flux> { + + INSTANCE; + + @Override + public Flux apply(io.reactivex.Maybe source) { + return Flux.from(source.toFlowable()); + } + } + + /** + * An adapter {@link Function} to adopt a {@link Observable} to {@link Single}. 
+ */ + public enum RxJava2ObservableToSingleAdapter implements Function, io.reactivex.Single> { + + INSTANCE; + + @Override + public io.reactivex.Single apply(io.reactivex.Observable source) { + return source.singleOrError(); + } + } + + /** + * An adapter {@link Function} to adopt a {@link Observable} to {@link Maybe}. + */ + public enum RxJava2ObservableToMaybeAdapter implements Function, io.reactivex.Maybe> { + + INSTANCE; + + @Override + public io.reactivex.Maybe apply(io.reactivex.Observable source) { + return source.singleElement(); + } + } + + /** + * An adapter {@link Function} to adopt a {@link Single} to {@link Single}. + */ + public enum RxJava2SingleToObservableAdapter implements Function, io.reactivex.Observable> { + + INSTANCE; + + @Override + public io.reactivex.Observable apply(io.reactivex.Single source) { + return source.toObservable(); + } + } + + // ------------------------------------------------------------------------- + // RxJava 3 adapters + // ------------------------------------------------------------------------- + + /** + * An adapter {@link Function} to adopt a {@link Publisher} to {@link io.reactivex.Single}. + */ + public enum PublisherToRxJava3SingleAdapter implements Function, io.reactivex.rxjava3.core.Single> { + + INSTANCE; + + @Override + public io.reactivex.rxjava3.core.Single apply(Publisher source) { + return io.reactivex.rxjava3.core.Single.fromPublisher(source); + } + } + + /** + * An adapter {@link Function} to adopt a {@link Publisher} to {@link io.reactivex.Completable}. + */ + public enum PublisherToRxJava3CompletableAdapter implements Function, io.reactivex.rxjava3.core.Completable> { + + INSTANCE; + + @Override + public io.reactivex.rxjava3.core.Completable apply(Publisher source) { + return io.reactivex.rxjava3.core.Completable.fromPublisher(source); + } + } + + /** + * An adapter {@link Function} to adopt a {@link Publisher} to {@link io.reactivex.rxjava3.core.Observable}. + */ + public enum PublisherToRxJava3ObservableAdapter implements Function, io.reactivex.rxjava3.core.Observable> { + + INSTANCE; + + @Override + public io.reactivex.rxjava3.core.Observable apply(Publisher source) { + return io.reactivex.rxjava3.core.Observable.fromPublisher(source); + } + } + + /** + * An adapter {@link Function} to adopt a {@link io.reactivex.rxjava3.core.Single} to {@link Publisher}. + */ + public enum RxJava3SingleToPublisherAdapter implements Function, Publisher> { + + INSTANCE; + + @Override + public Publisher apply(io.reactivex.rxjava3.core.Single source) { + return source.toFlowable(); + } + } + + /** + * An adapter {@link Function} to adopt a {@link io.reactivex.rxjava3.core.Single} to {@link Mono}. + */ + public enum RxJava3SingleToMonoAdapter implements Function, Mono> { + + INSTANCE; + + @Override + public Mono apply(io.reactivex.rxjava3.core.Single source) { + return Mono.from(source.toFlowable()); + } + } + + /** + * An adapter {@link Function} to adopt a {@link io.reactivex.rxjava3.core.Single} to {@link Publisher}. + */ + public enum RxJava3SingleToFluxAdapter implements Function, Flux> { + + INSTANCE; + + @Override + public Flux apply(io.reactivex.rxjava3.core.Single source) { + return Flux.from(source.toFlowable()); + } + } + + /** + * An adapter {@link Function} to adopt a {@link io.reactivex.rxjava3.core.Completable} to {@link Publisher}. 
+ */ + public enum RxJava3CompletableToPublisherAdapter implements Function> { + + INSTANCE; + + @Override + public Publisher apply(io.reactivex.rxjava3.core.Completable source) { + return source.toFlowable(); + } + } + + /** + * An adapter {@link Function} to adopt a {@link io.reactivex.rxjava3.core.Completable} to {@link Mono}. + */ + public enum RxJava3CompletableToMonoAdapter implements Function> { + + INSTANCE; + + @Override + public Mono apply(io.reactivex.rxjava3.core.Completable source) { + return Mono.from(RxJava3CompletableToPublisherAdapter.INSTANCE.apply(source)); + } + } + + /** + * An adapter {@link Function} to adopt an {@link io.reactivex.rxjava3.core.Observable} to {@link Publisher}. + */ + public enum RxJava3ObservableToPublisherAdapter implements Function, Publisher> { + + INSTANCE; + + @Override + public Publisher apply(io.reactivex.rxjava3.core.Observable source) { + return source.toFlowable(io.reactivex.rxjava3.core.BackpressureStrategy.BUFFER); + } + } + + /** + * An adapter {@link Function} to adopt a {@link io.reactivex.rxjava3.core.Observable} to {@link Mono}. + */ + public enum RxJava3ObservableToMonoAdapter implements Function, Mono> { + + INSTANCE; + + @Override + public Mono apply(io.reactivex.rxjava3.core.Observable source) { + return Mono.from(source.toFlowable(io.reactivex.rxjava3.core.BackpressureStrategy.BUFFER)); + } + } + + /** + * An adapter {@link Function} to adopt a {@link io.reactivex.rxjava3.core.Observable} to {@link Flux}. + */ + public enum RxJava3ObservableToFluxAdapter implements Function, Flux> { + + INSTANCE; + + @Override + public Flux apply(io.reactivex.rxjava3.core.Observable source) { + return Flux.from(source.toFlowable(io.reactivex.rxjava3.core.BackpressureStrategy.BUFFER)); + } + } + + /** + * An adapter {@link Function} to adopt a {@link Publisher} to {@link io.reactivex.rxjava3.core.Flowable}. + */ + public enum PublisherToRxJava3FlowableAdapter implements Function, io.reactivex.rxjava3.core.Flowable> { + + INSTANCE; + + @Override + public io.reactivex.rxjava3.core.Flowable apply(Publisher source) { + return io.reactivex.rxjava3.core.Flowable.fromPublisher(source); + } + } + + /** + * An adapter {@link Function} to adopt a {@link io.reactivex.rxjava3.core.Flowable} to {@link Publisher}. + */ + public enum RxJava3FlowableToPublisherAdapter implements Function, Publisher> { + + INSTANCE; + + @Override + public Publisher apply(io.reactivex.rxjava3.core.Flowable source) { + return source; + } + } + + /** + * An adapter {@link Function} to adopt a {@link Publisher} to {@link io.reactivex.rxjava3.core.Flowable}. + */ + public enum PublisherToRxJava3MaybeAdapter implements Function, io.reactivex.rxjava3.core.Maybe> { + + INSTANCE; + + @Override + public io.reactivex.rxjava3.core.Maybe apply(Publisher source) { + return io.reactivex.rxjava3.core.Flowable.fromPublisher(source).singleElement(); + } + } + + /** + * An adapter {@link Function} to adopt a {@link io.reactivex.rxjava3.core.Maybe} to {@link Publisher}. + */ + public enum RxJava3MaybeToPublisherAdapter implements Function, Publisher> { + + INSTANCE; + + @Override + public Publisher apply(io.reactivex.rxjava3.core.Maybe source) { + return source.toFlowable(); + } + } + + /** + * An adapter {@link Function} to adopt a {@link io.reactivex.rxjava3.core.Maybe} to {@link Mono}. 
+ */ + public enum RxJava3MaybeToMonoAdapter implements Function, Mono> { + + INSTANCE; + + @Override + public Mono apply(io.reactivex.rxjava3.core.Maybe source) { + return Mono.from(source.toFlowable()); + } + } + + /** + * An adapter {@link Function} to adopt a {@link io.reactivex.rxjava3.core.Maybe} to {@link Flux}. + */ + public enum RxJava3MaybeToFluxAdapter implements Function, Flux> { + + INSTANCE; + + @Override + public Flux apply(io.reactivex.rxjava3.core.Maybe source) { + return Flux.from(source.toFlowable()); + } + } + + /** + * An adapter {@link Function} to adopt a {@link Observable} to {@link Single}. + */ + public enum RxJava3ObservableToSingleAdapter + implements Function, io.reactivex.rxjava3.core.Single> { + + INSTANCE; + + @Override + public io.reactivex.rxjava3.core.Single apply(io.reactivex.rxjava3.core.Observable source) { + return source.singleOrError(); + } + } + + /** + * An adapter {@link Function} to adopt a {@link Observable} to {@link Maybe}. + */ + public enum RxJava3ObservableToMaybeAdapter + implements Function, io.reactivex.rxjava3.core.Maybe> { + + INSTANCE; + + @Override + public io.reactivex.rxjava3.core.Maybe apply(io.reactivex.rxjava3.core.Observable source) { + return source.singleElement(); + } + } + + /** + * An adapter {@link Function} to adopt a {@link Single} to {@link Single}. + */ + public enum RxJava3SingleToObservableAdapter + implements Function, io.reactivex.rxjava3.core.Observable> { + + INSTANCE; + + @Override + public io.reactivex.rxjava3.core.Observable apply(io.reactivex.rxjava3.core.Single source) { + return source.toObservable(); + } + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/ReactiveTypes.java b/src/main/java/io/lettuce/core/dynamic/ReactiveTypes.java new file mode 100644 index 0000000000..ba09d7f7da --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/ReactiveTypes.java @@ -0,0 +1,268 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic; + +import java.util.*; +import java.util.Map.Entry; +import java.util.stream.Collectors; + +import org.reactivestreams.Publisher; + +import reactor.core.publisher.Flux; +import reactor.core.publisher.Mono; +import rx.Completable; +import rx.Observable; +import rx.Single; +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.internal.LettuceClassUtils; + +/** + * Utility class to expose details about reactive wrapper types. This class exposes whether a reactive wrapper is supported in + * general and whether a particular type is suitable for no-value/single-value/multi-value usage. + *

+ * Supported types are discovered by their availability on the class path. This class is typically used to determine + * multiplicity and whether a reactive wrapper type is acceptable for a specific operation. + * + * @author Mark Paluch + * @since 5.0 + * @see org.reactivestreams.Publisher + * @see rx.Single + * @see rx.Observable + * @see rx.Completable + * @see io.reactivex.Single + * @see io.reactivex.Maybe + * @see io.reactivex.Observable + * @see io.reactivex.Completable + * @see io.reactivex.Flowable + * @see io.reactivex.rxjava3.core.Single + * @see io.reactivex.rxjava3.core.Maybe + * @see io.reactivex.rxjava3.core.Observable + * @see io.reactivex.rxjava3.core.Completable + * @see io.reactivex.rxjava3.core.Flowable + * @see Mono + * @see Flux + */ +class ReactiveTypes { + + private static final boolean PROJECT_REACTOR_PRESENT = LettuceClassUtils.isPresent("reactor.core.publisher.Mono"); + private static final boolean RXJAVA1_PRESENT = LettuceClassUtils.isPresent("rx.Completable"); + private static final boolean RXJAVA2_PRESENT = LettuceClassUtils.isPresent("io.reactivex.Flowable"); + private static final boolean RXJAVA3_PRESENT = LettuceClassUtils.isPresent("io.reactivex.rxjava3.core.Flowable"); + + private static final Map, Descriptor> REACTIVE_WRAPPERS; + + static { + + Map, Descriptor> reactiveWrappers = new LinkedHashMap<>(3); + + if (RXJAVA1_PRESENT) { + + reactiveWrappers.put(Single.class, new Descriptor(false, true, false)); + reactiveWrappers.put(Completable.class, new Descriptor(false, true, true)); + reactiveWrappers.put(Observable.class, new Descriptor(true, true, false)); + } + + if (RXJAVA2_PRESENT) { + + reactiveWrappers.put(io.reactivex.Single.class, new Descriptor(false, true, false)); + reactiveWrappers.put(io.reactivex.Maybe.class, new Descriptor(false, true, false)); + reactiveWrappers.put(io.reactivex.Completable.class, new Descriptor(false, true, true)); + reactiveWrappers.put(io.reactivex.Flowable.class, new Descriptor(true, true, false)); + reactiveWrappers.put(io.reactivex.Observable.class, new Descriptor(true, true, false)); + } + + if (RXJAVA3_PRESENT) { + + reactiveWrappers.put(io.reactivex.rxjava3.core.Single.class, new Descriptor(false, true, false)); + reactiveWrappers.put(io.reactivex.rxjava3.core.Maybe.class, new Descriptor(false, true, false)); + reactiveWrappers.put(io.reactivex.rxjava3.core.Completable.class, new Descriptor(false, true, true)); + reactiveWrappers.put(io.reactivex.rxjava3.core.Flowable.class, new Descriptor(true, true, false)); + reactiveWrappers.put(io.reactivex.rxjava3.core.Observable.class, new Descriptor(true, true, false)); + } + + if (PROJECT_REACTOR_PRESENT) { + + reactiveWrappers.put(Mono.class, new Descriptor(false, true, false)); + reactiveWrappers.put(Flux.class, new Descriptor(true, true, true)); + reactiveWrappers.put(Publisher.class, new Descriptor(true, true, true)); + } + + REACTIVE_WRAPPERS = Collections.unmodifiableMap(reactiveWrappers); + } + + /** + * Returns {@literal true} if reactive support is available. More specifically, whether RxJava1/2 or Project Reactor + * libraries are on the class path. + * + * @return {@literal true} if reactive support is available. + */ + public static boolean isAvailable() { + return isAvailable(ReactiveLibrary.PROJECT_REACTOR) || isAvailable(ReactiveLibrary.RXJAVA1) + || isAvailable(ReactiveLibrary.RXJAVA2) || isAvailable(ReactiveLibrary.RXJAVA3); + } + + /** + * Returns {@literal true} if the {@link ReactiveLibrary} is available. 
+ * + * @param reactiveLibrary must not be {@literal null}. + * @return {@literal true} if the {@link ReactiveLibrary} is available. + */ + public static boolean isAvailable(ReactiveLibrary reactiveLibrary) { + + LettuceAssert.notNull(reactiveLibrary, "ReactiveLibrary must not be null!"); + + switch (reactiveLibrary) { + case PROJECT_REACTOR: + return PROJECT_REACTOR_PRESENT; + case RXJAVA1: + return RXJAVA1_PRESENT; + case RXJAVA2: + return RXJAVA2_PRESENT; + case RXJAVA3: + return RXJAVA3_PRESENT; + } + + throw new IllegalArgumentException(String.format("ReactiveLibrary %s not supported", reactiveLibrary)); + } + + /** + * Returns {@literal true} if the {@code type} is a supported reactive wrapper type. + * + * @param type must not be {@literal null}. + * @return {@literal true} if the {@code type} is a supported reactive wrapper type. + */ + public static boolean supports(Class type) { + return isNoValueType(type) || isSingleValueType(type) || isMultiValueType(type); + } + + /** + * Returns {@literal true} if {@code type} is a reactive wrapper type that contains no value. + * + * @param type must not be {@literal null}. + * @return {@literal true} if {@code type} is a reactive wrapper type that contains no value. + */ + public static boolean isNoValueType(Class type) { + + LettuceAssert.notNull(type, "Class must not be null!"); + + return findDescriptor(type).map(Descriptor::isNoValue).orElse(false); + } + + /** + * Returns {@literal true} if {@code type} is a reactive wrapper type for a single value. + * + * @param type must not be {@literal null}. + * @return {@literal true} if {@code type} is a reactive wrapper type for a single value. + */ + public static boolean isSingleValueType(Class type) { + + LettuceAssert.notNull(type, "Class must not be null!"); + + return findDescriptor(type).map((descriptor) -> !descriptor.isMultiValue() && !descriptor.isNoValue()).orElse(false); + } + + /** + * Returns {@literal true} if {@code type} is a reactive wrapper type supporting multiple values ({@code 0..N} elements). + * + * @param type must not be {@literal null}. + * @return {@literal true} if {@code type} is a reactive wrapper type supporting multiple values ({@code 0..N} elements). + */ + public static boolean isMultiValueType(Class type) { + + LettuceAssert.notNull(type, "Class must not be null!"); + + // Prevent single-types with a multi-hierarchy supertype to be reported as multi type + // See Mono implements Publisher + if (isSingleValueType(type)) { + return false; + } + + return findDescriptor(type).map(Descriptor::isMultiValue).orElse(false); + } + + /** + * Returns a collection of No-Value wrapper types. + * + * @return a collection of No-Value wrapper types. + */ + public static Collection> getNoValueTypes() { + return REACTIVE_WRAPPERS.entrySet().stream().filter(entry -> entry.getValue().isNoValue()).map(Entry::getKey) + .collect(Collectors.toList()); + } + + /** + * Returns a collection of Single-Value wrapper types. + * + * @return a collection of Single-Value wrapper types. + */ + public static Collection> getSingleValueTypes() { + return REACTIVE_WRAPPERS.entrySet().stream().filter(entry -> !entry.getValue().isMultiValue()).map(Entry::getKey) + .collect(Collectors.toList()); + } + + /** + * Returns a collection of Multi-Value wrapper types. + * + * @return a collection of Multi-Value wrapper types. 
+ */ + public static Collection> getMultiValueTypes() { + return REACTIVE_WRAPPERS.entrySet().stream().filter(entry -> entry.getValue().isMultiValue()).map(Entry::getKey) + .collect(Collectors.toList()); + } + + private static Optional findDescriptor(Class rhsType) { + + for (Class type : REACTIVE_WRAPPERS.keySet()) { + if (LettuceClassUtils.isAssignable(type, rhsType)) { + return Optional.ofNullable(REACTIVE_WRAPPERS.get(type)); + } + } + return Optional.empty(); + } + + /** + * Enumeration of supported reactive libraries. + * + * @author Mark Paluch + */ + enum ReactiveLibrary { + PROJECT_REACTOR, RXJAVA1, RXJAVA2, RXJAVA3; + } + + public static class Descriptor { + private final boolean isMultiValue; + private final boolean supportsEmpty; + private final boolean isNoValue; + + public Descriptor(boolean isMultiValue, boolean canBeEmpty, boolean isNoValue) { + this.isMultiValue = isMultiValue; + this.supportsEmpty = canBeEmpty; + this.isNoValue = isNoValue; + } + + public boolean isMultiValue() { + return this.isMultiValue; + } + + public boolean supportsEmpty() { + return this.supportsEmpty; + } + + public boolean isNoValue() { + return this.isNoValue; + } + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/RedisCommandFactory.java b/src/main/java/io/lettuce/core/dynamic/RedisCommandFactory.java new file mode 100644 index 0000000000..ff862228eb --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/RedisCommandFactory.java @@ -0,0 +1,342 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.dynamic; + +import java.lang.reflect.InvocationHandler; +import java.lang.reflect.Method; +import java.lang.reflect.Proxy; +import java.nio.charset.StandardCharsets; +import java.util.*; + +import io.lettuce.core.AbstractRedisReactiveCommands; +import io.lettuce.core.RedisCommandExecutionException; +import io.lettuce.core.api.StatefulConnection; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; +import io.lettuce.core.codec.ByteArrayCodec; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.dynamic.batch.BatchSize; +import io.lettuce.core.dynamic.intercept.DefaultMethodInvokingInterceptor; +import io.lettuce.core.dynamic.intercept.InvocationProxyFactory; +import io.lettuce.core.dynamic.intercept.MethodInterceptor; +import io.lettuce.core.dynamic.intercept.MethodInvocation; +import io.lettuce.core.dynamic.output.CommandOutputFactoryResolver; +import io.lettuce.core.dynamic.output.OutputRegistry; +import io.lettuce.core.dynamic.output.OutputRegistryCommandOutputFactoryResolver; +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.internal.LettuceLists; +import io.lettuce.core.models.command.CommandDetail; +import io.lettuce.core.models.command.CommandDetailParser; +import io.lettuce.core.protocol.RedisCommand; +import io.lettuce.core.support.ConnectionWrapping; +import io.netty.util.internal.logging.InternalLogger; +import io.netty.util.internal.logging.InternalLoggerFactory; + +/** + * Factory to create Redis Command interface instances. + *

+ * This class is the entry point to implement command interfaces and obtain a reference to the implementation. Redis Command + * interfaces provide a dynamic API that are declared in userland code. {@link RedisCommandFactory} and its supportive classes + * analyze method declarations and derive from those factories to create and execute {@link RedisCommand}s. + * + *

Example

+ * + *
+ * public interface MyRedisCommands extends Commands {
+ *
+ *     String get(String key); // Synchronous Execution of GET
+ *
+ *     @Command("GET")
+ *     byte[] getAsBytes(String key); // Synchronous Execution of GET returning data as byte array
+ *
+ *     @Command("SET")
+ *     // synchronous execution applying a Timeout
+ *     String setSync(String key, String value, Timeout timeout);
+ *
+ *     Future<String> set(String key, String value); // asynchronous SET execution
+ *
+ *     @Command("SET")
+ *     Mono<String> setReactive(String key, String value, SetArgs setArgs); // reactive SET execution using SetArgs
+ *
+ *     @CommandNaming(strategy = Strategy.DOT)
+ *     // support for Redis Module command notation -> NR.RUN
+ *     double nrRun(String key, int... indexes);
+ * }
+ *
+ * RedisCommandFactory factory = new RedisCommandFactory(connection);
+ *
+ * MyRedisCommands commands = factory.getCommands(MyRedisCommands.class);
+ *
+ * String value = commands.get("key");
+ *
+ * 
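+ * The {@code connection} used above can be obtained, for example, from a {@code RedisClient}; a minimal sketch, with the
+ * URI and client setup shown for illustration only:
+ *
+ * RedisClient client = RedisClient.create("redis://localhost");
+ * StatefulRedisConnection&lt;String, String&gt; connection = client.connect();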
+ * + * @author Mark Paluch + * @since 5.0 + * @see io.lettuce.core.dynamic.annotation.Command + * @see CommandMethod + */ +public class RedisCommandFactory { + + private final InternalLogger log = InternalLoggerFactory.getInstance(getClass()); + private final StatefulConnection connection; + private final DefaultCommandMethodVerifier commandMethodVerifier; + private final List> redisCodecs = new ArrayList<>(); + + private CommandOutputFactoryResolver commandOutputFactoryResolver = new OutputRegistryCommandOutputFactoryResolver( + new OutputRegistry()); + + private boolean verifyCommandMethods = true; + + /** + * Create a new {@link CommandFactory} given {@link StatefulConnection}. + * + * @param connection must not be {@literal null}. + */ + public RedisCommandFactory(StatefulConnection connection) { + this(connection, LettuceLists.newList(new ByteArrayCodec(), new StringCodec(StandardCharsets.UTF_8))); + } + + /** + * Create a new {@link CommandFactory} given {@link StatefulConnection} and a {@link List} of {@link RedisCodec}s to use + * + * @param connection must not be {@literal null}. + * @param redisCodecs must not be {@literal null}. + */ + public RedisCommandFactory(StatefulConnection connection, Iterable> redisCodecs) { + + LettuceAssert.notNull(connection, "Redis Connection must not be null"); + LettuceAssert.notNull(redisCodecs, "Iterable of RedisCodec must not be null"); + + this.connection = connection; + this.redisCodecs.addAll(LettuceLists.newList(redisCodecs)); + this.commandMethodVerifier = new DefaultCommandMethodVerifier(getCommands(connection)); + } + + @SuppressWarnings("unchecked") + private List getCommands(StatefulConnection connection) { + + List commands = Collections.emptyList(); + try { + if (connection instanceof StatefulRedisConnection) { + commands = ((StatefulRedisConnection) connection).sync().command(); + } + + if (connection instanceof StatefulRedisClusterConnection) { + commands = ((StatefulRedisClusterConnection) connection).sync().command(); + } + } catch (RedisCommandExecutionException e) { + log.debug("Cannot obtain command metadata", e); + } + + if (commands.isEmpty()) { + setVerifyCommandMethods(false); + } + + return CommandDetailParser.parse(commands); + } + + /** + * Set a {@link CommandOutputFactoryResolver}. + * + * @param commandOutputFactoryResolver must not be {@literal null}. + */ + public void setCommandOutputFactoryResolver(CommandOutputFactoryResolver commandOutputFactoryResolver) { + + LettuceAssert.notNull(commandOutputFactoryResolver, "CommandOutputFactoryResolver must not be null"); + + this.commandOutputFactoryResolver = commandOutputFactoryResolver; + } + + /** + * Enables/disables command verification which checks the command name against Redis {@code COMMAND} and the argument count. + * + * @param verifyCommandMethods {@literal true} to enable command verification (default) or {@literal false} to disable + * command verification. + */ + public void setVerifyCommandMethods(boolean verifyCommandMethods) { + this.verifyCommandMethods = verifyCommandMethods; + } + + /** + * Returns a Redis Commands interface instance for the given interface. + * + * @param commandInterface must not be {@literal null}. + * @param command interface type. + * @return the implemented Redis Commands interface. 
+ */ + public T getCommands(Class commandInterface) { + + LettuceAssert.notNull(commandInterface, "Redis Command Interface must not be null"); + + RedisCommandsMetadata metadata = new DefaultRedisCommandsMetadata(commandInterface); + + InvocationProxyFactory factory = new InvocationProxyFactory(); + factory.addInterface(commandInterface); + + BatchAwareCommandLookupStrategy lookupStrategy = new BatchAwareCommandLookupStrategy( + new CompositeCommandLookupStrategy(), metadata); + + factory.addInterceptor(new DefaultMethodInvokingInterceptor()); + factory.addInterceptor(new CommandFactoryExecutorMethodInterceptor(metadata, lookupStrategy)); + + return factory.createProxy(commandInterface.getClassLoader()); + } + + /** + * {@link CommandFactory}-based {@link MethodInterceptor} to create and invoke Redis Commands using asynchronous and + * synchronous execution models. + * + * @author Mark Paluch + */ + static class CommandFactoryExecutorMethodInterceptor implements MethodInterceptor { + + private final Map commandMethods = new HashMap<>(); + + CommandFactoryExecutorMethodInterceptor(RedisCommandsMetadata redisCommandsMetadata, + ExecutableCommandLookupStrategy strategy) { + + for (Method method : redisCommandsMetadata.getMethods()) { + + ExecutableCommand executableCommand = strategy.resolveCommandMethod(DeclaredCommandMethod.create(method), + redisCommandsMetadata); + commandMethods.put(method, executableCommand); + } + } + + @Override + public Object invoke(MethodInvocation invocation) throws Throwable { + + Method method = invocation.getMethod(); + Object[] arguments = invocation.getArguments(); + + if (hasFactoryFor(method)) { + + ExecutableCommand executableCommand = commandMethods.get(method); + return executableCommand.execute(arguments); + } + + return invocation.proceed(); + } + + private boolean hasFactoryFor(Method method) { + return commandMethods.containsKey(method); + } + } + + @SuppressWarnings({ "rawtypes", "unchecked" }) + class CompositeCommandLookupStrategy implements ExecutableCommandLookupStrategy { + + private final AsyncExecutableCommandLookupStrategy async; + private final ReactiveExecutableCommandLookupStrategy reactive; + + CompositeCommandLookupStrategy() { + + CommandMethodVerifier verifier = verifyCommandMethods ? 
commandMethodVerifier : CommandMethodVerifier.NONE; + + AbstractRedisReactiveCommands reactive = getReactiveCommands(); + + LettuceAssert.isTrue(reactive != null, "Reactive commands is null"); + + this.async = new AsyncExecutableCommandLookupStrategy(redisCodecs, commandOutputFactoryResolver, verifier, + (StatefulConnection) connection); + + this.reactive = new ReactiveExecutableCommandLookupStrategy(redisCodecs, commandOutputFactoryResolver, verifier, + reactive); + } + + private AbstractRedisReactiveCommands getReactiveCommands() { + + Object reactive = null; + + if (connection instanceof StatefulRedisConnection) { + reactive = ((StatefulRedisConnection) connection).reactive(); + } + + if (connection instanceof StatefulRedisClusterConnection) { + reactive = ((StatefulRedisClusterConnection) connection).reactive(); + } + + if (reactive != null && Proxy.isProxyClass(reactive.getClass())) { + + InvocationHandler invocationHandler = Proxy.getInvocationHandler(reactive); + reactive = ConnectionWrapping.unwrap(invocationHandler); + } + + return (AbstractRedisReactiveCommands) reactive; + } + + @Override + public ExecutableCommand resolveCommandMethod(CommandMethod method, RedisCommandsMetadata metadata) { + + if (method.isReactiveExecution()) { + return reactive.resolveCommandMethod(method, metadata); + } + + return async.resolveCommandMethod(method, metadata); + } + } + + @SuppressWarnings({ "rawtypes", "unchecked" }) + class BatchAwareCommandLookupStrategy implements ExecutableCommandLookupStrategy { + + private final ExecutableCommandLookupStrategy fallbackStrategy; + private final boolean globalBatching; + private final CommandMethodVerifier verifier; + private final long batchSize; + + private Batcher batcher = Batcher.NONE; + private BatchExecutableCommandLookupStrategy batchingStrategy; + + public BatchAwareCommandLookupStrategy(ExecutableCommandLookupStrategy fallbackStrategy, + RedisCommandsMetadata metadata) { + + this.fallbackStrategy = fallbackStrategy; + this.verifier = verifyCommandMethods ? commandMethodVerifier : CommandMethodVerifier.NONE; + + if (metadata.hasAnnotation(BatchSize.class)) { + + BatchSize batchSize = metadata.getAnnotation(BatchSize.class); + + this.globalBatching = true; + this.batchSize = batchSize.value(); + + } else { + + this.globalBatching = false; + this.batchSize = -1; + } + } + + @Override + public ExecutableCommand resolveCommandMethod(CommandMethod method, RedisCommandsMetadata metadata) { + + if (BatchExecutableCommandLookupStrategy.supports(method) || globalBatching) { + + if (batcher == Batcher.NONE) { + batcher = new SimpleBatcher((StatefulConnection) connection, Math.toIntExact(batchSize)); + batchingStrategy = new BatchExecutableCommandLookupStrategy(redisCodecs, commandOutputFactoryResolver, + verifier, batcher, (StatefulConnection) connection); + } + + return batchingStrategy.resolveCommandMethod(method, metadata); + } + + return fallbackStrategy.resolveCommandMethod(method, metadata); + } + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/RedisCommandsMetadata.java b/src/main/java/io/lettuce/core/dynamic/RedisCommandsMetadata.java new file mode 100644 index 0000000000..a41c2564a5 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/RedisCommandsMetadata.java @@ -0,0 +1,52 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic; + +import java.lang.annotation.Annotation; +import java.lang.reflect.Method; +import java.util.Collection; + +/** + * Interface exposing Redis command interface metadata. + * + * @author Mark Paluch + * @since 5.0 + */ +interface RedisCommandsMetadata { + + Collection getMethods(); + + /** + * Returns the Redis Commands interface. + * + * @return + */ + Class getCommandsInterface(); + + /** + * Lookup an interface annotation. + * + * @param annotationClass the annotation class. + * @return the annotation object or {@literal null} if not found. + */ + A getAnnotation(Class annotationClass); + + /** + * @param annotationClass the annotation class. + * @return {@literal true} if the interface is annotated with {@code annotationClass}. + */ + boolean hasAnnotation(Class annotationClass); +} diff --git a/src/main/java/io/lettuce/core/dynamic/SimpleBatcher.java b/src/main/java/io/lettuce/core/dynamic/SimpleBatcher.java new file mode 100644 index 0000000000..71515bb30f --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/SimpleBatcher.java @@ -0,0 +1,172 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic; + +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; +import java.util.concurrent.BlockingQueue; +import java.util.concurrent.LinkedBlockingQueue; +import java.util.concurrent.atomic.AtomicBoolean; + +import io.lettuce.core.api.StatefulConnection; +import io.lettuce.core.dynamic.batch.CommandBatching; +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.protocol.RedisCommand; + +/** + * Simple threadsafe {@link Batcher} that flushes queued command when either: + *
    + *
+ * <ul>
+ * <li>Reaches the configured {@link #batchSize}</li>
+ * <li>Encounters a {@link CommandBatching#flush() force flush}</li>
+ * </ul>
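+ *
+ * A minimal usage sketch, assuming an existing {@link StatefulConnection} {@code connection} and a prepared
+ * {@link RedisCommand} {@code command} (names are illustrative only):
+ *
+ * SimpleBatcher batcher = new SimpleBatcher(connection, 10);
+ * batcher.batch(command, CommandBatching.queue()); // queued only, nothing dispatched yet
+ * batcher.flush(); // dispatches all queued commands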
+ * + * @author Mark Paluch + */ +class SimpleBatcher implements Batcher { + + private final StatefulConnection connection; + private final int batchSize; + private final BlockingQueue> queue = new LinkedBlockingQueue<>(); + private final AtomicBoolean flushing = new AtomicBoolean(); + + public SimpleBatcher(StatefulConnection connection, int batchSize) { + + LettuceAssert.isTrue(batchSize == -1 || batchSize > 1, "Batch size must be greater zero or -1"); + this.connection = connection; + this.batchSize = batchSize; + } + + @Override + public BatchTasks batch(RedisCommand command, CommandBatching batching) { + + queue.add(command); + + if (batching == CommandBatching.queue()) { + return BatchTasks.EMPTY; + } + + boolean forcedFlush = batching == CommandBatching.flush(); + + boolean defaultFlush = false; + + if (!forcedFlush) { + if (queue.size() >= batchSize) { + defaultFlush = true; + } + } + + if (defaultFlush || forcedFlush) { + return flush(forcedFlush); + } + + return BatchTasks.EMPTY; + } + + @Override + public BatchTasks flush() { + return flush(true); + } + + protected BatchTasks flush(boolean forcedFlush) { + + boolean defaultFlush = false; + + List> commands = new ArrayList<>(Math.max(batchSize, 10)); + + while (flushing.compareAndSet(false, true)) { + + try { + + int consume = -1; + + if (!forcedFlush) { + long queuedItems = queue.size(); + if (queuedItems >= batchSize) { + consume = batchSize; + defaultFlush = true; + } + } + + List> batch = doFlush(forcedFlush, defaultFlush, consume); + if (batch != null) { + commands.addAll(batch); + } + + if (defaultFlush && !queue.isEmpty() && queue.size() > batchSize) { + continue; + } + + return new BatchTasks(commands); + + } finally { + flushing.set(false); + } + } + + return BatchTasks.EMPTY; + } + + private List> doFlush(boolean forcedFlush, boolean defaultFlush, int consume) { + + List> commands = null; + if (forcedFlush) { + commands = prepareForceFlush(); + } else if (defaultFlush) { + commands = prepareDefaultFlush(consume); + } + + if (commands != null && !commands.isEmpty()) { + if (commands.size() == 1) { + connection.dispatch(commands.get(0)); + } else { + connection.dispatch(commands); + } + + return commands; + } + return Collections.emptyList(); + } + + private List> prepareForceFlush() { + + List> batch = new ArrayList<>(Math.max(batchSize, 10)); + + do { + RedisCommand poll = queue.poll(); + + assert poll != null; + batch.add(poll); + } while (!queue.isEmpty()); + + return batch; + } + + private List> prepareDefaultFlush(int consume) { + + List> batch = new ArrayList<>(Math.max(consume, 10)); + + while ((batch.size() < consume || consume == -1) && !queue.isEmpty()) { + + RedisCommand poll = queue.poll(); + + assert poll != null; + batch.add(poll); + } + + return batch; + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/annotation/Command.java b/src/main/java/io/lettuce/core/dynamic/annotation/Command.java new file mode 100644 index 0000000000..d56916c3c1 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/annotation/Command.java @@ -0,0 +1,61 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.annotation; + +import java.lang.annotation.*; + +import io.lettuce.core.dynamic.domain.Timeout; + +/** + * Redis command method annotation specifying a command string. A command string can contain the command name, a sequence of + * command string bytes and parameter references. + *

+ * Parameters: Parameters can be referenced by their name {@code :myArg} or by their index {@code ?0}. Parameters that are
+ * not referenced are appended to the command in the order of their appearance. Declared parameters are matched against
+ * {@link io.lettuce.core.codec.RedisCodec} for codec resolution. Additional parameter types such as {@link Timeout} control
+ * execution behavior and are not added to command arguments; a sketch follows the usage example below.
+ *

+ * Usage: + * + *

+ *     @Command("SET ?0 ?1")
+ *     public String setKey(String key, String value)
+ *
+ *     @Command("SET :key :value")
+ *     public String setKeyNamed(@Param("key") String key, @Param("value") String value)
+ * 
+ *
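+ * As a further illustrative sketch (the method name is hypothetical), index references can be combined with a
+ * {@link Timeout} parameter; the {@code Timeout} argument only controls execution and is not sent as a command argument:
+ *
+ *     @Command("GET ?0")
+ *     public String getWithTimeout(String key, Timeout timeout)
+ *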

+ * Implementation notes: A {@link Command#value()} is split into command segments of which each segment is represented as ASCII + * string or parameter reference. + * + * @author Mark Paluch + * @since 5.0 + * @see CommandNaming + * @see Param + * @see Key + * @see Value + * @see io.lettuce.core.dynamic.codec.RedisCodecResolver + */ +@Retention(RetentionPolicy.RUNTIME) +@Target(ElementType.METHOD) +@Documented +public @interface Command { + + /** + * Command string. + */ + String value(); +} diff --git a/src/main/java/io/lettuce/core/dynamic/annotation/CommandNaming.java b/src/main/java/io/lettuce/core/dynamic/annotation/CommandNaming.java new file mode 100644 index 0000000000..6420d978bf --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/annotation/CommandNaming.java @@ -0,0 +1,84 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.annotation; + +import java.lang.annotation.*; + +/** + * Command naming strategy for Redis command methods. Redis command methods name can be provided either by annotating method + * with {@link Command} or derived from its name. Annotate a command interface or method with {@link CommandNaming} to set a + * command naming {@link Strategy}. + * + * @author Mark Paluch + * @since 5.0 + * @see Command + */ +@Retention(RetentionPolicy.RUNTIME) +@Target({ ElementType.TYPE, ElementType.METHOD }) +@Documented +public @interface CommandNaming { + + /** + * Apply a naming {@link Strategy} to transform the method name into a Redis command name. + */ + Strategy strategy() default Strategy.DEFAULT; + + /** + * Adjust letter case, defaults to {@link LetterCase#UPPERCASE}. + */ + LetterCase letterCase() default LetterCase.DEFAULT; + + public enum Strategy { + + /** + * Replace camel humps with spaces and split the method name into multiple command segments. A method named + * {@code clientSetname} would issue a command {@code CLIENT SETNAME}. + */ + SPLIT, + + /** + * Replace camel humps with spaces. A method named {@code nrRun} would issue a command named {@code NR.RUN}. + */ + DOT, + + /** + * Passthru the command as-is. A method named {@code clientSetname} would issue a command named {@code CLIENTSETNAME}. + */ + METHOD_NAME, + + /** + * Not defined here which defaults to {@link #SPLIT} if nothing else found. + */ + DEFAULT; + } + + public enum LetterCase { + /** + * Keep command name as specified. + */ + AS_IS, + + /** + * Convert command to uppercase. + */ + UPPERCASE, + + /** + * Not defined here which defaults to {@link #UPPERCASE} if nothing else found. + */ + DEFAULT; + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/annotation/Key.java b/src/main/java/io/lettuce/core/dynamic/annotation/Key.java new file mode 100644 index 0000000000..1f6754c4c0 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/annotation/Key.java @@ -0,0 +1,31 @@ +/* + * Copyright 2011-2020 the original author or authors. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.annotation; + +import java.lang.annotation.*; + +/** + * Marker annotation to declare a method parameter as key. + * + * @author Mark Paluch + * @see Value + * @since 5.0 + */ +@Retention(RetentionPolicy.RUNTIME) +@Target(ElementType.PARAMETER) +@Documented +public @interface Key { +} diff --git a/src/main/java/io/lettuce/core/dynamic/annotation/Param.java b/src/main/java/io/lettuce/core/dynamic/annotation/Param.java new file mode 100644 index 0000000000..a73e9ca6ef --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/annotation/Param.java @@ -0,0 +1,38 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.annotation; + +import java.lang.annotation.*; + +/** + * Annotation to bind method parameters using their name. + * + * @author Mark Paluch + * @see Key + * @since 5.0 + */ +@Retention(RetentionPolicy.RUNTIME) +@Target(ElementType.PARAMETER) +@Documented +public @interface Param { + + /** + * Name of the parameter. + * + * @return + */ + String value(); +} diff --git a/src/main/java/io/lettuce/core/dynamic/annotation/Value.java b/src/main/java/io/lettuce/core/dynamic/annotation/Value.java new file mode 100644 index 0000000000..ff3896ff8d --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/annotation/Value.java @@ -0,0 +1,31 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.annotation; + +import java.lang.annotation.*; + +/** + * Marker annotation to declare a method parameter as value. 
+ * + * @author Mark Paluch + * @see Key + * @since 5.0 + */ +@Retention(RetentionPolicy.RUNTIME) +@Target(ElementType.PARAMETER) +@Documented +public @interface Value { +} diff --git a/src/main/java/io/lettuce/core/dynamic/annotation/package-info.java b/src/main/java/io/lettuce/core/dynamic/annotation/package-info.java new file mode 100644 index 0000000000..43894661b0 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/annotation/package-info.java @@ -0,0 +1,4 @@ +/** + * Central domain abstractions to be used in combination with Redis Command interfaces. + */ +package io.lettuce.core.dynamic.annotation; diff --git a/src/main/java/io/lettuce/core/dynamic/batch/BatchException.java b/src/main/java/io/lettuce/core/dynamic/batch/BatchException.java new file mode 100644 index 0000000000..893b953bcd --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/batch/BatchException.java @@ -0,0 +1,56 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.batch; + +import java.util.Collections; +import java.util.List; + +import io.lettuce.core.RedisCommandExecutionException; +import io.lettuce.core.protocol.RedisCommand; + +/** + * Batch exception to collect multiple errors from batched command execution. + *

+ * Commands that fail during the batch cause a {@link BatchException} while non-failed commands remain executed successfully. + * + * @author Mark Paluch + * @since 5.0 + * @see BatchExecutor + * @see BatchSize + * @see CommandBatching + */ +@SuppressWarnings("serial") +public class BatchException extends RedisCommandExecutionException { + + private final List> failedCommands; + + /** + * Create a new {@link BatchException}. + * + * @param failedCommands {@link List} of failed {@link RedisCommand}s. + */ + public BatchException(List> failedCommands) { + super("Error during batch command execution"); + this.failedCommands = Collections.unmodifiableList(failedCommands); + } + + /** + * @return the failed commands. + */ + public List> getFailedCommands() { + return failedCommands; + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/batch/BatchExecutor.java b/src/main/java/io/lettuce/core/dynamic/batch/BatchExecutor.java new file mode 100644 index 0000000000..521103d86d --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/batch/BatchExecutor.java @@ -0,0 +1,38 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.batch; + +/** + * Batch executor interface to enforce command queue flushing using {@link BatchSize}. + *

+ * Commands remain in a batch queue until the batch size is reached or the queue is {@link BatchExecutor#flush() flushed}.
+ * If the batch size is not reached, the commands are not executed.
+ *

+ * Commands that fail during the batch cause a {@link BatchException} while non-failed commands remain executed successfully. + * + * @author Mark Paluch + * @since 5.0 + * @see BatchSize + */ +public interface BatchExecutor { + + /** + * Flush the command queue resulting in the queued commands being executed. + * + * @throws BatchException if at least one command failed. + */ + void flush() throws BatchException; +} diff --git a/src/main/java/io/lettuce/core/dynamic/batch/BatchSize.java b/src/main/java/io/lettuce/core/dynamic/batch/BatchSize.java new file mode 100644 index 0000000000..28d7b83eba --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/batch/BatchSize.java @@ -0,0 +1,67 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.batch; + +import java.lang.annotation.*; + +/** + * Redis command method annotation declaring a command interface to use batching with a specified {@code batchSize}. + *

+ * Usage: + * + *

+ * @BatchSize(50)
+ * public interface MyCommands extends Commands {
+ *
+ *   public void set(String key, String value);
+ *
+ *   public RedisFuture&lt;String&gt; get(String key);
+ * }
+ * 
+ *

+ * Command batching executes commands in a deferred manner. This also means that no result is available at the time of
+ * invocation. Batching can only be used with synchronous methods without a return value ({@code void}) or asynchronous
+ * methods returning a {@link io.lettuce.core.RedisFuture}. Reactive command batching is not supported because reactively
+ * executed commands maintain their own subscription lifecycle that is decoupled from command method batching.
+ *

+ * Command methods participating in batching share a single batch queue. All method invocations are queued until the batch
+ * size is reached. Command batching can also be specified dynamically by providing a {@link CommandBatching} parameter on
+ * each command invocation. {@link CommandBatching} parameters take precedence over {@link BatchSize} and can be used to
+ * enqueue commands or to force flushing of batched commands.
+ *

+ * Alternatively, a command interface can implement {@link BatchExecutor} to {@link BatchExecutor#flush()} commands before
+ * the batch size is reached, as sketched below. Commands remain in a batch queue until the batch size is reached or the
+ * queue is {@link BatchExecutor#flush() flushed}. If the batch size is not reached, the commands are not executed.
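+ *
+ * A sketch of manual flushing through {@link BatchExecutor} (interface and variable names are illustrative only):
+ *
+ * @BatchSize(50)
+ * public interface MyBatchCommands extends Commands, BatchExecutor {
+ *
+ *   public void set(String key, String value);
+ * }
+ *
+ * commands.set("key", "value"); // queued only
+ * commands.flush();             // executes all queued commands
+ *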

+ * Batching command interfaces are thread-safe and can be shared amongst multiple threads. + * + * @author Mark Paluch + * @since 5.0 + * @see CommandBatching + * @see BatchExecutor + */ +@Retention(RetentionPolicy.RUNTIME) +@Target(ElementType.TYPE) +@Documented +public @interface BatchSize { + + /** + * Declares the batch size for the command method. + * + * @return a positive, non-zero number of commands. + */ + int value(); +} diff --git a/src/main/java/io/lettuce/core/dynamic/batch/CommandBatching.java b/src/main/java/io/lettuce/core/dynamic/batch/CommandBatching.java new file mode 100644 index 0000000000..98779bd452 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/batch/CommandBatching.java @@ -0,0 +1,103 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.batch; + +/** + * Programmatic command batching API. + *

+ * {@link CommandBatching} is used to queue commands in a batch queue and flush the command queue on command invocation. Usage: + * + *

+ * public interface MyCommands extends Commands {
+ *
+ *   public void set(String key, String value, CommandBatching batching);
+ *
+ *   public RedisFuture&lt;String&gt; get(String key, CommandBatching batching);
+ * }
+ *
+ * MyCommands commands = …
+ *
+ * commands.set("key", "value", CommandBatching.queue());
+ * commands.get("key", CommandBatching.flush());
+ * 
+ *

+ * Using {@link CommandBatching} in a method signature turns the command method into a batched command method.
+ * Command batching executes commands in a deferred manner. This also means that no result is available at the time of
+ * invocation. Batching can only be used with synchronous methods without a return value ({@code void}) or asynchronous
+ * methods returning a {@link io.lettuce.core.RedisFuture}. Reactive command batching is not supported because reactively
+ * executed commands maintain their own subscription lifecycle that is decoupled from command method batching.
+ *

+ * + * @author Mark Paluch + * @since 5.0 + * @see BatchSize + */ +public abstract class CommandBatching { + + /** + * Flush the command batch queue after adding a command to the batch queue. + * + * @return {@link CommandBatching} to flush the command batch queue after adding a command to the batch queue. + */ + public static CommandBatching flush() { + return FlushCommands.instance(); + } + + /** + * Enqueue the command to the batch queue. + * + * @return {@link CommandBatching} to enqueue the command to the batch queue. + */ + public static CommandBatching queue() { + return QueueCommands.instance(); + } + + /** + * {@link CommandBatching} to flush the command batch queue after adding a command to the batch queue. + */ + static class FlushCommands extends CommandBatching { + + static final FlushCommands INSTANCE = new FlushCommands(); + + private FlushCommands() { + } + + /** + * @return a static instance of {@link FlushCommands}. + */ + public static CommandBatching instance() { + return INSTANCE; + } + } + + /** + * {@link CommandBatching} to enqueue the command to the batch queue. + */ + static class QueueCommands extends CommandBatching { + + static final QueueCommands INSTANCE = new QueueCommands(); + + private QueueCommands() { + } + + /** + * @return a static instance of {@link QueueCommands}. + */ + public static QueueCommands instance() { + return INSTANCE; + } + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/batch/package-info.java b/src/main/java/io/lettuce/core/dynamic/batch/package-info.java new file mode 100644 index 0000000000..87e83e2f04 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/batch/package-info.java @@ -0,0 +1,5 @@ +/** + * Batching with Redis Command interfaces. + */ +package io.lettuce.core.dynamic.batch; + diff --git a/src/main/java/io/lettuce/core/dynamic/codec/AnnotationRedisCodecResolver.java b/src/main/java/io/lettuce/core/dynamic/codec/AnnotationRedisCodecResolver.java new file mode 100644 index 0000000000..7414146360 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/codec/AnnotationRedisCodecResolver.java @@ -0,0 +1,317 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.codec; + +import java.util.*; +import java.util.stream.Collectors; + +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.dynamic.CommandMethod; +import io.lettuce.core.dynamic.annotation.Key; +import io.lettuce.core.dynamic.annotation.Value; +import io.lettuce.core.dynamic.parameter.Parameter; +import io.lettuce.core.dynamic.support.ClassTypeInformation; +import io.lettuce.core.dynamic.support.TypeInformation; +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.internal.LettuceLists; + +/** + * Annotation-based {@link RedisCodecResolver}. Considers {@code @Key} and {@code @Value} annotations of method parameters to + * determine a {@link RedisCodec} that is able to handle all involved types. 
+ * + * @author Mark Paluch + * @author Manyanda Chitimbo + * @since 5.0 + * @see Key + * @see Value + */ +public class AnnotationRedisCodecResolver implements RedisCodecResolver { + + private final List> codecs; + + /** + * Creates a new {@link AnnotationRedisCodecResolver} given a {@link List} of {@link RedisCodec}s. + * + * @param codecs must not be {@literal null}. + */ + public AnnotationRedisCodecResolver(List> codecs) { + + LettuceAssert.notNull(codecs, "List of RedisCodecs must not be null"); + + this.codecs = LettuceLists.unmodifiableList(codecs); + } + + @Override + public RedisCodec resolve(CommandMethod commandMethod) { + + LettuceAssert.notNull(commandMethod, "CommandMethod must not be null"); + + Set> keyTypes = findTypes(commandMethod, Key.class); + Set> valueTypes = findTypes(commandMethod, Value.class); + + if (keyTypes.isEmpty() && valueTypes.isEmpty()) { + + Voted> voted = voteForTypeMajority(commandMethod); + if (voted != null) { + return voted.subject; + } + + return codecs.get(0); + } + + if ((keyTypes.size() == 1 && hasAtMostOne(valueTypes)) || (valueTypes.size() == 1 && hasAtMostOne(keyTypes))) { + RedisCodec resolvedCodec = resolveCodec(keyTypes, valueTypes); + if (resolvedCodec != null) { + return resolvedCodec; + } + } + + throw new IllegalStateException(String.format("Cannot resolve Codec for method %s", commandMethod.getMethod())); + } + + private boolean hasAtMostOne(Collection collection) { + return collection.size() <= 1; + } + + private Voted> voteForTypeMajority(CommandMethod commandMethod) { + + List>> votes = codecs.stream().map(redisCodec -> new Voted>(redisCodec, 0)) + .collect(Collectors.toList()); + + commandMethod.getParameters().getBindableParameters().forEach(parameter -> { + vote(votes, parameter); + }); + + Collections.sort(votes); + if (votes.isEmpty()) { + return null; + } + + Voted> voted = votes.get(votes.size() - 1); + + if (voted.votes == 0) { + return null; + } + + return voted; + } + + @SuppressWarnings("rawtypes") + private static void vote(List>> votes, Parameter parameter) { + + for (Voted> vote : votes) { + + ClassTypeInformation typeInformation = ClassTypeInformation.from(vote.subject.getClass()); + + TypeInformation superTypeInformation = typeInformation.getSuperTypeInformation(RedisCodec.class); + + List> typeArguments = superTypeInformation.getTypeArguments(); + + if (typeArguments.size() != 2) { + continue; + } + + TypeInformation parameterType = parameter.getTypeInformation(); + TypeInformation parameterKeyType = ParameterWrappers.getKeyType(parameterType); + TypeInformation parameterValueType = ParameterWrappers.getValueType(parameterType); + + TypeInformation keyType = typeArguments.get(0); + TypeInformation valueType = typeArguments.get(1); + + if (keyType.isAssignableFrom(parameterKeyType)) { + vote.votes++; + } + + if (valueType.isAssignableFrom(parameterValueType)) { + vote.votes++; + } + } + } + + private RedisCodec resolveCodec(Set> keyTypes, Set> valueTypes) { + Class keyType = keyTypes.isEmpty() ? null : keyTypes.iterator().next(); + Class valueType = valueTypes.isEmpty() ? 
null : valueTypes.iterator().next(); + + for (RedisCodec codec : codecs) { + ClassTypeInformation typeInformation = ClassTypeInformation.from(codec.getClass()); + TypeInformation keyTypeArgument = typeInformation.getTypeArgument(RedisCodec.class, 0); + TypeInformation valueTypeArgument = typeInformation.getTypeArgument(RedisCodec.class, 1); + + if (keyTypeArgument == null || valueTypeArgument == null) { + continue; + } + + boolean keyMatch = false; + boolean valueMatch = false; + + if (keyType != null) { + keyMatch = keyTypeArgument.isAssignableFrom(ClassTypeInformation.from(keyType)); + } + + if (valueType != null) { + valueMatch = valueTypeArgument.isAssignableFrom(ClassTypeInformation.from(valueType)); + } + + if (keyType != null && valueType != null && keyMatch && valueMatch) { + return codec; + } + + if (keyType != null && valueType == null && keyMatch) { + return codec; + } + + if (keyType == null && valueType != null && valueMatch) { + return codec; + } + } + + return null; + } + + Set> findTypes(CommandMethod commandMethod, Class annotation) { + + Set> types = new LinkedHashSet<>(); + + for (Parameter parameter : commandMethod.getParameters().getBindableParameters()) { + + types.addAll(parameter.getAnnotations().stream() + .filter(parameterAnnotation -> annotation.isAssignableFrom(parameterAnnotation.getClass())) + .map(parameterAnnotation -> { + + TypeInformation typeInformation = parameter.getTypeInformation(); + + if (annotation == Key.class && ParameterWrappers.hasKeyType(typeInformation)) { + TypeInformation parameterKeyType = ParameterWrappers.getKeyType(typeInformation); + return parameterKeyType.getType(); + } + + return ParameterWrappers.getValueType(typeInformation).getType(); + + }).collect(Collectors.toList())); + } + + return types; + } + + private static class Voted implements Comparable> { + + private T subject; + private int votes; + + Voted(T subject, int votes) { + this.subject = subject; + this.votes = votes; + } + + @Override + public int compareTo(Voted o) { + return votes - o.votes; + } + } + + /** + * Parameter wrapper support for types that encapsulate one or more parameter values. + */ + protected static class ParameterWrappers { + + private static final Set> WRAPPERS = new HashSet<>(); + private static final Set> WITH_KEY_TYPE = new HashSet<>(); + private static final Set> WITH_VALUE_TYPE = new HashSet<>(); + + static { + + WRAPPERS.add(io.lettuce.core.Value.class); + WRAPPERS.add(io.lettuce.core.KeyValue.class); + WRAPPERS.add(io.lettuce.core.ScoredValue.class); + WRAPPERS.add(io.lettuce.core.Range.class); + + WRAPPERS.add(List.class); + WRAPPERS.add(Collection.class); + WRAPPERS.add(Set.class); + WRAPPERS.add(Iterable.class); + WRAPPERS.add(Map.class); + + WITH_VALUE_TYPE.add(io.lettuce.core.Value.class); + WITH_VALUE_TYPE.add(io.lettuce.core.KeyValue.class); + WITH_KEY_TYPE.add(io.lettuce.core.KeyValue.class); + WITH_VALUE_TYPE.add(io.lettuce.core.ScoredValue.class); + + WITH_KEY_TYPE.add(Map.class); + WITH_VALUE_TYPE.add(Map.class); + } + + /** + * @param typeInformation must not be {@literal null}. + * @return {@literal true} if {@code parameterClass} is a parameter wrapper. + */ + public static boolean supports(TypeInformation typeInformation) { + return WRAPPERS.contains(typeInformation.getType()) + || (typeInformation.getType().isArray() && !(typeInformation.getType().equals(byte[].class))); + } + + /** + * @param typeInformation must not be {@literal null}. + * @return {@literal true} if the type has a key type variable. 
+ */ + public static boolean hasKeyType(TypeInformation typeInformation) { + return WITH_KEY_TYPE.contains(typeInformation.getType()); + } + + /** + * @param typeInformation must not be {@literal null}. + * @return {@literal true} if the type has a value type variable. + */ + public static boolean hasValueType(TypeInformation typeInformation) { + return WITH_VALUE_TYPE.contains(typeInformation.getType()); + } + + /** + * @param typeInformation must not be {@literal null}. + * @return the key type. + */ + public static TypeInformation getKeyType(TypeInformation typeInformation) { + + if (!supports(typeInformation) || !hasKeyType(typeInformation)) { + return typeInformation; + } + + return typeInformation.getComponentType(); + } + + /** + * @param typeInformation must not be {@literal null}. + * @return the value type. + */ + public static TypeInformation getValueType(TypeInformation typeInformation) { + + if (!supports(typeInformation) || typeInformation.getComponentType() == null) { + return typeInformation; + } + + if (!hasValueType(typeInformation)) { + return typeInformation.getComponentType(); + } + + List> typeArguments = typeInformation.getTypeArguments(); + + if (hasKeyType(typeInformation)) { + return typeArguments.get(1); + } + + return typeArguments.get(0); + } + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/codec/RedisCodecResolver.java b/src/main/java/io/lettuce/core/dynamic/codec/RedisCodecResolver.java new file mode 100644 index 0000000000..8acf9b1aea --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/codec/RedisCodecResolver.java @@ -0,0 +1,36 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.codec; + +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.dynamic.CommandMethod; + +/** + * Strategy interface to resolve a {@link RedisCodec} for a {@link CommandMethod}. + * + * @author Mark Paluch + * @since 5.0 + */ +public interface RedisCodecResolver { + + /** + * Resolve a {@link RedisCodec} for the given {@link CommandMethod}. + * + * @param commandMethod must not be {@literal null}. + * @return the resolved {@link RedisCodec} or {@literal null} if not resolvable. + */ + RedisCodec resolve(CommandMethod commandMethod); +} diff --git a/src/main/java/io/lettuce/core/dynamic/codec/package-info.java b/src/main/java/io/lettuce/core/dynamic/codec/package-info.java new file mode 100644 index 0000000000..cf1743f4c9 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/codec/package-info.java @@ -0,0 +1,5 @@ +/** + * {@link io.lettuce.core.codec.RedisCodec} resolution support. 
+ */ +package io.lettuce.core.dynamic.codec; + diff --git a/src/main/java/io/lettuce/core/dynamic/domain/Timeout.java b/src/main/java/io/lettuce/core/dynamic/domain/Timeout.java new file mode 100644 index 0000000000..9250358bd7 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/domain/Timeout.java @@ -0,0 +1,73 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.domain; + +import java.time.Duration; +import java.util.concurrent.TimeUnit; + +import io.lettuce.core.dynamic.annotation.Command; +import io.lettuce.core.internal.LettuceAssert; + +/** + * Timeout value object to represent a timeout value with its {@link TimeUnit}. + * + * @author Mark Paluch + * @since 5.0 + * @see Command + */ +public class Timeout { + + private final Duration timeout; + + private Timeout(Duration timeout) { + + LettuceAssert.notNull(timeout, "Timeout must not be null"); + LettuceAssert.isTrue(!timeout.isNegative(), "Timeout must be greater or equal to zero"); + + this.timeout = timeout; + } + + /** + * Create a {@link Timeout}. + * + * @param timeout the timeout value, must be non-negative. + * @return the {@link Timeout}. + */ + public static Timeout create(Duration timeout) { + return new Timeout(timeout); + } + + /** + * Create a {@link Timeout}. + * + * @param timeout the timeout value, must be non-negative. + * @param timeUnit the associated {@link TimeUnit}, must not be {@literal null}. + * @return the {@link Timeout}. + */ + public static Timeout create(long timeout, TimeUnit timeUnit) { + + LettuceAssert.notNull(timeUnit, "TimeUnit must not be null"); + + return new Timeout(Duration.ofNanos(timeUnit.toNanos(timeout))); + } + + /** + * @return the timeout value. + */ + public Duration getTimeout() { + return timeout; + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/domain/package-info.java b/src/main/java/io/lettuce/core/dynamic/domain/package-info.java new file mode 100644 index 0000000000..cfec978b77 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/domain/package-info.java @@ -0,0 +1,4 @@ +/** + * Core annotations to be used with Redis Command interfaces. + */ +package io.lettuce.core.dynamic.domain; diff --git a/src/main/java/io/lettuce/core/dynamic/intercept/DefaultMethodInvokingInterceptor.java b/src/main/java/io/lettuce/core/dynamic/intercept/DefaultMethodInvokingInterceptor.java new file mode 100644 index 0000000000..bf5ff43a67 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/intercept/DefaultMethodInvokingInterceptor.java @@ -0,0 +1,64 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
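For the `Timeout` value object above, a minimal usage sketch (the command interface and method are hypothetical; `Timeout.create` and its non-negative validation are taken from the code above):

```java
import java.util.concurrent.TimeUnit;

import io.lettuce.core.dynamic.Commands;
import io.lettuce.core.dynamic.domain.Timeout;

// Hypothetical command interface: the trailing Timeout argument carries a
// per-invocation synchronization timeout instead of being sent to Redis.
interface KeyCommands extends Commands {

    String get(String key, Timeout timeout);
}

class TimeoutSketch {

    static String getWithTimeout(KeyCommands commands) {
        // Timeout.create rejects negative values and stores the value as a Duration.
        return commands.get("key", Timeout.create(500, TimeUnit.MILLISECONDS));
    }
}
```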
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.intercept; + +import java.lang.invoke.MethodHandle; +import java.lang.reflect.Method; +import java.util.Map; +import java.util.concurrent.ConcurrentHashMap; + +import io.lettuce.core.internal.DefaultMethods; +import io.lettuce.core.internal.LettuceAssert; + +/** + * Invokes default interface methods. Requires {@link MethodInvocation} to implement {@link InvocationTargetProvider} to + * determine the target object. + * + * @author Mark Paluch + * @since 5.0 + * @see MethodInvocation + * @see InvocationTargetProvider + */ +public class DefaultMethodInvokingInterceptor implements MethodInterceptor { + + private final Map methodHandleCache = new ConcurrentHashMap<>(); + + @Override + public Object invoke(MethodInvocation invocation) throws Throwable { + + Method method = invocation.getMethod(); + + if (!method.isDefault()) { + return invocation.proceed(); + } + + LettuceAssert.isTrue(invocation instanceof InvocationTargetProvider, + "Invocation must provide a target object via InvocationTargetProvider"); + + InvocationTargetProvider targetProvider = (InvocationTargetProvider) invocation; + + return methodHandleCache.computeIfAbsent(method, DefaultMethodInvokingInterceptor::lookupMethodHandle) + .bindTo(targetProvider.getInvocationTarget()).invokeWithArguments(invocation.getArguments()); + } + + private static MethodHandle lookupMethodHandle(Method method) { + try { + return DefaultMethods.lookupMethodHandle(method); + } catch (ReflectiveOperationException e) { + throw new IllegalArgumentException(e); + } + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/intercept/InvocationProxyFactory.java b/src/main/java/io/lettuce/core/dynamic/intercept/InvocationProxyFactory.java new file mode 100644 index 0000000000..abc0d1ceac --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/intercept/InvocationProxyFactory.java @@ -0,0 +1,104 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.intercept; + +import java.lang.reflect.Method; +import java.lang.reflect.Proxy; +import java.util.ArrayList; +import java.util.List; + +import io.lettuce.core.internal.AbstractInvocationHandler; +import io.lettuce.core.internal.LettuceAssert; + +/** + * Factory to create invocation proxies. + *
+ * Method calls to invocation proxies can be intercepted and modified by a chain of {@link MethodInterceptor}s. Each + * {@link MethodInterceptor} can continue the call chain, terminate prematurely or modify all aspects of a {@link Method} + * invocation. + *
+ * {@link InvocationProxyFactory} produces invocation proxies which can implement multiple interface type. Any non-interface + * types are rejected. + * + * @author Mark Paluch + * @since 5.0 + * @see MethodInterceptor + * @see MethodInvocation + */ +public class InvocationProxyFactory { + + private final List interceptors = new ArrayList<>(); + private final List> interfaces = new ArrayList<>(); + + /** + * Create a proxy instance give a {@link ClassLoader}. + * + * @param classLoader must not be {@literal null}. + * @param inferred result type. + * @return the invocation proxy instance. + */ + @SuppressWarnings({ "unchecked", "rawtypes" }) + public T createProxy(ClassLoader classLoader) { + + LettuceAssert.notNull(classLoader, "ClassLoader must not be null"); + + Class[] interfaces = this.interfaces.toArray(new Class[0]); + + return (T) Proxy.newProxyInstance(classLoader, interfaces, new InterceptorChainInvocationHandler(interceptors)); + } + + /** + * Add a interface type that should be implemented by the resulting invocation proxy. + * + * @param ifc must not be {@literal null} and must be an interface type. + */ + public void addInterface(Class ifc) { + + LettuceAssert.notNull(ifc, "Interface type must not be null"); + LettuceAssert.isTrue(ifc.isInterface(), "Type must be an interface"); + + this.interfaces.add(ifc); + } + + /** + * Add a {@link MethodInterceptor} to the interceptor chain. + * + * @param interceptor notNull + */ + public void addInterceptor(MethodInterceptor interceptor) { + + LettuceAssert.notNull(interceptor, "MethodInterceptor must not be null"); + + this.interceptors.add(interceptor); + } + + /** + * {@link MethodInterceptor}-based {@link InterceptorChainInvocationHandler}. + */ + static class InterceptorChainInvocationHandler extends AbstractInvocationHandler { + + private final MethodInterceptorChain.Head context; + + InterceptorChainInvocationHandler(List interceptors) { + this.context = MethodInterceptorChain.from(interceptors); + } + + @Override + protected Object handleInvocation(Object proxy, Method method, Object[] args) throws Throwable { + return context.invoke(proxy, method, args); + } + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/intercept/InvocationTargetProvider.java b/src/main/java/io/lettuce/core/dynamic/intercept/InvocationTargetProvider.java new file mode 100644 index 0000000000..22e8197f16 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/intercept/InvocationTargetProvider.java @@ -0,0 +1,31 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.intercept; + +/** + * Provides an invocation target object. + * + * @see MethodInterceptor + * @author Mark Paluch + * @since 5.0 + */ +public interface InvocationTargetProvider { + + /** + * @return the invocation target. 
+ */ + Object getInvocationTarget(); +} diff --git a/src/main/java/io/lettuce/core/dynamic/intercept/MethodInterceptor.java b/src/main/java/io/lettuce/core/dynamic/intercept/MethodInterceptor.java new file mode 100644 index 0000000000..bf01838248 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/intercept/MethodInterceptor.java @@ -0,0 +1,39 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.intercept; + +/** + * Intercepts calls on an interface on its way to the target. These are nested "on top" of the target. + * + *
+ * Implementing classes are required to implement the {@link #invoke(MethodInvocation)} method to modify the original behavior. + * + * @author Mark Paluch + * @since 5.0 + */ +public interface MethodInterceptor { + + /** + * Implement this method to perform extra treatments before and after the invocation. Polite implementations would certainly + * like to invoke {@link MethodInvocation#proceed()}. + * + * @param invocation the method invocation + * @return the result of the call to {@link MethodInvocation#proceed()}, might be intercepted by the interceptor. + * @throws Throwable if the interceptors or the target-object throws an exception. + */ + Object invoke(MethodInvocation invocation) throws Throwable; + +} diff --git a/src/main/java/io/lettuce/core/dynamic/intercept/MethodInterceptorChain.java b/src/main/java/io/lettuce/core/dynamic/intercept/MethodInterceptorChain.java new file mode 100644 index 0000000000..47432cfb47 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/intercept/MethodInterceptorChain.java @@ -0,0 +1,212 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.intercept; + +import java.lang.reflect.Method; +import java.util.Iterator; + +/** + * Invocation context with a static call chain of {@link MethodInterceptor}s to handle method invocations. + *
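Putting `InvocationProxyFactory`, `MethodInterceptor` and `DefaultMethodInvokingInterceptor` together, a hedged wiring sketch (the `GreetingService` interface is hypothetical; interceptors run in registration order and the last one answers the call):

```java
import io.lettuce.core.dynamic.intercept.DefaultMethodInvokingInterceptor;
import io.lettuce.core.dynamic.intercept.InvocationProxyFactory;

// Hypothetical interface implemented by the proxy.
interface GreetingService {

    String greet(String name);

    default String greetWorld() {
        return greet("World");
    }
}

class ProxySketch {

    static GreetingService greetingService() {

        InvocationProxyFactory factory = new InvocationProxyFactory();
        factory.addInterface(GreetingService.class);

        // Default interface methods are dispatched via MethodHandles.
        factory.addInterceptor(new DefaultMethodInvokingInterceptor());

        // Terminal interceptor: answers greet(name) without a real target object.
        factory.addInterceptor(invocation -> "Hello " + invocation.getArguments()[0]);

        return factory.createProxy(GreetingService.class.getClassLoader());
    }
}
```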
+ * {@link MethodInterceptorChain} is created from one or more {@link MethodInterceptor}s and compiled to a forward-only call + * chain. A chain of {@link MethodInterceptorChain} has a head and a tail context. The tail context is a no-op and simply returns + * {@literal null}. + *
+ * Invocations are represented as {@link PooledMethodInvocation} using thread-local pooling. An invocation lives within the + * boundaries of a thread therefore it's safe to use thread-local object pooling. + * + * @author Mark Paluch + * @since 5.0 + */ +abstract class MethodInterceptorChain { + + private final ThreadLocal pool = ThreadLocal.withInitial(PooledMethodInvocation::new); + final MethodInterceptorChain next; + + MethodInterceptorChain(MethodInterceptorChain next) { + this.next = next; + } + + /** + * Create a {@link MethodInterceptorChain} from {@link MethodInterceptor}s. Chain elements are created eagerly by + * stack-walking {@code interceptors}. Make sure the {@link Iterable} does not exhaust the stack size. + * + * @param interceptors must not be {@literal null}. + * @return the {@link MethodInterceptorChain} that is an entry point for method invocations. + */ + public static Head from(Iterable interceptors) { + return new Head(next(interceptors.iterator())); + } + + private static MethodInterceptorChain next(Iterator iterator) { + return iterator.hasNext() ? createContext(iterator, iterator.next()) : Tail.INSTANCE; + } + + private static MethodInterceptorChain createContext(Iterator iterator, + MethodInterceptor interceptor) { + return new MethodInterceptorContext(next(iterator), interceptor); + } + + /** + * Invoke a {@link Method} with its {@code args}. + * + * @param target must not be {@literal null}. + * @param method must not be {@literal null}. + * @param args must not be {@literal null}. + * @return + * @throws Throwable + */ + public Object invoke(Object target, Method method, Object[] args) throws Throwable { + + PooledMethodInvocation invocation = getInvocation(target, method, args, next); + + try { + // JIT hint + if (next instanceof MethodInterceptorContext) { + return next.proceed(invocation); + } + return next.proceed(invocation); + } finally { + invocation.clear(); + } + } + + private PooledMethodInvocation getInvocation(Object target, Method method, Object[] args, MethodInterceptorChain next) { + + PooledMethodInvocation pooledMethodInvocation = pool.get(); + pooledMethodInvocation.initialize(target, method, args, next); + return pooledMethodInvocation; + } + + /** + * Proceed to the next {@link MethodInterceptorChain}. + * + * @param invocation must not be {@literal null}. + * @return + * @throws Throwable + */ + abstract Object proceed(MethodInvocation invocation) throws Throwable; + + /** + * {@link MethodInterceptorChain} using {@link MethodInterceptor} to handle invocations. + */ + static class MethodInterceptorContext extends MethodInterceptorChain { + + private final MethodInterceptor interceptor; + + MethodInterceptorContext(MethodInterceptorChain next, MethodInterceptor interceptor) { + super(next); + this.interceptor = interceptor; + } + + @Override + Object proceed(MethodInvocation invocation) throws Throwable { + return interceptor.invoke(invocation); + } + } + + /** + * Head {@link MethodInterceptorChain} to delegate to the next {@link MethodInterceptorChain}. + */ + static class Head extends MethodInterceptorChain { + + protected Head(MethodInterceptorChain next) { + super(next); + } + + @Override + Object proceed(MethodInvocation invocation) throws Throwable { + return next.proceed(invocation); + } + } + + /** + * Tail {@link MethodInterceptorChain}, no-op. 
+ */ + static class Tail extends MethodInterceptorChain { + + public static Tail INSTANCE = new Tail(); + + private Tail() { + super(null); + } + + @Override + Object proceed(MethodInvocation invocation) throws Throwable { + return null; + } + } + + /** + * Stateful {@link MethodInvocation} using {@link MethodInterceptorChain}. The state is only valid throughout a call. + */ + static class PooledMethodInvocation implements MethodInvocation, InvocationTargetProvider { + + private Object target; + private Method method; + private Object args[]; + private MethodInterceptorChain current; + + PooledMethodInvocation() { + } + + /** + * Initialize state from the method call. + * + * @param target + * @param method + * @param args + * @param head + */ + public void initialize(Object target, Method method, Object[] args, MethodInterceptorChain head) { + this.target = target; + this.method = method; + this.args = args; + this.current = head; + } + + /** + * Clear invocation state. + */ + public void clear() { + this.target = null; + this.method = null; + this.args = null; + this.current = null; + } + + @Override + public Object proceed() throws Throwable { + current = current.next; + return current.proceed(this); + } + + @Override + public Object getInvocationTarget() { + return target; + } + + @Override + public Method getMethod() { + return method; + } + + @Override + public Object[] getArguments() { + return args; + } + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/intercept/MethodInvocation.java b/src/main/java/io/lettuce/core/dynamic/intercept/MethodInvocation.java new file mode 100644 index 0000000000..e8733e063d --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/intercept/MethodInvocation.java @@ -0,0 +1,51 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.intercept; + +import java.lang.reflect.Method; + +/** + * Description of an invocation to a method, given to an interceptor upon method-call. + * + *
+ * A method invocation is a joinpoint and can be intercepted by a method interceptor. + * + * @see MethodInterceptor + * @author Mark Paluch + * @since 5.0 + */ +public interface MethodInvocation { + + /** + * Proceed to the next interceptor in the chain. + *
+ * The implementation and the semantics of this method depends on the actual joinpoint type (see the children interfaces). + * + * @return see the children interfaces' proceed definition + * @throws Throwable if the invocation throws an exception + */ + Object proceed() throws Throwable; + + /** + * @return the originally called {@link Method}. + */ + Method getMethod(); + + /** + * @return method call arguments. + */ + Object[] getArguments(); +} diff --git a/src/main/java/io/lettuce/core/dynamic/intercept/package-info.java b/src/main/java/io/lettuce/core/dynamic/intercept/package-info.java new file mode 100644 index 0000000000..36416377dc --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/intercept/package-info.java @@ -0,0 +1,4 @@ +/** + * Invocation proxy support. + */ +package io.lettuce.core.dynamic.intercept; diff --git a/src/main/java/io/lettuce/core/dynamic/output/CodecAwareOutputFactoryResolver.java b/src/main/java/io/lettuce/core/dynamic/output/CodecAwareOutputFactoryResolver.java new file mode 100644 index 0000000000..0ec3ad1bbe --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/output/CodecAwareOutputFactoryResolver.java @@ -0,0 +1,57 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.output; + +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.internal.LettuceAssert; + +/** + * {@link RedisCodec}-aware implementation of {@link CommandOutputFactoryResolver}. This implementation inspects + * {@link RedisCodec} regarding its type and enhances {@link OutputSelector} for {@link CommandOutputFactory} resolution. + * + * @author Mark Paluch + * @since 5.0 + */ +public class CodecAwareOutputFactoryResolver implements CommandOutputFactoryResolver { + + private final CommandOutputFactoryResolver delegate; + private final RedisCodec redisCodec; + + /** + * Create a new {@link CodecAwareOutputFactoryResolver} given {@link CommandOutputFactoryResolver} and {@link RedisCodec}. + * + * @param delegate must not be {@literal null}. + * @param redisCodec must not be {@literal null}. 
+ */ + public CodecAwareOutputFactoryResolver(CommandOutputFactoryResolver delegate, RedisCodec redisCodec) { + + LettuceAssert.notNull(delegate, "CommandOutputFactoryResolver delegate must not be null"); + LettuceAssert.notNull(redisCodec, "RedisCodec must not be null"); + + this.delegate = delegate; + this.redisCodec = redisCodec; + } + + @Override + public CommandOutputFactory resolveCommandOutput(OutputSelector outputSelector) { + return delegate.resolveCommandOutput(new OutputSelector(outputSelector.getOutputType(), redisCodec)); + } + + @Override + public CommandOutputFactory resolveStreamingCommandOutput(OutputSelector outputSelector) { + return delegate.resolveStreamingCommandOutput(new OutputSelector(outputSelector.getOutputType(), redisCodec)); + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/output/CommandOutputFactory.java b/src/main/java/io/lettuce/core/dynamic/output/CommandOutputFactory.java new file mode 100644 index 0000000000..2c33ccd653 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/output/CommandOutputFactory.java @@ -0,0 +1,42 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.output; + +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.output.CommandOutput; + +/** + * Strategy interface to create {@link CommandOutput} given {@link RedisCodec}. + * + *
+ * Implementing classes usually produce the same {@link CommandOutput} type. + * + * @author Mark Paluch + * @since 5.0 + */ +@FunctionalInterface +public interface CommandOutputFactory { + + /** + * Create and initialize a new {@link CommandOutput} given {@link RedisCodec}. + * + * @param codec must not be {@literal null}. + * @param Key type. + * @param Value type. + * @return the new {@link CommandOutput}. + */ + CommandOutput create(RedisCodec codec); +} diff --git a/src/main/java/io/lettuce/core/dynamic/output/CommandOutputFactoryResolver.java b/src/main/java/io/lettuce/core/dynamic/output/CommandOutputFactoryResolver.java new file mode 100644 index 0000000000..d5cdb9682d --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/output/CommandOutputFactoryResolver.java @@ -0,0 +1,48 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.output; + +/** + * Strategy interface to resolve a {@link CommandOutputFactory} based on a {@link OutputSelector}. Resolution of + * {@link CommandOutputFactory} is based on {@link io.lettuce.core.dynamic.CommandMethod} result types and can be + * influenced whether the result type is a key or value result type. Additional type variables (based on the used + * {@link io.lettuce.core.codec.RedisCodec} are hints to improve output resolution. + * + * @author Mark Paluch + * @since 5.0 + * @see OutputSelector + */ +public interface CommandOutputFactoryResolver { + + /** + * Resolve a regular {@link CommandOutputFactory} that produces the {@link io.lettuce.core.output.CommandOutput} + * result component type. + * + * @param outputSelector must not be {@literal null}. + * @return the {@link CommandOutputFactory} if resolved, {@literal null} otherwise. + */ + CommandOutputFactory resolveCommandOutput(OutputSelector outputSelector); + + /** + * Resolve a streaming {@link CommandOutputFactory} that produces the {@link io.lettuce.core.output.StreamingOutput} + * result component type. + * + * @param outputSelector must not be {@literal null}. + * @return the {@link CommandOutputFactory} that implements {@link io.lettuce.core.output.StreamingOutput} if + * resolved, {@literal null} otherwise. + */ + CommandOutputFactory resolveStreamingCommandOutput(OutputSelector outputSelector); +} diff --git a/src/main/java/io/lettuce/core/dynamic/output/CommandOutputResolverSupport.java b/src/main/java/io/lettuce/core/dynamic/output/CommandOutputResolverSupport.java new file mode 100644 index 0000000000..dcdb59426c --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/output/CommandOutputResolverSupport.java @@ -0,0 +1,46 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
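Since `CommandOutputFactory` is a functional interface, a factory is typically just a constructor reference to an existing `CommandOutput` type. A minimal sketch using the existing `StatusOutput` and `StringCodec` types (the wrapper class name is hypothetical):

```java
import io.lettuce.core.codec.StringCodec;
import io.lettuce.core.dynamic.output.CommandOutputFactory;
import io.lettuce.core.output.CommandOutput;
import io.lettuce.core.output.StatusOutput;

class OutputFactorySketch {

    static CommandOutput<String, String, ?> statusOutput() {

        // Constructor reference used as CommandOutputFactory.
        CommandOutputFactory factory = StatusOutput::new;
        return factory.create(StringCodec.UTF8);
    }
}
```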
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.output; + +import io.lettuce.core.dynamic.support.ResolvableType; + +/** + * Base class for {@link CommandOutputFactory} resolution such as {@link OutputRegistryCommandOutputFactoryResolver}. + *
+ * This class provides methods to check provider/selector type assignability. Subclasses are responsible for calling methods in + * this class in the correct order. + * + * @author Mark Paluch + */ +public abstract class CommandOutputResolverSupport { + + /** + * Overridable hook to check whether {@code selector} can be assigned from the provider type {@code provider}. + *
+ * This method descends the component type hierarchy and considers primitive/wrapper type conversion. + * + * @param selector must not be {@literal null}. + * @param provider must not be {@literal null}. + * @return {@literal true} if selector can be assigned from its provider type. + */ + protected boolean isAssignableFrom(OutputSelector selector, OutputType provider) { + + ResolvableType selectorType = selector.getOutputType(); + ResolvableType resolvableType = provider.withCodec(selector.getRedisCodec()); + + return selectorType.isAssignableFrom(resolvableType); + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/output/OutputRegistry.java b/src/main/java/io/lettuce/core/dynamic/output/OutputRegistry.java new file mode 100644 index 0000000000..9e55a2a340 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/output/OutputRegistry.java @@ -0,0 +1,260 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.output; + +import java.lang.reflect.TypeVariable; +import java.util.ArrayList; +import java.util.LinkedHashMap; +import java.util.List; +import java.util.Map; + +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.dynamic.support.ClassTypeInformation; +import io.lettuce.core.dynamic.support.ResolvableType; +import io.lettuce.core.dynamic.support.TypeInformation; +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.output.*; + +/** + * Registry for {@link CommandOutput} types and their {@link CommandOutputFactory factories}. 
+ * + * @author Mark Paluch + * @since 5.0 + * @see CommandOutput + */ +@SuppressWarnings("rawtypes") +public class OutputRegistry { + + private static final Map BUILTIN = new LinkedHashMap<>(); + private final Map registry = new LinkedHashMap<>(); + + static { + + Map registry = new LinkedHashMap<>(); + + register(registry, ListOfMapsOutput.class, ListOfMapsOutput::new); + register(registry, ArrayOutput.class, ArrayOutput::new); + register(registry, DoubleOutput.class, DoubleOutput::new); + register(registry, ByteArrayOutput.class, ByteArrayOutput::new); + register(registry, IntegerOutput.class, IntegerOutput::new); + + register(registry, KeyOutput.class, KeyOutput::new); + register(registry, ValueOutput.class, ValueOutput::new); + register(registry, KeyListOutput.class, KeyListOutput::new); + register(registry, ValueListOutput.class, ValueListOutput::new); + register(registry, MapOutput.class, MapOutput::new); + + register(registry, ValueSetOutput.class, ValueSetOutput::new); + + register(registry, BooleanOutput.class, BooleanOutput::new); + register(registry, BooleanListOutput.class, BooleanListOutput::new); + register(registry, GeoCoordinatesListOutput.class, GeoCoordinatesListOutput::new); + register(registry, GeoCoordinatesValueListOutput.class, GeoCoordinatesValueListOutput::new); + register(registry, ScoredValueListOutput.class, ScoredValueListOutput::new); + register(registry, ValueValueListOutput.class, ValueValueListOutput::new); + register(registry, StringValueListOutput.class, StringValueListOutput::new); + + register(registry, StringListOutput.class, StringListOutput::new); + register(registry, VoidOutput.class, VoidOutput::new); + + BUILTIN.putAll(registry); + } + + /** + * Create a new {@link OutputRegistry} registering builtin {@link CommandOutput} types. + */ + public OutputRegistry() { + this(true); + } + + /** + * Create a new {@link OutputRegistry}. + * + * @param registerBuiltin {@literal true} to register builtin {@link CommandOutput} types. + */ + public OutputRegistry(boolean registerBuiltin) { + + if (registerBuiltin) { + registry.putAll(BUILTIN); + } + } + + /** + * Register a {@link CommandOutput} type with its {@link CommandOutputFactory}. + * + * @param commandOutputClass must not be {@literal null}. + * @param commandOutputFactory must not be {@literal null}. + */ + public > void register(Class commandOutputClass, + CommandOutputFactory commandOutputFactory) { + + LettuceAssert.notNull(commandOutputClass, "CommandOutput class must not be null"); + LettuceAssert.notNull(commandOutputFactory, "CommandOutputFactory must not be null"); + + register(registry, commandOutputClass, commandOutputFactory); + } + + /** + * Return the registry map. + * + * @return map of {@link OutputType} to {@link CommandOutputFactory}. 
+ */ + Map getRegistry() { + return registry; + } + + private static > void register(Map registry, + Class commandOutputClass, CommandOutputFactory commandOutputFactory) { + + List outputTypes = getOutputTypes(commandOutputClass); + + for (OutputType outputType : outputTypes) { + registry.put(outputType, commandOutputFactory); + } + } + + private static List getOutputTypes(Class> commandOutputClass) { + + OutputType streamingType = getStreamingType(commandOutputClass); + OutputType componentOutputType = getOutputComponentType(commandOutputClass); + + List types = new ArrayList<>(2); + if (streamingType != null) { + types.add(streamingType); + } + + if (componentOutputType != null) { + types.add(componentOutputType); + } + + return types; + } + + /** + * Retrieve {@link OutputType} for a {@link StreamingOutput} type. + * + * @param commandOutputClass + * @return + */ + @SuppressWarnings("rawtypes") + static OutputType getStreamingType(Class commandOutputClass) { + + ClassTypeInformation classTypeInformation = ClassTypeInformation.from(commandOutputClass); + + TypeInformation superTypeInformation = classTypeInformation.getSuperTypeInformation(StreamingOutput.class); + + if (superTypeInformation == null) { + return null; + } + + List> typeArguments = superTypeInformation.getTypeArguments(); + + return new OutputType(commandOutputClass, typeArguments.get(0), true) { + + @Override + public ResolvableType withCodec(RedisCodec codec) { + + TypeInformation typeInformation = ClassTypeInformation.from(codec.getClass()); + + ResolvableType resolvableType = ResolvableType.forType(commandOutputClass, new CodecVariableTypeResolver( + typeInformation)); + + while (resolvableType != ResolvableType.NONE) { + + ResolvableType[] interfaces = resolvableType.getInterfaces(); + for (ResolvableType resolvableInterface : interfaces) { + + if (resolvableInterface.getRawClass().equals(StreamingOutput.class)) { + return resolvableInterface.getGeneric(0); + } + } + + resolvableType = resolvableType.getSuperType(); + } + + throw new IllegalStateException(); + } + }; + } + + /** + * Retrieve {@link OutputType} for a {@link CommandOutput} type. 
+ * + * @param commandOutputClass + * @return + */ + static OutputType getOutputComponentType(Class commandOutputClass) { + + ClassTypeInformation classTypeInformation = ClassTypeInformation.from(commandOutputClass); + + TypeInformation superTypeInformation = classTypeInformation.getSuperTypeInformation(CommandOutput.class); + + if (superTypeInformation == null) { + return null; + } + + List> typeArguments = superTypeInformation.getTypeArguments(); + + return new OutputType(commandOutputClass, typeArguments.get(2), false) { + @Override + public ResolvableType withCodec(RedisCodec codec) { + + TypeInformation typeInformation = ClassTypeInformation.from(codec.getClass()); + + ResolvableType resolvableType = ResolvableType.forType(commandOutputClass, new CodecVariableTypeResolver( + typeInformation)); + + while (!resolvableType.getRawClass().equals(CommandOutput.class)) { + resolvableType = resolvableType.getSuperType(); + } + + return resolvableType.getGeneric(2); + } + }; + } + + @SuppressWarnings("serial") + static class CodecVariableTypeResolver implements ResolvableType.VariableResolver { + + private final TypeInformation codecType; + private final List> typeArguments; + + public CodecVariableTypeResolver(TypeInformation codecType) { + + this.codecType = codecType.getSuperTypeInformation(RedisCodec.class); + this.typeArguments = this.codecType.getTypeArguments(); + } + + @Override + public Object getSource() { + return codecType; + } + + @Override + public ResolvableType resolveVariable(TypeVariable variable) { + + if (variable.getName().equals("K")) { + return ResolvableType.forClass(typeArguments.get(0).getType()); + } + + if (variable.getName().equals("V")) { + return ResolvableType.forClass(typeArguments.get(1).getType()); + } + return null; + } + } + +} diff --git a/src/main/java/io/lettuce/core/dynamic/output/OutputRegistryCommandOutputFactoryResolver.java b/src/main/java/io/lettuce/core/dynamic/output/OutputRegistryCommandOutputFactoryResolver.java new file mode 100644 index 0000000000..c9a00e0e30 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/output/OutputRegistryCommandOutputFactoryResolver.java @@ -0,0 +1,104 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.output; + +import java.util.Collection; +import java.util.List; +import java.util.Map; +import java.util.stream.Collectors; + +import io.lettuce.core.dynamic.support.ClassTypeInformation; +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.output.CommandOutput; + +/** + * {@link CommandOutputFactoryResolver} using {@link OutputRegistry} to resolve a {@link CommandOutputFactory}. + *
+ * Types registered in {@link OutputRegistry} are inspected for the types they produce and matched with the declared repository + * method. If resolution yields multiple {@link CommandOutput}s, the first matched output is used. + * + * @author Mark Paluch + * @since 5.0 + * @see OutputRegistry + */ +public class OutputRegistryCommandOutputFactoryResolver extends CommandOutputResolverSupport implements + CommandOutputFactoryResolver { + + @SuppressWarnings("rawtypes") + private static final ClassTypeInformation COMMAND_OUTPUT = ClassTypeInformation.from(CommandOutput.class); + + private final OutputRegistry outputRegistry; + + /** + * Create a new {@link OutputRegistryCommandOutputFactoryResolver} given {@link OutputRegistry}. + * + * @param outputRegistry must not be {@literal null}. + */ + public OutputRegistryCommandOutputFactoryResolver(OutputRegistry outputRegistry) { + + LettuceAssert.notNull(outputRegistry, "OutputRegistry must not be null"); + + this.outputRegistry = outputRegistry; + } + + @Override + public CommandOutputFactory resolveCommandOutput(OutputSelector outputSelector) { + + Map registry = outputRegistry.getRegistry(); + + List outputTypes = registry.keySet().stream().filter((outputType) -> !outputType.isStreaming()) + .collect(Collectors.toList()); + + List candidates = getCandidates(outputTypes, outputSelector); + + if (candidates.isEmpty()) { + return null; + } + + return registry.get(candidates.get(0)); + } + + @Override + public CommandOutputFactory resolveStreamingCommandOutput(OutputSelector outputSelector) { + + Map registry = outputRegistry.getRegistry(); + + List outputTypes = registry.keySet().stream().filter(OutputType::isStreaming).collect(Collectors.toList()); + + List candidates = getCandidates(outputTypes, outputSelector); + + if (candidates.isEmpty()) { + return null; + } + + return registry.get(candidates.get(0)); + } + + private List getCandidates(Collection outputTypes, OutputSelector outputSelector) { + + return outputTypes.stream().filter(outputType -> { + + if (COMMAND_OUTPUT.getType().isAssignableFrom(outputSelector.getOutputType().getRawClass())) { + + if (outputSelector.getOutputType().getRawClass().isAssignableFrom(outputType.getCommandOutputClass())) { + return true; + } + } + + return isAssignableFrom(outputSelector, outputType); + }).collect(Collectors.toList()); + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/output/OutputSelector.java b/src/main/java/io/lettuce/core/dynamic/output/OutputSelector.java new file mode 100644 index 0000000000..ee2fccac54 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/output/OutputSelector.java @@ -0,0 +1,65 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.output; + +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.dynamic.support.ResolvableType; +import io.lettuce.core.internal.LettuceAssert; + +/** + * Selector {@link CommandOutputFactory} resolution. + *
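To make the registry/resolver interplay concrete, a sketch under the assumption of a custom output type (`DequeOutput` is hypothetical; `OutputRegistry` and `OutputRegistryCommandOutputFactoryResolver` are the types introduced in this patch):

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Deque;

import io.lettuce.core.codec.RedisCodec;
import io.lettuce.core.dynamic.output.CommandOutputFactoryResolver;
import io.lettuce.core.dynamic.output.OutputRegistry;
import io.lettuce.core.dynamic.output.OutputRegistryCommandOutputFactoryResolver;
import io.lettuce.core.output.CommandOutput;

// Hypothetical output that collects bulk replies into a Deque.
class DequeOutput<K, V> extends CommandOutput<K, V, Deque<V>> {

    DequeOutput(RedisCodec<K, V> codec) {
        super(codec, new ArrayDeque<>());
    }

    @Override
    public void set(ByteBuffer bytes) {
        output.add(codec.decodeValue(bytes));
    }
}

class OutputRegistrySketch {

    static CommandOutputFactoryResolver resolver() {

        OutputRegistry registry = new OutputRegistry(); // built-in outputs included
        registry.register(DequeOutput.class, DequeOutput::new);

        return new OutputRegistryCommandOutputFactoryResolver(registry);
    }
}
```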
+ * A {@link OutputSelector} is based on the result {@link ResolvableType} and {@link io.lettuce.core.codec.RedisCodec}. + * The codec supplies types for generics resolution of {@link io.lettuce.core.output.CommandOutput}. + * + * @author Mark Paluch + * @since 5.0 + */ +public class OutputSelector { + + private final ResolvableType outputType; + private final RedisCodec redisCodec; + + /** + * Creates a new {@link OutputSelector} given {@link ResolvableType} and {@link RedisCodec}. + * + * @param outputType must not be {@literal null}. + * @param redisCodec must not be {@literal null}. + */ + public OutputSelector(ResolvableType outputType, RedisCodec redisCodec) { + + LettuceAssert.notNull(outputType, "Output type must not be null!"); + LettuceAssert.notNull(redisCodec, "RedisCodec must not be null!"); + + this.outputType = outputType; + this.redisCodec = redisCodec; + } + + /** + * @return the output type. + */ + public ResolvableType getOutputType() { + return outputType; + } + + /** + * + * @return the associated codec. + */ + public RedisCodec getRedisCodec() { + return redisCodec; + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/output/OutputType.java b/src/main/java/io/lettuce/core/dynamic/output/OutputType.java new file mode 100644 index 0000000000..36b5133c03 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/output/OutputType.java @@ -0,0 +1,98 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.output; + +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.dynamic.support.ResolvableType; +import io.lettuce.core.dynamic.support.TypeInformation; +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.output.CommandOutput; + +/** + * Type descriptor for a {@link io.lettuce.core.output.CommandOutput}. + *
+ * This value object describes the primary output type and the {@link TypeInformation} produced by the {@link CommandOutput} + * type. + *
+ * {@link OutputType} makes a distinction whether a {@link CommandOutput} is a {@link io.lettuce.core.output.StreamingOutput} by + * providing {@code streaming}. Streaming outputs produce usually a component type hence they require an own {@link OutputType} + * descriptor. + * + * @author Mark Paluch + * @since 5.0 + */ +@SuppressWarnings("rawtypes") +public class OutputType { + + private final Class commandOutputClass; + private final TypeInformation typeInformation; + private final boolean streaming; + + /** + * Create a new {@link OutputType} given {@code primaryType}, the {@code commandOutputClass}, {@link TypeInformation} and + * whether the {@link OutputType} is for a {@link io.lettuce.core.output.StreamingOutput}. + * + * @param commandOutputClass must not be {@literal null}. + * @param typeInformation must not be {@literal null}. + * @param streaming {@literal true} if the type descriptor concerns the {@link io.lettuce.core.output.StreamingOutput} + */ + OutputType(Class commandOutputClass, TypeInformation typeInformation, boolean streaming) { + + LettuceAssert.notNull(commandOutputClass, "CommandOutput class must not be null"); + LettuceAssert.notNull(typeInformation, "TypeInformation must not be null"); + + this.commandOutputClass = commandOutputClass; + this.typeInformation = typeInformation; + this.streaming = streaming; + } + + /** + * @return + */ + public TypeInformation getTypeInformation() { + return typeInformation; + } + + /** + * @return + */ + public boolean isStreaming() { + return streaming; + } + + public ResolvableType withCodec(RedisCodec codec) { + return ResolvableType.forClass(typeInformation.getType()); + } + + /** + * @return + */ + public Class getCommandOutputClass() { + return commandOutputClass; + } + + @Override + public String toString() { + + StringBuilder sb = new StringBuilder(); + sb.append(getClass().getSimpleName()); + sb.append(" [commandOutputClass=").append(commandOutputClass); + sb.append(", typeInformation=").append(typeInformation); + sb.append(", streaming=").append(streaming); + sb.append(']'); + return sb.toString(); + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/output/VoidOutput.java b/src/main/java/io/lettuce/core/dynamic/output/VoidOutput.java new file mode 100644 index 0000000000..540752a051 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/output/VoidOutput.java @@ -0,0 +1,49 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.output; + +import java.nio.ByteBuffer; + +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.output.CommandOutput; + +/** + * {@link Void} command output to consume data silently without actually processing it. + * + * @author Mark Paluch + * @since 5.0 + */ +class VoidOutput extends CommandOutput { + + /** + * Initialize a new instance that encodes and decodes keys and values using the supplied codec. 
+ * + * @param codec Codec used to encode/decode keys and values, must not be {@literal null}. + */ + public VoidOutput(RedisCodec codec) { + super(codec, null); + } + + @Override + public void set(ByteBuffer bytes) { + // no-op + } + + @Override + public void set(long integer) { + // no-op + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/output/package-info.java b/src/main/java/io/lettuce/core/dynamic/output/package-info.java new file mode 100644 index 0000000000..efdb055558 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/output/package-info.java @@ -0,0 +1,4 @@ +/** + * {@link io.lettuce.core.output.CommandOutput} resolution support. + */ +package io.lettuce.core.dynamic.output; diff --git a/src/main/java/io/lettuce/core/dynamic/package-info.java b/src/main/java/io/lettuce/core/dynamic/package-info.java new file mode 100644 index 0000000000..c77695040d --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/package-info.java @@ -0,0 +1,4 @@ +/** + * Core package for Redis Command Interface support through {@link io.lettuce.core.dynamic.RedisCommandFactory}. + */ +package io.lettuce.core.dynamic; diff --git a/src/main/java/io/lettuce/core/dynamic/parameter/ExecutionSpecificParameters.java b/src/main/java/io/lettuce/core/dynamic/parameter/ExecutionSpecificParameters.java new file mode 100644 index 0000000000..123c8cc7a8 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/parameter/ExecutionSpecificParameters.java @@ -0,0 +1,120 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.parameter; + +import java.lang.reflect.Method; +import java.util.Arrays; +import java.util.List; + +import io.lettuce.core.dynamic.batch.CommandBatching; +import io.lettuce.core.dynamic.domain.Timeout; + +/** + * {@link Parameters}-implementation specific to execution. This implementation considers {@link Timeout} for a command method + * applying the appropriate synchronization and {@link CommandBatching} to batch commands. + * + * @author Mark Paluch + * @since 5.0 + * @see Timeout + * @see CommandBatching + */ +public class ExecutionSpecificParameters extends Parameters { + + private static final List> TYPES = Arrays.asList(Timeout.class, CommandBatching.class); + + private final int timeoutIndex; + private final int commandBatchingIndex; + + /** + * Create new {@link ExecutionSpecificParameters} given a {@link Method}. + * + * @param method must not be {@literal null}. 
+ */ + public ExecutionSpecificParameters(Method method) { + + super(method); + + int timeoutIndex = -1; + int commandBatchingIndex = -1; + + List parameters = getParameters(); + + for (int i = 0; i < method.getParameterCount(); i++) { + + Parameter methodParameter = parameters.get(i); + + if (methodParameter.isSpecialParameter()) { + if (methodParameter.isAssignableTo(Timeout.class)) { + timeoutIndex = i; + } + + if (methodParameter.isAssignableTo(CommandBatching.class)) { + commandBatchingIndex = i; + } + } + } + + this.timeoutIndex = timeoutIndex; + this.commandBatchingIndex = commandBatchingIndex; + } + + /** + * @return the timeout argument index if present, or {@literal -1} if the command method declares a {@link Timeout} + * parameter. + */ + public int getTimeoutIndex() { + return timeoutIndex; + } + + /** + * @return the command batching argument index if present, or {@literal -1} if the command method declares a + * {@link CommandBatching} parameter. + */ + public int getCommandBatchingIndex() { + return commandBatchingIndex; + } + + @Override + protected ExecutionAwareParameter createParameter(Method method, int parameterIndex) { + return new ExecutionAwareParameter(method, parameterIndex); + } + + /** + * @return {@literal true} if the method defines a {@link CommandBatching} parameter. + */ + public boolean hasCommandBatchingIndex() { + return commandBatchingIndex != -1; + } + + /** + * @return {@literal true} if the method defines a {@link Timeout} parameter. + */ + public boolean hasTimeoutIndex() { + return getTimeoutIndex() != -1; + } + + public static class ExecutionAwareParameter extends Parameter { + + public ExecutionAwareParameter(Method method, int parameterIndex) { + super(method, parameterIndex); + } + + @Override + public boolean isSpecialParameter() { + return super.isSpecialParameter() || TYPES.contains(getParameterType()); + } + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/parameter/MethodParametersAccessor.java b/src/main/java/io/lettuce/core/dynamic/parameter/MethodParametersAccessor.java new file mode 100644 index 0000000000..986f8b466f --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/parameter/MethodParametersAccessor.java @@ -0,0 +1,84 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.parameter; + +import java.util.Iterator; + +import io.lettuce.core.dynamic.domain.Timeout; + +/** + * Accessor interface to method parameters during the actual invocation. + * + * @author Mark Paluch + * @since 5.0 + */ +public interface MethodParametersAccessor { + + /** + * @return number of parameters. + */ + int getParameterCount(); + + /** + * Returns the bindable value with the given index. Bindable means, that {@link Timeout} values are skipped without noticed + * in the index. For a method signature taking {@link String}, {@link Timeout} , {@link String}, + * {@code #getBindableParameter(1)} would return the second {@link String} value. 
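As a usage illustration for the special-parameter handling above (the interface is hypothetical; `CommandBatching` comes from `io.lettuce.core.dynamic.batch` and, like `Timeout`, is classified as a special parameter rather than a command argument):

```java
import io.lettuce.core.dynamic.Commands;
import io.lettuce.core.dynamic.batch.CommandBatching;

// Hypothetical command interface: the CommandBatching argument controls whether
// the command is queued or flushed; it is not bound to the SET command itself.
interface BatchingCommands extends Commands {

    void set(String key, String value, CommandBatching batching);
}

// Illustrative call sites:
// commands.set("key1", "value1", CommandBatching.queue());  // queue the command
// commands.set("key2", "value2", CommandBatching.flush());  // queue and flush the batch
```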
+ * + * @param index parameter index. + * @return the bindable value. + */ + Object getBindableValue(int index); + + /** + * + * @param index parameter index. + * @return {@literal true} if the parameter at {@code index} is a key. + */ + boolean isKey(int index); + + /** + * + * @param index parameter index. + * @return {@literal true} if the parameter at {@code index} is a value. + */ + boolean isValue(int index); + + /** + * Returns an iterator over all bindable parameters. This means parameters assignable to {@link Timeout} will not + * be included in this {@link Iterator}. + * + * @return + */ + Iterator iterator(); + + /** + * Resolve a parameter name to its index. + * + * @param name the name. + * @return + */ + int resolveParameterIndex(String name); + + /** + * Return {@literal true} if the parameter at {@code index} is a bindable {@literal null} value that requires a + * {@literal null} value instead of being skipped. + * + * @param index parameter index. + * @return {@literal true} if the parameter at {@code index} is a bindable {@literal null} value that requires a + * {@literal null} value instead of being skipped. + */ + boolean isBindableNullValue(int index); +} diff --git a/src/main/java/io/lettuce/core/dynamic/parameter/Parameter.java b/src/main/java/io/lettuce/core/dynamic/parameter/Parameter.java new file mode 100644 index 0000000000..e046fc7e75 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/parameter/Parameter.java @@ -0,0 +1,156 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.parameter; + +import java.lang.annotation.Annotation; +import java.lang.reflect.Method; +import java.util.*; +import java.util.concurrent.ConcurrentHashMap; + +import io.lettuce.core.dynamic.support.*; +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.internal.LettuceClassUtils; + +/** + * Abstracts a method parameter and exposes access to type and parameter information. 
+ * + * @author Mark Paluch + * @since 5.0 + */ +public class Parameter { + + private final ParameterNameDiscoverer discoverer = new CompositeParameterNameDiscoverer( + new StandardReflectionParameterNameDiscoverer(), new AnnotationParameterNameDiscoverer()); + + private final Method method; + private final String name; + private final int parameterIndex; + private final TypeInformation typeInformation; + private final MethodParameter methodParameter; + private final Map, Annotation> annotationCache = new ConcurrentHashMap<>(); + private final Set> absentCache = ConcurrentHashMap.newKeySet(); + private final List annotations; + + public Parameter(Method method, int parameterIndex) { + + this.method = method; + this.parameterIndex = parameterIndex; + this.methodParameter = new MethodParameter(method, parameterIndex); + this.methodParameter.initParameterNameDiscovery(discoverer); + this.name = methodParameter.getParameterName(); + this.typeInformation = ClassTypeInformation.fromMethodParameter(method, parameterIndex); + + Annotation[] annotations = method.getParameterAnnotations()[parameterIndex]; + List allAnnotations = new ArrayList<>(annotations.length); + + for (Annotation annotation : annotations) { + this.annotationCache.put(annotation.getClass(), annotation); + allAnnotations.add(annotation); + } + this.annotations = Collections.unmodifiableList(allAnnotations); + } + + /** + * Return the parameter annotation of the given type, if available. + * + * @param annotationType the annotation type to look for + * @return the annotation object, or {@code null} if not found + */ + @SuppressWarnings("unchecked") + public A findAnnotation(Class annotationType) { + + if (absentCache.contains(annotationType)) { + return null; + } + + A result = (A) annotationCache.computeIfAbsent(annotationType, + key -> methodParameter.getParameterAnnotation(annotationType)); + + if (result == null) { + absentCache.add(annotationType); + } + + return result; + } + + /** + * Return all parameter annotations. + * + * @return the {@link List} of annotation objects. + */ + public List getAnnotations() { + return annotations; + } + + /** + * + * @return the parameter index. + */ + public int getParameterIndex() { + return parameterIndex; + } + + /** + * + * @return the parameter type. + */ + public Class getParameterType() { + return method.getParameterTypes()[parameterIndex]; + } + + /** + * + * @return the parameter {@link TypeInformation}. + */ + public TypeInformation getTypeInformation() { + return typeInformation; + } + + /** + * Check whether the parameter is assignable to {@code target}. + * + * @param target must not be {@literal null}. + * @return + */ + public boolean isAssignableTo(Class target) { + + LettuceAssert.notNull(target, "Target type must not be null"); + + return LettuceClassUtils.isAssignable(target, getParameterType()); + } + + /** + * + * @return {@literal true} if the parameter is a special parameter. + */ + public boolean isSpecialParameter() { + return false; + } + + /** + * @return {@literal true} if the {@link Parameter} can be bound to a command. + */ + boolean isBindable() { + return !isSpecialParameter(); + } + + /** + * @return the parameter name or {@literal null} if not available. 
+ */ + public String getName() { + return name; + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/parameter/Parameters.java b/src/main/java/io/lettuce/core/dynamic/parameter/Parameters.java new file mode 100644 index 0000000000..cc62151e58 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/parameter/Parameters.java @@ -0,0 +1,115 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.parameter; + +import java.lang.reflect.Method; +import java.util.ArrayList; +import java.util.Iterator; +import java.util.List; + +import io.lettuce.core.internal.LettuceAssert; + +/** + * Base class to abstract method {@link Parameter}s. + * + * @author Mark Paluch + */ +public abstract class Parameters
<P extends Parameter> implements Iterable<P> {
+
+    private final List<P> parameters;
+    private final List<P> bindableParameters;
+
+    /**
+     * Create new {@link Parameters} given a {@link Method}.
+     *
+     * @param method must not be {@literal null}.
+     */
+    public Parameters(Method method) {
+
+        LettuceAssert.notNull(method, "Method must not be null");
+
+        this.parameters = new ArrayList<>(method.getParameterCount());
+
+        for (int i = 0; i < method.getParameterCount(); i++) {
+
+            P parameter = createParameter(method, i);
+
+            parameters.add(parameter);
+        }
+
+        this.bindableParameters = createBindableParameters();
+    }
+
+    /**
+     * Create a new {@link Parameter} for the given {@link Method} at {@code parameterIndex}.
+     *
+     * @param method must not be {@literal null}.
+     * @param parameterIndex the parameter index.
+     * @return the {@link Parameter}.
+     */
+    protected abstract P createParameter(Method method, int parameterIndex);
+
+    /**
+     * Returns {@link Parameter} instances with effectively all special parameters removed.
+     *
+     * @return the bindable parameters.
+     */
+    private List<P> createBindableParameters() {
+
+        List<P> bindables = new ArrayList<>(parameters.size());
+
+        for (P parameter : parameters) {
+            if (parameter.isBindable()) {
+                bindables.add(parameter);
+            }
+        }
+
+        return bindables;
+    }
+
+    /**
+     * @return all declared {@link Parameter}s.
+     */
+    public List<P> getParameters() {
+        return parameters;
+    }
+
+    /**
+     * Get the bindable parameter according to its logical position in the command. The declared position may differ
+     * because special parameters are interleaved.
+     *
+     * @param index the logical parameter index.
+     * @return the {@link Parameter}.
+     */
+    public Parameter getBindableParameter(int index) {
+        return getBindableParameters().get(index);
+    }
+
+    /**
+     * Returns {@link Parameter} instances with effectively all special parameters removed.
+     *
+     * @return the bindable parameters.
+     */
+    public List<P> getBindableParameters() {
+        return bindableParameters;
+    }
+
+    @Override
+    public Iterator<P>
iterator() { + return getBindableParameters().iterator(); + } + +} diff --git a/src/main/java/io/lettuce/core/dynamic/parameter/package-info.java b/src/main/java/io/lettuce/core/dynamic/parameter/package-info.java new file mode 100644 index 0000000000..4b0c60c051 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/parameter/package-info.java @@ -0,0 +1,4 @@ +/** + * Parameter access and descriptors. + */ +package io.lettuce.core.dynamic.parameter; diff --git a/src/main/java/io/lettuce/core/dynamic/segment/AnnotationCommandSegmentFactory.java b/src/main/java/io/lettuce/core/dynamic/segment/AnnotationCommandSegmentFactory.java new file mode 100644 index 0000000000..7cf1033506 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/segment/AnnotationCommandSegmentFactory.java @@ -0,0 +1,229 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.segment; + +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; +import java.util.regex.Pattern; + +import io.lettuce.core.LettuceStrings; +import io.lettuce.core.dynamic.CommandMethod; +import io.lettuce.core.dynamic.annotation.Command; +import io.lettuce.core.dynamic.annotation.CommandNaming; +import io.lettuce.core.dynamic.annotation.CommandNaming.LetterCase; +import io.lettuce.core.dynamic.annotation.CommandNaming.Strategy; +import io.lettuce.core.internal.LettuceAssert; + +/** + * {@link CommandSegmentFactory} implementation that creates {@link CommandSegments} considering {@link Command} and + * {@link CommandNaming} annotations. 
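+ * <p>
+ * Illustrative sketch (the command interface below is hypothetical and not part of this change):
+ *
+ * <pre>{@code
+ * interface MyCommands extends Commands {
+ *
+ *     String clientSetname(String name);    // no annotation: the SPLIT strategy yields the segments CLIENT SETNAME
+ *
+ *     @Command("SET ?0 ?1")
+ *     String set(String key, String value); // constant SET followed by two index-based parameter references
+ *
+ *     @Command("GET :key")
+ *     String get(@Param("key") String key); // constant GET followed by a named parameter reference
+ * }
+ * }</pre>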
+ * + * @author Mark Paluch + * @since 5.0 + */ +public class AnnotationCommandSegmentFactory implements CommandSegmentFactory { + + private static final Pattern SPACE = Pattern.compile("\\s"); + private static final String INDEX_BASED_PARAM_START = "?"; + private static final String NAME_BASED_PARAM_START = ":"; + + @Override + public CommandSegments createCommandSegments(CommandMethod commandMethod) { + + if (CommandSegmentParser.INSTANCE.hasCommandString(commandMethod)) { + return CommandSegmentParser.INSTANCE.createCommandSegments(commandMethod); + } + + LetterCase letterCase = getLetterCase(commandMethod); + Strategy strategy = getNamingStrategy(commandMethod); + + List parts = parseMethodName(commandMethod.getName(), strategy); + return createCommandSegments(parts, letterCase); + } + + private CommandSegments createCommandSegments(List parts, LetterCase letterCase) { + + List segments = new ArrayList<>(parts.size()); + + for (String part : parts) { + + if (letterCase == LetterCase.AS_IS) { + segments.add(CommandSegment.constant(part)); + } else { + segments.add(CommandSegment.constant(part.toUpperCase())); + } + } + + return new CommandSegments(segments); + } + + private List parseMethodName(String name, Strategy strategy) { + + if (strategy == Strategy.METHOD_NAME) { + return Collections.singletonList(name); + } + + List parts = new ArrayList<>(); + + char[] chars = name.toCharArray(); + + boolean previousUpperCase = false; + StringBuffer buffer = new StringBuffer(chars.length); + for (char theChar : chars) { + + if (!Character.isUpperCase(theChar)) { + buffer.append(theChar); + previousUpperCase = false; + continue; + + } + + // Camel hump + if (!previousUpperCase) { + + if (!LettuceStrings.isEmpty(buffer)) { + + if (strategy == Strategy.DOT) { + buffer.append('.'); + } + + if (strategy == Strategy.SPLIT) { + + parts.add(buffer.toString()); + buffer = new StringBuffer(chars.length); + } + } + } + + previousUpperCase = true; + buffer.append(theChar); + } + + if (LettuceStrings.isNotEmpty(buffer)) { + parts.add(buffer.toString()); + } + + return parts; + } + + private LetterCase getLetterCase(CommandMethod commandMethod) { + + if (commandMethod.hasAnnotation(CommandNaming.class)) { + LetterCase letterCase = commandMethod.getMethod().getAnnotation(CommandNaming.class).letterCase(); + if (letterCase != LetterCase.DEFAULT) { + return letterCase; + } + } + + Class declaringClass = commandMethod.getMethod().getDeclaringClass(); + CommandNaming annotation = declaringClass.getAnnotation(CommandNaming.class); + if (annotation != null && annotation.letterCase() != LetterCase.DEFAULT) { + return annotation.letterCase(); + } + + return LetterCase.UPPERCASE; + } + + private Strategy getNamingStrategy(CommandMethod commandMethod) { + + if (commandMethod.hasAnnotation(CommandNaming.class)) { + Strategy strategy = commandMethod.getMethod().getAnnotation(CommandNaming.class).strategy(); + if (strategy != Strategy.DEFAULT) { + return strategy; + } + } + + Class declaringClass = commandMethod.getMethod().getDeclaringClass(); + CommandNaming annotation = declaringClass.getAnnotation(CommandNaming.class); + if (annotation != null && annotation.strategy() != Strategy.DEFAULT) { + return annotation.strategy(); + } + + return Strategy.SPLIT; + } + + private enum CommandSegmentParser implements CommandSegmentFactory { + + INSTANCE; + + @Override + public CommandSegments createCommandSegments(CommandMethod commandMethod) { + return parse(getCommandString(commandMethod)); + } + + private CommandSegments 
parse(String command) { + + String[] split = SPACE.split(command); + + LettuceAssert.notEmpty(split, "Command must not be empty"); + + return getCommandSegments(split); + } + + private CommandSegments getCommandSegments(String[] split) { + + List segments = new ArrayList<>(); + + for (String segment : split) { + + if (segment.startsWith(INDEX_BASED_PARAM_START)) { + segments.add(parseIndexBasedArgument(segment)); + continue; + } + + if (segment.startsWith(NAME_BASED_PARAM_START)) { + segments.add(parseNameBasedArgument(segment)); + continue; + } + + segments.add(CommandSegment.constant(segment)); + } + + return new CommandSegments(segments); + } + + private CommandSegment parseIndexBasedArgument(String segment) { + + String index = segment.substring(INDEX_BASED_PARAM_START.length()); + return getIndexBasedArgument(index); + } + + private CommandSegment parseNameBasedArgument(String segment) { + return CommandSegment.namedParameter(segment.substring(NAME_BASED_PARAM_START.length())); + } + + private CommandSegment getIndexBasedArgument(String index) { + return CommandSegment.indexedParameter(Integer.parseInt(index)); + } + + private String getCommandString(CommandMethod commandMethod) { + + Command annotation = commandMethod.getAnnotation(Command.class); + return annotation.value(); + } + + private boolean hasCommandString(CommandMethod commandMethod) { + + if (commandMethod.hasAnnotation(Command.class)) { + Command annotation = commandMethod.getAnnotation(Command.class); + return LettuceStrings.isNotEmpty(annotation.value()); + } + + return false; + } + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/segment/CommandSegment.java b/src/main/java/io/lettuce/core/dynamic/segment/CommandSegment.java new file mode 100644 index 0000000000..89b57253db --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/segment/CommandSegment.java @@ -0,0 +1,186 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.segment; + +import io.lettuce.core.dynamic.parameter.MethodParametersAccessor; +import io.lettuce.core.dynamic.parameter.Parameter; +import io.lettuce.core.internal.LettuceAssert; + +/** + * Value object representing a segment within a Redis Command. + *
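+ * <p>
+ * As an illustrative sketch, the command string {@code "SET ?0 ?1"} corresponds to the segments
+ *
+ * <pre>{@code
+ * CommandSegment.constant("SET");      // literal command keyword
+ * CommandSegment.indexedParameter(0);  // reference to the first bindable method argument
+ * CommandSegment.indexedParameter(1);  // reference to the second bindable method argument
+ * }</pre>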

+ * A command segment is an ASCII string denoting a command, a named or an index-parameter reference. + * + * @author Mark Paluch + * @since 5.0 + */ +public abstract class CommandSegment { + + /** + * Create a constant {@link CommandSegment}. + * + * @param content must not be empty or {@literal null}. + * @return the {@link CommandSegment}. + */ + public static CommandSegment constant(String content) { + return new Constant(content); + } + + /** + * Create a named parameter reference {@link CommandSegment}. + * + * @param name must not be empty or {@literal null}. + * @return + */ + public static CommandSegment namedParameter(String name) { + return new NamedParameter(name); + } + + public static CommandSegment indexedParameter(int index) { + return new IndexedParameter(index); + } + + /** + * + * @return the command segment in its {@link String representation} + */ + public abstract String asString(); + + /** + * Check whether this segment can consume the {@link Parameter} by applying parameter substitution. + * + * @param parameter + * @return + * @since 5.1.3 + */ + public abstract boolean canConsume(Parameter parameter); + + /** + * @param parametersAccessor + * @return + */ + public abstract ArgumentContribution contribute(MethodParametersAccessor parametersAccessor); + + @Override + public String toString() { + + StringBuffer sb = new StringBuffer(); + sb.append(getClass().getSimpleName()); + sb.append(" ").append(asString()); + return sb.toString(); + } + + private static class Constant extends CommandSegment { + + private final String content; + + public Constant(String content) { + + LettuceAssert.notEmpty(content, "Constant must not be empty"); + + this.content = content; + } + + @Override + public String asString() { + return content; + } + + @Override + public boolean canConsume(Parameter parameter) { + return false; + } + + @Override + public ArgumentContribution contribute(MethodParametersAccessor parametersAccessor) { + return new ArgumentContribution(-1, asString()); + } + } + + private static class NamedParameter extends CommandSegment { + + private final String name; + + public NamedParameter(String name) { + + LettuceAssert.notEmpty(name, "Parameter name must not be empty"); + + this.name = name; + } + + @Override + public String asString() { + return name; + } + + @Override + public boolean canConsume(Parameter parameter) { + return parameter.getName() != null && parameter.getName().equals(name); + } + + @Override + public ArgumentContribution contribute(MethodParametersAccessor parametersAccessor) { + + int index = parametersAccessor.resolveParameterIndex(name); + return new ArgumentContribution(index, parametersAccessor.getBindableValue(index)); + } + } + + private static class IndexedParameter extends CommandSegment { + + private final int index; + + public IndexedParameter(int index) { + + LettuceAssert.isTrue(index >= 0, "Parameter index must be non-negative starting at 0"); + this.index = index; + } + + @Override + public String asString() { + return Integer.toString(index); + } + + @Override + public boolean canConsume(Parameter parameter) { + return parameter.getParameterIndex() == index; + } + + @Override + public ArgumentContribution contribute(MethodParametersAccessor parametersAccessor) { + return new ArgumentContribution(index, parametersAccessor.getBindableValue(index)); + } + } + + public static class ArgumentContribution { + + private final int parameterIndex; + private final Object value; + + ArgumentContribution(int parameterIndex, Object value) { + 
this.parameterIndex = parameterIndex; + this.value = value; + } + + public int getParameterIndex() { + return parameterIndex; + } + + public Object getValue() { + return value; + } + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/segment/CommandSegmentFactory.java b/src/main/java/io/lettuce/core/dynamic/segment/CommandSegmentFactory.java new file mode 100644 index 0000000000..f9b102a3a5 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/segment/CommandSegmentFactory.java @@ -0,0 +1,35 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.segment; + +import io.lettuce.core.dynamic.CommandMethod; + +/** + * Strategy interface to create {@link CommandSegments} for a {@link CommandMethod}. + * + * @author Mark Paluch + * @since 5.0 + */ +public interface CommandSegmentFactory { + + /** + * Create {@link CommandSegments} for a {@link CommandMethod}. + * + * @param commandMethod must not be {@literal null}. + * @return the {@link CommandSegments}. + */ + CommandSegments createCommandSegments(CommandMethod commandMethod); +} diff --git a/src/main/java/io/lettuce/core/dynamic/segment/CommandSegments.java b/src/main/java/io/lettuce/core/dynamic/segment/CommandSegments.java new file mode 100644 index 0000000000..bf1fad9d42 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/segment/CommandSegments.java @@ -0,0 +1,135 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.segment; + +import java.util.Collections; +import java.util.Iterator; +import java.util.List; + +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.protocol.CommandType; +import io.lettuce.core.protocol.ProtocolKeyword; + +/** + * Value object abstracting multiple {@link CommandSegment}s. + * + * @author Mark Paluch + * @since 5.0 + */ +public class CommandSegments implements Iterable { + + private final ProtocolKeyword commandType; + private final List segments; + + /** + * Create {@link CommandSegments} given a {@link List} of {@link CommandSegment}s. + * + * @param segments must not be {@literal null.} + */ + public CommandSegments(List segments) { + + LettuceAssert.isTrue(!segments.isEmpty(), "Command segments must not be empty"); + + this.segments = segments.size() > 1 ? 
Collections.unmodifiableList(segments.subList(1, segments.size())) + : Collections.emptyList(); + this.commandType = potentiallyResolveCommand(segments.get(0).asString()); + } + + /** + * Attempt to resolve the {@code commandType} against {@link CommandType}. This allows reuse of settings associated with the + * actual command type such as read-write routing. Subclasses may override this method. + * + * @param commandType must not be {@literal null}. + * @return the resolved {@link ProtocolKeyword}. + * @since 5.0.5 + */ + protected ProtocolKeyword potentiallyResolveCommand(String commandType) { + + try { + return CommandType.valueOf(commandType); + } catch (IllegalArgumentException e) { + return new StringCommandType(commandType); + } + } + + @Override + public Iterator iterator() { + return segments.iterator(); + } + + public ProtocolKeyword getCommandType() { + return commandType; + } + + public int size() { + return segments.size(); + } + + static class StringCommandType implements ProtocolKeyword { + + private final byte[] commandTypeBytes; + private final String commandType; + + StringCommandType(String commandType) { + this.commandType = commandType; + this.commandTypeBytes = commandType.getBytes(); + } + + @Override + public byte[] getBytes() { + return commandTypeBytes; + } + + @Override + public String name() { + return commandType; + } + + @Override + public String toString() { + return name(); + } + + @Override + public boolean equals(Object o) { + if (this == o) + return true; + if (!(o instanceof StringCommandType)) + return false; + + StringCommandType that = (StringCommandType) o; + + return commandType.equals(that.commandType); + } + + @Override + public int hashCode() { + return commandType.hashCode(); + } + } + + @Override + public String toString() { + StringBuilder sb = new StringBuilder(); + sb.append(getCommandType().name()); + + for (CommandSegment segment : segments) { + sb.append(' ').append(segment); + } + + return sb.toString(); + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/segment/package-info.java b/src/main/java/io/lettuce/core/dynamic/segment/package-info.java new file mode 100644 index 0000000000..a605d1ca31 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/segment/package-info.java @@ -0,0 +1,4 @@ +/** + * Support for {@link io.lettuce.core.dynamic.segment.CommandSegments} and segment parsing. + */ +package io.lettuce.core.dynamic.segment; diff --git a/src/main/java/io/lettuce/core/dynamic/support/AnnotationParameterNameDiscoverer.java b/src/main/java/io/lettuce/core/dynamic/support/AnnotationParameterNameDiscoverer.java new file mode 100644 index 0000000000..74ba016771 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/support/AnnotationParameterNameDiscoverer.java @@ -0,0 +1,77 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.dynamic.support; + +import java.lang.annotation.Annotation; +import java.lang.reflect.Constructor; +import java.lang.reflect.Method; +import java.util.ArrayList; +import java.util.List; + +import io.lettuce.core.dynamic.annotation.Param; + +/** + * {@link ParameterNameDiscoverer} based on {@link Param} annotations to resolve parameter names. + * + * @author Mark Paluch + */ +public class AnnotationParameterNameDiscoverer implements ParameterNameDiscoverer { + + @Override + public String[] getParameterNames(Method method) { + + if (method.getParameterCount() == 0) { + return new String[0]; + } + + return doGetParameterNames(method.getParameterAnnotations()); + } + + @Override + public String[] getParameterNames(Constructor ctor) { + + if (ctor.getParameterCount() == 0) { + return new String[0]; + } + + return doGetParameterNames(ctor.getParameterAnnotations()); + } + + protected String[] doGetParameterNames(Annotation[][] parameterAnnotations) { + + List names = new ArrayList<>(); + + for (int i = 0; i < parameterAnnotations.length; i++) { + + boolean foundParam = false; + for (int j = 0; j < parameterAnnotations[i].length; j++) { + + if (parameterAnnotations[i][j].annotationType().equals(Param.class)) { + foundParam = true; + Param param = (Param) parameterAnnotations[i][j]; + names.add(param.value()); + break; + } + } + + if (!foundParam) { + return null; + } + } + + return names.toArray(new String[names.size()]); + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/support/ClassTypeInformation.java b/src/main/java/io/lettuce/core/dynamic/support/ClassTypeInformation.java new file mode 100644 index 0000000000..f369813986 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/support/ClassTypeInformation.java @@ -0,0 +1,180 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.support; + +import java.lang.ref.Reference; +import java.lang.ref.WeakReference; +import java.lang.reflect.Method; +import java.lang.reflect.Type; +import java.lang.reflect.TypeVariable; +import java.util.*; +import java.util.Map.Entry; + +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.internal.LettuceClassUtils; + +/** + * {@link TypeInformation} for a plain {@link Class}. 
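+ * <p>
+ * Illustrative sketch ({@code queryMethod} is a placeholder for a {@link java.lang.reflect.Method}):
+ *
+ * <pre>{@code
+ * TypeInformation<?> listType = ClassTypeInformation.from(List.class);
+ * TypeInformation<?> returnType = ClassTypeInformation.fromReturnTypeOf(queryMethod);
+ * TypeInformation<?> firstArgument = ClassTypeInformation.fromMethodParameter(queryMethod, 0);
+ * }</pre>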
+ */ +@SuppressWarnings({ "unchecked", "rawtypes" }) +public class ClassTypeInformation extends TypeDiscoverer { + + public static final ClassTypeInformation COLLECTION = new ClassTypeInformation(Collection.class); + public static final ClassTypeInformation LIST = new ClassTypeInformation(List.class); + public static final ClassTypeInformation SET = new ClassTypeInformation(Set.class); + public static final ClassTypeInformation MAP = new ClassTypeInformation(Map.class); + public static final ClassTypeInformation OBJECT = new ClassTypeInformation(Object.class); + + private static final Map, Reference>> CACHE = Collections + .synchronizedMap(new WeakHashMap, Reference>>()); + + static { + for (ClassTypeInformation info : Arrays.asList(COLLECTION, LIST, SET, MAP, OBJECT)) { + CACHE.put(info.getType(), new WeakReference<>(info)); + } + } + + private final Class type; + + /** + * Simple factory method to easily create new instances of {@link ClassTypeInformation}. + * + * @param + * @param type must not be {@literal null}. + * @return + */ + public static ClassTypeInformation from(Class type) { + + LettuceAssert.notNull(type, "Type must not be null!"); + + Reference> cachedReference = CACHE.get(type); + TypeInformation cachedTypeInfo = cachedReference == null ? null : cachedReference.get(); + + if (cachedTypeInfo != null) { + return (ClassTypeInformation) cachedTypeInfo; + } + + ClassTypeInformation result = new ClassTypeInformation(type); + CACHE.put(type, new WeakReference>(result)); + return result; + } + + /** + * Creates a {@link TypeInformation} from the given method's return type. + * + * @param method must not be {@literal null}. + * @return + */ + public static TypeInformation fromReturnTypeOf(Method method) { + + LettuceAssert.notNull(method, "Method must not be null!"); + return new ClassTypeInformation(method.getDeclaringClass()).createInfo(method.getGenericReturnType()); + } + + /** + * Creates a {@link TypeInformation} from the given method's parameter type. + * + * @param method must not be {@literal null}. + * @return + */ + public static TypeInformation fromMethodParameter(Method method, int index) { + + LettuceAssert.notNull(method, "Method must not be null!"); + return new ClassTypeInformation(method.getDeclaringClass()).createInfo(method.getGenericParameterTypes()[index]); + } + + /** + * Creates {@link ClassTypeInformation} for the given type. + * + * @param type + */ + ClassTypeInformation(Class type) { + super(getUserClass(type), getTypeVariableMap(type)); + this.type = type; + } + + /** + * Return the user-defined class for the given class: usually simply the given class, but the original class in case of a + * CGLIB-generated subclass. + * + * @param clazz the class to check + * @return the user-defined class + */ + private static Class getUserClass(Class clazz) { + if (clazz != null && clazz.getName().contains(LettuceClassUtils.CGLIB_CLASS_SEPARATOR)) { + Class superclass = clazz.getSuperclass(); + if (superclass != null && Object.class != superclass) { + return superclass; + } + } + return clazz; + } + + /** + * Little helper to allow us to create a generified map, actually just to satisfy the compiler. + * + * @param type must not be {@literal null}. 
+ * @return + */ + private static Map, Type> getTypeVariableMap(Class type) { + return getTypeVariableMap(type, new HashSet()); + } + + @SuppressWarnings("deprecation") + private static Map, Type> getTypeVariableMap(Class type, Collection visited) { + + if (visited.contains(type)) { + return Collections.emptyMap(); + } else { + visited.add(type); + } + + Map source = GenericTypeResolver.getTypeVariableMap(type); + Map, Type> map = new HashMap<>(source.size()); + + for (Entry entry : source.entrySet()) { + + Type value = entry.getValue(); + map.put(entry.getKey(), entry.getValue()); + + if (value instanceof Class) { + map.putAll(getTypeVariableMap((Class) value, visited)); + } + } + + return map; + } + + @Override + public Class getType() { + return type; + } + + @Override + public ClassTypeInformation getRawTypeInformation() { + return this; + } + + @Override + public boolean isAssignableFrom(TypeInformation target) { + return getType().isAssignableFrom(target.getType()); + } + + @Override + public String toString() { + return type.getName(); + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/support/CompositeParameterNameDiscoverer.java b/src/main/java/io/lettuce/core/dynamic/support/CompositeParameterNameDiscoverer.java new file mode 100644 index 0000000000..50a85fa5d3 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/support/CompositeParameterNameDiscoverer.java @@ -0,0 +1,65 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.support; + +import java.lang.reflect.Constructor; +import java.lang.reflect.Method; +import java.util.Arrays; +import java.util.Collection; + +/** + * Composite {@link ParameterNameDiscoverer} to resolve parameter names using multiple {@link ParameterNameDiscoverer}s. + * + * @author Mark Paluch + */ +public class CompositeParameterNameDiscoverer implements ParameterNameDiscoverer { + + private Collection parameterNameDiscoverers; + + public CompositeParameterNameDiscoverer(ParameterNameDiscoverer... 
parameterNameDiscoverers) { + this(Arrays.asList(parameterNameDiscoverers)); + } + + public CompositeParameterNameDiscoverer(Collection parameterNameDiscoverers) { + this.parameterNameDiscoverers = parameterNameDiscoverers; + } + + @Override + public String[] getParameterNames(Method method) { + + for (ParameterNameDiscoverer parameterNameDiscoverer : parameterNameDiscoverers) { + String[] parameterNames = parameterNameDiscoverer.getParameterNames(method); + if (parameterNames != null) { + return parameterNames; + } + } + + return null; + } + + @Override + public String[] getParameterNames(Constructor ctor) { + + for (ParameterNameDiscoverer parameterNameDiscoverer : parameterNameDiscoverers) { + String[] parameterNames = parameterNameDiscoverer.getParameterNames(ctor); + if (parameterNames != null) { + return parameterNames; + } + } + + return null; + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/support/GenericArrayTypeInformation.java b/src/main/java/io/lettuce/core/dynamic/support/GenericArrayTypeInformation.java new file mode 100644 index 0000000000..14db534bcf --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/support/GenericArrayTypeInformation.java @@ -0,0 +1,63 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.support; + +import java.lang.reflect.Array; +import java.lang.reflect.GenericArrayType; +import java.lang.reflect.Type; +import java.lang.reflect.TypeVariable; +import java.util.Map; + +/** + * Special {@link TypeDiscoverer} handling {@link GenericArrayType}s. + */ +class GenericArrayTypeInformation extends ParentTypeAwareTypeInformation { + + private final GenericArrayType type; + + /** + * Creates a new {@link GenericArrayTypeInformation} for the given {@link GenericArrayTypeInformation} and + * {@link TypeDiscoverer}. + * + * @param type must not be {@literal null}. + * @param parent must not be {@literal null}. + * @param typeVariableMap must not be {@literal null}. + */ + protected GenericArrayTypeInformation(GenericArrayType type, TypeDiscoverer parent, + Map, Type> typeVariableMap) { + + super(type, parent, typeVariableMap); + this.type = type; + } + + @Override + @SuppressWarnings("unchecked") + public Class getType() { + return (Class) Array.newInstance(resolveClass(type.getGenericComponentType()), 0).getClass(); + } + + @Override + protected TypeInformation doGetComponentType() { + + Type componentType = type.getGenericComponentType(); + return createInfo(componentType); + } + + @Override + public String toString() { + return type.toString(); + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/support/GenericTypeResolver.java b/src/main/java/io/lettuce/core/dynamic/support/GenericTypeResolver.java new file mode 100644 index 0000000000..060fb0c875 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/support/GenericTypeResolver.java @@ -0,0 +1,85 @@ +/* + * Copyright 2011-2020 the original author or authors. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.support; + +import java.lang.reflect.ParameterizedType; +import java.lang.reflect.Type; +import java.lang.reflect.TypeVariable; +import java.util.HashMap; +import java.util.Map; + +/** + * Helper class for resolving generic types against type variables. + * + *
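+ * Illustrative sketch ({@code StringList} is a hypothetical class implementing {@code List<String>}):
+ *
+ * <pre>{@code
+ * Class<?>[] arguments = GenericTypeResolver.resolveTypeArguments(StringList.class, List.class);
+ * // arguments -> [String.class]
+ * }</pre>
+ *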

+ * Mainly intended for usage within the framework, resolving method parameter types even when they are declared generically. + */ +public abstract class GenericTypeResolver { + + /** + * Build a mapping of {@link TypeVariable#getName TypeVariable names} to {@link Class concrete classes} for the specified + * {@link Class}. Searches all super types, enclosing types and interfaces. + */ + @SuppressWarnings("rawtypes") + public static Map getTypeVariableMap(Class clazz) { + Map typeVariableMap = new HashMap(); + buildTypeVariableMap(ResolvableType.forClass(clazz), typeVariableMap); + + return typeVariableMap; + } + + /** + * Resolve the type arguments of the given generic interface against the given target class which is assumed to implement + * the generic interface and possibly declare concrete types for its type variables. + * + * @param clazz the target class to check against + * @param genericIfc the generic interface or superclass to resolve the type argument from + * @return the resolved type of each argument, with the array size matching the number of actual type arguments, or + * {@code null} if not resolvable + */ + public static Class[] resolveTypeArguments(Class clazz, Class genericIfc) { + ResolvableType type = ResolvableType.forClass(clazz).as(genericIfc); + if (!type.hasGenerics() || type.isEntirelyUnresolvable()) { + return null; + } + return type.resolveGenerics(Object.class); + } + + @SuppressWarnings("rawtypes") + private static void buildTypeVariableMap(ResolvableType type, Map typeVariableMap) { + if (type != ResolvableType.NONE) { + if (type.getType() instanceof ParameterizedType) { + TypeVariable[] variables = type.resolve().getTypeParameters(); + for (int i = 0; i < variables.length; i++) { + ResolvableType generic = type.getGeneric(i); + while (generic.getType() instanceof TypeVariable) { + generic = generic.resolveType(); + } + if (generic != ResolvableType.NONE) { + typeVariableMap.put(variables[i], generic.getType()); + } + } + } + buildTypeVariableMap(type.getSuperType(), typeVariableMap); + for (ResolvableType interfaceType : type.getInterfaces()) { + buildTypeVariableMap(interfaceType, typeVariableMap); + } + if (type.resolve().isMemberClass()) { + buildTypeVariableMap(ResolvableType.forClass(type.resolve().getEnclosingClass()), typeVariableMap); + } + } + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/support/MethodParameter.java b/src/main/java/io/lettuce/core/dynamic/support/MethodParameter.java new file mode 100644 index 0000000000..406f6cad51 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/support/MethodParameter.java @@ -0,0 +1,526 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.dynamic.support; + +import java.lang.annotation.Annotation; +import java.lang.reflect.*; +import java.util.HashMap; +import java.util.Map; + +import io.lettuce.core.internal.LettuceAssert; + +/** + * Helper class that encapsulates the specification of a method parameter, i.e. a {@link Method} or {@link Constructor} plus a + * parameter index and a nested type index for a declared generic type. Useful as a specification object to pass along. + */ +public class MethodParameter { + + private final Method method; + + private final Constructor constructor; + + private final int parameterIndex; + + private int nestingLevel = 1; + + /** Map from Integer level to Integer type index */ + Map typeIndexesPerLevel; + + private volatile Class containingClass; + + private volatile Class parameterType; + + private volatile Type genericParameterType; + + private volatile Annotation[] parameterAnnotations; + + private volatile ParameterNameDiscoverer parameterNameDiscoverer; + + private volatile String parameterName; + + /** + * Create a new {@code MethodParameter} for the given method, with nesting level 1. + * + * @param method the Method to specify a parameter for + * @param parameterIndex the index of the parameter: -1 for the method return type; 0 for the first method parameter; 1 for + * the second method parameter, etc. + */ + public MethodParameter(Method method, int parameterIndex) { + this(method, parameterIndex, 1); + } + + /** + * Create a new {@code MethodParameter} for the given method. + * + * @param method the Method to specify a parameter for + * @param parameterIndex the index of the parameter: -1 for the method return type; 0 for the first method parameter; 1 for + * the second method parameter, etc. + * @param nestingLevel the nesting level of the target type (typically 1; e.g. in case of a List of Lists, 1 would indicate + * the nested List, whereas 2 would indicate the element of the nested List) + */ + public MethodParameter(Method method, int parameterIndex, int nestingLevel) { + + LettuceAssert.notNull(method, "Method must not be null"); + + this.method = method; + this.parameterIndex = parameterIndex; + this.nestingLevel = nestingLevel; + this.constructor = null; + } + + /** + * Create a new MethodParameter for the given constructor, with nesting level 1. + * + * @param constructor the Constructor to specify a parameter for + * @param parameterIndex the index of the parameter + */ + public MethodParameter(Constructor constructor, int parameterIndex) { + this(constructor, parameterIndex, 1); + } + + /** + * Create a new MethodParameter for the given constructor. + * + * @param constructor the Constructor to specify a parameter for + * @param parameterIndex the index of the parameter + * @param nestingLevel the nesting level of the target type (typically 1; e.g. in case of a List of Lists, 1 would indicate + * the nested List, whereas 2 would indicate the element of the nested List) + */ + public MethodParameter(Constructor constructor, int parameterIndex, int nestingLevel) { + LettuceAssert.notNull(constructor, "Constructor must not be null"); + this.constructor = constructor; + this.parameterIndex = parameterIndex; + this.nestingLevel = nestingLevel; + this.method = null; + } + + /** + * Copy constructor, resulting in an independent MethodParameter object based on the same metadata and cache state that the + * original object was in. 
+ * + * @param original the original MethodParameter object to copy from + */ + public MethodParameter(MethodParameter original) { + LettuceAssert.notNull(original, "Original must not be null"); + this.method = original.method; + this.constructor = original.constructor; + this.parameterIndex = original.parameterIndex; + this.nestingLevel = original.nestingLevel; + this.typeIndexesPerLevel = original.typeIndexesPerLevel; + this.containingClass = original.containingClass; + this.parameterType = original.parameterType; + this.genericParameterType = original.genericParameterType; + this.parameterAnnotations = original.parameterAnnotations; + this.parameterNameDiscoverer = original.parameterNameDiscoverer; + this.parameterName = original.parameterName; + } + + /** + * Return the wrapped Method, if any. + *

+ * Note: Either Method or Constructor is available. + * + * @return the Method, or {@code null} if none + */ + public Method getMethod() { + return this.method; + } + + /** + * Return the wrapped Constructor, if any. + *

+ * Note: Either Method or Constructor is available. + * + * @return the Constructor, or {@code null} if none + */ + public Constructor getConstructor() { + return this.constructor; + } + + /** + * Returns the wrapped member. + * + * @return the Method or Constructor as Member + */ + public Member getMember() { + // NOTE: no ternary expression to retain JDK <8 compatibility even when using + // the JDK 8 compiler (potentially selecting java.lang.reflect.Executable + // as common type, with that new base class not available on older JDKs) + if (this.method != null) { + return this.method; + } else { + return this.constructor; + } + } + + /** + * Returns the wrapped annotated element. + * + * @return the Method or Constructor as AnnotatedElement + */ + public AnnotatedElement getAnnotatedElement() { + // NOTE: no ternary expression to retain JDK <8 compatibility even when using + // the JDK 8 compiler (potentially selecting java.lang.reflect.Executable + // as common type, with that new base class not available on older JDKs) + if (this.method != null) { + return this.method; + } else { + return this.constructor; + } + } + + /** + * Return the class that declares the underlying Method or Constructor. + */ + public Class getDeclaringClass() { + return getMember().getDeclaringClass(); + } + + /** + * Return the index of the method/constructor parameter. + * + * @return the parameter index (-1 in case of the return type) + */ + public int getParameterIndex() { + return this.parameterIndex; + } + + /** + * Increase this parameter's nesting level. + * + * @see #getNestingLevel() + */ + public void increaseNestingLevel() { + this.nestingLevel++; + } + + /** + * Decrease this parameter's nesting level. + * + * @see #getNestingLevel() + */ + public void decreaseNestingLevel() { + getTypeIndexesPerLevel().remove(this.nestingLevel); + this.nestingLevel--; + } + + /** + * Return the nesting level of the target type (typically 1; e.g. in case of a List of Lists, 1 would indicate the nested + * List, whereas 2 would indicate the element of the nested List). + */ + public int getNestingLevel() { + return this.nestingLevel; + } + + /** + * Set the type index for the current nesting level. + * + * @param typeIndex the corresponding type index (or {@code null} for the default type index) + * @see #getNestingLevel() + */ + public void setTypeIndexForCurrentLevel(int typeIndex) { + getTypeIndexesPerLevel().put(this.nestingLevel, typeIndex); + } + + /** + * Return the type index for the current nesting level. + * + * @return the corresponding type index, or {@code null} if none specified (indicating the default type index) + * @see #getNestingLevel() + */ + public Integer getTypeIndexForCurrentLevel() { + return getTypeIndexForLevel(this.nestingLevel); + } + + /** + * Return the type index for the specified nesting level. + * + * @param nestingLevel the nesting level to check + * @return the corresponding type index, or {@code null} if none specified (indicating the default type index) + */ + public Integer getTypeIndexForLevel(int nestingLevel) { + return getTypeIndexesPerLevel().get(nestingLevel); + } + + /** + * Obtain the (lazily constructed) type-indexes-per-level Map. + */ + private Map getTypeIndexesPerLevel() { + if (this.typeIndexesPerLevel == null) { + this.typeIndexesPerLevel = new HashMap(4); + } + return this.typeIndexesPerLevel; + } + + /** + * Set a containing class to resolve the parameter type against. 
+ */ + void setContainingClass(Class containingClass) { + this.containingClass = containingClass; + } + + public Class getContainingClass() { + return (this.containingClass != null ? this.containingClass : getDeclaringClass()); + } + + /** + * Set a resolved (generic) parameter type. + */ + void setParameterType(Class parameterType) { + this.parameterType = parameterType; + } + + /** + * Return the type of the method/constructor parameter. + * + * @return the parameter type (never {@code null}) + */ + public Class getParameterType() { + if (this.parameterType == null) { + if (this.parameterIndex < 0) { + this.parameterType = (this.method != null ? this.method.getReturnType() : null); + } else { + this.parameterType = (this.method != null ? this.method.getParameterTypes()[this.parameterIndex] + : this.constructor.getParameterTypes()[this.parameterIndex]); + } + } + return this.parameterType; + } + + /** + * Return the generic type of the method/constructor parameter. + * + * @return the parameter type (never {@code null}) + */ + public Type getGenericParameterType() { + if (this.genericParameterType == null) { + if (this.parameterIndex < 0) { + this.genericParameterType = (this.method != null ? this.method.getGenericReturnType() : null); + } else { + this.genericParameterType = (this.method != null ? this.method.getGenericParameterTypes()[this.parameterIndex] + : this.constructor.getGenericParameterTypes()[this.parameterIndex]); + } + } + return this.genericParameterType; + } + + /** + * Return the nested type of the method/constructor parameter. + * + * @return the parameter type (never {@code null}) + * @see #getNestingLevel() + */ + public Class getNestedParameterType() { + if (this.nestingLevel > 1) { + Type type = getGenericParameterType(); + for (int i = 2; i <= this.nestingLevel; i++) { + if (type instanceof ParameterizedType) { + Type[] args = ((ParameterizedType) type).getActualTypeArguments(); + Integer index = getTypeIndexForLevel(i); + type = args[index != null ? index : args.length - 1]; + } + } + if (type instanceof Class) { + return (Class) type; + } else if (type instanceof ParameterizedType) { + Type arg = ((ParameterizedType) type).getRawType(); + if (arg instanceof Class) { + return (Class) arg; + } + } + return Object.class; + } else { + return getParameterType(); + } + } + + /** + * Return the nested generic type of the method/constructor parameter. + * + * @return the parameter type (never {@code null}) + * @see #getNestingLevel() + */ + public Type getNestedGenericParameterType() { + if (this.nestingLevel > 1) { + Type type = getGenericParameterType(); + for (int i = 2; i <= this.nestingLevel; i++) { + if (type instanceof ParameterizedType) { + Type[] args = ((ParameterizedType) type).getActualTypeArguments(); + Integer index = getTypeIndexForLevel(i); + type = args[index != null ? index : args.length - 1]; + } + } + return type; + } else { + return getGenericParameterType(); + } + } + + /** + * Return the annotations associated with the target method/constructor itself. + */ + public Annotation[] getMethodAnnotations() { + return adaptAnnotationArray(getAnnotatedElement().getAnnotations()); + } + + /** + * Return the method/constructor annotation of the given type, if available. 
+ * + * @param annotationType the annotation type to look for + * @return the annotation object, or {@code null} if not found + */ + public A getMethodAnnotation(Class annotationType) { + return adaptAnnotation(getAnnotatedElement().getAnnotation(annotationType)); + } + + /** + * Return the annotations associated with the specific method/constructor parameter. + */ + public Annotation[] getParameterAnnotations() { + if (this.parameterAnnotations == null) { + Annotation[][] annotationArray = (this.method != null ? this.method.getParameterAnnotations() + : this.constructor.getParameterAnnotations()); + if (this.parameterIndex >= 0 && this.parameterIndex < annotationArray.length) { + this.parameterAnnotations = adaptAnnotationArray(annotationArray[this.parameterIndex]); + } else { + this.parameterAnnotations = new Annotation[0]; + } + } + return this.parameterAnnotations; + } + + /** + * Return the parameter annotation of the given type, if available. + * + * @param annotationType the annotation type to look for + * @return the annotation object, or {@code null} if not found + */ + @SuppressWarnings("unchecked") + public T getParameterAnnotation(Class annotationType) { + Annotation[] anns = getParameterAnnotations(); + for (Annotation ann : anns) { + if (annotationType.isInstance(ann)) { + return (T) ann; + } + } + return null; + } + + /** + * Return true if the parameter has at least one annotation, false if it has none. + */ + public boolean hasParameterAnnotations() { + return (getParameterAnnotations().length != 0); + } + + /** + * Return true if the parameter has the given annotation type, and false if it doesn't. + */ + public boolean hasParameterAnnotation(Class annotationType) { + return (getParameterAnnotation(annotationType) != null); + } + + /** + * Initialize parameter name discovery for this method parameter. + *
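+     * <p>
+     * Illustrative sketch ({@code method} is a placeholder {@link Method}):
+     *
+     * <pre>{@code
+     * MethodParameter parameter = new MethodParameter(method, 0);
+     * parameter.initParameterNameDiscovery(new CompositeParameterNameDiscoverer(
+     *         new StandardReflectionParameterNameDiscoverer(), new AnnotationParameterNameDiscoverer()));
+     * String name = parameter.getParameterName(); // may be null if no name metadata is available
+     * }</pre>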

+ * This method does not actually try to retrieve the parameter name at this point; it just allows discovery to happen when + * the application calls {@link #getParameterName()} (if ever). + */ + public void initParameterNameDiscovery(ParameterNameDiscoverer parameterNameDiscoverer) { + this.parameterNameDiscoverer = parameterNameDiscoverer; + } + + /** + * Return the name of the method/constructor parameter. + * + * @return the parameter name (may be {@code null} if no parameter name metadata is contained in the class file or no + * {@link #initParameterNameDiscovery ParameterNameDiscoverer} has been set to begin with) + */ + public String getParameterName() { + ParameterNameDiscoverer discoverer = this.parameterNameDiscoverer; + if (discoverer != null) { + String[] parameterNames = (this.method != null ? discoverer.getParameterNames(this.method) + : discoverer.getParameterNames(this.constructor)); + if (parameterNames != null) { + this.parameterName = parameterNames[this.parameterIndex]; + } + this.parameterNameDiscoverer = null; + } + return this.parameterName; + } + + /** + * A template method to post-process a given annotation instance before returning it to the caller. + *

+ * The default implementation simply returns the given annotation as-is. + * + * @param annotation the annotation about to be returned + * @return the post-processed annotation (or simply the original one) + */ + protected A adaptAnnotation(A annotation) { + return annotation; + } + + /** + * A template method to post-process a given annotation array before returning it to the caller. + *

+ * The default implementation simply returns the given annotation array as-is. + * + * @param annotations the annotation array about to be returned + * @return the post-processed annotation array (or simply the original one) + */ + protected Annotation[] adaptAnnotationArray(Annotation[] annotations) { + return annotations; + } + + @Override + public boolean equals(Object other) { + if (this == other) { + return true; + } + if (!(other instanceof MethodParameter)) { + return false; + } + MethodParameter otherParam = (MethodParameter) other; + return (this.parameterIndex == otherParam.parameterIndex && getMember().equals(otherParam.getMember())); + } + + @Override + public int hashCode() { + return (getMember().hashCode() * 31 + this.parameterIndex); + } + + /** + * Create a new MethodParameter for the given method or constructor. + *
+ * This is a convenience constructor for scenarios where a Method or Constructor reference is treated in a generic fashion. + * + * @param methodOrConstructor the Method or Constructor to specify a parameter for + * @param parameterIndex the index of the parameter + * @return the corresponding MethodParameter instance + */ + public static MethodParameter forMethodOrConstructor(Object methodOrConstructor, int parameterIndex) { + if (methodOrConstructor instanceof Method) { + return new MethodParameter((Method) methodOrConstructor, parameterIndex); + } else if (methodOrConstructor instanceof Constructor) { + return new MethodParameter((Constructor) methodOrConstructor, parameterIndex); + } else { + throw new IllegalArgumentException( + "Given object [" + methodOrConstructor + "] is neither a Method nor a Constructor"); + } + } + +} diff --git a/src/main/java/io/lettuce/core/dynamic/support/ParameterNameDiscoverer.java b/src/main/java/io/lettuce/core/dynamic/support/ParameterNameDiscoverer.java new file mode 100644 index 0000000000..94ffeb9ded --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/support/ParameterNameDiscoverer.java @@ -0,0 +1,46 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.support; + +import java.lang.reflect.Constructor; +import java.lang.reflect.Method; + +/** + * Interface to discover parameter names for methods and constructors. + * + *
+ * Parameter name discovery is not always possible, but various strategies are available to try, such as looking for debug + * information that may have been emitted at compile time, and looking for argname annotation values. + */ +public interface ParameterNameDiscoverer { + + /** + * Return parameter names for this method, or {@code null} if they cannot be determined. + * + * @param method method to find parameter names for + * @return an array of parameter names if the names can be resolved, or {@code null} if they cannot + */ + String[] getParameterNames(Method method); + + /** + * Return parameter names for this constructor, or {@code null} if they cannot be determined. + * + * @param ctor constructor to find parameter names for + * @return an array of parameter names if the names can be resolved, or {@code null} if they cannot + */ + String[] getParameterNames(Constructor ctor); + +} diff --git a/src/main/java/io/lettuce/core/dynamic/support/ParametrizedTypeInformation.java b/src/main/java/io/lettuce/core/dynamic/support/ParametrizedTypeInformation.java new file mode 100644 index 0000000000..d14b044e5f --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/support/ParametrizedTypeInformation.java @@ -0,0 +1,209 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.support; + +import java.lang.reflect.ParameterizedType; +import java.lang.reflect.Type; +import java.lang.reflect.TypeVariable; +import java.util.*; + +import io.lettuce.core.LettuceStrings; + +/** + * Base class for all types that include parametrization of some kind. Crucial as we have to take note of the parent class we + * will have to resolve generic parameters against. + */ +class ParametrizedTypeInformation extends ParentTypeAwareTypeInformation { + + private final ParameterizedType type; + private Boolean resolved; + + /** + * Creates a new {@link ParametrizedTypeInformation} for the given {@link Type} and parent {@link TypeDiscoverer}. 
+ * + * @param type must not be {@literal null} + * @param parent must not be {@literal null} + */ + public ParametrizedTypeInformation(ParameterizedType type, TypeDiscoverer parent, + Map, Type> typeVariableMap) { + + super(type, parent, typeVariableMap); + this.type = type; + } + + @Override + protected TypeInformation doGetMapValueType() { + + if (Map.class.isAssignableFrom(getType())) { + + Type[] arguments = type.getActualTypeArguments(); + + if (arguments.length > 1) { + return createInfo(arguments[1]); + } + } + + Class rawType = getType(); + + Set supertypes = new HashSet(); + supertypes.add(rawType.getGenericSuperclass()); + supertypes.addAll(Arrays.asList(rawType.getGenericInterfaces())); + + for (Type supertype : supertypes) { + + Class rawSuperType = resolveClass(supertype); + + if (Map.class.isAssignableFrom(rawSuperType)) { + + ParameterizedType parameterizedSupertype = (ParameterizedType) supertype; + Type[] arguments = parameterizedSupertype.getActualTypeArguments(); + return createInfo(arguments[1]); + } + } + + return super.doGetMapValueType(); + } + + @Override + public List> getTypeArguments() { + + List> result = new ArrayList<>(); + + for (Type argument : type.getActualTypeArguments()) { + result.add(createInfo(argument)); + } + + return result; + } + + @Override + public boolean isAssignableFrom(TypeInformation target) { + + if (this.equals(target)) { + return true; + } + + Class rawType = getType(); + Class rawTargetType = target.getType(); + + if (!rawType.isAssignableFrom(rawTargetType)) { + return false; + } + + TypeInformation otherTypeInformation = rawType.equals(rawTargetType) ? target + : target.getSuperTypeInformation(rawType); + + List> myParameters = getTypeArguments(); + List> typeParameters = otherTypeInformation.getTypeArguments(); + + if (myParameters.size() != typeParameters.size()) { + return false; + } + + for (int i = 0; i < myParameters.size(); i++) { + + if (myParameters.get(i) instanceof WildcardTypeInformation) { + if (!myParameters.get(i).isAssignableFrom(typeParameters.get(i))) { + return false; + } + } else { + if (!myParameters.get(i).getType().equals(typeParameters.get(i).getType())) { + return false; + } + + if (!myParameters.get(i).isAssignableFrom(typeParameters.get(i))) { + return false; + } + } + + } + + return true; + } + + @Override + protected TypeInformation doGetComponentType() { + return createInfo(type.getActualTypeArguments()[0]); + } + + @Override + public boolean equals(Object obj) { + + if (obj == this) { + return true; + } + + if (!(obj instanceof ParametrizedTypeInformation)) { + return false; + } + + ParametrizedTypeInformation that = (ParametrizedTypeInformation) obj; + + if (this.isResolvedCompletely() && that.isResolvedCompletely()) { + return this.type.equals(that.type); + } + + return super.equals(obj); + } + + @Override + public int hashCode() { + return isResolvedCompletely() ? 
this.type.hashCode() : super.hashCode(); + } + + @Override + public String toString() { + + return String.format("%s<%s>", getType().getName(), + LettuceStrings.collectionToDelimitedString(getTypeArguments(), ",", "", "")); + } + + private boolean isResolvedCompletely() { + + if (resolved != null) { + return resolved; + } + + Type[] typeArguments = type.getActualTypeArguments(); + + if (typeArguments.length == 0) { + return cacheAndReturn(false); + } + + for (Type typeArgument : typeArguments) { + + TypeInformation info = createInfo(typeArgument); + + if (info instanceof ParametrizedTypeInformation) { + if (!((ParametrizedTypeInformation) info).isResolvedCompletely()) { + return cacheAndReturn(false); + } + } + + if (!(info instanceof ClassTypeInformation)) { + return cacheAndReturn(false); + } + } + + return cacheAndReturn(true); + } + + private boolean cacheAndReturn(boolean resolved) { + + this.resolved = resolved; + return resolved; + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/support/ParentTypeAwareTypeInformation.java b/src/main/java/io/lettuce/core/dynamic/support/ParentTypeAwareTypeInformation.java new file mode 100644 index 0000000000..7ee70fd634 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/support/ParentTypeAwareTypeInformation.java @@ -0,0 +1,94 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.support; + +import java.lang.reflect.Type; +import java.lang.reflect.TypeVariable; +import java.util.HashMap; +import java.util.Map; + +/** + * Base class for {@link TypeInformation} implementations that need parent type awareness. + */ +abstract class ParentTypeAwareTypeInformation extends TypeDiscoverer { + + private final TypeDiscoverer parent; + private int hashCode; + + /** + * Creates a new {@link ParentTypeAwareTypeInformation}. + * + * @param type must not be {@literal null}. + * @param parent must not be {@literal null}. + * @param map must not be {@literal null}. + */ + protected ParentTypeAwareTypeInformation(Type type, TypeDiscoverer parent, Map, Type> map) { + + super(type, mergeMaps(parent, map)); + this.parent = parent; + } + + /** + * Merges the type variable maps of the given parent with the new map. + * + * @param parent must not be {@literal null}. + * @param map must not be {@literal null}. 
+ * @return + */ + private static Map, Type> mergeMaps(TypeDiscoverer parent, Map, Type> map) { + + Map, Type> typeVariableMap = new HashMap, Type>(); + typeVariableMap.putAll(map); + typeVariableMap.putAll(parent.getTypeVariableMap()); + + return typeVariableMap; + } + + @Override + protected TypeInformation createInfo(Type fieldType) { + + if (parent.getType().equals(fieldType)) { + return parent; + } + + return super.createInfo(fieldType); + } + + @Override + public boolean equals(Object obj) { + + if (!super.equals(obj)) { + return false; + } + + if (!this.getClass().equals(obj.getClass())) { + return false; + } + + ParentTypeAwareTypeInformation that = (ParentTypeAwareTypeInformation) obj; + return this.parent == null ? that.parent == null : this.parent.equals(that.parent); + } + + @Override + public int hashCode() { + + if (this.hashCode == 0) { + this.hashCode = super.hashCode() + 31 * parent.hashCode(); + } + + return this.hashCode; + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/support/ReflectionUtils.java b/src/main/java/io/lettuce/core/dynamic/support/ReflectionUtils.java new file mode 100644 index 0000000000..bd6c345b62 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/support/ReflectionUtils.java @@ -0,0 +1,366 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.support; + +import java.lang.reflect.*; +import java.util.Arrays; +import java.util.LinkedList; +import java.util.List; + +import io.lettuce.core.internal.LettuceAssert; + +/** + * Simple utility class for working with the reflection API and handling reflection exceptions. + * + *
+ * Only intended for internal use. + */ +public abstract class ReflectionUtils { + + /** + * Get the field represented by the supplied {@link Field field object} on the specified {@link Object target object}. In + * accordance with {@link Field#get(Object)} semantics, the returned value is automatically wrapped if the underlying field + * has a primitive type. + *
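+ * For example, reading a {@code public static} primitive field (the checked exception from the field lookup is omitted):
+ * <pre>
+ * Field field = Integer.class.getField("MAX_VALUE");
+ * Object value = ReflectionUtils.getField(field, null); // Integer.valueOf(2147483647)
+ * </pre>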
+ * Thrown exceptions are handled via a call to {@link #handleReflectionException(Exception)}. + * + * @param field the field to get + * @param target the target object from which to get the field + * @return the field's current value + */ + public static Object getField(Field field, Object target) { + try { + return field.get(target); + } catch (IllegalAccessException ex) { + handleReflectionException(ex); + throw new IllegalStateException( + "Unexpected reflection exception - " + ex.getClass().getName() + ": " + ex.getMessage()); + } + } + + /** + * Attempt to find a {@link Method} on the supplied class with the supplied name and no parameters. Searches all + * superclasses up to {@code Object}. + *
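+ * For example, a minimal sketch combining {@code findMethod} with {@code invokeMethod}:
+ * <pre>
+ * Method sizeMethod = ReflectionUtils.findMethod(ArrayList.class, "size");
+ * Object size = ReflectionUtils.invokeMethod(sizeMethod, new ArrayList());
+ * </pre>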
+ * Returns {@code null} if no {@link Method} can be found. + * + * @param clazz the class to introspect + * @param name the name of the method + * @return the Method object, or {@code null} if none found + */ + public static Method findMethod(Class clazz, String name) { + return findMethod(clazz, name, new Class[0]); + } + + /** + * Attempt to find a {@link Method} on the supplied class with the supplied name and parameter types. Searches all + * superclasses up to {@code Object}. + *
+ * Returns {@code null} if no {@link Method} can be found. + * + * @param clazz the class to introspect + * @param name the name of the method + * @param paramTypes the parameter types of the method (may be {@code null} to indicate any signature) + * @return the Method object, or {@code null} if none found + */ + public static Method findMethod(Class clazz, String name, Class... paramTypes) { + LettuceAssert.notNull(clazz, "Class must not be null"); + LettuceAssert.notNull(name, "Method name must not be null"); + Class searchType = clazz; + while (searchType != null) { + Method[] methods = (searchType.isInterface() ? searchType.getMethods() : getDeclaredMethods(searchType)); + for (Method method : methods) { + if (name.equals(method.getName()) + && (paramTypes == null || Arrays.equals(paramTypes, method.getParameterTypes()))) { + return method; + } + } + searchType = searchType.getSuperclass(); + } + return null; + } + + /** + * Invoke the specified {@link Method} against the supplied target object with no arguments. The target object can be + * {@code null} when invoking a static {@link Method}. + *
+ * Thrown exceptions are handled via a call to {@link #handleReflectionException}. + * + * @param method the method to invoke + * @param target the target object to invoke the method on + * @return the invocation result, if any + * @see #invokeMethod(java.lang.reflect.Method, Object, Object[]) + */ + public static Object invokeMethod(Method method, Object target) { + return invokeMethod(method, target, new Object[0]); + } + + /** + * Invoke the specified {@link Method} against the supplied target object with the supplied arguments. The target object can + * be {@code null} when invoking a static {@link Method}. + *
+ * Thrown exceptions are handled via a call to {@link #handleReflectionException}. + * + * @param method the method to invoke + * @param target the target object to invoke the method on + * @param args the invocation arguments (may be {@code null}) + * @return the invocation result, if any + */ + public static Object invokeMethod(Method method, Object target, Object... args) { + try { + return method.invoke(target, args); + } catch (Exception ex) { + handleReflectionException(ex); + } + throw new IllegalStateException("Should never get here"); + } + + /** + * Handle the given reflection exception. Should only be called if no checked exception is expected to be thrown by the + * target method. + *
+ * Throws the underlying RuntimeException or Error in case of an InvocationTargetException with such a root cause. Throws an + * IllegalStateException with an appropriate message or UndeclaredThrowableException otherwise. + * + * @param ex the reflection exception to handle + */ + public static void handleReflectionException(Exception ex) { + if (ex instanceof NoSuchMethodException) { + throw new IllegalStateException("Method not found: " + ex.getMessage()); + } + if (ex instanceof IllegalAccessException) { + throw new IllegalStateException("Could not access method: " + ex.getMessage()); + } + if (ex instanceof InvocationTargetException) { + handleInvocationTargetException((InvocationTargetException) ex); + } + if (ex instanceof RuntimeException) { + throw (RuntimeException) ex; + } + throw new UndeclaredThrowableException(ex); + } + + /** + * Handle the given invocation target exception. Should only be called if no checked exception is expected to be thrown by + * the target method. + *
+ * Throws the underlying RuntimeException or Error in case of such a root cause. Throws an UndeclaredThrowableException + * otherwise. + * + * @param ex the invocation target exception to handle + */ + public static void handleInvocationTargetException(InvocationTargetException ex) { + rethrowRuntimeException(ex.getTargetException()); + } + + /** + * Rethrow the given {@link Throwable exception}, which is presumably the target exception of an + * {@link InvocationTargetException}. Should only be called if no checked exception is expected to be thrown by the target + * method. + *
+ * Rethrows the underlying exception cast to a {@link RuntimeException} or {@link Error} if appropriate; otherwise, throws + * an {@link UndeclaredThrowableException}. + * + * @param ex the exception to rethrow + * @throws RuntimeException the rethrown exception + */ + public static void rethrowRuntimeException(Throwable ex) { + if (ex instanceof RuntimeException) { + throw (RuntimeException) ex; + } + if (ex instanceof Error) { + throw (Error) ex; + } + throw new UndeclaredThrowableException(ex); + } + + /** + * Perform the given callback operation on all matching methods of the given class and superclasses. + *
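+ * For example, a minimal sketch collecting method names with a {@link MethodCallback} lambda:
+ * <pre>
+ * List names = new ArrayList();
+ * ReflectionUtils.doWithMethods(ArrayList.class, method -> names.add(method.getName()));
+ * </pre>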
+ * The same named method occurring on subclass and superclass will appear twice, unless excluded by a {@link MethodFilter}. + * + * @param clazz the class to introspect + * @param mc the callback to invoke for each method + * @see #doWithMethods(Class, MethodCallback, MethodFilter) + */ + public static void doWithMethods(Class clazz, MethodCallback mc) { + doWithMethods(clazz, mc, null); + } + + /** + * Perform the given callback operation on all matching methods of the given class and superclasses (or given interface and + * super-interfaces). + *
+ * The same named method occurring on subclass and superclass will appear twice, unless excluded by the specified + * {@link MethodFilter}. + * + * @param clazz the class to introspect + * @param mc the callback to invoke for each method + * @param mf the filter that determines the methods to apply the callback to + */ + public static void doWithMethods(Class clazz, MethodCallback mc, MethodFilter mf) { + // Keep backing up the inheritance hierarchy. + Method[] methods = getDeclaredMethods(clazz); + for (Method method : methods) { + if (mf != null && !mf.matches(method)) { + continue; + } + try { + mc.doWith(method); + } catch (IllegalAccessException ex) { + throw new IllegalStateException("Not allowed to access method '" + method.getName() + "': " + ex); + } + } + if (clazz.getSuperclass() != null) { + doWithMethods(clazz.getSuperclass(), mc, mf); + } else if (clazz.isInterface()) { + for (Class superIfc : clazz.getInterfaces()) { + doWithMethods(superIfc, mc, mf); + } + } + } + + /** + * This variant retrieves {@link Class#getDeclaredMethods()} from a local cache in order to avoid the JVM's SecurityManager + * check and defensive array copying. In addition, it also includes Java 8 default methods from locally implemented + * interfaces, since those are effectively to be treated just like declared methods. + * + * @param clazz the class to introspect + * @return the cached array of methods + * @see Class#getDeclaredMethods() + */ + private static Method[] getDeclaredMethods(Class clazz) { + + Method[] result; + Method[] declaredMethods = clazz.getDeclaredMethods(); + List defaultMethods = findConcreteMethodsOnInterfaces(clazz); + if (defaultMethods != null) { + result = new Method[declaredMethods.length + defaultMethods.size()]; + System.arraycopy(declaredMethods, 0, result, 0, declaredMethods.length); + int index = declaredMethods.length; + for (Method defaultMethod : defaultMethods) { + result[index] = defaultMethod; + index++; + } + } else { + result = declaredMethods; + } + return result; + } + + private static List findConcreteMethodsOnInterfaces(Class clazz) { + List result = null; + for (Class ifc : clazz.getInterfaces()) { + for (Method ifcMethod : ifc.getMethods()) { + if (!Modifier.isAbstract(ifcMethod.getModifiers())) { + if (result == null) { + result = new LinkedList(); + } + result.add(ifcMethod); + } + } + } + return result; + } + + /** + * Invoke the given callback on all fields in the target class, going up the class hierarchy to get all declared fields. + * + * @param clazz the target class to analyze + * @param fc the callback to invoke for each field + */ + public static void doWithFields(Class clazz, FieldCallback fc) { + doWithFields(clazz, fc, null); + } + + /** + * Invoke the given callback on all fields in the target class, going up the class hierarchy to get all declared fields. + * + * @param clazz the target class to analyze + * @param fc the callback to invoke for each field + * @param ff the filter that determines the fields to apply the callback to + */ + public static void doWithFields(Class clazz, FieldCallback fc, FieldFilter ff) { + // Keep backing up the inheritance hierarchy. 
+ Class targetClass = clazz; + do { + Field[] fields = targetClass.getDeclaredFields(); + for (Field field : fields) { + if (ff != null && !ff.matches(field)) { + continue; + } + try { + fc.doWith(field); + } catch (IllegalAccessException ex) { + throw new IllegalStateException("Not allowed to access field '" + field.getName() + "': " + ex); + } + } + targetClass = targetClass.getSuperclass(); + } while (targetClass != null && targetClass != Object.class); + } + + /** + * Action to take on each method. + */ + public interface MethodCallback { + + /** + * Perform an operation using the given method. + * + * @param method the method to operate on + */ + void doWith(Method method) throws IllegalArgumentException, IllegalAccessException; + } + + /** + * Callback optionally used to filter methods to be operated on by a method callback. + */ + public interface MethodFilter { + + /** + * Determine whether the given method matches. + * + * @param method the method to check + */ + boolean matches(Method method); + } + + /** + * Callback interface invoked on each field in the hierarchy. + */ + public interface FieldCallback { + + /** + * Perform an operation using the given field. + * + * @param field the field to operate on + */ + void doWith(Field field) throws IllegalArgumentException, IllegalAccessException; + } + + /** + * Callback optionally used to filter fields to be operated on by a field callback. + */ + public interface FieldFilter { + + /** + * Determine whether the given field matches. + * + * @param field the field to check + */ + boolean matches(Field field); + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/support/ResolvableType.java b/src/main/java/io/lettuce/core/dynamic/support/ResolvableType.java new file mode 100644 index 0000000000..de2e3ebbd3 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/support/ResolvableType.java @@ -0,0 +1,1370 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.support; + +import java.io.Serializable; +import java.lang.reflect.*; +import java.util.*; + +import io.lettuce.core.LettuceStrings; +import io.lettuce.core.dynamic.support.TypeWrapper.MethodParameterTypeProvider; +import io.lettuce.core.dynamic.support.TypeWrapper.TypeProvider; +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.internal.LettuceClassUtils; + +/** + * Encapsulates a Java {@link java.lang.reflect.Type}, providing access to {@link #getSuperType() supertypes}, + * {@link #getInterfaces() interfaces}, and {@link #getGeneric(int...) generic parameters} along with the ability to ultimately + * {@link #resolve() resolve} to a {@link java.lang.Class}. + */ +@SuppressWarnings("serial") +public class ResolvableType implements Serializable { + + /** + * {@code ResolvableType} returned when no value is available. {@code NONE} is used in preference to {@code null} so that + * multiple method calls can be safely chained. 
+ */ + public static final ResolvableType NONE = new ResolvableType(null, null, null); + + private static final ResolvableType[] EMPTY_TYPES_ARRAY = new ResolvableType[0]; + + /** + * The underlying Java type being managed (only ever {@code null} for {@link #NONE}). + */ + private final Type type; + + /** + * Optional provider for the type. + */ + private final TypeProvider typeProvider; + + /** + * The {@code VariableResolver} to use or {@code null} if no resolver is available. + */ + private final VariableResolver variableResolver; + + /** + * The component type for an array or {@code null} if the type should be deduced. + */ + private final ResolvableType componentType; + + /** + * Copy of the resolved value. + */ + private final Class resolved; + + private ResolvableType superType; + + private ResolvableType[] interfaces; + + private ResolvableType[] generics; + + /** + * Private constructor used to create a new {@link ResolvableType} for cache key purposes, with no upfront resolution. + */ + private ResolvableType(Type type, TypeProvider typeProvider, VariableResolver variableResolver) { + this.type = type; + this.typeProvider = typeProvider; + this.variableResolver = variableResolver; + this.componentType = null; + this.resolved = resolveClass(); + } + + /** + * Private constructor used to create a new {@link ResolvableType} for uncached purposes, with upfront resolution but lazily + * calculated hash. + */ + private ResolvableType(Type type, TypeProvider typeProvider, VariableResolver variableResolver, + ResolvableType componentType) { + + this.type = type; + this.typeProvider = typeProvider; + this.variableResolver = variableResolver; + this.componentType = componentType; + this.resolved = resolveClass(); + } + + /** + * Private constructor used to create a new {@link ResolvableType} on a {@link Class} basis. Avoids all {@code instanceof} + * checks in order to create a straight {@link Class} wrapper. + */ + private ResolvableType(Class sourceClass) { + this.resolved = (sourceClass != null ? sourceClass : Object.class); + this.type = this.resolved; + this.typeProvider = null; + this.variableResolver = null; + this.componentType = null; + } + + /** + * Return the underling Java {@link Type} being managed. With the exception of the {@link #NONE} constant, this method will + * never return {@code null}. + */ + public Type getType() { + return TypeWrapper.unwrap(this.type); + } + + /** + * Return the underlying Java {@link Class} being managed, if available; otherwise {@code null}. + */ + public Class getRawClass() { + if (this.type == this.resolved) { + return this.resolved; + } + Type rawType = this.type; + if (rawType instanceof ParameterizedType) { + rawType = ((ParameterizedType) rawType).getRawType(); + } + return (rawType instanceof Class ? (Class) rawType : null); + } + + /** + * Return the underlying source of the resolvable type. Will return a {@link Field}, {@link MethodParameter} or {@link Type} + * depending on how the {@link ResolvableType} was constructed. With the exception of the {@link #NONE} constant, this + * method will never return {@code null}. This method is primarily to provide access to additional type information or + * meta-data that alternative JVM languages may provide. + */ + public Object getSource() { + Object source = (this.typeProvider != null ? this.typeProvider.getSource() : null); + return (source != null ? source : this.type); + } + + /** + * Determine whether the given object is an instance of this {@code ResolvableType}. 
+ * + * @param obj the object to check + * @see #isAssignableFrom(Class) + */ + public boolean isInstance(Object obj) { + return (obj != null && isAssignableFrom(obj.getClass())); + } + + /** + * Determine whether this {@code ResolvableType} is assignable from the specified other type. + * + * @param other the type to be checked against (as a {@code Class}) + * @see #isAssignableFrom(ResolvableType) + */ + public boolean isAssignableFrom(Class other) { + return isAssignableFrom(forClass(other), null); + } + + /** + * Determine whether this {@code ResolvableType} is assignable from the specified other type. + *
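+ * For example, a sketch of the generics-aware check:
+ * <pre>
+ * ResolvableType charSequenceList = ResolvableType.forClassWithGenerics(List.class, CharSequence.class);
+ * ResolvableType stringList = ResolvableType.forClassWithGenerics(List.class, String.class);
+ * charSequenceList.isAssignableFrom(stringList); // false: generic parameters must match exactly
+ * </pre>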
+ * Attempts to follow the same rules as the Java compiler, considering whether both the {@link #resolve() resolved} + * {@code Class} is {@link Class#isAssignableFrom(Class) assignable from} the given type as well as whether all + * {@link #getGenerics() generics} are assignable. + * + * @param other the type to be checked against (as a {@code ResolvableType}) + * @return {@code true} if the specified other type can be assigned to this {@code ResolvableType}; {@code false} otherwise + */ + public boolean isAssignableFrom(ResolvableType other) { + return isAssignableFrom(other, null); + } + + private boolean isAssignableFrom(ResolvableType other, Map matchedBefore) { + LettuceAssert.notNull(other, "ResolvableType must not be null"); + + // If we cannot resolve types, we are not assignable + if (this == NONE || other == NONE) { + return false; + } + + // Deal with array by delegating to the component type + if (isArray()) { + return (other.isArray() && getComponentType().isAssignableFrom(other.getComponentType())); + } + + if (matchedBefore != null && matchedBefore.get(this.type) == other.type) { + return true; + } + + // Deal with wildcard bounds + WildcardBounds ourBounds = WildcardBounds.get(this); + WildcardBounds typeBounds = WildcardBounds.get(other); + + // In the from X is assignable to + if (typeBounds != null) { + return (ourBounds != null && ourBounds.isSameKind(typeBounds) + && ourBounds.isAssignableFrom(typeBounds.getBounds())); + } + + // In the form is assignable to X... + if (ourBounds != null) { + return ourBounds.isAssignableFrom(other); + } + + // Main assignability check about to follow + boolean exactMatch = (matchedBefore != null); // We're checking nested generic variables now... + boolean checkGenerics = true; + Class ourResolved = null; + if (this.type instanceof TypeVariable) { + TypeVariable variable = (TypeVariable) this.type; + // Try default variable resolution + if (this.variableResolver != null) { + ResolvableType resolved = this.variableResolver.resolveVariable(variable); + if (resolved != null) { + ourResolved = resolved.resolve(); + } + } + if (ourResolved == null) { + // Try variable resolution against target type + if (other.variableResolver != null) { + ResolvableType resolved = other.variableResolver.resolveVariable(variable); + if (resolved != null) { + ourResolved = resolved.resolve(); + checkGenerics = false; + } + } + } + if (ourResolved == null) { + // Unresolved type variable, potentially nested -> never insist on exact match + exactMatch = false; + } + } + if (ourResolved == null) { + ourResolved = resolve(Object.class); + } + Class otherResolved = other.resolve(Object.class); + + // We need an exact type match for generics + // List is not assignable from List + if (exactMatch ? !ourResolved.equals(otherResolved) : !LettuceClassUtils.isAssignable(ourResolved, otherResolved)) { + return false; + } + + if (checkGenerics) { + // Recursively check each generic + ResolvableType[] ourGenerics = getGenerics(); + ResolvableType[] typeGenerics = other.as(ourResolved).getGenerics(); + if (ourGenerics.length != typeGenerics.length) { + return false; + } + if (matchedBefore == null) { + matchedBefore = new IdentityHashMap(1); + } + matchedBefore.put(this.type, other.type); + for (int i = 0; i < ourGenerics.length; i++) { + if (!ourGenerics[i].isAssignableFrom(typeGenerics[i], matchedBefore)) { + return false; + } + } + } + + return true; + } + + /** + * Return {@code true} if this type resolves to a Class that represents an array. 
+ * + * @see #getComponentType() + */ + public boolean isArray() { + if (this == NONE) { + return false; + } + return (((this.type instanceof Class && ((Class) this.type).isArray())) || this.type instanceof GenericArrayType + || resolveType().isArray()); + } + + /** + * Return the ResolvableType representing the component type of the array or {@link #NONE} if this type does not represent + * an array. + * + * @see #isArray() + */ + public ResolvableType getComponentType() { + if (this == NONE) { + return NONE; + } + if (this.componentType != null) { + return this.componentType; + } + if (this.type instanceof Class) { + Class componentType = ((Class) this.type).getComponentType(); + return forType(componentType, this.variableResolver); + } + if (this.type instanceof GenericArrayType) { + return forType(((GenericArrayType) this.type).getGenericComponentType(), this.variableResolver); + } + return resolveType().getComponentType(); + } + + /** + * Convenience method to return this type as a resolvable {@link Collection} type. Returns {@link #NONE} if this type does + * not implement or extend {@link Collection}. + * + * @see #as(Class) + * @see #asMap() + */ + public ResolvableType asCollection() { + return as(Collection.class); + } + + /** + * Convenience method to return this type as a resolvable {@link Map} type. Returns {@link #NONE} if this type does not + * implement or extend {@link Map}. + * + * @see #as(Class) + * @see #asCollection() + */ + public ResolvableType asMap() { + return as(Map.class); + } + + /** + * Return this type as a {@link ResolvableType} of the specified class. Searches {@link #getSuperType() supertype} and + * {@link #getInterfaces() interface} hierarchies to find a match, returning {@link #NONE} if this type does not implement + * or extend the specified class. + * + * @param type the required class type + * @return a {@link ResolvableType} representing this object as the specified type, or {@link #NONE} if not resolvable as + * that type + * @see #asCollection() + * @see #asMap() + * @see #getSuperType() + * @see #getInterfaces() + */ + public ResolvableType as(Class type) { + if (this == NONE) { + return NONE; + } + if (nullSafeEquals(resolve(), type)) { + return this; + } + for (ResolvableType interfaceType : getInterfaces()) { + ResolvableType interfaceAsType = interfaceType.as(type); + if (interfaceAsType != NONE) { + return interfaceAsType; + } + } + return getSuperType().as(type); + } + + /** + * Return a {@link ResolvableType} representing the direct supertype of this type. If no supertype is available this method + * returns {@link #NONE}. + * + * @see #getInterfaces() + */ + public ResolvableType getSuperType() { + Class resolved = resolve(); + if (resolved == null || resolved.getGenericSuperclass() == null) { + return NONE; + } + if (this.superType == null) { + this.superType = forType(TypeWrapper.forGenericSuperclass(resolved), asVariableResolver()); + } + return this.superType; + } + + /** + * Return a {@link ResolvableType} array representing the direct interfaces implemented by this type. If this type does not + * implement any interfaces an empty array is returned. 
+ * + * @see #getSuperType() + */ + public ResolvableType[] getInterfaces() { + Class resolved = resolve(); + Object[] array = resolved.getGenericInterfaces(); + if (array == null || array.length == 0) { + return EMPTY_TYPES_ARRAY; + } + if (this.interfaces == null) { + this.interfaces = forTypes(TypeWrapper.forGenericInterfaces(resolved), asVariableResolver()); + } + return this.interfaces; + } + + /** + * Return {@code true} if this type contains generic parameters. + * + * @see #getGeneric(int...) + * @see #getGenerics() + */ + public boolean hasGenerics() { + return (getGenerics().length > 0); + } + + /** + * Return {@code true} if this type contains unresolvable generics only, that is, no substitute for any of its declared type + * variables. + */ + boolean isEntirelyUnresolvable() { + if (this == NONE) { + return false; + } + ResolvableType[] generics = getGenerics(); + for (ResolvableType generic : generics) { + if (!generic.isUnresolvableTypeVariable() && !generic.isWildcardWithoutBounds()) { + return false; + } + } + return true; + } + + /** + * Determine whether the underlying type has any unresolvable generics: either through an unresolvable type variable on the + * type itself or through implementing a generic interface in a raw fashion, i.e. without substituting that interface's type + * variables. The result will be {@code true} only in those two scenarios. + */ + public boolean hasUnresolvableGenerics() { + if (this == NONE) { + return false; + } + ResolvableType[] generics = getGenerics(); + for (ResolvableType generic : generics) { + if (generic.isUnresolvableTypeVariable() || generic.isWildcardWithoutBounds()) { + return true; + } + } + Class resolved = resolve(); + if (resolved != null) { + for (Type genericInterface : resolved.getGenericInterfaces()) { + if (genericInterface instanceof Class) { + if (forClass((Class) genericInterface).hasGenerics()) { + return true; + } + } + } + return getSuperType().hasUnresolvableGenerics(); + } + return false; + } + + /** + * Determine whether the underlying type is a type variable that cannot be resolved through the associated variable + * resolver. + */ + private boolean isUnresolvableTypeVariable() { + if (this.type instanceof TypeVariable) { + if (this.variableResolver == null) { + return true; + } + TypeVariable variable = (TypeVariable) this.type; + ResolvableType resolved = this.variableResolver.resolveVariable(variable); + if (resolved == null || resolved.isUnresolvableTypeVariable()) { + return true; + } + } + return false; + } + + /** + * Determine whether the underlying type represents a wildcard without specific bounds (i.e., equal to + * {@code ? extends Object}). + */ + private boolean isWildcardWithoutBounds() { + if (this.type instanceof WildcardType) { + WildcardType wt = (WildcardType) this.type; + if (wt.getLowerBounds().length == 0) { + Type[] upperBounds = wt.getUpperBounds(); + if (upperBounds.length == 0 || (upperBounds.length == 1 && Object.class == upperBounds[0])) { + return true; + } + } + } + return false; + } + + /** + * Return a {@link ResolvableType} for the specified nesting level. See {@link #getNested(int, Map)} for details. + * + * @param nestingLevel the nesting level + * @return the {@link ResolvableType} type, or {@code #NONE} + */ + public ResolvableType getNested(int nestingLevel) { + return getNested(nestingLevel, null); + } + + /** + * Return a {@link ResolvableType} for the specified nesting level. The nesting level refers to the specific generic + * parameter that should be returned. 
A nesting level of 1 indicates this type; 2 indicates the first nested generic; 3 the + * second; and so on. For example, given {@code List>} level 1 refers to the {@code List}, level 2 the + * {@code Set}, and level 3 the {@code Integer}. + *
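+ * For example, a sketch of that case using {@code forClassWithGenerics}:
+ * <pre>
+ * ResolvableType setOfInteger = ResolvableType.forClassWithGenerics(Set.class, Integer.class);
+ * ResolvableType listOfSets = ResolvableType.forClassWithGenerics(List.class, setOfInteger);
+ * listOfSets.getNested(2).resolve(); // Set.class
+ * listOfSets.getNested(3).resolve(); // Integer.class
+ * </pre>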
+ * The {@code typeIndexesPerLevel} map can be used to reference a specific generic for the given level. For example, an + * index of 0 would refer to a {@code Map} key; whereas, 1 would refer to the value. If the map does not contain a value for + * a specific level the last generic will be used (e.g. a {@code Map} value). + *
+ * Nesting levels may also apply to array types; for example given {@code String[]}, a nesting level of 2 refers to + * {@code String}. + *
+ * If a type does not {@link #hasGenerics() contain} generics the {@link #getSuperType() supertype} hierarchy will be + * considered. + * + * @param nestingLevel the required nesting level, indexed from 1 for the current type, 2 for the first nested generic, 3 + * for the second and so on + * @param typeIndexesPerLevel a map containing the generic index for a given nesting level (may be {@code null}) + * @return a {@link ResolvableType} for the nested level or {@link #NONE} + */ + public ResolvableType getNested(int nestingLevel, Map typeIndexesPerLevel) { + ResolvableType result = this; + for (int i = 2; i <= nestingLevel; i++) { + if (result.isArray()) { + result = result.getComponentType(); + } else { + // Handle derived types + while (result != ResolvableType.NONE && !result.hasGenerics()) { + result = result.getSuperType(); + } + Integer index = (typeIndexesPerLevel != null ? typeIndexesPerLevel.get(i) : null); + index = (index == null ? result.getGenerics().length - 1 : index); + result = result.getGeneric(index); + } + } + return result; + } + + /** + * Return a {@link ResolvableType} representing the generic parameter for the given indexes. Indexes are zero based; for + * example given the type {@code Map>}, {@code getGeneric(0)} will access the {@code Integer}. Nested + * generics can be accessed by specifying multiple indexes; for example {@code getGeneric(1, 0)} will access the + * {@code String} from the nested {@code List}. For convenience, if no indexes are specified the first generic is returned. + *
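+ * For example, a sketch of the {@code Map} case described above:
+ * <pre>
+ * ResolvableType listOfString = ResolvableType.forClassWithGenerics(List.class, String.class);
+ * ResolvableType mapType = ResolvableType.forClassWithGenerics(Map.class,
+ *         ResolvableType.forClass(Integer.class), listOfString);
+ * mapType.getGeneric(0).resolve();    // Integer.class
+ * mapType.getGeneric(1, 0).resolve(); // String.class
+ * </pre>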
+ * If no generic is available at the specified indexes {@link #NONE} is returned. + * + * @param indexes the indexes that refer to the generic parameter (may be omitted to return the first generic) + * @return a {@link ResolvableType} for the specified generic or {@link #NONE} + * @see #hasGenerics() + * @see #getGenerics() + * @see #resolveGeneric(int...) + * @see #resolveGenerics() + */ + public ResolvableType getGeneric(int... indexes) { + try { + if (indexes == null || indexes.length == 0) { + return getGenerics()[0]; + } + ResolvableType generic = this; + for (int index : indexes) { + generic = generic.getGenerics()[index]; + } + return generic; + } catch (IndexOutOfBoundsException ex) { + return NONE; + } + } + + /** + * Return an array of {@link ResolvableType}s representing the generic parameters of this type. If no generics are available + * an empty array is returned. If you need to access a specific generic consider using the {@link #getGeneric(int...)} + * method as it allows access to nested generics and protects against {@code IndexOutOfBoundsExceptions}. + * + * @return an array of {@link ResolvableType}s representing the generic parameters (never {@code null}) + * @see #hasGenerics() + * @see #getGeneric(int...) + * @see #resolveGeneric(int...) + * @see #resolveGenerics() + */ + public ResolvableType[] getGenerics() { + if (this == NONE) { + return EMPTY_TYPES_ARRAY; + } + if (this.generics == null) { + if (this.type instanceof Class) { + Class typeClass = (Class) this.type; + this.generics = forTypes(TypeWrapper.forTypeParameters(typeClass), this.variableResolver); + } else if (this.type instanceof ParameterizedType) { + Type[] actualTypeArguments = ((ParameterizedType) this.type).getActualTypeArguments(); + ResolvableType[] generics = new ResolvableType[actualTypeArguments.length]; + for (int i = 0; i < actualTypeArguments.length; i++) { + generics[i] = forType(actualTypeArguments[i], this.variableResolver); + } + this.generics = generics; + } else { + this.generics = resolveType().getGenerics(); + } + } + return this.generics; + } + + /** + * Convenience method that will {@link #getGenerics() get} and {@link #resolve() resolve} generic parameters. + * + * @return an array of resolved generic parameters (the resulting array will never be {@code null}, but it may contain + * {@code null} elements}) + * @see #getGenerics() + * @see #resolve() + */ + public Class[] resolveGenerics() { + return resolveGenerics(null); + } + + /** + * Convenience method that will {@link #getGenerics() get} and {@link #resolve() resolve} generic parameters, using the + * specified {@code fallback} if any type cannot be resolved. + * + * @param fallback the fallback class to use if resolution fails (may be {@code null}) + * @return an array of resolved generic parameters (the resulting array will never be {@code null}, but it may contain + * {@code null} elements}) + * @see #getGenerics() + * @see #resolve() + */ + public Class[] resolveGenerics(Class fallback) { + ResolvableType[] generics = getGenerics(); + Class[] resolvedGenerics = new Class[generics.length]; + for (int i = 0; i < generics.length; i++) { + resolvedGenerics[i] = generics[i].resolve(fallback); + } + return resolvedGenerics; + } + + /** + * Convenience method that will {@link #getGeneric(int...) get} and {@link #resolve() resolve} a specific generic + * parameters. 
+ * + * @param indexes the indexes that refer to the generic parameter (may be omitted to return the first generic) + * @return a resolved {@link Class} or {@code null} + * @see #getGeneric(int...) + * @see #resolve() + */ + public Class resolveGeneric(int... indexes) { + return getGeneric(indexes).resolve(); + } + + /** + * Resolve this type to a {@link java.lang.Class}, returning {@code null} if the type cannot be resolved. This method will + * consider bounds of {@link TypeVariable}s and {@link WildcardType}s if direct resolution fails; however, bounds of + * {@code Object.class} will be ignored. + * + * @return the resolved {@link Class}, or {@code null} if not resolvable + * @see #resolve(Class) + * @see #resolveGeneric(int...) + * @see #resolveGenerics() + */ + public Class resolve() { + return resolve(null); + } + + /** + * Resolve this type to a {@link java.lang.Class}, returning the specified {@code fallback} if the type cannot be resolved. + * This method will consider bounds of {@link TypeVariable}s and {@link WildcardType}s if direct resolution fails; however, + * bounds of {@code Object.class} will be ignored. + * + * @param fallback the fallback class to use if resolution fails (may be {@code null}) + * @return the resolved {@link Class} or the {@code fallback} + * @see #resolve() + * @see #resolveGeneric(int...) + * @see #resolveGenerics() + */ + public Class resolve(Class fallback) { + return (this.resolved != null ? this.resolved : fallback); + } + + private Class resolveClass() { + if (this.type instanceof Class || this.type == null) { + return (Class) this.type; + } + if (this.type instanceof GenericArrayType) { + Class resolvedComponent = getComponentType().resolve(); + return (resolvedComponent != null ? Array.newInstance(resolvedComponent, 0).getClass() : null); + } + return resolveType().resolve(); + } + + /** + * Resolve this type by a single level, returning the resolved value or {@link #NONE}. + *
+ * Note: The returned {@link ResolvableType} should only be used as an intermediary as it cannot be serialized. + */ + public ResolvableType resolveType() { + if (this.type instanceof ParameterizedType) { + return forType(((ParameterizedType) this.type).getRawType(), this.variableResolver); + } + if (this.type instanceof WildcardType) { + Type resolved = resolveBounds(((WildcardType) this.type).getUpperBounds()); + if (resolved == null) { + resolved = resolveBounds(((WildcardType) this.type).getLowerBounds()); + } + return forType(resolved, this.variableResolver); + } + if (this.type instanceof TypeVariable) { + TypeVariable variable = (TypeVariable) this.type; + // Try default variable resolution + if (this.variableResolver != null) { + ResolvableType resolved = this.variableResolver.resolveVariable(variable); + if (resolved != null) { + return resolved; + } + } + // Fallback to bounds + return forType(resolveBounds(variable.getBounds()), this.variableResolver); + } + return NONE; + } + + private Type resolveBounds(Type[] bounds) { + if ((bounds == null || bounds.length == 0) || Object.class == bounds[0]) { + return null; + } + return bounds[0]; + } + + private ResolvableType resolveVariable(TypeVariable variable) { + if (this.type instanceof TypeVariable) { + return resolveType().resolveVariable(variable); + } + if (this.type instanceof ParameterizedType) { + ParameterizedType parameterizedType = (ParameterizedType) this.type; + TypeVariable[] variables = resolve().getTypeParameters(); + for (int i = 0; i < variables.length; i++) { + if (nullSafeEquals(variables[i].getName(), variable.getName())) { + Type actualType = parameterizedType.getActualTypeArguments()[i]; + return forType(actualType, this.variableResolver); + } + } + if (parameterizedType.getOwnerType() != null) { + return forType(parameterizedType.getOwnerType(), this.variableResolver).resolveVariable(variable); + } + } + if (this.variableResolver != null) { + return this.variableResolver.resolveVariable(variable); + } + return null; + } + + @Override + public boolean equals(Object other) { + if (this == other) { + return true; + } + if (!(other instanceof ResolvableType)) { + return false; + } + + ResolvableType otherType = (ResolvableType) other; + if (!nullSafeEquals(this.type, otherType.type)) { + return false; + } + if (this.typeProvider != otherType.typeProvider && (this.typeProvider == null || otherType.typeProvider == null + || !nullSafeEquals(this.typeProvider.getSource(), otherType.typeProvider.getSource()))) { + return false; + } + if (this.variableResolver != otherType.variableResolver + && (this.variableResolver == null || otherType.variableResolver == null + || !nullSafeEquals(this.variableResolver.getSource(), otherType.variableResolver.getSource()))) { + return false; + } + if (!nullSafeEquals(this.componentType, otherType.componentType)) { + return false; + } + return true; + } + + @Override + public int hashCode() { + int result = Objects.hash(type, typeProvider, variableResolver, componentType, resolved, superType); + result = 31 * result + Arrays.hashCode(interfaces); + result = 31 * result + Arrays.hashCode(generics); + return result; + } + + /** + * Adapts this {@link ResolvableType} to a {@link VariableResolver}. + */ + VariableResolver asVariableResolver() { + if (this == NONE) { + return null; + } + return new DefaultVariableResolver(); + } + + /** + * Custom serialization support for {@link #NONE}. + */ + private Object readResolve() { + return (this.type == null ? 
NONE : this); + } + + /** + * Return a String representation of this type in its fully resolved form (including any generic parameters). + */ + @Override + public String toString() { + if (isArray()) { + return getComponentType() + "[]"; + } + if (this.resolved == null) { + return "?"; + } + if (this.type instanceof TypeVariable) { + TypeVariable variable = (TypeVariable) this.type; + if (this.variableResolver == null || this.variableResolver.resolveVariable(variable) == null) { + // Don't bother with variable boundaries for toString()... + // Can cause infinite recursions in case of self-references + return "?"; + } + } + StringBuilder result = new StringBuilder(this.resolved.getName()); + if (hasGenerics()) { + result.append('<'); + result.append(LettuceStrings.arrayToDelimitedString(getGenerics(), ", ")); + result.append('>'); + } + return result.toString(); + } + + // Factory methods + + /** + * Return a {@link ResolvableType} for the specified {@link Class}, using the full generic type information for + * assignability checks. For example: {@code ResolvableType.forClass(MyArrayList.class)}. + * + * @param sourceClass the source class ({@code null} is semantically equivalent to {@code Object.class} for typical use + * cases here} + * @return a {@link ResolvableType} for the specified class + * @see #forClass(Class, Class) + * @see #forClassWithGenerics(Class, Class...) + */ + public static ResolvableType forClass(Class sourceClass) { + return new ResolvableType(sourceClass); + } + + /** + * Return a {@link ResolvableType} for the specified {@link Class}, doing assignability checks against the raw class only + * (analogous to {@link Class#isAssignableFrom}, which this serves as a wrapper for. For example: + * {@code ResolvableType.forClass(MyArrayList.class)}. + * + * @param sourceClass the source class ({@code null} is semantically equivalent to {@code Object.class} for typical use + * cases here} + * @return a {@link ResolvableType} for the specified class + * @see #forClass(Class) + * @see #getRawClass() + */ + public static ResolvableType forRawClass(Class sourceClass) { + return new ResolvableType(sourceClass) { + @Override + public boolean isAssignableFrom(Class other) { + return LettuceClassUtils.isAssignable(getRawClass(), other); + } + }; + } + + /** + * Return a {@link ResolvableType} for the specified {@link Class} with a given implementation. For example: + * {@code ResolvableType.forClass(List.class, MyArrayList.class)}. + * + * @param sourceClass the source class (must not be {@code null} + * @param implementationClass the implementation class + * @return a {@link ResolvableType} for the specified class backed by the given implementation class + * @see #forClass(Class) + * @see #forClassWithGenerics(Class, Class...) + */ + public static ResolvableType forClass(Class sourceClass, Class implementationClass) { + LettuceAssert.notNull(sourceClass, "Source class must not be null"); + ResolvableType asType = forType(implementationClass).as(sourceClass); + return (asType == NONE ? forType(sourceClass) : asType); + } + + /** + * Return a {@link ResolvableType} for the specified {@link Class} with pre-declared generics. + * + * @param sourceClass the source class + * @param generics the generics of the class + * @return a {@link ResolvableType} for the specific class and generics + * @see #forClassWithGenerics(Class, ResolvableType...) + */ + public static ResolvableType forClassWithGenerics(Class sourceClass, Class... 
generics) { + LettuceAssert.notNull(sourceClass, "Source class must not be null"); + LettuceAssert.notNull(generics, "Generics must not be null"); + ResolvableType[] resolvableGenerics = new ResolvableType[generics.length]; + for (int i = 0; i < generics.length; i++) { + resolvableGenerics[i] = forClass(generics[i]); + } + return forClassWithGenerics(sourceClass, resolvableGenerics); + } + + /** + * Return a {@link ResolvableType} for the specified {@link Class} with pre-declared generics. + * + * @param sourceClass the source class + * @param generics the generics of the class + * @return a {@link ResolvableType} for the specific class and generics + * @see #forClassWithGenerics(Class, Class...) + */ + public static ResolvableType forClassWithGenerics(Class sourceClass, ResolvableType... generics) { + LettuceAssert.notNull(sourceClass, "Source class must not be null"); + LettuceAssert.notNull(generics, "Generics must not be null"); + TypeVariable[] variables = sourceClass.getTypeParameters(); + LettuceAssert.isTrue(variables.length == generics.length, "Mismatched number of generics specified"); + + Type[] arguments = new Type[generics.length]; + for (int i = 0; i < generics.length; i++) { + ResolvableType generic = generics[i]; + Type argument = (generic != null ? generic.getType() : null); + arguments[i] = (argument != null ? argument : variables[i]); + } + + ParameterizedType syntheticType = new SyntheticParameterizedType(sourceClass, arguments); + return forType(syntheticType, new TypeVariablesVariableResolver(variables, generics)); + } + + /** + * Return a {@link ResolvableType} for the specified {@link Method} return type. + * + * @param method the source for the method return type + * @return a {@link ResolvableType} for the specified method return + * @see #forMethodReturnType(Method, Class) + */ + public static ResolvableType forMethodReturnType(Method method) { + LettuceAssert.notNull(method, "Method must not be null"); + return forMethodParameter(new MethodParameter(method, -1)); + } + + /** + * Return a {@link ResolvableType} for the specified {@link Method} return type. Use this variant when the class that + * declares the method includes generic parameter variables that are satisfied by the implementation class. + * + * @param method the source for the method return type + * @param implementationClass the implementation class + * @return a {@link ResolvableType} for the specified method return + * @see #forMethodReturnType(Method) + */ + public static ResolvableType forMethodReturnType(Method method, Class implementationClass) { + LettuceAssert.notNull(method, "Method must not be null"); + MethodParameter methodParameter = new MethodParameter(method, -1); + methodParameter.setContainingClass(implementationClass); + return forMethodParameter(methodParameter); + } + + /** + * Return a {@link ResolvableType} for the specified {@link Method} parameter. + * + * @param method the source method (must not be {@code null}) + * @param parameterIndex the parameter index + * @return a {@link ResolvableType} for the specified method parameter + * @see #forMethodParameter(Method, int, Class) + * @see #forMethodParameter(MethodParameter) + */ + public static ResolvableType forMethodParameter(Method method, int parameterIndex) { + LettuceAssert.notNull(method, "Method must not be null"); + return forMethodParameter(new MethodParameter(method, parameterIndex)); + } + + /** + * Return a {@link ResolvableType} for the specified {@link Method} parameter with a given implementation. 
Use this variant + * when the class that declares the method includes generic parameter variables that are satisfied by the implementation + * class. + * + * @param method the source method (must not be {@code null}) + * @param parameterIndex the parameter index + * @param implementationClass the implementation class + * @return a {@link ResolvableType} for the specified method parameter + * @see #forMethodParameter(Method, int, Class) + * @see #forMethodParameter(MethodParameter) + */ + public static ResolvableType forMethodParameter(Method method, int parameterIndex, Class implementationClass) { + LettuceAssert.notNull(method, "Method must not be null"); + MethodParameter methodParameter = new MethodParameter(method, parameterIndex); + methodParameter.setContainingClass(implementationClass); + return forMethodParameter(methodParameter); + } + + /** + * Return a {@link ResolvableType} for the specified {@link MethodParameter}. + * + * @param methodParameter the source method parameter (must not be {@code null}) + * @return a {@link ResolvableType} for the specified method parameter + * @see #forMethodParameter(Method, int) + */ + public static ResolvableType forMethodParameter(MethodParameter methodParameter) { + return forMethodParameter(methodParameter, (Type) null); + } + + /** + * Return a {@link ResolvableType} for the specified {@link MethodParameter} with a given implementation type. Use this + * variant when the class that declares the method includes generic parameter variables that are satisfied by the + * implementation type. + * + * @param methodParameter the source method parameter (must not be {@code null}) + * @param implementationType the implementation type + * @return a {@link ResolvableType} for the specified method parameter + * @see #forMethodParameter(MethodParameter) + */ + public static ResolvableType forMethodParameter(MethodParameter methodParameter, ResolvableType implementationType) { + LettuceAssert.notNull(methodParameter, "MethodParameter must not be null"); + implementationType = (implementationType != null ? implementationType : forType(methodParameter.getContainingClass())); + ResolvableType owner = implementationType.as(methodParameter.getDeclaringClass()); + return forType(null, new MethodParameterTypeProvider(methodParameter), owner.asVariableResolver()) + .getNested(methodParameter.getNestingLevel(), methodParameter.typeIndexesPerLevel); + } + + /** + * Return a {@link ResolvableType} for the specified {@link MethodParameter}, overriding the target type to resolve with a + * specific given type. + * + * @param methodParameter the source method parameter (must not be {@code null}) + * @param targetType the type to resolve (a part of the method parameter's type) + * @return a {@link ResolvableType} for the specified method parameter + * @see #forMethodParameter(Method, int) + */ + public static ResolvableType forMethodParameter(MethodParameter methodParameter, Type targetType) { + LettuceAssert.notNull(methodParameter, "MethodParameter must not be null"); + ResolvableType owner = forType(methodParameter.getContainingClass()).as(methodParameter.getDeclaringClass()); + return forType(targetType, new MethodParameterTypeProvider(methodParameter), owner.asVariableResolver()) + .getNested(methodParameter.getNestingLevel(), methodParameter.typeIndexesPerLevel); + } + + /** + * Resolve the top-level parameter type of the given {@code MethodParameter}. 
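+ *
+ * For illustration, a minimal sketch of the method-related factories above, assuming a hypothetical
+ * {@code KeyValueCommands} interface whose {@code get(String)} method returns a {@code List} of {@code String}:
+ *
+ * <pre>
+ * Method method = KeyValueCommands.class.getMethod("get", String.class);
+ * ResolvableType returnType = ResolvableType.forMethodReturnType(method);
+ * returnType.resolve();                                   // List.class
+ * returnType.getGenerics()[0].resolve();                  // String.class
+ * ResolvableType.forMethodParameter(method, 0).resolve(); // String.class
+ * </pre>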
+ * + * @param methodParameter the method parameter to resolve + * @see MethodParameter#setParameterType + */ + static void resolveMethodParameter(MethodParameter methodParameter) { + LettuceAssert.notNull(methodParameter, "MethodParameter must not be null"); + ResolvableType owner = forType(methodParameter.getContainingClass()).as(methodParameter.getDeclaringClass()); + methodParameter.setParameterType( + forType(null, new MethodParameterTypeProvider(methodParameter), owner.asVariableResolver()).resolve()); + } + + /** + * Return a {@link ResolvableType} as a array of the specified {@code componentType}. + * + * @param componentType the component type + * @return a {@link ResolvableType} as an array of the specified component type + */ + public static ResolvableType forArrayComponent(ResolvableType componentType) { + LettuceAssert.notNull(componentType, "Component type must not be null"); + Class arrayClass = Array.newInstance(componentType.resolve(), 0).getClass(); + return new ResolvableType(arrayClass, null, null, componentType); + } + + private static ResolvableType[] forTypes(Type[] types, VariableResolver owner) { + ResolvableType[] result = new ResolvableType[types.length]; + for (int i = 0; i < types.length; i++) { + result[i] = forType(types[i], owner); + } + return result; + } + + /** + * Return a {@link ResolvableType} for the specified {@link Type}. Note: The resulting {@link ResolvableType} may not be + * {@link Serializable}. + * + * @param type the source type or {@code null} + * @return a {@link ResolvableType} for the specified {@link Type} + * @see #forType(Type, ResolvableType) + */ + public static ResolvableType forType(Type type) { + return forType(type, null, null); + } + + /** + * Return a {@link ResolvableType} for the specified {@link Type} backed by the given owner type. Note: The resulting + * {@link ResolvableType} may not be {@link Serializable}. + * + * @param type the source type or {@code null} + * @param owner the owner type used to resolve variables + * @return a {@link ResolvableType} for the specified {@link Type} and owner + * @see #forType(Type) + */ + public static ResolvableType forType(Type type, ResolvableType owner) { + VariableResolver variableResolver = null; + if (owner != null) { + variableResolver = owner.asVariableResolver(); + } + return forType(type, variableResolver); + } + + /** + * Return a {@link ResolvableType} for the specified {@link Type} backed by a given {@link VariableResolver}. + * + * @param type the source type or {@code null} + * @param variableResolver the variable resolver or {@code null} + * @return a {@link ResolvableType} for the specified {@link Type} and {@link VariableResolver} + */ + public static ResolvableType forType(Type type, VariableResolver variableResolver) { + return forType(type, null, variableResolver); + } + + /** + * Return a {@link ResolvableType} for the specified {@link Type} backed by a given {@link VariableResolver}. 
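+ *
+ * A short, illustrative sketch of the public factory methods above:
+ *
+ * <pre>
+ * ResolvableType generic = ResolvableType.forClassWithGenerics(List.class, Integer.class);
+ * generic.getGenerics()[0].resolve();    // Integer.class
+ *
+ * ResolvableType raw = ResolvableType.forRawClass(List.class);
+ * raw.isAssignableFrom(ArrayList.class); // true, the raw check ignores generic parameters
+ * </pre>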
+ * + * @param type the source type or {@code null} + * @param typeProvider the type provider or {@code null} + * @param variableResolver the variable resolver or {@code null} + * @return a {@link ResolvableType} for the specified {@link Type} and {@link VariableResolver} + */ + static ResolvableType forType(Type type, TypeProvider typeProvider, VariableResolver variableResolver) { + if (type == null && typeProvider != null) { + type = TypeWrapper.forTypeProvider(typeProvider); + } + if (type == null) { + return NONE; + } + + // For simple Class references, build the wrapper right away - + // no expensive resolution necessary, so not worth caching... + if (type instanceof Class) { + return new ResolvableType(type, typeProvider, variableResolver, null); + } + + return new ResolvableType(type, typeProvider, variableResolver); + } + + /** + * Strategy interface used to resolve {@link TypeVariable}s. + */ + public interface VariableResolver extends Serializable { + + /** + * Return the source of the resolver (used for hashCode and equals). + */ + Object getSource(); + + /** + * Resolve the specified variable. + * + * @param variable the variable to resolve + * @return the resolved variable, or {@code null} if not found + */ + ResolvableType resolveVariable(TypeVariable variable); + } + + @SuppressWarnings("serial") + private class DefaultVariableResolver implements VariableResolver { + + @Override + public ResolvableType resolveVariable(TypeVariable variable) { + return ResolvableType.this.resolveVariable(variable); + } + + @Override + public Object getSource() { + return ResolvableType.this; + } + } + + @SuppressWarnings("serial") + private static class TypeVariablesVariableResolver implements VariableResolver { + + private final TypeVariable[] variables; + + private final ResolvableType[] generics; + + public TypeVariablesVariableResolver(TypeVariable[] variables, ResolvableType[] generics) { + this.variables = variables; + this.generics = generics; + } + + @Override + public ResolvableType resolveVariable(TypeVariable variable) { + for (int i = 0; i < this.variables.length; i++) { + if (TypeWrapper.unwrap(this.variables[i]).equals(TypeWrapper.unwrap(variable))) { + return this.generics[i]; + } + } + return null; + } + + @Override + public Object getSource() { + return this.generics; + } + } + + private static final class SyntheticParameterizedType implements ParameterizedType, Serializable { + + private final Type rawType; + + private final Type[] typeArguments; + + public SyntheticParameterizedType(Type rawType, Type[] typeArguments) { + this.rawType = rawType; + this.typeArguments = typeArguments; + } + + @Override + public Type getOwnerType() { + return null; + } + + @Override + public Type getRawType() { + return this.rawType; + } + + @Override + public Type[] getActualTypeArguments() { + return this.typeArguments; + } + + @Override + public boolean equals(Object other) { + if (this == other) { + return true; + } + if (!(other instanceof ParameterizedType)) { + return false; + } + ParameterizedType otherType = (ParameterizedType) other; + return (otherType.getOwnerType() == null && this.rawType.equals(otherType.getRawType()) + && Arrays.equals(this.typeArguments, otherType.getActualTypeArguments())); + } + + @Override + public int hashCode() { + return (this.rawType.hashCode() * 31 + Arrays.hashCode(this.typeArguments)); + } + } + + /** + * Internal helper to handle bounds from {@link WildcardType}s. 
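+ *
+ * Wildcard handling in a nutshell, as an illustrative sketch ({@code Holder} is a hypothetical class declaring a
+ * field {@code List<? extends Number> numbers}):
+ *
+ * <pre>
+ * Type fieldType = Holder.class.getDeclaredField("numbers").getGenericType();
+ * ResolvableType wildcardList = ResolvableType.forType(fieldType);
+ * wildcardList.isAssignableFrom(ResolvableType.forClassWithGenerics(List.class, Integer.class)); // true
+ * wildcardList.isAssignableFrom(ResolvableType.forClassWithGenerics(List.class, String.class));  // false
+ * </pre>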
+ */ + private static class WildcardBounds { + + private final Kind kind; + + private final ResolvableType[] bounds; + + /** + * Internal constructor to create a new {@link WildcardBounds} instance. + * + * @param kind the kind of bounds + * @param bounds the bounds + * @see #get(ResolvableType) + */ + public WildcardBounds(Kind kind, ResolvableType[] bounds) { + this.kind = kind; + this.bounds = bounds; + } + + /** + * Return {@code true} if this bounds is the same kind as the specified bounds. + */ + public boolean isSameKind(WildcardBounds bounds) { + return this.kind == bounds.kind; + } + + /** + * Return {@code true} if this bounds is assignable to all the specified types. + * + * @param types the types to test against + * @return {@code true} if this bounds is assignable to all types + */ + public boolean isAssignableFrom(ResolvableType... types) { + for (ResolvableType bound : this.bounds) { + for (ResolvableType type : types) { + if (!isAssignable(bound, type)) { + return false; + } + } + } + return true; + } + + private boolean isAssignable(ResolvableType source, ResolvableType from) { + return (this.kind == Kind.UPPER ? source.isAssignableFrom(from) : from.isAssignableFrom(source)); + } + + /** + * Return the underlying bounds. + */ + public ResolvableType[] getBounds() { + return this.bounds; + } + + /** + * Get a {@link WildcardBounds} instance for the specified type, returning {@code null} if the specified type cannot be + * resolved to a {@link WildcardType}. + * + * @param type the source type + * @return a {@link WildcardBounds} instance or {@code null} + */ + public static WildcardBounds get(ResolvableType type) { + ResolvableType resolveToWildcard = type; + while (!(resolveToWildcard.getType() instanceof WildcardType)) { + if (resolveToWildcard == NONE) { + return null; + } + resolveToWildcard = resolveToWildcard.resolveType(); + } + WildcardType wildcardType = (WildcardType) resolveToWildcard.type; + Kind boundsType = (wildcardType.getLowerBounds().length > 0 ? Kind.LOWER : Kind.UPPER); + Type[] bounds = boundsType == Kind.UPPER ? wildcardType.getUpperBounds() : wildcardType.getLowerBounds(); + ResolvableType[] resolvableBounds = new ResolvableType[bounds.length]; + for (int i = 0; i < bounds.length; i++) { + resolvableBounds[i] = ResolvableType.forType(bounds[i], type.variableResolver); + } + return new WildcardBounds(boundsType, resolvableBounds); + } + + /** + * The various kinds of bounds. + */ + enum Kind { + UPPER, LOWER + } + } + + /** + * Determine if the given objects are equal, returning {@code true} if both are {@code null} or {@code false} if only one is + * {@code null}. + *

+ * Compares arrays with {@code Arrays.equals}, performing an equality check based on the array elements rather than the + * array reference. + * + * @param o1 first Object to compare + * @param o2 second Object to compare + * @return whether the given objects are equal + * @see java.util.Arrays#equals + */ + private static boolean nullSafeEquals(Object o1, Object o2) { + if (o1 == o2) { + return true; + } + if (o1 == null || o2 == null) { + return false; + } + if (o1.equals(o2)) { + return true; + } + if (o1.getClass().isArray() && o2.getClass().isArray()) { + if (o1 instanceof Object[] && o2 instanceof Object[]) { + return Arrays.equals((Object[]) o1, (Object[]) o2); + } + if (o1 instanceof boolean[] && o2 instanceof boolean[]) { + return Arrays.equals((boolean[]) o1, (boolean[]) o2); + } + if (o1 instanceof byte[] && o2 instanceof byte[]) { + return Arrays.equals((byte[]) o1, (byte[]) o2); + } + if (o1 instanceof char[] && o2 instanceof char[]) { + return Arrays.equals((char[]) o1, (char[]) o2); + } + if (o1 instanceof double[] && o2 instanceof double[]) { + return Arrays.equals((double[]) o1, (double[]) o2); + } + if (o1 instanceof float[] && o2 instanceof float[]) { + return Arrays.equals((float[]) o1, (float[]) o2); + } + if (o1 instanceof int[] && o2 instanceof int[]) { + return Arrays.equals((int[]) o1, (int[]) o2); + } + if (o1 instanceof long[] && o2 instanceof long[]) { + return Arrays.equals((long[]) o1, (long[]) o2); + } + if (o1 instanceof short[] && o2 instanceof short[]) { + return Arrays.equals((short[]) o1, (short[]) o2); + } + } + return false; + } + +} diff --git a/src/main/java/io/lettuce/core/dynamic/support/StandardReflectionParameterNameDiscoverer.java b/src/main/java/io/lettuce/core/dynamic/support/StandardReflectionParameterNameDiscoverer.java new file mode 100644 index 0000000000..359a4cade1 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/support/StandardReflectionParameterNameDiscoverer.java @@ -0,0 +1,58 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.support; + +import java.lang.reflect.Constructor; +import java.lang.reflect.Method; +import java.lang.reflect.Parameter; + +/** + * {@link ParameterNameDiscoverer} implementation which uses JDK 8's reflection facilities for introspecting parameter names + * (based on the "-parameters" compiler flag). 
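+ *
+ * A minimal usage sketch, assuming compilation with {@code -parameters} and a hypothetical
+ * {@code KeyValueCommands} interface declaring {@code set(String key, String value)}:
+ *
+ * <pre>
+ * Method method = KeyValueCommands.class.getMethod("set", String.class, String.class);
+ * String[] names = new StandardReflectionParameterNameDiscoverer().getParameterNames(method);
+ * // ["key", "value"] if parameter names were retained, null otherwise
+ * </pre>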
+ * + * @see java.lang.reflect.Parameter#getName() + */ +public class StandardReflectionParameterNameDiscoverer implements ParameterNameDiscoverer { + + @Override + public String[] getParameterNames(Method method) { + Parameter[] parameters = method.getParameters(); + String[] parameterNames = new String[parameters.length]; + for (int i = 0; i < parameters.length; i++) { + Parameter param = parameters[i]; + if (!param.isNamePresent()) { + return null; + } + parameterNames[i] = param.getName(); + } + return parameterNames; + } + + @Override + public String[] getParameterNames(Constructor ctor) { + Parameter[] parameters = ctor.getParameters(); + String[] parameterNames = new String[parameters.length]; + for (int i = 0; i < parameters.length; i++) { + Parameter param = parameters[i]; + if (!param.isNamePresent()) { + return null; + } + parameterNames[i] = param.getName(); + } + return parameterNames; + } + +} diff --git a/src/main/java/io/lettuce/core/dynamic/support/TypeDiscoverer.java b/src/main/java/io/lettuce/core/dynamic/support/TypeDiscoverer.java new file mode 100644 index 0000000000..efa0616de5 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/support/TypeDiscoverer.java @@ -0,0 +1,389 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.support; + +import java.lang.reflect.*; +import java.util.*; + +import io.lettuce.core.internal.LettuceAssert; + +/** + * Basic {@link TypeDiscoverer} that contains basic functionality to discover property types. + */ +class TypeDiscoverer implements TypeInformation { + + private final Type type; + private final Map, Type> typeVariableMap; + private final int hashCode; + + private boolean componentTypeResolved = false; + private TypeInformation componentType; + + private boolean valueTypeResolved = false; + private TypeInformation valueType; + + private Class resolvedType; + + /** + * Creates a new {@link TypeDiscoverer} for the given type and type variable map. + * + * @param type must not be {@literal null}. + * @param typeVariableMap must not be {@literal null}. + */ + protected TypeDiscoverer(Type type, Map, Type> typeVariableMap) { + + LettuceAssert.notNull(type, "Type must not be null"); + LettuceAssert.notNull(typeVariableMap, "TypeVariableMap must not be null"); + + this.type = type; + this.typeVariableMap = typeVariableMap; + this.hashCode = 17 + (31 * type.hashCode()) + (31 * typeVariableMap.hashCode()); + } + + /** + * Returns the type variable map. + * + * @return + */ + public Map, Type> getTypeVariableMap() { + return typeVariableMap; + } + + /** + * Creates {@link TypeInformation} for the given {@link Type}. 
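+ *
+ * For context, a minimal sketch of how the resulting {@link TypeInformation} is typically consumed, assuming a
+ * hypothetical {@code StringToLongMap} interface that extends {@code Map} with {@code String} keys and {@code Long}
+ * values:
+ *
+ * <pre>
+ * TypeInformation type = ClassTypeInformation.from(StringToLongMap.class);
+ * type.isMap();                       // true
+ * type.getComponentType().getType();  // String.class (the key type)
+ * type.getMapValueType().getType();   // Long.class
+ * </pre>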
+ * + * @param fieldType + * @return + */ + @SuppressWarnings({ "rawtypes", "unchecked", "deprecation" }) + protected TypeInformation createInfo(Type fieldType) { + + if (fieldType.equals(this.type)) { + return this; + } + + if (fieldType instanceof Class) { + return new ClassTypeInformation((Class) fieldType); + } + + Class resolveType = resolveClass(fieldType); + Map variableMap = new HashMap(); + variableMap.putAll(GenericTypeResolver.getTypeVariableMap(resolveType)); + + if (fieldType instanceof ParameterizedType) { + + ParameterizedType parameterizedType = (ParameterizedType) fieldType; + + TypeVariable>[] typeParameters = resolveType.getTypeParameters(); + Type[] arguments = parameterizedType.getActualTypeArguments(); + + for (int i = 0; i < typeParameters.length; i++) { + variableMap.put(typeParameters[i], arguments[i]); + } + + return new ParametrizedTypeInformation(parameterizedType, this, variableMap); + } + + if (fieldType instanceof TypeVariable) { + TypeVariable variable = (TypeVariable) fieldType; + return new TypeVariableTypeInformation(variable, type, this, variableMap); + } + + if (fieldType instanceof GenericArrayType) { + return new GenericArrayTypeInformation((GenericArrayType) fieldType, this, variableMap); + } + + if (fieldType instanceof WildcardType) { + + WildcardType wildcardType = (WildcardType) fieldType; + return new WildcardTypeInformation(wildcardType, variableMap); + } + + throw new IllegalArgumentException(); + } + + /** + * Resolves the given type into a plain {@link Class}. + * + * @param type + * @return + */ + @SuppressWarnings({ "unchecked", "rawtypes" }) + protected Class resolveClass(Type type) { + + Map map = new HashMap(); + map.putAll(getTypeVariableMap()); + + return (Class) ResolvableType.forType(type, new TypeVariableMapVariableResolver(map)).resolve(Object.class); + } + + /** + * Resolves the given type into a {@link Type}. 
+ * + * @param type + * @return + */ + @SuppressWarnings({ "unchecked", "rawtypes" }) + protected Type resolveType(Type type) { + + Map map = new HashMap<>(); + map.putAll(getTypeVariableMap()); + + return ResolvableType.forType(type, new TypeVariableMapVariableResolver(map)).getType(); + } + + public List> getParameterTypes(Constructor constructor) { + + LettuceAssert.notNull(constructor, "Constructor must not be null!"); + + Type[] types = constructor.getGenericParameterTypes(); + List> result = new ArrayList>(types.length); + + for (Type parameterType : types) { + result.add(createInfo(parameterType)); + } + + return result; + } + + public Class getType() { + + if (resolvedType == null) { + this.resolvedType = resolveClass(type); + } + + return this.resolvedType; + } + + @Override + public Type getGenericType() { + return resolveType(type); + } + + @Override + public ClassTypeInformation getRawTypeInformation() { + return ClassTypeInformation.from(getType()).getRawTypeInformation(); + } + + public TypeInformation getActualType() { + + if (isMap()) { + return getMapValueType(); + } + + if (isCollectionLike()) { + return getComponentType(); + } + + return this; + } + + public boolean isMap() { + return Map.class.isAssignableFrom(getType()); + } + + public TypeInformation getMapValueType() { + + if (!valueTypeResolved) { + this.valueType = doGetMapValueType(); + this.valueTypeResolved = true; + } + + return this.valueType; + } + + protected TypeInformation doGetMapValueType() { + + if (isMap()) { + return getTypeArgument(Map.class, 1); + } + + List> arguments = getTypeArguments(); + + if (arguments.size() > 1) { + return arguments.get(1); + } + + return null; + } + + public boolean isCollectionLike() { + + Class rawType = getType(); + + if (rawType.isArray() || Iterable.class.equals(rawType)) { + return true; + } + + return Collection.class.isAssignableFrom(rawType); + } + + public final TypeInformation getComponentType() { + + if (!componentTypeResolved) { + this.componentType = doGetComponentType(); + this.componentTypeResolved = true; + } + + return this.componentType; + } + + protected TypeInformation doGetComponentType() { + + Class rawType = getType(); + + if (rawType.isArray()) { + return createInfo(rawType.getComponentType()); + } + + if (isMap()) { + return getTypeArgument(Map.class, 0); + } + + if (Iterable.class.isAssignableFrom(rawType)) { + return getTypeArgument(Iterable.class, 0); + } + + List> arguments = getTypeArguments(); + + if (arguments.size() > 0) { + return arguments.get(0); + } + + return null; + } + + public TypeInformation getReturnType(Method method) { + + return createInfo(method.getGenericReturnType()); + } + + public List> getParameterTypes(Method method) { + + LettuceAssert.notNull(method, "Method most not be null!"); + + Type[] types = method.getGenericParameterTypes(); + List> result = new ArrayList>(types.length); + + for (Type parameterType : types) { + result.add(createInfo(parameterType)); + } + + return result; + } + + public TypeInformation getSuperTypeInformation(Class superType) { + + Class rawType = getType(); + + if (!superType.isAssignableFrom(rawType)) { + return null; + } + + if (getType().equals(superType)) { + return this; + } + + List candidates = new ArrayList(); + + Type genericSuperclass = rawType.getGenericSuperclass(); + if (genericSuperclass != null) { + candidates.add(genericSuperclass); + } + candidates.addAll(Arrays.asList(rawType.getGenericInterfaces())); + + for (Type candidate : candidates) { + + TypeInformation candidateInfo = 
createInfo(candidate); + + if (superType.equals(candidateInfo.getType())) { + return candidateInfo; + } else { + TypeInformation nestedSuperType = candidateInfo.getSuperTypeInformation(superType); + if (nestedSuperType != null) { + return nestedSuperType; + } + } + } + + return null; + } + + public List> getTypeArguments() { + return Collections.emptyList(); + } + + public boolean isAssignableFrom(TypeInformation target) { + return target.getSuperTypeInformation(getType()).equals(this); + } + + public TypeInformation getTypeArgument(Class bound, int index) { + + Class[] arguments = GenericTypeResolver.resolveTypeArguments(getType(), bound); + + if (arguments == null) { + return getSuperTypeInformation(bound) instanceof ParametrizedTypeInformation ? ClassTypeInformation.OBJECT : null; + } + + return createInfo(arguments[index]); + } + + @Override + public boolean equals(Object obj) { + + if (obj == this) { + return true; + } + + if (obj == null) { + return false; + } + + if (!this.getClass().equals(obj.getClass())) { + return false; + } + + TypeDiscoverer that = (TypeDiscoverer) obj; + + return this.type.equals(that.type) && this.typeVariableMap.equals(that.typeVariableMap); + } + + @Override + public int hashCode() { + return hashCode; + } + + @SuppressWarnings({ "serial", "rawtypes" }) + private static class TypeVariableMapVariableResolver implements ResolvableType.VariableResolver { + + private final Map typeVariableMap; + + public TypeVariableMapVariableResolver(Map typeVariableMap) { + this.typeVariableMap = typeVariableMap; + } + + @Override + public ResolvableType resolveVariable(TypeVariable variable) { + Type type = this.typeVariableMap.get(variable); + return (type != null ? ResolvableType.forType(type) : null); + } + + @Override + public Object getSource() { + return this.typeVariableMap; + } + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/support/TypeInformation.java b/src/main/java/io/lettuce/core/dynamic/support/TypeInformation.java new file mode 100644 index 0000000000..6bb1d531ad --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/support/TypeInformation.java @@ -0,0 +1,137 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.support; + +import java.lang.reflect.Constructor; +import java.lang.reflect.Method; +import java.lang.reflect.Type; +import java.lang.reflect.TypeVariable; +import java.util.List; +import java.util.Map; + +/** + * Interface to access types and resolving generics on the way. + */ +public interface TypeInformation { + + Type getGenericType(); + + /** + * Returns the {@link TypeInformation}s for the parameters of the given {@link Constructor}. + * + * @param constructor must not be {@literal null}. + * @return + */ + List> getParameterTypes(Constructor constructor); + + /** + * Returns whether the type can be considered a collection, which means it's a container of elements, e.g. 
a + * {@link java.util.Collection} and {@link java.lang.reflect.Array} or anything implementing {@link Iterable}. If this + * returns {@literal true} you can expect {@link #getComponentType()} to return a non-{@literal null} value. + * + * @return + */ + boolean isCollectionLike(); + + /** + * Returns the component type for {@link java.util.Collection}s or the key type for {@link java.util.Map}s. + * + * @return + */ + TypeInformation getComponentType(); + + /** + * Returns whether the property is a {@link java.util.Map}. If this returns {@literal true} you can expect + * {@link #getComponentType()} as well as {@link #getMapValueType()} to return something not {@literal null}. + * + * @return + */ + boolean isMap(); + + /** + * Will return the type of the value in case the underlying type is a {@link java.util.Map}. + * + * @return + */ + TypeInformation getMapValueType(); + + /** + * Returns the type of the property. Will resolve generics and the generic context of + * + * @return + */ + Class getType(); + + /** + * Returns a {@link ClassTypeInformation} to represent the {@link TypeInformation} of the raw type of the current instance. + * + * @return + */ + ClassTypeInformation getRawTypeInformation(); + + /** + * Transparently returns the {@link java.util.Map} value type if the type is a {@link java.util.Map}, returns the component + * type if the type {@link #isCollectionLike()} or the simple type if none of this applies. + * + * @return + */ + TypeInformation getActualType(); + + /** + * Returns a {@link TypeInformation} for the return type of the given {@link Method}. Will potentially resolve generics + * information against the current types type parameter bindings. + * + * @param method must not be {@literal null}. + * @return + */ + TypeInformation getReturnType(Method method); + + /** + * Returns the {@link TypeInformation}s for the parameters of the given {@link Method}. + * + * @param method must not be {@literal null}. + * @return + */ + List> getParameterTypes(Method method); + + /** + * Returns the {@link TypeInformation} for the given raw super type. + * + * @param superType must not be {@literal null}. + * @return the {@link TypeInformation} for the given raw super type or {@literal null} in case the current + * {@link TypeInformation} does not implement the given type. + */ + TypeInformation getSuperTypeInformation(Class superType); + + /** + * Returns if the current {@link TypeInformation} can be safely assigned to the given one. Mimics semantics of + * {@link Class#isAssignableFrom(Class)} but takes generics into account. Thus it will allow to detect that a + * {@code List} is assignable to {@code List}. + * + * @param target + * @return + */ + boolean isAssignableFrom(TypeInformation target); + + /** + * Returns the {@link TypeInformation} for the type arguments of the current {@link TypeInformation}. + * + * @return + */ + List> getTypeArguments(); + + Map, Type> getTypeVariableMap(); +} diff --git a/src/main/java/io/lettuce/core/dynamic/support/TypeVariableTypeInformation.java b/src/main/java/io/lettuce/core/dynamic/support/TypeVariableTypeInformation.java new file mode 100644 index 0000000000..47873dc70f --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/support/TypeVariableTypeInformation.java @@ -0,0 +1,135 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.support; + +import java.lang.reflect.ParameterizedType; +import java.lang.reflect.Type; +import java.lang.reflect.TypeVariable; +import java.util.ArrayList; +import java.util.List; +import java.util.Map; +import java.util.Objects; + +import io.lettuce.core.internal.LettuceAssert; + +/** + * Special {@link TypeDiscoverer} to determine the actual type for a {@link TypeVariable}. Will consider the context the + * {@link TypeVariable} is being used in. + */ +public class TypeVariableTypeInformation extends ParentTypeAwareTypeInformation { + + private final TypeVariable variable; + private final Type owningType; + + /** + * Creates a bew {@link TypeVariableTypeInformation} for the given {@link TypeVariable} owning {@link Type} and parent + * {@link TypeDiscoverer}. + * + * @param variable must not be {@literal null} + * @param owningType must not be {@literal null} + * @param parent can be be {@literal null} + * @param typeVariableMap must not be {@literal null} + */ + public TypeVariableTypeInformation(TypeVariable variable, Type owningType, TypeDiscoverer parent, + Map, Type> typeVariableMap) { + + super(variable, parent, typeVariableMap); + + LettuceAssert.notNull(variable, "TypeVariable must not be null"); + + this.variable = variable; + this.owningType = owningType; + } + + @Override + public Class getType() { + + int index = getIndex(variable); + + if (owningType instanceof ParameterizedType && index != -1) { + Type fieldType = ((ParameterizedType) owningType).getActualTypeArguments()[index]; + return resolveClass(fieldType); + } + + return resolveClass(variable); + } + + /** + * Returns the index of the type parameter binding the given {@link TypeVariable}. 
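+ *
+ * For illustration, assuming a hypothetical {@code Holder} interface with a single type variable and a
+ * {@code StringHolder} implementation binding it to {@code String}, the variable resolves as follows:
+ *
+ * <pre>
+ * ClassTypeInformation.from(StringHolder.class)
+ *         .getSuperTypeInformation(Holder.class)
+ *         .getTypeArguments()
+ *         .get(0).getType(); // String.class
+ * </pre>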
+ * + * @param variable + * @return + */ + private int getIndex(TypeVariable variable) { + + Class rawType = resolveClass(owningType); + TypeVariable[] typeParameters = rawType.getTypeParameters(); + + for (int i = 0; i < typeParameters.length; i++) { + if (variable.equals(typeParameters[i])) { + return i; + } + } + + return -1; + } + + @Override + public List> getTypeArguments() { + + List> result = new ArrayList<>(); + + Type type = resolveType(variable); + if (type instanceof ParameterizedType) { + + for (Type typeArgument : ((ParameterizedType) type).getActualTypeArguments()) { + result.add(createInfo(typeArgument)); + } + } + + return result; + } + + @Override + public boolean equals(Object obj) { + + if (obj == this) { + return true; + } + + if (!(obj instanceof TypeVariableTypeInformation)) { + return false; + } + + TypeVariableTypeInformation that = (TypeVariableTypeInformation) obj; + + return getType().equals(that.getType()); + } + + @Override + public int hashCode() { + return Objects.hash(super.hashCode(), variable, owningType); + } + + @Override + public String toString() { + return variable.getName(); + } + + public String getVariableName() { + return variable.getName(); + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/support/TypeWrapper.java b/src/main/java/io/lettuce/core/dynamic/support/TypeWrapper.java new file mode 100644 index 0000000000..814c8bb9e9 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/support/TypeWrapper.java @@ -0,0 +1,364 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.support; + +import java.io.IOException; +import java.io.ObjectInputStream; +import java.io.Serializable; +import java.lang.reflect.*; + +import io.lettuce.core.internal.LettuceAssert; + +/** + * Internal utility class that can be used to obtain wrapped {@link Serializable} variants of {@link java.lang.reflect.Type}s. + * + *

+ * {@link #forField(Field) Fields} or {@link #forMethodParameter(MethodParameter) MethodParameters} can be used as the root + * source for a serializable type. Alternatively the {@link #forGenericSuperclass(Class) superclass}, + * {@link #forGenericInterfaces(Class) interfaces} or {@link #forTypeParameters(Class) type parameters} or a regular + * {@link Class} can also be used as source. + * + *

+ * The returned type will either be a {@link Class} or a serializable proxy of {@link GenericArrayType}, + * {@link ParameterizedType}, {@link TypeVariable} or {@link WildcardType}. With the exception of {@link Class} (which is final) + * calls to methods that return further {@link Type}s (for example {@link GenericArrayType#getGenericComponentType()}) will be + * automatically wrapped. + * + */ +abstract class TypeWrapper { + + private static final Class[] SUPPORTED_SERIALIZABLE_TYPES = { GenericArrayType.class, ParameterizedType.class, + TypeVariable.class, WildcardType.class }; + + /** + * Return a {@link Serializable} variant of {@link Field#getGenericType()}. + */ + public static Type forField(Field field) { + LettuceAssert.notNull(field, "Field must not be null"); + return forTypeProvider(new FieldTypeProvider(field)); + } + + /** + * Return a {@link Serializable} variant of {@link MethodParameter#getGenericParameterType()}. + */ + public static Type forMethodParameter(MethodParameter methodParameter) { + return forTypeProvider(new MethodParameterTypeProvider(methodParameter)); + } + + /** + * Return a {@link Serializable} variant of {@link Class#getGenericSuperclass()}. + */ + @SuppressWarnings("serial") + public static Type forGenericSuperclass(final Class type) { + return forTypeProvider(new DefaultTypeProvider() { + @Override + public Type getType() { + return type.getGenericSuperclass(); + } + }); + } + + /** + * Return a {@link Serializable} variant of {@link Class#getGenericInterfaces()}. + */ + @SuppressWarnings("serial") + public static Type[] forGenericInterfaces(final Class type) { + Type[] result = new Type[type.getGenericInterfaces().length]; + for (int i = 0; i < result.length; i++) { + final int index = i; + result[i] = forTypeProvider(new DefaultTypeProvider() { + @Override + public Type getType() { + return type.getGenericInterfaces()[index]; + } + }); + } + return result; + } + + /** + * Return a {@link Serializable} variant of {@link Class#getTypeParameters()}. + */ + @SuppressWarnings("serial") + public static Type[] forTypeParameters(final Class type) { + Type[] result = new Type[type.getTypeParameters().length]; + for (int i = 0; i < result.length; i++) { + final int index = i; + result[i] = forTypeProvider(new DefaultTypeProvider() { + @Override + public Type getType() { + return type.getTypeParameters()[index]; + } + }); + } + return result; + } + + /** + * Unwrap the given type, effectively returning the original non-serializable type. + * + * @param type the type to unwrap + * @return the original non-serializable type + */ + @SuppressWarnings("unchecked") + public static T unwrap(T type) { + Type unwrapped = type; + while (unwrapped instanceof SerializableTypeProxy) { + unwrapped = ((SerializableTypeProxy) type).getTypeProvider().getType(); + } + return (T) unwrapped; + } + + /** + * Return a {@link Serializable} {@link Type} backed by a {@link TypeProvider} . 
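+ *
+ * For context, a minimal sketch of the wrapping entry points above ({@code Holder} and its generic field
+ * {@code numbers} are hypothetical):
+ *
+ * <pre>
+ * Type wrapped = TypeWrapper.forField(Holder.class.getDeclaredField("numbers"));
+ * boolean serializable = wrapped instanceof Serializable; // true, even for a ParameterizedType
+ * Type original = TypeWrapper.unwrap(wrapped);            // the original, non-serializable JDK type
+ * </pre>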
+ */ + static Type forTypeProvider(final TypeProvider provider) { + LettuceAssert.notNull(provider, "Provider must not be null"); + if (provider.getType() instanceof Serializable || provider.getType() == null) { + return provider.getType(); + } + + for (Class type : SUPPORTED_SERIALIZABLE_TYPES) { + if (type.isAssignableFrom(provider.getType().getClass())) { + ClassLoader classLoader = provider.getClass().getClassLoader(); + Class[] interfaces = new Class[] { type, SerializableTypeProxy.class, Serializable.class }; + InvocationHandler handler = new TypeProxyInvocationHandler(provider); + return (Type) Proxy.newProxyInstance(classLoader, interfaces, handler); + } + } + throw new IllegalArgumentException("Unsupported Type class: " + provider.getType().getClass().getName()); + } + + /** + * Additional interface implemented by the type proxy. + */ + interface SerializableTypeProxy { + + /** + * Return the underlying type provider. + */ + TypeProvider getTypeProvider(); + } + + /** + * A {@link Serializable} interface providing access to a {@link Type}. + */ + interface TypeProvider extends Serializable { + + /** + * Return the (possibly non {@link Serializable}) {@link Type}. + */ + Type getType(); + + /** + * Return the source of the type or {@code null}. + */ + Object getSource(); + } + + /** + * Default implementation of {@link TypeProvider} with a {@code null} source. + */ + @SuppressWarnings("serial") + private static abstract class DefaultTypeProvider implements TypeProvider { + + @Override + public Object getSource() { + return null; + } + } + + /** + * {@link Serializable} {@link InvocationHandler} used by the proxied {@link Type}. Provides serialization support and + * enhances any methods that return {@code Type} or {@code Type[]}. + */ + @SuppressWarnings("serial") + private static class TypeProxyInvocationHandler implements InvocationHandler, Serializable { + + private final TypeProvider provider; + + public TypeProxyInvocationHandler(TypeProvider provider) { + this.provider = provider; + } + + @Override + public Object invoke(Object proxy, Method method, Object[] args) throws Throwable { + if (method.getName().equals("equals")) { + Object other = args[0]; + // Unwrap proxies for speed + if (other instanceof Type) { + other = unwrap((Type) other); + } + return this.provider.getType().equals(other); + } else if (method.getName().equals("hashCode")) { + return this.provider.getType().hashCode(); + } else if (method.getName().equals("getTypeProvider")) { + return this.provider; + } + + if (Type.class == method.getReturnType()) { + return forTypeProvider(new MethodInvokeTypeProvider(this.provider, method, -1)); + } else if (Type[].class == method.getReturnType()) { + Type[] result = new Type[((Type[]) method.invoke(this.provider.getType(), args)).length]; + for (int i = 0; i < result.length; i++) { + result[i] = forTypeProvider(new MethodInvokeTypeProvider(this.provider, method, i)); + } + return result; + } + + try { + return method.invoke(this.provider.getType(), args); + } catch (InvocationTargetException ex) { + throw ex.getTargetException(); + } + } + } + + /** + * {@link TypeProvider} for {@link Type}s obtained from a {@link Field}. 
+ */ + @SuppressWarnings("serial") + static class FieldTypeProvider implements TypeProvider { + + private final String fieldName; + + private final Class declaringClass; + + private transient Field field; + + public FieldTypeProvider(Field field) { + this.fieldName = field.getName(); + this.declaringClass = field.getDeclaringClass(); + this.field = field; + } + + @Override + public Type getType() { + return this.field.getGenericType(); + } + + @Override + public Object getSource() { + return this.field; + } + + private void readObject(ObjectInputStream inputStream) throws IOException, ClassNotFoundException { + inputStream.defaultReadObject(); + try { + this.field = this.declaringClass.getDeclaredField(this.fieldName); + } catch (Throwable ex) { + throw new IllegalStateException("Could not find original class structure", ex); + } + } + } + + /** + * {@link TypeProvider} for {@link Type}s obtained from a {@link MethodParameter}. + */ + @SuppressWarnings("serial") + static class MethodParameterTypeProvider implements TypeProvider { + + private final String methodName; + + private final Class[] parameterTypes; + + private final Class declaringClass; + + private final int parameterIndex; + + private transient MethodParameter methodParameter; + + public MethodParameterTypeProvider(MethodParameter methodParameter) { + if (methodParameter.getMethod() != null) { + this.methodName = methodParameter.getMethod().getName(); + this.parameterTypes = methodParameter.getMethod().getParameterTypes(); + } else { + this.methodName = null; + this.parameterTypes = methodParameter.getConstructor().getParameterTypes(); + } + this.declaringClass = methodParameter.getDeclaringClass(); + this.parameterIndex = methodParameter.getParameterIndex(); + this.methodParameter = methodParameter; + } + + @Override + public Type getType() { + return this.methodParameter.getGenericParameterType(); + } + + @Override + public Object getSource() { + return this.methodParameter; + } + + private void readObject(ObjectInputStream inputStream) throws IOException, ClassNotFoundException { + inputStream.defaultReadObject(); + try { + if (this.methodName != null) { + this.methodParameter = new MethodParameter( + this.declaringClass.getDeclaredMethod(this.methodName, this.parameterTypes), this.parameterIndex); + } else { + this.methodParameter = new MethodParameter(this.declaringClass.getDeclaredConstructor(this.parameterTypes), + this.parameterIndex); + } + } catch (Throwable ex) { + throw new IllegalStateException("Could not find original class structure", ex); + } + } + } + + /** + * {@link TypeProvider} for {@link Type}s obtained by invoking a no-arg method. + */ + @SuppressWarnings("serial") + static class MethodInvokeTypeProvider implements TypeProvider { + + private final TypeProvider provider; + + private final String methodName; + + private final int index; + + private transient Method method; + + private transient volatile Object result; + + public MethodInvokeTypeProvider(TypeProvider provider, Method method, int index) { + this.provider = provider; + this.methodName = method.getName(); + this.index = index; + this.method = method; + } + + @Override + public Type getType() { + Object result = this.result; + if (result == null) { + // Lazy invocation of the target method on the provided type + result = ReflectionUtils.invokeMethod(this.method, this.provider.getType()); + // Cache the result for further calls to getType() + this.result = result; + } + return (result instanceof Type[] ? 
((Type[]) result)[this.index] : (Type) result); + } + + @Override + public Object getSource() { + return null; + } + } + +} diff --git a/src/main/java/io/lettuce/core/dynamic/support/WildcardTypeInformation.java b/src/main/java/io/lettuce/core/dynamic/support/WildcardTypeInformation.java new file mode 100644 index 0000000000..2d88a59b1c --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/support/WildcardTypeInformation.java @@ -0,0 +1,79 @@ +/* + * Copyright 2016-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.support; + +import java.lang.reflect.Type; +import java.lang.reflect.TypeVariable; +import java.lang.reflect.WildcardType; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.List; +import java.util.Map; + +/** + * {@link TypeInformation} for a {@link WildcardType}. + */ +class WildcardTypeInformation extends TypeDiscoverer { + + private final WildcardType type; + + /** + * Creates a new {@link WildcardTypeInformation} for the given type, type variable map. + * + * @param type must not be {@literal null}. + * @param typeVariableMap must not be {@literal null}. + */ + protected WildcardTypeInformation(WildcardType type, Map, Type> typeVariableMap) { + + super(type, typeVariableMap); + this.type = type; + } + + @Override + public boolean isAssignableFrom(TypeInformation target) { + + for (TypeInformation lowerBound : getLowerBounds()) { + if (!target.isAssignableFrom(lowerBound)) { + return false; + } + } + + for (TypeInformation upperBound : getUpperBounds()) { + if (!upperBound.isAssignableFrom(target)) { + return false; + } + } + + return true; + } + + public List> getUpperBounds() { + return getBounds(type.getUpperBounds()); + } + + public List> getLowerBounds() { + return getBounds(type.getLowerBounds()); + } + + private List> getBounds(Type[] bounds) { + + List> typeInformations = new ArrayList<>(bounds.length); + + Arrays.stream(bounds).map(this::createInfo).forEach(typeInformations::add); + + return typeInformations; + } +} diff --git a/src/main/java/io/lettuce/core/dynamic/support/package-info.java b/src/main/java/io/lettuce/core/dynamic/support/package-info.java new file mode 100644 index 0000000000..c393cb8389 --- /dev/null +++ b/src/main/java/io/lettuce/core/dynamic/support/package-info.java @@ -0,0 +1,4 @@ +/** + * Support classes imported from the Spring Framework. + */ +package io.lettuce.core.dynamic.support; diff --git a/src/main/java/io/lettuce/core/event/DefaultEventBus.java b/src/main/java/io/lettuce/core/event/DefaultEventBus.java new file mode 100644 index 0000000000..82f7cb25bc --- /dev/null +++ b/src/main/java/io/lettuce/core/event/DefaultEventBus.java @@ -0,0 +1,50 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.event; + +import reactor.core.publisher.DirectProcessor; +import reactor.core.publisher.Flux; +import reactor.core.publisher.FluxSink; +import reactor.core.scheduler.Scheduler; + +/** + * Default implementation for an {@link EventBus}. Events are published using a {@link Scheduler}. + * + * @author Mark Paluch + * @since 3.4 + */ +public class DefaultEventBus implements EventBus { + + private final DirectProcessor bus; + private final FluxSink sink; + private final Scheduler scheduler; + + public DefaultEventBus(Scheduler scheduler) { + this.bus = DirectProcessor.create(); + this.sink = bus.sink(); + this.scheduler = scheduler; + } + + @Override + public Flux get() { + return bus.onBackpressureDrop().publishOn(scheduler); + } + + @Override + public void publish(Event event) { + sink.next(event); + } +} diff --git a/src/main/java/io/lettuce/core/event/DefaultEventPublisherOptions.java b/src/main/java/io/lettuce/core/event/DefaultEventPublisherOptions.java new file mode 100644 index 0000000000..6e8a3e3818 --- /dev/null +++ b/src/main/java/io/lettuce/core/event/DefaultEventPublisherOptions.java @@ -0,0 +1,130 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.event; + +import java.time.Duration; +import java.util.concurrent.TimeUnit; + +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.metrics.CommandLatencyCollectorOptions; + +/** + * The default implementation of {@link CommandLatencyCollectorOptions}. + * + * @author Mark Paluch + */ +public class DefaultEventPublisherOptions implements EventPublisherOptions { + + public static final long DEFAULT_EMIT_INTERVAL = 10; + public static final TimeUnit DEFAULT_EMIT_INTERVAL_UNIT = TimeUnit.MINUTES; + public static final Duration DEFAULT_EMIT_INTERVAL_DURATION = Duration.ofMinutes(DEFAULT_EMIT_INTERVAL); + + private static final DefaultEventPublisherOptions DISABLED = new Builder().eventEmitInterval(Duration.ZERO).build(); + + private final Duration eventEmitInterval; + + private DefaultEventPublisherOptions(Builder builder) { + this.eventEmitInterval = builder.eventEmitInterval; + } + + /** + * Returns a new {@link DefaultEventPublisherOptions.Builder} to construct {@link DefaultEventPublisherOptions}. + * + * @return a new {@link DefaultEventPublisherOptions.Builder} to construct {@link DefaultEventPublisherOptions}. + */ + public static Builder builder() { + return new Builder(); + } + + /** + * Builder for {@link DefaultEventPublisherOptions}. 
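+ *
+ * A short usage sketch:
+ *
+ * <pre>
+ * EventPublisherOptions options = DefaultEventPublisherOptions.builder()
+ *         .eventEmitInterval(Duration.ofMinutes(5))
+ *         .build();
+ * </pre>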
+ */ + public static class Builder { + + private Duration eventEmitInterval = DEFAULT_EMIT_INTERVAL_DURATION; + + private Builder() { + } + + /** + * Sets the emit interval and the interval unit. Event emission will be disabled if the {@code eventEmitInterval} is set + * to 0}. Defaults to 10} {@link TimeUnit#MINUTES}. See {@link DefaultEventPublisherOptions#DEFAULT_EMIT_INTERVAL} + * {@link DefaultEventPublisherOptions#DEFAULT_EMIT_INTERVAL_UNIT}. + * + * @param eventEmitInterval the event interval, must be greater or equal to 0} + * @return this + * @since 5.0 + */ + public Builder eventEmitInterval(Duration eventEmitInterval) { + + LettuceAssert.notNull(eventEmitInterval, "EventEmitInterval must not be null"); + LettuceAssert.isTrue(!eventEmitInterval.isNegative(), "EventEmitInterval must be greater or equal to 0"); + + this.eventEmitInterval = eventEmitInterval; + return this; + } + + /** + * Sets the emit interval and the interval unit. Event emission will be disabled if the {@code eventEmitInterval} is set + * to 0}. Defaults to 10} {@link TimeUnit#MINUTES}. See {@link DefaultEventPublisherOptions#DEFAULT_EMIT_INTERVAL} + * {@link DefaultEventPublisherOptions#DEFAULT_EMIT_INTERVAL_UNIT}. + * + * @param eventEmitInterval the event interval, must be greater or equal to 0} + * @param eventEmitIntervalUnit the {@link TimeUnit} for the interval, must not be null + * @return this + * @deprecated since 5.0, use {@link #eventEmitInterval(Duration)} + */ + @Deprecated + public Builder eventEmitInterval(long eventEmitInterval, TimeUnit eventEmitIntervalUnit) { + + LettuceAssert.isTrue(eventEmitInterval >= 0, "EventEmitInterval must be greater or equal to 0"); + LettuceAssert.notNull(eventEmitIntervalUnit, "EventEmitIntervalUnit must not be null"); + + return eventEmitInterval(Duration.ofNanos(eventEmitIntervalUnit.toNanos(eventEmitInterval))); + } + + /** + * + * @return a new instance of {@link DefaultEventPublisherOptions}. + */ + public DefaultEventPublisherOptions build() { + return new DefaultEventPublisherOptions(this); + } + } + + @Override + public Duration eventEmitInterval() { + return eventEmitInterval; + } + + /** + * Create a new {@link DefaultEventPublisherOptions} using default settings. + * + * @return a new instance of a default {@link DefaultEventPublisherOptions} instance + */ + public static DefaultEventPublisherOptions create() { + return new Builder().build(); + } + + /** + * Create a disabled {@link DefaultEventPublisherOptions} using default settings. + * + * @return a new instance of a default {@link DefaultEventPublisherOptions} instance with disabled event emission + */ + public static DefaultEventPublisherOptions disabled() { + return DISABLED; + } +} diff --git a/src/main/java/io/lettuce/core/event/Event.java b/src/main/java/io/lettuce/core/event/Event.java new file mode 100644 index 0000000000..f4d638f82d --- /dev/null +++ b/src/main/java/io/lettuce/core/event/Event.java @@ -0,0 +1,26 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.event; + +/** + * + * Marker-interface for events that are published over the event bus. + * + * @author Mark Paluch + * @since 3.4 + */ +public interface Event { +} diff --git a/src/main/java/io/lettuce/core/event/EventBus.java b/src/main/java/io/lettuce/core/event/EventBus.java new file mode 100644 index 0000000000..7508d266d2 --- /dev/null +++ b/src/main/java/io/lettuce/core/event/EventBus.java @@ -0,0 +1,41 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.event; + +import reactor.core.publisher.Flux; + +/** + * Interface for an EventBus. Events can be published over the bus that are delivered to the subscribers. + * + * @author Mark Paluch + * @since 3.4 + */ +public interface EventBus { + + /** + * Subscribe to the event bus and {@link Event}s. The {@link Flux} drops events on backpressure to avoid contention. + * + * @return the observable to obtain events. + */ + Flux get(); + + /** + * Publish a {@link Event} to the bus. + * + * @param event the event to publish + */ + void publish(Event event); +} diff --git a/src/main/java/io/lettuce/core/event/EventPublisherOptions.java b/src/main/java/io/lettuce/core/event/EventPublisherOptions.java new file mode 100644 index 0000000000..11113b7fcb --- /dev/null +++ b/src/main/java/io/lettuce/core/event/EventPublisherOptions.java @@ -0,0 +1,33 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.event; + +import java.time.Duration; + +/** + * Configuration interface for command latency collection. + * + * @author Mark Paluch + */ +public interface EventPublisherOptions { + + /** + * Returns the interval for emit metrics. + * + * @return the interval for emit metrics + */ + Duration eventEmitInterval(); +} diff --git a/src/main/java/io/lettuce/core/event/cluster/AdaptiveRefreshTriggeredEvent.java b/src/main/java/io/lettuce/core/event/cluster/AdaptiveRefreshTriggeredEvent.java new file mode 100644 index 0000000000..679bc4c492 --- /dev/null +++ b/src/main/java/io/lettuce/core/event/cluster/AdaptiveRefreshTriggeredEvent.java @@ -0,0 +1,54 @@ +/* + * Copyright 2019-2020 the original author or authors. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.event.cluster; + +import java.util.function.Supplier; + +import io.lettuce.core.cluster.models.partitions.Partitions; +import io.lettuce.core.event.Event; + +/** + * Event when a topology refresh is about to start by an adaptive refresh trigger. + * + * @author Mark Paluch + * @since 5.2 + */ +public class AdaptiveRefreshTriggeredEvent implements Event { + + private Supplier partitionsSupplier; + private Runnable topologyRefreshScheduler; + + public AdaptiveRefreshTriggeredEvent(Supplier partitionsSupplier, Runnable topologyRefreshScheduler) { + this.partitionsSupplier = partitionsSupplier; + this.topologyRefreshScheduler = topologyRefreshScheduler; + } + + /** + * Schedules a new topology refresh. Refresh happens asynchronously. + */ + public void scheduleRefresh() { + topologyRefreshScheduler.run(); + } + + /** + * Retrieve the currently known partitions. + * + * @return the currently known topology view. The view is mutable and changes over time. + */ + public Partitions getPartitions() { + return partitionsSupplier.get(); + } +} diff --git a/src/main/java/io/lettuce/core/event/cluster/package-info.java b/src/main/java/io/lettuce/core/event/cluster/package-info.java new file mode 100644 index 0000000000..6c9aeda268 --- /dev/null +++ b/src/main/java/io/lettuce/core/event/cluster/package-info.java @@ -0,0 +1,5 @@ +/** + * Redis Cluster events. + */ +package io.lettuce.core.event.cluster; + diff --git a/src/main/java/io/lettuce/core/event/connection/ConnectedEvent.java b/src/main/java/io/lettuce/core/event/connection/ConnectedEvent.java new file mode 100644 index 0000000000..ebcec11539 --- /dev/null +++ b/src/main/java/io/lettuce/core/event/connection/ConnectedEvent.java @@ -0,0 +1,30 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.event.connection; + +import java.net.SocketAddress; + +/** + * Event for a established TCP-level connection. 
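AdaptiveRefreshTriggeredEvent above exposes the currently known Partitions and a hook to schedule another refresh. A hedged sketch of reacting to it from the event bus; as before, ClientResources#eventBus() and the Flux of Event parameterization are assumptions not shown in this hunk.

import io.lettuce.core.cluster.RedisClusterClient;
import io.lettuce.core.event.cluster.AdaptiveRefreshTriggeredEvent;

public class AdaptiveRefreshListener {

    public static void main(String[] args) {

        RedisClusterClient clusterClient = RedisClusterClient.create("redis://localhost");

        clusterClient.getResources().eventBus().get()
                .filter(AdaptiveRefreshTriggeredEvent.class::isInstance)
                .cast(AdaptiveRefreshTriggeredEvent.class)
                .subscribe(event -> {
                    // The topology view is mutable and changes over time, see getPartitions() above.
                    System.out.println("Adaptive refresh triggered, known nodes: " + event.getPartitions().size());
                    event.scheduleRefresh(); // asks for an asynchronous topology refresh
                });

        clusterClient.shutdown();
    }
}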
+ * + * @author Mark Paluch + * @since 3.4 + */ +public class ConnectedEvent extends ConnectionEventSupport { + public ConnectedEvent(SocketAddress local, SocketAddress remote) { + super(local, remote); + } +} diff --git a/src/main/java/io/lettuce/core/event/connection/ConnectionActivatedEvent.java b/src/main/java/io/lettuce/core/event/connection/ConnectionActivatedEvent.java new file mode 100644 index 0000000000..0bfd0da57d --- /dev/null +++ b/src/main/java/io/lettuce/core/event/connection/ConnectionActivatedEvent.java @@ -0,0 +1,33 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.event.connection; + +import java.net.SocketAddress; + +import io.lettuce.core.ClientOptions; + +/** + * Event for a connection activation (after SSL-handshake, {@link ClientOptions#isPingBeforeActivateConnection() PING before + * activation}, and buffered command replay). + * + * @author Mark Paluch + * @since 3.4 + */ +public class ConnectionActivatedEvent extends ConnectionEventSupport { + public ConnectionActivatedEvent(SocketAddress local, SocketAddress remote) { + super(local, remote); + } +} diff --git a/src/main/java/io/lettuce/core/event/connection/ConnectionDeactivatedEvent.java b/src/main/java/io/lettuce/core/event/connection/ConnectionDeactivatedEvent.java new file mode 100644 index 0000000000..3aa522aa5a --- /dev/null +++ b/src/main/java/io/lettuce/core/event/connection/ConnectionDeactivatedEvent.java @@ -0,0 +1,30 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.event.connection; + +import java.net.SocketAddress; + +/** + * Event for a connection deactivation. + * + * @author Mark Paluch + * @since 3.4 + */ +public class ConnectionDeactivatedEvent extends ConnectionEventSupport { + public ConnectionDeactivatedEvent(SocketAddress local, SocketAddress remote) { + super(local, remote); + } +} diff --git a/src/main/java/io/lettuce/core/event/connection/ConnectionEvent.java b/src/main/java/io/lettuce/core/event/connection/ConnectionEvent.java new file mode 100644 index 0000000000..be388cbf9c --- /dev/null +++ b/src/main/java/io/lettuce/core/event/connection/ConnectionEvent.java @@ -0,0 +1,29 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.event.connection; + +import io.lettuce.core.ConnectionId; +import io.lettuce.core.event.Event; + +/** + * Interface for Connection-related events + * + * @author Mark Paluch + * @since 3.4 + */ +public interface ConnectionEvent extends ConnectionId, Event { + +} diff --git a/src/main/java/io/lettuce/core/event/connection/ConnectionEventSupport.java b/src/main/java/io/lettuce/core/event/connection/ConnectionEventSupport.java new file mode 100644 index 0000000000..a19156286e --- /dev/null +++ b/src/main/java/io/lettuce/core/event/connection/ConnectionEventSupport.java @@ -0,0 +1,67 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.event.connection; + +import java.net.SocketAddress; + +import io.lettuce.core.internal.LettuceAssert; + +/** + * @author Mark Paluch + * @since 3.4 + */ +abstract class ConnectionEventSupport implements ConnectionEvent { + + private final SocketAddress local; + private final SocketAddress remote; + + ConnectionEventSupport(SocketAddress local, SocketAddress remote) { + LettuceAssert.notNull(local, "Local must not be null"); + LettuceAssert.notNull(remote, "Remote must not be null"); + + this.local = local; + this.remote = remote; + } + + /** + * Returns the local address. + * + * @return the local address + */ + public SocketAddress localAddress() { + return local; + } + + /** + * Returns the remote address. + * + * @return the remote address + */ + public SocketAddress remoteAddress() { + return remote; + } + + @Override + public String toString() { + StringBuilder sb = new StringBuilder(); + sb.append(getClass().getSimpleName()); + sb.append(" ["); + sb.append(local); + sb.append(" -> ").append(remote); + sb.append(']'); + return sb.toString(); + } +} diff --git a/src/main/java/io/lettuce/core/event/connection/DisconnectedEvent.java b/src/main/java/io/lettuce/core/event/connection/DisconnectedEvent.java new file mode 100644 index 0000000000..314a83cc71 --- /dev/null +++ b/src/main/java/io/lettuce/core/event/connection/DisconnectedEvent.java @@ -0,0 +1,30 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
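ConnectedEvent, ConnectionActivatedEvent and ConnectionDeactivatedEvent above all carry the local and remote SocketAddress through the shared ConnectionEventSupport base class. A sketch of logging the connection lifecycle from a given EventBus (the Flux of Event parameterization is assumed, as noted earlier):

import io.lettuce.core.event.EventBus;
import io.lettuce.core.event.connection.ConnectedEvent;
import io.lettuce.core.event.connection.ConnectionActivatedEvent;
import io.lettuce.core.event.connection.ConnectionDeactivatedEvent;

public class ConnectionLifecycleLogger {

    public static void log(EventBus bus) {

        bus.get().subscribe(event -> {
            if (event instanceof ConnectedEvent) {
                ConnectedEvent connected = (ConnectedEvent) event;
                System.out.println("TCP connect " + connected.localAddress() + " -> " + connected.remoteAddress());
            } else if (event instanceof ConnectionActivatedEvent) {
                System.out.println("Connection activated: " + event); // toString() prints "local -> remote"
            } else if (event instanceof ConnectionDeactivatedEvent) {
                System.out.println("Connection deactivated: " + event);
            }
        });
    }
}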
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.event.connection; + +import java.net.SocketAddress; + +/** + * Event for a disconnect on TCP-level. + * + * @author Mark Paluch + * @since 3.4 + */ +public class DisconnectedEvent extends ConnectionEventSupport { + public DisconnectedEvent(SocketAddress local, SocketAddress remote) { + super(local, remote); + } +} diff --git a/src/main/java/io/lettuce/core/event/connection/ReconnectFailedEvent.java b/src/main/java/io/lettuce/core/event/connection/ReconnectFailedEvent.java new file mode 100644 index 0000000000..d77d6362cb --- /dev/null +++ b/src/main/java/io/lettuce/core/event/connection/ReconnectFailedEvent.java @@ -0,0 +1,55 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.event.connection; + +import java.net.SocketAddress; + +/** + * Event fired on failed reconnect caused either by I/O issues or during connection initialization. + * + * @author Mark Paluch + * @since 5.2 + */ +public class ReconnectFailedEvent extends ConnectionEventSupport { + + private final Throwable cause; + private final int attempt; + + public ReconnectFailedEvent(SocketAddress local, SocketAddress remote, Throwable cause, int attempt) { + super(local, remote); + this.cause = cause; + this.attempt = attempt; + } + + /** + * Returns the {@link Throwable} that describes the reconnect cause. + * + * @return the {@link Throwable} that describes the reconnect cause. + */ + public Throwable getCause() { + return cause; + } + + /** + * Returns the reconnect attempt counter for the connection. Zero-based counter, {@code 0} represents the first attempt. The + * counter is reset upon successful reconnect. + * + * @return the reconnect attempt counter for the connection. Zero-based counter, {@code 0} represents the first attempt. + */ + public int getAttempt() { + return attempt; + } +} diff --git a/src/main/java/io/lettuce/core/event/connection/package-info.java b/src/main/java/io/lettuce/core/event/connection/package-info.java new file mode 100644 index 0000000000..70df7b17c5 --- /dev/null +++ b/src/main/java/io/lettuce/core/event/connection/package-info.java @@ -0,0 +1,5 @@ +/** + * Connection-related events. 
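ReconnectFailedEvent above adds the failure cause and a zero-based attempt counter on top of the address pair, so repeated failures can be surfaced once they cross a threshold. A small sketch using only the accessors shown in this hunk:

import io.lettuce.core.event.EventBus;
import io.lettuce.core.event.connection.ReconnectFailedEvent;

public class ReconnectAlerting {

    public static void watch(EventBus bus) {

        bus.get()
                .filter(ReconnectFailedEvent.class::isInstance)
                .cast(ReconnectFailedEvent.class)
                .subscribe(event -> {
                    // getAttempt() is zero-based and resets on a successful reconnect, see above.
                    if (event.getAttempt() >= 5) {
                        System.err.println("Still failing to reconnect to " + event.remoteAddress() + " after "
                                + (event.getAttempt() + 1) + " attempts: " + event.getCause());
                    }
                });
    }
}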
+ */ +package io.lettuce.core.event.connection; + diff --git a/src/main/java/io/lettuce/core/event/metrics/CommandLatencyEvent.java b/src/main/java/io/lettuce/core/event/metrics/CommandLatencyEvent.java new file mode 100644 index 0000000000..d76ccbe009 --- /dev/null +++ b/src/main/java/io/lettuce/core/event/metrics/CommandLatencyEvent.java @@ -0,0 +1,52 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.event.metrics; + +import java.util.Map; + +import io.lettuce.core.event.Event; +import io.lettuce.core.metrics.CommandLatencyId; +import io.lettuce.core.metrics.CommandMetrics; + +/** + * Event that transports command latency metrics. This event carries latencies for multiple commands and connections. + * + * @author Mark Paluch + */ +public class CommandLatencyEvent implements Event { + + private Map latencies; + + public CommandLatencyEvent(Map latencies) { + this.latencies = latencies; + } + + /** + * Returns the latencies mapped between {@link CommandLatencyId connection/command} and the {@link CommandMetrics metrics}. + * + * @return the latency map. + */ + public Map getLatencies() { + return latencies; + } + + @Override + public String toString() { + StringBuilder sb = new StringBuilder(); + sb.append(latencies); + return sb.toString(); + } +} diff --git a/src/main/java/io/lettuce/core/event/metrics/DefaultCommandLatencyEventPublisher.java b/src/main/java/io/lettuce/core/event/metrics/DefaultCommandLatencyEventPublisher.java new file mode 100644 index 0000000000..e7c4200953 --- /dev/null +++ b/src/main/java/io/lettuce/core/event/metrics/DefaultCommandLatencyEventPublisher.java @@ -0,0 +1,79 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.event.metrics; + +import java.util.concurrent.TimeUnit; + +import io.lettuce.core.event.EventBus; +import io.lettuce.core.event.EventPublisherOptions; +import io.lettuce.core.metrics.CommandLatencyCollector; +import io.netty.util.concurrent.EventExecutorGroup; +import io.netty.util.concurrent.ScheduledFuture; + +/** + * Default implementation of a {@link CommandLatencyCollector} for command latencies. 
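CommandLatencyEvent above transports a map keyed by CommandLatencyId with CommandMetrics values. A sketch of dumping the periodically emitted latencies; the map parameterization follows the Javadoc, since generics are stripped in this rendering.

import io.lettuce.core.event.EventBus;
import io.lettuce.core.event.metrics.CommandLatencyEvent;

public class LatencyDump {

    public static void dump(EventBus bus) {

        bus.get()
                .filter(CommandLatencyEvent.class::isInstance)
                .cast(CommandLatencyEvent.class)
                .subscribe(event ->
                        // One entry per connection/command combination, values carry the recorded metrics.
                        event.getLatencies().forEach((commandLatencyId, commandMetrics) ->
                                System.out.println(commandLatencyId + ": " + commandMetrics)));
    }
}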
+ * + * @author Mark Paluch + */ +public class DefaultCommandLatencyEventPublisher implements MetricEventPublisher { + + private final EventExecutorGroup eventExecutorGroup; + private final EventPublisherOptions options; + private final EventBus eventBus; + private final CommandLatencyCollector commandLatencyCollector; + + private final Runnable EMITTER = this::emitMetricsEvent; + + private volatile ScheduledFuture scheduledFuture; + + public DefaultCommandLatencyEventPublisher(EventExecutorGroup eventExecutorGroup, EventPublisherOptions options, + EventBus eventBus, CommandLatencyCollector commandLatencyCollector) { + + this.eventExecutorGroup = eventExecutorGroup; + this.options = options; + this.eventBus = eventBus; + this.commandLatencyCollector = commandLatencyCollector; + + if (!options.eventEmitInterval().isZero()) { + scheduledFuture = this.eventExecutorGroup.scheduleAtFixedRate(EMITTER, options.eventEmitInterval().toMillis(), + options.eventEmitInterval().toMillis(), TimeUnit.MILLISECONDS); + } + } + + @Override + public boolean isEnabled() { + return !options.eventEmitInterval().isZero() && scheduledFuture != null; + } + + @Override + public void shutdown() { + + if (scheduledFuture != null) { + scheduledFuture.cancel(true); + scheduledFuture = null; + } + } + + @Override + public void emitMetricsEvent() { + + if (!isEnabled() || !commandLatencyCollector.isEnabled()) { + return; + } + + eventBus.publish(new CommandLatencyEvent(commandLatencyCollector.retrieveMetrics())); + } +} diff --git a/src/main/java/io/lettuce/core/event/metrics/MetricEventPublisher.java b/src/main/java/io/lettuce/core/event/metrics/MetricEventPublisher.java new file mode 100644 index 0000000000..a3bfa6771b --- /dev/null +++ b/src/main/java/io/lettuce/core/event/metrics/MetricEventPublisher.java @@ -0,0 +1,44 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.event.metrics; + +import io.lettuce.core.event.Event; + +/** + * Event publisher which publishes metrics by the use of {@link Event events}. + * + * @author Mark Paluch + * @since 3.4 + */ +public interface MetricEventPublisher { + + /** + * Emit immediately a metrics event. + */ + void emitMetricsEvent(); + + /** + * Returns {@literal true} if the metric collector is enabled. + * + * @return {@literal true} if the metric collector is enabled + */ + boolean isEnabled(); + + /** + * Shut down the event publisher. + */ + void shutdown(); +} diff --git a/src/main/java/io/lettuce/core/event/metrics/package-info.java b/src/main/java/io/lettuce/core/event/metrics/package-info.java new file mode 100644 index 0000000000..a40d1f39ba --- /dev/null +++ b/src/main/java/io/lettuce/core/event/metrics/package-info.java @@ -0,0 +1,5 @@ +/** + * Metric events and publishing. 
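DefaultCommandLatencyEventPublisher above schedules emitMetricsEvent() at the configured interval and publishes CommandLatencyEvents to the bus. How the EventPublisherOptions reach that publisher is not part of this hunk; the sketch below assumes a commandLatencyPublisherOptions(...) method on the ClientResources builder as the wiring point, which is purely illustrative here.

import java.time.Duration;
import io.lettuce.core.event.DefaultEventPublisherOptions;
import io.lettuce.core.resource.ClientResources;
import io.lettuce.core.resource.DefaultClientResources;

public class LatencyPublishingSetup {

    public static ClientResources everyMinute() {

        // commandLatencyPublisherOptions(...) and the builder() entry points are assumptions.
        return DefaultClientResources.builder()
                .commandLatencyPublisherOptions(DefaultEventPublisherOptions.builder()
                        .eventEmitInterval(Duration.ofMinutes(1))
                        .build())
                .build();
    }
}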
+ */ +package io.lettuce.core.event.metrics; + diff --git a/src/main/java/io/lettuce/core/event/package-info.java b/src/main/java/io/lettuce/core/event/package-info.java new file mode 100644 index 0000000000..3a6dd1e4bd --- /dev/null +++ b/src/main/java/io/lettuce/core/event/package-info.java @@ -0,0 +1,5 @@ +/** + * Event publishing and subscription. + */ +package io.lettuce.core.event; + diff --git a/src/main/java/com/lambdaworks/redis/internal/AbstractInvocationHandler.java b/src/main/java/io/lettuce/core/internal/AbstractInvocationHandler.java similarity index 80% rename from src/main/java/com/lambdaworks/redis/internal/AbstractInvocationHandler.java rename to src/main/java/io/lettuce/core/internal/AbstractInvocationHandler.java index 5c65ee40cf..6ce5062255 100644 --- a/src/main/java/com/lambdaworks/redis/internal/AbstractInvocationHandler.java +++ b/src/main/java/io/lettuce/core/internal/AbstractInvocationHandler.java @@ -1,4 +1,19 @@ -package com.lambdaworks.redis.internal; +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.internal; import java.lang.reflect.InvocationHandler; import java.lang.reflect.Method; @@ -8,7 +23,7 @@ /** * Abstract base class for invocation handlers. - * + * * @since 4.2 */ public abstract class AbstractInvocationHandler implements InvocationHandler { @@ -27,7 +42,7 @@ public abstract class AbstractInvocationHandler implements InvocationHandler { * *

  • other method calls are dispatched to {@link #handleInvocation}. * - * + * * @param proxy the proxy instance that the method was invoked on * * @param method the {@code Method} instance corresponding to the interface method invoked on the proxy instance. The @@ -68,7 +83,7 @@ public final Object invoke(Object proxy, Method method, Object[] args) throws Th /** * {@link #invoke} delegates to this method upon any method invocation on the proxy instance, except {@link Object#equals}, * {@link Object#hashCode} and {@link Object#toString}. The result will be returned as the proxied method's return value. - * + * *

    * Unlike {@link #invoke}, {@code args} will never be null. When the method has no parameter, an empty array is passed in. * @@ -108,7 +123,7 @@ public boolean equals(Object obj) { /** * By default delegates to {@link Object#hashCode}. The dynamic proxies' {@code hashCode()} will delegate to this method. * Subclasses can override this method to provide custom equality. - * + * * @return a hash code value for this object. */ @Override @@ -119,7 +134,7 @@ public int hashCode() { /** * By default delegates to {@link Object#toString}. The dynamic proxies' {@code toString()} will delegate to this method. * Subclasses can override this method to provide custom string representation for the proxies. - * + * * @return a string representation of the object. */ @Override @@ -129,21 +144,35 @@ public String toString() { private static boolean isProxyOfSameInterfaces(Object arg, Class proxyClass) { return proxyClass.isInstance(arg) - // Equal proxy instances should mostly be instance of proxyClass - // Under some edge cases (such as the proxy of JDK types serialized and then deserialized) - // the proxy type may not be the same. - // We first check isProxyClass() so that the common case of comparing with non-proxy objects - // is efficient. - || (Proxy.isProxyClass(arg.getClass()) - && Arrays.equals(arg.getClass().getInterfaces(), proxyClass.getInterfaces())); + // Equal proxy instances should mostly be instance of proxyClass + // Under some edge cases (such as the proxy of JDK types serialized and then deserialized) + // the proxy type may not be the same. + // We first check isProxyClass() so that the common case of comparing with non-proxy objects + // is efficient. + || (Proxy.isProxyClass(arg.getClass()) && Arrays.equals(arg.getClass().getInterfaces(), + proxyClass.getInterfaces())); } protected static class MethodTranslator { + private static final WeakHashMap, MethodTranslator> TRANSLATOR_MAP = new WeakHashMap<>(32); private final Map map; - public MethodTranslator(Class delegate, Class... methodSources) { + private MethodTranslator(Class delegate, Class... methodSources) { + + map = createMethodMap(delegate, methodSources); + } + + public static MethodTranslator of(Class delegate, Class... methodSources) { + + synchronized (TRANSLATOR_MAP) { + return TRANSLATOR_MAP.computeIfAbsent(delegate, key -> new MethodTranslator(key, methodSources)); + } + } + + private Map createMethodMap(Class delegate, Class[] methodSources) { + Map map; List methods = new ArrayList<>(); for (Class sourceClass : methodSources) { methods.addAll(getMethods(sourceClass)); @@ -158,6 +187,7 @@ public MethodTranslator(Class delegate, Class... methodSources) { } catch (NoSuchMethodException ignore) { } } + return map; } private Collection getMethods(Class sourceClass) { diff --git a/src/main/java/io/lettuce/core/internal/AsyncCloseable.java b/src/main/java/io/lettuce/core/internal/AsyncCloseable.java new file mode 100644 index 0000000000..f067056f8d --- /dev/null +++ b/src/main/java/io/lettuce/core/internal/AsyncCloseable.java @@ -0,0 +1,36 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
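The contract above routes equals, hashCode and toString to the handler itself and dispatches everything else to handleInvocation. A hedged sketch of a delegating handler; the protected handleInvocation(Object, Method, Object[]) signature follows the Guava class this code is adapted from and is not visible in this hunk.

import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import io.lettuce.core.internal.AbstractInvocationHandler;

public class LoggingInvocationHandler extends AbstractInvocationHandler {

    private final Object delegate;

    public LoggingInvocationHandler(Object delegate) {
        this.delegate = delegate;
    }

    @Override
    protected Object handleInvocation(Object proxy, Method method, Object[] args) throws Throwable {
        System.out.println("Invoking " + method.getName());
        return method.invoke(delegate, args); // equals/hashCode/toString never reach this point
    }

    @SuppressWarnings("unchecked")
    public static <T> T proxy(Class<T> type, T delegate) {
        return (T) Proxy.newProxyInstance(type.getClassLoader(), new Class<?>[] { type },
                new LoggingInvocationHandler(delegate));
    }
}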
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.internal; + +import java.util.concurrent.CompletableFuture; + +/** + * A {@link AsyncCloseable} is a resource that can be closed. The {@link #closeAsync()} method is invoked to request resources + * release that the object is holding (such as open files). + * + * @since 5.1 + * @author Mark Paluch + */ +public interface AsyncCloseable { + + /** + * Requests to close this object and releases any system resources associated with it. If the object is already closed then + * invoking this method has no effect. + *

    + * Calls to this method return a {@link CompletableFuture} that is notified with the outcome of the close request. + */ + CompletableFuture closeAsync(); +} diff --git a/src/main/java/io/lettuce/core/internal/AsyncConnectionProvider.java b/src/main/java/io/lettuce/core/internal/AsyncConnectionProvider.java new file mode 100644 index 0000000000..73457603ae --- /dev/null +++ b/src/main/java/io/lettuce/core/internal/AsyncConnectionProvider.java @@ -0,0 +1,292 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.internal; + +import java.util.ArrayList; +import java.util.List; +import java.util.Map; +import java.util.concurrent.CancellationException; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.CompletionStage; +import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.atomic.AtomicBoolean; +import java.util.concurrent.atomic.AtomicIntegerFieldUpdater; +import java.util.function.BiConsumer; +import java.util.function.Consumer; +import java.util.function.Function; + +/** + * Non-blocking provider for connection objects. This connection provider is typed with a connection type and connection key + * type. + *

    + * {@link #getConnection(Object)} Connection requests} are synchronized with a shared {@link Sync synchronzer object} per + * {@code ConnectionKey}. Multiple threads requesting a connection for the same {@code ConnectionKey} share the same + * synchronizer and are not required to wait until a previous asynchronous connection is established but participate in existing + * connection initializations. Shared synchronization leads to a fair synchronization amongst multiple threads waiting to obtain + * a connection. + * + * @author Mark Paluch + * @param connection type. + * @param connection key type. + * @param type of the {@link CompletionStage} handle of the connection progress. + * @since 5.1 + */ +public class AsyncConnectionProvider> { + + private final Function connectionFactory; + private final Map> connections = new ConcurrentHashMap<>(); + + private volatile boolean closed; + + /** + * Create a new {@link AsyncConnectionProvider}. + * + * @param connectionFactory must not be {@literal null}. + */ + @SuppressWarnings("unchecked") + public AsyncConnectionProvider(Function connectionFactory) { + + LettuceAssert.notNull(connectionFactory, "AsyncConnectionProvider must not be null"); + this.connectionFactory = (Function) connectionFactory; + } + + /** + * Request a connection for the given the connection {@code key} and return a {@link CompletionStage} that is notified about + * the connection outcome. + * + * @param key the connection {@code key}, must not be {@literal null}. + * @return + */ + public F getConnection(K key) { + return getSynchronizer(key).getConnection(); + } + + /** + * Obtain a connection to a target given the connection {@code key}. + * + * @param key the connection {@code key}. + * @return + */ + private Sync getSynchronizer(K key) { + + if (closed) { + throw new IllegalStateException("ConnectionProvider is already closed"); + } + + Sync sync = connections.get(key); + + if (sync != null) { + return sync; + } + + AtomicBoolean atomicBoolean = new AtomicBoolean(); + + sync = connections.computeIfAbsent(key, connectionKey -> { + + Sync createdSync = new Sync<>(key, connectionFactory.apply(key)); + + if (closed) { + createdSync.cancel(); + } + + return createdSync; + }); + + if (atomicBoolean.compareAndSet(false, true)) { + + sync.getConnection().whenComplete((c, t) -> { + + if (t != null) { + connections.remove(key); + } + }); + } + + return sync; + } + + /** + * Register a connection identified by {@code key}. Overwrites existing entries. + * + * @param key the connection {@code key}. + * @param connection the connection object. + */ + public void register(K key, T connection) { + connections.put(key, new Sync<>(key, connection)); + } + + /** + * @return number of established connections. + */ + @SuppressWarnings({ "unchecked", "rawtypes" }) + public int getConnectionCount() { + + Sync[] syncs = connections.values().toArray(new Sync[0]); + int count = 0; + + for (Sync sync : syncs) { + if (sync.isComplete()) { + count++; + } + } + + return count; + } + + /** + * Close all connections. Pending connections are closed using future chaining. + */ + @SuppressWarnings("unchecked") + public CompletableFuture close() { + + this.closed = true; + + List> futures = new ArrayList<>(); + + forEach((connectionKey, closeable) -> { + + futures.add(closeable.closeAsync()); + connections.remove(connectionKey); + }); + + return Futures.allOf(futures); + } + + /** + * Close a connection by its connection {@code key}. Pending connections are closed using future chaining. 
+ * + * @param key the connection {@code key}, must not be {@literal null}. + */ + public void close(K key) { + + LettuceAssert.notNull(key, "ConnectionKey must not be null!"); + + Sync sync = connections.get(key); + if (sync != null) { + connections.remove(key); + sync.doWithConnection(AsyncCloseable::closeAsync); + } + } + + /** + * Execute an action for all established and pending connections. + * + * @param action the action. + */ + public void forEach(Consumer action) { + + LettuceAssert.notNull(action, "Action must not be null!"); + + connections.values().forEach(sync -> { + if (sync != null) { + sync.doWithConnection(action); + } + }); + } + + /** + * Execute an action for all established and pending {@link AsyncCloseable}s. + * + * @param action the action. + */ + public void forEach(BiConsumer action) { + connections.forEach((key, sync) -> sync.doWithConnection(action)); + } + + static class Sync> { + + private static final int PHASE_IN_PROGRESS = 0; + private static final int PHASE_COMPLETE = 1; + private static final int PHASE_FAILED = 2; + private static final int PHASE_CANCELED = 3; + + @SuppressWarnings({ "rawtypes", "unchecked" }) + private static final AtomicIntegerFieldUpdater PHASE = AtomicIntegerFieldUpdater.newUpdater(Sync.class, "phase"); + + // Updated with AtomicIntegerFieldUpdater + @SuppressWarnings("unused") + private volatile int phase = PHASE_IN_PROGRESS; + + private volatile T connection; + + private final K key; + private final F future; + + @SuppressWarnings("unchecked") + public Sync(K key, F future) { + + this.key = key; + this.future = (F) future.whenComplete((connection, throwable) -> { + + if (throwable != null) { + + if (throwable instanceof CancellationException) { + PHASE.compareAndSet(this, PHASE_IN_PROGRESS, PHASE_CANCELED); + } + + PHASE.compareAndSet(this, PHASE_IN_PROGRESS, PHASE_FAILED); + } + + if (PHASE.compareAndSet(this, PHASE_IN_PROGRESS, PHASE_COMPLETE)) { + + if (connection != null) { + Sync.this.connection = connection; + } + } + }); + } + + @SuppressWarnings("unchecked") + public Sync(K key, T value) { + + this.key = key; + this.connection = value; + this.future = (F) CompletableFuture.completedFuture(value); + PHASE.set(this, PHASE_COMPLETE); + } + + public void cancel() { + future.toCompletableFuture().cancel(false); + doWithConnection(AsyncCloseable::closeAsync); + } + + public F getConnection() { + return future; + } + + void doWithConnection(Consumer action) { + + if (isComplete()) { + action.accept(connection); + } else { + future.thenAccept(action); + } + } + + void doWithConnection(BiConsumer action) { + + if (isComplete()) { + action.accept(key, connection); + } else { + future.thenAccept(c -> action.accept(key, c)); + } + } + + private boolean isComplete() { + return PHASE.get(this) == PHASE_COMPLETE; + } + } +} diff --git a/src/main/java/io/lettuce/core/internal/DefaultMethods.java b/src/main/java/io/lettuce/core/internal/DefaultMethods.java new file mode 100644 index 0000000000..70e89ec197 --- /dev/null +++ b/src/main/java/io/lettuce/core/internal/DefaultMethods.java @@ -0,0 +1,174 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
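AsyncConnectionProvider above only requires its connections to be AsyncCloseable so that close() can chain the pending close futures. A minimal sketch of that contract; the resource type is purely illustrative, and the parameterization of Futures.allOf (introduced later in this change set) is assumed where generics are stripped in this rendering.

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;
import io.lettuce.core.internal.AsyncCloseable;
import io.lettuce.core.internal.Futures;

public class PooledResource implements AsyncCloseable {

    private final CompletableFuture<Void> closed = new CompletableFuture<>();

    @Override
    public CompletableFuture<Void> closeAsync() {
        // Closing twice has no further effect; the same future is handed out again.
        closed.complete(null);
        return closed;
    }

    public static void main(String[] args) {

        List<PooledResource> resources = Arrays.asList(new PooledResource(), new PooledResource());

        List<CompletableFuture<Void>> closing = resources.stream()
                .map(PooledResource::closeAsync)
                .collect(Collectors.toList());

        // Wait until all close requests have completed.
        Futures.allOf(closing).join();
    }
}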
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.internal; + +import java.lang.invoke.MethodHandle; +import java.lang.invoke.MethodHandles; +import java.lang.invoke.MethodType; +import java.lang.invoke.MethodHandles.Lookup; +import java.lang.reflect.Constructor; +import java.lang.reflect.Method; +import java.util.Arrays; +import java.util.Optional; + +/** + * Collection of utility methods to lookup {@link MethodHandle}s for default interface {@link Method}s. This class is part of + * the internal API and may change without further notice. + * + * @author Mark Paluch + * @since 4.4 + */ +public class DefaultMethods { + + private static final MethodHandleLookup methodHandleLookup = MethodHandleLookup.getMethodHandleLookup(); + + /** + * Lookup a {@link MethodHandle} for a default {@link Method}. + * + * @param method must be a {@link Method#isDefault() default} {@link Method}. + * @return the {@link MethodHandle}. + */ + public static MethodHandle lookupMethodHandle(Method method) throws ReflectiveOperationException { + + LettuceAssert.notNull(method, "Method must not be null"); + LettuceAssert.isTrue(method.isDefault(), "Method is not a default method"); + + return methodHandleLookup.lookup(method); + } + + /** + * Strategies for {@link MethodHandle} lookup. + */ + enum MethodHandleLookup { + + /** + * Open (via reflection construction of {@link Lookup}) method handle lookup. Works with Java 8 and with Java 9 + * permitting illegal access. + */ + OPEN { + + private final Optional> constructor = getLookupConstructor(); + + @Override + MethodHandle lookup(Method method) throws ReflectiveOperationException { + + Constructor constructor = this.constructor.orElseThrow(() -> new IllegalStateException( + "Could not obtain MethodHandles.lookup constructor")); + + return constructor.newInstance(method.getDeclaringClass()).unreflectSpecial(method, method.getDeclaringClass()); + } + + @Override + boolean isAvailable() { + return constructor.isPresent(); + } + }, + + /** + * Encapsulated {@link MethodHandle} lookup working on Java 9. + */ + ENCAPSULATED { + + Method privateLookupIn = findBridgeMethod(); + + @Override + MethodHandle lookup(Method method) throws ReflectiveOperationException { + + MethodType methodType = MethodType.methodType(method.getReturnType(), method.getParameterTypes()); + + return getLookup(method.getDeclaringClass()).findSpecial(method.getDeclaringClass(), method.getName(), + methodType, method.getDeclaringClass()); + } + + private Method findBridgeMethod() { + + try { + return MethodHandles.class.getDeclaredMethod("privateLookupIn", Class.class, Lookup.class); + } catch (ReflectiveOperationException e) { + return null; + } + } + + private Lookup getLookup(Class declaringClass) { + + Lookup lookup = MethodHandles.lookup(); + + if (privateLookupIn != null) { + try { + return (Lookup) privateLookupIn.invoke(null, declaringClass, lookup); + } catch (ReflectiveOperationException e) { + return lookup; + } + } + + return lookup; + } + + @Override + boolean isAvailable() { + return true; + } + }; + + /** + * Lookup a {@link MethodHandle} given {@link Method} to look up. 
+ * + * @param method must not be {@literal null}. + * @return the method handle. + * @throws ReflectiveOperationException + */ + abstract MethodHandle lookup(Method method) throws ReflectiveOperationException; + + /** + * @return {@literal true} if the lookup is available. + */ + abstract boolean isAvailable(); + + /** + * Obtain the first available {@link MethodHandleLookup}. + * + * @return the {@link MethodHandleLookup} + * @throws IllegalStateException if no {@link MethodHandleLookup} is available. + */ + public static MethodHandleLookup getMethodHandleLookup() { + + return Arrays.stream(MethodHandleLookup.values()).filter(MethodHandleLookup::isAvailable).findFirst() + .orElseThrow(() -> new IllegalStateException("No MethodHandleLookup available!")); + } + + private static Optional> getLookupConstructor() { + + try { + + Constructor constructor = Lookup.class.getDeclaredConstructor(Class.class); + if (!constructor.isAccessible()) { + constructor.setAccessible(true); + } + + return Optional.of(constructor); + + } catch (Exception ex) { + + // this is the signal that we are on Java 9 (encapsulated) and can't use the accessible constructor approach. + if (ex.getClass().getName().equals("java.lang.reflect.InaccessibleObjectException")) { + return Optional.empty(); + } + + throw new IllegalStateException(ex); + } + } + } +} diff --git a/src/main/java/io/lettuce/core/internal/ExceptionFactory.java b/src/main/java/io/lettuce/core/internal/ExceptionFactory.java new file mode 100644 index 0000000000..3f69699990 --- /dev/null +++ b/src/main/java/io/lettuce/core/internal/ExceptionFactory.java @@ -0,0 +1,143 @@ +/* + * Copyright 2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.internal; + +import io.lettuce.core.*; + +import java.time.Duration; +import java.time.LocalTime; +import java.time.format.DateTimeFormatter; +import java.time.format.DateTimeFormatterBuilder; +import java.time.temporal.ChronoField; + +/** + * Factory for Redis exceptions. + * + * @author Mark Paluch + * @since 4.5 + */ +public abstract class ExceptionFactory { + + private static final DateTimeFormatter MINUTES = new DateTimeFormatterBuilder().appendText(ChronoField.MINUTE_OF_DAY) + .appendLiteral(" minute(s)").toFormatter(); + + private static final DateTimeFormatter SECONDS = new DateTimeFormatterBuilder().appendText(ChronoField.SECOND_OF_DAY) + .appendLiteral(" second(s)").toFormatter(); + + private static final DateTimeFormatter MILLISECONDS = new DateTimeFormatterBuilder().appendText(ChronoField.MILLI_OF_DAY) + .appendLiteral(" millisecond(s)").toFormatter(); + + private ExceptionFactory() { + } + + /** + * Create a {@link RedisCommandTimeoutException} with a detail message given the timeout. + * + * @param timeout the timeout value. + * @return the {@link RedisCommandTimeoutException}. 
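DefaultMethods above resolves a MethodHandle for a default interface method, choosing the Java 8 (reflective Lookup constructor) or Java 9+ (privateLookupIn) strategy at runtime. A hedged sketch of invoking a default method through it; the bindTo/invokeWithArguments usage mirrors how such special-invocation handles are typically called and is not taken from this diff.

import java.lang.invoke.MethodHandle;
import java.lang.reflect.Method;
import io.lettuce.core.internal.DefaultMethods;

public class DefaultMethodExample {

    interface Greeter {
        default String greet(String name) {
            return "Hello " + name;
        }
    }

    public static void main(String[] args) throws Throwable {

        Method greet = Greeter.class.getMethod("greet", String.class);
        MethodHandle handle = DefaultMethods.lookupMethodHandle(greet);

        // Handles obtained via unreflectSpecial/findSpecial must be bound to a receiver instance.
        Greeter receiver = new Greeter() { };
        System.out.println(handle.bindTo(receiver).invokeWithArguments("Lettuce"));
    }
}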
+ */ + public static RedisCommandTimeoutException createTimeoutException(Duration timeout) { + return new RedisCommandTimeoutException(String.format("Command timed out after %s", formatTimeout(timeout))); + } + + /** + * Create a {@link RedisCommandTimeoutException} with a detail message given the message and timeout. + * + * @param message the detail message. + * @param timeout the timeout value. + * @return the {@link RedisCommandTimeoutException}. + */ + public static RedisCommandTimeoutException createTimeoutException(String message, Duration timeout) { + return new RedisCommandTimeoutException( + String.format("%s. Command timed out after %s", message, formatTimeout(timeout))); + } + + public static String formatTimeout(Duration duration) { + + if (duration.isZero()) { + return "no timeout"; + } + + LocalTime time = LocalTime.MIDNIGHT.plus(duration); + if (isExactMinutes(duration)) { + return MINUTES.format(time); + } + + if (isExactSeconds(duration)) { + return SECONDS.format(time); + } + + if (isExactMillis(duration)) { + return MILLISECONDS.format(time); + } + + return String.format("%d ns", duration.toNanos()); + } + + private static boolean isExactMinutes(Duration duration) { + return duration.toMillis() % (1000 * 60) == 0 && duration.getNano() == 0; + } + + private static boolean isExactSeconds(Duration duration) { + return duration.toMillis() % (1000) == 0 && duration.getNano() == 0; + } + + private static boolean isExactMillis(Duration duration) { + return duration.toNanos() % (1000 * 1000) == 0; + } + + /** + * Create a {@link RedisCommandExecutionException} with a detail message. Specific Redis error messages may create subtypes + * of {@link RedisCommandExecutionException}. + * + * @param message the detail message. + * @return the {@link RedisCommandExecutionException}. + */ + public static RedisCommandExecutionException createExecutionException(String message) { + return createExecutionException(message, null); + } + + /** + * Create a {@link RedisCommandExecutionException} with a detail message and optionally a {@link Throwable cause}. Specific + * Redis error messages may create subtypes of {@link RedisCommandExecutionException}. + * + * @param message the detail message. + * @param cause the nested exception, may be {@literal null}. + * @return the {@link RedisCommandExecutionException}. + */ + public static RedisCommandExecutionException createExecutionException(String message, Throwable cause) { + + if (message != null) { + + if (message.startsWith("BUSY")) { + return cause != null ? new RedisBusyException(message, cause) : new RedisBusyException(message); + } + + if (message.startsWith("NOSCRIPT")) { + return cause != null ? new RedisNoScriptException(message, cause) : new RedisNoScriptException(message); + } + + if (message.startsWith("LOADING")) { + return cause != null ? new RedisLoadingException(message, cause) : new RedisLoadingException(message); + } + + return cause != null ? new RedisCommandExecutionException(message, cause) + : new RedisCommandExecutionException(message); + } + + return new RedisCommandExecutionException(cause); + } +} diff --git a/src/main/java/io/lettuce/core/internal/Exceptions.java b/src/main/java/io/lettuce/core/internal/Exceptions.java new file mode 100644 index 0000000000..85138fbdb5 --- /dev/null +++ b/src/main/java/io/lettuce/core/internal/Exceptions.java @@ -0,0 +1,111 @@ +/* + * Copyright 2020 the original author or authors. 
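ExceptionFactory above renders timeouts in the largest exact unit and maps well-known Redis error prefixes (BUSY, NOSCRIPT, LOADING) onto dedicated exception subtypes. A small sketch of both behaviours, using only methods shown in this hunk:

import java.time.Duration;
import io.lettuce.core.RedisCommandExecutionException;
import io.lettuce.core.internal.ExceptionFactory;

public class ExceptionFactoryExample {

    public static void main(String[] args) {

        System.out.println(ExceptionFactory.formatTimeout(Duration.ofMinutes(2)));   // 2 minute(s)
        System.out.println(ExceptionFactory.formatTimeout(Duration.ofMillis(1500))); // 1500 millisecond(s)
        System.out.println(ExceptionFactory.formatTimeout(Duration.ZERO));           // no timeout

        RedisCommandExecutionException busy =
                ExceptionFactory.createExecutionException("BUSY Redis is busy running a script.");
        System.out.println(busy.getClass().getSimpleName()); // RedisBusyException, a subtype
    }
}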
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.internal; + +import java.util.concurrent.CompletionException; +import java.util.concurrent.ExecutionException; +import java.util.concurrent.Future; +import java.util.concurrent.TimeoutException; + +import io.lettuce.core.RedisCommandExecutionException; +import io.lettuce.core.RedisCommandInterruptedException; +import io.lettuce.core.RedisCommandTimeoutException; +import io.lettuce.core.RedisException; + +/** + * Exception handling and utils to operate on. + * + * @author Mark Paluch + * @since 6.0 + */ +public class Exceptions { + + /** + * Unwrap the exception if the given {@link Throwable} is a {@link ExecutionException} or {@link CompletionException}. + * + * @param t the root cause + * @return the unwrapped {@link Throwable#getCause() cause} or the actual {@link Throwable}. + */ + public static Throwable unwrap(Throwable t) { + + if (t instanceof ExecutionException || t instanceof CompletionException) { + return t.getCause(); + } + + return t; + } + + /** + * Prepare an unchecked {@link RuntimeException} that will bubble upstream if thrown by an operator. + * + * @param t the root cause + * @return an unchecked exception that should choose bubbling up over error callback path. + */ + public static RuntimeException bubble(Throwable t) { + + Throwable throwableToUse = unwrap(t); + + if (throwableToUse instanceof TimeoutException) { + return new RedisCommandTimeoutException(throwableToUse); + } + + if (throwableToUse instanceof InterruptedException) { + + Thread.currentThread().interrupt(); + return new RedisCommandInterruptedException(throwableToUse); + } + + if (throwableToUse instanceof RedisCommandExecutionException) { + return ExceptionFactory.createExecutionException(throwableToUse.getMessage(), throwableToUse); + } + + if (throwableToUse instanceof RedisException) { + return (RedisException) throwableToUse; + } + + if (throwableToUse instanceof RuntimeException) { + return (RuntimeException) throwableToUse; + } + + return new RedisException(throwableToUse); + } + + /** + * Prepare an unchecked {@link RuntimeException} that will bubble upstream for synchronization usage (i.e. on calling + * {@link Future#get()}). + * + * @param t the root cause + * @return an unchecked exception that should choose bubbling up over error callback path. 
+ */ + public static RuntimeException fromSynchronization(Throwable t) { + + Throwable throwableToUse = unwrap(t); + + if (throwableToUse instanceof RedisCommandTimeoutException) { + return new RedisCommandTimeoutException(throwableToUse); + } + + if (throwableToUse instanceof RedisCommandExecutionException) { + return bubble(throwableToUse); + } + + if (throwableToUse instanceof RuntimeException) { + return new RedisException(throwableToUse); + } + + return bubble(throwableToUse); + } +} diff --git a/src/main/java/io/lettuce/core/internal/Futures.java b/src/main/java/io/lettuce/core/internal/Futures.java new file mode 100644 index 0000000000..b074d89eb0 --- /dev/null +++ b/src/main/java/io/lettuce/core/internal/Futures.java @@ -0,0 +1,253 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.internal; + +import java.time.Duration; +import java.util.Collection; +import java.util.concurrent.*; + +import io.lettuce.core.RedisFuture; +import io.netty.channel.ChannelFuture; + +/** + * Utility methods for {@link java.util.concurrent.Future} handling. This class is part of the internal API and may change + * without further notice. + * + * @author Mark Paluch + * @since 5.1 + */ +public abstract class Futures { + + private Futures() { + // no instances allowed + } + + /** + * Create a composite {@link CompletableFuture} is composed from the given {@code stages}. + * + * @param stages must not be {@literal null}. + * @return the composed {@link CompletableFuture}. + * @since 5.1.1 + */ + @SuppressWarnings({ "rawtypes" }) + public static CompletableFuture allOf(Collection> stages) { + + LettuceAssert.notNull(stages, "Futures must not be null"); + + CompletableFuture[] futures = new CompletableFuture[stages.size()]; + + int index = 0; + for (CompletionStage stage : stages) { + futures[index++] = stage.toCompletableFuture(); + } + + return CompletableFuture.allOf(futures); + } + + /** + * Create a {@link CompletableFuture} that is completed exceptionally with {@code throwable}. + * + * @param throwable must not be {@literal null}. + * @return the exceptionally completed {@link CompletableFuture}. + */ + public static CompletableFuture failed(Throwable throwable) { + + LettuceAssert.notNull(throwable, "Throwable must not be null"); + + CompletableFuture future = new CompletableFuture<>(); + future.completeExceptionally(throwable); + + return future; + } + + /** + * Adapt Netty's {@link ChannelFuture} emitting a {@link Void} result. + * + * @param future the {@link ChannelFuture} to adapt. + * @return the {@link CompletableFuture}. 
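Futures above aggregates CompletionStages and pre-fails futures for error propagation. A minimal sketch of those helpers; parameterizations are assumed where this rendering strips generics.

import java.util.Arrays;
import java.util.concurrent.CompletableFuture;
import io.lettuce.core.internal.Futures;

public class FuturesExample {

    public static void main(String[] args) {

        CompletableFuture<String> first = CompletableFuture.completedFuture("a");
        CompletableFuture<String> second = CompletableFuture.completedFuture("b");

        // Completes once every stage in the collection completes.
        Futures.allOf(Arrays.asList(first, second)).join();

        // Pre-failed future for propagating an error without throwing synchronously.
        CompletableFuture<Void> failed = Futures.failed(new IllegalStateException("boom"));
        System.out.println(failed.isCompletedExceptionally()); // true
    }
}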
+ * @since 6.0 + */ + public static CompletionStage toCompletionStage(io.netty.util.concurrent.Future future) { + + LettuceAssert.notNull(future, "Future must not be null"); + + CompletableFuture promise = new CompletableFuture<>(); + + if (future.isDone() || future.isCancelled()) { + if (future.isSuccess()) { + promise.complete(null); + } else { + promise.completeExceptionally(future.cause()); + } + return promise; + } + + future.addListener(f -> { + if (f.isSuccess()) { + promise.complete(null); + } else { + promise.completeExceptionally(f.cause()); + } + }); + + return promise; + } + + /** + * Adapt Netty's {@link io.netty.util.concurrent.Future} emitting a value result into a {@link CompletableFuture}. + * + * @param source source {@link io.netty.util.concurrent.Future} emitting signals. + * @param target target {@link CompletableFuture}. + * @since 6.0 + */ + public static void adapt(io.netty.util.concurrent.Future source, CompletableFuture target) { + + source.addListener(f -> { + if (f.isSuccess()) { + target.complete(null); + } else { + target.completeExceptionally(f.cause()); + } + }); + + if (source.isSuccess()) { + target.complete(null); + } else if (source.isCancelled()) { + target.cancel(false); + } else if (source.isDone() && !source.isSuccess()) { + target.completeExceptionally(source.cause()); + } + } + + /** + * Wait until future is complete or the supplied timeout is reached. + * + * @param timeout Maximum time to wait for futures to complete. + * @param future Future to wait for. + * @return {@literal true} if future completes in time, otherwise {@literal false} + * @since 6.0 + */ + public static boolean await(Duration timeout, Future future) { + return await(timeout.toNanos(), TimeUnit.NANOSECONDS, future); + } + + /** + * Wait until future is complete or the supplied timeout is reached. + * + * @param timeout Maximum time to wait for futures to complete. + * @param unit Unit of time for the timeout. + * @param future Future to wait for. + * @return {@literal true} if future completes in time, otherwise {@literal false} + * @since 6.0 + */ + public static boolean await(long timeout, TimeUnit unit, Future future) { + + try { + long nanos = unit.toNanos(timeout); + + if (nanos < 0) { + return false; + } + + if (nanos == 0) { + future.get(); + } else { + future.get(nanos, TimeUnit.NANOSECONDS); + } + + return true; + } catch (TimeoutException e) { + return false; + } catch (Exception e) { + throw Exceptions.fromSynchronization(e); + } + } + + /** + * Wait until futures are complete or the supplied timeout is reached. + * + * @param timeout Maximum time to wait for futures to complete. + * @param futures Futures to wait for. + * @return {@literal true} if all futures complete in time, otherwise {@literal false} + * @since 6.0 + */ + public static boolean awaitAll(Duration timeout, Future... futures) { + return awaitAll(timeout.toNanos(), TimeUnit.NANOSECONDS, futures); + } + + /** + * Wait until futures are complete or the supplied timeout is reached. + * + * @param timeout Maximum time to wait for futures to complete. + * @param unit Unit of time for the timeout. + * @param futures Futures to wait for. + * @return {@literal true} if all futures complete in time, otherwise {@literal false} + */ + public static boolean awaitAll(long timeout, TimeUnit unit, Future... 
futures) { + + try { + long nanos = unit.toNanos(timeout); + long time = System.nanoTime(); + + for (Future f : futures) { + + if (timeout <= 0) { + f.get(); + } else { + if (nanos < 0) { + return false; + } + + f.get(nanos, TimeUnit.NANOSECONDS); + + long now = System.nanoTime(); + nanos -= now - time; + time = now; + } + } + + return true; + } catch (TimeoutException e) { + return false; + } catch (Exception e) { + throw Exceptions.fromSynchronization(e); + } + } + + /** + * Wait until futures are complete or the supplied timeout is reached. Commands are canceled if the timeout is reached but + * the command is not finished. + * + * @param cmd Command to wait for + * @param timeout Maximum time to wait for futures to complete + * @param unit Unit of time for the timeout + * @param Result type + * @return Result of the command. + * @since 6.0 + */ + public static T awaitOrCancel(RedisFuture cmd, long timeout, TimeUnit unit) { + + try { + if (timeout > 0 && !cmd.await(timeout, unit)) { + cmd.cancel(true); + throw ExceptionFactory.createTimeoutException(Duration.ofNanos(unit.toNanos(timeout))); + } + return cmd.get(); + } catch (Exception e) { + throw Exceptions.bubble(e); + } + } +} diff --git a/src/main/java/com/lambdaworks/redis/internal/HostAndPort.java b/src/main/java/io/lettuce/core/internal/HostAndPort.java similarity index 80% rename from src/main/java/com/lambdaworks/redis/internal/HostAndPort.java rename to src/main/java/io/lettuce/core/internal/HostAndPort.java index b5517bc943..6941651d62 100644 --- a/src/main/java/com/lambdaworks/redis/internal/HostAndPort.java +++ b/src/main/java/io/lettuce/core/internal/HostAndPort.java @@ -1,11 +1,27 @@ -package com.lambdaworks.redis.internal; +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.internal; -import com.lambdaworks.redis.LettuceStrings; +import io.lettuce.core.LettuceStrings; /** * An immutable representation of a host and port. - * + * * @author Mark Paluch + * @author Larry Battle * @since 4.2 */ public class HostAndPort { @@ -16,7 +32,7 @@ public class HostAndPort { public final int port; /** - * + * * @param hostText must not be empty or {@literal null}. 
* @param port */ @@ -29,17 +45,17 @@ private HostAndPort(String hostText, int port) { /** * Create a {@link HostAndPort} of {@code host} and {@code port} - * + * * @param host the hostname * @param port a valid port * @return the {@link HostAndPort} of {@code host} and {@code port} */ public static HostAndPort of(String host, int port) { - LettuceAssert.isTrue(isValidPort(port), String.format("Port out of range: %s", port)); + LettuceAssert.isTrue(isValidPort(port), () -> String.format("Port out of range: %s", port)); HostAndPort parsedHost = parse(host); - LettuceAssert.isTrue(!parsedHost.hasPort(), String.format("Host has a port: %s", host)); + LettuceAssert.isTrue(!parsedHost.hasPort(), () -> String.format("Host has a port: %s", host)); return new HostAndPort(host, port); } @@ -76,13 +92,13 @@ public static HostAndPort parse(String hostPortString) { if (!LettuceStrings.isEmpty(portString)) { // Try to parse the whole port string as a number. // JDK7 accepts leading plus signs. We don't want to. - LettuceAssert.isTrue(!portString.startsWith("+"), String.format("Unparseable port number: %s", hostPortString)); + LettuceAssert.isTrue(!portString.startsWith("+"), () -> String.format("Cannot port number: %s", hostPortString)); try { port = Integer.parseInt(portString); } catch (NumberFormatException e) { - throw new IllegalArgumentException(String.format("Unparseable port number: %s" + hostPortString)); + throw new IllegalArgumentException(String.format("Cannot parse port number: %s", hostPortString)); } - LettuceAssert.isTrue(isValidPort(port), String.format("Port number out of range: %s", hostPortString)); + LettuceAssert.isTrue(isValidPort(port), () -> String.format("Port number out of range: %s", hostPortString)); } return new HostAndPort(host, port); @@ -171,13 +187,13 @@ public int hashCode() { private static String[] getHostAndPortFromBracketedHost(String hostPortString) { LettuceAssert.isTrue(hostPortString.charAt(0) == '[', - String.format("Bracketed host-port string must start with a bracket: %s", hostPortString)); + () -> String.format("Bracketed host-port string must start with a bracket: %s", hostPortString)); int colonIndex = hostPortString.indexOf(':'); int closeBracketIndex = hostPortString.lastIndexOf(']'); LettuceAssert.isTrue(colonIndex > -1 && closeBracketIndex > colonIndex, - String.format("Invalid bracketed host/port: ", hostPortString)); + () -> String.format("Invalid bracketed host/port: %s", hostPortString)); String host = hostPortString.substring(1, closeBracketIndex); if (closeBracketIndex + 1 == hostPortString.length()) { @@ -188,7 +204,7 @@ private static String[] getHostAndPortFromBracketedHost(String hostPortString) { "Only a colon may follow a close bracket: " + hostPortString); for (int i = closeBracketIndex + 2; i < hostPortString.length(); ++i) { LettuceAssert.isTrue(Character.isDigit(hostPortString.charAt(i)), - String.format("Port must be numeric: %s", hostPortString)); + () -> String.format("Port must be numeric: %s", hostPortString)); } return new String[] { host, hostPortString.substring(closeBracketIndex + 2) }; } diff --git a/src/main/java/io/lettuce/core/internal/LettuceAssert.java b/src/main/java/io/lettuce/core/internal/LettuceAssert.java new file mode 100644 index 0000000000..435dedead4 --- /dev/null +++ b/src/main/java/io/lettuce/core/internal/LettuceAssert.java @@ -0,0 +1,256 @@ +/* + * Copyright 2011-2020 the original author or authors. 
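A small sketch (not in the diff) of how the HostAndPort API above handles plain and bracketed IPv6 authority strings; the host names are made up.

import io.lettuce.core.internal.HostAndPort;

class HostAndPortSketch {

    public static void main(String[] args) {

        // Plain host:port form.
        HostAndPort plain = HostAndPort.parse("redis.example.com:6379");

        // Bracketed IPv6 form; the port follows the closing bracket.
        HostAndPort ipv6 = HostAndPort.parse("[2001:db8::1]:6379");

        // of(...) rejects a host string that already carries a port ("Host has a port: ...").
        HostAndPort explicit = HostAndPort.of("redis.example.com", 6379);

        System.out.println(plain.port + " " + ipv6.port + " " + explicit.port);
    }
}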
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.internal; + +import java.util.Collection; +import java.util.function.Supplier; + +import io.lettuce.core.LettuceStrings; + +/** + * Assertion utility class that assists in validating arguments. This class is part of the internal API and may change without + * further notice. + * + * @author Mark Paluch + */ +public class LettuceAssert { + + /** + * prevent instances. + */ + private LettuceAssert() { + } + + /** + * Assert that a string is not empty, it must not be {@code null} and it must not be empty. + * + * @param string the object to check + * @param message the exception message to use if the assertion fails + * @throws IllegalArgumentException if the object is {@code null} or the underlying string is empty + */ + public static void notEmpty(CharSequence string, String message) { + if (LettuceStrings.isEmpty(string)) { + throw new IllegalArgumentException(message); + } + } + + /** + * Assert that a string is not empty, it must not be {@code null} and it must not be empty. + * + * @param string the object to check + * @param messageSupplier the exception message supplier to use if the assertion fails + * @throws IllegalArgumentException if the object is {@code null} or the underlying string is empty + * @since 5.2.0 + */ + public static void notEmpty(CharSequence string, Supplier messageSupplier) { + if (LettuceStrings.isEmpty(string)) { + throw new IllegalArgumentException(messageSupplier.get()); + } + } + + /** + * Assert that an object is not {@code null} . + * + * @param object the object to check + * @param message the exception message to use if the assertion fails + * @throws IllegalArgumentException if the object is {@code null} + */ + public static void notNull(Object object, String message) { + if (object == null) { + throw new IllegalArgumentException(message); + } + } + + /** + * Assert that an object is not {@code null} . + * + * @param object the object to check + * @param messageSupplier the exception message supplier to use if the assertion fails + * @throws IllegalArgumentException if the object is {@code null} + * @since 5.2.0 + */ + public static void notNull(Object object, Supplier messageSupplier) { + if (object == null) { + throw new IllegalArgumentException(messageSupplier.get()); + } + } + + /** + * Assert that an array has elements; that is, it must not be {@code null} and must have at least one element. + * + * @param array the array to check + * @param message the exception message to use if the assertion fails + * @throws IllegalArgumentException if the object array is {@code null} or has no elements + */ + public static void notEmpty(Object[] array, String message) { + if (array == null || array.length == 0) { + throw new IllegalArgumentException(message); + } + } + + /** + * Assert that an array has elements; that is, it must not be {@code null} and must have at least one element. 
+ * + * @param array the array to check + * @param messageSupplier the exception message supplier to use if the assertion fails + * @throws IllegalArgumentException if the object array is {@code null} or has no elements + * @since 5.2.0 + */ + public static void notEmpty(Object[] array, Supplier messageSupplier) { + if (array == null || array.length == 0) { + throw new IllegalArgumentException(messageSupplier.get()); + } + } + + /** + * Assert that an array has elements; that is, it must not be {@code null} and must have at least one element. + * + * @param array the array to check + * @param message the exception message to use if the assertion fails + * @throws IllegalArgumentException if the object array is {@code null} or has no elements + */ + public static void notEmpty(int[] array, String message) { + if (array == null || array.length == 0) { + throw new IllegalArgumentException(message); + } + } + + /** + * Assert that an array has no null elements. + * + * @param array the array to check + * @param message the exception message to use if the assertion fails + * @throws IllegalArgumentException if the object array contains a {@code null} element + */ + public static void noNullElements(Object[] array, String message) { + if (array != null) { + for (Object element : array) { + if (element == null) { + throw new IllegalArgumentException(message); + } + } + } + } + + /** + * Assert that an array has no null elements. + * + * @param array the array to check + * @param messageSupplier the exception message supplier to use if the assertion fails + * @throws IllegalArgumentException if the object array contains a {@code null} element + * @since 5.2.0 + */ + public static void noNullElements(Object[] array, Supplier messageSupplier) { + if (array != null) { + for (Object element : array) { + if (element == null) { + throw new IllegalArgumentException(messageSupplier.get()); + } + } + } + } + + /** + * Assert that a {@link java.util.Collection} has no null elements. + * + * @param c the collection to check + * @param message the exception message to use if the assertion fails + * @throws IllegalArgumentException if the {@link Collection} contains a {@code null} element + */ + public static void noNullElements(Collection c, String message) { + if (c != null) { + for (Object element : c) { + if (element == null) { + throw new IllegalArgumentException(message); + } + } + } + } + + /** + * Assert that a {@link java.util.Collection} has no null elements. + * + * @param c the collection to check + * @param messageSupplier the exception message supplier to use if the assertion fails + * @throws IllegalArgumentException if the {@link Collection} contains a {@code null} element + * @since 5.2.0 + */ + public static void noNullElements(Collection c, Supplier messageSupplier) { + if (c != null) { + for (Object element : c) { + if (element == null) { + throw new IllegalArgumentException(messageSupplier.get()); + } + } + } + } + + /** + * Assert that {@code value} is {@literal true}. + * + * @param value the value to check + * @param message the exception message to use if the assertion fails + * @throws IllegalArgumentException if the object array contains a {@code null} element + */ + public static void isTrue(boolean value, String message) { + if (!value) { + throw new IllegalArgumentException(message); + } + } + + /** + * Assert that {@code value} is {@literal true}. 
+ * + * @param value the value to check + * @param messageSupplier the exception message supplier to use if the assertion fails + * @throws IllegalArgumentException if the object array contains a {@code null} element + * @since 5.2.0 + */ + public static void isTrue(boolean value, Supplier messageSupplier) { + if (!value) { + throw new IllegalArgumentException(messageSupplier.get()); + } + } + + /** + * Ensures the truth of an expression involving the state of the calling instance, but not involving any parameters to the + * calling method. + * + * @param condition a boolean expression + * @param message the exception message to use if the assertion fails + * @throws IllegalStateException if {@code expression} is false + */ + public static void assertState(boolean condition, String message) { + if (!condition) { + throw new IllegalStateException(message); + } + } + + /** + * Ensures the truth of an expression involving the state of the calling instance, but not involving any parameters to the + * calling method. + * + * @param condition a boolean expression + * @param messageSupplier the exception message supplier to use if the assertion fails + * @throws IllegalStateException if {@code expression} is false + * @since 5.2.0 + */ + public static void assertState(boolean condition, Supplier messageSupplier) { + if (!condition) { + throw new IllegalStateException(messageSupplier.get()); + } + } +} diff --git a/src/main/java/io/lettuce/core/internal/LettuceClassUtils.java b/src/main/java/io/lettuce/core/internal/LettuceClassUtils.java new file mode 100644 index 0000000000..9a75501761 --- /dev/null +++ b/src/main/java/io/lettuce/core/internal/LettuceClassUtils.java @@ -0,0 +1,167 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.internal; + +import java.util.IdentityHashMap; +import java.util.Map; + +import io.lettuce.core.JavaRuntime; + +/** + * Miscellaneous class utility methods. Mainly for internal use within the framework. + * + * @author Mark Paluch + * @since 4.2 + */ +public class LettuceClassUtils { + + /** The CGLIB class separator character "$$" */ + public static final String CGLIB_CLASS_SEPARATOR = "$$"; + + /** + * Map with primitive wrapper type as key and corresponding primitive type as value, for example: Integer.class -> + * int.class. + */ + private static final Map, Class> primitiveWrapperTypeMap = new IdentityHashMap, Class>(9); + + /** + * Map with primitive type as key and corresponding wrapper type as value, for example: int.class -> Integer.class. 
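The Supplier-based overloads added in 5.2.0 defer message construction until an assertion actually fails, which avoids String.format on hot paths. A minimal sketch of a caller (not part of the diff):

import io.lettuce.core.internal.LettuceAssert;

class LettuceAssertSketch {

    void validate(String host, int port) {

        // Eager message: a plain String literal costs nothing extra.
        LettuceAssert.notEmpty(host, "Host must not be empty");

        // Lazy message: String.format only runs when the check fails.
        LettuceAssert.isTrue(port > 0 && port <= 65535,
                () -> String.format("Port out of range: %d", port));
    }
}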
+ */ + private static final Map, Class> primitiveTypeToWrapperMap = new IdentityHashMap, Class>(9); + + static { + primitiveWrapperTypeMap.put(Boolean.class, boolean.class); + primitiveWrapperTypeMap.put(Byte.class, byte.class); + primitiveWrapperTypeMap.put(Character.class, char.class); + primitiveWrapperTypeMap.put(Double.class, double.class); + primitiveWrapperTypeMap.put(Float.class, float.class); + primitiveWrapperTypeMap.put(Integer.class, int.class); + primitiveWrapperTypeMap.put(Long.class, long.class); + primitiveWrapperTypeMap.put(Short.class, short.class); + primitiveWrapperTypeMap.put(Void.class, void.class); + } + + /** + * Determine whether the {@link Class} identified by the supplied name is present and can be loaded. Will return + * {@code false} if either the class or one of its dependencies is not present or cannot be loaded. + * + * @param className the name of the class to check + * @return whether the specified class is present + */ + public static boolean isPresent(String className) { + try { + forName(className); + return true; + } catch (Throwable ex) { + // Class or one of its dependencies is not present... + return false; + } + } + + /** + * Loads a class using the {@link #getDefaultClassLoader()}. + * + * @param className + * @return + */ + public static Class findClass(String className) { + try { + return forName(className, getDefaultClassLoader()); + } catch (ClassNotFoundException e) { + return null; + } + } + + /** + * Loads a class using the {@link #getDefaultClassLoader()}. + * + * @param className + * @return + * @throws ClassNotFoundException + */ + public static Class forName(String className) throws ClassNotFoundException { + return forName(className, getDefaultClassLoader()); + } + + private static Class forName(String className, ClassLoader classLoader) throws ClassNotFoundException { + try { + return classLoader.loadClass(className); + } catch (ClassNotFoundException ex) { + int lastDotIndex = className.lastIndexOf('.'); + if (lastDotIndex != -1) { + String innerClassName = className.substring(0, lastDotIndex) + '$' + className.substring(lastDotIndex + 1); + try { + return classLoader.loadClass(innerClassName); + } catch (ClassNotFoundException ex2) { + // swallow - let original exception get through + } + } + throw ex; + } + } + + /** + * Return the default ClassLoader to use: typically the thread context ClassLoader, if available; the ClassLoader that + * loaded the ClassUtils class will be used as fallback. + * + * @return the default ClassLoader (never null) + * @see java.lang.Thread#getContextClassLoader() + */ + private static ClassLoader getDefaultClassLoader() { + ClassLoader cl = null; + try { + cl = Thread.currentThread().getContextClassLoader(); + } catch (Throwable ex) { + // Cannot access thread context ClassLoader - falling back to system class loader... + } + if (cl == null) { + // No thread context class loader -> use class loader of this class. + cl = JavaRuntime.class.getClassLoader(); + } + return cl; + } + + /** + * Check if the right-hand side type may be assigned to the left-hand side type, assuming setting by reflection. Considers + * primitive wrapper classes as assignable to the corresponding primitive types. 
+ * + * @param lhsType the target type + * @param rhsType the value type that should be assigned to the target type + * @return if the target type is assignable from the value type + */ + public static boolean isAssignable(Class lhsType, Class rhsType) { + + LettuceAssert.notNull(lhsType, "Left-hand side type must not be null"); + LettuceAssert.notNull(rhsType, "Right-hand side type must not be null"); + + if (lhsType.isAssignableFrom(rhsType)) { + return true; + } + + if (lhsType.isPrimitive()) { + Class resolvedPrimitive = primitiveWrapperTypeMap.get(rhsType); + if (lhsType == resolvedPrimitive) { + return true; + } + } else { + Class resolvedWrapper = primitiveTypeToWrapperMap.get(rhsType); + if (resolvedWrapper != null && lhsType.isAssignableFrom(resolvedWrapper)) { + return true; + } + } + return false; + } +} diff --git a/src/main/java/io/lettuce/core/internal/LettuceFactories.java b/src/main/java/io/lettuce/core/internal/LettuceFactories.java new file mode 100644 index 0000000000..65679a7eec --- /dev/null +++ b/src/main/java/io/lettuce/core/internal/LettuceFactories.java @@ -0,0 +1,73 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.internal; + +import java.util.ArrayDeque; +import java.util.Deque; +import java.util.Queue; +import java.util.concurrent.ArrayBlockingQueue; +import java.util.concurrent.BlockingQueue; +import java.util.concurrent.ConcurrentLinkedDeque; +import java.util.concurrent.LinkedBlockingQueue; + +/** + * This class is part of the internal API and may change without further notice. + * + * @author Mark Paluch + * @since 4.2 + */ +public class LettuceFactories { + + /** + * Threshold used to determine queue implementation. A queue size above the size indicates usage of + * {@link LinkedBlockingQueue} otherwise {@link ArrayBlockingQueue}. + */ + private static final int ARRAY_QUEUE_THRESHOLD = Integer.getInteger( + "io.lettuce.core.LettuceFactories.array-queue-threshold", 200000); + + /** + * Creates a new, optionally bounded, {@link Queue} that does not require external synchronization. + * + * @param maxSize queue size. If {@link Integer#MAX_VALUE}, then creates an {@link ConcurrentLinkedDeque unbounded queue}. + * @return a new, empty {@link Queue}. + */ + public static Queue newConcurrentQueue(int maxSize) { + + if (maxSize == Integer.MAX_VALUE) { + return new ConcurrentLinkedDeque<>(); + } + + return maxSize > ARRAY_QUEUE_THRESHOLD ? new LinkedBlockingQueue<>(maxSize) : new ArrayBlockingQueue<>(maxSize); + } + + /** + * Creates a new {@link Queue} for single producer/single consumer. + * + * @return a new, empty {@link ArrayDeque}. + */ + public static Deque newSpScQueue() { + return new ArrayDeque<>(); + } + + /** + * Creates a new {@link BlockingQueue}. + * + * @return a new, empty {@link BlockingQueue}. 
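A sketch (not in the diff) of the two LettuceClassUtils checks above: optional-dependency detection and primitive/wrapper-aware assignability. The class name passed to isPresent is only an example.

import io.lettuce.core.internal.LettuceClassUtils;

class ClassUtilsSketch {

    public static void main(String[] args) {

        // Feature detection without a hard compile-time dependency.
        boolean epollAvailable = LettuceClassUtils.isPresent("io.netty.channel.epoll.EpollEventLoopGroup");

        // Primitive targets accept their wrapper types...
        boolean intFromInteger = LettuceClassUtils.isAssignable(int.class, Integer.class); // true
        // ...but not wrappers of a different primitive.
        boolean longFromInteger = LettuceClassUtils.isAssignable(long.class, Integer.class); // false

        System.out.println(epollAvailable + " " + intFromInteger + " " + longFromInteger);
    }
}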
+ */ + public static LinkedBlockingQueue newBlockingQueue() { + return new LinkedBlockingQueue<>(); + } +} diff --git a/src/main/java/com/lambdaworks/redis/internal/LettuceLists.java b/src/main/java/io/lettuce/core/internal/LettuceLists.java similarity index 82% rename from src/main/java/com/lambdaworks/redis/internal/LettuceLists.java rename to src/main/java/io/lettuce/core/internal/LettuceLists.java index 36a59e37c1..684b564dbe 100644 --- a/src/main/java/com/lambdaworks/redis/internal/LettuceLists.java +++ b/src/main/java/io/lettuce/core/internal/LettuceLists.java @@ -1,11 +1,26 @@ -package com.lambdaworks.redis.internal; +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.internal; import java.util.*; /** * Static utility methods for {@link List} instances. This class is part of the internal API and may change without further * notice. - * + * * @author Mark Paluch * @since 4.2 */ @@ -20,7 +35,7 @@ private LettuceLists() { /** * Creates a new {@link ArrayList} containing all elements from {@code elements}. - * + * * @param elements the elements that the list should contain, must not be {@literal null}. * @param the element type * @return a new {@link ArrayList} containing all elements from {@code elements}. @@ -37,7 +52,7 @@ public static List newList(T... elements) { /** * Creates a new {@link ArrayList} containing all elements from {@code elements}. - * + * * @param elements the elements that the list should contain, must not be {@literal null}. * @param the element type * @return a new {@link ArrayList} containing all elements from {@code elements}. @@ -55,7 +70,7 @@ public static List newList(Iterable elements) { /** * Creates a new {@link ArrayList} containing all elements from {@code elements}. - * + * * @param elements the elements that the list should contain, must not be {@literal null}. * @param the element type * @return a new {@link ArrayList} containing all elements from {@code elements}. diff --git a/src/main/java/com/lambdaworks/redis/internal/LettuceSets.java b/src/main/java/io/lettuce/core/internal/LettuceSets.java similarity index 80% rename from src/main/java/com/lambdaworks/redis/internal/LettuceSets.java rename to src/main/java/io/lettuce/core/internal/LettuceSets.java index 1392d6d7b1..42dcb9c13d 100644 --- a/src/main/java/com/lambdaworks/redis/internal/LettuceSets.java +++ b/src/main/java/io/lettuce/core/internal/LettuceSets.java @@ -1,4 +1,19 @@ -package com.lambdaworks.redis.internal; +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
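Not part of the diff: the queue factory above selects ArrayBlockingQueue for small bounds, LinkedBlockingQueue above the array-queue threshold, and an unbounded ConcurrentLinkedDeque for Integer.MAX_VALUE. A minimal sketch:

import java.util.Queue;

import io.lettuce.core.internal.LettuceFactories;

class QueueFactorySketch {

    public static void main(String[] args) {

        // Bounded queue; small sizes are backed by ArrayBlockingQueue.
        Queue<String> bounded = LettuceFactories.newConcurrentQueue(1024);

        // Integer.MAX_VALUE means "unbounded" and yields a ConcurrentLinkedDeque.
        Queue<String> unbounded = LettuceFactories.newConcurrentQueue(Integer.MAX_VALUE);

        bounded.offer("PING");
        System.out.println(bounded.poll() + " " + unbounded.size());
    }
}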
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.internal; import java.util.Collection; import java.util.Collections; @@ -8,7 +23,7 @@ /** * Static utility methods for {@link Set} instances. This class is part of the internal API and may change without further * notice. - * + * * @author Mark Paluch * @since 4.2 */ @@ -24,7 +39,7 @@ private LettuceSets() { /** * Creates a new {@code HashSet} containing all elements from {@code elements}. - * + * * @param elements the elements that the set should contain, must not be {@literal null}. * @param the element type * @return a new {@code HashSet} containing all elements from {@code elements}. @@ -39,7 +54,7 @@ public static Set newHashSet(Collection elements) { /** * Creates a new {@code HashSet} containing all elements from {@code elements}. - * + * * @param elements the elements that the set should contain, must not be {@literal null}. * @param the element type * @return a new {@code HashSet} containing all elements from {@code elements}. @@ -61,7 +76,7 @@ public static Set newHashSet(Iterable elements) { /** * Creates a new {@code HashSet} containing all elements from {@code elements}. - * + * * @param elements the elements that the set should contain, must not be {@literal null}. * @param the element type * @return a new {@code HashSet} containing all elements from {@code elements}. diff --git a/src/main/java/io/lettuce/core/internal/TimeoutProvider.java b/src/main/java/io/lettuce/core/internal/TimeoutProvider.java new file mode 100644 index 0000000000..bf83d7228d --- /dev/null +++ b/src/main/java/io/lettuce/core/internal/TimeoutProvider.java @@ -0,0 +1,94 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.internal; + +import java.util.concurrent.TimeUnit; +import java.util.function.LongSupplier; +import java.util.function.Supplier; + +import io.lettuce.core.TimeoutOptions; +import io.lettuce.core.protocol.RedisCommand; + +/** + * Provider for command timeout durations. Determines an individual timeout for each command and falls back to a default + * timeout. + * + * @author Mark Paluch + * @since 5.1 + * @see TimeoutOptions + */ +public class TimeoutProvider { + + private final Supplier timeoutOptionsSupplier; + private final LongSupplier defaultTimeoutSupplier; + + private State state; + + /** + * Creates a new {@link TimeoutProvider} given {@link TimeoutOptions supplier} and {@link LongSupplier default timeout + * supplier in nano seconds}. + * + * @param timeoutOptionsSupplier must not be {@literal null}. 
+ * @param defaultTimeoutNsSupplier must not be {@literal null}. + */ + public TimeoutProvider(Supplier timeoutOptionsSupplier, LongSupplier defaultTimeoutNsSupplier) { + + LettuceAssert.notNull(timeoutOptionsSupplier, "TimeoutOptionsSupplier must not be null"); + LettuceAssert.notNull(defaultTimeoutNsSupplier, "Default TimeoutSupplier must not be null"); + + this.timeoutOptionsSupplier = timeoutOptionsSupplier; + this.defaultTimeoutSupplier = defaultTimeoutNsSupplier; + } + + /** + * Returns the timeout in {@link TimeUnit#NANOSECONDS} for {@link RedisCommand}. + * + * @param command the command. + * @return timeout in {@link TimeUnit#NANOSECONDS}. + */ + public long getTimeoutNs(RedisCommand command) { + + long timeoutNs = -1; + + State state = this.state; + if (state == null) { + state = this.state = new State(timeoutOptionsSupplier.get()); + } + + if (!state.applyDefaultTimeout) { + timeoutNs = state.timeoutSource.getTimeUnit().toNanos(state.timeoutSource.getTimeout(command)); + } + + return timeoutNs >= 0 ? timeoutNs : defaultTimeoutSupplier.getAsLong(); + } + + static class State { + + final boolean applyDefaultTimeout; + final TimeoutOptions.TimeoutSource timeoutSource; + + State(TimeoutOptions timeoutOptions) { + + this.timeoutSource = timeoutOptions.getSource(); + + if (timeoutSource == null || !timeoutOptions.isTimeoutCommands() || timeoutOptions.isApplyConnectionTimeout()) { + this.applyDefaultTimeout = true; + } else { + this.applyDefaultTimeout = false; + } + } + } +} diff --git a/src/main/java/io/lettuce/core/internal/package-info.java b/src/main/java/io/lettuce/core/internal/package-info.java new file mode 100644 index 0000000000..283e588ea7 --- /dev/null +++ b/src/main/java/io/lettuce/core/internal/package-info.java @@ -0,0 +1,6 @@ +/** + * Contains internal API. Classes in this package are part of the internal API and may change without further notice. + * + * @since 4.2 + */ +package io.lettuce.core.internal; diff --git a/src/main/java/io/lettuce/core/masterreplica/AsyncConnections.java b/src/main/java/io/lettuce/core/masterreplica/AsyncConnections.java new file mode 100644 index 0000000000..697696a342 --- /dev/null +++ b/src/main/java/io/lettuce/core/masterreplica/AsyncConnections.java @@ -0,0 +1,75 @@ +/* + * Copyright 2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
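A hedged sketch (not in the diff) of how TimeoutProvider resolves a per-command timeout; it assumes TimeoutOptions.enabled(Duration) is available and uses a plain PING command as the probe.

import java.time.Duration;
import java.util.concurrent.TimeUnit;

import io.lettuce.core.TimeoutOptions;
import io.lettuce.core.codec.StringCodec;
import io.lettuce.core.internal.TimeoutProvider;
import io.lettuce.core.output.StatusOutput;
import io.lettuce.core.protocol.Command;
import io.lettuce.core.protocol.CommandType;

class TimeoutProviderSketch {

    public static void main(String[] args) {

        // Per-command timeouts of 500 ms, falling back to a 60 s default when command timeouts are disabled.
        TimeoutProvider provider = new TimeoutProvider(
                () -> TimeoutOptions.enabled(Duration.ofMillis(500)),
                () -> TimeUnit.SECONDS.toNanos(60));

        Command<String, String, String> ping = new Command<>(CommandType.PING,
                new StatusOutput<>(StringCodec.ASCII));

        System.out.println(provider.getTimeoutNs(ping) + " ns");
    }
}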
+ */ +package io.lettuce.core.masterreplica; + +import java.time.Duration; +import java.util.List; +import java.util.Map; +import java.util.TreeMap; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.ScheduledExecutorService; + +import reactor.core.publisher.Mono; +import reactor.util.function.Tuples; +import io.lettuce.core.RedisURI; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.models.role.RedisNodeDescription; + +/** + * @author Mark Paluch + */ +class AsyncConnections { + + private final Map>> connections = new TreeMap<>( + MasterReplicaUtils.RedisURIComparator.INSTANCE); + private final List nodeList; + + AsyncConnections(List nodeList) { + this.nodeList = nodeList; + } + + /** + * Add a connection for a {@link RedisURI} + * + * @param redisURI + * @param connection + */ + public void addConnection(RedisURI redisURI, CompletableFuture> connection) { + connections.put(redisURI, connection); + } + + public Mono asMono(Duration timeout, ScheduledExecutorService timeoutExecutor) { + + Connections connections = new Connections(this.connections.size(), nodeList); + + for (Map.Entry>> entry : this.connections + .entrySet()) { + + CompletableFuture> future = entry.getValue(); + + future.whenComplete((connection, throwable) -> { + + if (throwable != null) { + connections.accept(throwable); + } else { + connections.accept(Tuples.of(entry.getKey(), connection)); + } + }); + } + + return Mono.fromCompletionStage(connections.getOrTimeout(timeout, timeoutExecutor)); + } +} diff --git a/src/main/java/io/lettuce/core/masterreplica/AutodiscoveryConnector.java b/src/main/java/io/lettuce/core/masterreplica/AutodiscoveryConnector.java new file mode 100644 index 0000000000..20e9e7a699 --- /dev/null +++ b/src/main/java/io/lettuce/core/masterreplica/AutodiscoveryConnector.java @@ -0,0 +1,160 @@ +/* + * Copyright 2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.masterreplica; + +import java.util.List; +import java.util.Map; +import java.util.Optional; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.ExecutionException; +import java.util.function.Predicate; + +import reactor.core.publisher.Mono; +import reactor.util.function.Tuple2; +import reactor.util.function.Tuples; +import io.lettuce.core.ConnectionFuture; +import io.lettuce.core.RedisClient; +import io.lettuce.core.RedisURI; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.models.role.RedisInstance; +import io.lettuce.core.models.role.RedisNodeDescription; + +/** + * {@link MasterReplicaConnector} to connect unmanaged Redis Master/Replica with auto-discovering master and replica nodes from + * a single {@link RedisURI}. 
+ * + * @author Mark Paluch + * @since 5.1 + */ +class AutodiscoveryConnector implements MasterReplicaConnector { + + private final RedisClient redisClient; + private final RedisCodec codec; + private final RedisURI redisURI; + + private final Map> initialConnections = new ConcurrentHashMap<>(); + + AutodiscoveryConnector(RedisClient redisClient, RedisCodec codec, RedisURI redisURI) { + this.redisClient = redisClient; + this.codec = codec; + this.redisURI = redisURI; + } + + @Override + public CompletableFuture> connectAsync() { + + ConnectionFuture> initialConnection = redisClient.connectAsync(codec, redisURI); + Mono> connect = Mono + .fromCompletionStage(initialConnection) + .flatMap( + nodeConnection -> { + + initialConnections.put(redisURI, nodeConnection); + + TopologyProvider topologyProvider = new MasterReplicaTopologyProvider(nodeConnection, redisURI); + + return Mono.fromCompletionStage(topologyProvider.getNodesAsync()).flatMap( + nodes -> getMasterConnectionAndUri(nodes, Tuples.of(redisURI, nodeConnection), codec)); + }).flatMap(connectionAndUri -> { + return initializeConnection(codec, connectionAndUri); + }); + + return connect.onErrorResume(t -> { + + Mono close = Mono.empty(); + + for (StatefulRedisConnection connection : initialConnections.values()) { + close = close.then(Mono.fromFuture(connection.closeAsync())); + } + + return close.then(Mono.error(t)); + }).onErrorMap(ExecutionException.class, Throwable::getCause).toFuture(); + + } + + private Mono>> getMasterConnectionAndUri(List nodes, + Tuple2> connectionTuple, RedisCodec codec) { + + RedisNodeDescription node = getConnectedNode(redisURI, nodes); + + if (node.getRole() != RedisInstance.Role.MASTER) { + + RedisNodeDescription master = lookupMaster(nodes); + ConnectionFuture> masterConnection = redisClient.connectAsync(codec, master.getUri()); + + return Mono.just(master.getUri()).zipWith(Mono.fromCompletionStage(masterConnection)) // + .doOnNext(it -> { + initialConnections.put(it.getT1(), it.getT2()); + }); + } + + return Mono.just(connectionTuple); + } + + @SuppressWarnings("unchecked") + private Mono> initializeConnection(RedisCodec codec, + Tuple2> connectionAndUri) { + + MasterReplicaTopologyProvider topologyProvider = new MasterReplicaTopologyProvider(connectionAndUri.getT2(), + connectionAndUri.getT1()); + + MasterReplicaTopologyRefresh refresh = new MasterReplicaTopologyRefresh(redisClient, topologyProvider); + MasterReplicaConnectionProvider connectionProvider = new MasterReplicaConnectionProvider<>(redisClient, codec, + redisURI, (Map) initialConnections); + + Mono> refreshFuture = refresh.getNodes(redisURI); + + return refreshFuture.map(nodes -> { + + connectionProvider.setKnownNodes(nodes); + + MasterReplicaChannelWriter channelWriter = new MasterReplicaChannelWriter(connectionProvider, + redisClient + .getResources()); + + StatefulRedisMasterReplicaConnectionImpl connection = new StatefulRedisMasterReplicaConnectionImpl<>( + channelWriter, codec, redisURI.getTimeout()); + + connection.setOptions(redisClient.getOptions()); + + return connection; + }); + } + + private static RedisNodeDescription lookupMaster(List nodes) { + + Optional first = findFirst(nodes, n -> n.getRole() == RedisInstance.Role.MASTER); + return first.orElseThrow(() -> new IllegalStateException("Cannot lookup master from " + nodes)); + } + + private static RedisNodeDescription getConnectedNode(RedisURI redisURI, List nodes) { + + Optional first = findFirst(nodes, n -> equals(redisURI, n)); + return first.orElseThrow(() -> new 
IllegalStateException("Cannot lookup node descriptor for connected node at " + + redisURI)); + } + + private static Optional findFirst(List nodes, + Predicate predicate) { + return nodes.stream().filter(predicate).findFirst(); + } + + private static boolean equals(RedisURI redisURI, RedisNodeDescription node) { + return node.getUri().getHost().equals(redisURI.getHost()) && node.getUri().getPort() == redisURI.getPort(); + } +} diff --git a/src/main/java/io/lettuce/core/masterreplica/CompletableEventLatchSupport.java b/src/main/java/io/lettuce/core/masterreplica/CompletableEventLatchSupport.java new file mode 100644 index 0000000000..c17dda09b4 --- /dev/null +++ b/src/main/java/io/lettuce/core/masterreplica/CompletableEventLatchSupport.java @@ -0,0 +1,192 @@ +/* + * Copyright 2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.masterreplica; + +import java.time.Duration; +import java.util.concurrent.*; +import java.util.concurrent.atomic.AtomicIntegerFieldUpdater; + +/** + * Completable latch support expecting an number of inbound events to trigger an outbound event signalled though a + * {@link CompletionStage}. This latch is created by specifying a number of expected events of type {@code T} or exceptions and + * synchronized through either a timeout or receiving the matching number of events. + *

    + * Inbound events can be consumed through callback hook methods. Events arriving after synchronization are dropped. + * + * @author Mark Paluch + */ +abstract class CompletableEventLatchSupport { + + @SuppressWarnings("rawtypes") + private static final AtomicIntegerFieldUpdater NOTIFICATIONS_UPDATER = AtomicIntegerFieldUpdater + .newUpdater(CompletableEventLatchSupport.class, "notifications"); + + @SuppressWarnings("rawtypes") + private static final AtomicIntegerFieldUpdater GATE_UPDATER = AtomicIntegerFieldUpdater + .newUpdater( + CompletableEventLatchSupport.class, "gate"); + + private static final int GATE_OPEN = 0; + private static final int GATE_CLOSED = 1; + + private final int expectedCount; + private final CompletableFuture selfFuture = new CompletableFuture<>(); + + private volatile ScheduledFuture timeoutScheduleFuture; + + // accessed via UPDATER + @SuppressWarnings("unused") + private volatile int notifications = 0; + + @SuppressWarnings("unused") + private volatile int gate = GATE_OPEN; + + /** + * Construct a new {@link CompletableEventLatchSupport} class expecting {@code expectedCount} notifications. + * + * @param expectedCount + */ + public CompletableEventLatchSupport(int expectedCount) { + this.expectedCount = expectedCount; + } + + public final int getExpectedCount() { + return expectedCount; + } + + /** + * Notification callback method accepting a connection for a value. Triggers emission if the gate is open and the current + * call to this method is the last expected notification. + */ + public final void accept(T value) { + + if (GATE_UPDATER.get(this) == GATE_CLOSED) { + onDrop(value); + return; + } + + onAccept(value); + onNotification(); + } + + /** + * Notification callback method accepting a connection error. Triggers emission if the gate is open and the current call to + * this method is the last expected notification. + */ + public final void accept(Throwable throwable) { + + if (GATE_UPDATER.get(this) == GATE_CLOSED) { + onDrop(throwable); + return; + } + + onError(throwable); + onNotification(); + } + + private void onNotification() { + + if (NOTIFICATIONS_UPDATER.incrementAndGet(this) == expectedCount) { + + ScheduledFuture timeoutScheduleFuture = this.timeoutScheduleFuture; + this.timeoutScheduleFuture = null; + + if (timeoutScheduleFuture != null) { + timeoutScheduleFuture.cancel(false); + } + + emit(); + } + } + + private void emit() { + + if (GATE_UPDATER.compareAndSet(this, GATE_OPEN, GATE_CLOSED)) { + + onEmit(new Emission() { + @Override + public void success(V value) { + selfFuture.complete(value); + } + + @Override + public void error(Throwable exception) { + selfFuture.completeExceptionally(exception); + } + }); + } + } + + // Callback hooks + + protected void onAccept(T value) { + + } + + protected void onError(Throwable value) { + + } + + protected void onDrop(T value) { + + } + + protected void onDrop(Throwable value) { + + } + + protected void onEmit(Emission emission) { + + } + + /** + * Retrieve a {@link CompletionStage} that is notified upon completion or timeout. + * + * @param timeout + * @param timeoutExecutor + * @return + */ + public final CompletionStage getOrTimeout(Duration timeout, ScheduledExecutorService timeoutExecutor) { + + if (GATE_UPDATER.get(this) == GATE_OPEN && timeoutScheduleFuture == null) { + this.timeoutScheduleFuture = timeoutExecutor.schedule(this::emit, timeout.toNanos(), TimeUnit.NANOSECONDS); + } + + return selfFuture; + } + + /** + * Interface to signal emission of a value or an {@link Exception}. 
+ * + * @param + */ + public interface Emission { + + /** + * Complete emission successfully. + * + * @param value the actual value to emit. + */ + void success(T value); + + /** + * Complete emission with an {@link Throwable exception}. + * + * @param exception the error to emit. + */ + void error(Throwable exception); + } +} diff --git a/src/main/java/io/lettuce/core/masterreplica/Connections.java b/src/main/java/io/lettuce/core/masterreplica/Connections.java new file mode 100644 index 0000000000..8334253ad6 --- /dev/null +++ b/src/main/java/io/lettuce/core/masterreplica/Connections.java @@ -0,0 +1,165 @@ +/* + * Copyright 2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.masterreplica; + +import java.time.Duration; +import java.util.*; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.CopyOnWriteArrayList; +import java.util.concurrent.ScheduledExecutorService; + +import reactor.core.publisher.Mono; +import reactor.util.function.Tuple2; +import io.lettuce.core.RedisConnectionException; +import io.lettuce.core.RedisURI; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.internal.AsyncCloseable; +import io.lettuce.core.internal.Futures; +import io.lettuce.core.models.role.RedisNodeDescription; +import io.lettuce.core.output.StatusOutput; +import io.lettuce.core.protocol.Command; +import io.lettuce.core.protocol.CommandArgs; +import io.lettuce.core.protocol.CommandKeyword; +import io.lettuce.core.protocol.CommandType; + +/** + * Connection collector with non-blocking synchronization. This synchronizer emits itself through a {@link Mono} as soon as it + * gets synchronized via either receiving connects/exceptions from all connections or timing out. + *
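A minimal sketch (not in the diff) of extending CompletableEventLatchSupport, assuming its type parameters are <T, V> (inbound value, emitted result). The class is package-private, so this sketch pretends to live in io.lettuce.core.masterreplica.

package io.lettuce.core.masterreplica;

import java.time.Duration;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.atomic.AtomicInteger;

// Latch expecting two inbound events; it emits the number of successful notifications.
class CountingLatch extends CompletableEventLatchSupport<String, Integer> {

    private final AtomicInteger successes = new AtomicInteger();

    CountingLatch() {
        super(2);
    }

    @Override
    protected void onAccept(String value) {
        successes.incrementAndGet();
    }

    @Override
    protected void onEmit(Emission<Integer> emission) {
        emission.success(successes.get());
    }

    public static void main(String[] args) {

        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        CountingLatch latch = new CountingLatch();

        CompletionStage<Integer> result = latch.getOrTimeout(Duration.ofSeconds(1), timer);

        latch.accept("first");
        latch.accept("second"); // second notification closes the gate and triggers onEmit

        result.thenAccept(count -> System.out.println("successful notifications: " + count));
        timer.shutdown();
    }
}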

    + * It can be used only once via {@link #getOrTimeout(Duration, ScheduledExecutorService)}. + *

    + * Synchronizer uses a gate to determine whether it was already emitted or awaiting incoming events (exceptions, successful + * connects). Connections arriving after closing the gate are discarded. + * + * @author Mark Paluch + */ +class Connections extends CompletableEventLatchSupport>, Connections> + implements AsyncCloseable { + + private final Map> connections = new TreeMap<>( + MasterReplicaUtils.RedisURIComparator.INSTANCE); + + private final List exceptions = new CopyOnWriteArrayList<>(); + private final List nodes; + + private volatile boolean closed = false; + + public Connections(int expectedConnectionCount, List nodes) { + super(expectedConnectionCount); + this.nodes = nodes; + } + + @Override + protected void onAccept(Tuple2> value) { + + if (this.closed) { + value.getT2().closeAsync(); + return; + } + + synchronized (this.connections) { + this.connections.put(value.getT1(), value.getT2()); + } + } + + @Override + protected void onError(Throwable value) { + this.exceptions.add(value); + } + + @Override + protected void onDrop(Tuple2> value) { + value.getT2().closeAsync(); + } + + @Override + protected void onDrop(Throwable value) { + + } + + @Override + protected void onEmit(Emission emission) { + + if (getExpectedCount() != 0 && this.connections.isEmpty() && !this.exceptions.isEmpty()) { + + RedisConnectionException collector = new RedisConnectionException( + "Unable to establish a connection to Redis Master/Replica"); + this.exceptions.forEach(collector::addSuppressed); + + emission.error(collector); + } else { + emission.success(this); + } + } + + /** + * @return {@literal true} if no connections present. + */ + public boolean isEmpty() { + synchronized (this.connections) { + return this.connections.isEmpty(); + } + } + + /* + * Initiate {@code PING} on all connections and return the {@link Requests}. + * + * @return the {@link Requests}. + */ + public Requests requestPing() { + + Set>> entries = new LinkedHashSet<>( + this.connections.entrySet()); + Requests requests = new Requests(entries.size(), this.nodes); + + for (Map.Entry> entry : entries) { + + CommandArgs args = new CommandArgs<>(StringCodec.ASCII).add(CommandKeyword.NODES); + Command command = new Command<>(CommandType.PING, new StatusOutput<>(StringCodec.ASCII), + args); + TimedAsyncCommand timedCommand = new TimedAsyncCommand<>(command); + + entry.getValue().dispatch(timedCommand); + requests.addRequest(entry.getKey(), timedCommand); + } + + return requests; + } + + /** + * Close all connections. + */ + public CompletableFuture closeAsync() { + + List> close = new ArrayList<>(this.connections.size()); + List toRemove = new ArrayList<>(this.connections.size()); + + this.closed = true; + + for (Map.Entry> entry : this.connections.entrySet()) { + + toRemove.add(entry.getKey()); + close.add(entry.getValue().closeAsync()); + } + + for (RedisURI redisURI : toRemove) { + this.connections.remove(redisURI); + } + + return Futures.allOf(close); + } +} diff --git a/src/main/java/io/lettuce/core/masterreplica/MasterReplica.java b/src/main/java/io/lettuce/core/masterreplica/MasterReplica.java new file mode 100644 index 0000000000..72b8900ce9 --- /dev/null +++ b/src/main/java/io/lettuce/core/masterreplica/MasterReplica.java @@ -0,0 +1,285 @@ +/* + * Copyright 2019-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.masterreplica; + +import java.util.List; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.CompletionStage; +import java.util.concurrent.ExecutionException; + +import io.lettuce.core.ConnectionFuture; +import io.lettuce.core.RedisClient; +import io.lettuce.core.RedisConnectionException; +import io.lettuce.core.RedisURI; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.internal.Futures; +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.internal.LettuceLists; +import io.netty.util.internal.logging.InternalLogger; +import io.netty.util.internal.logging.InternalLoggerFactory; + +/** + * Master-Replica connection API. + *

    + * This API allows connections to Redis Master/Replica setups which run either in a static Master/Replica setup or are managed + * by Redis Sentinel. Master-Replica connections can discover topologies and select a source for read operations using + * {@link io.lettuce.core.ReadFrom}. + *

+ * <p>
+ * Connections can be obtained by providing the {@link RedisClient}, a {@link RedisURI} and a {@link RedisCodec}.
+ *
+ * <pre>
+ * RedisClient client = RedisClient.create();
+ * StatefulRedisMasterReplicaConnection<String, String> connection = MasterReplica.connect(client,
+ *         RedisURI.create("redis://localhost"), StringCodec.UTF8);
+ * // ...
+ *
+ * connection.close();
+ * client.shutdown();
+ * </pre>
+ *
+ * <h3>Topology Discovery</h3>
+ * <p>
+ * Master-Replica topologies are either static or semi-static. Redis Standalone instances with attached replicas provide no
+ * failover/HA mechanism. Redis Sentinel managed instances are controlled by Redis Sentinel and allow failover (which includes
+ * master promotion). The {@link MasterReplica} API supports both mechanisms. The topology is provided by a
+ * {@link TopologyProvider}:
+ *
+ * <ul>
+ * <li>{@link MasterReplicaTopologyProvider}: Dynamic topology lookup using the {@code INFO REPLICATION} output. Replicas are
+ * listed as {@code replicaN=...} entries. The initial connection can either point to a master or a replica and the topology
+ * provider will discover nodes. The connection needs to be re-established outside of Lettuce in case of a Master/Replica
+ * failover or topology changes.</li>
+ * <li>{@link StaticMasterReplicaTopologyProvider}: Topology is defined by the list of {@link RedisURI URIs} and the
+ * {@code ROLE} output. MasterReplica uses only the supplied nodes and won't discover additional nodes in the setup. The
+ * connection needs to be re-established outside of Lettuce in case of a Master/Replica failover or topology changes.</li>
+ * <li>{@link SentinelTopologyProvider}: Dynamic topology lookup using the Redis Sentinel API. In particular,
+ * {@code SENTINEL MASTER} and {@code SENTINEL SLAVES} output. Master/Replica failover is handled by Lettuce.</li>
+ * </ul>
+ *
+ * <h3>Topology Updates</h3>
+ * <ul>
+ * <li>Standalone Master/Replica: Performs a one-time topology lookup which remains static afterwards.</li>
+ * <li>Redis Sentinel: Subscribes to all Sentinels and listens for Pub/Sub messages to trigger topology refreshing.</li>
+ * </ul>
+ *
+ * <h3>Connection Fault-Tolerance</h3>
+ * <p>
+ * Connecting to Master/Replica bears the possibility that individual nodes are not reachable. {@link MasterReplica} can still
+ * connect to a partially-available set of nodes.
+ *
+ * <ul>
+ * <li>Redis Sentinel: At least one Sentinel must be reachable, the masterId must be registered and at least one host must be
+ * available (master or replica). Allows for runtime recovery based on Sentinel events.</li>
+ * <li>Static Setup (auto-discovery): The initial endpoint must be reachable. No recovery/reconfiguration during runtime.</li>
+ * <li>Static Setup (provided hosts): All endpoints must be reachable. No recovery/reconfiguration during runtime.</li>
+ * </ul>
+ *
+ * @author Mark Paluch
+ * @since 5.2
+ */
+public class MasterReplica {
+
+    /**
+     * Open a new connection to a Redis Master-Replica server/servers using the supplied {@link RedisURI} and the supplied
+     * {@link RedisCodec codec} to encode/decode keys.
+     * <p>

    + * This {@link MasterReplica} performs auto-discovery of nodes using either Redis Sentinel or Master/Replica. A + * {@link RedisURI} can point to either a master or a replica host. + *

    + * + * @param redisClient the Redis client. + * @param codec Use this codec to encode/decode keys and values, must not be {@literal null}. + * @param redisURI the Redis server to connect to, must not be {@literal null}. + * @param Key type. + * @param Value type. + * @return a new connection. + */ + public static StatefulRedisMasterReplicaConnection connect(RedisClient redisClient, RedisCodec codec, + RedisURI redisURI) { + return getConnection(connectAsyncSentinelOrAutodiscovery(redisClient, codec, redisURI), redisURI); + } + + /** + * Open asynchronously a new connection to a Redis Master-Replica server/servers using the supplied {@link RedisURI} and the + * supplied {@link RedisCodec codec} to encode/decode keys. + *

    + * This {@link MasterReplica} performs auto-discovery of nodes using either Redis Sentinel or Master/Replica. A + * {@link RedisURI} can point to either a master or a replica host. + *

    + * + * @param redisClient the Redis client. + * @param codec Use this codec to encode/decode keys and values, must not be {@literal null}. + * @param redisURI the Redis server to connect to, must not be {@literal null}. + * @param Key type. + * @param Value type. + * @return {@link CompletableFuture} that is notified once the connect is finished. + * @since 6.0 + */ + public static CompletableFuture> connectAsync(RedisClient redisClient, + RedisCodec codec, RedisURI redisURI) { + return transformAsyncConnectionException(connectAsyncSentinelOrAutodiscovery(redisClient, codec, redisURI), redisURI); + } + + private static CompletableFuture> connectAsyncSentinelOrAutodiscovery( + RedisClient redisClient, RedisCodec codec, RedisURI redisURI) { + + LettuceAssert.notNull(redisClient, "RedisClient must not be null"); + LettuceAssert.notNull(codec, "RedisCodec must not be null"); + LettuceAssert.notNull(redisURI, "RedisURI must not be null"); + + if (isSentinel(redisURI)) { + return new SentinelConnector<>(redisClient, codec, redisURI).connectAsync(); + } + + return new AutodiscoveryConnector<>(redisClient, codec, redisURI).connectAsync(); + } + + /** + * Open a new connection to a Redis Master-Replica server/servers using the supplied {@link RedisURI} and the supplied + * {@link RedisCodec codec} to encode/decode keys. + *
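Illustrative sketch (not in the diff) of composing on the asynchronous variant above; the URI is a placeholder, and setReadFrom/ReadFrom.REPLICA_PREFERRED are assumed to be available on the returned connection type.

import io.lettuce.core.ReadFrom;
import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisURI;
import io.lettuce.core.codec.StringCodec;
import io.lettuce.core.masterreplica.MasterReplica;

class ConnectAsyncSketch {

    public static void main(String[] args) {

        RedisClient client = RedisClient.create();

        // Non-blocking connect; the connection is configured and used once it becomes available.
        MasterReplica.connectAsync(client, StringCodec.UTF8, RedisURI.create("redis://localhost"))
                .thenAccept(connection -> {
                    connection.setReadFrom(ReadFrom.REPLICA_PREFERRED);
                    // issue commands through connection.async()/sync()/reactive() ...
                })
                .whenComplete((ignore, error) -> client.shutdown());
    }
}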

    + * This {@link MasterReplica} performs auto-discovery of nodes if the URI is a Redis Sentinel URI. Master/Replica URIs will + * be treated as static topology and no additional hosts are discovered in such case. Redis Standalone Master/Replica will + * discover the roles of the supplied {@link RedisURI URIs} and issue commands to the appropriate node. + *

    + *

    + * When using Redis Sentinel, ensure that {@link Iterable redisURIs} contains only a single entry as only the first URI is + * considered. {@link RedisURI} pointing to multiple Sentinels can be configured through + * {@link RedisURI.Builder#withSentinel}. + *

    + * + * @param redisClient the Redis client. + * @param codec Use this codec to encode/decode keys and values, must not be {@literal null}. + * @param redisURIs the Redis server(s) to connect to, must not be {@literal null}. + * @param Key type. + * @param Value type. + * @return a new connection. + * @since 6.0 + */ + public static StatefulRedisMasterReplicaConnection connect(RedisClient redisClient, RedisCodec codec, + Iterable redisURIs) { + return getConnection(connectAsyncSentinelOrStaticSetup(redisClient, codec, redisURIs), redisURIs); + } + + /** + * Open asynchronously a new connection to a Redis Master-Replica server/servers using the supplied {@link RedisURI} and the + * supplied {@link RedisCodec codec} to encode/decode keys. + *
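A sketch (not in the diff) of the static-topology variant above with explicitly provided URIs; the host names are made up and setReadFrom/ReadFrom.REPLICA_PREFERRED are assumed to be available.

import java.util.Arrays;
import java.util.List;

import io.lettuce.core.ReadFrom;
import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisURI;
import io.lettuce.core.codec.StringCodec;
import io.lettuce.core.masterreplica.MasterReplica;
import io.lettuce.core.masterreplica.StatefulRedisMasterReplicaConnection;

class StaticTopologySketch {

    public static void main(String[] args) {

        RedisClient client = RedisClient.create();

        // Static Master/Replica setup; node roles are discovered via the ROLE command.
        List<RedisURI> nodes = Arrays.asList(
                RedisURI.create("redis://redis-master:6379"),
                RedisURI.create("redis://redis-replica:6379"));

        StatefulRedisMasterReplicaConnection<String, String> connection = MasterReplica.connect(client,
                StringCodec.UTF8, nodes);

        // Reads go to replicas when available; writes always go to the master.
        connection.setReadFrom(ReadFrom.REPLICA_PREFERRED);

        System.out.println(connection.sync().get("key"));

        connection.close();
        client.shutdown();
    }
}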

    + * This {@link MasterReplica} performs auto-discovery of nodes if the URI is a Redis Sentinel URI. Master/Replica URIs will + * be treated as static topology and no additional hosts are discovered in such case. Redis Standalone Master/Replica will + * discover the roles of the supplied {@link RedisURI URIs} and issue commands to the appropriate node. + *

    + *

+ * When using Redis Sentinel, ensure that {@link Iterable redisURIs} contains only a single entry, as only the first URI is + * considered. A {@link RedisURI} pointing to multiple Sentinels can be configured through + * {@link RedisURI.Builder#withSentinel}. + *

    + * + * @param redisClient the Redis client. + * @param codec Use this codec to encode/decode keys and values, must not be {@literal null}. + * @param redisURIs the Redis server(s) to connect to, must not be {@literal null}. + * @param Key type. + * @param Value type. + * @return {@link CompletableFuture} that is notified once the connect is finished. + * @since 6.0 + */ + public static CompletableFuture> connectAsync(RedisClient redisClient, + RedisCodec codec, Iterable redisURIs) { + return transformAsyncConnectionException(connectAsyncSentinelOrStaticSetup(redisClient, codec, redisURIs), redisURIs); + } + + private static CompletableFuture> connectAsyncSentinelOrStaticSetup( + RedisClient redisClient, RedisCodec codec, Iterable redisURIs) { + + LettuceAssert.notNull(redisClient, "RedisClient must not be null"); + LettuceAssert.notNull(codec, "RedisCodec must not be null"); + LettuceAssert.notNull(redisURIs, "RedisURIs must not be null"); + + List uriList = LettuceLists.newList(redisURIs); + LettuceAssert.isTrue(!uriList.isEmpty(), "RedisURIs must not be empty"); + + RedisURI first = uriList.get(0); + if (isSentinel(first)) { + + if (uriList.size() > 1) { + InternalLogger logger = InternalLoggerFactory.getInstance(MasterReplica.class); + logger.warn( + "RedisURIs contains multiple endpoints of which the first is configured for Sentinel usage. Using only the first URI [{}] without considering the remaining URIs. Make sure to include all Sentinel endpoints in a single RedisURI.", + first); + } + return new SentinelConnector<>(redisClient, codec, first).connectAsync(); + } + + return new StaticMasterReplicaConnector<>(redisClient, codec, uriList).connectAsync(); + } + + private static boolean isSentinel(RedisURI redisURI) { + return !redisURI.getSentinels().isEmpty(); + } + + /** + * Retrieve the connection from {@link ConnectionFuture}. Performs a blocking {@link ConnectionFuture#get()} to synchronize + * the channel/connection initialization. Any exception is rethrown as {@link RedisConnectionException}. + * + * @param connectionFuture must not be null. + * @param context context information (single RedisURI, multiple URIs), used as connection target in the reported exception. + * @param Connection type. + * @return the connection. + * @throws RedisConnectionException in case of connection failures. 
+ */ + private static T getConnection(CompletableFuture connectionFuture, Object context) { + + try { + return connectionFuture.get(); + } catch (InterruptedException e) { + Thread.currentThread().interrupt(); + throw RedisConnectionException.create(context.toString(), e); + } catch (Exception e) { + + if (e instanceof ExecutionException) { + + // filter intermediate RedisConnectionException exceptions that bloat the stack trace + if (e.getCause() instanceof RedisConnectionException + && e.getCause().getCause() instanceof RedisConnectionException) { + throw RedisConnectionException.create(context.toString(), e.getCause().getCause()); + } + + throw RedisConnectionException.create(context.toString(), e.getCause()); + } + + throw RedisConnectionException.create(context.toString(), e); + } + } + + private static CompletableFuture transformAsyncConnectionException(CompletionStage future, Object context) { + + return ConnectionFuture.from(null, future.toCompletableFuture()).thenCompose((v, e) -> { + + if (e != null) { + + // filter intermediate RedisConnectionException exceptions that bloat the stack trace + if (e.getCause() instanceof RedisConnectionException + && e.getCause().getCause() instanceof RedisConnectionException) { + return Futures.failed(RedisConnectionException.create(context.toString(), e.getCause())); + } + return Futures.failed(RedisConnectionException.create(context.toString(), e)); + } + + return CompletableFuture.completedFuture(v); + }).toCompletableFuture(); + } +} diff --git a/src/main/java/io/lettuce/core/masterreplica/MasterReplicaChannelWriter.java b/src/main/java/io/lettuce/core/masterreplica/MasterReplicaChannelWriter.java new file mode 100644 index 0000000000..aefbd2a063 --- /dev/null +++ b/src/main/java/io/lettuce/core/masterreplica/MasterReplicaChannelWriter.java @@ -0,0 +1,278 @@ +/* + * Copyright 2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.masterreplica; + +import java.util.Collection; +import java.util.concurrent.CompletableFuture; + +import io.lettuce.core.ReadFrom; +import io.lettuce.core.RedisChannelWriter; +import io.lettuce.core.RedisException; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.masterreplica.MasterReplicaConnectionProvider.Intent; +import io.lettuce.core.protocol.ConnectionFacade; +import io.lettuce.core.protocol.ProtocolKeyword; +import io.lettuce.core.protocol.RedisCommand; +import io.lettuce.core.resource.ClientResources; + +/** + * Channel writer/dispatcher that dispatches commands based on the intent to different connections. 
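The channel writer described here routes read-intent commands to read candidates and everything else to the master; once a MULTI is seen it pins commands to the master until EXEC or DISCARD. A caller-side sketch of that behavior, assuming an already established master/replica connection with a replica-preferred ReadFrom:

```java
import io.lettuce.core.api.sync.RedisCommands;
import io.lettuce.core.masterreplica.StatefulRedisMasterReplicaConnection;

class TransactionRoutingSketch {

    void updateBalance(StatefulRedisMasterReplicaConnection<String, String> connection) {

        RedisCommands<String, String> sync = connection.sync();

        sync.get("balance");  // read intent: may be served by a replica

        sync.multi();         // MULTI switches the writer into transaction mode
        sync.incr("balance"); // executed on the master connection
        sync.get("balance");  // reads inside the transaction also go to the master
        sync.exec();          // EXEC (or DISCARD) leaves transaction mode

        sync.get("balance");  // subsequent reads may again be served by a replica
    }
}
```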
+ * + * @author Mark Paluch + */ +class MasterReplicaChannelWriter implements RedisChannelWriter { + + private MasterReplicaConnectionProvider masterReplicaConnectionProvider; + private final ClientResources clientResources; + + private boolean closed = false; + private boolean inTransaction; + + MasterReplicaChannelWriter(MasterReplicaConnectionProvider masterReplicaConnectionProvider, + ClientResources clientResources) { + this.masterReplicaConnectionProvider = masterReplicaConnectionProvider; + this.clientResources = clientResources; + } + + @Override + @SuppressWarnings("unchecked") + public RedisCommand write(RedisCommand command) { + + LettuceAssert.notNull(command, "Command must not be null"); + + if (closed) { + throw new RedisException("Connection is closed"); + } + + if (isStartTransaction(command.getType())) { + inTransaction = true; + } + + Intent intent = inTransaction ? Intent.WRITE : getIntent(command.getType()); + CompletableFuture> future = (CompletableFuture) masterReplicaConnectionProvider + .getConnectionAsync(intent); + + if (isEndTransaction(command.getType())) { + inTransaction = false; + } + + if (isSuccessfullyCompleted(future)) { + writeCommand(command, future.join(), null); + } else { + future.whenComplete((c, t) -> writeCommand(command, c, t)); + } + + return command; + } + + @SuppressWarnings("unchecked") + private static void writeCommand(RedisCommand command, StatefulRedisConnection connection, + Throwable throwable) { + + if (throwable != null) { + command.completeExceptionally(throwable); + return; + } + + try { + connection.dispatch(command); + } catch (Exception e) { + command.completeExceptionally(e); + } + } + + @Override + @SuppressWarnings("unchecked") + public Collection> write(Collection> commands) { + + LettuceAssert.notNull(commands, "Commands must not be null"); + + if (closed) { + throw new RedisException("Connection is closed"); + } + + for (RedisCommand command : commands) { + if (isStartTransaction(command.getType())) { + inTransaction = true; + break; + } + } + + // TODO: Retain order or retain Intent preference? + // Currently: Retain order + Intent intent = inTransaction ? Intent.WRITE : getIntent(commands); + + CompletableFuture> future = (CompletableFuture) masterReplicaConnectionProvider + .getConnectionAsync(intent); + + for (RedisCommand command : commands) { + if (isEndTransaction(command.getType())) { + inTransaction = false; + break; + } + } + + if (isSuccessfullyCompleted(future)) { + writeCommands(commands, future.join(), null); + } else { + future.whenComplete((c, t) -> writeCommands(commands, c, t)); + } + + return (Collection) commands; + } + + @SuppressWarnings("unchecked") + private static void writeCommands(Collection> commands, + StatefulRedisConnection connection, Throwable throwable) { + + if (throwable != null) { + commands.forEach(c -> c.completeExceptionally(throwable)); + return; + } + + try { + connection.dispatch(commands); + } catch (Exception e) { + commands.forEach(c -> c.completeExceptionally(e)); + } + } + + /** + * Optimization: Determine command intents and optimize for bulk execution preferring one node. + *

    + * If there is only one intent, then we take the intent derived from the commands. If there is more than one intent, then + * use {@link Intent#WRITE}. + * + * @param commands {@link Collection} of {@link RedisCommand commands}. + * @return the intent. + */ + static Intent getIntent(Collection> commands) { + + boolean w = false; + boolean r = false; + Intent singleIntent = Intent.WRITE; + + for (RedisCommand command : commands) { + + singleIntent = getIntent(command.getType()); + if (singleIntent == Intent.READ) { + r = true; + } + + if (singleIntent == Intent.WRITE) { + w = true; + } + + if (r && w) { + return Intent.WRITE; + } + } + + return singleIntent; + } + + private static Intent getIntent(ProtocolKeyword type) { + return ReadOnlyCommands.isReadOnlyCommand(type) ? Intent.READ : Intent.WRITE; + } + + @Override + public void close() { + closeAsync().join(); + } + + @Override + public CompletableFuture closeAsync() { + + if (closed) { + return CompletableFuture.completedFuture(null); + } + + closed = true; + + CompletableFuture future = null; + + if (masterReplicaConnectionProvider != null) { + future = masterReplicaConnectionProvider.closeAsync(); + masterReplicaConnectionProvider = null; + } + + if (future == null) { + future = CompletableFuture.completedFuture(null); + } + + return future; + } + + MasterReplicaConnectionProvider getMasterReplicaConnectionProvider() { + return masterReplicaConnectionProvider; + } + + @Override + public void setConnectionFacade(ConnectionFacade connection) { + } + + @Override + public ClientResources getClientResources() { + return clientResources; + } + + @Override + public void setAutoFlushCommands(boolean autoFlush) { + masterReplicaConnectionProvider.setAutoFlushCommands(autoFlush); + } + + @Override + public void flushCommands() { + masterReplicaConnectionProvider.flushCommands(); + } + + @Override + public void reset() { + masterReplicaConnectionProvider.reset(); + } + + /** + * Set from which nodes data is read. The setting is used as default for read operations on this connection. See the + * documentation for {@link ReadFrom} for more information. + * + * @param readFrom the read from setting, must not be {@literal null} + */ + public void setReadFrom(ReadFrom readFrom) { + masterReplicaConnectionProvider.setReadFrom(readFrom); + } + + /** + * Gets the {@link ReadFrom} setting for this connection. Defaults to {@link ReadFrom#MASTER} if not set. + * + * @return the read from setting + */ + public ReadFrom getReadFrom() { + return masterReplicaConnectionProvider.getReadFrom(); + } + + private static boolean isSuccessfullyCompleted(CompletableFuture connectFuture) { + return connectFuture.isDone() && !connectFuture.isCompletedExceptionally(); + } + + private static boolean isStartTransaction(ProtocolKeyword command) { + return command.name().equals("MULTI"); + } + + private boolean isEndTransaction(ProtocolKeyword command) { + return command.name().equals("EXEC") || command.name().equals("DISCARD"); + } +} diff --git a/src/main/java/io/lettuce/core/masterreplica/MasterReplicaConnectionProvider.java b/src/main/java/io/lettuce/core/masterreplica/MasterReplicaConnectionProvider.java new file mode 100644 index 0000000000..b3e39cffae --- /dev/null +++ b/src/main/java/io/lettuce/core/masterreplica/MasterReplicaConnectionProvider.java @@ -0,0 +1,387 @@ +/* + * Copyright 2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.masterreplica; + +import static io.lettuce.core.masterreplica.MasterReplicaUtils.findNodeByHostAndPort; + +import java.util.*; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.CompletionStage; +import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.ThreadLocalRandom; +import java.util.function.Function; + +import reactor.core.publisher.Flux; +import reactor.core.publisher.Mono; +import io.lettuce.core.*; +import io.lettuce.core.api.StatefulConnection; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.cluster.models.partitions.Partitions; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.internal.AsyncConnectionProvider; +import io.lettuce.core.internal.Exceptions; +import io.lettuce.core.models.role.RedisInstance; +import io.lettuce.core.models.role.RedisNodeDescription; +import io.netty.util.internal.logging.InternalLogger; +import io.netty.util.internal.logging.InternalLoggerFactory; + +/** + * Connection provider for master/replica setups. The connection provider + * + * @author Mark Paluch + * @since 4.1 + */ +class MasterReplicaConnectionProvider { + + private static final InternalLogger logger = InternalLoggerFactory.getInstance(MasterReplicaConnectionProvider.class); + private final boolean debugEnabled = logger.isDebugEnabled(); + + private final RedisURI initialRedisUri; + private final AsyncConnectionProvider, CompletionStage>> connectionProvider; + + private List knownNodes = new ArrayList<>(); + + private boolean autoFlushCommands = true; + private final Object stateLock = new Object(); + private ReadFrom readFrom; + + MasterReplicaConnectionProvider(RedisClient redisClient, RedisCodec redisCodec, RedisURI initialRedisUri, + Map> initialConnections) { + + this.initialRedisUri = initialRedisUri; + + Function>> connectionFactory = new DefaultConnectionFactory( + redisClient, redisCodec); + + this.connectionProvider = new AsyncConnectionProvider<>(connectionFactory); + + for (Map.Entry> entry : initialConnections.entrySet()) { + connectionProvider.register(toConnectionKey(entry.getKey()), entry.getValue()); + } + } + + /** + * Retrieve a {@link StatefulRedisConnection} by the intent. {@link MasterReplicaConnectionProvider.Intent#WRITE} intentions + * use the master connection, {@link MasterReplicaConnectionProvider.Intent#READ} intentions lookup one or more read + * candidates using the {@link ReadFrom} setting. + * + * @param intent command intent + * @return the connection. + */ + public StatefulRedisConnection getConnection(Intent intent) { + + if (debugEnabled) { + logger.debug("getConnection(" + intent + ")"); + } + + try { + return getConnectionAsync(intent).get(); + } catch (Exception e) { + throw Exceptions.bubble(e); + } + } + + /** + * Retrieve a {@link StatefulRedisConnection} by the intent. {@link MasterReplicaConnectionProvider.Intent#WRITE} intentions + * use the master connection, {@link MasterReplicaConnectionProvider.Intent#READ} intentions lookup one or more read + * candidates using the {@link ReadFrom} setting. 
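Read-candidate selection is delegated to {@code ReadFrom#select(Nodes)}. Assuming custom policies may extend {@code io.lettuce.core.ReadFrom} (the built-in settings such as MASTER and REPLICA_PREFERRED work through the same contract), a minimal sketch of a policy that prefers replicas and falls back to all known nodes:

```java
import java.util.List;
import java.util.stream.Collectors;

import io.lettuce.core.ReadFrom;
import io.lettuce.core.models.role.RedisInstance;
import io.lettuce.core.models.role.RedisNodeDescription;

class ReplicaPreferringReadFrom extends ReadFrom {

    @Override
    public List<RedisNodeDescription> select(Nodes nodes) {

        // Prefer replicas; if none are known, fall back to every node (including the master)
        List<RedisNodeDescription> replicas = nodes.getNodes().stream()
                .filter(node -> node.getRole() == RedisInstance.Role.SLAVE)
                .collect(Collectors.toList());

        return replicas.isEmpty() ? nodes.getNodes() : replicas;
    }
}
```

Such a policy would be applied via {@code connection.setReadFrom(new ReplicaPreferringReadFrom())}.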
+ * + * @param intent command intent + * @return the connection. + * @throws RedisException if the host is not part of the cluster + */ + public CompletableFuture> getConnectionAsync(Intent intent) { + + if (debugEnabled) { + logger.debug("getConnectionAsync(" + intent + ")"); + } + + if (readFrom != null && intent == Intent.READ) { + List selection = readFrom.select(new ReadFrom.Nodes() { + @Override + public List getNodes() { + return knownNodes; + } + + @Override + public Iterator iterator() { + return knownNodes.iterator(); + } + }); + + if (selection.isEmpty()) { + throw new RedisException(String.format("Cannot determine a node to read (Known nodes: %s) with setting %s", + knownNodes, readFrom)); + } + + try { + + Flux> connections = Flux.empty(); + + for (RedisNodeDescription node : selection) { + connections = connections.concatWith(Mono.fromFuture(getConnection(node))); + } + + if (OrderingReadFromAccessor.isOrderSensitive(readFrom) || selection.size() == 1) { + return connections.filter(StatefulConnection::isOpen).next().switchIfEmpty(connections.next()).toFuture(); + } + + return connections.filter(StatefulConnection::isOpen).collectList().map(it -> { + int index = ThreadLocalRandom.current().nextInt(it.size()); + return it.get(index); + }).switchIfEmpty(connections.next()).toFuture(); + } catch (RuntimeException e) { + throw Exceptions.bubble(e); + } + } + + return getConnection(getMaster()); + } + + protected CompletableFuture> getConnection(RedisNodeDescription redisNodeDescription) { + + RedisURI uri = redisNodeDescription.getUri(); + + return connectionProvider.getConnection(toConnectionKey(uri)).toCompletableFuture(); + } + + /** + * @return number of connections. + */ + protected long getConnectionCount() { + return connectionProvider.getConnectionCount(); + } + + /** + * Retrieve a set of PoolKey's for all pooled connections that are within the pool but not within the {@link Partitions}. + * + * @return Set of {@link ConnectionKey}s + */ + private Set getStaleConnectionKeys() { + + Map> map = new ConcurrentHashMap<>(); + connectionProvider.forEach(map::put); + + Set stale = new HashSet<>(); + + for (ConnectionKey connectionKey : map.keySet()) { + + if (connectionKey.host != null + && findNodeByHostAndPort(knownNodes, connectionKey.host, connectionKey.port) != null) { + continue; + } + stale.add(connectionKey); + } + return stale; + } + + /** + * Close stale connections. + */ + public void closeStaleConnections() { + + logger.debug("closeStaleConnections() count before expiring: {}", getConnectionCount()); + + Set stale = getStaleConnectionKeys(); + + for (ConnectionKey connectionKey : stale) { + connectionProvider.close(connectionKey); + } + + logger.debug("closeStaleConnections() count after expiring: {}", getConnectionCount()); + } + + /** + * Reset the command state of all connections. + * + * @see StatefulRedisConnection#reset() + */ + public void reset() { + connectionProvider.forEach(StatefulRedisConnection::reset); + } + + /** + * Close all connections. + */ + public void close() { + closeAsync().join(); + } + + /** + * Close all connections asynchronously. + * + * @since 5.1 + */ + @SuppressWarnings("unchecked") + public CompletableFuture closeAsync() { + return connectionProvider.close(); + } + + /** + * Flush pending commands on all connections. 
+ * + * @see StatefulConnection#flushCommands() + */ + public void flushCommands() { + connectionProvider.forEach(StatefulConnection::flushCommands); + } + + /** + * Disable or enable auto-flush behavior for all connections. + * + * @param autoFlush state of autoFlush. + * @see StatefulConnection#setAutoFlushCommands(boolean) + */ + public void setAutoFlushCommands(boolean autoFlush) { + + synchronized (stateLock) { + this.autoFlushCommands = autoFlush; + connectionProvider.forEach(connection -> connection.setAutoFlushCommands(autoFlush)); + } + } + + /** + * @return all connections that are connected. + */ + @Deprecated + protected Collection> allConnections() { + + Set> set = ConcurrentHashMap.newKeySet(); + connectionProvider.forEach(set::add); + return set; + } + + /** + * @param knownNodes + */ + public void setKnownNodes(Collection knownNodes) { + synchronized (stateLock) { + + this.knownNodes.clear(); + this.knownNodes.addAll(knownNodes); + + closeStaleConnections(); + } + } + + /** + * @return the current read-from setting. + */ + public ReadFrom getReadFrom() { + synchronized (stateLock) { + return readFrom; + } + } + + public void setReadFrom(ReadFrom readFrom) { + synchronized (stateLock) { + this.readFrom = readFrom; + } + } + + public RedisNodeDescription getMaster() { + + for (RedisNodeDescription knownNode : knownNodes) { + if (knownNode.getRole() == RedisInstance.Role.MASTER) { + return knownNode; + } + } + + throw new RedisException(String.format("Master is currently unknown: %s", knownNodes)); + } + + class DefaultConnectionFactory implements Function>> { + + private final RedisClient redisClient; + private final RedisCodec redisCodec; + + DefaultConnectionFactory(RedisClient redisClient, RedisCodec redisCodec) { + this.redisClient = redisClient; + this.redisCodec = redisCodec; + } + + @Override + public ConnectionFuture> apply(ConnectionKey key) { + + RedisURI.Builder builder = RedisURI.Builder.redis(key.host, key.port).withSsl(initialRedisUri.isSsl()) + .withVerifyPeer(initialRedisUri.isVerifyPeer()).withStartTls(initialRedisUri.isStartTls()); + + if (initialRedisUri.getPassword() != null && initialRedisUri.getPassword().length != 0) { + builder.withPassword(initialRedisUri.getPassword()); + } + + if (initialRedisUri.getClientName() != null) { + builder.withClientName(initialRedisUri.getClientName()); + } + builder.withDatabase(initialRedisUri.getDatabase()); + + ConnectionFuture> connectionFuture = redisClient.connectAsync(redisCodec, + builder.build()); + + connectionFuture.thenAccept(connection -> { + synchronized (stateLock) { + connection.setAutoFlushCommands(autoFlushCommands); + } + }); + + return connectionFuture; + } + } + + private static ConnectionKey toConnectionKey(RedisURI redisURI) { + return new ConnectionKey(redisURI.getHost(), redisURI.getPort()); + } + + /** + * Connection to identify a connection by host/port. + */ + static class ConnectionKey { + + private final String host; + private final int port; + + ConnectionKey(String host, int port) { + this.host = host; + this.port = port; + } + + @Override + public boolean equals(Object o) { + if (this == o) + return true; + if (!(o instanceof ConnectionKey)) + return false; + + ConnectionKey that = (ConnectionKey) o; + + if (port != that.port) + return false; + return !(host != null ? !host.equals(that.host) : that.host != null); + + } + + @Override + public int hashCode() { + int result = (host != null ? 
host.hashCode() : 0); + result = 31 * result + port; + return result; + } + } + + enum Intent { + READ, WRITE; + } +} diff --git a/src/main/java/io/lettuce/core/masterreplica/MasterReplicaConnector.java b/src/main/java/io/lettuce/core/masterreplica/MasterReplicaConnector.java new file mode 100644 index 0000000000..cb39a12ebb --- /dev/null +++ b/src/main/java/io/lettuce/core/masterreplica/MasterReplicaConnector.java @@ -0,0 +1,36 @@ +/* + * Copyright 2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.masterreplica; + +import java.util.concurrent.CompletableFuture; + +import io.lettuce.core.codec.RedisCodec; + +/** + * Interface declaring an asynchronous connect method to connect a Master/Replica setup. + * + * @author Mark Paluch + * @since 5.1 + */ +interface MasterReplicaConnector { + + /** + * Asynchronously connect to a Master/Replica setup given {@link RedisCodec}. + * + * @return Future that is notified about the connection progress. + */ + CompletableFuture> connectAsync(); +} diff --git a/src/main/java/io/lettuce/core/masterreplica/MasterReplicaTopologyProvider.java b/src/main/java/io/lettuce/core/masterreplica/MasterReplicaTopologyProvider.java new file mode 100644 index 0000000000..fa7018acfc --- /dev/null +++ b/src/main/java/io/lettuce/core/masterreplica/MasterReplicaTopologyProvider.java @@ -0,0 +1,195 @@ +/* + * Copyright 2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.masterreplica; + +import java.util.ArrayList; +import java.util.List; +import java.util.concurrent.CompletableFuture; +import java.util.regex.Matcher; +import java.util.regex.Pattern; + +import io.lettuce.core.internal.Exceptions; +import reactor.core.publisher.Mono; +import io.lettuce.core.RedisFuture; +import io.lettuce.core.RedisURI; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.models.role.RedisInstance; +import io.lettuce.core.models.role.RedisNodeDescription; +import io.netty.util.internal.logging.InternalLogger; +import io.netty.util.internal.logging.InternalLoggerFactory; + +/** + * Topology provider using Redis Standalone and the {@code INFO REPLICATION} output. Replicas are listed as {@code slaveN=...} + * entries. 
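For context on the regex-based parsing in this provider, a standalone sketch of how a made-up {@code INFO replication} payload maps to the role and replica entries; the patterns mirror the ROLE_PATTERN and SLAVE_PATTERN constants below:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class InfoReplicationParsingSketch {

    public static void main(String[] args) {

        // Illustrative INFO replication output of a master with one replica
        String info = "role:master\r\n"
                + "connected_slaves:1\r\n"
                + "slave0:ip=10.0.0.2,port=6380,state=online,offset=3167038,lag=0\r\n";

        Pattern role = Pattern.compile("^role\\:([a-z]+)$", Pattern.MULTILINE);
        Pattern slave = Pattern.compile("^slave(\\d+)\\:([a-zA-Z\\,\\=\\d\\.\\:]+)$", Pattern.MULTILINE);

        Matcher roleMatcher = role.matcher(info);
        if (roleMatcher.find()) {
            System.out.println("role = " + roleMatcher.group(1)); // master
        }

        Matcher slaveMatcher = slave.matcher(info);
        while (slaveMatcher.find()) {
            System.out.println("replica = " + slaveMatcher.group(2)); // ip=10.0.0.2,port=6380,...
        }
    }
}
```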
+ * + * @author Mark Paluch + * @since 4.1 + */ +class MasterReplicaTopologyProvider implements TopologyProvider { + + public static final Pattern ROLE_PATTERN = Pattern.compile("^role\\:([a-z]+)$", Pattern.MULTILINE); + public static final Pattern SLAVE_PATTERN = Pattern.compile("^slave(\\d+)\\:([a-zA-Z\\,\\=\\d\\.\\:]+)$", + Pattern.MULTILINE); + public static final Pattern MASTER_HOST_PATTERN = Pattern.compile("^master_host\\:([a-zA-Z\\,\\=\\d\\.\\:\\-]+)$", + Pattern.MULTILINE); + public static final Pattern MASTER_PORT_PATTERN = Pattern.compile("^master_port\\:(\\d+)$", Pattern.MULTILINE); + public static final Pattern IP_PATTERN = Pattern.compile("ip\\=([a-zA-Z\\d\\.\\:]+)"); + public static final Pattern PORT_PATTERN = Pattern.compile("port\\=([\\d]+)"); + + private static final InternalLogger logger = InternalLoggerFactory.getInstance(MasterReplicaTopologyProvider.class); + + private final StatefulRedisConnection connection; + private final RedisURI redisURI; + + /** + * Creates a new {@link MasterReplicaTopologyProvider}. + * + * @param connection must not be {@literal null} + * @param redisURI must not be {@literal null} + */ + public MasterReplicaTopologyProvider(StatefulRedisConnection connection, RedisURI redisURI) { + + LettuceAssert.notNull(connection, "Redis Connection must not be null"); + LettuceAssert.notNull(redisURI, "RedisURI must not be null"); + + this.connection = connection; + this.redisURI = redisURI; + } + + @Override + public List getNodes() { + + logger.debug("Performing topology lookup"); + + String info = connection.sync().info("replication"); + try { + return getNodesFromInfo(info); + } catch (RuntimeException e) { + throw Exceptions.bubble(e); + } + } + + @Override + public CompletableFuture> getNodesAsync() { + + logger.debug("Performing topology lookup"); + + RedisFuture info = connection.async().info("replication"); + + try { + return Mono.fromCompletionStage(info).timeout(redisURI.getTimeout()).map(this::getNodesFromInfo).toFuture(); + } catch (RuntimeException e) { + throw Exceptions.bubble(e); + } + } + + protected List getNodesFromInfo(String info) { + + List result = new ArrayList<>(); + + RedisNodeDescription currentNodeDescription = getCurrentNodeDescription(info); + + result.add(currentNodeDescription); + + if (currentNodeDescription.getRole() == RedisInstance.Role.MASTER) { + result.addAll(getReplicasFromInfo(info)); + } else { + result.add(getMasterFromInfo(info)); + } + + return result; + } + + private RedisNodeDescription getCurrentNodeDescription(String info) { + + Matcher matcher = ROLE_PATTERN.matcher(info); + + if (!matcher.find()) { + throw new IllegalStateException("No role property in info " + info); + } + + return getRedisNodeDescription(matcher); + } + + private List getReplicasFromInfo(String info) { + + List replicas = new ArrayList<>(); + + Matcher matcher = SLAVE_PATTERN.matcher(info); + while (matcher.find()) { + + String group = matcher.group(2); + String ip = getNested(IP_PATTERN, group, 1); + String port = getNested(PORT_PATTERN, group, 1); + + replicas.add(new RedisMasterReplicaNode(ip, Integer.parseInt(port), redisURI, RedisInstance.Role.SLAVE)); + } + + return replicas; + } + + private RedisNodeDescription getMasterFromInfo(String info) { + + Matcher masterHostMatcher = MASTER_HOST_PATTERN.matcher(info); + Matcher masterPortMatcher = MASTER_PORT_PATTERN.matcher(info); + + boolean foundHost = masterHostMatcher.find(); + boolean foundPort = masterPortMatcher.find(); + + if (!foundHost || !foundPort) { + throw new 
IllegalStateException("Cannot resolve master from info " + info); + } + + String host = masterHostMatcher.group(1); + int port = Integer.parseInt(masterPortMatcher.group(1)); + + return new RedisMasterReplicaNode(host, port, redisURI, RedisInstance.Role.MASTER); + } + + private String getNested(Pattern pattern, String string, int group) { + + Matcher matcher = pattern.matcher(string); + if (matcher.find()) { + return matcher.group(group); + } + + throw new IllegalArgumentException("Cannot extract group " + group + " with pattern " + pattern + " from " + string); + + } + + private RedisNodeDescription getRedisNodeDescription(Matcher matcher) { + + String roleString = matcher.group(1); + RedisInstance.Role role = null; + + if (RedisInstance.Role.MASTER.name().equalsIgnoreCase(roleString)) { + role = RedisInstance.Role.MASTER; + } + + if (RedisInstance.Role.SLAVE.name().equalsIgnoreCase(roleString)) { + role = RedisInstance.Role.SLAVE; + } + + if (role == null) { + throw new IllegalStateException("Cannot resolve role " + roleString + " to " + RedisInstance.Role.MASTER + " or " + + RedisInstance.Role.SLAVE); + } + + return new RedisMasterReplicaNode(redisURI.getHost(), redisURI.getPort(), redisURI, role); + } + +} diff --git a/src/main/java/io/lettuce/core/masterreplica/MasterReplicaTopologyRefresh.java b/src/main/java/io/lettuce/core/masterreplica/MasterReplicaTopologyRefresh.java new file mode 100644 index 0000000000..a0b6478f75 --- /dev/null +++ b/src/main/java/io/lettuce/core/masterreplica/MasterReplicaTopologyRefresh.java @@ -0,0 +1,145 @@ +/* + * Copyright 2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.masterreplica; + +import java.util.List; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.CompletionStage; +import java.util.concurrent.ScheduledExecutorService; + +import reactor.core.publisher.Mono; +import io.lettuce.core.RedisClient; +import io.lettuce.core.RedisConnectionException; +import io.lettuce.core.RedisURI; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.cluster.models.partitions.Partitions; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.internal.LettuceLists; +import io.lettuce.core.models.role.RedisNodeDescription; +import io.netty.util.internal.logging.InternalLogger; +import io.netty.util.internal.logging.InternalLoggerFactory; + +/** + * Utility to refresh the Master-Replica topology view based on {@link RedisNodeDescription}. 
+ * + * @author Mark Paluch + */ +class MasterReplicaTopologyRefresh { + + private static final InternalLogger logger = InternalLoggerFactory.getInstance(MasterReplicaTopologyRefresh.class); + private static final StringCodec CODEC = StringCodec.UTF8; + + private final NodeConnectionFactory nodeConnectionFactory; + private final TopologyProvider topologyProvider; + private ScheduledExecutorService eventExecutors; + + MasterReplicaTopologyRefresh(RedisClient client, TopologyProvider topologyProvider) { + this(new RedisClientNodeConnectionFactory(client), client.getResources().eventExecutorGroup(), topologyProvider); + } + + MasterReplicaTopologyRefresh(NodeConnectionFactory nodeConnectionFactory, ScheduledExecutorService eventExecutors, + TopologyProvider topologyProvider) { + + this.nodeConnectionFactory = nodeConnectionFactory; + this.eventExecutors = eventExecutors; + this.topologyProvider = topologyProvider; + } + + /** + * Load master replica nodes. Result contains an ordered list of {@link RedisNodeDescription}s. The sort key is the latency. + * Nodes with lower latency come first. + * + * @param seed collection of {@link RedisURI}s + * @return mapping between {@link RedisURI} and {@link Partitions} + */ + public Mono> getNodes(RedisURI seed) { + + CompletableFuture> future = topologyProvider.getNodesAsync(); + + Mono> initialNodes = Mono.fromFuture(future).doOnNext(nodes -> { + addPasswordIfNeeded(nodes, seed); + }); + + return initialNodes.map(this::getConnections) + .flatMap(asyncConnections -> asyncConnections.asMono(seed.getTimeout(), eventExecutors)) + .flatMap(connections -> { + + Requests requests = connections.requestPing(); + + CompletionStage> nodes = requests.getOrTimeout(seed.getTimeout(), + eventExecutors); + + return Mono.fromCompletionStage(nodes).flatMap(it -> ResumeAfter.close(connections).thenEmit(it)); + }); + } + + /* + * Establish connections asynchronously. 
+ */ + private AsyncConnections getConnections(Iterable nodes) { + + List nodeList = LettuceLists.newList(nodes); + AsyncConnections connections = new AsyncConnections(nodeList); + + for (RedisNodeDescription node : nodeList) { + + RedisURI redisURI = node.getUri(); + String message = String.format("Unable to connect to %s", redisURI); + try { + CompletableFuture> connectionFuture = nodeConnectionFactory + .connectToNodeAsync(CODEC, redisURI); + + CompletableFuture> sync = new CompletableFuture<>(); + + connectionFuture.whenComplete((connection, throwable) -> { + + if (throwable != null) { + + if (throwable instanceof RedisConnectionException) { + if (logger.isDebugEnabled()) { + logger.debug(throwable.getMessage(), throwable); + } else { + logger.warn(throwable.getMessage()); + } + } else { + logger.warn(message, throwable); + } + + sync.completeExceptionally(new RedisConnectionException(message, throwable)); + } else { + connection.async().clientSetname("lettuce#MasterReplicaTopologyRefresh"); + sync.complete(connection); + } + }); + + connections.addConnection(redisURI, sync); + } catch (RuntimeException e) { + logger.warn(String.format(message, redisURI), e); + } + } + + return connections; + } + + private static void addPasswordIfNeeded(List nodes, RedisURI seed) { + + if (seed.getPassword() != null && seed.getPassword().length != 0) { + for (RedisNodeDescription node : nodes) { + node.getUri().setPassword(new String(seed.getPassword())); + } + } + } +} diff --git a/src/main/java/io/lettuce/core/masterreplica/MasterReplicaUtils.java b/src/main/java/io/lettuce/core/masterreplica/MasterReplicaUtils.java new file mode 100644 index 0000000000..3eab3d4dc0 --- /dev/null +++ b/src/main/java/io/lettuce/core/masterreplica/MasterReplicaUtils.java @@ -0,0 +1,127 @@ +/* + * Copyright 2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.masterreplica; + +import java.util.Collection; +import java.util.Comparator; + +import io.lettuce.core.RedisURI; +import io.lettuce.core.models.role.RedisNodeDescription; + +/** + * @author Mark Paluch + */ +class MasterReplicaUtils { + + /** + * Check if properties changed. + * + * @param o1 the first object to be compared. + * @param o2 the second object to be compared. + * @return {@literal true} if {@code MASTER} or {@code SLAVE} flags changed or the URIs are changed. + */ + static boolean isChanged(Collection o1, Collection o2) { + + if (o1.size() != o2.size()) { + return true; + } + + for (RedisNodeDescription base : o2) { + if (!essentiallyEqualsTo(base, findNodeByUri(o1, base.getUri()))) { + return true; + } + } + + return false; + } + + /** + * Lookup a {@link RedisNodeDescription} by {@link RedisURI}. 
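The {@code isChanged} check above is what decides whether a refreshed topology should replace the current view. A rough sketch of that flow; note that {@code MasterReplicaUtils} and {@code MasterReplicaConnectionProvider} are package-private internals, so this only illustrates the idea and would have to live inside the masterreplica package:

```java
import java.util.List;

import io.lettuce.core.models.role.RedisNodeDescription;

class TopologyRefreshSketch {

    // knownNodes: the currently known topology, discovered: the result of a fresh lookup
    void applyIfChanged(List<RedisNodeDescription> knownNodes, List<RedisNodeDescription> discovered,
            MasterReplicaConnectionProvider<String, String> connectionProvider) {

        // Swap the view only if a role or URI actually changed; setKnownNodes also closes stale connections
        if (MasterReplicaUtils.isChanged(knownNodes, discovered)) {
            connectionProvider.setKnownNodes(discovered);
        }
    }
}
```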
+ * + * @param nodes + * @param lookupUri + * @return the {@link RedisNodeDescription} or {@literal null} + */ + static RedisNodeDescription findNodeByUri(Collection nodes, RedisURI lookupUri) { + return findNodeByHostAndPort(nodes, lookupUri.getHost(), lookupUri.getPort()); + } + + /** + * Lookup a {@link RedisNodeDescription} by {@code host} and {@code port}. + * + * @param nodes + * @param host + * @param port + * @return the {@link RedisNodeDescription} or {@literal null} + */ + static RedisNodeDescription findNodeByHostAndPort(Collection nodes, String host, int port) { + for (RedisNodeDescription node : nodes) { + RedisURI nodeUri = node.getUri(); + if (nodeUri.getHost().equals(host) && nodeUri.getPort() == port) { + return node; + } + } + return null; + } + + /** + * Check for {@code MASTER} or {@code SLAVE} roles and the URI. + * + * @param o1 the first object to be compared. + * @param o2 the second object to be compared. + * @return {@literal true} if {@code MASTER} or {@code SLAVE} flags changed or the URI changed. + */ + static boolean essentiallyEqualsTo(RedisNodeDescription o1, RedisNodeDescription o2) { + + if (o2 == null) { + return false; + } + + if (o1.getRole() != o2.getRole()) { + return false; + } + + if (!o1.getUri().equals(o2.getUri())) { + return false; + } + + return true; + } + + /** + * Compare {@link RedisURI} based on their host and port representation. + */ + enum RedisURIComparator implements Comparator { + + INSTANCE; + + @Override + public int compare(RedisURI o1, RedisURI o2) { + String h1 = ""; + String h2 = ""; + + if (o1 != null) { + h1 = o1.getHost() + ":" + o1.getPort(); + } + + if (o2 != null) { + h2 = o2.getHost() + ":" + o2.getPort(); + } + + return h1.compareToIgnoreCase(h2); + } + } +} diff --git a/src/main/java/io/lettuce/core/masterreplica/NodeConnectionFactory.java b/src/main/java/io/lettuce/core/masterreplica/NodeConnectionFactory.java new file mode 100644 index 0000000000..dbccc07ee6 --- /dev/null +++ b/src/main/java/io/lettuce/core/masterreplica/NodeConnectionFactory.java @@ -0,0 +1,43 @@ +/* + * Copyright 2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.masterreplica; + +import java.net.SocketAddress; +import java.util.concurrent.CompletableFuture; + +import io.lettuce.core.RedisURI; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.codec.RedisCodec; + +/** + * Factory interface to obtain {@link StatefulRedisConnection connections} to Redis nodes. + * + * @author Mark Paluch + * @since 4.4 + */ +interface NodeConnectionFactory { + + /** + * Connects to a {@link SocketAddress} with the given {@link RedisCodec} asynchronously. + * + * @param codec must not be {@literal null}. + * @param redisURI must not be {@literal null}. + * @param Key type. + * @param Value type. 
+ * @return a new {@link StatefulRedisConnection} + */ + CompletableFuture> connectToNodeAsync(RedisCodec codec, RedisURI redisURI); +} diff --git a/src/main/java/io/lettuce/core/masterreplica/ReadOnlyCommands.java b/src/main/java/io/lettuce/core/masterreplica/ReadOnlyCommands.java new file mode 100644 index 0000000000..eaecd0d793 --- /dev/null +++ b/src/main/java/io/lettuce/core/masterreplica/ReadOnlyCommands.java @@ -0,0 +1,67 @@ +/* + * Copyright 2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.masterreplica; + +import java.util.Collections; +import java.util.EnumSet; +import java.util.Set; + +import io.lettuce.core.protocol.CommandType; +import io.lettuce.core.protocol.ProtocolKeyword; + +/** + * Contains all command names that are read-only commands. + * + * @author Mark Paluch + */ +class ReadOnlyCommands { + + private static final Set READ_ONLY_COMMANDS = EnumSet.noneOf(CommandType.class); + + static { + for (CommandName commandNames : CommandName.values()) { + READ_ONLY_COMMANDS.add(CommandType.valueOf(commandNames.name())); + } + } + + /** + * @param protocolKeyword must not be {@literal null}. + * @return {@literal true} if {@link ProtocolKeyword} is a read-only command. + */ + public static boolean isReadOnlyCommand(ProtocolKeyword protocolKeyword) { + return READ_ONLY_COMMANDS.contains(protocolKeyword); + } + + /** + * @return an unmodifiable {@link Set} of {@link CommandType read-only} commands. + */ + public static Set getReadOnlyCommands() { + return Collections.unmodifiableSet(READ_ONLY_COMMANDS); + } + + enum CommandName { + ASKING, BITCOUNT, BITPOS, CLIENT, COMMAND, DUMP, ECHO, EVAL, EVALSHA, EXISTS, // + GEODIST, GEOPOS, GEORADIUS, GEORADIUSBYMEMBER, GEOHASH, GET, GETBIT, // + GETRANGE, HEXISTS, HGET, HGETALL, HKEYS, HLEN, HMGET, HSCAN, HSTRLEN, // + HVALS, INFO, KEYS, LINDEX, LLEN, LRANGE, MGET, PFCOUNT, PTTL, // + RANDOMKEY, READWRITE, SCAN, SCARD, SCRIPT, // + SDIFF, SINTER, SISMEMBER, SMEMBERS, SRANDMEMBER, SSCAN, STRLEN, // + SUNION, TIME, TTL, TYPE, // + XINFO, XLEN, XPENDING, XRANGE, XREVRANGE, XREAD, // + ZCARD, ZCOUNT, ZLEXCOUNT, ZRANGE, // + ZRANGEBYLEX, ZRANGEBYSCORE, ZRANK, ZREVRANGE, ZREVRANGEBYLEX, ZREVRANGEBYSCORE, ZREVRANK, ZSCAN, ZSCORE, + } +} diff --git a/src/main/java/io/lettuce/core/masterreplica/RedisClientNodeConnectionFactory.java b/src/main/java/io/lettuce/core/masterreplica/RedisClientNodeConnectionFactory.java new file mode 100644 index 0000000000..fad4393e24 --- /dev/null +++ b/src/main/java/io/lettuce/core/masterreplica/RedisClientNodeConnectionFactory.java @@ -0,0 +1,42 @@ +/* + * Copyright 2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.masterreplica; + +import java.util.concurrent.CompletableFuture; + +import io.lettuce.core.RedisClient; +import io.lettuce.core.RedisURI; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.codec.RedisCodec; + +/** + * {@link NodeConnectionFactory} implementation that on {@link RedisClient}. + * + * @author Mark Paluch + */ +class RedisClientNodeConnectionFactory implements NodeConnectionFactory { + + private final RedisClient client; + + RedisClientNodeConnectionFactory(RedisClient client) { + this.client = client; + } + + @Override + public CompletableFuture> connectToNodeAsync(RedisCodec codec, RedisURI redisURI) { + return client.connectAsync(codec, redisURI).toCompletableFuture(); + } +} diff --git a/src/main/java/io/lettuce/core/masterreplica/RedisMasterReplicaNode.java b/src/main/java/io/lettuce/core/masterreplica/RedisMasterReplicaNode.java new file mode 100644 index 0000000000..f67725baa9 --- /dev/null +++ b/src/main/java/io/lettuce/core/masterreplica/RedisMasterReplicaNode.java @@ -0,0 +1,90 @@ +/* + * Copyright 2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.masterreplica; + +import io.lettuce.core.RedisURI; +import io.lettuce.core.models.role.RedisNodeDescription; + +/** + * A node within a Redis Master-Replica setup. 
+ * + * @author Mark Paluch + * @author Adam McElwee + */ +class RedisMasterReplicaNode implements RedisNodeDescription { + + private final RedisURI redisURI; + private final Role role; + + RedisMasterReplicaNode(String host, int port, RedisURI seed, Role role) { + + RedisURI.Builder builder = RedisURI.Builder.redis(host, port).withSsl(seed.isSsl()).withVerifyPeer(seed.isVerifyPeer()) + .withStartTls(seed.isStartTls()); + if (seed.getPassword() != null && seed.getPassword().length != 0) { + builder.withPassword(seed.getPassword()); + } + + if (seed.getClientName() != null) { + builder.withClientName(seed.getClientName()); + } + + builder.withDatabase(seed.getDatabase()); + + this.redisURI = builder.build(); + this.role = role; + } + + @Override + public RedisURI getUri() { + return redisURI; + } + + @Override + public Role getRole() { + return role; + } + + @Override + public boolean equals(Object o) { + if (this == o) + return true; + if (!(o instanceof RedisMasterReplicaNode)) + return false; + + RedisMasterReplicaNode that = (RedisMasterReplicaNode) o; + + if (!redisURI.equals(that.redisURI)) + return false; + return role == that.role; + } + + @Override + public int hashCode() { + int result = redisURI.hashCode(); + result = 31 * result + role.hashCode(); + return result; + } + + @Override + public String toString() { + StringBuilder sb = new StringBuilder(); + sb.append(getClass().getSimpleName()); + sb.append(" [redisURI=").append(redisURI); + sb.append(", role=").append(role); + sb.append(']'); + return sb.toString(); + } +} diff --git a/src/main/java/io/lettuce/core/masterreplica/Requests.java b/src/main/java/io/lettuce/core/masterreplica/Requests.java new file mode 100644 index 0000000000..c29772e16a --- /dev/null +++ b/src/main/java/io/lettuce/core/masterreplica/Requests.java @@ -0,0 +1,93 @@ +/* + * Copyright 2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.masterreplica; + +import static io.lettuce.core.masterreplica.MasterReplicaUtils.findNodeByUri; +import static io.lettuce.core.masterreplica.TopologyComparators.LatencyComparator; + +import java.util.*; + +import reactor.util.function.Tuple2; +import reactor.util.function.Tuples; +import io.lettuce.core.RedisURI; +import io.lettuce.core.masterreplica.TopologyComparators.SortAction; +import io.lettuce.core.models.role.RedisNodeDescription; + +/** + * Encapsulates asynchronously executed commands to multiple {@link RedisURI nodes}. 
+ * + * @author Mark Paluch + */ +class Requests extends + CompletableEventLatchSupport>, List> { + + private final Map> rawViews = new TreeMap<>( + MasterReplicaUtils.RedisURIComparator.INSTANCE); + private final List nodes; + + public Requests(int expectedCount, List nodes) { + super(expectedCount); + this.nodes = nodes; + } + + protected void addRequest(RedisURI redisURI, TimedAsyncCommand command) { + + rawViews.put(redisURI, command); + command.onComplete((s, throwable) -> { + + if (throwable != null) { + accept(throwable); + } else { + accept(Tuples.of(redisURI, command)); + } + }); + } + + @Override + protected void onEmit(Emission> emission) { + + List result = new ArrayList<>(); + + Map latencies = new HashMap<>(); + + for (RedisNodeDescription node : nodes) { + + TimedAsyncCommand future = getRequest(node.getUri()); + + if (future == null || !future.isDone()) { + continue; + } + + RedisNodeDescription redisNodeDescription = findNodeByUri(nodes, node.getUri()); + latencies.put(redisNodeDescription, future.duration()); + result.add(redisNodeDescription); + } + + + SortAction sortAction = SortAction.getSortAction(); + sortAction.sort(result, new LatencyComparator(latencies)); + + emission.success(result); + } + + protected Set nodes() { + return rawViews.keySet(); + } + + protected TimedAsyncCommand getRequest(RedisURI redisURI) { + return rawViews.get(redisURI); + } +} diff --git a/src/main/java/io/lettuce/core/masterreplica/ResumeAfter.java b/src/main/java/io/lettuce/core/masterreplica/ResumeAfter.java new file mode 100644 index 0000000000..fa86d677d6 --- /dev/null +++ b/src/main/java/io/lettuce/core/masterreplica/ResumeAfter.java @@ -0,0 +1,88 @@ +/* + * Copyright 2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.masterreplica; + +import java.util.concurrent.atomic.AtomicIntegerFieldUpdater; + +import reactor.core.publisher.Mono; +import io.lettuce.core.internal.AsyncCloseable; + +/** + * Utility to resume a {@link org.reactivestreams.Publisher} after termination. 
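{@code Requests} orders the discovered nodes by measured PING latency, lower latency first. A self-contained sketch of that ordering idea, assuming latencies were already collected into a map (the fallback duration for missing measurements is an arbitrary assumption):

```java
import java.time.Duration;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;

class LatencyOrderingSketch {

    // Returns the nodes ordered by measured latency, fastest first
    static <N> List<N> orderByLatency(List<N> nodes, Map<N, Duration> latencies) {

        List<N> ordered = new ArrayList<>(nodes);

        // Nodes without a recorded latency (e.g. the PING timed out) sort last
        ordered.sort(Comparator.comparing((N node) -> latencies.getOrDefault(node, Duration.ofDays(365))));

        return ordered;
    }
}
```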
+ * + * @author Mark Paluch + */ +class ResumeAfter { + + private static final AtomicIntegerFieldUpdater UPDATER = AtomicIntegerFieldUpdater.newUpdater( + ResumeAfter.class, "closed"); + + private final AsyncCloseable closeable; + + private static final int ST_OPEN = 0; + private static final int ST_CLOSED = 1; + + @SuppressWarnings("unused") + private volatile int closed = ST_OPEN; + + private ResumeAfter(AsyncCloseable closeable) { + this.closeable = closeable; + } + + public static ResumeAfter close(AsyncCloseable closeable) { + return new ResumeAfter(closeable); + } + + public Mono thenEmit(T value) { + + return Mono.defer(() -> { + + if (firstCloseLatch()) { + return Mono.fromCompletionStage(closeable.closeAsync()); + } + + return Mono.empty(); + + }).then(Mono.just(value)).doFinally(s -> { + + if (firstCloseLatch()) { + closeable.closeAsync(); + } + }); + } + + public Mono thenError(Throwable t) { + + return Mono.defer(() -> { + + if (firstCloseLatch()) { + return Mono.fromCompletionStage(closeable.closeAsync()); + } + + return Mono.empty(); + + }).then(Mono. error(t)).doFinally(s -> { + + if (firstCloseLatch()) { + closeable.closeAsync(); + } + }); + } + + private boolean firstCloseLatch() { + return UPDATER.compareAndSet(ResumeAfter.this, ST_OPEN, ST_CLOSED); + } +} diff --git a/src/main/java/io/lettuce/core/masterreplica/SentinelConnector.java b/src/main/java/io/lettuce/core/masterreplica/SentinelConnector.java new file mode 100644 index 0000000000..f0b86f56d9 --- /dev/null +++ b/src/main/java/io/lettuce/core/masterreplica/SentinelConnector.java @@ -0,0 +1,124 @@ +/* + * Copyright 2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.masterreplica; + +import java.util.Collections; +import java.util.List; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.CompletionStage; +import java.util.concurrent.ExecutionException; + +import reactor.core.publisher.Mono; +import io.lettuce.core.RedisClient; +import io.lettuce.core.RedisException; +import io.lettuce.core.RedisURI; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.models.role.RedisNodeDescription; +import io.netty.util.internal.logging.InternalLogger; +import io.netty.util.internal.logging.InternalLoggerFactory; + +/** + * {@link MasterReplicaConnector} to connect a Sentinel-managed Master/Replica setup using a Sentinel {@link RedisURI}. 
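{@code ResumeAfter} implements a close-then-continue pattern: close an async resource exactly once, then resume the reactive sequence with a value or an error. A simplified sketch of the same idea, leaving out the once-only latch and the {@code doFinally} guard of the real class:

```java
import reactor.core.publisher.Mono;
import io.lettuce.core.internal.AsyncCloseable;

class CloseThenEmitSketch {

    // Close the resource first, then emit the given value to the downstream subscriber
    static <T> Mono<T> closeThenEmit(AsyncCloseable resource, T value) {
        return Mono.fromCompletionStage(resource.closeAsync()).then(Mono.just(value));
    }

    // Close the resource first, then signal the given error
    static <T> Mono<T> closeThenError(AsyncCloseable resource, Throwable error) {
        return Mono.fromCompletionStage(resource.closeAsync()).then(Mono.error(error));
    }
}
```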
+ * + * @author Mark Paluch + * @since 5.1 + */ +class SentinelConnector implements MasterReplicaConnector { + + private static final InternalLogger LOG = InternalLoggerFactory.getInstance(SentinelConnector.class); + + private final RedisClient redisClient; + private final RedisCodec codec; + private final RedisURI redisURI; + + SentinelConnector(RedisClient redisClient, RedisCodec codec, RedisURI redisURI) { + this.redisClient = redisClient; + this.codec = codec; + this.redisURI = redisURI; + } + + @Override + public CompletableFuture> connectAsync() { + + TopologyProvider topologyProvider = new SentinelTopologyProvider(redisURI.getSentinelMasterId(), redisClient, redisURI); + SentinelTopologyRefresh sentinelTopologyRefresh = new SentinelTopologyRefresh(redisClient, + redisURI.getSentinelMasterId(), redisURI.getSentinels()); + + MasterReplicaTopologyRefresh refresh = new MasterReplicaTopologyRefresh(redisClient, topologyProvider); + MasterReplicaConnectionProvider connectionProvider = new MasterReplicaConnectionProvider<>(redisClient, codec, + redisURI, Collections.emptyMap()); + + Runnable runnable = getTopologyRefreshRunnable(refresh, connectionProvider); + + return refresh.getNodes(redisURI).flatMap(nodes -> { + + if (nodes.isEmpty()) { + return Mono.error(new RedisException(String.format("Cannot determine topology from %s", redisURI))); + } + + return initializeConnection(codec, sentinelTopologyRefresh, connectionProvider, runnable, nodes); + }).onErrorMap(ExecutionException.class, Throwable::getCause).toFuture(); + } + + private Mono> initializeConnection(RedisCodec codec, + SentinelTopologyRefresh sentinelTopologyRefresh, MasterReplicaConnectionProvider connectionProvider, + Runnable runnable, List nodes) { + + connectionProvider.setKnownNodes(nodes); + + MasterReplicaChannelWriter channelWriter = new MasterReplicaChannelWriter(connectionProvider, + redisClient.getResources()) { + + @Override + public CompletableFuture closeAsync() { + return CompletableFuture.allOf(super.closeAsync(), sentinelTopologyRefresh.closeAsync()); + } + }; + + StatefulRedisMasterReplicaConnectionImpl connection = new StatefulRedisMasterReplicaConnectionImpl<>( + channelWriter, codec, redisURI.getTimeout()); + connection.setOptions(redisClient.getOptions()); + + CompletionStage bind = sentinelTopologyRefresh.bind(runnable); + + return Mono.fromCompletionStage(bind).onErrorResume(t -> { + return ResumeAfter.close(connection).thenError(t); + }).then(Mono.just(connection)); + } + + private Runnable getTopologyRefreshRunnable(MasterReplicaTopologyRefresh refresh, + MasterReplicaConnectionProvider connectionProvider) { + + return () -> { + try { + + LOG.debug("Refreshing topology"); + refresh.getNodes(redisURI).subscribe(nodes -> { + if (nodes.isEmpty()) { + LOG.warn("Topology refresh returned no nodes from {}", redisURI); + } + + LOG.debug("New topology: {}", nodes); + connectionProvider.setKnownNodes(nodes); + + }, t -> LOG.error("Error during background refresh", t)); + + } catch (Exception e) { + LOG.error("Error during background refresh", e); + } + }; + } +} diff --git a/src/main/java/io/lettuce/core/masterreplica/SentinelTopologyProvider.java b/src/main/java/io/lettuce/core/masterreplica/SentinelTopologyProvider.java new file mode 100644 index 0000000000..42c5f282c6 --- /dev/null +++ b/src/main/java/io/lettuce/core/masterreplica/SentinelTopologyProvider.java @@ -0,0 +1,136 @@ +/* + * Copyright 2020 the original author or authors. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.masterreplica; + +import java.time.Duration; +import java.util.ArrayList; +import java.util.List; +import java.util.Map; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.TimeUnit; +import java.util.stream.Collectors; + +import io.lettuce.core.internal.Exceptions; +import reactor.core.publisher.Mono; +import reactor.util.function.Tuple2; +import io.lettuce.core.RedisClient; +import io.lettuce.core.RedisURI; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.models.role.RedisInstance; +import io.lettuce.core.models.role.RedisNodeDescription; +import io.lettuce.core.sentinel.api.StatefulRedisSentinelConnection; +import io.lettuce.core.sentinel.api.reactive.RedisSentinelReactiveCommands; +import io.netty.util.internal.logging.InternalLogger; +import io.netty.util.internal.logging.InternalLoggerFactory; + +/** + * Topology provider using Redis Sentinel and the Sentinel API. + * + * @author Mark Paluch + * @since 4.1 + */ +class SentinelTopologyProvider implements TopologyProvider { + + private static final InternalLogger logger = InternalLoggerFactory.getInstance(SentinelTopologyProvider.class); + + private final String masterId; + private final RedisClient redisClient; + private final RedisURI sentinelUri; + private final Duration timeout; + + /** + * Creates a new {@link SentinelTopologyProvider}. + * + * @param masterId must not be empty + * @param redisClient must not be {@literal null}. + * @param sentinelUri must not be {@literal null}. 
+ */ + public SentinelTopologyProvider(String masterId, RedisClient redisClient, RedisURI sentinelUri) { + + LettuceAssert.notEmpty(masterId, "MasterId must not be empty"); + LettuceAssert.notNull(redisClient, "RedisClient must not be null"); + LettuceAssert.notNull(sentinelUri, "Sentinel URI must not be null"); + + this.masterId = masterId; + this.redisClient = redisClient; + this.sentinelUri = sentinelUri; + this.timeout = sentinelUri.getTimeout(); + } + + @Override + public List getNodes() { + + logger.debug("lookup topology for masterId {}", masterId); + + try { + return getNodesAsync().get(timeout.toMillis(), TimeUnit.MILLISECONDS); + } catch (Exception e) { + throw Exceptions.bubble(e); + } + } + + @Override + public CompletableFuture> getNodesAsync() { + + logger.debug("lookup topology for masterId {}", masterId); + + Mono> connect = Mono + .fromFuture(redisClient.connectSentinelAsync(StringCodec.UTF8, sentinelUri)); + + return connect.flatMap(this::getNodes).toFuture(); + } + + protected Mono> getNodes(StatefulRedisSentinelConnection connection) { + + RedisSentinelReactiveCommands reactive = connection.reactive(); + + Mono, List>>> masterAndReplicas = reactive.master(masterId) + .zipWith(reactive.slaves(masterId).collectList()).timeout(this.timeout).flatMap(tuple -> { + return ResumeAfter.close(connection).thenEmit(tuple); + }).doOnError(e -> connection.closeAsync()); + + return masterAndReplicas.map(tuple -> { + + List result = new ArrayList<>(); + + result.add(toNode(tuple.getT1(), RedisInstance.Role.MASTER)); + result.addAll(tuple.getT2().stream().filter(SentinelTopologyProvider::isAvailable) + .map(map -> toNode(map, RedisInstance.Role.SLAVE)).collect(Collectors.toList())); + + return result; + }); + } + + private static boolean isAvailable(Map map) { + + String flags = map.get("flags"); + if (flags != null) { + if (flags.contains("s_down") || flags.contains("o_down") || flags.contains("disconnected")) { + return false; + } + } + return true; + } + + private RedisNodeDescription toNode(Map map, RedisInstance.Role role) { + + String ip = map.get("ip"); + String port = map.get("port"); + return new RedisMasterReplicaNode(ip, Integer.parseInt(port), sentinelUri, role); + } + +} diff --git a/src/main/java/io/lettuce/core/masterreplica/SentinelTopologyRefresh.java b/src/main/java/io/lettuce/core/masterreplica/SentinelTopologyRefresh.java new file mode 100644 index 0000000000..d87ec75e5f --- /dev/null +++ b/src/main/java/io/lettuce/core/masterreplica/SentinelTopologyRefresh.java @@ -0,0 +1,417 @@ +/* + * Copyright 2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.masterreplica; + +import java.io.Closeable; +import java.nio.charset.StandardCharsets; +import java.time.Duration; +import java.util.*; +import java.util.concurrent.*; +import java.util.concurrent.atomic.AtomicReference; +import java.util.function.BiPredicate; +import java.util.function.Consumer; +import java.util.function.Supplier; + +import io.lettuce.core.ConnectionFuture; +import io.lettuce.core.RedisClient; +import io.lettuce.core.RedisURI; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.internal.AsyncCloseable; +import io.lettuce.core.internal.Futures; +import io.lettuce.core.internal.LettuceLists; +import io.lettuce.core.pubsub.RedisPubSubAdapter; +import io.lettuce.core.pubsub.StatefulRedisPubSubConnection; +import io.netty.util.concurrent.EventExecutorGroup; +import io.netty.util.internal.logging.InternalLogger; +import io.netty.util.internal.logging.InternalLoggerFactory; + +/** + * Sentinel Pub/Sub listener-enabled topology refresh. This refresh triggers topology updates if Redis topology changes + * (monitored master/replicas) or the Sentinel availability changes. + * + * @author Mark Paluch + * @since 4.2 + */ +class SentinelTopologyRefresh implements AsyncCloseable, Closeable { + + private static final InternalLogger LOG = InternalLoggerFactory.getInstance(SentinelTopologyRefresh.class); + private static final StringCodec CODEC = new StringCodec(StandardCharsets.US_ASCII); + private static final Set PROCESSING_CHANNELS = new HashSet<>( + Arrays.asList("failover-end", "failover-end-for-timeout")); + + private final Map>> pubSubConnections = new ConcurrentHashMap<>(); + private final RedisClient redisClient; + private final List sentinels; + private final List refreshRunnables = new CopyOnWriteArrayList<>(); + private final RedisPubSubAdapter adapter = new RedisPubSubAdapter() { + + @Override + public void message(String pattern, String channel, String message) { + SentinelTopologyRefresh.this.processMessage(pattern, channel, message); + } + }; + + private final PubSubMessageActionScheduler topologyRefresh; + private final PubSubMessageActionScheduler sentinelReconnect; + private final CompletableFuture closeFuture = new CompletableFuture<>(); + + private volatile boolean closed = false; + + SentinelTopologyRefresh(RedisClient redisClient, String masterId, List sentinels) { + + this.redisClient = redisClient; + this.sentinels = LettuceLists.newList(sentinels); + this.topologyRefresh = new PubSubMessageActionScheduler(redisClient.getResources().eventExecutorGroup(), + new TopologyRefreshMessagePredicate(masterId)); + this.sentinelReconnect = new PubSubMessageActionScheduler(redisClient.getResources().eventExecutorGroup(), + new SentinelReconnectMessagePredicate()); + } + + @Override + public void close() { + closeAsync().join(); + } + + @Override + public CompletableFuture closeAsync() { + + if (closed) { + return closeFuture; + } + + closed = true; + + HashMap>> connections = new HashMap<>( + pubSubConnections); + List> futures = new ArrayList<>(); + connections.forEach((k, f) -> { + + futures.add(f.exceptionally(t -> null).thenCompose(c -> { + + if (c == null) { + return CompletableFuture.completedFuture(null); + } + c.removeListener(adapter); + return c.closeAsync(); + }).toCompletableFuture()); + + pubSubConnections.remove(k); + }); + + Futures.allOf(futures).whenComplete((aVoid, throwable) -> { + + if (throwable != null) { + closeFuture.completeExceptionally(throwable); + } else { + closeFuture.complete(null); + } + }); + + 
return closeFuture; + } + + CompletionStage bind(Runnable runnable) { + + refreshRunnables.add(runnable); + + return initializeSentinels(); + } + + /** + * Initialize/extend connections to Sentinel servers. + * + * @return + */ + private CompletionStage initializeSentinels() { + + if (closed) { + return closeFuture; + } + + Duration timeout = getTimeout(); + + List>> connectionFutures = potentiallyConnectSentinels(); + + if (connectionFutures.isEmpty()) { + return CompletableFuture.completedFuture(null); + } + + if (closed) { + return closeAsync(); + } + + SentinelTopologyRefreshConnections collector = collectConnections(connectionFutures); + + CompletionStage completionStage = collector.getOrTimeout(timeout, + redisClient.getResources().eventExecutorGroup()); + + return completionStage.whenComplete((aVoid, throwable) -> { + + if (throwable != null) { + closeAsync(); + } + }).thenApply(noop -> (Void) null); + } + + /** + * Inspect whether additional Sentinel connections are required based on the which Sentinels are currently connected. + * + * @return list of futures that are notified with the connection progress. + */ + private List>> potentiallyConnectSentinels() { + + List>> connectionFutures = new ArrayList<>(); + for (RedisURI sentinel : sentinels) { + + if (pubSubConnections.containsKey(sentinel)) { + continue; + } + + ConnectionFuture> future = redisClient.connectPubSubAsync(CODEC, + sentinel); + pubSubConnections.put(sentinel, future); + + future.whenComplete((connection, throwable) -> { + + if (throwable != null || closed) { + pubSubConnections.remove(sentinel); + } + + if (closed) { + connection.closeAsync(); + } + }); + + connectionFutures.add(future); + } + + return connectionFutures; + } + + private SentinelTopologyRefreshConnections collectConnections( + List>> connectionFutures) { + + SentinelTopologyRefreshConnections collector = new SentinelTopologyRefreshConnections(connectionFutures.size()); + + for (ConnectionFuture> connectionFuture : connectionFutures) { + + connectionFuture.thenCompose(connection -> { + + connection.addListener(adapter); + return connection.async().psubscribe("*").thenApply(v -> connection).whenComplete((c, t) -> { + + if (t != null) { + connection.closeAsync(); + } + }); + }).whenComplete((connection, throwable) -> { + + if (throwable != null) { + collector.accept(throwable); + } else { + collector.accept(connection); + } + }); + } + + return collector; + } + + /** + * @return operation timeout from the first sentinel to connect/first URI. Fallback to default timeout if no other timeout + * found. 
+ * @see RedisURI#DEFAULT_TIMEOUT_DURATION + */ + private Duration getTimeout() { + + for (RedisURI sentinel : sentinels) { + + if (!pubSubConnections.containsKey(sentinel)) { + return sentinel.getTimeout(); + } + } + + for (RedisURI sentinel : sentinels) { + return sentinel.getTimeout(); + } + + return RedisURI.DEFAULT_TIMEOUT_DURATION; + } + + private void processMessage(String pattern, String channel, String message) { + + topologyRefresh.processMessage(channel, message, () -> { + LOG.debug("Received topology changed signal from Redis Sentinel ({}), scheduling topology update", channel); + return () -> refreshRunnables.forEach(Runnable::run); + }); + + sentinelReconnect.processMessage(channel, message, () -> { + + LOG.debug("Received sentinel state changed signal from Redis Sentinel, scheduling sentinel reconnect attempts"); + + return this::initializeSentinels; + }); + } + + private static class PubSubMessageActionScheduler { + + private final TimedSemaphore timedSemaphore = new TimedSemaphore(); + private final EventExecutorGroup eventExecutors; + private final MessagePredicate filter; + + PubSubMessageActionScheduler(EventExecutorGroup eventExecutors, MessagePredicate filter) { + this.eventExecutors = eventExecutors; + this.filter = filter; + } + + void processMessage(String channel, String message, Supplier runnableSupplier) { + + if (!processingAllowed(channel, message)) { + return; + } + + timedSemaphore.onEvent(timeout -> { + + Runnable runnable = runnableSupplier.get(); + + if (timeout == null) { + eventExecutors.submit(runnable); + } else { + eventExecutors.schedule(runnable, timeout.remaining(), TimeUnit.MILLISECONDS); + } + + }); + } + + private boolean processingAllowed(String channel, String message) { + + if (eventExecutors.isShuttingDown()) { + return false; + } + + if (!filter.test(channel, message)) { + return false; + } + + return true; + } + } + + /** + * Lock-free semaphore that limits calls by using a {@link Timeout}. This class is thread-safe and + * {@link #onEvent(Consumer)} may be called by multiple threads concurrently. It's guaranteed the first caller for an + * expired {@link Timeout} will be called. + */ + static class TimedSemaphore { + + private final AtomicReference timeoutRef = new AtomicReference<>(); + + private final int timeout = 5; + private final TimeUnit timeUnit = TimeUnit.SECONDS; + + /** + * Rate-limited method that notifies the given {@link Consumer} once the current {@link Timeout} is expired. + * + * @param timeoutConsumer callback. + */ + protected void onEvent(Consumer timeoutConsumer) { + + Timeout existingTimeout = timeoutRef.get(); + + if (existingTimeout != null) { + if (!existingTimeout.isExpired()) { + return; + } + } + + Timeout timeout = new Timeout(this.timeout, this.timeUnit); + boolean state = timeoutRef.compareAndSet(existingTimeout, timeout); + + if (state) { + timeoutConsumer.accept(timeout); + } + } + } + + interface MessagePredicate extends BiPredicate { + + @Override + boolean test(String message, String channel); + } + + /** + * {@link MessagePredicate} to check whether the channel and message contain topology changes related to the monitored + * master. 
+ */ + private static class TopologyRefreshMessagePredicate implements MessagePredicate { + + private final String masterId; + private Set TOPOLOGY_CHANGE_CHANNELS = new HashSet<>( + Arrays.asList("+slave", "+sdown", "-sdown", "fix-slave-config", "+convert-to-slave", "+role-change")); + + TopologyRefreshMessagePredicate(String masterId) { + this.masterId = masterId; + } + + @Override + public boolean test(String channel, String message) { + + // trailing spaces after the master name are not bugs + if (channel.equals("+elected-leader") || channel.equals("+reset-master")) { + if (message.startsWith(String.format("master %s ", masterId))) { + return true; + } + } + + if (TOPOLOGY_CHANGE_CHANNELS.contains(channel)) { + if (message.contains(String.format("@ %s ", masterId))) { + return true; + } + } + + if (channel.equals("+switch-master")) { + if (message.startsWith(String.format("%s ", masterId))) { + return true; + } + } + + return PROCESSING_CHANNELS.contains(channel); + } + } + + /** + * {@link MessagePredicate} to check whether the channel and message contain Sentinel availability changes or a Sentinel was + * added. + */ + private static class SentinelReconnectMessagePredicate implements MessagePredicate { + + @Override + public boolean test(String channel, String message) { + + if (channel.equals("+sentinel")) { + return true; + } + + if (channel.equals("-odown") || channel.equals("-sdown")) { + if (message.startsWith("sentinel ")) { + return true; + } + } + + return false; + } + } +} diff --git a/src/main/java/io/lettuce/core/masterreplica/SentinelTopologyRefreshConnections.java b/src/main/java/io/lettuce/core/masterreplica/SentinelTopologyRefreshConnections.java new file mode 100644 index 0000000000..8236957459 --- /dev/null +++ b/src/main/java/io/lettuce/core/masterreplica/SentinelTopologyRefreshConnections.java @@ -0,0 +1,65 @@ +/* + * Copyright 2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.masterreplica; + +import java.util.List; +import java.util.concurrent.CopyOnWriteArrayList; +import java.util.concurrent.atomic.AtomicInteger; + +import io.lettuce.core.RedisException; +import io.lettuce.core.pubsub.StatefulRedisPubSubConnection; + +/** + * @author Mark Paluch + */ +class SentinelTopologyRefreshConnections extends + CompletableEventLatchSupport, SentinelTopologyRefreshConnections> { + + private final List exceptions = new CopyOnWriteArrayList<>(); + private final AtomicInteger success = new AtomicInteger(); + + /** + * Construct a new {@link CompletableEventLatchSupport} class expecting {@code expectedCount} notifications. 
+ * + * @param expectedCount + */ + public SentinelTopologyRefreshConnections(int expectedCount) { + super(expectedCount); + } + + @Override + protected void onAccept(StatefulRedisPubSubConnection value) { + success.incrementAndGet(); + } + + @Override + protected void onError(Throwable value) { + exceptions.add(value); + } + + @Override + protected void onEmit(Emission emission) { + + if (success.get() == 0) { + + RedisException exception = new RedisException("Cannot attach to Redis Sentinel for topology refresh"); + exceptions.forEach(exception::addSuppressed); + emission.error(exception); + } else { + emission.success(this); + } + } +} diff --git a/src/main/java/io/lettuce/core/masterreplica/StatefulRedisMasterReplicaConnection.java b/src/main/java/io/lettuce/core/masterreplica/StatefulRedisMasterReplicaConnection.java new file mode 100644 index 0000000000..0e2d817126 --- /dev/null +++ b/src/main/java/io/lettuce/core/masterreplica/StatefulRedisMasterReplicaConnection.java @@ -0,0 +1,45 @@ +/* + * Copyright 2019-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.masterreplica; + +import io.lettuce.core.ReadFrom; +import io.lettuce.core.api.StatefulRedisConnection; + +/** + * Redis Master-Replica connection. The connection allows replica reads by setting {@link ReadFrom}. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.1 + */ +public interface StatefulRedisMasterReplicaConnection extends StatefulRedisConnection { + + /** + * Set from which nodes data is read. The setting is used as default for read operations on this connection. See the + * documentation for {@link ReadFrom} for more information. + * + * @param readFrom the read from setting, must not be {@literal null} + */ + void setReadFrom(ReadFrom readFrom); + + /** + * Gets the {@link ReadFrom} setting for this connection. Defaults to {@link ReadFrom#MASTER} if not set. + * + * @return the read from setting + */ + ReadFrom getReadFrom(); +} diff --git a/src/main/java/io/lettuce/core/masterreplica/StatefulRedisMasterReplicaConnectionImpl.java b/src/main/java/io/lettuce/core/masterreplica/StatefulRedisMasterReplicaConnectionImpl.java new file mode 100644 index 0000000000..256ddfd77a --- /dev/null +++ b/src/main/java/io/lettuce/core/masterreplica/StatefulRedisMasterReplicaConnectionImpl.java @@ -0,0 +1,55 @@ +/* + * Copyright 2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.masterreplica; + +import java.time.Duration; + +import io.lettuce.core.ReadFrom; +import io.lettuce.core.StatefulRedisConnectionImpl; +import io.lettuce.core.codec.RedisCodec; + +/** + * @author Mark Paluch + */ +class StatefulRedisMasterReplicaConnectionImpl extends StatefulRedisConnectionImpl + implements StatefulRedisMasterReplicaConnection { + + /** + * Initialize a new connection. + * + * @param writer the channel writer + * @param codec Codec used to encode/decode keys and values. + * @param timeout Maximum time to wait for a response. + */ + StatefulRedisMasterReplicaConnectionImpl(MasterReplicaChannelWriter writer, RedisCodec codec, Duration timeout) { + super(writer, codec, timeout); + } + + @Override + public void setReadFrom(ReadFrom readFrom) { + getChannelWriter().setReadFrom(readFrom); + } + + @Override + public ReadFrom getReadFrom() { + return getChannelWriter().getReadFrom(); + } + + @Override + public MasterReplicaChannelWriter getChannelWriter() { + return (MasterReplicaChannelWriter) super.getChannelWriter(); + } +} diff --git a/src/main/java/io/lettuce/core/masterreplica/StaticMasterReplicaConnector.java b/src/main/java/io/lettuce/core/masterreplica/StaticMasterReplicaConnector.java new file mode 100644 index 0000000000..f288980c9d --- /dev/null +++ b/src/main/java/io/lettuce/core/masterreplica/StaticMasterReplicaConnector.java @@ -0,0 +1,89 @@ +/* + * Copyright 2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.masterreplica; + +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.ExecutionException; + +import reactor.core.publisher.Mono; +import io.lettuce.core.RedisClient; +import io.lettuce.core.RedisException; +import io.lettuce.core.RedisURI; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.models.role.RedisNodeDescription; + +/** + * {@link MasterReplicaConnector} to connect to a static declared Master/Replica setup providing a fixed array of + * {@link RedisURI}. This connector determines roles and remains using only the provided endpoints. 
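A companion sketch for the static variant: the endpoints below are placeholders, and only the listed nodes are used (roles are determined at connect time, no additional nodes are discovered):

    import java.util.Arrays;
    import java.util.List;

    import io.lettuce.core.RedisClient;
    import io.lettuce.core.RedisURI;
    import io.lettuce.core.codec.StringCodec;
    import io.lettuce.core.masterreplica.MasterReplica;
    import io.lettuce.core.masterreplica.StatefulRedisMasterReplicaConnection;

    public class StaticMasterReplicaExample {

        public static void main(String[] args) {

            RedisClient client = RedisClient.create();

            // Fixed endpoint list (placeholder hosts); this maps to the static connector
            List<RedisURI> nodes = Arrays.asList(RedisURI.create("redis://node1:6379"),
                    RedisURI.create("redis://node2:6379"));

            StatefulRedisMasterReplicaConnection<String, String> connection = MasterReplica.connect(client,
                    StringCodec.UTF8, nodes);

            connection.sync().set("key", "value"); // writes are routed to the master

            connection.close();
            client.shutdown();
        }
    }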
+ * + * @author Mark Paluch + * @since 5.1 + */ +class StaticMasterReplicaConnector implements MasterReplicaConnector { + + private final RedisClient redisClient; + private final RedisCodec codec; + private final Iterable redisURIs; + + StaticMasterReplicaConnector(RedisClient redisClient, RedisCodec codec, Iterable redisURIs) { + this.redisClient = redisClient; + this.codec = codec; + this.redisURIs = redisURIs; + } + + @Override + public CompletableFuture> connectAsync() { + + Map> initialConnections = new HashMap<>(); + + TopologyProvider topologyProvider = new StaticMasterReplicaTopologyProvider(redisClient, redisURIs); + + RedisURI seedNode = redisURIs.iterator().next(); + + MasterReplicaTopologyRefresh refresh = new MasterReplicaTopologyRefresh(redisClient, topologyProvider); + MasterReplicaConnectionProvider connectionProvider = new MasterReplicaConnectionProvider<>(redisClient, codec, + seedNode, initialConnections); + + return refresh.getNodes(seedNode).flatMap(nodes -> { + + if (nodes.isEmpty()) { + return Mono.error(new RedisException(String.format("Cannot determine topology from %s", redisURIs))); + } + + return initializeConnection(codec, seedNode, connectionProvider, nodes); + }).onErrorMap(ExecutionException.class, Throwable::getCause).toFuture(); + } + + private Mono> initializeConnection(RedisCodec codec, RedisURI seedNode, + MasterReplicaConnectionProvider connectionProvider, List nodes) { + + connectionProvider.setKnownNodes(nodes); + + MasterReplicaChannelWriter channelWriter = new MasterReplicaChannelWriter(connectionProvider, + redisClient.getResources()); + + StatefulRedisMasterReplicaConnectionImpl connection = new StatefulRedisMasterReplicaConnectionImpl<>( + channelWriter, + codec, seedNode.getTimeout()); + connection.setOptions(redisClient.getOptions()); + + return Mono.just(connection); + } +} diff --git a/src/main/java/io/lettuce/core/masterreplica/StaticMasterReplicaTopologyProvider.java b/src/main/java/io/lettuce/core/masterreplica/StaticMasterReplicaTopologyProvider.java new file mode 100644 index 0000000000..930a26ee92 --- /dev/null +++ b/src/main/java/io/lettuce/core/masterreplica/StaticMasterReplicaTopologyProvider.java @@ -0,0 +1,121 @@ +/* + * Copyright 2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.masterreplica; + +import java.util.List; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.CopyOnWriteArrayList; +import java.util.concurrent.TimeUnit; + +import io.lettuce.core.internal.Exceptions; +import reactor.core.publisher.Flux; +import reactor.core.publisher.Mono; +import io.lettuce.core.RedisClient; +import io.lettuce.core.RedisConnectionException; +import io.lettuce.core.RedisURI; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.models.role.RedisNodeDescription; +import io.lettuce.core.models.role.RoleParser; +import io.netty.util.internal.logging.InternalLogger; +import io.netty.util.internal.logging.InternalLoggerFactory; + +/** + * Topology provider for a static node collection. This provider uses a static collection of nodes to determine the role of each + * {@link RedisURI node}. Node roles may change during runtime but the configuration must remain the same. This + * {@link TopologyProvider} does not auto-discover nodes. + * + * @author Mark Paluch + * @author Adam McElwee + */ +class StaticMasterReplicaTopologyProvider implements TopologyProvider { + + private static final InternalLogger logger = InternalLoggerFactory.getInstance(StaticMasterReplicaTopologyProvider.class); + + private final RedisClient redisClient; + private final Iterable redisURIs; + + public StaticMasterReplicaTopologyProvider(RedisClient redisClient, Iterable redisURIs) { + + LettuceAssert.notNull(redisClient, "RedisClient must not be null"); + LettuceAssert.notNull(redisURIs, "RedisURIs must not be null"); + LettuceAssert.notNull(redisURIs.iterator().hasNext(), "RedisURIs must not be empty"); + + this.redisClient = redisClient; + this.redisURIs = redisURIs; + } + + @Override + @SuppressWarnings("rawtypes") + public List getNodes() { + + RedisURI next = redisURIs.iterator().next(); + + try { + return getNodesAsync().get(next.getTimeout().toMillis(), TimeUnit.MILLISECONDS); + } catch (Exception e) { + throw Exceptions.bubble(e); + } + } + + @Override + public CompletableFuture> getNodesAsync() { + + List> connections = new CopyOnWriteArrayList<>(); + + Flux uris = Flux.fromIterable(redisURIs); + Mono> nodes = uris.flatMap(uri -> getNodeDescription(connections, uri)).collectList() + .flatMap((nodeDescriptions) -> { + + if (nodeDescriptions.isEmpty()) { + return Mono.error(new RedisConnectionException( + String.format("Failed to connect to at least one node in %s", redisURIs))); + } + + return Mono.just(nodeDescriptions); + }); + + return nodes.toFuture(); + } + + private Mono getNodeDescription(List> connections, + RedisURI uri) { + + return Mono.fromCompletionStage(redisClient.connectAsync(StringCodec.UTF8, uri)) // + .onErrorResume(t -> { + + logger.warn("Cannot connect to {}", uri, t); + return Mono.empty(); + }) // + .doOnNext(connections::add) // + .flatMap(connection -> { + + Mono instance = getNodeDescription(uri, connection); + + return instance.flatMap(it -> ResumeAfter.close(connection).thenEmit(it)).doFinally(s -> { + connections.remove(connection); + }); + }); + } + + private static Mono getNodeDescription(RedisURI uri, + StatefulRedisConnection connection) { + + return connection.reactive().role().collectList().map(RoleParser::parse) + .map(it -> new RedisMasterReplicaNode(uri.getHost(), uri.getPort(), uri, it.getRole())); + } +} diff --git a/src/main/java/io/lettuce/core/masterreplica/TimedAsyncCommand.java 
b/src/main/java/io/lettuce/core/masterreplica/TimedAsyncCommand.java new file mode 100644 index 0000000000..1e81136536 --- /dev/null +++ b/src/main/java/io/lettuce/core/masterreplica/TimedAsyncCommand.java @@ -0,0 +1,60 @@ +/* + * Copyright 2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.masterreplica; + +import io.lettuce.core.protocol.AsyncCommand; +import io.lettuce.core.protocol.RedisCommand; +import io.netty.buffer.ByteBuf; + +/** + * Timed command that records the time at which the command was encoded and completed. + * + * @param Key type + * @param Value type + * @param Result type + * @author Mark Paluch + */ +class TimedAsyncCommand extends AsyncCommand { + + long encodedAtNs = -1; + long completedAtNs = -1; + + public TimedAsyncCommand(RedisCommand command) { + super(command); + } + + @Override + public void encode(ByteBuf buf) { + completedAtNs = -1; + encodedAtNs = -1; + + super.encode(buf); + encodedAtNs = System.nanoTime(); + } + + @Override + public void complete() { + completedAtNs = System.nanoTime(); + super.complete(); + } + + public long duration() { + if (completedAtNs == -1 || encodedAtNs == -1) { + return -1; + } + return completedAtNs - encodedAtNs; + } +} diff --git a/src/main/java/io/lettuce/core/masterreplica/Timeout.java b/src/main/java/io/lettuce/core/masterreplica/Timeout.java new file mode 100644 index 0000000000..ab2c42cab4 --- /dev/null +++ b/src/main/java/io/lettuce/core/masterreplica/Timeout.java @@ -0,0 +1,46 @@ +/* + * Copyright 2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.masterreplica; + +import java.util.concurrent.TimeUnit; + +/** + * Value object to represent a timeout. 
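A short sketch of how this value object behaves; it is package-private, so the snippet assumes code living in io.lettuce.core.masterreplica. The five-second window mirrors the default used by the TimedSemaphore above:

    // java.util.concurrent.TimeUnit is assumed to be imported
    Timeout timeout = new Timeout(5, TimeUnit.SECONDS);

    boolean expired = timeout.isExpired(); // false until the five seconds have elapsed
    long waitMillis = timeout.remaining(); // remaining milliseconds, floored at zero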
+ * + * @author Mark Paluch + * @since 4.2 + */ +class Timeout { + + private final long expiresMs; + + public Timeout(long timeout, TimeUnit timeUnit) { + this.expiresMs = System.currentTimeMillis() + timeUnit.toMillis(timeout); + } + + public boolean isExpired() { + return expiresMs < System.currentTimeMillis(); + } + + public long remaining() { + + long diff = expiresMs - System.currentTimeMillis(); + if (diff > 0) { + return diff; + } + return 0; + } +} diff --git a/src/main/java/io/lettuce/core/masterreplica/TopologyComparators.java b/src/main/java/io/lettuce/core/masterreplica/TopologyComparators.java new file mode 100644 index 0000000000..19c0db344e --- /dev/null +++ b/src/main/java/io/lettuce/core/masterreplica/TopologyComparators.java @@ -0,0 +1,123 @@ +/* + * Copyright 2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.masterreplica; + +import java.util.Collections; +import java.util.Comparator; +import java.util.List; +import java.util.Map; + +import io.lettuce.core.RedisURI; +import io.lettuce.core.models.role.RedisNodeDescription; + +/** + * Comparators for {@link RedisNodeDescription} and {@link RedisURI}. + * + * @author Mark Paluch + */ +class TopologyComparators { + + /** + * Compare {@link RedisNodeDescription} based on their latency. Lowest comes first. + */ + static class LatencyComparator implements Comparator { + + private final Map latencies; + + public LatencyComparator(Map latencies) { + this.latencies = latencies; + } + + @Override + public int compare(RedisNodeDescription o1, RedisNodeDescription o2) { + + Long latency1 = latencies.get(o1); + Long latency2 = latencies.get(o2); + + if (latency1 != null && latency2 != null) { + return latency1.compareTo(latency2); + } + + if (latency1 != null) { + return -1; + } + + if (latency2 != null) { + return 1; + } + + return 0; + } + } + + /** + * Sort action for topology. Defaults to sort by latency. Can be set via {@code io.lettuce.core.topology.sort} system + * property. + * + * @since 4.5 + */ + enum SortAction { + + /** + * Sort by latency. + */ + BY_LATENCY { + @Override + void sort(List nodes, Comparator latencyComparator) { + nodes.sort(latencyComparator); + } + }, + + /** + * Do not sort. + */ + NONE { + @Override + void sort(List nodes, Comparator latencyComparator) { + + } + }, + + /** + * Randomize nodes. + */ + RANDOMIZE { + @Override + void sort(List nodes, Comparator latencyComparator) { + Collections.shuffle(nodes); + } + }; + + abstract void sort(List nodes, Comparator latencyComparator); + + /** + * @return determine {@link SortAction} and fall back to {@link SortAction#BY_LATENCY} if sort action cannot be + * resolved. 
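The sort behaviour can be switched through the system property read below; matching is case-insensitive and unknown values fall back to BY_LATENCY. A minimal illustration:

    // Programmatically, before the first topology lookup:
    System.setProperty("io.lettuce.core.topology.sort", "RANDOMIZE");

    // or as a JVM argument (hypothetical launch command):
    // java -Dio.lettuce.core.topology.sort=NONE -jar app.jar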
+ */ + static SortAction getSortAction() { + + String sortAction = System.getProperty("io.lettuce.core.topology.sort", BY_LATENCY.name()); + + for (SortAction action : values()) { + if (sortAction.equalsIgnoreCase(action.name())) { + return action; + } + } + + return BY_LATENCY; + } + } +} diff --git a/src/main/java/io/lettuce/core/masterreplica/TopologyProvider.java b/src/main/java/io/lettuce/core/masterreplica/TopologyProvider.java new file mode 100644 index 0000000000..952f9259dc --- /dev/null +++ b/src/main/java/io/lettuce/core/masterreplica/TopologyProvider.java @@ -0,0 +1,52 @@ +/* + * Copyright 2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.masterreplica; + +import java.util.List; +import java.util.concurrent.CompletableFuture; + +import io.lettuce.core.RedisException; +import io.lettuce.core.models.role.RedisNodeDescription; + +/** + * Topology provider for Master-Replica topology discovery during runtime. Implementors of this interface return an unordered + * list of {@link RedisNodeDescription} instances. + * + * @author Mark Paluch + * @since 4.1 + */ +@FunctionalInterface +interface TopologyProvider { + + /** + * Lookup nodes within the topology. + * + * @return list of {@link RedisNodeDescription} instances + * @throws RedisException on errors that occurred during the lookup + */ + List getNodes(); + + /** + * Lookup nodes asynchronously within the topology. + * + * @return list of {@link RedisNodeDescription} instances + * @throws RedisException on errors that occurred during the lookup + * @since 5.1 + */ + default CompletableFuture> getNodesAsync() { + return CompletableFuture.completedFuture(getNodes()); + } +} diff --git a/src/main/java/io/lettuce/core/masterreplica/package-info.java b/src/main/java/io/lettuce/core/masterreplica/package-info.java new file mode 100644 index 0000000000..feec0f7600 --- /dev/null +++ b/src/main/java/io/lettuce/core/masterreplica/package-info.java @@ -0,0 +1,20 @@ +/** + * Client support for Redis Master/Replica setups. {@link io.lettuce.core.masterreplica.MasterReplica} supports self-managed, + * Redis Sentinel-managed, AWS ElastiCache and Azure Redis managed Master/Replica setups. + * + * Connections can be obtained by providing the {@link io.lettuce.core.RedisClient}, a {@link io.lettuce.core.RedisURI} and a {@link io.lettuce.core.codec.RedisCodec}. + * + *

    + *
    + *   RedisClient client = RedisClient.create();
+ *   StatefulRedisMasterReplicaConnection<String, String> connection = MasterReplica.connect(client,
    + *                                                                      RedisURI.create("redis://localhost"),
    + *                                                                      StringCodec.UTF8);
    + *   // ...
    + *
    + *   connection.close();
    + *   client.shutdown();
    + * 
    + */ +package io.lettuce.core.masterreplica; + diff --git a/src/main/java/io/lettuce/core/masterslave/MasterSlave.java b/src/main/java/io/lettuce/core/masterslave/MasterSlave.java new file mode 100644 index 0000000000..3055c57920 --- /dev/null +++ b/src/main/java/io/lettuce/core/masterslave/MasterSlave.java @@ -0,0 +1,187 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.masterslave; + +import java.util.concurrent.CompletableFuture; + +import io.lettuce.core.RedisClient; +import io.lettuce.core.RedisURI; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.masterreplica.MasterReplica; + +/** + * Master-Slave connection API. + *

    + * This API allows connections to Redis Master/Slave setups which run either in a static Master/Slave setup or are managed by + * Redis Sentinel. Master-Slave connections can discover topologies and select a source for read operations using + * {@link io.lettuce.core.ReadFrom}. + *

    + *

    + * + * Connections can be obtained by providing the {@link RedisClient}, a {@link RedisURI} and a {@link RedisCodec}. + * + *

    + * RedisClient client = RedisClient.create();
    + * StatefulRedisMasterSlaveConnection<String, String> connection = MasterSlave.connect(client,
    + *         RedisURI.create("redis://localhost"), StringCodec.UTF8);
    + * // ...
    + *
    + * connection.close();
    + * client.shutdown();
    + * 
    + * + *

+ * <p>
+ * <b>Topology Discovery</b>
+ * <p>
+ * Master-Slave topologies are either static or semi-static. Redis Standalone instances with attached slaves provide no
+ * failover/HA mechanism. Redis Sentinel managed instances are controlled by Redis Sentinel and allow failover (which includes
+ * master promotion). The {@link MasterSlave} API supports both mechanisms. The topology is provided by a
+ * {@code TopologyProvider}:
+ *
+ * <ul>
+ * <li>{@code MasterReplicaTopologyProvider}: Dynamic topology lookup using the {@code INFO REPLICATION} output. Slaves are
+ * listed as {@code slaveN=...} entries. The initial connection can either point to a master or a replica and the topology
+ * provider will discover nodes. The connection needs to be re-established outside of lettuce in case of Master/Slave failover
+ * or topology changes.</li>
+ * <li>{@code StaticMasterReplicaTopologyProvider}: Topology is defined by the list of {@link RedisURI URIs} and the
+ * {@code ROLE} output. MasterSlave uses only the supplied nodes and won't discover additional nodes in the setup. The
+ * connection needs to be re-established outside of lettuce in case of Master/Slave failover or topology changes.</li>
+ * <li>{@code SentinelTopologyProvider}: Dynamic topology lookup using the Redis Sentinel API. In particular,
+ * {@code SENTINEL MASTER} and {@code SENTINEL SLAVES} output. Master/Slave failover is handled by lettuce.</li>
+ * </ul>
+ *
+ * <p>
+ * <b>Topology Updates</b>
+ * <ul>
+ * <li>Standalone Master/Slave: Performs a one-time topology lookup which remains static afterward.</li>
+ * <li>Redis Sentinel: Subscribes to all Sentinels and listens for Pub/Sub messages to trigger topology refreshing.</li>
+ * </ul>
+ *
+ * <p>
+ * <b>Connection Fault-Tolerance</b>
+ * <p>
+ * Connecting to Master/Slave bears the possibility that individual nodes are not reachable. {@link MasterSlave} can still
+ * connect to a partially-available set of nodes.
+ * <ul>
+ * <li>Redis Sentinel: At least one Sentinel must be reachable, the masterId must be registered and at least one host must be
+ * available (master or slave). Allows for runtime-recovery based on Sentinel Events.</li>
+ * <li>Static Setup (auto-discovery): The initial endpoint must be reachable. No recovery/reconfiguration during runtime.</li>
+ * <li>Static Setup (provided hosts): All endpoints must be reachable. No recovery/reconfiguration during runtime.</li>
+ * </ul>
+ *
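To illustrate the non-blocking variant described further down in this class, a brief sketch using connectAsync; ReadFrom.REPLICA_PREFERRED is assumed to be available in this version and the URI is a placeholder:

    RedisClient client = RedisClient.create();

    CompletableFuture<StatefulRedisMasterSlaveConnection<String, String>> future = MasterSlave
            .connectAsync(client, StringCodec.UTF8, RedisURI.create("redis://localhost"));

    future.thenAccept(connection -> {
        // Route reads to a replica when one is available
        connection.setReadFrom(ReadFrom.REPLICA_PREFERRED);
    });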
    + * + * @author Mark Paluch + * @since 4.1 + * @deprecated since 5.2, use {@link io.lettuce.core.masterreplica.MasterReplica} + */ +@Deprecated +public class MasterSlave { + + /** + * Open a new connection to a Redis Master-Slave server/servers using the supplied {@link RedisURI} and the supplied + * {@link RedisCodec codec} to encode/decode keys. + *

    + * This {@link MasterSlave} performs auto-discovery of nodes using either Redis Sentinel or Master/Slave. A {@link RedisURI} + * can point to either a master or a replica host. + *

    + * + * @param redisClient the Redis client. + * @param codec Use this codec to encode/decode keys and values, must not be {@literal null}. + * @param redisURI the Redis server to connect to, must not be {@literal null}. + * @param Key type. + * @param Value type. + * @return a new connection. + */ + public static StatefulRedisMasterSlaveConnection connect(RedisClient redisClient, RedisCodec codec, + RedisURI redisURI) { + + LettuceAssert.notNull(redisClient, "RedisClient must not be null"); + LettuceAssert.notNull(codec, "RedisCodec must not be null"); + LettuceAssert.notNull(redisURI, "RedisURI must not be null"); + + return new MasterSlaveConnectionWrapper<>(MasterReplica.connect(redisClient, codec, redisURI)); + } + + /** + * Open asynchronously a new connection to a Redis Master-Slave server/servers using the supplied {@link RedisURI} and the + * supplied {@link RedisCodec codec} to encode/decode keys. + *

    + * This {@link MasterSlave} performs auto-discovery of nodes using either Redis Sentinel or Master/Slave. A {@link RedisURI} + * can point to either a master or a replica host. + *

    + * + * @param redisClient the Redis client. + * @param codec Use this codec to encode/decode keys and values, must not be {@literal null}. + * @param redisURI the Redis server to connect to, must not be {@literal null}. + * @param Key type. + * @param Value type. + * @return {@link CompletableFuture} that is notified once the connect is finished. + * @since + */ + public static CompletableFuture> connectAsync(RedisClient redisClient, + RedisCodec codec, RedisURI redisURI) { + return MasterReplica.connectAsync(redisClient, codec, redisURI).thenApply(MasterSlaveConnectionWrapper::new); + } + + /** + * Open a new connection to a Redis Master-Slave server/servers using the supplied {@link RedisURI} and the supplied + * {@link RedisCodec codec} to encode/decode keys. + *

    + * This {@link MasterSlave} performs auto-discovery of nodes if the URI is a Redis Sentinel URI. Master/Slave URIs will be + * treated as static topology and no additional hosts are discovered in such case. Redis Standalone Master/Slave will + * discover the roles of the supplied {@link RedisURI URIs} and issue commands to the appropriate node. + *

    + *

    + * When using Redis Sentinel, ensure that {@link Iterable redisURIs} contains only a single entry as only the first URI is + * considered. {@link RedisURI} pointing to multiple Sentinels can be configured through + * {@link RedisURI.Builder#withSentinel}. + *
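A brief sketch of the recommended single-URI configuration with multiple Sentinels, using the RedisURI.Builder methods referenced above; hosts, ports and the master id are placeholders, and client is assumed to be an existing RedisClient:

    RedisURI sentinelUri = RedisURI.builder()
            .withSentinel("sentinel1", 26379)
            .withSentinel("sentinel2", 26379)
            .withSentinelMasterId("mymaster")
            .build();

    StatefulRedisMasterSlaveConnection<String, String> connection = MasterSlave.connect(client,
            StringCodec.UTF8, sentinelUri);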

    + * + * @param redisClient the Redis client. + * @param codec Use this codec to encode/decode keys and values, must not be {@literal null}. + * @param redisURIs the Redis server(s) to connect to, must not be {@literal null}. + * @param Key type. + * @param Value type. + * @return a new connection. + */ + public static StatefulRedisMasterSlaveConnection connect(RedisClient redisClient, RedisCodec codec, + Iterable redisURIs) { + return new MasterSlaveConnectionWrapper<>(MasterReplica.connect(redisClient, codec, redisURIs)); + } + + /** + * Open asynchronously a new connection to a Redis Master-Slave server/servers using the supplied {@link RedisURI} and the + * supplied {@link RedisCodec codec} to encode/decode keys. + *

    + * This {@link MasterSlave} performs auto-discovery of nodes if the URI is a Redis Sentinel URI. Master/Slave URIs will be + * treated as static topology and no additional hosts are discovered in such case. Redis Standalone Master/Slave will + * discover the roles of the supplied {@link RedisURI URIs} and issue commands to the appropriate node. + *

    + *

    + * When using Redis Sentinel, ensure that {@link Iterable redisURIs} contains only a single entry as only the first URI is + * considered. {@link RedisURI} pointing to multiple Sentinels can be configured through + * {@link RedisURI.Builder#withSentinel}. + *

    + * + * @param redisClient the Redis client. + * @param codec Use this codec to encode/decode keys and values, must not be {@literal null}. + * @param redisURIs the Redis server(s) to connect to, must not be {@literal null}. + * @param Key type. + * @param Value type. + * @return {@link CompletableFuture} that is notified once the connect is finished. + */ + public static CompletableFuture> connectAsync(RedisClient redisClient, + RedisCodec codec, Iterable redisURIs) { + return MasterReplica.connectAsync(redisClient, codec, redisURIs).thenApply(MasterSlaveConnectionWrapper::new); + } +} diff --git a/src/main/java/io/lettuce/core/masterslave/MasterSlaveConnectionWrapper.java b/src/main/java/io/lettuce/core/masterslave/MasterSlaveConnectionWrapper.java new file mode 100644 index 0000000000..afcd3c5388 --- /dev/null +++ b/src/main/java/io/lettuce/core/masterslave/MasterSlaveConnectionWrapper.java @@ -0,0 +1,135 @@ +/* + * Copyright 2019-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.masterslave; + +import java.time.Duration; +import java.util.Collection; +import java.util.concurrent.CompletableFuture; + +import io.lettuce.core.ClientOptions; +import io.lettuce.core.ReadFrom; +import io.lettuce.core.api.async.RedisAsyncCommands; +import io.lettuce.core.api.reactive.RedisReactiveCommands; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.masterreplica.StatefulRedisMasterReplicaConnection; +import io.lettuce.core.protocol.RedisCommand; +import io.lettuce.core.resource.ClientResources; + +/** + * Connection wrapper for {@link StatefulRedisMasterSlaveConnection}. 
+ * + * @author Mark Paluch + * @since 5.2 + */ +class MasterSlaveConnectionWrapper implements StatefulRedisMasterSlaveConnection { + + private final StatefulRedisMasterReplicaConnection delegate; + + public MasterSlaveConnectionWrapper(StatefulRedisMasterReplicaConnection delegate) { + this.delegate = delegate; + } + + @Override + public void setReadFrom(ReadFrom readFrom) { + delegate.setReadFrom(readFrom); + } + + @Override + public ReadFrom getReadFrom() { + return delegate.getReadFrom(); + } + + @Override + public boolean isMulti() { + return delegate.isMulti(); + } + + @Override + public RedisCommands sync() { + return delegate.sync(); + } + + @Override + public RedisAsyncCommands async() { + return delegate.async(); + } + + @Override + public RedisReactiveCommands reactive() { + return delegate.reactive(); + } + + @Override + public void setTimeout(Duration timeout) { + delegate.setTimeout(timeout); + } + + @Override + public Duration getTimeout() { + return delegate.getTimeout(); + } + + @Override + public RedisCommand dispatch(RedisCommand command) { + return delegate.dispatch(command); + } + + @Override + public Collection> dispatch(Collection> redisCommands) { + return delegate.dispatch(redisCommands); + } + + @Override + public void close() { + delegate.close(); + } + + @Override + public CompletableFuture closeAsync() { + return delegate.closeAsync(); + } + + @Override + public boolean isOpen() { + return delegate.isOpen(); + } + + @Override + public ClientOptions getOptions() { + return delegate.getOptions(); + } + + @Override + public ClientResources getResources() { + return delegate.getResources(); + } + + @Override + @Deprecated + public void reset() { + delegate.reset(); + } + + @Override + public void setAutoFlushCommands(boolean autoFlush) { + delegate.setAutoFlushCommands(autoFlush); + } + + @Override + public void flushCommands() { + delegate.flushCommands(); + } +} diff --git a/src/main/java/io/lettuce/core/masterslave/StatefulRedisMasterSlaveConnection.java b/src/main/java/io/lettuce/core/masterslave/StatefulRedisMasterSlaveConnection.java new file mode 100644 index 0000000000..5c430c4c8c --- /dev/null +++ b/src/main/java/io/lettuce/core/masterslave/StatefulRedisMasterSlaveConnection.java @@ -0,0 +1,48 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.masterslave; + +import io.lettuce.core.ReadFrom; +import io.lettuce.core.masterreplica.StatefulRedisMasterReplicaConnection; + +/** + * Redis Master-Slave connection. The connection allows slave reads by setting {@link ReadFrom}. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.1 + * @deprecated since 5.2, use {@link io.lettuce.core.masterreplica.MasterReplica} and + * {@link io.lettuce.core.masterreplica.StatefulRedisMasterReplicaConnection}. 
+ */ +@Deprecated +public interface StatefulRedisMasterSlaveConnection extends StatefulRedisMasterReplicaConnection { + + /** + * Set from which nodes data is read. The setting is used as default for read operations on this connection. See the + * documentation for {@link ReadFrom} for more information. + * + * @param readFrom the read from setting, must not be {@literal null} + */ + void setReadFrom(ReadFrom readFrom); + + /** + * Gets the {@link ReadFrom} setting for this connection. Defaults to {@link ReadFrom#MASTER} if not set. + * + * @return the read from setting + */ + ReadFrom getReadFrom(); +} diff --git a/src/main/java/io/lettuce/core/masterslave/package-info.java b/src/main/java/io/lettuce/core/masterslave/package-info.java new file mode 100644 index 0000000000..23abb92baa --- /dev/null +++ b/src/main/java/io/lettuce/core/masterslave/package-info.java @@ -0,0 +1,21 @@ +/** + * Client support for Redis Master/Slave setups. {@link io.lettuce.core.masterslave.MasterSlave} supports self-managed, + * Redis Sentinel-managed, AWS ElastiCache and Azure Redis managed Master/Slave setups. + * + * Connections can be obtained by providing the {@link io.lettuce.core.RedisClient}, a {@link io.lettuce.core.RedisURI} and a {@link io.lettuce.core.codec.RedisCodec}. + * + *
    + *
    + *   RedisClient client = RedisClient.create();
+ *   StatefulRedisMasterSlaveConnection<String, String> connection = MasterSlave.connect(client,
    + *                                                                      RedisURI.create("redis://localhost"),
    + *                                                                      StringCodec.UTF8);
    + *   // ...
    + *
    + *   connection.close();
    + *   client.shutdown();
    + * 
    + * @deprecated will be moved to {@code masterreplica} package with version 6. + */ +package io.lettuce.core.masterslave; + diff --git a/src/main/java/io/lettuce/core/metrics/CommandLatencyCollector.java b/src/main/java/io/lettuce/core/metrics/CommandLatencyCollector.java new file mode 100644 index 0000000000..062d0455fa --- /dev/null +++ b/src/main/java/io/lettuce/core/metrics/CommandLatencyCollector.java @@ -0,0 +1,90 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.metrics; + +import java.net.SocketAddress; +import java.util.Collections; +import java.util.Map; +import java.util.concurrent.TimeUnit; + +import io.lettuce.core.protocol.ProtocolKeyword; + +/** + * {@link MetricCollector} for command latencies. Command latencies are collected per connection (identified by local/remote + * tuples of {@link SocketAddress}es) and {@link ProtocolKeyword command type}. Two command latencies are available: + *
+ * <ul>
+ * <li>Latency between command send and first response (first response received)</li>
+ * <li>Latency between command send and command completion (complete response received)</li>
+ * </ul>
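To make the two recorded latencies concrete, here is a small self-contained sketch that drives a collector directly instead of going through a client. It uses only types introduced in this change plus CommandType and InetSocketAddress; the addresses and latency values are invented for illustration, and CommandLatencyCollector.create(...) needs HdrHistogram and LatencyUtils on the classpath.

```java
import java.net.InetSocketAddress;
import java.net.SocketAddress;
import java.util.Map;
import java.util.concurrent.TimeUnit;

import io.lettuce.core.metrics.CommandLatencyCollector;
import io.lettuce.core.metrics.CommandLatencyCollectorOptions;
import io.lettuce.core.metrics.CommandLatencyId;
import io.lettuce.core.metrics.CommandMetrics;
import io.lettuce.core.protocol.CommandType;

public class LatencyCollectorSketch {

    public static void main(String[] args) {

        CommandLatencyCollectorOptions options = CommandLatencyCollectorOptions.builder()
                .targetUnit(TimeUnit.MICROSECONDS)
                .targetPercentiles(new double[] { 50.0, 95.0, 99.0 })
                .build();

        CommandLatencyCollector collector = CommandLatencyCollector.create(options);

        SocketAddress local = new InetSocketAddress("127.0.0.1", 50000);
        SocketAddress remote = new InetSocketAddress("127.0.0.1", 6379);

        // Both latencies are recorded in nanoseconds: first response and completion.
        collector.recordCommandLatency(local, remote, CommandType.GET,
                TimeUnit.MILLISECONDS.toNanos(1), TimeUnit.MILLISECONDS.toNanos(2));

        Map<CommandLatencyId, CommandMetrics> metrics = collector.retrieveMetrics();
        metrics.forEach((id, m) -> System.out.println(id + " -> " + m));

        collector.shutdown();
    }
}
```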
    + * + * @author Mark Paluch + * @since 3.4 + */ +public interface CommandLatencyCollector extends MetricCollector> { + + /** + * Creates a new {@link CommandLatencyCollector} using {@link CommandLatencyCollectorOptions}. + * + * @param options must not be {@literal null}. + * @return the {@link CommandLatencyCollector} using {@link CommandLatencyCollectorOptions}. + */ + static CommandLatencyCollector create(CommandLatencyCollectorOptions options) { + return new DefaultCommandLatencyCollector(options); + } + + /** + * Returns a disabled no-op {@link CommandLatencyCollector}. + * + * @return + * @since 5.1 + */ + static CommandLatencyCollector disabled() { + + return new CommandLatencyCollector() { + @Override + public void recordCommandLatency(SocketAddress local, SocketAddress remote, ProtocolKeyword commandType, + long firstResponseLatency, long completionLatency) { + } + + @Override + public void shutdown() { + } + + @Override + public Map retrieveMetrics() { + return Collections.emptyMap(); + } + + @Override + public boolean isEnabled() { + return false; + } + }; + } + + /** + * Record the command latency per {@code connectionPoint} and {@code commandType}. + * + * @param local the local address + * @param remote the remote address + * @param commandType the command type + * @param firstResponseLatency latency value in {@link TimeUnit#NANOSECONDS} from send to the first response + * @param completionLatency latency value in {@link TimeUnit#NANOSECONDS} from send to the command completion + */ + void recordCommandLatency(SocketAddress local, SocketAddress remote, ProtocolKeyword commandType, + long firstResponseLatency, long completionLatency); + +} diff --git a/src/main/java/io/lettuce/core/metrics/CommandLatencyCollectorOptions.java b/src/main/java/io/lettuce/core/metrics/CommandLatencyCollectorOptions.java new file mode 100644 index 0000000000..2d7c8afe53 --- /dev/null +++ b/src/main/java/io/lettuce/core/metrics/CommandLatencyCollectorOptions.java @@ -0,0 +1,175 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.metrics; + +import java.util.concurrent.TimeUnit; + +/** + * Configuration interface for command latency collection. + * + * @author Mark Paluch + */ +public interface CommandLatencyCollectorOptions { + + /** + * Create a new {@link CommandLatencyCollectorOptions} instance using default settings. + * + * @return a new instance of {@link CommandLatencyCollectorOptions} instance using default settings + * @since 5.1 + */ + static CommandLatencyCollectorOptions create() { + return builder().build(); + } + + /** + * Create a {@link CommandLatencyCollectorOptions} instance with disabled event emission. 
+ * + * @return a new instance of {@link CommandLatencyCollectorOptions} with disabled event emission + * @since 5.1 + */ + static CommandLatencyCollectorOptions disabled() { + return DefaultCommandLatencyCollectorOptions.disabled(); + } + + /** + * Returns a new {@link CommandLatencyCollectorOptions.Builder} to construct {@link CommandLatencyCollectorOptions}. + * + * @return a new {@link CommandLatencyCollectorOptions.Builder} to construct {@link CommandLatencyCollectorOptions}. + * @since 5.1 + */ + static CommandLatencyCollectorOptions.Builder builder() { + return DefaultCommandLatencyCollectorOptions.builder(); + } + + /** + * Returns a builder to create new {@link CommandLatencyCollectorOptions} whose settings are replicated from the current + * {@link CommandLatencyCollectorOptions}. + * + * @return a a {@link CommandLatencyCollectorOptions.Builder} to create new {@link CommandLatencyCollectorOptions} whose + * settings are replicated from the current {@link CommandLatencyCollectorOptions} + * + * @since 5.1 + */ + CommandLatencyCollectorOptions.Builder mutate(); + + /** + * Returns the target {@link TimeUnit} for the emitted latencies. + * + * @return the target {@link TimeUnit} for the emitted latencies + */ + TimeUnit targetUnit(); + + /** + * Returns the percentiles which should be exposed in the metric. + * + * @return the percentiles which should be exposed in the metric + */ + double[] targetPercentiles(); + + /** + * Returns whether the latencies should be reset once an event is emitted. + * + * @return {@literal true} if the latencies should be reset once an event is emitted. + */ + boolean resetLatenciesAfterEvent(); + + /** + * Returns whether to distinct latencies on local level. If {@literal true}, multiple connections to the same + * host/connection point will be recorded separately which allows to inspect every connection individually. If + * {@literal false}, multiple connections to the same host/connection point will be recorded together. This allows a + * consolidated view on one particular service. + * + * @return {@literal true} if latencies are recorded distinct on local level (per connection) + */ + boolean localDistinction(); + + /** + * Returns whether the latency collector is enabled. + * + * @return {@literal true} if the latency collector is enabled + */ + boolean isEnabled(); + + /** + * Builder for {@link CommandLatencyCollectorOptions}. + * + * @since 5.1 + */ + interface Builder { + + /** + * Disable the latency collector. + * + * @return this + */ + Builder disable(); + + /** + * Enable the latency collector. + * + * @return this {@link DefaultCommandLatencyCollectorOptions.Builder}. + */ + Builder enable(); + + /** + * Enables per connection metrics tracking insead of per host/port. If {@literal true}, multiple connections to the same + * host/connection point will be recorded separately which allows to inspect every connection individually. If + * {@literal false}, multiple connections to the same host/connection point will be recorded together. This allows a + * consolidated view on one particular service. Defaults to {@literal false}. See + * {@link DefaultCommandLatencyCollectorOptions#DEFAULT_LOCAL_DISTINCTION}. + * + * @param localDistinction {@literal true} if latencies are recorded distinct on local level (per connection). + * @return this {@link Builder}. + */ + Builder localDistinction(boolean localDistinction); + + /** + * Sets whether the recorded latencies should be reset once the metrics event was emitted. Defaults to {@literal true}. 
+ * See {@link DefaultCommandLatencyCollectorOptions#DEFAULT_RESET_LATENCIES_AFTER_EVENT}. + * + * @param resetLatenciesAfterEvent {@literal true} if the recorded latencies should be reset once the metrics event was + * emitted. + * + * @return this {@link Builder}. + */ + Builder resetLatenciesAfterEvent(boolean resetLatenciesAfterEvent); + + /** + * Sets the emitted percentiles. Defaults to 50.0, 90.0, 95.0, 99.0, 99.9}. See + * {@link DefaultCommandLatencyCollectorOptions#DEFAULT_TARGET_PERCENTILES}. + * + * @param targetPercentiles the percentiles which should be emitted, must not be {@literal null}. + * + * @return this {@link Builder}. + */ + Builder targetPercentiles(double[] targetPercentiles); + + /** + * Set the target unit for the latencies. Defaults to {@link TimeUnit#MILLISECONDS}. See + * {@link DefaultCommandLatencyCollectorOptions#DEFAULT_TARGET_UNIT}. + * + * @param targetUnit the target unit, must not be {@literal null}. + * @return this {@link Builder}. + * + */ + Builder targetUnit(TimeUnit targetUnit); + + /** + * @return a new instance of {@link CommandLatencyCollectorOptions}. + */ + CommandLatencyCollectorOptions build(); + } +} diff --git a/src/main/java/io/lettuce/core/metrics/CommandLatencyId.java b/src/main/java/io/lettuce/core/metrics/CommandLatencyId.java new file mode 100644 index 0000000000..be4f2ea41c --- /dev/null +++ b/src/main/java/io/lettuce/core/metrics/CommandLatencyId.java @@ -0,0 +1,141 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.metrics; + +import java.io.Serializable; +import java.net.SocketAddress; + +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.protocol.ProtocolKeyword; + +/** + * Identifier for a command latency. Consists of a local/remote tuple of {@link SocketAddress}es and a + * {@link io.lettuce.core.protocol.ProtocolKeyword commandType} part. + * + * @author Mark Paluch + */ +@SuppressWarnings("serial") +public class CommandLatencyId implements Serializable, Comparable { + + private final SocketAddress localAddress; + private final SocketAddress remoteAddress; + private final ProtocolKeyword commandType; + private final String commandName; + + protected CommandLatencyId(SocketAddress localAddress, SocketAddress remoteAddress, ProtocolKeyword commandType) { + LettuceAssert.notNull(localAddress, "LocalAddress must not be null"); + LettuceAssert.notNull(remoteAddress, "RemoteAddress must not be null"); + LettuceAssert.notNull(commandType, "CommandType must not be null"); + + this.localAddress = localAddress; + this.remoteAddress = remoteAddress; + this.commandType = commandType; + this.commandName = commandType.name(); + } + + /** + * Create a new instance of {@link CommandLatencyId}. 
+ * + * @param localAddress the local address + * @param remoteAddress the remote address + * @param commandType the command type + * @return a new instance of {@link CommandLatencyId} + */ + public static CommandLatencyId create(SocketAddress localAddress, SocketAddress remoteAddress, ProtocolKeyword commandType) { + return new CommandLatencyId(localAddress, remoteAddress, commandType); + } + + /** + * Returns the local address. + * + * @return the local address + */ + public SocketAddress localAddress() { + return localAddress; + } + + /** + * Returns the remote address. + * + * @return the remote address + */ + public SocketAddress remoteAddress() { + return remoteAddress; + } + + /** + * Returns the command type. + * + * @return the command type + */ + public ProtocolKeyword commandType() { + return commandType; + } + + @Override + public boolean equals(Object o) { + if (this == o) + return true; + if (!(o instanceof CommandLatencyId)) + return false; + + CommandLatencyId that = (CommandLatencyId) o; + + if (!localAddress.equals(that.localAddress)) + return false; + if (!remoteAddress.equals(that.remoteAddress)) + return false; + return commandName.equals(that.commandName); + } + + @Override + public int hashCode() { + int result = localAddress.hashCode(); + result = 31 * result + remoteAddress.hashCode(); + result = 31 * result + commandName.hashCode(); + return result; + } + + @Override + public int compareTo(CommandLatencyId o) { + + if (o == null) { + return -1; + } + + int remoteResult = remoteAddress.toString().compareTo(o.remoteAddress.toString()); + if (remoteResult != 0) { + return remoteResult; + } + + int localResult = localAddress.toString().compareTo(o.localAddress.toString()); + if (localResult != 0) { + return localResult; + } + + return commandName.compareTo(o.commandName); + } + + @Override + public String toString() { + StringBuilder sb = new StringBuilder(); + sb.append("[").append(localAddress); + sb.append(" -> ").append(remoteAddress); + sb.append(", commandType=").append(commandType); + sb.append(']'); + return sb.toString(); + } +} diff --git a/src/main/java/com/lambdaworks/redis/metrics/CommandMetrics.java b/src/main/java/io/lettuce/core/metrics/CommandMetrics.java similarity index 76% rename from src/main/java/com/lambdaworks/redis/metrics/CommandMetrics.java rename to src/main/java/io/lettuce/core/metrics/CommandMetrics.java index c7b4b0770c..957a76b47b 100644 --- a/src/main/java/com/lambdaworks/redis/metrics/CommandMetrics.java +++ b/src/main/java/io/lettuce/core/metrics/CommandMetrics.java @@ -1,14 +1,30 @@ -package com.lambdaworks.redis.metrics; +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.metrics; import java.util.Map; import java.util.concurrent.TimeUnit; /** * Latency metrics for commands. This class provides the count, time unit and firstResponse/completion latencies. 
- * + * * @author Mark Paluch */ public class CommandMetrics { + private final long count; private final TimeUnit timeUnit; @@ -23,7 +39,7 @@ public CommandMetrics(long count, TimeUnit timeUnit, CommandLatency firstRespons } /** - * + * * @return the count */ public long getCount() { @@ -31,7 +47,7 @@ public long getCount() { } /** - * + * * @return the time unit for the {@link #getFirstResponse()} and {@link #getCompletion()} latencies. */ public TimeUnit getTimeUnit() { @@ -39,7 +55,7 @@ public TimeUnit getTimeUnit() { } /** - * + * * @return latencies between send and the first command response */ public CommandLatency getFirstResponse() { @@ -47,7 +63,7 @@ public CommandLatency getFirstResponse() { } /** - * + * * @return latencies between send and the command completion */ public CommandLatency getCompletion() { @@ -56,7 +72,7 @@ public CommandLatency getCompletion() { @Override public String toString() { - final StringBuffer sb = new StringBuffer(); + StringBuilder sb = new StringBuilder(); sb.append("[count=").append(count); sb.append(", timeUnit=").append(timeUnit); sb.append(", firstResponse=").append(firstResponse); @@ -77,7 +93,7 @@ public CommandLatency(long min, long max, Map percentiles) { } /** - * + * * @return the minimum time */ public long getMin() { @@ -85,7 +101,7 @@ public long getMin() { } /** - * + * * @return the maximum time */ public long getMax() { @@ -93,7 +109,7 @@ public long getMax() { } /** - * + * * @return percentile mapping */ public Map getPercentiles() { @@ -102,7 +118,7 @@ public Map getPercentiles() { @Override public String toString() { - final StringBuffer sb = new StringBuffer(); + StringBuilder sb = new StringBuilder(); sb.append("[min=").append(min); sb.append(", max=").append(max); sb.append(", percentiles=").append(percentiles); diff --git a/src/main/java/io/lettuce/core/metrics/DefaultCommandLatencyCollector.java b/src/main/java/io/lettuce/core/metrics/DefaultCommandLatencyCollector.java new file mode 100644 index 0000000000..d2def86cf6 --- /dev/null +++ b/src/main/java/io/lettuce/core/metrics/DefaultCommandLatencyCollector.java @@ -0,0 +1,426 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.metrics; + +import static io.lettuce.core.internal.LettuceClassUtils.isPresent; + +import java.net.SocketAddress; +import java.util.Collections; +import java.util.HashMap; +import java.util.Map; +import java.util.TreeMap; +import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicLong; +import java.util.concurrent.atomic.AtomicReference; +import java.util.concurrent.atomic.AtomicReferenceFieldUpdater; + +import org.HdrHistogram.Histogram; +import org.LatencyUtils.LatencyStats; +import org.LatencyUtils.PauseDetector; +import org.LatencyUtils.SimplePauseDetector; + +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.metrics.CommandMetrics.CommandLatency; +import io.lettuce.core.protocol.CommandType; +import io.lettuce.core.protocol.ProtocolKeyword; +import io.netty.channel.local.LocalAddress; +import io.netty.util.internal.logging.InternalLogger; +import io.netty.util.internal.logging.InternalLoggerFactory; + +/** + * Default implementation of a {@link CommandLatencyCollector} for command latencies. + * + * @author Mark Paluch + */ +public class DefaultCommandLatencyCollector implements CommandLatencyCollector { + + private static final AtomicReferenceFieldUpdater PAUSE_DETECTOR_UPDATER = AtomicReferenceFieldUpdater + .newUpdater(DefaultCommandLatencyCollector.class, PauseDetectorWrapper.class, "pauseDetectorWrapper"); + + private static final boolean LATENCY_UTILS_AVAILABLE = isPresent("org.LatencyUtils.PauseDetector"); + private static final boolean HDR_UTILS_AVAILABLE = isPresent("org.HdrHistogram.Histogram"); + private static final PauseDetectorWrapper GLOBAL_PAUSE_DETECTOR = PauseDetectorWrapper.create(); + + private static final long MIN_LATENCY = 1000; + private static final long MAX_LATENCY = TimeUnit.MINUTES.toNanos(5); + + private final CommandLatencyCollectorOptions options; + + private final AtomicReference> latencyMetricsRef = new AtomicReference<>( + createNewLatencyMap()); + + // Updated via PAUSE_DETECTOR_UPDATER + private volatile PauseDetectorWrapper pauseDetectorWrapper; + + private volatile boolean stopped; + + public DefaultCommandLatencyCollector(CommandLatencyCollectorOptions options) { + + LettuceAssert.notNull(options, "CommandLatencyCollectorOptions must not be null"); + + this.options = options; + } + + /** + * Record the command latency per {@code connectionPoint} and {@code commandType}. 
+ * + * @param local the local address + * @param remote the remote address + * @param commandType the command type + * @param firstResponseLatency latency value in {@link TimeUnit#NANOSECONDS} from send to the first response + * @param completionLatency latency value in {@link TimeUnit#NANOSECONDS} from send to the command completion + */ + public void recordCommandLatency(SocketAddress local, SocketAddress remote, ProtocolKeyword commandType, + long firstResponseLatency, long completionLatency) { + + if (!isEnabled()) { + return; + } + + PauseDetector pauseDetector; + + do { + if (PAUSE_DETECTOR_UPDATER.get(this) == null) { + if (PAUSE_DETECTOR_UPDATER.compareAndSet(this, null, GLOBAL_PAUSE_DETECTOR)) { + PAUSE_DETECTOR_UPDATER.get(this).retain(); + } + } + pauseDetector = ((DefaultPauseDetectorWrapper) PAUSE_DETECTOR_UPDATER.get(this)).getPauseDetector(); + } while (pauseDetector == null); + + PauseDetector pauseDetectorToUse = pauseDetector; + Latencies latencies = latencyMetricsRef.get().computeIfAbsent(createId(local, remote, commandType), id -> { + + if (options.resetLatenciesAfterEvent()) { + return new Latencies(pauseDetectorToUse); + } + + return new CummulativeLatencies(pauseDetectorToUse); + }); + + latencies.firstResponse.recordLatency(rangify(firstResponseLatency)); + latencies.completion.recordLatency(rangify(completionLatency)); + } + + private CommandLatencyId createId(SocketAddress local, SocketAddress remote, ProtocolKeyword commandType) { + return CommandLatencyId.create(options.localDistinction() ? local : LocalAddress.ANY, remote, commandType); + } + + private long rangify(long latency) { + return Math.max(MIN_LATENCY, Math.min(MAX_LATENCY, latency)); + } + + @Override + public boolean isEnabled() { + return options.isEnabled() && !stopped; + } + + @Override + public void shutdown() { + + stopped = true; + + PauseDetectorWrapper pauseDetectorWrapper = PAUSE_DETECTOR_UPDATER.get(this); + if (pauseDetectorWrapper != null && PAUSE_DETECTOR_UPDATER.compareAndSet(this, pauseDetectorWrapper, null)) { + pauseDetectorWrapper.release(); + } + + Map latenciesMap = latencyMetricsRef.get(); + if (latencyMetricsRef.compareAndSet(latenciesMap, Collections.emptyMap())) { + latenciesMap.values().forEach(Latencies::stop); + } + } + + @Override + public Map retrieveMetrics() { + + Map latenciesMap = latencyMetricsRef.get(); + Map metricsToUse; + + if (options.resetLatenciesAfterEvent()) { + + metricsToUse = latenciesMap; + latencyMetricsRef.set(createNewLatencyMap()); + + metricsToUse.values().forEach(Latencies::stop); + } else { + metricsToUse = new HashMap<>(latenciesMap); + } + + return getMetrics(metricsToUse); + } + + private Map getMetrics(Map latencyMetrics) { + + Map result = new TreeMap<>(); + + for (Map.Entry entry : latencyMetrics.entrySet()) { + + Latencies latencies = entry.getValue(); + + Histogram firstResponse = latencies.getFirstResponseHistogram(); + Histogram completion = latencies.getCompletionHistogram(); + + if (firstResponse.getTotalCount() == 0 && completion.getTotalCount() == 0) { + continue; + } + + CommandLatency firstResponseLatency = getMetric(firstResponse); + CommandLatency completionLatency = getMetric(completion); + + CommandMetrics metrics = new CommandMetrics(firstResponse.getTotalCount(), options.targetUnit(), + firstResponseLatency, completionLatency); + + result.put(entry.getKey(), metrics); + } + + return result; + } + + private CommandLatency getMetric(Histogram histogram) { + + Map percentiles = getPercentiles(histogram); + + TimeUnit timeUnit = 
options.targetUnit(); + return new CommandLatency(timeUnit.convert(histogram.getMinValue(), TimeUnit.NANOSECONDS), + timeUnit.convert(histogram.getMaxValue(), TimeUnit.NANOSECONDS), percentiles); + } + + private Map getPercentiles(Histogram histogram) { + + Map percentiles = new TreeMap<>(); + for (double targetPercentile : options.targetPercentiles()) { + percentiles.put(targetPercentile, + options.targetUnit().convert(histogram.getValueAtPercentile(targetPercentile), TimeUnit.NANOSECONDS)); + } + + return percentiles; + } + + /** + * Returns {@literal true} if HdrUtils and LatencyUtils are available on the class path. + * + * @return + */ + public static boolean isAvailable() { + return LATENCY_UTILS_AVAILABLE && HDR_UTILS_AVAILABLE; + } + + private static ConcurrentHashMap createNewLatencyMap() { + return new ConcurrentHashMap<>(CommandType.values().length); + } + + /** + * Returns a disabled no-op {@link CommandLatencyCollector}. + * + * @return + */ + public static CommandLatencyCollector disabled() { + + return new CommandLatencyCollector() { + @Override + public void recordCommandLatency(SocketAddress local, SocketAddress remote, ProtocolKeyword commandType, + long firstResponseLatency, long completionLatency) { + } + + @Override + public void shutdown() { + } + + @Override + public Map retrieveMetrics() { + return Collections.emptyMap(); + } + + @Override + public boolean isEnabled() { + return false; + } + }; + } + + private static class Latencies { + + private final LatencyStats firstResponse; + private final LatencyStats completion; + + Latencies(PauseDetector pauseDetector) { + firstResponse = LatencyStats.Builder.create().pauseDetector(pauseDetector).build(); + completion = LatencyStats.Builder.create().pauseDetector(pauseDetector).build(); + } + + public Histogram getFirstResponseHistogram() { + return firstResponse.getIntervalHistogram(); + } + + public Histogram getCompletionHistogram() { + return completion.getIntervalHistogram(); + } + + public void stop() { + firstResponse.stop(); + completion.stop(); + } + } + + private static class CummulativeLatencies extends Latencies { + + private final Histogram firstResponse; + private final Histogram completion; + + CummulativeLatencies(PauseDetector pauseDetector) { + super(pauseDetector); + + firstResponse = super.firstResponse.getIntervalHistogram(); + completion = super.completion.getIntervalHistogram(); + } + + @Override + public Histogram getFirstResponseHistogram() { + + firstResponse.add(super.getFirstResponseHistogram()); + return firstResponse; + } + + @Override + public Histogram getCompletionHistogram() { + + completion.add(super.getFirstResponseHistogram()); + return completion; + } + } + + /** + * Wrapper for initialization of {@link PauseDetector}. Encapsulates absence of LatencyUtils. + */ + interface PauseDetectorWrapper { + + /** + * No-operation {@link PauseDetectorWrapper} implementation. + */ + PauseDetectorWrapper NO_OP = new PauseDetectorWrapper() { + @Override + public void release() { + } + + @Override + public void retain() { + } + }; + + static PauseDetectorWrapper create() { + + if (HDR_UTILS_AVAILABLE && LATENCY_UTILS_AVAILABLE) { + return new DefaultPauseDetectorWrapper(); + } + + return NO_OP; + } + + /** + * Retain reference to {@link PauseDetectorWrapper} and increment reference counter. + */ + void retain(); + + /** + * Release reference to {@link PauseDetectorWrapper} and decrement reference counter. 
+ */ + void release(); + } + + /** + * Reference-counted wrapper for {@link PauseDetector} instances. + */ + static class DefaultPauseDetectorWrapper implements PauseDetectorWrapper { + + private static final AtomicLong instanceCounter = new AtomicLong(); + + private final AtomicLong counter = new AtomicLong(); + private final Object mutex = new Object(); + + private volatile PauseDetector pauseDetector; + private volatile Thread shutdownHook; + + /** + * Obtain the current {@link PauseDetector}. Requires a call to {@link #retain()} first. + * + * @return + */ + public PauseDetector getPauseDetector() { + return pauseDetector; + } + + /** + * Creates or initializes a {@link PauseDetector} instance after incrementing the usage counter to one. Should be + * {@link #release() released} once it is no longer in use. + */ + public void retain() { + + if (counter.incrementAndGet() == 1) { + + // Avoid concurrent calls to retain/release + synchronized (mutex) { + + if (instanceCounter.getAndIncrement() > 0) { + InternalLogger instance = InternalLoggerFactory.getInstance(getClass()); + instance.info("Initialized PauseDetectorWrapper more than once."); + } + + PauseDetector pauseDetector = new SimplePauseDetector(TimeUnit.MILLISECONDS.toNanos(10), + TimeUnit.MILLISECONDS.toNanos(10), 3); + + shutdownHook = new Thread("ShutdownHook for SimplePauseDetector") { + @Override + public void run() { + pauseDetector.shutdown(); + } + }; + + this.pauseDetector = pauseDetector; + Runtime.getRuntime().addShutdownHook(shutdownHook); + } + } + } + + /** + * Decrements the usage counter. When reaching {@code 0}, the {@link PauseDetector} instance is released. + */ + public void release() { + + if (counter.decrementAndGet() == 0) { + + // Avoid concurrent calls to retain/release + synchronized (mutex) { + + instanceCounter.decrementAndGet(); + + pauseDetector.shutdown(); + pauseDetector = null; + + try { + Runtime.getRuntime().removeShutdownHook(shutdownHook); + } catch (IllegalStateException e) { + // Do not prevent shutdown + // java.lang.IllegalStateException: Shutdown in progress + } + + shutdownHook = null; + } + } + } + } +} diff --git a/src/main/java/com/lambdaworks/redis/metrics/DefaultCommandLatencyCollectorOptions.java b/src/main/java/io/lettuce/core/metrics/DefaultCommandLatencyCollectorOptions.java similarity index 76% rename from src/main/java/com/lambdaworks/redis/metrics/DefaultCommandLatencyCollectorOptions.java rename to src/main/java/io/lettuce/core/metrics/DefaultCommandLatencyCollectorOptions.java index eb4fa60ac9..cd9ce2c838 100644 --- a/src/main/java/com/lambdaworks/redis/metrics/DefaultCommandLatencyCollectorOptions.java +++ b/src/main/java/io/lettuce/core/metrics/DefaultCommandLatencyCollectorOptions.java @@ -1,9 +1,23 @@ -package com.lambdaworks.redis.metrics; +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.metrics; import java.util.concurrent.TimeUnit; -import com.lambdaworks.redis.ClientOptions; -import com.lambdaworks.redis.internal.LettuceAssert; +import io.lettuce.core.internal.LettuceAssert; /** * The default implementation of {@link CommandLatencyCollectorOptions}. @@ -25,6 +39,7 @@ public class DefaultCommandLatencyCollectorOptions implements CommandLatencyColl private final boolean resetLatenciesAfterEvent; private final boolean localDistinction; private final boolean enabled; + private final Builder builder; protected DefaultCommandLatencyCollectorOptions(Builder builder) { this.targetUnit = builder.targetUnit; @@ -32,17 +47,7 @@ protected DefaultCommandLatencyCollectorOptions(Builder builder) { this.resetLatenciesAfterEvent = builder.resetLatenciesAfterEvent; this.localDistinction = builder.localDistinction; this.enabled = builder.enabled; - } - - /** - * Returns a new {@link DefaultCommandLatencyCollectorOptions.Builder} to construct - * {@link DefaultCommandLatencyCollectorOptions}. - * - * @return a new {@link DefaultCommandLatencyCollectorOptions.Builder} to construct - * {@link DefaultCommandLatencyCollectorOptions}. - */ - public static DefaultCommandLatencyCollectorOptions.Builder builder() { - return new DefaultCommandLatencyCollectorOptions.Builder(); + this.builder = builder; } /** @@ -63,10 +68,35 @@ public static DefaultCommandLatencyCollectorOptions disabled() { return DISABLED; } + /** + * Returns a new {@link DefaultCommandLatencyCollectorOptions.Builder} to construct + * {@link DefaultCommandLatencyCollectorOptions}. + * + * @return a new {@link DefaultCommandLatencyCollectorOptions.Builder} to construct + * {@link DefaultCommandLatencyCollectorOptions}. + */ + public static DefaultCommandLatencyCollectorOptions.Builder builder() { + return new DefaultCommandLatencyCollectorOptions.Builder(); + } + + /** + * Returns a builder to create new {@link DefaultCommandLatencyCollectorOptions} whose settings are replicated from the + * current {@link DefaultCommandLatencyCollectorOptions}. + * + * @return a a {@link CommandLatencyCollectorOptions.Builder} to create new {@link DefaultCommandLatencyCollectorOptions} + * whose settings are replicated from the current {@link DefaultCommandLatencyCollectorOptions} + * + * @since 5.1 + */ + @Override + public DefaultCommandLatencyCollectorOptions.Builder mutate() { + return this.builder; + } + /** * Builder for {@link DefaultCommandLatencyCollectorOptions}. */ - public static class Builder { + public static class Builder implements CommandLatencyCollectorOptions.Builder { private TimeUnit targetUnit = DEFAULT_TARGET_UNIT; private double[] targetPercentiles = DEFAULT_TARGET_PERCENTILES; @@ -74,31 +104,40 @@ public static class Builder { private boolean localDistinction = DEFAULT_LOCAL_DISTINCTION; private boolean enabled = DEFAULT_ENABLED; - /** - * @deprecated Use {@link ClientOptions#builder()} - */ - @Deprecated - public Builder() { + private Builder() { } /** * Disable the latency collector. - * - * @return this + * + * @return this {@link Builder}. */ + @Override public Builder disable() { this.enabled = false; return this; } + /** + * Enable the latency collector. + * + * @return this {@link Builder}. + * @since 5.1 + */ + @Override + public Builder enable() { + this.enabled = true; + return this; + } + /** * Set the target unit for the latencies. Defaults to {@link TimeUnit#MILLISECONDS}. See * {@link DefaultCommandLatencyCollectorOptions#DEFAULT_TARGET_UNIT}. 
- * - * @param targetUnit the target unit, must not be {@literal null} - * @return this * + * @param targetUnit the target unit, must not be {@literal null} + * @return this {@link Builder}. */ + @Override public Builder targetUnit(TimeUnit targetUnit) { LettuceAssert.notNull(targetUnit, "TargetUnit must not be null"); this.targetUnit = targetUnit; @@ -110,9 +149,9 @@ public Builder targetUnit(TimeUnit targetUnit) { * {@link DefaultCommandLatencyCollectorOptions#DEFAULT_TARGET_PERCENTILES}. * * @param targetPercentiles the percentiles which should be emitted, must not be {@literal null} - * - * @return this + * @return this {@link Builder}. */ + @Override public Builder targetPercentiles(double[] targetPercentiles) { LettuceAssert.notNull(targetPercentiles, "TargetPercentiles must not be null"); this.targetPercentiles = targetPercentiles; @@ -125,9 +164,9 @@ public Builder targetPercentiles(double[] targetPercentiles) { * * @param resetLatenciesAfterEvent {@literal true} if the recorded latencies should be reset once the metrics event was * emitted - * - * @return this + * @return this {@link Builder}. */ + @Override public Builder resetLatenciesAfterEvent(boolean resetLatenciesAfterEvent) { this.resetLatenciesAfterEvent = resetLatenciesAfterEvent; return this; @@ -141,17 +180,18 @@ public Builder resetLatenciesAfterEvent(boolean resetLatenciesAfterEvent) { * {@link DefaultCommandLatencyCollectorOptions#DEFAULT_LOCAL_DISTINCTION}. * * @param localDistinction {@literal true} if latencies are recorded distinct on local level (per connection) - * @return this + * @return this {@link Builder}. */ + @Override public Builder localDistinction(boolean localDistinction) { this.localDistinction = localDistinction; return this; } /** - * * @return a new instance of {@link DefaultCommandLatencyCollectorOptions}. */ + @Override public DefaultCommandLatencyCollectorOptions build() { return new DefaultCommandLatencyCollectorOptions(this); } diff --git a/src/main/java/io/lettuce/core/metrics/MetricCollector.java b/src/main/java/io/lettuce/core/metrics/MetricCollector.java new file mode 100644 index 0000000000..bdea377c21 --- /dev/null +++ b/src/main/java/io/lettuce/core/metrics/MetricCollector.java @@ -0,0 +1,46 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.metrics; + +/** + * Generic metrics collector interface. A metrics collector collects metrics and emits metric events. + * + * @author Mark Paluch + * @param data type of the metrics + * @since 3.4 + * + */ +public interface MetricCollector { + + /** + * Shut down the metrics collector. + */ + void shutdown(); + + /** + * Returns the collected/aggregated metrics. + * + * @return the the collected/aggregated metrics + */ + T retrieveMetrics(); + + /** + * Returns {@literal true} if the metric collector is enabled. 
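These options are rarely built in isolation; they are normally handed to the client through ClientResources. A hedged sketch of that wiring, assuming the commandLatencyCollectorOptions(...) builder hook on DefaultClientResources (not part of this diff) and a placeholder redis://localhost URI:

```java
import java.util.concurrent.TimeUnit;

import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisURI;
import io.lettuce.core.metrics.DefaultCommandLatencyCollectorOptions;
import io.lettuce.core.resource.ClientResources;
import io.lettuce.core.resource.DefaultClientResources;

public class LatencyResourcesSketch {

    public static void main(String[] args) {

        ClientResources resources = DefaultClientResources.builder()
                .commandLatencyCollectorOptions(DefaultCommandLatencyCollectorOptions.builder()
                        .targetUnit(TimeUnit.MICROSECONDS)
                        .localDistinction(true) // one metric series per connection
                        .build())
                .build();

        RedisClient client = RedisClient.create(resources, RedisURI.create("redis://localhost"));

        // ... issue commands; latency metrics are emitted via the client's event bus ...

        client.shutdown();
        resources.shutdown();
    }
}
```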
+ * + * @return {@literal true} if the metric collector is enabled + */ + boolean isEnabled(); +} diff --git a/src/main/java/io/lettuce/core/metrics/package-info.java b/src/main/java/io/lettuce/core/metrics/package-info.java new file mode 100644 index 0000000000..b3be1f51ff --- /dev/null +++ b/src/main/java/io/lettuce/core/metrics/package-info.java @@ -0,0 +1,5 @@ +/** + * Collectors for client metrics. + */ +package io.lettuce.core.metrics; + diff --git a/src/main/java/com/lambdaworks/redis/models/command/CommandDetail.java b/src/main/java/io/lettuce/core/models/command/CommandDetail.java similarity index 85% rename from src/main/java/com/lambdaworks/redis/models/command/CommandDetail.java rename to src/main/java/io/lettuce/core/models/command/CommandDetail.java index ee09ba3be7..8ca8c96115 100644 --- a/src/main/java/com/lambdaworks/redis/models/command/CommandDetail.java +++ b/src/main/java/io/lettuce/core/models/command/CommandDetail.java @@ -1,4 +1,19 @@ -package com.lambdaworks.redis.models.command; +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.models.command; import java.io.Serializable; import java.util.Set; @@ -22,7 +37,7 @@ public CommandDetail() { /** * Constructs a {@link CommandDetail} - * + * * @param name name of the command, must not be {@literal null} * @param arity command arity specification * @param flags set of flags, must not be {@literal null} but may be empty diff --git a/src/main/java/com/lambdaworks/redis/models/command/CommandDetailParser.java b/src/main/java/io/lettuce/core/models/command/CommandDetailParser.java similarity index 80% rename from src/main/java/com/lambdaworks/redis/models/command/CommandDetailParser.java rename to src/main/java/io/lettuce/core/models/command/CommandDetailParser.java index acc4b30515..609326bf52 100644 --- a/src/main/java/com/lambdaworks/redis/models/command/CommandDetailParser.java +++ b/src/main/java/io/lettuce/core/models/command/CommandDetailParser.java @@ -1,13 +1,28 @@ -package com.lambdaworks.redis.models.command; +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.models.command; import java.util.*; -import com.lambdaworks.redis.internal.LettuceAssert; +import io.lettuce.core.internal.LettuceAssert; /** - * Parser for redis
    COMMAND/COMMAND INFOcommand output. - * + * Parser for Redis COMMAND/COMMAND INFO output. + * * @author Mark Paluch * @since 3.0 */ @@ -45,8 +60,8 @@ private CommandDetailParser() { } /** - * Parse the output of the redis COMMAND/COMMAND INFO command and convert to a list of {@link CommandDetail}. - * + * Parse the output of the Redis COMMAND/COMMAND INFO command and convert to a list of {@link CommandDetail}. + * * @param commandOutput the command output, must not be {@literal null} * @return RedisInstance */ @@ -61,7 +76,7 @@ public static List parse(List commandOutput) { } Collection collection = (Collection) o; - if (collection.size() != COMMAND_INFO_SIZE) { + if (collection.size() < COMMAND_INFO_SIZE) { continue; } @@ -83,6 +98,7 @@ private static CommandDetail parseCommandDetail(Collection collection) { Set parsedFlags = parseFlags(flags); + // TODO: Extract command grouping (ACL) return new CommandDetail(name, arity, parsedFlags, firstKey, lastKey, keyStepCount); } diff --git a/src/main/java/io/lettuce/core/models/command/package-info.java b/src/main/java/io/lettuce/core/models/command/package-info.java new file mode 100644 index 0000000000..79a0bba9ea --- /dev/null +++ b/src/main/java/io/lettuce/core/models/command/package-info.java @@ -0,0 +1,4 @@ +/** + * Model and parser to for the {@code COMMAND} and {@code COMMAND INFO} output. + */ +package io.lettuce.core.models.command; diff --git a/src/main/java/io/lettuce/core/models/role/RedisInstance.java b/src/main/java/io/lettuce/core/models/role/RedisInstance.java new file mode 100644 index 0000000000..debfbc05b7 --- /dev/null +++ b/src/main/java/io/lettuce/core/models/role/RedisInstance.java @@ -0,0 +1,38 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.models.role; + +/** + * Represents a redis instance according to the {@code ROLE} output. + * + * @author Mark Paluch + * @since 3.0 + */ +public interface RedisInstance { + + /** + * + * @return Redis instance role, see {@link io.lettuce.core.models.role.RedisInstance.Role} + */ + Role getRole(); + + /** + * Possible Redis instance roles. + */ + public enum Role { + MASTER, SLAVE, SENTINEL; + } +} diff --git a/src/main/java/io/lettuce/core/models/role/RedisMasterInstance.java b/src/main/java/io/lettuce/core/models/role/RedisMasterInstance.java new file mode 100644 index 0000000000..fce08375fc --- /dev/null +++ b/src/main/java/io/lettuce/core/models/role/RedisMasterInstance.java @@ -0,0 +1,96 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.models.role; + +import java.io.Serializable; +import java.util.Collections; +import java.util.List; + +import io.lettuce.core.internal.LettuceAssert; + +/** + * Represents a master instance. + * + * @author Mark Paluch + * @since 3.0 + */ +@SuppressWarnings("serial") +public class RedisMasterInstance implements RedisInstance, Serializable { + + private long replicationOffset; + private List replicas = Collections.emptyList(); + + public RedisMasterInstance() { + } + + /** + * Constructs a {@link RedisMasterInstance} + * + * @param replicationOffset the replication offset + * @param replicas list of replicas, must not be {@literal null} but may be empty + */ + public RedisMasterInstance(long replicationOffset, List replicas) { + LettuceAssert.notNull(replicas, "Replicas must not be null"); + this.replicationOffset = replicationOffset; + this.replicas = replicas; + } + + /** + * + * @return always {@link io.lettuce.core.models.role.RedisInstance.Role#MASTER} + */ + @Override + public Role getRole() { + return Role.MASTER; + } + + public long getReplicationOffset() { + return replicationOffset; + } + + @Deprecated + public List getSlaves() { + return getReplicas(); + } + + public List getReplicas() { + return replicas; + } + + public void setReplicationOffset(long replicationOffset) { + this.replicationOffset = replicationOffset; + } + + @Deprecated + public void setSlaves(List replicas) { + setReplicas(replicas); + } + + public void setReplicas(List replicas) { + LettuceAssert.notNull(replicas, "Replicas must not be null"); + this.replicas = replicas; + } + + @Override + public String toString() { + final StringBuilder sb = new StringBuilder(); + sb.append(getClass().getSimpleName()); + sb.append(" [replicationOffset=").append(replicationOffset); + sb.append(", replicas=").append(replicas); + sb.append(']'); + return sb.toString(); + } +} diff --git a/src/main/java/io/lettuce/core/models/role/RedisNodeDescription.java b/src/main/java/io/lettuce/core/models/role/RedisNodeDescription.java new file mode 100644 index 0000000000..47b4228c2a --- /dev/null +++ b/src/main/java/io/lettuce/core/models/role/RedisNodeDescription.java @@ -0,0 +1,33 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.models.role; + +import io.lettuce.core.RedisURI; + +/** + * Description of a single Redis Node. 
+ * + * @author Mark Paluch + * @since 4.0 + */ +public interface RedisNodeDescription extends RedisInstance { + + /** + * + * @return the URI of the node + */ + RedisURI getUri(); +} diff --git a/src/main/java/io/lettuce/core/models/role/RedisSentinelInstance.java b/src/main/java/io/lettuce/core/models/role/RedisSentinelInstance.java new file mode 100644 index 0000000000..c5afc106dc --- /dev/null +++ b/src/main/java/io/lettuce/core/models/role/RedisSentinelInstance.java @@ -0,0 +1,77 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.models.role; + +import java.io.Serializable; +import java.util.Collections; +import java.util.List; + +import io.lettuce.core.internal.LettuceAssert; + +/** + * Redis sentinel instance. + * + * @author Mark Paluch + * @since 3.0 + */ +@SuppressWarnings("serial") +public class RedisSentinelInstance implements RedisInstance, Serializable { + private List monitoredMasters = Collections.emptyList(); + + public RedisSentinelInstance() { + } + + /** + * Constructs a {@link RedisSentinelInstance} + * + * @param monitoredMasters list of monitored masters, must not be {@literal null} but may be empty + */ + public RedisSentinelInstance(List monitoredMasters) { + LettuceAssert.notNull(monitoredMasters, "List of monitoredMasters must not be null"); + this.monitoredMasters = monitoredMasters; + } + + /** + * + * @return always {@link io.lettuce.core.models.role.RedisInstance.Role#SENTINEL} + */ + @Override + public Role getRole() { + return Role.SENTINEL; + } + + /** + * + * @return List of monitored master names. + */ + public List getMonitoredMasters() { + return monitoredMasters; + } + + public void setMonitoredMasters(List monitoredMasters) { + LettuceAssert.notNull(monitoredMasters, "List of monitoredMasters must not be null"); + this.monitoredMasters = monitoredMasters; + } + + @Override + public String toString() { + final StringBuilder sb = new StringBuilder(); + sb.append(getClass().getSimpleName()); + sb.append(" [monitoredMasters=").append(monitoredMasters); + sb.append(']'); + return sb.toString(); + } +} diff --git a/src/main/java/io/lettuce/core/models/role/RedisSlaveInstance.java b/src/main/java/io/lettuce/core/models/role/RedisSlaveInstance.java new file mode 100644 index 0000000000..b3f193774f --- /dev/null +++ b/src/main/java/io/lettuce/core/models/role/RedisSlaveInstance.java @@ -0,0 +1,118 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.models.role; + +import java.io.Serializable; + +import io.lettuce.core.internal.LettuceAssert; + +/** + * Redis replica instance. + * + * @author Mark Paluch + * @since 3.0 + */ +@SuppressWarnings("serial") +public class RedisSlaveInstance implements RedisInstance, Serializable { + private ReplicationPartner master; + private State state; + + public RedisSlaveInstance() { + } + + /** + * Constructs a {@link RedisSlaveInstance} + * + * @param master master for the replication, must not be {@literal null} + * @param state replica state, must not be {@literal null} + */ + RedisSlaveInstance(ReplicationPartner master, State state) { + LettuceAssert.notNull(master, "Master must not be null"); + LettuceAssert.notNull(state, "State must not be null"); + this.master = master; + this.state = state; + } + + /** + * + * @return always {@link io.lettuce.core.models.role.RedisInstance.Role#SLAVE} + */ + @Override + public Role getRole() { + return Role.SLAVE; + } + + /** + * + * @return the replication master. + */ + public ReplicationPartner getMaster() { + return master; + } + + /** + * + * @return Slave state. + */ + public State getState() { + return state; + } + + public void setMaster(ReplicationPartner master) { + LettuceAssert.notNull(master, "Master must not be null"); + this.master = master; + } + + public void setState(State state) { + LettuceAssert.notNull(state, "State must not be null"); + this.state = state; + } + + /** + * State of the Replica. + */ + public enum State { + /** + * the instance needs to connect to its master. + */ + CONNECT, + + /** + * the replica-master connection is in progress. + */ + CONNECTING, + + /** + * the master and replica are trying to perform the synchronization. + */ + SYNC, + + /** + * the replica is online. + */ + CONNECTED; + } + + @Override + public String toString() { + final StringBuilder sb = new StringBuilder(); + sb.append(getClass().getSimpleName()); + sb.append(" [master=").append(master); + sb.append(", state=").append(state); + sb.append(']'); + return sb.toString(); + } +} diff --git a/src/main/java/io/lettuce/core/models/role/ReplicationPartner.java b/src/main/java/io/lettuce/core/models/role/ReplicationPartner.java new file mode 100644 index 0000000000..431fae3749 --- /dev/null +++ b/src/main/java/io/lettuce/core/models/role/ReplicationPartner.java @@ -0,0 +1,84 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.models.role; + +import java.io.Serializable; + +import io.lettuce.core.internal.HostAndPort; +import io.lettuce.core.internal.LettuceAssert; + +/** + * Replication partner providing the host and the replication offset. 
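The role model types above (RedisInstance, RedisMasterInstance, RedisSlaveInstance, ReplicationPartner) are usually populated from the ROLE command rather than constructed by hand. A rough sketch, assuming a server reachable at redis://localhost and using the RoleParser introduced further down in this change:

```java
import java.util.List;

import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisURI;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.models.role.RedisInstance;
import io.lettuce.core.models.role.RedisMasterInstance;
import io.lettuce.core.models.role.RoleParser;

public class RoleCheckSketch {

    public static void main(String[] args) {

        RedisClient client = RedisClient.create(RedisURI.create("redis://localhost"));
        StatefulRedisConnection<String, String> connection = client.connect();

        // ROLE returns a nested array reply; RoleParser maps it onto the model classes above.
        List<Object> roleOutput = connection.sync().role();
        RedisInstance instance = RoleParser.parse(roleOutput);

        if (instance.getRole() == RedisInstance.Role.MASTER) {
            RedisMasterInstance master = (RedisMasterInstance) instance;
            System.out.println("Replicas attached: " + master.getReplicas().size());
        }

        connection.close();
        client.shutdown();
    }
}
```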
+ * + * @author Mark Paluch + * @since 3.0 + */ +@SuppressWarnings("serial") +public class ReplicationPartner implements Serializable { + private HostAndPort host; + private long replicationOffset; + + public ReplicationPartner() { + + } + + /** + * Constructs a replication partner. + * + * @param host host information, must not be {@literal null} + * @param replicationOffset the replication offset + */ + public ReplicationPartner(HostAndPort host, long replicationOffset) { + LettuceAssert.notNull(host, "Host must not be null"); + this.host = host; + this.replicationOffset = replicationOffset; + } + + /** + * + * @return host with port of the replication partner. + */ + public HostAndPort getHost() { + return host; + } + + /** + * + * @return the replication offset. + */ + public long getReplicationOffset() { + return replicationOffset; + } + + public void setHost(HostAndPort host) { + LettuceAssert.notNull(host, "Host must not be null"); + this.host = host; + } + + public void setReplicationOffset(long replicationOffset) { + this.replicationOffset = replicationOffset; + } + + @Override + public String toString() { + final StringBuilder sb = new StringBuilder(); + sb.append(getClass().getSimpleName()); + sb.append(" [host=").append(host); + sb.append(", replicationOffset=").append(replicationOffset); + sb.append(']'); + return sb.toString(); + } +} diff --git a/src/main/java/io/lettuce/core/models/role/RoleParser.java b/src/main/java/io/lettuce/core/models/role/RoleParser.java new file mode 100644 index 0000000000..de2c3399a3 --- /dev/null +++ b/src/main/java/io/lettuce/core/models/role/RoleParser.java @@ -0,0 +1,211 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.models.role; + +import java.util.*; + +import io.lettuce.core.internal.HostAndPort; +import io.lettuce.core.internal.LettuceAssert; + +/** + * Parser for Redis ROLE command output. + * + * @author Mark Paluch + * @since 3.0 + */ +@SuppressWarnings("serial") +public class RoleParser { + protected static final Map ROLE_MAPPING; + protected static final Map SLAVE_STATE_MAPPING; + + static { + Map roleMap = new HashMap<>(); + roleMap.put("master", RedisInstance.Role.MASTER); + roleMap.put("slave", RedisInstance.Role.SLAVE); + roleMap.put("sentinel", RedisInstance.Role.SENTINEL); + + ROLE_MAPPING = Collections.unmodifiableMap(roleMap); + + Map replicas = new HashMap<>(); + replicas.put("connect", RedisSlaveInstance.State.CONNECT); + replicas.put("connected", RedisSlaveInstance.State.CONNECTED); + replicas.put("connecting", + RedisSlaveInstance.State.CONNECTING); + replicas.put("sync", RedisSlaveInstance.State.SYNC); + + SLAVE_STATE_MAPPING = Collections.unmodifiableMap(replicas); + } + + /** + * Utility constructor. + */ + private RoleParser() { + + } + + /** + * Parse the output of the Redis ROLE command and convert to a RedisInstance. + * + * @param roleOutput output of the Redis ROLE command. 
+ * @return RedisInstance + */ + public static RedisInstance parse(List roleOutput) { + LettuceAssert.isTrue(roleOutput != null && !roleOutput.isEmpty(), "Empty role output"); + LettuceAssert.isTrue(roleOutput.get(0) instanceof String && ROLE_MAPPING.containsKey(roleOutput.get(0)), + () -> "First role element must be a string (any of " + ROLE_MAPPING.keySet() + ")"); + + RedisInstance.Role role = ROLE_MAPPING.get(roleOutput.get(0)); + + switch (role) { + case MASTER: + return parseMaster(roleOutput); + + case SLAVE: + return parseReplica(roleOutput); + + case SENTINEL: + return parseSentinel(roleOutput); + } + + return null; + + } + + private static RedisInstance parseMaster(List roleOutput) { + + long replicationOffset = getMasterReplicationOffset(roleOutput); + List replicas = getMasterReplicaReplicationPartners(roleOutput); + + RedisMasterInstance redisMasterInstanceRole = new RedisMasterInstance(replicationOffset, + Collections.unmodifiableList(replicas)); + return redisMasterInstanceRole; + } + + private static RedisInstance parseReplica(List roleOutput) { + + Iterator iterator = roleOutput.iterator(); + iterator.next(); // skip first element + + String ip = getStringFromIterator(iterator, ""); + long port = getLongFromIterator(iterator, 0); + + String stateString = getStringFromIterator(iterator, null); + long replicationOffset = getLongFromIterator(iterator, 0); + + ReplicationPartner master = new ReplicationPartner(HostAndPort.of(ip, Math.toIntExact(port)), replicationOffset); + + RedisSlaveInstance.State state = SLAVE_STATE_MAPPING.get(stateString); + + RedisSlaveInstance redisSlaveInstanceRole = new RedisSlaveInstance(master, state); + return redisSlaveInstanceRole; + } + + private static RedisInstance parseSentinel(List roleOutput) { + + Iterator iterator = roleOutput.iterator(); + iterator.next(); // skip first element + + List monitoredMasters = getMonitoredMasters(iterator); + + RedisSentinelInstance result = new RedisSentinelInstance(Collections.unmodifiableList(monitoredMasters)); + return result; + } + + private static List getMonitoredMasters(Iterator iterator) { + List monitoredMasters = new ArrayList<>(); + + if (!iterator.hasNext()) { + return monitoredMasters; + } + + Object masters = iterator.next(); + + if (!(masters instanceof Collection)) { + return monitoredMasters; + } + + for (Object monitoredMaster : (Collection) masters) { + if (monitoredMaster instanceof String) { + monitoredMasters.add((String) monitoredMaster); + } + } + + return monitoredMasters; + } + + private static List getMasterReplicaReplicationPartners(List roleOutput) { + + List replicas = new ArrayList<>(); + if (roleOutput.size() > 2 && roleOutput.get(2) instanceof Collection) { + Collection segments = (Collection) roleOutput.get(2); + + for (Object output : segments) { + if (!(output instanceof Collection)) { + continue; + } + + ReplicationPartner replicationPartner = getMasterSlaveReplicationPartner((Collection) output); + replicas.add(replicationPartner); + } + } + return replicas; + } + + private static ReplicationPartner getMasterSlaveReplicationPartner(Collection segments) { + + Iterator iterator = segments.iterator(); + + String ip = getStringFromIterator(iterator, ""); + long port = getLongFromIterator(iterator, 0); + long replicationOffset = getLongFromIterator(iterator, 0); + + return new ReplicationPartner(HostAndPort.of(ip, Math.toIntExact(port)), replicationOffset); + } + + private static long getLongFromIterator(Iterator iterator, long defaultValue) { + if (iterator.hasNext()) { + 
Object object = iterator.next(); + if (object instanceof String) { + return Long.parseLong((String) object); + } + + if (object instanceof Number) { + return ((Number) object).longValue(); + } + } + return defaultValue; + } + + private static String getStringFromIterator(Iterator iterator, String defaultValue) { + if (iterator.hasNext()) { + Object object = iterator.next(); + if (object instanceof String) { + return (String) object; + } + } + return defaultValue; + } + + private static long getMasterReplicationOffset(List roleOutput) { + long replicationOffset = 0; + + if (roleOutput.size() > 1 && roleOutput.get(1) instanceof Number) { + Number number = (Number) roleOutput.get(1); + replicationOffset = number.longValue(); + } + return replicationOffset; + } +} diff --git a/src/main/java/io/lettuce/core/models/role/package-info.java b/src/main/java/io/lettuce/core/models/role/package-info.java new file mode 100644 index 0000000000..2f6c92c430 --- /dev/null +++ b/src/main/java/io/lettuce/core/models/role/package-info.java @@ -0,0 +1,4 @@ +/** + * Model and parser for the {@code ROLE} output. + */ +package io.lettuce.core.models.role; diff --git a/src/main/java/io/lettuce/core/models/stream/PendingEntry.java b/src/main/java/io/lettuce/core/models/stream/PendingEntry.java new file mode 100644 index 0000000000..7ede83bdc4 --- /dev/null +++ b/src/main/java/io/lettuce/core/models/stream/PendingEntry.java @@ -0,0 +1,61 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.models.stream; + +/** + * Value object representing an entry of the Pending Entry List retrieved via {@literal XPENDING}. 
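As a quick orientation for the `RoleParser` added above, the sketch below feeds it a hand-built replica `ROLE` reply. The values are illustrative (not captured from a live server) and are typed the way the parser expects them (port and offset as `Long`):

```java
import java.util.Arrays;
import java.util.List;

import io.lettuce.core.models.role.RedisInstance;
import io.lettuce.core.models.role.RedisSlaveInstance;
import io.lettuce.core.models.role.RoleParser;

// ROLE reply of a replica: role, master ip, master port, link state, replication offset
List<Object> roleOutput = Arrays.asList("slave", "127.0.0.1", 6379L, "connected", 3167038L);

RedisInstance instance = RoleParser.parse(roleOutput);
// instance.getRole() == RedisInstance.Role.SLAVE
RedisSlaveInstance replica = (RedisSlaveInstance) instance;
// replica.getMaster() -> 127.0.0.1:6379, replica.getState() -> CONNECTED
```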
+ * + * @author Mark Paluch + * @since 5.1 + */ +public class PendingEntry { + + private final String messageId; + private final String consumer; + private final long millisSinceDelivery; + private final long deliveryCount; + + public PendingEntry(String messageId, String consumer, long millisSinceDelivery, long deliveryCount) { + + this.messageId = messageId; + this.consumer = consumer; + this.millisSinceDelivery = millisSinceDelivery; + this.deliveryCount = deliveryCount; + } + + public String getMessageId() { + return messageId; + } + + public String getConsumer() { + return consumer; + } + + public long getMillisSinceDelivery() { + return millisSinceDelivery; + } + + public long getDeliveryCount() { + return deliveryCount; + } + + @Override + public String toString() { + + return String.format("%s [messageId='%s', consumer='%s', millisSinceDelivery=%d, deliveryCount=%d]", getClass() + .getSimpleName(), messageId, consumer, millisSinceDelivery, deliveryCount); + } +} diff --git a/src/main/java/io/lettuce/core/models/stream/PendingMessage.java b/src/main/java/io/lettuce/core/models/stream/PendingMessage.java new file mode 100644 index 0000000000..4b9093aa81 --- /dev/null +++ b/src/main/java/io/lettuce/core/models/stream/PendingMessage.java @@ -0,0 +1,60 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.models.stream; + +import java.time.Duration; + +/** + * Value object representing a pending message reported through XPENDING with range/limit. + * + * @author Mark Paluch + * @since 5.1 + */ +public class PendingMessage { + + private final String id; + private final String consumer; + private final long msSinceLastDelivery; + private final long redeliveryCount; + + public PendingMessage(String id, String consumer, long msSinceLastDelivery, long redeliveryCount) { + + this.id = id; + this.consumer = consumer; + this.msSinceLastDelivery = msSinceLastDelivery; + this.redeliveryCount = redeliveryCount; + } + + public String getId() { + return id; + } + + public String getConsumer() { + return consumer; + } + + public long getMsSinceLastDelivery() { + return msSinceLastDelivery; + } + + public Duration getSinceLastDelivery() { + return Duration.ofMillis(getMsSinceLastDelivery()); + } + + public long getRedeliveryCount() { + return redeliveryCount; + } +} diff --git a/src/main/java/io/lettuce/core/models/stream/PendingMessages.java b/src/main/java/io/lettuce/core/models/stream/PendingMessages.java new file mode 100644 index 0000000000..fc6e32803b --- /dev/null +++ b/src/main/java/io/lettuce/core/models/stream/PendingMessages.java @@ -0,0 +1,52 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.models.stream; + +import java.util.Map; + +import io.lettuce.core.Range; + +/** + * Value object representing the output of the Redis {@literal XPENDING} reporting a summary on pending messages. + * + * @author Mark Paluch + * @since 5.1 + */ +public class PendingMessages { + + private final long count; + private final Range messageIds; + private final Map consumerMessageCount; + + public PendingMessages(long count, Range messageIds, Map consumerMessageCount) { + + this.count = count; + this.messageIds = messageIds; + this.consumerMessageCount = consumerMessageCount; + } + + public long getCount() { + return count; + } + + public Range getMessageIds() { + return messageIds; + } + + public Map getConsumerMessageCount() { + return consumerMessageCount; + } +} diff --git a/src/main/java/io/lettuce/core/models/stream/PendingParser.java b/src/main/java/io/lettuce/core/models/stream/PendingParser.java new file mode 100644 index 0000000000..ac25c2c89b --- /dev/null +++ b/src/main/java/io/lettuce/core/models/stream/PendingParser.java @@ -0,0 +1,99 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.models.stream; + +import java.util.*; + +import io.lettuce.core.Range; +import io.lettuce.core.internal.LettuceAssert; + +/** + * Parser for redis XPENDING command output. + * + * @author Mark Paluch + * @since 5.1 + */ +public class PendingParser { + + /** + * Utility constructor. + */ + private PendingParser() { + } + + /** + * Parse the output of the Redis {@literal XPENDING} command with {@link Range}. + * + * @param xpendingOutput output of the Redis {@literal XPENDING}. + * @return list of {@link PendingMessage}s. + */ + @SuppressWarnings("unchecked") + public static List parseRange(List xpendingOutput) { + + LettuceAssert.notNull(xpendingOutput, "XPENDING output must not be null"); + + List result = new ArrayList<>(); + + for (Object element : xpendingOutput) { + + LettuceAssert.isTrue(element instanceof List, "Output elements must be a List"); + + List message = (List) element; + + String messageId = (String) message.get(0); + String consumer = (String) message.get(1); + Long msSinceLastDelivery = (Long) message.get(2); + Long deliveryCount = (Long) message.get(3); + + result.add(new PendingMessage(messageId, consumer, msSinceLastDelivery, deliveryCount)); + } + + return result; + } + + /** + * Parse the output of the Redis {@literal XPENDING} reporting a summary on pending messages. 
+ * + * @param xpendingOutput output of the Redis {@literal XPENDING}. + * @return {@link PendingMessages}. + */ + @SuppressWarnings("unchecked") + public static PendingMessages parse(List xpendingOutput) { + + LettuceAssert.notNull(xpendingOutput, "XPENDING output must not be null"); + LettuceAssert.isTrue(xpendingOutput.size() == 4, "XPENDING output must have exactly four output elements"); + + Long count = (Long) xpendingOutput.get(0); + String from = (String) xpendingOutput.get(1); + String to = (String) xpendingOutput.get(2); + + Range messageIdRange = Range.create(from, to); + + Collection consumerMessageCounts = (Collection) xpendingOutput.get(3); + + Map counts = new LinkedHashMap<>(); + + for (Object element : consumerMessageCounts) { + + LettuceAssert.isTrue(element instanceof List, "Consumer message counts must be a List"); + List messageCount = (List) element; + + counts.put((String) messageCount.get(0), (Long) messageCount.get(1)); + } + + return new PendingMessages(count, messageIdRange, Collections.unmodifiableMap(counts)); + } +} diff --git a/src/main/java/io/lettuce/core/models/stream/package-info.java b/src/main/java/io/lettuce/core/models/stream/package-info.java new file mode 100644 index 0000000000..c830a764e6 --- /dev/null +++ b/src/main/java/io/lettuce/core/models/stream/package-info.java @@ -0,0 +1,21 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +/** + * Model and parser for the Stream-related command output such as {@literal XPENDING}. + */ +package io.lettuce.core.models.stream; + diff --git a/src/main/java/io/lettuce/core/output/ArrayOutput.java b/src/main/java/io/lettuce/core/output/ArrayOutput.java new file mode 100644 index 0000000000..6e38489e17 --- /dev/null +++ b/src/main/java/io/lettuce/core/output/ArrayOutput.java @@ -0,0 +1,95 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import java.nio.ByteBuffer; +import java.util.*; + +import io.lettuce.core.codec.RedisCodec; + +/** + * {@link java.util.List} of objects and lists to support dynamic nested structures (List with mixed content of values and + * sublists). + * + * @param Key type. + * @param Value type. 
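To make the `PendingParser` contract concrete, here is a hedged sketch of parsing an `XPENDING` summary reply. The nested values are illustrative and deliberately use the types the parser casts to (`Long` counts, `String` message IDs), not verbatim wire output:

```java
import java.util.Arrays;
import java.util.List;

import io.lettuce.core.models.stream.PendingMessages;
import io.lettuce.core.models.stream.PendingParser;

// XPENDING summary: total count, lowest id, highest id, per-consumer counts
List<Object> xpendingOutput = Arrays.asList(2L, "1526569498055-0", "1526569506935-0",
        Arrays.asList(Arrays.asList("consumer-1", 2L)));

PendingMessages summary = PendingParser.parse(xpendingOutput);
// summary.getCount() == 2
// summary.getMessageIds() spans 1526569498055-0 .. 1526569506935-0
// summary.getConsumerMessageCount().get("consumer-1") == 2L
```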
+ * @author Mark Paluch + */ +public class ArrayOutput extends CommandOutput> { + + private boolean initialized; + private Deque counts = new ArrayDeque(); + private Deque> stack = new ArrayDeque>(); + + public ArrayOutput(RedisCodec codec) { + super(codec, Collections.emptyList()); + } + + @Override + public void set(ByteBuffer bytes) { + if (!initialized) { + stack.push(new ArrayList<>()); + initialized = true; + } + if (bytes != null) { + V value = codec.decodeValue(bytes); + stack.peek().add(value); + } + } + + @Override + public void set(long integer) { + if (!initialized) { + stack.push(new ArrayList<>()); + initialized = true; + } + stack.peek().add(integer); + } + + @Override + public void complete(int depth) { + if (counts.isEmpty()) { + return; + } + + if (depth == stack.size()) { + if (stack.peek().size() == counts.peek()) { + List pop = stack.pop(); + counts.pop(); + if (!stack.isEmpty()) { + stack.peek().add(pop); + } + } + } + } + + @Override + public void multi(int count) { + + if (!initialized) { + output = OutputFactory.newList(count); + initialized = true; + } + + if (stack.isEmpty()) { + stack.push(output); + } else { + stack.push(OutputFactory.newList(count)); + + } + counts.push(count); + } +} diff --git a/src/main/java/io/lettuce/core/output/BooleanListOutput.java b/src/main/java/io/lettuce/core/output/BooleanListOutput.java new file mode 100644 index 0000000000..a91f640256 --- /dev/null +++ b/src/main/java/io/lettuce/core/output/BooleanListOutput.java @@ -0,0 +1,66 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import java.util.Collections; +import java.util.List; + +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.internal.LettuceAssert; + +/** + * {@link java.util.List} of boolean output. + * + * @param Key type. + * @param Value type. + * @author Will Glozer + * @author Mark Paluch + */ +public class BooleanListOutput extends CommandOutput> implements StreamingOutput { + + private boolean initialized; + private Subscriber subscriber; + + public BooleanListOutput(RedisCodec codec) { + super(codec, Collections.emptyList()); + setSubscriber(ListSubscriber.instance()); + } + + @Override + public void set(long integer) { + subscriber.onNext(output, (integer == 1) ? 
Boolean.TRUE : Boolean.FALSE); + } + + @Override + public void multi(int count) { + + if (!initialized) { + output = OutputFactory.newList(count); + initialized = true; + } + } + + @Override + public void setSubscriber(Subscriber subscriber) { + LettuceAssert.notNull(subscriber, "Subscriber must not be null"); + this.subscriber = subscriber; + } + + @Override + public Subscriber getSubscriber() { + return subscriber; + } +} diff --git a/src/main/java/io/lettuce/core/output/BooleanOutput.java b/src/main/java/io/lettuce/core/output/BooleanOutput.java new file mode 100644 index 0000000000..383c16b7a5 --- /dev/null +++ b/src/main/java/io/lettuce/core/output/BooleanOutput.java @@ -0,0 +1,51 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import java.nio.ByteBuffer; + +import io.lettuce.core.codec.RedisCodec; + +/** + * Boolean output. The actual value is returned as an integer where 0 indicates false and 1 indicates true, or as a null bulk + * reply for script output. + * + * @param Key type. + * @param Value type. + * @author Will Glozer + * @author Mark Paluch + */ +public class BooleanOutput extends CommandOutput { + + public BooleanOutput(RedisCodec codec) { + super(codec, null); + } + + @Override + public void set(long integer) { + output = (integer == 1) ? Boolean.TRUE : Boolean.FALSE; + } + + @Override + public void set(ByteBuffer bytes) { + output = (bytes != null) ? Boolean.TRUE : Boolean.FALSE; + } + + @Override + public void set(boolean value) { + output = value; + } +} diff --git a/src/main/java/io/lettuce/core/output/ByteArrayOutput.java b/src/main/java/io/lettuce/core/output/ByteArrayOutput.java new file mode 100644 index 0000000000..ef18aa8708 --- /dev/null +++ b/src/main/java/io/lettuce/core/output/ByteArrayOutput.java @@ -0,0 +1,43 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import java.nio.ByteBuffer; + +import io.lettuce.core.codec.RedisCodec; + +/** + * Byte array output. + * + * @param Key type. + * @param Value type. 
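The `ArrayOutput` introduced above builds nested lists from multi-bulk replies via the `multi`/`set`/`complete` callbacks. The following sketch drives those callbacks by hand the way the protocol decoder would for `*2` containing a bulk string and a nested `*2` of integers; in real use only the command handler invokes these methods, and `StringCodec.UTF8` is assumed as the codec:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

import io.lettuce.core.codec.StringCodec;
import io.lettuce.core.output.ArrayOutput;

ArrayOutput<String, String> out = new ArrayOutput<>(StringCodec.UTF8);

out.multi(2);                                                      // *2 (outer array)
out.set(ByteBuffer.wrap("foo".getBytes(StandardCharsets.UTF_8)));  // $3 foo
out.multi(2);                                                      // *2 (nested array)
out.set(1L);                                                       // :1
out.set(2L);                                                       // :2
out.complete(2);                                                   // nested array closed
out.complete(1);                                                   // outer array closed

// out.get() -> [foo, [1, 2]]
```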
+ * @author Will Glozer + * @author Mark Paluch + */ +public class ByteArrayOutput extends CommandOutput { + + public ByteArrayOutput(RedisCodec codec) { + super(codec, null); + } + + @Override + public void set(ByteBuffer bytes) { + if (bytes != null) { + output = new byte[bytes.remaining()]; + bytes.get(output); + } + } +} diff --git a/src/main/java/io/lettuce/core/output/CommandOutput.java b/src/main/java/io/lettuce/core/output/CommandOutput.java new file mode 100644 index 0000000000..b17a45d26c --- /dev/null +++ b/src/main/java/io/lettuce/core/output/CommandOutput.java @@ -0,0 +1,237 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import java.nio.ByteBuffer; + +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.internal.LettuceAssert; + +/** + * Abstract representation of the output of a redis command. + * + * @param Key type. + * @param Value type. + * @param Output type. + * @author Will Glozer + */ +public abstract class CommandOutput { + + protected final RedisCodec codec; + protected T output; + protected String error; + + /** + * Initialize a new instance that encodes and decodes keys and values using the supplied codec. + * + * @param codec Codec used to encode/decode keys and values, must not be {@literal null}. + * @param output Initial value of output. + */ + public CommandOutput(RedisCodec codec, T output) { + LettuceAssert.notNull(codec, "RedisCodec must not be null"); + this.codec = codec; + this.output = output; + } + + /** + * Get the command output. + * + * @return The command output. + */ + public T get() { + return output; + } + + /** + * Set the command output to a sequence of bytes, or null. Concrete {@link CommandOutput} implementations must override this + * method unless they only receive an integer value which cannot be null. + * + * @param bytes The command output, or null. + */ + public void set(ByteBuffer bytes) { + throw new IllegalStateException(); + } + + /** + * Set the command output to a sequence of bytes, or null representing a simple string. Concrete {@link CommandOutput} + * implementations can override this method unless they only receive an integer value which cannot be null. + * + * @param bytes The command output, or null. + */ + public void setSingle(ByteBuffer bytes) { + set(bytes); + } + + /** + * Set the command output to a big number. Concrete {@link CommandOutput} implementations can override this method unless + * they only receive an integer value which cannot be null. + * + * @param bytes The command output, or null. + * @since 6.0/RESP 3 + */ + public void setBigNumber(ByteBuffer bytes) { + set(bytes); + } + + /** + * Set the command output to a 64-bit signed integer. Concrete {@link CommandOutput} implementations must override this + * method unless they only receive a byte array value. + * + * @param integer The command output. 
+ */ + public void set(long integer) { + throw new IllegalStateException(); + } + + /** + * Set the command output to a floating-point number. Concrete {@link CommandOutput} implementations must override this + * method unless they only receive a byte array value. + * + * @param number The command output. + * @since 6.0/RESP 3 + */ + public void set(double number) { + throw new IllegalStateException(); + } + + /** + * Set the command output to a boolean. Concrete {@link CommandOutput} implementations must override this method unless they + * only receive a byte array value. + * + * @param value The command output. + * @since 6.0/RESP 3 + */ + public void set(boolean value) { + throw new IllegalStateException(); + } + + /** + * Set command output to an error message from the server. + * + * @param error Error message. + */ + public void setError(ByteBuffer error) { + this.error = decodeAscii(error); + } + + /** + * Set command output to an error message from the client. + * + * @param error Error message. + */ + public void setError(String error) { + this.error = error; + } + + /** + * Check if the command resulted in an error. + * + * @return true if command resulted in an error. + */ + public boolean hasError() { + return this.error != null; + } + + /** + * Get the error that occurred. + * + * @return The error. + */ + public String getError() { + return error; + } + + /** + * Mark the command output complete. + * + * @param depth Remaining depth of output queue. + * + */ + public void complete(int depth) { + // nothing to do by default + } + + protected String decodeAscii(ByteBuffer bytes) { + if (bytes == null) { + return null; + } + + char[] chars = new char[bytes.remaining()]; + for (int i = 0; i < chars.length; i++) { + chars[i] = (char) bytes.get(); + } + return new String(chars); + } + + @Override + public String toString() { + final StringBuilder sb = new StringBuilder(); + sb.append(getClass().getSimpleName()); + sb.append(" [output=").append(output); + sb.append(", error='").append(error).append('\''); + sb.append(']'); + return sb.toString(); + } + + /** + * Mark the beginning of a multi sequence (array). + * + * @param count expected number of elements in this multi sequence. + */ + public void multi(int count) { + + } + + /** + * Mark the beginning of a multi sequence (array). + * + * @param count expected number of elements in this multi sequence. + * @since 6.0/RESP 3 + */ + public void multiArray(int count) { + multi(count); + } + + /** + * Mark the beginning of a multi sequence (push-array). + * + * @param count expected number of elements in this multi sequence. + * @since 6.0/RESP 3 + */ + public void multiPush(int count) { + multi(count); + } + + /** + * Mark the beginning of a multi sequence (map). + * + * @param count expected number of elements in this multi sequence. + * @since 6.0/RESP 3 + */ + public void multiMap(int count) { + multi(count * 2); + } + + /** + * Mark the beginning of a set. + * + * @param count expected number of elements in this multi sequence. + * @since 6.0/RESP 3 + */ + public void multiSet(int count) { + multi(count); + } +} diff --git a/src/main/java/io/lettuce/core/output/DateOutput.java b/src/main/java/io/lettuce/core/output/DateOutput.java new file mode 100644 index 0000000000..19b404e713 --- /dev/null +++ b/src/main/java/io/lettuce/core/output/DateOutput.java @@ -0,0 +1,40 @@ +/* + * Copyright 2011-2020 the original author or authors. 
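Since `CommandOutput` is the extension point for custom reply handling, a minimal sketch of a subclass that captures a single reply as UTF-8 text could look as follows. `Utf8TextOutput` is a hypothetical class for illustration, not part of this patch:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

import io.lettuce.core.codec.RedisCodec;
import io.lettuce.core.output.CommandOutput;

class Utf8TextOutput<K, V> extends CommandOutput<K, V, String> {

    Utf8TextOutput(RedisCodec<K, V> codec) {
        super(codec, null);
    }

    @Override
    public void set(ByteBuffer bytes) {
        // bulk and simple-string replies both arrive here; a null reply stays null
        output = (bytes == null) ? null : StandardCharsets.UTF_8.decode(bytes).toString();
    }
}
```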
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import java.util.Date; + +import io.lettuce.core.codec.RedisCodec; + +/** + * Date output with no milliseconds. + * + * @param Key type. + * @param Value type. + * @author Will Glozer + * @author Mark Paluch + */ +public class DateOutput extends CommandOutput { + + public DateOutput(RedisCodec codec) { + super(codec, null); + } + + @Override + public void set(long time) { + output = new Date(time * 1000); + } +} diff --git a/src/main/java/io/lettuce/core/output/DefaultTransactionResult.java b/src/main/java/io/lettuce/core/output/DefaultTransactionResult.java new file mode 100644 index 0000000000..74e3bb6bcc --- /dev/null +++ b/src/main/java/io/lettuce/core/output/DefaultTransactionResult.java @@ -0,0 +1,90 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import java.util.Iterator; +import java.util.List; +import java.util.stream.Stream; + +import io.lettuce.core.TransactionResult; +import io.lettuce.core.internal.LettuceAssert; + +/** + * Result of a {@code MULTI} transaction. + * + * @author Mark Paluch + * @since 5.0 + */ +class DefaultTransactionResult implements Iterable, TransactionResult { + + private final boolean discarded; + private final List result; + + /** + * Creates a new {@link DefaultTransactionResult}. + * + * @param discarded {@literal true} if the transaction is discarded. + * @param result the transaction result, must not be {@literal null}. 
+ */ + public DefaultTransactionResult(boolean discarded, List result) { + + LettuceAssert.notNull(result, "Result must not be null"); + + this.discarded = discarded; + this.result = result; + } + + @Override + public boolean wasDiscarded() { + return discarded; + } + + @Override + public Iterator iterator() { + return result.iterator(); + } + + @Override + public int size() { + return result.size(); + } + + @Override + public boolean isEmpty() { + return result.isEmpty(); + } + + @Override + public T get(int index) { + return (T) result.get(index); + } + + @Override + public Stream stream() { + return result.stream(); + } + + @Override + public String toString() { + + StringBuilder sb = new StringBuilder(); + sb.append(getClass().getSimpleName()); + sb.append(" [wasRolledBack=").append(discarded); + sb.append(", responses=").append(size()); + sb.append(']'); + return sb.toString(); + } +} diff --git a/src/main/java/io/lettuce/core/output/DoubleOutput.java b/src/main/java/io/lettuce/core/output/DoubleOutput.java new file mode 100644 index 0000000000..2a4052d67e --- /dev/null +++ b/src/main/java/io/lettuce/core/output/DoubleOutput.java @@ -0,0 +1,46 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import static java.lang.Double.parseDouble; + +import java.nio.ByteBuffer; + +import io.lettuce.core.codec.RedisCodec; + +/** + * Double output, may be null. + * + * @param Key type. + * @param Value type. + * @author Will Glozer + */ +public class DoubleOutput extends CommandOutput { + + public DoubleOutput(RedisCodec codec) { + super(codec, null); + } + + @Override + public void set(ByteBuffer bytes) { + output = (bytes == null) ? null : parseDouble(decodeAscii(bytes)); + } + + @Override + public void set(double number) { + output = number; + } +} diff --git a/src/main/java/io/lettuce/core/output/GenericMapOutput.java b/src/main/java/io/lettuce/core/output/GenericMapOutput.java new file mode 100644 index 0000000000..39fb1dbf59 --- /dev/null +++ b/src/main/java/io/lettuce/core/output/GenericMapOutput.java @@ -0,0 +1,93 @@ +/* + * Copyright 2019-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import java.nio.ByteBuffer; +import java.util.LinkedHashMap; +import java.util.Map; + +import io.lettuce.core.codec.RedisCodec; + +/** + * {@link Map} of keys and objects output. + * + * @param Key type. 
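`DefaultTransactionResult` backs the `TransactionResult` returned by `EXEC`, so replies stay positional in command order. A hedged usage sketch, assuming `commands` is an existing synchronous `RedisCommands<String, String>`:

```java
commands.multi();
commands.set("key", "value");   // queued, returns null inside MULTI
commands.get("key");            // queued

TransactionResult result = commands.exec();

if (!result.wasDiscarded()) {
    String setReply = result.get(0);   // "OK"
    String getReply = result.get(1);   // "value"
}
```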
+ * @param Value type. + * + * @author Mark Paluch + * @since 6.0/RESP3 + */ +public class GenericMapOutput extends CommandOutput> { + + private K key; + + public GenericMapOutput(RedisCodec codec) { + super(codec, null); + } + + @Override + public void set(ByteBuffer bytes) { + + if (key == null) { + key = (bytes == null) ? null : codec.decodeKey(bytes); + return; + } + + Object value = (bytes == null) ? null : codec.decodeValue(bytes); + output.put(key, value); + key = null; + } + + @Override + public void setBigNumber(ByteBuffer bytes) { + set(bytes); + } + + @Override + @SuppressWarnings("unchecked") + public void set(long integer) { + + if (key == null) { + key = (K) Long.valueOf(integer); + return; + } + + V value = (V) Long.valueOf(integer); + output.put(key, value); + key = null; + } + + @Override + public void set(double number) { + + if (key == null) { + key = (K) Double.valueOf(number); + return; + } + + Object value = Double.valueOf(number); + output.put(key, value); + key = null; + } + + @Override + public void multi(int count) { + + if (output == null) { + output = new LinkedHashMap<>(count / 2, 1); + } + } +} diff --git a/src/main/java/io/lettuce/core/output/GeoCoordinatesListOutput.java b/src/main/java/io/lettuce/core/output/GeoCoordinatesListOutput.java new file mode 100644 index 0000000000..e0aa799dcb --- /dev/null +++ b/src/main/java/io/lettuce/core/output/GeoCoordinatesListOutput.java @@ -0,0 +1,92 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import static java.lang.Double.parseDouble; + +import java.nio.ByteBuffer; +import java.util.Collections; +import java.util.List; + +import io.lettuce.core.GeoCoordinates; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.internal.LettuceAssert; + +/** + * A list output that creates a list with {@link GeoCoordinates}'s. + * + * @author Mark Paluch + */ +public class GeoCoordinatesListOutput extends CommandOutput> implements + StreamingOutput { + + private Double x; + private boolean initialized; + private Subscriber subscriber; + + public GeoCoordinatesListOutput(RedisCodec codec) { + super(codec, Collections.emptyList()); + setSubscriber(ListSubscriber.instance()); + } + + @Override + public void set(ByteBuffer bytes) { + + if (bytes == null) { + subscriber.onNext(output, null); + return; + } + + double value = (bytes == null) ? 
0 : parseDouble(decodeAscii(bytes)); + set(value); + } + + @Override + public void set(double number) { + + if (x == null) { + x = number; + return; + } + + subscriber.onNext(output, new GeoCoordinates(x, number)); + x = null; + } + + @Override + public void multi(int count) { + + if (!initialized) { + output = OutputFactory.newList(count); + initialized = true; + } + + if (count == -1) { + subscriber.onNext(output, null); + } + } + + @Override + public void setSubscriber(Subscriber subscriber) { + LettuceAssert.notNull(subscriber, "Subscriber must not be null"); + this.subscriber = subscriber; + } + + @Override + public Subscriber getSubscriber() { + return subscriber; + } +} diff --git a/src/main/java/io/lettuce/core/output/GeoCoordinatesValueListOutput.java b/src/main/java/io/lettuce/core/output/GeoCoordinatesValueListOutput.java new file mode 100644 index 0000000000..86857b1293 --- /dev/null +++ b/src/main/java/io/lettuce/core/output/GeoCoordinatesValueListOutput.java @@ -0,0 +1,93 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import static java.lang.Double.parseDouble; + +import java.nio.ByteBuffer; +import java.util.Collections; +import java.util.List; + +import io.lettuce.core.GeoCoordinates; +import io.lettuce.core.Value; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.internal.LettuceAssert; + +/** + * A list output that creates a list with {@link GeoCoordinates} {@link Value}s. 
+ * + * @author Mark Paluch + */ +public class GeoCoordinatesValueListOutput extends CommandOutput>> implements + StreamingOutput> { + + private Double x; + private boolean initialized; + private Subscriber> subscriber; + + public GeoCoordinatesValueListOutput(RedisCodec codec) { + super(codec, Collections.emptyList()); + setSubscriber(ListSubscriber.instance()); + } + + @Override + public void set(ByteBuffer bytes) { + + if (bytes == null) { + subscriber.onNext(output, Value.empty()); + return; + } + + double value = parseDouble(decodeAscii(bytes)); + set(value); + } + + @Override + public void set(double number) { + + if (x == null) { + x = number; + return; + } + + subscriber.onNext(output, Value.fromNullable(new GeoCoordinates(x, number))); + x = null; + } + + @Override + public void multi(int count) { + + if (!initialized) { + output = OutputFactory.newList(count / 2); + initialized = true; + } + + if (count == -1) { + subscriber.onNext(output, Value.empty()); + } + } + + @Override + public void setSubscriber(Subscriber> subscriber) { + LettuceAssert.notNull(subscriber, "Subscriber must not be null"); + this.subscriber = subscriber; + } + + @Override + public Subscriber> getSubscriber() { + return subscriber; + } +} diff --git a/src/main/java/io/lettuce/core/output/GeoWithinListOutput.java b/src/main/java/io/lettuce/core/output/GeoWithinListOutput.java new file mode 100644 index 0000000000..bd897486b6 --- /dev/null +++ b/src/main/java/io/lettuce/core/output/GeoWithinListOutput.java @@ -0,0 +1,124 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import static java.lang.Double.parseDouble; + +import java.nio.ByteBuffer; +import java.util.List; + +import io.lettuce.core.GeoCoordinates; +import io.lettuce.core.GeoWithin; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.internal.LettuceAssert; + +/** + * A list output that creates a list with either double/long or {@link GeoCoordinates}'s. 
+ * + * @author Mark Paluch + */ +public class GeoWithinListOutput extends CommandOutput>> implements StreamingOutput> { + + private V member; + private Double distance; + private Long geohash; + private GeoCoordinates coordinates; + + private Double x; + + private boolean withDistance; + private boolean withHash; + private boolean withCoordinates; + private Subscriber> subscriber; + + public GeoWithinListOutput(RedisCodec codec, boolean withDistance, boolean withHash, boolean withCoordinates) { + super(codec, OutputFactory.newList(16)); + this.withDistance = withDistance; + this.withHash = withHash; + this.withCoordinates = withCoordinates; + setSubscriber(ListSubscriber.instance()); + } + + @Override + public void set(long integer) { + + if (member == null) { + member = (V) (Long) integer; + return; + } + + if (withHash) { + geohash = integer; + } + } + + @Override + public void set(ByteBuffer bytes) { + + if (member == null) { + member = codec.decodeValue(bytes); + return; + } + + double value = (bytes == null) ? 0 : parseDouble(decodeAscii(bytes)); + set(value); + } + + @Override + public void set(double number) { + + if (withDistance) { + if (distance == null) { + distance = number; + return; + } + } + + if (withCoordinates) { + if (x == null) { + x = number; + return; + } + + coordinates = new GeoCoordinates(x, number); + } + } + + @Override + public void complete(int depth) { + + if (depth == 1) { + subscriber.onNext(output, new GeoWithin(member, distance, geohash, coordinates)); + + member = null; + distance = null; + geohash = null; + coordinates = null; + x = null; + } + } + + @Override + public void setSubscriber(Subscriber> subscriber) { + LettuceAssert.notNull(subscriber, "Subscriber must not be null"); + this.subscriber = subscriber; + } + + @Override + public Subscriber> getSubscriber() { + return subscriber; + } +} diff --git a/src/main/java/io/lettuce/core/output/IntegerOutput.java b/src/main/java/io/lettuce/core/output/IntegerOutput.java new file mode 100644 index 0000000000..47c315c60b --- /dev/null +++ b/src/main/java/io/lettuce/core/output/IntegerOutput.java @@ -0,0 +1,45 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import java.nio.ByteBuffer; + +import io.lettuce.core.codec.RedisCodec; + +/** + * 64-bit integer output, may be null. + * + * @param Key type. + * @param Value type. 
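`GeoWithinListOutput` backs `GEORADIUS`-style replies when distance, hash, or coordinates are requested, surfacing them as `GeoWithin` values. A call-site sketch, assuming `commands` is a synchronous `RedisCommands<String, String>` and the `georadius` overload and `GeoWithin` accessor names from the geo command API:

```java
List<GeoWithin<String>> hits = commands.georadius("Sicily", 15.0, 37.0, 200, GeoArgs.Unit.km,
        new GeoArgs().withDistance().withCoordinates());

for (GeoWithin<String> hit : hits) {
    // distance and coordinates are only populated because they were requested via GeoArgs
    System.out.printf("%s: %.2f km at %s%n", hit.getMember(), hit.getDistance(), hit.getCoordinates());
}
```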
+ * @author Will Glozer + * @author Mark Paluch + */ +public class IntegerOutput extends CommandOutput { + + public IntegerOutput(RedisCodec codec) { + super(codec, null); + } + + @Override + public void set(long integer) { + output = integer; + } + + @Override + public void set(ByteBuffer bytes) { + output = null; + } +} diff --git a/src/main/java/io/lettuce/core/output/KeyListOutput.java b/src/main/java/io/lettuce/core/output/KeyListOutput.java new file mode 100644 index 0000000000..2fd8177207 --- /dev/null +++ b/src/main/java/io/lettuce/core/output/KeyListOutput.java @@ -0,0 +1,72 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import java.nio.ByteBuffer; +import java.util.Collections; +import java.util.List; + +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.internal.LettuceAssert; + +/** + * {@link List} of keys output. + * + * @param Key type. + * @param Value type. + * @author Will Glozer + * @author Mark Paluch + */ +public class KeyListOutput extends CommandOutput> implements StreamingOutput { + + private boolean initialized; + private Subscriber subscriber; + + public KeyListOutput(RedisCodec codec) { + super(codec, Collections.emptyList()); + setSubscriber(ListSubscriber.instance()); + } + + @Override + public void set(ByteBuffer bytes) { + + if (bytes == null) { + return; + } + + subscriber.onNext(output, bytes == null ? null : codec.decodeKey(bytes)); + } + + @Override + public void multi(int count) { + + if (!initialized) { + output = OutputFactory.newList(count); + initialized = true; + } + } + + @Override + public void setSubscriber(Subscriber subscriber) { + LettuceAssert.notNull(subscriber, "Subscriber must not be null"); + this.subscriber = subscriber; + } + + @Override + public Subscriber getSubscriber() { + return subscriber; + } +} diff --git a/src/main/java/io/lettuce/core/output/KeyOutput.java b/src/main/java/io/lettuce/core/output/KeyOutput.java new file mode 100644 index 0000000000..c1667e590d --- /dev/null +++ b/src/main/java/io/lettuce/core/output/KeyOutput.java @@ -0,0 +1,41 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import java.nio.ByteBuffer; + +import io.lettuce.core.codec.RedisCodec; + +/** + * Key output. + * + * @param Key type. + * @param Value type. 
+ * + * @author Will Glozer + * @author Mark Paluch + */ +public class KeyOutput extends CommandOutput { + + public KeyOutput(RedisCodec codec) { + super(codec, null); + } + + @Override + public void set(ByteBuffer bytes) { + output = (bytes == null) ? null : codec.decodeKey(bytes); + } +} diff --git a/src/main/java/io/lettuce/core/output/KeyScanOutput.java b/src/main/java/io/lettuce/core/output/KeyScanOutput.java new file mode 100644 index 0000000000..80d705cc54 --- /dev/null +++ b/src/main/java/io/lettuce/core/output/KeyScanOutput.java @@ -0,0 +1,40 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import java.nio.ByteBuffer; + +import io.lettuce.core.KeyScanCursor; +import io.lettuce.core.codec.RedisCodec; + +/** + * {@link io.lettuce.core.KeyScanCursor} for scan cursor output. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + */ +public class KeyScanOutput extends ScanOutput> { + + public KeyScanOutput(RedisCodec codec) { + super(codec, new KeyScanCursor()); + } + + @Override + protected void setOutput(ByteBuffer bytes) { + output.getKeys().add(bytes == null ? null : codec.decodeKey(bytes)); + } +} diff --git a/src/main/java/io/lettuce/core/output/KeyScanStreamingOutput.java b/src/main/java/io/lettuce/core/output/KeyScanStreamingOutput.java new file mode 100644 index 0000000000..c2cd408283 --- /dev/null +++ b/src/main/java/io/lettuce/core/output/KeyScanStreamingOutput.java @@ -0,0 +1,45 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import java.nio.ByteBuffer; + +import io.lettuce.core.StreamScanCursor; +import io.lettuce.core.codec.RedisCodec; + +/** + * Streaming API for multiple Keys. You can implement this interface in order to receive a call to {@code onKey} on every key. + * Key uniqueness is not guaranteed. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + */ +public class KeyScanStreamingOutput extends ScanOutput { + + private final KeyStreamingChannel channel; + + public KeyScanStreamingOutput(RedisCodec codec, KeyStreamingChannel channel) { + super(codec, new StreamScanCursor()); + this.channel = channel; + } + + @Override + protected void setOutput(ByteBuffer bytes) { + channel.onKey(bytes == null ? 
null : codec.decodeKey(bytes)); + output.setCount(output.getCount() + 1); + } +} diff --git a/src/main/java/io/lettuce/core/output/KeyStreamingChannel.java b/src/main/java/io/lettuce/core/output/KeyStreamingChannel.java new file mode 100644 index 0000000000..3c024b3c80 --- /dev/null +++ b/src/main/java/io/lettuce/core/output/KeyStreamingChannel.java @@ -0,0 +1,34 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +/** + * Streaming API for multiple Keys. You can implement this interface in order to receive a call to {@code onKey} on every key. + * Key uniqueness is not guaranteed. + * + * @param Key type. + * @author Mark Paluch + * @since 3.0 + */ +@FunctionalInterface +public interface KeyStreamingChannel extends StreamingChannel{ + /** + * Called on every incoming key. + * + * @param key the key + */ + void onKey(K key); +} diff --git a/src/main/java/io/lettuce/core/output/KeyStreamingOutput.java b/src/main/java/io/lettuce/core/output/KeyStreamingOutput.java new file mode 100644 index 0000000000..61ac5978f1 --- /dev/null +++ b/src/main/java/io/lettuce/core/output/KeyStreamingOutput.java @@ -0,0 +1,44 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import java.nio.ByteBuffer; + +import io.lettuce.core.codec.RedisCodec; + +/** + * Streaming-Output of Keys. Returns the count of all keys (including null). + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + */ +public class KeyStreamingOutput extends CommandOutput { + + private final KeyStreamingChannel channel; + + public KeyStreamingOutput(RedisCodec codec, KeyStreamingChannel channel) { + super(codec, Long.valueOf(0)); + this.channel = channel; + } + + @Override + public void set(ByteBuffer bytes) { + + channel.onKey(bytes == null ? null : codec.decodeKey(bytes)); + output = output.longValue() + 1; + } +} diff --git a/src/main/java/io/lettuce/core/output/KeyValueListOutput.java b/src/main/java/io/lettuce/core/output/KeyValueListOutput.java new file mode 100644 index 0000000000..4151798037 --- /dev/null +++ b/src/main/java/io/lettuce/core/output/KeyValueListOutput.java @@ -0,0 +1,78 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
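The streaming channel shown above lets callers consume keys one by one instead of buffering a full `List`. A short sketch, assuming `commands` is a synchronous `RedisCommands<String, String>` and the channel-accepting `keys(...)` overload:

```java
KeyStreamingChannel<String> channel = key -> System.out.println("matched key: " + key);

// the returned Long is the number of keys pushed into the channel
Long count = commands.keys(channel, "user:*");
```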
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import java.nio.ByteBuffer; +import java.util.Collections; +import java.util.Iterator; +import java.util.List; + +import io.lettuce.core.KeyValue; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.internal.LettuceAssert; + +/** + * {@link List} of values output. + * + * @param Key type. + * @param Value type. + * + * @author Mark Paluch + */ +public class KeyValueListOutput extends CommandOutput>> implements + StreamingOutput> { + + private boolean initialized; + private Subscriber> subscriber; + private Iterable keys; + private Iterator keyIterator; + + public KeyValueListOutput(RedisCodec codec, Iterable keys) { + super(codec, Collections.emptyList()); + setSubscriber(ListSubscriber.instance()); + this.keys = keys; + } + + @Override + public void set(ByteBuffer bytes) { + + if (keyIterator == null) { + keyIterator = keys.iterator(); + } + + subscriber.onNext(output, KeyValue.fromNullable(keyIterator.next(), bytes == null ? null : codec.decodeValue(bytes))); + } + + @Override + public void multi(int count) { + + if (!initialized) { + output = OutputFactory.newList(count); + initialized = true; + } + } + + @Override + public void setSubscriber(Subscriber> subscriber) { + LettuceAssert.notNull(subscriber, "Subscriber must not be null"); + this.subscriber = subscriber; + } + + @Override + public Subscriber> getSubscriber() { + return subscriber; + } +} diff --git a/src/main/java/io/lettuce/core/output/KeyValueOutput.java b/src/main/java/io/lettuce/core/output/KeyValueOutput.java new file mode 100644 index 0000000000..3c40ad1e71 --- /dev/null +++ b/src/main/java/io/lettuce/core/output/KeyValueOutput.java @@ -0,0 +1,51 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import java.nio.ByteBuffer; + +import io.lettuce.core.KeyValue; +import io.lettuce.core.codec.RedisCodec; + +/** + * Key-value pair output. + * + * @param Key type. + * @param Value type. 
+ * @author Will Glozer + * @author Mark Paluch + */ +public class KeyValueOutput extends CommandOutput> { + + private K key; + + public KeyValueOutput(RedisCodec codec) { + super(codec, null); + } + + @Override + public void set(ByteBuffer bytes) { + + if (bytes != null) { + if (key == null) { + key = codec.decodeKey(bytes); + } else { + V value = codec.decodeValue(bytes); + output = KeyValue.fromNullable(key, value); + } + } + } +} diff --git a/src/main/java/io/lettuce/core/output/KeyValueScanStreamingOutput.java b/src/main/java/io/lettuce/core/output/KeyValueScanStreamingOutput.java new file mode 100644 index 0000000000..34bd6fdf6b --- /dev/null +++ b/src/main/java/io/lettuce/core/output/KeyValueScanStreamingOutput.java @@ -0,0 +1,56 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import java.nio.ByteBuffer; + +import io.lettuce.core.StreamScanCursor; +import io.lettuce.core.codec.RedisCodec; + +/** + * Streaming-Output of Key Value Pairs. Returns the count of all Key-Value pairs (including null). + * + * @param Key type. + * @param Value type. + * + * @author Mark Paluch + */ +public class KeyValueScanStreamingOutput extends ScanOutput { + + private K key; + private KeyValueStreamingChannel channel; + + public KeyValueScanStreamingOutput(RedisCodec codec, KeyValueStreamingChannel channel) { + super(codec, new StreamScanCursor()); + this.channel = channel; + } + + @Override + protected void setOutput(ByteBuffer bytes) { + + if (key == null) { + key = codec.decodeKey(bytes); + return; + } + + V value = (bytes == null) ? null : codec.decodeValue(bytes); + + channel.onKeyValue(key, value); + output.setCount(output.getCount() + 1); + key = null; + } + +} diff --git a/src/main/java/io/lettuce/core/output/KeyValueScoredValueOutput.java b/src/main/java/io/lettuce/core/output/KeyValueScoredValueOutput.java new file mode 100644 index 0000000000..c04a01207f --- /dev/null +++ b/src/main/java/io/lettuce/core/output/KeyValueScoredValueOutput.java @@ -0,0 +1,71 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import java.nio.ByteBuffer; + +import io.lettuce.core.KeyValue; +import io.lettuce.core.LettuceStrings; +import io.lettuce.core.ScoredValue; +import io.lettuce.core.codec.RedisCodec; + +/** + * {@link KeyValue} encapsulating {@link ScoredValue}. 
See {@code BZPOPMIN}/{@code BZPOPMAX} commands. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 5.1 + */ +public class KeyValueScoredValueOutput extends CommandOutput>> { + + private K key; + private V value; + + public KeyValueScoredValueOutput(RedisCodec codec) { + super(codec, null); + } + + @Override + public void set(ByteBuffer bytes) { + + if (bytes == null) { + return; + } + + if (key == null) { + key = codec.decodeKey(bytes); + return; + } + + if (value == null) { + value = codec.decodeValue(bytes); + return; + } + + double score = LettuceStrings.toDouble(decodeAscii(bytes)); + + set(score); + } + + @Override + public void set(double number) { + + output = KeyValue.just(key, ScoredValue.just(number, value)); + key = null; + value = null; + } +} diff --git a/src/main/java/io/lettuce/core/output/KeyValueStreamingChannel.java b/src/main/java/io/lettuce/core/output/KeyValueStreamingChannel.java new file mode 100644 index 0000000000..301c6d6faa --- /dev/null +++ b/src/main/java/io/lettuce/core/output/KeyValueStreamingChannel.java @@ -0,0 +1,36 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +/** + * Streaming API for multiple keys and values (tuples). You can implement this interface in order to receive a call to + * {@code onKeyValue} on every key-value. + * + * @param Value type. + * @author Mark Paluch + * @since 5.0 + */ +@FunctionalInterface +public interface KeyValueStreamingChannel extends StreamingChannel { + + /** + * Called on every incoming key/value pair. + * + * @param key the key + * @param value the value + */ + void onKeyValue(K key, V value); +} diff --git a/src/main/java/io/lettuce/core/output/KeyValueStreamingOutput.java b/src/main/java/io/lettuce/core/output/KeyValueStreamingOutput.java new file mode 100644 index 0000000000..188bc342d3 --- /dev/null +++ b/src/main/java/io/lettuce/core/output/KeyValueStreamingOutput.java @@ -0,0 +1,68 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import java.nio.ByteBuffer; +import java.util.Iterator; + +import io.lettuce.core.codec.RedisCodec; + +/** + * Streaming-Output of Key Value Pairs. Returns the count of all Key-Value pairs (including null). + * + * @param Key type. + * @param Value type. 
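The key/value streaming channel introduced above is consumed the same way. A short sketch, assuming a connected synchronous `RedisCommands<String, String>` handle named `redis` as in the earlier example, with an illustrative hash key:

```java
// Hash fields are delivered one pair at a time instead of being collected into a Map.
KeyValueStreamingChannel<String, String> channel =
        (field, value) -> System.out.println(field + " = " + value);

Long pairs = redis.hgetall(channel, "user:1"); // number of field/value pairs streamed
```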
+ * @author Mark Paluch + */ +public class KeyValueStreamingOutput extends CommandOutput { + + private Iterable keys; + private Iterator keyIterator; + private K key; + private KeyValueStreamingChannel channel; + + public KeyValueStreamingOutput(RedisCodec codec, KeyValueStreamingChannel channel) { + super(codec, Long.valueOf(0)); + this.channel = channel; + } + + public KeyValueStreamingOutput(RedisCodec codec, KeyValueStreamingChannel channel, Iterable keys) { + super(codec, Long.valueOf(0)); + this.channel = channel; + this.keys = keys; + } + + @Override + public void set(ByteBuffer bytes) { + + if (keys == null) { + if (key == null) { + key = codec.decodeKey(bytes); + return; + } + } else { + if (keyIterator == null) { + keyIterator = keys.iterator(); + } + key = keyIterator.next(); + } + + V value = (bytes == null) ? null : codec.decodeValue(bytes); + channel.onKeyValue(key, value); + output = output.longValue() + 1; + key = null; + } +} diff --git a/src/main/java/io/lettuce/core/output/ListOfMapsOutput.java b/src/main/java/io/lettuce/core/output/ListOfMapsOutput.java new file mode 100644 index 0000000000..b7bb7c6654 --- /dev/null +++ b/src/main/java/io/lettuce/core/output/ListOfMapsOutput.java @@ -0,0 +1,76 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import java.nio.ByteBuffer; +import java.util.ArrayList; +import java.util.LinkedHashMap; +import java.util.List; +import java.util.Map; + +import io.lettuce.core.codec.RedisCodec; + +/** + * {@link java.util.List} of maps output. + * + * @param Key type. + * @param Value type. + * + * @author Will Glozer + */ +public class ListOfMapsOutput extends CommandOutput>> { + + private MapOutput nested; + private int mapCount = -1; + private final List counts = new ArrayList<>(); + + public ListOfMapsOutput(RedisCodec codec) { + super(codec, new ArrayList<>()); + nested = new MapOutput<>(codec); + } + + @Override + public void set(ByteBuffer bytes) { + nested.set(bytes); + } + + @Override + public void complete(int depth) { + + if (!counts.isEmpty()) { + int expectedSize = counts.get(0); + + if (nested.get().size() == expectedSize) { + counts.remove(0); + output.add(new LinkedHashMap<>(nested.get())); + nested.get().clear(); + } + } + } + + @Override + public void multi(int count) { + + nested.multi(count); + + if (mapCount == -1) { + mapCount = count; + } else { + // div 2 because of key value pair counts twice + counts.add(count / 2); + } + } +} diff --git a/src/main/java/io/lettuce/core/output/ListSubscriber.java b/src/main/java/io/lettuce/core/output/ListSubscriber.java new file mode 100644 index 0000000000..fa7fb8e4bc --- /dev/null +++ b/src/main/java/io/lettuce/core/output/ListSubscriber.java @@ -0,0 +1,51 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import java.util.Collection; + +import io.lettuce.core.output.StreamingOutput.Subscriber; + +/** + * Simple subscriber feeding a {@link Collection} {@link #onNext(Collection, Object)}. Does not support {@link #onNext(Object) + * plain onNext}. + * + * @author Mark Paluch + * @author Julien Ruaux + * @since 4.2 + */ +public class ListSubscriber extends Subscriber { + + private static final ListSubscriber INSTANCE = new ListSubscriber<>(); + + private ListSubscriber() { + } + + @SuppressWarnings("unchecked") + public static ListSubscriber instance() { + return (ListSubscriber) INSTANCE; + } + + @Override + public void onNext(T t) { + throw new UnsupportedOperationException(); + } + + @Override + public void onNext(Collection outputTarget, T t) { + outputTarget.add(t); + } +} diff --git a/src/main/java/io/lettuce/core/output/MapOutput.java b/src/main/java/io/lettuce/core/output/MapOutput.java new file mode 100644 index 0000000000..21a2b10b08 --- /dev/null +++ b/src/main/java/io/lettuce/core/output/MapOutput.java @@ -0,0 +1,78 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import java.nio.ByteBuffer; +import java.util.Collections; +import java.util.LinkedHashMap; +import java.util.Map; + +import io.lettuce.core.codec.RedisCodec; + +/** + * {@link Map} of keys and values output. + * + * @param Key type. + * @param Value type. + * + * @author Will Glozer + * @author Mark Paluch + */ +public class MapOutput extends CommandOutput> { + + private boolean initialized; + private K key; + + public MapOutput(RedisCodec codec) { + super(codec, Collections.emptyMap()); + } + + @Override + public void set(ByteBuffer bytes) { + + if (key == null) { + key = (bytes == null) ? null : codec.decodeKey(bytes); + return; + } + + V value = (bytes == null) ? 
null : codec.decodeValue(bytes); + output.put(key, value); + key = null; + } + + @Override + @SuppressWarnings("unchecked") + public void set(long integer) { + + if (key == null) { + key = (K) Long.valueOf(integer); + return; + } + + V value = (V) Long.valueOf(integer); + output.put(key, value); + key = null; + } + + @Override + public void multi(int count) { + + if (!initialized) { + output = new LinkedHashMap<>(count / 2, 1); + initialized = true; + } + } +} diff --git a/src/main/java/io/lettuce/core/output/MapScanOutput.java b/src/main/java/io/lettuce/core/output/MapScanOutput.java new file mode 100644 index 0000000000..d4a8916ac1 --- /dev/null +++ b/src/main/java/io/lettuce/core/output/MapScanOutput.java @@ -0,0 +1,50 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import java.nio.ByteBuffer; + +import io.lettuce.core.MapScanCursor; +import io.lettuce.core.codec.RedisCodec; + +/** + * {@link io.lettuce.core.MapScanCursor} for scan cursor output. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + */ +public class MapScanOutput extends ScanOutput> { + + private K key; + + public MapScanOutput(RedisCodec codec) { + super(codec, new MapScanCursor()); + } + + @Override + protected void setOutput(ByteBuffer bytes) { + + if (key == null) { + key = codec.decodeKey(bytes); + return; + } + + V value = (bytes == null) ? null : codec.decodeValue(bytes); + output.getMap().put(key, value); + key = null; + } +} diff --git a/src/main/java/io/lettuce/core/output/MultiOutput.java b/src/main/java/io/lettuce/core/output/MultiOutput.java new file mode 100644 index 0000000000..f0cfc1efd7 --- /dev/null +++ b/src/main/java/io/lettuce/core/output/MultiOutput.java @@ -0,0 +1,160 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import java.nio.ByteBuffer; +import java.util.ArrayList; +import java.util.List; +import java.util.Queue; + +import io.lettuce.core.internal.ExceptionFactory; +import io.lettuce.core.TransactionResult; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.internal.LettuceFactories; +import io.lettuce.core.protocol.RedisCommand; + +/** + * Output of all commands within a MULTI block. + * + * @param Key type. + * @param Value type. 
+ * @author Will Glozer + * @author Mark Paluch + */ +public class MultiOutput extends CommandOutput { + + private final Queue> queue; + private List responses = new ArrayList<>(); + private Boolean discarded; + private Integer multi; + + public MultiOutput(RedisCodec codec) { + super(codec, null); + queue = LettuceFactories.newSpScQueue(); + } + + public void add(RedisCommand cmd) { + queue.add(cmd); + } + + public void cancel() { + for (RedisCommand c : queue) { + c.complete(); + } + } + + @Override + public void set(long integer) { + RedisCommand command = queue.peek(); + if (command != null && command.getOutput() != null) { + command.getOutput().set(integer); + } + } + + @Override + public void setSingle(ByteBuffer bytes) { + RedisCommand command = queue.peek(); + if (command != null && command.getOutput() != null) { + command.getOutput().setSingle(bytes); + } + } + + @Override + public void setBigNumber(ByteBuffer bytes) { + RedisCommand command = queue.peek(); + if (command != null && command.getOutput() != null) { + command.getOutput().setBigNumber(bytes); + } + } + + @Override + public void set(double number) { + RedisCommand command = queue.peek(); + if (command != null && command.getOutput() != null) { + command.getOutput().set(number); + } + } + + @Override + public void set(ByteBuffer bytes) { + + if (multi == null && bytes == null) { + discarded = true; + return; + } + + RedisCommand command = queue.peek(); + if (command != null && command.getOutput() != null) { + command.getOutput().set(bytes); + } + } + + @Override + public void multi(int count) { + + if (multi == null) { + multi = count; + } + if (discarded == null) { + discarded = count == -1; + } else { + if (!queue.isEmpty()) { + queue.peek().getOutput().multi(count); + } + } + } + + @Override + public void setError(ByteBuffer error) { + + if (discarded == null) { + super.setError(error); + return; + } + + CommandOutput output = queue.isEmpty() ? this : queue.peek().getOutput(); + output.setError(decodeAscii(error)); + } + + @Override + public void complete(int depth) { + + if (queue.isEmpty()) { + return; + } + + if (depth >= 1) { + RedisCommand cmd = queue.peek(); + cmd.getOutput().complete(depth - 1); + } + + if (depth == 1) { + RedisCommand cmd = queue.remove(); + CommandOutput o = cmd.getOutput(); + responses.add(!o.hasError() ? o.get() : ExceptionFactory.createExecutionException(o.getError())); + cmd.complete(); + } else if (depth == 0 && !queue.isEmpty()) { + for (RedisCommand cmd : queue) { + cmd.complete(); + } + } + } + + @Override + public TransactionResult get() { + return new DefaultTransactionResult(discarded == null ? false : discarded, responses); + } +} diff --git a/src/main/java/io/lettuce/core/output/NestedMultiOutput.java b/src/main/java/io/lettuce/core/output/NestedMultiOutput.java new file mode 100644 index 0000000000..878b32ecff --- /dev/null +++ b/src/main/java/io/lettuce/core/output/NestedMultiOutput.java @@ -0,0 +1,100 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
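`MultiOutput` above is what collects the per-command replies of a `MULTI`/`EXEC` block into a `TransactionResult`. From the caller's perspective that looks roughly like the following; the keys and expected replies are illustrative, and the `redis` handle is assumed as before:

```java
redis.multi();
redis.set("greeting", "hello");
redis.incr("counter");

TransactionResult result = redis.exec();

if (!result.wasDiscarded()) {
    String setReply = result.get(0); // "OK" from SET
    Long counter = result.get(1);    // value after INCR
    System.out.println(setReply + ", counter=" + counter);
}
```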
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import java.nio.ByteBuffer; +import java.util.ArrayList; +import java.util.Collections; +import java.util.Deque; +import java.util.List; + +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.internal.LettuceFactories; + +/** + * {@link List} of command outputs, possibly deeply nested. Decodes simple strings through {@link StringCodec#UTF8}. + * + * @param Key type. + * @param Value type. + * @author Will Glozer + * @author Mark Paluch + */ +public class NestedMultiOutput extends CommandOutput> { + + private final Deque> stack; + private int depth; + private boolean initialized; + + public NestedMultiOutput(RedisCodec codec) { + super(codec, Collections.emptyList()); + stack = LettuceFactories.newSpScQueue(); + depth = 0; + } + + @Override + public void set(long integer) { + + if (!initialized) { + output = new ArrayList<>(); + } + + output.add(integer); + } + + @Override + public void set(ByteBuffer bytes) { + + if (!initialized) { + output = new ArrayList<>(); + } + + output.add(bytes == null ? null : codec.decodeValue(bytes)); + } + + @Override + public void setSingle(ByteBuffer bytes) { + + if (!initialized) { + output = new ArrayList<>(); + } + + output.add(bytes == null ? null : StringCodec.UTF8.decodeValue(bytes)); + } + + @Override + public void complete(int depth) { + if (depth > 0 && depth < this.depth) { + output = stack.pop(); + this.depth--; + } + } + + @Override + public void multi(int count) { + + if (!initialized) { + output = OutputFactory.newList(Math.max(1, count)); + initialized = true; + } + + List a = OutputFactory.newList(count); + output.add(a); + stack.push(output); + output = a; + this.depth++; + } +} diff --git a/src/main/java/io/lettuce/core/output/OutputFactory.java b/src/main/java/io/lettuce/core/output/OutputFactory.java new file mode 100644 index 0000000000..8aa0f73151 --- /dev/null +++ b/src/main/java/io/lettuce/core/output/OutputFactory.java @@ -0,0 +1,42 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import java.util.*; + +/** + * @author Mark Paluch + */ +class OutputFactory { + + static List newList(int capacity) { + + if (capacity < 1) { + return Collections.emptyList(); + } + + return new ArrayList<>(Math.max(1, capacity)); + } + + static Set newSet(int capacity) { + + if (capacity < 1) { + return Collections.emptySet(); + } + + return new LinkedHashSet<>(capacity, 1); + } +} diff --git a/src/main/java/io/lettuce/core/output/ReplayOutput.java b/src/main/java/io/lettuce/core/output/ReplayOutput.java new file mode 100644 index 0000000000..6728adcbf4 --- /dev/null +++ b/src/main/java/io/lettuce/core/output/ReplayOutput.java @@ -0,0 +1,197 @@ +/* + * Copyright 2018-2020 the original author or authors. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import java.nio.ByteBuffer; +import java.util.ArrayList; +import java.util.List; + +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.codec.StringCodec; + +/** + * Replayable {@link CommandOutput} capturing output signals to replay these on a target {@link CommandOutput}. Replay is useful + * when the response requires inspection prior to dispatching the actual output to a command target. + * + * @author Mark Paluch + * @since 5.0.3 + */ +public class ReplayOutput extends CommandOutput> { + + /** + * Initialize a new instance that encodes and decodes keys and values using the supplied codec. + */ + public ReplayOutput() { + super((RedisCodec) StringCodec.ASCII, new ArrayList<>()); + } + + @Override + public void set(ByteBuffer bytes) { + output.add(new BulkString(bytes)); + } + + @Override + public void set(long integer) { + output.add(new Integer(integer)); + } + + @Override + public void setError(ByteBuffer error) { + error.mark(); + output.add(new ErrorBytes(error)); + error.reset(); + super.setError(error); + } + + @Override + public void setError(String error) { + output.add(new ErrorString(error)); + super.setError(error); + } + + @Override + public void complete(int depth) { + output.add(new Complete(depth)); + } + + @Override + public void multi(int count) { + output.add(new Multi(count)); + } + + /** + * Replay all captured signals on a {@link CommandOutput}. + * + * @param target the target {@link CommandOutput}. + */ + public void replay(CommandOutput target) { + + for (Signal signal : output) { + signal.replay(target); + } + } + + /** + * Encapsulates a replayable decoding signal. + */ + public static abstract class Signal { + + /** + * Replay the signal on a {@link CommandOutput}. 
+ * + * @param target + */ + protected abstract void replay(CommandOutput target); + } + + abstract static class BulkStringSupport extends Signal { + + final ByteBuffer message; + + BulkStringSupport(ByteBuffer message) { + + if (message != null) { + + // need to copy the buffer to prevent buffer lifecycle mismatch + this.message = ByteBuffer.allocate(message.remaining()); + this.message.put(message); + this.message.rewind(); + } else { + this.message = null; + } + } + } + + public static class BulkString extends BulkStringSupport { + + BulkString(ByteBuffer message) { + super(message); + } + + @Override + protected void replay(CommandOutput target) { + target.set(message); + } + } + + static class Integer extends Signal { + + final long message; + + Integer(long message) { + this.message = message; + } + + @Override + protected void replay(CommandOutput target) { + target.set(message); + } + } + + public static class ErrorBytes extends BulkStringSupport { + + ErrorBytes(ByteBuffer message) { + super(message); + } + + @Override + protected void replay(CommandOutput target) { + target.setError(message); + } + } + + static class ErrorString extends Signal { + + final String message; + + ErrorString(String message) { + this.message = message; + } + + @Override + protected void replay(CommandOutput target) { + target.setError(message); + } + } + + static class Multi extends Signal { + + final int count; + + Multi(int count) { + this.count = count; + } + + @Override + protected void replay(CommandOutput target) { + target.multi(count); + } + } + + static class Complete extends Signal { + + final int depth; + + public Complete(int depth) { + this.depth = depth; + } + + @Override + protected void replay(CommandOutput target) { + target.complete(depth); + } + } +} diff --git a/src/main/java/io/lettuce/core/output/ScanOutput.java b/src/main/java/io/lettuce/core/output/ScanOutput.java new file mode 100644 index 0000000000..cb3fa98862 --- /dev/null +++ b/src/main/java/io/lettuce/core/output/ScanOutput.java @@ -0,0 +1,54 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import java.nio.ByteBuffer; + +import io.lettuce.core.LettuceStrings; +import io.lettuce.core.ScanCursor; +import io.lettuce.core.codec.RedisCodec; + +/** + * Cursor handling output. + * + * @param Key type. + * @param Value type. + * @param Cursor type. 
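As a rough sketch of the capture-and-replay idea behind `ReplayOutput`, the following fabricated example records a single bulk-string signal and replays it into a `ValueOutput`. It is illustrative only and not taken from this change.

```java
// Capture decode signals first ...
ReplayOutput<String, String> replay = new ReplayOutput<>();
replay.set(java.nio.charset.StandardCharsets.UTF_8.encode("PONG"));
replay.complete(0);

// ... then replay them into the real output once the command target is known.
ValueOutput<String, String> target = new ValueOutput<>(io.lettuce.core.codec.StringCodec.UTF8);
replay.replay(target);

System.out.println(target.get()); // PONG
```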
+ * @author Mark Paluch + */ +public abstract class ScanOutput extends CommandOutput { + + public ScanOutput(RedisCodec codec, T cursor) { + super(codec, cursor); + } + + @Override + public void set(ByteBuffer bytes) { + + if (output.getCursor() == null) { + output.setCursor(decodeAscii(bytes)); + if (LettuceStrings.isNotEmpty(output.getCursor()) && "0".equals(output.getCursor())) { + output.setFinished(true); + } + return; + } + + setOutput(bytes); + + } + + protected abstract void setOutput(ByteBuffer bytes); +} diff --git a/src/main/java/io/lettuce/core/output/ScoredValueListOutput.java b/src/main/java/io/lettuce/core/output/ScoredValueListOutput.java new file mode 100644 index 0000000000..1850bfc7c4 --- /dev/null +++ b/src/main/java/io/lettuce/core/output/ScoredValueListOutput.java @@ -0,0 +1,84 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import java.nio.ByteBuffer; +import java.util.Collections; +import java.util.List; + +import io.lettuce.core.LettuceStrings; +import io.lettuce.core.ScoredValue; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.internal.LettuceAssert; + +/** + * {@link List} of values and their associated scores. + * + * @param Key type. + * @param Value type. + * @author Will Glozer + */ +public class ScoredValueListOutput extends CommandOutput>> implements + StreamingOutput> { + + private boolean initialized; + private Subscriber> subscriber; + private V value; + + public ScoredValueListOutput(RedisCodec codec) { + super(codec, Collections.emptyList()); + setSubscriber(ListSubscriber.instance()); + } + + @Override + public void set(ByteBuffer bytes) { + + if (value == null) { + value = codec.decodeValue(bytes); + return; + } + + double score = LettuceStrings.toDouble(decodeAscii(bytes)); + set(score); + } + + @Override + public void set(double number) { + + subscriber.onNext(output, ScoredValue.fromNullable(number, value)); + value = null; + } + + @Override + public void multi(int count) { + + if (!initialized) { + output = OutputFactory.newList(count); + initialized = true; + } + } + + @Override + public void setSubscriber(Subscriber> subscriber) { + LettuceAssert.notNull(subscriber, "Subscriber must not be null"); + this.subscriber = subscriber; + } + + @Override + public Subscriber> getSubscriber() { + return subscriber; + } +} diff --git a/src/main/java/io/lettuce/core/output/ScoredValueOutput.java b/src/main/java/io/lettuce/core/output/ScoredValueOutput.java new file mode 100644 index 0000000000..2e2a0669bb --- /dev/null +++ b/src/main/java/io/lettuce/core/output/ScoredValueOutput.java @@ -0,0 +1,61 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
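`ScanOutput` above marks the cursor as finished once the server returns cursor `0`. A typical caller-side loop that relies on that flag, again assuming the synchronous `redis` handle from the earlier examples:

```java
ScanCursor cursor = ScanCursor.INITIAL;

do {
    KeyScanCursor<String> page = redis.scan(cursor, ScanArgs.Builder.limit(100));
    page.getKeys().forEach(System.out::println);
    cursor = page; // carries the cursor position returned by the server
} while (!cursor.isFinished());
```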
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import java.nio.ByteBuffer; + +import io.lettuce.core.LettuceStrings; +import io.lettuce.core.ScoredValue; +import io.lettuce.core.codec.RedisCodec; + +/** + * A single {@link ScoredValue}. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 5.1 + */ +public class ScoredValueOutput extends CommandOutput> { + + private V value; + + public ScoredValueOutput(RedisCodec codec) { + super(codec, ScoredValue.empty()); + } + + @Override + public void set(ByteBuffer bytes) { + + if (bytes == null) { + return; + } + + if (value == null) { + value = codec.decodeValue(bytes); + return; + } + + double score = LettuceStrings.toDouble(decodeAscii(bytes)); + set(score); + } + + @Override + public void set(double number) { + output = ScoredValue.just(number, value); + value = null; + } +} diff --git a/src/main/java/io/lettuce/core/output/ScoredValueScanOutput.java b/src/main/java/io/lettuce/core/output/ScoredValueScanOutput.java new file mode 100644 index 0000000000..017933211f --- /dev/null +++ b/src/main/java/io/lettuce/core/output/ScoredValueScanOutput.java @@ -0,0 +1,57 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import java.nio.ByteBuffer; + +import io.lettuce.core.LettuceStrings; +import io.lettuce.core.ScoredValue; +import io.lettuce.core.ScoredValueScanCursor; +import io.lettuce.core.codec.RedisCodec; + +/** + * {@link io.lettuce.core.ScoredValueScanCursor} for scan cursor output. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + */ +public class ScoredValueScanOutput extends ScanOutput> { + + private V value; + + public ScoredValueScanOutput(RedisCodec codec) { + super(codec, new ScoredValueScanCursor()); + } + + @Override + protected void setOutput(ByteBuffer bytes) { + + if (value == null) { + value = codec.decodeValue(bytes); + return; + } + + double score = LettuceStrings.toDouble(decodeAscii(bytes)); + set(score); + } + + @Override + public void set(double number) { + output.getValues().add(ScoredValue.fromNullable(number, value)); + value = null; + } +} diff --git a/src/main/java/io/lettuce/core/output/ScoredValueScanStreamingOutput.java b/src/main/java/io/lettuce/core/output/ScoredValueScanStreamingOutput.java new file mode 100644 index 0000000000..71b71e7ea3 --- /dev/null +++ b/src/main/java/io/lettuce/core/output/ScoredValueScanStreamingOutput.java @@ -0,0 +1,60 @@ +/* + * Copyright 2011-2020 the original author or authors. 
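`ScoredValueOutput` above backs single-element replies such as `ZPOPMIN`/`ZPOPMAX`. For example, under the same `redis` assumption and with an illustrative key name:

```java
redis.zadd("leaderboard", 10.0, "alice");
redis.zadd("leaderboard", 25.0, "bob");

ScoredValue<String> lowest = redis.zpopmin("leaderboard");
if (lowest.hasValue()) {
    System.out.println(lowest.getValue() + " scored " + lowest.getScore()); // alice scored 10.0
}
```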
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import java.nio.ByteBuffer; + +import io.lettuce.core.LettuceStrings; +import io.lettuce.core.ScoredValue; +import io.lettuce.core.StreamScanCursor; +import io.lettuce.core.codec.RedisCodec; + +/** + * Streaming-Output of of values and their associated scores. Returns the count of all values (including null). + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + */ +public class ScoredValueScanStreamingOutput extends ScanOutput { + + private final ScoredValueStreamingChannel channel; + + private V value; + + public ScoredValueScanStreamingOutput(RedisCodec codec, ScoredValueStreamingChannel channel) { + super(codec, new StreamScanCursor()); + this.channel = channel; + } + + @Override + protected void setOutput(ByteBuffer bytes) { + if (value == null) { + value = codec.decodeValue(bytes); + return; + } + + double score = LettuceStrings.toDouble(decodeAscii(bytes)); + set(score); + } + + @Override + public void set(double number) { + channel.onValue(ScoredValue.fromNullable(number, value)); + value = null; + output.setCount(output.getCount() + 1); + } +} diff --git a/src/main/java/io/lettuce/core/output/ScoredValueStreamingChannel.java b/src/main/java/io/lettuce/core/output/ScoredValueStreamingChannel.java new file mode 100644 index 0000000000..4f3f1ef693 --- /dev/null +++ b/src/main/java/io/lettuce/core/output/ScoredValueStreamingChannel.java @@ -0,0 +1,36 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import io.lettuce.core.ScoredValue; + +/** + * Streaming API for multiple Keys. You can implement this interface in order to receive a call to {@code onValue} on every + * value. + * + * @param Value type. + * @author Mark Paluch + * @since 3.0 + */ +@FunctionalInterface +public interface ScoredValueStreamingChannel extends StreamingChannel { + /** + * Called on every incoming ScoredValue. + * + * @param value the scored value + */ + void onValue(ScoredValue value); +} diff --git a/src/main/java/io/lettuce/core/output/ScoredValueStreamingOutput.java b/src/main/java/io/lettuce/core/output/ScoredValueStreamingOutput.java new file mode 100644 index 0000000000..a8a3b1b520 --- /dev/null +++ b/src/main/java/io/lettuce/core/output/ScoredValueStreamingOutput.java @@ -0,0 +1,60 @@ +/* + * Copyright 2011-2020 the original author or authors. 
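The streaming counterpart uses `ScoredValueStreamingChannel` from above; a short sketch under the same assumptions:

```java
ScoredValueStreamingChannel<String> channel =
        scoredValue -> System.out.println(scoredValue.getValue() + " -> " + scoredValue.getScore());

Long streamed = redis.zrangeWithScores(channel, "leaderboard", 0, -1); // count of streamed entries
```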
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import java.nio.ByteBuffer; + +import io.lettuce.core.LettuceStrings; +import io.lettuce.core.ScoredValue; +import io.lettuce.core.codec.RedisCodec; + +/** + * Streaming-Output of of values and their associated scores. Returns the count of all values (including null). + * + * @author Mark Paluch + * @param Key type. + * @param Value type. + */ +public class ScoredValueStreamingOutput extends CommandOutput { + + private V value; + private final ScoredValueStreamingChannel channel; + + public ScoredValueStreamingOutput(RedisCodec codec, ScoredValueStreamingChannel channel) { + super(codec, Long.valueOf(0)); + this.channel = channel; + } + + @Override + public void set(ByteBuffer bytes) { + + if (value == null) { + value = codec.decodeValue(bytes); + return; + } + + double score = LettuceStrings.toDouble(decodeAscii(bytes)); + set(score); + } + + @Override + public void set(double number) { + + channel.onValue(ScoredValue.fromNullable(number, value)); + value = null; + output = output.longValue() + 1; + } +} diff --git a/src/main/java/io/lettuce/core/output/SocketAddressOutput.java b/src/main/java/io/lettuce/core/output/SocketAddressOutput.java new file mode 100644 index 0000000000..f3a646e5b4 --- /dev/null +++ b/src/main/java/io/lettuce/core/output/SocketAddressOutput.java @@ -0,0 +1,49 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import java.net.InetSocketAddress; +import java.net.SocketAddress; +import java.nio.ByteBuffer; + +import io.lettuce.core.codec.RedisCodec; + +/** + * Output capturing a hostname and port (both string elements) into a {@link SocketAddress}. 
+ * + * @author Mark Paluch + * @since 5.0.1 + */ +public class SocketAddressOutput extends CommandOutput { + + private String hostname; + + public SocketAddressOutput(RedisCodec codec) { + super(codec, null); + } + + @Override + public void set(ByteBuffer bytes) { + + if (hostname == null) { + hostname = decodeAscii(bytes); + return; + } + + int port = Integer.parseInt(decodeAscii(bytes)); + output = InetSocketAddress.createUnresolved(hostname, port); + } +} diff --git a/src/main/java/io/lettuce/core/output/StatusOutput.java b/src/main/java/io/lettuce/core/output/StatusOutput.java new file mode 100644 index 0000000000..67355c3e64 --- /dev/null +++ b/src/main/java/io/lettuce/core/output/StatusOutput.java @@ -0,0 +1,42 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import java.nio.ByteBuffer; +import java.nio.charset.StandardCharsets; + +import io.lettuce.core.codec.RedisCodec; + +/** + * Status message output. + * + * @param Key type. + * @param Value type. + * @author Will Glozer + */ +public class StatusOutput extends CommandOutput { + + private static final ByteBuffer OK = StandardCharsets.US_ASCII.encode("OK"); + + public StatusOutput(RedisCodec codec) { + super(codec, null); + } + + @Override + public void set(ByteBuffer bytes) { + output = OK.equals(bytes) ? "OK" : decodeAscii(bytes); + } +} diff --git a/src/main/java/io/lettuce/core/output/StreamMessageListOutput.java b/src/main/java/io/lettuce/core/output/StreamMessageListOutput.java new file mode 100644 index 0000000000..31e5db4134 --- /dev/null +++ b/src/main/java/io/lettuce/core/output/StreamMessageListOutput.java @@ -0,0 +1,103 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import java.nio.ByteBuffer; +import java.util.Collections; +import java.util.LinkedHashMap; +import java.util.List; +import java.util.Map; + +import io.lettuce.core.StreamMessage; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.internal.LettuceAssert; + +/** + * {@link List} of {@link StreamMessage}s. 
+ * + * @author Mark Paluch + * @since 5.1 + */ +public class StreamMessageListOutput extends CommandOutput>> implements + StreamingOutput> { + + private final K stream; + + private boolean initialized; + private Subscriber> subscriber; + + private K key; + private String id; + private Map body; + + public StreamMessageListOutput(RedisCodec codec, K stream) { + super(codec, Collections.emptyList()); + setSubscriber(ListSubscriber.instance()); + this.stream = stream; + } + + @Override + public void set(ByteBuffer bytes) { + + if (id == null) { + id = decodeAscii(bytes); + return; + } + + if (key == null) { + key = codec.decodeKey(bytes); + return; + } + + if (body == null) { + body = new LinkedHashMap<>(); + } + + body.put(key, bytes == null ? null : codec.decodeValue(bytes)); + key = null; + } + + @Override + public void multi(int count) { + + if (!initialized) { + output = OutputFactory.newList(count); + initialized = true; + } + } + + @Override + public void complete(int depth) { + + if (depth == 1) { + subscriber.onNext(output, new StreamMessage<>(stream, id, body)); + key = null; + id = null; + body = null; + } + } + + @Override + public void setSubscriber(Subscriber> subscriber) { + LettuceAssert.notNull(subscriber, "Subscriber must not be null"); + this.subscriber = subscriber; + } + + @Override + public Subscriber> getSubscriber() { + return subscriber; + } +} diff --git a/src/main/java/io/lettuce/core/output/StreamReadOutput.java b/src/main/java/io/lettuce/core/output/StreamReadOutput.java new file mode 100644 index 0000000000..9c147d1df6 --- /dev/null +++ b/src/main/java/io/lettuce/core/output/StreamReadOutput.java @@ -0,0 +1,122 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import java.nio.ByteBuffer; +import java.util.Collections; +import java.util.LinkedHashMap; +import java.util.List; +import java.util.Map; + +import io.lettuce.core.StreamMessage; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.internal.LettuceAssert; + +/** + * @author Mark Paluch + * @since 5.1 + */ +public class StreamReadOutput extends CommandOutput>> + implements StreamingOutput> { + + private boolean initialized; + private Subscriber> subscriber; + private boolean skipStreamKeyReset = false; + private K stream; + private K key; + private String id; + private Map body; + + public StreamReadOutput(RedisCodec codec) { + super(codec, Collections.emptyList()); + setSubscriber(ListSubscriber.instance()); + } + + @Override + public void set(ByteBuffer bytes) { + + if (stream == null) { + if (bytes == null) { + return; + } + + stream = codec.decodeKey(bytes); + skipStreamKeyReset = true; + return; + } + + if (id == null) { + id = decodeAscii(bytes); + return; + } + + if (key == null) { + key = codec.decodeKey(bytes); + return; + } + + if (body == null) { + body = new LinkedHashMap<>(); + } + + body.put(key, bytes == null ? 
null : codec.decodeValue(bytes)); + key = null; + } + + @Override + public void multi(int count) { + + if (!initialized) { + output = OutputFactory.newList(count); + initialized = true; + } + } + + @Override + public void complete(int depth) { + + if (depth == 3 && body != null) { + subscriber.onNext(output, new StreamMessage<>(stream, id, body)); + key = null; + body = null; + id = null; + } + + // RESP2/RESP3 compat + if (depth == 2 && skipStreamKeyReset) { + skipStreamKeyReset = false; + } + + if (depth == 1) { + if (skipStreamKeyReset) { + skipStreamKeyReset = false; + } else { + stream = null; + } + } + } + + @Override + public void setSubscriber(Subscriber> subscriber) { + LettuceAssert.notNull(subscriber, "Subscriber must not be null"); + this.subscriber = subscriber; + } + + @Override + public Subscriber> getSubscriber() { + return subscriber; + } +} diff --git a/src/main/java/io/lettuce/core/output/StreamingChannel.java b/src/main/java/io/lettuce/core/output/StreamingChannel.java new file mode 100644 index 0000000000..50ab99ece3 --- /dev/null +++ b/src/main/java/io/lettuce/core/output/StreamingChannel.java @@ -0,0 +1,25 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +/** + * Marker interface for streaming channels. + * + * @author Mark Paluch + * @since 5.0 + */ +public interface StreamingChannel { +} diff --git a/src/main/java/io/lettuce/core/output/StreamingOutput.java b/src/main/java/io/lettuce/core/output/StreamingOutput.java new file mode 100644 index 0000000000..90a9648201 --- /dev/null +++ b/src/main/java/io/lettuce/core/output/StreamingOutput.java @@ -0,0 +1,67 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import java.util.Collection; + +/** + * Implementors of this class support a streaming {@link CommandOutput} while the command is still processed. The receiving + * {@link Subscriber} receives {@link Subscriber#onNext(Collection, Object)} calls while the command is active. + * + * @author Mark Paluch + * @since 4.2 + */ +public interface StreamingOutput { + + /** + * Sets the {@link Subscriber}. + * + * @param subscriber + */ + void setSubscriber(Subscriber subscriber); + + /** + * Retrieves the {@link Subscriber}. + * + * @return + */ + Subscriber getSubscriber(); + + /** + * Subscriber to a {@link StreamingOutput}. 
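`StreamReadOutput` above decodes `XREAD`/`XREADGROUP` replies into `StreamMessage` instances. From the caller's side this looks roughly like the following; the stream name and body are illustrative, the `redis` handle is assumed as before, and imports for `Collections`, `List`, `StreamMessage`, and `XReadArgs` are assumed:

```java
redis.xadd("events", Collections.singletonMap("user", "alice"));

List<StreamMessage<String, String>> messages =
        redis.xread(XReadArgs.StreamOffset.from("events", "0"));

for (StreamMessage<String, String> message : messages) {
    System.out.println(message.getId() + " " + message.getBody());
}
```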
+ * + * @param + */ + abstract class Subscriber { + + /** + * Data notification sent by the {@link StreamingOutput}. + * + * @param t element + */ + public abstract void onNext(T t); + + /** + * Data notification sent by the {@link StreamingOutput}. + * + * @param outputTarget target + * @param t element + */ + public void onNext(Collection outputTarget, T t) { + onNext(t); + } + } +} diff --git a/src/main/java/io/lettuce/core/output/StringListOutput.java b/src/main/java/io/lettuce/core/output/StringListOutput.java new file mode 100644 index 0000000000..16eb04a92c --- /dev/null +++ b/src/main/java/io/lettuce/core/output/StringListOutput.java @@ -0,0 +1,67 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import java.nio.ByteBuffer; +import java.util.Collections; +import java.util.List; + +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.internal.LettuceAssert; + +/** + * {@link List} of string output. + * + * @param Key type. + * @param Value type. + * @author Will Glozer + * @author Mark Paluch + */ +public class StringListOutput extends CommandOutput> implements StreamingOutput { + + private boolean initialized; + private Subscriber subscriber; + + public StringListOutput(RedisCodec codec) { + super(codec, Collections.emptyList()); + setSubscriber(ListSubscriber.instance()); + } + + @Override + public void set(ByteBuffer bytes) { + subscriber.onNext(output, bytes == null ? null : decodeAscii(bytes)); + } + + @Override + public void multi(int count) { + + if (!initialized) { + output = OutputFactory.newList(count); + initialized = true; + } + } + + @Override + public void setSubscriber(Subscriber subscriber) { + LettuceAssert.notNull(subscriber, "Subscriber must not be null"); + this.subscriber = subscriber; + } + + @Override + public Subscriber getSubscriber() { + return subscriber; + } +} diff --git a/src/main/java/io/lettuce/core/output/StringValueListOutput.java b/src/main/java/io/lettuce/core/output/StringValueListOutput.java new file mode 100644 index 0000000000..e0dcef4219 --- /dev/null +++ b/src/main/java/io/lettuce/core/output/StringValueListOutput.java @@ -0,0 +1,69 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
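The `Subscriber` hook above is what allows outputs to emit elements while the reply is still being decoded. A hypothetical subscriber that only counts elements instead of buffering them; the class name is made up for illustration, and imports for `AtomicLong` and `StringCodec` are assumed:

```java
class CountingSubscriber<T> extends StreamingOutput.Subscriber<T> {

    private final AtomicLong count = new AtomicLong();

    @Override
    public void onNext(T element) {
        count.incrementAndGet(); // observe the element without buffering it
    }

    public long count() {
        return count.get();
    }
}

// Attach it to any StreamingOutput, for example the StringListOutput shown above.
// The two-argument onNext(Collection, T) default delegates to onNext(T), so counting still works.
StringListOutput<String, String> output = new StringListOutput<>(StringCodec.UTF8);
output.setSubscriber(new CountingSubscriber<>());
```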
+ */ +package io.lettuce.core.output; + +import java.nio.ByteBuffer; +import java.util.Collections; +import java.util.List; + +import io.lettuce.core.Value; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.internal.LettuceAssert; + +/** + * {@link List} of {@link io.lettuce.core.Value} output. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 5.0 + */ +public class StringValueListOutput extends CommandOutput>> implements + StreamingOutput> { + + private boolean initialized; + private Subscriber> subscriber; + + public StringValueListOutput(RedisCodec codec) { + super(codec, Collections.emptyList()); + setSubscriber(ListSubscriber.instance()); + } + + @Override + public void set(ByteBuffer bytes) { + subscriber.onNext(output, bytes == null ? Value.empty() : Value.fromNullable(decodeAscii(bytes))); + } + + @Override + public void multi(int count) { + + if (!initialized) { + output = OutputFactory.newList(count); + initialized = true; + } + } + + @Override + public void setSubscriber(Subscriber> subscriber) { + LettuceAssert.notNull(subscriber, "Subscriber must not be null"); + this.subscriber = subscriber; + } + + @Override + public Subscriber> getSubscriber() { + return subscriber; + } +} diff --git a/src/main/java/io/lettuce/core/output/ValueListOutput.java b/src/main/java/io/lettuce/core/output/ValueListOutput.java new file mode 100644 index 0000000000..df93be431d --- /dev/null +++ b/src/main/java/io/lettuce/core/output/ValueListOutput.java @@ -0,0 +1,73 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import java.nio.ByteBuffer; +import java.util.Collections; +import java.util.List; + +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.internal.LettuceAssert; + +/** + * {@link List} of values output. + * + * @param Key type. + * @param Value type. + * @author Will Glozer + * @author Mark Paluch + */ +public class ValueListOutput extends CommandOutput> implements StreamingOutput { + + private boolean initialized; + private Subscriber subscriber; + + public ValueListOutput(RedisCodec codec) { + super(codec, Collections.emptyList()); + setSubscriber(ListSubscriber.instance()); + } + + @Override + public void set(ByteBuffer bytes) { + + // RESP 3 behavior + if (bytes == null && !initialized) { + return; + } + + subscriber.onNext(output, bytes == null ? 
null : codec.decodeValue(bytes)); + } + + @Override + public void multi(int count) { + + if (!initialized) { + output = OutputFactory.newList(count); + initialized = true; + } + } + + @Override + public void setSubscriber(Subscriber subscriber) { + LettuceAssert.notNull(subscriber, "Subscriber must not be null"); + this.subscriber = subscriber; + } + + @Override + public Subscriber getSubscriber() { + return subscriber; + } +} diff --git a/src/main/java/io/lettuce/core/output/ValueOutput.java b/src/main/java/io/lettuce/core/output/ValueOutput.java new file mode 100644 index 0000000000..8fb5be2e61 --- /dev/null +++ b/src/main/java/io/lettuce/core/output/ValueOutput.java @@ -0,0 +1,41 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import java.nio.ByteBuffer; + +import io.lettuce.core.codec.RedisCodec; + +/** + * Value output. + * + * @param Key type. + * @param Value type. + * + * @author Will Glozer + * @author Mark Paluch + */ +public class ValueOutput extends CommandOutput { + + public ValueOutput(RedisCodec codec) { + super(codec, null); + } + + @Override + public void set(ByteBuffer bytes) { + output = (bytes == null) ? null : codec.decodeValue(bytes); + } +} diff --git a/src/main/java/io/lettuce/core/output/ValueScanOutput.java b/src/main/java/io/lettuce/core/output/ValueScanOutput.java new file mode 100644 index 0000000000..45725f3470 --- /dev/null +++ b/src/main/java/io/lettuce/core/output/ValueScanOutput.java @@ -0,0 +1,40 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import java.nio.ByteBuffer; + +import io.lettuce.core.ValueScanCursor; +import io.lettuce.core.codec.RedisCodec; + +/** + * {@link io.lettuce.core.ValueScanCursor} for scan cursor output. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + */ +public class ValueScanOutput extends ScanOutput> { + + public ValueScanOutput(RedisCodec codec) { + super(codec, new ValueScanCursor()); + } + + @Override + protected void setOutput(ByteBuffer bytes) { + output.getValues().add(bytes == null ? 
null : codec.decodeValue(bytes)); + } +} diff --git a/src/main/java/io/lettuce/core/output/ValueScanStreamingOutput.java b/src/main/java/io/lettuce/core/output/ValueScanStreamingOutput.java new file mode 100644 index 0000000000..fdbe7f7907 --- /dev/null +++ b/src/main/java/io/lettuce/core/output/ValueScanStreamingOutput.java @@ -0,0 +1,45 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import java.nio.ByteBuffer; + +import io.lettuce.core.StreamScanCursor; +import io.lettuce.core.codec.RedisCodec; + +/** + * Streaming API for multiple Values. You can implement this interface in order to receive a call to {@code onValue} on every + * key. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + */ +public class ValueScanStreamingOutput extends ScanOutput { + + private final ValueStreamingChannel channel; + + public ValueScanStreamingOutput(RedisCodec codec, ValueStreamingChannel channel) { + super(codec, new StreamScanCursor()); + this.channel = channel; + } + + @Override + protected void setOutput(ByteBuffer bytes) { + channel.onValue(bytes == null ? null : codec.decodeValue(bytes)); + output.setCount(output.getCount() + 1); + } +} diff --git a/src/main/java/io/lettuce/core/output/ValueSetOutput.java b/src/main/java/io/lettuce/core/output/ValueSetOutput.java new file mode 100644 index 0000000000..6a7c258669 --- /dev/null +++ b/src/main/java/io/lettuce/core/output/ValueSetOutput.java @@ -0,0 +1,59 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import java.nio.ByteBuffer; +import java.util.Collections; +import java.util.Set; + +import io.lettuce.core.codec.RedisCodec; + +/** + * {@link Set} of value output. + * + * @param Key type. + * @param Value type. + * + * @author Will Glozer + */ +public class ValueSetOutput extends CommandOutput> { + + private boolean initialized; + + public ValueSetOutput(RedisCodec codec) { + super(codec, Collections.emptySet()); + } + + @Override + public void set(ByteBuffer bytes) { + + // RESP 3 behavior + if (bytes == null && !initialized) { + return; + } + + output.add(bytes == null ? 
null : codec.decodeValue(bytes)); + } + + @Override + public void multi(int count) { + + if (!initialized) { + output = OutputFactory.newSet(count); + initialized = true; + } + } +} diff --git a/src/main/java/io/lettuce/core/output/ValueStreamingChannel.java b/src/main/java/io/lettuce/core/output/ValueStreamingChannel.java new file mode 100644 index 0000000000..871ae25d3d --- /dev/null +++ b/src/main/java/io/lettuce/core/output/ValueStreamingChannel.java @@ -0,0 +1,35 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +/** + * Streaming API for multiple Keys. You can implement this interface in order to receive a call to {@code onValue} on every + * value. + * + * @param Value type. + * @author Mark Paluch + * @since 3.0 + */ +@FunctionalInterface +public interface ValueStreamingChannel { + + /** + * Called on every incoming value. + * + * @param value the value + */ + void onValue(V value); +} diff --git a/src/main/java/io/lettuce/core/output/ValueStreamingOutput.java b/src/main/java/io/lettuce/core/output/ValueStreamingOutput.java new file mode 100644 index 0000000000..ffe32e776f --- /dev/null +++ b/src/main/java/io/lettuce/core/output/ValueStreamingOutput.java @@ -0,0 +1,45 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import java.nio.ByteBuffer; + +import io.lettuce.core.codec.RedisCodec; + +/** + * Streaming-Output of Values. Returns the count of all values (including null). + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + */ +public class ValueStreamingOutput extends CommandOutput { + + private final ValueStreamingChannel channel; + + public ValueStreamingOutput(RedisCodec codec, ValueStreamingChannel channel) { + super(codec, Long.valueOf(0)); + this.channel = channel; + } + + @Override + public void set(ByteBuffer bytes) { + + channel.onValue(bytes == null ? null : codec.decodeValue(bytes)); + output = output.longValue() + 1; + } + +} diff --git a/src/main/java/io/lettuce/core/output/ValueValueListOutput.java b/src/main/java/io/lettuce/core/output/ValueValueListOutput.java new file mode 100644 index 0000000000..6605b48808 --- /dev/null +++ b/src/main/java/io/lettuce/core/output/ValueValueListOutput.java @@ -0,0 +1,73 @@ +/* + * Copyright 2011-2020 the original author or authors. 
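Because `ValueStreamingChannel` is a functional interface, callers can pass a lambda to the streaming command variants and consume values one by one instead of materializing a collection. A hedged usage sketch; the `connection` parameter is assumed to be an already established `StatefulRedisConnection<String, String>`, and the streaming `smembers` overload is the existing sync API variant:

```java
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.output.ValueStreamingChannel;

class StreamingChannelSketch {

    static void demo(StatefulRedisConnection<String, String> connection) {
        // Called once per decoded value as the reply is processed.
        ValueStreamingChannel<String> channel = value -> System.out.println("value: " + value);

        // Streaming variant of SMEMBERS: returns the number of values pushed to the channel.
        Long streamed = connection.sync().smembers(channel, "myset");
        System.out.println("streamed " + streamed + " values");
    }
}
```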
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import java.nio.ByteBuffer; +import java.util.Collections; +import java.util.List; + +import io.lettuce.core.Value; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.internal.LettuceAssert; + +/** + * {@link List} of {@link Value} wrapped values output. + * + * @param Key type. + * @param Value type. + * + * @author Mark Paluch + */ +public class ValueValueListOutput extends CommandOutput>> implements StreamingOutput> { + + private boolean initialized; + private Subscriber> subscriber; + + public ValueValueListOutput(RedisCodec codec) { + super(codec, Collections.emptyList()); + setSubscriber(ListSubscriber.instance()); + } + + @Override + public void set(ByteBuffer bytes) { + subscriber.onNext(output, Value.fromNullable(bytes == null ? null : codec.decodeValue(bytes))); + } + + @Override + public void set(long integer) { + subscriber.onNext(output, Value.fromNullable((V) Long.valueOf(integer))); + } + + @Override + public void multi(int count) { + + if (!initialized) { + output = OutputFactory.newList(count); + initialized = true; + } + } + + @Override + public void setSubscriber(Subscriber> subscriber) { + LettuceAssert.notNull(subscriber, "Subscriber must not be null"); + this.subscriber = subscriber; + } + + @Override + public Subscriber> getSubscriber() { + return subscriber; + } +} diff --git a/src/main/java/io/lettuce/core/output/package-info.java b/src/main/java/io/lettuce/core/output/package-info.java new file mode 100644 index 0000000000..fe6ec2fce2 --- /dev/null +++ b/src/main/java/io/lettuce/core/output/package-info.java @@ -0,0 +1,4 @@ +/** + * Implementation of different output protocols including the Streaming API. + */ +package io.lettuce.core.output; diff --git a/src/main/java/io/lettuce/core/package-info.java b/src/main/java/io/lettuce/core/package-info.java new file mode 100644 index 0000000000..ac406c557c --- /dev/null +++ b/src/main/java/io/lettuce/core/package-info.java @@ -0,0 +1,4 @@ +/** + * The Redis client package containing {@link io.lettuce.core.RedisClient} for Redis Standalone and Redis Sentinel operations. + */ +package io.lettuce.core; diff --git a/src/main/java/io/lettuce/core/protocol/AsyncCommand.java b/src/main/java/io/lettuce/core/protocol/AsyncCommand.java new file mode 100644 index 0000000000..5c7700e761 --- /dev/null +++ b/src/main/java/io/lettuce/core/protocol/AsyncCommand.java @@ -0,0 +1,231 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
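`StringValueListOutput` and `ValueValueListOutput` wrap each element in a `Value` so that missing entries (for example from `MGET`) stay distinguishable from present ones. A small hedged sketch of the `Value` API this relies on (the `ValueSketch` class name is made up for illustration):

```java
import io.lettuce.core.Value;

class ValueSketch {

    static void demo() {
        Value<String> present = Value.fromNullable("hello");
        Value<String> absent = Value.empty();

        System.out.println(present.hasValue());           // true
        System.out.println(absent.hasValue());            // false
        System.out.println(absent.getValueOrElse("n/a")); // n/a
    }
}
```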
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.protocol; + +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.ExecutionException; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.TimeoutException; +import java.util.concurrent.atomic.AtomicIntegerFieldUpdater; +import java.util.function.BiConsumer; +import java.util.function.Consumer; + +import io.lettuce.core.internal.ExceptionFactory; +import io.lettuce.core.RedisCommandInterruptedException; +import io.lettuce.core.RedisFuture; +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.output.CommandOutput; +import io.netty.buffer.ByteBuf; + +/** + * An asynchronous redis command and its result. All successfully executed commands will eventually return a + * {@link CommandOutput} object. + * + * @param Key type. + * @param Value type. + * @param Command output type. + * + * @author Mark Paluch + */ +public class AsyncCommand extends CompletableFuture implements RedisCommand, RedisFuture, + CompleteableCommand, DecoratedCommand { + + @SuppressWarnings("rawtypes") + private static final AtomicIntegerFieldUpdater COUNT_UPDATER = AtomicIntegerFieldUpdater.newUpdater( + AsyncCommand.class, "count"); + + private final RedisCommand command; + + // access via COUNT_UPDATER + @SuppressWarnings({ "unused" }) + private volatile int count = 1; + + /** + * @param command the command, must not be {@literal null}. + */ + public AsyncCommand(RedisCommand command) { + this(command, 1); + } + + /** + * @param command the command, must not be {@literal null}. + */ + protected AsyncCommand(RedisCommand command, int count) { + LettuceAssert.notNull(command, "RedisCommand must not be null"); + this.command = command; + this.count = count; + } + + /** + * Wait up to the specified time for the command output to become available. + * + * @param timeout Maximum time to wait for a result. + * @param unit Unit of time for the timeout. + * + * @return true if the output became available. + */ + @Override + public boolean await(long timeout, TimeUnit unit) { + try { + get(timeout, unit); + return true; + } catch (InterruptedException e) { + Thread.currentThread().interrupt(); + throw new RedisCommandInterruptedException(e); + } catch (ExecutionException e) { + return true; + } catch (TimeoutException e) { + return false; + } + } + + /** + * Get the object that holds this command's output. + * + * @return The command output object. + */ + @Override + public CommandOutput getOutput() { + return command.getOutput(); + } + + /** + * Mark this command complete and notify all waiting threads. 
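`AsyncCommand` decorates a `RedisCommand` and exposes it as a `CompletableFuture`; `complete()` (shown below) transfers the decoded output into the future once the reference count reaches zero. A hedged sketch that completes a command by hand, something the channel pipeline normally does (the `AsyncCommandSketch` class name is made up for illustration):

```java
import java.nio.charset.StandardCharsets;

import io.lettuce.core.codec.StringCodec;
import io.lettuce.core.output.StatusOutput;
import io.lettuce.core.protocol.AsyncCommand;
import io.lettuce.core.protocol.Command;
import io.lettuce.core.protocol.CommandType;

class AsyncCommandSketch {

    static void demo() throws Exception {
        Command<String, String, String> ping = new Command<>(CommandType.PING,
                new StatusOutput<>(StringCodec.UTF8));
        AsyncCommand<String, String, String> async = new AsyncCommand<>(ping);

        // Normally CommandHandler feeds the output and completes the command.
        async.getOutput().set(StandardCharsets.US_ASCII.encode("PONG"));
        async.complete();

        System.out.println(async.get()); // PONG
    }
}
```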
+ */ + @Override + public void complete() { + if (COUNT_UPDATER.decrementAndGet(this) == 0) { + completeResult(); + command.complete(); + } + } + + protected void completeResult() { + if (command.getOutput() == null) { + complete(null); + } else if (command.getOutput().hasError()) { + doCompleteExceptionally(ExceptionFactory.createExecutionException(command.getOutput().getError())); + } else { + complete(command.getOutput().get()); + } + } + + @Override + public boolean completeExceptionally(Throwable ex) { + boolean result = false; + + int ref = COUNT_UPDATER.get(this); + if (ref > 0 && COUNT_UPDATER.compareAndSet(this, ref, 0)) { + result = doCompleteExceptionally(ex); + } + return result; + } + + private boolean doCompleteExceptionally(Throwable ex) { + command.completeExceptionally(ex); + return super.completeExceptionally(ex); + } + + @Override + public boolean cancel(boolean mayInterruptIfRunning) { + try { + command.cancel(); + return super.cancel(mayInterruptIfRunning); + } finally { + COUNT_UPDATER.set(this, 0); + } + } + + @Override + public String getError() { + return command.getOutput().getError(); + } + + @Override + public CommandArgs getArgs() { + return command.getArgs(); + } + + @Override + public String toString() { + final StringBuilder sb = new StringBuilder(); + sb.append(getClass().getSimpleName()); + sb.append(" [type=").append(getType()); + sb.append(", output=").append(getOutput()); + sb.append(", commandType=").append(command.getClass().getName()); + sb.append(']'); + return sb.toString(); + } + + @Override + public ProtocolKeyword getType() { + return command.getType(); + } + + @Override + public void cancel() { + cancel(true); + } + + @Override + public void encode(ByteBuf buf) { + command.encode(buf); + } + + @Override + public void setOutput(CommandOutput output) { + command.setOutput(output); + } + + @Override + public void onComplete(Consumer action) { + thenAccept(action); + } + + @Override + public void onComplete(BiConsumer action) { + whenComplete(action); + } + + @Override + public RedisCommand getDelegate() { + return command; + } + + @Override + public boolean equals(Object o) { + + if (this == o) + return true; + if (!(o instanceof RedisCommand)) { + return false; + } + + RedisCommand left = CommandWrapper.unwrap(command); + RedisCommand right = CommandWrapper.unwrap((RedisCommand) o); + + return left == right; + } + + @Override + public int hashCode() { + + RedisCommand toHash = CommandWrapper.unwrap(command); + + return toHash != null ? toHash.hashCode() : 0; + } + +} diff --git a/src/main/java/io/lettuce/core/protocol/BaseRedisCommandBuilder.java b/src/main/java/io/lettuce/core/protocol/BaseRedisCommandBuilder.java new file mode 100644 index 0000000000..4ce02df074 --- /dev/null +++ b/src/main/java/io/lettuce/core/protocol/BaseRedisCommandBuilder.java @@ -0,0 +1,75 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.protocol; + +import io.lettuce.core.RedisException; +import io.lettuce.core.ScriptOutputType; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.output.*; + +/** + * @author Mark Paluch + * @since 3.0 + */ +public class BaseRedisCommandBuilder { + + protected final RedisCodec codec; + + public BaseRedisCommandBuilder(RedisCodec codec) { + this.codec = codec; + } + + protected Command createCommand(CommandType type, CommandOutput output) { + return createCommand(type, output, (CommandArgs) null); + } + + protected Command createCommand(CommandType type, CommandOutput output, K key) { + CommandArgs args = new CommandArgs<>(codec).addKey(key); + return createCommand(type, output, args); + } + + protected Command createCommand(CommandType type, CommandOutput output, K key, V value) { + CommandArgs args = new CommandArgs<>(codec).addKey(key).addValue(value); + return createCommand(type, output, args); + } + + protected Command createCommand(CommandType type, CommandOutput output, K key, V[] values) { + CommandArgs args = new CommandArgs<>(codec).addKey(key).addValues(values); + return createCommand(type, output, args); + } + + protected Command createCommand(CommandType type, CommandOutput output, CommandArgs args) { + return new Command<>(type, output, args); + } + + @SuppressWarnings("unchecked") + protected CommandOutput newScriptOutput(RedisCodec codec, ScriptOutputType type) { + switch (type) { + case BOOLEAN: + return (CommandOutput) new BooleanOutput<>(codec); + case INTEGER: + return (CommandOutput) new IntegerOutput<>(codec); + case STATUS: + return (CommandOutput) new StatusOutput<>(codec); + case MULTI: + return (CommandOutput) new NestedMultiOutput<>(codec); + case VALUE: + return (CommandOutput) new ValueOutput<>(codec); + default: + throw new RedisException("Unsupported script output type"); + } + } +} diff --git a/src/main/java/io/lettuce/core/protocol/ChannelLogDescriptor.java b/src/main/java/io/lettuce/core/protocol/ChannelLogDescriptor.java new file mode 100644 index 0000000000..a2cd8f0612 --- /dev/null +++ b/src/main/java/io/lettuce/core/protocol/ChannelLogDescriptor.java @@ -0,0 +1,55 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
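Concrete command builders in the client extend `BaseRedisCommandBuilder` and combine `createCommand(...)` with an output type that matches the expected reply. A hedged sketch of such a subclass (`MyCommandBuilder` is made up for illustration):

```java
import io.lettuce.core.codec.StringCodec;
import io.lettuce.core.output.ValueOutput;
import io.lettuce.core.protocol.BaseRedisCommandBuilder;
import io.lettuce.core.protocol.Command;
import io.lettuce.core.protocol.CommandType;

class MyCommandBuilder extends BaseRedisCommandBuilder<String, String> {

    MyCommandBuilder() {
        super(StringCodec.UTF8);
    }

    // GET key -> bulk string reply, decoded by ValueOutput.
    Command<String, String, String> get(String key) {
        return createCommand(CommandType.GET, new ValueOutput<>(codec), key);
    }
}
```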
+ */ +package io.lettuce.core.protocol; + +import io.netty.channel.Channel; + +/** + * @author Mark Paluch + */ +class ChannelLogDescriptor { + + static String logDescriptor(Channel channel) { + + if (channel == null) { + return "unknown"; + } + + StringBuffer buffer = new StringBuffer(64); + + buffer.append("channel=").append(getId(channel)).append(", "); + + if (channel.localAddress() != null && channel.remoteAddress() != null) { + buffer.append(channel.localAddress()).append(" -> ").append(channel.remoteAddress()); + } else { + buffer.append(channel); + } + + if (!channel.isActive()) { + if (buffer.length() != 0) { + buffer.append(' '); + } + + buffer.append("(inactive)"); + } + + return buffer.toString(); + } + + private static String getId(Channel channel) { + return String.format("0x%08x", channel.hashCode()); + } +} diff --git a/src/main/java/io/lettuce/core/protocol/Command.java b/src/main/java/io/lettuce/core/protocol/Command.java new file mode 100644 index 0000000000..807118ef2b --- /dev/null +++ b/src/main/java/io/lettuce/core/protocol/Command.java @@ -0,0 +1,175 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.protocol; + +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.output.CommandOutput; +import io.netty.buffer.ByteBuf; + +/** + * A Redis command with a {@link ProtocolKeyword command type}, {@link CommandArgs arguments} and an optional + * {@link CommandOutput output}. All successfully executed commands will eventually return a {@link CommandOutput} object. + * + * @param Key type. + * @param Value type. + * @param Command output type. + * + * @author Will Glozer + * @author Mark Paluch + */ +public class Command implements RedisCommand { + + protected static final byte ST_INITIAL = 0; + protected static final byte ST_COMPLETED = 1; + protected static final byte ST_CANCELLED = 2; + + private final ProtocolKeyword type; + + protected CommandArgs args; + protected CommandOutput output; + protected Throwable exception; + protected volatile byte status = ST_INITIAL; + + /** + * Create a new command with the supplied type. + * + * @param type Command type, must not be {@literal null}. + * @param output Command output, can be {@literal null}. + */ + public Command(ProtocolKeyword type, CommandOutput output) { + this(type, output, null); + } + + /** + * Create a new command with the supplied type and args. + * + * @param type Command type, must not be {@literal null}. + * @param output Command output, can be {@literal null}. + * @param args Command args, can be {@literal null} + */ + public Command(ProtocolKeyword type, CommandOutput output, CommandArgs args) { + LettuceAssert.notNull(type, "Command type must not be null"); + this.type = type; + this.output = output; + this.args = args; + } + + /** + * Get the object that holds this command's output. + * + * @return The command output object. 
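`Command.encode(ByteBuf)` (further down in this class) writes the RESP array header followed by the command type and its arguments. A hedged sketch of what that produces for a simple `GET mykey` (the `EncodeSketch` class name is made up for illustration):

```java
import java.nio.charset.StandardCharsets;

import io.lettuce.core.codec.StringCodec;
import io.lettuce.core.output.ValueOutput;
import io.lettuce.core.protocol.Command;
import io.lettuce.core.protocol.CommandArgs;
import io.lettuce.core.protocol.CommandType;
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

class EncodeSketch {

    static void demo() {
        CommandArgs<String, String> args = new CommandArgs<>(StringCodec.UTF8).addKey("mykey");
        Command<String, String, String> get = new Command<>(CommandType.GET,
                new ValueOutput<>(StringCodec.UTF8), args);

        ByteBuf buf = Unpooled.buffer();
        get.encode(buf);

        // Prints the RESP representation: *2\r\n$3\r\nGET\r\n$5\r\nmykey\r\n
        System.out.println(buf.toString(StandardCharsets.US_ASCII));
        buf.release();
    }
}
```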
+ */ + @Override + public CommandOutput getOutput() { + return output; + } + + @Override + public boolean completeExceptionally(Throwable throwable) { + if (output != null) { + output.setError(throwable.getMessage()); + } + + exception = throwable; + return true; + } + + /** + * Mark this command complete and notify all waiting threads. + */ + @Override + public void complete() { + this.status = ST_COMPLETED; + } + + @Override + public void cancel() { + this.status = ST_CANCELLED; + } + + /** + * Encode and write this command to the supplied buffer using the new Unified + * Request Protocol. + * + * @param buf Buffer to write to. + */ + public void encode(ByteBuf buf) { + + buf.touch("Command.encode(…)"); + buf.writeByte('*'); + CommandArgs.IntegerArgument.writeInteger(buf, 1 + (args != null ? args.count() : 0)); + + buf.writeBytes(CommandArgs.CRLF); + + CommandArgs.BytesArgument.writeBytes(buf, type.getBytes()); + + if (args != null) { + args.encode(buf); + } + } + + public String getError() { + return output.getError(); + } + + @Override + public CommandArgs getArgs() { + return args; + } + + /** + * + * @return the resut from the output. + */ + public T get() { + if (output != null) { + return output.get(); + } + return null; + } + + @Override + public String toString() { + final StringBuilder sb = new StringBuilder(); + sb.append(getClass().getSimpleName()); + sb.append(" [type=").append(type); + sb.append(", output=").append(output); + sb.append(']'); + return sb.toString(); + } + + public void setOutput(CommandOutput output) { + if (this.status != ST_INITIAL) { + throw new IllegalStateException("Command is completed/cancelled. Cannot set a new output"); + } + this.output = output; + } + + @Override + public ProtocolKeyword getType() { + return type; + } + + @Override + public boolean isCancelled() { + return status == ST_CANCELLED; + } + + @Override + public boolean isDone() { + return status != ST_INITIAL; + } +} diff --git a/src/main/java/io/lettuce/core/protocol/CommandArgs.java b/src/main/java/io/lettuce/core/protocol/CommandArgs.java new file mode 100644 index 0000000000..4c233e04eb --- /dev/null +++ b/src/main/java/io/lettuce/core/protocol/CommandArgs.java @@ -0,0 +1,721 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.protocol; + +import java.nio.ByteBuffer; +import java.nio.charset.StandardCharsets; +import java.util.ArrayList; +import java.util.Base64; +import java.util.List; +import java.util.Map; + +import io.lettuce.core.LettuceStrings; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.codec.ToByteBufEncoder; +import io.lettuce.core.internal.LettuceAssert; +import io.netty.buffer.ByteBuf; +import io.netty.buffer.UnpooledByteBufAllocator; + +/** + * Redis command arguments. {@link CommandArgs} is a container for multiple singular arguments. 
Key and Value arguments are + * encoded using the {@link RedisCodec} to their byte representation. {@link CommandArgs} provides a fluent style of adding + * multiple arguments. A {@link CommandArgs} instance can be reused across multiple commands and invocations. + * + *

+ * <b>Example</b>
+ *
+ * <pre class="code">
+ * new CommandArgs&lt;&gt;(codec).addKey(key).addValue(value).add(CommandKeyword.FORCE);
+ * </pre>
    + * + * @param Key type. + * @param Value type. + * @author Will Glozer + * @author Mark Paluch + */ +public class CommandArgs { + + static final byte[] CRLF = "\r\n".getBytes(StandardCharsets.US_ASCII); + + protected final RedisCodec codec; + + final List singularArguments = new ArrayList<>(10); + + /** + * @param codec Codec used to encode/decode keys and values, must not be {@literal null}. + */ + public CommandArgs(RedisCodec codec) { + + LettuceAssert.notNull(codec, "RedisCodec must not be null"); + this.codec = codec; + } + + /** + * + * @return the number of arguments. + */ + public int count() { + return singularArguments.size(); + } + + /** + * Adds a key argument. + * + * @param key the key + * @return the command args. + */ + public CommandArgs addKey(K key) { + + singularArguments.add(KeyArgument.of(key, codec)); + return this; + } + + /** + * Add multiple key arguments. + * + * @param keys must not be {@literal null}. + * @return the command args. + */ + public CommandArgs addKeys(Iterable keys) { + + LettuceAssert.notNull(keys, "Keys must not be null"); + + for (K key : keys) { + addKey(key); + } + return this; + } + + /** + * Add multiple key arguments. + * + * @param keys must not be {@literal null}. + * @return the command args. + */ + @SafeVarargs + public final CommandArgs addKeys(K... keys) { + + LettuceAssert.notNull(keys, "Keys must not be null"); + + for (K key : keys) { + addKey(key); + } + return this; + } + + /** + * Add a value argument. + * + * @param value the value + * @return the command args. + */ + public CommandArgs addValue(V value) { + + singularArguments.add(ValueArgument.of(value, codec)); + return this; + } + + /** + * Add multiple value arguments. + * + * @param values must not be {@literal null}. + * @return the command args. + */ + public CommandArgs addValues(Iterable values) { + + LettuceAssert.notNull(values, "Values must not be null"); + + for (V value : values) { + addValue(value); + } + return this; + } + + /** + * Add multiple value arguments. + * + * @param values must not be {@literal null}. + * @return the command args. + */ + @SafeVarargs + public final CommandArgs addValues(V... values) { + + LettuceAssert.notNull(values, "Values must not be null"); + + for (V value : values) { + addValue(value); + } + return this; + } + + /** + * Add a map (hash) argument. + * + * @param map the map, must not be {@literal null}. + * @return the command args. + */ + public CommandArgs add(Map map) { + + LettuceAssert.notNull(map, "Map must not be null"); + + for (Map.Entry entry : map.entrySet()) { + addKey(entry.getKey()).addValue(entry.getValue()); + } + + return this; + } + + /** + * Add a string argument. The argument is represented as bulk string. + * + * @param s the string. + * @return the command args. + */ + public CommandArgs add(String s) { + + singularArguments.add(StringArgument.of(s)); + return this; + } + + /** + * Add a string as char-array. The argument is represented as bulk string. + * + * @param cs the string. + * @return the command args. + */ + public CommandArgs add(char[] cs) { + + singularArguments.add(CharArrayArgument.of(cs)); + return this; + } + + /** + * Add an 64-bit integer (long) argument. + * + * @param n the argument. + * @return the command args. + */ + public CommandArgs add(long n) { + + singularArguments.add(IntegerArgument.of(n)); + return this; + } + + /** + * Add a double argument. + * + * @param n the double argument. + * @return the command args. 
+ */ + public CommandArgs add(double n) { + + singularArguments.add(DoubleArgument.of(n)); + return this; + } + + /** + * Add a byte-array argument. The argument is represented as bulk string. + * + * @param value the byte-array. + * @return the command args. + */ + public CommandArgs add(byte[] value) { + + singularArguments.add(BytesArgument.of(value)); + return this; + } + + /** + * Add a {@link CommandKeyword} argument. The argument is represented as bulk string. + * + * @param keyword must not be {@literal null}. + * @return the command args. + */ + public CommandArgs add(CommandKeyword keyword) { + + LettuceAssert.notNull(keyword, "CommandKeyword must not be null"); + singularArguments.add(ProtocolKeywordArgument.of(keyword)); + return this; + } + + /** + * Add a {@link CommandType} argument. The argument is represented as bulk string. + * + * @param type must not be {@literal null}. + * @return the command args. + */ + public CommandArgs add(CommandType type) { + + LettuceAssert.notNull(type, "CommandType must not be null"); + singularArguments.add(ProtocolKeywordArgument.of(type)); + return this; + } + + /** + * Add a {@link ProtocolKeyword} argument. The argument is represented as bulk string. + * + * @param keyword the keyword, must not be {@literal null} + * @return the command args. + */ + public CommandArgs add(ProtocolKeyword keyword) { + + LettuceAssert.notNull(keyword, "CommandKeyword must not be null"); + singularArguments.add(ProtocolKeywordArgument.of(keyword)); + return this; + } + + @Override + public String toString() { + + final StringBuilder sb = new StringBuilder(); + sb.append(getClass().getSimpleName()); + + ByteBuf buffer = UnpooledByteBufAllocator.DEFAULT.buffer(singularArguments.size() * 10); + encode(buffer); + buffer.resetReaderIndex(); + + byte[] bytes = new byte[buffer.readableBytes()]; + buffer.readBytes(bytes); + sb.append(" [buffer=").append(new String(bytes)); + sb.append(']'); + buffer.release(); + + return sb.toString(); + } + + /** + * Returns a command string representation of {@link CommandArgs} with annotated key and value parameters. + * + * {@code args.addKey("mykey").add(2.0)} will return {@code key 2.0}. + * + * @return the command string representation. + */ + public String toCommandString() { + return LettuceStrings.collectionToDelimitedString(singularArguments, " ", "", ""); + } + + /** + * Returns the first integer argument. + * + * @return the first integer argument or {@literal null}. + */ + @Deprecated + public Long getFirstInteger() { + return CommandArgsAccessor.getFirstInteger(this); + } + + /** + * Returns the first string argument. + * + * @return the first string argument or {@literal null}. + */ + @Deprecated + public String getFirstString() { + return CommandArgsAccessor.getFirstString(this); + } + + /** + * Returns the first key argument in its byte-encoded representation. + * + * @return the first key argument in its byte-encoded representation or {@literal null}. + */ + public ByteBuffer getFirstEncodedKey() { + return CommandArgsAccessor.encodeFirstKey(this); + } + + /** + * Encode the {@link CommandArgs} and write the arguments to the {@link ByteBuf}. + * + * @param buf the target buffer. + */ + public void encode(ByteBuf buf) { + + buf.touch("CommandArgs.encode(…)"); + for (SingularArgument singularArgument : singularArguments) { + singularArgument.encode(buf); + } + } + + /** + * Single argument wrapper that can be encoded. 
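Taken together, the `add*(...)` methods give the fluent argument-building style described in the class Javadoc. A hedged sketch of assembling the arguments for `SET key value EX 10` and inspecting them (the `CommandArgsSketch` class name is made up for illustration; the expected `toCommandString()` rendering follows from the argument `toString()` implementations below):

```java
import io.lettuce.core.codec.StringCodec;
import io.lettuce.core.protocol.CommandArgs;
import io.lettuce.core.protocol.CommandKeyword;

class CommandArgsSketch {

    static void demo() {
        CommandArgs<String, String> args = new CommandArgs<>(StringCodec.UTF8)
                .addKey("mykey")
                .addValue("myvalue")
                .add(CommandKeyword.EX)
                .add(10);

        System.out.println(args.count());           // 4
        System.out.println(args.toCommandString()); // key<mykey> value<myvalue> EX 10
    }
}
```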
+ */ + static abstract class SingularArgument { + + /** + * Encode the argument and write it to the {@code buffer}. + * + * @param buffer + */ + abstract void encode(ByteBuf buffer); + } + + static class BytesArgument extends SingularArgument { + + final byte[] val; + + private BytesArgument(byte[] val) { + this.val = val; + } + + static BytesArgument of(byte[] val) { + return new BytesArgument(val); + } + + @Override + void encode(ByteBuf buffer) { + writeBytes(buffer, val); + } + + static void writeBytes(ByteBuf buffer, byte[] value) { + + buffer.writeByte('$'); + + IntegerArgument.writeInteger(buffer, value.length); + buffer.writeBytes(CRLF); + + buffer.writeBytes(value); + buffer.writeBytes(CRLF); + } + + @Override + public String toString() { + return Base64.getEncoder().encodeToString(val); + } + } + + static class ProtocolKeywordArgument extends BytesArgument { + + private final ProtocolKeyword protocolKeyword; + + private ProtocolKeywordArgument(ProtocolKeyword protocolKeyword) { + super(protocolKeyword.getBytes()); + this.protocolKeyword = protocolKeyword; + } + + static BytesArgument of(ProtocolKeyword protocolKeyword) { + + if (protocolKeyword instanceof CommandType) { + return CommandTypeCache.cache[((Enum) protocolKeyword).ordinal()]; + } + + if (protocolKeyword instanceof CommandKeyword) { + return CommandKeywordCache.cache[((Enum) protocolKeyword).ordinal()]; + } + + return ProtocolKeywordArgument.of(protocolKeyword.getBytes()); + } + + @Override + public String toString() { + return protocolKeyword.name(); + } + } + + static class CommandTypeCache { + + static final ProtocolKeywordArgument cache[]; + + static { + + CommandType[] values = CommandType.values(); + cache = new ProtocolKeywordArgument[values.length]; + for (int i = 0; i < cache.length; i++) { + cache[i] = new ProtocolKeywordArgument(values[i]); + } + } + } + + static class CommandKeywordCache { + + static final ProtocolKeywordArgument cache[]; + + static { + + CommandKeyword[] values = CommandKeyword.values(); + cache = new ProtocolKeywordArgument[values.length]; + for (int i = 0; i < cache.length; i++) { + cache[i] = new ProtocolKeywordArgument(values[i]); + } + } + } + + static class ByteBufferArgument { + + static void writeByteBuffer(ByteBuf target, ByteBuffer value) { + + target.writeByte('$'); + + IntegerArgument.writeInteger(target, value.remaining()); + target.writeBytes(CRLF); + + target.writeBytes(value); + target.writeBytes(CRLF); + } + + static void writeByteBuf(ByteBuf target, ByteBuf value) { + + target.writeByte('$'); + + IntegerArgument.writeInteger(target, value.readableBytes()); + target.writeBytes(CRLF); + + target.writeBytes(value); + target.writeBytes(CRLF); + } + } + + static class IntegerArgument extends SingularArgument { + + final long val; + + private IntegerArgument(long val) { + this.val = val; + } + + static IntegerArgument of(long val) { + + if (val >= 0 && val < IntegerCache.cache.length) { + return IntegerCache.cache[(int) val]; + } + + if (val < 0 && -val < IntegerCache.cache.length) { + return IntegerCache.negativeCache[(int) -val]; + } + + return new IntegerArgument(val); + } + + @Override + void encode(ByteBuf target) { + StringArgument.writeString(target, Long.toString(val)); + } + + @Override + public String toString() { + return "" + val; + } + + static void writeInteger(ByteBuf target, long value) { + + if (value < 10) { + target.writeByte((byte) ('0' + value)); + return; + } + + String asString = Long.toString(value); + + for (int i = 0; i < asString.length(); i++) { + 
target.writeByte((byte) asString.charAt(i)); + } + } + } + + static class IntegerCache { + + static final IntegerArgument cache[]; + static final IntegerArgument negativeCache[]; + + static { + int high = Integer.getInteger("io.lettuce.core.CommandArgs.IntegerCache", 128); + cache = new IntegerArgument[high]; + negativeCache = new IntegerArgument[high]; + for (int i = 0; i < high; i++) { + cache[i] = new IntegerArgument(i); + negativeCache[i] = new IntegerArgument(-i); + } + } + } + + static class DoubleArgument extends SingularArgument { + + final double val; + + private DoubleArgument(double val) { + this.val = val; + } + + static DoubleArgument of(double val) { + return new DoubleArgument(val); + } + + @Override + void encode(ByteBuf target) { + StringArgument.writeString(target, Double.toString(val)); + } + + @Override + public String toString() { + return "" + val; + } + } + + static class StringArgument extends SingularArgument { + + final String val; + + private StringArgument(String val) { + this.val = val; + } + + static StringArgument of(String val) { + return new StringArgument(val); + } + + @Override + void encode(ByteBuf target) { + writeString(target, val); + } + + static void writeString(ByteBuf target, String value) { + + target.writeByte('$'); + + IntegerArgument.writeInteger(target, value.length()); + target.writeBytes(CRLF); + + for (int i = 0; i < value.length(); i++) { + target.writeByte((byte) value.charAt(i)); + } + target.writeBytes(CRLF); + } + + @Override + public String toString() { + return val; + } + } + + static class CharArrayArgument extends SingularArgument { + + final char[] val; + + private CharArrayArgument(char[] val) { + this.val = val; + } + + static CharArrayArgument of(char[] val) { + return new CharArrayArgument(val); + } + + @Override + void encode(ByteBuf target) { + writeString(target, val); + } + + static void writeString(ByteBuf target, char[] value) { + + target.writeByte('$'); + + IntegerArgument.writeInteger(target, value.length); + target.writeBytes(CRLF); + + for (int i = 0; i < value.length; i++) { + target.writeByte((byte) value[i]); + } + target.writeBytes(CRLF); + } + + @Override + public String toString() { + return new String(val); + } + } + + static class KeyArgument extends SingularArgument { + + final K key; + final RedisCodec codec; + + private KeyArgument(K key, RedisCodec codec) { + this.key = key; + this.codec = codec; + } + + static KeyArgument of(K key, RedisCodec codec) { + return new KeyArgument<>(key, codec); + } + + @SuppressWarnings("unchecked") + @Override + void encode(ByteBuf target) { + + if (codec instanceof ToByteBufEncoder) { + + ToByteBufEncoder toByteBufEncoder = (ToByteBufEncoder) codec; + ByteBuf temporaryBuffer = target.alloc().buffer(toByteBufEncoder.estimateSize(key) + 6); + + try { + + toByteBufEncoder.encodeKey(key, temporaryBuffer); + ByteBufferArgument.writeByteBuf(target, temporaryBuffer); + } finally { + temporaryBuffer.release(); + } + + return; + } + + ByteBufferArgument.writeByteBuffer(target, codec.encodeKey(key)); + } + + @Override + public String toString() { + return String.format("key<%s>", new StringCodec().decodeKey(codec.encodeKey(key))); + } + } + + static class ValueArgument extends SingularArgument { + + final V val; + final RedisCodec codec; + + private ValueArgument(V val, RedisCodec codec) { + this.val = val; + this.codec = codec; + } + + static ValueArgument of(V val, RedisCodec codec) { + return new ValueArgument<>(val, codec); + } + + @SuppressWarnings("unchecked") + @Override + 
void encode(ByteBuf target) { + + if (codec instanceof ToByteBufEncoder) { + + ToByteBufEncoder toByteBufEncoder = (ToByteBufEncoder) codec; + ByteBuf temporaryBuffer = target.alloc().buffer(toByteBufEncoder.estimateSize(val) + 6); + + try { + toByteBufEncoder.encodeValue(val, temporaryBuffer); + ByteBufferArgument.writeByteBuf(target, temporaryBuffer); + } finally { + temporaryBuffer.release(); + } + + return; + } + + ByteBufferArgument.writeByteBuffer(target, codec.encodeValue(val)); + } + + @Override + public String toString() { + return String.format("value<%s>", new StringCodec().decodeValue(codec.encodeValue(val))); + } + } +} diff --git a/src/main/java/io/lettuce/core/protocol/CommandArgsAccessor.java b/src/main/java/io/lettuce/core/protocol/CommandArgsAccessor.java new file mode 100644 index 0000000000..f278eb6406 --- /dev/null +++ b/src/main/java/io/lettuce/core/protocol/CommandArgsAccessor.java @@ -0,0 +1,156 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.protocol; + +import java.nio.ByteBuffer; +import java.util.ArrayList; +import java.util.List; + +import io.lettuce.core.protocol.CommandArgs.CharArrayArgument; +import io.lettuce.core.protocol.CommandArgs.SingularArgument; +import io.lettuce.core.protocol.CommandArgs.StringArgument; + +/** + * Accessor for first encoded key, first string and first {@link Long integer} argument of {@link CommandArgs}. This class is + * part of the internal API and may change without further notice. + * + * @author Mark Paluch + * @since 4.4 + */ +public class CommandArgsAccessor { + + /** + * Get the first encoded key for cluster command routing. + * + * @param commandArgs must not be null. + * @return the first encoded key or {@literal null}. + */ + @SuppressWarnings("unchecked") + public static ByteBuffer encodeFirstKey(CommandArgs commandArgs) { + + for (SingularArgument singularArgument : commandArgs.singularArguments) { + + if (singularArgument instanceof CommandArgs.KeyArgument) { + return commandArgs.codec.encodeKey(((CommandArgs.KeyArgument) singularArgument).key); + } + } + + return null; + } + + /** + * Get the first {@link String} argument. + * + * @param commandArgs must not be null. + * @return the first {@link String} argument or {@literal null}. + */ + @SuppressWarnings("unchecked") + public static String getFirstString(CommandArgs commandArgs) { + + for (SingularArgument singularArgument : commandArgs.singularArguments) { + + if (singularArgument instanceof StringArgument) { + return ((StringArgument) singularArgument).val; + } + } + + return null; + } + + /** + * Get the first {@code char[]}-array argument. + * + * @param commandArgs must not be null. + * @return the first {@link String} argument or {@literal null}. 
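`CommandArgsAccessor` is how internal components, for example cluster routing, peek into already-built arguments. A hedged sketch (the `AccessorSketch` class name is made up for illustration):

```java
import java.nio.ByteBuffer;

import io.lettuce.core.codec.StringCodec;
import io.lettuce.core.protocol.CommandArgs;
import io.lettuce.core.protocol.CommandArgsAccessor;

class AccessorSketch {

    static void demo() {
        CommandArgs<String, String> args = new CommandArgs<>(StringCodec.UTF8)
                .addKey("user:42")
                .add("MATCH");

        ByteBuffer firstKey = CommandArgsAccessor.encodeFirstKey(args); // encoded "user:42"
        String firstString = CommandArgsAccessor.getFirstString(args);  // "MATCH"

        System.out.println(StringCodec.UTF8.decodeKey(firstKey) + " / " + firstString);
    }
}
```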
+ */ + @SuppressWarnings("unchecked") + public static char[] getFirstCharArray(CommandArgs commandArgs) { + + for (SingularArgument singularArgument : commandArgs.singularArguments) { + + if (singularArgument instanceof CharArrayArgument) { + return ((CharArrayArgument) singularArgument).val; + } + } + + return null; + } + + /** + * Get the all {@link String} arguments. + * + * @param commandArgs must not be null. + * @return the first {@link String} argument or {@literal null}. + * @since 6.0 + */ + public static List getStringArguments(CommandArgs commandArgs) { + + List args = new ArrayList<>(); + + for (SingularArgument singularArgument : commandArgs.singularArguments) { + + if (singularArgument instanceof StringArgument) { + args.add(((StringArgument) singularArgument).val); + } + } + + return args; + } + + /** + * Get the all {@code char[]} arguments. + * + * @param commandArgs must not be null. + * @return the first {@link String} argument or {@literal null}. + * @since 6.0 + */ + public static List getCharArrayArguments(CommandArgs commandArgs) { + + List args = new ArrayList<>(); + + for (SingularArgument singularArgument : commandArgs.singularArguments) { + + if (singularArgument instanceof CharArrayArgument) { + args.add(((CharArrayArgument) singularArgument).val); + } + + if (singularArgument instanceof StringArgument) { + args.add(((StringArgument) singularArgument).val.toCharArray()); + } + } + + return args; + } + + /** + * Get the first {@link Long integer} argument. + * + * @param commandArgs must not be null. + * @return the first {@link Long integer} argument or {@literal null}. + */ + @SuppressWarnings("unchecked") + public static Long getFirstInteger(CommandArgs commandArgs) { + + for (SingularArgument singularArgument : commandArgs.singularArguments) { + + if (singularArgument instanceof CommandArgs.IntegerArgument) { + return ((CommandArgs.IntegerArgument) singularArgument).val; + } + } + + return null; + } +} diff --git a/src/main/java/io/lettuce/core/protocol/CommandEncoder.java b/src/main/java/io/lettuce/core/protocol/CommandEncoder.java new file mode 100644 index 0000000000..08810d7420 --- /dev/null +++ b/src/main/java/io/lettuce/core/protocol/CommandEncoder.java @@ -0,0 +1,113 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.protocol; + +import java.nio.charset.Charset; +import java.util.Collection; + +import io.netty.buffer.ByteBuf; +import io.netty.channel.Channel; +import io.netty.channel.ChannelHandler; +import io.netty.channel.ChannelHandlerContext; +import io.netty.handler.codec.EncoderException; +import io.netty.handler.codec.MessageToByteEncoder; +import io.netty.util.internal.PlatformDependent; +import io.netty.util.internal.logging.InternalLogger; +import io.netty.util.internal.logging.InternalLoggerFactory; + +/** + * A netty {@link ChannelHandler} responsible for encoding commands. 
+ * + * @author Mark Paluch + */ +public class CommandEncoder extends MessageToByteEncoder { + + private static final InternalLogger logger = InternalLoggerFactory.getInstance(CommandEncoder.class); + + private final boolean traceEnabled = logger.isTraceEnabled(); + private final boolean debugEnabled = logger.isDebugEnabled(); + + public CommandEncoder() { + this(PlatformDependent.directBufferPreferred()); + } + + public CommandEncoder(boolean preferDirect) { + super(preferDirect); + } + + @Override + protected ByteBuf allocateBuffer(ChannelHandlerContext ctx, Object msg, boolean preferDirect) throws Exception { + + if (msg instanceof Collection) { + + if (preferDirect) { + return ctx.alloc().ioBuffer(((Collection) msg).size() * 16); + } else { + return ctx.alloc().heapBuffer(((Collection) msg).size() * 16); + } + } + + if (preferDirect) { + return ctx.alloc().ioBuffer(); + } else { + return ctx.alloc().heapBuffer(); + } + } + + @Override + @SuppressWarnings("unchecked") + protected void encode(ChannelHandlerContext ctx, Object msg, ByteBuf out) throws Exception { + + out.touch("CommandEncoder.encode(…)"); + if (msg instanceof RedisCommand) { + RedisCommand command = (RedisCommand) msg; + encode(ctx, out, command); + } + + if (msg instanceof Collection) { + Collection> commands = (Collection>) msg; + for (RedisCommand command : commands) { + encode(ctx, out, command); + } + } + } + + private void encode(ChannelHandlerContext ctx, ByteBuf out, RedisCommand command) { + + try { + out.markWriterIndex(); + command.encode(out); + } catch (RuntimeException e) { + out.resetWriterIndex(); + command.completeExceptionally(new EncoderException( + "Cannot encode command. Please close the connection as the connection state may be out of sync.", + e)); + } + + if (debugEnabled) { + logger.debug("{} writing command {}", logPrefix(ctx.channel()), command); + if (traceEnabled) { + logger.trace("{} Sent: {}", logPrefix(ctx.channel()), out.toString(Charset.defaultCharset()).trim()); + } + } + } + + private String logPrefix(Channel channel) { + StringBuilder buffer = new StringBuilder(64); + buffer.append('[').append(ChannelLogDescriptor.logDescriptor(channel)).append(']'); + return buffer.toString(); + } +} diff --git a/src/main/java/io/lettuce/core/protocol/CommandExpiryWriter.java b/src/main/java/io/lettuce/core/protocol/CommandExpiryWriter.java new file mode 100644 index 0000000000..680971ecac --- /dev/null +++ b/src/main/java/io/lettuce/core/protocol/CommandExpiryWriter.java @@ -0,0 +1,182 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
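`CommandEncoder` is installed into the netty pipeline so that `RedisCommand` instances, or whole batches of them, written to the channel are turned into RESP bytes. A rough, hedged sketch of such wiring (the `PipelineSketch` class name is made up for illustration; the real pipeline built by the client contains additional handlers such as `CommandHandler`):

```java
import io.lettuce.core.protocol.CommandEncoder;
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.Channel;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioSocketChannel;

class PipelineSketch {

    static Bootstrap bootstrap() {
        return new Bootstrap()
                .group(new NioEventLoopGroup())
                .channel(NioSocketChannel.class)
                .handler(new ChannelInitializer<Channel>() {
                    @Override
                    protected void initChannel(Channel ch) {
                        // Encoder first; the command/response handler would follow it.
                        ch.pipeline().addLast(new CommandEncoder());
                    }
                });
    }
}
```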
+ */ +package io.lettuce.core.protocol; + +import static io.lettuce.core.TimeoutOptions.TimeoutSource; + +import java.time.Duration; +import java.util.Collection; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.ScheduledExecutorService; +import java.util.concurrent.ScheduledFuture; +import java.util.concurrent.TimeUnit; + +import io.lettuce.core.ClientOptions; +import io.lettuce.core.internal.ExceptionFactory; +import io.lettuce.core.RedisChannelWriter; +import io.lettuce.core.TimeoutOptions; +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.resource.ClientResources; + +/** + * Extension to {@link RedisChannelWriter} that expires commands. Command timeout starts at the time the command is written + * regardless to {@link #setAutoFlushCommands(boolean) flushing mode} (user-controlled batching). + * + * @author Mark Paluch + * @since 5.1 + * @see io.lettuce.core.TimeoutOptions + */ +public class CommandExpiryWriter implements RedisChannelWriter { + + private final RedisChannelWriter writer; + private final TimeoutSource source; + private final TimeUnit timeUnit; + private final ScheduledExecutorService executorService; + private final boolean applyConnectionTimeout; + + private volatile long timeout = -1; + + /** + * Create a new {@link CommandExpiryWriter}. + * + * @param writer must not be {@literal null}. + * @param clientOptions must not be {@literal null}. + * @param clientResources must not be {@literal null}. + */ + public CommandExpiryWriter(RedisChannelWriter writer, ClientOptions clientOptions, ClientResources clientResources) { + + LettuceAssert.notNull(writer, "RedisChannelWriter must not be null"); + LettuceAssert.isTrue(isSupported(clientOptions), "Command timeout not enabled"); + LettuceAssert.notNull(clientResources, "ClientResources must not be null"); + + TimeoutOptions timeoutOptions = clientOptions.getTimeoutOptions(); + this.writer = writer; + this.source = timeoutOptions.getSource(); + this.applyConnectionTimeout = timeoutOptions.isApplyConnectionTimeout(); + this.timeUnit = source.getTimeUnit(); + this.executorService = clientResources.eventExecutorGroup(); + } + + /** + * Check whether {@link ClientOptions} is configured to timeout commands. + * + * @param clientOptions must not be {@literal null}. + * @return {@literal true} if {@link ClientOptions} are configured to timeout commands. 
+ */ + public static boolean isSupported(ClientOptions clientOptions) { + + LettuceAssert.notNull(clientOptions, "ClientOptions must not be null"); + + return isSupported(clientOptions.getTimeoutOptions()); + } + + private static boolean isSupported(TimeoutOptions timeoutOptions) { + + LettuceAssert.notNull(timeoutOptions, "TimeoutOptions must not be null"); + + return timeoutOptions.isTimeoutCommands(); + } + + @Override + public void setConnectionFacade(ConnectionFacade connectionFacade) { + writer.setConnectionFacade(connectionFacade); + } + + @Override + public ClientResources getClientResources() { + return writer.getClientResources(); + } + + @Override + public void setAutoFlushCommands(boolean autoFlush) { + writer.setAutoFlushCommands(autoFlush); + } + + @Override + public RedisCommand write(RedisCommand command) { + + potentiallyExpire(command, getExecutorService()); + return writer.write(command); + } + + @Override + public Collection> write(Collection> redisCommands) { + + ScheduledExecutorService executorService = getExecutorService(); + + for (RedisCommand command : redisCommands) { + potentiallyExpire(command, executorService); + } + + return writer.write(redisCommands); + } + + @Override + public void flushCommands() { + writer.flushCommands(); + } + + @Override + public void close() { + writer.close(); + } + + @Override + public CompletableFuture closeAsync() { + return writer.closeAsync(); + } + + @Override + public void reset() { + writer.reset(); + } + + public void setTimeout(Duration timeout) { + this.timeout = timeUnit.convert(timeout.toNanos(), TimeUnit.NANOSECONDS); + } + + private ScheduledExecutorService getExecutorService() { + return this.executorService; + } + + @SuppressWarnings("unchecked") + private void potentiallyExpire(RedisCommand command, ScheduledExecutorService executors) { + + long timeout = applyConnectionTimeout ? this.timeout : source.getTimeout(command); + + if (timeout <= 0) { + return; + } + + ScheduledFuture schedule = executors.schedule(() -> { + + if (!command.isDone()) { + command.completeExceptionally(ExceptionFactory.createTimeoutException(Duration.ofNanos(timeUnit + .toNanos(timeout)))); + } + + }, timeout, timeUnit); + + if (command instanceof CompleteableCommand) { + ((CompleteableCommand) command).onComplete((o, o2) -> { + + if (!schedule.isDone()) { + schedule.cancel(false); + } + }); + } + } +} diff --git a/src/main/java/io/lettuce/core/protocol/CommandHandler.java b/src/main/java/io/lettuce/core/protocol/CommandHandler.java new file mode 100644 index 0000000000..97f6fadf76 --- /dev/null +++ b/src/main/java/io/lettuce/core/protocol/CommandHandler.java @@ -0,0 +1,939 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
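Whether a connection gets wrapped with `CommandExpiryWriter` depends entirely on `TimeoutOptions`. A hedged sketch of enabling per-command timeouts so that `isSupported(...)` reports `true` (the `TimeoutSketch` class name is made up for illustration):

```java
import java.time.Duration;

import io.lettuce.core.ClientOptions;
import io.lettuce.core.TimeoutOptions;
import io.lettuce.core.protocol.CommandExpiryWriter;

class TimeoutSketch {

    static void demo() {
        ClientOptions options = ClientOptions.builder()
                .timeoutOptions(TimeoutOptions.enabled(Duration.ofSeconds(2)))
                .build();

        System.out.println(CommandExpiryWriter.isSupported(options)); // true
    }
}
```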
+ */ +package io.lettuce.core.protocol; + +import static io.lettuce.core.ConnectionEvents.Reset; + +import java.io.IOException; +import java.net.SocketAddress; +import java.nio.charset.Charset; +import java.util.*; +import java.util.concurrent.atomic.AtomicLong; + +import io.lettuce.core.ClientOptions; +import io.lettuce.core.RedisConnectionException; +import io.lettuce.core.RedisException; +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.internal.LettuceSets; +import io.lettuce.core.output.CommandOutput; +import io.lettuce.core.resource.ClientResources; +import io.lettuce.core.tracing.TraceContext; +import io.lettuce.core.tracing.TraceContextProvider; +import io.lettuce.core.tracing.Tracer; +import io.lettuce.core.tracing.Tracing; +import io.netty.buffer.ByteBuf; +import io.netty.channel.*; +import io.netty.channel.local.LocalAddress; +import io.netty.util.Recycler; +import io.netty.util.concurrent.Future; +import io.netty.util.concurrent.GenericFutureListener; +import io.netty.util.internal.logging.InternalLogLevel; +import io.netty.util.internal.logging.InternalLogger; +import io.netty.util.internal.logging.InternalLoggerFactory; + +/** + * A netty {@link ChannelHandler} responsible for writing redis commands and reading responses from the server. + * + * @author Will Glozer + * @author Mark Paluch + * @author Jongyeol Choi + * @author Grzegorz Szpak + * @author Daniel Albuquerque + * @author Gavin Cook + */ +public class CommandHandler extends ChannelDuplexHandler implements HasQueuedCommands { + + /** + * When we encounter an unexpected IOException we look for these {@link Throwable#getMessage() messages} (because we have no + * better way to distinguish) and log them at DEBUG rather than WARN, since they are generally caused by unclean client + * disconnects rather than an actual problem. + */ + static final Set SUPPRESS_IO_EXCEPTION_MESSAGES = LettuceSets.unmodifiableSet("Connection reset by peer", + "Broken pipe", "Connection timed out"); + + private static final InternalLogger logger = InternalLoggerFactory.getInstance(CommandHandler.class); + private static final AtomicLong COMMAND_HANDLER_COUNTER = new AtomicLong(); + + private final ClientOptions clientOptions; + private final ClientResources clientResources; + private final Endpoint endpoint; + + private final ArrayDeque> stack = new ArrayDeque<>(); + private final long commandHandlerId = COMMAND_HANDLER_COUNTER.incrementAndGet(); + + private final boolean traceEnabled = logger.isTraceEnabled(); + private final boolean debugEnabled = logger.isDebugEnabled(); + private final boolean latencyMetricsEnabled; + private final boolean tracingEnabled; + private final boolean includeCommandArgsInSpanTags; + private final float discardReadBytesRatio; + private final boolean boundedQueues; + private final BackpressureSource backpressureSource = new BackpressureSource(); + + private RedisStateMachine rsm; + private Channel channel; + private ByteBuf buffer; + private LifecycleState lifecycleState = LifecycleState.NOT_CONNECTED; + private String logPrefix; + private PristineFallbackCommand fallbackCommand; + private boolean pristine; + private Tracing.Endpoint tracedEndpoint; + + /** + * Initialize a new instance that handles commands from the supplied queue. + * + * @param clientOptions client options for this connection, must not be {@literal null} + * @param clientResources client resources for this connection, must not be {@literal null} + * @param endpoint must not be {@literal null}. 
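// Editor's note (illustrative sketch, not part of the patch): the constructor below derives
// discardReadBytesRatio from ClientOptions#getBufferUsageRatio(). For example, a ratio of 3
// yields a threshold of 0.75, i.e. accumulated read bytes are discarded once three quarters of
// the buffer have been consumed:
class BufferRatioSketch {

    // discardThreshold(3.0f) == 0.75f
    static float discardThreshold(float bufferUsageRatio) {
        return bufferUsageRatio / (bufferUsageRatio + 1);
    }
}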
+ */ + public CommandHandler(ClientOptions clientOptions, ClientResources clientResources, Endpoint endpoint) { + + LettuceAssert.notNull(clientOptions, "ClientOptions must not be null"); + LettuceAssert.notNull(clientResources, "ClientResources must not be null"); + LettuceAssert.notNull(endpoint, "RedisEndpoint must not be null"); + + this.clientOptions = clientOptions; + this.clientResources = clientResources; + this.endpoint = endpoint; + this.latencyMetricsEnabled = clientResources.commandLatencyCollector().isEnabled(); + this.boundedQueues = clientOptions.getRequestQueueSize() != Integer.MAX_VALUE; + + Tracing tracing = clientResources.tracing(); + + this.tracingEnabled = tracing.isEnabled(); + this.includeCommandArgsInSpanTags = tracing.includeCommandArgsInSpanTags(); + + float bufferUsageRatio = clientOptions.getBufferUsageRatio(); + this.discardReadBytesRatio = bufferUsageRatio / (bufferUsageRatio + 1); + } + + public Queue> getStack() { + return stack; + } + + protected void setState(LifecycleState lifecycleState) { + + if (this.lifecycleState != LifecycleState.CLOSED) { + this.lifecycleState = lifecycleState; + } + } + + @Override + public Collection> drainQueue() { + return drainCommands(stack); + } + + protected LifecycleState getState() { + return lifecycleState; + } + + public boolean isClosed() { + return lifecycleState == LifecycleState.CLOSED; + } + + /** + * @see io.netty.channel.ChannelInboundHandlerAdapter#channelRegistered(io.netty.channel.ChannelHandlerContext) + */ + @Override + public void channelRegistered(ChannelHandlerContext ctx) throws Exception { + + if (isClosed()) { + logger.debug("{} Dropping register for a closed channel", logPrefix()); + } + + channel = ctx.channel(); + + if (debugEnabled) { + logPrefix = null; + logger.debug("{} channelRegistered()", logPrefix()); + } + + tracedEndpoint = clientResources.tracing().createEndpoint(ctx.channel().remoteAddress()); + logPrefix = null; + pristine = true; + fallbackCommand = null; + + setState(LifecycleState.REGISTERED); + + buffer = ctx.alloc().buffer(8192 * 8); + rsm = new RedisStateMachine(ctx.alloc()); + ctx.fireChannelRegistered(); + } + + /** + * @see io.netty.channel.ChannelInboundHandlerAdapter#channelUnregistered(io.netty.channel.ChannelHandlerContext) + */ + @Override + public void channelUnregistered(ChannelHandlerContext ctx) throws Exception { + + if (debugEnabled) { + logger.debug("{} channelUnregistered()", logPrefix()); + } + + if (channel != null && ctx.channel() != channel) { + logger.debug("{} My channel and ctx.channel mismatch. 
Propagating event to other listeners", logPrefix()); + ctx.fireChannelUnregistered(); + return; + } + + channel = null; + buffer.release(); + rsm.close(); + rsm = null; + + reset(); + + setState(LifecycleState.CLOSED); + + ctx.fireChannelUnregistered(); + } + + /** + * @see io.netty.channel.ChannelInboundHandlerAdapter#userEventTriggered(io.netty.channel.ChannelHandlerContext, Object) + */ + @Override + public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception { + + if (evt == EnableAutoRead.INSTANCE) { + channel.config().setAutoRead(true); + } else if (evt instanceof Reset) { + reset(); + } + + super.userEventTriggered(ctx, evt); + } + + @Override + public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception { + + InternalLogLevel logLevel = InternalLogLevel.WARN; + + if (!stack.isEmpty()) { + RedisCommand command = stack.poll(); + if (debugEnabled) { + logger.debug("{} Storing exception in {}", logPrefix(), command); + } + logLevel = InternalLogLevel.DEBUG; + + try { + command.completeExceptionally(cause); + } catch (Exception ex) { + logger.warn("{} Unexpected exception during command completion exceptionally: {}", logPrefix, ex.toString(), + ex); + } + } + + if (channel == null || !channel.isActive() || !isConnected()) { + + if (debugEnabled) { + logger.debug("{} Storing exception in connectionError", logPrefix()); + } + + logLevel = InternalLogLevel.DEBUG; + endpoint.notifyException(cause); + } + + if (cause instanceof IOException && logLevel.ordinal() > InternalLogLevel.INFO.ordinal()) { + logLevel = InternalLogLevel.INFO; + if (SUPPRESS_IO_EXCEPTION_MESSAGES.contains(cause.getMessage())) { + logLevel = InternalLogLevel.DEBUG; + } + } + + logger.log(logLevel, "{} Unexpected exception during request: {}", logPrefix, cause.toString(), cause); + } + + /** + * @see io.netty.channel.ChannelInboundHandlerAdapter#channelActive(io.netty.channel.ChannelHandlerContext) + */ + @Override + public void channelActive(ChannelHandlerContext ctx) throws Exception { + + if (debugEnabled) { + logger.debug("{} channelActive()", logPrefix()); + } + + setState(LifecycleState.CONNECTED); + + endpoint.notifyChannelActive(ctx.channel()); + super.channelActive(ctx); + + if (debugEnabled) { + logger.debug("{} channelActive() done", logPrefix()); + } + } + + private static List drainCommands(Queue source) { + + List target = new ArrayList<>(source.size()); + + T cmd; + while ((cmd = source.poll()) != null) { + target.add(cmd); + } + + return target; + } + + /** + * @see io.netty.channel.ChannelInboundHandlerAdapter#channelInactive(io.netty.channel.ChannelHandlerContext) + */ + @Override + public void channelInactive(ChannelHandlerContext ctx) throws Exception { + + if (debugEnabled) { + logger.debug("{} channelInactive()", logPrefix()); + } + + if (channel != null && ctx.channel() != channel) { + logger.debug("{} My channel and ctx.channel mismatch. 
Propagating event to other listeners.", logPrefix()); + super.channelInactive(ctx); + return; + } + + tracedEndpoint = null; + setState(LifecycleState.DISCONNECTED); + setState(LifecycleState.DEACTIVATING); + + endpoint.notifyChannelInactive(ctx.channel()); + endpoint.notifyDrainQueuedCommands(this); + + setState(LifecycleState.DEACTIVATED); + + PristineFallbackCommand command = this.fallbackCommand; + if (isProtectedMode(command)) { + onProtectedMode(command.getOutput().getError()); + } + + if (debugEnabled) { + logger.debug("{} channelInactive() done", logPrefix()); + } + + super.channelInactive(ctx); + } + + /** + * @see io.netty.channel.ChannelDuplexHandler#write(io.netty.channel.ChannelHandlerContext, java.lang.Object, + * io.netty.channel.ChannelPromise) + */ + @Override + @SuppressWarnings("unchecked") + public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception { + + if (debugEnabled) { + logger.debug("{} write(ctx, {}, promise)", logPrefix(), msg); + } + + if (msg instanceof RedisCommand) { + writeSingleCommand(ctx, (RedisCommand) msg, promise); + return; + } + + if (msg instanceof List) { + + List> batch = (List>) msg; + + if (batch.size() == 1) { + + writeSingleCommand(ctx, batch.get(0), promise); + return; + } + + writeBatch(ctx, batch, promise); + return; + } + + if (msg instanceof Collection) { + writeBatch(ctx, (Collection>) msg, promise); + } + } + + private void writeSingleCommand(ChannelHandlerContext ctx, RedisCommand command, ChannelPromise promise) { + + if (!isWriteable(command)) { + promise.trySuccess(); + return; + } + + addToStack(command, promise); + + if (tracingEnabled && command instanceof CompleteableCommand) { + + TracedCommand traced = CommandWrapper.unwrap(command, TracedCommand.class); + TraceContextProvider provider = (traced == null ? 
clientResources.tracing().initialTraceContextProvider() : traced); + Tracer tracer = clientResources.tracing().getTracerProvider().getTracer(); + TraceContext context = provider.getTraceContext(); + + Tracer.Span span = tracer.nextSpan(context); + span.name(command.getType().name()); + + if (includeCommandArgsInSpanTags && command.getArgs() != null) { + span.tag("redis.args", command.getArgs().toCommandString()); + } + + span.remoteEndpoint(tracedEndpoint); + span.start(); + + if (traced != null) { + traced.setSpan(span); + } + + CompleteableCommand completeableCommand = (CompleteableCommand) command; + completeableCommand.onComplete((o, throwable) -> { + + if (command.getOutput() != null) { + + String error = command.getOutput().getError(); + if (error != null) { + span.tag("error", error); + } else if (throwable != null) { + span.tag("exception", throwable.toString()); + span.error(throwable); + } + } + + span.finish(); + }); + } + + ctx.write(command, promise); + } + + private void writeBatch(ChannelHandlerContext ctx, Collection> batch, ChannelPromise promise) { + + Collection> deduplicated = new LinkedHashSet<>(batch.size(), 1); + + for (RedisCommand command : batch) { + + if (isWriteable(command) && !deduplicated.add(command)) { + deduplicated.remove(command); + command.completeExceptionally( + new RedisException("Attempting to write duplicate command that is already enqueued: " + command)); + } + } + + try { + validateWrite(deduplicated.size()); + } catch (Exception e) { + + for (RedisCommand redisCommand : deduplicated) { + redisCommand.completeExceptionally(e); + } + + throw e; + } + + for (RedisCommand command : deduplicated) { + addToStack(command, promise); + } + + if (!deduplicated.isEmpty()) { + ctx.write(deduplicated, promise); + } else { + promise.trySuccess(); + } + } + + private void addToStack(RedisCommand command, ChannelPromise promise) { + + try { + + validateWrite(1); + + if (command.getOutput() == null) { + // fire&forget commands are excluded from metrics + complete(command); + } + + RedisCommand redisCommand = potentiallyWrapLatencyCommand(command); + + if (promise.isVoid()) { + stack.add(redisCommand); + } else { + promise.addListener(AddToStack.newInstance(stack, redisCommand)); + } + } catch (Exception e) { + command.completeExceptionally(e); + throw e; + } + } + + private void validateWrite(int commands) { + + if (usesBoundedQueues()) { + + // number of maintenance commands (AUTH, CLIENT SETNAME, SELECT, READONLY) should be allowed on top + // of number of user commands to ensure the driver recovers properly from a disconnect + int maxMaintenanceCommands = 5; + int allowedRequestQueueSize = clientOptions.getRequestQueueSize() + maxMaintenanceCommands; + if (stack.size() + commands > allowedRequestQueueSize) + + throw new RedisException("Internal stack size exceeded: " + clientOptions.getRequestQueueSize() + + ". 
Commands are not accepted until the stack size drops."); + } + } + + private boolean usesBoundedQueues() { + return boundedQueues; + } + + private static boolean isWriteable(RedisCommand command) { + return !command.isDone(); + } + + private RedisCommand potentiallyWrapLatencyCommand(RedisCommand command) { + + if (!latencyMetricsEnabled) { + return command; + } + + if (command instanceof WithLatency) { + + WithLatency withLatency = (WithLatency) command; + + withLatency.firstResponse(-1); + withLatency.sent(nanoTime()); + + return command; + } + + LatencyMeteredCommand latencyMeteredCommand = new LatencyMeteredCommand<>(command); + latencyMeteredCommand.firstResponse(-1); + latencyMeteredCommand.sent(nanoTime()); + + return latencyMeteredCommand; + } + + /** + * @see io.netty.channel.ChannelInboundHandlerAdapter#channelRead(io.netty.channel.ChannelHandlerContext, java.lang.Object) + */ + @Override + public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception { + + ByteBuf input = (ByteBuf) msg; + input.touch("CommandHandler.read(…)"); + + if (!input.isReadable() || input.refCnt() == 0) { + logger.warn("{} Input not readable {}, {}", logPrefix(), input.isReadable(), input.refCnt()); + return; + } + + if (debugEnabled) { + logger.debug("{} Received: {} bytes, {} commands in the stack", logPrefix(), input.readableBytes(), stack.size()); + } + + try { + if (buffer.refCnt() < 1) { + logger.warn("{} Ignoring received data for closed or abandoned connection", logPrefix()); + return; + } + + if (debugEnabled && ctx.channel() != channel) { + logger.debug("{} Ignoring data for a non-registered channel {}", logPrefix(), ctx.channel()); + return; + } + + if (traceEnabled) { + logger.trace("{} Buffer: {}", logPrefix(), input.toString(Charset.defaultCharset()).trim()); + } + + buffer.touch("CommandHandler.read(…)"); + buffer.writeBytes(input); + + decode(ctx, buffer); + } finally { + input.release(); + } + } + + protected void decode(ChannelHandlerContext ctx, ByteBuf buffer) throws InterruptedException { + + if (pristine && stack.isEmpty() && buffer.isReadable()) { + + if (debugEnabled) { + logger.debug("{} Received response without a command context (empty stack)", logPrefix()); + } + + if (consumeResponse(buffer)) { + pristine = false; + } + + return; + } + + while (canDecode(buffer)) { + + RedisCommand command = stack.peek(); + if (debugEnabled) { + logger.debug("{} Stack contains: {} commands", logPrefix(), stack.size()); + } + + pristine = false; + + try { + if (!decode(ctx, buffer, command)) { + discardReadBytesIfNecessary(buffer); + return; + } + } catch (Exception e) { + + ctx.close(); + throw e; + } + + if (isProtectedMode(command)) { + onProtectedMode(command.getOutput().getError()); + } else { + + if (canComplete(command)) { + stack.poll(); + + try { + complete(command); + } catch (Exception e) { + logger.warn("{} Unexpected exception during request: {}", logPrefix, e.toString(), e); + } + } + } + + afterDecode(ctx, command); + } + + discardReadBytesIfNecessary(buffer); + } + + /** + * Decoding hook: Can the buffer be decoded to a command. + * + * @param buffer + * @return + */ + protected boolean canDecode(ByteBuf buffer) { + return !stack.isEmpty() && buffer.isReadable(); + } + + /** + * Decoding hook: Can the command be completed. + * + * @param command + * @return + */ + protected boolean canComplete(RedisCommand command) { + return true; + } + + /** + * Decoding hook: Complete a command. 
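// Editor's note (illustrative sketch, not part of the patch): the protected canDecode/canComplete/
// complete methods are extension hooks for specialized handlers. A subclass could, for example,
// defer completion of a particular command type; the policy below is purely hypothetical.
import io.lettuce.core.ClientOptions;
import io.lettuce.core.protocol.CommandHandler;
import io.lettuce.core.protocol.CommandType;
import io.lettuce.core.protocol.Endpoint;
import io.lettuce.core.protocol.RedisCommand;
import io.lettuce.core.resource.ClientResources;

class SelectiveCompletionHandler extends CommandHandler {

    SelectiveCompletionHandler(ClientOptions clientOptions, ClientResources clientResources, Endpoint endpoint) {
        super(clientOptions, clientResources, endpoint);
    }

    @Override
    protected boolean canComplete(RedisCommand<?, ?, ?> command) {
        // hypothetical policy: let SUBSCRIBE replies be completed elsewhere
        return command.getType() != CommandType.SUBSCRIBE;
    }
}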
+ * + * @param command + * @see RedisCommand#complete() + */ + protected void complete(RedisCommand command) { + command.complete(); + } + + private boolean decode(ChannelHandlerContext ctx, ByteBuf buffer, RedisCommand command) { + + if (latencyMetricsEnabled && command instanceof WithLatency) { + + WithLatency withLatency = (WithLatency) command; + if (withLatency.getFirstResponse() == -1) { + withLatency.firstResponse(nanoTime()); + } + + if (!decode0(ctx, buffer, command)) { + return false; + } + + recordLatency(withLatency, command.getType()); + + return true; + } + + return decode0(ctx, buffer, command); + } + + private boolean decode0(ChannelHandlerContext ctx, ByteBuf buffer, RedisCommand command) { + + if (!decode(buffer, command, getCommandOutput(command))) { + + if (command instanceof DemandAware.Sink) { + + DemandAware.Sink sink = (DemandAware.Sink) command; + sink.setSource(backpressureSource); + + ctx.channel().config().setAutoRead(sink.hasDemand()); + } + + return false; + } + + if (!ctx.channel().config().isAutoRead()) { + ctx.channel().config().setAutoRead(true); + } + + return true; + } + + /** + * Decoding hook: Retrieve {@link CommandOutput} for {@link RedisCommand} decoding. + * + * @param command + * @return + * @see RedisCommand#getOutput() + */ + protected CommandOutput getCommandOutput(RedisCommand command) { + return command.getOutput(); + } + + protected boolean decode(ByteBuf buffer, CommandOutput output) { + return rsm.decode(buffer, output); + } + + protected boolean decode(ByteBuf buffer, RedisCommand command, CommandOutput output) { + return rsm.decode(buffer, command, output); + } + + /** + * Consume a response without having a command on the stack. + * + * @param buffer + * @return {@literal true} if the buffer decode was successful. {@literal false} if the buffer was not decoded. + */ + private boolean consumeResponse(ByteBuf buffer) { + + PristineFallbackCommand command = this.fallbackCommand; + + if (command == null || !command.isDone()) { + + if (debugEnabled) { + logger.debug("{} Consuming response using FallbackCommand", logPrefix()); + } + + if (command == null) { + command = new PristineFallbackCommand(); + this.fallbackCommand = command; + } + + if (!decode(buffer, command.getOutput())) { + return false; + } + + if (isProtectedMode(command)) { + onProtectedMode(command.getOutput().getError()); + } + } + + return true; + } + + private boolean isProtectedMode(RedisCommand command) { + return command != null && command.getOutput() != null && command.getOutput().hasError() + && RedisConnectionException.isProtectedMode(command.getOutput().getError()); + } + + private void onProtectedMode(String message) { + + RedisConnectionException exception = new RedisConnectionException(message); + + endpoint.notifyException(exception); + + if (channel != null) { + channel.disconnect(); + } + + stack.forEach(cmd -> cmd.completeExceptionally(exception)); + stack.clear(); + } + + /** + * Hook method called after command completion. 
+ * + * @param ctx + * @param command + */ + protected void afterDecode(ChannelHandlerContext ctx, RedisCommand command) { + } + + private void recordLatency(WithLatency withLatency, ProtocolKeyword commandType) { + + if (withLatency != null && clientResources.commandLatencyCollector().isEnabled() && channel != null + && remote() != null) { + + long firstResponseLatency = withLatency.getFirstResponse() - withLatency.getSent(); + long completionLatency = nanoTime() - withLatency.getSent(); + + clientResources.commandLatencyCollector().recordCommandLatency(local(), remote(), commandType, firstResponseLatency, + completionLatency); + } + } + + private SocketAddress remote() { + return channel.remoteAddress(); + } + + private SocketAddress local() { + if (channel.localAddress() != null) { + return channel.localAddress(); + } + return LocalAddress.ANY; + } + + boolean isConnected() { + return lifecycleState.ordinal() >= LifecycleState.CONNECTED.ordinal() + && lifecycleState.ordinal() < LifecycleState.DISCONNECTED.ordinal(); + } + + private void reset() { + + resetInternals(); + cancelCommands("Reset", drainCommands(stack)); + } + + private void resetInternals() { + + if (rsm != null) { + rsm.reset(); + } + + if (buffer.refCnt() > 0) { + buffer.clear(); + } + } + + private static void cancelCommands(String message, List> toCancel) { + + for (RedisCommand cmd : toCancel) { + if (cmd.getOutput() != null) { + cmd.getOutput().setError(message); + } + cmd.cancel(); + } + } + + private String logPrefix() { + + if (logPrefix != null) { + return logPrefix; + } + + String buffer = "[" + ChannelLogDescriptor.logDescriptor(channel) + ", " + "chid=0x" + + Long.toHexString(commandHandlerId) + ']'; + return logPrefix = buffer; + } + + private static long nanoTime() { + return System.nanoTime(); + } + + /** + * Try to discard read bytes when buffer usage reach a higher usage ratio. + * + * @param buffer + */ + private void discardReadBytesIfNecessary(ByteBuf buffer) { + + float usedRatio = (float) buffer.readerIndex() / buffer.capacity(); + + if (usedRatio >= discardReadBytesRatio && buffer.refCnt() != 0) { + buffer.discardReadBytes(); + } + } + + public enum LifecycleState { + NOT_CONNECTED, REGISTERED, CONNECTED, ACTIVATING, ACTIVE, DISCONNECTED, DEACTIVATING, DEACTIVATED, CLOSED, + } + + /** + * Source for backpressure. + */ + class BackpressureSource implements DemandAware.Source { + + @Override + public void requestMore() { + + if (isConnected() && !isClosed()) { + if (!channel.config().isAutoRead()) { + channel.pipeline().fireUserEventTriggered(EnableAutoRead.INSTANCE); + } + } + } + } + + enum EnableAutoRead { + INSTANCE + } + + /** + * Add to stack listener. This listener is pooled and must be {@link #recycle() recycled after usage}. + */ + static class AddToStack implements GenericFutureListener> { + + private static final Recycler RECYCLER = new Recycler() { + @Override + protected AddToStack newObject(Handle handle) { + return new AddToStack(handle); + } + }; + + private final Recycler.Handle handle; + private ArrayDeque stack; + private RedisCommand command; + + AddToStack(Recycler.Handle handle) { + this.handle = handle; + } + + /** + * Allocate a new instance. 
+ * + * @param stack + * @param command + * @return + */ + @SuppressWarnings("unchecked") + static AddToStack newInstance(ArrayDeque stack, RedisCommand command) { + + AddToStack entry = RECYCLER.get(); + + entry.stack = (ArrayDeque) stack; + entry.command = command; + + return entry; + } + + @SuppressWarnings("unchecked") + @Override + public void operationComplete(Future future) { + + try { + if (future.isSuccess()) { + stack.add(command); + } + } finally { + recycle(); + } + } + + private void recycle() { + + this.stack = null; + this.command = null; + + handle.recycle(this); + } + } +} diff --git a/src/main/java/io/lettuce/core/protocol/CommandKeyword.java b/src/main/java/io/lettuce/core/protocol/CommandKeyword.java new file mode 100644 index 0000000000..4ac0f596a7 --- /dev/null +++ b/src/main/java/io/lettuce/core/protocol/CommandKeyword.java @@ -0,0 +1,57 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.protocol; + +import java.nio.charset.StandardCharsets; + +/** + * Keyword modifiers for redis commands. + * + * @author Will Glozer + * @author Mark Paluch + * @author Zhang Jessey + */ +public enum CommandKeyword implements ProtocolKeyword { + + ADDR, ADDSLOTS, AFTER, AGGREGATE, ALPHA, AND, ASK, ASC, ASYNC, BEFORE, BLOCK, BUMPEPOCH, + + BY, CHANNELS, COPY, COUNT, COUNTKEYSINSLOT, CONSUMERS, CREATE, DELSLOTS, DESC, SOFT, HARD, ENCODING, + + FAILOVER, FORGET, FLUSH, FORCE, FLUSHSLOTS, GETNAME, GETKEYSINSLOT, GROUP, GROUPS, HTSTATS, ID, IDLE, + + IDLETIME, JUSTID, KILL, KEYSLOT, LEN, LIMIT, LIST, LOAD, MATCH, + + MAX, MAXLEN, MEET, MIN, MOVED, NO, NOACK, NODE, NODES, NOSAVE, NOT, NUMSUB, NUMPAT, ONE, OR, PAUSE, + + REFCOUNT, REMOVE, RELOAD, REPLACE, REPLICATE, RESET, + + RESETSTAT, RESTART, RETRYCOUNT, REWRITE, SAVECONFIG, SDSLEN, SETNAME, SETSLOT, SLOTS, STABLE, + + MIGRATING, IMPORTING, SKIPME, SLAVES, STREAM, STORE, SUM, SEGFAULT, UNBLOCK, WEIGHTS, + + WITHSCORES, XOR, USAGE; + + public final byte[] bytes; + + private CommandKeyword() { + bytes = name().getBytes(StandardCharsets.US_ASCII); + } + + @Override + public byte[] getBytes() { + return bytes; + } +} diff --git a/src/main/java/io/lettuce/core/protocol/CommandType.java b/src/main/java/io/lettuce/core/protocol/CommandType.java new file mode 100644 index 0000000000..d0c87f3b3c --- /dev/null +++ b/src/main/java/io/lettuce/core/protocol/CommandType.java @@ -0,0 +1,119 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.protocol; + +import java.nio.charset.StandardCharsets; + +/** + * Redis commands. + * + * @author Will Glozer + * @author Mark Paluch + * @author Zhang Jessey + */ +public enum CommandType implements ProtocolKeyword { + + // Authentication + + ACL, AUTH, + + // Connection + + ECHO, HELLO, PING, QUIT, READONLY, READWRITE, SELECT, SWAPDB, + + // Server + + BGREWRITEAOF, BGSAVE, CLIENT, COMMAND, CONFIG, DBSIZE, DEBUG, FLUSHALL, FLUSHDB, INFO, MYID, LASTSAVE, ROLE, MONITOR, SAVE, SHUTDOWN, SLAVEOF, SLOWLOG, SYNC, MEMORY, + + // Keys + + DEL, DUMP, EXISTS, EXPIRE, EXPIREAT, KEYS, MIGRATE, MOVE, OBJECT, PERSIST, PEXPIRE, PEXPIREAT, PTTL, RANDOMKEY, RENAME, RENAMENX, RESTORE, TOUCH, TTL, TYPE, SCAN, UNLINK, + + // String + + APPEND, GET, GETRANGE, GETSET, MGET, MSET, MSETNX, SET, SETEX, PSETEX, SETNX, SETRANGE, STRLEN, + + // Numeric + + DECR, DECRBY, INCR, INCRBY, INCRBYFLOAT, + + // List + + BLPOP, BRPOP, BRPOPLPUSH, LINDEX, LINSERT, LLEN, LPOP, LPUSH, LPUSHX, LRANGE, LREM, LSET, LTRIM, RPOP, RPOPLPUSH, RPUSH, RPUSHX, SORT, + + // Hash + + HDEL, HEXISTS, HGET, HGETALL, HINCRBY, HINCRBYFLOAT, HKEYS, HLEN, HSTRLEN, HMGET, HMSET, HSET, HSETNX, HVALS, HSCAN, + + // Transaction + + DISCARD, EXEC, MULTI, UNWATCH, WATCH, + + // HyperLogLog + + PFADD, PFCOUNT, PFMERGE, + + // Pub/Sub + + PSUBSCRIBE, PUBLISH, PUNSUBSCRIBE, SUBSCRIBE, UNSUBSCRIBE, PUBSUB, + + // Sets + + SADD, SCARD, SDIFF, SDIFFSTORE, SINTER, SINTERSTORE, SISMEMBER, SMEMBERS, SMOVE, SPOP, SRANDMEMBER, SREM, SUNION, SUNIONSTORE, SSCAN, + + // Sorted Set + + BZPOPMIN, BZPOPMAX, ZADD, ZCARD, ZCOUNT, ZINCRBY, ZINTERSTORE, ZPOPMIN, ZPOPMAX, ZRANGE, ZRANGEBYSCORE, ZRANK, ZREM, ZREMRANGEBYRANK, ZREMRANGEBYSCORE, ZREVRANGE, ZREVRANGEBYLEX, ZREVRANGEBYSCORE, ZREVRANK, ZSCORE, ZUNIONSTORE, ZSCAN, ZLEXCOUNT, ZREMRANGEBYLEX, ZRANGEBYLEX, + + // Scripting + + EVAL, EVALSHA, SCRIPT, + + // Bits + + BITCOUNT, BITFIELD, BITOP, GETBIT, SETBIT, BITPOS, + + // Geo + + GEOADD, GEORADIUS, GEORADIUS_RO, GEORADIUSBYMEMBER, GEORADIUSBYMEMBER_RO, GEOENCODE, GEODECODE, GEOPOS, GEODIST, GEOHASH, + + // Stream + + XACK, XADD, XCLAIM, XDEL, XGROUP, XINFO, XLEN, XPENDING, XRANGE, XREVRANGE, XREAD, XREADGROUP, XTRIM, + + // Others + + TIME, WAIT, + + // SENTINEL + + SENTINEL, + + // CLUSTER + + ASKING, CLUSTER; + + public final byte[] bytes; + + CommandType() { + bytes = name().getBytes(StandardCharsets.US_ASCII); + } + + @Override + public byte[] getBytes() { + return bytes; + } +} diff --git a/src/main/java/io/lettuce/core/protocol/CommandWrapper.java b/src/main/java/io/lettuce/core/protocol/CommandWrapper.java new file mode 100644 index 0000000000..e9da312b81 --- /dev/null +++ b/src/main/java/io/lettuce/core/protocol/CommandWrapper.java @@ -0,0 +1,288 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
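// Editor's note (illustrative sketch, not part of the patch): CommandType and CommandKeyword
// implement ProtocolKeyword and can be combined with the dispatch API to issue commands without
// a dedicated command method. Key names and the local Redis URI are placeholder assumptions.
import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.codec.StringCodec;
import io.lettuce.core.output.StatusOutput;
import io.lettuce.core.output.ValueOutput;
import io.lettuce.core.protocol.CommandArgs;
import io.lettuce.core.protocol.CommandKeyword;
import io.lettuce.core.protocol.CommandType;

class DispatchSketch {

    public static void main(String[] args) {

        StatefulRedisConnection<String, String> connection = RedisClient.create("redis://localhost").connect();

        // SET name lettuce
        String ok = connection.sync().dispatch(CommandType.SET, new StatusOutput<>(StringCodec.UTF8),
                new CommandArgs<>(StringCodec.UTF8).addKey("name").addValue("lettuce"));

        // OBJECT ENCODING name
        String encoding = connection.sync().dispatch(CommandType.OBJECT, new ValueOutput<>(StringCodec.UTF8),
                new CommandArgs<>(StringCodec.UTF8).add(CommandKeyword.ENCODING).addKey("name"));

        System.out.println(ok + " / " + encoding);
        connection.close();
    }
}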
+ */ +package io.lettuce.core.protocol; + +import java.util.concurrent.CancellationException; +import java.util.concurrent.atomic.AtomicReferenceFieldUpdater; +import java.util.function.BiConsumer; +import java.util.function.Consumer; + +import io.lettuce.core.output.CommandOutput; +import io.netty.buffer.ByteBuf; + +/** + * Wrapper for a command. + * + * @author Mark Paluch + */ +public class CommandWrapper implements RedisCommand, CompleteableCommand, DecoratedCommand { + + @SuppressWarnings({ "rawtypes", "unchecked" }) + private static final AtomicReferenceFieldUpdater ONCOMPLETE = AtomicReferenceFieldUpdater + .newUpdater(CommandWrapper.class, Object[].class, "onComplete"); + + @SuppressWarnings({ "rawtypes", "unchecked" }) + private static final Object[] EMPTY = new Object[0]; + + protected final RedisCommand command; + + // accessed via AtomicReferenceFieldUpdater. + @SuppressWarnings("unused") + private volatile Object[] onComplete = EMPTY; + + public CommandWrapper(RedisCommand command) { + this.command = command; + } + + @Override + public CommandOutput getOutput() { + return command.getOutput(); + } + + @Override + @SuppressWarnings({ "rawtypes", "unchecked" }) + public void complete() { + + command.complete(); + + Object[] consumers = ONCOMPLETE.get(this); + if (!expireCallbacks(consumers)) { + return; + } + + for (Object callback : consumers) { + + if (callback instanceof Consumer) { + Consumer consumer = (Consumer) callback; + if (getOutput() != null) { + consumer.accept(getOutput().get()); + } else { + consumer.accept(null); + } + } + + if (callback instanceof BiConsumer) { + BiConsumer consumer = (BiConsumer) callback; + if (getOutput() != null) { + consumer.accept(getOutput().get(), null); + } else { + consumer.accept(null, null); + } + } + } + } + + @Override + public void cancel() { + + command.cancel(); + notifyBiConsumer(new CancellationException()); + } + + @Override + public boolean completeExceptionally(Throwable throwable) { + + boolean result = command.completeExceptionally(throwable); + notifyBiConsumer(throwable); + + return result; + } + + @SuppressWarnings({ "rawtypes", "unchecked" }) + private void notifyBiConsumer(Throwable exception) { + + Object[] consumers = ONCOMPLETE.get(this); + + if (!expireCallbacks(consumers)) { + return; + } + + for (Object callback : consumers) { + + if (!(callback instanceof BiConsumer)) { + continue; + } + + BiConsumer consumer = (BiConsumer) callback; + if (getOutput() != null) { + consumer.accept(getOutput().get(), exception); + } else { + consumer.accept(null, exception); + } + } + } + + private boolean expireCallbacks(Object[] consumers) { + return consumers != EMPTY && ONCOMPLETE.compareAndSet(this, consumers, EMPTY); + } + + @Override + public CommandArgs getArgs() { + return command.getArgs(); + } + + @Override + public ProtocolKeyword getType() { + return command.getType(); + } + + @Override + public void encode(ByteBuf buf) { + command.encode(buf); + } + + @Override + public boolean isCancelled() { + return command.isCancelled(); + } + + @Override + public void setOutput(CommandOutput output) { + command.setOutput(output); + } + + @Override + public void onComplete(Consumer action) { + addOnComplete(action); + } + + @Override + public void onComplete(BiConsumer action) { + addOnComplete(action); + } + + @SuppressWarnings({ "rawtypes", "unchecked" }) + private void addOnComplete(Object action) { + + for (;;) { + + Object[] existing = ONCOMPLETE.get(this); + Object[] updated = new Object[existing.length + 1]; + 
System.arraycopy(existing, 0, updated, 0, existing.length); + updated[existing.length] = action; + + if (ONCOMPLETE.compareAndSet(this, existing, updated)) { + return; + } + } + } + + @Override + public String toString() { + StringBuilder sb = new StringBuilder(); + sb.append(getClass().getSimpleName()); + sb.append(" [type=").append(getType()); + sb.append(", output=").append(getOutput()); + sb.append(", commandType=").append(command.getClass().getName()); + sb.append(']'); + return sb.toString(); + } + + @Override + public boolean isDone() { + return command.isDone(); + } + + @Override + public RedisCommand getDelegate() { + return command; + } + + /** + * Unwrap a wrapped command. + * + * @param wrapped + * @return + */ + @SuppressWarnings("unchecked") + public static RedisCommand unwrap(RedisCommand wrapped) { + + RedisCommand result = wrapped; + while (result instanceof DecoratedCommand) { + result = ((DecoratedCommand) result).getDelegate(); + } + + return result; + } + + /** + * Returns an object that implements the given interface to allow access to non-standard methods, or standard methods not + * exposed by the proxy. + * + * If the receiver implements the interface then the result is the receiver or a proxy for the receiver. If the receiver is + * a wrapper and the wrapped object implements the interface then the result is the wrapped object or a proxy for the + * wrapped object. Otherwise return the the result of calling unwrap recursively on the wrapped object or a + * proxy for that result. If the receiver is not a wrapper and does not implement the interface, then an {@literal null} is + * returned. + * + * @param wrapped + * @param iface A Class defining an interface that the result must implement. + * @return the unwrapped instance or {@literal null}. + * @since 5.1 + */ + @SuppressWarnings("unchecked") + public static R unwrap(RedisCommand wrapped, Class iface) { + + RedisCommand result = wrapped; + + if (iface.isInstance(wrapped)) { + return iface.cast(wrapped); + } + + while (result instanceof DecoratedCommand) { + result = ((DecoratedCommand) result).getDelegate(); + + if (iface.isInstance(result)) { + return iface.cast(result); + } + } + + return null; + } + + @Override + public boolean equals(Object o) { + + if (this == o) + return true; + if (!(o instanceof RedisCommand)) { + return false; + } + + RedisCommand left = command; + while (left instanceof DecoratedCommand) { + left = CommandWrapper.unwrap(left); + } + + RedisCommand right = (RedisCommand) o; + while (right instanceof DecoratedCommand) { + right = CommandWrapper.unwrap(right); + } + + return left == right; + } + + @Override + public int hashCode() { + + RedisCommand toHash = command; + while (toHash instanceof DecoratedCommand) { + toHash = CommandWrapper.unwrap(toHash); + } + + return toHash != null ? toHash.hashCode() : 0; + } +} diff --git a/src/main/java/io/lettuce/core/protocol/CompleteableCommand.java b/src/main/java/io/lettuce/core/protocol/CompleteableCommand.java new file mode 100644 index 0000000000..ad57fd0401 --- /dev/null +++ b/src/main/java/io/lettuce/core/protocol/CompleteableCommand.java @@ -0,0 +1,44 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.protocol; + +import java.util.function.BiConsumer; +import java.util.function.Consumer; + +/** + * Extension to commands that provide registration of command completion callbacks. Completion callbacks allow execution of + * tasks after successive, failed or any completion outcome. A callback must be non-blocking. Callback registration gives no + * guarantee over callback ordering. + * + * @author Mark Paluch + */ +public interface CompleteableCommand { + + /** + * Register a command callback for successive command completion that notifies the callback with the command result. + * + * @param action must not be {@literal null}. + */ + void onComplete(Consumer action); + + /** + * Register a command callback for command completion that notifies the callback with the command result or the failure + * resulting from command completion. + * + * @param action must not be {@literal null}. + */ + void onComplete(BiConsumer action); +} diff --git a/src/main/java/io/lettuce/core/protocol/ConnectionFacade.java b/src/main/java/io/lettuce/core/protocol/ConnectionFacade.java new file mode 100644 index 0000000000..9e9ea88dc0 --- /dev/null +++ b/src/main/java/io/lettuce/core/protocol/ConnectionFacade.java @@ -0,0 +1,42 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.protocol; + +/** + * Represents a stateful connection facade. Connections can be activated and deactivated and particular actions can be executed + * upon connection activation/deactivation. + * + * @author Mark Paluch + */ +public interface ConnectionFacade { + + /** + * Callback for a connection activated event. This method may invoke non-blocking connection operations to prepare the + * connection after the connection was established. + */ + void activated(); + + /** + * Callback for a connection deactivated event. This method may invoke non-blocking operations to cleanup the connection + * after disconnection. + */ + void deactivated(); + + /** + * Reset the connection state. + */ + void reset(); +} diff --git a/src/main/java/io/lettuce/core/protocol/ConnectionInitializer.java b/src/main/java/io/lettuce/core/protocol/ConnectionInitializer.java new file mode 100644 index 0000000000..3ca04024fd --- /dev/null +++ b/src/main/java/io/lettuce/core/protocol/ConnectionInitializer.java @@ -0,0 +1,38 @@ +/* + * Copyright 2019-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
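// Editor's note (illustrative sketch, not part of the patch): commands issued through the async
// API are backed by command objects that implement CompleteableCommand, so a completion callback
// can be attached without blocking. The instanceof check hedges against other RedisFuture
// implementations; the key name is a placeholder.
import io.lettuce.core.RedisFuture;
import io.lettuce.core.api.async.RedisAsyncCommands;
import io.lettuce.core.protocol.CompleteableCommand;

class CompletionCallbackSketch {

    @SuppressWarnings("unchecked")
    static void attachCallback(RedisAsyncCommands<String, String> async) {

        RedisFuture<String> future = async.get("key");

        if (future instanceof CompleteableCommand) {
            ((CompleteableCommand<String>) future)
                    .onComplete((value, error) -> System.out.println("completed: " + value + " / " + error));
        }
    }
}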
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.protocol; + +import java.util.concurrent.CompletionStage; + +import io.netty.channel.Channel; + +/** + * Initialize a connection to prepare it for usage. + * + * @author Mark Paluch + * @since 6.0 + */ +public interface ConnectionInitializer { + + /** + * Initialize the connection for usage. This method is invoked after establishing the transport connection and before the + * connection is used for user-space commands. + * + * @param channel the {@link Channel} to initialize. + * @return the {@link CompletionStage} that completes once the channel is fully initialized. + */ + CompletionStage initialize(Channel channel); +} diff --git a/src/main/java/io/lettuce/core/protocol/ConnectionWatchdog.java b/src/main/java/io/lettuce/core/protocol/ConnectionWatchdog.java new file mode 100644 index 0000000000..e7bc5f9274 --- /dev/null +++ b/src/main/java/io/lettuce/core/protocol/ConnectionWatchdog.java @@ -0,0 +1,387 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.protocol; + +import java.net.SocketAddress; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicBoolean; + +import reactor.core.publisher.Mono; +import reactor.util.function.Tuple2; +import io.lettuce.core.ClientOptions; +import io.lettuce.core.ConnectionEvents; +import io.lettuce.core.event.EventBus; +import io.lettuce.core.event.connection.ReconnectFailedEvent; +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.resource.Delay; +import io.lettuce.core.resource.Delay.StatefulDelay; +import io.netty.bootstrap.Bootstrap; +import io.netty.channel.*; +import io.netty.channel.group.ChannelGroup; +import io.netty.channel.local.LocalAddress; +import io.netty.util.Timeout; +import io.netty.util.Timer; +import io.netty.util.concurrent.EventExecutorGroup; +import io.netty.util.internal.logging.InternalLogLevel; +import io.netty.util.internal.logging.InternalLogger; +import io.netty.util.internal.logging.InternalLoggerFactory; + +/** + * A netty {@link ChannelHandler} responsible for monitoring the channel and reconnecting when the connection is lost. 
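// Editor's note (illustrative sketch, not part of the patch): ConnectionInitializer declares a
// single abstract method, so a trivial initializer (e.g. for tests or a transport that needs no
// handshake) can be written as a lambda. Real initializers issue handshake commands before
// completing the returned stage.
import java.util.concurrent.CompletableFuture;

import io.lettuce.core.protocol.ConnectionInitializer;

class NoOpInitializerSketch {

    static final ConnectionInitializer NO_OP = channel -> CompletableFuture.<Void> completedFuture(null);
}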
+ * + * @author Will Glozer + * @author Mark Paluch + * @author Koji Lin + */ +@ChannelHandler.Sharable +public class ConnectionWatchdog extends ChannelInboundHandlerAdapter { + + private static final long LOGGING_QUIET_TIME_MS = TimeUnit.MILLISECONDS.convert(5, TimeUnit.SECONDS); + private static final InternalLogger logger = InternalLoggerFactory.getInstance(ConnectionWatchdog.class); + + private final Delay reconnectDelay; + private final Bootstrap bootstrap; + private final EventExecutorGroup reconnectWorkers; + private final ReconnectionHandler reconnectionHandler; + private final ReconnectionListener reconnectionListener; + private final Timer timer; + private final EventBus eventBus; + + private Channel channel; + private SocketAddress remoteAddress; + private long lastReconnectionLogging = -1; + private String logPrefix; + + private final AtomicBoolean reconnectSchedulerSync; + private volatile int attempts; + private volatile boolean armed; + private volatile boolean listenOnChannelInactive; + private volatile Timeout reconnectScheduleTimeout; + + /** + * Create a new watchdog that adds to new connections to the supplied {@link ChannelGroup} and establishes a new + * {@link Channel} when disconnected, while reconnect is true. The socketAddressSupplier can supply the reconnect address. + * + * @param reconnectDelay reconnect delay, must not be {@literal null} + * @param clientOptions client options for the current connection, must not be {@literal null} + * @param bootstrap Configuration for new channels, must not be {@literal null} + * @param timer Timer used for delayed reconnect, must not be {@literal null} + * @param reconnectWorkers executor group for reconnect tasks, must not be {@literal null} + * @param socketAddressSupplier the socket address supplier to obtain an address for reconnection, may be {@literal null} + * @param reconnectionListener the reconnection listener, must not be {@literal null} + * @param connectionFacade the connection facade, must not be {@literal null} + * @param eventBus Event bus to emit reconnect events. 
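// Editor's note (illustrative sketch, not part of the patch): the reconnect Delay consumed by
// ConnectionWatchdog is supplied through ClientResources; exponential backoff is one of the
// provided strategies. The Redis URI is a placeholder assumption.
import io.lettuce.core.RedisClient;
import io.lettuce.core.resource.ClientResources;
import io.lettuce.core.resource.DefaultClientResources;
import io.lettuce.core.resource.Delay;

class ReconnectDelaySketch {

    public static void main(String[] args) {

        ClientResources resources = DefaultClientResources.builder()
                .reconnectDelay(Delay.exponential()) // capped exponential backoff between attempts
                .build();

        RedisClient client = RedisClient.create(resources, "redis://localhost");
    }
}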
+ */ + public ConnectionWatchdog(Delay reconnectDelay, ClientOptions clientOptions, Bootstrap bootstrap, Timer timer, + EventExecutorGroup reconnectWorkers, Mono socketAddressSupplier, + ReconnectionListener reconnectionListener, ConnectionFacade connectionFacade, EventBus eventBus) { + + LettuceAssert.notNull(reconnectDelay, "Delay must not be null"); + LettuceAssert.notNull(clientOptions, "ClientOptions must not be null"); + LettuceAssert.notNull(bootstrap, "Bootstrap must not be null"); + LettuceAssert.notNull(timer, "Timer must not be null"); + LettuceAssert.notNull(reconnectWorkers, "ReconnectWorkers must not be null"); + LettuceAssert.notNull(socketAddressSupplier, "SocketAddressSupplier must not be null"); + LettuceAssert.notNull(reconnectionListener, "ReconnectionListener must not be null"); + LettuceAssert.notNull(connectionFacade, "ConnectionFacade must not be null"); + LettuceAssert.notNull(eventBus, "EventBus must not be null"); + + this.reconnectDelay = reconnectDelay; + this.bootstrap = bootstrap; + this.timer = timer; + this.reconnectWorkers = reconnectWorkers; + this.reconnectionListener = reconnectionListener; + this.reconnectSchedulerSync = new AtomicBoolean(false); + this.eventBus = eventBus; + + Mono wrappedSocketAddressSupplier = socketAddressSupplier.doOnNext(addr -> remoteAddress = addr) + .onErrorResume(t -> { + + if (logger.isDebugEnabled()) { + logger.warn("Cannot retrieve current address from socketAddressSupplier: " + t.toString() + + ", reusing cached address " + remoteAddress, t); + } else { + logger.warn("Cannot retrieve current address from socketAddressSupplier: " + t.toString() + + ", reusing cached address " + remoteAddress); + } + + return Mono.just(remoteAddress); + }); + + this.reconnectionHandler = new ReconnectionHandler(clientOptions, bootstrap, wrappedSocketAddressSupplier, timer, + reconnectWorkers, connectionFacade); + + resetReconnectDelay(); + } + + void prepareClose() { + + setListenOnChannelInactive(false); + setReconnectSuspended(true); + + Timeout reconnectScheduleTimeout = this.reconnectScheduleTimeout; + if (reconnectScheduleTimeout != null && !reconnectScheduleTimeout.isCancelled()) { + reconnectScheduleTimeout.cancel(); + } + + reconnectionHandler.prepareClose(); + } + + @Override + public void channelActive(ChannelHandlerContext ctx) throws Exception { + + reconnectSchedulerSync.set(false); + channel = ctx.channel(); + reconnectScheduleTimeout = null; + logPrefix = null; + remoteAddress = channel.remoteAddress(); + attempts = 0; + resetReconnectDelay(); + logPrefix = null; + logger.debug("{} channelActive()", logPrefix()); + + super.channelActive(ctx); + } + + @Override + public void channelInactive(ChannelHandlerContext ctx) throws Exception { + + logger.debug("{} channelInactive()", logPrefix()); + if (!armed) { + logger.debug("{} ConnectionWatchdog not armed", logPrefix()); + return; + } + + channel = null; + + if (listenOnChannelInactive && !reconnectionHandler.isReconnectSuspended()) { + scheduleReconnect(); + } else { + logger.debug("{} Reconnect scheduling disabled", logPrefix(), ctx); + } + + super.channelInactive(ctx); + } + + /** + * Enable {@link ConnectionWatchdog} to listen for disconnected events. + */ + void arm() { + this.armed = true; + setListenOnChannelInactive(true); + } + + /** + * Schedule reconnect if channel is not available/not active. 
+ */ + public void scheduleReconnect() { + + logger.debug("{} scheduleReconnect()", logPrefix()); + + if (!isEventLoopGroupActive()) { + logger.debug("isEventLoopGroupActive() == false"); + return; + } + + if (!isListenOnChannelInactive()) { + logger.debug("Skip reconnect scheduling, listener disabled"); + return; + } + + if ((channel == null || !channel.isActive()) && reconnectSchedulerSync.compareAndSet(false, true)) { + + attempts++; + final int attempt = attempts; + int timeout = (int) reconnectDelay.createDelay(attempt).toMillis(); + logger.debug("{} Reconnect attempt {}, delay {}ms", logPrefix(), attempt, timeout); + + this.reconnectScheduleTimeout = timer.newTimeout(it -> { + + reconnectScheduleTimeout = null; + + if (!isEventLoopGroupActive()) { + logger.warn("Cannot execute scheduled reconnect timer, reconnect workers are terminated"); + return; + } + + reconnectWorkers.submit(() -> { + ConnectionWatchdog.this.run(attempt); + return null; + }); + }, timeout, TimeUnit.MILLISECONDS); + + // Set back to null when ConnectionWatchdog#run runs earlier than reconnectScheduleTimeout's assignment. + if (!reconnectSchedulerSync.get()) { + reconnectScheduleTimeout = null; + } + } else { + logger.debug("{} Skipping scheduleReconnect() because I have an active channel", logPrefix()); + } + } + + /** + * Reconnect to the remote address that the closed channel was connected to. This creates a new {@link ChannelPipeline} with + * the same handler instances contained in the old channel's pipeline. + * + * @param attempt attempt counter + * + * @throws Exception when reconnection fails. + */ + public void run(int attempt) throws Exception { + + reconnectSchedulerSync.set(false); + reconnectScheduleTimeout = null; + + if (!isEventLoopGroupActive()) { + logger.debug("isEventLoopGroupActive() == false"); + return; + } + + if (!isListenOnChannelInactive()) { + logger.debug("Skip reconnect scheduling, listener disabled"); + return; + } + + if (isReconnectSuspended()) { + logger.debug("Skip reconnect scheduling, reconnect is suspended"); + return; + } + + boolean shouldLog = shouldLog(); + + InternalLogLevel infoLevel = InternalLogLevel.INFO; + InternalLogLevel warnLevel = InternalLogLevel.WARN; + + if (shouldLog) { + lastReconnectionLogging = System.currentTimeMillis(); + } else { + warnLevel = InternalLogLevel.DEBUG; + infoLevel = InternalLogLevel.DEBUG; + } + + InternalLogLevel warnLevelToUse = warnLevel; + + try { + reconnectionListener.onReconnectAttempt(new ConnectionEvents.Reconnect(attempt)); + logger.log(infoLevel, "Reconnecting, last destination was {}", remoteAddress); + + Tuple2, CompletableFuture> tuple = reconnectionHandler.reconnect(); + CompletableFuture future = tuple.getT1(); + + future.whenComplete((c, t) -> { + + if (c != null && t == null) { + return; + } + + CompletableFuture remoteAddressFuture = tuple.getT2(); + SocketAddress remote = remoteAddress; + if (remoteAddressFuture.isDone() && !remoteAddressFuture.isCompletedExceptionally() + && !remoteAddressFuture.isCancelled()) { + remote = remoteAddressFuture.join(); + } + + String message = String.format("Cannot reconnect to [%s]: %s", remote, + t.getMessage() != null ? 
t.getMessage() : t.toString()); + + if (ReconnectionHandler.isExecutionException(t)) { + if (logger.isDebugEnabled()) { + logger.debug(message, t); + } else { + logger.log(warnLevelToUse, message); + } + } else { + logger.log(warnLevelToUse, message, t); + } + + eventBus.publish(new ReconnectFailedEvent(LocalAddress.ANY, remote, t, attempt)); + + if (!isReconnectSuspended()) { + scheduleReconnect(); + } + }); + } catch (Exception e) { + logger.log(warnLevel, "Cannot reconnect: {}", e.toString()); + eventBus.publish(new ReconnectFailedEvent(LocalAddress.ANY, remoteAddress, e, attempt)); + } + } + + private boolean isEventLoopGroupActive() { + + if (!isEventLoopGroupActive(bootstrap.group()) || !isEventLoopGroupActive(reconnectWorkers)) { + return false; + } + + return true; + } + + private static boolean isEventLoopGroupActive(EventExecutorGroup executorService) { + return !(executorService.isShuttingDown()); + } + + private boolean shouldLog() { + + long quietUntil = lastReconnectionLogging + LOGGING_QUIET_TIME_MS; + return quietUntil <= System.currentTimeMillis(); + } + + /** + * Enable event listener for disconnected events. + * + * @param listenOnChannelInactive {@literal true} to listen for disconnected events. + */ + public void setListenOnChannelInactive(boolean listenOnChannelInactive) { + this.listenOnChannelInactive = listenOnChannelInactive; + } + + public boolean isListenOnChannelInactive() { + return listenOnChannelInactive; + } + + /** + * Suspend reconnection temporarily. Reconnect suspension will interrupt reconnection attempts. + * + * @param reconnectSuspended {@literal true} to suspend reconnection + */ + public void setReconnectSuspended(boolean reconnectSuspended) { + reconnectionHandler.setReconnectSuspended(reconnectSuspended); + } + + public boolean isReconnectSuspended() { + return reconnectionHandler.isReconnectSuspended(); + } + + ReconnectionHandler getReconnectionHandler() { + return reconnectionHandler; + } + + private void resetReconnectDelay() { + if (reconnectDelay instanceof StatefulDelay) { + ((StatefulDelay) reconnectDelay).reset(); + } + } + + private String logPrefix() { + + if (logPrefix != null) { + return logPrefix; + } + + String buffer = "[" + ChannelLogDescriptor.logDescriptor(channel) + ", last known addr=" + remoteAddress + ']'; + return logPrefix = buffer; + } +} diff --git a/src/main/java/io/lettuce/core/protocol/DecoratedCommand.java b/src/main/java/io/lettuce/core/protocol/DecoratedCommand.java new file mode 100644 index 0000000000..228b345a1d --- /dev/null +++ b/src/main/java/io/lettuce/core/protocol/DecoratedCommand.java @@ -0,0 +1,32 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.protocol; + +/** + * A decorated command allowing access to the underlying {@link #getDelegate()}. + * + * @author Mark Paluch + */ +public interface DecoratedCommand { + + /** + * The underlying command. + * + * @return never {@literal null}. 
+ */ + RedisCommand getDelegate(); + +} diff --git a/src/main/java/io/lettuce/core/protocol/DefaultEndpoint.java b/src/main/java/io/lettuce/core/protocol/DefaultEndpoint.java new file mode 100644 index 0000000000..a08c37080f --- /dev/null +++ b/src/main/java/io/lettuce/core/protocol/DefaultEndpoint.java @@ -0,0 +1,1042 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.protocol; + +import static io.lettuce.core.protocol.CommandHandler.SUPPRESS_IO_EXCEPTION_MESSAGES; + +import java.io.IOException; +import java.nio.channels.ClosedChannelException; +import java.util.*; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.atomic.AtomicIntegerFieldUpdater; +import java.util.concurrent.atomic.AtomicLong; +import java.util.function.Consumer; +import java.util.function.Supplier; + +import io.lettuce.core.*; +import io.lettuce.core.internal.Futures; +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.internal.LettuceFactories; +import io.lettuce.core.resource.ClientResources; +import io.netty.channel.Channel; +import io.netty.channel.ChannelFuture; +import io.netty.channel.ChannelFutureListener; +import io.netty.handler.codec.EncoderException; +import io.netty.util.Recycler; +import io.netty.util.concurrent.Future; +import io.netty.util.concurrent.GenericFutureListener; +import io.netty.util.internal.logging.InternalLogLevel; +import io.netty.util.internal.logging.InternalLogger; +import io.netty.util.internal.logging.InternalLoggerFactory; + +/** + * Default {@link Endpoint} implementation. 
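// Editor's note (illustrative sketch, not part of the patch): the queue bound and the
// disconnected-write behaviour enforced by DefaultEndpoint are driven by ClientOptions.
// The values below are examples, not recommended defaults.
import io.lettuce.core.ClientOptions;

class EndpointOptionsSketch {

    static ClientOptions boundedOptions() {

        return ClientOptions.builder()
                .requestQueueSize(10_000)                                                 // bound command buffers
                .disconnectedBehavior(ClientOptions.DisconnectedBehavior.REJECT_COMMANDS) // fail fast while disconnected
                .autoReconnect(true)                                                      // at-least-once write reliability
                .build();
    }
}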
+ * + * @author Mark Paluch + */ +public class DefaultEndpoint implements RedisChannelWriter, Endpoint { + + private static final InternalLogger logger = InternalLoggerFactory.getInstance(DefaultEndpoint.class); + private static final AtomicLong ENDPOINT_COUNTER = new AtomicLong(); + private static final AtomicIntegerFieldUpdater QUEUE_SIZE = AtomicIntegerFieldUpdater + .newUpdater(DefaultEndpoint.class, "queueSize"); + + private static final AtomicIntegerFieldUpdater STATUS = AtomicIntegerFieldUpdater + .newUpdater(DefaultEndpoint.class, "status"); + + private static final int ST_OPEN = 0; + private static final int ST_CLOSED = 1; + + protected volatile Channel channel; + + private final Reliability reliability; + private final ClientOptions clientOptions; + private final ClientResources clientResources; + private final Queue> disconnectedBuffer; + private final Queue> commandBuffer; + private final boolean boundedQueues; + private final boolean rejectCommandsWhileDisconnected; + + private final long endpointId = ENDPOINT_COUNTER.incrementAndGet(); + private final SharedLock sharedLock = new SharedLock(); + private final boolean debugEnabled = logger.isDebugEnabled(); + private final CompletableFuture closeFuture = new CompletableFuture<>(); + + private String logPrefix; + private boolean autoFlushCommands = true; + private boolean inActivation = false; + + private ConnectionWatchdog connectionWatchdog; + private ConnectionFacade connectionFacade; + + private volatile Throwable connectionError; + + // access via QUEUE_SIZE + @SuppressWarnings("unused") + private volatile int queueSize = 0; + + // access via STATUS + @SuppressWarnings("unused") + private volatile int status = ST_OPEN; + + /** + * Create a new {@link DefaultEndpoint}. + * + * @param clientOptions client options for this connection, must not be {@literal null}. + * @param clientResources client resources for this connection, must not be {@literal null}. + */ + public DefaultEndpoint(ClientOptions clientOptions, ClientResources clientResources) { + + LettuceAssert.notNull(clientOptions, "ClientOptions must not be null"); + LettuceAssert.notNull(clientOptions, "ClientResources must not be null"); + + this.clientOptions = clientOptions; + this.clientResources = clientResources; + this.reliability = clientOptions.isAutoReconnect() ? 
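+ // With autoReconnect enabled, written commands stay queued and are retriggered after a reconnect (at-least-once); otherwise they fail on write errors and are not retried (at-most-once).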
Reliability.AT_LEAST_ONCE : Reliability.AT_MOST_ONCE; + this.disconnectedBuffer = LettuceFactories.newConcurrentQueue(clientOptions.getRequestQueueSize()); + this.commandBuffer = LettuceFactories.newConcurrentQueue(clientOptions.getRequestQueueSize()); + this.boundedQueues = clientOptions.getRequestQueueSize() != Integer.MAX_VALUE; + this.rejectCommandsWhileDisconnected = isRejectCommand(clientOptions); + } + + @Override + public void setConnectionFacade(ConnectionFacade connectionFacade) { + this.connectionFacade = connectionFacade; + } + + @Override + public ClientResources getClientResources() { + return clientResources; + } + + @Override + public void setAutoFlushCommands(boolean autoFlush) { + this.autoFlushCommands = autoFlush; + } + + @Override + public RedisCommand write(RedisCommand command) { + + LettuceAssert.notNull(command, "Command must not be null"); + + RedisException validation = validateWrite(1); + if (validation != null) { + command.completeExceptionally(validation); + return command; + } + + try { + sharedLock.incrementWriters(); + + if (inActivation) { + command = processActivationCommand(command); + } + + if (autoFlushCommands) { + + if (isConnected()) { + writeToChannelAndFlush(command); + } else { + writeToDisconnectedBuffer(command); + } + + } else { + writeToBuffer(command); + } + } finally { + sharedLock.decrementWriters(); + if (debugEnabled) { + logger.debug("{} write() done", logPrefix()); + } + } + + return command; + } + + @SuppressWarnings("unchecked") + @Override + public Collection> write(Collection> commands) { + + LettuceAssert.notNull(commands, "Commands must not be null"); + + RedisException validation = validateWrite(commands.size()); + + if (validation != null) { + commands.forEach(it -> it.completeExceptionally(validation)); + return (Collection>) commands; + } + + try { + sharedLock.incrementWriters(); + + if (inActivation) { + commands = processActivationCommands(commands); + } + + if (autoFlushCommands) { + + if (isConnected()) { + writeToChannelAndFlush(commands); + } else { + writeToDisconnectedBuffer(commands); + } + + } else { + writeToBuffer(commands); + } + } finally { + sharedLock.decrementWriters(); + if (debugEnabled) { + logger.debug("{} write() done", logPrefix()); + } + } + + return (Collection>) commands; + } + + private RedisCommand processActivationCommand(RedisCommand command) { + + if (!ActivationCommand.isActivationCommand(command)) { + return new ActivationCommand<>(command); + } + + return command; + } + + private Collection> processActivationCommands( + Collection> commands) { + + Collection> commandsToReturn = new ArrayList<>(commands.size()); + + for (RedisCommand command : commands) { + + if (!ActivationCommand.isActivationCommand(command)) { + command = new ActivationCommand<>(command); + } + + commandsToReturn.add(command); + } + + return commandsToReturn; + } + + private RedisException validateWrite(int commands) { + + if (isClosed()) { + return new RedisException("Connection is closed"); + } + + if (usesBoundedQueues()) { + + boolean connected = isConnected(); + + if (QUEUE_SIZE.get(this) + commands > clientOptions.getRequestQueueSize()) { + return new RedisException("Request queue size exceeded: " + clientOptions.getRequestQueueSize() + + ". Commands are not accepted until the queue size drops."); + } + + if (!connected && disconnectedBuffer.size() + commands > clientOptions.getRequestQueueSize()) { + return new RedisException("Request queue size exceeded: " + clientOptions.getRequestQueueSize() + + ". 
Commands are not accepted until the queue size drops."); + } + + if (connected && commandBuffer.size() + commands > clientOptions.getRequestQueueSize()) { + return new RedisException("Command buffer size exceeded: " + clientOptions.getRequestQueueSize() + + ". Commands are not accepted until the queue size drops."); + } + } + + if (!isConnected() && rejectCommandsWhileDisconnected) { + return new RedisException("Currently not connected. Commands are rejected."); + } + + return null; + } + + private boolean usesBoundedQueues() { + return boundedQueues; + } + + private void writeToBuffer(Iterable> commands) { + + for (RedisCommand command : commands) { + writeToBuffer(command); + } + } + + private void writeToDisconnectedBuffer(Collection> commands) { + for (RedisCommand command : commands) { + writeToDisconnectedBuffer(command); + } + } + + private void writeToDisconnectedBuffer(RedisCommand command) { + + if (connectionError != null) { + if (debugEnabled) { + logger.debug("{} writeToDisconnectedBuffer() Completing command {} due to connection error", logPrefix(), + command); + } + command.completeExceptionally(connectionError); + + return; + } + + if (debugEnabled) { + logger.debug("{} writeToDisconnectedBuffer() buffering (disconnected) command {}", logPrefix(), command); + } + + disconnectedBuffer.add(command); + } + + protected , T> void writeToBuffer(C command) { + + if (debugEnabled) { + logger.debug("{} writeToBuffer() buffering command {}", logPrefix(), command); + } + + if (connectionError != null) { + + if (debugEnabled) { + logger.debug("{} writeToBuffer() Completing command {} due to connection error", logPrefix(), command); + } + command.completeExceptionally(connectionError); + + return; + } + + commandBuffer.add(command); + } + + private void writeToChannelAndFlush(RedisCommand command) { + + QUEUE_SIZE.incrementAndGet(this); + + ChannelFuture channelFuture = channelWriteAndFlush(command); + + if (reliability == Reliability.AT_MOST_ONCE) { + // cancel on exceptions and remove from queue, because there is no housekeeping + channelFuture.addListener(AtMostOnceWriteListener.newInstance(this, command)); + } + + if (reliability == Reliability.AT_LEAST_ONCE) { + // commands are ok to stay within the queue, reconnect will retrigger them + channelFuture.addListener(RetryListener.newInstance(this, command)); + } + } + + private void writeToChannelAndFlush(Collection> commands) { + + QUEUE_SIZE.addAndGet(this, commands.size()); + + if (reliability == Reliability.AT_MOST_ONCE) { + + // cancel on exceptions and remove from queue, because there is no housekeeping + for (RedisCommand command : commands) { + channelWrite(command).addListener(AtMostOnceWriteListener.newInstance(this, command)); + } + } + + if (reliability == Reliability.AT_LEAST_ONCE) { + + // commands are ok to stay within the queue, reconnect will retrigger them + for (RedisCommand command : commands) { + channelWrite(command).addListener(RetryListener.newInstance(this, command)); + } + } + + channelFlush(); + } + + private void channelFlush() { + + if (debugEnabled) { + logger.debug("{} write() channelFlush", logPrefix()); + } + + channel.flush(); + } + + private ChannelFuture channelWrite(RedisCommand command) { + + if (debugEnabled) { + logger.debug("{} write() channelWrite command {}", logPrefix(), command); + } + + return channel.write(command); + } + + private ChannelFuture channelWriteAndFlush(RedisCommand command) { + + if (debugEnabled) { + logger.debug("{} write() writeAndFlush command {}", logPrefix(), command); 
+ } + + return channel.writeAndFlush(command); + } + + @Override + public void notifyChannelActive(Channel channel) { + + this.logPrefix = null; + this.channel = channel; + this.connectionError = null; + + if (isClosed()) { + + logger.info("{} Closing channel because endpoint is already closed", logPrefix()); + channel.close(); + return; + } + + if (connectionWatchdog != null) { + connectionWatchdog.arm(); + } + + sharedLock.doExclusive(() -> { + + try { + // Move queued commands to buffer before issuing any commands because of connection activation. + // That's necessary to prepend queued commands first as some commands might get into the queue + // after the connection was disconnected. They need to be prepended to the command buffer + + if (debugEnabled) { + logger.debug("{} activateEndpointAndExecuteBufferedCommands {} command(s) buffered", logPrefix(), + disconnectedBuffer.size()); + } + + if (debugEnabled) { + logger.debug("{} activating endpoint", logPrefix()); + } + + try { + inActivation = true; + connectionFacade.activated(); + } finally { + inActivation = false; + } + + flushCommands(disconnectedBuffer); + } catch (Exception e) { + + if (debugEnabled) { + logger.debug("{} channelActive() ran into an exception", logPrefix()); + } + + if (clientOptions.isCancelCommandsOnReconnectFailure()) { + reset(); + } + + throw e; + } + }); + } + + @Override + public void notifyChannelInactive(Channel channel) { + + if (isClosed()) { + RedisException closed = new RedisException("Connection closed"); + cancelCommands("Connection closed", drainCommands(), it -> it.completeExceptionally(closed)); + } + + sharedLock.doExclusive(() -> { + + if (debugEnabled) { + logger.debug("{} deactivating endpoint handler", logPrefix()); + } + + connectionFacade.deactivated(); + }); + + if (this.channel == channel) { + this.channel = null; + } + } + + @Override + public void notifyException(Throwable t) { + + if (t instanceof RedisConnectionException && RedisConnectionException.isProtectedMode(t.getMessage())) { + + connectionError = t; + + if (connectionWatchdog != null) { + connectionWatchdog.setListenOnChannelInactive(false); + connectionWatchdog.setReconnectSuspended(false); + } + + doExclusive(this::drainCommands).forEach(cmd -> cmd.completeExceptionally(t)); + } + + if (!isConnected()) { + connectionError = t; + } + } + + @Override + public void registerConnectionWatchdog(ConnectionWatchdog connectionWatchdog) { + this.connectionWatchdog = connectionWatchdog; + } + + @Override + @SuppressWarnings({ "rawtypes", "unchecked" }) + public void flushCommands() { + flushCommands(commandBuffer); + } + + private void flushCommands(Queue> queue) { + + if (debugEnabled) { + logger.debug("{} flushCommands()", logPrefix()); + } + + if (isConnected()) { + + List> commands = sharedLock.doExclusive(() -> { + + if (queue.isEmpty()) { + return Collections.emptyList(); + } + + return drainCommands(queue); + }); + + if (debugEnabled) { + logger.debug("{} flushCommands() Flushing {} commands", logPrefix(), commands.size()); + } + + if (!commands.isEmpty()) { + writeToChannelAndFlush(commands); + } + } + } + + /** + * Close the connection. 
+ */ + @Override + public void close() { + + if (debugEnabled) { + logger.debug("{} close()", logPrefix()); + } + + closeAsync().join(); + } + + @Override + public CompletableFuture closeAsync() { + + if (debugEnabled) { + logger.debug("{} closeAsync()", logPrefix()); + } + + if (isClosed()) { + return closeFuture; + } + + if (STATUS.compareAndSet(this, ST_OPEN, ST_CLOSED)) { + + if (connectionWatchdog != null) { + connectionWatchdog.prepareClose(); + } + + cancelBufferedCommands("Close"); + + Channel channel = getOpenChannel(); + + if (channel != null) { + Futures.adapt(channel.close(), closeFuture); + } else { + closeFuture.complete(null); + } + } + + return closeFuture; + + } + + private Channel getOpenChannel() { + + Channel currentChannel = this.channel; + + if (currentChannel != null) { + return currentChannel; + } + + return null; + } + + /** + * Reset the writer state. Queued commands will be canceled and the internal state will be reset. This is useful when the + * internal state machine gets out of sync with the connection. + */ + @Override + public void reset() { + + if (debugEnabled) { + logger.debug("{} reset()", logPrefix()); + } + + if (channel != null) { + channel.pipeline().fireUserEventTriggered(new ConnectionEvents.Reset()); + } + cancelBufferedCommands("Reset"); + } + + /** + * Reset the command-handler to the initial not-connected state. + */ + public void initialState() { + + commandBuffer.clear(); + + Channel currentChannel = this.channel; + if (currentChannel != null) { + + ChannelFuture close = currentChannel.close(); + if (currentChannel.isOpen()) { + close.syncUninterruptibly(); + } + } + } + + @Override + public void notifyDrainQueuedCommands(HasQueuedCommands queuedCommands) { + + if (isClosed()) { + + RedisException closed = new RedisException("Connection closed"); + cancelCommands(closed.getMessage(), queuedCommands.drainQueue(), it -> it.completeExceptionally(closed)); + cancelCommands(closed.getMessage(), drainCommands(), it -> it.completeExceptionally(closed)); + return; + } else if (reliability == Reliability.AT_MOST_ONCE && rejectCommandsWhileDisconnected) { + + RedisException disconnected = new RedisException("Connection disconnected"); + cancelCommands(disconnected.getMessage(), queuedCommands.drainQueue(), + it -> it.completeExceptionally(disconnected)); + cancelCommands(disconnected.getMessage(), drainCommands(), it -> it.completeExceptionally(disconnected)); + return; + } + + sharedLock.doExclusive(() -> { + + Collection> commands = queuedCommands.drainQueue(); + + if (debugEnabled) { + logger.debug("{} notifyQueuedCommands adding {} command(s) to buffer", logPrefix(), commands.size()); + } + + commands.addAll(drainCommands(disconnectedBuffer)); + + for (RedisCommand command : commands) { + + if (command instanceof DemandAware.Sink) { + ((DemandAware.Sink) command).removeSource(); + } + } + + try { + disconnectedBuffer.addAll(commands); + } catch (RuntimeException e) { + + if (debugEnabled) { + logger.debug("{} notifyQueuedCommands Queue overcommit. Cannot add all commands to buffer (disconnected).", + logPrefix(), commands.size()); + } + commands.removeAll(disconnectedBuffer); + + for (RedisCommand command : commands) { + command.completeExceptionally(e); + } + } + + if (isConnected()) { + flushCommands(disconnectedBuffer); + } + }); + } + + public boolean isClosed() { + return STATUS.get(this) == ST_CLOSED; + } + + /** + * Execute a {@link Supplier} callback guarded by an exclusive lock. 
+ * + * @param supplier + * @param + * @return + */ + protected T doExclusive(Supplier supplier) { + return sharedLock.doExclusive(supplier); + } + + protected List> drainCommands() { + + List> target = new ArrayList<>(disconnectedBuffer.size() + commandBuffer.size()); + + target.addAll(drainCommands(disconnectedBuffer)); + target.addAll(drainCommands(commandBuffer)); + + return target; + } + + /** + * Drain commands from a queue and return only active commands. + * + * @param source the source queue. + * @return List of commands. + */ + private static List> drainCommands(Queue> source) { + + List> target = new ArrayList<>(source.size()); + + RedisCommand cmd; + while ((cmd = source.poll()) != null) { + + if (!cmd.isDone() && !ActivationCommand.isActivationCommand(cmd)) { + target.add(cmd); + } + } + + return target; + } + + private void cancelBufferedCommands(String message) { + cancelCommands(message, doExclusive(this::drainCommands), RedisCommand::cancel); + } + + private void cancelCommands(String message, Iterable> toCancel, + Consumer> commandConsumer) { + + for (RedisCommand cmd : toCancel) { + if (cmd.getOutput() != null) { + cmd.getOutput().setError(message); + } + commandConsumer.accept(cmd); + } + } + + private boolean isConnected() { + + Channel channel = this.channel; + return channel != null && channel.isActive(); + } + + protected String logPrefix() { + + if (logPrefix != null) { + return logPrefix; + } + + String buffer = "[" + ChannelLogDescriptor.logDescriptor(channel) + ", " + "epid=0x" + Long.toHexString(endpointId) + + ']'; + return logPrefix = buffer; + } + + private static boolean isRejectCommand(ClientOptions clientOptions) { + + switch (clientOptions.getDisconnectedBehavior()) { + case REJECT_COMMANDS: + return true; + + case ACCEPT_COMMANDS: + return false; + + default: + case DEFAULT: + if (!clientOptions.isAutoReconnect()) { + return true; + } + + return false; + } + } + + static class ListenerSupport { + + Collection> sentCommands; + RedisCommand sentCommand; + DefaultEndpoint endpoint; + + void dequeue() { + + if (sentCommand != null) { + QUEUE_SIZE.decrementAndGet(endpoint); + } else { + QUEUE_SIZE.addAndGet(endpoint, -sentCommands.size()); + } + } + + protected void complete(Throwable t) { + + if (sentCommand != null) { + sentCommand.completeExceptionally(t); + } else { + for (RedisCommand sentCommand : sentCommands) { + sentCommand.completeExceptionally(t); + } + } + } + } + + static class AtMostOnceWriteListener extends ListenerSupport implements ChannelFutureListener { + + private static final Recycler RECYCLER = new Recycler() { + @Override + protected AtMostOnceWriteListener newObject(Handle handle) { + return new AtMostOnceWriteListener(handle); + } + }; + + private final Recycler.Handle handle; + + AtMostOnceWriteListener(Recycler.Handle handle) { + this.handle = handle; + } + + static AtMostOnceWriteListener newInstance(DefaultEndpoint endpoint, RedisCommand command) { + + AtMostOnceWriteListener entry = RECYCLER.get(); + + entry.endpoint = endpoint; + entry.sentCommand = command; + + return entry; + } + + static AtMostOnceWriteListener newInstance(DefaultEndpoint endpoint, + Collection> commands) { + + AtMostOnceWriteListener entry = RECYCLER.get(); + + entry.endpoint = endpoint; + entry.sentCommands = commands; + + return entry; + } + + @Override + public void operationComplete(ChannelFuture future) { + + try { + + dequeue(); + + if (!future.isSuccess() && future.cause() != null) { + complete(future.cause()); + } + } finally { + recycle(); + } + } 
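+ // Null out the captured endpoint and command references, then return this pooled listener instance to Netty's Recycler so write listeners are reused instead of allocated per write.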
+ + private void recycle() { + + this.endpoint = null; + this.sentCommand = null; + this.sentCommands = null; + + handle.recycle(this); + } + } + + /** + * A generic future listener which retries unsuccessful writes. + */ + static class RetryListener extends ListenerSupport implements GenericFutureListener> { + + private static final Recycler RECYCLER = new Recycler() { + @Override + protected RetryListener newObject(Handle handle) { + return new RetryListener(handle); + } + }; + + private final Recycler.Handle handle; + + RetryListener(Recycler.Handle handle) { + this.handle = handle; + } + + static RetryListener newInstance(DefaultEndpoint endpoint, RedisCommand command) { + + RetryListener entry = RECYCLER.get(); + + entry.endpoint = endpoint; + entry.sentCommand = command; + + return entry; + } + + static RetryListener newInstance(DefaultEndpoint endpoint, Collection> commands) { + + RetryListener entry = RECYCLER.get(); + + entry.endpoint = endpoint; + entry.sentCommands = commands; + + return entry; + } + + @SuppressWarnings("unchecked") + @Override + public void operationComplete(Future future) { + + try { + doComplete(future); + } finally { + recycle(); + } + } + + private void doComplete(Future future) { + + Throwable cause = future.cause(); + + boolean success = future.isSuccess(); + dequeue(); + + if (success) { + return; + } + + if (cause instanceof EncoderException || cause instanceof Error || cause.getCause() instanceof Error) { + complete(cause); + return; + } + + Channel channel = endpoint.channel; + + // Capture values before recycler clears these. + RedisCommand sentCommand = this.sentCommand; + Collection> sentCommands = this.sentCommands; + potentiallyRequeueCommands(channel, sentCommand, sentCommands); + + if (!(cause instanceof ClosedChannelException)) { + + String message = "Unexpected exception during request: {}"; + InternalLogLevel logLevel = InternalLogLevel.WARN; + + if (cause instanceof IOException && SUPPRESS_IO_EXCEPTION_MESSAGES.contains(cause.getMessage())) { + logLevel = InternalLogLevel.DEBUG; + } + + logger.log(logLevel, message, cause.toString(), cause); + } + } + + /** + * Requeue command/commands + * + * @param channel + * @param sentCommand + * @param sentCommands + */ + private void potentiallyRequeueCommands(Channel channel, RedisCommand sentCommand, + Collection> sentCommands) { + + if (sentCommand != null && sentCommand.isDone()) { + return; + } + + if (sentCommands != null) { + + boolean foundToSend = false; + + for (RedisCommand command : sentCommands) { + if (!command.isDone()) { + foundToSend = true; + break; + } + } + + if (!foundToSend) { + return; + } + } + + if (channel != null) { + DefaultEndpoint endpoint = this.endpoint; + channel.eventLoop().submit(() -> { + requeueCommands(sentCommand, sentCommands, endpoint); + }); + } else { + requeueCommands(sentCommand, sentCommands, endpoint); + } + } + + @SuppressWarnings({ "unchecked", "rawtypes" }) + private void requeueCommands(RedisCommand sentCommand, + Collection> sentCommands, DefaultEndpoint endpoint) { + + if (sentCommand != null) { + try { + endpoint.write(sentCommand); + } catch (Exception e) { + sentCommand.completeExceptionally(e); + } + } else { + try { + endpoint.write((Collection) sentCommands); + } catch (Exception e) { + for (RedisCommand command : sentCommands) { + command.completeExceptionally(e); + } + } + } + } + + private void recycle() { + + this.endpoint = null; + this.sentCommand = null; + this.sentCommands = null; + + handle.recycle(this); + } + } + + private enum 
Reliability { + AT_MOST_ONCE, AT_LEAST_ONCE + } + + static class ActivationCommand extends CommandWrapper { + + public ActivationCommand(RedisCommand command) { + super(command); + } + + public static boolean isActivationCommand(RedisCommand command) { + + if (command instanceof ActivationCommand) { + return true; + } + + while (command instanceof CommandWrapper) { + command = ((CommandWrapper) command).getDelegate(); + + if (command instanceof ActivationCommand) { + return true; + } + } + + return false; + } + } +} diff --git a/src/main/java/io/lettuce/core/protocol/DemandAware.java b/src/main/java/io/lettuce/core/protocol/DemandAware.java new file mode 100644 index 0000000000..8befd98630 --- /dev/null +++ b/src/main/java/io/lettuce/core/protocol/DemandAware.java @@ -0,0 +1,72 @@ +/* + * Copyright 2016-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.protocol; + +/** + * Interface for demand-aware components. + *

    + * A demand-aware component is aware of its demand for data that is read from the {@link Source} and possibly awaits processing. + * A {@link Sink} with demand is ready to process data. A {@link Sink} without demand signals that it's ability to keep up with + * the incoming data is no longer given and it wishes to receive no more data. Submitting more data could cause overload and + * exhaust buffer space. + * + * @author Mark Paluch + * @since 5.0 + */ +public interface DemandAware { + + /** + * A demand-aware {@link Sink} that accepts data. It can signal its {@link Source} demand/readiness to emit more data. + * Instances of implementing classes are required to be thread-safe as they are shared amongst multiple threads. + */ + interface Sink { + + /** + * Returns {@literal true} if the {@link Sink} has demand or {@literal false} if the source has no demand. + * {@literal false} means either the {@link Sink} has no demand in general because data is not needed or the current + * demand is saturated. + * + * @return {@literal true} if the {@link Sink} demands data. + */ + boolean hasDemand(); + + /** + * Sets the {@link Source} for a {@link Sink}. The {@link Sink} is notified by this {@link Source} if the source + * indicates new demand or the sink catches up so it's ready to receive more data. + * + * @param source the reference to the data {@link Source}, must not be {@literal null}. + */ + void setSource(Source source); + + /** + * Removes the {@link Source} reference from this {@link Sink}. Any previously set {@link Source} will no longer be + * asked for data. + */ + void removeSource(); + } + + /** + * A {@link Source} provides data to a {@link DemandAware} and can be notified to produce more input for the command. + * Instances of implementing classes are required to be thread-safe as they are shared amongst multiple threads. + */ + interface Source { + + /** + * Signals demand to the {@link Source} + */ + void requestMore(); + } +} diff --git a/src/main/java/io/lettuce/core/protocol/Endpoint.java b/src/main/java/io/lettuce/core/protocol/Endpoint.java new file mode 100644 index 0000000000..ea03ba8780 --- /dev/null +++ b/src/main/java/io/lettuce/core/protocol/Endpoint.java @@ -0,0 +1,70 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.protocol; + +import io.netty.channel.Channel; + +/** + * Wraps a stateful {@link Endpoint} that abstracts the underlying channel. Endpoints may be connected, disconnected and in + * closed states. Endpoints may feature reconnection capabilities with replaying queued commands. + * + * @author Mark Paluch + */ +public interface Endpoint { + + /** + * Reset this endpoint to its initial state, clear all buffers and potentially close the bound channel. + * + * @since 5.1 + */ + void initialState(); + + /** + * Notify about channel activation. 
+ * + * @param channel the channel + */ + void notifyChannelActive(Channel channel); + + /** + * Notify about channel deactivation. + * + * @param channel the channel + */ + void notifyChannelInactive(Channel channel); + + /** + * Notify about an exception occured in channel/command processing + * + * @param t the Exception + */ + void notifyException(Throwable t); + + /** + * Signal the endpoint to drain queued commands from the queue holder. + * + * @param queuedCommands the queue holder. + */ + void notifyDrainQueuedCommands(HasQueuedCommands queuedCommands); + + /** + * Associate a {@link ConnectionWatchdog} with the {@link Endpoint}. + * + * @param connectionWatchdog the connection watchdog. + */ + void registerConnectionWatchdog(ConnectionWatchdog connectionWatchdog); + +} diff --git a/src/main/java/io/lettuce/core/protocol/HasQueuedCommands.java b/src/main/java/io/lettuce/core/protocol/HasQueuedCommands.java new file mode 100644 index 0000000000..6db4327481 --- /dev/null +++ b/src/main/java/io/lettuce/core/protocol/HasQueuedCommands.java @@ -0,0 +1,29 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.protocol; + +import java.util.Collection; + +/** + * Interface to be implemented by classes that queue commands. Implementors of this class need to expose their queue to control + * all queues by maintenance and cleanup processes. + * + * @author Mark Paluch + */ +interface HasQueuedCommands { + + Collection> drainQueue(); +} diff --git a/src/main/java/io/lettuce/core/protocol/LatencyMeteredCommand.java b/src/main/java/io/lettuce/core/protocol/LatencyMeteredCommand.java new file mode 100644 index 0000000000..543652640f --- /dev/null +++ b/src/main/java/io/lettuce/core/protocol/LatencyMeteredCommand.java @@ -0,0 +1,65 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.protocol; + +/** + * {@link CommandWrapper} implementation to track {@link WithLatency command latency}. 
+ * + * @author Mark Paluch + * @since 4.4 + */ +class LatencyMeteredCommand extends CommandWrapper implements WithLatency { + + private long sentNs = -1; + private long firstResponseNs = -1; + private long completedNs = -1; + + public LatencyMeteredCommand(RedisCommand command) { + super(command); + } + + @Override + public void sent(long timeNs) { + sentNs = timeNs; + firstResponseNs = -1; + completedNs = -1; + } + + @Override + public void firstResponse(long timeNs) { + firstResponseNs = timeNs; + } + + @Override + public void completed(long timeNs) { + completedNs = timeNs; + } + + @Override + public long getSent() { + return sentNs; + } + + @Override + public long getFirstResponse() { + return firstResponseNs; + } + + @Override + public long getCompleted() { + return completedNs; + } +} diff --git a/src/main/java/io/lettuce/core/protocol/PristineFallbackCommand.java b/src/main/java/io/lettuce/core/protocol/PristineFallbackCommand.java new file mode 100644 index 0000000000..ce1ff5b32f --- /dev/null +++ b/src/main/java/io/lettuce/core/protocol/PristineFallbackCommand.java @@ -0,0 +1,106 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.protocol; + +import java.nio.ByteBuffer; +import java.util.ArrayList; +import java.util.List; + +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.output.CommandOutput; +import io.netty.buffer.ByteBuf; + +/** + * Generic fallback command to collect arbitrary Redis responses in a {@link List} represented as String. Used as buffer when + * received a Redis response without a command to correlate. + * + * @author Mark Paluch + * @since 4.5 + */ +class PristineFallbackCommand implements RedisCommand> { + + private final CommandOutput> output; + private volatile boolean complete; + + PristineFallbackCommand() { + this.output = new FallbackOutput(); + } + + @Override + public CommandOutput> getOutput() { + return output; + } + + @Override + public void complete() { + complete = true; + } + + @Override + public void cancel() { + complete = true; + } + + @Override + public CommandArgs getArgs() { + return null; + } + + @Override + public boolean completeExceptionally(Throwable throwable) { + return false; + } + + @Override + public ProtocolKeyword getType() { + return null; + } + + @Override + public void encode(ByteBuf buf) { + } + + @Override + public boolean isCancelled() { + return false; + } + + @Override + public boolean isDone() { + return complete; + } + + @Override + public void setOutput(CommandOutput> output) { + } + + static class FallbackOutput extends CommandOutput> { + + FallbackOutput() { + super(StringCodec.ASCII, new ArrayList<>()); + } + + @Override + public void set(ByteBuffer bytes) { + output.add(bytes != null ? 
codec.decodeKey(bytes) : null); + } + + @Override + public void set(long integer) { + output.add("" + integer); + } + } +} diff --git a/src/main/java/io/lettuce/core/protocol/ProtocolKeyword.java b/src/main/java/io/lettuce/core/protocol/ProtocolKeyword.java new file mode 100644 index 0000000000..67467c90d0 --- /dev/null +++ b/src/main/java/io/lettuce/core/protocol/ProtocolKeyword.java @@ -0,0 +1,36 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.protocol; + +/** + * Interface for protocol keywords providing an encoded representation. + * + * @author Mark Paluch + */ +public interface ProtocolKeyword { + + /** + * + * @return byte[] encoded representation. + */ + byte[] getBytes(); + + /** + * + * @return name of the command. + */ + String name(); +} diff --git a/src/main/java/io/lettuce/core/protocol/ProtocolVersion.java b/src/main/java/io/lettuce/core/protocol/ProtocolVersion.java new file mode 100644 index 0000000000..49b8cdb9bd --- /dev/null +++ b/src/main/java/io/lettuce/core/protocol/ProtocolVersion.java @@ -0,0 +1,44 @@ +/* + * Copyright 2019-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.protocol; + +/** + * Versions of the native protocol supported by the driver. + * + * @author Mark Paluch + * @since 6.0 + */ +public enum ProtocolVersion { + + /** + * Redis 2 to Redis 5. + */ + RESP2, + + /** + * Redis 6. + */ + RESP3; + + /** + * Returns the newest supported protocol version. + * + * @return the newest supported protocol version. + */ + public static ProtocolVersion newestSupported() { + return RESP3; + } +} diff --git a/src/main/java/io/lettuce/core/protocol/ReconnectionHandler.java b/src/main/java/io/lettuce/core/protocol/ReconnectionHandler.java new file mode 100644 index 0000000000..985e9ace88 --- /dev/null +++ b/src/main/java/io/lettuce/core/protocol/ReconnectionHandler.java @@ -0,0 +1,203 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.protocol; + +import java.net.ConnectException; +import java.net.SocketAddress; +import java.util.Set; +import java.util.concurrent.CancellationException; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.ExecutorService; +import java.util.concurrent.TimeoutException; + +import reactor.core.publisher.Mono; +import reactor.util.function.Tuple2; +import reactor.util.function.Tuples; +import io.lettuce.core.ClientOptions; +import io.lettuce.core.RedisCommandTimeoutException; +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.internal.LettuceSets; +import io.netty.bootstrap.Bootstrap; +import io.netty.channel.Channel; +import io.netty.channel.ChannelFuture; +import io.netty.util.Timer; +import io.netty.util.internal.logging.InternalLogger; +import io.netty.util.internal.logging.InternalLoggerFactory; + +/** + * @author Mark Paluch + */ +class ReconnectionHandler { + + private static final InternalLogger logger = InternalLoggerFactory.getInstance(ReconnectionHandler.class); + + private static final Set> EXECUTION_EXCEPTION_TYPES = LettuceSets.unmodifiableSet(TimeoutException.class, + CancellationException.class, RedisCommandTimeoutException.class, ConnectException.class); + + private final ClientOptions clientOptions; + private final Bootstrap bootstrap; + private final Mono socketAddressSupplier; + private final ConnectionFacade connectionFacade; + + private volatile CompletableFuture currentFuture; + private volatile boolean reconnectSuspended; + + ReconnectionHandler(ClientOptions clientOptions, Bootstrap bootstrap, Mono socketAddressSupplier, + Timer timer, ExecutorService reconnectWorkers, ConnectionFacade connectionFacade) { + + LettuceAssert.notNull(socketAddressSupplier, "SocketAddressSupplier must not be null"); + LettuceAssert.notNull(bootstrap, "Bootstrap must not be null"); + LettuceAssert.notNull(timer, "Timer must not be null"); + LettuceAssert.notNull(reconnectWorkers, "ExecutorService must not be null"); + LettuceAssert.notNull(connectionFacade, "ConnectionFacade must not be null"); + + this.socketAddressSupplier = socketAddressSupplier; + this.bootstrap = bootstrap; + this.clientOptions = clientOptions; + this.connectionFacade = connectionFacade; + } + + /** + * Initiate reconnect and return a {@link ChannelFuture} for synchronization. The resulting future either succeeds or fails. + * It can be {@link ChannelFuture#cancel(boolean) canceled} to interrupt reconnection and channel initialization. A failed + * {@link ChannelFuture} will close the channel. + * + * @return reconnect {@link ChannelFuture}. 
+ */ + protected Tuple2, CompletableFuture> reconnect() { + + CompletableFuture future = new CompletableFuture<>(); + CompletableFuture address = new CompletableFuture<>(); + + socketAddressSupplier.subscribe(remoteAddress -> { + + address.complete(remoteAddress); + + if (future.isCancelled()) { + return; + } + + reconnect0(future, remoteAddress); + + }, ex -> { + if (!address.isDone()) { + address.completeExceptionally(ex); + } + future.completeExceptionally(ex); + }); + + this.currentFuture = future; + return Tuples.of(future, address); + } + + private void reconnect0(CompletableFuture result, SocketAddress remoteAddress) { + + ChannelFuture connectFuture = bootstrap.connect(remoteAddress); + + logger.debug("Reconnecting to Redis at {}", remoteAddress); + + result.whenComplete((c, t) -> { + + if (t instanceof CancellationException) { + connectFuture.cancel(true); + } + }); + + connectFuture.addListener(future -> { + + if (!future.isSuccess()) { + result.completeExceptionally(future.cause()); + return; + } + + RedisHandshakeHandler handshakeHandler = connectFuture.channel().pipeline().get(RedisHandshakeHandler.class); + + if (handshakeHandler == null) { + result.completeExceptionally(new IllegalStateException("RedisHandshakeHandler not registered")); + return; + } + + handshakeHandler.channelInitialized().whenComplete((success, throwable) -> { + + if (throwable != null) { + + if (isExecutionException(throwable)) { + result.completeExceptionally(throwable); + return; + } + + if (clientOptions.isCancelCommandsOnReconnectFailure()) { + connectionFacade.reset(); + } + + if (clientOptions.isSuspendReconnectOnProtocolFailure()) { + + logger.error("Disabling autoReconnect due to initialization failure", throwable); + setReconnectSuspended(true); + } + + result.completeExceptionally(throwable); + return; + } + + if (logger.isDebugEnabled()) { + logger.info("Reconnected to {}, Channel {}", remoteAddress, + ChannelLogDescriptor.logDescriptor(connectFuture.channel())); + } else { + logger.info("Reconnected to {}", remoteAddress); + } + + result.complete(connectFuture.channel()); + }); + + }); + } + + boolean isReconnectSuspended() { + return reconnectSuspended; + } + + void setReconnectSuspended(boolean reconnectSuspended) { + this.reconnectSuspended = reconnectSuspended; + } + + void prepareClose() { + + CompletableFuture currentFuture = this.currentFuture; + if (currentFuture != null && !currentFuture.isDone()) { + currentFuture.cancel(true); + } + } + + /** + * @param throwable + * @return {@literal true} if {@code throwable} is an execution {@link Exception}. + */ + public static boolean isExecutionException(Throwable throwable) { + + for (Class type : EXECUTION_EXCEPTION_TYPES) { + if (type.isAssignableFrom(throwable.getClass())) { + return true; + } + } + + return false; + } + + ClientOptions getClientOptions() { + return clientOptions; + } +} diff --git a/src/main/java/io/lettuce/core/protocol/ReconnectionListener.java b/src/main/java/io/lettuce/core/protocol/ReconnectionListener.java new file mode 100644 index 0000000000..4eacf03c58 --- /dev/null +++ b/src/main/java/io/lettuce/core/protocol/ReconnectionListener.java @@ -0,0 +1,42 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.protocol; + +import io.lettuce.core.ConnectionEvents; + +/** + * Listener for reconnection events. + * + * @author Mark Paluch + * @since 4.2 + */ +public interface ReconnectionListener { + + ReconnectionListener NO_OP = new ReconnectionListener() { + @Override + public void onReconnectAttempt(ConnectionEvents.Reconnect reconnect) { + + } + }; + + /** + * Listener method notified on a reconnection attempt. + * + * @param reconnect the event payload. + */ + void onReconnectAttempt(ConnectionEvents.Reconnect reconnect); + +} diff --git a/src/main/java/io/lettuce/core/protocol/RedisCommand.java b/src/main/java/io/lettuce/core/protocol/RedisCommand.java new file mode 100644 index 0000000000..00987e9f6f --- /dev/null +++ b/src/main/java/io/lettuce/core/protocol/RedisCommand.java @@ -0,0 +1,98 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.protocol; + +import io.lettuce.core.output.CommandOutput; +import io.netty.buffer.ByteBuf; + +/** + * A redis command that holds an output, arguments and a state, whether it is completed or not. + * + * Commands can be wrapped. Outer commands have to notify inner commands but inner commands do not communicate with outer + * commands. + * + * @author Mark Paluch + * @param Key type. + * @param Value type. + * @param Output type. + * @since 3.0 + */ +public interface RedisCommand { + + /** + * The command output. Can be null. + * + * @return the command output. + */ + CommandOutput getOutput(); + + /** + * Complete a command. + */ + void complete(); + + /** + * Cancel a command. + */ + void cancel(); + + /** + * + * @return the current command args + */ + CommandArgs getArgs(); + + /** + * + * @param throwable the exception + * @return {@code true} if this invocation caused this CompletableFuture to transition to a completed state, else + * {@code false} + */ + boolean completeExceptionally(Throwable throwable); + + /** + * + * @return the Redis command type like {@literal SADD}, {@literal HMSET}, {@literal QUIT}. + */ + ProtocolKeyword getType(); + + /** + * Encode the command. + * + * @param buf byte buffer to operate on. + */ + void encode(ByteBuf buf); + + /** + * + * @return true if the command is cancelled. + */ + boolean isCancelled(); + + /** + * + * @return true if the command is completed. + */ + boolean isDone(); + + /** + * Set a new output. Only possible as long as the command is not completed/cancelled. 
+ * + * @param output the new command output + * @throws IllegalStateException if the command is cancelled/completed + */ + void setOutput(CommandOutput output); +} diff --git a/src/main/java/io/lettuce/core/protocol/RedisHandshakeHandler.java b/src/main/java/io/lettuce/core/protocol/RedisHandshakeHandler.java new file mode 100644 index 0000000000..89cd61bf7e --- /dev/null +++ b/src/main/java/io/lettuce/core/protocol/RedisHandshakeHandler.java @@ -0,0 +1,140 @@ +/* + * Copyright 2019-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.protocol; + +import java.time.Duration; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.CompletionStage; +import java.util.concurrent.TimeUnit; + +import io.lettuce.core.internal.ExceptionFactory; +import io.lettuce.core.RedisConnectionException; +import io.lettuce.core.resource.ClientResources; +import io.netty.channel.ChannelHandlerContext; +import io.netty.channel.ChannelInboundHandlerAdapter; +import io.netty.util.Timeout; + +/** + * Handler to initialize a Redis Connection using a {@link ConnectionInitializer}. + * + * @author Mark Paluch + * @since 6.0 + */ +public class RedisHandshakeHandler extends ChannelInboundHandlerAdapter { + + private final ConnectionInitializer connectionInitializer; + private final ClientResources clientResources; + private final Duration initializeTimeout; + + private final CompletableFuture handshakeFuture = new CompletableFuture<>(); + + public RedisHandshakeHandler(ConnectionInitializer connectionInitializer, ClientResources clientResources, + Duration initializeTimeout) { + this.connectionInitializer = connectionInitializer; + this.clientResources = clientResources; + this.initializeTimeout = initializeTimeout; + } + + @Override + public void channelRegistered(ChannelHandlerContext ctx) throws Exception { + + Runnable timeoutGuard = () -> { + + if (handshakeFuture.isDone()) { + return; + } + + fail(ctx, ExceptionFactory.createTimeoutException("Connection initialization timed out", initializeTimeout)); + }; + + Timeout timeoutHandle = clientResources.timer().newTimeout(t -> { + + if (clientResources.eventExecutorGroup().isShuttingDown()) { + timeoutGuard.run(); + return; + } + + clientResources.eventExecutorGroup().submit(timeoutGuard); + }, initializeTimeout.toNanos(), TimeUnit.NANOSECONDS); + + handshakeFuture.thenAccept(ignore -> { + timeoutHandle.cancel(); + }); + + super.channelRegistered(ctx); + } + + @Override + public void channelInactive(ChannelHandlerContext ctx) throws Exception { + + if (!handshakeFuture.isDone()) { + fail(ctx, new RedisConnectionException("Connection closed prematurely")); + } + + super.channelInactive(ctx); + } + + @Override + public void channelActive(ChannelHandlerContext ctx) { + + CompletionStage future = connectionInitializer.initialize(ctx.channel()); + + future.whenComplete((ignore, throwable) -> { + + if (throwable != null) { + fail(ctx, throwable); + } else { + 
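+ // Handshake succeeded: propagate channelActive further down the pipeline and complete the handshake future so callers waiting on channelInitialized() are unblocked.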
ctx.fireChannelActive(); + succeed(); + } + }); + } + + @Override + public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception { + + if (!handshakeFuture.isDone()) { + fail(ctx, cause); + } + + super.exceptionCaught(ctx, cause); + } + + /** + * Complete the handshake future successfully. + */ + protected void succeed() { + handshakeFuture.complete(null); + } + + /** + * Complete the handshake future with an error and close the channel.. + */ + protected void fail(ChannelHandlerContext ctx, Throwable cause) { + + ctx.close().addListener(closeFuture -> { + handshakeFuture.completeExceptionally(cause); + }); + } + + /** + * @return future to synchronize channel initialization. Returns a new future for every reconnect. + */ + public CompletionStage channelInitialized() { + return handshakeFuture; + } + +} diff --git a/src/main/java/io/lettuce/core/protocol/RedisProtocolException.java b/src/main/java/io/lettuce/core/protocol/RedisProtocolException.java new file mode 100644 index 0000000000..801717a278 --- /dev/null +++ b/src/main/java/io/lettuce/core/protocol/RedisProtocolException.java @@ -0,0 +1,31 @@ +/* + * Copyright 2019-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.protocol; + +import io.lettuce.core.RedisException; + +/** + * Exception thrown on Redis protocol failures. + * + * @author Mark Paluch + * @since 6.0 + */ +public class RedisProtocolException extends RedisException { + + public RedisProtocolException(String msg) { + super(msg); + } +} diff --git a/src/main/java/io/lettuce/core/protocol/RedisStateMachine.java b/src/main/java/io/lettuce/core/protocol/RedisStateMachine.java new file mode 100644 index 0000000000..4528a2b9fa --- /dev/null +++ b/src/main/java/io/lettuce/core/protocol/RedisStateMachine.java @@ -0,0 +1,843 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.protocol; + +import static io.lettuce.core.protocol.RedisStateMachine.State.Type.*; + +import java.nio.ByteBuffer; +import java.nio.charset.StandardCharsets; +import java.util.Arrays; +import java.util.concurrent.atomic.AtomicBoolean; + +import io.lettuce.core.LettuceStrings; +import io.lettuce.core.output.CommandOutput; +import io.netty.buffer.ByteBuf; +import io.netty.buffer.ByteBufAllocator; +import io.netty.util.ByteProcessor; +import io.netty.util.internal.logging.InternalLogger; +import io.netty.util.internal.logging.InternalLoggerFactory; + +/** + * State machine that decodes redis server responses encoded according to the Unified + * Request Protocol (RESP). Supports RESP2 and RESP3. Initialized with protocol discovery. + * + * @author Will Glozer + * @author Mark Paluch + * @author Helly Guo + */ +public class RedisStateMachine { + + private static final InternalLogger logger = InternalLoggerFactory.getInstance(RedisStateMachine.class); + private static final ByteBuffer QUEUED = StandardCharsets.US_ASCII.encode("QUEUED"); + private static final int TERMINATOR_LENGTH = 2; + private static final int NOT_FOUND = -1; + + static class State { + enum Type { + + /** + * First byte: {@code +}. + */ + SINGLE, + + /** + * First byte: {@code +}. + */ + ERROR, + + /** + * First byte: {@code :}. + */ + INTEGER, + + /** + * First byte: {@code ,}. + * + * @since 6.0/RESP3 + */ + FLOAT, + + /** + * First byte: {@code #}. + * + * @since 6.0/RESP3 + */ + BOOLEAN, + + /** + * First byte: {@code !}. + * + * @since 6.0/RESP3 + */ + BULK_ERROR, + + /** + * First byte: {@code =}. + * + * @since 6.0/RESP3 + */ + VERBATIM, VERBATIM_STRING, + + /** + * First byte: {@code (}. + * + * @since 6.0/RESP3 + */ + BIG_NUMBER, + + /** + * First byte: {@code %}. + * + * @see #HELLO_V3 + * @since 6.0/RESP3 + */ + MAP, + + /** + * First byte: {@code ~}. + * + * @see #MULTI + * @since 6.0/RESP3 + */ + SET, + + /** + * First byte: {@code |}. + * + * @since 6.0/RESP3 + */ + ATTRIBUTE, + + /** + * First byte: {@code >}. + * + * @see #MULTI + * @since 6.0/RESP3 + */ + PUSH, + + /** + * First byte: {@code @}. + * + * @see #MAP + * @since 6.0/RESP3 + */ + HELLO_V3, + + /** + * First byte: {@code _}. + * + * @since 6.0/RESP3 + */ + NULL, + + /** + * First byte: {@code $}. + */ + BULK, + + /** + * First byte: {@code *}. + * + * @see #SET + * @see #MAP + */ + MULTI, BYTES + } + + Type type = null; + int count = NOT_FOUND; + + @Override + public String toString() { + final StringBuffer sb = new StringBuffer(); + sb.append(getClass().getSimpleName()); + sb.append(" [type=").append(type); + sb.append(", count=").append(count); + sb.append(']'); + return sb.toString(); + } + } + + private final State[] stack = new State[32]; + private final boolean debugEnabled = logger.isDebugEnabled(); + private final ByteBuf responseElementBuffer; + private final AtomicBoolean closed = new AtomicBoolean(); + private final Resp2LongProcessor longProcessor = new Resp2LongProcessor(); + + private ProtocolVersion protocolVersion = null; + private int stackElements; + + /** + * Initialize a new instance. 
+ */ + public RedisStateMachine(ByteBufAllocator alloc) { + this.responseElementBuffer = alloc.buffer(1024); + } + + public boolean isDiscoverProtocol() { + return this.protocolVersion == null; + } + + public ProtocolVersion getProtocolVersion() { + return protocolVersion; + } + + public void setProtocolVersion(ProtocolVersion protocolVersion) { + this.protocolVersion = protocolVersion; + } + + /** + * Decode a command using the input buffer. + * + * @param buffer Buffer containing data from the server. + * @param output Current command output. + * @return true if a complete response was read. + */ + public boolean decode(ByteBuf buffer, CommandOutput output) { + return decode(buffer, null, output); + } + + /** + * Attempt to decode a redis response and return a flag indicating whether a complete response was read. + * + * @param buffer Buffer containing data from the server. + * @param command the command itself TODO: Change to Consumer + * @param output Current command output. + * @return true if a complete response was read. + */ + public boolean decode(ByteBuf buffer, RedisCommand command, CommandOutput output) { + + int length, end; + ByteBuffer bytes; + + buffer.touch("RedisStateMachine.decode(…)"); + if (debugEnabled) { + logger.debug("Decode {}", command); + } + + if (isEmpty(stack)) { + add(stack, new State()); + } + + if (output == null) { + return isEmpty(stack); + } + + boolean resp3Indicator = false; + + loop: + + while (!isEmpty(stack)) { + State state = peek(stack); + + if (state.type == null) { + if (!buffer.isReadable()) { + break; + } + state.type = readReplyType(buffer); + + if (state.type == HELLO_V3 || state.type == MAP) { + resp3Indicator = true; + } + + buffer.markReaderIndex(); + } + + switch (state.type) { + case SINGLE: + if ((bytes = readLine(buffer)) == null) { + break loop; + } + + if (!QUEUED.equals(bytes)) { + safeSetSingle(output, bytes, command); + } + break; + + case BIG_NUMBER: + if ((bytes = readLine(buffer)) == null) { + break loop; + } + + safeSetBigNumber(output, bytes, command); + break; + case ERROR: + if ((bytes = readLine(buffer)) == null) { + break loop; + } + safeSetError(output, bytes, command); + break; + case NULL: + if ((bytes = readLine(buffer)) == null) { + break loop; + } + safeSet(output, null, command); + break; + case INTEGER: + if ((end = findLineEnd(buffer)) == NOT_FOUND) { + break loop; + } + long integer = readLong(buffer, buffer.readerIndex(), end); + safeSet(output, integer, command); + break; + case BOOLEAN: + if ((end = findLineEnd(buffer)) == NOT_FOUND) { + break loop; + } + boolean value = readBoolean(buffer); + safeSet(output, value, command); + break; + case FLOAT: + if ((end = findLineEnd(buffer)) == NOT_FOUND) { + break loop; + } + double f = readFloat(buffer, buffer.readerIndex(), end); + safeSet(output, f, command); + break; + case BULK: + case VERBATIM: + if ((end = findLineEnd(buffer)) == NOT_FOUND) { + break loop; + } + length = (int) readLong(buffer, buffer.readerIndex(), end); + if (length == NOT_FOUND) { + safeSet(output, null, command); + } else { + state.type = state.type == VERBATIM ? 
VERBATIM_STRING : BYTES; + state.count = length + TERMINATOR_LENGTH; + buffer.markReaderIndex(); + continue loop; + } + break; + case BULK_ERROR: + if ((end = findLineEnd(buffer)) == NOT_FOUND) { + break loop; + } + length = (int) readLong(buffer, buffer.readerIndex(), end); + if (length == NOT_FOUND) { + safeSetError(output, null, command); + } else { + state.type = BYTES; + state.count = length + TERMINATOR_LENGTH; + buffer.markReaderIndex(); + continue loop; + } + break; + case HELLO_V3: + case PUSH: + case MULTI: + case SET: + case MAP: + + if (state.count == NOT_FOUND) { + if ((end = findLineEnd(buffer)) == NOT_FOUND) { + break loop; + } + length = (int) readLong(buffer, buffer.readerIndex(), end); + state.count = length; + buffer.markReaderIndex(); + + switch (state.type) { + case MULTI: + case PUSH: + safeMultiArray(output, state.count, command); + break; + case MAP: + safeMultiMap(output, state.count, command); + state.count = length * 2; + break; + case SET: + safeMultiSet(output, state.count, command); + break; + } + } + + if (state.count <= 0) { + break; + } + + state.count--; + addFirst(stack, new State()); + + continue loop; + + case VERBATIM_STRING: + if ((bytes = readBytes(buffer, state.count)) == null) { + break loop; + } + // skip txt: and mkd: + bytes.position(bytes.position() + 4); + safeSet(output, bytes, command); + break; + case BYTES: + if ((bytes = readBytes(buffer, state.count)) == null) { + break loop; + } + safeSet(output, bytes, command); + break; + case ATTRIBUTE: + throw new RedisProtocolException("Not implemented"); + default: + throw new RedisProtocolException("State " + state.type + " not supported"); + } + + buffer.markReaderIndex(); + remove(stack); + + output.complete(size(stack)); + } + + if (debugEnabled) { + logger.debug("Decoded {}, empty stack: {}", command, isEmpty(stack)); + } + + if (isDiscoverProtocol()) { + if (resp3Indicator) { + setProtocolVersion(ProtocolVersion.RESP3); + } else { + setProtocolVersion(ProtocolVersion.RESP2); + } + } + + return isEmpty(stack); + } + + /** + * Reset the state machine. + */ + public void reset() { + Arrays.fill(stack, null); + stackElements = 0; + } + + /** + * Close the state machine to free resources. + */ + public void close() { + if (closed.compareAndSet(false, true)) { + responseElementBuffer.release(); + } + } + + private int findLineEnd(ByteBuf buffer) { + + int index = buffer.forEachByte(ByteProcessor.FIND_LF); + return (index > 0 && buffer.getByte(index - 1) == '\r') ? 
index - 1 : NOT_FOUND; + } + + private State.Type readReplyType(ByteBuf buffer) { + return getType(buffer.readerIndex(), buffer.readByte()); + } + + private State.Type getType(int index, byte b) { + + switch (b) { + case '+': + return SINGLE; + case '-': + return ERROR; + case ':': + return INTEGER; + case ',': + return FLOAT; + case '#': + return BOOLEAN; + case '=': + return VERBATIM; + case '(': + return BIG_NUMBER; + case '%': + return MAP; + case '~': + return SET; + case '|': + return ATTRIBUTE; + case '@': + return HELLO_V3; + case '$': + return BULK; + case '*': + return MULTI; + case '>': + return PUSH; + case '_': + return NULL; + default: + throw new RedisProtocolException("Invalid first byte: " + b + " (" + new String(new byte[] { b }) + ")" + + " at buffer index " + index + " decoding using " + getProtocolVersion()); + } + } + + private long readLong(ByteBuf buffer, int start, int end) { + return longProcessor.getValue(buffer, start, end); + } + + private double readFloat(ByteBuf buffer, int start, int end) { + + int valueLength = end - start; + String value = buffer.toString(start, valueLength, StandardCharsets.US_ASCII); + + buffer.skipBytes(valueLength + TERMINATOR_LENGTH); + + return LettuceStrings.toDouble(value); + } + + private boolean readBoolean(ByteBuf buffer) { + + byte b = buffer.readByte(); + buffer.skipBytes(TERMINATOR_LENGTH); + + switch (b) { + case 't': + return true; + case 'f': + return false; + } + + throw new RedisProtocolException("Unexpected BOOLEAN value: " + b); + } + + private ByteBuffer readLine(ByteBuf buffer) { + + ByteBuffer bytes = null; + int end = findLineEnd(buffer); + + if (end > NOT_FOUND) { + bytes = readBytes0(buffer, end - buffer.readerIndex()); + + buffer.skipBytes(TERMINATOR_LENGTH); + buffer.markReaderIndex(); + } + + return bytes; + } + + private ByteBuffer readBytes(ByteBuf buffer, int count) { + + if (buffer.readableBytes() >= count) { + + ByteBuffer byteBuffer = readBytes0(buffer, count - TERMINATOR_LENGTH); + + buffer.skipBytes(TERMINATOR_LENGTH); + buffer.markReaderIndex(); + + return byteBuffer; + } + + return null; + } + + private ByteBuffer readBytes0(ByteBuf buffer, int count) { + + ByteBuffer bytes; + responseElementBuffer.clear(); + + if (responseElementBuffer.capacity() < count) { + responseElementBuffer.capacity(count); + } + + buffer.readBytes(responseElementBuffer, count); + bytes = responseElementBuffer.internalNioBuffer(0, count); + + return bytes; + } + + /** + * Remove the head element from the stack. + * + * @param stack + */ + private void remove(State[] stack) { + stack[stackElements - 1] = null; + stackElements--; + } + + /** + * Add the element to the stack to be the new head element. + * + * @param stack + * @param state + */ + private void addFirst(State[] stack, State state) { + stack[stackElements++] = state; + } + + /** + * Returns the head element without removing it. + * + * @param stack + * @return + */ + private State peek(State[] stack) { + return stack[stackElements - 1]; + } + + /** + * Add a state as tail element. This method shifts the whole stack if the stack is not empty. + * + * @param stack + * @param state + */ + private void add(State[] stack, State state) { + + if (stackElements != 0) { + System.arraycopy(stack, 0, stack, 1, stackElements); + } + + stack[0] = state; + stackElements++; + } + + /** + * @param stack + * @return number of stack elements. + */ + private int size(State[] stack) { + return stackElements; + } + + /** + * @param stack + * @return true if the stack is empty. 
+ */ + private boolean isEmpty(State[] stack) { + return stackElements == 0; + } + + /** + * Safely sets {@link CommandOutput#set(boolean)}. Completes a command exceptionally in case an exception occurs. + * + * @param output + * @param value + * @param command + */ + protected void safeSet(CommandOutput output, boolean value, RedisCommand command) { + + try { + output.set(value); + } catch (Exception e) { + command.completeExceptionally(e); + } + } + + /** + * Safely sets {@link CommandOutput#set(long)}. Completes a command exceptionally in case an exception occurs. + * + * @param output + * @param number + * @param command + */ + protected void safeSet(CommandOutput output, long number, RedisCommand command) { + + try { + output.set(number); + } catch (Exception e) { + command.completeExceptionally(e); + } + } + + /** + * Safely sets {@link CommandOutput#set(double)}. Completes a command exceptionally in case an exception occurs. + * + * @param output + * @param number + * @param command + */ + protected void safeSet(CommandOutput output, double number, RedisCommand command) { + + try { + output.set(number); + } catch (Exception e) { + command.completeExceptionally(e); + } + } + + /** + * Safely sets {@link CommandOutput#set(ByteBuffer)}. Completes a command exceptionally in case an exception occurs. + * + * @param output + * @param bytes + * @param command + */ + protected void safeSet(CommandOutput output, ByteBuffer bytes, RedisCommand command) { + + try { + output.set(bytes); + } catch (Exception e) { + command.completeExceptionally(e); + } + } + + /** + * Safely sets {@link CommandOutput#set(ByteBuffer)}. Completes a command exceptionally in case an exception occurs. + * + * @param output + * @param bytes + * @param command + */ + protected void safeSetSingle(CommandOutput output, ByteBuffer bytes, RedisCommand command) { + + try { + output.set(bytes); + } catch (Exception e) { + command.completeExceptionally(e); + } + } + + /** + * Safely sets {@link CommandOutput#set(ByteBuffer)}. Completes a command exceptionally in case an exception occurs. + * + * @param output + * @param bytes + * @param command + */ + protected void safeSetBigNumber(CommandOutput output, ByteBuffer bytes, RedisCommand command) { + + try { + output.setBigNumber(bytes); + } catch (Exception e) { + command.completeExceptionally(e); + } + } + + /** + * Safely sets {@link CommandOutput#multiArray(int)}. Completes a command exceptionally in case an exception occurs. + * + * @param output + * @param count + * @param command + */ + protected void safeMultiArray(CommandOutput output, int count, RedisCommand command) { + + try { + output.multiArray(count); + } catch (Exception e) { + command.completeExceptionally(e); + } + } + + /** + * Safely sets {@link CommandOutput#multiPush(int)}. Completes a command exceptionally in case an exception occurs. + * + * @param output + * @param count + * @param command + */ + protected void safeMultiPush(CommandOutput output, int count, RedisCommand command) { + + try { + output.multiPush(count); + } catch (Exception e) { + command.completeExceptionally(e); + } + } + + /** + * Safely sets {@link CommandOutput#multiSet(int)}. Completes a command exceptionally in case an exception occurs. 
+ * + * @param output + * @param count + * @param command + */ + protected void safeMultiSet(CommandOutput output, int count, RedisCommand command) { + + try { + output.multiSet(count); + } catch (Exception e) { + command.completeExceptionally(e); + } + } + + /** + * Safely sets {@link CommandOutput#multiMap(int)}. Completes a command exceptionally in case an exception occurs. + * + * @param output + * @param count + * @param command + */ + protected void safeMultiMap(CommandOutput output, int count, RedisCommand command) { + + try { + output.multiMap(count); + } catch (Exception e) { + command.completeExceptionally(e); + } + } + + /** + * Safely sets {@link CommandOutput#setError(ByteBuffer)}. Completes a command exceptionally in case an exception occurs. + * + * @param output + * @param bytes + * @param command + */ + protected void safeSetError(CommandOutput output, ByteBuffer bytes, RedisCommand command) { + + try { + output.setError(bytes); + } catch (Exception e) { + command.completeExceptionally(e); + } + } + + @SuppressWarnings("unused") + static class Resp2LongProcessor implements ByteProcessor { + + long result; + boolean negative; + boolean first; + + public long getValue(ByteBuf buffer, int start, int end) { + + this.result = 0; + this.first = true; + + int length = end - start; + buffer.forEachByte(start, length, this); + + if (!this.negative) { + this.result = -this.result; + } + + buffer.skipBytes(length + TERMINATOR_LENGTH); + + return this.result; + } + + @Override + public boolean process(byte value) { + + if (first) { + first = false; + + if (value == '-') { + negative = true; + } else { + negative = false; + int digit = value - '0'; + result = result * 10 - digit; + } + return true; + } + + int digit = value - '0'; + result = result * 10 - digit; + + return true; + } + } +} diff --git a/src/main/java/io/lettuce/core/protocol/SharedLock.java b/src/main/java/io/lettuce/core/protocol/SharedLock.java new file mode 100644 index 0000000000..e22caefdb8 --- /dev/null +++ b/src/main/java/io/lettuce/core/protocol/SharedLock.java @@ -0,0 +1,143 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.protocol; + +import java.util.concurrent.atomic.AtomicLong; +import java.util.function.Supplier; + +import io.lettuce.core.internal.LettuceAssert; + +/** + * Shared locking facade that supports shared and exclusive locking. + *

    + * Multiple shared locks (writers) are allowed concurrently to process their work. If an exclusive lock is requested, the + * exclusive lock requestor will wait until all shared locks are released and the exclusive worker is permitted. + *
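+ * <p>
+ * For illustration, a sketch of the shared/exclusive interplay described above (the guarded sections are
+ * placeholders):
+ *
+ * <pre>{@code
+ * SharedLock lock = new SharedLock();
+ *
+ * // shared section: several threads may hold this concurrently
+ * lock.incrementWriters();
+ * try {
+ *     // write to the shared resource
+ * } finally {
+ *     lock.decrementWriters();
+ * }
+ *
+ * // exclusive section: waits until all shared holders have released
+ * lock.doExclusive(() -> {
+ *     // e.g. exchange the underlying channel
+ * });
+ * }</pre>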

    + * Exclusive locking is reentrant. An exclusive lock owner is permitted to acquire and release shared locks. Shared/exclusive + * lock requests by other threads than the thread which holds the exclusive lock, are forced to wait until the exclusive lock is + * released. + * + * @author Mark Paluch + */ +class SharedLock { + + private final AtomicLong writers = new AtomicLong(); + private volatile Thread exclusiveLockOwner; + + /** + * Wait for stateLock and increment writers. Will wait if stateLock is locked and if writer counter is negative. + */ + void incrementWriters() { + + if (exclusiveLockOwner == Thread.currentThread()) { + return; + } + + synchronized (this) { + for (;;) { + + if (writers.get() >= 0) { + writers.incrementAndGet(); + return; + } + } + } + } + + /** + * Decrement writers without any wait. + */ + void decrementWriters() { + + if (exclusiveLockOwner == Thread.currentThread()) { + return; + } + + writers.decrementAndGet(); + } + + /** + * Execute a {@link Runnable} guarded by an exclusive lock. + * + * @param runnable the runnable, must not be {@literal null}. + */ + void doExclusive(Runnable runnable) { + + LettuceAssert.notNull(runnable, "Runnable must not be null"); + + doExclusive(() -> { + runnable.run(); + return null; + }); + } + + /** + * Retrieve a value produced by a {@link Supplier} guarded by an exclusive lock. + * + * @param supplier the {@link Supplier}, must not be {@literal null}. + * @param the return type + * @return the return value + */ + T doExclusive(Supplier supplier) { + + LettuceAssert.notNull(supplier, "Supplier must not be null"); + + synchronized (this) { + + try { + + lockWritersExclusive(); + return supplier.get(); + } finally { + unlockWritersExclusive(); + } + } + } + + /** + * Wait for stateLock and no writers. Must be used in an outer {@code synchronized} block to prevent interleaving with other + * methods using writers. Sets writers to a negative value to create a lock for {@link #incrementWriters()}. + */ + private void lockWritersExclusive() { + + if (exclusiveLockOwner == Thread.currentThread()) { + writers.decrementAndGet(); + return; + } + + synchronized (this) { + for (;;) { + + if (writers.compareAndSet(0, -1)) { + exclusiveLockOwner = Thread.currentThread(); + return; + } + } + } + } + + /** + * Unlock writers. + */ + private void unlockWritersExclusive() { + + if (exclusiveLockOwner == Thread.currentThread()) { + if (writers.incrementAndGet() == 0) { + exclusiveLockOwner = null; + } + } + } +} diff --git a/src/main/java/io/lettuce/core/protocol/SslRedisHandshakeHandler.java b/src/main/java/io/lettuce/core/protocol/SslRedisHandshakeHandler.java new file mode 100644 index 0000000000..1250d3b73f --- /dev/null +++ b/src/main/java/io/lettuce/core/protocol/SslRedisHandshakeHandler.java @@ -0,0 +1,58 @@ +/* + * Copyright 2019-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.protocol; + +import java.time.Duration; + +import io.lettuce.core.resource.ClientResources; +import io.netty.channel.ChannelHandlerContext; +import io.netty.handler.ssl.SslHandshakeCompletionEvent; + +/** + * Handler to initialize a secure Redis Connection using a {@link ConnectionInitializer}. Delays channel activation to after the + * SSL handshake. + * + * @author Mark Paluch + * @since 6.0 + */ +public class SslRedisHandshakeHandler extends RedisHandshakeHandler { + + public SslRedisHandshakeHandler(ConnectionInitializer connectionInitializer, ClientResources clientResources, + Duration initializeTimeout) { + super(connectionInitializer, clientResources, initializeTimeout); + } + + @Override + public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception { + + if (evt instanceof SslHandshakeCompletionEvent) { + + SslHandshakeCompletionEvent event = (SslHandshakeCompletionEvent) evt; + if (event.isSuccess()) { + super.channelActive(ctx); + } else { + fail(ctx, event.cause()); + } + } + + super.userEventTriggered(ctx, evt); + } + + @Override + public void channelActive(ChannelHandlerContext ctx) { + // do not propagate channel active when using SSL. + } +} diff --git a/src/main/java/io/lettuce/core/protocol/TracedCommand.java b/src/main/java/io/lettuce/core/protocol/TracedCommand.java new file mode 100644 index 0000000000..84f8bad469 --- /dev/null +++ b/src/main/java/io/lettuce/core/protocol/TracedCommand.java @@ -0,0 +1,64 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.protocol; + +import io.lettuce.core.tracing.TraceContext; +import io.lettuce.core.tracing.TraceContextProvider; +import io.lettuce.core.tracing.Tracer; +import io.netty.buffer.ByteBuf; + +/** + * Redis command that is aware of an associated {@link TraceContext}. + * + * @author Mark Paluch + * @since 5.1 + */ +public class TracedCommand extends CommandWrapper implements TraceContextProvider { + + private final TraceContext traceContext; + private Tracer.Span span; + + public TracedCommand(RedisCommand command, TraceContext traceContext) { + super(command); + this.traceContext = traceContext; + } + + @Override + public TraceContext getTraceContext() { + return traceContext; + } + + public Tracer.Span getSpan() { + return span; + } + + public void setSpan(Tracer.Span span) { + this.span = span; + } + + @Override + public void encode(ByteBuf buf) { + + if (span != null) { + span.annotate("redis.encode.start"); + } + super.encode(buf); + + if (span != null) { + span.annotate("redis.encode.end"); + } + } +} diff --git a/src/main/java/io/lettuce/core/protocol/TransactionalCommand.java b/src/main/java/io/lettuce/core/protocol/TransactionalCommand.java new file mode 100644 index 0000000000..80f152b2e7 --- /dev/null +++ b/src/main/java/io/lettuce/core/protocol/TransactionalCommand.java @@ -0,0 +1,36 @@ +/* + * Copyright 2011-2020 the original author or authors. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.protocol; + +/** + * A wrapper for commands within a {@literal MULTI} transaction. Commands triggered within a transaction will be completed + * twice. Once on the submission and once during {@literal EXEC}. Only the second completion will complete the underlying + * command. + * + * + * @param Key type. + * @param Value type. + * @param Command output type. + * + * @author Mark Paluch + */ +public class TransactionalCommand extends AsyncCommand implements RedisCommand { + + public TransactionalCommand(RedisCommand command) { + super(command, 2); + } + +} diff --git a/src/main/java/io/lettuce/core/protocol/WithLatency.java b/src/main/java/io/lettuce/core/protocol/WithLatency.java new file mode 100644 index 0000000000..c286e8d4bb --- /dev/null +++ b/src/main/java/io/lettuce/core/protocol/WithLatency.java @@ -0,0 +1,60 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.protocol; + +/** + * Interface to items recording a latency. Unit of time depends on the actual implementation. + * + * @author Mark Paluch + */ +interface WithLatency { + + /** + * Sets the time of sending the item. + * @param time the time of when the item was sent. + */ + void sent(long time); + + /** + * Sets the time of the first response. + * @param time the time of the first response. + */ + void firstResponse(long time); + + /** + * Set the time of completion. + * @param time the time of completion. + */ + void completed(long time); + + /** + * @return the time of when the item was sent. + */ + long getSent(); + + /** + * + * @return the time of the first response. + */ + long getFirstResponse(); + + /** + * + * @return the time of completion. + */ + long getCompleted(); + +} diff --git a/src/main/java/io/lettuce/core/protocol/package-info.java b/src/main/java/io/lettuce/core/protocol/package-info.java new file mode 100644 index 0000000000..a28811c1f9 --- /dev/null +++ b/src/main/java/io/lettuce/core/protocol/package-info.java @@ -0,0 +1,4 @@ +/** + * Redis protocol layer abstraction. 
+ */ +package io.lettuce.core.protocol; diff --git a/src/main/java/io/lettuce/core/pubsub/PubSubCommandArgs.java b/src/main/java/io/lettuce/core/pubsub/PubSubCommandArgs.java new file mode 100644 index 0000000000..6218423332 --- /dev/null +++ b/src/main/java/io/lettuce/core/pubsub/PubSubCommandArgs.java @@ -0,0 +1,47 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.pubsub; + +import java.nio.ByteBuffer; + +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.protocol.CommandArgs; + +/** + * + * Command args for Pub/Sub connections. This implementation hides the first key as PubSub keys are not keys from the key-space. + * + * @author Mark Paluch + * @since 4.2 + */ +class PubSubCommandArgs extends CommandArgs { + + /** + * @param codec Codec used to encode/decode keys and values, must not be {@literal null}. + */ + public PubSubCommandArgs(RedisCodec codec) { + super(codec); + } + + /** + * + * @return always {@literal null}. + */ + @Override + public ByteBuffer getFirstEncodedKey() { + return null; + } +} diff --git a/src/main/java/io/lettuce/core/pubsub/PubSubCommandBuilder.java b/src/main/java/io/lettuce/core/pubsub/PubSubCommandBuilder.java new file mode 100644 index 0000000000..a2b4f0e882 --- /dev/null +++ b/src/main/java/io/lettuce/core/pubsub/PubSubCommandBuilder.java @@ -0,0 +1,96 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.pubsub; + +import static io.lettuce.core.protocol.CommandKeyword.CHANNELS; +import static io.lettuce.core.protocol.CommandKeyword.NUMSUB; +import static io.lettuce.core.protocol.CommandType.*; + +import java.util.List; +import java.util.Map; + +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.output.CommandOutput; +import io.lettuce.core.output.IntegerOutput; +import io.lettuce.core.output.KeyListOutput; +import io.lettuce.core.output.MapOutput; +import io.lettuce.core.protocol.BaseRedisCommandBuilder; +import io.lettuce.core.protocol.Command; +import io.lettuce.core.protocol.CommandArgs; +import io.lettuce.core.protocol.CommandType; + +/** + * Dedicated pub/sub command builder to build pub/sub commands. 
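+ * <p>
+ * For illustration, how the Pub/Sub API implementations in this package use the builder; {@code dispatch(…)} stands
+ * for the command dispatch of the calling API implementation:
+ *
+ * <pre>{@code
+ * PubSubCommandBuilder<String, String> commandBuilder = new PubSubCommandBuilder<>(StringCodec.UTF8);
+ *
+ * dispatch(commandBuilder.subscribe("news", "sports"));
+ * dispatch(commandBuilder.publish("news", "hello"));
+ * }</pre>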
+ * + * @author Mark Paluch + * @since 4.2 + */ +@SuppressWarnings("varargs") +class PubSubCommandBuilder extends BaseRedisCommandBuilder { + + static final String MUST_NOT_BE_EMPTY = "must not be empty"; + + PubSubCommandBuilder(RedisCodec codec) { + super(codec); + } + + Command publish(K channel, V message) { + CommandArgs args = new PubSubCommandArgs<>(codec).addKey(channel).addValue(message); + return createCommand(PUBLISH, new IntegerOutput<>(codec), args); + } + + Command> pubsubChannels(K pattern) { + CommandArgs args = new PubSubCommandArgs<>(codec).add(CHANNELS).addKey(pattern); + return createCommand(PUBSUB, new KeyListOutput<>(codec), args); + } + + @SafeVarargs + final Command> pubsubNumsub(K... patterns) { + LettuceAssert.notEmpty(patterns, "patterns " + MUST_NOT_BE_EMPTY); + + CommandArgs args = new PubSubCommandArgs<>(codec).add(NUMSUB).addKeys(patterns); + return createCommand(PUBSUB, new MapOutput<>((RedisCodec) codec), args); + } + + @SafeVarargs + final Command psubscribe(K... patterns) { + LettuceAssert.notEmpty(patterns, "patterns " + MUST_NOT_BE_EMPTY); + + return pubSubCommand(PSUBSCRIBE, new PubSubOutput<>(codec), patterns); + } + + @SafeVarargs + final Command punsubscribe(K... patterns) { + return pubSubCommand(PUNSUBSCRIBE, new PubSubOutput<>(codec), patterns); + } + + @SafeVarargs + final Command subscribe(K... channels) { + LettuceAssert.notEmpty(channels, "channels " + MUST_NOT_BE_EMPTY); + + return pubSubCommand(SUBSCRIBE, new PubSubOutput<>(codec), channels); + } + + @SafeVarargs + final Command unsubscribe(K... channels) { + return pubSubCommand(UNSUBSCRIBE, new PubSubOutput<>(codec), channels); + } + + Command pubSubCommand(CommandType type, CommandOutput output, K... keys) { + return new Command<>(type, output, new PubSubCommandArgs<>(codec).addKeys(keys)); + } +} diff --git a/src/main/java/io/lettuce/core/pubsub/PubSubCommandHandler.java b/src/main/java/io/lettuce/core/pubsub/PubSubCommandHandler.java new file mode 100644 index 0000000000..461ee00f55 --- /dev/null +++ b/src/main/java/io/lettuce/core/pubsub/PubSubCommandHandler.java @@ -0,0 +1,254 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.pubsub; + +import java.nio.ByteBuffer; +import java.util.ArrayDeque; +import java.util.Deque; + +import io.lettuce.core.ClientOptions; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.output.CommandOutput; +import io.lettuce.core.output.ReplayOutput; +import io.lettuce.core.protocol.CommandHandler; +import io.lettuce.core.protocol.RedisCommand; +import io.lettuce.core.resource.ClientResources; +import io.netty.buffer.ByteBuf; +import io.netty.channel.ChannelHandler; +import io.netty.channel.ChannelHandlerContext; +import io.netty.util.internal.logging.InternalLogger; +import io.netty.util.internal.logging.InternalLoggerFactory; + +/** + * A netty {@link ChannelHandler} responsible for writing Redis Pub/Sub commands and reading the response stream from the + * server. {@link PubSubCommandHandler} accounts for Pub/Sub message notification calling back + * {@link PubSubEndpoint#notifyMessage(PubSubOutput)}. Redis responses can be interleaved in the sense that a response contains + * a Pub/Sub message first, then a command response. Possible interleave is introspected via {@link ResponseHeaderReplayOutput} + * and decoding hooks. + * + * @param Key type. + * @param Value type. + * @author Will Glozer + * @author Mark Paluch + */ +public class PubSubCommandHandler extends CommandHandler { + + private static final InternalLogger logger = InternalLoggerFactory.getInstance(PubSubCommandHandler.class); + + private final PubSubEndpoint endpoint; + private final RedisCodec codec; + private final Deque> queue = new ArrayDeque<>(); + + private ResponseHeaderReplayOutput replay; + private PubSubOutput output; + + /** + * Initialize a new instance. + * + * @param clientOptions client options for this connection, must not be {@literal null} + * @param clientResources client resources for this connection + * @param codec Codec. + * @param endpoint the Pub/Sub endpoint for Pub/Sub callback. 
+ */ + public PubSubCommandHandler(ClientOptions clientOptions, ClientResources clientResources, RedisCodec codec, + PubSubEndpoint endpoint) { + + super(clientOptions, clientResources, endpoint); + + this.endpoint = endpoint; + this.codec = codec; + this.output = new PubSubOutput<>(codec); + } + + @Override + public void channelInactive(ChannelHandlerContext ctx) throws Exception { + + replay = null; + queue.clear(); + + super.channelInactive(ctx); + } + + @SuppressWarnings("unchecked") + @Override + protected void decode(ChannelHandlerContext ctx, ByteBuf buffer) throws InterruptedException { + + if (output.type() != null && !output.isCompleted()) { + + if (!super.decode(buffer, output)) { + return; + } + + RedisCommand peek = getStack().peek(); + canComplete(peek); + doNotifyMessage(output); + output = new PubSubOutput<>(codec); + } + + if (!getStack().isEmpty()) { + super.decode(ctx, buffer); + } + + ReplayOutput replay; + while ((replay = queue.poll()) != null) { + + replay.replay(output); + doNotifyMessage(output); + output = new PubSubOutput<>(codec); + } + + while (super.getStack().isEmpty() && buffer.isReadable()) { + + if (!super.decode(buffer, output)) { + return; + } + + doNotifyMessage(output); + output = new PubSubOutput<>(codec); + } + + buffer.discardReadBytes(); + + } + + @Override + protected boolean canDecode(ByteBuf buffer) { + return super.canDecode(buffer) && output.type() == null; + } + + @Override + protected boolean canComplete(RedisCommand command) { + + if (isPubSubMessage(replay)) { + + queue.add(replay); + replay = null; + return false; + } + + return super.canComplete(command); + } + + @Override + protected void complete(RedisCommand command) { + + if (replay != null && command.getOutput() != null) { + try { + replay.replay(command.getOutput()); + } catch (Exception e) { + command.completeExceptionally(e); + } + replay = null; + } + + super.complete(command); + } + + /** + * Check whether {@link ResponseHeaderReplayOutput} contains a Pub/Sub message that requires Pub/Sub dispatch instead of to + * be used as Command output. + * + * @param replay + * @return + */ + private static boolean isPubSubMessage(ResponseHeaderReplayOutput replay) { + + if (replay == null) { + return false; + } + + String firstElement = replay.firstElement; + if (replay.multiCount != null && firstElement != null) { + + if (replay.multiCount == 3 && firstElement.equalsIgnoreCase(PubSubOutput.Type.message.name())) { + return true; + } + + if (replay.multiCount == 4 && firstElement.equalsIgnoreCase(PubSubOutput.Type.pmessage.name())) { + return true; + } + } + + return false; + } + + @Override + protected CommandOutput getCommandOutput(RedisCommand command) { + + if (getStack().isEmpty() || command.getOutput() == null) { + return super.getCommandOutput(command); + } + + if (replay == null) { + replay = new ResponseHeaderReplayOutput<>(); + } + + return replay; + } + + @Override + @SuppressWarnings("unchecked") + protected void afterDecode(ChannelHandlerContext ctx, RedisCommand command) { + + if (command.getOutput() instanceof PubSubOutput) { + doNotifyMessage((PubSubOutput) command.getOutput()); + } + } + + private void doNotifyMessage(PubSubOutput output) { + try { + endpoint.notifyMessage(output); + } catch (Exception e) { + logger.error("Unexpected error occurred in PubSubEndpoint.notifyMessage", e); + } + } + + /** + * Inspectable {@link ReplayOutput} to investigate the first multi and string response elements. 
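+ * <p>
+ * The captured header is sufficient to tell a Pub/Sub notification apart from a regular command response: a
+ * {@code message} notification arrives as a RESP array of three elements, a {@code pmessage} notification as an array
+ * of four elements. For illustration, with channel {@code news}, pattern {@code n?ws} and payload {@code hello}:
+ *
+ * <pre>{@code
+ * *3\r\n$7\r\nmessage\r\n$4\r\nnews\r\n$5\r\nhello\r\n
+ * *4\r\n$8\r\npmessage\r\n$4\r\nn?ws\r\n$4\r\nnews\r\n$5\r\nhello\r\n
+ * }</pre>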
+ * + * @param + * @param + */ + static class ResponseHeaderReplayOutput extends ReplayOutput { + + Integer multiCount; + String firstElement; + + @Override + public void set(ByteBuffer bytes) { + + if (firstElement == null && bytes != null && bytes.remaining() > 0) { + + bytes.mark(); + firstElement = StringCodec.ASCII.decodeKey(bytes); + bytes.reset(); + } + + super.set(bytes); + } + + @Override + public void multi(int count) { + + if (multiCount == null) { + multiCount = count; + } + + super.multi(count); + } + } +} diff --git a/src/main/java/io/lettuce/core/pubsub/PubSubEndpoint.java b/src/main/java/io/lettuce/core/pubsub/PubSubEndpoint.java new file mode 100644 index 0000000000..e027d9fa1c --- /dev/null +++ b/src/main/java/io/lettuce/core/pubsub/PubSubEndpoint.java @@ -0,0 +1,312 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.pubsub; + +import java.util.*; +import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.CopyOnWriteArrayList; + +import io.lettuce.core.ClientOptions; +import io.lettuce.core.RedisException; +import io.lettuce.core.protocol.CommandType; +import io.lettuce.core.protocol.DefaultEndpoint; +import io.lettuce.core.protocol.RedisCommand; +import io.lettuce.core.resource.ClientResources; +import io.netty.channel.Channel; +import io.netty.util.internal.logging.InternalLogger; +import io.netty.util.internal.logging.InternalLoggerFactory; + +/** + * @author Mark Paluch + */ +public class PubSubEndpoint extends DefaultEndpoint { + + private static final InternalLogger logger = InternalLoggerFactory.getInstance(PubSubEndpoint.class); + private static final Set ALLOWED_COMMANDS_SUBSCRIBED; + private static final Set SUBSCRIBE_COMMANDS; + private final List> listeners = new CopyOnWriteArrayList<>(); + private final Set> channels; + private final Set> patterns; + private volatile boolean subscribeWritten = false; + + static { + + ALLOWED_COMMANDS_SUBSCRIBED = new HashSet<>(5, 1); + + ALLOWED_COMMANDS_SUBSCRIBED.add(CommandType.SUBSCRIBE.name()); + ALLOWED_COMMANDS_SUBSCRIBED.add(CommandType.PSUBSCRIBE.name()); + ALLOWED_COMMANDS_SUBSCRIBED.add(CommandType.UNSUBSCRIBE.name()); + ALLOWED_COMMANDS_SUBSCRIBED.add(CommandType.PUNSUBSCRIBE.name()); + ALLOWED_COMMANDS_SUBSCRIBED.add(CommandType.QUIT.name()); + + SUBSCRIBE_COMMANDS = new HashSet<>(2, 1); + + SUBSCRIBE_COMMANDS.add(CommandType.SUBSCRIBE.name()); + SUBSCRIBE_COMMANDS.add(CommandType.PSUBSCRIBE.name()); + } + + /** + * Initialize a new instance that handles commands from the supplied queue. + * + * @param clientOptions client options for this connection, must not be {@literal null} + * @param clientResources client resources for this connection, must not be {@literal null}. 
+ */ + public PubSubEndpoint(ClientOptions clientOptions, ClientResources clientResources) { + + super(clientOptions, clientResources); + + this.channels = ConcurrentHashMap.newKeySet(); + this.patterns = ConcurrentHashMap.newKeySet(); + } + + /** + * Add a new {@link RedisPubSubListener listener}. + * + * @param listener the listener, must not be {@literal null}. + */ + public void addListener(RedisPubSubListener listener) { + listeners.add(listener); + } + + /** + * Remove an existing {@link RedisPubSubListener listener}.. + * + * @param listener the listener, must not be {@literal null}. + */ + public void removeListener(RedisPubSubListener listener) { + listeners.remove(listener); + } + + protected List> getListeners() { + return listeners; + } + + public boolean hasChannelSubscriptions() { + return !channels.isEmpty(); + } + + public Set getChannels() { + return unwrap(this.channels); + } + + public boolean hasPatternSubscriptions() { + return !patterns.isEmpty(); + } + + public Set getPatterns() { + return unwrap(this.patterns); + } + + @Override + public void notifyChannelActive(Channel channel) { + subscribeWritten = false; + super.notifyChannelActive(channel); + } + + @Override + public RedisCommand write(RedisCommand command) { + + if (isSubscribed() && !isAllowed(command)) { + rejectCommand(command); + return command; + } + + if (!subscribeWritten && SUBSCRIBE_COMMANDS.contains(command.getType().name())) { + subscribeWritten = true; + } + + return super.write(command); + } + + @Override + public Collection> write(Collection> redisCommands) { + + if (isSubscribed()) { + + if (containsViolatingCommands(redisCommands)) { + rejectCommands(redisCommands); + return (Collection>) redisCommands; + } + } + + if (!subscribeWritten) { + for (RedisCommand redisCommand : redisCommands) { + if (SUBSCRIBE_COMMANDS.contains(redisCommand.getType().name())) { + subscribeWritten = true; + break; + } + } + } + + return super.write(redisCommands); + } + + protected void rejectCommand(RedisCommand command) { + command.completeExceptionally( + new RedisException(String.format("Command %s not allowed while subscribed. Allowed commands are: %s", + command.getType().name(), ALLOWED_COMMANDS_SUBSCRIBED))); + } + + protected void rejectCommands(Collection> redisCommands) { + for (RedisCommand command : redisCommands) { + command.completeExceptionally( + new RedisException(String.format("Command %s not allowed while subscribed. 
Allowed commands are: %s", + command.getType().name(), ALLOWED_COMMANDS_SUBSCRIBED))); + } + } + + protected boolean containsViolatingCommands(Collection> redisCommands) { + + for (RedisCommand redisCommand : redisCommands) { + + if (!isAllowed(redisCommand)) { + return true; + } + } + + return false; + } + + private static boolean isAllowed(RedisCommand command) { + return ALLOWED_COMMANDS_SUBSCRIBED.contains(command.getType().name()); + } + + private boolean isSubscribed() { + return subscribeWritten && (hasChannelSubscriptions() || hasPatternSubscriptions()); + } + + public void notifyMessage(PubSubOutput output) { + + // drop empty messages + if (output.type() == null || (output.pattern() == null && output.channel() == null && output.get() == null)) { + return; + } + + updateInternalState(output); + try { + notifyListeners(output); + } catch (Exception e) { + logger.error("Unexpected error occurred in RedisPubSubListener callback", e); + } + } + + protected void notifyListeners(PubSubOutput output) { + // update listeners + for (RedisPubSubListener listener : listeners) { + switch (output.type()) { + case message: + listener.message(output.channel(), output.get()); + break; + case pmessage: + listener.message(output.pattern(), output.channel(), output.get()); + break; + case psubscribe: + listener.psubscribed(output.pattern(), output.count()); + break; + case punsubscribe: + listener.punsubscribed(output.pattern(), output.count()); + break; + case subscribe: + listener.subscribed(output.channel(), output.count()); + break; + case unsubscribe: + listener.unsubscribed(output.channel(), output.count()); + break; + default: + throw new UnsupportedOperationException("Operation " + output.type() + " not supported"); + } + } + } + + private void updateInternalState(PubSubOutput output) { + // update internal state + switch (output.type()) { + case psubscribe: + patterns.add(new Wrapper<>(output.pattern())); + break; + case punsubscribe: + patterns.remove(new Wrapper<>(output.pattern())); + break; + case subscribe: + channels.add(new Wrapper<>(output.channel())); + break; + case unsubscribe: + channels.remove(new Wrapper<>(output.channel())); + break; + default: + break; + } + } + + private Set unwrap(Set> wrapped) { + + Set result = new LinkedHashSet<>(wrapped.size()); + + for (Wrapper channel : wrapped) { + result.add(channel.name); + } + + return result; + } + + /** + * Comparison/equality wrapper with specific {@code byte[]} equals and hashCode implementations. 
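+ * <p>
+ * Plain arrays compare by identity, so equal channel names encoded as {@code byte[]} would otherwise never match in
+ * the subscription sets. For illustration:
+ *
+ * <pre>{@code
+ * byte[] a = "news".getBytes(StandardCharsets.UTF_8);
+ * byte[] b = "news".getBytes(StandardCharsets.UTF_8);
+ *
+ * a.equals(b);                               // false: identity comparison
+ * new Wrapper<>(a).equals(new Wrapper<>(b)); // true: content comparison via Arrays.equals
+ * }</pre>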
+ * + * @param + */ + static class Wrapper { + + protected final K name; + + public Wrapper(K name) { + this.name = name; + } + + @Override + public int hashCode() { + + if (name instanceof byte[]) { + return Arrays.hashCode((byte[]) name); + } + return name.hashCode(); + } + + @Override + public boolean equals(Object obj) { + + if (!(obj instanceof Wrapper)) { + return false; + } + + Wrapper that = (Wrapper) obj; + + if (name instanceof byte[] && that.name instanceof byte[]) { + return Arrays.equals((byte[]) name, (byte[]) that.name); + } + + return name.equals(that.name); + } + + @Override + public String toString() { + final StringBuffer sb = new StringBuffer(); + sb.append(getClass().getSimpleName()); + sb.append(" [name=").append(name); + sb.append(']'); + return sb.toString(); + } + } +} diff --git a/src/main/java/io/lettuce/core/pubsub/PubSubOutput.java b/src/main/java/io/lettuce/core/pubsub/PubSubOutput.java new file mode 100644 index 0000000000..e02ba83ae3 --- /dev/null +++ b/src/main/java/io/lettuce/core/pubsub/PubSubOutput.java @@ -0,0 +1,118 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.pubsub; + +import java.nio.ByteBuffer; + +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.output.CommandOutput; + +/** + * One element of the Redis pub/sub stream. May be a message or notification of subscription details. + * + * @param Key type. + * @param Value type. + * @param Result type. 
+ * @author Will Glozer + */ +public class PubSubOutput extends CommandOutput { + + public enum Type { + message, pmessage, psubscribe, punsubscribe, subscribe, unsubscribe + } + + private Type type; + private K channel; + private K pattern; + private long count; + private boolean completed; + + public PubSubOutput(RedisCodec codec) { + super(codec, null); + } + + public Type type() { + return type; + } + + public K channel() { + return channel; + } + + public K pattern() { + return pattern; + } + + public long count() { + return count; + } + + @Override + @SuppressWarnings({ "fallthrough", "unchecked" }) + public void set(ByteBuffer bytes) { + + if (bytes == null) { + return; + } + + if (type == null) { + type = Type.valueOf(decodeAscii(bytes)); + return; + } + + handleOutput(bytes); + } + + @SuppressWarnings("unchecked") + private void handleOutput(ByteBuffer bytes) { + switch (type) { + case pmessage: + if (pattern == null) { + pattern = codec.decodeKey(bytes); + break; + } + case message: + if (channel == null) { + channel = codec.decodeKey(bytes); + break; + } + output = (T) codec.decodeValue(bytes); + completed = true; + break; + case psubscribe: + case punsubscribe: + pattern = codec.decodeKey(bytes); + break; + case subscribe: + case unsubscribe: + channel = codec.decodeKey(bytes); + break; + default: + throw new UnsupportedOperationException("Operation " + type + " not supported"); + } + } + + @Override + public void set(long integer) { + count = integer; + // count comes last in (p)(un)subscribe ack. + completed = true; + } + + boolean isCompleted() { + return completed; + } +} diff --git a/src/main/java/io/lettuce/core/pubsub/RedisPubSubAdapter.java b/src/main/java/io/lettuce/core/pubsub/RedisPubSubAdapter.java new file mode 100644 index 0000000000..397f69b9c2 --- /dev/null +++ b/src/main/java/io/lettuce/core/pubsub/RedisPubSubAdapter.java @@ -0,0 +1,57 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.pubsub; + +/** + * Convenience adapter with an empty implementation of all {@link RedisPubSubListener} callback methods. + * + * @param Key type. + * @param Value type. 
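+ *
+ * <p>
+ * Extend the adapter and override only the callbacks of interest, for example to consume channel messages; a
+ * {@code StatefulRedisPubSubConnection<String, String> connection} is assumed here:
+ *
+ * <pre>{@code
+ * connection.addListener(new RedisPubSubAdapter<String, String>() {
+ *
+ *     public void message(String channel, String message) {
+ *         System.out.println(channel + ": " + message);
+ *     }
+ * });
+ * }</pre>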
+ * + * @author Will Glozer + */ +public class RedisPubSubAdapter implements RedisPubSubListener { + + @Override + public void message(K channel, V message) { + // empty adapter method + } + + @Override + public void message(K pattern, K channel, V message) { + // empty adapter method + } + + @Override + public void subscribed(K channel, long count) { + // empty adapter method + } + + @Override + public void psubscribed(K pattern, long count) { + // empty adapter method + } + + @Override + public void unsubscribed(K channel, long count) { + // empty adapter method + } + + @Override + public void punsubscribed(K pattern, long count) { + // empty adapter method + } +} diff --git a/src/main/java/io/lettuce/core/pubsub/RedisPubSubAsyncCommandsImpl.java b/src/main/java/io/lettuce/core/pubsub/RedisPubSubAsyncCommandsImpl.java new file mode 100644 index 0000000000..64f42f2f9b --- /dev/null +++ b/src/main/java/io/lettuce/core/pubsub/RedisPubSubAsyncCommandsImpl.java @@ -0,0 +1,93 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.pubsub; + +import java.util.List; +import java.util.Map; + +import io.lettuce.core.RedisAsyncCommandsImpl; +import io.lettuce.core.RedisFuture; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.pubsub.api.async.RedisPubSubAsyncCommands; + +/** + * An asynchronous and thread-safe API for a Redis pub/sub connection. + * + * @param Key type. + * @param Value type. + * @author Will Glozer + * @author Mark Paluch + */ +public class RedisPubSubAsyncCommandsImpl extends RedisAsyncCommandsImpl implements RedisPubSubAsyncCommands { + + private final PubSubCommandBuilder commandBuilder; + + /** + * Initialize a new connection. + * + * @param connection the connection . + * @param codec Codec used to encode/decode keys and values. + */ + public RedisPubSubAsyncCommandsImpl(StatefulRedisPubSubConnection connection, RedisCodec codec) { + super(connection, codec); + this.commandBuilder = new PubSubCommandBuilder<>(codec); + } + + @Override + @SuppressWarnings("unchecked") + public RedisFuture psubscribe(K... patterns) { + return (RedisFuture) dispatch(commandBuilder.psubscribe(patterns)); + } + + @Override + @SuppressWarnings("unchecked") + public RedisFuture punsubscribe(K... patterns) { + return (RedisFuture) dispatch(commandBuilder.punsubscribe(patterns)); + } + + @Override + @SuppressWarnings("unchecked") + public RedisFuture subscribe(K... channels) { + return (RedisFuture) dispatch(commandBuilder.subscribe(channels)); + } + + @Override + @SuppressWarnings("unchecked") + public RedisFuture unsubscribe(K... 
channels) { + return (RedisFuture) dispatch(commandBuilder.unsubscribe(channels)); + } + + @Override + public RedisFuture publish(K channel, V message) { + return dispatch(commandBuilder.publish(channel, message)); + } + + @Override + public RedisFuture> pubsubChannels(K channel) { + return dispatch(commandBuilder.pubsubChannels(channel)); + } + + @Override + public RedisFuture> pubsubNumsub(K... channels) { + return dispatch(commandBuilder.pubsubNumsub(channels)); + } + + @Override + @SuppressWarnings("unchecked") + public StatefulRedisPubSubConnection getStatefulConnection() { + return (StatefulRedisPubSubConnection) super.getStatefulConnection(); + } +} diff --git a/src/main/java/io/lettuce/core/pubsub/RedisPubSubListener.java b/src/main/java/io/lettuce/core/pubsub/RedisPubSubListener.java new file mode 100644 index 0000000000..006d7c33b6 --- /dev/null +++ b/src/main/java/io/lettuce/core/pubsub/RedisPubSubListener.java @@ -0,0 +1,74 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.pubsub; + +/** + * Interface for redis pub/sub listeners. + * + * @param Key type. + * @param Value type. + * @author Will Glozer + */ +public interface RedisPubSubListener { + /** + * Message received from a channel subscription. + * + * @param channel Channel. + * @param message Message. + */ + void message(K channel, V message); + + /** + * Message received from a pattern subscription. + * + * @param pattern Pattern + * @param channel Channel + * @param message Message + */ + void message(K pattern, K channel, V message); + + /** + * Subscribed to a channel. + * + * @param channel Channel + * @param count Subscription count. + */ + void subscribed(K channel, long count); + + /** + * Subscribed to a pattern. + * + * @param pattern Pattern. + * @param count Subscription count. + */ + void psubscribed(K pattern, long count); + + /** + * Unsubscribed from a channel. + * + * @param channel Channel + * @param count Subscription count. + */ + void unsubscribed(K channel, long count); + + /** + * Unsubscribed from a pattern. + * + * @param pattern Channel + * @param count Subscription count. + */ + void punsubscribed(K pattern, long count); +} diff --git a/src/main/java/io/lettuce/core/pubsub/RedisPubSubReactiveCommandsImpl.java b/src/main/java/io/lettuce/core/pubsub/RedisPubSubReactiveCommandsImpl.java new file mode 100644 index 0000000000..e0f02303a1 --- /dev/null +++ b/src/main/java/io/lettuce/core/pubsub/RedisPubSubReactiveCommandsImpl.java @@ -0,0 +1,150 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.pubsub; + +import java.util.Map; + +import reactor.core.publisher.Flux; +import reactor.core.publisher.FluxSink; +import reactor.core.publisher.Mono; +import io.lettuce.core.RedisReactiveCommandsImpl; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.pubsub.api.reactive.ChannelMessage; +import io.lettuce.core.pubsub.api.reactive.PatternMessage; +import io.lettuce.core.pubsub.api.reactive.RedisPubSubReactiveCommands; + +/** + * A reactive and thread-safe API for a Redis pub/sub connection. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 5.0 + */ +public class RedisPubSubReactiveCommandsImpl extends RedisReactiveCommandsImpl + implements RedisPubSubReactiveCommands { + + private final PubSubCommandBuilder commandBuilder; + + /** + * Initialize a new connection. + * + * @param connection the connection . + * @param codec Codec used to encode/decode keys and values. + */ + public RedisPubSubReactiveCommandsImpl(StatefulRedisPubSubConnection connection, RedisCodec codec) { + super(connection, codec); + this.commandBuilder = new PubSubCommandBuilder<>(codec); + } + + @Override + public Flux> observePatterns() { + return observePatterns(FluxSink.OverflowStrategy.BUFFER); + } + + @Override + public Flux> observePatterns(FluxSink.OverflowStrategy overflowStrategy) { + + return Flux.create(sink -> { + + RedisPubSubAdapter listener = new RedisPubSubAdapter() { + + @Override + public void message(K pattern, K channel, V message) { + sink.next(new PatternMessage<>(pattern, channel, message)); + } + }; + + StatefulRedisPubSubConnection statefulConnection = getStatefulConnection(); + statefulConnection.addListener(listener); + + sink.onDispose(() -> { + statefulConnection.removeListener(listener); + }); + + }, overflowStrategy); + } + + @Override + public Flux> observeChannels() { + return observeChannels(FluxSink.OverflowStrategy.BUFFER); + } + + @Override + public Flux> observeChannels(FluxSink.OverflowStrategy overflowStrategy) { + + return Flux.create(sink -> { + + RedisPubSubAdapter listener = new RedisPubSubAdapter() { + + @Override + public void message(K channel, V message) { + sink.next(new ChannelMessage<>(channel, message)); + } + }; + + StatefulRedisPubSubConnection statefulConnection = getStatefulConnection(); + statefulConnection.addListener(listener); + + sink.onDispose(() -> { + statefulConnection.removeListener(listener); + }); + + }, overflowStrategy); + } + + @Override + public Mono psubscribe(K... patterns) { + return createMono(() -> commandBuilder.psubscribe(patterns)).then(); + } + + @Override + public Mono punsubscribe(K... patterns) { + return createFlux(() -> commandBuilder.punsubscribe(patterns)).then(); + } + + @Override + public Mono subscribe(K... channels) { + return createFlux(() -> commandBuilder.subscribe(channels)).then(); + } + + @Override + public Mono unsubscribe(K... 
channels) { + return createFlux(() -> commandBuilder.unsubscribe(channels)).then(); + } + + @Override + public Mono publish(K channel, V message) { + return createMono(() -> commandBuilder.publish(channel, message)); + } + + @Override + public Flux pubsubChannels(K channel) { + return createDissolvingFlux(() -> commandBuilder.pubsubChannels(channel)); + } + + @Override + public Mono> pubsubNumsub(K... channels) { + return createMono(() -> commandBuilder.pubsubNumsub(channels)); + } + + @Override + @SuppressWarnings("unchecked") + public StatefulRedisPubSubConnection getStatefulConnection() { + return (StatefulRedisPubSubConnection) super.getStatefulConnection(); + } + +} diff --git a/src/main/java/io/lettuce/core/pubsub/StatefulRedisPubSubConnection.java b/src/main/java/io/lettuce/core/pubsub/StatefulRedisPubSubConnection.java new file mode 100644 index 0000000000..803014f61d --- /dev/null +++ b/src/main/java/io/lettuce/core/pubsub/StatefulRedisPubSubConnection.java @@ -0,0 +1,74 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.pubsub; + +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.pubsub.api.async.RedisPubSubAsyncCommands; +import io.lettuce.core.pubsub.api.reactive.RedisPubSubReactiveCommands; +import io.lettuce.core.pubsub.api.sync.RedisPubSubCommands; + +/** + * An asynchronous thread-safe pub/sub connection to a redis server. After one or more channels are subscribed to only pub/sub + * related commands or {@literal QUIT} may be called. + * + * Incoming messages and results of the {@literal subscribe}/{@literal unsubscribe} calls will be passed to all registered + * {@link RedisPubSubListener}s. + * + * A {@link io.lettuce.core.protocol.ConnectionWatchdog} monitors each connection and reconnects automatically until + * {@link #close} is called. Channel and pattern subscriptions are renewed after reconnecting. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.0 + */ +public interface StatefulRedisPubSubConnection extends StatefulRedisConnection { + + /** + * Returns the {@link RedisPubSubCommands} API for the current connection. Does not create a new connection. + * + * @return the synchronous API for the underlying connection. + */ + RedisPubSubCommands sync(); + + /** + * Returns the {@link RedisPubSubAsyncCommands} API for the current connection. Does not create a new connection. + * + * @return the asynchronous API for the underlying connection. + */ + RedisPubSubAsyncCommands async(); + + /** + * Returns the {@link RedisPubSubReactiveCommands} API for the current connection. Does not create a new connection. + * + * @return the reactive API for the underlying connection. + */ + RedisPubSubReactiveCommands reactive(); + + /** + * Add a new {@link RedisPubSubListener listener}. + * + * @param listener the listener, must not be {@literal null}. 
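Since this hunk introduces the listener contract, the no-op adapter, and the stateful pub/sub connection with its asynchronous commands, a minimal usage sketch may help to see how they fit together. Everything specific in it (the URI, the channel name, the use of the String codec) is an illustrative assumption, not part of this change.

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.pubsub.RedisPubSubAdapter;
import io.lettuce.core.pubsub.StatefulRedisPubSubConnection;
import io.lettuce.core.pubsub.api.async.RedisPubSubAsyncCommands;

public class PubSubSketch {

    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379"); // hypothetical address

        // Dedicated pub/sub connection; regular commands remain available on it as well.
        StatefulRedisPubSubConnection<String, String> connection = client.connectPubSub();

        // Override only the callbacks you care about; RedisPubSubAdapter stubs out the rest.
        connection.addListener(new RedisPubSubAdapter<String, String>() {
            @Override
            public void message(String channel, String message) {
                System.out.printf("channel=%s message=%s%n", channel, message);
            }
        });

        RedisPubSubAsyncCommands<String, String> async = connection.async();
        async.subscribe("news"); // RedisFuture<Void>, completes once the subscription is registered

        // ... consume messages, then clean up
        connection.close();
        client.shutdown();
    }
}
```

Because channel and pattern subscriptions are re-issued after a reconnect (see resubscribe() further down in this diff), a registered listener keeps receiving messages across connection failures.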
+ */ + void addListener(RedisPubSubListener listener); + + /** + * Remove an existing {@link RedisPubSubListener listener}.. + * + * @param listener the listener, must not be {@literal null}. + */ + void removeListener(RedisPubSubListener listener); +} diff --git a/src/main/java/io/lettuce/core/pubsub/StatefulRedisPubSubConnectionImpl.java b/src/main/java/io/lettuce/core/pubsub/StatefulRedisPubSubConnectionImpl.java new file mode 100644 index 0000000000..684e97f3fd --- /dev/null +++ b/src/main/java/io/lettuce/core/pubsub/StatefulRedisPubSubConnectionImpl.java @@ -0,0 +1,155 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.pubsub; + +import java.lang.reflect.Array; +import java.time.Duration; +import java.util.ArrayList; +import java.util.Collection; +import java.util.List; + +import io.lettuce.core.RedisChannelWriter; +import io.lettuce.core.RedisCommandExecutionException; +import io.lettuce.core.RedisFuture; +import io.lettuce.core.StatefulRedisConnectionImpl; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.protocol.ConnectionWatchdog; +import io.lettuce.core.pubsub.api.async.RedisPubSubAsyncCommands; +import io.lettuce.core.pubsub.api.reactive.RedisPubSubReactiveCommands; +import io.lettuce.core.pubsub.api.sync.RedisPubSubCommands; +import io.netty.util.internal.logging.InternalLoggerFactory; + +/** + * An thread-safe pub/sub connection to a Redis server. Multiple threads may share one {@link StatefulRedisPubSubConnectionImpl} + * + * A {@link ConnectionWatchdog} monitors each connection and reconnects automatically until {@link #close} is called. All + * pending commands will be (re)sent after successful reconnection. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + */ +public class StatefulRedisPubSubConnectionImpl extends StatefulRedisConnectionImpl + implements StatefulRedisPubSubConnection { + + private final PubSubEndpoint endpoint; + + /** + * Initialize a new connection. + * + * @param endpoint the {@link PubSubEndpoint} + * @param writer the writer used to write commands + * @param codec Codec used to encode/decode keys and values. + * @param timeout Maximum time to wait for a response. + */ + public StatefulRedisPubSubConnectionImpl(PubSubEndpoint endpoint, RedisChannelWriter writer, RedisCodec codec, + Duration timeout) { + + super(writer, codec, timeout); + + this.endpoint = endpoint; + } + + /** + * Add a new listener. + * + * @param listener Listener. + */ + @Override + public void addListener(RedisPubSubListener listener) { + endpoint.addListener(listener); + } + + /** + * Remove an existing listener. + * + * @param listener Listener. 
+ */ + @Override + public void removeListener(RedisPubSubListener listener) { + endpoint.removeListener(listener); + } + + @Override + public RedisPubSubAsyncCommands async() { + return (RedisPubSubAsyncCommands) async; + } + + @Override + protected RedisPubSubAsyncCommandsImpl newRedisAsyncCommandsImpl() { + return new RedisPubSubAsyncCommandsImpl<>(this, codec); + } + + @Override + public RedisPubSubCommands sync() { + return (RedisPubSubCommands) sync; + } + + @Override + protected RedisPubSubCommands newRedisSyncCommandsImpl() { + return syncHandler(async(), RedisPubSubCommands.class); + } + + @Override + public RedisPubSubReactiveCommands reactive() { + return (RedisPubSubReactiveCommands) reactive; + } + + @Override + protected RedisPubSubReactiveCommandsImpl newRedisReactiveCommandsImpl() { + return new RedisPubSubReactiveCommandsImpl<>(this, codec); + } + + /** + * Re-subscribe to all previously subscribed channels and patterns. + * + * @return list of the futures of the {@literal subscribe} and {@literal psubscribe} commands. + */ + protected List> resubscribe() { + + List> result = new ArrayList<>(); + + if (endpoint.hasChannelSubscriptions()) { + result.add(async().subscribe(toArray(endpoint.getChannels()))); + } + + if (endpoint.hasPatternSubscriptions()) { + result.add(async().psubscribe(toArray(endpoint.getPatterns()))); + } + + return result; + } + + @SuppressWarnings("unchecked") + private T[] toArray(Collection c) { + Class cls = (Class) c.iterator().next().getClass(); + T[] array = (T[]) Array.newInstance(cls, c.size()); + return c.toArray(array); + } + + @Override + public void activated() { + super.activated(); + for (RedisFuture command : resubscribe()) { + command.exceptionally(throwable -> { + if (throwable instanceof RedisCommandExecutionException) { + InternalLoggerFactory.getInstance(getClass()).warn("Re-subscribe failed: " + command.getError()); + } + return null; + }); + } + } +} diff --git a/src/main/java/io/lettuce/core/pubsub/api/async/RedisPubSubAsyncCommands.java b/src/main/java/io/lettuce/core/pubsub/api/async/RedisPubSubAsyncCommands.java new file mode 100644 index 0000000000..df787d5712 --- /dev/null +++ b/src/main/java/io/lettuce/core/pubsub/api/async/RedisPubSubAsyncCommands.java @@ -0,0 +1,68 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.pubsub.api.async; + +import io.lettuce.core.RedisFuture; +import io.lettuce.core.api.async.RedisAsyncCommands; +import io.lettuce.core.pubsub.StatefulRedisPubSubConnection; + +/** + * Asynchronous and thread-safe Redis PubSub API. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 3.0 + */ +public interface RedisPubSubAsyncCommands extends RedisAsyncCommands { + + /** + * Listen for messages published to channels matching the given patterns. 
+ * + * @param patterns the patterns + * @return RedisFuture<Void> Future to synchronize {@code psubscribe} completion + */ + RedisFuture psubscribe(K... patterns); + + /** + * Stop listening for messages posted to channels matching the given patterns. + * + * @param patterns the patterns + * @return RedisFuture<Void> Future to synchronize {@code punsubscribe} completion + */ + RedisFuture punsubscribe(K... patterns); + + /** + * Listen for messages published to the given channels. + * + * @param channels the channels + * @return RedisFuture<Void> Future to synchronize {@code subscribe} completion + */ + RedisFuture subscribe(K... channels); + + /** + * Stop listening for messages posted to the given channels. + * + * @param channels the channels + * @return RedisFuture<Void> Future to synchronize {@code unsubscribe} completion. + */ + RedisFuture unsubscribe(K... channels); + + /** + * @return the underlying connection. + */ + StatefulRedisPubSubConnection getStatefulConnection(); +} diff --git a/src/main/java/io/lettuce/core/pubsub/api/async/package-info.java b/src/main/java/io/lettuce/core/pubsub/api/async/package-info.java new file mode 100644 index 0000000000..98a910594e --- /dev/null +++ b/src/main/java/io/lettuce/core/pubsub/api/async/package-info.java @@ -0,0 +1,4 @@ +/** + * Pub/Sub Redis API for asynchronous executed commands. + */ +package io.lettuce.core.pubsub.api.async; diff --git a/src/main/java/io/lettuce/core/pubsub/api/reactive/ChannelMessage.java b/src/main/java/io/lettuce/core/pubsub/api/reactive/ChannelMessage.java new file mode 100644 index 0000000000..311e408292 --- /dev/null +++ b/src/main/java/io/lettuce/core/pubsub/api/reactive/ChannelMessage.java @@ -0,0 +1,53 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.pubsub.api.reactive; + +/** + * Message payload for a subscription to a channel. + * + * @author Mark Paluch + */ +public class ChannelMessage { + + private final K channel; + private final V message; + + /** + * + * @param channel the channel + * @param message the message + */ + public ChannelMessage(K channel, V message) { + this.channel = channel; + this.message = message; + } + + /** + * + * @return the channel + */ + public K getChannel() { + return channel; + } + + /** + * + * @return the message + */ + public V getMessage() { + return message; + } +} diff --git a/src/main/java/io/lettuce/core/pubsub/api/reactive/PatternMessage.java b/src/main/java/io/lettuce/core/pubsub/api/reactive/PatternMessage.java new file mode 100644 index 0000000000..51297e455c --- /dev/null +++ b/src/main/java/io/lettuce/core/pubsub/api/reactive/PatternMessage.java @@ -0,0 +1,64 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.pubsub.api.reactive; + +/** + * Message payload for a subscription to a pattern. + * + * @author Mark Paluch + */ +public class PatternMessage { + + private final K pattern; + private final K channel; + private final V message; + + /** + * + * @param pattern the pattern + * @param channel the channel + * @param message the message + */ + public PatternMessage(K pattern, K channel, V message) { + this.pattern = pattern; + this.channel = channel; + this.message = message; + } + + /** + * + * @return the pattern + */ + public K getPattern() { + return pattern; + } + + /** + * + * @return the channel + */ + public K getChannel() { + return channel; + } + + /** + * + * @return the message + */ + public V getMessage() { + return message; + } +} diff --git a/src/main/java/io/lettuce/core/pubsub/api/reactive/RedisPubSubReactiveCommands.java b/src/main/java/io/lettuce/core/pubsub/api/reactive/RedisPubSubReactiveCommands.java new file mode 100644 index 0000000000..c9f67eb329 --- /dev/null +++ b/src/main/java/io/lettuce/core/pubsub/api/reactive/RedisPubSubReactiveCommands.java @@ -0,0 +1,119 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.pubsub.api.reactive; + +import reactor.core.publisher.Flux; +import reactor.core.publisher.FluxSink; +import reactor.core.publisher.Mono; +import io.lettuce.core.api.reactive.RedisReactiveCommands; +import io.lettuce.core.pubsub.StatefulRedisPubSubConnection; + +/** + * Asynchronous and thread-safe Redis PubSub API. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 5.0 + */ +public interface RedisPubSubReactiveCommands extends RedisReactiveCommands { + + /** + * Flux for messages ({@literal pmessage}) received though pattern subscriptions. The connection needs to be subscribed to + * one or more patterns using {@link #psubscribe(Object[])}. + *

    + * Warning! This method uses {@link reactor.core.publisher.FluxSink.OverflowStrategy#BUFFER} This does unbounded buffering + * and may lead to {@link OutOfMemoryError}. Use {@link #observePatterns(FluxSink.OverflowStrategy)} to specify a different + * strategy. + *

+ *
 + * @return hot Flux for subscriptions to {@literal pmessage}'s.
 + */
 + Flux<PatternMessage<K, V>> observePatterns();
 +
 + /**
 + * Flux for messages ({@literal pmessage}) received through pattern subscriptions. The connection needs to be subscribed to
 + * one or more patterns using {@link #psubscribe(Object[])}.
 + *
 + * @param overflowStrategy the overflow strategy to use.
 + * @return hot Flux for subscriptions to {@literal pmessage}'s.
 + */
 + Flux<PatternMessage<K, V>> observePatterns(FluxSink.OverflowStrategy overflowStrategy);
 +
 + /**
 + * Flux for messages ({@literal message}) received through channel subscriptions. The connection needs to be subscribed to
 + * one or more channels using {@link #subscribe(Object[])}.
 + *

    + * Warning! This method uses {@link reactor.core.publisher.FluxSink.OverflowStrategy#BUFFER} This does unbounded buffering + * and may lead to {@link OutOfMemoryError}. Use {@link #observeChannels(FluxSink.OverflowStrategy)} to specify a different + * strategy. + *

    + * + * @return hot Flux for subscriptions to {@literal message}'s. + */ + Flux> observeChannels(); + + /** + * Flux for messages ({@literal message}) received though channel subscriptions. The connection needs to be subscribed to + * one or more channels using {@link #subscribe(Object[])}. + * + * @param overflowStrategy the overflow strategy to use. + * @return hot Flux for subscriptions to {@literal message}'s. + */ + Flux> observeChannels(FluxSink.OverflowStrategy overflowStrategy); + + /** + * Listen for messages published to channels matching the given patterns. The {@link Mono} completes without a result as + * soon as the pattern subscription is registered. + * + * @param patterns the patterns. + * @return Mono<Void> Mono for {@code psubscribe} command. + */ + Mono psubscribe(K... patterns); + + /** + * Stop listening for messages posted to channels matching the given patterns. The {@link Mono} completes without a result + * as soon as the pattern subscription is unregistered. + * + * @param patterns the patterns. + * @return Mono<Void> Mono for {@code punsubscribe} command. + */ + Mono punsubscribe(K... patterns); + + /** + * Listen for messages published to the given channels. The {@link Mono} completes without a result as soon as the * + * subscription is registered. + * + * @param channels the channels. + * @return Mono<Void> Mono for {@code subscribe} command. + */ + Mono subscribe(K... channels); + + /** + * Stop listening for messages posted to the given channels. The {@link Mono} completes without a result as soon as the + * subscription is unregistered. + * + * @param channels the channels. + * @return Mono<Void> Mono for {@code unsubscribe} command. + */ + Mono unsubscribe(K... channels); + + /** + * @return the underlying connection. + */ + StatefulRedisPubSubConnection getStatefulConnection(); +} diff --git a/src/main/java/io/lettuce/core/pubsub/api/reactive/package-info.java b/src/main/java/io/lettuce/core/pubsub/api/reactive/package-info.java new file mode 100644 index 0000000000..64d21261d9 --- /dev/null +++ b/src/main/java/io/lettuce/core/pubsub/api/reactive/package-info.java @@ -0,0 +1,4 @@ +/** + * Pub/Sub Redis API for reactive command execution. + */ +package io.lettuce.core.pubsub.api.reactive; diff --git a/src/main/java/io/lettuce/core/pubsub/api/sync/RedisPubSubCommands.java b/src/main/java/io/lettuce/core/pubsub/api/sync/RedisPubSubCommands.java new file mode 100644 index 0000000000..d025015e9c --- /dev/null +++ b/src/main/java/io/lettuce/core/pubsub/api/sync/RedisPubSubCommands.java @@ -0,0 +1,63 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.pubsub.api.sync; + +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.pubsub.StatefulRedisPubSubConnection; + +/** + * Synchronous and thread-safe Redis PubSub API. + * + * @param Key type. + * @param Value type. 
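As a counterpart to the reactive interface declared above, here is a short sketch of consuming messages through observeChannels(); the channel name and the take(3) cut-off are arbitrary illustration values, not part of this change.

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.pubsub.StatefulRedisPubSubConnection;
import io.lettuce.core.pubsub.api.reactive.ChannelMessage;
import io.lettuce.core.pubsub.api.reactive.RedisPubSubReactiveCommands;
import reactor.core.publisher.Flux;

public class ReactivePubSubSketch {

    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379"); // hypothetical address
        StatefulRedisPubSubConnection<String, String> connection = client.connectPubSub();
        RedisPubSubReactiveCommands<String, String> reactive = connection.reactive();

        // Register the channel subscription; the Mono completes once SUBSCRIBE is confirmed.
        reactive.subscribe("news").subscribe();

        // observeChannels() is a hot Flux: it only emits messages that arrive while it has a subscriber.
        Flux<ChannelMessage<String, String>> messages = reactive.observeChannels();
        messages.doOnNext(msg -> System.out.println(msg.getChannel() + ": " + msg.getMessage()))
                .take(3)
                .blockLast(); // demo only: waits for three published messages; avoid blocking in real reactive code

        connection.close();
        client.shutdown();
    }
}
```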
+ * @author Mark Paluch + * @since 4.0 + */ +public interface RedisPubSubCommands extends RedisCommands { + + /** + * Listen for messages published to channels matching the given patterns. + * + * @param patterns the patterns + */ + void psubscribe(K... patterns); + + /** + * Stop listening for messages posted to channels matching the given patterns. + * + * @param patterns the patterns + */ + void punsubscribe(K... patterns); + + /** + * Listen for messages published to the given channels. + * + * @param channels the channels + */ + void subscribe(K... channels); + + /** + * Stop listening for messages posted to the given channels. + * + * @param channels the channels + */ + void unsubscribe(K... channels); + + /** + * @return the underlying connection. + */ + StatefulRedisPubSubConnection getStatefulConnection(); +} diff --git a/src/main/java/io/lettuce/core/pubsub/api/sync/package-info.java b/src/main/java/io/lettuce/core/pubsub/api/sync/package-info.java new file mode 100644 index 0000000000..a6ecbb76ab --- /dev/null +++ b/src/main/java/io/lettuce/core/pubsub/api/sync/package-info.java @@ -0,0 +1,4 @@ +/** + * Pub/Sub Redis API for synchronous executed commands. + */ +package io.lettuce.core.pubsub.api.sync; diff --git a/src/main/java/io/lettuce/core/pubsub/package-info.java b/src/main/java/io/lettuce/core/pubsub/package-info.java new file mode 100644 index 0000000000..2e0e22ad16 --- /dev/null +++ b/src/main/java/io/lettuce/core/pubsub/package-info.java @@ -0,0 +1,4 @@ +/** + * Pub/Sub connection classes. + */ +package io.lettuce.core.pubsub; diff --git a/src/main/java/io/lettuce/core/resource/ClientResources.java b/src/main/java/io/lettuce/core/resource/ClientResources.java new file mode 100644 index 0000000000..802bd17b01 --- /dev/null +++ b/src/main/java/io/lettuce/core/resource/ClientResources.java @@ -0,0 +1,365 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.resource; + +import java.util.concurrent.TimeUnit; +import java.util.function.Supplier; + +import io.lettuce.core.event.EventBus; +import io.lettuce.core.event.EventPublisherOptions; +import io.lettuce.core.metrics.CommandLatencyCollector; +import io.lettuce.core.metrics.CommandLatencyCollectorOptions; +import io.lettuce.core.tracing.Tracing; +import io.netty.util.Timer; +import io.netty.util.concurrent.EventExecutorGroup; +import io.netty.util.concurrent.Future; + +/** + * Strategy interface to provide all the infrastructure building blocks like environment settings and thread pools so that the + * client can work with it properly. {@link ClientResources} can be shared amongst multiple client instances if created outside + * the client creation. Implementations of {@link ClientResources} are stateful and must be {@link #shutdown()} after they are + * no longer in use. + * + * {@link ClientResources} provides in particular: + *
<ul>
 + * <li>{@link EventLoopGroupProvider} to obtain particular {@link io.netty.channel.EventLoopGroup EventLoopGroups}</li>
 + * <li>{@link EventExecutorGroup} to perform internal computation tasks</li>
 + * <li>{@link Timer} for scheduling</li>
 + * <li>{@link EventBus} for client event dispatching</li>
 + * <li>{@link EventPublisherOptions}</li>
 + * <li>{@link CommandLatencyCollector} to collect latency details. Requires the {@literal HdrHistogram} library.</li>
 + * <li>{@link DnsResolver} to resolve hostnames to {@link java.net.InetAddress}.</li>
 + * <li>Reconnect {@link Delay}.</li>
 + * <li>{@link Tracing} to trace Redis commands.</li>
 + * </ul>
    + * + * @author Mark Paluch + * @since 3.4 + * @see DefaultClientResources + */ +public interface ClientResources { + + /** + * Create a new {@link ClientResources} using default settings. + * + * @return a new instance of a default client resources. + */ + static ClientResources create() { + return DefaultClientResources.create(); + } + + /** + * Create a new {@link ClientResources} using default settings. + * + * @return a new instance of a default client resources. + */ + static Builder builder() { + return DefaultClientResources.builder(); + } + + /** + * Builder for {@link ClientResources}. + * + * @since 5.1 + */ + interface Builder { + + /** + * Sets the {@link CommandLatencyCollector} that can that can be used across different instances of the RedisClient. + * + * @param commandLatencyCollector the command latency collector, must not be {@literal null}. + * @return {@code this} {@link Builder}. + */ + Builder commandLatencyCollector(CommandLatencyCollector commandLatencyCollector); + + /** + * Sets the {@link CommandLatencyCollectorOptions} that can that can be used across different instances of the + * RedisClient. The options are only effective if no {@code commandLatencyCollector} is provided. + * + * @param commandLatencyCollectorOptions the command latency collector options, must not be {@literal null}. + * @return {@code this} {@link Builder}. + */ + Builder commandLatencyCollectorOptions(CommandLatencyCollectorOptions commandLatencyCollectorOptions); + + /** + * Sets the {@link EventPublisherOptions} to publish command latency metrics using the {@link EventBus}. + * + * @param commandLatencyPublisherOptions the {@link EventPublisherOptions} to publish command latency metrics using the + * {@link EventBus}, must not be {@literal null}. + * @return {@code this} {@link Builder}. + */ + Builder commandLatencyPublisherOptions(EventPublisherOptions commandLatencyPublisherOptions); + + /** + * Sets the thread pool size (number of threads to use) for computation operations (default value is the number of + * CPUs). The thread pool size is only effective if no {@code eventExecutorGroup} is provided. + * + * @param computationThreadPoolSize the thread pool size, must be greater {@code 0}. + * @return {@code this} {@link Builder}. + */ + Builder computationThreadPoolSize(int computationThreadPoolSize); + + /** + * Sets the {@link SocketAddressResolver} that is used to resolve {@link io.lettuce.core.RedisURI} to + * {@link java.net.SocketAddress}. Defaults to {@link SocketAddressResolver} using the configured {@link DnsResolver}. + * + * @param socketAddressResolver the socket address resolver, must not be {@literal null}. + * @return {@code this} {@link Builder}. + * @since 5.1 + */ + Builder socketAddressResolver(SocketAddressResolver socketAddressResolver); + + /** + * Sets the {@link DnsResolver} that is used to resolve hostnames to {@link java.net.InetAddress}. Defaults to + * {@link DnsResolvers#JVM_DEFAULT} + * + * @param dnsResolver the DNS resolver, must not be {@literal null}. + * @return {@code this} {@link Builder}. + * @since 4.3 + */ + Builder dnsResolver(DnsResolver dnsResolver); + + /** + * Sets the {@link EventBus} that can that can be used across different instances of the RedisClient. + * + * @param eventBus the event bus, must not be {@literal null}. + * @return {@code this} {@link Builder}. 
+ */ + Builder eventBus(EventBus eventBus); + + /** + * Sets a shared {@link EventExecutorGroup event executor group} that can be used across different instances of + * {@link io.lettuce.core.RedisClient} and {@link io.lettuce.core.cluster.RedisClusterClient}. The provided + * {@link EventExecutorGroup} instance will not be shut down when shutting down the client resources. You have to take + * care of that. This is an advanced configuration that should only be used if you know what you are doing. + * + * @param eventExecutorGroup the shared eventExecutorGroup, must not be {@literal null}. + * @return {@code this} {@link Builder}. + */ + Builder eventExecutorGroup(EventExecutorGroup eventExecutorGroup); + + /** + * Sets a shared {@link EventLoopGroupProvider event executor provider} that can be used across different instances of + * {@link io.lettuce.core.RedisClient} and {@link io.lettuce.core.cluster.RedisClusterClient}. The provided + * {@link EventLoopGroupProvider} instance will not be shut down when shutting down the client resources. You have to + * take care of that. This is an advanced configuration that should only be used if you know what you are doing. + * + * @param eventLoopGroupProvider the shared eventLoopGroupProvider, must not be {@literal null}. + * @return {@code this} {@link Builder}. + */ + Builder eventLoopGroupProvider(EventLoopGroupProvider eventLoopGroupProvider); + + /** + * Sets the thread pool size (number of threads to use) for I/O operations (default value is the number of CPUs). The + * thread pool size is only effective if no {@code eventLoopGroupProvider} is provided. + * + * @param ioThreadPoolSize the thread pool size, must be greater {@code 0}. + * @return {@code this} {@link Builder}. + */ + Builder ioThreadPoolSize(int ioThreadPoolSize); + + /** + * Sets the {@link NettyCustomizer} instance to customize netty components during connection. + * + * @param nettyCustomizer the netty customizer instance, must not be {@literal null}. + * @return this + * @since 4.4 + */ + Builder nettyCustomizer(NettyCustomizer nettyCustomizer); + + /** + * Sets the stateless reconnect {@link Delay} to delay reconnect attempts. Defaults to binary exponential delay capped + * at {@literal 30 SECONDS}. {@code reconnectDelay} must be a stateless {@link Delay}. + * + * @param reconnectDelay the reconnect delay, must not be {@literal null}. + * @return this + * @since 4.3 + */ + Builder reconnectDelay(Delay reconnectDelay); + + /** + * Sets the stateful reconnect {@link Supplier} to delay reconnect attempts. Defaults to binary exponential delay capped + * at {@literal 30 SECONDS}. + * + * @param reconnectDelay the reconnect delay, must not be {@literal null}. + * @return this + * @since 4.3 + */ + Builder reconnectDelay(Supplier reconnectDelay); + + /** + * Sets a shared {@link Timer} that can be used across different instances of {@link io.lettuce.core.RedisClient} and + * {@link io.lettuce.core.cluster.RedisClusterClient} The provided {@link Timer} instance will not be shut down when + * shutting down the client resources. You have to take care of that. This is an advanced configuration that should only + * be used if you know what you are doing. + * + * @param timer the shared {@link Timer}, must not be {@literal null}. + * @return {@code this} {@link Builder}. + * @since 4.3 + */ + Builder timer(Timer timer); + + /** + * Sets the {@link Tracing} instance to trace Redis calls. + * + * @param tracing the tracer infrastructure instance, must not be {@literal null}. 
+ * @return this + * @since 5.1 + */ + Builder tracing(Tracing tracing); + + /** + * @return a new instance of {@link DefaultClientResources}. + */ + ClientResources build(); + } + + /** + * Returns a builder to create new {@link ClientResources} whose settings are replicated from the current + * {@link ClientResources}. + * + * @return a {@link ClientResources.Builder} to create new {@link ClientResources} whose settings are replicated from the + * current {@link ClientResources} + * + * @since 5.1 + */ + Builder mutate(); + + /** + * Shutdown the {@link ClientResources}. + * + * @return eventually the success/failure of the shutdown without errors. + */ + Future shutdown(); + + /** + * Shutdown the {@link ClientResources}. + * + * @param quietPeriod the quiet period as described in the documentation + * @param timeout the maximum amount of time to wait until the executor is shutdown regardless if a task was submitted + * during the quiet period + * @param timeUnit the unit of {@code quietPeriod} and {@code timeout} + * @return eventually the success/failure of the shutdown without errors. + */ + Future shutdown(long quietPeriod, long timeout, TimeUnit timeUnit); + + /** + * Returns the {@link EventLoopGroupProvider} that provides access to the particular {@link io.netty.channel.EventLoopGroup + * event loop groups}. lettuce requires at least two implementations: {@link io.netty.channel.nio.NioEventLoopGroup} for + * TCP/IP connections and {@link io.netty.channel.epoll.EpollEventLoopGroup} for unix domain socket connections (epoll). + * + * You can use {@link DefaultEventLoopGroupProvider} as default implementation or implement an own + * {@link EventLoopGroupProvider} to share existing {@link io.netty.channel.EventLoopGroup EventLoopGroup's} with lettuce. + * + * @return the {@link EventLoopGroupProvider} which provides access to the particular + * {@link io.netty.channel.EventLoopGroup event loop groups} + */ + EventLoopGroupProvider eventLoopGroupProvider(); + + /** + * Returns the computation pool used for internal operations. Such tasks are periodic Redis Cluster and Redis Sentinel + * topology updates and scheduling of connection reconnection by {@link io.lettuce.core.protocol.ConnectionWatchdog}. + * + * @return the computation pool used for internal operations + */ + EventExecutorGroup eventExecutorGroup(); + + /** + * Returns the pool size (number of threads) for IO threads. The indicated size does not reflect the number for all IO + * threads. TCP and socket connections (epoll) require different IO pool. + * + * @return the pool size (number of threads) for all IO tasks. + */ + int ioThreadPoolSize(); + + /** + * Returns the pool size (number of threads) for all computation tasks. + * + * @return the pool size (number of threads to use). + */ + int computationThreadPoolSize(); + + /** + * Returns the {@link Timer} to schedule events. A timer object may run single- or multi-threaded but must be used for + * scheduling of short-running jobs only. Long-running jobs should be scheduled and executed using + * {@link #eventExecutorGroup()}. + * + * @return the timer. + * @since 4.3 + */ + Timer timer(); + + /** + * Returns the event bus used to publish events. + * + * @return the event bus + */ + EventBus eventBus(); + + /** + * Returns the {@link EventPublisherOptions} for latency event publishing. + * + * @return the {@link EventPublisherOptions} for latency event publishing. 
+ */ + EventPublisherOptions commandLatencyPublisherOptions(); + + /** + * Returns the {@link CommandLatencyCollector}. + * + * @return the command latency collector + */ + CommandLatencyCollector commandLatencyCollector(); + + /** + * Returns the {@link DnsResolver}. + * + * @return the DNS resolver. + * @since 4.3 + */ + DnsResolver dnsResolver(); + + /** + * Returns the {@link SocketAddressResolver}. + * + * @return the socket address resolver. + * @since 5.1 + */ + SocketAddressResolver socketAddressResolver(); + + /** + * Returns the {@link Delay} for reconnect attempts. May return a different instance on each call. + * + * @return the reconnect {@link Delay}. + * @since 4.3 + */ + Delay reconnectDelay(); + + /** + * Returns the {@link NettyCustomizer} to customize netty components. + * + * @return the configured {@link NettyCustomizer}. + * @since 4.4 + */ + NettyCustomizer nettyCustomizer(); + + /** + * Returns the {@link Tracing} instance to support tracing of Redis commands. + * + * @return the configured {@link Tracing}. + * @since 5.1 + */ + Tracing tracing(); +} diff --git a/src/main/java/io/lettuce/core/resource/ConstantDelay.java b/src/main/java/io/lettuce/core/resource/ConstantDelay.java new file mode 100644 index 0000000000..3400d0c013 --- /dev/null +++ b/src/main/java/io/lettuce/core/resource/ConstantDelay.java @@ -0,0 +1,37 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.resource; + +import java.time.Duration; + +/** + * {@link Delay} with a constant delay for each attempt. + * + * @author Mark Paluch + */ +class ConstantDelay extends Delay { + + private final Duration delay; + + ConstantDelay(Duration delay) { + this.delay = delay; + } + + @Override + public Duration createDelay(long attempt) { + return delay; + } +} diff --git a/src/main/java/io/lettuce/core/resource/DecorrelatedJitterDelay.java b/src/main/java/io/lettuce/core/resource/DecorrelatedJitterDelay.java new file mode 100644 index 0000000000..a4462ceb44 --- /dev/null +++ b/src/main/java/io/lettuce/core/resource/DecorrelatedJitterDelay.java @@ -0,0 +1,71 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.resource; + +import java.time.Duration; +import java.util.concurrent.TimeUnit; + +import io.lettuce.core.resource.Delay.StatefulDelay; + +/** + * Stateful delay that increases using decorrelated jitter strategy. 
+ *
 + * <p>
 + * Considering retry attempts start at 1, attempt 0 would be the initial call and will always yield 0 (or the lower bound).
 + * Then, each retry step will by default yield {@code min(cap, randomBetween(base, prevDelay * 3))}.
 + *
 + * This strategy is based on Exponential Backoff and Jitter.
 + * </p>
    + * + * @author Jongyeol Choi + * @author Mark Paluch + * @since 4.2 + * @see StatefulDelay + */ +class DecorrelatedJitterDelay extends Delay implements StatefulDelay { + + private final Duration lower; + private final Duration upper; + private final long base; + private final TimeUnit targetTimeUnit; + + /* + * Delays may be used by different threads, this one is volatile to prevent visibility issues + */ + private volatile long prevDelay; + + DecorrelatedJitterDelay(Duration lower, Duration upper, long base, TimeUnit targetTimeUnit) { + this.lower = lower; + this.upper = upper; + this.base = base; + this.targetTimeUnit = targetTimeUnit; + reset(); + } + + @Override + public Duration createDelay(long attempt) { + long value = randomBetween(base, Math.max(base, prevDelay * 3)); + Duration delay = applyBounds(Duration.ofNanos(targetTimeUnit.toNanos(value)), lower, upper); + prevDelay = delay.toNanos(); + return delay; + } + + @Override + public void reset() { + prevDelay = 0L; + } +} diff --git a/src/main/java/io/lettuce/core/resource/DefaultClientResources.java b/src/main/java/io/lettuce/core/resource/DefaultClientResources.java new file mode 100644 index 0000000000..f9ba3955f1 --- /dev/null +++ b/src/main/java/io/lettuce/core/resource/DefaultClientResources.java @@ -0,0 +1,694 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.resource; + +import java.util.concurrent.TimeUnit; +import java.util.function.Supplier; + +import reactor.core.scheduler.Schedulers; +import io.lettuce.core.event.DefaultEventBus; +import io.lettuce.core.event.DefaultEventPublisherOptions; +import io.lettuce.core.event.EventBus; +import io.lettuce.core.event.EventPublisherOptions; +import io.lettuce.core.event.metrics.DefaultCommandLatencyEventPublisher; +import io.lettuce.core.event.metrics.MetricEventPublisher; +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.internal.LettuceLists; +import io.lettuce.core.metrics.CommandLatencyCollector; +import io.lettuce.core.metrics.CommandLatencyCollectorOptions; +import io.lettuce.core.metrics.DefaultCommandLatencyCollector; +import io.lettuce.core.metrics.DefaultCommandLatencyCollectorOptions; +import io.lettuce.core.resource.Delay.StatefulDelay; +import io.lettuce.core.tracing.TracerProvider; +import io.lettuce.core.tracing.Tracing; +import io.netty.util.HashedWheelTimer; +import io.netty.util.Timer; +import io.netty.util.concurrent.*; +import io.netty.util.internal.SystemPropertyUtil; +import io.netty.util.internal.logging.InternalLogger; +import io.netty.util.internal.logging.InternalLoggerFactory; + +/** + * Default instance of the client resources. + *

    + * The {@link DefaultClientResources} instance is stateful, you have to shutdown the instance if you're no longer using it. + *

    + * {@link DefaultClientResources} allow to configure: + *
      + *
    • the {@code ioThreadPoolSize}, alternatively
    • + *
    • a {@code eventLoopGroupProvider} which is a provided instance of {@link EventLoopGroupProvider}. Higher precedence than + * {@code ioThreadPoolSize}.
    • + *
    • computationThreadPoolSize
    • + *
    • a {@code eventExecutorGroup} which is a provided instance of {@link EventExecutorGroup}. Higher precedence than + * {@code computationThreadPoolSize}.
    • + *
    • an {@code eventBus} which is a provided instance of {@link EventBus}.
    • + *
    • a {@code commandLatencyCollector} which is a provided instance of {@link io.lettuce.core.metrics.CommandLatencyCollector} + * .
    • + *
    • a {@code dnsResolver} which is a provided instance of {@link DnsResolver}.
    • + *
    • a {@code socketAddressResolver} which is a provided instance of {@link SocketAddressResolver}.
    • + *
    • a {@code timer} that is a provided instance of {@link io.netty.util.HashedWheelTimer}.
    • + *
    • a {@code nettyCustomizer} that is a provided instance of {@link NettyCustomizer}.
    • + *
    • a {@code tracerProvider} that is a provided instance of {@link TracerProvider}.
    • + *
    + * + * @author Mark Paluch + * @since 3.4 + */ +public class DefaultClientResources implements ClientResources { + + protected static final InternalLogger logger = InternalLoggerFactory.getInstance(DefaultClientResources.class); + + /** + * Minimum number of I/O threads. + */ + public static final int MIN_IO_THREADS = 2; + + /** + * Minimum number of computation threads. + */ + public static final int MIN_COMPUTATION_THREADS = 2; + + public static final int DEFAULT_IO_THREADS; + public static final int DEFAULT_COMPUTATION_THREADS; + + /** + * Default delay {@link Supplier} for {@link Delay#exponential()} delay. + */ + public static final Supplier DEFAULT_RECONNECT_DELAY = Delay::exponential; + + /** + * Default (no-op) {@link NettyCustomizer}. + */ + public static final NettyCustomizer DEFAULT_NETTY_CUSTOMIZER = DefaultNettyCustomizer.INSTANCE; + + static { + + int threads = Math.max(1, SystemPropertyUtil.getInt("io.netty.eventLoopThreads", + Math.max(MIN_IO_THREADS, Runtime.getRuntime().availableProcessors()))); + + DEFAULT_IO_THREADS = threads; + DEFAULT_COMPUTATION_THREADS = threads; + if (logger.isDebugEnabled()) { + logger.debug("-Dio.netty.eventLoopThreads: {}", threads); + } + } + + private final boolean sharedEventLoopGroupProvider; + private final EventLoopGroupProvider eventLoopGroupProvider; + private final boolean sharedEventExecutor; + private final EventExecutorGroup eventExecutorGroup; + private final Timer timer; + private final boolean sharedTimer; + private final EventBus eventBus; + private final CommandLatencyCollector commandLatencyCollector; + private final boolean sharedCommandLatencyCollector; + private final EventPublisherOptions commandLatencyPublisherOptions; + private final MetricEventPublisher metricEventPublisher; + private final DnsResolver dnsResolver; + private final SocketAddressResolver socketAddressResolver; + private final Supplier reconnectDelay; + private final NettyCustomizer nettyCustomizer; + private final Tracing tracing; + + private volatile boolean shutdownCalled = false; + + protected DefaultClientResources(Builder builder) { + + if (builder.eventLoopGroupProvider == null) { + int ioThreadPoolSize = builder.ioThreadPoolSize; + + if (ioThreadPoolSize < MIN_IO_THREADS) { + logger.info("ioThreadPoolSize is less than {} ({}), setting to: {}", MIN_IO_THREADS, ioThreadPoolSize, + MIN_IO_THREADS); + ioThreadPoolSize = MIN_IO_THREADS; + } + + this.sharedEventLoopGroupProvider = false; + this.eventLoopGroupProvider = new DefaultEventLoopGroupProvider(ioThreadPoolSize); + + } else { + this.sharedEventLoopGroupProvider = builder.sharedEventLoopGroupProvider; + this.eventLoopGroupProvider = builder.eventLoopGroupProvider; + } + + if (builder.eventExecutorGroup == null) { + int computationThreadPoolSize = builder.computationThreadPoolSize; + if (computationThreadPoolSize < MIN_COMPUTATION_THREADS) { + + logger.info("computationThreadPoolSize is less than {} ({}), setting to: {}", MIN_COMPUTATION_THREADS, + computationThreadPoolSize, MIN_COMPUTATION_THREADS); + computationThreadPoolSize = MIN_COMPUTATION_THREADS; + } + + eventExecutorGroup = DefaultEventLoopGroupProvider.createEventLoopGroup(DefaultEventExecutorGroup.class, + computationThreadPoolSize); + sharedEventExecutor = false; + } else { + sharedEventExecutor = builder.sharedEventExecutor; + eventExecutorGroup = builder.eventExecutorGroup; + } + + if (builder.timer == null) { + timer = new HashedWheelTimer(new DefaultThreadFactory("lettuce-timer")); + sharedTimer = false; + } else { + timer = 
builder.timer; + sharedTimer = builder.sharedTimer; + } + + if (builder.eventBus == null) { + eventBus = new DefaultEventBus(Schedulers.fromExecutor(eventExecutorGroup)); + } else { + eventBus = builder.eventBus; + } + + if (builder.commandLatencyCollector == null) { + if (DefaultCommandLatencyCollector.isAvailable()) { + if (builder.commandLatencyCollectorOptions != null) { + commandLatencyCollector = CommandLatencyCollector.create(builder.commandLatencyCollectorOptions); + } else { + commandLatencyCollector = CommandLatencyCollector.create(CommandLatencyCollectorOptions.create()); + } + } else { + logger.debug("LatencyUtils/HdrUtils are not available, metrics are disabled"); + builder.commandLatencyCollectorOptions = CommandLatencyCollectorOptions.disabled(); + commandLatencyCollector = CommandLatencyCollector.disabled(); + } + + sharedCommandLatencyCollector = false; + } else { + sharedCommandLatencyCollector = builder.sharedCommandLatencyCollector; + commandLatencyCollector = builder.commandLatencyCollector; + } + + commandLatencyPublisherOptions = builder.commandLatencyPublisherOptions; + + if (commandLatencyCollector.isEnabled() && commandLatencyPublisherOptions != null) { + metricEventPublisher = new DefaultCommandLatencyEventPublisher(eventExecutorGroup, commandLatencyPublisherOptions, + eventBus, commandLatencyCollector); + } else { + metricEventPublisher = null; + } + + if (builder.dnsResolver == null) { + dnsResolver = DnsResolvers.UNRESOLVED; + } else { + dnsResolver = builder.dnsResolver; + } + + if (builder.socketAddressResolver == null) { + socketAddressResolver = SocketAddressResolver.create(dnsResolver); + } else { + socketAddressResolver = builder.socketAddressResolver; + } + + reconnectDelay = builder.reconnectDelay; + nettyCustomizer = builder.nettyCustomizer; + tracing = builder.tracing; + } + + /** + * Create a new {@link DefaultClientResources} using default settings. + * + * @return a new instance of a default client resources. + */ + public static DefaultClientResources create() { + return builder().build(); + } + + /** + * Returns a new {@link DefaultClientResources.Builder} to construct {@link DefaultClientResources}. + * + * @return a new {@link DefaultClientResources.Builder} to construct {@link DefaultClientResources}. + */ + public static DefaultClientResources.Builder builder() { + return new DefaultClientResources.Builder(); + } + + /** + * Builder for {@link DefaultClientResources}. 
+ */ + public static class Builder implements ClientResources.Builder { + + private boolean sharedEventLoopGroupProvider; + private boolean sharedEventExecutor; + private boolean sharedTimer; + private boolean sharedCommandLatencyCollector; + + private int ioThreadPoolSize = DEFAULT_IO_THREADS; + private int computationThreadPoolSize = DEFAULT_COMPUTATION_THREADS; + private EventExecutorGroup eventExecutorGroup; + private EventLoopGroupProvider eventLoopGroupProvider; + private Timer timer; + private EventBus eventBus; + private CommandLatencyCollectorOptions commandLatencyCollectorOptions = DefaultCommandLatencyCollectorOptions.create(); + private CommandLatencyCollector commandLatencyCollector; + private EventPublisherOptions commandLatencyPublisherOptions = DefaultEventPublisherOptions.create(); + private DnsResolver dnsResolver = DnsResolvers.UNRESOLVED; + private SocketAddressResolver socketAddressResolver; + private Supplier reconnectDelay = DEFAULT_RECONNECT_DELAY; + private NettyCustomizer nettyCustomizer = DEFAULT_NETTY_CUSTOMIZER; + private Tracing tracing = Tracing.disabled(); + + private Builder() { + } + + /** + * Sets the thread pool size (number of threads to use) for I/O operations (default value is the number of CPUs). The + * thread pool size is only effective if no {@code eventLoopGroupProvider} is provided. + * + * @param ioThreadPoolSize the thread pool size, must be greater {@code 0}. + * @return {@code this} {@link Builder}. + */ + @Override + public Builder ioThreadPoolSize(int ioThreadPoolSize) { + + LettuceAssert.isTrue(ioThreadPoolSize > 0, "I/O thread pool size must be greater zero"); + + this.ioThreadPoolSize = ioThreadPoolSize; + return this; + } + + /** + * Sets a shared {@link EventLoopGroupProvider event executor provider} that can be used across different instances of + * {@link io.lettuce.core.RedisClient} and {@link io.lettuce.core.cluster.RedisClusterClient}. The provided + * {@link EventLoopGroupProvider} instance will not be shut down when shutting down the client resources. You have to + * take care of that. This is an advanced configuration that should only be used if you know what you are doing. + * + * @param eventLoopGroupProvider the shared eventLoopGroupProvider, must not be {@literal null}. + * @return {@code this} {@link Builder}. + */ + @Override + public Builder eventLoopGroupProvider(EventLoopGroupProvider eventLoopGroupProvider) { + + LettuceAssert.notNull(eventLoopGroupProvider, "EventLoopGroupProvider must not be null"); + + this.sharedEventLoopGroupProvider = true; + this.eventLoopGroupProvider = eventLoopGroupProvider; + return this; + } + + /** + * Sets the thread pool size (number of threads to use) for computation operations (default value is the number of + * CPUs). The thread pool size is only effective if no {@code eventExecutorGroup} is provided. + * + * @param computationThreadPoolSize the thread pool size, must be greater {@code 0}. + * @return {@code this} {@link Builder}. + */ + @Override + public Builder computationThreadPoolSize(int computationThreadPoolSize) { + + LettuceAssert.isTrue(computationThreadPoolSize > 0, "Computation thread pool size must be greater zero"); + + this.computationThreadPoolSize = computationThreadPoolSize; + return this; + } + + /** + * Sets a shared {@link EventExecutorGroup event executor group} that can be used across different instances of + * {@link io.lettuce.core.RedisClient} and {@link io.lettuce.core.cluster.RedisClusterClient}. 
The provided + * {@link EventExecutorGroup} instance will not be shut down when shutting down the client resources. You have to take + * care of that. This is an advanced configuration that should only be used if you know what you are doing. + * + * @param eventExecutorGroup the shared eventExecutorGroup, must not be {@literal null}. + * @return {@code this} {@link Builder}. + */ + @Override + public Builder eventExecutorGroup(EventExecutorGroup eventExecutorGroup) { + + LettuceAssert.notNull(eventExecutorGroup, "EventExecutorGroup must not be null"); + + this.sharedEventExecutor = true; + this.eventExecutorGroup = eventExecutorGroup; + return this; + } + + /** + * Sets a shared {@link Timer} that can be used across different instances of {@link io.lettuce.core.RedisClient} and + * {@link io.lettuce.core.cluster.RedisClusterClient} The provided {@link Timer} instance will not be shut down when + * shutting down the client resources. You have to take care of that. This is an advanced configuration that should only + * be used if you know what you are doing. + * + * @param timer the shared {@link Timer}, must not be {@literal null}. + * @return {@code this} {@link Builder}. + * @since 4.3 + */ + @Override + public Builder timer(Timer timer) { + + LettuceAssert.notNull(timer, "Timer must not be null"); + + this.sharedTimer = true; + this.timer = timer; + return this; + } + + /** + * Sets the {@link EventBus} that can that can be used across different instances of the RedisClient. + * + * @param eventBus the event bus, must not be {@literal null}. + * @return {@code this} {@link Builder}. + */ + @Override + public Builder eventBus(EventBus eventBus) { + + LettuceAssert.notNull(eventBus, "EventBus must not be null"); + + this.eventBus = eventBus; + return this; + } + + /** + * Sets the {@link EventPublisherOptions} to publish command latency metrics using the {@link EventBus}. + * + * @param commandLatencyPublisherOptions the {@link EventPublisherOptions} to publish command latency metrics using the + * {@link EventBus}, must not be {@literal null}. + * @return {@code this} {@link Builder}. + */ + @Override + public Builder commandLatencyPublisherOptions(EventPublisherOptions commandLatencyPublisherOptions) { + + LettuceAssert.notNull(commandLatencyPublisherOptions, "EventPublisherOptions must not be null"); + + this.commandLatencyPublisherOptions = commandLatencyPublisherOptions; + return this; + } + + /** + * Sets the {@link CommandLatencyCollectorOptions} that can that can be used across different instances of the + * RedisClient. The options are only effective if no {@code commandLatencyCollector} is provided. + * + * @param commandLatencyCollectorOptions the command latency collector options, must not be {@literal null}. + * @return {@code this} {@link Builder}. + */ + @Override + public Builder commandLatencyCollectorOptions(CommandLatencyCollectorOptions commandLatencyCollectorOptions) { + + LettuceAssert.notNull(commandLatencyCollectorOptions, "CommandLatencyCollectorOptions must not be null"); + + this.commandLatencyCollectorOptions = commandLatencyCollectorOptions; + return this; + } + + /** + * Sets the {@link CommandLatencyCollector} that can that can be used across different instances of the RedisClient. + * + * @param commandLatencyCollector the command latency collector, must not be {@literal null}. + * @return {@code this} {@link Builder}. 
+ */ + @Override + public Builder commandLatencyCollector(CommandLatencyCollector commandLatencyCollector) { + + LettuceAssert.notNull(commandLatencyCollector, "CommandLatencyCollector must not be null"); + + this.sharedCommandLatencyCollector = true; + this.commandLatencyCollector = commandLatencyCollector; + return this; + } + + /** + * Sets the {@link SocketAddressResolver} that is used to resolve {@link io.lettuce.core.RedisURI} to + * {@link java.net.SocketAddress}. Defaults to {@link SocketAddressResolver} using the configured {@link DnsResolver}. + * + * @param socketAddressResolver the socket address resolver, must not be {@literal null}. + * @return {@code this} {@link ClientResources.Builder}. + * @since 5.1 + */ + @Override + public ClientResources.Builder socketAddressResolver(SocketAddressResolver socketAddressResolver) { + + LettuceAssert.notNull(socketAddressResolver, "SocketAddressResolver must not be null"); + + this.socketAddressResolver = socketAddressResolver; + return this; + } + + /** + * Sets the {@link DnsResolver} that is used to resolve hostnames to {@link java.net.InetAddress}. Defaults to + * {@link DnsResolvers#JVM_DEFAULT} + * + * @param dnsResolver the DNS resolver, must not be {@literal null}. + * @return {@code this} {@link Builder}. + * @since 4.3 + */ + @Override + public Builder dnsResolver(DnsResolver dnsResolver) { + + LettuceAssert.notNull(dnsResolver, "DnsResolver must not be null"); + + this.dnsResolver = dnsResolver; + return this; + } + + /** + * Sets the stateless reconnect {@link Delay} to delay reconnect attempts. Defaults to binary exponential delay capped + * at {@literal 30 SECONDS}. {@code reconnectDelay} must be a stateless {@link Delay}. + * + * @param reconnectDelay the reconnect delay, must not be {@literal null}. + * @return this + * @since 4.3 + */ + @Override + public Builder reconnectDelay(Delay reconnectDelay) { + + LettuceAssert.notNull(reconnectDelay, "Delay must not be null"); + LettuceAssert.isTrue(!(reconnectDelay instanceof StatefulDelay), "Delay must be a stateless instance."); + + return reconnectDelay(() -> reconnectDelay); + } + + /** + * Sets the stateful reconnect {@link Supplier} to delay reconnect attempts. Defaults to binary exponential delay capped + * at {@literal 30 SECONDS}. + * + * @param reconnectDelay the reconnect delay, must not be {@literal null}. + * @return this + * @since 4.3 + */ + @Override + public Builder reconnectDelay(Supplier reconnectDelay) { + + LettuceAssert.notNull(reconnectDelay, "Delay must not be null"); + + this.reconnectDelay = reconnectDelay; + return this; + } + + /** + * Sets the {@link NettyCustomizer} instance to customize netty components during connection. + * + * @param nettyCustomizer the netty customizer instance, must not be {@literal null}. + * @return this + * @since 4.4 + */ + @Override + public Builder nettyCustomizer(NettyCustomizer nettyCustomizer) { + + LettuceAssert.notNull(nettyCustomizer, "NettyCustomizer must not be null"); + + this.nettyCustomizer = nettyCustomizer; + return this; + } + + /** + * Sets the {@link Tracing} instance to trace Redis calls. + * + * @param tracing the tracer infrastructure instance, must not be {@literal null}. + * @return this + * @since 5.1 + */ + @Override + public Builder tracing(Tracing tracing) { + + LettuceAssert.notNull(tracing, "Tracing must not be null"); + + this.tracing = tracing; + return this; + } + + /** + * + * @return a new instance of {@link DefaultClientResources}. 
+ */ + @Override + public DefaultClientResources build() { + return new DefaultClientResources(this); + } + } + + /** + * Returns a builder to create new {@link DefaultClientResources} whose settings are replicated from the current + * {@link DefaultClientResources}. + *

+ * <p>
+ * Note: The resulting {@link DefaultClientResources} retains shared state for {@link Timer},
+ * {@link CommandLatencyCollector}, {@link EventExecutorGroup}, and {@link EventLoopGroupProvider} if these are left
+ * unchanged. Thus you need only to shut down the last created {@link ClientResources} instance. Shutdown affects any
+ * previously created {@link ClientResources}.
+ * <p>
    + * + * @return a {@link DefaultClientResources.Builder} to create new {@link DefaultClientResources} whose settings are + * replicated from the current {@link DefaultClientResources}. + * + * @since 5.1 + */ + @Override + public DefaultClientResources.Builder mutate() { + + Builder builder = new Builder(); + + builder.eventExecutorGroup(eventExecutorGroup()).timer(timer()).eventBus(eventBus()) + .commandLatencyCollector(commandLatencyCollector()) + .commandLatencyPublisherOptions(commandLatencyPublisherOptions()).dnsResolver(dnsResolver()) + .socketAddressResolver(socketAddressResolver()).reconnectDelay(reconnectDelay) + .nettyCustomizer(nettyCustomizer()).tracing(tracing()); + + builder.sharedCommandLatencyCollector = sharedEventLoopGroupProvider; + builder.sharedEventExecutor = sharedEventExecutor; + builder.sharedEventLoopGroupProvider = sharedEventLoopGroupProvider; + builder.sharedTimer = sharedTimer; + + return builder; + } + + @Override + protected void finalize() throws Throwable { + if (!shutdownCalled) { + logger.warn(getClass().getName() + + " was not shut down properly, shutdown() was not called before it's garbage-collected. Call shutdown() or shutdown(long,long,TimeUnit) "); + } + super.finalize(); + } + + /** + * Shutdown the {@link ClientResources}. + * + * @return eventually the success/failure of the shutdown without errors. + */ + @Override + public Future shutdown() { + return shutdown(0, 2, TimeUnit.SECONDS); + } + + /** + * Shutdown the {@link ClientResources}. + * + * @param quietPeriod the quiet period as described in the documentation + * @param timeout the maximum amount of time to wait until the executor is shutdown regardless if a task was submitted + * during the quiet period + * @param timeUnit the unit of {@code quietPeriod} and {@code timeout} + * @return eventually the success/failure of the shutdown without errors. 
+ */ + @SuppressWarnings("unchecked") + public Future shutdown(long quietPeriod, long timeout, TimeUnit timeUnit) { + + logger.debug("Initiate shutdown ({}, {}, {})", quietPeriod, timeout, timeUnit); + + shutdownCalled = true; + DefaultPromise voidPromise = new DefaultPromise<>(ImmediateEventExecutor.INSTANCE); + PromiseCombiner aggregator = new PromiseCombiner(ImmediateEventExecutor.INSTANCE); + + if (metricEventPublisher != null) { + metricEventPublisher.shutdown(); + } + + if (!sharedTimer) { + timer.stop(); + } + + if (!sharedEventLoopGroupProvider) { + Future shutdown = eventLoopGroupProvider.shutdown(quietPeriod, timeout, timeUnit); + aggregator.add(shutdown); + } + + if (!sharedEventExecutor) { + Future shutdown = eventExecutorGroup.shutdownGracefully(quietPeriod, timeout, timeUnit); + aggregator.add(shutdown); + } + + if (!sharedCommandLatencyCollector) { + commandLatencyCollector.shutdown(); + } + + aggregator.finish(voidPromise); + + return PromiseAdapter.toBooleanPromise(voidPromise); + } + + @Override + public EventLoopGroupProvider eventLoopGroupProvider() { + return eventLoopGroupProvider; + } + + @Override + public EventExecutorGroup eventExecutorGroup() { + return eventExecutorGroup; + } + + @Override + public int ioThreadPoolSize() { + return eventLoopGroupProvider.threadPoolSize(); + } + + @Override + public int computationThreadPoolSize() { + return LettuceLists.newList(eventExecutorGroup.iterator()).size(); + } + + @Override + public EventBus eventBus() { + return eventBus; + } + + @Override + public Timer timer() { + return timer; + } + + @Override + public CommandLatencyCollector commandLatencyCollector() { + return commandLatencyCollector; + } + + @Override + public EventPublisherOptions commandLatencyPublisherOptions() { + return commandLatencyPublisherOptions; + } + + @Override + public DnsResolver dnsResolver() { + return dnsResolver; + } + + @Override + public SocketAddressResolver socketAddressResolver() { + return socketAddressResolver; + } + + @Override + public Delay reconnectDelay() { + return reconnectDelay.get(); + } + + @Override + public NettyCustomizer nettyCustomizer() { + return nettyCustomizer; + } + + @Override + public Tracing tracing() { + return tracing; + } +} diff --git a/src/main/java/io/lettuce/core/resource/DefaultEventLoopGroupProvider.java b/src/main/java/io/lettuce/core/resource/DefaultEventLoopGroupProvider.java new file mode 100644 index 0000000000..3fc8e3e97d --- /dev/null +++ b/src/main/java/io/lettuce/core/resource/DefaultEventLoopGroupProvider.java @@ -0,0 +1,312 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
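As an illustrative usage sketch of the `DefaultClientResources` builder and shutdown lifecycle shown above (the Redis URI and thread-pool sizes are placeholder values, not part of the patch):

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisURI;
import io.lettuce.core.resource.ClientResources;
import io.lettuce.core.resource.DefaultClientResources;
import io.lettuce.core.resource.Delay;

public class ClientResourcesUsage {

    public static void main(String[] args) {

        // Build the resources once and share them across client instances.
        ClientResources resources = DefaultClientResources.builder()
                .ioThreadPoolSize(4)                 // netty I/O event loop threads
                .computationThreadPoolSize(4)        // event executor threads for computation tasks
                .reconnectDelay(Delay.exponential()) // stateless delay, safe to share
                .build();

        RedisClient client = RedisClient.create(resources, RedisURI.create("redis://localhost"));

        // ... use the client ...

        client.shutdown();     // shut down clients first
        resources.shutdown();  // then the shared resources (0s quiet period, 2s timeout by default)
    }
}
```

An existing instance can also be rebuilt via `mutate()`, which pre-populates a builder with the current settings and keeps the shared-state semantics described in its Javadoc.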
+ */ +package io.lettuce.core.resource; + +import static io.lettuce.core.resource.PromiseAdapter.toBooleanPromise; + +import java.util.HashMap; +import java.util.Map; +import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.ExecutorService; +import java.util.concurrent.ThreadFactory; +import java.util.concurrent.TimeUnit; + +import io.lettuce.core.internal.LettuceAssert; +import io.netty.channel.EventLoopGroup; +import io.netty.channel.nio.NioEventLoopGroup; +import io.netty.util.concurrent.*; +import io.netty.util.internal.logging.InternalLogger; +import io.netty.util.internal.logging.InternalLoggerFactory; + +/** + * Default implementation which manages one event loop group instance per type. + * + * @author Mark Paluch + * @since 3.4 + */ +public class DefaultEventLoopGroupProvider implements EventLoopGroupProvider { + + protected static final InternalLogger logger = InternalLoggerFactory.getInstance(DefaultEventLoopGroupProvider.class); + + private final Map, EventExecutorGroup> eventLoopGroups = new ConcurrentHashMap<>(2); + private final Map refCounter = new ConcurrentHashMap<>(2); + + private final int numberOfThreads; + private final ThreadFactoryProvider threadFactoryProvider; + + private volatile boolean shutdownCalled = false; + + /** + * Creates a new instance of {@link DefaultEventLoopGroupProvider}. + * + * @param numberOfThreads number of threads (pool size) + */ + public DefaultEventLoopGroupProvider(int numberOfThreads) { + this(numberOfThreads, DefaultThreadFactoryProvider.INSTANCE); + } + + /** + * Creates a new instance of {@link DefaultEventLoopGroupProvider}. + * + * @param numberOfThreads number of threads (pool size) + * @param threadFactoryProvider provides access to {@link ThreadFactory}. + * @since 6.0 + */ + public DefaultEventLoopGroupProvider(int numberOfThreads, ThreadFactoryProvider threadFactoryProvider) { + + LettuceAssert.isTrue(numberOfThreads > 0, "Number of threads must be greater than zero"); + LettuceAssert.notNull(threadFactoryProvider, "ThreadFactoryProvider must not be null"); + + this.numberOfThreads = numberOfThreads; + this.threadFactoryProvider = threadFactoryProvider; + } + + @Override + public T allocate(Class type) { + + synchronized (this) { + logger.debug("Allocating executor {}", type.getName()); + return addReference(getOrCreate(type)); + } + } + + private T addReference(T reference) { + + synchronized (refCounter) { + long counter = 0; + if (refCounter.containsKey(reference)) { + counter = refCounter.get(reference); + } + + logger.debug("Adding reference to {}, existing ref count {}", reference, counter); + counter++; + refCounter.put(reference, counter); + } + + return reference; + } + + private T release(T reference) { + + synchronized (refCounter) { + long counter = 0; + if (refCounter.containsKey(reference)) { + counter = refCounter.get(reference); + } + + if (counter < 1) { + logger.debug("Attempting to release {} but ref count is {}", reference, counter); + } + + counter--; + if (counter == 0) { + refCounter.remove(reference); + } else { + refCounter.put(reference, counter); + } + } + + return reference; + } + + @SuppressWarnings("unchecked") + private T getOrCreate(Class type) { + + if (shutdownCalled) { + throw new IllegalStateException("Provider is shut down and can not longer provide resources"); + } + + if (!eventLoopGroups.containsKey(type)) { + eventLoopGroups.put(type, doCreateEventLoopGroup(type, numberOfThreads, threadFactoryProvider)); + } + + return (T) eventLoopGroups.get(type); + } + + /** + * 
Customization hook for {@link EventLoopGroup} creation. + * + * @param + * @param type requested event loop group type. + * @param numberOfThreads number of threads to create. + * @param threadFactoryProvider provider for {@link ThreadFactory}. + * @return + * @since 6.0 + */ + protected EventExecutorGroup doCreateEventLoopGroup(Class type, int numberOfThreads, + ThreadFactoryProvider threadFactoryProvider) { + return createEventLoopGroup(type, numberOfThreads, threadFactoryProvider); + } + + /** + * Create an instance of a {@link EventExecutorGroup} using the default {@link ThreadFactoryProvider}. Supported types are: + *
+ * <ul>
+ * <li>DefaultEventExecutorGroup</li>
+ * <li>NioEventLoopGroup</li>
+ * <li>EpollEventLoopGroup</li>
+ * <li>KqueueEventLoopGroup</li>
+ * </ul>
    + * + * @param type the type + * @param numberOfThreads the number of threads to use for the {@link EventExecutorGroup} + * @param type parameter + * @return a new instance of a {@link EventExecutorGroup} + * @throws IllegalArgumentException if the {@code type} is not supported. + */ + public static EventExecutorGroup createEventLoopGroup(Class type, int numberOfThreads) { + return createEventLoopGroup(type, numberOfThreads, DefaultThreadFactoryProvider.INSTANCE); + } + + /** + * Create an instance of a {@link EventExecutorGroup}. Supported types are: + *
+ * <ul>
+ * <li>DefaultEventExecutorGroup</li>
+ * <li>NioEventLoopGroup</li>
+ * <li>EpollEventLoopGroup</li>
+ * <li>KqueueEventLoopGroup</li>
+ * </ul>
    + * + * @param type the type + * @param numberOfThreads the number of threads to use for the {@link EventExecutorGroup} + * @param type parameter + * @return a new instance of a {@link EventExecutorGroup} + * @throws IllegalArgumentException if the {@code type} is not supported. + * @since 5.3 + */ + private static EventExecutorGroup createEventLoopGroup(Class type, int numberOfThreads, + ThreadFactoryProvider factoryProvider) { + + logger.debug("Creating executor {}", type.getName()); + + if (DefaultEventExecutorGroup.class.equals(type)) { + return new DefaultEventExecutorGroup(numberOfThreads, + factoryProvider.getThreadFactory("lettuce-eventExecutorLoop")); + } + + if (NioEventLoopGroup.class.equals(type)) { + return new NioEventLoopGroup(numberOfThreads, factoryProvider.getThreadFactory("lettuce-nioEventLoop")); + } + + if (EpollProvider.isAvailable()) { + + EventLoopResources resources = EpollProvider.getResources(); + + if (resources.matches(type)) { + return resources.newEventLoopGroup(numberOfThreads, factoryProvider.getThreadFactory("lettuce-epollEventLoop")); + } + } + + if (KqueueProvider.isAvailable()) { + + EventLoopResources resources = KqueueProvider.getResources(); + + if (resources.matches(type)) { + return resources.newEventLoopGroup(numberOfThreads, + factoryProvider.getThreadFactory("lettuce-kqueueEventLoop")); + } + } + + throw new IllegalArgumentException(String.format("Type %s not supported", type.getName())); + } + + @Override + public Promise release(EventExecutorGroup eventLoopGroup, long quietPeriod, long timeout, TimeUnit unit) { + return toBooleanPromise(doRelease(eventLoopGroup, quietPeriod, timeout, unit)); + } + + private Future doRelease(EventExecutorGroup eventLoopGroup, long quietPeriod, long timeout, TimeUnit unit) { + + logger.debug("Release executor {}", eventLoopGroup); + + Class key = getKey(release(eventLoopGroup)); + + if ((key == null && eventLoopGroup.isShuttingDown()) || refCounter.containsKey(eventLoopGroup)) { + return new SucceededFuture<>(ImmediateEventExecutor.INSTANCE, true); + } + + if (key != null) { + eventLoopGroups.remove(key); + } + + return eventLoopGroup.shutdownGracefully(quietPeriod, timeout, unit); + } + + private Class getKey(EventExecutorGroup eventLoopGroup) { + Class key = null; + + Map, EventExecutorGroup> copy = new HashMap<>(eventLoopGroups); + for (Map.Entry, EventExecutorGroup> entry : copy.entrySet()) { + if (entry.getValue() == eventLoopGroup) { + key = entry.getKey(); + break; + } + } + return key; + } + + @Override + public int threadPoolSize() { + return numberOfThreads; + } + + @Override + public Future shutdown(long quietPeriod, long timeout, TimeUnit timeUnit) { + + logger.debug("Initiate shutdown ({}, {}, {})", quietPeriod, timeout, timeUnit); + + shutdownCalled = true; + + Map, EventExecutorGroup> copy = new HashMap<>(eventLoopGroups); + + DefaultPromise overall = new DefaultPromise<>(ImmediateEventExecutor.INSTANCE); + PromiseCombiner combiner = new PromiseCombiner(ImmediateEventExecutor.INSTANCE); + + for (EventExecutorGroup executorGroup : copy.values()) { + combiner.add(doRelease(executorGroup, quietPeriod, timeout, timeUnit)); + } + + combiner.finish(overall); + + return PromiseAdapter.toBooleanPromise(overall); + } + + /** + * Interface to provide a custom {@link java.util.concurrent.ThreadFactory}. Implementations are asked through + * {@link #getThreadFactory(String)} to provide a thread factory for a given pool name. 
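A minimal sketch of the reference counting that `DefaultEventLoopGroupProvider` implements above; the values are illustrative, and most applications would instead pass the provider to `ClientResources.Builder#eventLoopGroupProvider(...)` rather than calling `allocate`/`release` directly:

```java
import java.util.concurrent.TimeUnit;

import io.lettuce.core.resource.DefaultEventLoopGroupProvider;
import io.netty.channel.nio.NioEventLoopGroup;

public class EventLoopGroupProviderUsage {

    public static void main(String[] args) {

        // The provider keeps one event loop group instance per transport type.
        DefaultEventLoopGroupProvider provider = new DefaultEventLoopGroupProvider(4);

        NioEventLoopGroup first = provider.allocate(NioEventLoopGroup.class);
        NioEventLoopGroup second = provider.allocate(NioEventLoopGroup.class);
        // first and second are the same instance; only the reference count increased.

        // Releases are reference-counted too; the group shuts down when the last reference is released.
        provider.release(first, 0, 2, TimeUnit.SECONDS);
        provider.release(second, 0, 2, TimeUnit.SECONDS);

        provider.shutdown(0, 2, TimeUnit.SECONDS);
    }
}
```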
+ * + * @since 6.0 + */ + public interface ThreadFactoryProvider { + + /** + * Return a {@link ThreadFactory} for the given {@code poolName}. + * + * @param poolName a descriptive pool name. Typically used as prefix for thread names. + * @return the {@link ThreadFactory}. + */ + ThreadFactory getThreadFactory(String poolName); + } + + enum DefaultThreadFactoryProvider implements ThreadFactoryProvider { + + INSTANCE; + + @Override + public ThreadFactory getThreadFactory(String poolName) { + return new DefaultThreadFactory(poolName, true); + } + } +} diff --git a/src/main/java/io/lettuce/core/resource/DefaultNettyCustomizer.java b/src/main/java/io/lettuce/core/resource/DefaultNettyCustomizer.java new file mode 100644 index 0000000000..37ab6b7b31 --- /dev/null +++ b/src/main/java/io/lettuce/core/resource/DefaultNettyCustomizer.java @@ -0,0 +1,40 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.resource; + +import io.netty.bootstrap.Bootstrap; +import io.netty.channel.Channel; + +/** + * Default (empty) {@link NettyCustomizer} implementation. + * + * @author Mark Paluch + * @since 4.4 + */ +enum DefaultNettyCustomizer implements NettyCustomizer { + + INSTANCE; + + @Override + public void afterBootstrapInitialized(Bootstrap bootstrap) { + // no-op + } + + @Override + public void afterChannelInitialized(Channel channel) { + // no-op + } +} diff --git a/src/main/java/io/lettuce/core/resource/Delay.java b/src/main/java/io/lettuce/core/resource/Delay.java new file mode 100644 index 0000000000..b1fb14ec8a --- /dev/null +++ b/src/main/java/io/lettuce/core/resource/Delay.java @@ -0,0 +1,342 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.resource; + +import java.time.Duration; +import java.util.concurrent.ThreadLocalRandom; +import java.util.concurrent.TimeUnit; +import java.util.function.Supplier; + +import io.lettuce.core.internal.LettuceAssert; + +/** + * Base class for delays and factory class to create particular instances. {@link Delay} can be subclassed to create custom + * delay implementations based on attempts. Attempts start with {@code 1}. + *
+ * <p>
    + * Delays are usually stateless instances that can be shared amongst multiple users (such as connections). Stateful + * {@link Delay} implementations must implement {@link StatefulDelay} to reset their internal state after the delay is not + * required anymore. + * + * @author Mark Paluch + * @author Jongyeol Choi + * @since 4.2 + * @see StatefulDelay + */ +public abstract class Delay { + + private static Duration DEFAULT_LOWER_BOUND = Duration.ZERO; + private static Duration DEFAULT_UPPER_BOUND = Duration.ofSeconds(30); + private static int DEFAULT_POWER_OF = 2; + private static TimeUnit DEFAULT_TIMEUNIT = TimeUnit.MILLISECONDS; + + /** + * Interface to be implemented by stateful {@link Delay}s. Stateful delays can get reset once a condition (such as + * successful reconnect) is met. Stateful delays should not be shared by multiple connections but each connection should use + * its own instance. + * + * @see Supplier + * @see io.lettuce.core.resource.DefaultClientResources.Builder#reconnectDelay(Supplier) + */ + public interface StatefulDelay { + + /** + * Reset this delay state. Resetting prepares a stateful delay for its next usage. + */ + void reset(); + } + + /** + * Creates a new {@link Delay}. + */ + protected Delay() { + } + + /** + * Calculate a specific delay based on the attempt. + * + * This method is to be implemented by the implementations and depending on the params that were set during construction + * time. + * + * @param attempt the attempt to calculate the delay from. + * @return the calculated delay. + */ + public abstract Duration createDelay(long attempt); + + /** + * Creates a new {@link ConstantDelay}. + * + * @param delay the delay, must be greater or equal to {@literal 0}. + * @return a created {@link ConstantDelay}. + */ + public static Delay constant(Duration delay) { + + LettuceAssert.notNull(delay, "Delay must not be null"); + LettuceAssert.isTrue(delay.toNanos() >= 0, "Delay must be greater or equal to 0"); + + return new ConstantDelay(delay); + } + + /** + * Creates a new {@link ConstantDelay}. + * + * @param delay the delay, must be greater or equal to 0 + * @param timeUnit the unit of the delay. + * @return a created {@link ConstantDelay}. + * @deprecated since 5.0, use {@link #constant(Duration)} + */ + @Deprecated + public static Delay constant(int delay, TimeUnit timeUnit) { + + LettuceAssert.notNull(timeUnit, "TimeUnit must not be null"); + + return constant(Duration.ofNanos(timeUnit.toNanos(delay))); + } + + /** + * Creates a new {@link ExponentialDelay} with default boundaries and factor (1, 2, 4, 8, 16, 32...). The delay begins with + * 1 and is capped at 30 {@link TimeUnit#SECONDS} after reaching the 16th attempt. + * + * @return a created {@link ExponentialDelay}. + */ + public static Delay exponential() { + return exponential(DEFAULT_LOWER_BOUND, DEFAULT_UPPER_BOUND, DEFAULT_POWER_OF, DEFAULT_TIMEUNIT); + } + + /** + * Creates a new {@link ExponentialDelay} on with custom boundaries and factor (eg. with upper 9000, lower 0, powerOf 10: 1, + * 10, 100, 1000, 9000, 9000, 9000, ...). + * + * @param lower the lower boundary, must be non-negative + * @param upper the upper boundary, must be greater than the lower boundary + * @param powersOf the base for exponential growth (eg. powers of 2, powers of 10, etc...), must be non-negative and greater + * than 1 + * @param targetTimeUnit the unit of the delay. + * @return a created {@link ExponentialDelay}. 
+ * @since 5.0 + */ + public static Delay exponential(Duration lower, Duration upper, int powersOf, TimeUnit targetTimeUnit) { + + LettuceAssert.notNull(lower, "Lower boundary must not be null"); + LettuceAssert.isTrue(lower.toNanos() >= 0, "Lower boundary must be greater or equal to 0"); + LettuceAssert.notNull(upper, "Upper boundary must not be null"); + LettuceAssert.isTrue(upper.toNanos() > lower.toNanos(), "Upper boundary must be greater than the lower boundary"); + LettuceAssert.isTrue(powersOf > 1, "PowersOf must be greater than 1"); + LettuceAssert.notNull(targetTimeUnit, "Target TimeUnit must not be null"); + + return new ExponentialDelay(lower, upper, powersOf, targetTimeUnit); + } + + /** + * Creates a new {@link ExponentialDelay} on with custom boundaries and factor (eg. with upper 9000, lower 0, powerOf 10: 1, + * 10, 100, 1000, 9000, 9000, 9000, ...). + * + * @param lower the lower boundary, must be non-negative + * @param upper the upper boundary, must be greater than the lower boundary + * @param unit the unit of the delay. + * @param powersOf the base for exponential growth (eg. powers of 2, powers of 10, etc...), must be non-negative and greater + * than 1 + * @return a created {@link ExponentialDelay}. + */ + public static Delay exponential(long lower, long upper, TimeUnit unit, int powersOf) { + + LettuceAssert.isTrue(lower >= 0, "Lower boundary must be greater or equal to 0"); + LettuceAssert.isTrue(upper > lower, "Upper boundary must be greater than the lower boundary"); + LettuceAssert.isTrue(powersOf > 1, "PowersOf must be greater than 1"); + LettuceAssert.notNull(unit, "TimeUnit must not be null"); + + return exponential(Duration.ofNanos(unit.toNanos(lower)), Duration.ofNanos(unit.toNanos(upper)), powersOf, unit); + } + + /** + * Creates a new {@link EqualJitterDelay} with default boundaries. + * + * @return a created {@link EqualJitterDelay}. + */ + public static Delay equalJitter() { + return equalJitter(DEFAULT_LOWER_BOUND, DEFAULT_UPPER_BOUND, 1L, DEFAULT_TIMEUNIT); + } + + /** + * Creates a new {@link EqualJitterDelay}. + * + * @param lower the lower boundary, must be non-negative + * @param upper the upper boundary, must be greater than the lower boundary + * @param base the base, must be greater or equal to 1 + * @param targetTimeUnit the unit of the delay. + * @return a created {@link EqualJitterDelay}. + * @since 5.0 + */ + public static Delay equalJitter(Duration lower, Duration upper, long base, TimeUnit targetTimeUnit) { + + LettuceAssert.notNull(lower, "Lower boundary must not be null"); + LettuceAssert.isTrue(lower.toNanos() >= 0, "Lower boundary must be greater or equal to 0"); + LettuceAssert.notNull(upper, "Upper boundary must not be null"); + LettuceAssert.isTrue(upper.toNanos() > lower.toNanos(), "Upper boundary must be greater than the lower boundary"); + LettuceAssert.isTrue(base >= 1, "Base must be greater or equal to 1"); + LettuceAssert.notNull(targetTimeUnit, "Target TimeUnit must not be null"); + + return new EqualJitterDelay(lower, upper, base, targetTimeUnit); + } + + /** + * Creates a new {@link EqualJitterDelay}. + * + * @param lower the lower boundary, must be non-negative + * @param upper the upper boundary, must be greater than the lower boundary + * @param base the base, must be greater or equal to 1 + * @param unit the unit of the delay. + * @return a created {@link EqualJitterDelay}. 
+ */ + public static Delay equalJitter(long lower, long upper, long base, TimeUnit unit) { + + LettuceAssert.isTrue(lower >= 0, "Lower boundary must be greater or equal to 0"); + LettuceAssert.isTrue(upper > lower, "Upper boundary must be greater than the lower boundary"); + LettuceAssert.isTrue(base >= 1, "Base must be greater or equal to 1"); + LettuceAssert.notNull(unit, "TimeUnit must not be null"); + + return equalJitter(Duration.ofNanos(unit.toNanos(lower)), Duration.ofNanos(unit.toNanos(upper)), base, unit); + } + + /** + * Creates a new {@link FullJitterDelay} with default boundaries. + * + * @return a created {@link FullJitterDelay}. + */ + public static Delay fullJitter() { + return fullJitter(DEFAULT_LOWER_BOUND, DEFAULT_UPPER_BOUND, 1L, DEFAULT_TIMEUNIT); + } + + /** + * Creates a new {@link FullJitterDelay}. + * + * @param lower the lower boundary, must be non-negative + * @param upper the upper boundary, must be greater than the lower boundary + * @param base the base, must be greater or equal to 1 + * @param targetTimeUnit the unit of the delay. + * @return a created {@link FullJitterDelay}. + * @since 5.0 + */ + public static Delay fullJitter(Duration lower, Duration upper, long base, TimeUnit targetTimeUnit) { + + LettuceAssert.notNull(lower, "Lower boundary must not be null"); + LettuceAssert.isTrue(lower.toNanos() >= 0, "Lower boundary must be greater or equal to 0"); + LettuceAssert.notNull(upper, "Upper boundary must not be null"); + LettuceAssert.isTrue(upper.toNanos() > lower.toNanos(), "Upper boundary must be greater than the lower boundary"); + LettuceAssert.isTrue(base >= 1, "Base must be greater or equal to 1"); + LettuceAssert.notNull(targetTimeUnit, "Target TimeUnit must not be null"); + + return new FullJitterDelay(lower, upper, base, targetTimeUnit); + } + + /** + * Creates a new {@link FullJitterDelay}. + * + * @param lower the lower boundary, must be non-negative + * @param upper the upper boundary, must be greater than the lower boundary + * @param base the base, must be greater or equal to 1 + * @param unit the unit of the delay. + * @return a created {@link FullJitterDelay}. + */ + public static Delay fullJitter(long lower, long upper, long base, TimeUnit unit) { + + LettuceAssert.isTrue(lower >= 0, "Lower boundary must be greater or equal to 0"); + LettuceAssert.isTrue(upper > lower, "Upper boundary must be greater than the lower boundary"); + LettuceAssert.isTrue(base >= 1, "Base must be greater or equal to 1"); + LettuceAssert.notNull(unit, "TimeUnit must not be null"); + + return fullJitter(Duration.ofNanos(unit.toNanos(lower)), Duration.ofNanos(unit.toNanos(upper)), base, unit); + } + + /** + * Creates a {@link Supplier} that constructs new {@link DecorrelatedJitterDelay} instances with default boundaries. + * + * @return a {@link Supplier} of {@link DecorrelatedJitterDelay}. + */ + public static Supplier decorrelatedJitter() { + return decorrelatedJitter(DEFAULT_LOWER_BOUND, DEFAULT_UPPER_BOUND, 0L, DEFAULT_TIMEUNIT); + } + + /** + * Creates a {@link Supplier} that constructs new {@link DecorrelatedJitterDelay} instances. + * + * @param lower the lower boundary, must be non-negative + * @param upper the upper boundary, must be greater than the lower boundary + * @param base the base, must be greater or equal to 0 + * @param targetTimeUnit the unit of the delay. + * @return a new {@link Supplier} of {@link DecorrelatedJitterDelay}. 
+ * @since 5.0 + */ + public static Supplier decorrelatedJitter(Duration lower, Duration upper, long base, TimeUnit targetTimeUnit) { + + LettuceAssert.notNull(lower, "Lower boundary must not be null"); + LettuceAssert.isTrue(lower.toNanos() >= 0, "Lower boundary must be greater or equal to 0"); + LettuceAssert.notNull(upper, "Upper boundary must not be null"); + LettuceAssert.isTrue(upper.toNanos() > lower.toNanos(), "Upper boundary must be greater than the lower boundary"); + LettuceAssert.isTrue(base >= 0, "Base must be greater or equal to 1"); + LettuceAssert.notNull(targetTimeUnit, "Target TimeUnit must not be null"); + + // Create new Delay because it has state. + return () -> new DecorrelatedJitterDelay(lower, upper, base, targetTimeUnit); + } + + /** + * Creates a {@link Supplier} that constructs new {@link DecorrelatedJitterDelay} instances. + * + * @param lower the lower boundary, must be non-negative + * @param upper the upper boundary, must be greater than the lower boundary + * @param base the base, must be greater or equal to 0 + * @param unit the unit of the delay. + * @return a new {@link Supplier} of {@link DecorrelatedJitterDelay}. + */ + public static Supplier decorrelatedJitter(long lower, long upper, long base, TimeUnit unit) { + + LettuceAssert.isTrue(lower >= 0, "Lower boundary must be greater or equal to 0"); + LettuceAssert.isTrue(upper > lower, "Upper boundary must be greater than the lower boundary"); + LettuceAssert.isTrue(base >= 0, "Base must be greater or equal to 0"); + LettuceAssert.notNull(unit, "TimeUnit must not be null"); + + // Create new Delay because it has state. + return decorrelatedJitter(Duration.ofNanos(unit.toNanos(lower)), Duration.ofNanos(unit.toNanos(upper)), base, unit); + } + + /** + * Generates a random long value within {@code min} and {@code max} boundaries. + * + * @param min + * @param max + * @return a random value + * @see ThreadLocalRandom#nextLong(long, long) + */ + protected static long randomBetween(long min, long max) { + if (min == max) { + return min; + } + return ThreadLocalRandom.current().nextLong(min, max); + } + + protected static Duration applyBounds(Duration calculatedValue, Duration lower, Duration upper) { + + if (calculatedValue.compareTo(lower) < 0) { + return lower; + } + + if (calculatedValue.compareTo(upper) > 0) { + return upper; + } + + return calculatedValue; + } +} diff --git a/src/main/java/io/lettuce/core/resource/DirContextDnsResolver.java b/src/main/java/io/lettuce/core/resource/DirContextDnsResolver.java new file mode 100644 index 0000000000..24c1a98a40 --- /dev/null +++ b/src/main/java/io/lettuce/core/resource/DirContextDnsResolver.java @@ -0,0 +1,482 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
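For orientation, a short sketch of how the `Delay` factory methods above combine with the client resources; the attempt numbers and durations are placeholders:

```java
import java.time.Duration;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

import io.lettuce.core.resource.ClientResources;
import io.lettuce.core.resource.DefaultClientResources;
import io.lettuce.core.resource.Delay;

public class DelayUsage {

    public static void main(String[] args) {

        // Stateless delays may be shared across connections; attempts start at 1.
        Delay exponential = Delay.exponential();                  // roughly doubles per attempt, capped at 30 seconds
        Duration thirdAttempt = exponential.createDelay(3);

        Delay constant = Delay.constant(Duration.ofMillis(100));  // same delay for every attempt
        System.out.println(thirdAttempt + " / " + constant.createDelay(1));

        // Stateful strategies such as decorrelated jitter are created per connection through a Supplier.
        Supplier<Delay> decorrelated = Delay.decorrelatedJitter();

        ClientResources resources = DefaultClientResources.builder()
                .reconnectDelay(decorrelated) // Supplier variant, required for stateful delays
                .build();
        resources.shutdown(0, 2, TimeUnit.SECONDS);
    }
}
```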
+ */ +package io.lettuce.core.resource; + +import java.io.Closeable; +import java.io.IOException; +import java.net.InetAddress; +import java.net.UnknownHostException; +import java.nio.ByteBuffer; +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; +import java.util.Properties; + +import javax.naming.Context; +import javax.naming.InitialContext; +import javax.naming.NamingEnumeration; +import javax.naming.NamingException; +import javax.naming.directory.Attribute; +import javax.naming.directory.Attributes; +import javax.naming.directory.InitialDirContext; + +import io.lettuce.core.LettuceStrings; +import io.lettuce.core.internal.LettuceAssert; + +/** + * DNS Resolver based on Java's {@code com.sun.jndi.dns.DnsContextFactory}. This resolver resolves hostnames to IPv4 and IPv6 + * addresses using {@code A}, {@code AAAA} and {@code CNAME} records. Java IP stack preferences are read from system properties + * and taken into account when resolving names. + *
+ * <p>
+ * The default configuration uses system-configured DNS server addresses to perform lookups. Custom DNS servers can be
+ * specified using {@link #DirContextDnsResolver(String)} or {@link #DirContextDnsResolver(Iterable)}.
+ * <p>
    + * + * @author Mark Paluch + * @since 4.2 + */ +public class DirContextDnsResolver implements DnsResolver, Closeable { + + static final String PREFER_IPV4_KEY = "java.net.preferIPv4Stack"; + static final String PREFER_IPV6_KEY = "java.net.preferIPv6Stack"; + + private static final int IPV4_PART_COUNT = 4; + private static final int IPV6_PART_COUNT = 8; + + private static final String CTX_FACTORY_NAME = "com.sun.jndi.dns.DnsContextFactory"; + private static final String INITIAL_TIMEOUT = "com.sun.jndi.dns.timeout.initial"; + private static final String LOOKUP_RETRIES = "com.sun.jndi.dns.timeout.retries"; + + private static final String DEFAULT_INITIAL_TIMEOUT = "1000"; + private static final String DEFAULT_RETRIES = "4"; + + private final boolean preferIpv4; + private final boolean preferIpv6; + private final Properties properties; + private final InitialDirContext context; + + /** + * Creates a new {@link DirContextDnsResolver} using system-configured DNS servers. + */ + public DirContextDnsResolver() { + this(new Properties(), new StackPreference()); + } + + /** + * Creates a new {@link DirContextDnsResolver} using a collection of DNS servers. + * + * @param dnsServer must not be {@literal null} and not empty. + */ + public DirContextDnsResolver(String dnsServer) { + this(Collections.singleton(dnsServer)); + } + + /** + * Creates a new {@link DirContextDnsResolver} using a collection of DNS servers. + * + * @param dnsServers must not be {@literal null} and not empty. + */ + public DirContextDnsResolver(Iterable dnsServers) { + this(getProperties(dnsServers), new StackPreference()); + } + + /** + * Creates a new {@link DirContextDnsResolver} for the given stack preference and {@code properties}. + * + * @param preferIpv4 flag to prefer IPv4 over IPv6 address resolution. + * @param preferIpv6 flag to prefer IPv6 over IPv4 address resolution. + * @param properties custom properties for creating the context, must not be {@literal null}. + */ + public DirContextDnsResolver(boolean preferIpv4, boolean preferIpv6, Properties properties) { + + this.preferIpv4 = preferIpv4; + this.preferIpv6 = preferIpv6; + this.properties = properties; + this.context = createContext(properties); + } + + private DirContextDnsResolver(Properties properties, StackPreference stackPreference) { + + this.properties = new Properties(properties); + this.preferIpv4 = stackPreference.preferIpv4; + this.preferIpv6 = stackPreference.preferIpv6; + this.context = createContext(properties); + } + + private InitialDirContext createContext(Properties properties) { + + LettuceAssert.notNull(properties, "Properties must not be null"); + + Properties hashtable = (Properties) properties.clone(); + hashtable.put(InitialContext.INITIAL_CONTEXT_FACTORY, CTX_FACTORY_NAME); + + if (!hashtable.containsKey(INITIAL_TIMEOUT)) { + hashtable.put(INITIAL_TIMEOUT, DEFAULT_INITIAL_TIMEOUT); + } + + if (!hashtable.containsKey(LOOKUP_RETRIES)) { + hashtable.put(LOOKUP_RETRIES, DEFAULT_RETRIES); + } + + try { + return new InitialDirContext(hashtable); + } catch (NamingException e) { + throw new IllegalStateException(e); + } + } + + @Override + public void close() throws IOException { + try { + context.close(); + } catch (NamingException e) { + throw new IOException(e); + } + } + + /** + * Perform hostname to address resolution. + * + * @param host the hostname, must not be empty or {@literal null}. 
+ * @return array of one or more {@link InetAddress addresses} + * @throws UnknownHostException + */ + @Override + public InetAddress[] resolve(String host) throws UnknownHostException { + + if (ipStringToBytes(host) != null) { + return new InetAddress[] { InetAddress.getByAddress(ipStringToBytes(host)) }; + } + + List inetAddresses = new ArrayList<>(); + try { + resolve(host, inetAddresses); + } catch (NamingException e) { + throw new UnknownHostException(String.format("Cannot resolve %s to a hostname because of %s", host, e)); + } + + if (inetAddresses.isEmpty()) { + throw new UnknownHostException(String.format("Cannot resolve %s to a hostname", host)); + } + + return inetAddresses.toArray(new InetAddress[inetAddresses.size()]); + } + + /** + * Resolve a hostname + * + * @param hostname + * @param inetAddresses + * @throws NamingException + * @throws UnknownHostException + */ + private void resolve(String hostname, List inetAddresses) throws NamingException, UnknownHostException { + + if (preferIpv6 || (!preferIpv4 && !preferIpv6)) { + + inetAddresses.addAll(resolve(hostname, "AAAA")); + inetAddresses.addAll(resolve(hostname, "A")); + } else { + + inetAddresses.addAll(resolve(hostname, "A")); + inetAddresses.addAll(resolve(hostname, "AAAA")); + } + + if (inetAddresses.isEmpty()) { + inetAddresses.addAll(resolveCname(hostname)); + } + } + + /** + * Resolves {@code CNAME} records to {@link InetAddress adresses}. + * + * @param hostname + * @return + * @throws NamingException + */ + @SuppressWarnings("rawtypes") + private List resolveCname(String hostname) throws NamingException { + + List inetAddresses = new ArrayList<>(); + + Attributes attrs = context.getAttributes(hostname, new String[] { "CNAME" }); + Attribute attr = attrs.get("CNAME"); + + if (attr != null && attr.size() > 0) { + NamingEnumeration e = attr.getAll(); + + while (e.hasMore()) { + String h = (String) e.next(); + + if (h.endsWith(".")) { + h = h.substring(0, h.lastIndexOf('.')); + } + try { + InetAddress[] resolved = resolve(h); + for (InetAddress inetAddress : resolved) { + inetAddresses.add(InetAddress.getByAddress(hostname, inetAddress.getAddress())); + } + + } catch (UnknownHostException e1) { + // ignore + } + } + } + + return inetAddresses; + } + + /** + * Resolve an attribute for a hostname. 
+ * + * @param hostname + * @param attrName + * @return + * @throws NamingException + * @throws UnknownHostException + */ + @SuppressWarnings("rawtypes") + private List resolve(String hostname, String attrName) throws NamingException, UnknownHostException { + + Attributes attrs = context.getAttributes(hostname, new String[] { attrName }); + + List inetAddresses = new ArrayList<>(); + Attribute attr = attrs.get(attrName); + + if (attr != null && attr.size() > 0) { + NamingEnumeration e = attr.getAll(); + + while (e.hasMore()) { + InetAddress inetAddress = InetAddress.getByName("" + e.next()); + inetAddresses.add(InetAddress.getByAddress(hostname, inetAddress.getAddress())); + } + } + + return inetAddresses; + } + + private static Properties getProperties(Iterable dnsServers) { + + Properties properties = new Properties(); + StringBuffer providerUrl = new StringBuffer(); + + for (String dnsServer : dnsServers) { + + LettuceAssert.isTrue(LettuceStrings.isNotEmpty(dnsServer), "DNS Server must not be empty"); + if (providerUrl.length() != 0) { + providerUrl.append(' '); + } + providerUrl.append(String.format("dns://%s", dnsServer)); + } + + if (providerUrl.length() == 0) { + throw new IllegalArgumentException("DNS Servers must not be empty"); + } + + properties.put(Context.PROVIDER_URL, providerUrl.toString()); + + return properties; + } + + /** + * Stack preference utility. + */ + private static final class StackPreference { + + final boolean preferIpv4; + final boolean preferIpv6; + + public StackPreference() { + + boolean preferIpv4 = false; + boolean preferIpv6 = false; + + if (System.getProperty(PREFER_IPV4_KEY) == null && System.getProperty(PREFER_IPV6_KEY) == null) { + preferIpv4 = false; + preferIpv6 = false; + } + + if (System.getProperty(PREFER_IPV4_KEY) == null && System.getProperty(PREFER_IPV6_KEY) != null) { + + preferIpv6 = Boolean.getBoolean(PREFER_IPV6_KEY); + if (!preferIpv6) { + preferIpv4 = true; + } + } + + if (System.getProperty(PREFER_IPV4_KEY) != null && System.getProperty(PREFER_IPV6_KEY) == null) { + + preferIpv4 = Boolean.getBoolean(PREFER_IPV4_KEY); + if (!preferIpv4) { + preferIpv6 = true; + } + } + + if (System.getProperty(PREFER_IPV4_KEY) != null && System.getProperty(PREFER_IPV6_KEY) != null) { + + preferIpv4 = Boolean.getBoolean(PREFER_IPV4_KEY); + preferIpv6 = Boolean.getBoolean(PREFER_IPV6_KEY); + } + + this.preferIpv4 = preferIpv4; + this.preferIpv6 = preferIpv6; + } + } + + private static byte[] ipStringToBytes(String ipString) { + // Make a first pass to categorize the characters in this string. + boolean hasColon = false; + boolean hasDot = false; + for (int i = 0; i < ipString.length(); i++) { + char c = ipString.charAt(i); + if (c == '.') { + hasDot = true; + } else if (c == ':') { + if (hasDot) { + return null; // Colons must not appear after dots. + } + hasColon = true; + } else if (Character.digit(c, 16) == -1) { + return null; // Everything else must be a decimal or hex digit. + } + } + + // Now decide which address family to parse. 
+ if (hasColon) { + if (hasDot) { + ipString = convertDottedQuadToHex(ipString); + if (ipString == null) { + return null; + } + } + return textToNumericFormatV6(ipString); + } else if (hasDot) { + return textToNumericFormatV4(ipString); + } + return null; + } + + private static byte[] textToNumericFormatV4(String ipString) { + byte[] bytes = new byte[IPV4_PART_COUNT]; + int i = 0; + try { + for (String octet : ipString.split("\\.", IPV4_PART_COUNT)) { + bytes[i++] = parseOctet(octet); + } + } catch (NumberFormatException ex) { + return null; + } + + return i == IPV4_PART_COUNT ? bytes : null; + } + + private static byte[] textToNumericFormatV6(String ipString) { + // An address can have [2..8] colons, and N colons make N+1 parts. + String[] parts = ipString.split(":", IPV6_PART_COUNT + 2); + if (parts.length < 3 || parts.length > IPV6_PART_COUNT + 1) { + return null; + } + + // Disregarding the endpoints, find "::" with nothing in between. + // This indicates that a run of zeroes has been skipped. + int skipIndex = -1; + for (int i = 1; i < parts.length - 1; i++) { + if (parts[i].length() == 0) { + if (skipIndex >= 0) { + return null; // Can't have more than one :: + } + skipIndex = i; + } + } + + int partsHi; // Number of parts to copy from above/before the "::" + int partsLo; // Number of parts to copy from below/after the "::" + if (skipIndex >= 0) { + // If we found a "::", then check if it also covers the endpoints. + partsHi = skipIndex; + partsLo = parts.length - skipIndex - 1; + if (parts[0].length() == 0 && --partsHi != 0) { + return null; // ^: requires ^:: + } + if (parts[parts.length - 1].length() == 0 && --partsLo != 0) { + return null; // :$ requires ::$ + } + } else { + // Otherwise, allocate the entire address to partsHi. The endpoints + // could still be empty, but parseHextet() will check for that. + partsHi = parts.length; + partsLo = 0; + } + + // If we found a ::, then we must have skipped at least one part. + // Otherwise, we must have exactly the right number of parts. + int partsSkipped = IPV6_PART_COUNT - (partsHi + partsLo); + if (!(skipIndex >= 0 ? partsSkipped >= 1 : partsSkipped == 0)) { + return null; + } + + // Now parse the hextets into a byte array. + ByteBuffer rawBytes = ByteBuffer.allocate(2 * IPV6_PART_COUNT); + try { + for (int i = 0; i < partsHi; i++) { + rawBytes.putShort(parseHextet(parts[i])); + } + for (int i = 0; i < partsSkipped; i++) { + rawBytes.putShort((short) 0); + } + for (int i = partsLo; i > 0; i--) { + rawBytes.putShort(parseHextet(parts[parts.length - i])); + } + } catch (NumberFormatException ex) { + return null; + } + return rawBytes.array(); + } + + private static String convertDottedQuadToHex(String ipString) { + int lastColon = ipString.lastIndexOf(':'); + String initialPart = ipString.substring(0, lastColon + 1); + String dottedQuad = ipString.substring(lastColon + 1); + byte[] quad = textToNumericFormatV4(dottedQuad); + if (quad == null) { + return null; + } + String penultimate = Integer.toHexString(((quad[0] & 0xff) << 8) | (quad[1] & 0xff)); + String ultimate = Integer.toHexString(((quad[2] & 0xff) << 8) | (quad[3] & 0xff)); + return initialPart + penultimate + ":" + ultimate; + } + + private static byte parseOctet(String ipPart) { + // Note: we already verified that this string contains only hex digits. + int octet = Integer.parseInt(ipPart); + // Disallow leading zeroes, because no clear standard exists on + // whether these should be interpreted as decimal or octal. 
+ if (octet > 255 || (ipPart.startsWith("0") && ipPart.length() > 1)) { + throw new NumberFormatException(); + } + return (byte) octet; + } + + private static short parseHextet(String ipPart) { + // Note: we already verified that this string contains only hex digits. + int hextet = Integer.parseInt(ipPart, 16); + if (hextet > 0xffff) { + throw new NumberFormatException(); + } + return (short) hextet; + } +} diff --git a/src/main/java/io/lettuce/core/resource/DnsResolver.java b/src/main/java/io/lettuce/core/resource/DnsResolver.java new file mode 100644 index 0000000000..4754fab90f --- /dev/null +++ b/src/main/java/io/lettuce/core/resource/DnsResolver.java @@ -0,0 +1,59 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.resource; + +import java.net.InetAddress; +import java.net.UnknownHostException; + +/** + * Users may implement this interface to override the normal DNS lookup offered by the OS. + * + * @author Mark Paluch + * @since 4.2 + */ +public interface DnsResolver { + + /** + * Java VM default resolver. + * + * @since 5.1 + */ + static DnsResolver jvmDefault() { + return DnsResolvers.JVM_DEFAULT; + } + + /** + * Non-resolving {@link DnsResolver}. Returns an empty {@link InetAddress} to indicate an unresolved address. + * + * @since 5.1 + * @see java.net.InetSocketAddress#createUnresolved(String, int) + */ + static DnsResolver unresolved() { + return DnsResolvers.UNRESOLVED; + } + + /** + * Returns the IP address for the specified host name. + * + * @param host the hostname, must not be empty or {@literal null}. + * @return array of one or more {@link InetAddress adresses}. An empty array indicates that DNS resolution is not supported + * by this {@link DnsResolver} and should happen by netty, see + * {@link java.net.InetSocketAddress#createUnresolved(String, int)}. + * @throws UnknownHostException if the given host is not recognized or the associated IP address cannot be used to build an + * {@link InetAddress} instance + */ + InetAddress[] resolve(String host) throws UnknownHostException; +} diff --git a/src/main/java/io/lettuce/core/resource/DnsResolvers.java b/src/main/java/io/lettuce/core/resource/DnsResolvers.java new file mode 100644 index 0000000000..efc898ad18 --- /dev/null +++ b/src/main/java/io/lettuce/core/resource/DnsResolvers.java @@ -0,0 +1,52 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
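A hedged sketch of how the resolvers above plug together; the DNS server address and hostname are purely illustrative:

```java
import java.net.InetAddress;
import java.util.Arrays;

import io.lettuce.core.resource.ClientResources;
import io.lettuce.core.resource.DefaultClientResources;
import io.lettuce.core.resource.DirContextDnsResolver;
import io.lettuce.core.resource.DnsResolver;

public class DnsResolverUsage {

    public static void main(String[] args) throws Exception {

        // JNDI-backed resolver querying an explicit DNS server (A, AAAA and CNAME records).
        try (DirContextDnsResolver custom = new DirContextDnsResolver("192.168.1.53")) {
            InetAddress[] addresses = custom.resolve("redis.example.internal");
            System.out.println(Arrays.toString(addresses));
        }

        // Plug a resolver into the client resources; jvmDefault() delegates to InetAddress.getAllByName.
        ClientResources resources = DefaultClientResources.builder()
                .dnsResolver(DnsResolver.jvmDefault())
                .build();
        resources.shutdown();
    }
}
```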
+ */ +package io.lettuce.core.resource; + +import java.net.InetAddress; +import java.net.UnknownHostException; + +/** + * Predefined DNS resolvers. + * + * @author Mark Paluch + * @since 4.2 + */ +public enum DnsResolvers implements DnsResolver { + + /** + * Java VM default resolver. + */ + JVM_DEFAULT { + @Override + public InetAddress[] resolve(String host) throws UnknownHostException { + return InetAddress.getAllByName(host); + } + }, + + /** + * Non-resolving {@link DnsResolver}. Returns an empty {@link InetAddress} to indicate an unresolved address. + * + * @see java.net.InetSocketAddress#createUnresolved(String, int) + * @since 4.4 + */ + UNRESOLVED { + @Override + public InetAddress[] resolve(String host) throws UnknownHostException { + return new InetAddress[0]; + } + }; + +} diff --git a/src/main/java/io/lettuce/core/resource/EpollProvider.java b/src/main/java/io/lettuce/core/resource/EpollProvider.java new file mode 100644 index 0000000000..4c9cce4f8a --- /dev/null +++ b/src/main/java/io/lettuce/core/resource/EpollProvider.java @@ -0,0 +1,210 @@ +/* + * Copyright 2019-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.resource; + +import java.net.SocketAddress; +import java.util.concurrent.ThreadFactory; + +import io.lettuce.core.internal.LettuceAssert; +import io.netty.channel.Channel; +import io.netty.channel.EventLoopGroup; +import io.netty.channel.epoll.Epoll; +import io.netty.channel.epoll.EpollDomainSocketChannel; +import io.netty.channel.epoll.EpollEventLoopGroup; +import io.netty.channel.epoll.EpollSocketChannel; +import io.netty.channel.unix.DomainSocketAddress; +import io.netty.util.concurrent.EventExecutorGroup; +import io.netty.util.internal.SystemPropertyUtil; +import io.netty.util.internal.logging.InternalLogger; +import io.netty.util.internal.logging.InternalLoggerFactory; + +/** + * Wraps and provides Epoll classes. This is to protect the user from {@link ClassNotFoundException}'s caused by the absence of + * the {@literal netty-transport-native-epoll} library during runtime. Internal API. 
+ * + * @author Mark Paluch + * @since 4.4 + */ +public class EpollProvider { + + private static final InternalLogger logger = InternalLoggerFactory.getInstance(EpollProvider.class); + + private static final String EPOLL_ENABLED_KEY = "io.lettuce.core.epoll"; + private static final boolean EPOLL_ENABLED = Boolean.parseBoolean(SystemPropertyUtil.get(EPOLL_ENABLED_KEY, "true")); + + private static final boolean EPOLL_AVAILABLE; + private static final EventLoopResources EPOLL_RESOURCES; + + static { + + boolean availability; + try { + Class.forName("io.netty.channel.epoll.Epoll"); + availability = Epoll.isAvailable(); + } catch (ClassNotFoundException e) { + availability = false; + } + + EPOLL_AVAILABLE = availability; + + if (EPOLL_AVAILABLE) { + logger.debug("Starting with epoll library"); + EPOLL_RESOURCES = AvailableEpollResources.INSTANCE; + + } else { + logger.debug("Starting without optional epoll library"); + EPOLL_RESOURCES = UnavailableEpollResources.INSTANCE; + } + } + + /** + * @return {@literal true} if epoll is available. + */ + public static boolean isAvailable() { + return EPOLL_AVAILABLE && EPOLL_ENABLED; + } + + /** + * Check whether the Epoll library is available on the class path. + * + * @throws IllegalStateException if the {@literal netty-transport-native-epoll} library is not available + */ + static void checkForEpollLibrary() { + + LettuceAssert.assertState(EPOLL_ENABLED, + String.format("epoll use is disabled via System properties (%s)", EPOLL_ENABLED_KEY)); + LettuceAssert.assertState(isAvailable(), + "netty-transport-native-epoll is not available. Make sure netty-transport-native-epoll library on the class path and supported by your operating system."); + } + + /** + * Returns the {@link EventLoopResources} for epoll-backed transport. Check availability with {@link #isAvailable()} prior + * to obtaining the resources. + * + * @return the {@link EventLoopResources}. May be unavailable. + * + * @since 6.0 + */ + public static EventLoopResources getResources() { + return EPOLL_RESOURCES; + } + + /** + * {@link EventLoopResources} for unavailable EPoll. + */ + enum UnavailableEpollResources implements EventLoopResources { + + INSTANCE; + + @Override + public Class domainSocketChannelClass() { + + checkForEpollLibrary(); + return null; + } + + @Override + public Class eventLoopGroupClass() { + + checkForEpollLibrary(); + return null; + } + + @Override + public boolean matches(Class type) { + + checkForEpollLibrary(); + return false; + } + + @Override + public EventLoopGroup newEventLoopGroup(int nThreads, ThreadFactory threadFactory) { + + checkForEpollLibrary(); + return null; + } + + @Override + public SocketAddress newSocketAddress(String socketPath) { + + checkForEpollLibrary(); + return null; + } + + @Override + public Class socketChannelClass() { + + checkForEpollLibrary(); + return null; + } + } + + /** + * {@link EventLoopResources} for available Epoll. 
+ */ + enum AvailableEpollResources implements EventLoopResources { + + INSTANCE; + + @Override + public boolean matches(Class type) { + + LettuceAssert.notNull(type, "EventLoopGroup type must not be null"); + + return type.equals(EpollEventLoopGroup.class); + } + + @Override + public EventLoopGroup newEventLoopGroup(int nThreads, ThreadFactory threadFactory) { + + checkForEpollLibrary(); + + return new EpollEventLoopGroup(nThreads, threadFactory); + } + + @Override + public Class domainSocketChannelClass() { + + checkForEpollLibrary(); + + return EpollDomainSocketChannel.class; + } + + @Override + public Class socketChannelClass() { + + checkForEpollLibrary(); + + return EpollSocketChannel.class; + } + + @Override + public Class eventLoopGroupClass() { + + checkForEpollLibrary(); + + return EpollEventLoopGroup.class; + } + + @Override + public SocketAddress newSocketAddress(String socketPath) { + + checkForEpollLibrary(); + + return new DomainSocketAddress(socketPath); + } + } +} diff --git a/src/main/java/io/lettuce/core/resource/EqualJitterDelay.java b/src/main/java/io/lettuce/core/resource/EqualJitterDelay.java new file mode 100644 index 0000000000..2b82250769 --- /dev/null +++ b/src/main/java/io/lettuce/core/resource/EqualJitterDelay.java @@ -0,0 +1,52 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.resource; + +import java.time.Duration; +import java.util.concurrent.TimeUnit; + +/** + * Delay that increases using equal jitter strategy. + * + *
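The `io.lettuce.core.epoll` system property read above is the intended escape hatch when the native transport should not be used. A short sketch; the property key is taken from the code, everything else is illustrative:

```java
public class DisableEpollExample {

    public static void main(String[] args) {
        // Must be set before the first client/resources instance is created,
        // because EpollProvider reads the property once in its static initializer.
        System.setProperty("io.lettuce.core.epoll", "false");

        // ... create ClientResources / RedisClient afterwards; the equivalent
        // command-line form is -Dio.lettuce.core.epoll=false ...
    }
}
```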

+ * Considering retry attempts start at 1, attempt 0 would be the initial call and will always yield 0 (or the lower bound). Then, each + * retry step will by default yield {@code randomBetween(0, base * 2 ^ (attempt - 1))}. + * + * This strategy is based on Exponential Backoff and + * Jitter. + *

    + * + * @author Jongyeol Choi + * @author Mark Paluch + * @since 4.2 + */ +class EqualJitterDelay extends ExponentialDelay { + + private final long base; + private final TimeUnit targetTimeUnit; + + EqualJitterDelay(Duration lower, Duration upper, long base, TimeUnit targetTimeUnit) { + super(lower, upper, 2, targetTimeUnit); + this.base = base; + this.targetTimeUnit = targetTimeUnit; + } + + @Override + public Duration createDelay(long attempt) { + long value = randomBetween(0, base * calculatePowerOfTwo(attempt)); + return applyBounds(Duration.ofNanos(targetTimeUnit.toNanos(value))); + } +} diff --git a/src/main/java/com/lambdaworks/redis/resource/EventLoopGroupProvider.java b/src/main/java/io/lettuce/core/resource/EventLoopGroupProvider.java similarity index 79% rename from src/main/java/com/lambdaworks/redis/resource/EventLoopGroupProvider.java rename to src/main/java/io/lettuce/core/resource/EventLoopGroupProvider.java index f9368462e1..374d9c390b 100644 --- a/src/main/java/com/lambdaworks/redis/resource/EventLoopGroupProvider.java +++ b/src/main/java/io/lettuce/core/resource/EventLoopGroupProvider.java @@ -1,4 +1,19 @@ -package com.lambdaworks.redis.resource; +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.resource; import java.util.concurrent.TimeUnit; @@ -12,7 +27,7 @@ * instances open can exhaust the number of open files. *

    * Usually, the default settings are sufficient. However, customizing might be useful for some special cases where multiple - * {@link com.lambdaworks.redis.RedisClient} or {@link com.lambdaworks.redis.cluster.RedisClusterClient} instances are needed + * {@link io.lettuce.core.RedisClient} or {@link io.lettuce.core.cluster.RedisClusterClient} instances are needed * that share one or more event loop groups. *

    *

    @@ -22,7 +37,7 @@ *

    * You can implement your own {@link EventLoopGroupProvider} to share existing {@link EventLoopGroup EventLoopGroup's} with * lettuce. - * + * * @author Mark Paluch * @since 3.4 */ @@ -50,7 +65,7 @@ public interface EventLoopGroupProvider { /** * Release a {@code eventLoopGroup} instance. The method will shutdown/terminate the {@link EventExecutorGroup} if it is no longer * needed. - * + * * @param eventLoopGroup the eventLoopGroup instance, must not be {@literal null} * @param quietPeriod the quiet period * @param timeout the timeout diff --git a/src/main/java/io/lettuce/core/resource/EventLoopResources.java b/src/main/java/io/lettuce/core/resource/EventLoopResources.java new file mode 100644 index 0000000000..a12400e4d6 --- /dev/null +++ b/src/main/java/io/lettuce/core/resource/EventLoopResources.java @@ -0,0 +1,71 @@ +/* + * Copyright 2019-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.resource; + +import java.net.SocketAddress; +import java.util.concurrent.ThreadFactory; + +import io.netty.channel.Channel; +import io.netty.channel.EventLoopGroup; +import io.netty.channel.epoll.EpollEventLoopGroup; +import io.netty.util.concurrent.EventExecutorGroup; + +/** + * Interface to encapsulate EventLoopGroup resources. + * + * @author Mark Paluch + * @since 6.0 + */ +public interface EventLoopResources { + + /** + * Checks if the given {@code type} matches the underlying {@link EventExecutorGroup} type. + * + * @param type must not be {@literal null}. + * @return {@literal true} if {@code type} is a {@link EventExecutorGroup} of the underlying loop resources. + */ + boolean matches(Class type); + + /** + * Create a new {@link EpollEventLoopGroup}. + * + * @param nThreads number of threads. + * @param threadFactory the {@link ThreadFactory}. + * @return the {@link EventLoopGroup}. + */ + EventLoopGroup newEventLoopGroup(int nThreads, ThreadFactory threadFactory); + + /** + * @return the Domain Socket {@link Channel} class. + */ + Class domainSocketChannelClass(); + + /** + * @return the {@link Channel} class. + */ + Class socketChannelClass(); + + /** + * @return the {@link EventLoopGroup} class. + */ + Class eventLoopGroupClass(); + + /** + * @param socketPath the socket file path. + * @return a domain socket address object. + */ + SocketAddress newSocketAddress(String socketPath); +} diff --git a/src/main/java/io/lettuce/core/resource/ExponentialDelay.java b/src/main/java/io/lettuce/core/resource/ExponentialDelay.java new file mode 100644 index 0000000000..f67fd66bdf --- /dev/null +++ b/src/main/java/io/lettuce/core/resource/ExponentialDelay.java @@ -0,0 +1,97 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
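For orientation only: `EventLoopResources` is the internal seam that keeps transport-specific classes behind `EpollProvider` and `KqueueProvider`. A rough sketch of how the pieces compose, with the thread count and usage made up for illustration:

```java
import java.util.concurrent.Executors;

import io.lettuce.core.resource.EpollProvider;
import io.lettuce.core.resource.EventLoopResources;
import io.netty.channel.EventLoopGroup;

public class NativeTransportSketch {

    public static void main(String[] args) throws InterruptedException {
        if (EpollProvider.isAvailable()) {
            EventLoopResources resources = EpollProvider.getResources();

            // The resources object also answers which channel classes belong to this transport.
            System.out.println("Socket channel type: " + resources.socketChannelClass());

            EventLoopGroup group = resources.newEventLoopGroup(2, Executors.defaultThreadFactory());
            group.shutdownGracefully().sync();
        }
    }
}
```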
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.resource; + +import java.time.Duration; +import java.util.concurrent.TimeUnit; + +/** + * Delay that increases exponentially on every attempt. + * + *

    + * Considering retry attempts start at 1, attempt 0 would be the initial call and will always yield 0 (or the lower bound). Then + * each retry step will by default yield {@code 2 ^ (attemptNumber-1)}. By default this gives us 0 (initial attempt), 1, 2, 4, + * 8, 16, 32, ... + *

+ * {@link ExponentialDelay} can also apply different {@code powersOf}, such as powers of 10 that would apply + * {@code 10 ^ (attemptNumber-1)}, which would give 0, 1, 10, 100, 1000, ... + *

    + * Each of the resulting values that is below the {@code lowerBound} will be replaced by the lower bound, and each value over + * the {@code upperBound} will be replaced by the upper bound. + * + * @author Mark Paluch + * @author Jongyeol Choi + * @since 4.1 + */ +class ExponentialDelay extends Delay { + + private final Duration lower; + private final Duration upper; + private final int powersOf; + private final TimeUnit targetTimeUnit; + + ExponentialDelay(Duration lower, Duration upper, int powersOf, TimeUnit targetTimeUnit) { + + this.lower = lower; + this.upper = upper; + this.powersOf = powersOf; + this.targetTimeUnit = targetTimeUnit; + } + + @Override + public Duration createDelay(long attempt) { + + long delay; + if (attempt <= 0) { // safeguard against underflow + delay = 0; + } else if (powersOf == 2) { + delay = calculatePowerOfTwo(attempt); + } else { + delay = calculateAlternatePower(attempt); + } + + return applyBounds(Duration.ofNanos(targetTimeUnit.toNanos(delay))); + } + + /** + * Apply bounds to the given {@code delay}. + * + * @param delay the delay + * @return the delay normalized to its lower and upper bounds. + */ + protected Duration applyBounds(Duration delay) { + return applyBounds(delay, lower, upper); + } + + private long calculateAlternatePower(long attempt) { + + // round will cap at Long.MAX_VALUE and pow should prevent overflows + double step = Math.pow(powersOf, attempt - 1); // attempt > 0 + return Math.round(step); + } + + // fastpath with bitwise operator + protected static long calculatePowerOfTwo(long attempt) { + + if (attempt <= 0) { // safeguard against underflow + return 0L; + } else if (attempt >= 64) { // safeguard against overflow in the bitshift operation + return Long.MAX_VALUE; + } else { + return 1L << (attempt - 1); + } + } +} diff --git a/src/main/java/io/lettuce/core/resource/FullJitterDelay.java b/src/main/java/io/lettuce/core/resource/FullJitterDelay.java new file mode 100644 index 0000000000..5235871fd6 --- /dev/null +++ b/src/main/java/io/lettuce/core/resource/FullJitterDelay.java @@ -0,0 +1,58 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.resource; + +import java.time.Duration; +import java.util.concurrent.TimeUnit; + +/** + * Delay that increases using full jitter strategy. + * + *
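To make the progression described in the `ExponentialDelay` Javadoc above concrete, a tiny standalone illustration (not library code) of the powers-of-two and powers-of-ten sequences:

```java
public class DelayProgressionIllustration {

    // Mirrors the documented formula: attempt 0 yields 0, otherwise powersOf^(attempt - 1).
    static long step(long attempt, int powersOf) {
        return attempt <= 0 ? 0 : Math.round(Math.pow(powersOf, attempt - 1));
    }

    public static void main(String[] args) {
        // Powers of two: 0, 1, 2, 4, 8, 16; powers of ten: 0, 1, 10, 100, 1000, 10000.
        for (long attempt = 0; attempt <= 5; attempt++) {
            System.out.printf("attempt %d -> %d / %d%n", attempt, step(attempt, 2), step(attempt, 10));
        }
    }
}
```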

+ * Considering retry attempts start at 1, attempt 0 would be the initial call and will always yield 0 (or the lower bound). Then, each + * retry step will by default yield {@code temp / 2 + random_between(0, temp / 2)} where + * {@code temp = min(upper, base * 2 ** attempt)}. + * + * This strategy is based on Exponential Backoff and + * Jitter. + *

    + * + * @author Jongyeol Choi + * @author Mark Paluch + * @since 4.2 + */ +class FullJitterDelay extends ExponentialDelay { + + private final Duration upper; + private final long base; + private final TimeUnit targetTimeUnit; + + FullJitterDelay(Duration lower, Duration upper, long base, TimeUnit targetTimeUnit) { + super(lower, upper, 2, targetTimeUnit); + this.upper = upper; + this.base = base; + this.targetTimeUnit = targetTimeUnit; + } + + @Override + public Duration createDelay(long attempt) { + + long upperTarget = targetTimeUnit.convert(upper.toNanos(), TimeUnit.NANOSECONDS); + long temp = Math.min(upperTarget, base * calculatePowerOfTwo(attempt)); + long delay = temp / 2 + randomBetween(0, temp / 2); + return applyBounds(Duration.ofNanos(targetTimeUnit.toNanos(delay))); + } +} diff --git a/src/main/java/io/lettuce/core/resource/KqueueProvider.java b/src/main/java/io/lettuce/core/resource/KqueueProvider.java new file mode 100644 index 0000000000..5ef17018e1 --- /dev/null +++ b/src/main/java/io/lettuce/core/resource/KqueueProvider.java @@ -0,0 +1,210 @@ +/* + * Copyright 2019-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.resource; + +import java.net.SocketAddress; +import java.util.concurrent.ThreadFactory; + +import io.lettuce.core.internal.LettuceAssert; +import io.netty.channel.Channel; +import io.netty.channel.EventLoopGroup; +import io.netty.channel.kqueue.KQueue; +import io.netty.channel.kqueue.KQueueDomainSocketChannel; +import io.netty.channel.kqueue.KQueueEventLoopGroup; +import io.netty.channel.kqueue.KQueueSocketChannel; +import io.netty.channel.unix.DomainSocketAddress; +import io.netty.util.concurrent.EventExecutorGroup; +import io.netty.util.internal.SystemPropertyUtil; +import io.netty.util.internal.logging.InternalLogger; +import io.netty.util.internal.logging.InternalLoggerFactory; + +/** + * Wraps and provides kqueue classes. This is to protect the user from {@link ClassNotFoundException}'s caused by the absence of + * the {@literal netty-transport-native-kqueue} library during runtime. Internal API. 
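These delay implementations are normally reached through the `Delay` factory methods and the client resources rather than instantiated directly. A configuration sketch, assuming the `Delay.fullJitter(...)` factory and the `reconnectDelay(...)` builder option that live outside this hunk; the bounds and base are example values:

```java
import java.time.Duration;
import java.util.concurrent.TimeUnit;

import io.lettuce.core.resource.ClientResources;
import io.lettuce.core.resource.DefaultClientResources;
import io.lettuce.core.resource.Delay;

public class ReconnectDelayExample {

    public static void main(String[] args) {
        // Full jitter between 100 ms and 30 s, with a doubling base of 100 ms per attempt.
        ClientResources resources = DefaultClientResources.builder()
                .reconnectDelay(Delay.fullJitter(Duration.ofMillis(100), Duration.ofSeconds(30), 100,
                        TimeUnit.MILLISECONDS))
                .build();

        // ... pass the resources to RedisClient.create(resources, uri) ...

        resources.shutdown();
    }
}
```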
+ * + * @author Mark Paluch + * @since 4.4 + */ +public class KqueueProvider { + + private static final InternalLogger logger = InternalLoggerFactory.getInstance(KqueueProvider.class); + + private static final String KQUEUE_ENABLED_KEY = "io.lettuce.core.kqueue"; + private static final boolean KQUEUE_ENABLED = Boolean.parseBoolean(SystemPropertyUtil.get(KQUEUE_ENABLED_KEY, "true")); + + private static final boolean KQUEUE_AVAILABLE; + private static final EventLoopResources KQUEUE_RESOURCES; + + static { + + boolean availability; + try { + Class.forName("io.netty.channel.kqueue.KQueue"); + availability = KQueue.isAvailable(); + } catch (ClassNotFoundException e) { + availability = false; + } + + KQUEUE_AVAILABLE = availability; + + if (KQUEUE_AVAILABLE) { + logger.debug("Starting with kqueue library"); + KQUEUE_RESOURCES = AvailableKqueueResources.INSTANCE; + + } else { + logger.debug("Starting without optional kqueue library"); + KQUEUE_RESOURCES = UnavailableKqueueResources.INSTANCE; + } + } + + /** + * @return {@literal true} if kqueue is available. + */ + public static boolean isAvailable() { + return KQUEUE_AVAILABLE && KQUEUE_ENABLED; + } + + /** + * Check whether the kqueue library is available on the class path. + * + * @throws IllegalStateException if the {@literal netty-transport-native-kqueue} library is not available + */ + static void checkForKqueueLibrary() { + + LettuceAssert.assertState(KQUEUE_ENABLED, + String.format("kqueue use is disabled via System properties (%s)", KQUEUE_ENABLED_KEY)); + LettuceAssert.assertState(isAvailable(), + "netty-transport-native-kqueue is not available. Make sure netty-transport-native-kqueue library on the class path and supported by your operating system."); + } + + /** + * Returns the {@link EventLoopResources} for kqueue-backed transport. Check availability with {@link #isAvailable()} prior + * to obtaining the resources. + * + * @return the {@link EventLoopResources}. May be unavailable. + * + * @since 6.0 + */ + public static EventLoopResources getResources() { + return KQUEUE_RESOURCES; + } + + /** + * {@link EventLoopResources} for unavailable EPoll. + */ + enum UnavailableKqueueResources implements EventLoopResources { + + INSTANCE; + + @Override + public Class domainSocketChannelClass() { + + checkForKqueueLibrary(); + return null; + } + + @Override + public Class eventLoopGroupClass() { + + checkForKqueueLibrary(); + return null; + } + + @Override + public boolean matches(Class type) { + + checkForKqueueLibrary(); + return false; + } + + @Override + public EventLoopGroup newEventLoopGroup(int nThreads, ThreadFactory threadFactory) { + + checkForKqueueLibrary(); + return null; + } + + @Override + public SocketAddress newSocketAddress(String socketPath) { + + checkForKqueueLibrary(); + return null; + } + + @Override + public Class socketChannelClass() { + + checkForKqueueLibrary(); + return null; + } + } + + /** + * {@link EventLoopResources} for available kqueue. 
+ */ + enum AvailableKqueueResources implements EventLoopResources { + + INSTANCE; + + @Override + public boolean matches(Class type) { + + LettuceAssert.notNull(type, "EventLoopGroup type must not be null"); + + return type.equals(eventLoopGroupClass()); + } + + @Override + public EventLoopGroup newEventLoopGroup(int nThreads, ThreadFactory threadFactory) { + + checkForKqueueLibrary(); + + return new KQueueEventLoopGroup(nThreads, threadFactory); + } + + @Override + public Class domainSocketChannelClass() { + + checkForKqueueLibrary(); + + return KQueueDomainSocketChannel.class; + } + + @Override + public Class socketChannelClass() { + + checkForKqueueLibrary(); + + return KQueueSocketChannel.class; + } + + @Override + public Class eventLoopGroupClass() { + + checkForKqueueLibrary(); + + return KQueueEventLoopGroup.class; + } + + @Override + public SocketAddress newSocketAddress(String socketPath) { + + checkForKqueueLibrary(); + + return new DomainSocketAddress(socketPath); + } + } +} diff --git a/src/main/java/io/lettuce/core/resource/MappingSocketAddressResolver.java b/src/main/java/io/lettuce/core/resource/MappingSocketAddressResolver.java new file mode 100644 index 0000000000..8e5dc64fd0 --- /dev/null +++ b/src/main/java/io/lettuce/core/resource/MappingSocketAddressResolver.java @@ -0,0 +1,98 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.resource; + +import java.net.InetAddress; +import java.net.InetSocketAddress; +import java.net.SocketAddress; +import java.net.UnknownHostException; +import java.util.function.Function; + +import io.lettuce.core.RedisURI; +import io.lettuce.core.internal.HostAndPort; +import io.lettuce.core.internal.LettuceAssert; + +/** + * Mapping {@link SocketAddressResolver} that allows mapping of {@link io.lettuce.core.RedisURI} host and port components to + * redirect connection endpoint coordinates using a {@link Function mapping function}. + * + * @author Mark Paluch + * @since 5.1 + */ +public class MappingSocketAddressResolver extends SocketAddressResolver { + + private final Function mappingFunction; + private final DnsResolver dnsResolver; + + /** + * Create a new {@link SocketAddressResolver} given {@link DnsResolver} and {@link Function mapping function}. + * + * @param dnsResolver must not be {@literal null}. + * @param mappingFunction must not be {@literal null}. + */ + private MappingSocketAddressResolver(DnsResolver dnsResolver, Function mappingFunction) { + + super(dnsResolver); + + LettuceAssert.notNull(mappingFunction, "Mapping function must not be null!"); + this.dnsResolver = dnsResolver; + this.mappingFunction = mappingFunction; + } + + /** + * Create a new {@link SocketAddressResolver} given {@link DnsResolver} and {@link Function mapping function}. + * + * @param dnsResolver must not be {@literal null}. + * @param mappingFunction must not be {@literal null}. + * @return the {@link MappingSocketAddressResolver}. 
+ */ + public static MappingSocketAddressResolver create(DnsResolver dnsResolver, + Function mappingFunction) { + return new MappingSocketAddressResolver(dnsResolver, mappingFunction); + } + + @Override + public SocketAddress resolve(RedisURI redisURI) { + + if (redisURI.getSocket() != null) { + return getDomainSocketAddress(redisURI); + } + + HostAndPort hostAndPort = HostAndPort.of(redisURI.getHost(), redisURI.getPort()); + + HostAndPort mapped = mappingFunction.apply(hostAndPort); + if (mapped == null) { + throw new IllegalStateException("Mapping function must not return null for HostAndPort"); + } + + try { + return doResolve(mapped); + } catch (UnknownHostException e) { + return new InetSocketAddress(redisURI.getHost(), redisURI.getPort()); + } + } + + private SocketAddress doResolve(HostAndPort mapped) throws UnknownHostException { + + InetAddress[] inetAddress = dnsResolver.resolve(mapped.getHostText()); + + if (inetAddress.length == 0) { + return InetSocketAddress.createUnresolved(mapped.getHostText(), mapped.getPort()); + } + + return new InetSocketAddress(inetAddress[0], mapped.getPort()); + } +} diff --git a/src/main/java/io/lettuce/core/resource/NettyCustomizer.java b/src/main/java/io/lettuce/core/resource/NettyCustomizer.java new file mode 100644 index 0000000000..ca8be1f757 --- /dev/null +++ b/src/main/java/io/lettuce/core/resource/NettyCustomizer.java @@ -0,0 +1,53 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.resource; + +import io.netty.bootstrap.Bootstrap; +import io.netty.channel.Channel; + +/** + * Strategy interface to customize netty {@link io.netty.bootstrap.Bootstrap} and {@link io.netty.channel.Channel} via callback + * hooks.
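The mapping resolver above is mostly useful when addresses announced by Redis (for example by Sentinel or Cluster) are not directly reachable from the client. A sketch of rewriting an internal address to a NAT'ed endpoint, assuming the `socketAddressResolver(...)` client-resources option this class is designed to plug into; host names and ports are made up:

```java
import io.lettuce.core.internal.HostAndPort;
import io.lettuce.core.resource.ClientResources;
import io.lettuce.core.resource.DefaultClientResources;
import io.lettuce.core.resource.DnsResolver;
import io.lettuce.core.resource.MappingSocketAddressResolver;

public class MappingResolverExample {

    public static void main(String[] args) {
        MappingSocketAddressResolver resolver = MappingSocketAddressResolver.create(DnsResolver.jvmDefault(),
                hostAndPort -> {
                    // Rewrite the internally announced address to the externally visible one.
                    if ("10.0.0.5".equals(hostAndPort.getHostText())) {
                        return HostAndPort.of("redis.example.com", 6379);
                    }
                    return hostAndPort;
                });

        ClientResources resources = DefaultClientResources.builder().socketAddressResolver(resolver).build();
        // ... use the resources with RedisClient.create(resources, uri) ...
        resources.shutdown();
    }
}
```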
    + * Extending the NettyCustomizer API + *

    + * Contrary to other driver options, the options available in this class should be considered as advanced feature and as such, + * they should only be modified by expert users. A misconfiguration introduced by the means of this API can have unexpected + * results and cause the driver to completely fail to connect. + * + * @author Mark Paluch + * @since 4.4 + */ +public interface NettyCustomizer { + + /** + * Hook invoked each time the driver creates a new Connection and configures a new instance of Bootstrap for it. This hook + * is called after the driver has applied all {@link java.net.SocketOption}s. This is a good place to add extra + * {@link io.netty.channel.ChannelOption}s to the {@link Bootstrap}. + * + * @param bootstrap must not be {@literal null}. + */ + default void afterBootstrapInitialized(Bootstrap bootstrap) { + } + + /** + * Hook invoked each time the driver initializes the channel. This hook is called after the driver has registered all its + * internal channel handlers, and applied the configured options. + * + * @param channel must not be {@literal null}. + */ + default void afterChannelInitialized(Channel channel) { + } +} diff --git a/src/main/java/io/lettuce/core/resource/PromiseAdapter.java b/src/main/java/io/lettuce/core/resource/PromiseAdapter.java new file mode 100644 index 0000000000..d99bd5b4f8 --- /dev/null +++ b/src/main/java/io/lettuce/core/resource/PromiseAdapter.java @@ -0,0 +1,58 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.resource; + +import io.netty.util.concurrent.*; + +/** + * Utility class to support netty's future handling. + * + * @author Mark Paluch + * @since 3.4 + */ +class PromiseAdapter { + + /** + * Create a promise that emits a {@code Boolean} value on completion of the {@code future} + * + * @param future the future. + * @return Promise emitting a {@code Boolean} value. {@literal true} if the {@code future} completed successfully, otherwise + * the cause wil be transported. + */ + static Promise toBooleanPromise(Future future) { + + DefaultPromise result = new DefaultPromise<>(GlobalEventExecutor.INSTANCE); + + if (future.isDone() || future.isCancelled()) { + if (future.isSuccess()) { + result.setSuccess(true); + } else { + result.setFailure(future.cause()); + } + return result; + } + + future.addListener((GenericFutureListener>) f -> { + + if (f.isSuccess()) { + result.setSuccess(true); + } else { + result.setFailure(f.cause()); + } + }); + return result; + } +} diff --git a/src/main/java/io/lettuce/core/resource/SocketAddressResolver.java b/src/main/java/io/lettuce/core/resource/SocketAddressResolver.java new file mode 100644 index 0000000000..f35df4d717 --- /dev/null +++ b/src/main/java/io/lettuce/core/resource/SocketAddressResolver.java @@ -0,0 +1,111 @@ +/* + * Copyright 2011-2020 the original author or authors. 
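Because both hooks above have empty default implementations, a customizer only needs to override what it cares about. A minimal sketch that enables TCP keepalive on every bootstrap, assuming the `nettyCustomizer(...)` client-resources option this interface is registered through; the class name is illustrative:

```java
import io.lettuce.core.resource.ClientResources;
import io.lettuce.core.resource.DefaultClientResources;
import io.lettuce.core.resource.NettyCustomizer;
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelOption;

public class KeepAliveCustomizerExample {

    public static void main(String[] args) {
        NettyCustomizer keepAlive = new NettyCustomizer() {

            @Override
            public void afterBootstrapInitialized(Bootstrap bootstrap) {
                // Runs before connecting; additional ChannelOptions go here.
                bootstrap.option(ChannelOption.SO_KEEPALIVE, true);
            }
        };

        ClientResources resources = DefaultClientResources.builder().nettyCustomizer(keepAlive).build();
        // ... use the resources with RedisClient.create(resources, uri) ...
        resources.shutdown();
    }
}
```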
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.resource; + +import java.net.InetAddress; +import java.net.InetSocketAddress; +import java.net.SocketAddress; +import java.net.UnknownHostException; + +import io.lettuce.core.RedisURI; +import io.lettuce.core.internal.LettuceAssert; + +/** + * Resolves a {@link io.lettuce.core.RedisURI} to a {@link java.net.SocketAddress}. + * + * @author Mark Paluch + * @see MappingSocketAddressResolver + */ +public class SocketAddressResolver { + + private final DnsResolver dnsResolver; + + /** + * Create a new {@link SocketAddressResolver} given {@link DnsResolver}. + * + * @param dnsResolver must not be {@literal null}. + * @since 5.1 + */ + protected SocketAddressResolver(DnsResolver dnsResolver) { + + LettuceAssert.notNull(dnsResolver, "DnsResolver must not be null"); + + this.dnsResolver = dnsResolver; + } + + /** + * Create a new {@link SocketAddressResolver} given {@link DnsResolver}. + * + * @param dnsResolver must not be {@literal null}. + * @return the {@link SocketAddressResolver}. + * @since 5.1 + */ + public static SocketAddressResolver create(DnsResolver dnsResolver) { + return new SocketAddressResolver(dnsResolver); + } + + /** + * Resolve a {@link RedisURI} to a {@link SocketAddress}. + * + * @param redisURI must not be {@literal null}. + * @return the resolved {@link SocketAddress}. + * @since 5.1 + */ + public SocketAddress resolve(RedisURI redisURI) { + + LettuceAssert.notNull(redisURI, "RedisURI must not be null"); + + return resolve(redisURI, dnsResolver); + } + + /** + * Resolves a {@link io.lettuce.core.RedisURI} to a {@link java.net.SocketAddress}. + * + * @param redisURI must not be {@literal null}. + * @param dnsResolver must not be {@literal null}. + * @return the resolved {@link SocketAddress}. + */ + public static SocketAddress resolve(RedisURI redisURI, DnsResolver dnsResolver) { + + if (redisURI.getSocket() != null) { + return getDomainSocketAddress(redisURI); + } + + try { + InetAddress[] inetAddress = dnsResolver.resolve(redisURI.getHost()); + + if (inetAddress.length == 0) { + return InetSocketAddress.createUnresolved(redisURI.getHost(), redisURI.getPort()); + } + + return new InetSocketAddress(inetAddress[0], redisURI.getPort()); + } catch (UnknownHostException e) { + return new InetSocketAddress(redisURI.getHost(), redisURI.getPort()); + } + } + + static SocketAddress getDomainSocketAddress(RedisURI redisURI) { + + if (KqueueProvider.isAvailable() || EpollProvider.isAvailable()) { + EventLoopResources resources = KqueueProvider.isAvailable() ? KqueueProvider.getResources() + : EpollProvider.getResources(); + return resources.newSocketAddress(redisURI.getSocket()); + } + + throw new IllegalStateException( + "No native transport available. 
Make sure that either netty's epoll or kqueue library is on the class path and supported by your operating system."); + } +} diff --git a/src/main/java/io/lettuce/core/resource/package-info.java b/src/main/java/io/lettuce/core/resource/package-info.java new file mode 100644 index 0000000000..b4c223bf4e --- /dev/null +++ b/src/main/java/io/lettuce/core/resource/package-info.java @@ -0,0 +1,4 @@ +/** + * Client resource infrastructure providers. + */ +package io.lettuce.core.resource; diff --git a/src/main/java/io/lettuce/core/sentinel/RedisSentinelAsyncCommandsImpl.java b/src/main/java/io/lettuce/core/sentinel/RedisSentinelAsyncCommandsImpl.java new file mode 100644 index 0000000000..e699d5d3eb --- /dev/null +++ b/src/main/java/io/lettuce/core/sentinel/RedisSentinelAsyncCommandsImpl.java @@ -0,0 +1,176 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.sentinel; + +import java.net.SocketAddress; +import java.util.List; +import java.util.Map; + +import io.lettuce.core.KillArgs; +import io.lettuce.core.RedisFuture; +import io.lettuce.core.api.StatefulConnection; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.output.CommandOutput; +import io.lettuce.core.protocol.*; +import io.lettuce.core.sentinel.api.StatefulRedisSentinelConnection; +import io.lettuce.core.sentinel.api.async.RedisSentinelAsyncCommands; + +/** + * An asynchronous and thread-safe API for a Redis Sentinel connection. + * + * @param Key type. + * @param Value type. 
+ * @author Mark Paluch + * @since 3.0 + */ +public class RedisSentinelAsyncCommandsImpl implements RedisSentinelAsyncCommands { + + private final SentinelCommandBuilder commandBuilder; + private final StatefulConnection connection; + + public RedisSentinelAsyncCommandsImpl(StatefulConnection connection, RedisCodec codec) { + this.connection = connection; + commandBuilder = new SentinelCommandBuilder(codec); + } + + @Override + public RedisFuture getMasterAddrByName(K key) { + return dispatch(commandBuilder.getMasterAddrByKey(key)); + } + + @Override + public RedisFuture>> masters() { + return dispatch(commandBuilder.masters()); + } + + @Override + public RedisFuture> master(K key) { + return dispatch(commandBuilder.master(key)); + } + + @Override + public RedisFuture>> slaves(K key) { + return dispatch(commandBuilder.slaves(key)); + } + + @Override + public RedisFuture reset(K key) { + return dispatch(commandBuilder.reset(key)); + } + + @Override + public RedisFuture failover(K key) { + return dispatch(commandBuilder.failover(key)); + } + + @Override + public RedisFuture monitor(K key, String ip, int port, int quorum) { + return dispatch(commandBuilder.monitor(key, ip, port, quorum)); + } + + @Override + public RedisFuture set(K key, String option, V value) { + return dispatch(commandBuilder.set(key, option, value)); + } + + @Override + public RedisFuture remove(K key) { + return dispatch(commandBuilder.remove(key)); + } + + @Override + public RedisFuture ping() { + return dispatch(commandBuilder.ping()); + } + + @Override + public RedisFuture clientGetname() { + return dispatch(commandBuilder.clientGetname()); + } + + @Override + public RedisFuture clientSetname(K name) { + return dispatch(commandBuilder.clientSetname(name)); + } + + @Override + public RedisFuture clientKill(String addr) { + return dispatch(commandBuilder.clientKill(addr)); + } + + @Override + public RedisFuture clientKill(KillArgs killArgs) { + return dispatch(commandBuilder.clientKill(killArgs)); + } + + @Override + public RedisFuture clientPause(long timeout) { + return dispatch(commandBuilder.clientPause(timeout)); + } + + @Override + public RedisFuture clientList() { + return dispatch(commandBuilder.clientList()); + } + + @Override + public RedisFuture info() { + return dispatch(commandBuilder.info()); + } + + @Override + public RedisFuture info(String section) { + return dispatch(commandBuilder.info(section)); + } + + @Override + public RedisFuture dispatch(ProtocolKeyword type, CommandOutput output) { + + LettuceAssert.notNull(type, "Command type must not be null"); + LettuceAssert.notNull(output, "CommandOutput type must not be null"); + + return dispatch(new AsyncCommand<>(new Command<>(type, output))); + } + + @Override + public RedisFuture dispatch(ProtocolKeyword type, CommandOutput output, CommandArgs args) { + + LettuceAssert.notNull(type, "Command type must not be null"); + LettuceAssert.notNull(output, "CommandOutput type must not be null"); + LettuceAssert.notNull(args, "CommandArgs type must not be null"); + + return dispatch(new AsyncCommand<>(new Command<>(type, output, args))); + } + + public AsyncCommand dispatch(RedisCommand cmd) { + return (AsyncCommand) connection.dispatch(new AsyncCommand<>(cmd)); + } + + public void close() { + connection.close(); + } + + @Override + public boolean isOpen() { + return connection.isOpen(); + } + + @Override + public StatefulRedisSentinelConnection getStatefulConnection() { + return (StatefulRedisSentinelConnection) connection; + } +} diff --git 
a/src/main/java/io/lettuce/core/sentinel/RedisSentinelReactiveCommandsImpl.java b/src/main/java/io/lettuce/core/sentinel/RedisSentinelReactiveCommandsImpl.java new file mode 100644 index 0000000000..794f6e1882 --- /dev/null +++ b/src/main/java/io/lettuce/core/sentinel/RedisSentinelReactiveCommandsImpl.java @@ -0,0 +1,176 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.sentinel; + +import java.net.SocketAddress; +import java.util.Map; + +import reactor.core.publisher.Flux; +import reactor.core.publisher.Mono; +import io.lettuce.core.AbstractRedisReactiveCommands; +import io.lettuce.core.KillArgs; +import io.lettuce.core.api.StatefulConnection; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.output.CommandOutput; +import io.lettuce.core.protocol.Command; +import io.lettuce.core.protocol.CommandArgs; +import io.lettuce.core.protocol.ProtocolKeyword; +import io.lettuce.core.sentinel.api.StatefulRedisSentinelConnection; +import io.lettuce.core.sentinel.api.reactive.RedisSentinelReactiveCommands; + +/** + * A reactive and thread-safe API for a Redis Sentinel connection. + * + * @param Key type. + * @param Value type. 
+ * @author Mark Paluch + * @since 3.0 + */ +public class RedisSentinelReactiveCommandsImpl extends AbstractRedisReactiveCommands implements + RedisSentinelReactiveCommands { + + private final SentinelCommandBuilder commandBuilder; + + public RedisSentinelReactiveCommandsImpl(StatefulConnection connection, RedisCodec codec) { + super(connection, codec); + commandBuilder = new SentinelCommandBuilder(codec); + } + + @Override + public Mono getMasterAddrByName(K key) { + return createMono(() -> commandBuilder.getMasterAddrByKey(key)); + } + + @Override + public Flux> masters() { + return createDissolvingFlux(commandBuilder::masters); + } + + @Override + public Mono> master(K key) { + return createMono(() -> commandBuilder.master(key)); + } + + @Override + public Flux> slaves(K key) { + return createDissolvingFlux(() -> commandBuilder.slaves(key)); + } + + @Override + public Mono reset(K key) { + return createMono(() -> commandBuilder.reset(key)); + } + + @Override + public Mono failover(K key) { + return createMono(() -> commandBuilder.failover(key)); + } + + @Override + public Mono monitor(K key, String ip, int port, int quorum) { + return createMono(() -> commandBuilder.monitor(key, ip, port, quorum)); + } + + @Override + public Mono set(K key, String option, V value) { + return createMono(() -> commandBuilder.set(key, option, value)); + } + + @Override + public Mono remove(K key) { + return createMono(() -> commandBuilder.remove(key)); + } + + @Override + public Mono ping() { + return createMono(commandBuilder::ping); + } + + @Override + public Mono clientGetname() { + return createMono(commandBuilder::clientGetname); + } + + @Override + public Mono clientSetname(K name) { + return createMono(() -> commandBuilder.clientSetname(name)); + } + + @Override + public Mono clientKill(String addr) { + return createMono(() -> commandBuilder.clientKill(addr)); + } + + @Override + public Mono clientKill(KillArgs killArgs) { + return createMono(() -> commandBuilder.clientKill(killArgs)); + } + + @Override + public Mono clientPause(long timeout) { + return createMono(() -> commandBuilder.clientPause(timeout)); + } + + @Override + public Mono clientList() { + return createMono(commandBuilder::clientList); + } + + @Override + public Mono info() { + return createMono(commandBuilder::info); + } + + @Override + public Mono info(String section) { + return createMono(() -> commandBuilder.info(section)); + } + + @SuppressWarnings("unchecked") + public Flux dispatch(ProtocolKeyword type, CommandOutput output) { + + LettuceAssert.notNull(type, "Command type must not be null"); + LettuceAssert.notNull(output, "CommandOutput type must not be null"); + + return (Flux) createFlux(() -> new Command<>(type, output)); + } + + @SuppressWarnings("unchecked") + public Flux dispatch(ProtocolKeyword type, CommandOutput output, CommandArgs args) { + + LettuceAssert.notNull(type, "Command type must not be null"); + LettuceAssert.notNull(output, "CommandOutput type must not be null"); + LettuceAssert.notNull(args, "CommandArgs type must not be null"); + + return (Flux) createFlux(() -> new Command<>(type, output, args)); + } + + @Override + public void close() { + getStatefulConnection().close(); + } + + @Override + public boolean isOpen() { + return getStatefulConnection().isOpen(); + } + + @Override + public StatefulRedisSentinelConnection getStatefulConnection() { + return (StatefulRedisSentinelConnection) super.getConnection(); + } +} diff --git a/src/main/java/io/lettuce/core/sentinel/SentinelCommandBuilder.java 
b/src/main/java/io/lettuce/core/sentinel/SentinelCommandBuilder.java new file mode 100644 index 0000000000..908dc586b0 --- /dev/null +++ b/src/main/java/io/lettuce/core/sentinel/SentinelCommandBuilder.java @@ -0,0 +1,142 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.sentinel; + +import static io.lettuce.core.protocol.CommandKeyword.*; +import static io.lettuce.core.protocol.CommandType.*; + +import java.net.SocketAddress; +import java.util.List; +import java.util.Map; + +import io.lettuce.core.KillArgs; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.output.*; +import io.lettuce.core.protocol.BaseRedisCommandBuilder; +import io.lettuce.core.protocol.Command; +import io.lettuce.core.protocol.CommandArgs; +import io.lettuce.core.protocol.CommandKeyword; + +/** + * @author Mark Paluch + * @since 3.0 + */ +class SentinelCommandBuilder extends BaseRedisCommandBuilder { + + public SentinelCommandBuilder(RedisCodec codec) { + super(codec); + } + + public Command getMasterAddrByKey(K key) { + CommandArgs args = new CommandArgs<>(codec).add("get-master-addr-by-name").addKey(key); + return createCommand(SENTINEL, new SocketAddressOutput<>(codec), args); + } + + public Command>> masters() { + CommandArgs args = new CommandArgs<>(codec).add("masters"); + return createCommand(SENTINEL, new ListOfMapsOutput<>(codec), args); + } + + public Command> master(K key) { + CommandArgs args = new CommandArgs<>(codec).add("master").addKey(key); + return createCommand(SENTINEL, new MapOutput<>(codec), args); + } + + public Command>> slaves(K key) { + CommandArgs args = new CommandArgs<>(codec).add(SLAVES).addKey(key); + return createCommand(SENTINEL, new ListOfMapsOutput<>(codec), args); + } + + public Command reset(K key) { + CommandArgs args = new CommandArgs<>(codec).add(RESET).addKey(key); + return createCommand(SENTINEL, new IntegerOutput<>(codec), args); + } + + public Command failover(K key) { + CommandArgs args = new CommandArgs<>(codec).add(FAILOVER).addKey(key); + return createCommand(SENTINEL, new StatusOutput<>(codec), args); + } + + public Command monitor(K key, String ip, int port, int quorum) { + CommandArgs args = new CommandArgs<>(codec).add(MONITOR).addKey(key).add(ip).add(port).add(quorum); + return createCommand(SENTINEL, new StatusOutput<>(codec), args); + } + + public Command set(K key, String option, V value) { + CommandArgs args = new CommandArgs<>(codec).add(SET).addKey(key).add(option).addValue(value); + return createCommand(SENTINEL, new StatusOutput<>(codec), args); + } + + public Command clientGetname() { + CommandArgs args = new CommandArgs<>(codec).add(GETNAME); + return createCommand(CLIENT, new KeyOutput<>(codec), args); + } + + public Command clientSetname(K name) { + LettuceAssert.notNull(name, "Name must not be null"); + + CommandArgs args = new CommandArgs<>(codec).add(SETNAME).addKey(name); + return 
createCommand(CLIENT, new StatusOutput<>(codec), args); + } + + public Command clientKill(String addr) { + LettuceAssert.notNull(addr, "Addr must not be null"); + LettuceAssert.notEmpty(addr, "Addr must not be empty"); + + CommandArgs args = new CommandArgs<>(codec).add(KILL).add(addr); + return createCommand(CLIENT, new StatusOutput<>(codec), args); + } + + public Command clientKill(KillArgs killArgs) { + LettuceAssert.notNull(killArgs, "KillArgs must not be null"); + + CommandArgs args = new CommandArgs<>(codec).add(KILL); + killArgs.build(args); + return createCommand(CLIENT, new IntegerOutput<>(codec), args); + } + + public Command clientPause(long timeout) { + CommandArgs args = new CommandArgs<>(codec).add(PAUSE).add(timeout); + return createCommand(CLIENT, new StatusOutput<>(codec), args); + } + + public Command clientList() { + CommandArgs args = new CommandArgs<>(codec).add(LIST); + return createCommand(CLIENT, new StatusOutput<>(codec), args); + } + + public Command info() { + return createCommand(INFO, new StatusOutput<>(codec)); + } + + public Command info(String section) { + LettuceAssert.notNull(section, "Section must not be null"); + + CommandArgs args = new CommandArgs<>(codec).add(section); + return createCommand(INFO, new StatusOutput<>(codec), args); + } + + public Command ping() { + return createCommand(PING, new StatusOutput<>(codec)); + } + + public Command remove(K key) { + CommandArgs args = new CommandArgs<>(codec).add(CommandKeyword.REMOVE).addKey(key); + return createCommand(SENTINEL, new StatusOutput<>(codec), args); + } + +} diff --git a/src/main/java/io/lettuce/core/sentinel/StatefulRedisSentinelConnectionImpl.java b/src/main/java/io/lettuce/core/sentinel/StatefulRedisSentinelConnectionImpl.java new file mode 100644 index 0000000000..695d4fe144 --- /dev/null +++ b/src/main/java/io/lettuce/core/sentinel/StatefulRedisSentinelConnectionImpl.java @@ -0,0 +1,106 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
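Taken together, the Sentinel classes above are reached through a dedicated Sentinel connection. A short usage sketch; the Sentinel address and the master id are illustrative values:

```java
import java.net.SocketAddress;
import java.util.concurrent.ExecutionException;

import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisFuture;
import io.lettuce.core.RedisURI;
import io.lettuce.core.sentinel.api.StatefulRedisSentinelConnection;

public class SentinelLookupExample {

    public static void main(String[] args) throws ExecutionException, InterruptedException {
        RedisClient client = RedisClient.create();
        StatefulRedisSentinelConnection<String, String> sentinel = client
                .connectSentinel(RedisURI.create("redis://localhost:26379"));

        // Ask Sentinel for the current master of the monitored set "mymaster".
        RedisFuture<SocketAddress> master = sentinel.async().getMasterAddrByName("mymaster");
        System.out.println("Current master: " + master.get());

        sentinel.close();
        client.shutdown();
    }
}
```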
+ */ +package io.lettuce.core.sentinel; + +import java.time.Duration; +import java.util.Collection; + +import io.lettuce.core.ConnectionState; +import io.lettuce.core.RedisChannelHandler; +import io.lettuce.core.RedisChannelWriter; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.output.StatusOutput; +import io.lettuce.core.protocol.*; +import io.lettuce.core.sentinel.api.StatefulRedisSentinelConnection; +import io.lettuce.core.sentinel.api.async.RedisSentinelAsyncCommands; +import io.lettuce.core.sentinel.api.reactive.RedisSentinelReactiveCommands; +import io.lettuce.core.sentinel.api.sync.RedisSentinelCommands; + +/** + * @author Mark Paluch + */ +public class StatefulRedisSentinelConnectionImpl extends RedisChannelHandler + implements StatefulRedisSentinelConnection { + + protected final RedisCodec codec; + protected final RedisSentinelCommands sync; + protected final RedisSentinelAsyncCommands async; + protected final RedisSentinelReactiveCommands reactive; + + private final SentinelConnectionState connectionState = new SentinelConnectionState(); + + public StatefulRedisSentinelConnectionImpl(RedisChannelWriter writer, RedisCodec codec, Duration timeout) { + + super(writer, timeout); + + this.codec = codec; + this.async = new RedisSentinelAsyncCommandsImpl<>(this, codec); + this.sync = syncHandler(async, RedisSentinelCommands.class); + this.reactive = new RedisSentinelReactiveCommandsImpl<>(this, codec); + } + + @Override + public RedisCommand dispatch(RedisCommand command) { + return super.dispatch(command); + } + + @Override + public Collection> dispatch(Collection> commands) { + return super.dispatch(commands); + } + + @Override + public RedisSentinelCommands sync() { + return sync; + } + + @Override + public RedisSentinelAsyncCommands async() { + return async; + } + + @Override + public RedisSentinelReactiveCommands reactive() { + return reactive; + } + + /** + * @param clientName + * @deprecated since 6.0, use {@link RedisSentinelAsyncCommands#clientSetname(Object)}. + */ + @Deprecated + public void setClientName(String clientName) { + + CommandArgs args = new CommandArgs<>(StringCodec.UTF8).add(CommandKeyword.SETNAME).addValue(clientName); + AsyncCommand async = new AsyncCommand<>( + new Command<>(CommandType.CLIENT, new StatusOutput<>(StringCodec.UTF8), args)); + connectionState.setClientName(clientName); + + dispatch((RedisCommand) async); + } + + public ConnectionState getConnectionState() { + return connectionState; + } + + static class SentinelConnectionState extends ConnectionState { + @Override + protected void setClientName(String clientName) { + super.setClientName(clientName); + } + } +} diff --git a/src/main/java/io/lettuce/core/sentinel/api/StatefulRedisSentinelConnection.java b/src/main/java/io/lettuce/core/sentinel/api/StatefulRedisSentinelConnection.java new file mode 100644 index 0000000000..fb1f3cdbf8 --- /dev/null +++ b/src/main/java/io/lettuce/core/sentinel/api/StatefulRedisSentinelConnection.java @@ -0,0 +1,57 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.sentinel.api; + +import io.lettuce.core.api.StatefulConnection; +import io.lettuce.core.protocol.ConnectionWatchdog; +import io.lettuce.core.sentinel.api.async.RedisSentinelAsyncCommands; +import io.lettuce.core.sentinel.api.reactive.RedisSentinelReactiveCommands; +import io.lettuce.core.sentinel.api.sync.RedisSentinelCommands; + +/** + * A thread-safe connection to a redis server. Multiple threads may share one {@link StatefulRedisSentinelConnection}. + * + * A {@link ConnectionWatchdog} monitors each connection and reconnects automatically until {@link #close} is called. All + * pending commands will be (re)sent after successful reconnection. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.0 + */ +public interface StatefulRedisSentinelConnection extends StatefulConnection { + + /** + * Returns the {@link RedisSentinelCommands} API for the current connection. Does not create a new connection. + * + * @return the synchronous API for the underlying connection. + */ + RedisSentinelCommands sync(); + + /** + * Returns the {@link RedisSentinelAsyncCommands} API for the current connection. Does not create a new connection. + * + * @return the asynchronous API for the underlying connection. + */ + RedisSentinelAsyncCommands async(); + + /** + * Returns the {@link RedisSentinelReactiveCommands} API for the current connection. Does not create a new connection. + * + * @return the reactive API for the underlying connection. + */ + RedisSentinelReactiveCommands reactive(); +} diff --git a/src/main/java/io/lettuce/core/sentinel/api/async/RedisSentinelAsyncCommands.java b/src/main/java/io/lettuce/core/sentinel/api/async/RedisSentinelAsyncCommands.java new file mode 100644 index 0000000000..4a6ebd1f73 --- /dev/null +++ b/src/main/java/io/lettuce/core/sentinel/api/async/RedisSentinelAsyncCommands.java @@ -0,0 +1,218 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.sentinel.api.async; + +import java.net.SocketAddress; +import java.util.List; +import java.util.Map; + +import io.lettuce.core.KillArgs; +import io.lettuce.core.RedisFuture; +import io.lettuce.core.output.CommandOutput; +import io.lettuce.core.protocol.CommandArgs; +import io.lettuce.core.protocol.ProtocolKeyword; +import io.lettuce.core.sentinel.api.StatefulRedisSentinelConnection; + +/** + * Asynchronous executed commands for Redis Sentinel. + * + * @param Key type. + * @param Value type. 
+ * @author Mark Paluch + * @since 4.0 + * @generated by io.lettuce.apigenerator.CreateAsyncApi + */ +public interface RedisSentinelAsyncCommands { + + /** + * Return the ip and port number of the master with that name. + * + * @param key the key + * @return SocketAddress; + */ + RedisFuture getMasterAddrByName(K key); + + /** + * Enumerates all the monitored masters and their states. + * + * @return Map<K, V>> + */ + RedisFuture>> masters(); + + /** + * Show the state and info of the specified master. + * + * @param key the key + * @return Map<K, V> + */ + RedisFuture> master(K key); + + /** + * Provides a list of replicas for the master with the specified name. + * + * @param key the key + * @return List<Map<K, V>> + */ + RedisFuture>> slaves(K key); + + /** + * This command will reset all the masters with matching name. + * + * @param key the key + * @return Long + */ + RedisFuture reset(K key); + + /** + * Perform a failover. + * + * @param key the master id + * @return String + */ + RedisFuture failover(K key); + + /** + * This command tells the Sentinel to start monitoring a new master with the specified name, ip, port, and quorum. + * + * @param key the key + * @param ip the IP address + * @param port the port + * @param quorum the quorum count + * @return String + */ + RedisFuture monitor(K key, String ip, int port, int quorum); + + /** + * Multiple option / value pairs can be specified (or none at all). + * + * @param key the key + * @param option the option + * @param value the value + * + * @return String simple-string-reply {@code OK} if {@code SET} was executed correctly. + */ + RedisFuture set(K key, String option, V value); + + /** + * remove the specified master. + * + * @param key the key + * @return String + */ + RedisFuture remove(K key); + + /** + * Get the current connection name. + * + * @return K bulk-string-reply The connection name, or a null bulk reply if no name is set. + */ + RedisFuture clientGetname(); + + /** + * Set the current connection name. + * + * @param name the client name + * @return simple-string-reply {@code OK} if the connection name was successfully set. + */ + RedisFuture clientSetname(K name); + + /** + * Kill the connection of a client identified by ip:port. + * + * @param addr ip:port + * @return String simple-string-reply {@code OK} if the connection exists and has been closed + */ + RedisFuture clientKill(String addr); + + /** + * Kill connections of clients which are filtered by {@code killArgs} + * + * @param killArgs args for the kill operation + * @return Long integer-reply number of killed connections + */ + RedisFuture clientKill(KillArgs killArgs); + + /** + * Stop processing commands from clients for some time. + * + * @param timeout the timeout value in milliseconds + * @return String simple-string-reply The command returns OK or an error if the timeout is invalid. + */ + RedisFuture clientPause(long timeout); + + /** + * Get the list of client connections. + * + * @return String bulk-string-reply a unique string, formatted as follows: One client connection per line (separated by LF), + * each line is composed of a succession of property=value fields separated by a space character. + */ + RedisFuture clientList(); + + /** + * Get information and statistics about the server. + * + * @return String bulk-string-reply as a collection of text lines. + */ + RedisFuture info(); + + /** + * Get information and statistics about the server. 
+ * + * @param section the section type: string + * @return String bulk-string-reply as a collection of text lines. + */ + RedisFuture info(String section); + + /** + * Ping the server. + * + * @return String simple-string-reply + */ + RedisFuture ping(); + + /** + * Dispatch a command to the Redis Server. Please note the command output type must fit to the command response. + * + * @param type the command, must not be {@literal null}. + * @param output the command output, must not be {@literal null}. + * @param response type + * @return the command response + * @since 5.2 + */ + RedisFuture dispatch(ProtocolKeyword type, CommandOutput output); + + /** + * Dispatch a command to the Redis Server. Please note the command output type must fit to the command response. + * + * @param type the command, must not be {@literal null}. + * @param output the command output, must not be {@literal null}. + * @param args the command arguments, must not be {@literal null}. + * @param response type + * @return the command response + * @since 5.2 + */ + RedisFuture dispatch(ProtocolKeyword type, CommandOutput output, CommandArgs args); + + /** + * @return true if the connection is open (connected and not closed). + */ + boolean isOpen(); + + /** + * @return the underlying connection. + */ + StatefulRedisSentinelConnection getStatefulConnection(); +} diff --git a/src/main/java/io/lettuce/core/sentinel/api/async/package-info.java b/src/main/java/io/lettuce/core/sentinel/api/async/package-info.java new file mode 100644 index 0000000000..9e8e976232 --- /dev/null +++ b/src/main/java/io/lettuce/core/sentinel/api/async/package-info.java @@ -0,0 +1,4 @@ +/** + * Redis Sentinel API for asynchronous executed commands. + */ +package io.lettuce.core.sentinel.api.async; diff --git a/src/main/java/io/lettuce/core/sentinel/api/package-info.java b/src/main/java/io/lettuce/core/sentinel/api/package-info.java new file mode 100644 index 0000000000..9ff01b61e5 --- /dev/null +++ b/src/main/java/io/lettuce/core/sentinel/api/package-info.java @@ -0,0 +1,4 @@ +/** + * Redis Sentinel connection API. + */ +package io.lettuce.core.sentinel.api; diff --git a/src/main/java/io/lettuce/core/sentinel/api/reactive/RedisSentinelReactiveCommands.java b/src/main/java/io/lettuce/core/sentinel/api/reactive/RedisSentinelReactiveCommands.java new file mode 100644 index 0000000000..51f98ef55f --- /dev/null +++ b/src/main/java/io/lettuce/core/sentinel/api/reactive/RedisSentinelReactiveCommands.java @@ -0,0 +1,218 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.sentinel.api.reactive; + +import java.net.SocketAddress; +import java.util.Map; + +import reactor.core.publisher.Flux; +import reactor.core.publisher.Mono; +import io.lettuce.core.KillArgs; +import io.lettuce.core.output.CommandOutput; +import io.lettuce.core.protocol.CommandArgs; +import io.lettuce.core.protocol.ProtocolKeyword; +import io.lettuce.core.sentinel.api.StatefulRedisSentinelConnection; + +/** + * Reactive executed commands for Redis Sentinel. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.0 + * @generated by io.lettuce.apigenerator.CreateReactiveApi + */ +public interface RedisSentinelReactiveCommands { + + /** + * Return the ip and port number of the master with that name. + * + * @param key the key + * @return SocketAddress; + */ + Mono getMasterAddrByName(K key); + + /** + * Enumerates all the monitored masters and their states. + * + * @return Map<K, V>> + */ + Flux> masters(); + + /** + * Show the state and info of the specified master. + * + * @param key the key + * @return Map<K, V> + */ + Mono> master(K key); + + /** + * Provides a list of replicas for the master with the specified name. + * + * @param key the key + * @return Map<K, V> + */ + Flux> slaves(K key); + + /** + * This command will reset all the masters with matching name. + * + * @param key the key + * @return Long + */ + Mono reset(K key); + + /** + * Perform a failover. + * + * @param key the master id + * @return String + */ + Mono failover(K key); + + /** + * This command tells the Sentinel to start monitoring a new master with the specified name, ip, port, and quorum. + * + * @param key the key + * @param ip the IP address + * @param port the port + * @param quorum the quorum count + * @return String + */ + Mono monitor(K key, String ip, int port, int quorum); + + /** + * Multiple option / value pairs can be specified (or none at all). + * + * @param key the key + * @param option the option + * @param value the value + * + * @return String simple-string-reply {@code OK} if {@code SET} was executed correctly. + */ + Mono set(K key, String option, V value); + + /** + * remove the specified master. + * + * @param key the key + * @return String + */ + Mono remove(K key); + + /** + * Get the current connection name. + * + * @return K bulk-string-reply The connection name, or a null bulk reply if no name is set. + */ + Mono clientGetname(); + + /** + * Set the current connection name. + * + * @param name the client name + * @return simple-string-reply {@code OK} if the connection name was successfully set. + */ + Mono clientSetname(K name); + + /** + * Kill the connection of a client identified by ip:port. + * + * @param addr ip:port + * @return String simple-string-reply {@code OK} if the connection exists and has been closed + */ + Mono clientKill(String addr); + + /** + * Kill connections of clients which are filtered by {@code killArgs} + * + * @param killArgs args for the kill operation + * @return Long integer-reply number of killed connections + */ + Mono clientKill(KillArgs killArgs); + + /** + * Stop processing commands from clients for some time. + * + * @param timeout the timeout value in milliseconds + * @return String simple-string-reply The command returns OK or an error if the timeout is invalid. + */ + Mono clientPause(long timeout); + + /** + * Get the list of client connections. 
+ * + * @return String bulk-string-reply a unique string, formatted as follows: One client connection per line (separated by LF), + * each line is composed of a succession of property=value fields separated by a space character. + */ + Mono clientList(); + + /** + * Get information and statistics about the server. + * + * @return String bulk-string-reply as a collection of text lines. + */ + Mono info(); + + /** + * Get information and statistics about the server. + * + * @param section the section type: string + * @return String bulk-string-reply as a collection of text lines. + */ + Mono info(String section); + + /** + * Ping the server. + * + * @return String simple-string-reply + */ + Mono ping(); + + /** + * Dispatch a command to the Redis Server. Please note the command output type must fit to the command response. + * + * @param type the command, must not be {@literal null}. + * @param output the command output, must not be {@literal null}. + * @param response type + * @return the command response + * @since 5.2 + */ + Flux dispatch(ProtocolKeyword type, CommandOutput output); + + /** + * Dispatch a command to the Redis Server. Please note the command output type must fit to the command response. + * + * @param type the command, must not be {@literal null}. + * @param output the command output, must not be {@literal null}. + * @param args the command arguments, must not be {@literal null}. + * @param response type + * @return the command response + * @since 5.2 + */ + Flux dispatch(ProtocolKeyword type, CommandOutput output, CommandArgs args); + + /** + * @return true if the connection is open (connected and not closed). + */ + boolean isOpen(); + + /** + * @return the underlying connection. + */ + StatefulRedisSentinelConnection getStatefulConnection(); +} diff --git a/src/main/java/io/lettuce/core/sentinel/api/reactive/package-info.java b/src/main/java/io/lettuce/core/sentinel/api/reactive/package-info.java new file mode 100644 index 0000000000..2d19a46424 --- /dev/null +++ b/src/main/java/io/lettuce/core/sentinel/api/reactive/package-info.java @@ -0,0 +1,4 @@ +/** + * Redis Sentinel API for reactive command execution. + */ +package io.lettuce.core.sentinel.api.reactive; diff --git a/src/main/java/io/lettuce/core/sentinel/api/sync/RedisSentinelCommands.java b/src/main/java/io/lettuce/core/sentinel/api/sync/RedisSentinelCommands.java new file mode 100644 index 0000000000..50d40aa9e4 --- /dev/null +++ b/src/main/java/io/lettuce/core/sentinel/api/sync/RedisSentinelCommands.java @@ -0,0 +1,217 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.sentinel.api.sync; + +import java.net.SocketAddress; +import java.util.List; +import java.util.Map; + +import io.lettuce.core.KillArgs; +import io.lettuce.core.output.CommandOutput; +import io.lettuce.core.protocol.CommandArgs; +import io.lettuce.core.protocol.ProtocolKeyword; +import io.lettuce.core.sentinel.api.StatefulRedisSentinelConnection; + +/** + * Synchronous executed commands for Redis Sentinel. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.0 + * @generated by io.lettuce.apigenerator.CreateSyncApi + */ +public interface RedisSentinelCommands { + + /** + * Return the ip and port number of the master with that name. + * + * @param key the key + * @return SocketAddress; + */ + SocketAddress getMasterAddrByName(K key); + + /** + * Enumerates all the monitored masters and their states. + * + * @return Map<K, V>> + */ + List> masters(); + + /** + * Show the state and info of the specified master. + * + * @param key the key + * @return Map<K, V> + */ + Map master(K key); + + /** + * Provides a list of replicas for the master with the specified name. + * + * @param key the key + * @return List<Map<K, V>> + */ + List> slaves(K key); + + /** + * This command will reset all the masters with matching name. + * + * @param key the key + * @return Long + */ + Long reset(K key); + + /** + * Perform a failover. + * + * @param key the master id + * @return String + */ + String failover(K key); + + /** + * This command tells the Sentinel to start monitoring a new master with the specified name, ip, port, and quorum. + * + * @param key the key + * @param ip the IP address + * @param port the port + * @param quorum the quorum count + * @return String + */ + String monitor(K key, String ip, int port, int quorum); + + /** + * Multiple option / value pairs can be specified (or none at all). + * + * @param key the key + * @param option the option + * @param value the value + * + * @return String simple-string-reply {@code OK} if {@code SET} was executed correctly. + */ + String set(K key, String option, V value); + + /** + * remove the specified master. + * + * @param key the key + * @return String + */ + String remove(K key); + + /** + * Get the current connection name. + * + * @return K bulk-string-reply The connection name, or a null bulk reply if no name is set. + */ + K clientGetname(); + + /** + * Set the current connection name. + * + * @param name the client name + * @return simple-string-reply {@code OK} if the connection name was successfully set. + */ + String clientSetname(K name); + + /** + * Kill the connection of a client identified by ip:port. + * + * @param addr ip:port + * @return String simple-string-reply {@code OK} if the connection exists and has been closed + */ + String clientKill(String addr); + + /** + * Kill connections of clients which are filtered by {@code killArgs} + * + * @param killArgs args for the kill operation + * @return Long integer-reply number of killed connections + */ + Long clientKill(KillArgs killArgs); + + /** + * Stop processing commands from clients for some time. + * + * @param timeout the timeout value in milliseconds + * @return String simple-string-reply The command returns OK or an error if the timeout is invalid. + */ + String clientPause(long timeout); + + /** + * Get the list of client connections. 
+ * + * @return String bulk-string-reply a unique string, formatted as follows: One client connection per line (separated by LF), + * each line is composed of a succession of property=value fields separated by a space character. + */ + String clientList(); + + /** + * Get information and statistics about the server. + * + * @return String bulk-string-reply as a collection of text lines. + */ + String info(); + + /** + * Get information and statistics about the server. + * + * @param section the section type: string + * @return String bulk-string-reply as a collection of text lines. + */ + String info(String section); + + /** + * Ping the server. + * + * @return String simple-string-reply + */ + String ping(); + + /** + * Dispatch a command to the Redis Server. Please note the command output type must fit to the command response. + * + * @param type the command, must not be {@literal null}. + * @param output the command output, must not be {@literal null}. + * @param response type + * @return the command response + * @since 5.2 + */ + T dispatch(ProtocolKeyword type, CommandOutput output); + + /** + * Dispatch a command to the Redis Server. Please note the command output type must fit to the command response. + * + * @param type the command, must not be {@literal null}. + * @param output the command output, must not be {@literal null}. + * @param args the command arguments, must not be {@literal null}. + * @param response type + * @return the command response + * @since 5.2 + */ + T dispatch(ProtocolKeyword type, CommandOutput output, CommandArgs args); + + /** + * @return true if the connection is open (connected and not closed). + */ + boolean isOpen(); + + /** + * @return the underlying connection. + */ + StatefulRedisSentinelConnection getStatefulConnection(); +} diff --git a/src/main/java/io/lettuce/core/sentinel/api/sync/package-info.java b/src/main/java/io/lettuce/core/sentinel/api/sync/package-info.java new file mode 100644 index 0000000000..f9c00bcc69 --- /dev/null +++ b/src/main/java/io/lettuce/core/sentinel/api/sync/package-info.java @@ -0,0 +1,4 @@ +/** + * Redis Sentinel API for synchronous executed commands. + */ +package io.lettuce.core.sentinel.api.sync; diff --git a/src/main/java/io/lettuce/core/sentinel/package-info.java b/src/main/java/io/lettuce/core/sentinel/package-info.java new file mode 100644 index 0000000000..5b5ab61d61 --- /dev/null +++ b/src/main/java/io/lettuce/core/sentinel/package-info.java @@ -0,0 +1,4 @@ +/** + * Redis Sentinel connection classes. + */ +package io.lettuce.core.sentinel; diff --git a/src/main/java/com/lambdaworks/redis/support/AbstractCdiBean.java b/src/main/java/io/lettuce/core/support/AbstractCdiBean.java similarity index 75% rename from src/main/java/com/lambdaworks/redis/support/AbstractCdiBean.java rename to src/main/java/io/lettuce/core/support/AbstractCdiBean.java index f9c50af1f2..0b9326c4f2 100644 --- a/src/main/java/com/lambdaworks/redis/support/AbstractCdiBean.java +++ b/src/main/java/io/lettuce/core/support/AbstractCdiBean.java @@ -1,4 +1,19 @@ -package com.lambdaworks.redis.support; +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.support; import java.lang.annotation.Annotation; import java.lang.reflect.Type; @@ -12,8 +27,8 @@ import javax.enterprise.inject.spi.BeanManager; import javax.enterprise.inject.spi.InjectionPoint; -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.resource.ClientResources; +import io.lettuce.core.RedisURI; +import io.lettuce.core.resource.ClientResources; /** * @author Mark Paluch diff --git a/src/main/java/io/lettuce/core/support/AsyncConnectionPoolSupport.java b/src/main/java/io/lettuce/core/support/AsyncConnectionPoolSupport.java new file mode 100644 index 0000000000..bcab04d0b1 --- /dev/null +++ b/src/main/java/io/lettuce/core/support/AsyncConnectionPoolSupport.java @@ -0,0 +1,189 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.support; + +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.CompletionStage; +import java.util.concurrent.atomic.AtomicReference; +import java.util.function.Supplier; + +import io.lettuce.core.ClientOptions; +import io.lettuce.core.api.StatefulConnection; +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.support.ConnectionWrapping.HasTargetConnection; +import io.lettuce.core.support.ConnectionWrapping.Origin; + +/** + * Asynchronous connection pool support for {@link BoundedAsyncPool}. Connection pool creation requires a {@link Supplier} that + * connects asynchronously to Redis. The pool can allocate either wrapped or direct connections. + *
+ * <ul>
+ * <li>Wrapped instances will return the connection back to the pool when {@link StatefulConnection#close()}/
+ * {@link StatefulConnection#closeAsync()} is called.</li>
+ * <li>Regular connections need to be returned to the pool with {@link AsyncPool#release(Object)}.</li>
+ * </ul>
+ * <p>
+ * Lettuce connections are designed to be thread-safe, so one connection can be shared amongst multiple threads, and Lettuce
+ * connections {@link ClientOptions#isAutoReconnect() auto-reconnect} by default. Connection pooling with Lettuce can be
+ * required when you're invoking Redis operations in multiple threads and you use
+ * <ul>
+ * <li>blocking commands such as {@code BLPOP}.</li>
+ * <li>transactions ({@code MULTI}/{@code EXEC}).</li>
+ * <li>{@link StatefulConnection#setAutoFlushCommands(boolean) command batching}.</li>
+ * </ul>
+ * <p>
+ * Transactions and command batching affect connection state. Blocking commands won't propagate queued commands to Redis until
+ * the blocking command is completed.
+ *
+ * <h3>Example</h3>
+ *
+ * <pre class="code">
+ * // application initialization
+ * RedisClusterClient clusterClient = RedisClusterClient.create(RedisURI.create(host, port));
+ * AsyncPool<StatefulRedisConnection<String, String>> pool = AsyncConnectionPoolSupport.createBoundedObjectPool(
+ *         () -> clusterClient.connectAsync(), BoundedPoolConfig.create());
+ *
+ * // executing work
+ * CompletableFuture<String> pingResponse = pool.acquire().thenCompose(c -> {
+ *
+ *     return c.async().ping().whenComplete((s, throwable) -> pool.release(c));
+ * });
+ *
+ * // terminating
+ * CompletableFuture<Void> poolClose = pool.closeAsync();
+ *
+ * // after poolClose completes:
+ * CompletableFuture<Void> closeFuture = clusterClient.shutdownAsync();
+ * </pre>
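+ *
+ * <p>
+ * For illustration, a wrapped connection (the default) can also be used in a try-with-resources block, assuming the
+ * {@code pool} from the example above; {@link StatefulConnection#close()} then hands the connection back to the pool
+ * instead of closing it:
+ *
+ * <pre class="code">
+ * // join() blocks until the acquire future completes; shown here as a sketch only
+ * try (StatefulRedisConnection<String, String> connection = pool.acquire().join()) {
+ *     connection.sync().ping();
+ * }
+ * </pre>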
    + * + * @author Mark Paluch + * @since 5.1 + */ +public abstract class AsyncConnectionPoolSupport { + + private AsyncConnectionPoolSupport() { + } + + /** + * Creates a new {@link BoundedAsyncPool} using the {@link Supplier}. Allocated instances are wrapped and must not be + * returned with {@link AsyncPool#release(Object)}. + * + * @param connectionSupplier must not be {@literal null}. + * @param config must not be {@literal null}. + * @param connection type. + * @return the connection pool. + */ + public static > BoundedAsyncPool createBoundedObjectPool( + Supplier> connectionSupplier, BoundedPoolConfig config) { + return createBoundedObjectPool(connectionSupplier, config, true); + } + + /** + * Creates a new {@link BoundedAsyncPool} using the {@link Supplier}. + * + * @param connectionSupplier must not be {@literal null}. + * @param config must not be {@literal null}. + * @param wrapConnections {@literal false} to return direct connections that need to be returned to the pool using + * {@link AsyncPool#release(Object)}. {@literal true} to return wrapped connection that are returned to the pool when + * invoking {@link StatefulConnection#close()}/{@link StatefulConnection#closeAsync()}. + * @param connection type. + * @return the connection pool. + */ + @SuppressWarnings("unchecked") + public static > BoundedAsyncPool createBoundedObjectPool( + Supplier> connectionSupplier, BoundedPoolConfig config, boolean wrapConnections) { + + LettuceAssert.notNull(connectionSupplier, "Connection supplier must not be null"); + LettuceAssert.notNull(config, "BoundedPoolConfig must not be null"); + + AtomicReference> poolRef = new AtomicReference<>(); + + BoundedAsyncPool pool = new BoundedAsyncPool(new RedisPooledObjectFactory(connectionSupplier), config) { + + @Override + public CompletableFuture acquire() { + + CompletableFuture acquire = super.acquire(); + + if (wrapConnections) { + return acquire.thenApply(it -> ConnectionWrapping.wrapConnection(it, poolRef.get())); + } + + return acquire; + } + + @Override + public CompletableFuture release(T object) { + + if (wrapConnections && object instanceof HasTargetConnection) { + return super.release((T) ((HasTargetConnection) object).getTargetConnection()); + } + + return super.release(object); + } + }; + + poolRef.set(new AsyncPoolWrapper<>(pool)); + + return pool; + } + + /** + * @author Mark Paluch + * @since 5.1 + */ + private static class RedisPooledObjectFactory> implements AsyncObjectFactory { + + private final Supplier> connectionSupplier; + + RedisPooledObjectFactory(Supplier> connectionSupplier) { + this.connectionSupplier = connectionSupplier; + } + + @Override + public CompletableFuture create() { + return connectionSupplier.get().toCompletableFuture(); + } + + @Override + public CompletableFuture destroy(T object) { + return object.closeAsync(); + } + + @Override + public CompletableFuture validate(T object) { + return CompletableFuture.completedFuture(object.isOpen()); + } + } + + private static class AsyncPoolWrapper implements Origin { + + private final AsyncPool pool; + + AsyncPoolWrapper(AsyncPool pool) { + this.pool = pool; + } + + @Override + public void returnObject(T o) { + returnObjectAsync(o).join(); + } + + @Override + public CompletableFuture returnObjectAsync(T o) { + return pool.release(o); + } + } +} diff --git a/src/main/java/io/lettuce/core/support/AsyncObjectFactory.java b/src/main/java/io/lettuce/core/support/AsyncObjectFactory.java new file mode 100644 index 0000000000..3738693061 --- /dev/null +++ 
b/src/main/java/io/lettuce/core/support/AsyncObjectFactory.java @@ -0,0 +1,60 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.support; + +import java.util.concurrent.CompletableFuture; + +import org.apache.commons.pool2.PooledObject; + +/** + * An interface defining life-cycle methods for instances to be served by an pool. + * + * @param Type of element managed in this factory. + * @author Mark Paluch + * @since 5.1 + */ +public interface AsyncObjectFactory { + + /** + * Create an instance that can be served by the pool and wrap it in a {@link PooledObject} to be managed by the pool. + * + * @return a {@code PooledObject} wrapping an instance that can be served by the pool + */ + CompletableFuture create(); + + /** + * Destroys an instance no longer needed by the pool. + *
+ * <p>
    + * It is important for implementations of this method to be aware that there is no guarantee about what state {@code object} + * will be in and the implementation should be prepared to handle unexpected errors. + *
+ * <p>
    + * Also, an implementation must take in to consideration that instances lost to the garbage collector may never be + * destroyed. + * + * @param object a {@code PooledObject} wrapping the instance to be destroyed + * @see #validate + */ + CompletableFuture destroy(T object); + + /** + * Ensures that the instance is safe to be returned by the pool. + * + * @param object a {@code PooledObject} wrapping the instance to be validated + * + * @return {@literal false} if {@code object} is not valid and should be dropped from the pool, {@literal true} otherwise. + */ + CompletableFuture validate(T object); +} diff --git a/src/main/java/io/lettuce/core/support/AsyncPool.java b/src/main/java/io/lettuce/core/support/AsyncPool.java new file mode 100644 index 0000000000..8ceccfc164 --- /dev/null +++ b/src/main/java/io/lettuce/core/support/AsyncPool.java @@ -0,0 +1,67 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.support; + +import java.io.Closeable; +import java.util.concurrent.CompletableFuture; + +import io.lettuce.core.internal.AsyncCloseable; + +/** + * Interface declaring non-blocking object pool methods allowing to {@link #acquire()} and {@link #release(Object)} objects. All + * activity of a pool task outcome is communicated through the returned {@link CompletableFuture}. + * + * @author Mark Paluch + * @since 5.1 + */ +public interface AsyncPool extends Closeable, AsyncCloseable { + + /** + * Acquire an object from this {@link AsyncPool}. The returned {@link CompletableFuture} is notified once the acquire is + * successful and failed otherwise. Behavior upon acquiring objects from an exhausted pool depends on the actual pool + * implementation whether requests are rejected immediately (exceptional completion with + * {@link java.util.NoSuchElementException}) or delayed after exceeding a particular timeout ( + * {@link java.util.concurrent.TimeoutException}). + * + * It's required that an acquired object is always released to the pool again once the object is no longer in + * use.. + */ + CompletableFuture acquire(); + + /** + * Release an object back to this {@link AsyncPool}. The returned {@link CompletableFuture} is notified once the release is + * successful and failed otherwise. When failed the object will automatically disposed. + * + * @param object the object to be released. The object must have been acquired from this pool. + */ + CompletableFuture release(T object); + + /** + * Clear the pool. + */ + void clear(); + + /** + * Clear the pool. + */ + CompletableFuture clearAsync(); + + @Override + void close(); + + @Override + CompletableFuture closeAsync(); +} diff --git a/src/main/java/io/lettuce/core/support/BasePool.java b/src/main/java/io/lettuce/core/support/BasePool.java new file mode 100644 index 0000000000..bb7536af1a --- /dev/null +++ b/src/main/java/io/lettuce/core/support/BasePool.java @@ -0,0 +1,87 @@ +/* + * Copyright 2017-2020 the original author or authors. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.support; + +import io.lettuce.core.internal.LettuceAssert; + +/** + * Basic implementation of a pool configured through {@link BasePoolConfig}. + * + * @author Mark Paluch + * @since 5.1 + */ +public abstract class BasePool { + + private final boolean testOnCreate; + private final boolean testOnAcquire; + private final boolean testOnRelease; + + /** + * Create a new pool given {@link BasePoolConfig}. + * + * @param poolConfig must not be {@literal null}. + */ + protected BasePool(BasePoolConfig poolConfig) { + + LettuceAssert.notNull(poolConfig, "PoolConfig must not be null"); + + this.testOnCreate = poolConfig.isTestOnCreate(); + this.testOnAcquire = poolConfig.isTestOnAcquire(); + this.testOnRelease = poolConfig.isTestOnRelease(); + } + + /** + * Returns whether objects created for the pool will be validated before being returned from the acquire method. Validation + * is performed by the {@link AsyncObjectFactory#validate(Object)} method of the factory associated with the pool. If the + * object fails to validate, then acquire will fail. + * + * @return {@literal true} if newly created objects are validated before being returned from the acquire method. + */ + public boolean isTestOnCreate() { + return testOnCreate; + } + + /** + * Returns whether objects acquired from the pool will be validated before being returned from the acquire method. + * Validation is performed by the {@link AsyncObjectFactory#validate(Object)} method of the factory associated with the + * pool. If the object fails to validate, it will be removed from the pool and destroyed, and a new attempt will be made to + * borrow an object from the pool. + * + * @return {@literal true} if objects are validated before being returned from the acquire method. + */ + public boolean isTestOnAcquire() { + return testOnAcquire; + } + + /** + * Returns whether objects borrowed from the pool will be validated when they are returned to the pool via the release + * method. Validation is performed by the {@link AsyncObjectFactory#validate(Object)} method of the factory associated with + * the pool. Returning objects that fail validation are destroyed rather then being returned the pool. + * + * @return {@literal true} if objects are validated on return to the pool via the release method. + */ + public boolean isTestOnRelease() { + return testOnRelease; + } + + /** + * Set the {@link StackTraceElement} for the given {@link Throwable}, using the {@link Class} and method name. 
+ */ + static T unknownStackTrace(T cause, Class clazz, String method) { + cause.setStackTrace(new StackTraceElement[] { new StackTraceElement(clazz.getName(), method, null, -1) }); + return cause; + } +} diff --git a/src/main/java/io/lettuce/core/support/BasePoolConfig.java b/src/main/java/io/lettuce/core/support/BasePoolConfig.java new file mode 100644 index 0000000000..09c39a69c0 --- /dev/null +++ b/src/main/java/io/lettuce/core/support/BasePoolConfig.java @@ -0,0 +1,171 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.support; + +/** + * Base configuration for an object pool declaring options for object validation. Typically used as base class for configuration + * objects for specific pool implementations. + * + * @author Mark Paluch + * @since 5.1 + */ +public abstract class BasePoolConfig { + + /** + * The default value for the {@code testOnCreate} configuration attribute. + */ + public static final boolean DEFAULT_TEST_ON_CREATE = false; + + /** + * The default value for the {@code testOnAcquire} configuration attribute. + */ + public static final boolean DEFAULT_TEST_ON_ACQUIRE = false; + + /** + * The default value for the {@code testOnRelease} configuration attribute. + */ + public static final boolean DEFAULT_TEST_ON_RELEASE = false; + + private final boolean testOnCreate; + private final boolean testOnAcquire; + private final boolean testOnRelease; + + protected BasePoolConfig(boolean testOnCreate, boolean testOnAcquire, boolean testOnRelease) { + + this.testOnCreate = testOnCreate; + this.testOnAcquire = testOnAcquire; + this.testOnRelease = testOnRelease; + } + + /** + * Get the value for the {@code testOnCreate} configuration attribute for pools created with this configuration instance. + * + * @return the current setting of {@code testOnCreate} for this configuration instance. + */ + public boolean isTestOnCreate() { + return testOnCreate; + } + + /** + * Get the value for the {@code testOnAcquire} configuration attribute for pools created with this configuration instance. + * + * @return the current setting of {@code testOnAcquire} for this configuration instance. + */ + public boolean isTestOnAcquire() { + return testOnAcquire; + } + + /** + * Get the value for the {@code testOnRelease} configuration attribute for pools created with this configuration instance. + * + * @return the current setting of {@code testOnRelease} for this configuration instance. + */ + public boolean isTestOnRelease() { + return testOnRelease; + } + + /** + * Builder for {@link BasePoolConfig}. + */ + public abstract static class Builder { + + protected boolean testOnCreate = DEFAULT_TEST_ON_CREATE; + protected boolean testOnAcquire = DEFAULT_TEST_ON_ACQUIRE; + protected boolean testOnRelease = DEFAULT_TEST_ON_RELEASE; + + protected Builder() { + } + + /** + * Enables validation of objects before being returned from the acquire method. 
Validation is performed by the + * {@link AsyncObjectFactory#validate(Object)} method of the factory associated with the pool. If the object fails to + * validate, then acquire will fail. + * + * @return {@code this} {@link Builder}. + */ + public Builder testOnCreate() { + return testOnCreate(true); + } + + /** + * Configures whether objects created for the pool will be validated before being returned from the acquire method. + * Validation is performed by the {@link AsyncObjectFactory#validate(Object)} method of the factory associated with the + * pool. If the object fails to validate, then acquire will fail. + * + * @param testOnCreate {@literal true} if newly created objects should be validated before being returned from the + * acquire method. {@literal true} to enable test on creation. + * + * @return {@code this} {@link Builder}. + */ + public Builder testOnCreate(boolean testOnCreate) { + + this.testOnCreate = testOnCreate; + return this; + } + + /** + * Enables validation of objects before being returned from the acquire method. Validation is performed by the + * {@link AsyncObjectFactory#validate(Object)} method of the factory associated with the pool. If the object fails to + * validate, it will be removed from the pool and destroyed, and a new attempt will be made to borrow an object from the + * pool. + * + * @return {@code this} {@link Builder}. + */ + public Builder testOnAcquire() { + return testOnAcquire(true); + } + + /** + * Configures whether objects acquired from the pool will be validated before being returned from the acquire method. + * Validation is performed by the {@link AsyncObjectFactory#validate(Object)} method of the factory associated with the + * pool. If the object fails to validate, it will be removed from the pool and destroyed, and a new attempt will be made + * to borrow an object from the pool. + * + * @param testOnAcquire {@literal true} if objects should be validated before being returned from the acquire method. + * @return {@code this} {@link Builder}. + */ + public Builder testOnAcquire(boolean testOnAcquire) { + + this.testOnAcquire = testOnAcquire; + return this; + } + + /** + * Enables validation of objects when they are returned to the pool via the release method. Validation is performed by + * the {@link AsyncObjectFactory#validate(Object)} method of the factory associated with the pool. Returning objects + * that fail validation are destroyed rather then being returned the pool. + * + * @return {@code this} {@link Builder}. + */ + public Builder testOnRelease() { + return testOnRelease(true); + } + + /** + * Configures whether objects borrowed from the pool will be validated when they are returned to the pool via the + * release method. Validation is performed by the {@link AsyncObjectFactory#validate(Object)} method of the factory + * associated with the pool. Returning objects that fail validation are destroyed rather then being returned the pool. + * + * @param testOnRelease {@literal true} if objects should be validated on return to the pool via the release method. + * @return {@code this} {@link Builder}. 
+ */ + public Builder testOnRelease(boolean testOnRelease) { + + this.testOnRelease = testOnRelease; + return this; + } + } +} diff --git a/src/main/java/io/lettuce/core/support/BoundedAsyncPool.java b/src/main/java/io/lettuce/core/support/BoundedAsyncPool.java new file mode 100644 index 0000000000..1485295103 --- /dev/null +++ b/src/main/java/io/lettuce/core/support/BoundedAsyncPool.java @@ -0,0 +1,452 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.support; + +import java.util.ArrayList; +import java.util.List; +import java.util.NoSuchElementException; +import java.util.Queue; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.ConcurrentLinkedQueue; +import java.util.concurrent.atomic.AtomicInteger; + +import io.lettuce.core.internal.Futures; +import io.lettuce.core.internal.LettuceAssert; + +/** + * Bounded asynchronous object pool. This object pool allows pre-warming with {@link BoundedPoolConfig#getMinIdle() idle} + * objects upon construction. The pool is stateful and requires {@link #closeAsync() cleanup} once it's no longer in use. + *
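+ * <p>
+ * For illustration, a minimal sketch of creating and using a pool directly, assuming a previously defined
+ * {@link AsyncObjectFactory} named {@code connectionFactory} (a hypothetical placeholder):
+ *
+ * <pre class="code">
+ * BoundedAsyncPool<StatefulRedisConnection<String, String>> pool = new BoundedAsyncPool<>(connectionFactory,
+ *         BoundedPoolConfig.builder().maxTotal(16).minIdle(2).build());
+ *
+ * // acquire, use, and release an object
+ * pool.acquire().thenCompose(connection -> connection.async().ping()
+ *         .whenComplete((pong, error) -> pool.release(connection)));
+ *
+ * // dispose the pool once it is no longer needed
+ * pool.closeAsync();
+ * </pre>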
+ * <p>
    + * Object pool bounds are maintained on a best-effort basis as bounds are maintained upon object request whereas the actual + * object creation might finish at a later time. You might see temporarily slight differences in object usage vs. pool count due + * to asynchronous processing vs. protecting the pool from exceed its bounds. + * + * @author Mark Paluch + * @since 5.1 + * @see BoundedPoolConfig + * @see AsyncObjectFactory + */ +public class BoundedAsyncPool extends BasePool implements AsyncPool { + + private static final CompletableFuture COMPLETED = CompletableFuture.completedFuture(null); + + private static final IllegalStateException POOL_SHUTDOWN = unknownStackTrace(new IllegalStateException( + "AsyncPool is closed"), BoundedAsyncPool.class, "acquire()"); + + private static final NoSuchElementException POOL_EXHAUSTED = unknownStackTrace( + new NoSuchElementException("Pool exhausted"), BoundedAsyncPool.class, "acquire()"); + + private static final IllegalStateException NOT_PART_OF_POOL = unknownStackTrace(new IllegalStateException( + "Returned object not currently part of this pool"), BoundedAsyncPool.class, "release()"); + + private final int maxTotal; + private final int maxIdle; + private final int minIdle; + + private final AsyncObjectFactory factory; + + private final Queue cache; + private final Queue all; + + private final AtomicInteger objectCount = new AtomicInteger(); + private final AtomicInteger objectsInCreationCount = new AtomicInteger(); + private final AtomicInteger idleCount = new AtomicInteger(); + + private final CompletableFuture closeFuture = new CompletableFuture<>(); + + private volatile State state = State.ACTIVE; + + /** + * Create a new {@link BoundedAsyncPool} given {@link BasePoolConfig} and {@link AsyncObjectFactory}. The factory creates + * idle objects upon construction and requires {@link #closeAsync() termination} once it's no longer in use. + * + * @param factory must not be {@literal null}. + * @param poolConfig must not be {@literal null}. 
+ */ + public BoundedAsyncPool(AsyncObjectFactory factory, BoundedPoolConfig poolConfig) { + + super(poolConfig); + + LettuceAssert.notNull(factory, "AsyncObjectFactory must not be null"); + + this.maxTotal = poolConfig.getMaxTotal(); + this.maxIdle = poolConfig.getMaxIdle(); + this.minIdle = poolConfig.getMinIdle(); + + this.factory = factory; + + this.cache = new ConcurrentLinkedQueue<>(); + this.all = new ConcurrentLinkedQueue<>(); + + createIdle(); + } + + private void createIdle() { + + int potentialIdle = getMinIdle() - getIdle(); + if (potentialIdle <= 0 || !isPoolActive()) { + return; + } + + int totalLimit = getAvailableCapacity(); + int toCreate = Math.min(Math.max(0, totalLimit), potentialIdle); + + for (int i = 0; i < toCreate; i++) { + + if (getAvailableCapacity() <= 0) { + break; + } + + CompletableFuture future = new CompletableFuture<>(); + makeObject0(future); + + future.thenAccept(it -> { + + if (isPoolActive()) { + idleCount.incrementAndGet(); + cache.add(it); + } else { + factory.destroy(it); + } + }); + } + } + + private int getAvailableCapacity() { + return getMaxTotal() - (getCreationInProgress() + getObjectCount()); + } + + @Override + public CompletableFuture acquire() { + + T object = cache.poll(); + + CompletableFuture res = new CompletableFuture<>(); + acquire0(object, res); + + return res; + } + + private void acquire0(T object, CompletableFuture res) { + + if (object != null) { + + idleCount.decrementAndGet(); + + if (isTestOnAcquire()) { + + factory.validate(object).whenComplete((state, throwable) -> { + + if (!isPoolActive()) { + res.completeExceptionally(POOL_SHUTDOWN); + return; + } + + if (state != null && state) { + + completeAcquire(res, object); + + return; + } + + destroy0(object).whenComplete((aVoid, th) -> makeObject0(res)); + }); + + return; + } + + if (isPoolActive()) { + completeAcquire(res, object); + } else { + res.completeExceptionally(POOL_SHUTDOWN); + } + + createIdle(); + return; + } + + long objects = (long) (getObjectCount() + getCreationInProgress()); + + if ((long) getMaxTotal() >= (objects + 1)) { + makeObject0(res); + return; + } + + res.completeExceptionally(POOL_EXHAUSTED); + } + + private void makeObject0(CompletableFuture res) { + + long total = getObjectCount(); + long creations = objectsInCreationCount.incrementAndGet(); + + if (((long) getMaxTotal()) < total + creations) { + + res.completeExceptionally(POOL_EXHAUSTED); + objectsInCreationCount.decrementAndGet(); + return; + } + + factory.create().whenComplete( + (o, t) -> { + + if (t != null) { + objectsInCreationCount.decrementAndGet(); + res.completeExceptionally(new IllegalStateException("Cannot allocate object", t)); + return; + } + + if (isTestOnCreate()) { + + factory.validate(o).whenComplete( + (state, throwable) -> { + + try { + + if (isPoolActive() && state != null && state) { + + objectCount.incrementAndGet(); + all.add(o); + + completeAcquire(res, o); + return; + } + + if (!isPoolActive()) { + rejectPoolClosed(res, o); + return; + } + + factory.destroy(o).whenComplete( + (v, th) -> res.completeExceptionally(new IllegalStateException( + "Cannot allocate object: Validation failed", throwable))); + } catch (Exception e) { + factory.destroy(o).whenComplete( + (v, th) -> res.completeExceptionally(new IllegalStateException( + "Cannot allocate object: Validation failed", throwable))); + } finally { + objectsInCreationCount.decrementAndGet(); + } + }); + + return; + } + + try { + + if (isPoolActive()) { + objectCount.incrementAndGet(); + all.add(o); + + completeAcquire(res, 
o); + } else { + rejectPoolClosed(res, o); + } + + } catch (Exception e) { + + objectCount.decrementAndGet(); + all.remove(o); + + factory.destroy(o).whenComplete((v, th) -> res.completeExceptionally(e)); + } finally { + objectsInCreationCount.decrementAndGet(); + } + }); + } + + private void completeAcquire(CompletableFuture res, T o) { + + if (res.isCancelled()) { + return0(o); + } else { + res.complete(o); + } + } + + private void rejectPoolClosed(CompletableFuture res, T o) { + + factory.destroy(o); + res.completeExceptionally(POOL_SHUTDOWN); + } + + @Override + public CompletableFuture release(T object) { + + if (!all.contains(object)) { + return Futures.failed(NOT_PART_OF_POOL); + } + + if (idleCount.get() >= getMaxIdle()) { + return destroy0(object); + } + + if (isTestOnRelease()) { + + CompletableFuture valid = factory.validate(object); + CompletableFuture res = new CompletableFuture<>(); + + valid.whenComplete((state1, throwable) -> { + + if (state1 != null && state1) { + return0(object).whenComplete((x, y) -> res.complete(null)); + } else { + destroy0(object).whenComplete((x, y) -> res.complete(null)); + } + }); + + return res; + } + + return return0(object); + } + + private CompletableFuture return0(T object) { + + int idleCount = this.idleCount.incrementAndGet(); + + if (idleCount > getMaxIdle()) { + + this.idleCount.decrementAndGet(); + return destroy0(object); + } + + cache.add(object); + + return COMPLETED; + } + + private CompletableFuture destroy0(T object) { + + objectCount.decrementAndGet(); + all.remove(object); + return factory.destroy(object); + } + + @Override + public void clear() { + clearAsync().join(); + } + + @Override + public CompletableFuture clearAsync() { + + List> futures = new ArrayList<>(all.size()); + + T cached; + while ((cached = cache.poll()) != null) { + idleCount.decrementAndGet(); + objectCount.decrementAndGet(); + all.remove(cached); + futures.add(factory.destroy(cached)); + } + + return Futures.allOf(futures); + } + + @Override + public void close() { + closeAsync().join(); + } + + @Override + public CompletableFuture closeAsync() { + + if (!isPoolActive()) { + return closeFuture; + } + + state = State.TERMINATING; + + CompletableFuture clear = clearAsync(); + + state = State.TERMINATED; + + clear.whenComplete((aVoid, throwable) -> { + + if (throwable != null) { + closeFuture.completeExceptionally(throwable); + } else { + closeFuture.complete(aVoid); + } + }); + + return closeFuture; + } + + /** + * Returns the maximum number of objects that can be allocated by the pool (checked out to clients, or idle awaiting + * checkout) at a given time. When negative, there is no limit to the number of objects that can be managed by the pool at + * one time. + * + * @return the cap on the total number of object instances managed by the pool. + */ + public int getMaxTotal() { + return maxTotal; + } + + /** + * Returns the cap on the number of "idle" instances in the pool. If {@code maxIdle} is set too low on heavily loaded + * systems it is possible you will see objects being destroyed and almost immediately new objects being created. This is a + * result of the active threads momentarily returning objects faster than they are requesting them them, causing the number + * of idle objects to rise above maxIdle. The best value for maxIdle for heavily loaded system will vary but the default is + * a good starting point. + * + * @return the maximum number of "idle" instances that can be held in the pool. 
+ */ + public int getMaxIdle() { + return maxIdle; + } + + /** + * Returns the target for the minimum number of idle objects to maintain in the pool. If this is the case, an attempt is + * made to ensure that the pool has the required minimum number of instances during idle object eviction runs. + *
+ * <p>
    + * If the configured value of minIdle is greater than the configured value for {@code maxIdle} then the value of + * {@code maxIdle} will be used instead. + * + * @return The minimum number of objects. + */ + public int getMinIdle() { + + int maxIdleSave = getMaxIdle(); + if (this.minIdle > maxIdleSave) { + return maxIdleSave; + } else { + return minIdle; + } + } + + public int getIdle() { + return idleCount.get(); + } + + public int getObjectCount() { + return objectCount.get(); + } + + public int getCreationInProgress() { + return objectsInCreationCount.get(); + } + + private boolean isPoolActive() { + return this.state == State.ACTIVE; + } + + enum State { + ACTIVE, TERMINATING, TERMINATED; + } +} diff --git a/src/main/java/io/lettuce/core/support/BoundedPoolConfig.java b/src/main/java/io/lettuce/core/support/BoundedPoolConfig.java new file mode 100644 index 0000000000..eb87ca0c6b --- /dev/null +++ b/src/main/java/io/lettuce/core/support/BoundedPoolConfig.java @@ -0,0 +1,201 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.support; + +/** + * Configuration for asynchronous pooling using {@link BoundedAsyncPool}. Instances can be created through a {@link #builder()}. + * + * @author Mark Paluch + * @since 5.1 + * @see BoundedAsyncPool + */ +public class BoundedPoolConfig extends BasePoolConfig { + + /** + * The default value for the {@code maxTotal} configuration attribute. + */ + public static final int DEFAULT_MAX_TOTAL = 8; + + /** + * The default value for the {@code maxIdle} configuration attribute. + */ + public static final int DEFAULT_MAX_IDLE = 8; + + /** + * The default value for the {@code minIdle} configuration attribute. + */ + public static final int DEFAULT_MIN_IDLE = 0; + + private final int maxTotal; + private final int maxIdle; + private final int minIdle; + + protected BoundedPoolConfig(boolean testOnCreate, boolean testOnAcquire, boolean testOnRelease, int maxTotal, int maxIdle, + int minIdle) { + + super(testOnCreate, testOnAcquire, testOnRelease); + + this.maxTotal = maxTotal; + this.maxIdle = maxIdle; + this.minIdle = minIdle; + } + + /** + * Create a new {@link Builder} for {@link BoundedPoolConfig}. + * + * @return a new {@link Builder} for {@link BoundedPoolConfig}. + */ + public static Builder builder() { + return new Builder(); + } + + public static BoundedPoolConfig create() { + return builder().build(); + } + + /** + * Get the value for the {@code maxTotal} configuration attribute for pools created with this configuration instance. + * + * @return the current setting of {@code maxTotal} for this configuration instance. + */ + public int getMaxTotal() { + return maxTotal; + } + + /** + * Get the value for the {@code maxIdle} configuration attribute for pools created with this configuration instance. + * + * @return the current setting of {@code maxIdle} for this configuration instance. 
+ */ + public int getMaxIdle() { + return maxIdle; + } + + /** + * Get the value for the {@code minIdle} configuration attribute for pools created with this configuration instance. + * + * @return the current setting of {@code minIdle} for this configuration instance. + */ + public int getMinIdle() { + return minIdle; + } + + /** + * Builder for {@link BoundedPoolConfig}. + */ + public static class Builder extends BasePoolConfig.Builder { + + private int maxTotal = DEFAULT_MAX_TOTAL; + private int maxIdle = DEFAULT_MAX_IDLE; + private int minIdle = DEFAULT_MIN_IDLE; + + protected Builder() { + } + + @Override + public Builder testOnCreate() { + + super.testOnCreate(); + return this; + } + + @Override + public Builder testOnCreate(boolean testOnCreate) { + + super.testOnCreate(testOnCreate); + return this; + } + + @Override + public Builder testOnAcquire() { + + super.testOnAcquire(); + return this; + } + + @Override + public Builder testOnAcquire(boolean testOnAcquire) { + + super.testOnAcquire(testOnAcquire); + return this; + } + + @Override + public Builder testOnRelease() { + + super.testOnRelease(); + return this; + } + + @Override + public Builder testOnRelease(boolean testOnRelease) { + + super.testOnRelease(testOnRelease); + return this; + } + + /** + * Configures the maximum number of objects that can be allocated by the pool (checked out to clients, or idle awaiting + * checkout) at a given time. + * + * @param maxTotal maximum number of objects that can be allocated by the pool. + * @return {@code this} {@link Builder}. + */ + public Builder maxTotal(int maxTotal) { + + this.maxTotal = maxTotal; + return this; + } + + /** + * Returns the cap on the number of "idle" instances in the pool. If {@code maxIdle} is set too low on heavily loaded + * systems it is possible you will see objects being destroyed and almost immediately new objects being created. This is + * a result of the active threads momentarily returning objects faster than they are requesting them them, causing the + * number of idle objects to rise above maxIdle. The best value for maxIdle for heavily loaded system will vary but the + * default is a good starting point. + * + * @param maxIdle the cap on the number of "idle" instances in the pool. + * @return {@code this} {@link Builder}. + */ + public Builder maxIdle(int maxIdle) { + + this.maxIdle = maxIdle; + return this; + } + + /** + * Configures the maximum number of objects that can be allocated by the pool (checked out to clients, or idle awaiting + * checkout) at a given time. + * + * @param minIdle maximum number of objects that can be allocated by the pool. + * @return {@code this} {@link Builder}. + */ + public Builder minIdle(int minIdle) { + + this.minIdle = minIdle; + return this; + } + + /** + * Build a new {@link BasePoolConfig} object. + * + * @return a new {@link BasePoolConfig} object. + */ + public BoundedPoolConfig build() { + return new BoundedPoolConfig(testOnCreate, testOnAcquire, testOnRelease, maxTotal, maxIdle, minIdle); + } + } +} diff --git a/src/main/java/io/lettuce/core/support/ClientResourcesFactoryBean.java b/src/main/java/io/lettuce/core/support/ClientResourcesFactoryBean.java new file mode 100644 index 0000000000..67ad01cb24 --- /dev/null +++ b/src/main/java/io/lettuce/core/support/ClientResourcesFactoryBean.java @@ -0,0 +1,81 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
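As a usage sketch (illustrative only, not part of this change; the numbers are arbitrary), a BoundedPoolConfig for the asynchronous pool can be assembled through the builder shown above:

    BoundedPoolConfig poolConfig = BoundedPoolConfig.builder()
            .maxTotal(16)     // no more than 16 objects allocated overall
            .maxIdle(8)       // cap on idle objects kept in the pool
            .minIdle(2)       // target number of idle objects to maintain
            .testOnAcquire()  // validate objects when they are acquired
            .build();

An existing commons-pool2 GenericObjectPoolConfig can be adapted to the same shape with CommonsPool2ConfigConverter.bounded(config), which is introduced later in this change.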
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.support; + +import org.springframework.beans.factory.FactoryBean; +import org.springframework.beans.factory.config.AbstractFactoryBean; + +import io.lettuce.core.resource.ClientResources; +import io.lettuce.core.resource.DefaultClientResources; + +/** + * {@link FactoryBean} that creates a {@link ClientResources} instance representing the infrastructure resources (thread pools) + * for a Redis Client. + * + * @author Mark Paluch + */ +public class ClientResourcesFactoryBean extends AbstractFactoryBean { + + private int ioThreadPoolSize = DefaultClientResources.DEFAULT_IO_THREADS; + private int computationThreadPoolSize = DefaultClientResources.DEFAULT_COMPUTATION_THREADS; + + public int getIoThreadPoolSize() { + return ioThreadPoolSize; + } + + /** + * Sets the thread pool size (number of threads to use) for I/O operations (default value is the number of CPUs). + * + * @param ioThreadPoolSize the thread pool size + */ + public void setIoThreadPoolSize(int ioThreadPoolSize) { + this.ioThreadPoolSize = ioThreadPoolSize; + } + + public int getComputationThreadPoolSize() { + return computationThreadPoolSize; + } + + /** + * Sets the thread pool size (number of threads to use) for computation operations (default value is the number of CPUs). + * + * @param computationThreadPoolSize the thread pool size + */ + public void setComputationThreadPoolSize(int computationThreadPoolSize) { + this.computationThreadPoolSize = computationThreadPoolSize; + } + + @Override + public Class getObjectType() { + return ClientResources.class; + } + + @Override + protected ClientResources createInstance() throws Exception { + return DefaultClientResources.builder().computationThreadPoolSize(computationThreadPoolSize) + .ioThreadPoolSize(ioThreadPoolSize).build(); + } + + @Override + protected void destroyInstance(ClientResources instance) throws Exception { + instance.shutdown().get(); + } + + @Override + public boolean isSingleton() { + return true; + } +} diff --git a/src/main/java/io/lettuce/core/support/CommonsPool2ConfigConverter.java b/src/main/java/io/lettuce/core/support/CommonsPool2ConfigConverter.java new file mode 100644 index 0000000000..bb08f46572 --- /dev/null +++ b/src/main/java/io/lettuce/core/support/CommonsPool2ConfigConverter.java @@ -0,0 +1,53 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
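For illustration, a hedged Spring Java-config sketch registering the factory bean above (the configuration class name and pool sizes are assumptions, not part of this change):

    @Configuration
    class LettuceInfrastructureConfig {

        @Bean
        ClientResourcesFactoryBean clientResources() {
            ClientResourcesFactoryBean factoryBean = new ClientResourcesFactoryBean();
            factoryBean.setIoThreadPoolSize(4);          // threads used for I/O
            factoryBean.setComputationThreadPoolSize(4); // threads used for computation/events
            return factoryBean;
        }
    }

Spring resolves the FactoryBean to a shared ClientResources instance and, since destroyInstance() calls shutdown(), releases the thread pools when the context closes.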
+ */ +package io.lettuce.core.support; + +import org.apache.commons.pool2.impl.GenericObjectPoolConfig; + +import io.lettuce.core.internal.LettuceAssert; + +/** + * Utility class to adapt Commons Pool 2 configuration to {@link BoundedPoolConfig}. + * + * @author Mark Paluch + * @since 5.1 + */ +public class CommonsPool2ConfigConverter { + + private CommonsPool2ConfigConverter() { + } + + /** + * Converts {@link GenericObjectPoolConfig} properties to an immutable {@link BoundedPoolConfig}. Applies max total, min/max + * idle and test on borrow/create/release configuration. + * + * @param config must not be {@literal null}. + * @return the converted {@link BoundedPoolConfig}. + */ + public static BoundedPoolConfig bounded(GenericObjectPoolConfig config) { + + LettuceAssert.notNull(config, "GenericObjectPoolConfig must not be null"); + + return BoundedPoolConfig.builder() // + .maxTotal(config.getMaxTotal() > 0 ? config.getMaxTotal() : Integer.MAX_VALUE) + .maxIdle(config.getMaxIdle() > 0 ? config.getMaxIdle() : Integer.MAX_VALUE) // + .minIdle(config.getMinIdle()) // + .testOnAcquire(config.getTestOnBorrow()) // + .testOnCreate(config.getTestOnCreate()) // + .testOnRelease(config.getTestOnReturn()) // + .build(); + } +} diff --git a/src/main/java/io/lettuce/core/support/ConnectionPoolSupport.java b/src/main/java/io/lettuce/core/support/ConnectionPoolSupport.java new file mode 100644 index 0000000000..c8087b6e1c --- /dev/null +++ b/src/main/java/io/lettuce/core/support/ConnectionPoolSupport.java @@ -0,0 +1,249 @@ +/* + * Copyright 2016-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.support; + +import static io.lettuce.core.support.ConnectionWrapping.HasTargetConnection; + +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.atomic.AtomicReference; +import java.util.function.Supplier; + +import org.apache.commons.pool2.BasePooledObjectFactory; +import org.apache.commons.pool2.ObjectPool; +import org.apache.commons.pool2.PooledObject; +import org.apache.commons.pool2.impl.DefaultPooledObject; +import org.apache.commons.pool2.impl.GenericObjectPool; +import org.apache.commons.pool2.impl.GenericObjectPoolConfig; +import org.apache.commons.pool2.impl.SoftReferenceObjectPool; + +import io.lettuce.core.ClientOptions; +import io.lettuce.core.api.StatefulConnection; +import io.lettuce.core.internal.LettuceAssert; +import io.lettuce.core.support.ConnectionWrapping.Origin; + +/** + * Connection pool support for {@link GenericObjectPool} and {@link SoftReferenceObjectPool}. Connection pool creation requires + * a {@link Supplier} that creates Redis connections. The pool can allocate either wrapped or direct connections. + *

+ * <ul>
+ * <li>Wrapped instances will return the connection back to the pool when called {@link StatefulConnection#close()}.</li>
+ * <li>Regular connections need to be returned to the pool with {@link GenericObjectPool#returnObject(Object)}</li>
+ * </ul>
+ * <p>

    + * Lettuce connections are designed to be thread-safe so one connection can be shared amongst multiple threads and Lettuce + * connections {@link ClientOptions#isAutoReconnect() auto-reconnect} by default. Connection pooling with Lettuce can be + * required when you're invoking Redis operations in multiple threads and you use + *

+ * <ul>
+ * <li>blocking commands such as {@code BLPOP}.</li>
+ * <li>transactions.</li>
+ * <li>{@link StatefulConnection#setAutoFlushCommands(boolean) command batching}.</li>
+ * </ul>
+ *
+ * Transactions and command batching affect connection state. Blocking commands won't propagate queued commands to Redis until
+ * the blocking command is completed.
+ *
+ * <h3>Example</h3>
+ *
+ * <pre class="code">
    + * // application initialization
    + * RedisClusterClient clusterClient = RedisClusterClient.create(RedisURI.create(host, port));
    + * GenericObjectPool<StatefulRedisClusterConnection<String, String>> pool = ConnectionPoolSupport.createGenericObjectPool(
    + *         () -> clusterClient.connect(), new GenericObjectPoolConfig());
    + *
    + * // executing work
    + * try (StatefulRedisClusterConnection<String, String> connection = pool.borrowObject()) {
    + *     // perform some work
    + * }
    + *
    + * // terminating
    + * pool.close();
    + * clusterClient.shutdown();
+ * </pre>
    + * + * @author Mark Paluch + * @since 4.3 + */ +public abstract class ConnectionPoolSupport { + + private ConnectionPoolSupport() { + } + + /** + * Creates a new {@link GenericObjectPool} using the {@link Supplier}. Allocated instances are wrapped and must not be + * returned with {@link ObjectPool#returnObject(Object)}. + * + * @param connectionSupplier must not be {@literal null}. + * @param config must not be {@literal null}. + * @param connection type. + * @return the connection pool. + */ + public static > GenericObjectPool createGenericObjectPool( + Supplier connectionSupplier, GenericObjectPoolConfig config) { + return createGenericObjectPool(connectionSupplier, config, true); + } + + /** + * Creates a new {@link GenericObjectPool} using the {@link Supplier}. + * + * @param connectionSupplier must not be {@literal null}. + * @param config must not be {@literal null}. + * @param wrapConnections {@literal false} to return direct connections that need to be returned to the pool using + * {@link ObjectPool#returnObject(Object)}. {@literal true} to return wrapped connection that are returned to the + * pool when invoking {@link StatefulConnection#close()}. + * @param connection type. + * @return the connection pool. + */ + @SuppressWarnings("unchecked") + public static > GenericObjectPool createGenericObjectPool( + Supplier connectionSupplier, GenericObjectPoolConfig config, boolean wrapConnections) { + + LettuceAssert.notNull(connectionSupplier, "Connection supplier must not be null"); + LettuceAssert.notNull(config, "GenericObjectPoolConfig must not be null"); + + AtomicReference> poolRef = new AtomicReference<>(); + + GenericObjectPool pool = new GenericObjectPool(new RedisPooledObjectFactory(connectionSupplier), config) { + + @Override + public T borrowObject() throws Exception { + return wrapConnections ? ConnectionWrapping.wrapConnection(super.borrowObject(), poolRef.get()) : super + .borrowObject(); + } + + @Override + public void returnObject(T obj) { + + if (wrapConnections && obj instanceof HasTargetConnection) { + super.returnObject((T) ((HasTargetConnection) obj).getTargetConnection()); + return; + } + super.returnObject(obj); + } + }; + + poolRef.set(new ObjectPoolWrapper<>(pool)); + + return pool; + } + + /** + * Creates a new {@link SoftReferenceObjectPool} using the {@link Supplier}. Allocated instances are wrapped and must not be + * returned with {@link ObjectPool#returnObject(Object)}. + * + * @param connectionSupplier must not be {@literal null}. + * @param connection type. + * @return the connection pool. + */ + public static > SoftReferenceObjectPool createSoftReferenceObjectPool( + Supplier connectionSupplier) { + return createSoftReferenceObjectPool(connectionSupplier, true); + } + + /** + * Creates a new {@link SoftReferenceObjectPool} using the {@link Supplier}. + * + * @param connectionSupplier must not be {@literal null}. + * @param wrapConnections {@literal false} to return direct connections that need to be returned to the pool using + * {@link ObjectPool#returnObject(Object)}. {@literal true} to return wrapped connection that are returned to the + * pool when invoking {@link StatefulConnection#close()}. + * @param connection type. + * @return the connection pool. 
+ */ + @SuppressWarnings("unchecked") + public static > SoftReferenceObjectPool createSoftReferenceObjectPool( + Supplier connectionSupplier, boolean wrapConnections) { + + LettuceAssert.notNull(connectionSupplier, "Connection supplier must not be null"); + + AtomicReference> poolRef = new AtomicReference<>(); + + SoftReferenceObjectPool pool = new SoftReferenceObjectPool(new RedisPooledObjectFactory<>(connectionSupplier)) { + @Override + public synchronized T borrowObject() throws Exception { + return wrapConnections ? ConnectionWrapping.wrapConnection(super.borrowObject(), poolRef.get()) : super + .borrowObject(); + } + + @Override + public synchronized void returnObject(T obj) throws Exception { + + if (wrapConnections && obj instanceof HasTargetConnection) { + super.returnObject((T) ((HasTargetConnection) obj).getTargetConnection()); + return; + } + super.returnObject(obj); + } + }; + poolRef.set(new ObjectPoolWrapper<>(pool)); + + return pool; + } + + + /** + * @author Mark Paluch + * @since 4.3 + */ + private static class RedisPooledObjectFactory> extends BasePooledObjectFactory { + + private final Supplier connectionSupplier; + + RedisPooledObjectFactory(Supplier connectionSupplier) { + this.connectionSupplier = connectionSupplier; + } + + @Override + public T create() throws Exception { + return connectionSupplier.get(); + } + + @Override + public void destroyObject(PooledObject p) throws Exception { + p.getObject().close(); + } + + @Override + public PooledObject wrap(T obj) { + return new DefaultPooledObject<>(obj); + } + + @Override + public boolean validateObject(PooledObject p) { + return p.getObject().isOpen(); + } + } + + private static class ObjectPoolWrapper implements Origin { + + private static final CompletableFuture COMPLETED = CompletableFuture.completedFuture(null); + + private final ObjectPool pool; + + ObjectPoolWrapper(ObjectPool pool) { + this.pool = pool; + } + + @Override + public void returnObject(T o) throws Exception { + pool.returnObject(o); + } + + @Override + public CompletableFuture returnObjectAsync(T o) throws Exception { + pool.returnObject(o); + return COMPLETED; + } + } +} diff --git a/src/main/java/io/lettuce/core/support/ConnectionWrapping.java b/src/main/java/io/lettuce/core/support/ConnectionWrapping.java new file mode 100644 index 0000000000..700e01ed27 --- /dev/null +++ b/src/main/java/io/lettuce/core/support/ConnectionWrapping.java @@ -0,0 +1,255 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
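To complement the wrapped-connection example in the class Javadoc, here is a sketch of the wrapConnections = false variant (client is an assumed, already created RedisClient; exception handling omitted). Borrowed connections must be returned explicitly:

    GenericObjectPool<StatefulRedisConnection<String, String>> pool = ConnectionPoolSupport
            .createGenericObjectPool(() -> client.connect(), new GenericObjectPoolConfig(), false);

    StatefulRedisConnection<String, String> connection = pool.borrowObject();
    try {
        connection.sync().set("key", "value");
    } finally {
        pool.returnObject(connection); // with direct connections, close() would really close the link
    }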
+ */ +package io.lettuce.core.support; + +import java.lang.reflect.InvocationTargetException; +import java.lang.reflect.Method; +import java.lang.reflect.Proxy; +import java.util.Map; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.ConcurrentHashMap; + +import io.lettuce.core.RedisException; +import io.lettuce.core.api.StatefulConnection; +import io.lettuce.core.internal.AbstractInvocationHandler; +import io.lettuce.core.internal.AsyncCloseable; + +/** + * Utility to wrap pooled connections for return-on-close. + * + * @author Mark Paluch + * @since 5.1 + */ +public class ConnectionWrapping { + + /** + * Unwrap a potentially {@link Wrapper} object. Recurses across {@link Wrapper wrappers} + * + * @param object the potentially wrapped object. + * @return the {@code object} if it is not wrapped or the {@link Wrapper#unwrap() unwrapped} object. + */ + public static Object unwrap(Object object) { + + while (object instanceof Wrapper) { + object = ((Wrapper) object).unwrap(); + } + + return object; + } + + /** + * Wrap a connection along its {@link Origin} reference. + * + * @param connection + * @param pool + * @param + * @return + */ + @SuppressWarnings({ "unchecked", "rawtypes" }) + static T wrapConnection(T connection, Origin pool) { + + ReturnObjectOnCloseInvocationHandler handler = new ReturnObjectOnCloseInvocationHandler(connection, pool); + + Class[] implementedInterfaces = connection.getClass().getInterfaces(); + Class[] interfaces = new Class[implementedInterfaces.length + 1]; + interfaces[0] = HasTargetConnection.class; + System.arraycopy(implementedInterfaces, 0, interfaces, 1, implementedInterfaces.length); + + T proxiedConnection = (T) Proxy.newProxyInstance(connection.getClass().getClassLoader(), interfaces, handler); + handler.setProxiedConnection(proxiedConnection); + + return proxiedConnection; + } + + /** + * Invocation handler that takes care of connection.close(). Connections are returned to the pool on a close()-call. + * + * @author Mark Paluch + * @param Connection type. 
+ * @since 4.3 + */ + static class ReturnObjectOnCloseInvocationHandler extends AbstractInvocationHandler implements Wrapper { + + private T connection; + private T proxiedConnection; + private Map connectionProxies = new ConcurrentHashMap<>(5, 1); + + private final Origin pool; + + ReturnObjectOnCloseInvocationHandler(T connection, Origin pool) { + this.connection = connection; + this.pool = pool; + } + + void setProxiedConnection(T proxiedConnection) { + this.proxiedConnection = proxiedConnection; + } + + @Override + protected Object handleInvocation(Object proxy, Method method, Object[] args) throws Throwable { + + if (method.getName().equals("getStatefulConnection")) { + return proxiedConnection; + } + + if (method.getName().equals("getTargetConnection")) { + return connection; + } + + if (connection == null) { + throw new RedisException("Connection is deallocated and cannot be used anymore."); + } + + if (method.getName().equals("close")) { + pool.returnObject(proxiedConnection); + connection = null; + proxiedConnection = null; + connectionProxies.clear(); + return null; + } + + if (method.getName().equals("closeAsync")) { + CompletableFuture future = pool.returnObjectAsync(proxiedConnection); + connection = null; + proxiedConnection = null; + connectionProxies.clear(); + return future; + } + + try { + + if (method.getName().equals("sync") || method.getName().equals("async") || method.getName().equals("reactive")) { + return connectionProxies.computeIfAbsent(method, m -> getInnerProxy(method, args)); + } + + return method.invoke(connection, args); + + } catch (InvocationTargetException e) { + throw e.getTargetException(); + } + } + + @SuppressWarnings({ "unchecked", "rawtypes" }) + private Object getInnerProxy(Method method, Object[] args) { + + try { + Object result = method.invoke(connection, args); + + result = Proxy.newProxyInstance(getClass().getClassLoader(), result.getClass().getInterfaces(), + new DelegateCloseToConnectionInvocationHandler((AsyncCloseable) proxiedConnection, result)); + + return result; + } catch (IllegalAccessException e) { + throw new RedisException(e); + } catch (InvocationTargetException e) { + throw new RedisException(e.getTargetException()); + } + } + + public T getConnection() { + return connection; + } + + @Override + public T unwrap() { + return getConnection(); + } + } + + /** + * Invocation handler that takes care of connection.close(). Connections are returned to the pool on a close()-call. + * + * @author Mark Paluch + * @param Connection type. 
+ * @since 4.3 + */ + @SuppressWarnings("try") + static class DelegateCloseToConnectionInvocationHandler extends + AbstractInvocationHandler implements Wrapper { + + private final T proxiedConnection; + private final Object api; + + DelegateCloseToConnectionInvocationHandler(T proxiedConnection, Object api) { + + this.proxiedConnection = proxiedConnection; + this.api = api; + } + + @Override + protected Object handleInvocation(Object proxy, Method method, Object[] args) throws Throwable { + + if (method.getName().equals("getStatefulConnection")) { + return proxiedConnection; + } + + try { + + if (method.getName().equals("close")) { + proxiedConnection.close(); + return null; + } + + if (method.getName().equals("closeAsync")) { + return proxiedConnection.closeAsync(); + } + + return method.invoke(api, args); + + } catch (InvocationTargetException e) { + throw e.getTargetException(); + } + } + + @Override + public Object unwrap() { + return api; + } + } + + /** + * Interface to retrieve an underlying target connection from a proxy. + */ + interface HasTargetConnection { + StatefulConnection getTargetConnection(); + } + + /** + * Interface to return objects to their origin. + */ + interface Origin { + + /** + * Synchronously return the object. + */ + void returnObject(T o) throws Exception; + + /** + * Return the object asynchronously. + */ + CompletableFuture returnObjectAsync(T o) throws Exception; + } + + /** + * Marker interface to indicate a wrapper. + * + * @param Type of the wrapped object. + * @since 5.2 + */ + interface Wrapper { + T unwrap(); + } +} diff --git a/src/main/java/com/lambdaworks/redis/support/LettuceCdiExtension.java b/src/main/java/io/lettuce/core/support/LettuceCdiExtension.java similarity index 81% rename from src/main/java/com/lambdaworks/redis/support/LettuceCdiExtension.java rename to src/main/java/io/lettuce/core/support/LettuceCdiExtension.java index 7ab123ca8c..a9ec032ab6 100644 --- a/src/main/java/com/lambdaworks/redis/support/LettuceCdiExtension.java +++ b/src/main/java/io/lettuce/core/support/LettuceCdiExtension.java @@ -1,22 +1,36 @@ -package com.lambdaworks.redis.support; +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
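A short sketch of the proxy behavior implemented above (pool is an assumed wrapped-connection pool created via ConnectionPoolSupport): the connection proxy, and the sync/async/reactive handles obtained from it, route close() back to the pool instead of closing the underlying connection.

    try (StatefulRedisConnection<String, String> connection = pool.borrowObject()) {
        RedisAsyncCommands<String, String> async = connection.async(); // also a pool-aware proxy
        async.set("key", "value").get();
    } // close() on the proxy returns the connection to the pool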
+ */ +package io.lettuce.core.support; import java.lang.annotation.Annotation; import java.lang.reflect.Type; import java.util.Map; -import java.util.Map.Entry; import java.util.Set; +import java.util.Map.Entry; import java.util.concurrent.ConcurrentHashMap; import javax.enterprise.event.Observes; import javax.enterprise.inject.Default; import javax.enterprise.inject.spi.*; -import com.lambdaworks.redis.RedisClient; -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.cluster.RedisClusterClient; -import com.lambdaworks.redis.internal.LettuceSets; -import com.lambdaworks.redis.resource.ClientResources; - +import io.lettuce.core.RedisClient; +import io.lettuce.core.RedisURI; +import io.lettuce.core.cluster.RedisClusterClient; +import io.lettuce.core.internal.LettuceSets; +import io.lettuce.core.resource.ClientResources; import io.netty.util.internal.logging.InternalLogger; import io.netty.util.internal.logging.InternalLoggerFactory; @@ -27,43 +41,37 @@ * shared across multiple client instances (Standalone, Cluster) by providing a {@link ClientResources} bean with the same * qualifiers as the {@link RedisURI}. * - *

- * Example:
- *
+ *
+ * <h3>Example</h3>
+ *
  *
- * <pre>
    - *  public class Producers {
    - *     @Produces
    + * 
    + * public class Producers {
    + *     @Produces
      *     public RedisURI redisURI() {
    - *         return RedisURI.Builder.redis("localhost", 6379).build();
    + *         return RedisURI.Builder.redis("localhost", 6379).build();
      *     }
    - * 
    - *     @Produces
    + *
    + *     @Produces
      *     public ClientResources clientResources() {
      *         return DefaultClientResources.create()
      *     }
    - * 
    + *
      *     public void shutdownClientResources(@Disposes ClientResources clientResources) throws Exception {
      *         clientResources.shutdown().get();
      *     }
      * }
    - * 
      * 
    * * - *
    - *  
    - *   public class Consumer {
    - *      @Inject
    - *      private RedisClient client;
    - * 
    - *      @Inject
    - *      private RedisClusterClient clusterClient;
    + * 
    + * public class Consumer {
    + *     @Inject
    + *     private RedisClient client;
    + *
    + *     @Inject
    + *     private RedisClusterClient clusterClient;
      * }
    - *  
      * 
    - * + * * @author Mark Paluch */ public class LettuceCdiExtension implements Extension { diff --git a/src/main/java/com/lambdaworks/redis/support/LettuceFactoryBeanSupport.java b/src/main/java/io/lettuce/core/support/LettuceFactoryBeanSupport.java similarity index 75% rename from src/main/java/com/lambdaworks/redis/support/LettuceFactoryBeanSupport.java rename to src/main/java/io/lettuce/core/support/LettuceFactoryBeanSupport.java index 492bfb4284..761aa726fc 100644 --- a/src/main/java/com/lambdaworks/redis/support/LettuceFactoryBeanSupport.java +++ b/src/main/java/io/lettuce/core/support/LettuceFactoryBeanSupport.java @@ -1,16 +1,31 @@ -package com.lambdaworks.redis.support; +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.support; import java.net.URI; import org.springframework.beans.factory.config.AbstractFactoryBean; -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.resource.ClientResources; +import io.lettuce.core.RedisURI; +import io.lettuce.core.resource.ClientResources; /** * Adapter for Springs {@link org.springframework.beans.factory.FactoryBean} interface to allow easy setup of - * {@link com.lambdaworks.redis.RedisClient} factories via Spring configuration. - * + * {@link io.lettuce.core.RedisClient} factories via Spring configuration. + * * @author Mark Paluch * @since 3.0 */ @@ -38,7 +53,7 @@ public URI getUri() { /** * Set the URI for connecting Redis. The URI follows the URI conventions. See {@link RedisURI} for URL schemes. Either the * URI of the RedisURI must be set in order to connect to Redis. - * + * * @param uri the URI */ public void setUri(URI uri) { @@ -52,7 +67,7 @@ public RedisURI getRedisURI() { /** * Set the RedisURI for connecting Redis. See {@link RedisURI} for URL schemes. Either the URI of the RedisURI must be set * in order to connect to Redis. - * + * * @param redisURI the RedisURI */ public void setRedisURI(RedisURI redisURI) { @@ -66,7 +81,7 @@ public String getPassword() { /** * Sets the password to use for a Redis connection. If the password is set, it has higher precedence than the password * provided within the URI meaning the password from the URI is replaced by this one. - * + * * @param password the password */ public void setPassword(String password) { @@ -85,7 +100,7 @@ public ClientResources getClientResources() { /** * Set shared client resources to reuse across different client instances. If not set, each client instance will provide * their own {@link ClientResources} instance. 
- * + * * @param clientResources the client resources */ public void setClientResources(ClientResources clientResources) { diff --git a/src/main/java/io/lettuce/core/support/RedisClientCdiBean.java b/src/main/java/io/lettuce/core/support/RedisClientCdiBean.java new file mode 100644 index 0000000000..f2f06e8f98 --- /dev/null +++ b/src/main/java/io/lettuce/core/support/RedisClientCdiBean.java @@ -0,0 +1,75 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.support; + +import java.lang.annotation.Annotation; +import java.util.Set; + +import javax.enterprise.context.spi.CreationalContext; +import javax.enterprise.inject.spi.Bean; +import javax.enterprise.inject.spi.BeanManager; + +import io.lettuce.core.RedisClient; +import io.lettuce.core.RedisURI; +import io.lettuce.core.resource.ClientResources; + +/** + * Factory Bean for {@link RedisClient} instances. Requires a {@link RedisURI} and allows to reuse + * {@link io.lettuce.core.resource.ClientResources}. URI Formats: + * {@code + * redis-sentinel://host[:port][,host2[:port2]][/databaseNumber]#sentinelMasterId + * } + * + * {@code + * redis://host[:port][/databaseNumber] + * } + * + * @see RedisURI + * @author Mark Paluch + * @since 3.0 + */ +class RedisClientCdiBean extends AbstractCdiBean { + + RedisClientCdiBean(Bean redisURIBean, Bean clientResourcesBean, BeanManager beanManager, + Set qualifiers, String name) { + super(redisURIBean, clientResourcesBean, beanManager, qualifiers, name); + } + + @Override + public Class getBeanClass() { + return RedisClient.class; + } + + @Override + public RedisClient create(CreationalContext creationalContext) { + + CreationalContext uriCreationalContext = beanManager.createCreationalContext(redisURIBean); + RedisURI redisURI = (RedisURI) beanManager.getReference(redisURIBean, RedisURI.class, uriCreationalContext); + + if (clientResourcesBean != null) { + ClientResources clientResources = (ClientResources) beanManager.getReference(clientResourcesBean, + ClientResources.class, uriCreationalContext); + return RedisClient.create(clientResources, redisURI); + } + + return RedisClient.create(redisURI); + } + + @Override + public void destroy(RedisClient instance, CreationalContext creationalContext) { + instance.shutdown(); + } +} diff --git a/src/main/java/io/lettuce/core/support/RedisClientFactoryBean.java b/src/main/java/io/lettuce/core/support/RedisClientFactoryBean.java new file mode 100644 index 0000000000..4c0cf9571a --- /dev/null +++ b/src/main/java/io/lettuce/core/support/RedisClientFactoryBean.java @@ -0,0 +1,74 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.support; + +import static io.lettuce.core.LettuceStrings.isNotEmpty; + +import io.lettuce.core.RedisClient; +import io.lettuce.core.RedisURI; + +/** + * Factory Bean for {@link RedisClient} instances. Needs either a {@link java.net.URI} or a {@link RedisURI} as input and allows + * to reuse {@link io.lettuce.core.resource.ClientResources}. URI Formats: + * {@code + * redis-sentinel://host[:port][,host2[:port2]][/databaseNumber]#sentinelMasterId + * } + * + * {@code + * redis://host[:port][/databaseNumber] + * } + * + * @see RedisURI + * @see ClientResourcesFactoryBean + * @author Mark Paluch + * @since 3.0 + */ +public class RedisClientFactoryBean extends LettuceFactoryBeanSupport { + + @Override + public void afterPropertiesSet() throws Exception { + + if (getRedisURI() == null) { + RedisURI redisURI = RedisURI.create(getUri()); + + if (isNotEmpty(getPassword())) { + redisURI.setPassword(getPassword()); + } + setRedisURI(redisURI); + } + + super.afterPropertiesSet(); + } + + @Override + protected void destroyInstance(RedisClient instance) throws Exception { + instance.shutdown(); + } + + @Override + public Class getObjectType() { + return RedisClient.class; + } + + @Override + protected RedisClient createInstance() throws Exception { + + if (getClientResources() != null) { + return RedisClient.create(getClientResources(), getRedisURI()); + } + return RedisClient.create(getRedisURI()); + } +} diff --git a/src/main/java/io/lettuce/core/support/RedisClusterClientCdiBean.java b/src/main/java/io/lettuce/core/support/RedisClusterClientCdiBean.java new file mode 100644 index 0000000000..60b43f89d0 --- /dev/null +++ b/src/main/java/io/lettuce/core/support/RedisClusterClientCdiBean.java @@ -0,0 +1,70 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.support; + +import java.lang.annotation.Annotation; +import java.util.Set; + +import javax.enterprise.context.spi.CreationalContext; +import javax.enterprise.inject.spi.Bean; +import javax.enterprise.inject.spi.BeanManager; + +import io.lettuce.core.RedisURI; +import io.lettuce.core.cluster.RedisClusterClient; +import io.lettuce.core.resource.ClientResources; + +/** + * Factory Bean for {@link RedisClusterClient} instances. Requires a {@link RedisURI} and allows to reuse + * {@link io.lettuce.core.resource.ClientResources}. 
URI Format: {@code + * redis://[password@]host[:port] + * } + * + * @see RedisURI + * @author Mark Paluch + * @since 3.0 + */ +class RedisClusterClientCdiBean extends AbstractCdiBean { + + public RedisClusterClientCdiBean(Bean redisURIBean, Bean clientResourcesBean, + BeanManager beanManager, Set qualifiers, String name) { + super(redisURIBean, clientResourcesBean, beanManager, qualifiers, name); + } + + @Override + public Class getBeanClass() { + return RedisClusterClient.class; + } + + @Override + public RedisClusterClient create(CreationalContext creationalContext) { + + CreationalContext uriCreationalContext = beanManager.createCreationalContext(redisURIBean); + RedisURI redisURI = (RedisURI) beanManager.getReference(redisURIBean, RedisURI.class, uriCreationalContext); + + if (clientResourcesBean != null) { + ClientResources clientResources = (ClientResources) beanManager.getReference(clientResourcesBean, + ClientResources.class, uriCreationalContext); + return RedisClusterClient.create(clientResources, redisURI); + } + + return RedisClusterClient.create(redisURI); + } + + @Override + public void destroy(RedisClusterClient instance, CreationalContext creationalContext) { + instance.shutdown(); + } +} diff --git a/src/main/java/io/lettuce/core/support/RedisClusterClientFactoryBean.java b/src/main/java/io/lettuce/core/support/RedisClusterClientFactoryBean.java new file mode 100644 index 0000000000..041d62129b --- /dev/null +++ b/src/main/java/io/lettuce/core/support/RedisClusterClientFactoryBean.java @@ -0,0 +1,123 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.support; + +import static io.lettuce.core.LettuceStrings.isNotEmpty; + +import java.net.URI; +import java.util.Collection; +import java.util.Collections; +import java.util.List; + +import io.lettuce.core.RedisURI; +import io.lettuce.core.cluster.RedisClusterClient; +import io.lettuce.core.cluster.RedisClusterURIUtil; +import io.lettuce.core.internal.LettuceAssert; + +/** + * Factory Bean for {@link RedisClusterClient} instances. Needs either a {@link URI} or a {@link RedisURI} as input and allows + * to reuse {@link io.lettuce.core.resource.ClientResources}. 
URI Format: {@code + * redis://[password@]host[:port][,host2[:port2]] + * } + * + * {@code + * rediss://[password@]host[:port][,host2[:port2]] + * } + * + * @see RedisURI + * @see ClientResourcesFactoryBean + * @author Mark Paluch + * @since 3.0 + */ +public class RedisClusterClientFactoryBean extends LettuceFactoryBeanSupport { + + private boolean verifyPeer = false; + private Collection redisURIs; + + @Override + public void afterPropertiesSet() throws Exception { + + if (redisURIs == null) { + + if (getUri() != null) { + URI uri = getUri(); + + LettuceAssert.isTrue(!uri.getScheme().equals(RedisURI.URI_SCHEME_REDIS_SENTINEL), + "Sentinel mode not supported when using RedisClusterClient"); + + List redisURIs = RedisClusterURIUtil.toRedisURIs(uri); + + for (RedisURI redisURI : redisURIs) { + applyProperties(uri.getScheme(), redisURI); + } + + this.redisURIs = redisURIs; + } else { + + URI uri = getRedisURI().toURI(); + RedisURI redisURI = RedisURI.create(uri); + applyProperties(uri.getScheme(), redisURI); + this.redisURIs = Collections.singleton(redisURI); + } + } + + super.afterPropertiesSet(); + } + + private void applyProperties(String scheme, RedisURI redisURI) { + + if (isNotEmpty(getPassword())) { + redisURI.setPassword(getPassword()); + } + + if (RedisURI.URI_SCHEME_REDIS_SECURE.equals(scheme) || RedisURI.URI_SCHEME_REDIS_SECURE_ALT.equals(scheme) + || RedisURI.URI_SCHEME_REDIS_TLS_ALT.equals(scheme)) { + redisURI.setVerifyPeer(verifyPeer); + } + } + + protected Collection getRedisURIs() { + return redisURIs; + } + + @Override + protected void destroyInstance(RedisClusterClient instance) throws Exception { + instance.shutdown(); + } + + @Override + public Class getObjectType() { + return RedisClusterClient.class; + } + + @Override + protected RedisClusterClient createInstance() throws Exception { + + if (getClientResources() != null) { + return RedisClusterClient.create(getClientResources(), redisURIs); + } + + return RedisClusterClient.create(redisURIs); + } + + public boolean isVerifyPeer() { + return verifyPeer; + } + + public void setVerifyPeer(boolean verifyPeer) { + this.verifyPeer = verifyPeer; + } +} diff --git a/src/main/java/io/lettuce/core/support/package-info.java b/src/main/java/io/lettuce/core/support/package-info.java new file mode 100644 index 0000000000..ad8c9bdedc --- /dev/null +++ b/src/main/java/io/lettuce/core/support/package-info.java @@ -0,0 +1,5 @@ +/** + * Supportive classes such as {@link io.lettuce.core.support.RedisClientCdiBean} for CDI support, {@link io.lettuce.core.support.RedisClientFactoryBean} for Spring. + */ +package io.lettuce.core.support; + diff --git a/src/main/java/io/lettuce/core/tracing/BraveTracing.java b/src/main/java/io/lettuce/core/tracing/BraveTracing.java new file mode 100644 index 0000000000..fdaef48adf --- /dev/null +++ b/src/main/java/io/lettuce/core/tracing/BraveTracing.java @@ -0,0 +1,451 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
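A brief Spring Java-config sketch for the cluster factory bean above (host name and the TLS choice are assumptions; illustrative only):

    @Bean
    RedisClusterClientFactoryBean redisClusterClient() {
        RedisClusterClientFactoryBean factoryBean = new RedisClusterClientFactoryBean();
        factoryBean.setUri(URI.create("rediss://localhost:6379")); // rediss:// enables TLS
        factoryBean.setVerifyPeer(true);                           // only applied to TLS schemes
        return factoryBean;
    }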
+ */ +package io.lettuce.core.tracing; + +import java.net.InetSocketAddress; +import java.net.SocketAddress; +import java.util.function.Consumer; + +import reactor.core.publisher.Mono; +import brave.Span; +import brave.propagation.TraceContextOrSamplingFlags; +import io.lettuce.core.internal.LettuceAssert; + +/** + * {@link Tracing} integration with OpenZipkin's Brave {@link brave.Tracer}. This implementation creates Brave + * {@link brave.Span}s that are optionally associated with a {@link TraceContext}. + * + *

+ * <h3>{@link TraceContext} Propagation</h3>
+ * Redis commands can use a parent trace context to create a {@link io.lettuce.core.tracing.Tracer.Span} to trace the actual
+ * command. A parent {@link brave.Span} is picked up for imperative (synchronous/asynchronous) API usage from
+ * {@link brave.Tracing#currentTracer()}. The context is not propagated across asynchronous call chains resulting from
+ * {@link java.util.concurrent.CompletionStage} chaining.
+ * <p>
+ * Reactive API usage leverages Reactor's {@link reactor.util.context.Context} so that subscribers can register one of the
+ * following objects (using their {@link Class} as context key):
+ *
+ * <ol>
+ * <li>A {@link TraceContextProvider}</li>
+ * <li>A Brave {@link Span}: Commands extract the {@link brave.propagation.TraceContext}</li>
+ * <li>A Brave {@link brave.propagation.TraceContext}</li>
+ * </ol>
    + * + * If one of the context objects above is found, it's used to determine the parent context for the command {@link Span}. + * + * @author Mark Paluch + * @author Daniel Albuquerque + * @see brave.Tracer + * @see brave.Tracing#currentTracer() + * @see BraveTraceContextProvider + * @see #builder() + * @since 5.1 + */ +public class BraveTracing implements Tracing { + + private final BraveTracer tracer; + private final BraveTracingOptions tracingOptions; + private final boolean includeCommandArgsInSpanTags; + + /** + * Create a new {@link BraveTracing} instance. + * + * @param builder the {@link BraveTracing.Builder}. + */ + private BraveTracing(Builder builder) { + + LettuceAssert.notNull(builder.tracing, "Tracing must not be null"); + LettuceAssert.notNull(builder.serviceName, "Service name must not be null"); + + this.tracingOptions = new BraveTracingOptions(builder.serviceName, builder.endpointCustomizer, builder.spanCustomizer); + this.tracer = new BraveTracer(builder.tracing, this.tracingOptions); + this.includeCommandArgsInSpanTags = builder.includeCommandArgsInSpanTags; + } + + /** + * Create a new {@link BraveTracing} instance. + * + * @param tracing must not be {@literal null}. + * @return the {@link BraveTracing}. + */ + public static BraveTracing create(brave.Tracing tracing) { + return builder().tracing(tracing).build(); + } + + /** + * Create a new {@link Builder} to build {@link BraveTracing}. + * + * @return a new instance of {@link Builder}. + * @since 5.2 + */ + public static BraveTracing.Builder builder() { + return new BraveTracing.Builder(); + } + + /** + * Builder for {@link BraveTracing}. + * + * @since 5.2 + */ + public static class Builder { + + private brave.Tracing tracing; + private String serviceName = "redis"; + private Consumer endpointCustomizer = it -> { + }; + private Consumer spanCustomizer = it -> { + }; + private boolean includeCommandArgsInSpanTags = true; + + private Builder() { + } + + /** + * Sets the {@link Tracing}. + * + * @param tracing the Brave {@link brave.Tracing} object, must not be {@literal null}. + * @return {@code this} {@link Builder}. + */ + public Builder tracing(brave.Tracing tracing) { + + LettuceAssert.notNull(tracing, "Tracing must not be null!"); + + this.tracing = tracing; + return this; + } + + /** + * Sets the name used in the {@link zipkin2.Endpoint}. + * + * @param serviceName the name for the {@link zipkin2.Endpoint}, must not be {@literal null}. + * @return {@code this} {@link Builder}. + */ + public Builder serviceName(String serviceName) { + + LettuceAssert.notEmpty(serviceName, "Service name must not be null!"); + + this.serviceName = serviceName; + return this; + } + + /** + * Excludes command arguments from {@link Span} tags. Enabled by default. + * + * @return {@code this} {@link Builder}. + */ + public Builder excludeCommandArgsFromSpanTags() { + return includeCommandArgsInSpanTags(false); + } + + /** + * Controls the inclusion of command arguments in {@link Span} tags. Enabled by default. + * + * @param includeCommandArgsInSpanTags the flag to enable or disable the inclusion of command args in {@link Span} tags. + * @return {@code this} {@link Builder}. + */ + public Builder includeCommandArgsInSpanTags(boolean includeCommandArgsInSpanTags) { + + this.includeCommandArgsInSpanTags = includeCommandArgsInSpanTags; + return this; + } + + /** + * Sets an {@link zipkin2.Endpoint} customizer to customize the {@link zipkin2.Endpoint} through its + * {@link zipkin2.Endpoint.Builder}. 
The customizer is invoked before {@link zipkin2.Endpoint.Builder#build() building} + * the endpoint. + * + * @param endpointCustomizer must not be {@literal null}. + * @return {@code this} {@link Builder}. + */ + public Builder endpointCustomizer(Consumer endpointCustomizer) { + + LettuceAssert.notNull(endpointCustomizer, "Endpoint customizer must not be null!"); + + this.endpointCustomizer = endpointCustomizer; + return this; + } + + /** + * Sets an {@link brave.Span} customizer to customize the {@link brave.Span}. The customizer is invoked before + * {@link Span#finish()} finishing} the span. + * + * @param spanCustomizer must not be {@literal null}. + * @return {@code this} {@link Builder}. + */ + public Builder spanCustomizer(Consumer spanCustomizer) { + + LettuceAssert.notNull(spanCustomizer, "Span customizer must not be null!"); + + this.spanCustomizer = spanCustomizer; + return this; + } + + /** + * @return a new instance of {@link BraveTracing} + */ + public BraveTracing build() { + + LettuceAssert.notNull(this.tracing, "Brave Tracing must not be null!"); + + return new BraveTracing(this); + } + } + + @Override + public boolean isEnabled() { + return true; + } + + @Override + public boolean includeCommandArgsInSpanTags() { + return includeCommandArgsInSpanTags; + } + + @Override + public TracerProvider getTracerProvider() { + return () -> tracer; + } + + @Override + public TraceContextProvider initialTraceContextProvider() { + return BraveTraceContextProvider.INSTANCE; + } + + @Override + public Endpoint createEndpoint(SocketAddress socketAddress) { + + zipkin2.Endpoint.Builder builder = zipkin2.Endpoint.newBuilder().serviceName(tracingOptions.serviceName); + + if (socketAddress instanceof InetSocketAddress) { + + InetSocketAddress inetSocketAddress = (InetSocketAddress) socketAddress; + builder.ip(inetSocketAddress.getAddress()).port(inetSocketAddress.getPort()); + + tracingOptions.customizeEndpoint(builder); + + return new BraveEndpoint(builder.build()); + } + + tracingOptions.customizeEndpoint(builder); + return new BraveEndpoint(builder.build()); + } + + /** + * Brave-specific implementation of {@link Tracer}. + */ + static class BraveTracer extends Tracer { + + private final brave.Tracing tracing; + private final BraveTracingOptions tracingOptions; + + BraveTracer(brave.Tracing tracing, BraveTracingOptions tracingOptions) { + this.tracing = tracing; + this.tracingOptions = tracingOptions; + } + + @Override + public Span nextSpan() { + return postProcessSpan(tracing.tracer().nextSpan()); + } + + @Override + public Span nextSpan(TraceContext traceContext) { + + if (!(traceContext instanceof BraveTraceContext)) { + return nextSpan(); + } + + BraveTraceContext braveTraceContext = BraveTraceContext.class.cast(traceContext); + + if (braveTraceContext.traceContext == null) { + return nextSpan(); + } + + return postProcessSpan(tracing.tracer() + .nextSpan(TraceContextOrSamplingFlags.create(braveTraceContext.traceContext))); + } + + private Span postProcessSpan(brave.Span span) { + + if (span == null || span.isNoop()) { + return NoOpTracing.NoOpSpan.INSTANCE; + } + + return new BraveSpan(span.kind(brave.Span.Kind.CLIENT), this.tracingOptions); + } + } + + /** + * Brave-specific {@link io.lettuce.core.tracing.Tracer.Span}. 
+ */ + static class BraveSpan extends Tracer.Span { + + private final brave.Span span; + private final BraveTracingOptions tracingOptions; + + BraveSpan(Span span, BraveTracingOptions tracingOptions) { + this.span = span; + this.tracingOptions = tracingOptions; + } + + @Override + public BraveSpan start() { + + span.start(); + + return this; + } + + @Override + public BraveSpan name(String name) { + + span.name(name); + + return this; + } + + @Override + public BraveSpan annotate(String value) { + + span.annotate(value); + + return this; + } + + @Override + public BraveSpan tag(String key, String value) { + + span.tag(key, value); + + return this; + } + + @Override + public BraveSpan error(Throwable throwable) { + + span.error(throwable); + + return this; + } + + @Override + public BraveSpan remoteEndpoint(Endpoint endpoint) { + + span.remoteEndpoint(BraveEndpoint.class.cast(endpoint).endpoint); + + return this; + } + + @Override + public void finish() { + + this.tracingOptions.customizeSpan(span); + span.finish(); + } + + public brave.Span getSpan() { + return span; + } + } + + /** + * {@link Endpoint} implementation for Zipkin's {@link zipkin2.Endpoint}. + */ + public static class BraveEndpoint implements Endpoint { + + final zipkin2.Endpoint endpoint; + + public BraveEndpoint(zipkin2.Endpoint endpoint) { + this.endpoint = endpoint; + } + } + + /** + * {@link TraceContext} implementation for Brave's {@link brave.propagation.TraceContext}. + */ + public static class BraveTraceContext implements TraceContext { + + final brave.propagation.TraceContext traceContext; + + private BraveTraceContext(brave.propagation.TraceContext traceContext) { + this.traceContext = traceContext; + } + + public static BraveTraceContext create(brave.propagation.TraceContext traceContext) { + return new BraveTraceContext(traceContext); + } + } + + enum BraveTraceContextProvider implements TraceContextProvider { + INSTANCE; + + @Override + public TraceContext getTraceContext() { + + brave.Tracer tracer = brave.Tracing.currentTracer(); + + if (tracer != null) { + + Span span = tracer.currentSpan(); + + if (span != null) { + return new BraveTraceContext(span.context()); + } + } + return null; + } + + @Override + public Mono getTraceContextLater() { + + return Mono.subscriberContext() + .filter(it -> it.hasKey(Span.class) || it.hasKey(brave.propagation.TraceContext.class)).map(it -> { + + if (it.hasKey(Span.class)) { + return new BraveTraceContext(it.get(Span.class).context()); + } + + return new BraveTraceContext(it.get(brave.propagation.TraceContext.class)); + }); + } + } + + /** + * Value object encapsulating tracing options. 
+ * + * @author Mark Paluch + * @since 5.2 + */ + static class BraveTracingOptions { + + private final String serviceName; + private final Consumer endpointCustomizer; + private final Consumer spanCustomizer; + + BraveTracingOptions(String serviceName, Consumer endpointCustomizer, + Consumer spanCustomizer) { + this.serviceName = serviceName; + this.endpointCustomizer = endpointCustomizer; + this.spanCustomizer = spanCustomizer; + } + + void customizeEndpoint(zipkin2.Endpoint.Builder builder) { + this.endpointCustomizer.accept(builder); + } + + void customizeSpan(brave.Span span) { + this.spanCustomizer.accept(span); + } + } +} diff --git a/src/main/java/io/lettuce/core/tracing/NoOpTracing.java b/src/main/java/io/lettuce/core/tracing/NoOpTracing.java new file mode 100644 index 0000000000..21fdc73885 --- /dev/null +++ b/src/main/java/io/lettuce/core/tracing/NoOpTracing.java @@ -0,0 +1,122 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.tracing; + +import java.net.SocketAddress; + +/** + * No-Op {@link Tracing} support that does not trace at all. + * + * @author Mark Paluch + * @author Daniel Albuquerque + * @since 5.1 + */ +enum NoOpTracing implements Tracing, TraceContextProvider, TracerProvider { + + INSTANCE; + + private final Endpoint NOOP_ENDPOINT = new Endpoint() { + }; + + @Override + public TraceContext getTraceContext() { + return TraceContext.EMPTY; + } + + @Override + public Tracer getTracer() { + return NoOpTracer.INSTANCE; + } + + @Override + public TracerProvider getTracerProvider() { + return this; + } + + @Override + public TraceContextProvider initialTraceContextProvider() { + return this; + } + + @Override + public boolean isEnabled() { + return false; + } + + @Override + public boolean includeCommandArgsInSpanTags() { + return false; + } + + @Override + public Endpoint createEndpoint(SocketAddress socketAddress) { + return NOOP_ENDPOINT; + } + + static class NoOpTracer extends Tracer { + + static final Tracer INSTANCE = new NoOpTracer(); + + @Override + public Span nextSpan(TraceContext traceContext) { + return NoOpSpan.INSTANCE; + } + + @Override + public Span nextSpan() { + return NoOpSpan.INSTANCE; + } + } + + public static class NoOpSpan extends Tracer.Span { + + static final NoOpSpan INSTANCE = new NoOpSpan(); + + @Override + public Tracer.Span start() { + return this; + } + + @Override + public Tracer.Span name(String name) { + return this; + } + + @Override + public Tracer.Span annotate(String value) { + return this; + } + + @Override + public Tracer.Span tag(String key, String value) { + return this; + } + + @Override + public Tracer.Span error(Throwable throwable) { + return this; + } + + @Override + public Tracer.Span remoteEndpoint(Tracing.Endpoint endpoint) { + return this; + } + + @Override + public void finish() { + } + } +} diff --git a/src/main/java/io/lettuce/core/tracing/TraceContext.java b/src/main/java/io/lettuce/core/tracing/TraceContext.java new file 
mode 100644 index 0000000000..6e4accdf01 --- /dev/null +++ b/src/main/java/io/lettuce/core/tracing/TraceContext.java @@ -0,0 +1,29 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.tracing; + +/** + * Marker interface for a context propagation of parent and child spans. Subclasses may add their propagation metadata. + * + * @author Mark Paluch + * @since 5.1 + * + */ +public interface TraceContext { + + TraceContext EMPTY = new TraceContext() { + }; +} diff --git a/src/main/java/io/lettuce/core/tracing/TraceContextProvider.java b/src/main/java/io/lettuce/core/tracing/TraceContextProvider.java new file mode 100644 index 0000000000..9367682402 --- /dev/null +++ b/src/main/java/io/lettuce/core/tracing/TraceContextProvider.java @@ -0,0 +1,40 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.tracing; + +import reactor.core.publisher.Mono; + +/** + * Interface to obtain a {@link TraceContext} allowing propagation of {@link Tracer.Span} {@link TraceContext}s across threads. + * + * @author Mark Paluch + * @since 5.1 + */ +@FunctionalInterface +public interface TraceContextProvider { + + /** + * @return the {@link TraceContext}. + */ + TraceContext getTraceContext(); + + /** + * @return the {@link TraceContext}. + */ + default Mono getTraceContextLater() { + return Mono.justOrEmpty(getTraceContext()); + } +} diff --git a/src/main/java/io/lettuce/core/tracing/Tracer.java b/src/main/java/io/lettuce/core/tracing/Tracer.java new file mode 100644 index 0000000000..780a43aaf7 --- /dev/null +++ b/src/main/java/io/lettuce/core/tracing/Tracer.java @@ -0,0 +1,100 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.tracing; + +/** + * Tracing abstraction to create {@link Span}s to capture latency and behavior of Redis commands. + * + * @author Mark Paluch + * @since 5.1 + * @see Span + */ +public abstract class Tracer { + + /** + * Returns a new trace {@link Tracer.Span}. + * + * @return a new {@link Span}. + */ + public abstract Span nextSpan(); + + /** + * Returns a new trace {@link Tracer.Span} associated with {@link TraceContext} or a new one if {@link TraceContext} is + * {@literal null}. + * + * @param traceContext the trace context. + * @return a new {@link Span}. + */ + public abstract Span nextSpan(TraceContext traceContext); + + /** + * Used to model the latency of an operation along with tags such as name or the {@link Tracing.Endpoint}. + */ + public abstract static class Span { + + /** + * Starts the span with. + * + * @return {@literal this} {@link Span}. + */ + public abstract Span start(); + + /** + * Sets the name for this {@link Span}. + * + * @param name must not be {@literal null}. + * @return {@literal this} {@link Span}. + */ + public abstract Span name(String name); + + /** + * Associates an event that explains latency with the current system time. + * + * @param value A short tag indicating the event, like "finagle.retry" + */ + public abstract Span annotate(String value); + + /** + * Associates a tag with this {@link Span}. + * + * @param key must not be {@literal null}. + * @param value must not be {@literal null}. + * @return {@literal this} {@link Span}. + */ + public abstract Span tag(String key, String value); + + /** + * Associate an {@link Throwable error} with this {@link Span}. + * + * @param throwable must not be {@literal null}. + * @return + */ + public abstract Span error(Throwable throwable); + + /** + * Associates an {@link Tracing.Endpoint} with this {@link Span}. + * + * @param endpoint must not be {@literal null}. + * @return + */ + public abstract Span remoteEndpoint(Tracing.Endpoint endpoint); + + /** + * Reports the span complete. + */ + public abstract void finish(); + } +} diff --git a/src/main/java/io/lettuce/core/tracing/TracerProvider.java b/src/main/java/io/lettuce/core/tracing/TracerProvider.java new file mode 100644 index 0000000000..30316de649 --- /dev/null +++ b/src/main/java/io/lettuce/core/tracing/TracerProvider.java @@ -0,0 +1,31 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.tracing; + +/** + * Interface to obtain a {@link Tracer}. + * + * @author Mark Paluch + * @since 5.1 + */ +@FunctionalInterface +public interface TracerProvider { + + /** + * @return the {@link Tracer}. 
+ */ + Tracer getTracer(); +} diff --git a/src/main/java/io/lettuce/core/tracing/Tracing.java b/src/main/java/io/lettuce/core/tracing/Tracing.java new file mode 100644 index 0000000000..4e2e9298fa --- /dev/null +++ b/src/main/java/io/lettuce/core/tracing/Tracing.java @@ -0,0 +1,115 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.tracing; + +import java.net.SocketAddress; +import java.util.function.Function; + +import reactor.core.publisher.Mono; +import reactor.util.context.Context; + +/** + * Interface declaring methods to trace Redis commands. This interface contains declarations of basic required interfaces and + * value objects to represent traces, spans and metadata in an dependency-agnostic manner. + * + * @author Mark Paluch + * @author Daniel Albuquerque + * @since 5.1 + * @see TracerProvider + * @see TraceContextProvider + */ +public interface Tracing { + + /** + * @return the {@link TracerProvider}. + */ + TracerProvider getTracerProvider(); + + /** + * @return the {@link TraceContextProvider} supplying the initial {@link TraceContext} (i.e. if there is no active span). + */ + TraceContextProvider initialTraceContextProvider(); + + /** + * Returns {@literal true} if tracing is enabled. + * + * @return {@literal true} if tracing is enabled. + */ + boolean isEnabled(); + + /** + * Returns {@literal true} if tags for {@link Tracer.Span}s should include the command arguments. + * + * @return {@literal true} if tags for {@link Tracer.Span}s should include the command arguments. + * @since 5.2 + */ + boolean includeCommandArgsInSpanTags(); + + /** + * Create an {@link Endpoint} given {@link SocketAddress}. + * + * @param socketAddress the remote address. + * @return the {@link Endpoint} for {@link SocketAddress}. + */ + Endpoint createEndpoint(SocketAddress socketAddress); + + /** + * Returns a {@link TracerProvider} that is disabled. + * + * @return a disabled {@link TracerProvider}. + */ + static Tracing disabled() { + return NoOpTracing.INSTANCE; + } + + /** + * Gets the {@link TraceContextProvider} from Reactor {@link Context}. + * + * @return the {@link TraceContextProvider}. + */ + static Mono getContext() { + return Mono.subscriberContext().filter(c -> c.hasKey(TraceContextProvider.class)) + .map(c -> c.get(TraceContextProvider.class)); + } + + /** + * Clears the {@code Mono} from Reactor {@link Context}. + * + * @return Return a {@link Function} that clears the {@link TraceContextProvider} context. + */ + static Function clearContext() { + return context -> context.delete(TraceContextProvider.class); + } + + /** + * Creates a Reactor {@link Context} that contains the {@code Mono}. that can be merged into another + * {@link Context}. + * + * @param supplier the {@link TraceContextProvider} to set in the returned Reactor {@link Context}. + * @return a Reactor {@link Context} that contains the {@code Mono}. 
+ */ + static Context withTraceContextProvider(TraceContextProvider supplier) { + return Context.of(TraceContextProvider.class, supplier); + } + + /** + * Value object interface to represent an endpoint. Used by {@link Tracer.Span}. + * + * @since 5.1 + */ + interface Endpoint { + } +} diff --git a/src/main/javadoc/overview.html b/src/main/javadoc/overview.html index 461f3826c1..4a82900436 100644 --- a/src/main/javadoc/overview.html +++ b/src/main/javadoc/overview.html @@ -1,10 +1,10 @@ -lettuce is a scalable thread-safe Java -{@link com.lambdaworks.redis.RedisClient RedisClient} providing -{@link com.lambdaworks.redis.api.sync.RedisCommands synchronous}, -{@link com.lambdaworks.redis.api.async.RedisAsyncCommands asynchronous} and -{@link com.lambdaworks.redis.api.rx.RedisReactiveCommands reactive} APIs for Redis Standalone, PubSub, -Redis Sentinel and {@link com.lambdaworks.redis.cluster.RedisClusterClient Redis Cluster}. +Lettuce is a scalable thread-safe Java +{@link io.lettuce.core.RedisClient RedisClient} providing +{@link io.lettuce.core.api.sync.RedisCommands synchronous}, +{@link io.lettuce.core.api.async.RedisAsyncCommands asynchronous} and +{@link io.lettuce.core.api.reactive.RedisReactiveCommands reactive} APIs for Redis Standalone, PubSub, +Redis Sentinel and {@link io.lettuce.core.cluster.RedisClusterClient Redis Cluster}. Multiple threads may share one connection if they avoid blocking and transactional operations such as BLPOP and @@ -16,8 +16,8 @@ Each redis command is implemented by one or more methods with names identical to the lowercase redis command name. Complex commands with multiple modifiers that change the result type include the CamelCased modifier as part of the command name, e.g. - {@link com.lambdaworks.redis.api.sync.RedisCommands#zrangebyscore zrangebyscore} and - {@link com.lambdaworks.redis.api.sync.RedisCommands#zrangebyscoreWithScores zrangebyscoreWithScores}. + {@link io.lettuce.core.api.sync.RedisCommands#zrangebyscore zrangebyscore} and + {@link io.lettuce.core.api.sync.RedisCommands#zrangebyscoreWithScores zrangebyscoreWithScores}.

    @@ -27,10 +27,10 @@

-    All connections inherit a default timeout from their {@link com.lambdaworks.redis.RedisClient}
-    and will throw a {@link com.lambdaworks.redis.RedisException} when non-blocking commands fail
+    All connections inherit a default timeout from their {@link io.lettuce.core.RedisClient}
+    and will throw a {@link io.lettuce.core.RedisException} when non-blocking commands fail
     to return a result before the timeout expires. The timeout defaults to 60 seconds and
-    may be changed via {@link com.lambdaworks.redis.RedisClient#setDefaultTimeout} or for
+    may be changed via {@link io.lettuce.core.RedisClient#setDefaultTimeout} or for
     each individual connection.

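The tracing SPI added above is dependency-agnostic: a Tracing implementation supplies a TracerProvider and an initial TraceContextProvider, the Tracer opens Tracer.Span instances per Redis command, and NoOpTracing (returned by Tracing.disabled()) is the built-in no-op. The sketch below is a hedged illustration of how such a Tracing instance is plugged into a client; it is not part of this patch. It assumes the tracing(Tracing) method on the ClientResources builder and the BraveTracing adapter that accompany this SPI in Lettuce 5.1+, Brave on the classpath, and uses placeholder names for the service and the Redis URI.

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.resource.ClientResources;
import io.lettuce.core.resource.DefaultClientResources;
import io.lettuce.core.tracing.BraveTracing;
import io.lettuce.core.tracing.Tracing;

public class TracingWiringExample {

    public static void main(String[] args) {

        // Brave's Tracing is the vendor-specific tracer; BraveTracing adapts it to the
        // dependency-agnostic io.lettuce.core.tracing.Tracing SPI introduced in this patch.
        brave.Tracing braveTracing = brave.Tracing.newBuilder()
                .localServiceName("my-service") // placeholder service name
                .build();

        // Assumed wiring point: the tracing(Tracing) builder method, available since Lettuce 5.1.
        ClientResources resources = DefaultClientResources.builder()
                .tracing(BraveTracing.create(braveTracing))
                .build();

        RedisClient client = RedisClient.create(resources, "redis://localhost"); // placeholder URI

        // Commands issued through this client now report Tracer.Span instances carrying the
        // command name, optional argument tags and the remote Endpoint.
        client.connect().sync().ping();

        // Tracing.disabled() returns the NoOpTracing singleton and switches tracing off entirely.
        ClientResources untraced = DefaultClientResources.builder().tracing(Tracing.disabled()).build();

        client.shutdown();
        resources.shutdown();
        untraced.shutdown();
        braveTracing.close();
    }
}
```

Because the SPI consists only of the interfaces shown above, any custom Tracing implementation can be supplied at the same wiring point, which keeps Brave and Zipkin optional dependencies of lettuce-core.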
    diff --git a/src/main/javadoc/stylesheet.css b/src/main/javadoc/stylesheet.css new file mode 100644 index 0000000000..140baede56 --- /dev/null +++ b/src/main/javadoc/stylesheet.css @@ -0,0 +1,647 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +/* Javadoc style sheet */ +/* +Overall document style +@import url('resources/fonts/dejavu.css'); +*/ + +body { + background-color:#ffffff; + color:#353833; + font-family: 'Source Sans Pro', sans-serif; + font-size:15px; + line-height: 22px; + margin:0; +} + +.indexContainer ul li a:link, .indexContainer ul li a:visited{ + color:#34302d; +} + + +.marble { + width: 500px; +} + +a:link, a:visited { + text-decoration:none; + color:#6db33f; +} +a:hover, a:focus { + text-decoration:underline; + color:#6db33f; +} +a:active { + text-decoration:none; + color:#6db33f; +} +a[href^="http"] { + color: #16906a; +} + +a[name] { + color:#353833; +} +a[name]:hover { + text-decoration:none; + color:#353833; +} +pre { + font-family: Consolas, monospace; + font-size:14px; +} +h1 { + font-size:20px; +} +h2 { + font-size:18px; +} +h3 { + font-size:16px; + font-style:italic; +} +h4 { + font-size:13px; +} +h5 { + font-size:12px; +} +h6 { + font-size:11px; +} +ul { + list-style-type:disc; +} +code, tt { + font-family: Consolas, monospace; + font-size:14px; + padding-top:4px; + margin-top:8px; + line-height:1.4em; +} +dt code { + font-family:Consolas, monospace; + font-size:14px; + padding-top:4px; +} +table tr td dt code { + font-family:Consolas, monospace; + font-size:14px; + vertical-align:top; + padding-top:4px; +} +sup { + font-size:8px; +} +/* +Document title and Copyright styles +*/ +.clear { + clear:both; + height:0px; + overflow:hidden; +} +.aboutLanguage { + position: absolute; + right: 2em; + top: 0.5em; + background: url(https://lettuce.io/assets/img/apple-touch-icon-180.png) no-repeat 0 0; + background-size: 40px 40px; + height: 40px; + width: 40px; + text-indent: -6000em; +} +.legalCopy { + margin-left:.5em; +} +.bar a, .bar a:link, .bar a:visited, .bar a:active { + color:#FFFFFF; + text-decoration:none; +} +.bar a:hover, .bar a:focus { + color:#bb7a2a; +} +.tab { + background-color:#0066FF; + color:#ffffff; + padding:8px; + width:5em; + font-weight:bold; +} +/* +Navigation bar styles +*/ +.bar { + background-color:#34302d; + color:#FFFFFF; + padding:.8em .5em .4em .8em; + height:auto;/*height:1.8em;*/ + font-size:11px; + margin:0; +} +h1.bar { + line-height: 30px; + padding: 5px 8px; + text-transform: uppercase; + font-size: 12Px; +} +.topNav { + position: relative; + background-color: #34302d; + color:#FFFFFF; + float:left; + padding:0; + width:100%; + clear:right; + padding-top:10px; + padding-bottom:10px; + overflow:hidden; + font-size:12px; +} +.bottomNav { + position: relative; + margin-top:10px; + background-color:#34302d; + color:#FFFFFF; + float:left; + padding:0; + width:100%; + clear:right; + padding-top:10px; + padding-bottom:10px; + overflow:hidden; + 
font-size:12px; +} +.subNav { + background-color:#423e3c; + overflow:hidden; + font-size:12px; + color: white; + font-weight: bold; + padding: 0 10px; +} +.subNav div { + clear:left; + float:left; + text-transform:uppercase; + line-height: 40px; +} +ul.navList, ul.subNavList { + float:left; + margin:0 25px 0 0; + padding:0 15px; +} +ul.navList li{ + list-style:none; + float:left; + padding: 5px 6px; + text-transform:uppercase; +} +ul.subNavList li{ + list-style:none; + float:left; +} +.topNav a:link, .topNav a:active, .topNav a:visited, .bottomNav a:link, .bottomNav a:active, .bottomNav a:visited { + color:#FFFFFF; + text-decoration:none; + text-transform:uppercase; +} +.topNav a:hover, .bottomNav a:hover { + text-decoration:none; + color:#6db33f; + text-transform:uppercase; +} +.navBarCell1Rev { + background-color:#6db33f; + color:#FFFFFF; + font-weight: bold; + margin: auto 5px; +} +.skipNav { + position:absolute; + top:auto; + left:-9999px; + overflow:hidden; +} +/* +Page header and footer styles +*/ +.header, .footer { + clear:both; + margin:0 40px; + padding:5px 0 0 0; +} +.indexHeader { + margin:10px; + position:relative; +} +.indexHeader span{ + margin-right:15px; +} +.indexHeader h1 { + font-size:13px; +} +.title { + color:#34302d; + margin:10px 0; + +} + +.header h1.title { + padding-top: 20px; +} +.subTitle { + margin:5px 0 0 0; +} +.header ul { + margin:0 0 15px 0; + padding:0; +} +.footer ul { + margin:20px 0 5px 0; +} +.header ul li, .footer ul li { + list-style:none; + font-size:13px; +} +/* +Heading styles +*/ +div.details ul.blockList ul.blockList ul.blockList li.blockList h4, div.details ul.blockList ul.blockList ul.blockListLast li.blockList h4 { + background-color:#EEE; + /*border:1px solid #d0d9e0;*/ + margin:0 0 6px -8px; + padding:7px 5px; +} +ul.blockList ul.blockList ul.blockList li.blockList h3 { + background-color:#f1f1f1; + /*border:1px solid #d0d9e0;*/ + margin:0 0 6px -8px; + padding:7px 5px; +} +ul.blockList ul.blockList li.blockList h3 { + padding:0; + margin:15px 0; +} +ul.blockList li.blockList h2 { + padding:0px 0 20px 0; +} +/* +Page layout container styles +*/ +.contentContainer, .sourceContainer, .classUseContainer, .serializedFormContainer, .constantValuesContainer { + clear:both; + padding:10px 40px; + position:relative; +} +.indexContainer { + margin:10px; + position:relative; + font-size:12px; +} +.indexContainer h2 { + font-size:13px; + padding:0 0 3px 0; + margin: 0; +} +.indexContainer ul { + margin:0; + padding:0; +} +.indexContainer ul li { + list-style:none; + padding-top:2px; +} +.indexContainer ul li a { + color: #353833; + font-size: 14px; +} +.indexContainer ul li a:hover { + color: #6db33f; +} +.contentContainer .description dl dt, .contentContainer .details dl dt, .serializedFormContainer dl dt { + font-size:14px; + font-weight:bold; + margin:10px 0 0 0; + color:#4E4E4E; +} +.contentContainer .description dl dd, .contentContainer .details dl dd, .serializedFormContainer dl dd { + margin:5px 0 10px 0px; + font-size:15px; + font-family: 'Source Sans Pro', sans-serif; +} +.serializedFormContainer dl.nameValue dt { + margin-left:1px; + font-size:1.1em; + display:inline; + font-weight:bold; +} +.serializedFormContainer dl.nameValue dd { + margin:0 0 0 1px; + font-size:1.1em; + display:inline; +} +/* +List styles +*/ +ul.horizontal li { + display:inline; + font-size:0.9em; +} +ul.inheritance { + margin:0; + padding:0; +} +ul.inheritance li { + display:inline; + list-style:none; +} +ul.inheritance li ul.inheritance { + margin-left:15px; + 
padding-left:15px; + padding-top:1px; +} +ul.blockList, ul.blockListLast { + margin:10px 0 10px 0; + padding:0; +} +ul.blockList li.blockList, ul.blockListLast li.blockList { + list-style:none; + margin-bottom:15px; + line-height:1.4; +} +ul.blockList ul.blockList li.blockList, ul.blockList ul.blockListLast li.blockList { + padding:0px 20px 5px 10px; + border:1px solid #ededed; + background-color:#f8f8f8; +} +ul.blockList ul.blockList ul.blockList li.blockList, ul.blockList ul.blockList ul.blockListLast li.blockList { + padding:0 0 5px 8px; + background-color:#ffffff; + border:none; +} +ul.blockList ul.blockList ul.blockList ul.blockList li.blockList { + margin-left:0; + padding-left:0; + padding-bottom:15px; + border:none; +} +ul.blockList ul.blockList ul.blockList ul.blockList li.blockListLast { + list-style:none; + border-bottom:none; + padding-bottom:0; +} +table tr td dl, table tr td dl dt, table tr td dl dd { + margin-top:0; + margin-bottom:1px; +} +/* +Table styles +*/ +.overviewSummary, .memberSummary, .typeSummary, .useSummary, .constantsSummary, .deprecatedSummary { + width:100%; + border-left:1px solid #EEE; + border-right:1px solid #EEE; + border-bottom:1px solid #EEE; +} +.overviewSummary, .memberSummary { + padding:0px; +} +.overviewSummary caption, .memberSummary caption, .typeSummary caption, +.useSummary caption, .constantsSummary caption, .deprecatedSummary caption { + position:relative; + text-align:left; + background-repeat:no-repeat; + color:#253441; + font-weight:bold; + clear:none; + overflow:hidden; + padding:0px; + padding-top:10px; + padding-left:1px; + margin:0px; + white-space:pre; +} +.overviewSummary caption a:link, .memberSummary caption a:link, .typeSummary caption a:link, +.useSummary caption a:link, .constantsSummary caption a:link, .deprecatedSummary caption a:link, +.overviewSummary caption a:hover, .memberSummary caption a:hover, .typeSummary caption a:hover, +.useSummary caption a:hover, .constantsSummary caption a:hover, .deprecatedSummary caption a:hover, +.overviewSummary caption a:active, .memberSummary caption a:active, .typeSummary caption a:active, +.useSummary caption a:active, .constantsSummary caption a:active, .deprecatedSummary caption a:active, +.overviewSummary caption a:visited, .memberSummary caption a:visited, .typeSummary caption a:visited, +.useSummary caption a:visited, .constantsSummary caption a:visited, .deprecatedSummary caption a:visited { + color:#FFFFFF; +} +.overviewSummary caption span, .memberSummary caption span, .typeSummary caption span, +.useSummary caption span, .constantsSummary caption span, .deprecatedSummary caption span { + white-space:nowrap; + padding-top:5px; + padding-left:12px; + padding-right:12px; + padding-bottom:7px; + display:inline-block; + float:left; + background-color:#6db33f; + border: none; + color: white; +} +.memberSummary caption span.activeTableTab span { + white-space:nowrap; + padding-top:5px; + padding-left:12px; + padding-right:12px; + margin-right:3px; + display:inline-block; + float:left; + background-color:#6db33f; + color: white; + font-size: 13px; +} +.memberSummary caption span.tableTab span { + white-space:nowrap; + padding: 5px 12px; + margin-right:3px; + display:inline-block; + float:left; + background-color:#e0e0e0; +} +.memberSummary caption span.tableTab span a { + color: #353833; + font-size: 13px; +} + +.memberSummary caption span.tableTab, .memberSummary caption span.activeTableTab { + padding-top:0px; + padding-left:0px; + padding-right:0px; + background-image:none; + 
float:none; + display:inline; +} +.overviewSummary .tabEnd, .memberSummary .tabEnd, .typeSummary .tabEnd, +.useSummary .tabEnd, .constantsSummary .tabEnd, .deprecatedSummary .tabEnd { + display:none; + width:5px; + position:relative; + float:left; + background-color:#6db33f; + color: white; +} +.memberSummary .activeTableTab .tabEnd { + display:none; + width:5px; + margin-right:3px; + position:relative; + float:left; + background-color:#6db33f; + color: white; +} +.memberSummary .tableTab .tabEnd { + display:none; + width:5px; + margin-right:3px; + position:relative; + background-color:#4D7A97; + float:left; + +} + +.overviewSummary td, .memberSummary td, .typeSummary td, +.useSummary td, .constantsSummary td, .deprecatedSummary td { + text-align:left; + padding:0px 0px 12px 10px; +} +th.colOne, th.colFirst, th.colLast, .useSummary th, .constantsSummary th, +td.colOne, td.colFirst, td.colLast, .useSummary td, .constantsSummary td{ + vertical-align:top; + padding-right:0px; + padding-top:8px; + padding-bottom: 8px; +} +th.colFirst, th.colLast, th.colOne, .constantsSummary th { + background:#f1f1f1; + text-align:left; + padding:8px; +} +td.colFirst, th.colFirst { + white-space:nowrap; + font-size:13px; +} +td.colLast, th.colLast { + font-size:13px; +} +td.colOne, th.colOne { + font-size:13px; +} +.overviewSummary td.colFirst, .overviewSummary th.colFirst, +.useSummary td.colFirst, .useSummary th.colFirst, +.overviewSummary td.colOne, .overviewSummary th.colOne, +.memberSummary td.colFirst, .memberSummary th.colFirst, +.memberSummary td.colOne, .memberSummary th.colOne, +.typeSummary td.colFirst{ + width:25%; + vertical-align:top; +} +td.colOne a:link, td.colOne a:active, td.colOne a:visited, td.colOne a:hover, td.colFirst a:link, td.colFirst a:active, td.colFirst a:visited, td.colFirst a:hover, td.colLast a:link, td.colLast a:active, td.colLast a:visited, td.colLast a:hover, .constantValuesContainer td a:link, .constantValuesContainer td a:active, .constantValuesContainer td a:visited, .constantValuesContainer td a:hover { + font-weight:bold; +} +.tableSubHeadingColor { + background-color:#EEEEFF; +} +.altColor { + background-color:#FFFFFF; +} +.rowColor { + background-color:#F1F1F1; +} +/* +Content styles +*/ +.description pre { + margin-top:0; +} +.deprecatedContent { + margin:0; + padding:10px 0; +} +.docSummary { + padding:0; +} + +ul.blockList ul.blockList ul.blockList li.blockList h3 { + font-style:normal; +} + +div.block { + font-size:15px; + font-family:'Source Sans Pro', sans-serif; +} + +td.colLast div { + padding-top:0px; +} + + +td.colLast a { + padding-bottom:3px; +} +/* +Formatting effect styles +*/ +.sourceLineNo { + color:green; + padding:0 30px 0 0; +} +h1.hidden { + visibility:hidden; + overflow:hidden; + font-size:10px; +} +.block { + display:block; + margin:3px 10px 2px 0px; + color:#474747; +} +.deprecatedLabel, .descfrmTypeLabel, .memberNameLabel, .memberNameLink, +.overrideSpecifyLabel, .packageHierarchyLabel, .paramLabel, .returnLabel, +.seeLabel, .simpleTagLabel, .throwsLabel, .typeNameLabel, .typeNameLink { + font-weight:bold; +} +.deprecationComment, .emphasizedPhrase, .interfaceName { + font-style:italic; +} + +div.block div.block span.deprecationComment, div.block div.block span.emphasizedPhrase, +div.block div.block span.interfaceName { + font-style:normal; +} + +div.contentContainer ul.blockList li.blockList h2{ + padding-bottom:0px; +} + +hr { + display: block; + -webkit-margin-before: 0.5em; + -webkit-margin-after: 0.5em; + -webkit-margin-start: auto; + 
-webkit-margin-end: auto; + border-style: inset; + border-width: 0; + border-bottom: 1px solid #d0d9e0; +} diff --git a/src/main/resources/META-INF/LICENSE b/src/main/resources/META-INF/LICENSE index f433b1a53f..ff9ad4530f 100644 --- a/src/main/resources/META-INF/LICENSE +++ b/src/main/resources/META-INF/LICENSE @@ -175,3 +175,28 @@ of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + https://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/src/main/resources/META-INF/NOTICE b/src/main/resources/META-INF/NOTICE new file mode 100644 index 0000000000..ffffbcdf64 --- /dev/null +++ b/src/main/resources/META-INF/NOTICE @@ -0,0 +1,11 @@ +Lettuce Java Redis Client ${version} +Copyright (c) 2011-2020 Mark Paluch + +This product is licensed to you under the Apache License, Version 2.0 +(the "License"). You may not use this product except in compliance with +the License. + +This product may include a number of subcomponents with separate +copyright notices and license terms. Your use of the source code for +these subcomponents is subject to the terms and conditions of the +subcomponent's license, as noted in the license file. diff --git a/src/main/resources/META-INF/services/javax.enterprise.inject.spi.Extension b/src/main/resources/META-INF/services/javax.enterprise.inject.spi.Extension index d5f09da89b..b6c82fbdad 100644 --- a/src/main/resources/META-INF/services/javax.enterprise.inject.spi.Extension +++ b/src/main/resources/META-INF/services/javax.enterprise.inject.spi.Extension @@ -1 +1 @@ -com.lambdaworks.redis.support.LettuceCdiExtension +io.lettuce.core.support.LettuceCdiExtension diff --git a/src/main/templates/com/lambdaworks/redis/api/BaseRedisCommands.java b/src/main/templates/com/lambdaworks/redis/api/BaseRedisCommands.java deleted file mode 100644 index d6b280fa16..0000000000 --- a/src/main/templates/com/lambdaworks/redis/api/BaseRedisCommands.java +++ /dev/null @@ -1,152 +0,0 @@ -package com.lambdaworks.redis; - -import java.util.List; -import java.util.Map; - -import com.lambdaworks.redis.protocol.CommandArgs; -import com.lambdaworks.redis.protocol.ProtocolKeyword; -import com.lambdaworks.redis.output.CommandOutput; - -/** - * - * ${intent} for basic commands. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - */ -public interface BaseRedisCommands extends AutoCloseable { - - /** - * Post a message to a channel. 
- * - * @param channel the channel type: key - * @param message the message type: value - * @return Long integer-reply the number of clients that received the message. - */ - Long publish(K channel, V message); - - /** - * Lists the currently *active channels*. - * - * @return List<K> array-reply a list of active channels, optionally matching the specified pattern. - */ - List pubsubChannels(); - - /** - * Lists the currently *active channels*. - * - * @param channel the key - * @return List<K> array-reply a list of active channels, optionally matching the specified pattern. - */ - List pubsubChannels(K channel); - - /** - * Returns the number of subscribers (not counting clients subscribed to patterns) for the specified channels. - * - * @param channels channel keys - * @return array-reply a list of channels and number of subscribers for every channel. - */ - Map pubsubNumsub(K... channels); - - /** - * Returns the number of subscriptions to patterns. - * - * @return Long integer-reply the number of patterns all the clients are subscribed to. - */ - Long pubsubNumpat(); - - /** - * Echo the given string. - * - * @param msg the message type: value - * @return V bulk-string-reply - */ - V echo(V msg); - - /** - * Return the role of the instance in the context of replication. - * - * @return List<Object> array-reply where the first element is one of master, slave, sentinel and the additional - * elements are role-specific. - */ - List role(); - - /** - * Ping the server. - * - * @return String simple-string-reply - */ - String ping(); - - /** - * Switch connection to Read-Only mode when connecting to a cluster. - * - * @return String simple-string-reply. - */ - String readOnly(); - - /** - * Switch connection to Read-Write mode (default) when connecting to a cluster. - * - * @return String simple-string-reply. - */ - String readWrite(); - - /** - * Close the connection. - * - * @return String simple-string-reply always OK. - */ - String quit(); - - /** - * Wait for replication. - * - * @param replicas minimum number of replicas - * @param timeout timeout in milliseconds - * @return number of replicas - */ - Long waitForReplication(int replicas, long timeout); - - /** - * Dispatch a command to the Redis Server. Please note the command output type must fit to the command response. - * - * @param type the command, must not be {@literal null}. - * @param output the command output, must not be {@literal null}. - * @param response type - * @return the command response - */ - T dispatch(ProtocolKeyword type, CommandOutput output); - - /** - * Dispatch a command to the Redis Server. Please note the command output type must fit to the command response. - * - * @param type the command, must not be {@literal null}. - * @param output the command output, must not be {@literal null}. - * @param args the command arguments, must not be {@literal null}. - * @param response type - * @return the command response - */ - T dispatch(ProtocolKeyword type, CommandOutput output, CommandArgs args); - - /** - * Close the connection. The connection will become not usable anymore as soon as this method was called. - */ - @Override - void close(); - - /** - * - * @return true if the connection is open (connected and not closed). - */ - boolean isOpen(); - - /** - * Reset the command state. Queued commands will be canceled and the internal state will be reset. This is useful when the - * internal state machine gets out of sync with the connection. 
- */ - void reset(); - -} diff --git a/src/main/templates/com/lambdaworks/redis/api/RedisGeoCommands.java b/src/main/templates/com/lambdaworks/redis/api/RedisGeoCommands.java deleted file mode 100644 index 583599f7af..0000000000 --- a/src/main/templates/com/lambdaworks/redis/api/RedisGeoCommands.java +++ /dev/null @@ -1,151 +0,0 @@ -package com.lambdaworks.redis; - -import com.lambdaworks.redis.GeoArgs; -import com.lambdaworks.redis.GeoCoordinates; -import com.lambdaworks.redis.GeoRadiusStoreArgs; -import com.lambdaworks.redis.GeoWithin; -import java.util.List; -import java.util.Set; - -/** - * ${intent} for the Geo-API. - * - * @author Mark Paluch - * @since 4.0 - */ -public interface RedisGeoCommands { - - /** - * Single geo add. - * - * @param key the key of the geo set - * @param longitude the longitude coordinate according to WGS84 - * @param latitude the latitude coordinate according to WGS84 - * @param member the member to add - * @return Long integer-reply the number of elements that were added to the set - */ - Long geoadd(K key, double longitude, double latitude, V member); - - /** - * Multi geo add. - * - * @param key the key of the geo set - * @param lngLatMember triplets of double longitude, double latitude and V member - * @return Long integer-reply the number of elements that were added to the set - */ - Long geoadd(K key, Object... lngLatMember); - - /** - * Retrieve Geohash strings representing the position of one or more elements in a sorted set value representing a geospatial index. - * - * @param key the key of the geo set - * @param members the members - * @return bulk reply Geohash strings in the order of {@code members}. Returns {@literal null} if a member is not found. - */ - List geohash(K key, V... members); - - /** - * Retrieve members selected by distance with the center of {@code longitude} and {@code latitude}. - * - * @param key the key of the geo set - * @param longitude the longitude coordinate according to WGS84 - * @param latitude the latitude coordinate according to WGS84 - * @param distance radius distance - * @param unit distance unit - * @return bulk reply - */ - Set georadius(K key, double longitude, double latitude, double distance, GeoArgs.Unit unit); - - /** - * Retrieve members selected by distance with the center of {@code longitude} and {@code latitude}. - * - * @param key the key of the geo set - * @param longitude the longitude coordinate according to WGS84 - * @param latitude the latitude coordinate according to WGS84 - * @param distance radius distance - * @param unit distance unit - * @param geoArgs args to control the result - * @return nested multi-bulk reply. The {@link GeoWithin} contains only fields which were requested by {@link GeoArgs} - */ - List> georadius(K key, double longitude, double latitude, double distance, GeoArgs.Unit unit, GeoArgs geoArgs); - - /** - * Perform a {@link #georadius(Object, double, double, double, Unit, GeoArgs)} query and store the results in a sorted set. - * - * @param key the key of the geo set - * @param longitude the longitude coordinate according to WGS84 - * @param latitude the latitude coordinate according to WGS84 - * @param distance radius distance - * @param unit distance unit - * @param geoRadiusStoreArgs args to store either the resulting elements with their distance or the resulting elements with - * their locations a sorted set. 
- * @return Long integer-reply the number of elements in the result - */ - Long georadius(K key, double longitude, double latitude, double distance, GeoArgs.Unit unit, GeoRadiusStoreArgs geoRadiusStoreArgs); - - /** - * Retrieve members selected by distance with the center of {@code member}. The member itself is always contained in the - * results. - * - * @param key the key of the geo set - * @param member reference member - * @param distance radius distance - * @param unit distance unit - * @return set of members - */ - Set georadiusbymember(K key, V member, double distance, GeoArgs.Unit unit); - - /** - * - * Retrieve members selected by distance with the center of {@code member}. The member itself is always contained in the - * results. - * - * @param key the key of the geo set - * @param member reference member - * @param distance radius distance - * @param unit distance unit - * @param geoArgs args to control the result - * @return nested multi-bulk reply. The {@link GeoWithin} contains only fields which were requested by {@link GeoArgs} - */ - List> georadiusbymember(K key, V member, double distance, GeoArgs.Unit unit, GeoArgs geoArgs); - - /** - * Perform a {@link #georadiusbymember(Object, Object, double, Unit, GeoArgs)} query and store the results in a sorted set. - * - * @param key the key of the geo set - * @param member reference member - * @param distance radius distance - * @param unit distance unit - * @param geoRadiusStoreArgs args to store either the resulting elements with their distance or the resulting elements with - * their locations a sorted set. - * @return Long integer-reply the number of elements in the result - */ - Long georadiusbymember(K key, V member, double distance, GeoArgs.Unit unit, GeoRadiusStoreArgs geoRadiusStoreArgs); - - /** - * Get geo coordinates for the {@code members}. - * - * @param key the key of the geo set - * @param members the members - * - * @return a list of {@link GeoCoordinates}s representing the x,y position of each element specified in the arguments. For - * missing elements {@literal null} is returned. - */ - List geopos(K key, V... members); - - /** - * - * Retrieve distance between points {@code from} and {@code to}. If one or more elements are missing {@literal null} is - * returned. Default in meters by, otherwise according to {@code unit} - * - * @param key the key of the geo set - * @param from from member - * @param to to member - * @param unit distance unit - * - * @return distance between points {@code from} and {@code to}. If one or more elements are missing {@literal null} is - * returned. - */ - Double geodist(K key, V from, V to, GeoArgs.Unit unit); - -} diff --git a/src/main/templates/com/lambdaworks/redis/api/RedisHLLCommands.java b/src/main/templates/com/lambdaworks/redis/api/RedisHLLCommands.java deleted file mode 100644 index a83f8bff4b..0000000000 --- a/src/main/templates/com/lambdaworks/redis/api/RedisHLLCommands.java +++ /dev/null @@ -1,46 +0,0 @@ -package com.lambdaworks.redis.api; - -/** - * ${intent} for HyperLogLog (PF* commands). - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 3.0 - */ -public interface RedisHLLCommands { - - /** - * Adds the specified elements to the specified HyperLogLog. - * - * @param key the key - * @param values the values - * - * @return Long integer-reply specifically: - * - * 1 if at least 1 HyperLogLog internal register was altered. 0 otherwise. - */ - Long pfadd(K key, V... values); - - /** - * Merge N different HyperLogLogs into a single one. 
- * - * @param destkey the destination key - * @param sourcekeys the source key - * - * @return String simple-string-reply The command just returns {@code OK}. - */ - String pfmerge(K destkey, K... sourcekeys); - - /** - * Return the approximated cardinality of the set(s) observed by the HyperLogLog at key(s). - * - * @param keys the keys - * - * @return Long integer-reply specifically: - * - * The approximated number of unique elements observed via {@code PFADD}. - */ - Long pfcount(K... keys); - -} diff --git a/src/main/templates/com/lambdaworks/redis/api/RedisHashCommands.java b/src/main/templates/com/lambdaworks/redis/api/RedisHashCommands.java deleted file mode 100644 index e180cd2c81..0000000000 --- a/src/main/templates/com/lambdaworks/redis/api/RedisHashCommands.java +++ /dev/null @@ -1,279 +0,0 @@ -package com.lambdaworks.redis.api; - -import java.util.List; -import java.util.Map; - -import com.lambdaworks.redis.MapScanCursor; -import com.lambdaworks.redis.ScanArgs; -import com.lambdaworks.redis.ScanCursor; -import com.lambdaworks.redis.StreamScanCursor; -import com.lambdaworks.redis.output.KeyStreamingChannel; -import com.lambdaworks.redis.output.KeyValueStreamingChannel; -import com.lambdaworks.redis.output.ValueStreamingChannel; - -/** - * ${intent} for Hashes (Key-Value pairs). - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - */ -public interface RedisHashCommands { - - /** - * Delete one or more hash fields. - * - * @param key the key - * @param fields the field type: key - * @return Long integer-reply the number of fields that were removed from the hash, not including specified but non existing - * fields. - */ - Long hdel(K key, K... fields); - - /** - * Determine if a hash field exists. - * - * @param key the key - * @param field the field type: key - * @return Boolean integer-reply specifically: - * - * {@literal true} if the hash contains {@code field}. {@literal false} if the hash does not contain {@code field}, - * or {@code key} does not exist. - */ - Boolean hexists(K key, K field); - - /** - * Get the value of a hash field. - * - * @param key the key - * @param field the field type: key - * @return V bulk-string-reply the value associated with {@code field}, or {@literal null} when {@code field} is not present - * in the hash or {@code key} does not exist. - */ - V hget(K key, K field); - - /** - * Increment the integer value of a hash field by the given number. - * - * @param key the key - * @param field the field type: key - * @param amount the increment type: long - * @return Long integer-reply the value at {@code field} after the increment operation. - */ - Long hincrby(K key, K field, long amount); - - /** - * Increment the float value of a hash field by the given amount. - * - * @param key the key - * @param field the field type: key - * @param amount the increment type: double - * @return Double bulk-string-reply the value of {@code field} after the increment. - */ - Double hincrbyfloat(K key, K field, double amount); - - /** - * Get all the fields and values in a hash. - * - * @param key the key - * @return Map<K,V> array-reply list of fields and their values stored in the hash, or an empty list when {@code key} - * does not exist. - */ - Map hgetall(K key); - - /** - * Stream over all the fields and values in a hash. - * - * @param channel the channel - * @param key the key - * - * @return Long count of the keys. - */ - Long hgetall(KeyValueStreamingChannel channel, K key); - - /** - * Get all the fields in a hash. 
- * - * @param key the key - * @return List<K> array-reply list of fields in the hash, or an empty list when {@code key} does not exist. - */ - List hkeys(K key); - - /** - * Stream over all the fields in a hash. - * - * @param channel the channel - * @param key the key - * - * @return Long count of the keys. - */ - Long hkeys(KeyStreamingChannel channel, K key); - - /** - * Get the number of fields in a hash. - * - * @param key the key - * @return Long integer-reply number of fields in the hash, or {@code 0} when {@code key} does not exist. - */ - Long hlen(K key); - - /** - * Get the values of all the given hash fields. - * - * @param key the key - * @param fields the field type: key - * @return List<V> array-reply list of values associated with the given fields, in the same - */ - List hmget(K key, K... fields); - - /** - * Stream over the values of all the given hash fields. - * - * @param channel the channel - * @param key the key - * @param fields the fields - * - * @return Long count of the keys - */ - Long hmget(ValueStreamingChannel channel, K key, K... fields); - - /** - * Set multiple hash fields to multiple values. - * - * @param key the key - * @param map the null - * @return String simple-string-reply - */ - String hmset(K key, Map map); - - /** - * Incrementally iterate hash fields and associated values. - * - * @param key the key - * @return MapScanCursor<K, V> map scan cursor. - */ - MapScanCursor hscan(K key); - - /** - * Incrementally iterate hash fields and associated values. - * - * @param key the key - * @param scanArgs scan arguments - * @return MapScanCursor<K, V> map scan cursor. - */ - MapScanCursor hscan(K key, ScanArgs scanArgs); - - /** - * Incrementally iterate hash fields and associated values. - * - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @param scanArgs scan arguments - * @return MapScanCursor<K, V> map scan cursor. - */ - MapScanCursor hscan(K key, ScanCursor scanCursor, ScanArgs scanArgs); - - /** - * Incrementally iterate hash fields and associated values. - * - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @return MapScanCursor<K, V> map scan cursor. - */ - MapScanCursor hscan(K key, ScanCursor scanCursor); - - /** - * Incrementally iterate hash fields and associated values. - * - * @param channel streaming channel that receives a call for every key-value pair - * @param key the key - * @return StreamScanCursor scan cursor. - */ - StreamScanCursor hscan(KeyValueStreamingChannel channel, K key); - - /** - * Incrementally iterate hash fields and associated values. - * - * @param channel streaming channel that receives a call for every key-value pair - * @param key the key - * @param scanArgs scan arguments - * @return StreamScanCursor scan cursor. - */ - StreamScanCursor hscan(KeyValueStreamingChannel channel, K key, ScanArgs scanArgs); - - /** - * Incrementally iterate hash fields and associated values. - * - * @param channel streaming channel that receives a call for every key-value pair - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @param scanArgs scan arguments - * @return StreamScanCursor scan cursor. - */ - StreamScanCursor hscan(KeyValueStreamingChannel channel, K key, ScanCursor scanCursor, ScanArgs scanArgs); - - /** - * Incrementally iterate hash fields and associated values. 
- * - * @param channel streaming channel that receives a call for every key-value pair - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @return StreamScanCursor scan cursor. - */ - StreamScanCursor hscan(KeyValueStreamingChannel channel, K key, ScanCursor scanCursor); - - /** - * Set the string value of a hash field. - * - * @param key the key - * @param field the field type: key - * @param value the value - * @return Boolean integer-reply specifically: - * - * {@literal true} if {@code field} is a new field in the hash and {@code value} was set. {@literal false} if - * {@code field} already exists in the hash and the value was updated. - */ - Boolean hset(K key, K field, V value); - - /** - * Set the value of a hash field, only if the field does not exist. - * - * @param key the key - * @param field the field type: key - * @param value the value - * @return Boolean integer-reply specifically: - * - * {@code 1} if {@code field} is a new field in the hash and {@code value} was set. {@code 0} if {@code field} - * already exists in the hash and no operation was performed. - */ - Boolean hsetnx(K key, K field, V value); - - /** - * Get the string length of the field value in a hash. - * - * @param key the key - * @param field the field type: key - * @return Long integer-reply the string length of the {@code field} value, or {@code 0} when {@code field} is not present - * in the hash or {@code key} does not exist at all. - */ - Long hstrlen(K key, K field); - - /** - * Get all the values in a hash. - * - * @param key the key - * @return List<V> array-reply list of values in the hash, or an empty list when {@code key} does not exist. - */ - List hvals(K key); - - /** - * Stream over all the values in a hash. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * - * @return Long count of the keys. - */ - Long hvals(ValueStreamingChannel channel, K key); -} diff --git a/src/main/templates/com/lambdaworks/redis/api/RedisKeyCommands.java b/src/main/templates/com/lambdaworks/redis/api/RedisKeyCommands.java deleted file mode 100644 index e835b6c082..0000000000 --- a/src/main/templates/com/lambdaworks/redis/api/RedisKeyCommands.java +++ /dev/null @@ -1,398 +0,0 @@ -package com.lambdaworks.redis.api; - -import java.util.Date; -import java.util.List; - -import com.lambdaworks.redis.KeyScanCursor; -import com.lambdaworks.redis.MigrateArgs; -import com.lambdaworks.redis.ScanArgs; -import com.lambdaworks.redis.ScanCursor; -import com.lambdaworks.redis.SortArgs; -import com.lambdaworks.redis.StreamScanCursor; -import com.lambdaworks.redis.output.KeyStreamingChannel; -import com.lambdaworks.redis.output.ValueStreamingChannel; - -/** - * ${intent} for Keys (Key manipulation/querying). - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - */ -public interface RedisKeyCommands { - - /** - * Delete one or more keys. - * - * @param keys the keys - * @return Long integer-reply The number of keys that were removed. - */ - Long del(K... keys); - - /** - * Unlink one or more keys (non blocking DEL). - * - * @param keys the keys - * @return Long integer-reply The number of keys that were removed. - */ - Long unlink(K... keys); - - /** - * Return a serialized version of the value stored at the specified key. - * - * @param key the key - * @return byte[] bulk-string-reply the serialized value. - */ - byte[] dump(K key); - - /** - * Determine how many keys exist. 
- * - * @param keys the keys - * @return Long integer-reply specifically: Number of existing keys - */ - Long exists(K... keys); - - /** - * Set a key's time to live in seconds. - * - * @param key the key - * @param seconds the seconds type: long - * @return Boolean integer-reply specifically: - * - * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not - * be set. - */ - Boolean expire(K key, long seconds); - - /** - * Set the expiration for a key as a UNIX timestamp. - * - * @param key the key - * @param timestamp the timestamp type: posix time - * @return Boolean integer-reply specifically: - * - * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not - * be set (see: {@code EXPIRE}). - */ - Boolean expireat(K key, Date timestamp); - - /** - * Set the expiration for a key as a UNIX timestamp. - * - * @param key the key - * @param timestamp the timestamp type: posix time - * @return Boolean integer-reply specifically: - * - * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not - * be set (see: {@code EXPIRE}). - */ - Boolean expireat(K key, long timestamp); - - /** - * Find all keys matching the given pattern. - * - * @param pattern the pattern type: patternkey (pattern) - * @return List<K> array-reply list of keys matching {@code pattern}. - */ - List keys(K pattern); - - /** - * Find all keys matching the given pattern. - * - * @param channel the channel - * @param pattern the pattern - * @return Long array-reply list of keys matching {@code pattern}. - */ - Long keys(KeyStreamingChannel channel, K pattern); - - /** - * Atomically transfer a key from a Redis instance to another one. - * - * @param host the host - * @param port the port - * @param key the key - * @param db the database - * @param timeout the timeout in milliseconds - * @return String simple-string-reply The command returns OK on success. - */ - String migrate(String host, int port, K key, int db, long timeout); - - /** - * Atomically transfer one or more keys from a Redis instance to another one. - * - * @param host the host - * @param port the port - * @param db the database - * @param timeout the timeout in milliseconds - * @param migrateArgs migrate args that allow to configure further options - * @return String simple-string-reply The command returns OK on success. - */ - String migrate(String host, int port, int db, long timeout, MigrateArgs migrateArgs); - - /** - * Move a key to another database. - * - * @param key the key - * @param db the db type: long - * @return Boolean integer-reply specifically: - */ - Boolean move(K key, int db); - - /** - * returns the kind of internal representation used in order to store the value associated with a key. - * - * @param key the key - * @return String - */ - String objectEncoding(K key); - - /** - * returns the number of seconds since the object stored at the specified key is idle (not requested by read or write - * operations). - * - * @param key the key - * @return number of seconds since the object stored at the specified key is idle. - */ - Long objectIdletime(K key); - - /** - * returns the number of references of the value associated with the specified key. - * - * @param key the key - * @return Long - */ - Long objectRefcount(K key); - - /** - * Remove the expiration from a key. 
- * - * @param key the key - * @return Boolean integer-reply specifically: - * - * {@literal true} if the timeout was removed. {@literal false} if {@code key} does not exist or does not have an - * associated timeout. - */ - Boolean persist(K key); - - /** - * Set a key's time to live in milliseconds. - * - * @param key the key - * @param milliseconds the milliseconds type: long - * @return integer-reply, specifically: - * - * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not - * be set. - */ - Boolean pexpire(K key, long milliseconds); - - /** - * Set the expiration for a key as a UNIX timestamp specified in milliseconds. - * - * @param key the key - * @param timestamp the milliseconds-timestamp type: posix time - * @return Boolean integer-reply specifically: - * - * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not - * be set (see: {@code EXPIRE}). - */ - Boolean pexpireat(K key, Date timestamp); - - /** - * Set the expiration for a key as a UNIX timestamp specified in milliseconds. - * - * @param key the key - * @param timestamp the milliseconds-timestamp type: posix time - * @return Boolean integer-reply specifically: - * - * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not - * be set (see: {@code EXPIRE}). - */ - Boolean pexpireat(K key, long timestamp); - - /** - * Get the time to live for a key in milliseconds. - * - * @param key the key - * @return Long integer-reply TTL in milliseconds, or a negative value in order to signal an error (see the description - * above). - */ - Long pttl(K key); - - /** - * Return a random key from the keyspace. - * - * @return V bulk-string-reply the random key, or {@literal null} when the database is empty. - */ - V randomkey(); - - /** - * Rename a key. - * - * @param key the key - * @param newKey the newkey type: key - * @return String simple-string-reply - */ - String rename(K key, K newKey); - - /** - * Rename a key, only if the new key does not exist. - * - * @param key the key - * @param newKey the newkey type: key - * @return Boolean integer-reply specifically: - * - * {@literal true} if {@code key} was renamed to {@code newkey}. {@literal false} if {@code newkey} already exists. - */ - Boolean renamenx(K key, K newKey); - - /** - * Create a key using the provided serialized value, previously obtained using DUMP. - * - * @param key the key - * @param ttl the ttl type: long - * @param value the serialized-value type: string - * @return String simple-string-reply The command returns OK on success. - */ - String restore(K key, long ttl, byte[] value); - - /** - * Sort the elements in a list, set or sorted set. - * - * @param key the key - * @return List<V> array-reply list of sorted elements. - */ - List sort(K key); - - /** - * Sort the elements in a list, set or sorted set. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @return Long number of values. - */ - Long sort(ValueStreamingChannel channel, K key); - - /** - * Sort the elements in a list, set or sorted set. - * - * @param key the key - * @param sortArgs sort arguments - * @return List<V> array-reply list of sorted elements. - */ - List sort(K key, SortArgs sortArgs); - - /** - * Sort the elements in a list, set or sorted set. 
- * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param sortArgs sort arguments - * @return Long number of values. - */ - Long sort(ValueStreamingChannel channel, K key, SortArgs sortArgs); - - /** - * Sort the elements in a list, set or sorted set. - * - * @param key the key - * @param sortArgs sort arguments - * @param destination the destination key to store sort results - * @return Long number of values. - */ - Long sortStore(K key, SortArgs sortArgs, K destination); - - /** - * Touch one or more keys. Touch sets the last accessed time for a key. Non-exsitent keys wont get created. - * - * @param keys the keys - * @return Long integer-reply the number of found keys. - */ - Long touch(K... keys); - - /** - * Get the time to live for a key. - * - * @param key the key - * @return Long integer-reply TTL in seconds, or a negative value in order to signal an error (see the description above). - */ - Long ttl(K key); - - /** - * Determine the type stored at key. - * - * @param key the key - * @return String simple-string-reply type of {@code key}, or {@code none} when {@code key} does not exist. - */ - String type(K key); - - /** - * Incrementally iterate the keys space. - * - * @return KeyScanCursor<K> scan cursor. - */ - KeyScanCursor scan(); - - /** - * Incrementally iterate the keys space. - * - * @param scanArgs scan arguments - * @return KeyScanCursor<K> scan cursor. - */ - KeyScanCursor scan(ScanArgs scanArgs); - - /** - * Incrementally iterate the keys space. - * - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @param scanArgs scan arguments - * @return KeyScanCursor<K> scan cursor. - */ - KeyScanCursor scan(ScanCursor scanCursor, ScanArgs scanArgs); - - /** - * Incrementally iterate the keys space. - * - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @return KeyScanCursor<K> scan cursor. - */ - KeyScanCursor scan(ScanCursor scanCursor); - - /** - * Incrementally iterate the keys space. - * - * @param channel streaming channel that receives a call for every key - * @return StreamScanCursor scan cursor. - */ - StreamScanCursor scan(KeyStreamingChannel channel); - - /** - * Incrementally iterate the keys space. - * - * @param channel streaming channel that receives a call for every key - * @param scanArgs scan arguments - * @return StreamScanCursor scan cursor. - */ - StreamScanCursor scan(KeyStreamingChannel channel, ScanArgs scanArgs); - - /** - * Incrementally iterate the keys space. - * - * @param channel streaming channel that receives a call for every key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @param scanArgs scan arguments - * @return StreamScanCursor scan cursor. - */ - StreamScanCursor scan(KeyStreamingChannel channel, ScanCursor scanCursor, ScanArgs scanArgs); - - /** - * Incrementally iterate the keys space. - * - * @param channel streaming channel that receives a call for every key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @return StreamScanCursor scan cursor. 
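// Illustrative sketch, not part of the original file: iterating the keyspace with the
// SCAN overloads above. ScanArgs.Builder, KeyScanCursor#getKeys() and
// ScanCursor#isFinished() are assumed from the argument and cursor types referenced here.
ScanArgs filter = ScanArgs.Builder.matches("user:*").limit(100);
KeyScanCursor<String> cursor = commands.scan(filter);
handleKeys(cursor.getKeys());                               // hypothetical callback
while (!cursor.isFinished()) {
    cursor = commands.scan(cursor, filter);
    handleKeys(cursor.getKeys());
}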
- */ - StreamScanCursor scan(KeyStreamingChannel channel, ScanCursor scanCursor); -} diff --git a/src/main/templates/com/lambdaworks/redis/api/RedisListCommands.java b/src/main/templates/com/lambdaworks/redis/api/RedisListCommands.java deleted file mode 100644 index ec2b74f3e5..0000000000 --- a/src/main/templates/com/lambdaworks/redis/api/RedisListCommands.java +++ /dev/null @@ -1,217 +0,0 @@ -package com.lambdaworks.redis.api; - -import java.util.List; - -import com.lambdaworks.redis.KeyValue; -import com.lambdaworks.redis.output.ValueStreamingChannel; - -/** - * ${intent} for Lists. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - */ -public interface RedisListCommands { - - /** - * Remove and get the first element in a list, or block until one is available. - * - * @param timeout the timeout in seconds - * @param keys the keys - * @return KeyValue<K,V> array-reply specifically: - * - * A {@literal null} multi-bulk when no element could be popped and the timeout expired. A two-element multi-bulk - * with the first element being the name of the key where an element was popped and the second element being the - * value of the popped element. - */ - KeyValue blpop(long timeout, K... keys); - - /** - * Remove and get the last element in a list, or block until one is available. - * - * @param timeout the timeout in seconds - * @param keys the keys - * @return KeyValue<K,V> array-reply specifically: - * - * A {@literal null} multi-bulk when no element could be popped and the timeout expired. A two-element multi-bulk - * with the first element being the name of the key where an element was popped and the second element being the - * value of the popped element. - */ - KeyValue brpop(long timeout, K... keys); - - /** - * Pop a value from a list, push it to another list and return it; or block until one is available. - * - * @param timeout the timeout in seconds - * @param source the source key - * @param destination the destination type: key - * @return V bulk-string-reply the element being popped from {@code source} and pushed to {@code destination}. If - * {@code timeout} is reached, a - */ - V brpoplpush(long timeout, K source, K destination); - - /** - * Get an element from a list by its index. - * - * @param key the key - * @param index the index type: long - * @return V bulk-string-reply the requested element, or {@literal null} when {@code index} is out of range. - */ - V lindex(K key, long index); - - /** - * Insert an element before or after another element in a list. - * - * @param key the key - * @param before the before - * @param pivot the pivot - * @param value the value - * @return Long integer-reply the length of the list after the insert operation, or {@code -1} when the value {@code pivot} - * was not found. - */ - Long linsert(K key, boolean before, V pivot, V value); - - /** - * Get the length of a list. - * - * @param key the key - * @return Long integer-reply the length of the list at {@code key}. - */ - Long llen(K key); - - /** - * Remove and get the first element in a list. - * - * @param key the key - * @return V bulk-string-reply the value of the first element, or {@literal null} when {@code key} does not exist. - */ - V lpop(K key); - - /** - * Prepend one or multiple values to a list. - * - * @param key the key - * @param values the value - * @return Long integer-reply the length of the list after the push operations. - */ - Long lpush(K key, V... values); - - /** - * Prepend a value to a list, only if the list exists. 
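// Illustrative sketch, not part of the original file: blocking pop with a five-second
// timeout; per the Javadoc above, a null reply means the timeout expired before any
// element became available.
KeyValue<String, String> popped = commands.blpop(5, "queue:jobs");
if (popped != null) {
    // an element was popped from "queue:jobs"
}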
- * - * @param key the key - * @param value the value - * @return Long integer-reply the length of the list after the push operation. - * @deprecated Use {@link #lpushx(Object, Object[])} - */ - @Deprecated - Long lpushx(K key, V value); - - /** - * Prepend values to a list, only if the list exists. - * - * @param key the key - * @param values the values - * @return Long integer-reply the length of the list after the push operation. - */ - Long lpushx(K key, V... values); - - /** - * Get a range of elements from a list. - * - * @param key the key - * @param start the start type: long - * @param stop the stop type: long - * @return List<V> array-reply list of elements in the specified range. - */ - List lrange(K key, long start, long stop); - - /** - * Get a range of elements from a list. - * - * @param channel the channel - * @param key the key - * @param start the start type: long - * @param stop the stop type: long - * @return Long count of elements in the specified range. - */ - Long lrange(ValueStreamingChannel channel, K key, long start, long stop); - - /** - * Remove elements from a list. - * - * @param key the key - * @param count the count type: long - * @param value the value - * @return Long integer-reply the number of removed elements. - */ - Long lrem(K key, long count, V value); - - /** - * Set the value of an element in a list by its index. - * - * @param key the key - * @param index the index type: long - * @param value the value - * @return String simple-string-reply - */ - String lset(K key, long index, V value); - - /** - * Trim a list to the specified range. - * - * @param key the key - * @param start the start type: long - * @param stop the stop type: long - * @return String simple-string-reply - */ - String ltrim(K key, long start, long stop); - - /** - * Remove and get the last element in a list. - * - * @param key the key - * @return V bulk-string-reply the value of the last element, or {@literal null} when {@code key} does not exist. - */ - V rpop(K key); - - /** - * Remove the last element in a list, append it to another list and return it. - * - * @param source the source key - * @param destination the destination type: key - * @return V bulk-string-reply the element being popped and pushed. - */ - V rpoplpush(K source, K destination); - - /** - * Append one or multiple values to a list. - * - * @param key the key - * @param values the value - * @return Long integer-reply the length of the list after the push operation. - */ - Long rpush(K key, V... values); - - /** - * Append a value to a list, only if the list exists. - * - * @param key the key - * @param value the value - * @return Long integer-reply the length of the list after the push operation. - * @deprecated Use {@link #rpushx(java.lang.Object, java.lang.Object[])} - */ - @Deprecated - Long rpushx(K key, V value); - - /** - * Append values to a list, only if the list exists. - * - * @param key the key - * @param values the values - * @return Long integer-reply the length of the list after the push operation. - */ - Long rpushx(K key, V... 
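// Illustrative sketch, not part of the original file: basic list usage with the methods
// above, against a hypothetical "commands" handle.
commands.rpush("queue:jobs", "job-1", "job-2", "job-3");
List<String> pending = commands.lrange("queue:jobs", 0, -1);   // all elements, in order
String next = commands.lpop("queue:jobs");                     // "job-1"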
values); -} diff --git a/src/main/templates/com/lambdaworks/redis/api/RedisScriptingCommands.java b/src/main/templates/com/lambdaworks/redis/api/RedisScriptingCommands.java deleted file mode 100644 index 2ce605c5af..0000000000 --- a/src/main/templates/com/lambdaworks/redis/api/RedisScriptingCommands.java +++ /dev/null @@ -1,101 +0,0 @@ -package com.lambdaworks.redis.api; - -import java.util.List; - -import com.lambdaworks.redis.ScriptOutputType; - -/** - * ${intent} for Scripting. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - */ -public interface RedisScriptingCommands { - /** - * Execute a Lua script server side. - * - * @param script Lua 5.1 script. - * @param type output type - * @param keys key names - * @param expected return type - * @return script result - */ - T eval(String script, ScriptOutputType type, K... keys); - - /** - * Execute a Lua script server side. - * - * @param script Lua 5.1 script. - * @param type the type - * @param keys the keys - * @param values the values - * @param expected return type - * @return script result - */ - T eval(String script, ScriptOutputType type, K[] keys, V... values); - - /** - * Evaluates a script cached on the server side by its SHA1 digest - * - * @param digest SHA1 of the script - * @param type the type - * @param keys the keys - * @param expected return type - * @return script result - */ - T evalsha(String digest, ScriptOutputType type, K... keys); - - /** - * Execute a Lua script server side. - * - * @param digest SHA1 of the script - * @param type the type - * @param keys the keys - * @param values the values - * @param expected return type - * @return script result - */ - T evalsha(String digest, ScriptOutputType type, K[] keys, V... values); - - /** - * Check existence of scripts in the script cache. - * - * @param digests script digests - * @return List<Boolean> array-reply The command returns an array of integers that correspond to the specified SHA1 - * digest arguments. For every corresponding SHA1 digest of a script that actually exists in the script cache, an 1 - * is returned, otherwise 0 is returned. - */ - List scriptExists(String... digests); - - /** - * Remove all the scripts from the script cache. - * - * @return String simple-string-reply - */ - String scriptFlush(); - - /** - * Kill the script currently in execution. - * - * @return String simple-string-reply - */ - String scriptKill(); - - /** - * Load the specified Lua script into the script cache. - * - * @param script script content - * @return String bulk-string-reply This command returns the SHA1 digest of the script added into the script cache. - */ - String scriptLoad(V script); - - /** - * Create a SHA1 digest from a Lua script. - * - * @param script script content - * @return the SHA1 value - */ - String digest(V script); -} diff --git a/src/main/templates/com/lambdaworks/redis/api/RedisSentinelCommands.java b/src/main/templates/com/lambdaworks/redis/api/RedisSentinelCommands.java deleted file mode 100644 index 8020e24a0f..0000000000 --- a/src/main/templates/com/lambdaworks/redis/api/RedisSentinelCommands.java +++ /dev/null @@ -1,120 +0,0 @@ -package com.lambdaworks.redis; - -import java.io.Closeable; -import java.net.SocketAddress; -import java.util.List; -import java.util.Map; -import com.lambdaworks.redis.sentinel.api.StatefulRedisSentinelConnection; - -/** - * ${intent} for Redis Sentinel. - * - * @param Key type. - * @param Value type. 
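// Illustrative sketch, not part of the original file: running a Lua script directly and
// then by its SHA1 digest, using the scripting methods declared above; the INTEGER output
// type constant is assumed.
String script = "return redis.call('incr', KEYS[1])";
Long direct = commands.eval(script, ScriptOutputType.INTEGER, "counter");
String sha = commands.digest(script);                           // client-side SHA1
Long cached = commands.evalsha(sha, ScriptOutputType.INTEGER, "counter");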
- * @author Mark Paluch - * @since 4.0 - */ -public interface RedisSentinelCommands extends Closeable{ - - /** - * Return the ip and port number of the master with that name. - * - * @param key the key - * @return SocketAddress; - */ - SocketAddress getMasterAddrByName(K key); - - /** - * Enumerates all the monitored masters and their states. - * - * @return Map<K, V>> - */ - List> masters(); - - /** - * Show the state and info of the specified master. - * - * @param key the key - * @return Map<K, V> - */ - Map master(K key); - - /** - * Provides a list of slaves for the master with the specified name. - * - * @param key the key - * @return List<Map<K, V>> - */ - List> slaves(K key); - - /** - * This command will reset all the masters with matching name. - * - * @param key the key - * @return Long - */ - Long reset(K key); - - /** - * Perform a failover. - * - * @param key the master id - * @return String - */ - String failover(K key); - - /** - * This command tells the Sentinel to start monitoring a new master with the specified name, ip, port, and quorum. - * - * @param key the key - * @param ip the IP address - * @param port the port - * @param quorum the quorum count - * @return String - */ - String monitor(K key, String ip, int port, int quorum); - - /** - * Multiple option / value pairs can be specified (or none at all). - * - * @param key the key - * @param option the option - * @param value the value - * - * @return String simple-string-reply {@code OK} if {@code SET} was executed correctly. - */ - String set(K key, String option, V value); - - /** - * remove the specified master. - * - * @param key the key - * @return String - */ - String remove(K key); - - /** - * Ping the server. - * - * @return String simple-string-reply - */ - String ping(); - - /** - * close the underlying connection. - */ - @Override - void close(); - - /** - * - * @return true if the connection is open (connected and not closed). - */ - boolean isOpen(); - - /** - * - * @return the underlying connection. - */ - StatefulRedisSentinelConnection getStatefulConnection(); -} diff --git a/src/main/templates/com/lambdaworks/redis/api/RedisServerCommands.java b/src/main/templates/com/lambdaworks/redis/api/RedisServerCommands.java deleted file mode 100644 index 2c33a0b905..0000000000 --- a/src/main/templates/com/lambdaworks/redis/api/RedisServerCommands.java +++ /dev/null @@ -1,336 +0,0 @@ -package com.lambdaworks.redis.api; - -import java.util.Date; -import java.util.List; - -import com.lambdaworks.redis.KillArgs; -import com.lambdaworks.redis.protocol.CommandType; - -/** - * ${intent} for Server Control. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - */ -public interface RedisServerCommands { - - /** - * Asynchronously rewrite the append-only file. - * - * @return String simple-string-reply always {@code OK}. - */ - String bgrewriteaof(); - - /** - * Asynchronously save the dataset to disk. - * - * @return String simple-string-reply - */ - String bgsave(); - - /** - * Get the current connection name. - * - * @return K bulk-string-reply The connection name, or a null bulk reply if no name is set. - */ - K clientGetname(); - - /** - * Set the current connection name. - * - * @param name the client name - * @return simple-string-reply {@code OK} if the connection name was successfully set. - */ - String clientSetname(K name); - - /** - * Kill the connection of a client identified by ip:port. 
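// Illustrative sketch, not part of the original file: querying Sentinel for a monitored
// master. "sentinel" is a hypothetical RedisSentinelCommands<String, String> handle
// obtained from a Sentinel connection elsewhere.
SocketAddress masterAddress = sentinel.getMasterAddrByName("mymaster");
Map<String, String> masterState = sentinel.master("mymaster");   // state and info fields
String pong = sentinel.ping();                                   // simple reachability check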
- * - * @param addr ip:port - * @return String simple-string-reply {@code OK} if the connection exists and has been closed - */ - String clientKill(String addr); - - /** - * Kill connections of clients which are filtered by {@code killArgs} - * - * @param killArgs args for the kill operation - * @return Long integer-reply number of killed connections - */ - Long clientKill(KillArgs killArgs); - - /** - * Stop processing commands from clients for some time. - * - * @param timeout the timeout value in milliseconds - * @return String simple-string-reply The command returns OK or an error if the timeout is invalid. - */ - String clientPause(long timeout); - - /** - * Get the list of client connections. - * - * @return String bulk-string-reply a unique string, formatted as follows: One client connection per line (separated by LF), - * each line is composed of a succession of property=value fields separated by a space character. - */ - String clientList(); - - /** - * Returns an array reply of details about all Redis commands. - * - * @return List<Object> array-reply - */ - List command(); - - /** - * Returns an array reply of details about the requested commands. - * - * @param commands the commands to query for - * @return List<Object> array-reply - */ - List commandInfo(String... commands); - - /** - * Returns an array reply of details about the requested commands. - * - * @param commands the commands to query for - * @return List<Object> array-reply - */ - List commandInfo(CommandType... commands); - - /** - * Get total number of Redis commands. - * - * @return Long integer-reply of number of total commands in this Redis server. - */ - Long commandCount(); - - /** - * Get the value of a configuration parameter. - * - * @param parameter name of the parameter - * @return List<String> bulk-string-reply - */ - List configGet(String parameter); - - /** - * Reset the stats returned by INFO. - * - * @return String simple-string-reply always {@code OK}. - */ - String configResetstat(); - - /** - * Rewrite the configuration file with the in memory configuration. - * - * @return String simple-string-reply {@code OK} when the configuration was rewritten properly. Otherwise an error is - * returned. - */ - String configRewrite(); - - /** - * Set a configuration parameter to the given value. - * - * @param parameter the parameter name - * @param value the parameter value - * @return String simple-string-reply: {@code OK} when the configuration was set properly. Otherwise an error is returned. - */ - String configSet(String parameter, String value); - - /** - * Return the number of keys in the selected database. - * - * @return Long integer-reply - */ - Long dbsize(); - - /** - * Crash and recover - * @param delay optional delay in milliseconds - * @return String simple-string-reply - */ - String debugCrashAndRecover(Long delay); - - /** - * Get debugging information about the internal hash-table state. - * - * @param db the database number - * @return String simple-string-reply - */ - String debugHtstats(int db); - - /** - * Get debugging information about a key. - * - * @param key the key - * @return String simple-string-reply - */ - String debugObject(K key); - - /** - * Make the server crash: Out of memory. - * - * @return nothing, because the server crashes before returning. - */ - void debugOom(); - - /** - * Make the server crash: Invalid pointer access. - * - * @return nothing, because the server crashes before returning. 
- */ - void debugSegfault(); - - /** - * Save RDB, clear the database and reload RDB. - * - * @return String simple-string-reply The commands returns OK on success. - */ - String debugReload(); - - /** - * Restart the server gracefully. - * @param delay optional delay in milliseconds - * @return String simple-string-reply - */ - String debugRestart(Long delay); - - /** - * Get debugging information about the internal SDS length. - * - * @param key the key - * @return String simple-string-reply - */ - String debugSdslen(K key); - - /** - * Remove all keys from all databases. - * - * @return String simple-string-reply - */ - String flushall(); - - /** - * Remove all keys asynchronously from all databases. - * - * @return String simple-string-reply - */ - String flushallAsync(); - - /** - * Remove all keys from the current database. - * - * @return String simple-string-reply - */ - String flushdb(); - - /** - * Remove all keys asynchronously from the current database. - * - * @return String simple-string-reply - */ - String flushdbAsync(); - - /** - * Get information and statistics about the server. - * - * @return String bulk-string-reply as a collection of text lines. - */ - String info(); - - /** - * Get information and statistics about the server. - * - * @param section the section type: string - * @return String bulk-string-reply as a collection of text lines. - */ - String info(String section); - - /** - * Get the UNIX time stamp of the last successful save to disk. - * - * @return Date integer-reply an UNIX time stamp. - */ - Date lastsave(); - - /** - * Synchronously save the dataset to disk. - * - * @return String simple-string-reply The commands returns OK on success. - */ - String save(); - - /** - * Synchronously save the dataset to disk and then shut down the server. - * - * @param save {@literal true} force save operation - */ - void shutdown(boolean save); - - /** - * Make the server a slave of another instance, or promote it as master. - * - * @param host the host type: string - * @param port the port type: string - * @return String simple-string-reply - */ - String slaveof(String host, int port); - - /** - * Promote server as master. - * - * @return String simple-string-reply - */ - String slaveofNoOne(); - - /** - * Read the slow log. - * - * @return List<Object> deeply nested multi bulk replies - */ - List slowlogGet(); - - /** - * Read the slow log. - * - * @param count the count - * @return List<Object> deeply nested multi bulk replies - */ - List slowlogGet(int count); - - /** - * Obtaining the current length of the slow log. - * - * @return Long length of the slow log. - */ - Long slowlogLen(); - - /** - * Resetting the slow log. - * - * @return String simple-string-reply The commands returns OK on success. - */ - String slowlogReset(); - - /** - * Internal command used for replication. - * - * @return String simple-string-reply - */ - @Deprecated - String sync(); - - /** - * Return the current server time. - * - * @return List<V> array-reply specifically: - * - * A multi bulk reply containing two elements: - * - * unix time in seconds. microseconds. 
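// Illustrative sketch, not part of the original file: a few read-mostly server commands
// from this interface, against a hypothetical "commands" handle.
String memoryInfo = commands.info("memory");        // one INFO section as text lines
Long keysInDb = commands.dbsize();                  // number of keys in the selected database
List<Object> slowest = commands.slowlogGet(10);     // last 10 slow log entries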
- */ - List time(); - -} diff --git a/src/main/templates/com/lambdaworks/redis/api/RedisSetCommands.java b/src/main/templates/com/lambdaworks/redis/api/RedisSetCommands.java deleted file mode 100644 index 56efcb1a10..0000000000 --- a/src/main/templates/com/lambdaworks/redis/api/RedisSetCommands.java +++ /dev/null @@ -1,291 +0,0 @@ -package com.lambdaworks.redis.api; - -import java.util.Set; - -import com.lambdaworks.redis.ScanArgs; -import com.lambdaworks.redis.ScanCursor; -import com.lambdaworks.redis.StreamScanCursor; -import com.lambdaworks.redis.ValueScanCursor; -import com.lambdaworks.redis.output.ValueStreamingChannel; - -/** - * ${intent} for Sets. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - */ -public interface RedisSetCommands { - - /** - * Add one or more members to a set. - * - * @param key the key - * @param members the member type: value - * @return Long integer-reply the number of elements that were added to the set, not including all the elements already - * present into the set. - */ - Long sadd(K key, V... members); - - /** - * Get the number of members in a set. - * - * @param key the key - * @return Long integer-reply the cardinality (number of elements) of the set, or {@literal false} if {@code key} does not - * exist. - */ - Long scard(K key); - - /** - * Subtract multiple sets. - * - * @param keys the key - * @return Set<V> array-reply list with members of the resulting set. - */ - Set sdiff(K... keys); - - /** - * Subtract multiple sets. - * - * @param channel the channel - * @param keys the keys - * @return Long count of members of the resulting set. - */ - Long sdiff(ValueStreamingChannel channel, K... keys); - - /** - * Subtract multiple sets and store the resulting set in a key. - * - * @param destination the destination type: key - * @param keys the key - * @return Long integer-reply the number of elements in the resulting set. - */ - Long sdiffstore(K destination, K... keys); - - /** - * Intersect multiple sets. - * - * @param keys the key - * @return Set<V> array-reply list with members of the resulting set. - */ - Set sinter(K... keys); - - /** - * Intersect multiple sets. - * - * @param channel the channel - * @param keys the keys - * @return Long count of members of the resulting set. - */ - Long sinter(ValueStreamingChannel channel, K... keys); - - /** - * Intersect multiple sets and store the resulting set in a key. - * - * @param destination the destination type: key - * @param keys the key - * @return Long integer-reply the number of elements in the resulting set. - */ - Long sinterstore(K destination, K... keys); - - /** - * Determine if a given value is a member of a set. - * - * @param key the key - * @param member the member type: value - * @return Boolean integer-reply specifically: - * - * {@literal true} if the element is a member of the set. {@literal false} if the element is not a member of the - * set, or if {@code key} does not exist. - */ - Boolean sismember(K key, V member); - - /** - * Move a member from one set to another. - * - * @param source the source key - * @param destination the destination type: key - * @param member the member type: value - * @return Boolean integer-reply specifically: - * - * {@literal true} if the element is moved. {@literal false} if the element is not a member of {@code source} and no - * operation was performed. - */ - Boolean smove(K source, K destination, V member); - - /** - * Get all the members in a set. 
- * - * @param key the key - * @return Set<V> array-reply all elements of the set. - */ - Set smembers(K key); - - /** - * Get all the members in a set. - * - * @param channel the channel - * @param key the keys - * @return Long count of members of the resulting set. - */ - Long smembers(ValueStreamingChannel channel, K key); - - /** - * Remove and return a random member from a set. - * - * @param key the key - * @return V bulk-string-reply the removed element, or {@literal null} when {@code key} does not exist. - */ - V spop(K key); - - /** - * Remove and return one or multiple random members from a set. - * - * @param key the key - * @param count number of members to pop - * @return V bulk-string-reply the removed element, or {@literal null} when {@code key} does not exist. - */ - Set spop(K key, long count); - - /** - * Get one random member from a set. - * - * @param key the key - * - * @return V bulk-string-reply without the additional {@code count} argument the command returns a Bulk Reply with the - * randomly selected element, or {@literal null} when {@code key} does not exist. - */ - V srandmember(K key); - - /** - * Get one or multiple random members from a set. - * - * @param key the key - * @param count the count type: long - * @return Set<V> bulk-string-reply without the additional {@code count} argument the command returns a Bulk Reply - * with the randomly selected element, or {@literal null} when {@code key} does not exist. - */ - List srandmember(K key, long count); - - /** - * Get one or multiple random members from a set. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param count the count - * @return Long count of members of the resulting set. - */ - Long srandmember(ValueStreamingChannel channel, K key, long count); - - /** - * Remove one or more members from a set. - * - * @param key the key - * @param members the member type: value - * @return Long integer-reply the number of members that were removed from the set, not including non existing members. - */ - Long srem(K key, V... members); - - /** - * Add multiple sets. - * - * @param keys the key - * @return Set<V> array-reply list with members of the resulting set. - */ - Set sunion(K... keys); - - /** - * Add multiple sets. - * - * @param channel streaming channel that receives a call for every value - * @param keys the keys - * @return Long count of members of the resulting set. - */ - Long sunion(ValueStreamingChannel channel, K... keys); - - /** - * Add multiple sets and store the resulting set in a key. - * - * @param destination the destination type: key - * @param keys the key - * @return Long integer-reply the number of elements in the resulting set. - */ - Long sunionstore(K destination, K... keys); - - /** - * Incrementally iterate Set elements. - * - * @param key the key - * @return ValueScanCursor<V> scan cursor. - */ - ValueScanCursor sscan(K key); - - /** - * Incrementally iterate Set elements. - * - * @param key the key - * @param scanArgs scan arguments - * @return ValueScanCursor<V> scan cursor. - */ - ValueScanCursor sscan(K key, ScanArgs scanArgs); - - /** - * Incrementally iterate Set elements. - * - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @param scanArgs scan arguments - * @return ValueScanCursor<V> scan cursor. - */ - ValueScanCursor sscan(K key, ScanCursor scanCursor, ScanArgs scanArgs); - - /** - * Incrementally iterate Set elements. 
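// Illustrative sketch, not part of the original file: membership and intersection with
// the set methods above, against a hypothetical "commands" handle.
commands.sadd("tags:post:1", "redis", "java");
commands.sadd("tags:post:2", "redis", "netty");
Set<String> shared = commands.sinter("tags:post:1", "tags:post:2");   // {"redis"}
Boolean tagged = commands.sismember("tags:post:1", "java");           // true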
- * - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @return ValueScanCursor<V> scan cursor. - */ - ValueScanCursor sscan(K key, ScanCursor scanCursor); - - /** - * Incrementally iterate Set elements. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @return StreamScanCursor scan cursor. - */ - StreamScanCursor sscan(ValueStreamingChannel channel, K key); - - /** - * Incrementally iterate Set elements. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param scanArgs scan arguments - * @return StreamScanCursor scan cursor. - */ - StreamScanCursor sscan(ValueStreamingChannel channel, K key, ScanArgs scanArgs); - - /** - * Incrementally iterate Set elements. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @param scanArgs scan arguments - * @return StreamScanCursor scan cursor. - */ - StreamScanCursor sscan(ValueStreamingChannel channel, K key, ScanCursor scanCursor, ScanArgs scanArgs); - - /** - * Incrementally iterate Set elements. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @return StreamScanCursor scan cursor. - */ - StreamScanCursor sscan(ValueStreamingChannel channel, K key, ScanCursor scanCursor); -} diff --git a/src/main/templates/com/lambdaworks/redis/api/RedisSortedSetCommands.java b/src/main/templates/com/lambdaworks/redis/api/RedisSortedSetCommands.java deleted file mode 100644 index 251a69dc4c..0000000000 --- a/src/main/templates/com/lambdaworks/redis/api/RedisSortedSetCommands.java +++ /dev/null @@ -1,837 +0,0 @@ -package com.lambdaworks.redis.api; - -import java.util.List; - -import com.lambdaworks.redis.ScanArgs; -import com.lambdaworks.redis.ScanCursor; -import com.lambdaworks.redis.ScoredValue; -import com.lambdaworks.redis.ScoredValueScanCursor; -import com.lambdaworks.redis.StreamScanCursor; -import com.lambdaworks.redis.ZAddArgs; -import com.lambdaworks.redis.ZStoreArgs; -import com.lambdaworks.redis.output.ScoredValueStreamingChannel; -import com.lambdaworks.redis.output.ValueStreamingChannel; - -/** - * ${intent} for Sorted Sets. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - */ -public interface RedisSortedSetCommands { - - /** - * Add one or more members to a sorted set, or update its score if it already exists. - * - * @param key the key - * @param score the score - * @param member the member - * - * @return Long integer-reply specifically: - * - * The number of elements added to the sorted sets, not including elements already existing for which the score was - * updated. - */ - Long zadd(K key, double score, V member); - - /** - * Add one or more members to a sorted set, or update its score if it already exists. - * - * @param key the key - * @param scoresAndValues the scoresAndValue tuples (score,value,score,value,...) - * @return Long integer-reply specifically: - * - * The number of elements added to the sorted sets, not including elements already existing for which the score was - * updated. - */ - Long zadd(K key, Object... scoresAndValues); - - /** - * Add one or more members to a sorted set, or update its score if it already exists. 
- * - * @param key the key - * @param scoredValues the scored values - * @return Long integer-reply specifically: - * - * The number of elements added to the sorted sets, not including elements already existing for which the score was - * updated. - */ - Long zadd(K key, ScoredValue... scoredValues); - - /** - * Add one or more members to a sorted set, or update its score if it already exists. - * - * @param key the key - * @param zAddArgs arguments for zadd - * @param score the score - * @param member the member - * - * @return Long integer-reply specifically: - * - * The number of elements added to the sorted sets, not including elements already existing for which the score was - * updated. - */ - Long zadd(K key, ZAddArgs zAddArgs, double score, V member); - - /** - * Add one or more members to a sorted set, or update its score if it already exists. - * - * @param key the key - * @param zAddArgs arguments for zadd - * @param scoresAndValues the scoresAndValue tuples (score,value,score,value,...) - * @return Long integer-reply specifically: - * - * The number of elements added to the sorted sets, not including elements already existing for which the score was - * updated. - */ - Long zadd(K key, ZAddArgs zAddArgs, Object... scoresAndValues); - - /** - * Add one or more members to a sorted set, or update its score if it already exists. - * - * @param key the ke - * @param zAddArgs arguments for zadd - * @param scoredValues the scored values - * @return Long integer-reply specifically: - * - * The number of elements added to the sorted sets, not including elements already existing for which the score was - * updated. - */ - Long zadd(K key, ZAddArgs zAddArgs, ScoredValue... scoredValues); - - /** - * ZADD acts like ZINCRBY - * - * @param key the key - * @param score the score - * @param member the member - * - * @return Long integer-reply specifically: - * - * The total number of elements changed - */ - Double zaddincr(K key, double score, V member); - - /** - * Get the number of members in a sorted set. - * - * @param key the key - * @return Long integer-reply the cardinality (number of elements) of the sorted set, or {@literal false} if {@code key} - * does not exist. - */ - Long zcard(K key); - - /** - * Count the members in a sorted set with scores within the given values. - * - * @param key the key - * @param min min score - * @param max max score - * @return Long integer-reply the number of elements in the specified score range. - */ - Long zcount(K key, double min, double max); - - /** - * Count the members in a sorted set with scores within the given values. - * - * @param key the key - * @param min min score - * @param max max score - * @return Long integer-reply the number of elements in the specified score range. - */ - Long zcount(K key, String min, String max); - - /** - * Increment the score of a member in a sorted set. - * - * @param key the key - * @param amount the increment type: long - * @param member the member type: key - * @return Double bulk-string-reply the new score of {@code member} (a double precision floating point number), represented - * as string. - */ - Double zincrby(K key, double amount, K member); - - /** - * Intersect multiple sorted sets and store the resulting sorted set in a new key. - * - * @param destination the destination - * @param keys the keys - * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. - */ - Long zinterstore(K destination, K... 
keys); - - /** - * Intersect multiple sorted sets and store the resulting sorted set in a new key. - * - * @param destination the destination - * @param storeArgs the storeArgs - * @param keys the keys - * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. - */ - Long zinterstore(K destination, ZStoreArgs storeArgs, K... keys); - - /** - * Return a range of members in a sorted set, by index. - * - * @param key the key - * @param start the start - * @param stop the stop - * @return List<V> array-reply list of elements in the specified range. - */ - List zrange(K key, long start, long stop); - - /** - * Return a range of members with scores in a sorted set, by index. - * - * @param key the key - * @param start the start - * @param stop the stop - * @return List<V> array-reply list of elements in the specified range. - */ - List> zrangeWithScores(K key, long start, long stop); - - /** - * Return a range of members in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @return List<V> array-reply list of elements in the specified score range. - */ - List zrangebyscore(K key, double min, double max); - - /** - * Return a range of members in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @return List<V> array-reply list of elements in the specified score range. - */ - List zrangebyscore(K key, String min, String max); - - /** - * Return a range of members in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return List<V> array-reply list of elements in the specified score range. - */ - List zrangebyscore(K key, double min, double max, long offset, long count); - - /** - * Return a range of members in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return List<V> array-reply list of elements in the specified score range. - */ - List zrangebyscore(K key, String min, String max, long offset, long count); - - /** - * Return a range of members with score in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. - */ - List> zrangebyscoreWithScores(K key, double min, double max); - - /** - * Return a range of members with score in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. - */ - List> zrangebyscoreWithScores(K key, String min, String max); - - /** - * Return a range of members with score in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. - */ - List> zrangebyscoreWithScores(K key, double min, double max, long offset, long count); - - /** - * Return a range of members with score in a sorted set, by score. - * - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. 
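// Illustrative sketch, not part of the original file: a small leaderboard built with the
// sorted-set methods above, against a hypothetical "commands" handle.
commands.zadd("leaderboard", 120.0, "alice");
commands.zadd("leaderboard", 95.0, "bob");
commands.zincrby("leaderboard", 5.0, "bob");                              // bob -> 100.0
List<ScoredValue<String>> ranked = commands.zrangeWithScores("leaderboard", 0, -1);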
- */ - List> zrangebyscoreWithScores(K key, String min, String max, long offset, long count); - - /** - * Return a range of members in a sorted set, by index. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param start the start - * @param stop the stop - * @return Long count of elements in the specified range. - */ - Long zrange(ValueStreamingChannel channel, K key, long start, long stop); - - /** - * Stream over a range of members with scores in a sorted set, by index. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param start the start - * @param stop the stop - * @return Long count of elements in the specified range. - */ - Long zrangeWithScores(ScoredValueStreamingChannel channel, K key, long start, long stop); - - /** - * Stream over a range of members in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified score range. - */ - Long zrangebyscore(ValueStreamingChannel channel, K key, double min, double max); - - /** - * Stream over a range of members in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified score range. - */ - Long zrangebyscore(ValueStreamingChannel channel, K key, String min, String max); - - /** - * Stream over range of members in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified score range. - */ - Long zrangebyscore(ValueStreamingChannel channel, K key, double min, double max, long offset, long count); - - /** - * Stream over a range of members in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified score range. - */ - Long zrangebyscore(ValueStreamingChannel channel, K key, String min, String max, long offset, long count); - - /** - * Stream over a range of members with scores in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified score range. - */ - Long zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double min, double max); - - /** - * Stream over a range of members with scores in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified score range. - */ - Long zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String min, String max); - - /** - * Stream over a range of members with scores in a sorted set, by score. 
- * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified score range. - */ - Long zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double min, double max, long offset, long count); - - /** - * Stream over a range of members with scores in a sorted set, by score. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified score range. - */ - Long zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String min, String max, long offset, long count); - - /** - * Determine the index of a member in a sorted set. - * - * @param key the key - * @param member the member type: value - * @return Long integer-reply the rank of {@code member}. If {@code member} does not exist in the sorted set or {@code key} - * does not exist, - */ - Long zrank(K key, V member); - - /** - * Remove one or more members from a sorted set. - * - * @param key the key - * @param members the member type: value - * @return Long integer-reply specifically: - * - * The number of members removed from the sorted set, not including non existing members. - */ - Long zrem(K key, V... members); - - /** - * Remove all members in a sorted set within the given indexes. - * - * @param key the key - * @param start the start type: long - * @param stop the stop type: long - * @return Long integer-reply the number of elements removed. - */ - Long zremrangebyrank(K key, long start, long stop); - - /** - * Remove all members in a sorted set within the given scores. - * - * @param key the key - * @param min min score - * @param max max score - * @return Long integer-reply the number of elements removed. - */ - Long zremrangebyscore(K key, double min, double max); - - /** - * Remove all members in a sorted set within the given scores. - * - * @param key the key - * @param min min score - * @param max max score - * @return Long integer-reply the number of elements removed. - */ - Long zremrangebyscore(K key, String min, String max); - - /** - * Return a range of members in a sorted set, by index, with scores ordered from high to low. - * - * @param key the key - * @param start the start - * @param stop the stop - * @return List<V> array-reply list of elements in the specified range. - */ - List zrevrange(K key, long start, long stop); - - /** - * Return a range of members with scores in a sorted set, by index, with scores ordered from high to low. - * - * @param key the key - * @param start the start - * @param stop the stop - * @return List<V> array-reply list of elements in the specified range. - */ - List> zrevrangeWithScores(K key, long start, long stop); - - /** - * Return a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param min min score - * @param max max score - * @return List<V> array-reply list of elements in the specified score range. - */ - List zrevrangebyscore(K key, double max, double min); - - /** - * Return a range of members in a sorted set, by score, with scores ordered from high to low. 
- * - * @param key the key - * @param min min score - * @param max max score - * @return List<V> array-reply list of elements in the specified score range. - */ - List zrevrangebyscore(K key, String max, String min); - - /** - * Return a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param max max score - * @param min min score - * @param offset the withscores - * @param count the null - * @return List<V> array-reply list of elements in the specified score range. - */ - List zrevrangebyscore(K key, double max, double min, long offset, long count); - - /** - * Return a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param max max score - * @param min min score - * @param offset the offset - * @param count the count - * @return List<V> array-reply list of elements in the specified score range. - */ - List zrevrangebyscore(K key, String max, String min, long offset, long count); - - /** - * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param max max score - * @param min min score - * @return List<V> array-reply list of elements in the specified score range. - */ - List> zrevrangebyscoreWithScores(K key, double max, double min); - - /** - * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param max max score - * @param min min score - * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. - */ - List> zrevrangebyscoreWithScores(K key, String max, String min); - - /** - * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param max max score - * @param min min score - * @param offset the offset - * @param count the count - * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. - */ - List> zrevrangebyscoreWithScores(K key, double max, double min, long offset, long count); - - /** - * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param key the key - * @param max max score - * @param min min score - * @param offset the offset - * @param count the count - * @return List<V> array-reply list of elements in the specified score range. - */ - List> zrevrangebyscoreWithScores(K key, String max, String min, long offset, long count); - - /** - * Stream over a range of members in a sorted set, by index, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param start the start - * @param stop the stop - * @return Long count of elements in the specified range. - */ - Long zrevrange(ValueStreamingChannel channel, K key, long start, long stop); - - /** - * Stream over a range of members with scores in a sorted set, by index, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param start the start - * @param stop the stop - * @return Long count of elements in the specified range. - */ - Long zrevrangeWithScores(ScoredValueStreamingChannel channel, K key, long start, long stop); - - /** - * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. 
- * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param max max score - * @param min min score - * @return Long count of elements in the specified range. - */ - Long zrevrangebyscore(ValueStreamingChannel channel, K key, double max, double min); - - /** - * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified range. - */ - Long zrevrangebyscore(ValueStreamingChannel channel, K key, String max, String min); - - /** - * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified range. - */ - Long zrevrangebyscore(ValueStreamingChannel channel, K key, double max, double min, long offset, long count); - - /** - * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified range. - */ - Long zrevrangebyscore(ValueStreamingChannel channel, K key, String max, String min, long offset, long count); - - /** - * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified range. - */ - Long zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double max, double min); - - /** - * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @return Long count of elements in the specified range. - */ - Long zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String max, String min); - - /** - * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified range. - */ - Long zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double max, double min, long offset, - long count); - - /** - * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return Long count of elements in the specified range. 
- */ - Long zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String max, String min, long offset, - long count); - - /** - * Determine the index of a member in a sorted set, with scores ordered from high to low. - * - * @param key the key - * @param member the member type: value - * @return Long integer-reply the rank of {@code member}. If {@code member} does not exist in the sorted set or {@code key} - * does not exist, - */ - Long zrevrank(K key, V member); - - /** - * Get the score associated with the given member in a sorted set. - * - * @param key the key - * @param member the member type: value - * @return Double bulk-string-reply the score of {@code member} (a double precision floating point number), represented as - * string. - */ - Double zscore(K key, V member); - - /** - * Add multiple sorted sets and store the resulting sorted set in a new key. - * - * @param destination destination key - * @param keys source keys - * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. - */ - Long zunionstore(K destination, K... keys); - - /** - * Add multiple sorted sets and store the resulting sorted set in a new key. - * - * @param destination the destination - * @param storeArgs the storeArgs - * @param keys the keys - * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. - */ - Long zunionstore(K destination, ZStoreArgs storeArgs, K... keys); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param key the key - * @return ScoredValueScanCursor<V> scan cursor. - */ - ScoredValueScanCursor zscan(K key); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param key the key - * @param scanArgs scan arguments - * @return ScoredValueScanCursor<V> scan cursor. - */ - ScoredValueScanCursor zscan(K key, ScanArgs scanArgs); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @param scanArgs scan arguments - * @return ScoredValueScanCursor<V> scan cursor. - */ - ScoredValueScanCursor zscan(K key, ScanCursor scanCursor, ScanArgs scanArgs); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @return ScoredValueScanCursor<V> scan cursor. - */ - ScoredValueScanCursor zscan(K key, ScanCursor scanCursor); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @return StreamScanCursor scan cursor. - */ - StreamScanCursor zscan(ScoredValueStreamingChannel channel, K key); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param scanArgs scan arguments - * @return StreamScanCursor scan cursor. - */ - StreamScanCursor zscan(ScoredValueStreamingChannel channel, K key, ScanArgs scanArgs); - - /** - * Incrementally iterate sorted sets elements and associated scores. 
- * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @param scanArgs scan arguments - * @return StreamScanCursor scan cursor. - */ - StreamScanCursor zscan(ScoredValueStreamingChannel channel, K key, ScanCursor scanCursor, ScanArgs scanArgs); - - /** - * Incrementally iterate sorted sets elements and associated scores. - * - * @param channel streaming channel that receives a call for every scored value - * @param key the key - * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} - * @return StreamScanCursor scan cursor. - */ - StreamScanCursor zscan(ScoredValueStreamingChannel channel, K key, ScanCursor scanCursor); - - /** - * Count the number of members in a sorted set between a given lexicographical range. - * - * @param key the key - * @param min min score - * @param max max score - * @return Long integer-reply the number of elements in the specified score range. - */ - Long zlexcount(K key, String min, String max); - - /** - * Remove all members in a sorted set between the given lexicographical range. - * - * @param key the key - * @param min min score - * @param max max score - * @return Long integer-reply the number of elements removed. - */ - Long zremrangebylex(K key, String min, String max); - - /** - * Return a range of members in a sorted set, by lexicographical range. - * - * @param key the key - * @param min min score - * @param max max score - * @return List<V> array-reply list of elements in the specified score range. - */ - List zrangebylex(K key, String min, String max); - - /** - * Return a range of members in a sorted set, by lexicographical range. - * - * @param key the key - * @param min min score - * @param max max score - * @param offset the offset - * @param count the count - * @return List<V> array-reply list of elements in the specified score range. - */ - List zrangebylex(K key, String min, String max, long offset, long count); -} diff --git a/src/main/templates/com/lambdaworks/redis/api/RedisStringCommands.java b/src/main/templates/com/lambdaworks/redis/api/RedisStringCommands.java deleted file mode 100644 index e45d23bff8..0000000000 --- a/src/main/templates/com/lambdaworks/redis/api/RedisStringCommands.java +++ /dev/null @@ -1,342 +0,0 @@ -package com.lambdaworks.redis.api; - -import java.util.List; -import java.util.Map; - -import com.lambdaworks.redis.output.ValueStreamingChannel; -import com.lambdaworks.redis.BitFieldArgs; -import com.lambdaworks.redis.SetArgs; - -/** - * ${intent} for Strings. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - */ -public interface RedisStringCommands { - - /** - * Append a value to a key. - * - * @param key the key - * @param value the value - * @return Long integer-reply the length of the string after the append operation. - */ - Long append(K key, V value); - - /** - * Count set bits in a string. - * - * @param key the key - * - * @return Long integer-reply The number of bits set to 1. - */ - Long bitcount(K key); - - /** - * Count set bits in a string. - * - * @param key the key - * @param start the start - * @param end the end - * - * @return Long integer-reply The number of bits set to 1. - */ - Long bitcount(K key, long start, long end); - - /** - * Execute {@code BITFIELD} with its subcommands. 
- * - * @param key the key - * @param bitFieldArgs the args containing subcommands, must not be {@literal null}. - * - * @return Long bulk-reply the results from the bitfield commands. - */ - List bitfield(K key, BitFieldArgs bitFieldArgs); - - /** - * Find first bit set or clear in a string. - * - * @param key the key - * @param state the state - * - * @return Long integer-reply The command returns the position of the first bit set to 1 or 0 according to the request. - * - * If we look for set bits (the bit argument is 1) and the string is empty or composed of just zero bytes, -1 is - * returned. - * - * If we look for clear bits (the bit argument is 0) and the string only contains bit set to 1, the function returns - * the first bit not part of the string on the right. So if the string is tree bytes set to the value 0xff the - * command {@code BITPOS key 0} will return 24, since up to bit 23 all the bits are 1. - * - * Basically the function consider the right of the string as padded with zeros if you look for clear bits and - * specify no range or the start argument only. - * - * However this behavior changes if you are looking for clear bits and specify a range with both - * start and end. If no clear bit is found in the specified range, the function - * returns -1 as the user specified a clear range and there are no 0 bits in that range. - */ - Long bitpos(K key, boolean state); - - /** - * Find first bit set or clear in a string. - * - * @param key the key - * @param state the bit type: long - * @param start the start type: long - * @param end the end type: long - * @return Long integer-reply The command returns the position of the first bit set to 1 or 0 according to the request. - * - * If we look for set bits (the bit argument is 1) and the string is empty or composed of just zero bytes, -1 is - * returned. - * - * If we look for clear bits (the bit argument is 0) and the string only contains bit set to 1, the function returns - * the first bit not part of the string on the right. So if the string is tree bytes set to the value 0xff the - * command {@code BITPOS key 0} will return 24, since up to bit 23 all the bits are 1. - * - * Basically the function consider the right of the string as padded with zeros if you look for clear bits and - * specify no range or the start argument only. - * - * However this behavior changes if you are looking for clear bits and specify a range with both - * start and end. If no clear bit is found in the specified range, the function - * returns -1 as the user specified a clear range and there are no 0 bits in that range. - */ - Long bitpos(K key, boolean state, long start, long end); - - /** - * Perform bitwise AND between strings. - * - * @param destination result key of the operation - * @param keys operation input key names - * @return Long integer-reply The size of the string stored in the destination key, that is equal to the size of the longest - * input string. - */ - Long bitopAnd(K destination, K... keys); - - /** - * Perform bitwise NOT between strings. - * - * @param destination result key of the operation - * @param source operation input key names - * @return Long integer-reply The size of the string stored in the destination key, that is equal to the size of the longest - * input string. - */ - Long bitopNot(K destination, K source); - - /** - * Perform bitwise OR between strings. 
- * - * @param destination result key of the operation - * @param keys operation input key names - * @return Long integer-reply The size of the string stored in the destination key, that is equal to the size of the longest - * input string. - */ - Long bitopOr(K destination, K... keys); - - /** - * Perform bitwise XOR between strings. - * - * @param destination result key of the operation - * @param keys operation input key names - * @return Long integer-reply The size of the string stored in the destination key, that is equal to the size of the longest - * input string. - */ - Long bitopXor(K destination, K... keys); - - /** - * Decrement the integer value of a key by one. - * - * @param key the key - * @return Long integer-reply the value of {@code key} after the decrement - */ - Long decr(K key); - - /** - * Decrement the integer value of a key by the given number. - * - * @param key the key - * @param amount the decrement type: long - * @return Long integer-reply the value of {@code key} after the decrement - */ - Long decrby(K key, long amount); - - /** - * Get the value of a key. - * - * @param key the key - * @return V bulk-string-reply the value of {@code key}, or {@literal null} when {@code key} does not exist. - */ - V get(K key); - - /** - * Returns the bit value at offset in the string value stored at key. - * - * @param key the key - * @param offset the offset type: long - * @return Long integer-reply the bit value stored at offset. - */ - Long getbit(K key, long offset); - - /** - * Get a substring of the string stored at a key. - * - * @param key the key - * @param start the start type: long - * @param end the end type: long - * @return V bulk-string-reply - */ - V getrange(K key, long start, long end); - - /** - * Set the string value of a key and return its old value. - * - * @param key the key - * @param value the value - * @return V bulk-string-reply the old value stored at {@code key}, or {@literal null} when {@code key} did not exist. - */ - V getset(K key, V value); - - /** - * Increment the integer value of a key by one. - * - * @param key the key - * @return Long integer-reply the value of {@code key} after the increment - */ - Long incr(K key); - - /** - * Increment the integer value of a key by the given amount. - * - * @param key the key - * @param amount the increment type: long - * @return Long integer-reply the value of {@code key} after the increment - */ - Long incrby(K key, long amount); - - /** - * Increment the float value of a key by the given amount. - * - * @param key the key - * @param amount the increment type: double - * @return Double bulk-string-reply the value of {@code key} after the increment. - */ - Double incrbyfloat(K key, double amount); - - /** - * Get the values of all the given keys. - * - * @param keys the key - * @return List<V> array-reply list of values at the specified keys. - */ - List mget(K... keys); - - /** - * Stream over the values of all the given keys. - * - * @param channel the channel - * @param keys the keys - * - * @return Long array-reply list of values at the specified keys. - */ - Long mget(ValueStreamingChannel channel, K... keys); - - /** - * Set multiple keys to multiple values. - * - * @param map the null - * @return String simple-string-reply always {@code OK} since {@code MSET} can't fail. - */ - String mset(Map map); - - /** - * Set multiple keys to multiple values, only if none of the keys exist. 
- * - * @param map the null - * @return Boolean integer-reply specifically: - * - * {@code 1} if the all the keys were set. {@code 0} if no key was set (at least one key already existed). - */ - Boolean msetnx(Map map); - - /** - * Set the string value of a key. - * - * @param key the key - * @param value the value - * - * @return String simple-string-reply {@code OK} if {@code SET} was executed correctly. - */ - String set(K key, V value); - - /** - * Set the string value of a key. - * - * @param key the key - * @param value the value - * @param setArgs the setArgs - * - * @return String simple-string-reply {@code OK} if {@code SET} was executed correctly. - */ - String set(K key, V value, SetArgs setArgs); - - /** - * Sets or clears the bit at offset in the string value stored at key. - * - * @param key the key - * @param offset the offset type: long - * @param value the value type: string - * @return Long integer-reply the original bit value stored at offset. - */ - Long setbit(K key, long offset, int value); - - /** - * Set the value and expiration of a key. - * - * @param key the key - * @param seconds the seconds type: long - * @param value the value - * @return String simple-string-reply - */ - String setex(K key, long seconds, V value); - - /** - * Set the value and expiration in milliseconds of a key. - * - * @param key the key - * @param milliseconds the milliseconds type: long - * @param value the value - * @return String simple-string-reply - */ - String psetex(K key, long milliseconds, V value); - - /** - * Set the value of a key, only if the key does not exist. - * - * @param key the key - * @param value the value - * @return Boolean integer-reply specifically: - * - * {@code 1} if the key was set {@code 0} if the key was not set - */ - Boolean setnx(K key, V value); - - /** - * Overwrite part of a string at key starting at the specified offset. - * - * @param key the key - * @param offset the offset type: long - * @param value the value - * @return Long integer-reply the length of the string after it was modified by the command. - */ - Long setrange(K key, long offset, V value); - - /** - * Get the length of the value stored in a key. - * - * @param key the key - * @return Long integer-reply the length of the string at {@code key}, or {@code 0} when {@code key} does not exist. - */ - Long strlen(K key); -} diff --git a/src/main/templates/com/lambdaworks/redis/api/RedisTransactionalCommands.java b/src/main/templates/com/lambdaworks/redis/api/RedisTransactionalCommands.java deleted file mode 100644 index 5da386726d..0000000000 --- a/src/main/templates/com/lambdaworks/redis/api/RedisTransactionalCommands.java +++ /dev/null @@ -1,52 +0,0 @@ -package com.lambdaworks.redis.api; - -import java.util.List; - -/** - * ${intent} for Transactions. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 4.0 - */ -public interface RedisTransactionalCommands { - - /** - * Discard all commands issued after MULTI. - * - * @return String simple-string-reply always {@code OK}. - */ - String discard(); - - /** - * Execute all commands issued after MULTI. - * - * @return List<Object> array-reply each element being the reply to each of the commands in the atomic transaction. - * - * When using {@code WATCH}, {@code EXEC} can return a - */ - List exec(); - - /** - * Mark the start of a transaction block. - * - * @return String simple-string-reply always {@code OK}. - */ - String multi(); - - /** - * Watch the given keys to determine execution of the MULTI/EXEC block. 
- * - * @param keys the key - * @return String simple-string-reply always {@code OK}. - */ - String watch(K... keys); - - /** - * Forget about all watched keys. - * - * @return String simple-string-reply always {@code OK}. - */ - String unwatch(); -} diff --git a/src/main/templates/io/lettuce/core/api/BaseRedisCommands.java b/src/main/templates/io/lettuce/core/api/BaseRedisCommands.java new file mode 100644 index 0000000000..d713c556e8 --- /dev/null +++ b/src/main/templates/io/lettuce/core/api/BaseRedisCommands.java @@ -0,0 +1,176 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.util.List; +import java.util.Map; + +import io.lettuce.core.protocol.CommandArgs; +import io.lettuce.core.protocol.ProtocolKeyword; +import io.lettuce.core.output.CommandOutput; + +/** + * + * ${intent} for basic commands. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.0 + */ +public interface BaseRedisCommands { + + /** + * Post a message to a channel. + * + * @param channel the channel type: key + * @param message the message type: value + * @return Long integer-reply the number of clients that received the message. + */ + Long publish(K channel, V message); + + /** + * Lists the currently *active channels*. + * + * @return List<K> array-reply a list of active channels, optionally matching the specified pattern. + */ + List pubsubChannels(); + + /** + * Lists the currently *active channels*. + * + * @param channel the key + * @return List<K> array-reply a list of active channels, optionally matching the specified pattern. + */ + List pubsubChannels(K channel); + + /** + * Returns the number of subscribers (not counting clients subscribed to patterns) for the specified channels. + * + * @param channels channel keys + * @return array-reply a list of channels and number of subscribers for every channel. + */ + Map pubsubNumsub(K... channels); + + /** + * Returns the number of subscriptions to patterns. + * + * @return Long integer-reply the number of patterns all the clients are subscribed to. + */ + Long pubsubNumpat(); + + /** + * Echo the given string. + * + * @param msg the message type: value + * @return V bulk-string-reply + */ + V echo(V msg); + + /** + * Return the role of the instance in the context of replication. + * + * @return List<Object> array-reply where the first element is one of master, slave, sentinel and the additional + * elements are role-specific. + */ + List role(); + + /** + * Ping the server. + * + * @return String simple-string-reply + */ + String ping(); + + /** + * Switch connection to Read-Only mode when connecting to a cluster. + * + * @return String simple-string-reply. + */ + String readOnly(); + + /** + * Switch connection to Read-Write mode (default) when connecting to a cluster. + * + * @return String simple-string-reply. + */ + String readWrite(); + + /** + * Instructs Redis to disconnect the connection. 
Note that if auto-reconnect is enabled then Lettuce will auto-reconnect if + * the connection was disconnected. Use {@link io.lettuce.core.api.StatefulConnection#close} to close connections and release resources. + * + * @return String simple-string-reply always OK. + */ + String quit(); + + /** + * Wait for replication. + * + * @param replicas minimum number of replicas + * @param timeout timeout in milliseconds + * @return number of replicas + */ + Long waitForReplication(int replicas, long timeout); + + /** + * Dispatch a command to the Redis Server. Please note the command output type must fit to the command response. + * + * @param type the command, must not be {@literal null}. + * @param output the command output, must not be {@literal null}. + * @param response type + * @return the command response + */ + T dispatch(ProtocolKeyword type, CommandOutput output); + + /** + * Dispatch a command to the Redis Server. Please note the command output type must fit to the command response. + * + * @param type the command, must not be {@literal null}. + * @param output the command output, must not be {@literal null}. + * @param args the command arguments, must not be {@literal null}. + * @param response type + * @return the command response + */ + T dispatch(ProtocolKeyword type, CommandOutput output, CommandArgs args); + + /** + * + * @return true if the connection is open (connected and not closed). + */ + boolean isOpen(); + + /** + * Reset the command state. Queued commands will be canceled and the internal state will be reset. This is useful when the + * internal state machine gets out of sync with the connection. + */ + void reset(); + + /** + * Disable or enable auto-flush behavior. Default is {@literal true}. If autoFlushCommands is disabled, multiple commands + * can be issued without writing them actually to the transport. Commands are buffered until a {@link #flushCommands()} is + * issued. After calling {@link #flushCommands()} commands are sent to the transport and executed by Redis. + * + * @param autoFlush state of autoFlush. + */ + void setAutoFlushCommands(boolean autoFlush); + + /** + * Flush pending commands. This commands forces a flush on the channel and can be used to buffer ("pipeline") commands to + * achieve batching. No-op if channel is not connected. + */ + void flushCommands(); +} diff --git a/src/main/templates/io/lettuce/core/api/RedisGeoCommands.java b/src/main/templates/io/lettuce/core/api/RedisGeoCommands.java new file mode 100644 index 0000000000..eb42826b66 --- /dev/null +++ b/src/main/templates/io/lettuce/core/api/RedisGeoCommands.java @@ -0,0 +1,163 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import io.lettuce.core.*; +import java.util.List; +import java.util.Set; + +/** + * ${intent} for the Geo-API. + * + * @author Mark Paluch + * @since 4.0 + */ +public interface RedisGeoCommands { + + /** + * Single geo add. 
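As a usage sketch for the `setAutoFlushCommands`/`flushCommands` contract described in `BaseRedisCommands` above (not part of this interface definition): assuming a Redis server at `redis://localhost:6379`, the asynchronous String API, and purely illustrative key names and timeout, manual flushing could look roughly like this.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;

import io.lettuce.core.LettuceFutures;
import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisFuture;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.async.RedisAsyncCommands;

public class ManualFlushExample {

    public static void main(String[] args) {
        // Assumes a Redis server listening on localhost:6379.
        RedisClient client = RedisClient.create("redis://localhost:6379");
        StatefulRedisConnection<String, String> connection = client.connect();
        RedisAsyncCommands<String, String> async = connection.async();

        // Disable auto-flush so commands are buffered instead of written immediately.
        async.setAutoFlushCommands(false);

        List<RedisFuture<String>> futures = new ArrayList<>();
        for (int i = 0; i < 100; i++) {
            futures.add(async.set("key-" + i, "value-" + i));
        }

        // Write all buffered commands to the transport in one batch.
        async.flushCommands();

        // Wait for the replies (5 seconds is an arbitrary example timeout).
        LettuceFutures.awaitAll(5, TimeUnit.SECONDS, futures.toArray(new RedisFuture[0]));

        connection.close();
        client.shutdown();
    }
}
```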
+ * + * @param key the key of the geo set + * @param longitude the longitude coordinate according to WGS84 + * @param latitude the latitude coordinate according to WGS84 + * @param member the member to add + * @return Long integer-reply the number of elements that were added to the set + */ + Long geoadd(K key, double longitude, double latitude, V member); + + /** + * Multi geo add. + * + * @param key the key of the geo set + * @param lngLatMember triplets of double longitude, double latitude and V member + * @return Long integer-reply the number of elements that were added to the set + */ + Long geoadd(K key, Object... lngLatMember); + + /** + * Retrieve Geohash strings representing the position of one or more elements in a sorted set value representing a geospatial index. + * + * @param key the key of the geo set + * @param members the members + * @return bulk reply Geohash strings in the order of {@code members}. Returns {@literal null} if a member is not found. + */ + List> geohash(K key, V... members); + + /** + * Retrieve members selected by distance with the center of {@code longitude} and {@code latitude}. + * + * @param key the key of the geo set + * @param longitude the longitude coordinate according to WGS84 + * @param latitude the latitude coordinate according to WGS84 + * @param distance radius distance + * @param unit distance unit + * @return bulk reply + */ + Set georadius(K key, double longitude, double latitude, double distance, GeoArgs.Unit unit); + + /** + * Retrieve members selected by distance with the center of {@code longitude} and {@code latitude}. + * + * @param key the key of the geo set + * @param longitude the longitude coordinate according to WGS84 + * @param latitude the latitude coordinate according to WGS84 + * @param distance radius distance + * @param unit distance unit + * @param geoArgs args to control the result + * @return nested multi-bulk reply. The {@link GeoWithin} contains only fields which were requested by {@link GeoArgs} + */ + List> georadius(K key, double longitude, double latitude, double distance, GeoArgs.Unit unit, GeoArgs geoArgs); + + /** + * Perform a {@link #georadius(Object, double, double, double, GeoArgs.Unit, GeoArgs)} query and store the results in a sorted set. + * + * @param key the key of the geo set + * @param longitude the longitude coordinate according to WGS84 + * @param latitude the latitude coordinate according to WGS84 + * @param distance radius distance + * @param unit distance unit + * @param geoRadiusStoreArgs args to store either the resulting elements with their distance or the resulting elements with + * their locations a sorted set. + * @return Long integer-reply the number of elements in the result + */ + Long georadius(K key, double longitude, double latitude, double distance, GeoArgs.Unit unit, GeoRadiusStoreArgs geoRadiusStoreArgs); + + /** + * Retrieve members selected by distance with the center of {@code member}. The member itself is always contained in the + * results. + * + * @param key the key of the geo set + * @param member reference member + * @param distance radius distance + * @param unit distance unit + * @return set of members + */ + Set georadiusbymember(K key, V member, double distance, GeoArgs.Unit unit); + + /** + * + * Retrieve members selected by distance with the center of {@code member}. The member itself is always contained in the + * results. 
+ * + * @param key the key of the geo set + * @param member reference member + * @param distance radius distance + * @param unit distance unit + * @param geoArgs args to control the result + * @return nested multi-bulk reply. The {@link GeoWithin} contains only fields which were requested by {@link GeoArgs} + */ + List> georadiusbymember(K key, V member, double distance, GeoArgs.Unit unit, GeoArgs geoArgs); + + /** + * Perform a {@link #georadiusbymember(Object, Object, double, GeoArgs.Unit, GeoArgs)} query and store the results in a sorted set. + * + * @param key the key of the geo set + * @param member reference member + * @param distance radius distance + * @param unit distance unit + * @param geoRadiusStoreArgs args to store either the resulting elements with their distance or the resulting elements with + * their locations a sorted set. + * @return Long integer-reply the number of elements in the result + */ + Long georadiusbymember(K key, V member, double distance, GeoArgs.Unit unit, GeoRadiusStoreArgs geoRadiusStoreArgs); + + /** + * Get geo coordinates for the {@code members}. + * + * @param key the key of the geo set + * @param members the members + * + * @return a list of {@link GeoCoordinates}s representing the x,y position of each element specified in the arguments. For + * missing elements {@literal null} is returned. + */ + List geopos(K key, V... members); + + /** + * + * Retrieve distance between points {@code from} and {@code to}. If one or more elements are missing {@literal null} is + * returned. Default in meters by, otherwise according to {@code unit} + * + * @param key the key of the geo set + * @param from from member + * @param to to member + * @param unit distance unit + * + * @return distance between points {@code from} and {@code to}. If one or more elements are missing {@literal null} is + * returned. + */ + Double geodist(K key, V from, V to, GeoArgs.Unit unit); + +} diff --git a/src/main/templates/io/lettuce/core/api/RedisHLLCommands.java b/src/main/templates/io/lettuce/core/api/RedisHLLCommands.java new file mode 100644 index 0000000000..1a5b936a9d --- /dev/null +++ b/src/main/templates/io/lettuce/core/api/RedisHLLCommands.java @@ -0,0 +1,61 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api; + +/** + * ${intent} for HyperLogLog (PF* commands). + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 3.0 + */ +public interface RedisHLLCommands { + + /** + * Adds the specified elements to the specified HyperLogLog. + * + * @param key the key + * @param values the values + * + * @return Long integer-reply specifically: + * + * 1 if at least 1 HyperLogLog internal register was altered. 0 otherwise. + */ + Long pfadd(K key, V... values); + + /** + * Merge N different HyperLogLogs into a single one. 
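A minimal sketch of the Geo commands documented above (`geoadd`, `georadius`, `geodist`), assuming a Redis server at `redis://localhost:6379` and the synchronous String API; the key name, member names, and coordinates are illustrative only.

```java
import java.util.Set;

import io.lettuce.core.GeoArgs;
import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.sync.RedisCommands;

public class GeoExample {

    public static void main(String[] args) {
        // Assumes a Redis server listening on localhost:6379.
        RedisClient client = RedisClient.create("redis://localhost:6379");
        StatefulRedisConnection<String, String> connection = client.connect();
        RedisCommands<String, String> commands = connection.sync();

        // Index two places by WGS84 longitude/latitude (illustrative coordinates).
        commands.geoadd("sicily", 13.361389, 38.115556, "Palermo");
        commands.geoadd("sicily", 15.087269, 37.502669, "Catania");

        // Members within 200 km of the given point.
        Set<String> nearby = commands.georadius("sicily", 15.0, 37.0, 200, GeoArgs.Unit.km);
        System.out.println("Within 200 km: " + nearby);

        // Distance between two indexed members, in kilometers.
        Double distance = commands.geodist("sicily", "Palermo", "Catania", GeoArgs.Unit.km);
        System.out.println("Palermo-Catania: " + distance + " km");

        connection.close();
        client.shutdown();
    }
}
```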
+ * + * @param destkey the destination key + * @param sourcekeys the source key + * + * @return String simple-string-reply The command just returns {@code OK}. + */ + String pfmerge(K destkey, K... sourcekeys); + + /** + * Return the approximated cardinality of the set(s) observed by the HyperLogLog at key(s). + * + * @param keys the keys + * + * @return Long integer-reply specifically: + * + * The approximated number of unique elements observed via {@code PFADD}. + */ + Long pfcount(K... keys); + +} diff --git a/src/main/templates/io/lettuce/core/api/RedisHashCommands.java b/src/main/templates/io/lettuce/core/api/RedisHashCommands.java new file mode 100644 index 0000000000..2229401998 --- /dev/null +++ b/src/main/templates/io/lettuce/core/api/RedisHashCommands.java @@ -0,0 +1,301 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api; + +import java.util.List; +import java.util.Map; + +import io.lettuce.core.*; +import io.lettuce.core.output.KeyStreamingChannel; +import io.lettuce.core.output.KeyValueStreamingChannel; +import io.lettuce.core.output.ValueStreamingChannel; + +/** + * ${intent} for Hashes (Key-Value pairs). + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.0 + */ +public interface RedisHashCommands { + + /** + * Delete one or more hash fields. + * + * @param key the key + * @param fields the field type: key + * @return Long integer-reply the number of fields that were removed from the hash, not including specified but non existing + * fields. + */ + Long hdel(K key, K... fields); + + /** + * Determine if a hash field exists. + * + * @param key the key + * @param field the field type: key + * @return Boolean integer-reply specifically: + * + * {@literal true} if the hash contains {@code field}. {@literal false} if the hash does not contain {@code field}, + * or {@code key} does not exist. + */ + Boolean hexists(K key, K field); + + /** + * Get the value of a hash field. + * + * @param key the key + * @param field the field type: key + * @return V bulk-string-reply the value associated with {@code field}, or {@literal null} when {@code field} is not present + * in the hash or {@code key} does not exist. + */ + V hget(K key, K field); + + /** + * Increment the integer value of a hash field by the given number. + * + * @param key the key + * @param field the field type: key + * @param amount the increment type: long + * @return Long integer-reply the value at {@code field} after the increment operation. + */ + Long hincrby(K key, K field, long amount); + + /** + * Increment the float value of a hash field by the given amount. + * + * @param key the key + * @param field the field type: key + * @param amount the increment type: double + * @return Double bulk-string-reply the value of {@code field} after the increment. + */ + Double hincrbyfloat(K key, K field, double amount); + + /** + * Get all the fields and values in a hash. 
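A short sketch of the HyperLogLog commands above (`pfadd`, `pfmerge`, `pfcount`), again assuming `redis://localhost:6379` and the synchronous String API; key names and member values are made up for illustration.

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.sync.RedisCommands;

public class HyperLogLogExample {

    public static void main(String[] args) {
        // Assumes a Redis server listening on localhost:6379.
        RedisClient client = RedisClient.create("redis://localhost:6379");
        StatefulRedisConnection<String, String> connection = client.connect();
        RedisCommands<String, String> commands = connection.sync();

        // Track approximate unique visitors per day (key names are illustrative).
        commands.pfadd("visitors:2020-05-01", "alice", "bob", "carol");
        commands.pfadd("visitors:2020-05-02", "bob", "dave");

        // Approximate cardinality of a single HyperLogLog.
        Long day1 = commands.pfcount("visitors:2020-05-01");

        // Merge both days and count the approximate union.
        commands.pfmerge("visitors:may", "visitors:2020-05-01", "visitors:2020-05-02");
        Long total = commands.pfcount("visitors:may");

        System.out.println("Day 1: " + day1 + ", merged: " + total);

        connection.close();
        client.shutdown();
    }
}
```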
+ * + * @param key the key + * @return Map<K,V> array-reply list of fields and their values stored in the hash, or an empty list when {@code key} + * does not exist. + */ + Map hgetall(K key); + + /** + * Stream over all the fields and values in a hash. + * + * @param channel the channel + * @param key the key + * + * @return Long count of the keys. + */ + Long hgetall(KeyValueStreamingChannel channel, K key); + + /** + * Get all the fields in a hash. + * + * @param key the key + * @return List<K> array-reply list of fields in the hash, or an empty list when {@code key} does not exist. + */ + List hkeys(K key); + + /** + * Stream over all the fields in a hash. + * + * @param channel the channel + * @param key the key + * + * @return Long count of the keys. + */ + Long hkeys(KeyStreamingChannel channel, K key); + + /** + * Get the number of fields in a hash. + * + * @param key the key + * @return Long integer-reply number of fields in the hash, or {@code 0} when {@code key} does not exist. + */ + Long hlen(K key); + + /** + * Get the values of all the given hash fields. + * + * @param key the key + * @param fields the field type: key + * @return List<V> array-reply list of values associated with the given fields, in the same + */ + List> hmget(K key, K... fields); + + /** + * Stream over the values of all the given hash fields. + * + * @param channel the channel + * @param key the key + * @param fields the fields + * + * @return Long count of the keys + */ + Long hmget(KeyValueStreamingChannel channel, K key, K... fields); + + /** + * Set multiple hash fields to multiple values. + * + * @param key the key + * @param map the null + * @return String simple-string-reply + */ + String hmset(K key, Map map); + + /** + * Incrementally iterate hash fields and associated values. + * + * @param key the key + * @return MapScanCursor<K, V> map scan cursor. + */ + MapScanCursor hscan(K key); + + /** + * Incrementally iterate hash fields and associated values. + * + * @param key the key + * @param scanArgs scan arguments + * @return MapScanCursor<K, V> map scan cursor. + */ + MapScanCursor hscan(K key, ScanArgs scanArgs); + + /** + * Incrementally iterate hash fields and associated values. + * + * @param key the key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @param scanArgs scan arguments + * @return MapScanCursor<K, V> map scan cursor. + */ + MapScanCursor hscan(K key, ScanCursor scanCursor, ScanArgs scanArgs); + + /** + * Incrementally iterate hash fields and associated values. + * + * @param key the key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @return MapScanCursor<K, V> map scan cursor. + */ + MapScanCursor hscan(K key, ScanCursor scanCursor); + + /** + * Incrementally iterate hash fields and associated values. + * + * @param channel streaming channel that receives a call for every key-value pair + * @param key the key + * @return StreamScanCursor scan cursor. + */ + StreamScanCursor hscan(KeyValueStreamingChannel channel, K key); + + /** + * Incrementally iterate hash fields and associated values. + * + * @param channel streaming channel that receives a call for every key-value pair + * @param key the key + * @param scanArgs scan arguments + * @return StreamScanCursor scan cursor. + */ + StreamScanCursor hscan(KeyValueStreamingChannel channel, K key, ScanArgs scanArgs); + + /** + * Incrementally iterate hash fields and associated values. 
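The `hscan` overloads above describe cursor-based, incremental iteration over a hash. A sketch of the resulting loop, assuming `redis://localhost:6379`, the synchronous String API, and an illustrative hash key; the loop resumes from the cursor returned by the previous call until `isFinished()`.

```java
import java.util.HashMap;
import java.util.Map;

import io.lettuce.core.MapScanCursor;
import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.sync.RedisCommands;

public class HashScanExample {

    public static void main(String[] args) {
        // Assumes a Redis server listening on localhost:6379.
        RedisClient client = RedisClient.create("redis://localhost:6379");
        StatefulRedisConnection<String, String> connection = client.connect();
        RedisCommands<String, String> commands = connection.sync();

        // Populate a hash with a few fields (key and fields are illustrative).
        for (int i = 0; i < 500; i++) {
            commands.hset("user:1:prefs", "field-" + i, "value-" + i);
        }

        // Iterate incrementally: start with HSCAN, resume with the returned cursor.
        // Collect into a Map because SCAN-style commands may return an entry more than once.
        Map<String, String> all = new HashMap<>();
        MapScanCursor<String, String> cursor = commands.hscan("user:1:prefs");
        all.putAll(cursor.getMap());
        while (!cursor.isFinished()) {
            cursor = commands.hscan("user:1:prefs", cursor);
            all.putAll(cursor.getMap());
        }
        System.out.println("Fields seen: " + all.size());

        connection.close();
        client.shutdown();
    }
}
```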
+ * + * @param channel streaming channel that receives a call for every key-value pair + * @param key the key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @param scanArgs scan arguments + * @return StreamScanCursor scan cursor. + */ + StreamScanCursor hscan(KeyValueStreamingChannel channel, K key, ScanCursor scanCursor, ScanArgs scanArgs); + + /** + * Incrementally iterate hash fields and associated values. + * + * @param channel streaming channel that receives a call for every key-value pair + * @param key the key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @return StreamScanCursor scan cursor. + */ + StreamScanCursor hscan(KeyValueStreamingChannel channel, K key, ScanCursor scanCursor); + + /** + * Set the string value of a hash field. + * + * @param key the key + * @param field the field type: key + * @param value the value + * @return Boolean integer-reply specifically: + * + * {@literal true} if {@code field} is a new field in the hash and {@code value} was set. {@literal false} if + * {@code field} already exists in the hash and the value was updated. + */ + Boolean hset(K key, K field, V value); + + /** + * Set multiple hash fields to multiple values. + * + * @param key the key of the hash + * @param map the field/value pairs to update + * @return Long integer-reply: the number of fields that were added. + * @since 5.3 + */ + Long hset(K key, Map map); + + /** + * Set the value of a hash field, only if the field does not exist. + * + * @param key the key + * @param field the field type: key + * @param value the value + * @return Boolean integer-reply specifically: + * + * {@code 1} if {@code field} is a new field in the hash and {@code value} was set. {@code 0} if {@code field} + * already exists in the hash and no operation was performed. + */ + Boolean hsetnx(K key, K field, V value); + + /** + * Get the string length of the field value in a hash. + * + * @param key the key + * @param field the field type: key + * @return Long integer-reply the string length of the {@code field} value, or {@code 0} when {@code field} is not present + * in the hash or {@code key} does not exist at all. + */ + Long hstrlen(K key, K field); + + /** + * Get all the values in a hash. + * + * @param key the key + * @return List<V> array-reply list of values in the hash, or an empty list when {@code key} does not exist. + */ + List hvals(K key); + + /** + * Stream over all the values in a hash. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * + * @return Long count of the keys. + */ + Long hvals(ValueStreamingChannel channel, K key); +} diff --git a/src/main/templates/io/lettuce/core/api/RedisKeyCommands.java b/src/main/templates/io/lettuce/core/api/RedisKeyCommands.java new file mode 100644 index 0000000000..7051ee116f --- /dev/null +++ b/src/main/templates/io/lettuce/core/api/RedisKeyCommands.java @@ -0,0 +1,419 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api; + +import java.util.Date; +import java.util.List; + +import io.lettuce.core.*; +import io.lettuce.core.output.KeyStreamingChannel; +import io.lettuce.core.output.ValueStreamingChannel; + +/** + * ${intent} for Keys (Key manipulation/querying). + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.0 + */ +public interface RedisKeyCommands { + + /** + * Delete one or more keys. + * + * @param keys the keys + * @return Long integer-reply The number of keys that were removed. + */ + Long del(K... keys); + + /** + * Unlink one or more keys (non blocking DEL). + * + * @param keys the keys + * @return Long integer-reply The number of keys that were removed. + */ + Long unlink(K... keys); + + /** + * Return a serialized version of the value stored at the specified key. + * + * @param key the key + * @return byte[] bulk-string-reply the serialized value. + */ + byte[] dump(K key); + + /** + * Determine how many keys exist. + * + * @param keys the keys + * @return Long integer-reply specifically: Number of existing keys + */ + Long exists(K... keys); + + /** + * Set a key's time to live in seconds. + * + * @param key the key + * @param seconds the seconds type: long + * @return Boolean integer-reply specifically: + * + * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not + * be set. + */ + Boolean expire(K key, long seconds); + + /** + * Set the expiration for a key as a UNIX timestamp. + * + * @param key the key + * @param timestamp the timestamp type: posix time + * @return Boolean integer-reply specifically: + * + * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not + * be set (see: {@code EXPIRE}). + */ + Boolean expireat(K key, Date timestamp); + + /** + * Set the expiration for a key as a UNIX timestamp. + * + * @param key the key + * @param timestamp the timestamp type: posix time + * @return Boolean integer-reply specifically: + * + * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not + * be set (see: {@code EXPIRE}). + */ + Boolean expireat(K key, long timestamp); + + /** + * Find all keys matching the given pattern. + * + * @param pattern the pattern type: patternkey (pattern) + * @return List<K> array-reply list of keys matching {@code pattern}. + */ + List keys(K pattern); + + /** + * Find all keys matching the given pattern. + * + * @param channel the channel + * @param pattern the pattern + * @return Long array-reply list of keys matching {@code pattern}. + */ + Long keys(KeyStreamingChannel channel, K pattern); + + /** + * Atomically transfer a key from a Redis instance to another one. + * + * @param host the host + * @param port the port + * @param key the key + * @param db the database + * @param timeout the timeout in milliseconds + * @return String simple-string-reply The command returns OK on success. + */ + String migrate(String host, int port, K key, int db, long timeout); + + /** + * Atomically transfer one or more keys from a Redis instance to another one. 
+ * + * @param host the host + * @param port the port + * @param db the database + * @param timeout the timeout in milliseconds + * @param migrateArgs migrate args that allow to configure further options + * @return String simple-string-reply The command returns OK on success. + */ + String migrate(String host, int port, int db, long timeout, MigrateArgs migrateArgs); + + /** + * Move a key to another database. + * + * @param key the key + * @param db the db type: long + * @return Boolean integer-reply specifically: + */ + Boolean move(K key, int db); + + /** + * returns the kind of internal representation used in order to store the value associated with a key. + * + * @param key the key + * @return String + */ + String objectEncoding(K key); + + /** + * returns the number of seconds since the object stored at the specified key is idle (not requested by read or write + * operations). + * + * @param key the key + * @return number of seconds since the object stored at the specified key is idle. + */ + Long objectIdletime(K key); + + /** + * returns the number of references of the value associated with the specified key. + * + * @param key the key + * @return Long + */ + Long objectRefcount(K key); + + /** + * Remove the expiration from a key. + * + * @param key the key + * @return Boolean integer-reply specifically: + * + * {@literal true} if the timeout was removed. {@literal false} if {@code key} does not exist or does not have an + * associated timeout. + */ + Boolean persist(K key); + + /** + * Set a key's time to live in milliseconds. + * + * @param key the key + * @param milliseconds the milliseconds type: long + * @return integer-reply, specifically: + * + * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not + * be set. + */ + Boolean pexpire(K key, long milliseconds); + + /** + * Set the expiration for a key as a UNIX timestamp specified in milliseconds. + * + * @param key the key + * @param timestamp the milliseconds-timestamp type: posix time + * @return Boolean integer-reply specifically: + * + * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not + * be set (see: {@code EXPIRE}). + */ + Boolean pexpireat(K key, Date timestamp); + + /** + * Set the expiration for a key as a UNIX timestamp specified in milliseconds. + * + * @param key the key + * @param timestamp the milliseconds-timestamp type: posix time + * @return Boolean integer-reply specifically: + * + * {@literal true} if the timeout was set. {@literal false} if {@code key} does not exist or the timeout could not + * be set (see: {@code EXPIRE}). + */ + Boolean pexpireat(K key, long timestamp); + + /** + * Get the time to live for a key in milliseconds. + * + * @param key the key + * @return Long integer-reply TTL in milliseconds, or a negative value in order to signal an error (see the description + * above). + */ + Long pttl(K key); + + /** + * Return a random key from the keyspace. + * + * @return K bulk-string-reply the random key, or {@literal null} when the database is empty. + */ + K randomkey(); + + /** + * Rename a key. + * + * @param key the key + * @param newKey the newkey type: key + * @return String simple-string-reply + */ + String rename(K key, K newKey); + + /** + * Rename a key, only if the new key does not exist. 
+ * + * @param key the key + * @param newKey the newkey type: key + * @return Boolean integer-reply specifically: + * + * {@literal true} if {@code key} was renamed to {@code newkey}. {@literal false} if {@code newkey} already exists. + */ + Boolean renamenx(K key, K newKey); + + /** + * Create a key using the provided serialized value, previously obtained using DUMP. + * + * @param key the key + * @param ttl the ttl type: long + * @param value the serialized-value type: string + * @return String simple-string-reply The command returns OK on success. + */ + String restore(K key, long ttl, byte[] value); + + /** + * Create a key using the provided serialized value, previously obtained using DUMP. + * + * @param key the key + * @param value the serialized-value type: string + * @param args the {@link RestoreArgs}, must not be {@literal null}. + * @return String simple-string-reply The command returns OK on success. + * @since 5.1 + */ + String restore(K key, byte[] value, RestoreArgs args); + + /** + * Sort the elements in a list, set or sorted set. + * + * @param key the key + * @return List<V> array-reply list of sorted elements. + */ + List sort(K key); + + /** + * Sort the elements in a list, set or sorted set. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @return Long number of values. + */ + Long sort(ValueStreamingChannel channel, K key); + + /** + * Sort the elements in a list, set or sorted set. + * + * @param key the key + * @param sortArgs sort arguments + * @return List<V> array-reply list of sorted elements. + */ + List sort(K key, SortArgs sortArgs); + + /** + * Sort the elements in a list, set or sorted set. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param sortArgs sort arguments + * @return Long number of values. + */ + Long sort(ValueStreamingChannel channel, K key, SortArgs sortArgs); + + /** + * Sort the elements in a list, set or sorted set. + * + * @param key the key + * @param sortArgs sort arguments + * @param destination the destination key to store sort results + * @return Long number of values. + */ + Long sortStore(K key, SortArgs sortArgs, K destination); + + /** + * Touch one or more keys. Touch sets the last accessed time for a key. Non-exsitent keys wont get created. + * + * @param keys the keys + * @return Long integer-reply the number of found keys. + */ + Long touch(K... keys); + + /** + * Get the time to live for a key. + * + * @param key the key + * @return Long integer-reply TTL in seconds, or a negative value in order to signal an error (see the description above). + */ + Long ttl(K key); + + /** + * Determine the type stored at key. + * + * @param key the key + * @return String simple-string-reply type of {@code key}, or {@code none} when {@code key} does not exist. + */ + String type(K key); + + /** + * Incrementally iterate the keys space. + * + * @return KeyScanCursor<K> scan cursor. + */ + KeyScanCursor scan(); + + /** + * Incrementally iterate the keys space. + * + * @param scanArgs scan arguments + * @return KeyScanCursor<K> scan cursor. + */ + KeyScanCursor scan(ScanArgs scanArgs); + + /** + * Incrementally iterate the keys space. + * + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @param scanArgs scan arguments + * @return KeyScanCursor<K> scan cursor. + */ + KeyScanCursor scan(ScanCursor scanCursor, ScanArgs scanArgs); + + /** + * Incrementally iterate the keys space. 
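The `scan` overloads above follow the same cursor pattern for the key space. A minimal sketch, assuming `redis://localhost:6379` and the synchronous String API; the batch size of 100 is an arbitrary example, and SCAN may return a key more than once, so deduplicate if exact counts matter.

```java
import java.util.ArrayList;
import java.util.List;

import io.lettuce.core.KeyScanCursor;
import io.lettuce.core.RedisClient;
import io.lettuce.core.ScanArgs;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.sync.RedisCommands;

public class KeyScanExample {

    public static void main(String[] args) {
        // Assumes a Redis server listening on localhost:6379.
        RedisClient client = RedisClient.create("redis://localhost:6379");
        StatefulRedisConnection<String, String> connection = client.connect();
        RedisCommands<String, String> commands = connection.sync();

        // SCAN in batches of roughly 100 keys per round trip.
        ScanArgs args = ScanArgs.Builder.limit(100);

        List<String> keys = new ArrayList<>();
        KeyScanCursor<String> cursor = commands.scan(args);
        keys.addAll(cursor.getKeys());
        while (!cursor.isFinished()) {
            cursor = commands.scan(cursor, args);
            keys.addAll(cursor.getKeys());
        }
        System.out.println("Keys returned: " + keys.size());

        connection.close();
        client.shutdown();
    }
}
```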
+ * + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @return KeyScanCursor<K> scan cursor. + */ + KeyScanCursor scan(ScanCursor scanCursor); + + /** + * Incrementally iterate the keys space. + * + * @param channel streaming channel that receives a call for every key + * @return StreamScanCursor scan cursor. + */ + StreamScanCursor scan(KeyStreamingChannel channel); + + /** + * Incrementally iterate the keys space. + * + * @param channel streaming channel that receives a call for every key + * @param scanArgs scan arguments + * @return StreamScanCursor scan cursor. + */ + StreamScanCursor scan(KeyStreamingChannel channel, ScanArgs scanArgs); + + /** + * Incrementally iterate the keys space. + * + * @param channel streaming channel that receives a call for every key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @param scanArgs scan arguments + * @return StreamScanCursor scan cursor. + */ + StreamScanCursor scan(KeyStreamingChannel channel, ScanCursor scanCursor, ScanArgs scanArgs); + + /** + * Incrementally iterate the keys space. + * + * @param channel streaming channel that receives a call for every key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @return StreamScanCursor scan cursor. + */ + StreamScanCursor scan(KeyStreamingChannel channel, ScanCursor scanCursor); +} diff --git a/src/main/templates/io/lettuce/core/api/RedisListCommands.java b/src/main/templates/io/lettuce/core/api/RedisListCommands.java new file mode 100644 index 0000000000..0192a06b93 --- /dev/null +++ b/src/main/templates/io/lettuce/core/api/RedisListCommands.java @@ -0,0 +1,210 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api; + +import java.util.List; + +import io.lettuce.core.KeyValue; +import io.lettuce.core.output.ValueStreamingChannel; + +/** + * ${intent} for Lists. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.0 + */ +public interface RedisListCommands { + + /** + * Remove and get the first element in a list, or block until one is available. + * + * @param timeout the timeout in seconds + * @param keys the keys + * @return KeyValue<K,V> array-reply specifically: + * + * A {@literal null} multi-bulk when no element could be popped and the timeout expired. A two-element multi-bulk + * with the first element being the name of the key where an element was popped and the second element being the + * value of the popped element. + */ + KeyValue blpop(long timeout, K... keys); + + /** + * Remove and get the last element in a list, or block until one is available. + * + * @param timeout the timeout in seconds + * @param keys the keys + * @return KeyValue<K,V> array-reply specifically: + * + * A {@literal null} multi-bulk when no element could be popped and the timeout expired. 
A two-element multi-bulk + * with the first element being the name of the key where an element was popped and the second element being the + * value of the popped element. + */ + KeyValue brpop(long timeout, K... keys); + + /** + * Pop a value from a list, push it to another list and return it; or block until one is available. + * + * @param timeout the timeout in seconds + * @param source the source key + * @param destination the destination type: key + * @return V bulk-string-reply the element being popped from {@code source} and pushed to {@code destination}. If + * {@code timeout} is reached, a + */ + V brpoplpush(long timeout, K source, K destination); + + /** + * Get an element from a list by its index. + * + * @param key the key + * @param index the index type: long + * @return V bulk-string-reply the requested element, or {@literal null} when {@code index} is out of range. + */ + V lindex(K key, long index); + + /** + * Insert an element before or after another element in a list. + * + * @param key the key + * @param before the before + * @param pivot the pivot + * @param value the value + * @return Long integer-reply the length of the list after the insert operation, or {@code -1} when the value {@code pivot} + * was not found. + */ + Long linsert(K key, boolean before, V pivot, V value); + + /** + * Get the length of a list. + * + * @param key the key + * @return Long integer-reply the length of the list at {@code key}. + */ + Long llen(K key); + + /** + * Remove and get the first element in a list. + * + * @param key the key + * @return V bulk-string-reply the value of the first element, or {@literal null} when {@code key} does not exist. + */ + V lpop(K key); + + /** + * Prepend one or multiple values to a list. + * + * @param key the key + * @param values the value + * @return Long integer-reply the length of the list after the push operations. + */ + Long lpush(K key, V... values); + + /** + * Prepend values to a list, only if the list exists. + * + * @param key the key + * @param values the values + * @return Long integer-reply the length of the list after the push operation. + */ + Long lpushx(K key, V... values); + + /** + * Get a range of elements from a list. + * + * @param key the key + * @param start the start type: long + * @param stop the stop type: long + * @return List<V> array-reply list of elements in the specified range. + */ + List lrange(K key, long start, long stop); + + /** + * Get a range of elements from a list. + * + * @param channel the channel + * @param key the key + * @param start the start type: long + * @param stop the stop type: long + * @return Long count of elements in the specified range. + */ + Long lrange(ValueStreamingChannel channel, K key, long start, long stop); + + /** + * Remove elements from a list. + * + * @param key the key + * @param count the count type: long + * @param value the value + * @return Long integer-reply the number of removed elements. + */ + Long lrem(K key, long count, V value); + + /** + * Set the value of an element in a list by its index. + * + * @param key the key + * @param index the index type: long + * @param value the value + * @return String simple-string-reply + */ + String lset(K key, long index, V value); + + /** + * Trim a list to the specified range. + * + * @param key the key + * @param start the start type: long + * @param stop the stop type: long + * @return String simple-string-reply + */ + String ltrim(K key, long start, long stop); + + /** + * Remove and get the last element in a list. 
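A small sketch of the list commands above used as a simple queue (`lpush` to produce, `blpop` to consume), assuming `redis://localhost:6379` and the synchronous String API; the key name, payloads, and 5-second timeout are illustrative.

```java
import io.lettuce.core.KeyValue;
import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.sync.RedisCommands;

public class ListQueueExample {

    public static void main(String[] args) {
        // Assumes a Redis server listening on localhost:6379.
        RedisClient client = RedisClient.create("redis://localhost:6379");
        StatefulRedisConnection<String, String> connection = client.connect();
        RedisCommands<String, String> commands = connection.sync();

        // Producer side: push jobs onto the head of the list.
        commands.lpush("jobs", "job-1", "job-2");

        // Consumer side: block for up to 5 seconds waiting for an element.
        KeyValue<String, String> popped = commands.blpop(5, "jobs");
        if (popped != null) {
            System.out.println("Popped " + popped.getValue() + " from " + popped.getKey());
        }

        connection.close();
        client.shutdown();
    }
}
```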
+ * + * @param key the key + * @return V bulk-string-reply the value of the last element, or {@literal null} when {@code key} does not exist. + */ + V rpop(K key); + + /** + * Remove the last element in a list, append it to another list and return it. + * + * @param source the source key + * @param destination the destination type: key + * @return V bulk-string-reply the element being popped and pushed. + */ + V rpoplpush(K source, K destination); + + /** + * Append one or multiple values to a list. + * + * @param key the key + * @param values the value + * @return Long integer-reply the length of the list after the push operation. + */ + Long rpush(K key, V... values); + + /** + * Append values to a list, only if the list exists. + * + * @param key the key + * @param values the values + * @return Long integer-reply the length of the list after the push operation. + */ + Long rpushx(K key, V... values); +} diff --git a/src/main/templates/io/lettuce/core/api/RedisScriptingCommands.java b/src/main/templates/io/lettuce/core/api/RedisScriptingCommands.java new file mode 100644 index 0000000000..3cefd9ecdf --- /dev/null +++ b/src/main/templates/io/lettuce/core/api/RedisScriptingCommands.java @@ -0,0 +1,163 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api; + +import java.util.List; + +import io.lettuce.core.ScriptOutputType; + +/** + * ${intent} for Scripting. {@link java.lang.String Lua scripts} are encoded by using the configured + * {@link io.lettuce.core.ClientOptions#getScriptCharset() charset}. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.0 + */ +public interface RedisScriptingCommands { + + /** + * Execute a Lua script server side. + * + * @param script Lua 5.1 script. + * @param type output type + * @param keys key names + * @param expected return type + * @return script result + */ + T eval(String script, ScriptOutputType type, K... keys); + + /** + * Execute a Lua script server side. + * + * @param script Lua 5.1 script. + * @param type output type + * @param keys key names + * @param expected return type + * @return script result + * @since 6.0 + */ + T eval(byte[] script, ScriptOutputType type, K... keys); + + /** + * Execute a Lua script server side. + * + * @param script Lua 5.1 script. + * @param type the type + * @param keys the keys + * @param values the values + * @param expected return type + * @return script result + */ + T eval(String script, ScriptOutputType type, K[] keys, V... values); + + /** + * Execute a Lua script server side. + * + * @param script Lua 5.1 script. + * @param type the type + * @param keys the keys + * @param values the values + * @param expected return type + * @return script result + * @since 6.0 + */ + T eval(byte[] script, ScriptOutputType type, K[] keys, V... 
values); + + /** + * Evaluates a script cached on the server side by its SHA1 digest + * + * @param digest SHA1 of the script + * @param type the type + * @param keys the keys + * @param expected return type + * @return script result + */ + T evalsha(String digest, ScriptOutputType type, K... keys); + + /** + * Execute a Lua script server side. + * + * @param digest SHA1 of the script + * @param type the type + * @param keys the keys + * @param values the values + * @param expected return type + * @return script result + */ + T evalsha(String digest, ScriptOutputType type, K[] keys, V... values); + + /** + * Check existence of scripts in the script cache. + * + * @param digests script digests + * @return List<Boolean> array-reply The command returns an array of integers that correspond to the specified SHA1 + * digest arguments. For every corresponding SHA1 digest of a script that actually exists in the script cache, an 1 + * is returned, otherwise 0 is returned. + */ + List scriptExists(String... digests); + + /** + * Remove all the scripts from the script cache. + * + * @return String simple-string-reply + */ + String scriptFlush(); + + /** + * Kill the script currently in execution. + * + * @return String simple-string-reply + */ + String scriptKill(); + + /** + * Load the specified Lua script into the script cache. + * + * @param script script content + * @return String bulk-string-reply This command returns the SHA1 digest of the script added into the script cache. + * @since 6.0 + */ + String scriptLoad(String script); + + /** + * Load the specified Lua script into the script cache. + * + * @param script script content + * @return String bulk-string-reply This command returns the SHA1 digest of the script added into the script cache. + * @since 6.0 + */ + String scriptLoad(byte[] script); + + /** + * Create a SHA1 digest from a Lua script. + * + * @param script script content + * @return the SHA1 value + * @since 6.0 + */ + String digest(String script); + + /** + * Create a SHA1 digest from a Lua script. + * + * @param script script content + * @return the SHA1 value + * @since 6.0 + */ + String digest(byte[] script); +} diff --git a/src/main/templates/io/lettuce/core/api/RedisSentinelCommands.java b/src/main/templates/io/lettuce/core/api/RedisSentinelCommands.java new file mode 100644 index 0000000000..de698f2e76 --- /dev/null +++ b/src/main/templates/io/lettuce/core/api/RedisSentinelCommands.java @@ -0,0 +1,192 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.net.SocketAddress; +import java.util.List; +import java.util.Map; + +import io.lettuce.core.KillArgs; +import io.lettuce.core.sentinel.api.StatefulRedisSentinelConnection; + +/** + * ${intent} for Redis Sentinel. + * + * @param Key type. + * @param Value type. 
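A minimal sketch of the scripting commands defined above, assuming a local Redis and an illustrative counter script; the key name and argument are examples only:

import io.lettuce.core.RedisClient;
import io.lettuce.core.ScriptOutputType;
import io.lettuce.core.api.sync.RedisCommands;

public class ScriptingSketch {
    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        RedisCommands<String, String> redis = client.connect().sync();

        String script = "return redis.call('incrby', KEYS[1], ARGV[1])";

        // Direct evaluation; the ScriptOutputType tells Lettuce how to convert the Lua reply.
        Long value = redis.eval(script, ScriptOutputType.INTEGER, new String[] { "counter" }, "5");

        // Cache the script server side and re-run it by SHA1 digest.
        String sha = redis.scriptLoad(script);
        Long again = redis.evalsha(sha, ScriptOutputType.INTEGER, new String[] { "counter" }, "5");

        client.shutdown();
    }
}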
+ * @author Mark Paluch + * @since 4.0 + */ +public interface RedisSentinelCommands { + + /** + * Return the ip and port number of the master with that name. + * + * @param key the key + * @return SocketAddress; + */ + SocketAddress getMasterAddrByName(K key); + + /** + * Enumerates all the monitored masters and their states. + * + * @return Map<K, V>> + */ + List> masters(); + + /** + * Show the state and info of the specified master. + * + * @param key the key + * @return Map<K, V> + */ + Map master(K key); + + /** + * Provides a list of replicas for the master with the specified name. + * + * @param key the key + * @return List<Map<K, V>> + */ + List> slaves(K key); + + /** + * This command will reset all the masters with matching name. + * + * @param key the key + * @return Long + */ + Long reset(K key); + + /** + * Perform a failover. + * + * @param key the master id + * @return String + */ + String failover(K key); + + /** + * This command tells the Sentinel to start monitoring a new master with the specified name, ip, port, and quorum. + * + * @param key the key + * @param ip the IP address + * @param port the port + * @param quorum the quorum count + * @return String + */ + String monitor(K key, String ip, int port, int quorum); + + /** + * Multiple option / value pairs can be specified (or none at all). + * + * @param key the key + * @param option the option + * @param value the value + * + * @return String simple-string-reply {@code OK} if {@code SET} was executed correctly. + */ + String set(K key, String option, V value); + + /** + * remove the specified master. + * + * @param key the key + * @return String + */ + String remove(K key); + + /** + * Get the current connection name. + * + * @return K bulk-string-reply The connection name, or a null bulk reply if no name is set. + */ + K clientGetname(); + + /** + * Set the current connection name. + * + * @param name the client name + * @return simple-string-reply {@code OK} if the connection name was successfully set. + */ + String clientSetname(K name); + + /** + * Kill the connection of a client identified by ip:port. + * + * @param addr ip:port + * @return String simple-string-reply {@code OK} if the connection exists and has been closed + */ + String clientKill(String addr); + + /** + * Kill connections of clients which are filtered by {@code killArgs} + * + * @param killArgs args for the kill operation + * @return Long integer-reply number of killed connections + */ + Long clientKill(KillArgs killArgs); + + /** + * Stop processing commands from clients for some time. + * + * @param timeout the timeout value in milliseconds + * @return String simple-string-reply The command returns OK or an error if the timeout is invalid. + */ + String clientPause(long timeout); + + /** + * Get the list of client connections. + * + * @return String bulk-string-reply a unique string, formatted as follows: One client connection per line (separated by LF), + * each line is composed of a succession of property=value fields separated by a space character. + */ + String clientList(); + + /** + * Get information and statistics about the server. + * + * @return String bulk-string-reply as a collection of text lines. + */ + String info(); + + /** + * Get information and statistics about the server. + * + * @param section the section type: string + * @return String bulk-string-reply as a collection of text lines. + */ + String info(String section); + + /** + * Ping the server. 
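A minimal sketch of querying Redis Sentinel through the commands above; the Sentinel address, master name, and the reply field names used for printing are assumptions for illustration:

import java.net.SocketAddress;
import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisURI;
import io.lettuce.core.sentinel.api.StatefulRedisSentinelConnection;
import io.lettuce.core.sentinel.api.sync.RedisSentinelCommands;

public class SentinelSketch {
    public static void main(String[] args) {
        RedisClient client = RedisClient.create();
        StatefulRedisSentinelConnection<String, String> connection = client
                .connectSentinel(RedisURI.create("redis://localhost:26379"));
        RedisSentinelCommands<String, String> sentinel = connection.sync();

        // Current address of the monitored master.
        SocketAddress master = sentinel.getMasterAddrByName("mymaster");

        // State of all monitored masters, one map of properties per master.
        sentinel.masters().forEach(m -> System.out.println(m.get("name") + " -> " + m.get("flags")));

        connection.close();
        client.shutdown();
    }
}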
+ * + * @return String simple-string-reply + */ + String ping(); + + /** + * + * @return true if the connection is open (connected and not closed). + */ + boolean isOpen(); + + /** + * + * @return the underlying connection. + */ + StatefulRedisSentinelConnection getStatefulConnection(); +} diff --git a/src/main/templates/io/lettuce/core/api/RedisServerCommands.java b/src/main/templates/io/lettuce/core/api/RedisServerCommands.java new file mode 100644 index 0000000000..a09797ff6e --- /dev/null +++ b/src/main/templates/io/lettuce/core/api/RedisServerCommands.java @@ -0,0 +1,372 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api; + +import java.util.Date; +import java.util.List; +import java.util.Map; + +import io.lettuce.core.KillArgs; +import io.lettuce.core.UnblockType; +import io.lettuce.core.protocol.CommandType; + +/** + * ${intent} for Server Control. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.0 + */ +public interface RedisServerCommands { + + /** + * Asynchronously rewrite the append-only file. + * + * @return String simple-string-reply always {@code OK}. + */ + String bgrewriteaof(); + + /** + * Asynchronously save the dataset to disk. + * + * @return String simple-string-reply + */ + String bgsave(); + + /** + * Get the current connection name. + * + * @return K bulk-string-reply The connection name, or a null bulk reply if no name is set. + */ + K clientGetname(); + + /** + * Set the current connection name. + * + * @param name the client name + * @return simple-string-reply {@code OK} if the connection name was successfully set. + */ + String clientSetname(K name); + + /** + * Kill the connection of a client identified by ip:port. + * + * @param addr ip:port + * @return String simple-string-reply {@code OK} if the connection exists and has been closed + */ + String clientKill(String addr); + + /** + * Kill connections of clients which are filtered by {@code killArgs} + * + * @param killArgs args for the kill operation + * @return Long integer-reply number of killed connections + */ + Long clientKill(KillArgs killArgs); + + /** + * Unblock the specified blocked client. + * + * @param id the client id. + * @param type unblock type. + * @return Long integer-reply number of unblocked connections. + * @since 5.1 + */ + Long clientUnblock(long id, UnblockType type); + + /** + * Stop processing commands from clients for some time. + * + * @param timeout the timeout value in milliseconds + * @return String simple-string-reply The command returns OK or an error if the timeout is invalid. + */ + String clientPause(long timeout); + + /** + * Get the list of client connections. + * + * @return String bulk-string-reply a unique string, formatted as follows: One client connection per line (separated by LF), + * each line is composed of a succession of property=value fields separated by a space character. 
+ */ + String clientList(); + + /** + * Get the id of the current connection. + * + * @return Long The command just returns the ID of the current connection. + * @since 5.3 + */ + Long clientId(); + + /** + * Returns an array reply of details about all Redis commands. + * + * @return List<Object> array-reply + */ + List command(); + + /** + * Returns an array reply of details about the requested commands. + * + * @param commands the commands to query for + * @return List<Object> array-reply + */ + List commandInfo(String... commands); + + /** + * Returns an array reply of details about the requested commands. + * + * @param commands the commands to query for + * @return List<Object> array-reply + */ + List commandInfo(CommandType... commands); + + /** + * Get total number of Redis commands. + * + * @return Long integer-reply of number of total commands in this Redis server. + */ + Long commandCount(); + + /** + * Get the value of a configuration parameter. + * + * @param parameter name of the parameter + * @return Map<String, String> bulk-string-reply + */ + Map configGet(String parameter); + + /** + * Reset the stats returned by INFO. + * + * @return String simple-string-reply always {@code OK}. + */ + String configResetstat(); + + /** + * Rewrite the configuration file with the in memory configuration. + * + * @return String simple-string-reply {@code OK} when the configuration was rewritten properly. Otherwise an error is + * returned. + */ + String configRewrite(); + + /** + * Set a configuration parameter to the given value. + * + * @param parameter the parameter name + * @param value the parameter value + * @return String simple-string-reply: {@code OK} when the configuration was set properly. Otherwise an error is returned. + */ + String configSet(String parameter, String value); + + /** + * Return the number of keys in the selected database. + * + * @return Long integer-reply + */ + Long dbsize(); + + /** + * Crash and recover + * + * @param delay optional delay in milliseconds + * @return String simple-string-reply + */ + String debugCrashAndRecover(Long delay); + + /** + * Get debugging information about the internal hash-table state. + * + * @param db the database number + * @return String simple-string-reply + */ + String debugHtstats(int db); + + /** + * Get debugging information about a key. + * + * @param key the key + * @return String simple-string-reply + */ + String debugObject(K key); + + /** + * Make the server crash: Out of memory. + * + * @return nothing, because the server crashes before returning. + */ + void debugOom(); + + /** + * Make the server crash: Invalid pointer access. + * + * @return nothing, because the server crashes before returning. + */ + void debugSegfault(); + + /** + * Save RDB, clear the database and reload RDB. + * + * @return String simple-string-reply The commands returns OK on success. + */ + String debugReload(); + + /** + * Restart the server gracefully. + * + * @param delay optional delay in milliseconds + * @return String simple-string-reply + */ + String debugRestart(Long delay); + + /** + * Get debugging information about the internal SDS length. + * + * @param key the key + * @return String simple-string-reply + */ + String debugSdslen(K key); + + /** + * Remove all keys from all databases. + * + * @return String simple-string-reply + */ + String flushall(); + + /** + * Remove all keys asynchronously from all databases. 
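A minimal sketch of runtime configuration and keyspace inspection using the server commands above; the parameter names are standard Redis settings and the values are examples:

import java.util.Map;
import io.lettuce.core.RedisClient;
import io.lettuce.core.api.sync.RedisCommands;

public class ServerCommandsSketch {
    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        RedisCommands<String, String> redis = client.connect().sync();

        // Read and change a configuration parameter without restarting the server.
        Map<String, String> maxmemory = redis.configGet("maxmemory");
        redis.configSet("maxmemory-policy", "allkeys-lru");

        // Number of keys in the currently selected database.
        Long keys = redis.dbsize();
        System.out.println(maxmemory + ", keys=" + keys);

        client.shutdown();
    }
}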
+ * + * @return String simple-string-reply + */ + String flushallAsync(); + + /** + * Remove all keys from the current database. + * + * @return String simple-string-reply + */ + String flushdb(); + + /** + * Remove all keys asynchronously from the current database. + * + * @return String simple-string-reply + */ + String flushdbAsync(); + + /** + * Get information and statistics about the server. + * + * @return String bulk-string-reply as a collection of text lines. + */ + String info(); + + /** + * Get information and statistics about the server. + * + * @param section the section type: string + * @return String bulk-string-reply as a collection of text lines. + */ + String info(String section); + + /** + * Get the UNIX time stamp of the last successful save to disk. + * + * @return Date integer-reply an UNIX time stamp. + */ + Date lastsave(); + + /** + * Reports the number of bytes that a key and its value require to be stored in RAM. + * + * @return memory usage in bytes. + * @since 5.2 + */ + Long memoryUsage(K key); + + /** + * Synchronously save the dataset to disk. + * + * @return String simple-string-reply The commands returns OK on success. + */ + String save(); + + /** + * Synchronously save the dataset to disk and then shut down the server. + * + * @param save {@literal true} force save operation + */ + void shutdown(boolean save); + + /** + * Make the server a replica of another instance, or promote it as master. + * + * @param host the host type: string + * @param port the port type: string + * @return String simple-string-reply + */ + String slaveof(String host, int port); + + /** + * Promote server as master. + * + * @return String simple-string-reply + */ + String slaveofNoOne(); + + /** + * Read the slow log. + * + * @return List<Object> deeply nested multi bulk replies + */ + List slowlogGet(); + + /** + * Read the slow log. + * + * @param count the count + * @return List<Object> deeply nested multi bulk replies + */ + List slowlogGet(int count); + + /** + * Obtaining the current length of the slow log. + * + * @return Long length of the slow log. + */ + Long slowlogLen(); + + /** + * Resetting the slow log. + * + * @return String simple-string-reply The commands returns OK on success. + */ + String slowlogReset(); + + /** + * Return the current server time. + * + * @return List<V> array-reply specifically: + * + * A multi bulk reply containing two elements: + * + * unix time in seconds. microseconds. + */ + List time(); +} diff --git a/src/main/templates/io/lettuce/core/api/RedisSetCommands.java b/src/main/templates/io/lettuce/core/api/RedisSetCommands.java new file mode 100644 index 0000000000..6920620fb1 --- /dev/null +++ b/src/main/templates/io/lettuce/core/api/RedisSetCommands.java @@ -0,0 +1,307 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.api; + +import java.util.List; +import java.util.Set; + +import io.lettuce.core.ScanArgs; +import io.lettuce.core.ScanCursor; +import io.lettuce.core.StreamScanCursor; +import io.lettuce.core.ValueScanCursor; +import io.lettuce.core.output.ValueStreamingChannel; + +/** + * ${intent} for Sets. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.0 + */ +public interface RedisSetCommands { + + /** + * Add one or more members to a set. + * + * @param key the key + * @param members the member type: value + * @return Long integer-reply the number of elements that were added to the set, not including all the elements already + * present into the set. + */ + Long sadd(K key, V... members); + + /** + * Get the number of members in a set. + * + * @param key the key + * @return Long integer-reply the cardinality (number of elements) of the set, or {@literal false} if {@code key} does not + * exist. + */ + Long scard(K key); + + /** + * Subtract multiple sets. + * + * @param keys the key + * @return Set<V> array-reply list with members of the resulting set. + */ + Set sdiff(K... keys); + + /** + * Subtract multiple sets. + * + * @param channel the channel + * @param keys the keys + * @return Long count of members of the resulting set. + */ + Long sdiff(ValueStreamingChannel channel, K... keys); + + /** + * Subtract multiple sets and store the resulting set in a key. + * + * @param destination the destination type: key + * @param keys the key + * @return Long integer-reply the number of elements in the resulting set. + */ + Long sdiffstore(K destination, K... keys); + + /** + * Intersect multiple sets. + * + * @param keys the key + * @return Set<V> array-reply list with members of the resulting set. + */ + Set sinter(K... keys); + + /** + * Intersect multiple sets. + * + * @param channel the channel + * @param keys the keys + * @return Long count of members of the resulting set. + */ + Long sinter(ValueStreamingChannel channel, K... keys); + + /** + * Intersect multiple sets and store the resulting set in a key. + * + * @param destination the destination type: key + * @param keys the key + * @return Long integer-reply the number of elements in the resulting set. + */ + Long sinterstore(K destination, K... keys); + + /** + * Determine if a given value is a member of a set. + * + * @param key the key + * @param member the member type: value + * @return Boolean integer-reply specifically: + * + * {@literal true} if the element is a member of the set. {@literal false} if the element is not a member of the + * set, or if {@code key} does not exist. + */ + Boolean sismember(K key, V member); + + /** + * Move a member from one set to another. + * + * @param source the source key + * @param destination the destination type: key + * @param member the member type: value + * @return Boolean integer-reply specifically: + * + * {@literal true} if the element is moved. {@literal false} if the element is not a member of {@code source} and no + * operation was performed. + */ + Boolean smove(K source, K destination, V member); + + /** + * Get all the members in a set. + * + * @param key the key + * @return Set<V> array-reply all elements of the set. + */ + Set smembers(K key); + + /** + * Get all the members in a set. + * + * @param channel the channel + * @param key the keys + * @return Long count of members of the resulting set. + */ + Long smembers(ValueStreamingChannel channel, K key); + + /** + * Remove and return a random member from a set. 
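A minimal sketch of set membership and set algebra with the commands above; the key and member names are examples only:

import java.util.Set;
import io.lettuce.core.RedisClient;
import io.lettuce.core.api.sync.RedisCommands;

public class SetAlgebraSketch {
    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        RedisCommands<String, String> redis = client.connect().sync();

        redis.sadd("java-devs", "alice", "bob");
        redis.sadd("redis-devs", "bob", "carol");

        Set<String> both = redis.sinter("java-devs", "redis-devs");     // [bob]
        Set<String> javaOnly = redis.sdiff("java-devs", "redis-devs");  // [alice]
        Boolean member = redis.sismember("redis-devs", "carol");        // true

        // Store the intersection under a new key and return its cardinality.
        Long stored = redis.sinterstore("shared-devs", "java-devs", "redis-devs");

        client.shutdown();
    }
}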
+ * + * @param key the key + * @return V bulk-string-reply the removed element, or {@literal null} when {@code key} does not exist. + */ + V spop(K key); + + /** + * Remove and return one or multiple random members from a set. + * + * @param key the key + * @param count number of members to pop + * @return V bulk-string-reply the removed element, or {@literal null} when {@code key} does not exist. + */ + Set spop(K key, long count); + + /** + * Get one random member from a set. + * + * @param key the key + * + * @return V bulk-string-reply without the additional {@code count} argument the command returns a Bulk Reply with the + * randomly selected element, or {@literal null} when {@code key} does not exist. + */ + V srandmember(K key); + + /** + * Get one or multiple random members from a set. + * + * @param key the key + * @param count the count type: long + * @return Set<V> bulk-string-reply without the additional {@code count} argument the command returns a Bulk Reply + * with the randomly selected element, or {@literal null} when {@code key} does not exist. + */ + List srandmember(K key, long count); + + /** + * Get one or multiple random members from a set. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param count the count + * @return Long count of members of the resulting set. + */ + Long srandmember(ValueStreamingChannel channel, K key, long count); + + /** + * Remove one or more members from a set. + * + * @param key the key + * @param members the member type: value + * @return Long integer-reply the number of members that were removed from the set, not including non existing members. + */ + Long srem(K key, V... members); + + /** + * Add multiple sets. + * + * @param keys the key + * @return Set<V> array-reply list with members of the resulting set. + */ + Set sunion(K... keys); + + /** + * Add multiple sets. + * + * @param channel streaming channel that receives a call for every value + * @param keys the keys + * @return Long count of members of the resulting set. + */ + Long sunion(ValueStreamingChannel channel, K... keys); + + /** + * Add multiple sets and store the resulting set in a key. + * + * @param destination the destination type: key + * @param keys the key + * @return Long integer-reply the number of elements in the resulting set. + */ + Long sunionstore(K destination, K... keys); + + /** + * Incrementally iterate Set elements. + * + * @param key the key + * @return ValueScanCursor<V> scan cursor. + */ + ValueScanCursor sscan(K key); + + /** + * Incrementally iterate Set elements. + * + * @param key the key + * @param scanArgs scan arguments + * @return ValueScanCursor<V> scan cursor. + */ + ValueScanCursor sscan(K key, ScanArgs scanArgs); + + /** + * Incrementally iterate Set elements. + * + * @param key the key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @param scanArgs scan arguments + * @return ValueScanCursor<V> scan cursor. + */ + ValueScanCursor sscan(K key, ScanCursor scanCursor, ScanArgs scanArgs); + + /** + * Incrementally iterate Set elements. + * + * @param key the key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @return ValueScanCursor<V> scan cursor. + */ + ValueScanCursor sscan(K key, ScanCursor scanCursor); + + /** + * Incrementally iterate Set elements. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @return StreamScanCursor scan cursor. 
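A minimal sketch of cursor-based iteration with SSCAN as defined above; the key, batch size, and member values are illustrative:

import io.lettuce.core.RedisClient;
import io.lettuce.core.ScanArgs;
import io.lettuce.core.ValueScanCursor;
import io.lettuce.core.api.sync.RedisCommands;

public class SetScanSketch {
    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        RedisCommands<String, String> redis = client.connect().sync();

        redis.sadd("tags", "redis", "java", "lettuce");

        // Keep resuming from the returned cursor until the server reports the scan as finished.
        ValueScanCursor<String> cursor = redis.sscan("tags", ScanArgs.Builder.limit(50));
        cursor.getValues().forEach(System.out::println);
        while (!cursor.isFinished()) {
            cursor = redis.sscan("tags", cursor, ScanArgs.Builder.limit(50));
            cursor.getValues().forEach(System.out::println);
        }

        client.shutdown();
    }
}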
+ */ + StreamScanCursor sscan(ValueStreamingChannel channel, K key); + + /** + * Incrementally iterate Set elements. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param scanArgs scan arguments + * @return StreamScanCursor scan cursor. + */ + StreamScanCursor sscan(ValueStreamingChannel channel, K key, ScanArgs scanArgs); + + /** + * Incrementally iterate Set elements. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @param scanArgs scan arguments + * @return StreamScanCursor scan cursor. + */ + StreamScanCursor sscan(ValueStreamingChannel channel, K key, ScanCursor scanCursor, ScanArgs scanArgs); + + /** + * Incrementally iterate Set elements. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @return StreamScanCursor scan cursor. + */ + StreamScanCursor sscan(ValueStreamingChannel channel, K key, ScanCursor scanCursor); +} diff --git a/src/main/templates/io/lettuce/core/api/RedisSortedSetCommands.java b/src/main/templates/io/lettuce/core/api/RedisSortedSetCommands.java new file mode 100644 index 0000000000..2b392b21fd --- /dev/null +++ b/src/main/templates/io/lettuce/core/api/RedisSortedSetCommands.java @@ -0,0 +1,1252 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api; + +import java.util.List; + +import io.lettuce.core.*; +import io.lettuce.core.output.ScoredValueStreamingChannel; +import io.lettuce.core.output.ValueStreamingChannel; + +/** + * ${intent} for Sorted Sets. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.0 + */ +public interface RedisSortedSetCommands { + + /** + * Removes and returns a member with the lowest scores in the sorted set stored at one of the keys. + * + * @param timeout the timeout in seconds. + * @param keys the keys. + * @return KeyValue<K, ScoredValue<V>> multi-bulk containing the name of the key, the score and the popped member. + * @since 5.1 + */ + KeyValue> bzpopmin(long timeout, K... keys); + + /** + * Removes and returns a member with the highest scores in the sorted set stored at one of the keys. + * + * @param timeout the timeout in seconds. + * @param keys the keys. + * @return KeyValue<K, ScoredValue<V>> multi-bulk containing the name of the key, the score and the popped member. + * @since 5.1 + */ + KeyValue> bzpopmax(long timeout, K... keys); + + /** + * Add one or more members to a sorted set, or update its score if it already exists. 
+ * + * @param key the key + * @param score the score + * @param member the member + * + * @return Long integer-reply specifically: + * + * The number of elements added to the sorted sets, not including elements already existing for which the score was + * updated. + */ + Long zadd(K key, double score, V member); + + /** + * Add one or more members to a sorted set, or update its score if it already exists. + * + * @param key the key + * @param scoresAndValues the scoresAndValue tuples (score,value,score,value,...) + * @return Long integer-reply specifically: + * + * The number of elements added to the sorted sets, not including elements already existing for which the score was + * updated. + */ + Long zadd(K key, Object... scoresAndValues); + + /** + * Add one or more members to a sorted set, or update its score if it already exists. + * + * @param key the key + * @param scoredValues the scored values + * @return Long integer-reply specifically: + * + * The number of elements added to the sorted sets, not including elements already existing for which the score was + * updated. + */ + Long zadd(K key, ScoredValue... scoredValues); + + /** + * Add one or more members to a sorted set, or update its score if it already exists. + * + * @param key the key + * @param zAddArgs arguments for zadd + * @param score the score + * @param member the member + * + * @return Long integer-reply specifically: + * + * The number of elements added to the sorted sets, not including elements already existing for which the score was + * updated. + */ + Long zadd(K key, ZAddArgs zAddArgs, double score, V member); + + /** + * Add one or more members to a sorted set, or update its score if it already exists. + * + * @param key the key + * @param zAddArgs arguments for zadd + * @param scoresAndValues the scoresAndValue tuples (score,value,score,value,...) + * @return Long integer-reply specifically: + * + * The number of elements added to the sorted sets, not including elements already existing for which the score was + * updated. + */ + Long zadd(K key, ZAddArgs zAddArgs, Object... scoresAndValues); + + /** + * Add one or more members to a sorted set, or update its score if it already exists. + * + * @param key the ke + * @param zAddArgs arguments for zadd + * @param scoredValues the scored values + * @return Long integer-reply specifically: + * + * The number of elements added to the sorted sets, not including elements already existing for which the score was + * updated. + */ + Long zadd(K key, ZAddArgs zAddArgs, ScoredValue... scoredValues); + + /** + * Add one or more members to a sorted set, or update its score if it already exists applying the {@code INCR} option. ZADD + * acts like ZINCRBY. + * + * @param key the key + * @param score the score + * @param member the member + * @return Long integer-reply specifically: The total number of elements changed + */ + Double zaddincr(K key, double score, V member); + + /** + * Add one or more members to a sorted set, or update its score if it already exists applying the {@code INCR} option. ZADD + * acts like ZINCRBY. + * + * @param key the key + * @param zAddArgs arguments for zadd + * @param score the score + * @param member the member + * @return Long integer-reply specifically: The total number of elements changed + * @since 4.3 + */ + Double zaddincr(K key, ZAddArgs zAddArgs, double score, V member); + + /** + * Get the number of members in a sorted set. 
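A minimal sketch of ZADD with modifier arguments and the INCR variant documented above; key, member, and scores are examples:

import io.lettuce.core.RedisClient;
import io.lettuce.core.ZAddArgs;
import io.lettuce.core.api.sync.RedisCommands;

public class ZAddArgsSketch {
    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        RedisCommands<String, String> redis = client.connect().sync();

        // NX: only add new members, never update the score of an existing one.
        redis.zadd("highscores", ZAddArgs.Builder.nx(), 100, "alice");
        redis.zadd("highscores", ZAddArgs.Builder.nx(), 50, "alice");   // ignored, alice already exists

        // The INCR variant behaves like ZINCRBY and returns the member's new score.
        Double score = redis.zaddincr("highscores", 25, "alice");       // 125.0

        client.shutdown();
    }
}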
+ * + * @param key the key + * @return Long integer-reply the cardinality (number of elements) of the sorted set, or {@literal false} if {@code key} + * does not exist. + */ + Long zcard(K key); + + /** + * Count the members in a sorted set with scores within the given values. + * + * @param key the key + * @param min min score + * @param max max score + * @return Long integer-reply the number of elements in the specified score range. + * @deprecated Use {@link #zcount(java.lang.Object, Range)} + */ + @Deprecated + Long zcount(K key, double min, double max); + + /** + * Count the members in a sorted set with scores within the given values. + * + * @param key the key + * @param min min score + * @param max max score + * @return Long integer-reply the number of elements in the specified score range. + * @deprecated Use {@link #zcount(java.lang.Object, Range)} + */ + @Deprecated + Long zcount(K key, String min, String max); + + /** + * Count the members in a sorted set with scores within the given {@link Range}. + * + * @param key the key + * @param range the range + * @return Long integer-reply the number of elements in the specified score range. + * @since 4.3 + */ + Long zcount(K key, Range range); + + /** + * Increment the score of a member in a sorted set. + * + * @param key the key + * @param amount the increment type: long + * @param member the member type: value + * @return Double bulk-string-reply the new score of {@code member} (a double precision floating point number), represented + * as string. + */ + Double zincrby(K key, double amount, V member); + + /** + * Intersect multiple sorted sets and store the resulting sorted set in a new key. + * + * @param destination the destination + * @param keys the keys + * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. + */ + Long zinterstore(K destination, K... keys); + + /** + * Intersect multiple sorted sets and store the resulting sorted set in a new key. + * + * @param destination the destination + * @param storeArgs the storeArgs + * @param keys the keys + * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. + */ + Long zinterstore(K destination, ZStoreArgs storeArgs, K... keys); + + /** + * Count the number of members in a sorted set between a given lexicographical range. + * + * @param key the key + * @param min min score + * @param max max score + * @return Long integer-reply the number of elements in the specified score range. + * @deprecated Use {@link #zlexcount(java.lang.Object, Range)} + */ + @Deprecated + Long zlexcount(K key, String min, String max); + + /** + * Count the number of members in a sorted set between a given lexicographical range. + * + * @param key the key + * @param range the range + * @return Long integer-reply the number of elements in the specified score range. + * @since 4.3 + */ + Long zlexcount(K key, Range range); + + /** + * Removes and returns up to count members with the lowest scores in the sorted set stored at key. + * + * @param key the key + * @return ScoredValue<V> the removed element. + * @since 5.1 + */ + ScoredValue zpopmin(K key); + + /** + * Removes and returns up to count members with the lowest scores in the sorted set stored at key. + * + * @param key the key. + * @param count the number of elements to return. + * @return List<ScoredValue<V>> array-reply list of popped scores and elements. 
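A minimal sketch of counting and popping sorted set members per the commands above; keys, members, and the timeout are illustrative:

import io.lettuce.core.KeyValue;
import io.lettuce.core.Range;
import io.lettuce.core.RedisClient;
import io.lettuce.core.ScoredValue;
import io.lettuce.core.api.sync.RedisCommands;

public class SortedSetPopSketch {
    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        RedisCommands<String, String> redis = client.connect().sync();

        redis.zadd("jobs", 1, "first");
        redis.zadd("jobs", 2, "second");

        Long inRange = redis.zcount("jobs", Range.create(1, 2));      // 2

        // Pop the lowest-scored member; the blocking variant waits up to 1 second across several keys.
        ScoredValue<String> lowest = redis.zpopmin("jobs");
        KeyValue<String, ScoredValue<String>> next = redis.bzpopmin(1, "jobs", "other-jobs");

        client.shutdown();
    }
}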
+ * @since 5.1 + */ + List> zpopmin(K key, long count); + + /** + * Removes and returns up to count members with the highest scores in the sorted set stored at key. + * + * @param key the key + * @return ScoredValue<V> the removed element. + * @since 5.1 + */ + ScoredValue zpopmax(K key); + + /** + * Removes and returns up to count members with the highest scores in the sorted set stored at key. + * + * @param key the key. + * @param count the number of elements to return. + * @return List<ScoredValue<V>> array-reply list of popped scores and elements. + * @since 5.1 + */ + List> zpopmax(K key, long count); + + /** + * Return a range of members in a sorted set, by index. + * + * @param key the key + * @param start the start + * @param stop the stop + * @return List<V> array-reply list of elements in the specified range. + */ + List zrange(K key, long start, long stop); + + /** + * Return a range of members in a sorted set, by index. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param start the start + * @param stop the stop + * @return Long count of elements in the specified range. + */ + Long zrange(ValueStreamingChannel channel, K key, long start, long stop); + + /** + * Return a range of members with scores in a sorted set, by index. + * + * @param key the key + * @param start the start + * @param stop the stop + * @return List<V> array-reply list of elements in the specified range. + */ + List> zrangeWithScores(K key, long start, long stop); + + /** + * Stream over a range of members with scores in a sorted set, by index. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param start the start + * @param stop the stop + * @return Long count of elements in the specified range. + */ + Long zrangeWithScores(ScoredValueStreamingChannel channel, K key, long start, long stop); + + /** + * Return a range of members in a sorted set, by lexicographical range. + * + * @param key the key + * @param min min score + * @param max max score + * @return List<V> array-reply list of elements in the specified range. + * @deprecated Use {@link #zrangebylex(java.lang.Object, Range)} + */ + @Deprecated + List zrangebylex(K key, String min, String max); + + /** + * Return a range of members in a sorted set, by lexicographical range. + * + * @param key the key + * @param range the range + * @return List<V> array-reply list of elements in the specified range. + * @since 4.3 + */ + List zrangebylex(K key, Range range); + + /** + * Return a range of members in a sorted set, by lexicographical range. + * + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return List<V> array-reply list of elements in the specified range. + * @deprecated Use {@link #zrangebylex(java.lang.Object, Range)} + */ + @Deprecated + List zrangebylex(K key, String min, String max, long offset, long count); + + /** + * Return a range of members in a sorted set, by lexicographical range. + * + * @param key the key + * @param range the range + * @param limit the limit + * @return List<V> array-reply list of elements in the specified range. + * @since 4.3 + */ + List zrangebylex(K key, Range range, Limit limit); + + /** + * Return a range of members in a sorted set, by score. + * + * @param key the key + * @param min min score + * @param max max score + * @return List<V> array-reply list of elements in the specified score range. 
+ * @deprecated Use {@link #zrangebyscore(java.lang.Object, Range)} + */ + @Deprecated + List zrangebyscore(K key, double min, double max); + + /** + * Return a range of members in a sorted set, by score. + * + * @param key the key + * @param min min score + * @param max max score + * @return List<V> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrangebyscore(java.lang.Object, Range)} + */ + @Deprecated + List zrangebyscore(K key, String min, String max); + + /** + * Return a range of members in a sorted set, by score. + * + * @param key the key + * @param range the range + * @return List<V> array-reply list of elements in the specified score range. + * @since 4.3 + */ + List zrangebyscore(K key, Range range); + + /** + * Return a range of members in a sorted set, by score. + * + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return List<V> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrangebyscore(java.lang.Object, Range, Limit)} + */ + @Deprecated + List zrangebyscore(K key, double min, double max, long offset, long count); + + /** + * Return a range of members in a sorted set, by score. + * + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return List<V> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrangebyscore(java.lang.Object, Range, Limit)} + */ + @Deprecated + List zrangebyscore(K key, String min, String max, long offset, long count); + + /** + * Return a range of members in a sorted set, by score. + * + * @param key the key + * @param range the range + * @param limit the limit + * @return List<V> array-reply list of elements in the specified score range. + * @since 4.3 + */ + List zrangebyscore(K key, Range range, Limit limit); + + /** + * Stream over a range of members in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param min min score + * @param max max score + * @return Long count of elements in the specified score range. + * @deprecated Use {@link #zrangebyscore(ValueStreamingChannel, java.lang.Object, Range)} + */ + @Deprecated + Long zrangebyscore(ValueStreamingChannel channel, K key, double min, double max); + + /** + * Stream over a range of members in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param min min score + * @param max max score + * @return Long count of elements in the specified score range. + * @deprecated Use {@link #zrangebyscore(ValueStreamingChannel, java.lang.Object, Range)} + */ + @Deprecated + Long zrangebyscore(ValueStreamingChannel channel, K key, String min, String max); + + /** + * Stream over a range of members in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param range the range + * @return Long count of elements in the specified score range. + * @since 4.3 + */ + Long zrangebyscore(ValueStreamingChannel channel, K key, Range range); + + /** + * Stream over range of members in a sorted set, by score. 
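A minimal sketch of the streaming-channel variants above, which push each matching member to a callback instead of collecting a List; the key, scores, and score range are examples:

import io.lettuce.core.Range;
import io.lettuce.core.RedisClient;
import io.lettuce.core.api.sync.RedisCommands;
import io.lettuce.core.output.ValueStreamingChannel;

public class StreamingRangeSketch {
    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        RedisCommands<String, String> redis = client.connect().sync();

        redis.zadd("prices", 9.99, "book");
        redis.zadd("prices", 59.99, "keyboard");

        // The channel is invoked once per member; the command itself only returns the count.
        ValueStreamingChannel<String> channel = value -> System.out.println("member: " + value);
        Long count = redis.zrangebyscore(channel, "prices", Range.create(0, 100));

        client.shutdown();
    }
}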
+ * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return Long count of elements in the specified score range. + * @deprecated Use {@link #zrangebyscore(ValueStreamingChannel, java.lang.Object, Range, Limit limit)} + */ + @Deprecated + Long zrangebyscore(ValueStreamingChannel channel, K key, double min, double max, long offset, long count); + + /** + * Stream over a range of members in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return Long count of elements in the specified score range. + * @deprecated Use {@link #zrangebyscore(ValueStreamingChannel, java.lang.Object, Range, Limit limit)} + */ + @Deprecated + Long zrangebyscore(ValueStreamingChannel channel, K key, String min, String max, long offset, long count); + + /** + * Stream over a range of members in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param range the range + * @param limit the limit + * @return Long count of elements in the specified score range. + * @since 4.3 + */ + Long zrangebyscore(ValueStreamingChannel channel, K key, Range range, Limit limit); + + /** + * Return a range of members with score in a sorted set, by score. + * + * @param key the key + * @param min min score + * @param max max score + * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrangebyscoreWithScores(java.lang.Object, Range)} + */ + @Deprecated + List> zrangebyscoreWithScores(K key, double min, double max); + + /** + * Return a range of members with score in a sorted set, by score. + * + * @param key the key + * @param min min score + * @param max max score + * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrangebyscoreWithScores(java.lang.Object, Range)} + */ + @Deprecated + List> zrangebyscoreWithScores(K key, String min, String max); + + /** + * Return a range of members with score in a sorted set, by score. + * + * @param key the key + * @param range the range + * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. + * @since 4.3 + */ + List> zrangebyscoreWithScores(K key, Range range); + + /** + * Return a range of members with score in a sorted set, by score. + * + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrangebyscoreWithScores(java.lang.Object, Range, Limit limit)} + */ + @Deprecated + List> zrangebyscoreWithScores(K key, double min, double max, long offset, long count); + + /** + * Return a range of members with score in a sorted set, by score. + * + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. 
+ * @deprecated Use {@link #zrangebyscoreWithScores(java.lang.Object, Range, Limit)} + */ + @Deprecated + List> zrangebyscoreWithScores(K key, String min, String max, long offset, long count); + + /** + * Return a range of members with score in a sorted set, by score. + * + * @param key the key + * @param range the range + * @param limit the limit + * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. + * @since 4.3 + */ + List> zrangebyscoreWithScores(K key, Range range, Limit limit); + + /** + * Stream over a range of members with scores in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param min min score + * @param max max score + * @return Long count of elements in the specified score range. + * @deprecated Use {@link #zrangebyscoreWithScores(ScoredValueStreamingChannel, java.lang.Object, Range)} + */ + @Deprecated + Long zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double min, double max); + + /** + * Stream over a range of members with scores in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param min min score + * @param max max score + * @return Long count of elements in the specified score range. + * @deprecated Use {@link #zrangebyscoreWithScores(ScoredValueStreamingChannel, java.lang.Object, Range)} + */ + @Deprecated + Long zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String min, String max); + + /** + * Stream over a range of members with scores in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param range the range + * @return Long count of elements in the specified score range. + * @since 4.3 + */ + Long zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, Range range); + + /** + * Stream over a range of members with scores in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return Long count of elements in the specified score range. + * @deprecated Use {@link #zrangebyscoreWithScores(ScoredValueStreamingChannel, java.lang.Object, Range, Limit limit)} + */ + @Deprecated + Long zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double min, double max, long offset, long count); + + /** + * Stream over a range of members with scores in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return Long count of elements in the specified score range. + * @deprecated Use {@link #zrangebyscoreWithScores(ScoredValueStreamingChannel, java.lang.Object, Range, Limit limit)} + */ + @Deprecated + Long zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String min, String max, long offset, long count); + + /** + * Stream over a range of members with scores in a sorted set, by score. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param range the range + * @param limit the limit + * @return Long count of elements in the specified score range. 
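A minimal sketch of the Range and Limit arguments that the deprecated min/max overloads above point to; keys, scores, and paging values are examples:

import java.util.List;
import io.lettuce.core.Limit;
import io.lettuce.core.Range;
import io.lettuce.core.RedisClient;
import io.lettuce.core.ScoredValue;
import io.lettuce.core.api.sync.RedisCommands;

public class RangeLimitSketch {
    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        RedisCommands<String, String> redis = client.connect().sync();

        redis.zadd("scores", 10, "a");
        redis.zadd("scores", 20, "b");
        redis.zadd("scores", 30, "c");

        // Members with 10 <= score <= 30, returning at most two entries starting at offset 0.
        List<ScoredValue<String>> page = redis.zrangebyscoreWithScores("scores", Range.create(10, 30),
                Limit.create(0, 2));

        // An unbounded upper boundary replaces the old "+inf" string notation.
        List<String> fromTwenty = redis.zrangebyscore("scores",
                Range.from(Range.Boundary.including(20), Range.Boundary.unbounded()));

        client.shutdown();
    }
}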
+ * @since 4.3 + */ + Long zrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, Range range, Limit limit); + + /** + * Determine the index of a member in a sorted set. + * + * @param key the key + * @param member the member type: value + * @return Long integer-reply the rank of {@code member}. If {@code member} does not exist in the sorted set or {@code key} + * does not exist, + */ + Long zrank(K key, V member); + + /** + * Remove one or more members from a sorted set. + * + * @param key the key + * @param members the member type: value + * @return Long integer-reply specifically: + * + * The number of members removed from the sorted set, not including non existing members. + */ + Long zrem(K key, V... members); + + /** + * Remove all members in a sorted set between the given lexicographical range. + * + * @param key the key + * @param min min score + * @param max max score + * @return Long integer-reply the number of elements removed. + * @deprecated Use {@link #zremrangebylex(java.lang.Object, Range)} + */ + @Deprecated + Long zremrangebylex(K key, String min, String max); + + /** + * Remove all members in a sorted set between the given lexicographical range. + * + * @param key the key + * @param range the range + * @return Long integer-reply the number of elements removed. + * @since 4.3 + */ + Long zremrangebylex(K key, Range range); + + /** + * Remove all members in a sorted set within the given indexes. + * + * @param key the key + * @param start the start type: long + * @param stop the stop type: long + * @return Long integer-reply the number of elements removed. + */ + Long zremrangebyrank(K key, long start, long stop); + + /** + * Remove all members in a sorted set within the given scores. + * + * @param key the key + * @param min min score + * @param max max score + * @return Long integer-reply the number of elements removed. + * @deprecated Use {@link #zremrangebyscore(java.lang.Object, Range)} + */ + @Deprecated + Long zremrangebyscore(K key, double min, double max); + + /** + * Remove all members in a sorted set within the given scores. + * + * @param key the key + * @param min min score + * @param max max score + * @return Long integer-reply the number of elements removed. + * @deprecated Use {@link #zremrangebyscore(java.lang.Object, Range)} + */ + @Deprecated + Long zremrangebyscore(K key, String min, String max); + + /** + * Remove all members in a sorted set within the given scores. + * + * @param key the key + * @param range the range + * @return Long integer-reply the number of elements removed. + * @since 4.3 + */ + Long zremrangebyscore(K key, Range range); + + /** + * Return a range of members in a sorted set, by index, with scores ordered from high to low. + * + * @param key the key + * @param start the start + * @param stop the stop + * @return List<V> array-reply list of elements in the specified range. + */ + List zrevrange(K key, long start, long stop); + + /** + * Stream over a range of members in a sorted set, by index, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param start the start + * @param stop the stop + * @return Long count of elements in the specified range. + */ + Long zrevrange(ValueStreamingChannel channel, K key, long start, long stop); + + /** + * Return a range of members with scores in a sorted set, by index, with scores ordered from high to low. 
+ * + * @param key the key + * @param start the start + * @param stop the stop + * @return List<V> array-reply list of elements in the specified range. + */ + List> zrevrangeWithScores(K key, long start, long stop); + + /** + * Stream over a range of members with scores in a sorted set, by index, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param start the start + * @param stop the stop + * @return Long count of elements in the specified range. + */ + Long zrevrangeWithScores(ScoredValueStreamingChannel channel, K key, long start, long stop); + + /** + * Return a range of members in a sorted set, by lexicographical range ordered from high to low. + * + * @param key the key + * @param range the range + * @return List<V> array-reply list of elements in the specified score range. + * @since 4.3 + */ + List zrevrangebylex(K key, Range range); + + /** + * Return a range of members in a sorted set, by lexicographical range ordered from high to low. + * + * @param key the key + * @param range the range + * @param limit the limit + * @return List<V> array-reply list of elements in the specified score range. + * @since 4.3 + */ + List zrevrangebylex(K key, Range range, Limit limit); + + /** + * Return a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param min min score + * @param max max score + * @return List<V> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrevrangebyscore(java.lang.Object, Range)} + */ + @Deprecated + List zrevrangebyscore(K key, double max, double min); + + /** + * Return a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param min min score + * @param max max score + * @return List<V> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrevrangebyscore(java.lang.Object, Range)} + */ + @Deprecated + List zrevrangebyscore(K key, String max, String min); + + /** + * Return a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param range the range + * @return List<V> array-reply list of elements in the specified score range. + * @since 4.3 + */ + List zrevrangebyscore(K key, Range range); + + /** + * Return a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param max max score + * @param min min score + * @param offset the withscores + * @param count the null + * @return List<V> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrevrangebyscore(java.lang.Object, Range, Limit)} + */ + @Deprecated + List zrevrangebyscore(K key, double max, double min, long offset, long count); + + /** + * Return a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param max max score + * @param min min score + * @param offset the offset + * @param count the count + * @return List<V> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrevrangebyscore(java.lang.Object, Range, Limit)} + */ + @Deprecated + List zrevrangebyscore(K key, String max, String min, long offset, long count); + + /** + * Return a range of members in a sorted set, by score, with scores ordered from high to low. 
+ * + * @param key the key + * @param range the range + * @param limit the limit + * @return List<V> array-reply list of elements in the specified score range. + * @since 4.3 + */ + List zrevrangebyscore(K key, Range range, Limit limit); + + /** + * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param max max score + * @param min min score + * @return Long count of elements in the specified range. + * @deprecated Use {@link #zrevrangebyscore(java.lang.Object, Range)} + */ + @Deprecated + Long zrevrangebyscore(ValueStreamingChannel channel, K key, double max, double min); + + /** + * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param min min score + * @param max max score + * @return Long count of elements in the specified range. + * @deprecated Use {@link #zrevrangebyscore(java.lang.Object, Range)} + */ + @Deprecated + Long zrevrangebyscore(ValueStreamingChannel channel, K key, String max, String min); + + /** + * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param range the range + * @return Long count of elements in the specified range. + * @since 4.3 + */ + Long zrevrangebyscore(ValueStreamingChannel channel, K key, Range range); + + /** + * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return Long count of elements in the specified range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(java.lang.Object, Range, Limit)} + */ + @Deprecated + Long zrevrangebyscore(ValueStreamingChannel channel, K key, double max, double min, long offset, long count); + + /** + * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return Long count of elements in the specified range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(java.lang.Object, Range, Limit)} + */ + @Deprecated + Long zrevrangebyscore(ValueStreamingChannel channel, K key, String max, String min, long offset, long count); + + /** + * Stream over a range of members in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every value + * @param key the key + * @param range the range + * @param limit the limit + * @return Long count of elements in the specified range. + * @since 4.3 + */ + Long zrevrangebyscore(ValueStreamingChannel channel, K key, Range range, Limit limit); + + /** + * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param max max score + * @param min min score + * @return List<V> array-reply list of elements in the specified score range. 
+ * @deprecated Use {@link #zrevrangebyscoreWithScores(java.lang.Object, Range)} + */ + @Deprecated + List> zrevrangebyscoreWithScores(K key, double max, double min); + + /** + * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param max max score + * @param min min score + * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(java.lang.Object, Range)} + */ + @Deprecated + List> zrevrangebyscoreWithScores(K key, String max, String min); + + /** + * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param range the range + * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. + * @since 4.3 + */ + List> zrevrangebyscoreWithScores(K key, Range range); + + /** + * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param max max score + * @param min min score + * @param offset the offset + * @param count the count + * @return List<ScoredValue<V>> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(java.lang.Object, Range, Limit)} + */ + @Deprecated + List> zrevrangebyscoreWithScores(K key, double max, double min, long offset, long count); + + /** + * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param max max score + * @param min min score + * @param offset the offset + * @param count the count + * @return List<V> array-reply list of elements in the specified score range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(java.lang.Object, Range, Limit)} + */ + @Deprecated + List> zrevrangebyscoreWithScores(K key, String max, String min, long offset, long count); + + /** + * Return a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param key the key + * @param range the range + * @param limit limit + * @return List<V> array-reply list of elements in the specified score range. + * @since 4.3 + */ + List> zrevrangebyscoreWithScores(K key, Range range, Limit limit); + + /** + * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param min min score + * @param max max score + * @return Long count of elements in the specified range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(ScoredValueStreamingChannel, java.lang.Object, Range)} + */ + @Deprecated + Long zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double max, double min); + + /** + * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param min min score + * @param max max score + * @return Long count of elements in the specified range. 
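The streaming variants above push each element to a channel callback instead of buffering everything in a `List`. A hedged sketch, under the same assumptions as the previous example (local Redis, illustrative `leaderboard` key):

```java
import io.lettuce.core.Range;
import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.sync.RedisCommands;
import io.lettuce.core.output.ScoredValueStreamingChannel;

public class ScoredValueStreamingExample {

    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        try (StatefulRedisConnection<String, String> connection = client.connect()) {
            RedisCommands<String, String> redis = connection.sync();

            // Each scored value is handed to the channel as it arrives instead of being collected in a List.
            ScoredValueStreamingChannel<String> channel = scoredValue -> System.out
                    .println(scoredValue.getValue() + " -> " + scoredValue.getScore());

            Long streamed = redis.zrevrangebyscoreWithScores(channel, "leaderboard", Range.create(0, 50));
            System.out.println(streamed + " elements streamed");
        } finally {
            client.shutdown();
        }
    }
}
```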
+ * @deprecated Use {@link #zrevrangebyscoreWithScores(ScoredValueStreamingChannel, java.lang.Object, Range)} + */ + @Deprecated + Long zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String max, String min); + + /** + * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param range the range + * @return Long count of elements in the specified range. + */ + Long zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, Range range); + + /** + * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return Long count of elements in the specified range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(ScoredValueStreamingChannel, java.lang.Object, Range, Limit)} + */ + @Deprecated + Long zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, double max, double min, long offset, + long count); + + /** + * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param min min score + * @param max max score + * @param offset the offset + * @param count the count + * @return Long count of elements in the specified range. + * @deprecated Use {@link #zrevrangebyscoreWithScores(ScoredValueStreamingChannel, java.lang.Object, Range, Limit)} + */ + @Deprecated + Long zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, String max, String min, long offset, + long count); + + /** + * Stream over a range of members with scores in a sorted set, by score, with scores ordered from high to low. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param range the range + * @param limit the limit + * @return Long count of elements in the specified range. + * @since 4.3 + */ + Long zrevrangebyscoreWithScores(ScoredValueStreamingChannel channel, K key, Range range, Limit limit); + + /** + * Determine the index of a member in a sorted set, with scores ordered from high to low. + * + * @param key the key + * @param member the member type: value + * @return Long integer-reply the rank of {@code member}. If {@code member} does not exist in the sorted set or {@code key} + * does not exist, + */ + Long zrevrank(K key, V member); + + /** + * Incrementally iterate sorted sets elements and associated scores. + * + * @param key the key + * @return ScoredValueScanCursor<V> scan cursor. + */ + ScoredValueScanCursor zscan(K key); + + /** + * Incrementally iterate sorted sets elements and associated scores. + * + * @param key the key + * @param scanArgs scan arguments + * @return ScoredValueScanCursor<V> scan cursor. + */ + ScoredValueScanCursor zscan(K key, ScanArgs scanArgs); + + /** + * Incrementally iterate sorted sets elements and associated scores. + * + * @param key the key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @param scanArgs scan arguments + * @return ScoredValueScanCursor<V> scan cursor. 
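The `zscan` methods documented below iterate a sorted set incrementally via cursors, which avoids fetching very large sets in one reply. A minimal sketch of the cursor loop, assuming a local Redis and an illustrative `leaderboard` key:

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.ScanArgs;
import io.lettuce.core.ScoredValueScanCursor;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.sync.RedisCommands;

public class ZScanExample {

    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        try (StatefulRedisConnection<String, String> connection = client.connect()) {
            RedisCommands<String, String> redis = connection.sync();

            ScoredValueScanCursor<String> cursor = redis.zscan("leaderboard", ScanArgs.Builder.limit(100));
            while (true) {
                cursor.getValues().forEach(sv -> System.out.println(sv.getValue() + "=" + sv.getScore()));
                if (cursor.isFinished()) {
                    break;
                }
                // Resume the iteration from the cursor returned by the previous call.
                cursor = redis.zscan("leaderboard", cursor, ScanArgs.Builder.limit(100));
            }
        } finally {
            client.shutdown();
        }
    }
}
```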
+ */ + ScoredValueScanCursor zscan(K key, ScanCursor scanCursor, ScanArgs scanArgs); + + /** + * Incrementally iterate sorted sets elements and associated scores. + * + * @param key the key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @return ScoredValueScanCursor<V> scan cursor. + */ + ScoredValueScanCursor zscan(K key, ScanCursor scanCursor); + + /** + * Incrementally iterate sorted sets elements and associated scores. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @return StreamScanCursor scan cursor. + */ + StreamScanCursor zscan(ScoredValueStreamingChannel channel, K key); + + /** + * Incrementally iterate sorted sets elements and associated scores. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param scanArgs scan arguments + * @return StreamScanCursor scan cursor. + */ + StreamScanCursor zscan(ScoredValueStreamingChannel channel, K key, ScanArgs scanArgs); + + /** + * Incrementally iterate sorted sets elements and associated scores. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @param scanArgs scan arguments + * @return StreamScanCursor scan cursor. + */ + StreamScanCursor zscan(ScoredValueStreamingChannel channel, K key, ScanCursor scanCursor, ScanArgs scanArgs); + + /** + * Incrementally iterate sorted sets elements and associated scores. + * + * @param channel streaming channel that receives a call for every scored value + * @param key the key + * @param scanCursor cursor to resume from a previous scan, must not be {@literal null} + * @return StreamScanCursor scan cursor. + */ + StreamScanCursor zscan(ScoredValueStreamingChannel channel, K key, ScanCursor scanCursor); + + /** + * Get the score associated with the given member in a sorted set. + * + * @param key the key + * @param member the member type: value + * @return Double bulk-string-reply the score of {@code member} (a double precision floating point number), represented as + * string. + */ + Double zscore(K key, V member); + + /** + * Add multiple sorted sets and store the resulting sorted set in a new key. + * + * @param destination destination key + * @param keys source keys + * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. + */ + Long zunionstore(K destination, K... keys); + + /** + * Add multiple sorted sets and store the resulting sorted set in a new key. + * + * @param destination the destination + * @param storeArgs the storeArgs + * @param keys the keys + * @return Long integer-reply the number of elements in the resulting sorted set at {@code destination}. + */ + Long zunionstore(K destination, ZStoreArgs storeArgs, K... keys); +} diff --git a/src/main/templates/io/lettuce/core/api/RedisStreamCommands.java b/src/main/templates/io/lettuce/core/api/RedisStreamCommands.java new file mode 100644 index 0000000000..387f2aeb4b --- /dev/null +++ b/src/main/templates/io/lettuce/core/api/RedisStreamCommands.java @@ -0,0 +1,326 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at
+ *
+ * https://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package io.lettuce.core.api;
+
+import java.util.List;
+import java.util.Map;
+
+import io.lettuce.core.Limit;
+import io.lettuce.core.Range;
+import io.lettuce.core.StreamMessage;
+import io.lettuce.core.XClaimArgs;
+import io.lettuce.core.XReadArgs.StreamOffset;
+
+/**
+ * ${intent} for Streams.
+ *
+ * @param <K> Key type.
+ * @param <V> Value type.
+ * @author Mark Paluch
+ * @since 5.1
+ */
+public interface RedisStreamCommands<K, V> {
+
+    /**
+     * Acknowledge one or more messages as processed.
+     *
+     * @param key the stream key.
+     * @param group name of the consumer group.
+     * @param messageIds message Id's to acknowledge.
+     * @return simple-reply the length of acknowledged messages.
+     */
+    Long xack(K key, K group, String... messageIds);
+
+    /**
+     * Append a message to the stream {@code key}.
+     *
+     * @param key the stream key.
+     * @param body message body.
+     * @return simple-reply the message Id.
+     */
+    String xadd(K key, Map<K, V> body);
+
+    /**
+     * Append a message to the stream {@code key}.
+     *
+     * @param key the stream key.
+     * @param args
+     * @param body message body.
+     * @return simple-reply the message Id.
+     */
+    String xadd(K key, XAddArgs args, Map<K, V> body);
+
+    /**
+     * Append a message to the stream {@code key}.
+     *
+     * @param key the stream key.
+     * @param keysAndValues message body.
+     * @return simple-reply the message Id.
+     */
+    String xadd(K key, Object... keysAndValues);
+
+    /**
+     * Append a message to the stream {@code key}.
+     *
+     * @param key the stream key.
+     * @param args
+     * @param keysAndValues message body.
+     * @return simple-reply the message Id.
+     */
+    String xadd(K key, XAddArgs args, Object... keysAndValues);
+
+    /**
+     * Gets ownership of one or multiple messages in the Pending Entries List of a given stream consumer group.
+     *
+     * @param key the stream key.
+     * @param consumer consumer identified by group name and consumer key.
+     * @param minIdleTime
+     * @param messageIds message Id's to claim.
+     * @return simple-reply the {@link StreamMessage}
+     */
+    List<StreamMessage<K, V>> xclaim(K key, Consumer<K> consumer, long minIdleTime, String... messageIds);
+
+    /**
+     * Gets ownership of one or multiple messages in the Pending Entries List of a given stream consumer group.
+     *
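As an illustration of the `xadd` overloads above, a minimal sketch of appending entries to a stream with the synchronous API. It assumes Lettuce 5.1+, a local Redis 5+ server, and a made-up `events` stream with hypothetical field names; it is not part of the patch.

```java
import java.util.LinkedHashMap;
import java.util.Map;

import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.sync.RedisCommands;

public class XAddExample {

    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        try (StatefulRedisConnection<String, String> connection = client.connect()) {
            RedisCommands<String, String> redis = connection.sync();

            Map<String, String> body = new LinkedHashMap<>();
            body.put("sensor", "temperature");
            body.put("value", "21.5");

            // XADD returns the generated message id, e.g. "1526919030474-55".
            String messageId = redis.xadd("events", body);
            System.out.println("Appended " + messageId);

            // The varargs overload takes alternating field/value pairs.
            redis.xadd("events", "sensor", "humidity", "value", "40");
        } finally {
            client.shutdown();
        }
    }
}
```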

+     * Note that setting the {@code JUSTID} flag (calling this method with {@link XClaimArgs#justid()}) suppresses the message
+     * body and {@link StreamMessage#getBody()} is {@code null}.
+     *
+     * @param key the stream key.
+     * @param consumer consumer identified by group name and consumer key.
+     * @param args
+     * @param messageIds message Id's to claim.
+     * @return simple-reply the {@link StreamMessage}
+     */
+    List<StreamMessage<K, V>> xclaim(K key, Consumer<K> consumer, XClaimArgs args, String... messageIds);
+
+    /**
+     * Removes the specified entries from the stream. Returns the number of items deleted, that may be different from the number
+     * of IDs passed in case certain IDs do not exist.
+     *
+     * @param key the stream key.
+     * @param messageIds stream message Id's.
+     * @return simple-reply number of removed entries.
+     */
+    Long xdel(K key, String... messageIds);
+
+    /**
+     * Create a consumer group.
+     *
+     * @param streamOffset name of the stream containing the offset to set.
+     * @param group name of the consumer group.
+     * @return simple-reply {@literal true} if successful.
+     */
+    String xgroupCreate(StreamOffset<K> streamOffset, K group);
+
+    /**
+     * Create a consumer group.
+     *
+     * @param streamOffset name of the stream containing the offset to set.
+     * @param group name of the consumer group.
+     * @param args
+     * @return simple-reply {@literal true} if successful.
+     * @since 5.2
+     */
+    String xgroupCreate(StreamOffset<K> streamOffset, K group, XGroupCreateArgs args);
+
+    /**
+     * Delete a consumer from a consumer group.
+     *
+     * @param key the stream key.
+     * @param consumer consumer identified by group name and consumer key.
+     * @return simple-reply {@literal true} if successful.
+     */
+    Boolean xgroupDelconsumer(K key, Consumer<K> consumer);
+
+    /**
+     * Destroy a consumer group.
+     *
+     * @param key the stream key.
+     * @param group name of the consumer group.
+     * @return simple-reply {@literal true} if successful.
+     */
+    Boolean xgroupDestroy(K key, K group);
+
+    /**
+     * Set the current {@code group} id.
+     *
+     * @param streamOffset name of the stream containing the offset to set.
+     * @param group name of the consumer group.
+     * @return simple-reply OK
+     */
+    String xgroupSetid(StreamOffset<K> streamOffset, K group);
+
+    /**
+     * Retrieve information about the stream at {@code key}.
+     *
+     * @param key the stream key.
+     * @return List<Object> array-reply.
+     * @since 5.2
+     */
+    List<Object> xinfoStream(K key);
+
+    /**
+     * Retrieve information about the stream consumer groups at {@code key}.
+     *
+     * @param key the stream key.
+     * @return List<Object> array-reply.
+     * @since 5.2
+     */
+    List<Object> xinfoGroups(K key);
+
+    /**
+     * Retrieve information about consumer groups of group {@code group} and stream at {@code key}.
+     *
+     * @param key the stream key.
+     * @param group name of the consumer group.
+     * @return List<Object> array-reply.
+     * @since 5.2
+     */
+    List<Object> xinfoConsumers(K key, K group);
+
+    /**
+     * Get the length of a stream.
+     *
+     * @param key the stream key.
+     * @return simple-reply the length of the stream.
+     */
+    Long xlen(K key);
+
+    /**
+     * Read pending messages from a stream for a {@code group}.
+     *
+     * @param key the stream key.
+     * @param group name of the consumer group.
+     * @return List<Object> array-reply list pending entries.
+     */
+    List<Object> xpending(K key, K group);
+
+    /**
+     * Read pending messages from a stream within a specific {@link Range}.
+     *
+     * @param key the stream key.
+     * @param group name of the consumer group.
+     * @param range must not be {@literal null}.
+     * @param limit must not be {@literal null}.
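The consumer-group methods above are typically used together: create the group, read new messages on behalf of a named consumer, then acknowledge them. A hedged sketch under the usual assumptions (local Redis 5+, illustrative `events` stream, and a hypothetical `analytics` group that does not exist yet):

```java
import java.util.List;

import io.lettuce.core.Consumer;
import io.lettuce.core.RedisClient;
import io.lettuce.core.StreamMessage;
import io.lettuce.core.XReadArgs;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.sync.RedisCommands;

public class ConsumerGroupExample {

    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        try (StatefulRedisConnection<String, String> connection = client.connect()) {
            RedisCommands<String, String> redis = connection.sync();

            // Make sure the stream exists, then register a group starting at the beginning of the stream.
            redis.xadd("events", "sensor", "temperature", "value", "19.8");
            redis.xgroupCreate(XReadArgs.StreamOffset.from("events", "0"), "analytics");

            // Read messages never delivered to this group before and acknowledge them.
            List<StreamMessage<String, String>> messages = redis.xreadgroup(Consumer.from("analytics", "consumer-1"),
                    XReadArgs.StreamOffset.lastConsumed("events"));

            for (StreamMessage<String, String> message : messages) {
                System.out.println(message.getId() + " " + message.getBody());
                redis.xack("events", "analytics", message.getId());
            }
        } finally {
            client.shutdown();
        }
    }
}
```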
+ * @return List<Object> array-reply list with members of the resulting stream. + */ + List xpending(K key, K group, Range range, Limit limit); + + /** + * Read pending messages from a stream within a specific {@link Range}. + * + * @param key the stream key. + * @param consumer consumer identified by group name and consumer key. + * @param range must not be {@literal null}. + * @param limit must not be {@literal null}. + * @return List<Object> array-reply list with members of the resulting stream. + */ + List xpending(K key, Consumer consumer, Range range, Limit limit); + + /** + * Read messages from a stream within a specific {@link Range}. + * + * @param key the stream key. + * @param range must not be {@literal null}. + * @return List<StreamMessage> array-reply list with members of the resulting stream. + */ + List> xrange(K key, Range range); + + /** + * Read messages from a stream within a specific {@link Range} applying a {@link Limit}. + * + * @param key the stream key. + * @param range must not be {@literal null}. + * @param limit must not be {@literal null}. + * @return List<StreamMessage> array-reply list with members of the resulting stream. + */ + List> xrange(K key, Range range, Limit limit); + + /** + * Read messages from one or more {@link StreamOffset}s. + * + * @param streams the streams to read from. + * @return List<StreamMessage> array-reply list with members of the resulting stream. + */ + List> xread(StreamOffset... streams); + + /** + * Read messages from one or more {@link StreamOffset}s. + * + * @param args read arguments. + * @param streams the streams to read from. + * @return List<StreamMessage> array-reply list with members of the resulting stream. + */ + List> xread(XReadArgs args, StreamOffset... streams); + + /** + * Read messages from one or more {@link StreamOffset}s using a consumer group. + * + * @param consumer consumer/group. + * @param streams the streams to read from. + * @return List<StreamMessage> array-reply list with members of the resulting stream. + */ + List> xreadgroup(Consumer consumer, StreamOffset... streams); + + /** + * Read messages from one or more {@link StreamOffset}s using a consumer group. + * + * @param consumer consumer/group. + * @param args read arguments. + * @param streams the streams to read from. + * @return List<StreamMessage> array-reply list with members of the resulting stream. + */ + List> xreadgroup(Consumer consumer, XReadArgs args, StreamOffset... streams); + + /** + * Read messages from a stream within a specific {@link Range} in reverse order. + * + * @param key the stream key. + * @param range must not be {@literal null}. + * @return List<StreamMessage> array-reply list with members of the resulting stream. + */ + List> xrevrange(K key, Range range); + + /** + * Read messages from a stream within a specific {@link Range} applying a {@link Limit} in reverse order. + * + * @param key the stream key. + * @param range must not be {@literal null}. + * @param limit must not be {@literal null}. + * @return List<StreamMessage> array-reply list with members of the resulting stream. + */ + List> xrevrange(K key, Range range, Limit limit); + + /** + * Trims the stream to {@code count} elements. + * + * @param key the stream key. + * @param count length of the stream. + * @return simple-reply number of removed entries. + */ + Long xtrim(K key, long count); + + /** + * Trims the stream to {@code count} elements. + * + * @param key the stream key. 
+ * @param approximateTrimming {@literal true} to trim approximately using the {@code ~} flag. + * @param count length of the stream. + * @return simple-reply number of removed entries. + */ + Long xtrim(K key, boolean approximateTrimming, long count); +} diff --git a/src/main/templates/io/lettuce/core/api/RedisStringCommands.java b/src/main/templates/io/lettuce/core/api/RedisStringCommands.java new file mode 100644 index 0000000000..72d22fe338 --- /dev/null +++ b/src/main/templates/io/lettuce/core/api/RedisStringCommands.java @@ -0,0 +1,377 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api; + +import java.util.List; +import java.util.Map; + +import io.lettuce.core.output.KeyValueStreamingChannel; +import io.lettuce.core.output.ValueStreamingChannel; +import io.lettuce.core.BitFieldArgs; +import io.lettuce.core.KeyValue; +import io.lettuce.core.SetArgs; +import io.lettuce.core.Value; + +/** + * ${intent} for Strings. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.0 + */ +public interface RedisStringCommands { + + /** + * Append a value to a key. + * + * @param key the key + * @param value the value + * @return Long integer-reply the length of the string after the append operation. + */ + Long append(K key, V value); + + /** + * Count set bits in a string. + * + * @param key the key + * + * @return Long integer-reply The number of bits set to 1. + */ + Long bitcount(K key); + + /** + * Count set bits in a string. + * + * @param key the key + * @param start the start + * @param end the end + * + * @return Long integer-reply The number of bits set to 1. + */ + Long bitcount(K key, long start, long end); + + /** + * Execute {@code BITFIELD} with its subcommands. + * + * @param key the key + * @param bitFieldArgs the args containing subcommands, must not be {@literal null}. + * + * @return Long bulk-reply the results from the bitfield commands. + */ + List bitfield(K key, BitFieldArgs bitFieldArgs); + + /** + * Find first bit set or clear in a string. + * + * @param key the key + * @param state the state + * + * @return Long integer-reply The command returns the position of the first bit set to 1 or 0 according to the request. + * + * If we look for set bits (the bit argument is 1) and the string is empty or composed of just zero bytes, -1 is + * returned. + * + * If we look for clear bits (the bit argument is 0) and the string only contains bit set to 1, the function returns + * the first bit not part of the string on the right. So if the string is tree bytes set to the value 0xff the + * command {@code BITPOS key 0} will return 24, since up to bit 23 all the bits are 1. + * + * Basically the function consider the right of the string as padded with zeros if you look for clear bits and + * specify no range or the start argument only. + */ + Long bitpos(K key, boolean state); + + /** + * Find first bit set or clear in a string. 
+ * + * @param key the key + * @param state the bit type: long + * @param start the start type: long + * @return Long integer-reply The command returns the position of the first bit set to 1 or 0 according to the request. + * + * If we look for set bits (the bit argument is 1) and the string is empty or composed of just zero bytes, -1 is + * returned. + * + * If we look for clear bits (the bit argument is 0) and the string only contains bit set to 1, the function returns + * the first bit not part of the string on the right. So if the string is tree bytes set to the value 0xff the + * command {@code BITPOS key 0} will return 24, since up to bit 23 all the bits are 1. + * + * Basically the function consider the right of the string as padded with zeros if you look for clear bits and + * specify no range or the start argument only. + * @since 5.0.1 + */ + Long bitpos(K key, boolean state, long start); + + /** + * Find first bit set or clear in a string. + * + * @param key the key + * @param state the bit type: long + * @param start the start type: long + * @param end the end type: long + * @return Long integer-reply The command returns the position of the first bit set to 1 or 0 according to the request. + * + * If we look for set bits (the bit argument is 1) and the string is empty or composed of just zero bytes, -1 is + * returned. + * + * If we look for clear bits (the bit argument is 0) and the string only contains bit set to 1, the function returns + * the first bit not part of the string on the right. So if the string is tree bytes set to the value 0xff the + * command {@code BITPOS key 0} will return 24, since up to bit 23 all the bits are 1. + * + * Basically the function consider the right of the string as padded with zeros if you look for clear bits and + * specify no range or the start argument only. + * + * However this behavior changes if you are looking for clear bits and specify a range with both + * start and end. If no clear bit is found in the specified range, the function + * returns -1 as the user specified a clear range and there are no 0 bits in that range. + */ + Long bitpos(K key, boolean state, long start, long end); + + /** + * Perform bitwise AND between strings. + * + * @param destination result key of the operation + * @param keys operation input key names + * @return Long integer-reply The size of the string stored in the destination key, that is equal to the size of the longest + * input string. + */ + Long bitopAnd(K destination, K... keys); + + /** + * Perform bitwise NOT between strings. + * + * @param destination result key of the operation + * @param source operation input key names + * @return Long integer-reply The size of the string stored in the destination key, that is equal to the size of the longest + * input string. + */ + Long bitopNot(K destination, K source); + + /** + * Perform bitwise OR between strings. + * + * @param destination result key of the operation + * @param keys operation input key names + * @return Long integer-reply The size of the string stored in the destination key, that is equal to the size of the longest + * input string. + */ + Long bitopOr(K destination, K... keys); + + /** + * Perform bitwise XOR between strings. + * + * @param destination result key of the operation + * @param keys operation input key names + * @return Long integer-reply The size of the string stored in the destination key, that is equal to the size of the longest + * input string. + */ + Long bitopXor(K destination, K... 
keys);
+
+    /**
+     * Decrement the integer value of a key by one.
+     *
+     * @param key the key
+     * @return Long integer-reply the value of {@code key} after the decrement
+     */
+    Long decr(K key);
+
+    /**
+     * Decrement the integer value of a key by the given number.
+     *
+     * @param key the key
+     * @param amount the decrement type: long
+     * @return Long integer-reply the value of {@code key} after the decrement
+     */
+    Long decrby(K key, long amount);
+
+    /**
+     * Get the value of a key.
+     *
+     * @param key the key
+     * @return V bulk-string-reply the value of {@code key}, or {@literal null} when {@code key} does not exist.
+     */
+    V get(K key);
+
+    /**
+     * Returns the bit value at offset in the string value stored at key.
+     *
+     * @param key the key
+     * @param offset the offset type: long
+     * @return Long integer-reply the bit value stored at offset.
+     */
+    Long getbit(K key, long offset);
+
+    /**
+     * Get a substring of the string stored at a key.
+     *
+     * @param key the key
+     * @param start the start type: long
+     * @param end the end type: long
+     * @return V bulk-string-reply
+     */
+    V getrange(K key, long start, long end);
+
+    /**
+     * Set the string value of a key and return its old value.
+     *
+     * @param key the key
+     * @param value the value
+     * @return V bulk-string-reply the old value stored at {@code key}, or {@literal null} when {@code key} did not exist.
+     */
+    V getset(K key, V value);
+
+    /**
+     * Increment the integer value of a key by one.
+     *
+     * @param key the key
+     * @return Long integer-reply the value of {@code key} after the increment
+     */
+    Long incr(K key);
+
+    /**
+     * Increment the integer value of a key by the given amount.
+     *
+     * @param key the key
+     * @param amount the increment type: long
+     * @return Long integer-reply the value of {@code key} after the increment
+     */
+    Long incrby(K key, long amount);
+
+    /**
+     * Increment the float value of a key by the given amount.
+     *
+     * @param key the key
+     * @param amount the increment type: double
+     * @return Double bulk-string-reply the value of {@code key} after the increment.
+     */
+    Double incrbyfloat(K key, double amount);
+
+    /**
+     * Get the values of all the given keys.
+     *
+     * @param keys the keys
+     * @return List<V> array-reply list of values at the specified keys.
+     */
+    List<KeyValue<K, V>> mget(K... keys);
+
+    /**
+     * Stream over the values of all the given keys.
+     *
+     * @param channel the channel
+     * @param keys the keys
+     *
+     * @return Long array-reply list of values at the specified keys.
+     */
+    Long mget(KeyValueStreamingChannel<K, V> channel, K... keys);
+
+    /**
+     * Set multiple keys to multiple values.
+     *
+     * @param map the map of keys and values
+     * @return String simple-string-reply always {@code OK} since {@code MSET} can't fail.
+     */
+    String mset(Map<K, V> map);
+
+    /**
+     * Set multiple keys to multiple values, only if none of the keys exist.
+     *
+     * @param map the map of keys and values
+     * @return Boolean integer-reply specifically:
+     *
+     *         {@code 1} if all the keys were set. {@code 0} if no key was set (at least one key already existed).
+     */
+    Boolean msetnx(Map<K, V> map);
+
+    /**
+     * Set the string value of a key.
+     *
+     * @param key the key
+     * @param value the value
+     *
+     * @return String simple-string-reply {@code OK} if {@code SET} was executed correctly.
+     */
+    String set(K key, V value);
+
+    /**
+     * Set the string value of a key.
+     *
+     * @param key the key
+     * @param value the value
+     * @param setArgs the setArgs
+     *
+     * @return String simple-string-reply {@code OK} if {@code SET} was executed correctly.
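For the `SET` variants above, a minimal sketch of conditional set-with-expiry via `SetArgs`, plus a couple of the counter commands. It assumes a local Redis and made-up key names; it is illustrative only.

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.SetArgs;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.sync.RedisCommands;

public class StringCommandsExample {

    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        try (StatefulRedisConnection<String, String> connection = client.connect()) {
            RedisCommands<String, String> redis = connection.sync();

            // Only set the key if it does not exist yet, and let it expire after 30 seconds.
            String reply = redis.set("session:42", "active", SetArgs.Builder.nx().ex(30));
            System.out.println(reply); // "OK" when set, null when the key already existed

            redis.incrby("counter", 5);
            System.out.println(redis.get("counter")); // "5" on a fresh database
        } finally {
            client.shutdown();
        }
    }
}
```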
+ */ + String set(K key, V value, SetArgs setArgs); + + /** + * Sets or clears the bit at offset in the string value stored at key. + * + * @param key the key + * @param offset the offset type: long + * @param value the value type: string + * @return Long integer-reply the original bit value stored at offset. + */ + Long setbit(K key, long offset, int value); + + /** + * Set the value and expiration of a key. + * + * @param key the key + * @param seconds the seconds type: long + * @param value the value + * @return String simple-string-reply + */ + String setex(K key, long seconds, V value); + + /** + * Set the value and expiration in milliseconds of a key. + * + * @param key the key + * @param milliseconds the milliseconds type: long + * @param value the value + * @return String simple-string-reply + */ + String psetex(K key, long milliseconds, V value); + + /** + * Set the value of a key, only if the key does not exist. + * + * @param key the key + * @param value the value + * @return Boolean integer-reply specifically: + * + * {@code 1} if the key was set {@code 0} if the key was not set + */ + Boolean setnx(K key, V value); + + /** + * Overwrite part of a string at key starting at the specified offset. + * + * @param key the key + * @param offset the offset type: long + * @param value the value + * @return Long integer-reply the length of the string after it was modified by the command. + */ + Long setrange(K key, long offset, V value); + + /** + * Get the length of the value stored in a key. + * + * @param key the key + * @return Long integer-reply the length of the string at {@code key}, or {@code 0} when {@code key} does not exist. + */ + Long strlen(K key); +} diff --git a/src/main/templates/io/lettuce/core/api/RedisTransactionalCommands.java b/src/main/templates/io/lettuce/core/api/RedisTransactionalCommands.java new file mode 100644 index 0000000000..5926758ef3 --- /dev/null +++ b/src/main/templates/io/lettuce/core/api/RedisTransactionalCommands.java @@ -0,0 +1,70 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.api; + +import java.util.List; +import io.lettuce.core.TransactionResult; + +/** + * ${intent} for Transactions. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 4.0 + */ +public interface RedisTransactionalCommands { + + /** + * Discard all commands issued after MULTI. + * + * @return String simple-string-reply always {@code OK}. + */ + String discard(); + + /** + * Execute all commands issued after MULTI. + * + * @return List<Object> array-reply each element being the reply to each of the commands in the atomic transaction. + * + * When using {@code WATCH}, {@code EXEC} can return a {@link TransactionResult#wasDiscarded discarded + * TransactionResult}. + * @see TransactionResult#wasDiscarded + */ + TransactionResult exec(); + + /** + * Mark the start of a transaction block. + * + * @return String simple-string-reply always {@code OK}. 
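The transactional commands documented below are usually combined as WATCH, MULTI, queued commands, then EXEC. A hedged sketch with the synchronous API, assuming a local Redis and hypothetical `balance`/`savings` keys:

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.TransactionResult;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.sync.RedisCommands;

public class TransactionExample {

    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        try (StatefulRedisConnection<String, String> connection = client.connect()) {
            RedisCommands<String, String> redis = connection.sync();

            // Abort the transaction if "balance" is modified by another client before EXEC.
            redis.watch("balance");
            redis.multi();
            redis.incrby("balance", -10);
            redis.incrby("savings", 10);

            TransactionResult result = redis.exec();
            if (result.wasDiscarded()) {
                System.out.println("Transaction aborted because a watched key changed");
            } else {
                for (Object reply : result) {
                    System.out.println("Reply: " + reply);
                }
            }
        } finally {
            client.shutdown();
        }
    }
}
```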
+ */ + String multi(); + + /** + * Watch the given keys to determine execution of the MULTI/EXEC block. + * + * @param keys the key + * @return String simple-string-reply always {@code OK}. + */ + String watch(K... keys); + + /** + * Forget about all watched keys. + * + * @return String simple-string-reply always {@code OK}. + */ + String unwatch(); +} diff --git a/src/site/markdown/download.md.vm b/src/site/markdown/download.md.vm deleted file mode 100644 index 250140c85a..0000000000 --- a/src/site/markdown/download.md.vm +++ /dev/null @@ -1,75 +0,0 @@ -Download lettuce -====================== - - lettuce is distributed under the - [Apache License, Version 2.0](http://www.apache.org/licenses/LICENSE-2.0.txt). - - The checksum and signature are links to the originals on the main distribution server. - - -lettuce 3.x ------------ - -| | Download | Checksum | Signature | -| ----------------- |:-------------|:-------------|:-------------| -| lettuce (jar) | [lettuce-${lettuce3-release-version}.jar](http://search.maven.org/remotecontent?filepath=biz/paluch/redis/lettuce/${lettuce3-release-version}/lettuce-${lettuce3-release-version}.jar)|[lettuce-${lettuce3-release-version}.jar.md5](http://search.maven.org/remotecontent?filepath=biz/paluch/redis/lettuce/${lettuce3-release-version}/lettuce-${lettuce3-release-version}.jar.md5)|[lettuce-${lettuce3-release-version}.jar.asc](http://search.maven.org/remotecontent?filepath=biz/paluch/redis/lettuce/${lettuce3-release-version}/lettuce-${lettuce3-release-version}.jar.asc)| -| lettuce shaded (jar) | [lettuce-${lettuce3-release-version}-shaded.jar](http://search.maven.org/remotecontent?filepath=biz/paluch/redis/lettuce/${lettuce3-release-version}/lettuce-${lettuce3-release-version}-shaded.jar)|[lettuce-${lettuce3-release-version}-shaded.jar.md5](http://search.maven.org/remotecontent?filepath=biz/paluch/redis/lettuce/${lettuce3-release-version}/lettuce-${lettuce3-release-version}-shaded.jar.md5)|[lettuce-${lettuce3-release-version}-shaded.jar.asc](http://search.maven.org/remotecontent?filepath=biz/paluch/redis/lettuce/${lettuce3-release-version}/lettuce-${lettuce3-release-version}-shaded.jar.asc)| -| lettuce binary (zip) | [lettuce-${lettuce3-release-version}-bin.zip](https://github.com/mp911de/lettuce/releases/download/${lettuce3-release-version}/lettuce-${lettuce3-release-version}-bin.zip)|[lettuce-${lettuce3-release-version}-bin.zip.md5](https://github.com/mp911de/lettuce/releases/download/${lettuce3-release-version}/lettuce-${lettuce3-release-version}-bin.zip.md5)|[lettuce-${lettuce3-release-version}-bin.zip.asc](https://github.com/mp911de/lettuce/releases/download/${lettuce3-release-version}/lettuce-${lettuce3-release-version}-bin.zip.asc)| -| lettuce binary (tar.gz) | [lettuce-${lettuce3-release-version}-bin.tar.gz](https://github.com/mp911de/lettuce/releases/download/${lettuce3-release-version}/lettuce-${lettuce3-release-version}-bin.tar.gz)|[lettuce-${lettuce3-release-version}-bin.tar.gz.md5](https://github.com/mp911de/lettuce/releases/download/${lettuce3-release-version}/lettuce-${lettuce3-release-version}-bin.tar.gz.md5)|[lettuce-${lettuce3-release-version}-bin.tar.gz.asc](https://github.com/mp911de/lettuce/releases/download/${lettuce3-release-version}/lettuce-${lettuce3-release-version}-bin.tar.gz.asc)| - - -lettuce 4.x ------------ - -| | Download | Checksum | Signature | -| ----------------- |:-------------|:-------------|:-------------| -| lettuce (jar) | 
[lettuce-${lettuce-release-version}.jar](http://search.maven.org/remotecontent?filepath=biz/paluch/redis/lettuce/${lettuce-release-version}/lettuce-${lettuce-release-version}.jar)|[lettuce-${lettuce-release-version}.jar.md5](http://search.maven.org/remotecontent?filepath=biz/paluch/redis/lettuce/${lettuce-release-version}/lettuce-${lettuce-release-version}.jar.md5)|[lettuce-${lettuce-release-version}.jar.asc](http://search.maven.org/remotecontent?filepath=biz/paluch/redis/lettuce/${lettuce-release-version}/lettuce-${lettuce-release-version}.jar.asc)| -| lettuce shaded (jar) | [lettuce-${lettuce-release-version}-shaded.jar](http://search.maven.org/remotecontent?filepath=biz/paluch/redis/lettuce/${lettuce-release-version}/lettuce-${lettuce-release-version}-shaded.jar)|[lettuce-${lettuce-release-version}-shaded.jar.md5](http://search.maven.org/remotecontent?filepath=biz/paluch/redis/lettuce/${lettuce-release-version}/lettuce-${lettuce-release-version}-shaded.jar.md5)|[lettuce-${lettuce-release-version}-shaded.jar.asc](http://search.maven.org/remotecontent?filepath=biz/paluch/redis/lettuce/${lettuce-release-version}/lettuce-${lettuce-release-version}-shaded.jar.asc)| -| lettuce binary (zip) | [lettuce-${lettuce-release-version}-bin.zip](https://github.com/mp911de/lettuce/releases/download/${lettuce-release-version}/lettuce-${lettuce-release-version}-bin.zip)|[lettuce-${lettuce-release-version}-bin.zip.md5](https://github.com/mp911de/lettuce/releases/download/${lettuce-release-version}/lettuce-${lettuce-release-version}-bin.zip.md5)|[lettuce-${lettuce-release-version}-bin.zip.asc](https://github.com/mp911de/lettuce/releases/download/${lettuce-release-version}/lettuce-${lettuce-release-version}-bin.zip.asc)| -| lettuce binary (tar.gz) | [lettuce-${lettuce-release-version}-bin.tar.gz](https://github.com/mp911de/lettuce/releases/download/${lettuce-release-version}/lettuce-${lettuce-release-version}-bin.tar.gz)|[lettuce-${lettuce-release-version}-bin.tar.gz.md5](https://github.com/mp911de/lettuce/releases/download/${lettuce-release-version}/lettuce-${lettuce-release-version}-bin.tar.gz.md5)|[lettuce-${lettuce-release-version}-bin.tar.gz.asc](https://github.com/mp911de/lettuce/releases/download/${lettuce-release-version}/lettuce-${lettuce-release-version}-bin.tar.gz.asc)| - - - -It is essential that you verify the integrity of the downloaded files using the PGP or MD5 signatures. - -The PGP signatures can be verified using PGP or GPG. First download the -[KEYS](http://redis.paluch.biz/KEYS) as well as the asc signature file for the relevant distribution. -Make sure you get these files from Maven Central rather -than from a mirror. Then verify the signatures using - - - % gpg --import KEYS - % gpg --verify lettuce-${lettuce-release-version}.jar.asc - - -Alternatively, you can verify the MD5 signature on the files. A unix program called md5 or md5sum is included -in many unix distributions. - - -Previous Releases ------------------ - - All previous releases of lettuce can be found in - [Maven Central](http://search.maven.org/#search%7Cgav%7C1%7Cg%3A%22biz.paluch.redis%22%20AND%20a%3A%22lettuce%22). - - -Using lettuce on your classpath ------------------ - - To use lettuce in your application make sure that the jars are in the application's classpath. Add - the dependencies listed below to your classpath. 
- - lettuce-${lettuce-release-version}.jar - netty-buffer-${netty-version}.jar - netty-codec-${netty-version}.Final.jar - netty-common-${netty-version}.Final.jar - netty-transport-${netty-version}.jar - netty-transport-native-epoll-${netty-version}.jar - netty-handler-${netty-version}.jar - guava-17.0.jar - rxjava-1.1.9.jar - LatencyUtils-2.0.3.jar - HdrHistogram-2.1.8.jar - commons-pool2-2.4.2.jar - -You can do this from the command line or a manifest file. diff --git a/src/site/markdown/index.md.vm b/src/site/markdown/index.md.vm deleted file mode 100644 index 25a4d5792f..0000000000 --- a/src/site/markdown/index.md.vm +++ /dev/null @@ -1,117 +0,0 @@ -Introduction -============= - -Lettuce is a scalable thread-safe Redis client for synchronous, -asynchronous and reactive usage. Multiple threads may share one connection if they avoid blocking and transactional -operations such as `BLPOP` and `MULTI`/`EXEC`. -lettuce is built with [netty](https://github.com/netty/netty). -Supports advanced Redis features such as Sentinel, Cluster, Pipelining, Auto-Reconnect and Redis data models. - - -This version of lettuce has been tested against Redis and 3.0. - -* lettuce 3.x works with Java 6, 7 and 8, lettuce 4.x requires Java 8 -* [synchronous](https://github.com/mp911de/lettuce/wiki/Basic-usage), [asynchronous](https://github.com/mp911de/lettuce/wiki/Asynchronous-API-%284.0%29) and [reactive](https://github.com/mp911de/lettuce/wiki/Reactive-API-%284.0%29) usage -* [Redis Sentinel](https://github.com/mp911de/lettuce/wiki/Redis-Sentinel) -* [Redis Cluster](https://github.com/mp911de/lettuce/wiki/Redis-Cluster) -* [SSL](https://github.com/mp911de/lettuce/wiki/SSL-Connections) and [Unix Domain Socket](https://github.com/mp911de/lettuce/wiki/Unix-Domain-Sockets) connections -* [Streaming API](https://github.com/mp911de/lettuce/wiki/Streaming-API) -* [CDI](https://github.com/mp911de/lettuce/wiki/CDI-Support) and [Spring](https://github.com/mp911de/lettuce/wiki/Spring-Support) integration -* [Codecs](https://github.com/mp911de/lettuce/wiki/Codecs) (for UTF8/bit/JSON etc. representation of your data) -* multiple [Command Interfaces](https://github.com/mp911de/lettuce/wiki/Command-Interfaces-%284.0%29) - -See the [Wiki](https://github.com/mp911de/lettuce/wiki) for more docs. - -I'm developing and maintaining actively the fork of https://github.com/wg/lettuce - -[![Join the chat at https://gitter.im/mp911de/lettuce](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/mp911de/lettuce) [![Build Status](https://travis-ci.org/mp911de/lettuce.svg)](https://travis-ci.org/mp911de/lettuce) [![Coverage Status](https://img.shields.io/coveralls/mp911de/lettuce.svg)](https://coveralls.io/r/mp911de/lettuce) [![Maven Central](https://maven-badges.herokuapp.com/maven-central/biz.paluch.redis/lettuce/badge.svg)](https://maven-badges.herokuapp.com/maven-central/biz.paluch.redis/lettuce) - -- - - - -3.x and 4.x ------------- -lettuce is available in two major versions. The 3.x stream and the 4.x stream. Both streams are maintained. - -After this release, the 4.x branch will be promoted to the default branch. 
-Following rules should give a guidance for the stream in which a particular change is done: - -**Changes affecting both streams** - -* New Redis commands (such as HSTRLEN) -* Bugfixes - -**Changes for the 4.x stream only** - -* New Redis paradigms -* Enriching the API (such as multi-key command execution in the Cluster API) -* Technical improvements to the client (such as the Reactive API) - -The 3.x stream will be maintained at least until end of 2016. - -- - - - -How to get -------------- - -``` - - biz.paluch.redis - lettuce - ${lettuce-release-version} - -``` - -Shaded JAR-File (packaged dependencies and relocated to the `com.lambdaworks` package to prevent version conflicts) - -``` - - biz.paluch.redis - lettuce - ${lettuce-release-version} - shaded - - - - io.reactivex - rxjava - - - org.latencyutils - LatencyUtils - - - io.netty - netty-common - - - io.netty - netty-transport - - - io.netty - netty-handler - - - io.netty - netty-codec - - - com.google.guava - guava - - - io.netty - netty-transport-native-epoll - - - org.apache.commons - commons-pool2 - - - -``` - -All versions: [Maven Central](http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22biz.paluch.redis%22%20AND%20a%3A%22lettuce%22) - -Snapshots: [Sonatype OSS Repository](https://oss.sonatype.org/#nexus-search;gav~biz.paluch.redis~lettuce~~~) - diff --git a/src/site/site.xml b/src/site/site.xml deleted file mode 100644 index ad9fce6483..0000000000 --- a/src/site/site.xml +++ /dev/null @@ -1,53 +0,0 @@ - - - lettuce - Advanced Java Redis client - http://redis.paluch.biz/ - - - - - - - - - - - - - - - - - - - - - - - - true - - mp911de/lettuce - right - black - - - mp911de - true - true - - - piwik.paluch.biz - 6 - - - - - - org.apache.maven.skins - maven-fluido-skin - 1.3.1 - - diff --git a/src/test/bash/create_certificates.sh b/src/test/bash/create_certificates.sh new file mode 100755 index 0000000000..28db1eb517 --- /dev/null +++ b/src/test/bash/create_certificates.sh @@ -0,0 +1,131 @@ +#!/bin/bash + +DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" +CA_DIR=work/ca +TRUSTSTORE_FILE=work/truststore.jks +KEYSTORE_FILE=work/keystore.jks + +if [[ -d work/ca ]] ; then + rm -Rf ${CA_DIR} +fi + +if [[ -f ${TRUSTSTORE_FILE} ]] ; then + rm -Rf ${TRUSTSTORE_FILE} +fi + +if [[ -f ${KEYSTORE_FILE} ]] ; then + rm -Rf ${KEYSTORE_FILE} +fi + +if [ ! -x "$(which openssl)" ] ; then + echo "[ERROR] No openssl in PATH" + exit 1 +fi + +KEYTOOL=keytool + +if [ ! -x "${KEYTOOL}" ] ; then + KEYTOOL=${JAVA_HOME}/bin/keytool +fi + +if [ ! 
-x "${KEYTOOL}" ] ; then + echo "[ERROR] No keytool in PATH/JAVA_HOME" + exit 1 +fi + +mkdir -p ${CA_DIR}/private ${CA_DIR}/certs ${CA_DIR}/crl ${CA_DIR}/csr ${CA_DIR}/newcerts ${CA_DIR}/intermediate + +echo "[INFO] Generating CA private key" +# Less bits = less secure = faster to generate +openssl genrsa -passout pass:changeit -aes256 -out ${CA_DIR}/private/ca.key.pem 2048 + +chmod 400 ${CA_DIR}/private/ca.key.pem + +echo "[INFO] Generating CA certificate" +openssl req -config ${DIR}/openssl.cnf \ + -key ${CA_DIR}/private/ca.key.pem \ + -new -x509 -days 7300 -sha256 -extensions v3_ca \ + -out ${CA_DIR}/certs/ca.cert.pem \ + -passin pass:changeit \ + -subj "/C=NN/ST=Unknown/L=Unknown/O=lettuce/CN=CA Certificate" + +echo "[INFO] Prepare CA database" +echo 1000 > ${CA_DIR}/serial +touch ${CA_DIR}/index.txt + +function generateKey { + + host=$1 + ip=$2 + + echo "[INFO] Generating server private key" + openssl genrsa -aes256 \ + -passout pass:changeit \ + -out ${CA_DIR}/private/${host}.key.pem 2048 + + openssl rsa -in ${CA_DIR}/private/${host}.key.pem \ + -out ${CA_DIR}/private/${host}.decrypted.key.pem \ + -passin pass:changeit + + chmod 400 ${CA_DIR}/private/${host}.key.pem + chmod 400 ${CA_DIR}/private/${host}.decrypted.key.pem + + echo "[INFO] Generating server certificate request" + openssl req -config <(cat ${DIR}/openssl.cnf \ + <(printf "\n[SAN]\nsubjectAltName=DNS:${host},IP:${ip}")) \ + -reqexts SAN \ + -key ${CA_DIR}/private/${host}.key.pem \ + -passin pass:changeit \ + -new -sha256 -out ${CA_DIR}/csr/${host}.csr.pem \ + -subj "/C=NN/ST=Unknown/L=Unknown/O=lettuce/CN=${host}" + + echo "[INFO] Signing certificate request" + openssl ca -config ${DIR}/openssl.cnf \ + -extensions server_cert -days 375 -notext -md sha256 \ + -passin pass:changeit \ + -batch \ + -in ${CA_DIR}/csr/${host}.csr.pem \ + -out ${CA_DIR}/certs/${host}.cert.pem +} + +generateKey "localhost" "127.0.0.1" +generateKey "foo-host" "1.2.3.4" + +echo "[INFO] Generating client auth private key" +openssl genrsa -aes256 \ + -passout pass:changeit \ + -out ${CA_DIR}/private/client.key.pem 2048 + +openssl rsa -in ${CA_DIR}/private/client.key.pem \ + -out ${CA_DIR}/private/client.decrypted.key.pem \ + -passin pass:changeit + +chmod 400 ${CA_DIR}/private/client.key.pem + +echo "[INFO] Generating client certificate request" +openssl req -config ${DIR}/openssl.cnf \ + -key ${CA_DIR}/private/client.key.pem \ + -passin pass:changeit \ + -new -sha256 -out ${CA_DIR}/csr/client.csr.pem \ + -subj "/C=NN/ST=Unknown/L=Unknown/O=lettuce/CN=client" + +echo "[INFO] Signing certificate request" +openssl ca -config ${DIR}/openssl.cnf \ + -extensions usr_cert -days 375 -notext -md sha256 \ + -passin pass:changeit \ + -batch \ + -in ${CA_DIR}/csr/client.csr.pem \ + -out ${CA_DIR}/certs/client.cert.pem + +echo "[INFO] Creating PKCS12 file with client certificate" +openssl pkcs12 -export -clcerts \ + -in ${CA_DIR}/certs/client.cert.pem \ + -inkey ${CA_DIR}/private/client.decrypted.key.pem \ + -passout pass:changeit \ + -out ${CA_DIR}/client.p12 + +${KEYTOOL} -importcert -keystore ${TRUSTSTORE_FILE} -file ${CA_DIR}/certs/ca.cert.pem -noprompt -storepass changeit +${KEYTOOL} -importkeystore \ + -srckeystore ${CA_DIR}/client.p12 -srcstoretype PKCS12 -srcstorepass changeit\ + -destkeystore ${KEYSTORE_FILE} -deststoretype PKCS12 \ + -noprompt -storepass changeit diff --git a/src/test/bash/openssl.cnf b/src/test/bash/openssl.cnf new file mode 100644 index 0000000000..721bd488db --- /dev/null +++ b/src/test/bash/openssl.cnf @@ -0,0 +1,107 @@ +[ ca ] +# 
`man ca` +default_ca = CA_default + +[ CA_default ] +# Directory and file locations. +dir = work/ca +certs = $dir/certs +crl_dir = $dir/crl +new_certs_dir = $dir/newcerts +database = $dir/index.txt +serial = $dir/serial +RANDFILE = $dir/private/.rand + +# The root key and root certificate. +private_key = $dir/private/ca.key.pem +certificate = $dir/certs/ca.cert.pem + +# For certificate revocation lists. +crlnumber = $dir/crlnumber +crl = $dir/crl/ca.crl.pem +crl_extensions = crl_ext +default_crl_days = 30 + +# SHA-1 is deprecated, so use SHA-2 instead. +default_md = sha256 + +name_opt = ca_default +cert_opt = ca_default +default_days = 375 +preserve = no +policy = policy_strict +copy_extensions = copy + +[ policy_strict ] +# The root CA should only sign intermediate certificates that match. +# See the POLICY FORMAT section of `man ca`. +countryName = match +stateOrProvinceName = match +organizationName = match +organizationalUnitName = optional +commonName = supplied +emailAddress = optional + +[ req ] +# Options for the `req` tool (`man req`). +default_bits = 2048 +distinguished_name = req_distinguished_name +string_mask = utf8only + +# SHA-1 is deprecated, so use SHA-2 instead. +default_md = sha256 + +# Extension to add when the -x509 option is used. +x509_extensions = v3_ca + +[ req_distinguished_name ] +# See . +countryName = Country Name (2 letter code) +stateOrProvinceName = State or Province Name +localityName = Locality Name +0.organizationName = Organization Name +organizationalUnitName = Organizational Unit Name +commonName = Common Name +emailAddress = Email Address + +# Optionally, specify some defaults. +countryName_default = NN +stateOrProvinceName_default = Vault Test +localityName_default = +0.organizationName_default = spring-cloud-vault-config +#organizationalUnitName_default = +#emailAddress_default = info@spring-cloud-vault-config.dummy + +[ v3_ca ] +# Extensions for a typical CA (`man x509v3_config`). +subjectKeyIdentifier = hash +authorityKeyIdentifier = keyid:always,issuer +basicConstraints = critical, CA:true +keyUsage = critical, digitalSignature, cRLSign, keyCertSign + +[ v3_intermediate_ca ] +# Extensions for a typical intermediate CA (`man x509v3_config`). +subjectKeyIdentifier = hash +authorityKeyIdentifier = keyid:always,issuer +basicConstraints = critical, CA:true, pathlen:0 +keyUsage = critical, digitalSignature, cRLSign, keyCertSign + +[ usr_cert ] +# Extensions for client certificates (`man x509v3_config`). +basicConstraints = CA:FALSE +nsCertType = client, email +nsComment = "OpenSSL Generated Client Certificate" +subjectKeyIdentifier = hash +authorityKeyIdentifier = keyid,issuer +keyUsage = critical, nonRepudiation, digitalSignature, keyEncipherment +extendedKeyUsage = clientAuth, emailProtection + +[ server_cert ] +# Extensions for server certificates (`man x509v3_config`). +basicConstraints = CA:FALSE +nsCertType = server +nsComment = "OpenSSL Generated Server Certificate" +subjectKeyIdentifier = hash +authorityKeyIdentifier = keyid,issuer:always +keyUsage = critical, digitalSignature, keyEncipherment +extendedKeyUsage = serverAuth diff --git a/src/test/java/biz/paluch/redis/extensibility/LettuceGeoDemo.java b/src/test/java/biz/paluch/redis/extensibility/LettuceGeoDemo.java index 6e7f515d0b..8a026da45d 100644 --- a/src/test/java/biz/paluch/redis/extensibility/LettuceGeoDemo.java +++ b/src/test/java/biz/paluch/redis/extensibility/LettuceGeoDemo.java @@ -1,14 +1,32 @@ +/* + * Copyright 2011-2020 the original author or authors. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ package biz.paluch.redis.extensibility; -import java.util.*; -import com.lambdaworks.redis.*; +import java.util.List; +import java.util.Set; + +import io.lettuce.core.*; +import io.lettuce.core.api.sync.RedisCommands; public class LettuceGeoDemo { public static void main(String[] args) { RedisClient redisClient = RedisClient.create(RedisURI.Builder.redis("localhost", 6379).build()); - RedisConnection redis = (RedisConnection) redisClient.connect(); + RedisCommands redis = redisClient.connect().sync(); String key = "my-geo-set"; redis.geoadd(key, 8.6638775, 49.5282537, "Weinheim", 8.3796281, 48.9978127, "Office tower", 8.665351, 49.553302, @@ -32,16 +50,16 @@ public static void main(String[] args) { // ordered descending by distance and containing distance/coordinates GeoWithin weinheim = georadiusWithArgs.get(0); - System.out.println("Member: " + weinheim.member); - System.out.println("Geo hash: " + weinheim.geohash); - System.out.println("Distance: " + weinheim.distance); - System.out.println("Coordinates: " + weinheim.coordinates.x + "/" + weinheim.coordinates.y); + System.out.println("Member: " + weinheim.getMember()); + System.out.println("Geo hash: " + weinheim.getGeohash()); + System.out.println("Distance: " + weinheim.getDistance()); + System.out.println("Coordinates: " + weinheim.getCoordinates().getX() + "/" + weinheim.getCoordinates().getY()); List geopos = redis.geopos(key, "Weinheim", "Train station"); GeoCoordinates weinheimGeopos = geopos.get(0); - System.out.println("Coordinates: " + weinheimGeopos.x + "/" + weinheimGeopos.y); + System.out.println("Coordinates: " + weinheimGeopos.getX() + "/" + weinheimGeopos.getY()); - redis.close(); + redis.getStatefulConnection().close(); redisClient.shutdown(); } } diff --git a/src/test/java/biz/paluch/redis/extensibility/MyExtendedRedisClient.java b/src/test/java/biz/paluch/redis/extensibility/MyExtendedRedisClient.java index f1c2014db6..345fbbd1c6 100644 --- a/src/test/java/biz/paluch/redis/extensibility/MyExtendedRedisClient.java +++ b/src/test/java/biz/paluch/redis/extensibility/MyExtendedRedisClient.java @@ -1,39 +1,50 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ package biz.paluch.redis.extensibility; -import com.lambdaworks.redis.RedisClient; -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.codec.RedisCodec; -import com.lambdaworks.redis.pubsub.PubSubCommandHandler; -import com.lambdaworks.redis.pubsub.StatefulRedisPubSubConnectionImpl; +import java.time.Duration; import javax.enterprise.inject.Alternative; -import java.util.concurrent.TimeUnit; + +import io.lettuce.core.RedisChannelWriter; +import io.lettuce.core.RedisClient; +import io.lettuce.core.RedisURI; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.pubsub.PubSubEndpoint; +import io.lettuce.core.pubsub.StatefulRedisPubSubConnectionImpl; +import io.lettuce.core.resource.ClientResources; /** * Demo code for extending a RedisClient. - * + * * @author Mark Paluch */ @Alternative public class MyExtendedRedisClient extends RedisClient { - public MyExtendedRedisClient() { - } - public MyExtendedRedisClient(String host) { - super(host); + public MyExtendedRedisClient(ClientResources clientResources, RedisURI redisURI) { + super(clientResources, redisURI); } - public MyExtendedRedisClient(String host, int port) { - super(host, port); - } - - public MyExtendedRedisClient(RedisURI redisURI) { - super(redisURI); + public MyExtendedRedisClient() { } @Override - protected StatefulRedisPubSubConnectionImpl newStatefulRedisPubSubConnection( - PubSubCommandHandler commandHandler, RedisCodec codec, long timeout, TimeUnit unit) { - return new MyPubSubConnection<>(commandHandler, codec, timeout, unit); + protected StatefulRedisPubSubConnectionImpl newStatefulRedisPubSubConnection(PubSubEndpoint endpoint, + RedisChannelWriter channelWriter, RedisCodec codec, Duration timeout) { + return new MyPubSubConnection<>(endpoint, channelWriter, codec, timeout); } } diff --git a/src/test/java/biz/paluch/redis/extensibility/MyExtendedRedisClientTest.java b/src/test/java/biz/paluch/redis/extensibility/MyExtendedRedisClientTest.java index a2d1db7ae7..a596784d1b 100644 --- a/src/test/java/biz/paluch/redis/extensibility/MyExtendedRedisClientTest.java +++ b/src/test/java/biz/paluch/redis/extensibility/MyExtendedRedisClientTest.java @@ -1,50 +1,68 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ package biz.paluch.redis.extensibility; -import static org.assertj.core.api.Assertions.*; +import static org.assertj.core.api.Assertions.assertThat; -import org.junit.AfterClass; -import org.junit.BeforeClass; -import org.junit.Test; +import org.junit.jupiter.api.AfterAll; +import org.junit.jupiter.api.BeforeAll; +import org.junit.jupiter.api.Test; -import com.lambdaworks.redis.FastShutdown; -import com.lambdaworks.redis.RedisConnection; -import com.lambdaworks.redis.TestSettings; -import com.lambdaworks.redis.pubsub.RedisPubSubAsyncCommandsImpl; -import com.lambdaworks.redis.pubsub.api.async.RedisPubSubAsyncCommands; +import io.lettuce.core.RedisURI; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.pubsub.RedisPubSubAsyncCommandsImpl; +import io.lettuce.core.pubsub.StatefulRedisPubSubConnection; +import io.lettuce.core.pubsub.api.async.RedisPubSubAsyncCommands; +import io.lettuce.test.resource.FastShutdown; +import io.lettuce.test.settings.TestSettings; /** * Test for override/extensability of RedisClient */ -public class MyExtendedRedisClientTest { - public static final String host = TestSettings.host(); - public static final int port = TestSettings.port(); +class MyExtendedRedisClientTest { + private static final String host = TestSettings.host(); + private static final int port = TestSettings.port(); - protected static MyExtendedRedisClient client; - protected RedisConnection redis; + private static MyExtendedRedisClient client; + protected RedisCommands redis; protected String key = "key"; protected String value = "value"; - @BeforeClass - public static void setupClient() { + @BeforeAll + static void setupClient() { client = getRedisClient(); } - protected static MyExtendedRedisClient getRedisClient() { - return new MyExtendedRedisClient(host, port); + static MyExtendedRedisClient getRedisClient() { + return new MyExtendedRedisClient(null, RedisURI.create(host, port)); } - @AfterClass - public static void shutdownClient() { + @AfterAll + static void shutdownClient() { FastShutdown.shutdown(client); } @Test - public void testPubsub() throws Exception { - RedisPubSubAsyncCommands connection = client.connectPubSub().async(); - assertThat(connection).isInstanceOf(RedisPubSubAsyncCommandsImpl.class); - assertThat(connection.getStatefulConnection()).isInstanceOf(MyPubSubConnection.class); - connection.set("key", "value").get(); + void testPubsub() throws Exception { + StatefulRedisPubSubConnection connection = client + .connectPubSub(); + RedisPubSubAsyncCommands commands = connection.async(); + assertThat(commands).isInstanceOf(RedisPubSubAsyncCommandsImpl.class); + assertThat(commands.getStatefulConnection()).isInstanceOf(MyPubSubConnection.class); + commands.set("key", "value").get(); connection.close(); - } } diff --git a/src/test/java/biz/paluch/redis/extensibility/MyPubSubConnection.java b/src/test/java/biz/paluch/redis/extensibility/MyPubSubConnection.java index 77d1e9be71..7187a1df09 100644 --- a/src/test/java/biz/paluch/redis/extensibility/MyPubSubConnection.java +++ b/src/test/java/biz/paluch/redis/extensibility/MyPubSubConnection.java @@ -1,67 +1,59 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ package biz.paluch.redis.extensibility; -import java.util.concurrent.TimeUnit; +import java.time.Duration; import java.util.concurrent.atomic.AtomicInteger; -import com.lambdaworks.redis.RedisChannelWriter; -import com.lambdaworks.redis.codec.RedisCodec; -import com.lambdaworks.redis.protocol.CommandType; -import com.lambdaworks.redis.protocol.RedisCommand; -import com.lambdaworks.redis.pubsub.PubSubOutput; -import com.lambdaworks.redis.pubsub.StatefulRedisPubSubConnectionImpl; +import io.lettuce.core.RedisChannelWriter; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.protocol.CommandType; +import io.lettuce.core.protocol.RedisCommand; +import io.lettuce.core.pubsub.PubSubEndpoint; +import io.lettuce.core.pubsub.StatefulRedisPubSubConnectionImpl; /** * Demo code for extending a RedisPubSubConnectionImpl. - * + * * @author Mark Paluch */ @SuppressWarnings("unchecked") -public class MyPubSubConnection extends StatefulRedisPubSubConnectionImpl { +class MyPubSubConnection extends StatefulRedisPubSubConnectionImpl { private AtomicInteger subscriptions = new AtomicInteger(); /** * Initialize a new connection. - * - * @param writer + * + * @param endpoint + * @param writer the channel writer * @param codec Codec used to encode/decode keys and values. - * @param timeout Maximum time to wait for a responses. - * @param unit Unit of time for the timeout. + * @param timeout Maximum time to wait for a response. 
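// Editor's aside: the class in this hunk counts SUBSCRIBE commands by overriding dispatch(...)
// (see the override further down). A minimal driver for that behavior could look like the
// sketch below; channel names, host and port are illustrative assumptions, and passing a null
// ClientResources mirrors the migrated test in this patch.
import io.lettuce.core.RedisURI;
import io.lettuce.core.pubsub.StatefulRedisPubSubConnection;
import io.lettuce.core.pubsub.api.sync.RedisPubSubCommands;

public class DispatchCountingDriver {

    public static void main(String[] args) {
        MyExtendedRedisClient client = new MyExtendedRedisClient(null, RedisURI.create("localhost", 6379));

        StatefulRedisPubSubConnection<String, String> connection = client.connectPubSub();
        RedisPubSubCommands<String, String> pubSub = connection.sync();

        // Each SUBSCRIBE is routed through MyPubSubConnection.dispatch(), which increments
        // the internal counter before delegating to the superclass.
        pubSub.subscribe("news");
        pubSub.subscribe("alerts");

        connection.close();
        client.shutdown();
    }
}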
*/ - public MyPubSubConnection(RedisChannelWriter writer, RedisCodec codec, long timeout, TimeUnit unit) { - super(writer, codec, timeout, unit); + public MyPubSubConnection(PubSubEndpoint endpoint, RedisChannelWriter writer, RedisCodec codec, Duration timeout) { + super(endpoint, writer, codec, timeout); } @Override - public > C dispatch(C cmd) { + public RedisCommand dispatch(RedisCommand command) { - if (cmd.getType() == CommandType.SUBSCRIBE) { + if (command.getType() == CommandType.SUBSCRIBE) { subscriptions.incrementAndGet(); } - return super.dispatch(cmd); + return super.dispatch(command); } - - public void channelRead(Object msg) { - PubSubOutput output = (PubSubOutput) msg; - // update internal state - switch (output.type()) { - case psubscribe: - patterns.add(output.pattern()); - break; - case punsubscribe: - patterns.remove(output.pattern()); - break; - case subscribe: - channels.add(output.channel()); - break; - case unsubscribe: - channels.remove(output.channel()); - break; - default: - break; - } - super.channelRead(msg); - } - } diff --git a/src/test/java/com/lambdaworks/CanConnect.java b/src/test/java/com/lambdaworks/CanConnect.java deleted file mode 100644 index 764cd2c267..0000000000 --- a/src/test/java/com/lambdaworks/CanConnect.java +++ /dev/null @@ -1,43 +0,0 @@ -package com.lambdaworks; - -import java.io.IOException; -import java.net.InetSocketAddress; -import java.net.Socket; -import java.net.SocketAddress; -import java.util.concurrent.TimeUnit; - -/** - * @author Mark Paluch - * @soundtrack Ronski Speed - Maracaido Sessions, formerly Tool Sessions (May 2016) - */ -public class CanConnect { - - /** - * Check whether a TCP connection can be established to the given {@link SocketAddress}. - * - * @param host - * @param port - * @return - */ - public static boolean to(String host, int port) { - return to(new InetSocketAddress(host, port)); - } - - /** - * Check whether a TCP connection can be established to the given {@link SocketAddress}. 
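// Editor's aside: the removed CanConnect helper here (and the similar Sockets helper later in
// this patch) reduces to the small TCP reachability probe below; it is included only to
// document what the deleted utility did. The 5-second timeout mirrors CanConnect; host and
// port are illustrative.
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public final class ReachabilityProbe {

    public static boolean canConnect(String host, int port) {
        // try-with-resources closes the socket whether or not the connect succeeds
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), 5_000);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("Redis reachable: " + canConnect("localhost", 6379));
    }
}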
- * - * @param socketAddress - * @return - */ - public static boolean to(SocketAddress socketAddress) { - - try { - Socket socket = new Socket(); - socket.connect(socketAddress, (int) TimeUnit.SECONDS.toMillis(5)); - socket.close(); - return true; - } catch (IOException e) { - return false; - } - } -} diff --git a/src/test/java/com/lambdaworks/Connections.java b/src/test/java/com/lambdaworks/Connections.java deleted file mode 100644 index ea11e61cf7..0000000000 --- a/src/test/java/com/lambdaworks/Connections.java +++ /dev/null @@ -1,34 +0,0 @@ -package com.lambdaworks; - -import org.springframework.test.util.ReflectionTestUtils; - -import com.lambdaworks.redis.RedisChannelHandler; -import com.lambdaworks.redis.StatefulRedisConnectionImpl; -import com.lambdaworks.redis.api.StatefulConnection; -import com.lambdaworks.redis.api.async.RedisAsyncCommands; -import com.lambdaworks.redis.protocol.ConnectionWatchdog; - -import io.netty.channel.Channel; - -/** - * @author Mark Paluch - */ -public class Connections { - - public static Channel getChannel(StatefulConnection connection) { - RedisChannelHandler channelHandler = (RedisChannelHandler) connection; - - Channel channel = (Channel) ReflectionTestUtils.getField(channelHandler.getChannelWriter(), "channel"); - return channel; - } - - public static ConnectionWatchdog getConnectionWatchdog(StatefulConnection connection) { - - Channel channel = getChannel(connection); - return channel.pipeline().get(ConnectionWatchdog.class); - } - - public static StatefulRedisConnectionImpl getStatefulConnection(RedisAsyncCommands connection) { - return (StatefulRedisConnectionImpl) connection.getStatefulConnection(); - } -} diff --git a/src/test/java/com/lambdaworks/Delay.java b/src/test/java/com/lambdaworks/Delay.java deleted file mode 100644 index b1d36361d6..0000000000 --- a/src/test/java/com/lambdaworks/Delay.java +++ /dev/null @@ -1,18 +0,0 @@ -package com.lambdaworks; - -import com.google.code.tempusfugit.temporal.Duration; - -/** - * @author Mark Paluch - */ -public class Delay { - - public static void delay(Duration duration) { - - try { - Thread.sleep(duration.inMillis()); - } catch (InterruptedException e) { - throw new IllegalStateException(e); - } - } -} diff --git a/src/test/java/com/lambdaworks/Futures.java b/src/test/java/com/lambdaworks/Futures.java deleted file mode 100644 index 4bb354538b..0000000000 --- a/src/test/java/com/lambdaworks/Futures.java +++ /dev/null @@ -1,26 +0,0 @@ -package com.lambdaworks; - -import java.util.Collection; -import java.util.concurrent.Future; - -/** - * @author Mark Paluch - */ -public class Futures { - - /** - * Check if all {@code futures} are {@link Future#isDone() completed}. 
- * - * @param futures - * @return {@literal true} if all {@code futures} are {@link Future#isDone() completed} - */ - public static boolean areAllCompleted(Collection> futures) { - - for (Future future : futures) { - if (!future.isDone()) { - return false; - } - } - return true; - } -} diff --git a/src/test/java/com/lambdaworks/LoggingTestRule.java b/src/test/java/com/lambdaworks/LoggingTestRule.java deleted file mode 100644 index f18befd8c8..0000000000 --- a/src/test/java/com/lambdaworks/LoggingTestRule.java +++ /dev/null @@ -1,104 +0,0 @@ -package com.lambdaworks; - -import java.io.ByteArrayOutputStream; -import java.io.PrintStream; -import java.lang.management.ManagementFactory; -import java.lang.management.ThreadInfo; -import java.lang.management.ThreadMXBean; - -import org.apache.logging.log4j.LogManager; -import org.apache.logging.log4j.Logger; -import org.junit.rules.MethodRule; -import org.junit.runners.model.FrameworkMethod; -import org.junit.runners.model.Statement; - -/** - * @author Mark Paluch - */ -public class LoggingTestRule implements MethodRule { - - private boolean threadDumpOnFailure = false; - - public LoggingTestRule(boolean threadDumpOnFailure) { - this.threadDumpOnFailure = threadDumpOnFailure; - } - - @Override - public Statement apply(Statement base, FrameworkMethod method, Object target) { - - return new Statement() { - @Override - public void evaluate() throws Throwable { - Logger logger = LogManager.getLogger(method.getMethod().getDeclaringClass()); - logger.info("---------------------------------------"); - logger.info("-- Invoke method " + method.getMethod().getDeclaringClass().getSimpleName() + "." - + method.getName()); - logger.info("---------------------------------------"); - - try { - base.evaluate(); - } catch (Throwable t) { - if (threadDumpOnFailure) { - printThreadDump(logger); - } - - throw t; - } finally { - logger.info("---------------------------------------"); - logger.info("-- Finished method " + method.getMethod().getDeclaringClass().getSimpleName() + "." 
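// Editor's aside: a minimal sketch of how the LoggingTestRule being removed here was used in
// JUnit 4 tests: registered as a @Rule, it wraps each test method, logs begin/end markers and,
// when constructed with 'true', prints a thread dump on failure. The test class and method
// below are illustrative only.
import org.junit.Rule;
import org.junit.Test;

import com.lambdaworks.LoggingTestRule;

public class LoggingTestRuleUsage {

    @Rule
    public LoggingTestRule loggingRule = new LoggingTestRule(true);

    @Test
    public void somethingUnderTest() {
        // test body; method entry and exit are logged by the rule
    }
}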
- + method.getName()); - logger.info("---------------------------------------"); - } - } - }; - } - - private void printThreadDump(Logger logger) { - logger.info("---------------------------------------"); - - ByteArrayOutputStream buffer = getThreadDump(); - logger.info("-- Thread dump: " + buffer.toString()); - - logger.info("---------------------------------------"); - } - - private ByteArrayOutputStream getThreadDump() { - ThreadMXBean threadBean = ManagementFactory.getThreadMXBean(); - long[] threadIds = threadBean.getAllThreadIds(); - ByteArrayOutputStream buffer = new ByteArrayOutputStream(); - PrintStream stream = new PrintStream(buffer); - - for (long tid : threadIds) { - ThreadInfo info = threadBean.getThreadInfo(tid, 50); - if (info == null) { - stream.println(" Inactive"); - continue; - } - stream.println("Thread " + getTaskName(info.getThreadId(), info.getThreadName()) + ":"); - Thread.State state = info.getThreadState(); - stream.println(" State: " + state); - stream.println(" Blocked count: " + info.getBlockedCount()); - stream.println(" Waited count: " + info.getWaitedCount()); - - if (state == Thread.State.WAITING) { - stream.println(" Waiting on " + info.getLockName()); - } else if (state == Thread.State.BLOCKED) { - stream.println(" Blocked on " + info.getLockName()); - stream.println(" Blocked by " + getTaskName(info.getLockOwnerId(), info.getLockOwnerName())); - } - stream.println(" Stack:"); - for (StackTraceElement frame : info.getStackTrace()) { - stream.println(" " + frame.toString()); - } - } - stream.flush(); - return buffer; - } - - private static String getTaskName(long id, String name) { - if (name == null) { - return Long.toString(id); - } - return id + " (" + name + ")"; - } -} diff --git a/src/test/java/com/lambdaworks/RandomKeys.java b/src/test/java/com/lambdaworks/RandomKeys.java deleted file mode 100644 index 0524e3f7e1..0000000000 --- a/src/test/java/com/lambdaworks/RandomKeys.java +++ /dev/null @@ -1,55 +0,0 @@ -package com.lambdaworks; - -import java.util.*; - -import org.apache.commons.lang3.RandomStringUtils; - -/** - * Random keys for testing slot-hashes. - * - * @author Mark Paluch - */ -public class RandomKeys { - - /** - * Ordered list of random keys. The order corresponds with the list of {@code VALUES}. - */ - public static final List KEYS; - - /** - * Ordered list of random values. The order corresponds with the list of {@code KEYS}. - */ - public static final List VALUES; - - /** - * Mapping between {@code KEYS} and {@code VALUES} - */ - public static final Map MAP; - - /** - * Number of entries. 
- */ - public final static int COUNT = 500; - - static { - - List keys = new ArrayList<>(); - List values = new ArrayList<>(); - Map map = new HashMap<>(); - - for (int i = 0; i < COUNT; i++) { - - String key = RandomStringUtils.random(10, true, true); - String value = RandomStringUtils.random(10, true, true); - - keys.add(key); - values.add(value); - map.put(key, value); - } - - KEYS = Collections.unmodifiableList(keys); - VALUES = Collections.unmodifiableList(values); - MAP = Collections.unmodifiableMap(map); - } - -} diff --git a/src/test/java/com/lambdaworks/Sockets.java b/src/test/java/com/lambdaworks/Sockets.java deleted file mode 100644 index c6fef47ce5..0000000000 --- a/src/test/java/com/lambdaworks/Sockets.java +++ /dev/null @@ -1,26 +0,0 @@ -package com.lambdaworks; - -import java.io.IOException; -import java.net.InetSocketAddress; -import java.net.Socket; -import java.util.concurrent.TimeUnit; - -/** - * @author Mark Paluch - */ -public class Sockets { - public static boolean isOpen(String host, int port) { - Socket socket = new Socket(); - try { - socket.connect(new InetSocketAddress(host, port), (int) TimeUnit.MILLISECONDS.convert(1, TimeUnit.SECONDS)); - socket.close(); - return true; - } catch (IOException e) { - return false; - } - } - - private Sockets() { - // unused - } -} diff --git a/src/test/java/com/lambdaworks/SslTest.java b/src/test/java/com/lambdaworks/SslTest.java deleted file mode 100644 index 2b3fb2e288..0000000000 --- a/src/test/java/com/lambdaworks/SslTest.java +++ /dev/null @@ -1,241 +0,0 @@ -package com.lambdaworks; - -import static com.lambdaworks.redis.TestSettings.host; -import static com.lambdaworks.redis.TestSettings.sslPort; -import static org.assertj.core.api.Assertions.assertThat; -import static org.assertj.core.api.Assertions.fail; -import static org.junit.Assume.assumeTrue; - -import java.io.File; -import java.io.IOException; -import java.security.cert.CertificateException; -import java.util.List; -import java.util.concurrent.ExecutionException; - -import org.junit.AfterClass; -import org.junit.Before; -import org.junit.Test; - -import com.lambdaworks.redis.*; -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.pubsub.api.async.RedisPubSubAsyncCommands; -import com.lambdaworks.redis.pubsub.api.sync.RedisPubSubCommands; - -import io.netty.handler.codec.DecoderException; - -/** - * @author Mark Paluch - */ -public class SslTest extends AbstractTest { - - private static final String KEYSTORE = "work/keystore.jks"; - private static final String LOCALHOST_KEYSTORE = "work/keystore-localhost.jks"; - private static final RedisClient redisClient = RedisClient.create(); - - private static final RedisURI URI_NO_VERIFY = RedisURI.Builder.redis(host(), sslPort()) // - .withSsl(true) // - .withVerifyPeer(false) // - .build(); - - private static final RedisURI URI_VERIFY = RedisURI.Builder.redis(host(), sslPort(1)) // - .withSsl(true) // - .withVerifyPeer(true) // - .build(); - - @Before - public void before() throws Exception { - - assumeTrue("Assume that stunnel runs on port 6443", Sockets.isOpen(host(), sslPort())); - assertThat(new File(KEYSTORE)).exists(); - - System.setProperty("javax.net.ssl.trustStore", KEYSTORE); - redisClient.setOptions(ClientOptions.create()); - } - - @AfterClass - public static void afterClass() { - FastShutdown.shutdown(redisClient); - } - - @Test - public void standaloneWithSsl() throws Exception { - - RedisCommands connection = 
redisClient.connect(URI_NO_VERIFY).sync(); - connection.set("key", "value"); - assertThat(connection.get("key")).isEqualTo("value"); - connection.close(); - } - - @Test - public void standaloneWithJdkSsl() throws Exception { - - SslOptions sslOptions = SslOptions.builder() // - .jdkSslProvider() // - .truststore(new File(LOCALHOST_KEYSTORE)) // - .build(); - setOptions(sslOptions); - - verifyConnection(URI_VERIFY); - } - - @Test - public void standaloneWithJdkSslUsingTruststoreUrl() throws Exception { - - SslOptions sslOptions = SslOptions.builder() // - .jdkSslProvider() // - .truststore(new File(LOCALHOST_KEYSTORE).toURI().toURL()) // - .build(); - setOptions(sslOptions); - - verifyConnection(URI_VERIFY); - } - - @Test(expected = RedisConnectionException.class) - public void standaloneWithJdkSslUsingTruststoreUrlWithWrongPassword() throws Exception { - - SslOptions sslOptions = SslOptions.builder() // - .jdkSslProvider() // - .truststore(new File(LOCALHOST_KEYSTORE).toURI().toURL(), "knödel") // - .build(); - setOptions(sslOptions); - - verifyConnection(URI_VERIFY); - } - - @Test(expected = RedisConnectionException.class) - public void standaloneWithJdkSslFailsWithWrongTruststore() throws Exception { - - SslOptions sslOptions = SslOptions.builder() // - .jdkSslProvider() // - .build(); - setOptions(sslOptions); - - verifyConnection(URI_VERIFY); - } - - @Test - public void standaloneWithOpenSsl() throws Exception { - - SslOptions sslOptions = SslOptions.builder() // - .openSslProvider() // - .truststore(new File(LOCALHOST_KEYSTORE)) // - .build(); - setOptions(sslOptions); - - verifyConnection(URI_VERIFY); - } - - @Test(expected = RedisConnectionException.class) - public void standaloneWithOpenSslFailsWithWrongTruststore() throws Exception { - - SslOptions sslOptions = SslOptions.builder() // - .openSslProvider() // - .build(); - setOptions(sslOptions); - - verifyConnection(URI_VERIFY); - } - - @Test - public void pingBeforeActivate() throws Exception { - - redisClient.setOptions(ClientOptions.builder().pingBeforeActivateConnection(true).build()); - - verifyConnection(URI_NO_VERIFY); - } - - @Test - public void regularSslWithReconnect() throws Exception { - - - RedisCommands connection = redisClient.connect(URI_NO_VERIFY).sync(); - connection.quit(); - Thread.sleep(200); - assertThat(connection.ping()).isEqualTo("PONG"); - connection.close(); - } - - @Test(expected = RedisConnectionException.class) - public void sslWithVerificationWillFail() throws Exception { - - RedisURI redisUri = RedisURI.create("rediss://" + host() + ":" + sslPort()); - redisClient.connect(redisUri).sync(); - } - - @Test - public void pubSubSsl() throws Exception { - - RedisPubSubCommands connection = redisClient.connectPubSub(URI_NO_VERIFY).sync(); - connection.subscribe("c1"); - connection.subscribe("c2"); - Thread.sleep(200); - - RedisPubSubCommands connection2 = redisClient.connectPubSub(URI_NO_VERIFY).sync(); - - assertThat(connection2.pubsubChannels()).contains("c1", "c2"); - connection.quit(); - Thread.sleep(200); - Wait.untilTrue(connection::isOpen).waitOrTimeout(); - Wait.untilEquals(2, () -> connection2.pubsubChannels().size()).waitOrTimeout(); - - assertThat(connection2.pubsubChannels()).contains("c1", "c2"); - - connection.close(); - connection2.close(); - } - - @Test - public void pubSubSslAndBreakConnection() throws Exception { - - RedisURI redisURI = RedisURI.Builder.redis(host(), sslPort()).withSsl(true).withVerifyPeer(false) - .build(); - 
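// Editor's aside: a compact sketch of the SSL wiring exercised by the SslTest being removed
// here, expressed against the io.lettuce.core packages this patch migrates to (the builder
// API has the same shape). Keystore path, host and port 6443 (the stunnel port assumed by the
// removed test) are illustrative assumptions.
import java.io.File;

import io.lettuce.core.ClientOptions;
import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisURI;
import io.lettuce.core.SslOptions;
import io.lettuce.core.api.StatefulRedisConnection;

public class SslQuickstart {

    public static void main(String[] args) {
        SslOptions sslOptions = SslOptions.builder()
                .jdkSslProvider()
                .truststore(new File("work/keystore-localhost.jks"))
                .build();

        RedisClient client = RedisClient.create();
        client.setOptions(ClientOptions.builder().sslOptions(sslOptions).build());

        RedisURI uri = RedisURI.Builder.redis("localhost", 6443)
                .withSsl(true)
                .withVerifyPeer(true)
                .build();

        StatefulRedisConnection<String, String> connection = client.connect(uri);
        System.out.println("PING -> " + connection.sync().ping());

        connection.close();
        client.shutdown();
    }
}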
redisClient.setOptions(ClientOptions.builder().suspendReconnectOnProtocolFailure(true).build()); - - RedisPubSubAsyncCommands connection = redisClient.connectPubSub(redisURI).async(); - connection.subscribe("c1").get(); - connection.subscribe("c2").get(); - Thread.sleep(200); - - RedisPubSubAsyncCommands connection2 = redisClient.connectPubSub(redisURI).async(); - - assertThat(connection2.pubsubChannels().get()).contains("c1", "c2"); - - redisURI.setVerifyPeer(true); - - connection.quit(); - Thread.sleep(500); - - RedisFuture> future = connection2.pubsubChannels(); - assertThat(future.get()).doesNotContain("c1", "c2"); - assertThat(future.isDone()).isEqualTo(true); - - RedisFuture> defectFuture = connection.pubsubChannels(); - - try { - assertThat(defectFuture.get()).doesNotContain("c1", "c2"); - fail("Missing ExecutionException with nested SSLHandshakeException"); - } catch (InterruptedException e) { - fail("Missing ExecutionException with nested SSLHandshakeException"); - } catch (ExecutionException e) { - assertThat(e).hasCauseInstanceOf(DecoderException.class); - assertThat(e).hasRootCauseInstanceOf(CertificateException.class); - } - - assertThat(defectFuture.isDone()).isEqualTo(true); - - connection.close(); - connection2.close(); - } - - private void setOptions(SslOptions sslOptions) { - ClientOptions clientOptions = ClientOptions.builder().sslOptions(sslOptions).build(); - redisClient.setOptions(clientOptions); - } - - private void verifyConnection(RedisURI redisUri) { - StatefulRedisConnection connection = redisClient.connect(redisUri); - connection.sync().ping(); - connection.close(); - } -} diff --git a/src/test/java/com/lambdaworks/TestClientResources.java b/src/test/java/com/lambdaworks/TestClientResources.java deleted file mode 100644 index 89478cc9a1..0000000000 --- a/src/test/java/com/lambdaworks/TestClientResources.java +++ /dev/null @@ -1,35 +0,0 @@ -package com.lambdaworks; - -import java.util.concurrent.TimeUnit; - -import com.lambdaworks.redis.TestEventLoopGroupProvider; -import com.lambdaworks.redis.resource.ClientResources; -import com.lambdaworks.redis.resource.DefaultClientResources; - -/** - * Client-Resources suitable for testing. Uses {@link com.lambdaworks.redis.TestEventLoopGroupProvider} to preserve the event - * loop groups between tests. Every time a new {@link TestClientResources} instance is created, shutdown hook is added - * {@link Runtime#addShutdownHook(Thread)}. 
- * - * @author Mark Paluch - */ -public class TestClientResources { - - public static ClientResources create() { - final DefaultClientResources resources = new DefaultClientResources.Builder().eventLoopGroupProvider( - new TestEventLoopGroupProvider()).build(); - - Runtime.getRuntime().addShutdownHook(new Thread() { - @Override - public void run() { - try { - resources.shutdown(100, 100, TimeUnit.MILLISECONDS).get(10, TimeUnit.SECONDS); - } catch (Exception e) { - e.printStackTrace(); - } - } - }); - - return resources; - } -} diff --git a/src/test/java/com/lambdaworks/Wait.java b/src/test/java/com/lambdaworks/Wait.java deleted file mode 100644 index 12f941f157..0000000000 --- a/src/test/java/com/lambdaworks/Wait.java +++ /dev/null @@ -1,248 +0,0 @@ -package com.lambdaworks; - -import java.util.concurrent.ExecutionException; -import java.util.concurrent.TimeoutException; -import java.util.function.Function; -import java.util.function.Predicate; - -import com.google.code.tempusfugit.temporal.Duration; -import com.google.code.tempusfugit.temporal.Sleeper; -import com.google.code.tempusfugit.temporal.ThreadSleep; -import com.google.code.tempusfugit.temporal.Timeout; - -/** - * Wait-Until helper. - * - * @author Mark Paluch - */ -public class Wait { - - /** - * Initialize a {@link com.lambdaworks.Wait.WaitBuilder} to wait until the {@code supplier} supplies {@literal true} - * - * @param supplier - * @return - */ - public static WaitBuilder untilTrue(Supplier supplier) { - WaitBuilder wb = new WaitBuilder<>(); - - wb.supplier = supplier; - wb.check = o -> o; - - return wb; - } - - /** - * Initialize a {@link com.lambdaworks.Wait.WaitBuilder} to wait until the {@code condition} does not throw exceptions - * - * @param condition - * @return - */ - public static WaitBuilder untilNoException(VoidWaitCondition condition) { - WaitBuilder wb = new WaitBuilder<>(); - wb.waitCondition = () -> { - try { - condition.test(); - return true; - } catch (Exception e) { - return false; - } - }; - - wb.supplier = () -> { - condition.test(); - return null; - }; - - return wb; - } - - /** - * Initialize a {@link com.lambdaworks.Wait.WaitBuilder} to wait until the {@code actualSupplier} provides an object that is - * not equal to {@code expectation} - * - * @param expectation - * @param actualSupplier - * @param - * @return - */ - public static WaitBuilder untilNotEquals(T expectation, Supplier actualSupplier) { - WaitBuilder wb = new WaitBuilder<>(); - - wb.supplier = actualSupplier; - wb.check = o -> { - if (o == expectation) { - return false; - } - - if ((o == null && expectation != null) || (o != null && expectation == null)) { - return true; - } - - if (o instanceof Number && expectation instanceof Number) { - Number actualNumber = (Number) o; - Number expectedNumber = (Number) o; - - if (actualNumber.doubleValue() == expectedNumber.doubleValue()) { - return false; - } - - if (actualNumber.longValue() == expectedNumber.longValue()) { - return false; - } - } - - return !o.equals(expectation); - }; - wb.messageFunction = o -> "Objects are equal: " + expectation + " and " + o; - - return wb; - } - - /** - * Initialize a {@link com.lambdaworks.Wait.WaitBuilder} to wait until the {@code actualSupplier} provides an object that is - * not equal to {@code expectation} - * - * @param expectation - * @param actualSupplier - * @param - * @return - */ - public static WaitBuilder untilEquals(T expectation, Supplier actualSupplier) { - WaitBuilder wb = new WaitBuilder<>(); - - wb.supplier = actualSupplier; - wb.check = o 
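// Editor's aside: a usage sketch for the Wait helper being removed here. It polls roughly
// every 100 ms for up to 10 seconds and throws if the condition never holds, as the removed
// SslTest does with Wait.untilTrue(connection::isOpen).waitOrTimeout(). The AtomicBoolean is
// only an illustrative stand-in for a real condition.
import java.util.concurrent.atomic.AtomicBoolean;

import com.lambdaworks.Wait;

public class WaitUsage {

    public static void main(String[] args) {
        AtomicBoolean ready = new AtomicBoolean();
        new Thread(() -> ready.set(true)).start();

        // Blocks until the supplier yields true or the default 10-second timeout expires.
        Wait.untilTrue(ready::get).waitOrTimeout();
        System.out.println("Condition satisfied");
    }
}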
-> { - if (o == expectation) { - return true; - } - - if ((o == null && expectation != null) || (o != null && expectation == null)) { - return false; - } - - if (o instanceof Number && expectation instanceof Number) { - Number actualNumber = (Number) o; - Number expectedNumber = (Number) expectation; - - if (actualNumber.doubleValue() == expectedNumber.doubleValue()) { - return true; - } - - if (actualNumber.longValue() == expectedNumber.longValue()) { - return true; - } - } - - return o.equals(expectation); - }; - wb.messageFunction = o -> "Objects are not equal: " + expectation + " and " + o; - - return wb; - } - - @FunctionalInterface - public interface WaitCondition { - - boolean isSatisfied() throws Exception; - } - - @FunctionalInterface - public interface VoidWaitCondition { - - void test() throws Exception; - } - - @FunctionalInterface - public interface Supplier { - T get() throws Exception; - } - - public static class WaitBuilder { - - private Duration duration = Duration.seconds(10); - private Sleeper sleeper = new ThreadSleep(Duration.millis(100)); - private Function messageFunction; - private Supplier supplier; - private Predicate check; - private WaitCondition waitCondition; - - public WaitBuilder during(Duration duration) { - this.duration = duration; - return this; - } - - public WaitBuilder message(String message) { - this.messageFunction = o -> message; - return this; - } - - public void waitOrTimeout() { - - Waiter waiter = new Waiter(); - waiter.duration = duration; - waiter.sleeper = sleeper; - waiter.messageFunction = (Function) messageFunction; - - if (waitCondition != null) { - waiter.waitOrTimeout(waitCondition, supplier); - } else { - waiter.waitOrTimeout(supplier, check); - } - } - - } - - private static class Waiter { - private Duration duration; - private Sleeper sleeper; - private Function messageFunction; - - private void waitOrTimeout(Supplier supplier, Predicate check) { - - try { - if (!success(() -> check.test(supplier.get()), Timeout.timeout(duration))) { - if (messageFunction != null) { - throw new TimeoutException(messageFunction.apply(supplier.get())); - } - throw new TimeoutException("Condition not satisfied for: " + supplier.get()); - } - } catch (Exception e) { - throw new IllegalStateException(e); - } - } - - private void waitOrTimeout(WaitCondition waitCondition, Supplier supplier) { - - try { - if (!success(waitCondition, Timeout.timeout(duration))) { - try { - if (messageFunction != null) { - throw new TimeoutException(messageFunction.apply(supplier.get())); - } - throw new TimeoutException("Condition not satisfied for: " + supplier.get()); - } catch (TimeoutException e) { - throw e; - } catch (Exception e) { - if (messageFunction != null) { - throw new ExecutionException(messageFunction.apply(null), e); - } - throw new ExecutionException("Condition not satisfied", e); - } - } - } catch (Exception e) { - throw new IllegalStateException(e); - } - } - - private boolean success(WaitCondition condition, Timeout timeout) throws Exception { - while (!timeout.hasExpired()) { - if (condition.isSatisfied()) { - return true; - } - sleeper.sleep(); - } - return false; - } - } -} diff --git a/src/test/java/com/lambdaworks/apigenerator/CompilationUnitFactory.java b/src/test/java/com/lambdaworks/apigenerator/CompilationUnitFactory.java deleted file mode 100644 index efc4e934f0..0000000000 --- a/src/test/java/com/lambdaworks/apigenerator/CompilationUnitFactory.java +++ /dev/null @@ -1,158 +0,0 @@ -package com.lambdaworks.apigenerator; - -import java.io.File; 
-import java.io.FileOutputStream; -import java.io.IOException; -import java.util.ArrayList; -import java.util.List; -import java.util.function.Consumer; -import java.util.function.Function; -import java.util.function.Predicate; -import java.util.function.Supplier; - -import com.github.javaparser.ASTHelper; -import com.github.javaparser.JavaParser; -import com.github.javaparser.ast.CompilationUnit; -import com.github.javaparser.ast.ImportDeclaration; -import com.github.javaparser.ast.PackageDeclaration; -import com.github.javaparser.ast.TypeParameter; -import com.github.javaparser.ast.body.ClassOrInterfaceDeclaration; -import com.github.javaparser.ast.body.MethodDeclaration; -import com.github.javaparser.ast.body.ModifierSet; -import com.github.javaparser.ast.body.Parameter; -import com.github.javaparser.ast.comments.Comment; -import com.github.javaparser.ast.comments.JavadocComment; -import com.github.javaparser.ast.expr.NameExpr; -import com.github.javaparser.ast.type.Type; -import com.github.javaparser.ast.visitor.VoidVisitorAdapter; - -/** - * @author Mark Paluch - */ -public class CompilationUnitFactory { - - private File templateFile; - private File sources; - private File target; - private String targetPackage; - private String targetName; - - private Function typeDocFunction; - private Function methodReturnTypeFunction; - private Predicate methodFilter; - private Supplier> importSupplier; - private Consumer typeMutator; - private Function methodCommentMutator; - - CompilationUnit template; - CompilationUnit result = new CompilationUnit(); - ClassOrInterfaceDeclaration resultType; - - public CompilationUnitFactory(File templateFile, File sources, String targetPackage, String targetName, - Function typeDocFunction, - Function methodReturnTypeFunction, - Predicate methodFilter, Supplier> importSupplier, - Consumer typeMutator, - Function methodCommentMutator) { - - this.templateFile = templateFile; - this.sources = sources; - this.targetPackage = targetPackage; - this.targetName = targetName; - this.typeDocFunction = typeDocFunction; - this.methodReturnTypeFunction = methodReturnTypeFunction; - this.methodFilter = methodFilter; - this.importSupplier = importSupplier; - this.typeMutator = typeMutator; - this.methodCommentMutator = methodCommentMutator; - - this.target = new File(sources, targetPackage.replace('.', '/') + "/" + targetName + ".java"); - } - - public void createInterface() throws Exception { - - result.setPackage(new PackageDeclaration(ASTHelper.createNameExpr(targetPackage))); - - template = JavaParser.parse(templateFile); - - ClassOrInterfaceDeclaration templateTypeDeclaration = (ClassOrInterfaceDeclaration) template.getTypes().get(0); - resultType = new ClassOrInterfaceDeclaration(ModifierSet.PUBLIC, true, targetName); - if (templateTypeDeclaration.getExtends() != null) { - resultType.setExtends(templateTypeDeclaration.getExtends()); - } - - if (!templateTypeDeclaration.getTypeParameters().isEmpty()) { - resultType.setTypeParameters(new ArrayList<>()); - for (TypeParameter typeParameter : templateTypeDeclaration.getTypeParameters()) { - resultType.getTypeParameters().add(new TypeParameter(typeParameter.getName(), typeParameter.getTypeBound())); - } - } - - resultType.setComment(new JavadocComment(typeDocFunction.apply(templateTypeDeclaration.getComment().getContent()))); - - result.setImports(new ArrayList<>()); - ASTHelper.addTypeDeclaration(result, resultType); - resultType.setParentNode(result); - - if (template.getImports() != null) { - 
result.getImports().addAll(template.getImports()); - } - List importLines = importSupplier.get(); - for (String importLine : importLines) { - result.getImports().add(new ImportDeclaration(new NameExpr(importLine), false, false)); - } - - new MethodVisitor().visit(template, null); - - if (typeMutator != null) { - typeMutator.accept(resultType); - } - - writeResult(); - - } - - protected void writeResult() throws IOException { - FileOutputStream fos = new FileOutputStream(target); - fos.write(result.toString().getBytes()); - fos.close(); - } - - /** - * Simple visitor implementation for visiting MethodDeclaration nodes. - */ - private class MethodVisitor extends VoidVisitorAdapter { - - @Override - public void visit(MethodDeclaration n, Object arg) { - - if (!methodFilter.test(n)) { - return; - } - - MethodDeclaration method = new MethodDeclaration(n.getModifiers(), methodReturnTypeFunction.apply(n), n.getName()); - - if(methodCommentMutator != null){ - method.setComment(methodCommentMutator.apply(n.getComment())); - }else { - method.setComment(n.getComment()); - } - - for (Parameter parameter : n.getParameters()) { - Parameter param = ASTHelper.createParameter(parameter.getType(), parameter.getId().getName()); - param.setVarArgs(parameter.isVarArgs()); - - ASTHelper.addParameter(method, param); - } - - if (n.getTypeParameters() != null) { - method.setTypeParameters(new ArrayList<>()); - method.getTypeParameters().addAll(n.getTypeParameters()); - } - - ASTHelper.addMember(resultType, method); - - } - } - -} diff --git a/src/test/java/com/lambdaworks/apigenerator/Constants.java b/src/test/java/com/lambdaworks/apigenerator/Constants.java deleted file mode 100644 index 3f60c1dceb..0000000000 --- a/src/test/java/com/lambdaworks/apigenerator/Constants.java +++ /dev/null @@ -1,17 +0,0 @@ -package com.lambdaworks.apigenerator; - -import java.io.File; - -/** - * @author Mark Paluch - */ -class Constants { - - public final static String[] TEMPLATE_NAMES = { "RedisHashCommands", "RedisHLLCommands", "RedisKeyCommands", - "RedisListCommands", "RedisScriptingCommands", "RedisServerCommands", "RedisSetCommands", "RedisSortedSetCommands", - "RedisStringCommands", "RedisTransactionalCommands", "RedisSentinelCommands", "BaseRedisCommands", - "RedisGeoCommands" }; - - public final static File TEMPLATES = new File("src/main/templates"); - public final static File SOURCES = new File("src/main/java"); -} diff --git a/src/test/java/com/lambdaworks/apigenerator/CreateAsyncApi.java b/src/test/java/com/lambdaworks/apigenerator/CreateAsyncApi.java deleted file mode 100644 index e2f784b4cd..0000000000 --- a/src/test/java/com/lambdaworks/apigenerator/CreateAsyncApi.java +++ /dev/null @@ -1,129 +0,0 @@ -package com.lambdaworks.apigenerator; - -import java.io.File; -import java.util.*; -import java.util.function.Consumer; -import java.util.function.Function; -import java.util.function.Supplier; - -import com.lambdaworks.redis.internal.LettuceSets; -import org.junit.Test; -import org.junit.runner.RunWith; -import org.junit.runners.Parameterized; - -import com.github.javaparser.ast.CompilationUnit; -import com.github.javaparser.ast.ImportDeclaration; -import com.github.javaparser.ast.body.ClassOrInterfaceDeclaration; -import com.github.javaparser.ast.body.MethodDeclaration; -import com.github.javaparser.ast.expr.NameExpr; -import com.github.javaparser.ast.type.ClassOrInterfaceType; -import com.github.javaparser.ast.type.ReferenceType; -import com.github.javaparser.ast.type.Type; - -/** - * Create async API based on the 
templates. - * - * @author Mark Paluch - */ -@RunWith(Parameterized.class) -public class CreateAsyncApi { - - private Set KEEP_METHOD_RESULT_TYPE = LettuceSets.unmodifiableSet("shutdown", "debugOom", "debugSegfault", - "digest", "close", "isOpen", "BaseRedisCommands.reset", "getStatefulConnection"); - - private CompilationUnitFactory factory; - - @Parameterized.Parameters(name = "Create {0}") - public static List arguments() { - List result = new ArrayList<>(); - - for (String templateName : Constants.TEMPLATE_NAMES) { - result.add(new Object[] { templateName }); - } - - return result; - } - - /** - * @param templateName - */ - public CreateAsyncApi(String templateName) { - - String targetName = templateName.replace("Commands", "AsyncCommands"); - - File templateFile = new File(Constants.TEMPLATES, "com/lambdaworks/redis/api/" + templateName + ".java"); - String targetPackage; - - if (templateName.contains("RedisSentinel")) { - targetPackage = "com.lambdaworks.redis.sentinel.api.async"; - } else { - targetPackage = "com.lambdaworks.redis.api.async"; - } - - factory = new CompilationUnitFactory(templateFile, Constants.SOURCES, targetPackage, targetName, commentMutator(), - methodTypeMutator(), methodDeclaration -> true, importSupplier(), typeMutator(), null); - } - - private Consumer typeMutator() { - return type -> { - - if (type.getName().contains("SentinelAsyncCommands")) { - type.getExtends().add(new ClassOrInterfaceType("RedisSentinelAsyncConnection")); - CompilationUnit compilationUnit = (CompilationUnit) type.getParentNode(); - if (compilationUnit.getImports() == null) { - compilationUnit.setImports(new ArrayList<>()); - } - compilationUnit.getImports() - .add(new ImportDeclaration(new NameExpr("com.lambdaworks.redis.RedisSentinelAsyncConnection"), false, - false)); - } - - }; - } - - /** - * Mutate type comment. - * - * @return - */ - protected Function commentMutator() { - return s -> s.replaceAll("\\$\\{intent\\}", "Asynchronous executed commands") + "* @generated by " - + getClass().getName() + "\r\n "; - } - - /** - * Mutate type to async result. - * - * @return - */ - protected Function methodTypeMutator() { - return method -> { - ClassOrInterfaceDeclaration classOfMethod = (ClassOrInterfaceDeclaration) method.getParentNode(); - if (KEEP_METHOD_RESULT_TYPE.contains(method.getName()) - || KEEP_METHOD_RESULT_TYPE.contains(classOfMethod.getName() + "." + method.getName())) { - return method.getType(); - } - - String typeAsString = method.getType().toStringWithoutComments().trim(); - if (typeAsString.equals("void")) { - typeAsString = "Void"; - } - - return new ReferenceType(new ClassOrInterfaceType("RedisFuture<" + typeAsString + ">")); - }; - } - - /** - * Supply addititional imports. 
- * - * @return - */ - protected Supplier> importSupplier() { - return () -> Collections.singletonList("com.lambdaworks.redis.RedisFuture"); - } - - @Test - public void createInterface() throws Exception { - factory.createInterface(); - } -} diff --git a/src/test/java/com/lambdaworks/apigenerator/CreateAsyncNodeSelectionClusterApi.java b/src/test/java/com/lambdaworks/apigenerator/CreateAsyncNodeSelectionClusterApi.java deleted file mode 100644 index 0f83a7d863..0000000000 --- a/src/test/java/com/lambdaworks/apigenerator/CreateAsyncNodeSelectionClusterApi.java +++ /dev/null @@ -1,123 +0,0 @@ -package com.lambdaworks.apigenerator; - -import java.io.File; -import java.util.*; -import java.util.function.Function; -import java.util.function.Predicate; -import java.util.function.Supplier; - -import com.lambdaworks.redis.internal.LettuceSets; -import org.junit.Test; -import org.junit.runner.RunWith; -import org.junit.runners.Parameterized; - -import com.github.javaparser.ast.body.ClassOrInterfaceDeclaration; -import com.github.javaparser.ast.body.MethodDeclaration; -import com.github.javaparser.ast.type.ClassOrInterfaceType; -import com.github.javaparser.ast.type.ReferenceType; -import com.github.javaparser.ast.type.Type; - -/** - * Create async API based on the templates. - * - * @author Mark Paluch - */ -@RunWith(Parameterized.class) -public class CreateAsyncNodeSelectionClusterApi { - - private Set FILTER_METHODS = LettuceSets.unmodifiableSet("shutdown", "debugOom", "debugSegfault", "digest", - "close", "isOpen", "BaseRedisCommands.reset", "readOnly", "readWrite"); - - private CompilationUnitFactory factory; - - @Parameterized.Parameters(name = "Create {0}") - public static List arguments() { - List result = new ArrayList<>(); - - for (String templateName : Constants.TEMPLATE_NAMES) { - if (templateName.contains("Transactional") || templateName.contains("Sentinel")) { - continue; - } - result.add(new Object[] { templateName }); - } - - return result; - } - - /** - * @param templateName - */ - public CreateAsyncNodeSelectionClusterApi(String templateName) { - - String targetName = templateName.replace("Commands", "AsyncCommands").replace("Redis", "NodeSelection"); - File templateFile = new File(Constants.TEMPLATES, "com/lambdaworks/redis/api/" + templateName + ".java"); - String targetPackage = "com.lambdaworks.redis.cluster.api.async"; - - // todo: remove AutoCloseable from BaseNodeSelectionAsyncCommands - factory = new CompilationUnitFactory(templateFile, Constants.SOURCES, targetPackage, targetName, commentMutator(), - methodTypeMutator(), methodFilter(), importSupplier(), null, null); - } - - /** - * Mutate type comment. - * - * @return - */ - protected Function commentMutator() { - return s -> s.replaceAll("\\$\\{intent\\}", "Asynchronous executed commands on a node selection") + "* @generated by " - + getClass().getName() + "\r\n "; - } - - /** - * Mutate type to async result. - * - * @return - */ - protected Predicate methodFilter() { - return method -> { - ClassOrInterfaceDeclaration classOfMethod = (ClassOrInterfaceDeclaration) method.getParentNode(); - if (FILTER_METHODS.contains(method.getName()) - || FILTER_METHODS.contains(classOfMethod.getName() + "." + method.getName())) { - return false; - } - - return true; - }; - } - - /** - * Mutate type to async result. 
- * - * @return - */ - protected Function methodTypeMutator() { - return method -> { - ClassOrInterfaceDeclaration classOfMethod = (ClassOrInterfaceDeclaration) method.getParentNode(); - if (FILTER_METHODS.contains(method.getName()) - || FILTER_METHODS.contains(classOfMethod.getName() + "." + method.getName())) { - return method.getType(); - } - - String typeAsString = method.getType().toStringWithoutComments().trim(); - if (typeAsString.equals("void")) { - typeAsString = "Void"; - } - - return new ReferenceType(new ClassOrInterfaceType("AsyncExecutions<" + typeAsString + ">")); - }; - } - - /** - * Supply addititional imports. - * - * @return - */ - protected Supplier> importSupplier() { - return () -> Collections.singletonList("com.lambdaworks.redis.RedisFuture"); - } - - @Test - public void createInterface() throws Exception { - factory.createInterface(); - } -} diff --git a/src/test/java/com/lambdaworks/apigenerator/CreateReactiveApi.java b/src/test/java/com/lambdaworks/apigenerator/CreateReactiveApi.java deleted file mode 100644 index 832ae8c9a9..0000000000 --- a/src/test/java/com/lambdaworks/apigenerator/CreateReactiveApi.java +++ /dev/null @@ -1,126 +0,0 @@ -package com.lambdaworks.apigenerator; - -import java.io.File; -import java.util.*; -import java.util.function.Function; -import java.util.function.Supplier; - -import com.lambdaworks.redis.internal.LettuceSets; -import org.junit.Test; -import org.junit.runner.RunWith; -import org.junit.runners.Parameterized; - -import com.github.javaparser.ast.body.ClassOrInterfaceDeclaration; -import com.github.javaparser.ast.body.MethodDeclaration; -import com.github.javaparser.ast.comments.Comment; -import com.github.javaparser.ast.type.ClassOrInterfaceType; -import com.github.javaparser.ast.type.ReferenceType; -import com.github.javaparser.ast.type.Type; - -/** - * Create reactive API based on the templates. - * - * @author Mark Paluch - */ -@RunWith(Parameterized.class) -public class CreateReactiveApi { - - private Set KEEP_METHOD_RESULT_TYPE = LettuceSets.unmodifiableSet( - "digest", "close", "isOpen", "BaseRedisCommands.reset", - "getStatefulConnection"); - - private CompilationUnitFactory factory; - - @Parameterized.Parameters(name = "Create {0}") - public static List arguments() { - List result = new ArrayList<>(); - - for (String templateName : Constants.TEMPLATE_NAMES) { - result.add(new Object[] { templateName }); - } - - return result; - } - - /** - * - * @param templateName - */ - public CreateReactiveApi(String templateName) { - - String targetName = templateName.replace("Commands", "ReactiveCommands"); - File templateFile = new File(Constants.TEMPLATES, "com/lambdaworks/redis/api/" + templateName + ".java"); - String targetPackage; - - if (templateName.contains("RedisSentinel")) { - targetPackage = "com.lambdaworks.redis.sentinel.api.rx"; - } else { - targetPackage = "com.lambdaworks.redis.api.rx"; - } - - factory = new CompilationUnitFactory(templateFile, Constants.SOURCES, targetPackage, targetName, commentMutator(), - methodTypeMutator(), methodDeclaration -> true, importSupplier(), null, methodCommentMutator()); - } - - /** - * Mutate type comment. 
- * - * @return - */ - protected Function commentMutator() { - return s -> s.replaceAll("\\$\\{intent\\}", "Observable commands").replaceAll("@since 3.0", "@since 4.0") - + "* @generated by " + getClass().getName() + "\r\n "; - } - - protected Function methodCommentMutator() { - return comment -> { - if(comment != null && comment.getContent() != null){ - comment.setContent(comment.getContent().replaceAll("List<(.*)>", "$1").replaceAll("Set<(.*)>", "$1")); - } - return comment; - }; - } - - /** - * Mutate type to async result. - * - * @return - */ - protected Function methodTypeMutator() { - return method -> { - - ClassOrInterfaceDeclaration classOfMethod = (ClassOrInterfaceDeclaration) method.getParentNode(); - if (KEEP_METHOD_RESULT_TYPE.contains(method.getName()) - || KEEP_METHOD_RESULT_TYPE.contains(classOfMethod.getName() + "." + method.getName())) { - return method.getType(); - } - - String typeAsString = method.getType().toStringWithoutComments().trim(); - if (typeAsString.equals("void")) { - typeAsString = "Success"; - } - - if (typeAsString.startsWith("List<")) { - typeAsString = typeAsString.substring(5, typeAsString.length() - 1); - } else if (typeAsString.startsWith("Set<")) { - typeAsString = typeAsString.substring(4, typeAsString.length() - 1); - } - - return new ReferenceType(new ClassOrInterfaceType("Observable<" + typeAsString + ">")); - }; - } - - /** - * Supply addititional imports. - * - * @return - */ - protected Supplier> importSupplier() { - return () -> Collections.singletonList("rx.Observable"); - } - - @Test - public void createInterface() throws Exception { - factory.createInterface(); - } -} diff --git a/src/test/java/com/lambdaworks/apigenerator/CreateSyncApi.java b/src/test/java/com/lambdaworks/apigenerator/CreateSyncApi.java deleted file mode 100644 index ab04430a5c..0000000000 --- a/src/test/java/com/lambdaworks/apigenerator/CreateSyncApi.java +++ /dev/null @@ -1,90 +0,0 @@ -package com.lambdaworks.apigenerator; - -import java.io.File; -import java.util.ArrayList; -import java.util.Collections; -import java.util.List; -import java.util.function.Function; -import java.util.function.Supplier; - -import org.junit.Test; -import org.junit.runner.RunWith; -import org.junit.runners.Parameterized; - -import com.github.javaparser.ast.body.MethodDeclaration; -import com.github.javaparser.ast.type.Type; - -/** - * Create sync API based on the templates. - * - * @author Mark Paluch - */ -@RunWith(Parameterized.class) -public class CreateSyncApi { - - private CompilationUnitFactory factory; - - @Parameterized.Parameters(name = "Create {0}") - public static List arguments() { - List result = new ArrayList<>(); - - for (String templateName : Constants.TEMPLATE_NAMES) { - result.add(new Object[] { templateName }); - } - - return result; - } - - /** - * - * @param templateName - */ - public CreateSyncApi(String templateName) { - - String targetName = templateName; - File templateFile = new File(Constants.TEMPLATES, "com/lambdaworks/redis/api/" + templateName + ".java"); - String targetPackage; - - if (templateName.contains("RedisSentinel")) { - targetPackage = "com.lambdaworks.redis.sentinel.api.sync"; - } else { - targetPackage = "com.lambdaworks.redis.api.sync"; - } - - factory = new CompilationUnitFactory(templateFile, Constants.SOURCES, targetPackage, targetName, commentMutator(), - methodTypeMutator(), methodDeclaration -> true, importSupplier(), null, null); - } - - /** - * Mutate type comment. 
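// Editor's aside: the API generators in this block rewrite return types from the synchronous
// command templates. The plain-string sketch below restates those mapping rules (async ->
// RedisFuture<T> with void -> Void; reactive -> Observable<T> with void -> Success and
// List<T>/Set<T> unwrapped) without the javaparser types used by CompilationUnitFactory.
// It is illustrative only and not part of the patch.
public final class ReturnTypeMapping {

    static String toAsync(String syncType) {
        String t = "void".equals(syncType) ? "Void" : syncType;
        return "RedisFuture<" + t + ">";
    }

    static String toReactive(String syncType) {
        String t = "void".equals(syncType) ? "Success" : syncType;
        if (t.startsWith("List<")) {
            t = t.substring(5, t.length() - 1);
        } else if (t.startsWith("Set<")) {
            t = t.substring(4, t.length() - 1);
        }
        return "Observable<" + t + ">";
    }

    public static void main(String[] args) {
        System.out.println(toAsync("List<V>"));     // RedisFuture<List<V>>
        System.out.println(toReactive("List<V>"));  // Observable<V>
        System.out.println(toReactive("void"));     // Observable<Success>
    }
}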
- * - * @return - */ - protected Function commentMutator() { - return s -> s.replaceAll("\\$\\{intent\\}", "Synchronous executed commands") + "* @generated by " - + getClass().getName() + "\r\n "; - } - - /** - * Mutate type to async result. - * - * @return - */ - protected Function methodTypeMutator() { - return methodDeclaration -> methodDeclaration.getType(); - } - - /** - * Supply addititional imports. - * - * @return - */ - protected Supplier> importSupplier() { - return () -> Collections.emptyList(); - } - - @Test - public void createInterface() throws Exception { - factory.createInterface(); - } -} diff --git a/src/test/java/com/lambdaworks/apigenerator/CreateSyncAsyncRxApis.java b/src/test/java/com/lambdaworks/apigenerator/CreateSyncAsyncRxApis.java deleted file mode 100644 index 35ecb176fb..0000000000 --- a/src/test/java/com/lambdaworks/apigenerator/CreateSyncAsyncRxApis.java +++ /dev/null @@ -1,14 +0,0 @@ -package com.lambdaworks.apigenerator; - -import org.junit.runner.RunWith; -import org.junit.runners.Suite; - -/** - * @author Mark Paluch - */ -@RunWith(Suite.class) -@Suite.SuiteClasses({ CreateAsyncApi.class, CreateSyncApi.class, CreateReactiveApi.class, - CreateAsyncNodeSelectionClusterApi.class, CreateSyncNodeSelectionClusterApi.class }) -public class CreateSyncAsyncRxApis { - -} diff --git a/src/test/java/com/lambdaworks/apigenerator/CreateSyncNodeSelectionClusterApi.java b/src/test/java/com/lambdaworks/apigenerator/CreateSyncNodeSelectionClusterApi.java deleted file mode 100644 index 27e813df11..0000000000 --- a/src/test/java/com/lambdaworks/apigenerator/CreateSyncNodeSelectionClusterApi.java +++ /dev/null @@ -1,126 +0,0 @@ -package com.lambdaworks.apigenerator; - -import java.io.File; -import java.util.ArrayList; -import java.util.Collections; -import java.util.List; -import java.util.Set; -import java.util.function.Function; -import java.util.function.Predicate; -import java.util.function.Supplier; - -import org.junit.Test; -import org.junit.runner.RunWith; -import org.junit.runners.Parameterized; - -import com.github.javaparser.ast.body.ClassOrInterfaceDeclaration; -import com.github.javaparser.ast.body.MethodDeclaration; -import com.github.javaparser.ast.type.ClassOrInterfaceType; -import com.github.javaparser.ast.type.ReferenceType; -import com.github.javaparser.ast.type.Type; -import com.lambdaworks.redis.internal.LettuceSets; - -/** - * Create sync API based on the templates. 
- * - * @author Mark Paluch - */ -@RunWith(Parameterized.class) -public class CreateSyncNodeSelectionClusterApi { - - private Set FILTER_METHODS = LettuceSets.unmodifiableSet("shutdown", "debugOom", "debugSegfault", "digest", "close", - "isOpen", "BaseRedisCommands.reset", "readOnly", "readWrite", "dispatch"); - - private CompilationUnitFactory factory; - - @Parameterized.Parameters(name = "Create {0}") - public static List arguments() { - List result = new ArrayList<>(); - - for (String templateName : Constants.TEMPLATE_NAMES) { - if (templateName.contains("Transactional") || templateName.contains("Sentinel")) { - continue; - } - result.add(new Object[] { templateName }); - } - - return result; - } - - /** - * @param templateName - */ - public CreateSyncNodeSelectionClusterApi(String templateName) { - - String targetName = templateName.replace("Redis", "NodeSelection"); - File templateFile = new File(Constants.TEMPLATES, "com/lambdaworks/redis/api/" + templateName + ".java"); - String targetPackage = "com.lambdaworks.redis.cluster.api.sync"; - - // todo: remove AutoCloseable from BaseNodeSelectionAsyncCommands - factory = new CompilationUnitFactory(templateFile, Constants.SOURCES, targetPackage, targetName, commentMutator(), - methodTypeMutator(), methodFilter(), importSupplier(), null, null); - } - - /** - * Mutate type comment. - * - * @return - */ - protected Function commentMutator() { - return s -> s.replaceAll("\\$\\{intent\\}", "Synchronous executed commands on a node selection") + "* @generated by " - + getClass().getName() + "\r\n "; - } - - /** - * Mutate type to async result. - * - * @return - */ - protected Predicate methodFilter() { - return method -> { - ClassOrInterfaceDeclaration classOfMethod = (ClassOrInterfaceDeclaration) method.getParentNode(); - if (FILTER_METHODS.contains(method.getName()) - || FILTER_METHODS.contains(classOfMethod.getName() + "." + method.getName())) { - return false; - } - - return true; - }; - } - - /** - * Mutate type to async result. - * - * @return - */ - protected Function methodTypeMutator() { - return method -> { - ClassOrInterfaceDeclaration classOfMethod = (ClassOrInterfaceDeclaration) method.getParentNode(); - if (FILTER_METHODS.contains(method.getName()) - || FILTER_METHODS.contains(classOfMethod.getName() + "." + method.getName())) { - return method.getType(); - } - - String typeAsString = method.getType().toStringWithoutComments().trim(); - if (typeAsString.equals("void")) { - typeAsString = "Void"; - } - - return new ReferenceType(new ClassOrInterfaceType("Executions<" + typeAsString + ">")); - }; - } - - /** - * Supply addititional imports. 
- * - * @return - */ - protected Supplier> importSupplier() { - return () -> Collections.emptyList(); - } - - @Test - public void createInterface() throws Exception { - factory.createInterface(); - } -} diff --git a/src/test/java/com/lambdaworks/category/SlowTests.java b/src/test/java/com/lambdaworks/category/SlowTests.java deleted file mode 100644 index 026b2b6ceb..0000000000 --- a/src/test/java/com/lambdaworks/category/SlowTests.java +++ /dev/null @@ -1,8 +0,0 @@ -package com.lambdaworks.category; - -/** - * @author Mark Paluch - */ -public @interface SlowTests { - -} diff --git a/src/test/java/com/lambdaworks/codec/CRC16Test.java b/src/test/java/com/lambdaworks/codec/CRC16Test.java deleted file mode 100644 index 96676b0d3a..0000000000 --- a/src/test/java/com/lambdaworks/codec/CRC16Test.java +++ /dev/null @@ -1,49 +0,0 @@ -package com.lambdaworks.codec; - -import static org.assertj.core.api.Assertions.assertThat; - -import java.util.ArrayList; -import java.util.List; - -import org.junit.Test; -import org.junit.runner.RunWith; -import org.junit.runners.Parameterized; - -@RunWith(Parameterized.class) -public class CRC16Test { - - private byte[] bytes; - private int expected; - - public CRC16Test(byte[] bytes, int expected, String hex) { - this.bytes = bytes; - this.expected = expected; - } - - @Parameterized.Parameters(name = "{2}") - public static List parameters() { - - List parameters = new ArrayList<>(); - - params(parameters, "".getBytes(), 0x0); - params(parameters, "123456789".getBytes(), 0x31C3); - params(parameters, "sfger132515".getBytes(), 0xA45C); - params(parameters, "hae9Napahngaikeethievubaibogiech".getBytes(), 0x58CE); - params(parameters, "AAAAAAAAAAAAAAAAAAAAAA".getBytes(), 0x92cd); - params(parameters, "Hello, World!".getBytes(), 0x4FD6); - - return parameters; - } - - private static void params(List parameters, byte[] bytes, int expectation) { - parameters.add(new Object[] { bytes, expectation, "0x" + Integer.toHexString(expectation).toUpperCase() }); - } - - @Test - public void testCRC16() throws Exception { - - int result = CRC16.crc16(bytes); - assertThat(result).describedAs("Expects " + Integer.toHexString(expected)).isEqualTo(expected); - - } -} diff --git a/src/test/java/com/lambdaworks/examples/ConnectToElastiCacheMaster.java b/src/test/java/com/lambdaworks/examples/ConnectToElastiCacheMaster.java deleted file mode 100644 index 98fc3a35fd..0000000000 --- a/src/test/java/com/lambdaworks/examples/ConnectToElastiCacheMaster.java +++ /dev/null @@ -1,29 +0,0 @@ -package com.lambdaworks.examples; - -import com.lambdaworks.redis.RedisClient; -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.resource.DefaultClientResources; -import com.lambdaworks.redis.resource.DirContextDnsResolver; - -/** - * @author Mark Paluch - */ -public class ConnectToElastiCacheMaster { - - public static void main(String[] args) { - - // Syntax: redis://[password@]host[:port][/databaseNumber] - - DefaultClientResources clientResources = DefaultClientResources.builder() // - .dnsResolver(new DirContextDnsResolver()) // Does not cache DNS lookups - .build(); - - RedisClient redisClient = RedisClient.create(clientResources, "redis://password@localhost:6379/0"); - StatefulRedisConnection connection = redisClient.connect(); - - System.out.println("Connected to Redis"); - - connection.close(); - redisClient.shutdown(); - } -} diff --git a/src/test/java/com/lambdaworks/examples/ConnectToMasterSlaveUsingElastiCacheCluster.java 
b/src/test/java/com/lambdaworks/examples/ConnectToMasterSlaveUsingElastiCacheCluster.java deleted file mode 100644 index 533c3b6255..0000000000 --- a/src/test/java/com/lambdaworks/examples/ConnectToMasterSlaveUsingElastiCacheCluster.java +++ /dev/null @@ -1,37 +0,0 @@ -package com.lambdaworks.examples; - -import java.util.Arrays; -import java.util.List; - -import com.lambdaworks.redis.ReadFrom; -import com.lambdaworks.redis.RedisClient; -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.codec.Utf8StringCodec; -import com.lambdaworks.redis.masterslave.MasterSlave; -import com.lambdaworks.redis.masterslave.StatefulRedisMasterSlaveConnection; - -/** - * @author Mark Paluch - */ -public class ConnectToMasterSlaveUsingElastiCacheCluster { - - public static void main(String[] args) { - - // Syntax: redis://[password@]host[:port][/databaseNumber] - RedisClient redisClient = RedisClient.create(); - - List nodes = Arrays.asList(RedisURI.create("redis://host1"), - RedisURI.create("redis://host2"), - RedisURI.create("redis://host3")); - - StatefulRedisMasterSlaveConnection connection = MasterSlave - .connect(redisClient, new Utf8StringCodec(), nodes); - connection.setReadFrom(ReadFrom.MASTER_PREFERRED); - - System.out.println("Connected to Redis"); - - connection.close(); - redisClient.shutdown(); - } -} diff --git a/src/test/java/com/lambdaworks/examples/ConnectToMasterSlaveUsingRedisSentinel.java b/src/test/java/com/lambdaworks/examples/ConnectToMasterSlaveUsingRedisSentinel.java deleted file mode 100644 index 279e79e44f..0000000000 --- a/src/test/java/com/lambdaworks/examples/ConnectToMasterSlaveUsingRedisSentinel.java +++ /dev/null @@ -1,28 +0,0 @@ -package com.lambdaworks.examples; - -import com.lambdaworks.redis.ReadFrom; -import com.lambdaworks.redis.RedisClient; -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.codec.Utf8StringCodec; -import com.lambdaworks.redis.masterslave.MasterSlave; -import com.lambdaworks.redis.masterslave.StatefulRedisMasterSlaveConnection; - -/** - * @author Mark Paluch - */ -public class ConnectToMasterSlaveUsingRedisSentinel { - - public static void main(String[] args) { - // Syntax: redis-sentinel://[password@]host[:port][,host2[:port2]][/databaseNumber]#sentinelMasterId - RedisClient redisClient = RedisClient.create(); - - StatefulRedisMasterSlaveConnection connection = MasterSlave.connect(redisClient, new Utf8StringCodec(), - RedisURI.create("redis-sentinel://localhost:26379,localhost:26380/0#mymaster")); - connection.setReadFrom(ReadFrom.MASTER_PREFERRED); - - System.out.println("Connected to Redis"); - - connection.close(); - redisClient.shutdown(); - } -} diff --git a/src/test/java/com/lambdaworks/examples/ConnectToRedis.java b/src/test/java/com/lambdaworks/examples/ConnectToRedis.java deleted file mode 100644 index a7448257f8..0000000000 --- a/src/test/java/com/lambdaworks/examples/ConnectToRedis.java +++ /dev/null @@ -1,24 +0,0 @@ -package com.lambdaworks.examples; - -import com.lambdaworks.redis.RedisClient; -import com.lambdaworks.redis.RedisConnection; -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.api.StatefulRedisConnection; - -/** - * @author Mark Paluch - */ -public class ConnectToRedis { - - public static void main(String[] args) { - - // Syntax: redis://[password@]host[:port][/databaseNumber] - RedisClient redisClient = RedisClient.create("redis://password@localhost:6379/0"); - StatefulRedisConnection connection = 
redisClient.connect(); - - System.out.println("Connected to Redis"); - - connection.close(); - redisClient.shutdown(); - } -} diff --git a/src/test/java/com/lambdaworks/examples/ConnectToRedisCluster.java b/src/test/java/com/lambdaworks/examples/ConnectToRedisCluster.java deleted file mode 100644 index 9e57c28fa6..0000000000 --- a/src/test/java/com/lambdaworks/examples/ConnectToRedisCluster.java +++ /dev/null @@ -1,26 +0,0 @@ -package com.lambdaworks.examples; - -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.cluster.RedisAdvancedClusterConnection; -import com.lambdaworks.redis.cluster.RedisClusterClient; -import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection; - -/** - * @author Mark Paluch - */ -public class ConnectToRedisCluster { - - public static void main(String[] args) { - - // Syntax: redis://[password@]host[:port] - RedisClusterClient redisClient = RedisClusterClient.create("redis://password@localhost:7379"); - - StatefulRedisClusterConnection connection = redisClient.connect(); - - System.out.println("Connected to Redis"); - - connection.close(); - redisClient.shutdown(); - } -} diff --git a/src/test/java/com/lambdaworks/examples/ConnectToRedisClusterSSL.java b/src/test/java/com/lambdaworks/examples/ConnectToRedisClusterSSL.java deleted file mode 100644 index 858329e045..0000000000 --- a/src/test/java/com/lambdaworks/examples/ConnectToRedisClusterSSL.java +++ /dev/null @@ -1,26 +0,0 @@ -package com.lambdaworks.examples; - -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.cluster.RedisClusterClient; -import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection; - -/** - * @author Mark Paluch - */ -public class ConnectToRedisClusterSSL { - - public static void main(String[] args) { - - // Syntax: rediss://[password@]host[:port] - RedisURI redisURI = RedisURI.create("rediss://password@localhost:7379"); - redisURI.setVerifyPeer(false); // depending on your setup, you might want to disable peer verification - - RedisClusterClient redisClient = RedisClusterClient.create(redisURI); - StatefulRedisClusterConnection connection = redisClient.connect(); - - System.out.println("Connected to Redis"); - - connection.close(); - redisClient.shutdown(); - } -} diff --git a/src/test/java/com/lambdaworks/examples/ConnectToRedisClusterWithTopologyRefreshing.java b/src/test/java/com/lambdaworks/examples/ConnectToRedisClusterWithTopologyRefreshing.java deleted file mode 100644 index 10faa5da8b..0000000000 --- a/src/test/java/com/lambdaworks/examples/ConnectToRedisClusterWithTopologyRefreshing.java +++ /dev/null @@ -1,38 +0,0 @@ -package com.lambdaworks.examples; - -import java.util.concurrent.TimeUnit; - -import com.lambdaworks.redis.cluster.ClusterClientOptions; -import com.lambdaworks.redis.cluster.ClusterTopologyRefreshOptions; -import com.lambdaworks.redis.cluster.RedisClusterClient; -import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection; - -/** - * @author Mark Paluch - */ -public class ConnectToRedisClusterWithTopologyRefreshing { - - public static void main(String[] args) { - - // Syntax: redis://[password@]host[:port] - RedisClusterClient redisClient = RedisClusterClient.create("redis://password@localhost:7379"); - - ClusterTopologyRefreshOptions clusterTopologyRefreshOptions = ClusterTopologyRefreshOptions.builder()// - .enablePeriodicRefresh(30, TimeUnit.MINUTES)// - .enableAllAdaptiveRefreshTriggers()// - .build(); - - ClusterClientOptions 
clusterClientOptions = ClusterClientOptions.builder()// - .topologyRefreshOptions(clusterTopologyRefreshOptions)// - .build(); - - redisClient.setOptions(clusterClientOptions); - - StatefulRedisClusterConnection connection = redisClient.connect(); - - System.out.println("Connected to Redis"); - - connection.close(); - redisClient.shutdown(); - } -} diff --git a/src/test/java/com/lambdaworks/examples/ConnectToRedisSSL.java b/src/test/java/com/lambdaworks/examples/ConnectToRedisSSL.java deleted file mode 100644 index 9182c6a4ad..0000000000 --- a/src/test/java/com/lambdaworks/examples/ConnectToRedisSSL.java +++ /dev/null @@ -1,24 +0,0 @@ -package com.lambdaworks.examples; - -import com.lambdaworks.redis.RedisClient; -import com.lambdaworks.redis.api.StatefulRedisConnection; - -/** - * @author Mark Paluch - */ -public class ConnectToRedisSSL { - - public static void main(String[] args) { - - // Syntax: rediss://[password@]host[:port][/databaseNumber] - // Adopt the port to the stunnel port in front of your Redis instance - RedisClient redisClient = RedisClient.create("rediss://password@localhost:6443/0"); - - StatefulRedisConnection connection = redisClient.connect(); - - System.out.println("Connected to Redis using SSL"); - - connection.close(); - redisClient.shutdown(); - } -} diff --git a/src/test/java/com/lambdaworks/examples/ConnectToRedisUsingRedisSentinel.java b/src/test/java/com/lambdaworks/examples/ConnectToRedisUsingRedisSentinel.java deleted file mode 100644 index f6c2227211..0000000000 --- a/src/test/java/com/lambdaworks/examples/ConnectToRedisUsingRedisSentinel.java +++ /dev/null @@ -1,23 +0,0 @@ -package com.lambdaworks.examples; - -import com.lambdaworks.redis.RedisClient; -import com.lambdaworks.redis.api.StatefulRedisConnection; - -/** - * @author Mark Paluch - */ -public class ConnectToRedisUsingRedisSentinel { - - public static void main(String[] args) { - - // Syntax: redis-sentinel://[password@]host[:port][,host2[:port2]][/databaseNumber]#sentinelMasterId - RedisClient redisClient = RedisClient.create("redis-sentinel://localhost:26379,localhost:26380/0#mymaster"); - - StatefulRedisConnection connection = redisClient.connect(); - - System.out.println("Connected to Redis using Redis Sentinel"); - - connection.close(); - redisClient.shutdown(); - } -} diff --git a/src/test/java/com/lambdaworks/examples/MySpringBean.java b/src/test/java/com/lambdaworks/examples/MySpringBean.java deleted file mode 100644 index b4ef6ca145..0000000000 --- a/src/test/java/com/lambdaworks/examples/MySpringBean.java +++ /dev/null @@ -1,29 +0,0 @@ -package com.lambdaworks.examples; - -import com.lambdaworks.redis.*; -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.api.sync.RedisCommands; -import org.springframework.beans.factory.annotation.Autowired; - -/** - * @author Mark Paluch - */ -public class MySpringBean { - - private RedisClient redisClient; - - @Autowired - public void setRedisClient(RedisClient redisClient) { - this.redisClient = redisClient; - } - - public String ping() { - - StatefulRedisConnection connection = redisClient.connect(); - - RedisCommands sync = connection.sync(); - String result = sync.ping(); - connection.close(); - return result; - } -} diff --git a/src/test/java/com/lambdaworks/examples/ReadWriteExample.java b/src/test/java/com/lambdaworks/examples/ReadWriteExample.java deleted file mode 100644 index 2761532f83..0000000000 --- a/src/test/java/com/lambdaworks/examples/ReadWriteExample.java +++ /dev/null @@ -1,31 +0,0 @@ -package 
com.lambdaworks.examples; - -import com.lambdaworks.redis.RedisClient; -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.api.sync.RedisCommands; - -/** - * @author Mark Paluch - */ -public class ReadWriteExample { - - public static void main(String[] args) { - - // Syntax: redis://[password@]host[:port][/databaseNumber] - RedisClient redisClient = RedisClient.create(RedisURI.create("redis://password@localhost:6379/0")); - - StatefulRedisConnection connection = redisClient.connect(); - - System.out.println("Connected to Redis"); - - RedisCommands sync = connection.sync(); - - sync.set("foo", "bar"); - String value = sync.get("foo"); - System.out.println(value); - - connection.close(); - redisClient.shutdown(); - } -} diff --git a/src/test/java/com/lambdaworks/examples/SpringExample.java b/src/test/java/com/lambdaworks/examples/SpringExample.java deleted file mode 100644 index 5eedd2c601..0000000000 --- a/src/test/java/com/lambdaworks/examples/SpringExample.java +++ /dev/null @@ -1,34 +0,0 @@ -package com.lambdaworks.examples; - -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.api.sync.RedisCommands; -import org.springframework.context.support.ClassPathXmlApplicationContext; - -import com.lambdaworks.redis.RedisClient; -import com.lambdaworks.redis.RedisConnection; - -/** - * @author Mark Paluch - */ -public class SpringExample { - - public static void main(String[] args) { - - ClassPathXmlApplicationContext context = new ClassPathXmlApplicationContext( - "com/lambdaworks/examples/SpringTest-context.xml"); - - RedisClient client = context.getBean(RedisClient.class); - - StatefulRedisConnection connection = client.connect(); - - RedisCommands sync = connection.sync(); - System.out.println("PING: " + sync.ping()); - connection.close(); - - MySpringBean mySpringBean = context.getBean(MySpringBean.class); - System.out.println("PING: " + mySpringBean.ping()); - - context.close(); - } - -} diff --git a/src/test/java/com/lambdaworks/redis/AbstractRedisClientTest.java b/src/test/java/com/lambdaworks/redis/AbstractRedisClientTest.java deleted file mode 100644 index 6a65140f46..0000000000 --- a/src/test/java/com/lambdaworks/redis/AbstractRedisClientTest.java +++ /dev/null @@ -1,84 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. 
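
The example sources deleted above (the `ConnectTo*` classes, `ReadWriteExample`, `SpringExample`) all follow the same connect, use, close pattern and differ mainly in the `RedisURI` scheme (`redis://`, `rediss://`, `redis-sentinel://`). As a quick reference, a minimal sketch of that pattern, with the `<String, String>` type parameters restored (the flattened diff drops generics) and placeholder host, port and password, might look like this:

```java
import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.RedisURI;
import com.lambdaworks.redis.api.StatefulRedisConnection;
import com.lambdaworks.redis.api.sync.RedisCommands;

public class BasicUsageSketch {

    public static void main(String[] args) {

        // Syntax: redis://[password@]host[:port][/databaseNumber] -- same URI form as in the removed examples
        RedisClient redisClient = RedisClient.create(RedisURI.create("redis://password@localhost:6379/0"));

        StatefulRedisConnection<String, String> connection = redisClient.connect();
        RedisCommands<String, String> sync = connection.sync();

        sync.set("foo", "bar");                // write a value
        System.out.println(sync.get("foo"));   // read it back

        connection.close();                    // release the connection
        redisClient.shutdown();                // release client resources
    }
}
```
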
- -package com.lambdaworks.redis; - -import org.junit.After; -import org.junit.Before; -import org.junit.BeforeClass; - -import com.lambdaworks.redis.api.sync.RedisCommands; - -public abstract class AbstractRedisClientTest extends AbstractTest { - - protected static RedisClient client; - protected RedisCommands redis; - - @BeforeClass - public static void setupClient() { - client = DefaultRedisClient.get(); - client.setOptions(ClientOptions.create()); - } - - protected static RedisClient newRedisClient() { - return RedisClient.create(RedisURI.Builder.redis(host, port).build()); - } - - protected RedisCommands connect() { - RedisCommands connect = client.connect().sync(); - return connect; - } - - @Before - public void openConnection() throws Exception { - client.setOptions(ClientOptions.builder().build()); - redis = connect(); - boolean scriptRunning; - do { - - scriptRunning = false; - - try { - redis.flushall(); - redis.flushdb(); - } catch (RedisException e) { - if (e.getMessage() != null && e.getMessage().contains("BUSY")) { - scriptRunning = true; - try { - redis.scriptKill(); - } catch (RedisException e1) { - // I know, it sounds crazy, but there is a possibility where one of the commands above raises BUSY. - // Meanwhile the script ends and a call to SCRIPT KILL says NOTBUSY. - } - } - } - } while (scriptRunning); - } - - @After - public void closeConnection() throws Exception { - if (redis != null) { - redis.close(); - } - } - - public abstract class WithPasswordRequired { - protected abstract void run(RedisClient client) throws Exception; - - public WithPasswordRequired() throws Exception { - try { - redis.configSet("requirepass", passwd); - redis.auth(passwd); - - RedisClient client = newRedisClient(); - try { - run(client); - } finally { - FastShutdown.shutdown(client); - } - } finally { - - redis.configSet("requirepass", ""); - } - } - } -} diff --git a/src/test/java/com/lambdaworks/redis/AbstractTest.java b/src/test/java/com/lambdaworks/redis/AbstractTest.java deleted file mode 100644 index d9c6c086e1..0000000000 --- a/src/test/java/com/lambdaworks/redis/AbstractTest.java +++ /dev/null @@ -1,53 +0,0 @@ -package com.lambdaworks.redis; - -import java.util.Arrays; -import java.util.List; -import java.util.Set; - -import org.apache.logging.log4j.LogManager; -import org.apache.logging.log4j.Logger; -import org.junit.Rule; - -import com.lambdaworks.LoggingTestRule; -import com.lambdaworks.redis.internal.LettuceSets; - -/** - * @author Mark Paluch - */ -public class AbstractTest { - - public static final String host = TestSettings.host(); - public static final int port = TestSettings.port(); - public static final String passwd = TestSettings.password(); - - @Rule - public LoggingTestRule loggingTestRule = new LoggingTestRule(false); - - protected Logger log = LogManager.getLogger(getClass()); - protected String key = "key"; - protected String value = "value"; - - public static List list(String... args) { - return Arrays.asList(args); - } - - public static List list(Object... args) { - return Arrays.asList(args); - } - - public static List> svlist(ScoredValue... args) { - return Arrays.asList(args); - } - - public static KeyValue kv(String key, String value) { - return new KeyValue<>(key, value); - } - - public static ScoredValue sv(double score, String value) { - return new ScoredValue(score, value); - } - - public static Set set(String... 
args) { - return LettuceSets.newHashSet(args); - } -} diff --git a/src/test/java/com/lambdaworks/redis/AllTheAPIsTest.java b/src/test/java/com/lambdaworks/redis/AllTheAPIsTest.java deleted file mode 100644 index 05df78814d..0000000000 --- a/src/test/java/com/lambdaworks/redis/AllTheAPIsTest.java +++ /dev/null @@ -1,248 +0,0 @@ -package com.lambdaworks.redis; - -import org.junit.BeforeClass; -import org.junit.Test; - -import com.lambdaworks.redis.cluster.RedisClusterClient; -import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection; -import com.lambdaworks.redis.cluster.api.async.AsyncNodeSelection; - -/** - * @author Mark Paluch - */ -public class AllTheAPIsTest { - - private static RedisClient redisClient = DefaultRedisClient.get(); - private static RedisClusterClient clusterClient; - private static int clusterPort; - - @BeforeClass - public static void beforeClass() throws Exception { - clusterPort = TestSettings.port(900); - clusterClient = RedisClusterClient.create(RedisURI.Builder.redis(TestSettings.host(), clusterPort).build()); - } - - @BeforeClass - public static void afterClass() throws Exception { - if (clusterClient != null) { - FastShutdown.shutdown(clusterClient); - } - } - - // Standalone - @Test - public void standaloneSync() throws Exception { - redisClient.connect().close(); - } - - @Test - public void standaloneAsync() throws Exception { - redisClient.connect().async().close(); - } - - @Test - public void standaloneReactive() throws Exception { - redisClient.connect().reactive().close(); - } - - @Test - public void standaloneStateful() throws Exception { - redisClient.connect().close(); - } - - @Test - public void deprecatedStandaloneAsync() throws Exception { - redisClient.connectAsync().close(); - } - - @Test - public void deprecatedStandaloneReactive() throws Exception { - redisClient.connectAsync().getStatefulConnection().reactive().close(); - } - - @Test - public void deprecatedStandaloneStateful() throws Exception { - redisClient.connectAsync().getStatefulConnection().close(); - } - - // PubSub - @Test - public void pubsubSync() throws Exception { - redisClient.connectPubSub().close(); - } - - @Test - public void pubsubAsync() throws Exception { - redisClient.connectPubSub().close(); - } - - @Test - public void pubsubReactive() throws Exception { - redisClient.connectPubSub().close(); - } - - @Test - public void pubsubStateful() throws Exception { - redisClient.connectPubSub().close(); - } - - // Sentinel - @Test - public void sentinelSync() throws Exception { - redisClient.connectSentinel().sync().close(); - } - - @Test - public void sentinelAsync() throws Exception { - redisClient.connectSentinel().async().close(); - } - - @Test - public void sentinelReactive() throws Exception { - redisClient.connectSentinel().reactive().close(); - } - - @Test - public void sentinelStateful() throws Exception { - redisClient.connectSentinel().close(); - } - - @Test - public void deprecatedSentinelSync() throws Exception { - redisClient.connectSentinelAsync().getStatefulConnection().sync().close(); - } - - @Test - public void deprecatedSentinelAsync() throws Exception { - redisClient.connectSentinelAsync().getStatefulConnection().async().close(); - } - - @Test - public void deprecatedSentinelReactive() throws Exception { - redisClient.connectSentinelAsync().getStatefulConnection().reactive().close(); - } - - @Test - public void deprecatedSentinelStateful() throws Exception { - redisClient.connectSentinelAsync().getStatefulConnection().close(); - } - - // Pool - @Test 
- public void poolSync() throws Exception { - redisClient.pool().close(); - } - - @Test - public void poolAsync() throws Exception { - redisClient.asyncPool().close(); - } - - // Cluster - @Test - public void clusterSync() throws Exception { - clusterClient.connect().sync().close(); - } - - @Test - public void clusterAsync() throws Exception { - clusterClient.connect().async().close(); - } - - @Test - public void clusterReactive() throws Exception { - clusterClient.connect().reactive().close(); - } - - @Test - public void clusterStateful() throws Exception { - clusterClient.connect().close(); - } - - @Test - public void clusterPubSubSync() throws Exception { - clusterClient.connectPubSub().sync().close(); - } - - @Test - public void clusterPubSubAsync() throws Exception { - clusterClient.connectPubSub().async().close(); - } - - @Test - public void clusterPubSubReactive() throws Exception { - clusterClient.connectPubSub().reactive().close(); - } - - @Test - public void clusterPubSubStateful() throws Exception { - clusterClient.connectPubSub().close(); - } - - @Test - public void deprecatedClusterSync() throws Exception { - clusterClient.connectCluster().getStatefulConnection().sync().close(); - } - - @Test - public void deprecatedClusterAsync() throws Exception { - clusterClient.connectCluster().getStatefulConnection().async().close(); - } - - @Test - public void deprecatedClusterReactive() throws Exception { - clusterClient.connectCluster().getStatefulConnection().reactive().close(); - } - - @Test - public void deprecatedClusterStateful() throws Exception { - clusterClient.connectCluster().getStatefulConnection().close(); - } - - // Advanced Cluster - @Test - public void advancedClusterSync() throws Exception { - StatefulRedisClusterConnection statefulConnection = clusterClient.connectCluster() - .getStatefulConnection(); - RedisURI uri = clusterClient.getPartitions().getPartition(0).getUri(); - statefulConnection.getConnection(uri.getHost(), uri.getPort()).sync(); - statefulConnection.close(); - } - - @Test - public void advancedClusterAsync() throws Exception { - StatefulRedisClusterConnection statefulConnection = clusterClient.connectCluster() - .getStatefulConnection(); - RedisURI uri = clusterClient.getPartitions().getPartition(0).getUri(); - statefulConnection.getConnection(uri.getHost(), uri.getPort()).sync(); - statefulConnection.close(); - } - - @Test - public void advancedClusterReactive() throws Exception { - StatefulRedisClusterConnection statefulConnection = clusterClient.connectCluster() - .getStatefulConnection(); - RedisURI uri = clusterClient.getPartitions().getPartition(0).getUri(); - statefulConnection.getConnection(uri.getHost(), uri.getPort()).reactive(); - statefulConnection.close(); - } - - @Test - public void advancedClusterStateful() throws Exception { - clusterClient.connect().close(); - } - - @Test - public void deprecatedAvancedClusterStateful() throws Exception { - clusterClient.connectCluster().getStatefulConnection().close(); - } - - // Cluster node selection - @Test - public void nodeSelectionClusterAsync() throws Exception { - StatefulRedisClusterConnection statefulConnection = clusterClient.connect(); - AsyncNodeSelection masters = statefulConnection.async().masters(); - statefulConnection.close(); - } - -} diff --git a/src/test/java/com/lambdaworks/redis/AsyncConnectionTest.java b/src/test/java/com/lambdaworks/redis/AsyncConnectionTest.java deleted file mode 100644 index 6ea4e377ef..0000000000 --- 
a/src/test/java/com/lambdaworks/redis/AsyncConnectionTest.java +++ /dev/null @@ -1,151 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis; - -import static org.assertj.core.api.Assertions.assertThat; - -import java.util.ArrayList; -import java.util.List; -import java.util.concurrent.Future; -import java.util.concurrent.TimeUnit; - -import org.junit.After; -import org.junit.Before; -import org.junit.Rule; -import org.junit.Test; -import org.junit.rules.ExpectedException; - -public class AsyncConnectionTest extends AbstractRedisClientTest { - private RedisAsyncConnection async; - - @Rule - public ExpectedException exception = ExpectedException.none(); - - @Before - public void openAsyncConnection() throws Exception { - async = client.connectAsync(); - } - - @After - public void closeAsyncConnection() throws Exception { - async.close(); - } - - @Test(timeout = 10000) - public void multi() throws Exception { - assertThat(async.multi().get()).isEqualTo("OK"); - Future set = async.set(key, value); - Future rpush = async.rpush("list", "1", "2"); - Future> lrange = async.lrange("list", 0, -1); - - assertThat(!set.isDone() && !rpush.isDone() && !rpush.isDone()).isTrue(); - assertThat(async.exec().get()).isEqualTo(list("OK", 2L, list("1", "2"))); - - assertThat(set.get()).isEqualTo("OK"); - assertThat((long) rpush.get()).isEqualTo(2L); - assertThat(lrange.get()).isEqualTo(list("1", "2")); - } - - @Test(timeout = 10000) - public void watch() throws Exception { - assertThat(async.watch(key).get()).isEqualTo("OK"); - - redis.set(key, value + "X"); - - async.multi(); - Future set = async.set(key, value); - Future append = async.append(key, "foo"); - assertThat(async.exec().get()).isEqualTo(list()); - assertThat(set.get()).isNull(); - assertThat(append.get()).isNull(); - } - - @Test(timeout = 10000) - public void futureListener() throws Exception { - - final List run = new ArrayList<>(); - - Runnable listener = new Runnable() { - @Override - public void run() { - run.add(new Object()); - } - }; - - for (int i = 0; i < 1000; i++) { - redis.lpush(key, "" + i); - } - - RedisAsyncConnection connection = client.connectAsync(); - - Long len = connection.llen(key).get(); - assertThat(len.intValue()).isEqualTo(1000); - - RedisFuture> sort = connection.sort(key); - assertThat(sort.isCancelled()).isFalse(); - - sort.thenRun(listener); - - sort.get(); - Thread.sleep(100); - - assertThat(run).hasSize(1); - - connection.close(); - - } - - @Test(timeout = 1000) - public void futureListenerCompleted() throws Exception { - - final List run = new ArrayList<>(); - - Runnable listener = new Runnable() { - @Override - public void run() { - run.add(new Object()); - } - }; - - RedisAsyncConnection connection = client.connectAsync(); - - RedisFuture set = connection.set(key, value); - set.get(); - - set.thenRun(listener); - - assertThat(run).hasSize(1); - - connection.close(); - } - - @Test(timeout = 500) - public void discardCompletesFutures() throws Exception { - async.multi(); - Future set = async.set(key, value); - async.discard(); - assertThat(set.get()).isNull(); - } - - @Test(timeout = 10000) - public void awaitAll() throws Exception { - Future get1 = async.get(key); - Future set = async.set(key, value); - Future get2 = async.get(key); - Future append = async.append(key, value); - - assertThat(LettuceFutures.awaitAll(1, TimeUnit.SECONDS, get1, set, get2, append)).isTrue(); - - assertThat(get1.get()).isNull(); - assertThat(set.get()).isEqualTo("OK"); - 
assertThat(get2.get()).isEqualTo(value); - assertThat((long) append.get()).isEqualTo(value.length() * 2); - } - - @Test(timeout = 500) - public void awaitAllTimeout() throws Exception { - Future> blpop = async.blpop(1, key); - assertThat(LettuceFutures.awaitAll(1, TimeUnit.NANOSECONDS, blpop)).isFalse(); - } - -} diff --git a/src/test/java/com/lambdaworks/redis/ClientMetricsTest.java b/src/test/java/com/lambdaworks/redis/ClientMetricsTest.java deleted file mode 100644 index 30fa45d35f..0000000000 --- a/src/test/java/com/lambdaworks/redis/ClientMetricsTest.java +++ /dev/null @@ -1,116 +0,0 @@ -package com.lambdaworks.redis; - -import static com.google.code.tempusfugit.temporal.Duration.seconds; -import static com.google.code.tempusfugit.temporal.Timeout.timeout; -import static com.lambdaworks.redis.AbstractRedisClientTest.client; -import static org.assertj.core.api.Assertions.assertThat; - -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.metrics.CommandLatencyId; -import com.lambdaworks.redis.metrics.CommandMetrics; -import org.junit.*; -import org.springframework.test.util.ReflectionTestUtils; - -import com.google.code.tempusfugit.temporal.Condition; -import com.google.code.tempusfugit.temporal.WaitFor; -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.event.Event; -import com.lambdaworks.redis.event.EventBus; -import com.lambdaworks.redis.event.metrics.CommandLatencyEvent; -import com.lambdaworks.redis.event.metrics.MetricEventPublisher; - -import rx.Subscription; -import rx.functions.Func1; -import rx.observers.TestSubscriber; - -import java.util.Set; -import java.util.concurrent.TimeUnit; - -/** - * @author Mark Paluch - */ -public class ClientMetricsTest extends AbstractTest { - - private RedisCommands redis; - - @BeforeClass - public static void setupClient() { - client = RedisClient.create(RedisURI.Builder.redis(host, port).build()); - } - - @Before - public void before() throws Exception { - redis = client.connect().sync(); - } - - @AfterClass - public static void afterClass() { - FastShutdown.shutdown(client); - } - - @Test - public void testMetricsEvent() throws Exception { - - EventBus eventBus = client.getResources().eventBus(); - MetricEventPublisher publisher = (MetricEventPublisher) ReflectionTestUtils.getField(client.getResources(), - "metricEventPublisher"); - publisher.emitMetricsEvent(); - - TestSubscriber subscriber = new TestSubscriber(); - Subscription subscription = eventBus.get().filter(redisEvent -> redisEvent instanceof CommandLatencyEvent).cast(CommandLatencyEvent.class).subscribe(subscriber); - - generateTestData(); - publisher.emitMetricsEvent(); - - WaitFor.waitOrTimeout(() -> !subscriber.getOnNextEvents().isEmpty(), timeout(seconds(5))); - - subscription.unsubscribe(); - - subscriber.assertValueCount(1); - } - - @Test - public void testMetrics() throws Exception { - - EventBus eventBus = client.getResources().eventBus(); - MetricEventPublisher publisher = (MetricEventPublisher) ReflectionTestUtils.getField(client.getResources(), - "metricEventPublisher"); - publisher.emitMetricsEvent(); - - TestSubscriber subscriber = new TestSubscriber(); - Subscription subscription = eventBus.get().filter(redisEvent -> redisEvent instanceof CommandLatencyEvent).cast(CommandLatencyEvent.class).subscribe(subscriber); - - generateTestData(); - publisher.emitMetricsEvent(); - - WaitFor.waitOrTimeout(() -> !subscriber.getOnNextEvents().isEmpty(), timeout(seconds(5))); - subscription.unsubscribe(); - - 
subscriber.assertValueCount(1); - - CommandLatencyEvent event = subscriber.getOnNextEvents().get(0); - - Set ids = event.getLatencies().keySet(); - CommandMetrics commandMetrics = event.getLatencies().get(ids.iterator().next()); - assertThat(commandMetrics.getCompletion().getMin()).isBetween(0L, TimeUnit.MILLISECONDS.toMicros(100)); - assertThat(commandMetrics.getCompletion().getMax()).isBetween(0L, TimeUnit.MILLISECONDS.toMicros(200)); - - assertThat(commandMetrics.getFirstResponse().getMin()).isBetween(0L, TimeUnit.MILLISECONDS.toMicros(100)); - assertThat(commandMetrics.getFirstResponse().getMax()).isBetween(0L, TimeUnit.MILLISECONDS.toMicros(200)); - } - - private void generateTestData() { - redis.set(key, value); - redis.set(key, value); - redis.set(key, value); - redis.set(key, value); - redis.set(key, value); - redis.set(key, value); - - redis.get(key); - redis.get(key); - redis.get(key); - redis.get(key); - redis.get(key); - } -} diff --git a/src/test/java/com/lambdaworks/redis/ClientOptionsTest.java b/src/test/java/com/lambdaworks/redis/ClientOptionsTest.java deleted file mode 100644 index b81efa0e22..0000000000 --- a/src/test/java/com/lambdaworks/redis/ClientOptionsTest.java +++ /dev/null @@ -1,339 +0,0 @@ -package com.lambdaworks.redis; - -import static com.lambdaworks.Connections.getChannel; -import static com.lambdaworks.Connections.getConnectionWatchdog; -import static com.lambdaworks.Connections.getStatefulConnection; -import static org.assertj.core.api.Assertions.assertThat; -import static org.assertj.core.api.Assertions.fail; - -import java.util.concurrent.TimeUnit; - -import org.junit.Test; - -import com.lambdaworks.Wait; -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.api.async.RedisAsyncCommands; -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.codec.Utf8StringCodec; -import com.lambdaworks.redis.output.StatusOutput; -import com.lambdaworks.redis.protocol.*; - -import io.netty.channel.Channel; - -/** - * @author Mark Paluch - */ -public class ClientOptionsTest extends AbstractRedisClientTest { - - @Test - public void testNew() throws Exception { - checkAssertions(ClientOptions.create()); - } - - @Test - public void testBuilder() throws Exception { - checkAssertions(ClientOptions.builder().build()); - } - - @Test - public void testCopy() throws Exception { - checkAssertions(ClientOptions.copyOf(ClientOptions.builder().build())); - } - - protected void checkAssertions(ClientOptions sut) { - assertThat(sut.isAutoReconnect()).isEqualTo(true); - assertThat(sut.isCancelCommandsOnReconnectFailure()).isEqualTo(false); - assertThat(sut.isPingBeforeActivateConnection()).isEqualTo(false); - assertThat(sut.isSuspendReconnectOnProtocolFailure()).isEqualTo(false); - assertThat(sut.getDisconnectedBehavior()).isEqualTo(ClientOptions.DisconnectedBehavior.DEFAULT); - } - - @Test - public void variousClientOptions() throws Exception { - - RedisAsyncCommands plain = client.connect().async(); - - assertThat(getStatefulConnection(plain).getOptions().isAutoReconnect()).isTrue(); - - client.setOptions(ClientOptions.builder().autoReconnect(false).build()); - RedisAsyncCommands connection = client.connect().async(); - assertThat(getStatefulConnection(connection).getOptions().isAutoReconnect()).isFalse(); - - assertThat(getStatefulConnection(plain).getOptions().isAutoReconnect()).isTrue(); - - plain.close(); - connection.close(); - } - - @Test - public void requestQueueSize() throws Exception { - - 
client.setOptions(ClientOptions.builder().requestQueueSize(10).build()); - - RedisAsyncCommands connection = client.connect().async(); - getConnectionWatchdog(connection.getStatefulConnection()).setListenOnChannelInactive(false); - - connection.quit(); - - Wait.untilTrue(() -> !connection.isOpen()).waitOrTimeout(); - - for (int i = 0; i < 10; i++) { - connection.ping(); - } - - try { - connection.ping(); - fail("missing RedisException"); - } catch (RedisException e) { - assertThat(e).hasMessageContaining("Request queue size exceeded"); - } - - connection.close(); - } - - @Test - public void disconnectedWithoutReconnect() throws Exception { - - client.setOptions(ClientOptions.builder().autoReconnect(false).build()); - - RedisAsyncCommands connection = client.connect().async(); - - connection.quit(); - Wait.untilTrue(() -> !connection.isOpen()).waitOrTimeout(); - try { - connection.get(key); - } catch (Exception e) { - assertThat(e).isInstanceOf(RedisException.class).hasMessageContaining("not connected"); - } finally { - connection.close(); - } - } - - @Test - public void disconnectedRejectCommands() throws Exception { - - client.setOptions(ClientOptions.builder().disconnectedBehavior(ClientOptions.DisconnectedBehavior.REJECT_COMMANDS) - .build()); - - RedisAsyncCommands connection = client.connect().async(); - - getConnectionWatchdog(connection.getStatefulConnection()).setListenOnChannelInactive(false); - connection.quit(); - Wait.untilTrue(() -> !connection.isOpen()).waitOrTimeout(); - try { - connection.get(key); - } catch (Exception e) { - assertThat(e).isInstanceOf(RedisException.class).hasMessageContaining("not connected"); - } finally { - connection.close(); - } - } - - @Test - public void disconnectedAcceptCommands() throws Exception { - - client.setOptions(ClientOptions.builder().autoReconnect(false) - .disconnectedBehavior(ClientOptions.DisconnectedBehavior.ACCEPT_COMMANDS).build()); - - RedisAsyncCommands connection = client.connect().async(); - - connection.quit(); - Wait.untilTrue(() -> !connection.isOpen()).waitOrTimeout(); - connection.get(key); - connection.close(); - } - - @Test(timeout = 10000) - public void pingBeforeConnect() throws Exception { - - redis.set(key, value); - client.setOptions(ClientOptions.builder().pingBeforeActivateConnection(true).build()); - RedisCommands connection = client.connect().sync(); - - try { - String result = connection.get(key); - assertThat(result).isEqualTo(value); - } finally { - connection.close(); - } - } - - @Test - public void pingBeforeConnectWithAuthentication() throws Exception { - - new WithPasswordRequired() { - @Override - protected void run(RedisClient client) throws Exception { - - client.setOptions(ClientOptions.builder().pingBeforeActivateConnection(true).build()); - RedisURI redisURI = RedisURI.Builder.redis(host, port).withPassword(passwd).build(); - - RedisCommands connection = client.connect(redisURI).sync(); - - try { - String result = connection.info(); - assertThat(result).contains("memory"); - } finally { - connection.close(); - } - - } - }; - } - - @Test - public void pingBeforeConnectWithSslAndAuthentication() throws Exception { - - new WithPasswordRequired() { - @Override - protected void run(RedisClient client) throws Exception { - - client.setOptions(ClientOptions.builder().pingBeforeActivateConnection(true).build()); - RedisURI redisURI = RedisURI.Builder.redis(host, 6443).withPassword(passwd).withVerifyPeer(false) - .withSsl(true).build(); - - RedisCommands connection = client.connect(redisURI).sync(); - - try 
{ - String result = connection.info(); - assertThat(result).contains("memory"); - } finally { - connection.close(); - } - - } - }; - } - - @Test - public void pingBeforeConnectWithAuthenticationFails() throws Exception { - - new WithPasswordRequired() { - @Override - protected void run(RedisClient client) throws Exception { - - client.setOptions(ClientOptions.builder().pingBeforeActivateConnection(true).build()); - RedisURI redisURI = RedisURI.builder().redis(host, port).build(); - - try { - client.connect(redisURI); - fail("Missing RedisConnectionException"); - } catch (RedisConnectionException e) { - assertThat(e).hasRootCauseInstanceOf(RedisCommandExecutionException.class); - } - } - }; - } - - @Test - public void pingBeforeConnectWithSslAndAuthenticationFails() throws Exception { - - new WithPasswordRequired() { - @Override - protected void run(RedisClient client) throws Exception { - - client.setOptions(ClientOptions.builder().pingBeforeActivateConnection(true).build()); - RedisURI redisURI = RedisURI.builder().redis(host, 6443).withVerifyPeer(false).withSsl(true).build(); - - try { - client.connect(redisURI); - fail("Missing RedisConnectionException"); - } catch (RedisConnectionException e) { - assertThat(e).hasRootCauseInstanceOf(RedisCommandExecutionException.class); - } - - } - }; - } - - @Test(timeout = 10000) - public void pingBeforeConnectWithQueuedCommandsAndReconnect() throws Exception { - - StatefulRedisConnection controlConnection = client.connect(); - - client.setOptions(new ClientOptions.Builder().pingBeforeActivateConnection(true).build()); - - Utf8StringCodec codec = new Utf8StringCodec(); - - StatefulRedisConnection redisConnection = client.connect(RedisURI.create("redis://localhost:6479/5")); - redisConnection.async().set("key1", "value1"); - redisConnection.async().set("key2", "value2"); - - RedisFuture sleep = controlConnection.dispatch(new AsyncCommand<>( - new Command<>(CommandType.DEBUG, new StatusOutput<>(codec), new CommandArgs<>(codec).add("SLEEP").add(2)))); - - sleep.await(100, TimeUnit.MILLISECONDS); - - Channel channel = getChannel(redisConnection); - ConnectionWatchdog connectionWatchdog = getConnectionWatchdog(redisConnection); - connectionWatchdog.setReconnectSuspended(true); - - channel.close().get(); - sleep.get(); - - redisConnection.async().get(key).cancel(true); - - RedisFuture getFuture1 = redisConnection.async().get("key1"); - RedisFuture getFuture2 = redisConnection.async().get("key2"); - getFuture1.await(100, TimeUnit.MILLISECONDS); - - connectionWatchdog.setReconnectSuspended(false); - connectionWatchdog.scheduleReconnect(); - - assertThat(getFuture1.get()).isEqualTo("value1"); - assertThat(getFuture2.get()).isEqualTo("value2"); - - controlConnection.close(); - redisConnection.close(); - } - - @Test(timeout = 10000) - public void authenticatedPingBeforeConnectWithQueuedCommandsAndReconnect() throws Exception { - - new WithPasswordRequired() { - - @Override - protected void run(RedisClient client) throws Exception { - - RedisURI redisURI = RedisURI.Builder.redis(host, port).withPassword(passwd).withDatabase(5).build(); - StatefulRedisConnection controlConnection = client.connect(redisURI); - - client.setOptions(new ClientOptions.Builder().pingBeforeActivateConnection(true).build()); - - Utf8StringCodec codec = new Utf8StringCodec(); - - StatefulRedisConnection redisConnection = client.connect(redisURI); - redisConnection.async().set("key1", "value1"); - redisConnection.async().set("key2", "value2"); - - RedisFuture sleep = 
controlConnection.dispatch(new AsyncCommand<>(new Command<>(CommandType.DEBUG, - new StatusOutput<>(codec), new CommandArgs<>(codec).add("SLEEP").add(2)))); - - sleep.await(100, TimeUnit.MILLISECONDS); - - Channel channel = getChannel(redisConnection); - ConnectionWatchdog connectionWatchdog = getConnectionWatchdog(redisConnection); - connectionWatchdog.setReconnectSuspended(true); - - channel.close().get(); - sleep.get(); - - redisConnection.async().get(key).cancel(true); - - RedisFuture getFuture1 = redisConnection.async().get("key1"); - RedisFuture getFuture2 = redisConnection.async().get("key2"); - getFuture1.await(100, TimeUnit.MILLISECONDS); - - connectionWatchdog.setReconnectSuspended(false); - connectionWatchdog.scheduleReconnect(); - - assertThat(getFuture1.get()).isEqualTo("value1"); - assertThat(getFuture2.get()).isEqualTo("value2"); - - controlConnection.close(); - redisConnection.close(); - } - }; - - } -} diff --git a/src/test/java/com/lambdaworks/redis/ClientTest.java b/src/test/java/com/lambdaworks/redis/ClientTest.java deleted file mode 100644 index 470d332d39..0000000000 --- a/src/test/java/com/lambdaworks/redis/ClientTest.java +++ /dev/null @@ -1,264 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis; - -import static com.google.code.tempusfugit.temporal.Duration.seconds; -import static com.google.code.tempusfugit.temporal.WaitFor.waitOrTimeout; -import static com.lambdaworks.Connections.getConnectionWatchdog; -import static com.lambdaworks.Connections.getStatefulConnection; -import static org.assertj.core.api.Assertions.assertThat; -import static org.assertj.core.api.Assertions.fail; - -import java.util.concurrent.CancellationException; -import java.util.concurrent.ExecutionException; -import java.util.concurrent.TimeUnit; - -import org.junit.FixMethodOrder; -import org.junit.Rule; -import org.junit.Test; -import org.junit.rules.ExpectedException; -import org.junit.runners.MethodSorters; -import org.springframework.test.util.ReflectionTestUtils; - -import com.google.code.tempusfugit.temporal.Condition; -import com.google.code.tempusfugit.temporal.Timeout; -import com.lambdaworks.Wait; -import com.lambdaworks.redis.ClientOptions.DisconnectedBehavior; -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.api.async.RedisAsyncCommands; -import com.lambdaworks.redis.protocol.ConnectionWatchdog; -import com.lambdaworks.redis.server.RandomResponseServer; -import io.netty.channel.Channel; - -@FixMethodOrder(MethodSorters.NAME_ASCENDING) -public class ClientTest extends AbstractRedisClientTest { - @Rule - public ExpectedException exception = ExpectedException.none(); - - @Override - public void openConnection() throws Exception { - super.openConnection(); - } - - @Override - public void closeConnection() throws Exception { - super.closeConnection(); - } - - @Test(expected = RedisException.class) - public void close() throws Exception { - redis.close(); - redis.get(key); - } - - @Test - public void statefulConnectionFromSync() throws Exception { - assertThat(redis.getStatefulConnection().sync()).isSameAs(redis); - } - - @Test - public void statefulConnectionFromAsync() throws Exception { - RedisAsyncCommands async = client.connect().async(); - assertThat(async.getStatefulConnection().async()).isSameAs(async); - async.close(); - } - - @Test - public void statefulConnectionFromReactive() throws Exception { - RedisAsyncCommands async = client.connect().async(); - 
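
The `ClientOptionsTest` removed above exercises the main `ClientOptions` switches: auto-reconnect, ping-before-activate, disconnected behavior and the request queue size. For orientation, a minimal configuration sketch using only the builder calls that appear in the deleted test might look like this (the values are illustrative, not recommendations):

```java
import com.lambdaworks.redis.ClientOptions;
import com.lambdaworks.redis.RedisClient;

public class ClientOptionsSketch {

    public static void main(String[] args) {

        RedisClient client = RedisClient.create("redis://localhost:6379/0");

        // All builder methods below appear in the removed ClientOptionsTest.
        ClientOptions options = ClientOptions.builder()
                .autoReconnect(true)                        // reconnect automatically (default behavior)
                .pingBeforeActivateConnection(true)         // issue PING before the connection is activated
                .disconnectedBehavior(ClientOptions.DisconnectedBehavior.REJECT_COMMANDS) // fail fast while disconnected
                .requestQueueSize(10)                       // bound the queue of outstanding commands
                .build();

        client.setOptions(options);

        client.connect().close();
        client.shutdown();
    }
}
```
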
assertThat(async.getStatefulConnection().reactive().getStatefulConnection()).isSameAs(async.getStatefulConnection()); - async.close(); - } - - @Test - public void listenerTest() throws Exception { - - final TestConnectionListener listener = new TestConnectionListener(); - - RedisClient client = RedisClient.create(RedisURI.Builder.redis(host, port).build()); - - client.addListener(listener); - - assertThat(listener.onConnected).isNull(); - assertThat(listener.onDisconnected).isNull(); - assertThat(listener.onException).isNull(); - - RedisAsyncCommands connection = client.connect().async(); - - StatefulRedisConnection statefulRedisConnection = getStatefulConnection(connection); - - waitOrTimeout(() -> listener.onConnected != null, Timeout.timeout(seconds(2))); - - assertThat(listener.onConnected).isEqualTo(statefulRedisConnection); - assertThat(listener.onDisconnected).isNull(); - - connection.set(key, value).get(); - connection.close(); - - waitOrTimeout(new Condition() { - - @Override - public boolean isSatisfied() { - return listener.onDisconnected != null; - } - }, Timeout.timeout(seconds(2))); - - assertThat(listener.onConnected).isEqualTo(statefulRedisConnection); - assertThat(listener.onDisconnected).isEqualTo(statefulRedisConnection); - - FastShutdown.shutdown(client); - } - - @Test - public void listenerTestWithRemoval() throws Exception { - - final TestConnectionListener removedListener = new TestConnectionListener(); - final TestConnectionListener retainedListener = new TestConnectionListener(); - - RedisClient client = RedisClient.create(RedisURI.Builder.redis(host, port).build()); - client.addListener(removedListener); - client.addListener(retainedListener); - client.removeListener(removedListener); - - // that's the sut call - client.connect().async(); - - waitOrTimeout(() -> retainedListener.onConnected != null, Timeout.timeout(seconds(2))); - - assertThat(retainedListener.onConnected).isNotNull(); - - assertThat(removedListener.onConnected).isNull(); - assertThat(removedListener.onDisconnected).isNull(); - assertThat(removedListener.onException).isNull(); - - FastShutdown.shutdown(client); - - } - - @Test(expected = RedisException.class) - public void timeout() throws Exception { - redis.setTimeout(0, TimeUnit.MICROSECONDS); - redis.eval(" os.execute(\"sleep \" .. 
tonumber(1))", ScriptOutputType.STATUS); - } - - @Test - public void reconnect() throws Exception { - - redis.set(key, value); - - redis.quit(); - Thread.sleep(100); - assertThat(redis.get(key)).isEqualTo(value); - redis.quit(); - Thread.sleep(100); - assertThat(redis.get(key)).isEqualTo(value); - redis.quit(); - Thread.sleep(100); - assertThat(redis.get(key)).isEqualTo(value); - } - - @Test(expected = RedisCommandInterruptedException.class, timeout = 50) - public void interrupt() throws Exception { - Thread.currentThread().interrupt(); - redis.blpop(0, key); - } - - @Test - public void connectFailure() throws Exception { - RedisClient client = new RedisClient("invalid"); - exception.expect(RedisException.class); - exception.expectMessage("Unable to connect"); - client.connect(); - } - - @Test - public void connectPubSubFailure() throws Exception { - RedisClient client = new RedisClient("invalid"); - exception.expect(RedisException.class); - exception.expectMessage("Unable to connect"); - client.connectPubSub(); - } - - private class TestConnectionListener implements RedisConnectionStateListener { - - public RedisChannelHandler onConnected; - public RedisChannelHandler onDisconnected; - public RedisChannelHandler onException; - - @Override - public void onRedisConnected(RedisChannelHandler connection) { - onConnected = connection; - } - - @Override - public void onRedisDisconnected(RedisChannelHandler connection) { - onDisconnected = connection; - } - - @Override - public void onRedisExceptionCaught(RedisChannelHandler connection, Throwable cause) { - onException = connection; - - } - } - - @Test - public void emptyClient() throws Exception { - - RedisClient client = new RedisClient(); - try { - client.connect(); - } catch (IllegalStateException e) { - assertThat(e).hasMessageContaining("RedisURI"); - } - - try { - client.connect().async(); - } catch (IllegalStateException e) { - assertThat(e).hasMessageContaining("RedisURI"); - } - - try { - client.connect((RedisURI) null); - } catch (IllegalArgumentException e) { - assertThat(e).hasMessageContaining("RedisURI"); - } - - try { - client.connectAsync((RedisURI) null); - } catch (IllegalArgumentException e) { - assertThat(e).hasMessageContaining("RedisURI"); - } - FastShutdown.shutdown(client); - } - - @Test - public void testExceptionWithCause() throws Exception { - RedisException e = new RedisException(new RuntimeException()); - assertThat(e).hasCauseExactlyInstanceOf(RuntimeException.class); - } - - @Test(timeout = 20000) - public void reset() throws Exception { - StatefulRedisConnection connection = client.connect(); - RedisAsyncCommands async = connection.async(); - - connection.sync().set(key, value); - async.reset(); - connection.sync().set(key, value); - connection.sync().flushall(); - - RedisFuture> eval = async.blpop(2, key); - Thread.sleep(100); - assertThat(eval.isDone()).isFalse(); - assertThat(eval.isCancelled()).isFalse(); - - async.reset(); - - assertThat(eval.isCancelled()).isTrue(); - assertThat(eval.isDone()).isTrue(); - - connection.close(); - } - -} diff --git a/src/test/java/com/lambdaworks/redis/ConnectionCommandTest.java b/src/test/java/com/lambdaworks/redis/ConnectionCommandTest.java deleted file mode 100644 index ccc3893d4f..0000000000 --- a/src/test/java/com/lambdaworks/redis/ConnectionCommandTest.java +++ /dev/null @@ -1,220 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. 
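
Among other things, the deleted `ClientTest` covers connection state listeners. A minimal sketch limited to the three `RedisConnectionStateListener` callbacks that the removed test implements could look like the following; the wildcard type parameters on `RedisChannelHandler` are an assumption (the flattened diff drops generics), and the printed messages are purely illustrative:

```java
import com.lambdaworks.redis.RedisChannelHandler;
import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.RedisConnectionStateListener;

public class ConnectionListenerSketch {

    public static void main(String[] args) {

        RedisClient client = RedisClient.create("redis://localhost:6379/0");

        // Register the listener before connecting, as the removed ClientTest does.
        client.addListener(new RedisConnectionStateListener() {

            @Override
            public void onRedisConnected(RedisChannelHandler<?, ?> connection) {
                System.out.println("connected: " + connection);
            }

            @Override
            public void onRedisDisconnected(RedisChannelHandler<?, ?> connection) {
                System.out.println("disconnected: " + connection);
            }

            @Override
            public void onRedisExceptionCaught(RedisChannelHandler<?, ?> connection, Throwable cause) {
                System.out.println("error: " + cause);
            }
        });

        client.connect().close();
        client.shutdown();
    }
}
```
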
- -package com.lambdaworks.redis; - -import static org.assertj.core.api.Assertions.assertThat; -import static org.assertj.core.api.Assertions.fail; -import static org.mockito.Mockito.doThrow; -import static org.mockito.Mockito.mock; -import static org.mockito.Mockito.when; - -import java.util.concurrent.ExecutionException; - -import org.assertj.core.api.Assertions; -import org.junit.FixMethodOrder; -import org.junit.Test; -import org.junit.runners.MethodSorters; -import org.springframework.test.util.ReflectionTestUtils; - -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.protocol.BaseRedisCommandBuilder; -import com.lambdaworks.redis.protocol.CommandHandler; - -@FixMethodOrder(MethodSorters.NAME_ASCENDING) -public class ConnectionCommandTest extends AbstractRedisClientTest { - @Test - public void auth() throws Exception { - new WithPasswordRequired() { - @Override - public void run(RedisClient client) { - RedisConnection connection = client.connect().sync(); - try { - connection.ping(); - fail("Server doesn't require authentication"); - } catch (RedisException e) { - assertThat(e.getMessage()).isEqualTo("NOAUTH Authentication required."); - assertThat(connection.auth(passwd)).isEqualTo("OK"); - assertThat(connection.set(key, value)).isEqualTo("OK"); - } - - RedisURI redisURI = RedisURI.Builder.redis(host, port).withDatabase(2).withPassword(passwd).build(); - RedisClient redisClient = new RedisClient(redisURI); - RedisConnection authConnection = redisClient.connect().sync(); - authConnection.ping(); - authConnection.close(); - FastShutdown.shutdown(redisClient); - } - }; - } - - @Test - public void echo() throws Exception { - assertThat(redis.echo("hello")).isEqualTo("hello"); - } - - @Test - public void ping() throws Exception { - assertThat(redis.ping()).isEqualTo("PONG"); - } - - @Test - public void select() throws Exception { - redis.set(key, value); - assertThat(redis.select(1)).isEqualTo("OK"); - assertThat(redis.get(key)).isNull(); - } - - @Test(expected = IllegalArgumentException.class) - public void authNull() throws Exception { - redis.auth(null); - } - - @Test(expected = IllegalArgumentException.class) - public void authEmpty() throws Exception { - redis.auth(""); - } - - @Test - public void authReconnect() throws Exception { - new WithPasswordRequired() { - @Override - public void run(RedisClient client) { - RedisConnection connection = client.connect().sync(); - assertThat(connection.auth(passwd)).isEqualTo("OK"); - assertThat(connection.set(key, value)).isEqualTo("OK"); - connection.quit(); - assertThat(connection.get(key)).isEqualTo(value); - } - }; - } - - @Test - public void selectReconnect() throws Exception { - redis.select(1); - redis.set(key, value); - redis.quit(); - assertThat(redis.get(key)).isEqualTo(value); - } - - @Test - public void isValid() throws Exception { - - assertThat(Connections.isValid(redis)).isTrue(); - RedisAsyncCommandsImpl asyncConnection = (RedisAsyncCommandsImpl) client.connectAsync(); - RedisChannelHandler channelHandler = (RedisChannelHandler) asyncConnection - .getStatefulConnection(); - - assertThat(Connections.isValid(asyncConnection)).isTrue(); - assertThat(Connections.isOpen(asyncConnection)).isTrue(); - assertThat(asyncConnection.isOpen()).isTrue(); - assertThat(channelHandler.isClosed()).isFalse(); - - CommandHandler channelWriter = (CommandHandler) channelHandler.getChannelWriter(); - assertThat(channelWriter.isClosed()).isFalse(); - assertThat(channelWriter.isSharable()).isTrue(); - - 
Connections.close(asyncConnection); - assertThat(Connections.isOpen(asyncConnection)).isFalse(); - assertThat(Connections.isValid(asyncConnection)).isFalse(); - - assertThat(asyncConnection.isOpen()).isFalse(); - assertThat(channelHandler.isClosed()).isTrue(); - - assertThat(channelWriter.isClosed()).isTrue(); - } - - @Test - @SuppressWarnings("unchecked") - public void isValidAsyncExceptions() throws Exception { - - RedisAsyncConnection connection = mock(RedisAsyncConnection.class); - RedisFuture future = mock(RedisFuture.class); - when(connection.ping()).thenReturn(future); - - when(future.get()).thenThrow(new ExecutionException(new RuntimeException())); - assertThat(Connections.isValid(connection)).isFalse(); - - } - - @Test - public void isValidSyncExceptions() throws Exception { - - RedisConnection connection = mock(RedisConnection.class); - - when(connection.ping()).thenThrow(new RuntimeException()); - assertThat(Connections.isValid(connection)).isFalse(); - } - - @Test - public void closeExceptions() throws Exception { - - RedisConnection connection = mock(RedisConnection.class); - doThrow(new RuntimeException()).when(connection).close(); - Connections.close(connection); - } - - @Test(expected = IllegalArgumentException.class) - public void isValidWrongObject() throws Exception { - Connections.isValid(new Object()); - } - - @Test(expected = IllegalArgumentException.class) - public void isOpenWrongObject() throws Exception { - Connections.isOpen(new Object()); - } - - @Test(expected = IllegalArgumentException.class) - public void closeWrongObject() throws Exception { - Connections.close(new Object()); - } - - @Test - public void getSetReconnect() throws Exception { - redis.set(key, value); - redis.quit(); - assertThat(redis.get(key)).isEqualTo(value); - } - - @Test - @SuppressWarnings("unchecked") - public void authInvalidPassword() throws Exception { - RedisAsyncConnection async = client.connectAsync(); - try { - async.auth("invalid"); - fail("Authenticated with invalid password"); - } catch (RedisException e) { - assertThat(e.getMessage()).isEqualTo("ERR Client sent AUTH, but no password is set"); - StatefulRedisConnection statefulRedisConnection = (StatefulRedisConnection) ReflectionTestUtils - .getField(async, "connection"); - assertThat(ReflectionTestUtils.getField(statefulRedisConnection, "password")).isNull(); - } finally { - async.close(); - } - } - - @Test - @SuppressWarnings("unchecked") - public void selectInvalid() throws Exception { - RedisAsyncConnection async = client.connectAsync(); - try { - async.select(1024); - fail("Selected invalid db index"); - } catch (RedisException e) { - assertThat(e.getMessage()).isEqualTo("ERR invalid DB index"); - StatefulRedisConnection statefulRedisConnection = (StatefulRedisConnection) ReflectionTestUtils - .getField(async, "connection"); - assertThat(ReflectionTestUtils.getField(statefulRedisConnection, "db")).isEqualTo(0); - } finally { - async.close(); - } - } - - @Test - public void testDoubleToString() throws Exception { - - assertThat(LettuceStrings.string(1.1)).isEqualTo("1.1"); - assertThat(LettuceStrings.string(Double.POSITIVE_INFINITY)).isEqualTo("+inf"); - assertThat(LettuceStrings.string(Double.NEGATIVE_INFINITY)).isEqualTo("-inf"); - - } -} diff --git a/src/test/java/com/lambdaworks/redis/CustomCodecTest.java b/src/test/java/com/lambdaworks/redis/CustomCodecTest.java deleted file mode 100644 index a729c17187..0000000000 --- a/src/test/java/com/lambdaworks/redis/CustomCodecTest.java +++ /dev/null @@ -1,146 +0,0 @@ -// 
Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis; - -import static org.assertj.core.api.Assertions.assertThat; - -import java.io.ByteArrayInputStream; -import java.io.ByteArrayOutputStream; -import java.io.IOException; -import java.io.ObjectInputStream; -import java.io.ObjectOutputStream; -import java.nio.ByteBuffer; -import java.nio.charset.Charset; -import java.util.List; -import java.util.concurrent.TimeUnit; - -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.protocol.CommandArgs; -import org.junit.Test; - -import com.lambdaworks.redis.codec.ByteArrayCodec; -import com.lambdaworks.redis.codec.CompressionCodec; -import com.lambdaworks.redis.codec.RedisCodec; -import rx.observers.TestSubscriber; - -public class CustomCodecTest extends AbstractRedisClientTest { - - @Test - public void testJavaSerializer() throws Exception { - StatefulRedisConnection redisConnection = client.connect(new SerializedObjectCodec()); - RedisCommands sync = redisConnection.sync(); - List list = list("one", "two"); - sync.set(key, list); - - assertThat(sync.get(key)).isEqualTo(list); - assertThat(sync.set(key, list)).isEqualTo("OK"); - assertThat(sync.set(key, list, SetArgs.Builder.ex(1))).isEqualTo("OK"); - - redisConnection.close(); - } - - @Test - public void testJavaSerializerRx() throws Exception { - StatefulRedisConnection redisConnection = client.connect(new SerializedObjectCodec()); - List list = list("one", "two"); - - TestSubscriber subscriber = TestSubscriber.create(); - - redisConnection.reactive().set(key, list, SetArgs.Builder.ex(1)).subscribe(subscriber); - subscriber.awaitTerminalEvent(1, TimeUnit.SECONDS); - subscriber.assertCompleted(); - subscriber.assertValue("OK"); - - redisConnection.close(); - } - - @Test - public void testDeflateCompressedJavaSerializer() throws Exception { - RedisCommands connection = client.connect( - CompressionCodec.valueCompressor(new SerializedObjectCodec(), CompressionCodec.CompressionType.DEFLATE)).sync(); - List list = list("one", "two"); - connection.set(key, list); - assertThat(connection.get(key)).isEqualTo(list); - - connection.close(); - } - - @Test - public void testGzipompressedJavaSerializer() throws Exception { - RedisCommands connection = client.connect( - CompressionCodec.valueCompressor(new SerializedObjectCodec(), CompressionCodec.CompressionType.GZIP)).sync(); - List list = list("one", "two"); - connection.set(key, list); - assertThat(connection.get(key)).isEqualTo(list); - - connection.close(); - } - - @Test - public void testByteCodec() throws Exception { - RedisCommands connection = client.connect(new ByteArrayCodec()).sync(); - String value = "üöäü+#"; - connection.set(key.getBytes(), value.getBytes()); - assertThat(connection.get(key.getBytes())).isEqualTo(value.getBytes()); - connection.set(key.getBytes(), null); - assertThat(connection.get(key.getBytes())).isEqualTo(new byte[0]); - - List keys = connection.keys(key.getBytes()); - assertThat(keys).contains(key.getBytes()); - - connection.close(); - } - - @Test - public void testExperimentalByteCodec() throws Exception { - RedisCommands connection = client.connect(CommandArgs.ExperimentalByteArrayCodec.INSTANCE).sync(); - String value = "üöäü+#"; - connection.set(key.getBytes(), value.getBytes()); - assertThat(connection.get(key.getBytes())).isEqualTo(value.getBytes()); - connection.set(key.getBytes(), null); - 
assertThat(connection.get(key.getBytes())).isEqualTo(new byte[0]); - - List keys = connection.keys(key.getBytes()); - assertThat(keys).contains(key.getBytes()); - connection.close(); - } - - public class SerializedObjectCodec implements RedisCodec { - private Charset charset = Charset.forName("UTF-8"); - - @Override - public String decodeKey(ByteBuffer bytes) { - return charset.decode(bytes).toString(); - } - - @Override - public Object decodeValue(ByteBuffer bytes) { - try { - byte[] array = new byte[bytes.remaining()]; - bytes.get(array); - ObjectInputStream is = new ObjectInputStream(new ByteArrayInputStream(array)); - return is.readObject(); - } catch (Exception e) { - return null; - } - } - - @Override - public ByteBuffer encodeKey(String key) { - return charset.encode(key); - } - - @Override - public ByteBuffer encodeValue(Object value) { - try { - ByteArrayOutputStream bytes = new ByteArrayOutputStream(); - ObjectOutputStream os = new ObjectOutputStream(bytes); - os.writeObject(value); - return ByteBuffer.wrap(bytes.toByteArray()); - } catch (IOException e) { - return null; - } - } - } -} diff --git a/src/test/java/com/lambdaworks/redis/DefaultRedisClient.java b/src/test/java/com/lambdaworks/redis/DefaultRedisClient.java deleted file mode 100644 index 0a0014246d..0000000000 --- a/src/test/java/com/lambdaworks/redis/DefaultRedisClient.java +++ /dev/null @@ -1,33 +0,0 @@ -package com.lambdaworks.redis; - -import java.util.concurrent.TimeUnit; - -/** - * @author Mark Paluch - */ -public class DefaultRedisClient { - - public final static DefaultRedisClient instance = new DefaultRedisClient(); - - private RedisClient redisClient; - - public DefaultRedisClient() { - redisClient = RedisClient.create(RedisURI.Builder.redis(TestSettings.host(), TestSettings.port()).build()); - Runtime.getRuntime().addShutdownHook(new Thread() { - @Override - public void run() { - FastShutdown.shutdown(redisClient); - } - }); - } - - /** - * Do not close the client. - * - * @return the default redis client for the tests. - */ - public static RedisClient get() { - instance.redisClient.setDefaultTimeout(60, TimeUnit.SECONDS); - return instance.redisClient; - } -} diff --git a/src/test/java/com/lambdaworks/redis/FastShutdown.java b/src/test/java/com/lambdaworks/redis/FastShutdown.java deleted file mode 100644 index ab93ea4af3..0000000000 --- a/src/test/java/com/lambdaworks/redis/FastShutdown.java +++ /dev/null @@ -1,29 +0,0 @@ -package com.lambdaworks.redis; - -import java.util.concurrent.TimeUnit; - -import com.lambdaworks.redis.resource.ClientResources; - -/** - * @author Mark Paluch - */ -public class FastShutdown { - - /** - * Shut down a {@link AbstractRedisClient} with a timeout of 10ms. - * - * @param redisClient - */ - public static void shutdown(AbstractRedisClient redisClient) { - redisClient.shutdown(10, 10, TimeUnit.MILLISECONDS); - } - - /** - * Shut down a {@link ClientResources} client with a timeout of 10ms. 
- * - * @param clientResources - */ - public static void shutdown(ClientResources clientResources) { - clientResources.shutdown(10, 10, TimeUnit.MILLISECONDS); - } -} diff --git a/src/test/java/com/lambdaworks/redis/GeoModelTest.java b/src/test/java/com/lambdaworks/redis/GeoModelTest.java deleted file mode 100644 index a19ec88e32..0000000000 --- a/src/test/java/com/lambdaworks/redis/GeoModelTest.java +++ /dev/null @@ -1,97 +0,0 @@ -package com.lambdaworks.redis; - -import static org.assertj.core.api.Assertions.*; - -import java.util.Collections; -import java.util.Map; - -import org.junit.Test; - -/** - * @author Mark Paluch - */ -public class GeoModelTest { - - @Test - public void geoWithin() throws Exception { - - GeoWithin sut = new GeoWithin("me", 1.0, 1234L, new GeoCoordinates(1, 2)); - GeoWithin equalsToSut = new GeoWithin("me", 1.0, 1234L, new GeoCoordinates(1, 2)); - - Map, String> map = Collections.singletonMap(sut, "value"); - - assertThat(map.get(equalsToSut)).isEqualTo("value"); - assertThat(sut).isEqualTo(equalsToSut); - assertThat(sut.hashCode()).isEqualTo(equalsToSut.hashCode()); - assertThat(sut.toString()).isEqualTo(equalsToSut.toString()); - - } - - @Test - public void geoWithinSlightlyDifferent() throws Exception { - - GeoWithin sut = new GeoWithin("me", 1.0, 1234L, new GeoCoordinates(1, 2)); - GeoWithin slightlyDifferent = new GeoWithin("me", 1.0, 1234L, new GeoCoordinates(1.1, 2)); - - Map, String> map = Collections.singletonMap(sut, "value"); - - assertThat(map.get(slightlyDifferent)).isNull(); - assertThat(sut).isNotEqualTo(slightlyDifferent); - assertThat(sut.hashCode()).isNotEqualTo(slightlyDifferent.hashCode()); - assertThat(sut.toString()).isNotEqualTo(slightlyDifferent.toString()); - - slightlyDifferent = new GeoWithin("me1", 1.0, 1234L, new GeoCoordinates(1, 2)); - assertThat(sut).isNotEqualTo(slightlyDifferent); - } - - @Test - public void geoWithinEmpty() throws Exception { - - GeoWithin sut = new GeoWithin(null, null, null, null); - GeoWithin equalsToSut = new GeoWithin(null, null, null, null); - - assertThat(sut).isEqualTo(equalsToSut); - assertThat(sut.hashCode()).isEqualTo(equalsToSut.hashCode()); - } - - @Test - public void geoCoordinates() throws Exception { - - GeoCoordinates sut = new GeoCoordinates(1, 2); - GeoCoordinates equalsToSut = new GeoCoordinates(1, 2); - - Map map = Collections.singletonMap(sut, "value"); - - assertThat(map.get(equalsToSut)).isEqualTo("value"); - assertThat(sut).isEqualTo(equalsToSut); - assertThat(sut.hashCode()).isEqualTo(equalsToSut.hashCode()); - assertThat(sut.toString()).isEqualTo(equalsToSut.toString()); - - } - - @Test - public void geoCoordinatesSlightlyDifferent() throws Exception { - - GeoCoordinates sut = new GeoCoordinates(1, 2); - GeoCoordinates slightlyDifferent = new GeoCoordinates(1.1, 2); - - Map map = Collections.singletonMap(sut, "value"); - - assertThat(map.get(slightlyDifferent)).isNull(); - assertThat(sut).isNotEqualTo(slightlyDifferent); - assertThat(sut.hashCode()).isNotEqualTo(slightlyDifferent.hashCode()); - assertThat(sut.toString()).isNotEqualTo(slightlyDifferent.toString()); - - } - - @Test - public void geoCoordinatesEmpty() throws Exception { - - GeoCoordinates sut = new GeoCoordinates(null, null); - GeoCoordinates equalsToSut = new GeoCoordinates(null, null); - - assertThat(sut).isEqualTo(equalsToSut); - assertThat(sut.hashCode()).isEqualTo(equalsToSut.hashCode()); - } - -} diff --git a/src/test/java/com/lambdaworks/redis/JavaRuntimeTest.java 
b/src/test/java/com/lambdaworks/redis/JavaRuntimeTest.java deleted file mode 100644 index 5f49d528ab..0000000000 --- a/src/test/java/com/lambdaworks/redis/JavaRuntimeTest.java +++ /dev/null @@ -1,30 +0,0 @@ -package com.lambdaworks.redis; - -import static org.assertj.core.api.Assertions.assertThat; -import static org.hamcrest.CoreMatchers.startsWith; -import static org.junit.Assume.assumeThat; - -import com.lambdaworks.redis.internal.LettuceClassUtils; -import org.junit.Test; - -import com.lambdaworks.redis.internal.LettuceAssert; - -public class JavaRuntimeTest { - - @Test - public void testJava8() { - assumeThat(System.getProperty("java.version"), startsWith("1.8")); - assertThat(JavaRuntime.AT_LEAST_JDK_8).isTrue(); - } - - @Test - public void testJava9() { - assumeThat(System.getProperty("java.version"), startsWith("1.9")); - assertThat(JavaRuntime.AT_LEAST_JDK_8).isTrue(); - } - - @Test - public void testNotPresentClass() { - assertThat(LettuceClassUtils.isPresent("total.fancy.class.name")).isFalse(); - } -} diff --git a/src/test/java/com/lambdaworks/redis/KeyValueStreamingAdapter.java b/src/test/java/com/lambdaworks/redis/KeyValueStreamingAdapter.java deleted file mode 100644 index 36ee2a4bc1..0000000000 --- a/src/test/java/com/lambdaworks/redis/KeyValueStreamingAdapter.java +++ /dev/null @@ -1,29 +0,0 @@ -package com.lambdaworks.redis; - -import java.util.LinkedHashMap; -import java.util.Map; - -import com.lambdaworks.redis.output.KeyStreamingChannel; -import com.lambdaworks.redis.output.KeyValueStreamingChannel; - -/** - * Adapter for a {@link KeyStreamingChannel}. Stores the output in a map. - * - * @param Key type. - * @param Value type. - * @author Mark Paluch - * @since 3.0 - */ -public class KeyValueStreamingAdapter implements KeyValueStreamingChannel { - - private final Map map = new LinkedHashMap<>(); - - @Override - public void onKeyValue(K key, V value) { - map.put(key, value); - } - - public Map getMap() { - return map; - } -} diff --git a/src/test/java/com/lambdaworks/redis/KeyValueTest.java b/src/test/java/com/lambdaworks/redis/KeyValueTest.java deleted file mode 100644 index 1a9a26a589..0000000000 --- a/src/test/java/com/lambdaworks/redis/KeyValueTest.java +++ /dev/null @@ -1,36 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. 
- -package com.lambdaworks.redis; - -import static org.assertj.core.api.Assertions.assertThat; - -import org.junit.Test; - -public class KeyValueTest { - protected String key = "key"; - protected String value = "value"; - - @Test - public void equals() throws Exception { - KeyValue kv = kv(key, value); - assertThat(kv.equals(kv(key, value))).isTrue(); - assertThat(kv.equals(null)).isFalse(); - assertThat(kv.equals(kv("a", value))).isFalse(); - assertThat(kv.equals(kv(key, "b"))).isFalse(); - } - - @Test - public void testToString() throws Exception { - KeyValue kv = kv(key, value); - assertThat(kv.toString()).isEqualTo(String.format("(%s, %s)", kv.key, kv.value)); - } - - @Test - public void testHashCode() throws Exception { - assertThat(kv(key, value).hashCode() != 0).isTrue(); - } - - protected KeyValue kv(String key, String value) { - return new KeyValue(key, value); - } -} diff --git a/src/test/java/com/lambdaworks/redis/LettuceFuturesTest.java b/src/test/java/com/lambdaworks/redis/LettuceFuturesTest.java deleted file mode 100644 index e553957c0c..0000000000 --- a/src/test/java/com/lambdaworks/redis/LettuceFuturesTest.java +++ /dev/null @@ -1,52 +0,0 @@ -package com.lambdaworks.redis; - -import com.google.common.util.concurrent.SettableFuture; -import org.junit.Before; -import org.junit.Test; - -import java.util.concurrent.TimeUnit; - -import static org.assertj.core.api.Assertions.assertThat; - -/** - * @author Mark Paluch - */ -public class LettuceFuturesTest { - - @Before - public void setUp() throws Exception { - Thread.interrupted(); - } - - @Test(expected = RedisCommandExecutionException.class) - public void awaitAllShouldThrowRedisCommandExecutionException() throws Exception { - - SettableFuture f = SettableFuture.create(); - f.setException(new RedisCommandExecutionException("error")); - - LettuceFutures.awaitAll(1, TimeUnit.SECONDS, f); - } - - @Test(expected = RedisCommandInterruptedException.class) - public void awaitAllShouldThrowRedisCommandInterruptedException() throws Exception { - - SettableFuture f = SettableFuture.create(); - Thread.currentThread().interrupt(); - - LettuceFutures.awaitAll(1, TimeUnit.SECONDS, f); - } - - @Test - public void awaitAllShouldSetInterruptedBit() throws Exception { - - SettableFuture f = SettableFuture.create(); - Thread.currentThread().interrupt(); - - try { - LettuceFutures.awaitAll(1, TimeUnit.SECONDS, f); - } catch (Exception e) { - } - - assertThat(Thread.currentThread().isInterrupted()).isTrue(); - } -} diff --git a/src/test/java/com/lambdaworks/redis/LettucePerformanceTest.java b/src/test/java/com/lambdaworks/redis/LettucePerformanceTest.java deleted file mode 100644 index fa7069eb3f..0000000000 --- a/src/test/java/com/lambdaworks/redis/LettucePerformanceTest.java +++ /dev/null @@ -1,248 +0,0 @@ -package com.lambdaworks.redis; - -import java.util.ArrayList; -import java.util.List; -import java.util.concurrent.*; - -import org.apache.logging.log4j.Level; -import org.apache.logging.log4j.LogManager; -import org.apache.logging.log4j.core.LoggerContext; -import org.apache.logging.log4j.core.config.Configuration; -import org.junit.*; - -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.api.rx.RedisReactiveCommands; - -import rx.Observable; - -/** - * @author Mark Paluch - */ -@Ignore -public class LettucePerformanceTest { - - private static RedisClient redisClient = new RedisClient(TestSettings.host(), TestSettings.port()); - private ExecutorService executor; - private CountDownLatch latch = new 
CountDownLatch(1); - - @Before - public void before() throws Exception { - - LoggerContext ctx = (LoggerContext) LogManager.getContext(); - Configuration config = ctx.getConfiguration(); - config.getLoggerConfig("com.lambdaworks.redis").setLevel(Level.OFF); - config.getLoggerConfig("com.lambdaworks.redis.protocol").setLevel(Level.OFF); - } - - @After - public void after() throws Exception { - LoggerContext ctx = (LoggerContext) LogManager.getContext(); - ctx.reconfigure(); - executor.shutdown(); - executor.awaitTermination(1, TimeUnit.MINUTES); - } - - @AfterClass - public static void afterClass() throws Exception { - redisClient.shutdown(); - } - - /** - * Multi-threaded performance test. - * - * Uses a {@link ThreadPoolExecutor} with thread and connection preheating. Execution tasks are submitted and synchronized - * with a {@link CountDownLatch} - * - * @throws Exception - */ - @Test - public void testSyncAsyncPerformance() throws Exception { - - // TWEAK ME - int threads = 4; - int totalCalls = 250000; - boolean waitForFutureCompletion = true; - boolean connectionPerThread = false; - // Keep in mind, that the size of the event loop threads is CPU count * 4 unless you - // set -Dio.netty.eventLoopThreads=... - // END OF TWEAK ME - - executor = new ThreadPoolExecutor(threads, threads, 1, TimeUnit.MINUTES, new ArrayBlockingQueue(totalCalls)); - - List>>> futurama = new ArrayList<>(); - - preheat(threads); - - final int callsPerThread = totalCalls / threads; - - submitExecutionTasks(threads, futurama, callsPerThread, connectionPerThread); - Thread.sleep(800); - - long start = System.currentTimeMillis(); - latch.countDown(); - - for (Future>> listFuture : futurama) { - for (CompletableFuture future : listFuture.get()) { - if (waitForFutureCompletion) { - future.get(); - } - } - } - - long end = System.currentTimeMillis(); - - long duration = end - start; - double durationSeconds = duration / 1000d; - double opsPerSecond = totalCalls / durationSeconds; - System.out.println(String.format("Sync/Async: Duration: %d ms (%.2f sec), operations: %d, %.2f ops/sec ", duration, - durationSeconds, totalCalls, opsPerSecond)); - - for (Future>> listFuture : futurama) { - for (CompletableFuture future : listFuture.get()) { - future.get(); - } - } - - } - - protected void submitExecutionTasks(int threads, List>>> futurama, - final int callsPerThread, final boolean connectionPerThread) { - final RedisAsyncConnection sharedConnection; - if (!connectionPerThread) { - sharedConnection = redisClient.connectAsync(); - } else { - sharedConnection = null; - } - - for (int i = 0; i < threads; i++) { - Future>> submit = executor.submit(() -> { - - RedisAsyncConnection connection = sharedConnection; - if (connectionPerThread) { - connection = redisClient.connectAsync(); - } - connection.ping().get(); - - List> futures = new ArrayList<>(callsPerThread); - latch.await(); - for (int i1 = 0; i1 < callsPerThread; i1++) { - futures.add(connection.ping().toCompletableFuture()); - } - - return futures; - }); - - futurama.add(submit); - } - } - - /** - * Multi-threaded performance using reactive commands. - * - * Uses a {@link ThreadPoolExecutor} with thread and connection preheating. 
Execution tasks are submitted and synchronized - * with a {@link CountDownLatch} - * - * @throws Exception - */ - @Test - public void testObservablePerformance() throws Exception { - - // TWEAK ME - int threads = 4; - int totalCalls = 25000; - boolean waitForCompletion = true; - boolean connectionPerThread = false; - // Keep in mind, that the size of the event loop threads is CPU count * 4 unless you - // set -Dio.netty.eventLoopThreads=... - // END OF TWEAK ME - - executor = new ThreadPoolExecutor(threads, threads, 1, TimeUnit.MINUTES, new ArrayBlockingQueue(totalCalls)); - - List>>> futurama = new ArrayList<>(); - - preheat(threads); - final int callsPerThread = totalCalls / threads; - - submitObservableTasks(threads, futurama, callsPerThread, connectionPerThread); - Thread.sleep(800); - - long start = System.currentTimeMillis(); - latch.countDown(); - - for (Future>> listFuture : futurama) { - for (Observable future : listFuture.get()) { - if (waitForCompletion) { - future.toBlocking().last(); - } else { - future.subscribe(); - } - } - } - - long end = System.currentTimeMillis(); - - long duration = end - start; - double durationSeconds = duration / 1000d; - double opsPerSecond = totalCalls / durationSeconds; - System.out.println(String.format("Reactive Duration: %d ms (%.2f sec), operations: %d, %.2f ops/sec ", duration, - durationSeconds, totalCalls, opsPerSecond)); - - } - - protected void submitObservableTasks(int threads, List>>> futurama, final int callsPerThread, - final boolean connectionPerThread) { - final StatefulRedisConnection sharedConnection; - if (!connectionPerThread) { - sharedConnection = redisClient.connectAsync().getStatefulConnection(); - } else { - sharedConnection = null; - } - - for (int i = 0; i < threads; i++) { - Future>> submit = executor.submit(() -> { - - StatefulRedisConnection connection = sharedConnection; - if (connectionPerThread) { - connection = redisClient.connectAsync().getStatefulConnection(); - } - RedisReactiveCommands reactive = connection.reactive(); - - connection.sync().ping(); - - List> observables = new ArrayList<>(callsPerThread); - latch.await(); - for (int i1 = 0; i1 < callsPerThread; i1++) { - observables.add(reactive.ping()); - } - - return observables; - }); - - futurama.add(submit); - } - } - - protected void preheat(int threads) throws Exception { - - List> futures = new ArrayList<>(); - - for (int i = 0; i < threads; i++) { - - futures.add(executor.submit(new Runnable() { - @Override - public void run() { - try { - Thread.sleep(100); - } catch (InterruptedException e) { - Thread.currentThread().interrupt(); - } - } - })); - } - - for (Future future : futures) { - future.get(); - } - - } -} diff --git a/src/test/java/com/lambdaworks/redis/ListStreamingAdapter.java b/src/test/java/com/lambdaworks/redis/ListStreamingAdapter.java deleted file mode 100644 index 4abe4ce10f..0000000000 --- a/src/test/java/com/lambdaworks/redis/ListStreamingAdapter.java +++ /dev/null @@ -1,42 +0,0 @@ -package com.lambdaworks.redis; - -import java.util.ArrayList; -import java.util.List; -import java.util.Vector; - -import com.lambdaworks.redis.output.KeyStreamingChannel; -import com.lambdaworks.redis.output.ScoredValueStreamingChannel; -import com.lambdaworks.redis.output.ValueStreamingChannel; - -/** - * Streaming adapter which stores every key or/and value in a list. This adapter can be used in KeyStreamingChannels and - * ValueStreamingChannels. - * - * @author Mark Paluch - * @param Valu-Type. 
- * @since 3.0 - */ -public class ListStreamingAdapter implements KeyStreamingChannel, ValueStreamingChannel, - ScoredValueStreamingChannel { - private final List list = new Vector<>(); - - @Override - public void onKey(T key) { - list.add(key); - - } - - @Override - public void onValue(T value) { - list.add(value); - } - - public List getList() { - return list; - } - - @Override - public void onValue(ScoredValue value) { - list.add(value.value); - } -} diff --git a/src/test/java/com/lambdaworks/redis/MultiConnectionTest.java b/src/test/java/com/lambdaworks/redis/MultiConnectionTest.java deleted file mode 100644 index 4d50c0b3ba..0000000000 --- a/src/test/java/com/lambdaworks/redis/MultiConnectionTest.java +++ /dev/null @@ -1,28 +0,0 @@ -package com.lambdaworks.redis; - -import static org.assertj.core.api.Assertions.assertThat; - -import java.util.Set; -import java.util.concurrent.Future; - -import org.junit.Test; - -public class MultiConnectionTest extends AbstractRedisClientTest { - - @Test - public void twoConnections() throws Exception { - - RedisAsyncConnection connection1 = client.connectAsync(); - - RedisAsyncConnection connection2 = client.connectAsync(); - - connection1.sadd("key", "member1", "member2").get(); - - Future> members = connection2.smembers("key"); - - assertThat(members.get()).hasSize(2); - - connection1.close(); - connection2.close(); - } -} diff --git a/src/test/java/com/lambdaworks/redis/PipeliningTest.java b/src/test/java/com/lambdaworks/redis/PipeliningTest.java deleted file mode 100644 index eaa42e89c2..0000000000 --- a/src/test/java/com/lambdaworks/redis/PipeliningTest.java +++ /dev/null @@ -1,89 +0,0 @@ -package com.lambdaworks.redis; - -import static org.assertj.core.api.Assertions.assertThat; - -import java.util.ArrayList; -import java.util.List; -import java.util.concurrent.TimeUnit; - -import org.junit.Test; - -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.api.async.RedisAsyncCommands; - -/** - * @author Mark Paluch - */ -@SuppressWarnings("rawtypes") -public class PipeliningTest extends AbstractRedisClientTest { - - @Test - public void basic() throws Exception { - - StatefulRedisConnection connection = client.connect(); - connection.setAutoFlushCommands(false); - - int iterations = 100; - List> futures = triggerSet(connection.async(), iterations); - - verifyNotExecuted(iterations); - - connection.flushCommands(); - - LettuceFutures.awaitAll(5, TimeUnit.SECONDS, futures.toArray(new RedisFuture[futures.size()])); - - verifyExecuted(iterations); - - connection.close(); - } - - protected void verifyExecuted(int iterations) { - for (int i = 0; i < iterations; i++) { - assertThat(redis.get(key(i))).as("Key " + key(i) + " must be " + value(i)).isEqualTo(value(i)); - } - } - - @Test - public void setAutoFlushTrueDoesNotFlush() throws Exception { - - StatefulRedisConnection connection = client.connect(); - connection.setAutoFlushCommands(false); - - int iterations = 100; - List> futures = triggerSet(connection.async(), iterations); - - verifyNotExecuted(iterations); - - connection.setAutoFlushCommands(true); - - verifyNotExecuted(iterations); - - connection.flushCommands(); - boolean result = LettuceFutures.awaitAll(5, TimeUnit.SECONDS, futures.toArray(new RedisFuture[futures.size()])); - assertThat(result).isTrue(); - - connection.close(); - } - - protected void verifyNotExecuted(int iterations) { - for (int i = 0; i < iterations; i++) { - assertThat(redis.get(key(i))).as("Key " + key(i) + " must be null").isNull(); - } - 
} - - protected List> triggerSet(RedisAsyncCommands connection, int iterations) { - List> futures = new ArrayList<>(); - for (int i = 0; i < iterations; i++) { - futures.add(connection.set(key(i), value(i))); - } - return futures; - } - - protected String value(int i) { - return value + "-" + i; - } - - protected String key(int i) { - return key + "-" + i; - } -} diff --git a/src/test/java/com/lambdaworks/redis/PoolConnectionTest.java b/src/test/java/com/lambdaworks/redis/PoolConnectionTest.java deleted file mode 100644 index fc374c10f6..0000000000 --- a/src/test/java/com/lambdaworks/redis/PoolConnectionTest.java +++ /dev/null @@ -1,224 +0,0 @@ -package com.lambdaworks.redis; - -import static org.assertj.core.api.Assertions.assertThat; -import static org.assertj.core.api.Assertions.fail; - -import java.lang.reflect.Proxy; -import java.util.concurrent.TimeUnit; - -import org.junit.Test; - -import com.google.common.base.Stopwatch; -import com.lambdaworks.redis.api.async.RedisAsyncCommands; -import com.lambdaworks.redis.api.sync.RedisCommands; - -public class PoolConnectionTest extends AbstractRedisClientTest { - - @Test - public void twoConnections() throws Exception { - - RedisConnectionPool> pool = client.pool(); - RedisCommands c1 = pool.allocateConnection(); - RedisConnection c2 = pool.allocateConnection(); - - String result1 = c1.ping(); - String result2 = c2.ping(); - assertThat(result1).isEqualTo("PONG"); - assertThat(result2).isEqualTo("PONG"); - - c1.close(); - c2.close(); - pool.close(); - } - - @Test(expected = UnsupportedOperationException.class) - public void getStatefulConnection() throws Exception { - - RedisConnectionPool> pool = client.pool(); - RedisCommands c1 = pool.allocateConnection(); - - try { - c1.getStatefulConnection(); - } finally { - c1.close(); - pool.close(); - } - } - - @Test - public void sameConnectionAfterFree() throws Exception { - RedisConnectionPool> pool = client.pool(); - RedisCommands c1 = pool.allocateConnection(); - pool.freeConnection(c1); - assertConnectionStillThere(c1); - - RedisConnection c2 = pool.allocateConnection(); - assertThat(c2).isSameAs(c1); - - c2.close(); - pool.close(); - } - - @Test - public void connectionCloseDoesNotClose() throws Exception { - RedisConnectionPool> pool = client.pool(); - RedisConnection c1 = pool.allocateConnection(); - c1.close(); - RedisConnection actualConnection1 = assertConnectionStillThere(c1); - - RedisConnection c2 = pool.allocateConnection(); - assertThat(c2).isSameAs(c1); - - RedisConnection actualConnection2 = assertConnectionStillThere(c2); - assertThat(actualConnection1).isSameAs(actualConnection2); - - c2.close(); - pool.close(); - } - - @SuppressWarnings("unchecked") - private RedisConnection assertConnectionStillThere(RedisConnection c1) { - // unwrap code from RedisConnectionPool destroyObject - if (Proxy.isProxyClass(c1.getClass())) { - RedisConnectionPool.PooledConnectionInvocationHandler> invocationHandler; - invocationHandler = (RedisConnectionPool.PooledConnectionInvocationHandler>) Proxy - .getInvocationHandler(c1); - - RedisConnection connection = invocationHandler.getConnection(); - assertThat(connection).isNotNull(); - return connection; - } - return null; - } - - @Test - public void releaseConnectionWithClose() throws Exception { - - RedisConnectionPool> pool = client.pool(); - RedisConnection c1 = pool.allocateConnection(); - assertThat(pool.getNumActive()).isEqualTo(1); - c1.close(); - assertThat(pool.getNumActive()).isEqualTo(0); - - pool.allocateConnection(); - 
assertThat(pool.getNumActive()).isEqualTo(1); - } - - @Test - public void connectionsClosedAfterPoolClose() throws Exception { - - RedisConnectionPool> pool = client.pool(); - RedisCommands c1 = pool.allocateConnection(); - pool.freeConnection(c1); - pool.close(); - - try { - c1.ping(); - fail("Missing Exception: Connection closed"); - } catch (Exception e) { - } - } - - @Test - public void connectionNotClosedWhenBorrowed() throws Exception { - - RedisConnectionPool> pool = client.pool(); - RedisConnection c1 = pool.allocateConnection(); - pool.close(); - - c1.ping(); - c1.close(); - } - - @Test - public void connectionNotClosedWhenBorrowed2() throws Exception { - - RedisConnectionPool> pool = client.pool(); - RedisCommands c1 = pool.allocateConnection(); - pool.freeConnection(c1); - c1 = pool.allocateConnection(); - pool.close(); - - c1.ping(); - c1.close(); - } - - @Test - public void testResourceCleaning() throws Exception { - - RedisClient redisClient = newRedisClient(); - - assertThat(redisClient.getChannelCount()).isEqualTo(0); - assertThat(redisClient.getResourceCount()).isEqualTo(0); - - RedisConnectionPool> pool1 = redisClient.asyncPool(); - - assertThat(redisClient.getChannelCount()).isEqualTo(0); - assertThat(redisClient.getResourceCount()).isEqualTo(1); - - pool1.allocateConnection(); - - assertThat(redisClient.getChannelCount()).isEqualTo(1); - assertThat(redisClient.getResourceCount()).isEqualTo(2); - - RedisConnectionPool> pool2 = redisClient.pool(); - - assertThat(redisClient.getResourceCount()).isEqualTo(3); - - pool2.allocateConnection(); - - assertThat(redisClient.getResourceCount()).isEqualTo(4); - - redisClient.pool().close(); - assertThat(redisClient.getResourceCount()).isEqualTo(4); - - FastShutdown.shutdown(redisClient); - - assertThat(redisClient.getChannelCount()).isEqualTo(0); - assertThat(redisClient.getResourceCount()).isEqualTo(0); - - } - - @Test - public void syncPoolPerformanceTest() throws Exception { - - RedisConnectionPool> pool = client.pool(); - RedisConnection c1 = pool.allocateConnection(); - - c1.ping(); - Stopwatch stopwatch = Stopwatch.createStarted(); - - for (int i = 0; i < 1000; i++) { - c1.ping(); - } - - long elapsed = stopwatch.stop().elapsed(TimeUnit.MILLISECONDS); - - log.info("syncPoolPerformanceTest Duration: " + elapsed + "ms"); - - c1.close(); - pool.close(); - - } - - @Test - public void asyncPoolPerformanceTest() throws Exception { - - RedisConnectionPool> pool = client.asyncPool(); - RedisAsyncConnection c1 = pool.allocateConnection(); - - c1.ping(); - Stopwatch stopwatch = Stopwatch.createStarted(); - - for (int i = 0; i < 1000; i++) { - c1.ping(); - } - - long elapsed = stopwatch.stop().elapsed(TimeUnit.MILLISECONDS); - - log.info("asyncPoolPerformanceTest Duration: " + elapsed + "ms"); - - c1.close(); - pool.close(); - } -} diff --git a/src/test/java/com/lambdaworks/redis/PrivateAccessorTest.java b/src/test/java/com/lambdaworks/redis/PrivateAccessorTest.java deleted file mode 100644 index 4bf98f434d..0000000000 --- a/src/test/java/com/lambdaworks/redis/PrivateAccessorTest.java +++ /dev/null @@ -1,61 +0,0 @@ -package com.lambdaworks.redis; - -import static org.assertj.core.api.Assertions.assertThat; - -import java.lang.reflect.Constructor; -import java.lang.reflect.Modifier; -import java.util.ArrayList; -import java.util.List; - -import org.junit.Test; -import org.junit.runner.RunWith; -import org.junit.runners.Parameterized; - -import com.lambdaworks.codec.Base16; -import com.lambdaworks.codec.CRC16; -import 
com.lambdaworks.redis.cluster.SlotHash; -import com.lambdaworks.redis.cluster.models.partitions.ClusterPartitionParser; -import com.lambdaworks.redis.cluster.models.slots.ClusterSlotsParser; -import com.lambdaworks.redis.internal.LettuceLists; -import com.lambdaworks.redis.models.command.CommandDetailParser; -import com.lambdaworks.redis.models.role.RoleParser; -import com.lambdaworks.redis.protocol.LettuceCharsets; - -/** - * @author Mark Paluch - */ -@RunWith(Parameterized.class) -@SuppressWarnings("unchecked") -public class PrivateAccessorTest { - - private Class theClass; - - @Parameterized.Parameters - public static List parameters() { - - List> classes = LettuceLists.unmodifiableList(LettuceStrings.class, LettuceFutures.class, LettuceCharsets.class, - CRC16.class, SlotHash.class, Base16.class, KillArgs.Builder.class, - SortArgs.Builder.class, ZStoreArgs.Builder.class, - ClusterSlotsParser.class, CommandDetailParser.class, RoleParser.class, - ClusterPartitionParser.class); - - List result = new ArrayList<>(); - for (Class aClass : classes) { - result.add(new Object[] { aClass }); - } - - return result; - } - - public PrivateAccessorTest(Class theClass) { - this.theClass = theClass; - } - - @Test - public void testLettuceStrings() throws Exception { - Constructor constructor = theClass.getDeclaredConstructor(); - assertThat(Modifier.isPrivate(constructor.getModifiers())).isTrue(); - constructor.setAccessible(true); - constructor.newInstance(); - } -} diff --git a/src/test/java/com/lambdaworks/redis/ReactiveConnectionTest.java b/src/test/java/com/lambdaworks/redis/ReactiveConnectionTest.java deleted file mode 100644 index b9df3f08d2..0000000000 --- a/src/test/java/com/lambdaworks/redis/ReactiveConnectionTest.java +++ /dev/null @@ -1,218 +0,0 @@ -package com.lambdaworks.redis; - -import static com.google.code.tempusfugit.temporal.Duration.millis; -import static org.assertj.core.api.Assertions.assertThat; - -import java.util.ArrayList; -import java.util.HashMap; -import java.util.List; -import java.util.Map; -import java.util.concurrent.CountDownLatch; -import java.util.concurrent.TimeUnit; - -import org.junit.After; -import org.junit.Before; -import org.junit.Rule; -import org.junit.Test; -import org.junit.rules.ExpectedException; - -import com.lambdaworks.Delay; -import com.lambdaworks.Wait; -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.api.rx.RedisReactiveCommands; - -import rx.Observable; -import rx.Subscriber; -import rx.observers.TestSubscriber; -import rx.schedulers.Schedulers; - -public class ReactiveConnectionTest extends AbstractRedisClientTest { - - private RedisReactiveCommands reactive; - - @Rule - public ExpectedException exception = ExpectedException.none(); - private StatefulRedisConnection stateful; - - @Before - public void openReactiveConnection() throws Exception { - stateful = client.connect(); - reactive = stateful.reactive(); - } - - @After - public void closeReactiveConnection() throws Exception { - reactive.close(); - } - - @Test - public void doNotFireCommandUntilObservation() throws Exception { - Observable set = reactive.set(key, value); - Delay.delay(millis(200)); - assertThat(redis.get(key)).isNull(); - set.subscribe(); - Wait.untilEquals(value, () -> redis.get(key)).waitOrTimeout(); - - assertThat(redis.get(key)).isEqualTo(value); - } - - @Test - public void fireCommandAfterObserve() throws Exception { - assertThat(reactive.set(key, value).toBlocking().first()).isEqualTo("OK"); - 
assertThat(redis.get(key)).isEqualTo(value); - } - - @Test - public void isOpen() throws Exception { - assertThat(reactive.isOpen()).isTrue(); - } - - @Test - public void getStatefulConnection() throws Exception { - assertThat(reactive.getStatefulConnection()).isSameAs(stateful); - } - - @Test - public void testCancelCommand() throws Exception { - - List result = new ArrayList<>(); - reactive.clientPause(1000).subscribe(); - reactive.set(key, value).subscribe(new CompletionSubscriber(result)); - Delay.delay(millis(100)); - - reactive.reset(); - assertThat(result).hasSize(1).contains("completed"); - } - - @Test - public void testEcho() throws Exception { - String result = reactive.echo("echo").toBlocking().first(); - assertThat(result).isEqualTo("echo"); - } - - @Test - public void testMultiCancel() throws Exception { - - List result = new ArrayList<>(); - reactive.clientPause(1000).subscribe(); - - Observable set = reactive.set(key, value); - set.subscribe(new CompletionSubscriber(result)); - set.subscribe(new CompletionSubscriber(result)); - set.subscribe(new CompletionSubscriber(result)); - - Delay.delay(millis(100)); - reactive.reset(); - assertThat(result).hasSize(3).contains("completed"); - } - - @Test - public void multiSubscribe() throws Exception { - reactive.set(key, "1").subscribe(); - Observable incr = reactive.incr(key); - incr.subscribe(); - incr.subscribe(); - incr.subscribe(); - - Wait.untilEquals("4", () -> redis.get(key)).waitOrTimeout(); - - assertThat(redis.get(key)).isEqualTo("4"); - } - - @Test - public void transactional() throws Exception { - - final CountDownLatch sync = new CountDownLatch(1); - - RedisReactiveCommands reactive = client.connect().reactive(); - - reactive.multi().subscribe(multiResponse -> { - reactive.set(key, "1").subscribe(); - reactive.incr(key).subscribe(getResponse -> { - sync.countDown(); - }); - reactive.exec().subscribe(); - }); - - sync.await(5, TimeUnit.SECONDS); - - String result = redis.get(key); - assertThat(result).isEqualTo("2"); - } - - @Test - public void reactiveChain() throws Exception { - - Map map = new HashMap<>(); - map.put(key, value); - map.put("key1", "value1"); - - reactive.mset(map).toBlocking().first(); - - List values = reactive.keys("*").flatMap(s -> reactive.get(s)).toList().subscribeOn(Schedulers.immediate()) - .toBlocking().first(); - - assertThat(values).hasSize(2).contains(value, "value1"); - } - - @Test - public void auth() throws Exception { - List errors = new ArrayList<>(); - reactive.auth("error").doOnError(errors::add).subscribe(new TestSubscriber<>()); - Delay.delay(millis(50)); - assertThat(errors).hasSize(1); - } - - @Test - public void subscriberCompletingWithExceptionShouldBeHandledSafely() throws Exception { - - Observable.concat(reactive.set("keyA", "valueA"), reactive.set("keyB", "valueB")).toBlocking().last(); - - reactive.get("keyA").subscribe(createSubscriberWithExceptionOnComplete()); - reactive.get("keyA").subscribe(createSubscriberWithExceptionOnComplete()); - - String valueB = reactive.get("keyB").toBlocking().toFuture().get(); - assertThat(valueB).isEqualTo("valueB"); - } - - private static Subscriber createSubscriberWithExceptionOnComplete() { - return new Subscriber() { - @Override - public void onCompleted() { - throw new RuntimeException("throwing something"); - } - - @Override - public void onError(Throwable e) { - } - - @Override - public void onNext(String s) { - } - }; - } - - private static class CompletionSubscriber extends Subscriber { - - private final List result; - - public 
CompletionSubscriber(List result) { - this.result = result; - } - - @Override - public void onCompleted() { - result.add("completed"); - } - - @Override - public void onError(Throwable e) { - result.add(e); - } - - @Override - public void onNext(Object o) { - result.add(o); - } - } -} diff --git a/src/test/java/com/lambdaworks/redis/ReactiveStreamingOutputTest.java b/src/test/java/com/lambdaworks/redis/ReactiveStreamingOutputTest.java deleted file mode 100644 index 263a793fad..0000000000 --- a/src/test/java/com/lambdaworks/redis/ReactiveStreamingOutputTest.java +++ /dev/null @@ -1,123 +0,0 @@ -package com.lambdaworks.redis; - -import static org.assertj.core.api.Assertions.*; - -import org.junit.After; -import org.junit.Before; -import org.junit.Rule; -import org.junit.Test; -import org.junit.rules.ExpectedException; - -import rx.observers.TestSubscriber; - -import com.lambdaworks.RandomKeys; -import com.lambdaworks.redis.GeoArgs.Unit; -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.api.rx.RedisReactiveCommands; - -public class ReactiveStreamingOutputTest extends AbstractRedisClientTest { - - private RedisReactiveCommands reactive; - private TestSubscriber subscriber = TestSubscriber.create(); - - @Rule - public ExpectedException exception = ExpectedException.none(); - private StatefulRedisConnection stateful; - - @Before - public void openReactiveConnection() throws Exception { - stateful = client.connect(); - reactive = stateful.reactive(); - } - - @After - public void closeReactiveConnection() throws Exception { - reactive.close(); - } - - @Test - public void keyListCommandShouldReturnAllElements() throws Exception { - - redis.mset(RandomKeys.MAP); - - reactive.keys("*").subscribe(subscriber); - subscriber.awaitTerminalEvent(); - - assertThat(subscriber.getOnNextEvents()).containsAll(RandomKeys.KEYS); - } - - @Test - public void valueListCommandShouldReturnAllElements() throws Exception { - - redis.mset(RandomKeys.MAP); - - reactive.mget(RandomKeys.KEYS.toArray(new String[RandomKeys.COUNT])).subscribe(subscriber); - subscriber.awaitTerminalEvent(); - - assertThat(subscriber.getOnNextEvents()).containsAll(RandomKeys.VALUES); - } - - @Test - public void stringListCommandShouldReturnAllElements() throws Exception { - - reactive.configGet("*").subscribe(subscriber); - subscriber.awaitTerminalEvent(); - - assertThat(subscriber.getOnNextEvents().size()).isGreaterThan(120); - } - - @Test - public void booleanListCommandShouldReturnAllElements() throws Exception { - - TestSubscriber subscriber = TestSubscriber.create(); - - reactive.scriptExists("a", "b", "c").subscribe(subscriber); - subscriber.awaitTerminalEvent(); - - assertThat(subscriber.getOnNextEvents()).hasSize(3).doesNotContainNull(); - } - - @Test - public void scoredValueListCommandShouldReturnAllElements() throws Exception { - - TestSubscriber> subscriber = TestSubscriber.create(); - - redis.zadd(key, 1d, "v1", 2d, "v2", 3d, "v3"); - - reactive.zrangeWithScores(key, 0, -1).subscribe(subscriber); - subscriber.awaitTerminalEvent(); - - assertThat(subscriber.getOnNextEvents()).hasSize(3).contains(sv(1, "v1"), sv(2, "v2"), sv(3, "v3")); - } - - @Test - public void geoWithinListCommandShouldReturnAllElements() throws Exception { - - TestSubscriber> subscriber = TestSubscriber.create(); - - redis.geoadd(key, 50, 20, "value1"); - redis.geoadd(key, 50, 21, "value2"); - - reactive.georadius(key, 50, 20, 1000, Unit.km, new GeoArgs().withHash()).subscribe(subscriber); - subscriber.awaitTerminalEvent(); - 
- assertThat(subscriber.getOnNextEvents()).hasSize(2).contains( - new GeoWithin("value1", null, 3542523898362974L, null), - new GeoWithin<>("value2", null, 3542609801095198L, null)); - } - - @Test - public void geoCoordinatesListCommandShouldReturnAllElements() throws Exception { - - TestSubscriber subscriber = TestSubscriber.create(); - - redis.geoadd(key, 50, 20, "value1"); - redis.geoadd(key, 50, 21, "value2"); - - reactive.geopos(key, "value1", "value2").subscribe(subscriber); - subscriber.awaitTerminalEvent(); - - assertThat(subscriber.getOnNextEvents()).hasSize(2).doesNotContainNull(); - } - -} diff --git a/src/test/java/com/lambdaworks/redis/RedisClientConnectionTest.java b/src/test/java/com/lambdaworks/redis/RedisClientConnectionTest.java deleted file mode 100644 index 0a0c935496..0000000000 --- a/src/test/java/com/lambdaworks/redis/RedisClientConnectionTest.java +++ /dev/null @@ -1,286 +0,0 @@ -package com.lambdaworks.redis; - -import static com.lambdaworks.redis.RedisURI.Builder.redis; -import static org.assertj.core.api.AssertionsForClassTypes.assertThat; - -import java.util.concurrent.TimeUnit; - -import org.junit.Before; -import org.junit.Test; -import org.springframework.test.util.ReflectionTestUtils; - -import com.lambdaworks.redis.api.StatefulConnection; -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.codec.Utf8StringCodec; -import com.lambdaworks.redis.pubsub.StatefulRedisPubSubConnection; -import com.lambdaworks.redis.sentinel.api.StatefulRedisSentinelConnection; - -/** - * @author Mark Paluch - */ -public class RedisClientConnectionTest extends AbstractRedisClientTest { - - public static final Utf8StringCodec CODEC = new Utf8StringCodec(); - public static final int EXPECTED_TIMEOUT = 500; - public static final TimeUnit EXPECTED_TIME_UNIT = TimeUnit.MILLISECONDS; - - @Before - public void before() throws Exception { - client.setDefaultTimeout(EXPECTED_TIMEOUT, EXPECTED_TIME_UNIT); - } - - /* - * Pool/Sync - */ - @Test - public void poolClientUri() throws Exception { - client.pool().close(); - } - - @Test - public void poolClientUriConfig() throws Exception { - client.pool(1, 1).close(); - } - - @Test - public void poolCodecClientUriConfig() throws Exception { - client.pool(CODEC, 1, 1).close(); - } - - /* - * Pool/Async - */ - @Test - public void asyncPoolClientUri() throws Exception { - client.asyncPool().close(); - } - - @Test - public void asyncPoolClientUriConfig() throws Exception { - client.asyncPool(1, 1).close(); - } - - @Test - public void asyncPoolCodecClientUriConfig() throws Exception { - client.asyncPool(CODEC, 1, 1).close(); - } - - /* - * Standalone/Stateful - */ - @Test - public void connectClientUri() throws Exception { - - StatefulRedisConnection connection = client.connect(); - assertTimeout(connection, EXPECTED_TIMEOUT, EXPECTED_TIME_UNIT); - connection.close(); - } - - @Test - public void connectCodecClientUri() throws Exception { - StatefulRedisConnection connection = client.connect(CODEC); - assertTimeout(connection, EXPECTED_TIMEOUT, EXPECTED_TIME_UNIT); - connection.close(); - } - - @Test - public void connectOwnUri() throws Exception { - RedisURI redisURI = redis(host, port).build(); - StatefulRedisConnection connection = client.connect(redisURI); - assertTimeout(connection, redisURI.getTimeout(), redisURI.getUnit()); - connection.close(); - } - - @Test(expected = IllegalArgumentException.class) - public void connectMissingHostAndSocketUri() throws Exception { - client.connect(new RedisURI()); - } - - 
@Test(expected = IllegalArgumentException.class) - public void connectSentinelMissingHostAndSocketUri() throws Exception { - client.connect(invalidSentinel()); - } - - @Test - public void connectCodecOwnUri() throws Exception { - RedisURI redisURI = redis(host, port).build(); - StatefulRedisConnection connection = client.connect(CODEC, redisURI); - assertTimeout(connection, redisURI.getTimeout(), redisURI.getUnit()); - connection.close(); - } - - @Test(expected = IllegalArgumentException.class) - public void connectCodecMissingHostAndSocketUri() throws Exception { - client.connect(CODEC, new RedisURI()); - } - - @Test(expected = IllegalArgumentException.class) - public void connectcodecSentinelMissingHostAndSocketUri() throws Exception { - client.connect(CODEC, invalidSentinel()); - } - - /* - * Deprecated: Standalone/Async - */ - @Test - public void connectAsyncClientUri() throws Exception { - client.connectAsync().close(); - } - - @Test - public void connectAsyncCodecClientUri() throws Exception { - client.connectAsync(CODEC).close(); - } - - @Test - public void connectAsyncOwnUri() throws Exception { - client.connectAsync(redis(host, port).build()).close(); - } - - @Test - public void connectAsyncCodecOwnUri() throws Exception { - client.connectAsync(CODEC, redis(host, port).build()).close(); - } - - /* - * Standalone/PubSub Stateful - */ - @Test - public void connectPubSubClientUri() throws Exception { - StatefulRedisPubSubConnection connection = client.connectPubSub(); - assertTimeout(connection, EXPECTED_TIMEOUT, EXPECTED_TIME_UNIT); - connection.close(); - } - - @Test - public void connectPubSubCodecClientUri() throws Exception { - StatefulRedisPubSubConnection connection = client.connectPubSub(CODEC); - assertTimeout(connection, EXPECTED_TIMEOUT, EXPECTED_TIME_UNIT); - connection.close(); - } - - @Test - public void connectPubSubOwnUri() throws Exception { - RedisURI redisURI = redis(host, port).build(); - StatefulRedisPubSubConnection connection = client.connectPubSub(redisURI); - assertTimeout(connection, redisURI.getTimeout(), redisURI.getUnit()); - connection.close(); - } - - @Test(expected = IllegalArgumentException.class) - public void connectPubSubMissingHostAndSocketUri() throws Exception { - client.connectPubSub(new RedisURI()); - } - - @Test(expected = IllegalArgumentException.class) - public void connectPubSubSentinelMissingHostAndSocketUri() throws Exception { - client.connectPubSub(invalidSentinel()); - } - - @Test - public void connectPubSubCodecOwnUri() throws Exception { - RedisURI redisURI = redis(host, port).build(); - StatefulRedisPubSubConnection connection = client.connectPubSub(CODEC, redisURI); - assertTimeout(connection, redisURI.getTimeout(), redisURI.getUnit()); - connection.close(); - } - - @Test(expected = IllegalArgumentException.class) - public void connectPubSubCodecMissingHostAndSocketUri() throws Exception { - client.connectPubSub(CODEC, new RedisURI()); - } - - @Test(expected = IllegalArgumentException.class) - public void connectPubSubCodecSentinelMissingHostAndSocketUri() throws Exception { - client.connectPubSub(CODEC, invalidSentinel()); - } - - /* - * Sentinel Stateful - */ - @Test - public void connectSentinelClientUri() throws Exception { - StatefulRedisSentinelConnection connection = client.connectSentinel(); - assertTimeout(connection, EXPECTED_TIMEOUT, EXPECTED_TIME_UNIT); - connection.close(); - } - - @Test - public void connectSentinelCodecClientUri() throws Exception { - StatefulRedisSentinelConnection connection = 
client.connectSentinel(CODEC); - assertTimeout(connection, EXPECTED_TIMEOUT, EXPECTED_TIME_UNIT); - connection.close(); - } - - @Test(expected = IllegalArgumentException.class) - public void connectSentinelAndMissingHostAndSocketUri() throws Exception { - client.connectSentinel(new RedisURI()); - } - - @Test(expected = IllegalArgumentException.class) - public void connectSentinelSentinelMissingHostAndSocketUri() throws Exception { - client.connectSentinel(invalidSentinel()); - } - - @Test - public void connectSentinelOwnUri() throws Exception { - RedisURI redisURI = redis(host, port).build(); - StatefulRedisSentinelConnection connection = client.connectSentinel(redisURI); - assertTimeout(connection, redisURI.getTimeout(), redisURI.getUnit()); - connection.close(); - } - - @Test - public void connectSentinelCodecOwnUri() throws Exception { - RedisURI redisURI = redis(host, port).build(); - StatefulRedisSentinelConnection connection = client.connectSentinel(CODEC, redisURI); - assertTimeout(connection, redisURI.getTimeout(), redisURI.getUnit()); - connection.close(); - } - - @Test(expected = IllegalArgumentException.class) - public void connectSentinelCodecMissingHostAndSocketUri() throws Exception { - client.connectSentinel(CODEC, new RedisURI()); - } - - @Test(expected = IllegalArgumentException.class) - public void connectSentinelCodecSentinelMissingHostAndSocketUri() throws Exception { - client.connectSentinel(CODEC, invalidSentinel()); - } - - /* - * Deprecated: Sentinel/Async - */ - @Test - public void connectSentinelAsyncClientUri() throws Exception { - client.connectSentinelAsync().close(); - } - - @Test - public void connectSentinelAsyncCodecClientUri() throws Exception { - client.connectSentinelAsync(CODEC).close(); - } - - @Test - public void connectSentineAsynclOwnUri() throws Exception { - client.connectSentinelAsync(redis(host, port).build()).close(); - } - - @Test - public void connectSentinelAsyncCodecOwnUri() throws Exception { - client.connectSentinelAsync(CODEC, redis(host, port).build()).close(); - } - - private RedisURI invalidSentinel() { - RedisURI redisURI = new RedisURI(); - redisURI.getSentinels().add(new RedisURI()); - - return redisURI; - } - - private void assertTimeout(StatefulConnection connection, long expectedTimeout, TimeUnit expectedTimeUnit) { - - assertThat(ReflectionTestUtils.getField(connection, "timeout")).isEqualTo(expectedTimeout); - assertThat(ReflectionTestUtils.getField(connection, "unit")).isEqualTo(expectedTimeUnit); - } -} diff --git a/src/test/java/com/lambdaworks/redis/RedisClientFactoryBeanTest.java b/src/test/java/com/lambdaworks/redis/RedisClientFactoryBeanTest.java deleted file mode 100644 index c8728f8b82..0000000000 --- a/src/test/java/com/lambdaworks/redis/RedisClientFactoryBeanTest.java +++ /dev/null @@ -1,157 +0,0 @@ -package com.lambdaworks.redis; - -import static org.assertj.core.api.Assertions.assertThat; - -import java.net.URI; - -import org.junit.After; -import org.junit.Before; -import org.junit.Test; - -import com.lambdaworks.redis.support.RedisClientFactoryBean; - -public class RedisClientFactoryBeanTest { - private RedisClientFactoryBean sut = new RedisClientFactoryBean(); - - @After - public void tearDown() throws Exception { - FastShutdown.shutdown(sut.getObject()); - sut.destroy(); - } - - @Test - public void testSimpleUri() throws Exception { - String uri = "redis://localhost/2"; - - sut.setUri(URI.create(uri)); - sut.setPassword("password"); - sut.afterPropertiesSet(); - - RedisURI redisURI = sut.getRedisURI(); - - 
assertThat(redisURI.getDatabase()).isEqualTo(2); - assertThat(redisURI.getHost()).isEqualTo("localhost"); - assertThat(redisURI.getPort()).isEqualTo(RedisURI.DEFAULT_REDIS_PORT); - assertThat(new String(redisURI.getPassword())).isEqualTo("password"); - } - - @Test - public void testSimpleUriWithoutDB() throws Exception { - String uri = "redis://localhost/"; - - sut.setUri(URI.create(uri)); - sut.afterPropertiesSet(); - - RedisURI redisURI = sut.getRedisURI(); - - assertThat(redisURI.getDatabase()).isEqualTo(0); - } - - @Test - public void testSimpleUriWithoutDB2() throws Exception { - String uri = "redis://localhost/"; - - sut.setUri(URI.create(uri)); - sut.afterPropertiesSet(); - - RedisURI redisURI = sut.getRedisURI(); - - assertThat(redisURI.getDatabase()).isEqualTo(0); - } - - @Test - public void testSimpleUriWithPort() throws Exception { - String uri = "redis://localhost:1234/0"; - - sut.setUri(URI.create(uri)); - sut.setPassword("password"); - sut.afterPropertiesSet(); - - RedisURI redisURI = sut.getRedisURI(); - - assertThat(redisURI.getDatabase()).isEqualTo(0); - assertThat(redisURI.getHost()).isEqualTo("localhost"); - assertThat(redisURI.getPort()).isEqualTo(1234); - assertThat(new String(redisURI.getPassword())).isEqualTo("password"); - } - - @Test - public void testSentinelUri() throws Exception { - String uri = "redis-sentinel://localhost/1#myMaster"; - - sut.setUri(URI.create(uri)); - sut.setPassword("password"); - sut.afterPropertiesSet(); - - RedisURI redisURI = sut.getRedisURI(); - - assertThat(redisURI.getDatabase()).isEqualTo(1); - - RedisURI sentinelUri = redisURI.getSentinels().get(0); - assertThat(sentinelUri.getHost()).isEqualTo("localhost"); - assertThat(sentinelUri.getPort()).isEqualTo(RedisURI.DEFAULT_SENTINEL_PORT); - assertThat(new String(redisURI.getPassword())).isEqualTo("password"); - assertThat(redisURI.getSentinelMasterId()).isEqualTo("myMaster"); - } - - @Test - public void testSentinelUriWithPort() throws Exception { - String uri = "redis-sentinel://localhost:1234/1#myMaster"; - - sut.setUri(URI.create(uri)); - sut.setPassword("password"); - sut.afterPropertiesSet(); - - RedisURI redisURI = sut.getRedisURI(); - - assertThat(redisURI.getDatabase()).isEqualTo(1); - - RedisURI sentinelUri = redisURI.getSentinels().get(0); - assertThat(sentinelUri.getHost()).isEqualTo("localhost"); - assertThat(sentinelUri.getPort()).isEqualTo(1234); - assertThat(new String(redisURI.getPassword())).isEqualTo("password"); - assertThat(redisURI.getSentinelMasterId()).isEqualTo("myMaster"); - } - - @Test - public void testMultipleSentinelUri() throws Exception { - String uri = "redis-sentinel://localhost,localhost2,localhost3/1#myMaster"; - - sut.setUri(URI.create(uri)); - sut.setPassword("password"); - sut.afterPropertiesSet(); - - RedisURI redisURI = sut.getRedisURI(); - - assertThat(redisURI.getDatabase()).isEqualTo(1); - assertThat(redisURI.getSentinels()).hasSize(3); - - RedisURI sentinelUri = redisURI.getSentinels().get(0); - assertThat(sentinelUri.getHost()).isEqualTo("localhost"); - assertThat(sentinelUri.getPort()).isEqualTo(RedisURI.DEFAULT_SENTINEL_PORT); - assertThat(redisURI.getSentinelMasterId()).isEqualTo("myMaster"); - } - - @Test - public void testMultipleSentinelUriWithPorts() throws Exception { - String uri = "redis-sentinel://localhost,localhost2:1234,localhost3/1#myMaster"; - - sut.setUri(URI.create(uri)); - sut.setPassword("password"); - sut.afterPropertiesSet(); - - RedisURI redisURI = sut.getRedisURI(); - - assertThat(redisURI.getDatabase()).isEqualTo(1); 
- assertThat(redisURI.getSentinels()).hasSize(3); - - RedisURI sentinelUri1 = redisURI.getSentinels().get(0); - assertThat(sentinelUri1.getHost()).isEqualTo("localhost"); - assertThat(sentinelUri1.getPort()).isEqualTo(RedisURI.DEFAULT_SENTINEL_PORT); - - RedisURI sentinelUri2 = redisURI.getSentinels().get(1); - assertThat(sentinelUri2.getHost()).isEqualTo("localhost2"); - assertThat(sentinelUri2.getPort()).isEqualTo(1234); - assertThat(redisURI.getSentinelMasterId()).isEqualTo("myMaster"); - } -} diff --git a/src/test/java/com/lambdaworks/redis/RedisClientFactoryTest.java b/src/test/java/com/lambdaworks/redis/RedisClientFactoryTest.java deleted file mode 100644 index ecb6f4d786..0000000000 --- a/src/test/java/com/lambdaworks/redis/RedisClientFactoryTest.java +++ /dev/null @@ -1,94 +0,0 @@ -package com.lambdaworks.redis; - -import com.lambdaworks.TestClientResources; -import org.junit.AfterClass; -import org.junit.BeforeClass; -import org.junit.Test; - -import com.lambdaworks.redis.resource.ClientResources; -import com.lambdaworks.redis.resource.DefaultClientResources; - -/** - * @author Mark Paluch - */ -public class RedisClientFactoryTest { - - private final static String URI = "redis://" + TestSettings.host() + ":" + TestSettings.port(); - private final static RedisURI REDIS_URI = RedisURI.create(URI); - private static ClientResources DEFAULT_RESOURCES; - - @BeforeClass - public static void beforeClass() throws Exception { - DEFAULT_RESOURCES = TestClientResources.create(); - } - - @AfterClass - public static void afterClass() throws Exception { - FastShutdown.shutdown(DEFAULT_RESOURCES); - } - - @Test - public void plain() throws Exception { - FastShutdown.shutdown(RedisClient.create()); - } - - @Test - public void withStringUri() throws Exception { - FastShutdown.shutdown(RedisClient.create(URI)); - } - - @Test(expected = IllegalArgumentException.class) - public void withStringUriNull() throws Exception { - RedisClient.create((String) null); - } - - @Test - public void withUri() throws Exception { - FastShutdown.shutdown(RedisClient.create(REDIS_URI)); - } - - @Test(expected = IllegalArgumentException.class) - public void withUriNull() throws Exception { - RedisClient.create((RedisURI) null); - } - - @Test - public void clientResources() throws Exception { - FastShutdown.shutdown(RedisClient.create(DEFAULT_RESOURCES)); - } - - @Test(expected = IllegalArgumentException.class) - public void clientResourcesNull() throws Exception { - RedisClient.create((ClientResources) null); - } - - @Test - public void clientResourcesWithStringUri() throws Exception { - FastShutdown.shutdown(RedisClient.create(DEFAULT_RESOURCES, URI)); - } - - @Test(expected = IllegalArgumentException.class) - public void clientResourcesWithStringUriNull() throws Exception { - RedisClient.create(DEFAULT_RESOURCES, (String) null); - } - - @Test(expected = IllegalArgumentException.class) - public void clientResourcesNullWithStringUri() throws Exception { - RedisClient.create(null, URI); - } - - @Test - public void clientResourcesWithUri() throws Exception { - FastShutdown.shutdown(RedisClient.create(DEFAULT_RESOURCES, REDIS_URI)); - } - - @Test(expected = IllegalArgumentException.class) - public void clientResourcesWithUriNull() throws Exception { - RedisClient.create(DEFAULT_RESOURCES, (RedisURI) null); - } - - @Test(expected = IllegalArgumentException.class) - public void clientResourcesNullWithUri() throws Exception { - RedisClient.create(null, REDIS_URI); - } -} diff --git 
a/src/test/java/com/lambdaworks/redis/RedisClientTest.java b/src/test/java/com/lambdaworks/redis/RedisClientTest.java deleted file mode 100644 index 16b8ca7e23..0000000000 --- a/src/test/java/com/lambdaworks/redis/RedisClientTest.java +++ /dev/null @@ -1,111 +0,0 @@ -package com.lambdaworks.redis; - -import static org.assertj.core.api.Assertions.assertThat; - -import java.lang.reflect.Field; -import java.util.Map; -import java.util.concurrent.TimeUnit; - -import org.junit.Test; - -import com.lambdaworks.redis.resource.ClientResources; -import com.lambdaworks.redis.resource.DefaultClientResources; -import com.lambdaworks.redis.resource.DefaultEventLoopGroupProvider; -import io.netty.util.concurrent.EventExecutorGroup; - -/** - * @author Mark Paluch - */ -public class RedisClientTest { - - @Test - public void reuseClientConnections() throws Exception { - - // given - DefaultClientResources clientResources = DefaultClientResources.create(); - Map, EventExecutorGroup> eventLoopGroups = getExecutors(clientResources); - - RedisClient redisClient1 = newClient(clientResources); - RedisClient redisClient2 = newClient(clientResources); - connectAndClose(redisClient1); - connectAndClose(redisClient2); - - // when - EventExecutorGroup executor = eventLoopGroups.values().iterator().next(); - redisClient1.shutdown(0, 0, TimeUnit.MILLISECONDS); - - // then - connectAndClose(redisClient2); - - clientResources.shutdown(0, 0, TimeUnit.MILLISECONDS).get(); - - assertThat(eventLoopGroups).isEmpty(); - assertThat(executor.isShuttingDown()).isTrue(); - assertThat(clientResources.eventExecutorGroup().isShuttingDown()).isTrue(); - } - - @Test - public void reuseClientConnectionsShutdownTwoClients() throws Exception { - - // given - DefaultClientResources clientResources = DefaultClientResources.create(); - Map, EventExecutorGroup> eventLoopGroups = getExecutors(clientResources); - - RedisClient redisClient1 = newClient(clientResources); - RedisClient redisClient2 = newClient(clientResources); - connectAndClose(redisClient1); - connectAndClose(redisClient2); - - // when - EventExecutorGroup executor = eventLoopGroups.values().iterator().next(); - - redisClient1.shutdown(0, 0, TimeUnit.MILLISECONDS); - assertThat(executor.isShutdown()).isFalse(); - connectAndClose(redisClient2); - redisClient2.shutdown(0, 0, TimeUnit.MILLISECONDS); - - // then - assertThat(eventLoopGroups).isEmpty(); - assertThat(executor.isShutdown()).isTrue(); - assertThat(clientResources.eventExecutorGroup().isShuttingDown()).isFalse(); - - // cleanup - clientResources.shutdown(0, 0, TimeUnit.MILLISECONDS).get(); - assertThat(clientResources.eventExecutorGroup().isShuttingDown()).isTrue(); - } - - @Test - public void managedClientResources() throws Exception { - - // given - RedisClient redisClient1 = RedisClient.create(RedisURI.create(TestSettings.host(), TestSettings.port())); - ClientResources clientResources = redisClient1.getResources(); - Map, EventExecutorGroup> eventLoopGroups = getExecutors(clientResources); - connectAndClose(redisClient1); - - // when - EventExecutorGroup executor = eventLoopGroups.values().iterator().next(); - - redisClient1.shutdown(0, 0, TimeUnit.MILLISECONDS); - - // then - assertThat(eventLoopGroups).isEmpty(); - assertThat(executor.isShuttingDown()).isTrue(); - assertThat(clientResources.eventExecutorGroup().isShuttingDown()).isTrue(); - } - - private void connectAndClose(RedisClient client) { - client.connect().close(); - } - - private RedisClient newClient(DefaultClientResources clientResources) { - return 
RedisClient.create(clientResources, RedisURI.create(TestSettings.host(), TestSettings.port())); - } - - private Map, EventExecutorGroup> getExecutors(ClientResources clientResources) - throws Exception { - Field eventLoopGroupsField = DefaultEventLoopGroupProvider.class.getDeclaredField("eventLoopGroups"); - eventLoopGroupsField.setAccessible(true); - return (Map) eventLoopGroupsField.get(clientResources.eventLoopGroupProvider()); - } -} diff --git a/src/test/java/com/lambdaworks/redis/RedisURIBuilderTest.java b/src/test/java/com/lambdaworks/redis/RedisURIBuilderTest.java deleted file mode 100644 index d6b3e1c1f6..0000000000 --- a/src/test/java/com/lambdaworks/redis/RedisURIBuilderTest.java +++ /dev/null @@ -1,202 +0,0 @@ -package com.lambdaworks.redis; - -import static org.assertj.core.api.Assertions.assertThat; - -import java.io.File; -import java.util.concurrent.TimeUnit; - -import org.junit.Test; - -public class RedisURIBuilderTest { - - @Test - public void sentinel() throws Exception { - RedisURI result = RedisURI.Builder.sentinel("localhost").withTimeout(2, TimeUnit.HOURS).build(); - assertThat(result.getSentinels()).hasSize(1); - assertThat(result.getTimeout()).isEqualTo(2); - assertThat(result.getUnit()).isEqualTo(TimeUnit.HOURS); - } - - @Test(expected = IllegalStateException.class) - public void sentinelWithHostShouldFail() throws Exception { - RedisURI.builder().sentinel("localhost").withHost("localhost"); - } - - @Test - public void sentinelWithPort() throws Exception { - RedisURI result = RedisURI.Builder.sentinel("localhost", 1).withTimeout(2, TimeUnit.HOURS).build(); - assertThat(result.getSentinels()).hasSize(1); - assertThat(result.getTimeout()).isEqualTo(2); - assertThat(result.getUnit()).isEqualTo(TimeUnit.HOURS); - } - - @Test(expected = IllegalStateException.class) - public void shouldFailIfBuilderIsEmpty() throws Exception { - RedisURI.builder().build(); - } - - @Test - public void redisWithHostAndPort() throws Exception { - RedisURI result = RedisURI.builder().withHost("localhost").withPort(1234).build(); - - assertThat(result.getSentinels()).isEmpty(); - assertThat(result.getHost()).isEqualTo("localhost"); - assertThat(result.getPort()).isEqualTo(1234); - } - - @Test - public void redisWithPort() throws Exception { - RedisURI result = RedisURI.Builder.redis("localhost").withPort(1234).build(); - - assertThat(result.getSentinels()).isEmpty(); - assertThat(result.getHost()).isEqualTo("localhost"); - assertThat(result.getPort()).isEqualTo(1234); - } - - @Test(expected = IllegalArgumentException.class) - public void redisHostAndPortWithInvalidPort() throws Exception { - RedisURI.Builder.redis("localhost", -1); - } - - @Test(expected = IllegalArgumentException.class) - public void redisWithInvalidPort() throws Exception { - RedisURI.Builder.redis("localhost").withPort(65536); - } - - @Test - public void redisFromUrl() throws Exception { - RedisURI result = RedisURI.create(RedisURI.URI_SCHEME_REDIS + "://password@localhost/1"); - - assertThat(result.getSentinels()).isEmpty(); - assertThat(result.getHost()).isEqualTo("localhost"); - assertThat(result.getPort()).isEqualTo(RedisURI.DEFAULT_REDIS_PORT); - assertThat(result.getPassword()).isEqualTo("password".toCharArray()); - assertThat(result.isSsl()).isFalse(); - } - - @Test - public void redisFromUrlNoPassword() throws Exception { - RedisURI redisURI = RedisURI.create("redis://localhost:1234/5"); - assertThat(redisURI.getPassword()).isNull(); - - redisURI = RedisURI.create("redis://h:@localhost.com:14589"); - 
assertThat(redisURI.getPassword()).isNull(); - } - - @Test - public void redisFromUrlPassword() throws Exception { - RedisURI redisURI = RedisURI.create("redis://h:password@localhost.com:14589"); - assertThat(redisURI.getPassword()).isEqualTo("password".toCharArray()); - } - - @Test - public void redisWithSSL() throws Exception { - RedisURI result = RedisURI.Builder.redis("localhost").withSsl(true).withStartTls(true).build(); - - assertThat(result.getSentinels()).isEmpty(); - assertThat(result.getHost()).isEqualTo("localhost"); - assertThat(result.isSsl()).isTrue(); - assertThat(result.isStartTls()).isTrue(); - } - - @Test - public void redisSslFromUrl() throws Exception { - RedisURI result = RedisURI.create(RedisURI.URI_SCHEME_REDIS_SECURE + "://:password@localhost/1"); - - assertThat(result.getSentinels()).isEmpty(); - assertThat(result.getHost()).isEqualTo("localhost"); - assertThat(result.getPort()).isEqualTo(RedisURI.DEFAULT_REDIS_PORT); - assertThat(result.getPassword()).isEqualTo("password".toCharArray()); - assertThat(result.isSsl()).isTrue(); - } - - @Test - public void redisSentinelFromUrl() throws Exception { - RedisURI result = RedisURI.create(RedisURI.URI_SCHEME_REDIS_SENTINEL + "://password@localhost/1#master"); - - assertThat(result.getSentinels()).hasSize(1); - assertThat(result.getHost()).isNull(); - assertThat(result.getPort()).isEqualTo(0); - assertThat(result.getPassword()).isEqualTo("password".toCharArray()); - assertThat(result.getSentinelMasterId()).isEqualTo("master"); - assertThat(result.toString()).contains("master"); - - result = RedisURI.create(RedisURI.URI_SCHEME_REDIS_SENTINEL + "://password@host1:1,host2:3423,host3/1#master"); - - assertThat(result.getSentinels()).hasSize(3); - assertThat(result.getHost()).isNull(); - assertThat(result.getPort()).isEqualTo(0); - assertThat(result.getPassword()).isEqualTo("password".toCharArray()); - assertThat(result.getSentinelMasterId()).isEqualTo("master"); - - RedisURI sentinel1 = result.getSentinels().get(0); - assertThat(sentinel1.getPort()).isEqualTo(1); - assertThat(sentinel1.getHost()).isEqualTo("host1"); - - RedisURI sentinel2 = result.getSentinels().get(1); - assertThat(sentinel2.getPort()).isEqualTo(3423); - assertThat(sentinel2.getHost()).isEqualTo("host2"); - - RedisURI sentinel3 = result.getSentinels().get(2); - assertThat(sentinel3.getPort()).isEqualTo(RedisURI.DEFAULT_SENTINEL_PORT); - assertThat(sentinel3.getHost()).isEqualTo("host3"); - - } - - @Test(expected = IllegalArgumentException.class) - public void redisSentinelWithInvalidPort() throws Exception { - RedisURI.Builder.sentinel("a", 65536); - } - - @Test(expected = IllegalArgumentException.class) - public void redisSentinelWithMasterIdAndInvalidPort() throws Exception { - RedisURI.Builder.sentinel("a", 65536, ""); - } - - @Test(expected = IllegalArgumentException.class) - public void redisSentinelWithNullMasterId() throws Exception { - RedisURI.Builder.sentinel("a", 1, null); - } - - @Test(expected = IllegalStateException.class) - public void redisSentinelWithSSLNotPossible() throws Exception { - RedisURI.Builder.sentinel("a", 1, "master").withSsl(true); - } - - @Test(expected = IllegalStateException.class) - public void redisSentinelWithTLSNotPossible() throws Exception { - RedisURI.Builder.sentinel("a", 1, "master").withStartTls(true); - } - - @Test(expected = IllegalArgumentException.class) - public void invalidScheme() throws Exception { - RedisURI.create("http://www.web.de"); - } - - @Test - public void redisSocket() throws Exception { - File file 
= new File("work/socket-6479").getCanonicalFile(); - RedisURI result = RedisURI.create(RedisURI.URI_SCHEME_REDIS_SOCKET + "://" + file.getCanonicalPath()); - - assertThat(result.getSocket()).isEqualTo(file.getCanonicalPath()); - assertThat(result.getSentinels()).isEmpty(); - assertThat(result.getPassword()).isNull(); - assertThat(result.getHost()).isNull(); - assertThat(result.getPort()).isEqualTo(0); - assertThat(result.isSsl()).isFalse(); - } - - @Test - public void redisSocketWithPassword() throws Exception { - File file = new File("work/socket-6479").getCanonicalFile(); - RedisURI result = RedisURI.create(RedisURI.URI_SCHEME_REDIS_SOCKET + "://password@" + file.getCanonicalPath()); - - assertThat(result.getSocket()).isEqualTo(file.getCanonicalPath()); - assertThat(result.getSentinels()).isEmpty(); - assertThat(result.getPassword()).isEqualTo("password".toCharArray()); - assertThat(result.getHost()).isNull(); - assertThat(result.getPort()).isEqualTo(0); - assertThat(result.isSsl()).isFalse(); - } - -} diff --git a/src/test/java/com/lambdaworks/redis/RedisURITest.java b/src/test/java/com/lambdaworks/redis/RedisURITest.java deleted file mode 100644 index 7845c8d11a..0000000000 --- a/src/test/java/com/lambdaworks/redis/RedisURITest.java +++ /dev/null @@ -1,200 +0,0 @@ -package com.lambdaworks.redis; - -import static org.assertj.core.api.Assertions.assertThat; - -import java.util.LinkedHashMap; -import java.util.Map; -import java.util.Set; -import java.util.concurrent.TimeUnit; - -import com.lambdaworks.redis.internal.LettuceSets; -import org.junit.Test; - -/** - * @author Mark Paluch - */ -public class RedisURITest { - - @Test - public void equalsTest() throws Exception { - - RedisURI redisURI1 = RedisURI.create("redis://auth@localhost:1234/5"); - RedisURI redisURI2 = RedisURI.create("redis://auth@localhost:1234/5"); - RedisURI redisURI3 = RedisURI.create("redis://auth@localhost:1231/5"); - - assertThat(redisURI1).isEqualTo(redisURI2); - assertThat(redisURI1.hashCode()).isEqualTo(redisURI2.hashCode()); - assertThat(redisURI1.toString()).contains("localhost").contains("1234"); - - assertThat(redisURI3).isNotEqualTo(redisURI2); - assertThat(redisURI3.hashCode()).isNotEqualTo(redisURI2.hashCode()); - } - - @Test - public void setUsage() throws Exception { - - RedisURI redisURI1 = RedisURI.create("redis://auth@localhost:1234/5"); - RedisURI redisURI2 = RedisURI.create("redis://auth@localhost:1234/5"); - RedisURI redisURI3 = RedisURI.create("redis://auth@localhost:1234/6"); - - Set set = LettuceSets.unmodifiableSet(redisURI1, redisURI2, redisURI3); - - assertThat(set).hasSize(2); - } - - @Test - public void mapUsage() throws Exception { - - RedisURI redisURI1 = RedisURI.create("redis://auth@localhost:1234/5"); - RedisURI redisURI2 = RedisURI.create("redis://auth@localhost:1234/5"); - - Map map = new LinkedHashMap<>(); - map.put(redisURI1, "something"); - - assertThat(map.get(redisURI2)).isEqualTo("something"); - } - - @Test - public void simpleUriTest() throws Exception { - RedisURI redisURI = RedisURI.create("redis://localhost:6379"); - assertThat(redisURI.toURI().toString()).isEqualTo("redis://localhost"); - } - - @Test - public void sslUriTest() throws Exception { - RedisURI redisURI = RedisURI.create("redis+ssl://localhost:6379"); - assertThat(redisURI.toURI().toString()).isEqualTo("rediss://localhost:6379"); - } - - @Test - public void tlsUriTest() throws Exception { - RedisURI redisURI = RedisURI.create("redis+tls://localhost:6379"); - 
assertThat(redisURI.toURI().toString()).isEqualTo("redis+tls://localhost:6379"); - } - - @Test - public void sentinelEqualsTest() throws Exception { - - RedisURI redisURI1 = RedisURI.create("redis-sentinel://auth@h1:222,h2,h3:1234/5?sentinelMasterId=masterId"); - RedisURI redisURI2 = RedisURI.create("redis-sentinel://auth@h1:222,h2,h3:1234/5#masterId"); - RedisURI redisURI3 = RedisURI.create("redis-sentinel://auth@h1,h2,h3:1234/5#OtherMasterId"); - - assertThat(redisURI1).isEqualTo(redisURI2); - assertThat(redisURI1.hashCode()).isEqualTo(redisURI2.hashCode()); - assertThat(redisURI1.toString()).contains("h1"); - - assertThat(redisURI3).isNotEqualTo(redisURI2); - assertThat(redisURI3.hashCode()).isNotEqualTo(redisURI2.hashCode()); - } - - @Test - public void sentinelUriTest() throws Exception { - - RedisURI redisURI = RedisURI.create("redis-sentinel://auth@h1:222,h2,h3:1234/5?sentinelMasterId=masterId"); - assertThat(redisURI.getSentinelMasterId()).isEqualTo("masterId"); - assertThat(redisURI.getSentinels().get(0).getPort()).isEqualTo(222); - assertThat(redisURI.getSentinels().get(1).getPort()).isEqualTo(RedisURI.DEFAULT_SENTINEL_PORT); - assertThat(redisURI.getSentinels().get(2).getPort()).isEqualTo(1234); - assertThat(redisURI.getDatabase()).isEqualTo(5); - - assertThat(redisURI.toURI().toString()).isEqualTo( - "redis-sentinel://auth@h1:222,h2,h3:1234?database=5&sentinelMasterId=masterId"); - } - - @Test - public void socketEqualsTest() throws Exception { - - RedisURI redisURI1 = RedisURI.create("redis-socket:///var/tmp/socket"); - RedisURI redisURI2 = RedisURI.create("redis-socket:///var/tmp/socket"); - RedisURI redisURI3 = RedisURI.create("redis-socket:///var/tmp/other-socket?db=2"); - - assertThat(redisURI1).isEqualTo(redisURI2); - assertThat(redisURI1.hashCode()).isEqualTo(redisURI2.hashCode()); - assertThat(redisURI1.toString()).contains("/var/tmp/socket"); - - assertThat(redisURI3).isNotEqualTo(redisURI2); - assertThat(redisURI3.hashCode()).isNotEqualTo(redisURI2.hashCode()); - } - - @Test - public void socketUriTest() throws Exception { - - RedisURI redisURI = RedisURI.create("redis-socket:///var/tmp/other-socket?db=2"); - - assertThat(redisURI.getDatabase()).isEqualTo(2); - assertThat(redisURI.getSocket()).isEqualTo("/var/tmp/other-socket"); - assertThat(redisURI.toURI().toString()).isEqualTo("redis-socket:///var/tmp/other-socket?database=2"); - } - - @Test - public void socketAltUriTest() throws Exception { - - RedisURI redisURI = RedisURI.create("redis+socket:///var/tmp/other-socket?db=2"); - - assertThat(redisURI.getDatabase()).isEqualTo(2); - assertThat(redisURI.getSocket()).isEqualTo("/var/tmp/other-socket"); - assertThat(redisURI.toURI().toString()).isEqualTo("redis-socket:///var/tmp/other-socket?database=2"); - } - - @Test - public void timeoutParsingTest() throws Exception { - checkUriTimeout("redis://auth@localhost:1234/5?timeout=5000", 5000, TimeUnit.MILLISECONDS); - checkUriTimeout("redis://auth@localhost:1234/5?timeout=5000ms", 5000, TimeUnit.MILLISECONDS); - checkUriTimeout("redis://auth@localhost:1234/5?timeout=5s", 5, TimeUnit.SECONDS); - checkUriTimeout("redis://auth@localhost:1234/5?timeout=100us", 100, TimeUnit.MICROSECONDS); - checkUriTimeout("redis://auth@localhost:1234/5?TIMEOUT=1000000NS", 1000000, TimeUnit.NANOSECONDS); - checkUriTimeout("redis://auth@localhost:1234/5?timeout=60m", 60, TimeUnit.MINUTES); - checkUriTimeout("redis://auth@localhost:1234/5?timeout=24h", 24, TimeUnit.HOURS); - checkUriTimeout("redis://auth@localhost:1234/5?timeout=1d", 1, 
TimeUnit.DAYS);
-
-        checkUriTimeout("redis://auth@localhost:1234/5?timeout=-1", 0, TimeUnit.MILLISECONDS);
-
-        RedisURI defaultUri = new RedisURI();
-        checkUriTimeout("redis://auth@localhost:1234/5?timeout=junk", defaultUri.getTimeout(), defaultUri.getUnit());
-
-        RedisURI redisURI = RedisURI.create("redis://auth@localhost:1234/5?timeout=5000ms");
-        assertThat(redisURI.toURI().toString()).isEqualTo("redis://auth@localhost:1234?database=5&timeout=5000ms");
-    }
-
-    @Test
-    public void queryStringDecodingTest() throws Exception {
-        String timeout = "%74%69%6D%65%6F%75%74";
-        String eq = "%3d";
-        String s = "%73";
-        checkUriTimeout("redis://auth@localhost:1234/5?" + timeout + eq + "5" + s, 5, TimeUnit.SECONDS);
-    }
-
-    @Test
-    public void timeoutParsingWithJunkParamTest() throws Exception {
-        RedisURI redisURI1 = RedisURI.create("redis-sentinel://auth@localhost:1234/5?timeout=5s;junkparam=#master-instance");
-        assertThat(redisURI1.getTimeout()).isEqualTo(5);
-        assertThat(redisURI1.getUnit()).isEqualTo(TimeUnit.SECONDS);
-        assertThat(redisURI1.getSentinelMasterId()).isEqualTo("master-instance");
-    }
-
-    private RedisURI checkUriTimeout(String uri, long expectedTimeout, TimeUnit expectedUnit) {
-        RedisURI redisURI1 = RedisURI.create(uri);
-        assertThat(redisURI1.getTimeout()).isEqualTo(expectedTimeout);
-        assertThat(redisURI1.getUnit()).isEqualTo(expectedUnit);
-        return redisURI1;
-    }
-
-    @Test
-    public void databaseParsingTest() throws Exception {
-        RedisURI redisURI = RedisURI.create("redis://auth@localhost:1234/?database=5");
-        assertThat(redisURI.getDatabase()).isEqualTo(5);
-
-        assertThat(redisURI.toURI().toString()).isEqualTo("redis://auth@localhost:1234?database=5");
-    }
-
-    @Test
-    public void parsingWithInvalidValuesTest() throws Exception {
-        RedisURI redisURI = RedisURI
-                .create("redis://@host:1234/?database=AAA&database=&timeout=&timeout=XYZ&sentinelMasterId=");
-        assertThat(redisURI.getDatabase()).isEqualTo(0);
-        assertThat(redisURI.getSentinelMasterId()).isNull();
-
-        assertThat(redisURI.toURI().toString()).isEqualTo("redis://host:1234");
-    }
-
-}
diff --git a/src/test/java/com/lambdaworks/redis/ScanCursorTest.java b/src/test/java/com/lambdaworks/redis/ScanCursorTest.java
deleted file mode 100644
index 3b7a5f7906..0000000000
--- a/src/test/java/com/lambdaworks/redis/ScanCursorTest.java
+++ /dev/null
@@ -1,25 +0,0 @@
-package com.lambdaworks.redis;
-
-import static org.assertj.core.api.Assertions.assertThat;
-
-import org.junit.Test;
-
-public class ScanCursorTest {
-
-    @Test
-    public void testFactory() throws Exception {
-        ScanCursor scanCursor = ScanCursor.of("dummy");
-        assertThat(scanCursor.getCursor()).isEqualTo("dummy");
-        assertThat(scanCursor.isFinished()).isFalse();
-    }
-
-    @Test(expected = UnsupportedOperationException.class)
-    public void setCursorOnImmutableInstance() throws Exception {
-        ScanCursor.INITIAL.setCursor("");
-    }
-
-    @Test(expected = UnsupportedOperationException.class)
-    public void setFinishedOnImmutableInstance() throws Exception {
-        ScanCursor.INITIAL.setFinished(false);
-    }
-}
diff --git a/src/test/java/com/lambdaworks/redis/ScoredValueStreamingAdapter.java b/src/test/java/com/lambdaworks/redis/ScoredValueStreamingAdapter.java
deleted file mode 100644
index 51a211c22c..0000000000
--- a/src/test/java/com/lambdaworks/redis/ScoredValueStreamingAdapter.java
+++ /dev/null
@@ -1,23 +0,0 @@
-package com.lambdaworks.redis;
-
-import java.util.ArrayList;
-import java.util.List;
-
-import com.lambdaworks.redis.output.ScoredValueStreamingChannel;
-
-/**
- * 
@author Mark Paluch - * @since 3.0 - */ -public class ScoredValueStreamingAdapter implements ScoredValueStreamingChannel { - private List> list = new ArrayList<>(); - - @Override - public void onValue(ScoredValue value) { - list.add(value); - } - - public List> getList() { - return list; - } -} diff --git a/src/test/java/com/lambdaworks/redis/ScoredValueTest.java b/src/test/java/com/lambdaworks/redis/ScoredValueTest.java deleted file mode 100644 index 33d843f585..0000000000 --- a/src/test/java/com/lambdaworks/redis/ScoredValueTest.java +++ /dev/null @@ -1,31 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis; - -import static org.assertj.core.api.Assertions.assertThat; - -import org.junit.Test; - -public class ScoredValueTest { - @Test - public void equals() throws Exception { - ScoredValue sv1 = new ScoredValue(1.0, "a"); - assertThat(sv1.equals(new ScoredValue(1.0, "a"))).isTrue(); - assertThat(sv1.equals(null)).isFalse(); - assertThat(sv1.equals(new ScoredValue(1.1, "a"))).isFalse(); - assertThat(sv1.equals(new ScoredValue(1.0, "b"))).isFalse(); - } - - @Test - public void testToString() throws Exception { - ScoredValue sv1 = new ScoredValue(1.0, "a"); - assertThat(sv1.toString()).isEqualTo(String.format("(%f, %s)", sv1.score, sv1.value)); - } - - @Test - public void testHashCode() throws Exception { - assertThat(new ScoredValue(1.0, "a").hashCode() != 0).isTrue(); - assertThat(new ScoredValue(0.0, "a").hashCode() != 0).isTrue(); - assertThat(new ScoredValue(0.0, null).hashCode() == 0).isTrue(); - } -} diff --git a/src/test/java/com/lambdaworks/redis/SocketOptionsTest.java b/src/test/java/com/lambdaworks/redis/SocketOptionsTest.java deleted file mode 100644 index eedbbf3222..0000000000 --- a/src/test/java/com/lambdaworks/redis/SocketOptionsTest.java +++ /dev/null @@ -1,70 +0,0 @@ -package com.lambdaworks.redis; - -import static org.assertj.core.api.Assertions.assertThat; -import static org.assertj.core.api.Assertions.fail; - -import java.net.SocketException; -import java.util.concurrent.TimeUnit; - -import org.junit.Test; - -import io.netty.channel.ConnectTimeoutException; - -/** - * @author Mark Paluch - */ -public class SocketOptionsTest extends AbstractRedisClientTest { - - @Test - public void testNew() throws Exception { - checkAssertions(SocketOptions.create()); - } - - @Test - public void testBuilder() throws Exception { - - SocketOptions sut = SocketOptions.builder().connectTimeout(1, TimeUnit.MINUTES).keepAlive(true).tcpNoDelay(true) - .build(); - - assertThat(sut.isKeepAlive()).isEqualTo(true); - assertThat(sut.isTcpNoDelay()).isEqualTo(true); - assertThat(sut.getConnectTimeout()).isEqualTo(1); - assertThat(sut.getConnectTimeoutUnit()).isEqualTo(TimeUnit.MINUTES); - } - - @Test - public void testCopy() throws Exception { - checkAssertions(SocketOptions.copyOf(SocketOptions.builder().build())); - } - - protected void checkAssertions(SocketOptions sut) { - assertThat(sut.isKeepAlive()).isEqualTo(false); - assertThat(sut.isTcpNoDelay()).isEqualTo(false); - assertThat(sut.getConnectTimeout()).isEqualTo(10); - assertThat(sut.getConnectTimeoutUnit()).isEqualTo(TimeUnit.SECONDS); - } - - @Test(timeout = 1000) - public void testConnectTimeout() { - - SocketOptions socketOptions = SocketOptions.builder().connectTimeout(100, TimeUnit.MILLISECONDS).build(); - client.setOptions(ClientOptions.builder().socketOptions(socketOptions).build()); - - try { - client.connect(RedisURI.create("2:4:5:5::1", 60000)); - fail("Missing 
RedisConnectionException"); - } catch (RedisConnectionException e) { - - if (e.getCause() instanceof ConnectTimeoutException) { - assertThat(e).hasRootCauseInstanceOf(ConnectTimeoutException.class); - assertThat(e.getCause()).hasMessageContaining("connection timed out"); - return; - } - - if (e.getCause() instanceof SocketException) { - // Network is unreachable or No route to host are OK as well. - return; - } - } - } -} \ No newline at end of file diff --git a/src/test/java/com/lambdaworks/redis/SyncAsyncApiConvergenceTest.java b/src/test/java/com/lambdaworks/redis/SyncAsyncApiConvergenceTest.java deleted file mode 100644 index 33febca596..0000000000 --- a/src/test/java/com/lambdaworks/redis/SyncAsyncApiConvergenceTest.java +++ /dev/null @@ -1,70 +0,0 @@ -package com.lambdaworks.redis; - -import static org.assertj.core.api.Assertions.assertThat; - -import java.lang.reflect.*; -import java.util.ArrayList; -import java.util.List; - -import org.junit.Test; -import org.junit.runner.RunWith; -import org.junit.runners.Parameterized; - -import com.lambdaworks.redis.api.async.RedisAsyncCommands; -import com.lambdaworks.redis.api.sync.RedisCommands; - -/** - * @author Mark Paluch - * @since 3.0 - */ -@RunWith(Parameterized.class) -public class SyncAsyncApiConvergenceTest { - - private Method method; - - @SuppressWarnings("rawtypes") - private Class asyncClass = RedisAsyncCommands.class; - - @Parameterized.Parameters(name = "Method {0}/{1}") - public static List parameters() { - - List result = new ArrayList<>(); - Method[] methods = RedisCommands.class.getMethods(); - for (Method method : methods) { - result.add(new Object[] { method.getName(), method }); - } - - return result; - } - - public SyncAsyncApiConvergenceTest(String methodName, Method method) { - this.method = method; - } - - @Test - public void testMethodPresentOnAsyncApi() throws Exception { - Method method = asyncClass.getMethod(this.method.getName(), this.method.getParameterTypes()); - assertThat(method).isNotNull(); - } - - @Test - public void testSameResultType() throws Exception { - Method method = asyncClass.getMethod(this.method.getName(), this.method.getParameterTypes()); - Type returnType = method.getGenericReturnType(); - - if (method.getReturnType().equals(RedisFuture.class)) { - ParameterizedType genericReturnType = (ParameterizedType) method.getGenericReturnType(); - Type[] actualTypeArguments = genericReturnType.getActualTypeArguments(); - - if (actualTypeArguments[0] instanceof GenericArrayType) { - GenericArrayType arrayType = (GenericArrayType) actualTypeArguments[0]; - returnType = Array.newInstance((Class) arrayType.getGenericComponentType(), 0).getClass(); - } else { - returnType = actualTypeArguments[0]; - } - } - - assertThat(returnType.toString()).describedAs(this.method.toString()).isEqualTo( - this.method.getGenericReturnType().toString()); - } -} diff --git a/src/test/java/com/lambdaworks/redis/TestEventLoopGroupProvider.java b/src/test/java/com/lambdaworks/redis/TestEventLoopGroupProvider.java deleted file mode 100644 index dd47f7bbe5..0000000000 --- a/src/test/java/com/lambdaworks/redis/TestEventLoopGroupProvider.java +++ /dev/null @@ -1,42 +0,0 @@ -package com.lambdaworks.redis; - -import java.util.concurrent.TimeUnit; - -import com.lambdaworks.redis.resource.DefaultEventLoopGroupProvider; - -import io.netty.util.concurrent.DefaultPromise; -import io.netty.util.concurrent.EventExecutorGroup; -import io.netty.util.concurrent.ImmediateEventExecutor; -import io.netty.util.concurrent.Promise; - -/** - * A 
{@link com.lambdaworks.redis.resource.EventLoopGroupProvider} suitable for testing. Preserves the event loop groups between - * tests. Every time a new {@link TestEventLoopGroupProvider} instance is created, shutdown hook is added - * {@link Runtime#addShutdownHook(Thread)}. - * - * @author Mark Paluch - */ -public class TestEventLoopGroupProvider extends DefaultEventLoopGroupProvider { - - public TestEventLoopGroupProvider() { - super(10); - Runtime.getRuntime().addShutdownHook(new Thread() { - @Override - public void run() { - try { - TestEventLoopGroupProvider.this.shutdown(100, 100, TimeUnit.MILLISECONDS).get(10, TimeUnit.SECONDS); - } catch (Exception e) { - e.printStackTrace(); - } - } - }); - } - - @Override - public Promise release(EventExecutorGroup eventLoopGroup, long quietPeriod, long timeout, TimeUnit unit) { - DefaultPromise result = new DefaultPromise(ImmediateEventExecutor.INSTANCE); - result.setSuccess(true); - - return result; - } -} diff --git a/src/test/java/com/lambdaworks/redis/TestSettings.java b/src/test/java/com/lambdaworks/redis/TestSettings.java deleted file mode 100644 index b15ef373d3..0000000000 --- a/src/test/java/com/lambdaworks/redis/TestSettings.java +++ /dev/null @@ -1,113 +0,0 @@ -package com.lambdaworks.redis; - -import java.net.Inet4Address; -import java.net.InetAddress; -import java.net.UnknownHostException; - -/** - * This class provides settings used while testing. You can override these using system properties. - * - * @author Mark Paluch - */ -public class TestSettings { - private TestSettings() { - - } - - /** - * - * @return hostname of your redis instance. Defaults to {@literal localhost}. Can be overriden with - * {@code -Dhost=YourHostName} - */ - public static String host() { - return System.getProperty("host", "localhost"); - } - - /** - * - * @return unix domain socket name of your redis instance. Defaults to {@literal work/socket-6479}. Can be overriden with - * {@code -Ddomainsocket=YourSocket} - */ - public static String socket() { - return System.getProperty("domainsocket", "work/socket-6479"); - } - - /** - * - * @return unix domain socket name of your redis sentinel instance. Defaults to {@literal work/socket-26379}. Can be - * overriden with {@code -Dsentineldomainsocket=YourSocket} - */ - public static String sentinelSocket() { - return System.getProperty("sentineldomainsocket", "work/socket-26379"); - } - - /** - * - * @return resolved address of {@link #host()} - * @throws IllegalStateException when hostname cannot be resolved - */ - public static String hostAddr() { - try { - InetAddress[] allByName = InetAddress.getAllByName(host()); - for (InetAddress inetAddress : allByName) { - if (inetAddress instanceof Inet4Address) { - return inetAddress.getHostAddress(); - } - } - return InetAddress.getByName(host()).getHostAddress(); - } catch (UnknownHostException e) { - throw new IllegalStateException(e); - } - } - - /** - * - * @return password of your redis instance. Defaults to {@literal passwd}. Can be overriden with - * {@code -Dpassword=YourPassword} - */ - public static String password() { - return System.getProperty("password", "passwd"); - } - - /** - * - * @return port of your redis instance. Defaults to {@literal 6479}. Can be overriden with {@code -Dport=1234} - */ - public static int port() { - return Integer.valueOf(System.getProperty("port", "6479")); - } - - /** - * - * @return sslport of your redis instance. Defaults to {@literal 6443}. 
Can be overriden with {@code -Dsslport=1234} - */ - public static int sslPort() { - return Integer.valueOf(System.getProperty("sslport", "6443")); - } - - /** - * - * @return {@link #port()} with added {@literal 500} - */ - public static int nonexistentPort() { - return port() + 500; - } - - /** - * - * @param offset - * @return {@link #port()} with added {@literal offset} - */ - public static int port(int offset) { - return port() + offset; - } - - /** - * - * @param offset - * @return {@link #sslPort()} with added {@literal offset} - */ - public static int sslPort(int offset) { - return sslPort() + offset; - } -} diff --git a/src/test/java/com/lambdaworks/redis/TimeTest.java b/src/test/java/com/lambdaworks/redis/TimeTest.java deleted file mode 100644 index cdba0183bd..0000000000 --- a/src/test/java/com/lambdaworks/redis/TimeTest.java +++ /dev/null @@ -1,29 +0,0 @@ -package com.lambdaworks.redis; - -import java.util.concurrent.TimeUnit; - -import org.junit.After; -import org.junit.Assert; -import org.junit.Before; -import org.junit.Test; - -import static org.assertj.core.api.Assertions.assertThat; - -public class TimeTest { - RedisClient client = RedisClient.create(); - - @Before - public void setUp() throws Exception { - client.setDefaultTimeout(15, TimeUnit.SECONDS); - } - - @After - public void after() throws Exception { - FastShutdown.shutdown(client); - } - - @Test - public void testTime() throws Exception { - assertThat(client.makeTimeout()).isEqualTo(15000); - } -} diff --git a/src/test/java/com/lambdaworks/redis/UnixDomainSocketTest.java b/src/test/java/com/lambdaworks/redis/UnixDomainSocketTest.java deleted file mode 100644 index ad1555467d..0000000000 --- a/src/test/java/com/lambdaworks/redis/UnixDomainSocketTest.java +++ /dev/null @@ -1,186 +0,0 @@ -package com.lambdaworks.redis; - -import static org.assertj.core.api.Assertions.assertThat; -import static org.assertj.core.api.Assertions.fail; -import static org.junit.Assume.assumeTrue; - -import java.io.File; -import java.io.IOException; -import java.util.Locale; - -import org.apache.logging.log4j.LogManager; -import org.apache.logging.log4j.Logger; -import org.junit.AfterClass; -import org.junit.BeforeClass; -import org.junit.Rule; -import org.junit.Test; - -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.sentinel.SentinelRule; -import io.netty.util.internal.SystemPropertyUtil; - -/** - * @author Mark Paluch - */ -public class UnixDomainSocketTest { - - public static final String MASTER_ID = "mymaster"; - - private static RedisClient sentinelClient; - - @Rule - public SentinelRule sentinelRule = new SentinelRule(sentinelClient, false, 26379, 26380); - - protected Logger log = LogManager.getLogger(getClass()); - - protected String key = "key"; - protected String value = "value"; - - @BeforeClass - public static void setupClient() { - sentinelClient = getRedisSentinelClient(); - } - - @AfterClass - public static void shutdownClient() { - FastShutdown.shutdown(sentinelClient); - } - - @Test - public void standalone_Linux_x86_64_RedisClientWithSocket() throws Exception { - - linuxOnly(); - - RedisURI redisURI = getSocketRedisUri(); - - RedisClient redisClient = RedisClient.create(redisURI); - - StatefulRedisConnection connection = redisClient.connect(); - - someRedisAction(connection.sync()); - connection.close(); - - FastShutdown.shutdown(redisClient); - } - - @Test - public void standalone_Linux_x86_64_ConnectToSocket() throws Exception { - - linuxOnly(); - - RedisURI redisURI = 
getSocketRedisUri(); - - RedisClient redisClient = RedisClient.create(); - - StatefulRedisConnection connection = redisClient.connect(redisURI); - - someRedisAction(connection.sync()); - connection.close(); - - FastShutdown.shutdown(redisClient); - } - - private void linuxOnly() { - String osName = SystemPropertyUtil.get("os.name").toLowerCase(Locale.UK).trim(); - assumeTrue("Only supported on Linux, your os is " + osName, osName.startsWith("linux")); - } - - private RedisURI getSocketRedisUri() throws IOException { - File file = new File(TestSettings.socket()).getCanonicalFile(); - return RedisURI.create(RedisURI.URI_SCHEME_REDIS_SOCKET + "://" + file.getCanonicalPath()); - } - - private RedisURI getSentinelSocketRedisUri() throws IOException { - File file = new File(TestSettings.sentinelSocket()).getCanonicalFile(); - return RedisURI.create(RedisURI.URI_SCHEME_REDIS_SOCKET + "://" + file.getCanonicalPath()); - } - - @Test - public void sentinel_Linux_x86_64_RedisClientWithSocket() throws Exception { - - linuxOnly(); - - RedisURI uri = new RedisURI(); - uri.getSentinels().add(getSentinelSocketRedisUri()); - uri.setSentinelMasterId("mymaster"); - - RedisClient redisClient = RedisClient.create(uri); - - StatefulRedisConnection connection = redisClient.connect(); - - someRedisAction(connection.sync()); - - connection.close(); - - RedisSentinelAsyncConnection sentinelConnection = redisClient.connectSentinelAsync(); - - assertThat(sentinelConnection.ping().get()).isEqualTo("PONG"); - sentinelConnection.close(); - - FastShutdown.shutdown(redisClient); - } - - @Test - public void sentinel_Linux_x86_64_ConnectToSocket() throws Exception { - - linuxOnly(); - - RedisURI uri = new RedisURI(); - uri.getSentinels().add(getSentinelSocketRedisUri()); - uri.setSentinelMasterId("mymaster"); - - RedisClient redisClient = RedisClient.create(); - - StatefulRedisConnection connection = redisClient.connect(uri); - - someRedisAction(connection.sync()); - - connection.close(); - - RedisSentinelAsyncConnection sentinelConnection = redisClient.connectSentinelAsync(uri); - - assertThat(sentinelConnection.ping().get()).isEqualTo("PONG"); - sentinelConnection.close(); - - FastShutdown.shutdown(redisClient); - } - - @Test - public void sentinel_Linux_x86_64_socket_and_inet() throws Exception { - - sentinelRule.waitForMaster(MASTER_ID); - linuxOnly(); - - RedisURI uri = new RedisURI(); - uri.getSentinels().add(getSentinelSocketRedisUri()); - uri.getSentinels().add(RedisURI.create(RedisURI.URI_SCHEME_REDIS + "://" + TestSettings.host() + ":26379")); - uri.setSentinelMasterId(MASTER_ID); - - RedisClient redisClient = new RedisClient(uri); - - RedisSentinelAsyncConnection sentinelConnection = redisClient - .connectSentinelAsync(getSentinelSocketRedisUri()); - log.info("Masters: " + sentinelConnection.masters().get()); - - try { - redisClient.connect(); - fail("Missing validation exception"); - } catch (RedisConnectionException e) { - assertThat(e).hasMessageContaining("You cannot mix unix domain socket and IP socket URI's"); - } finally { - FastShutdown.shutdown(redisClient); - } - - } - - private void someRedisAction(RedisConnection connection) { - connection.set(key, value); - String result = connection.get(key); - - assertThat(result).isEqualTo(value); - } - - protected static RedisClient getRedisSentinelClient() { - return new RedisClient(RedisURI.Builder.sentinel(TestSettings.host(), MASTER_ID).build()); - } -} diff --git a/src/test/java/com/lambdaworks/redis/Utf8StringCodecTest.java 
b/src/test/java/com/lambdaworks/redis/Utf8StringCodecTest.java deleted file mode 100644 index bd04ef84cd..0000000000 --- a/src/test/java/com/lambdaworks/redis/Utf8StringCodecTest.java +++ /dev/null @@ -1,20 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis; - -import static org.assertj.core.api.Assertions.assertThat; - -import java.util.Arrays; - -import org.junit.Test; - -public class Utf8StringCodecTest extends AbstractRedisClientTest { - @Test - public void decodeHugeBuffer() throws Exception { - char[] huge = new char[8192]; - Arrays.fill(huge, 'A'); - String value = new String(huge); - redis.set(key, value); - assertThat(redis.get(key)).isEqualTo(value); - } -} diff --git a/src/test/java/com/lambdaworks/redis/cluster/AbstractClusterTest.java b/src/test/java/com/lambdaworks/redis/cluster/AbstractClusterTest.java deleted file mode 100644 index 5d31ed5a29..0000000000 --- a/src/test/java/com/lambdaworks/redis/cluster/AbstractClusterTest.java +++ /dev/null @@ -1,66 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import java.util.concurrent.TimeUnit; - -import com.lambdaworks.redis.FastShutdown; -import com.lambdaworks.redis.internal.LettuceLists; -import org.junit.AfterClass; -import org.junit.BeforeClass; -import org.junit.Rule; - -import com.lambdaworks.redis.AbstractTest; -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.TestSettings; - -/** - * @author Mark Paluch - */ -public class AbstractClusterTest extends AbstractTest { - - public static final String host = TestSettings.hostAddr(); - - public static final int SLOT_A = SlotHash.getSlot("a".getBytes()); - public static final int SLOT_B = SlotHash.getSlot("b".getBytes()); - - - // default test cluster 2 masters + 2 slaves - public static final int port1 = 7379; - public static final int port2 = port1 + 1; - public static final int port3 = port1 + 2; - public static final int port4 = port1 + 3; - - // master+slave or master+master - public static final int port5 = port1 + 4; - public static final int port6 = port1 + 5; - - // auth cluster - public static final int port7 = port1 + 6; - public static final String KEY_A = "a"; - public static final String KEY_B = "b"; - - protected static RedisClusterClient clusterClient; - - @Rule - public ClusterRule clusterRule = new ClusterRule(clusterClient, port1, port2, port3, port4); - - @BeforeClass - public static void setupClusterClient() throws Exception { - clusterClient = RedisClusterClient.create(LettuceLists.unmodifiableList(RedisURI.Builder.redis(host, port1).build())); - } - - @AfterClass - public static void shutdownClusterClient() { - FastShutdown.shutdown(clusterClient); - } - - public static int[] createSlots(int from, int to) { - int[] result = new int[to - from]; - int counter = 0; - for (int i = from; i < to; i++) { - result[counter++] = i; - - } - return result; - } - -} diff --git a/src/test/java/com/lambdaworks/redis/cluster/AdvancedClusterClientTest.java b/src/test/java/com/lambdaworks/redis/cluster/AdvancedClusterClientTest.java deleted file mode 100644 index deb08b9808..0000000000 --- a/src/test/java/com/lambdaworks/redis/cluster/AdvancedClusterClientTest.java +++ /dev/null @@ -1,616 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import static org.assertj.core.api.Assertions.assertThat; - -import java.util.*; -import java.util.concurrent.TimeUnit; -import java.util.stream.Collectors; - -import org.junit.After; -import org.junit.Before; -import org.junit.Ignore; -import org.junit.Test; - -import 
com.lambdaworks.RandomKeys; -import com.lambdaworks.redis.*; -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection; -import com.lambdaworks.redis.cluster.api.async.RedisAdvancedClusterAsyncCommands; -import com.lambdaworks.redis.cluster.api.rx.RedisAdvancedClusterReactiveCommands; -import com.lambdaworks.redis.cluster.api.rx.RedisClusterReactiveCommands; -import com.lambdaworks.redis.cluster.api.sync.RedisAdvancedClusterCommands; -import com.lambdaworks.redis.cluster.api.sync.RedisClusterCommands; -import com.lambdaworks.redis.cluster.models.partitions.Partitions; -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; - -/** - * @author Mark Paluch - */ -@SuppressWarnings("rawtypes") -public class AdvancedClusterClientTest extends AbstractClusterTest { - - public static final String KEY_ON_NODE_1 = "a"; - public static final String KEY_ON_NODE_2 = "b"; - - private RedisAdvancedClusterAsyncCommands commands; - private RedisAdvancedClusterCommands syncCommands; - private StatefulRedisClusterConnection clusterConnection; - - @Before - public void before() throws Exception { - clusterClient.reloadPartitions(); - clusterConnection = clusterClient.connect(); - commands = clusterConnection.async(); - syncCommands = clusterConnection.sync(); - } - - @After - public void after() throws Exception { - commands.close(); - } - - @Test - public void nodeConnections() throws Exception { - - assertThat(clusterClient.getPartitions()).hasSize(4); - - for (RedisClusterNode redisClusterNode : clusterClient.getPartitions()) { - RedisClusterAsyncConnection nodeConnection = commands.getConnection(redisClusterNode.getNodeId()); - - String myid = nodeConnection.clusterMyId().get(); - assertThat(myid).isEqualTo(redisClusterNode.getNodeId()); - } - } - - @Test(expected = RedisException.class) - public void unknownNodeId() throws Exception { - - commands.getConnection("unknown"); - } - - @Test(expected = RedisException.class) - public void invalidHost() throws Exception { - commands.getConnection("invalid-host", -1); - } - - @Test - public void partitions() throws Exception { - - Partitions partitions = commands.getStatefulConnection().getPartitions(); - assertThat(partitions).hasSize(4); - } - - @Test - public void doWeirdThingsWithClusterconnections() throws Exception { - - assertThat(clusterClient.getPartitions()).hasSize(4); - - for (RedisClusterNode redisClusterNode : clusterClient.getPartitions()) { - RedisClusterAsyncConnection nodeConnection = commands.getConnection(redisClusterNode.getNodeId()); - - nodeConnection.close(); - - RedisClusterAsyncConnection nextConnection = commands.getConnection(redisClusterNode.getNodeId()); - assertThat(commands).isNotSameAs(nextConnection); - } - } - - @Test - public void differentConnections() throws Exception { - - for (RedisClusterNode redisClusterNode : clusterClient.getPartitions()) { - RedisClusterAsyncConnection nodeId = commands.getConnection(redisClusterNode.getNodeId()); - RedisClusterAsyncConnection hostAndPort = commands - .getConnection(redisClusterNode.getUri().getHost(), redisClusterNode.getUri().getPort()); - - assertThat(nodeId).isNotSameAs(hostAndPort); - } - - StatefulRedisClusterConnection statefulConnection = commands.getStatefulConnection(); - for (RedisClusterNode redisClusterNode : clusterClient.getPartitions()) { - - StatefulRedisConnection nodeId = statefulConnection.getConnection(redisClusterNode.getNodeId()); - StatefulRedisConnection hostAndPort = 
statefulConnection - .getConnection(redisClusterNode.getUri().getHost(), redisClusterNode.getUri().getPort()); - - assertThat(nodeId).isNotSameAs(hostAndPort); - } - - RedisAdvancedClusterCommands sync = statefulConnection.sync(); - for (RedisClusterNode redisClusterNode : clusterClient.getPartitions()) { - - RedisClusterCommands nodeId = sync.getConnection(redisClusterNode.getNodeId()); - RedisClusterCommands hostAndPort = sync.getConnection(redisClusterNode.getUri().getHost(), - redisClusterNode.getUri().getPort()); - - assertThat(nodeId).isNotSameAs(hostAndPort); - } - - RedisAdvancedClusterReactiveCommands rx = statefulConnection.reactive(); - for (RedisClusterNode redisClusterNode : clusterClient.getPartitions()) { - - RedisClusterReactiveCommands nodeId = rx.getConnection(redisClusterNode.getNodeId()); - RedisClusterReactiveCommands hostAndPort = rx.getConnection(redisClusterNode.getUri().getHost(), - redisClusterNode.getUri().getPort()); - - assertThat(nodeId).isNotSameAs(hostAndPort); - } - } - - @Test - public void msetRegular() throws Exception { - - Map mset = Collections.singletonMap(key, value); - - String result = syncCommands.mset(mset); - - assertThat(result).isEqualTo("OK"); - assertThat(syncCommands.get(key)).isEqualTo(value); - } - - @Test - public void msetCrossSlot() throws Exception { - - Map mset = prepareMset(); - - String result = syncCommands.mset(mset); - - assertThat(result).isEqualTo("OK"); - - for (String mykey : mset.keySet()) { - String s1 = syncCommands.get(mykey); - assertThat(s1).isEqualTo("value-" + mykey); - } - } - - protected Map prepareMset() { - Map mset = new HashMap<>(); - for (char c = 'a'; c < 'z'; c++) { - String key = new String(new char[] { c, c, c }); - mset.put(key, "value-" + key); - } - return mset; - } - - @Test - public void msetnxCrossSlot() throws Exception { - - Map mset = prepareMset(); - - RedisFuture result = commands.msetnx(mset); - - assertThat(result.get()).isTrue(); - - for (String mykey : mset.keySet()) { - String s1 = commands.get(mykey).get(); - assertThat(s1).isEqualTo("value-" + mykey); - } - } - - @Test - public void mgetRegular() throws Exception { - - msetRegular(); - List result = syncCommands.mget(key); - - assertThat(result).hasSize(1); - } - - @Test - public void mgetCrossSlot() throws Exception { - - msetCrossSlot(); - List keys = new ArrayList<>(); - List expectation = new ArrayList<>(); - for (char c = 'a'; c < 'z'; c++) { - String key = new String(new char[] { c, c, c }); - keys.add(key); - expectation.add("value-" + key); - } - - List result = syncCommands.mget(keys.toArray(new String[keys.size()])); - - assertThat(result).hasSize(keys.size()); - assertThat(result).isEqualTo(expectation); - } - - @Test - public void delRegular() throws Exception { - - msetRegular(); - Long result = syncCommands.unlink(key); - - assertThat(result).isEqualTo(1); - assertThat(commands.get(key).get()).isNull(); - } - - @Test - public void delCrossSlot() throws Exception { - - List keys = prepareKeys(); - - Long result = syncCommands.del(keys.toArray(new String[keys.size()])); - - assertThat(result).isEqualTo(25); - - for (String mykey : keys) { - String s1 = syncCommands.get(mykey); - assertThat(s1).isNull(); - } - } - - @Test - public void unlinkRegular() throws Exception { - - msetRegular(); - Long result = syncCommands.unlink(key); - - assertThat(result).isEqualTo(1); - assertThat(syncCommands.get(key)).isNull(); - } - - @Test - public void unlinkCrossSlot() throws Exception { - - List keys = prepareKeys(); - - Long result = 
syncCommands.unlink(keys.toArray(new String[keys.size()])); - - assertThat(result).isEqualTo(25); - - for (String mykey : keys) { - String s1 = syncCommands.get(mykey); - assertThat(s1).isNull(); - } - } - - protected List prepareKeys() throws Exception { - msetCrossSlot(); - List keys = new ArrayList<>(); - for (char c = 'a'; c < 'z'; c++) { - String key = new String(new char[] { c, c, c }); - keys.add(key); - } - return keys; - } - - @Test - public void clientSetname() throws Exception { - - String name = "test-cluster-client"; - - assertThat(clusterClient.getPartitions().size()).isGreaterThan(0); - - syncCommands.clientSetname(name); - - for (RedisClusterNode redisClusterNode : clusterClient.getPartitions()) { - RedisClusterCommands nodeConnection = commands.getStatefulConnection().sync() - .getConnection(redisClusterNode.getNodeId()); - assertThat(nodeConnection.clientList()).contains(name); - } - } - - @Test(expected = RedisCommandExecutionException.class) - public void clientSetnameRunOnError() throws Exception { - syncCommands.clientSetname("not allowed"); - } - - @Test - public void dbSize() throws Exception { - - writeKeysToTwoNodes(); - - RedisClusterCommands nodeConnection1 = clusterConnection.getConnection(host, port1).sync(); - RedisClusterCommands nodeConnection2 = clusterConnection.getConnection(host, port2).sync(); - - assertThat(nodeConnection1.dbsize()).isEqualTo(1); - assertThat(nodeConnection2.dbsize()).isEqualTo(1); - - Long dbsize = syncCommands.dbsize(); - assertThat(dbsize).isEqualTo(2); - } - - @Test - public void flushall() throws Exception { - - writeKeysToTwoNodes(); - - assertThat(syncCommands.flushall()).isEqualTo("OK"); - - Long dbsize = syncCommands.dbsize(); - assertThat(dbsize).isEqualTo(0); - } - - @Test - public void flushdb() throws Exception { - - writeKeysToTwoNodes(); - - assertThat(syncCommands.flushdb()).isEqualTo("OK"); - - Long dbsize = syncCommands.dbsize(); - assertThat(dbsize).isEqualTo(0); - } - - @Test - public void keys() throws Exception { - - writeKeysToTwoNodes(); - - assertThat(syncCommands.keys("*")).contains(KEY_ON_NODE_1, KEY_ON_NODE_2); - } - - @Test - public void keysStreaming() throws Exception { - - writeKeysToTwoNodes(); - ListStreamingAdapter result = new ListStreamingAdapter<>(); - - assertThat(syncCommands.keys(result, "*")).isEqualTo(2); - assertThat(result.getList()).contains(KEY_ON_NODE_1, KEY_ON_NODE_2); - } - - @Test - public void randomKey() throws Exception { - - writeKeysToTwoNodes(); - - assertThat(syncCommands.randomkey()).isIn(KEY_ON_NODE_1, KEY_ON_NODE_2); - } - - @Test - public void scriptFlush() throws Exception { - assertThat(syncCommands.scriptFlush()).isEqualTo("OK"); - } - - @Test - public void scriptKill() throws Exception { - assertThat(syncCommands.scriptKill()).isEqualTo("OK"); - } - - @Test - @Ignore("Run me manually, I will shutdown all your cluster nodes so you need to restart the Redis Cluster after this test") - public void shutdown() throws Exception { - syncCommands.shutdown(true); - } - - @Test - public void testSync() throws Exception { - - RedisAdvancedClusterCommands sync = commands.getStatefulConnection().sync(); - sync.set(key, value); - assertThat(sync.get(key)).isEqualTo(value); - - RedisClusterCommands node2Connection = sync.getConnection(host, port2); - assertThat(node2Connection.get(key)).isEqualTo(value); - - assertThat(sync.getStatefulConnection()).isSameAs(commands.getStatefulConnection()); - } - - @Test - public void routeCommandTonoAddrPartition() throws Exception { - - 
RedisClusterCommands sync = clusterClient.connect().sync(); - try { - - Partitions partitions = clusterClient.getPartitions(); - for (RedisClusterNode partition : partitions) { - partition.setUri(RedisURI.create("redis://non.existent.host:1234")); - } - - sync.set("A", "value");// 6373 - } catch (Exception e) { - assertThat(e).isInstanceOf(RedisException.class).hasMessageContaining("Unable to connect to"); - } finally { - clusterClient.getPartitions().clear(); - clusterClient.reloadPartitions(); - } - sync.close(); - } - - @Test - public void routeCommandToForbiddenHostOnRedirect() throws Exception { - - RedisClusterCommands sync = clusterClient.connect().sync(); - try { - - Partitions partitions = clusterClient.getPartitions(); - for (RedisClusterNode partition : partitions) { - partition.setSlots(Collections.singletonList(0)); - if (partition.getUri().getPort() == 7380) { - partition.setSlots(Collections.singletonList(6373)); - } else { - partition.setUri(RedisURI.create("redis://non.existent.host:1234")); - } - } - - partitions.updateCache(); - - sync.set("A", "value");// 6373 - } catch (Exception e) { - assertThat(e).isInstanceOf(RedisException.class).hasMessageContaining("not allowed"); - } finally { - clusterClient.getPartitions().clear(); - clusterClient.reloadPartitions(); - } - sync.close(); - } - - @Test - public void getConnectionToNotAClusterMemberForbidden() throws Exception { - - RedisAdvancedClusterConnection sync = clusterClient.connectCluster(); - try { - sync.getConnection(TestSettings.host(), TestSettings.port()); - } catch (RedisException e) { - assertThat(e).hasRootCauseExactlyInstanceOf(IllegalArgumentException.class); - } - sync.close(); - } - - @Test - public void getConnectionToNotAClusterMemberAllowed() throws Exception { - - clusterClient.setOptions(ClusterClientOptions.builder().validateClusterNodeMembership(false).build()); - StatefulRedisClusterConnection connection = clusterClient.connect(); - connection.getConnection(TestSettings.host(), TestSettings.port()); - connection.close(); - } - - @Test - public void pipelining() throws Exception { - - RedisClusterCommands verificationConnection = clusterClient.connect().sync(); - - // preheat the first connection - commands.get(key(0)).get(); - - int iterations = 1000; - commands.setAutoFlushCommands(false); - List> futures = new ArrayList<>(); - for (int i = 0; i < iterations; i++) { - futures.add(commands.set(key(i), value(i))); - } - - for (int i = 0; i < iterations; i++) { - assertThat(verificationConnection.get(key(i))).as("Key " + key(i) + " must be null").isNull(); - } - - commands.flushCommands(); - boolean result = LettuceFutures.awaitAll(5, TimeUnit.SECONDS, futures.toArray(new RedisFuture[futures.size()])); - assertThat(result).isTrue(); - - for (int i = 0; i < iterations; i++) { - assertThat(verificationConnection.get(key(i))).as("Key " + key(i) + " must be " + value(i)).isEqualTo(value(i)); - } - - verificationConnection.close(); - - } - - @Test - public void transactions() throws Exception { - - commands.multi(); - commands.set(key, value); - commands.discard(); - - commands.multi(); - commands.set(key, value); - commands.exec(); - } - - @Test - public void clusterScan() throws Exception { - - RedisAdvancedClusterCommands sync = commands.getStatefulConnection().sync(); - sync.mset(RandomKeys.MAP); - - Set allKeys = new HashSet<>(); - - KeyScanCursor scanCursor = null; - - do { - if (scanCursor == null) { - scanCursor = sync.scan(); - } else { - scanCursor = sync.scan(scanCursor); - } - 
allKeys.addAll(scanCursor.getKeys()); - } while (!scanCursor.isFinished()); - - assertThat(allKeys).containsAll(RandomKeys.KEYS); - - } - - @Test - public void clusterScanWithArgs() throws Exception { - - RedisAdvancedClusterCommands sync = commands.getStatefulConnection().sync(); - sync.mset(RandomKeys.MAP); - - Set allKeys = new HashSet<>(); - - KeyScanCursor scanCursor = null; - - do { - if (scanCursor == null) { - scanCursor = sync.scan(ScanArgs.Builder.matches("a*")); - } else { - scanCursor = sync.scan(scanCursor, ScanArgs.Builder.matches("a*")); - } - allKeys.addAll(scanCursor.getKeys()); - } while (!scanCursor.isFinished()); - - assertThat(allKeys).containsAll(RandomKeys.KEYS.stream().filter(k -> k.startsWith("a")).collect(Collectors.toList())); - - } - - @Test - public void clusterScanStreaming() throws Exception { - - RedisAdvancedClusterCommands sync = commands.getStatefulConnection().sync(); - sync.mset(RandomKeys.MAP); - - ListStreamingAdapter adapter = new ListStreamingAdapter<>(); - - StreamScanCursor scanCursor = null; - - do { - if (scanCursor == null) { - scanCursor = sync.scan(adapter); - } else { - scanCursor = sync.scan(adapter, scanCursor); - } - } while (!scanCursor.isFinished()); - - assertThat(adapter.getList()).containsAll(RandomKeys.KEYS); - - } - - @Test - public void clusterScanStreamingWithArgs() throws Exception { - - RedisAdvancedClusterCommands sync = commands.getStatefulConnection().sync(); - sync.mset(RandomKeys.MAP); - - ListStreamingAdapter adapter = new ListStreamingAdapter<>(); - - StreamScanCursor scanCursor = null; - do { - if (scanCursor == null) { - scanCursor = sync.scan(adapter, ScanArgs.Builder.matches("a*")); - } else { - scanCursor = sync.scan(adapter, scanCursor, ScanArgs.Builder.matches("a*")); - } - } while (!scanCursor.isFinished()); - - assertThat(adapter.getList()) - .containsAll(RandomKeys.KEYS.stream().filter(k -> k.startsWith("a")).collect(Collectors.toList())); - - } - - @Test(expected = IllegalArgumentException.class) - public void clusterScanCursorFinished() throws Exception { - syncCommands.scan(ScanCursor.FINISHED); - } - - @Test(expected = IllegalArgumentException.class) - public void clusterScanCursorNotReused() throws Exception { - syncCommands.scan(ScanCursor.of("dummy")); - } - - protected String value(int i) { - return value + "-" + i; - } - - protected String key(int i) { - return key + "-" + i; - } - - private void writeKeysToTwoNodes() { - syncCommands.set(KEY_ON_NODE_1, value); - syncCommands.set(KEY_ON_NODE_2, value); - } - -} diff --git a/src/test/java/com/lambdaworks/redis/cluster/AdvancedClusterReactiveTest.java b/src/test/java/com/lambdaworks/redis/cluster/AdvancedClusterReactiveTest.java deleted file mode 100644 index 760a08e030..0000000000 --- a/src/test/java/com/lambdaworks/redis/cluster/AdvancedClusterReactiveTest.java +++ /dev/null @@ -1,385 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import static org.assertj.core.api.Assertions.assertThat; -import static org.assertj.core.api.Assertions.fail; - -import java.util.HashSet; -import java.util.List; -import java.util.Map; -import java.util.Set; -import java.util.concurrent.atomic.AtomicBoolean; -import java.util.stream.Collectors; - -import org.junit.After; -import org.junit.Before; -import org.junit.Ignore; -import org.junit.Test; - -import rx.Observable; - -import com.lambdaworks.RandomKeys; -import com.lambdaworks.redis.*; -import com.lambdaworks.redis.api.sync.RedisCommands; -import 
com.lambdaworks.redis.cluster.api.rx.RedisAdvancedClusterReactiveCommands; -import com.lambdaworks.redis.cluster.api.rx.RedisClusterReactiveCommands; -import com.lambdaworks.redis.cluster.api.sync.RedisAdvancedClusterCommands; -import com.lambdaworks.redis.cluster.api.sync.RedisClusterCommands; -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; -import com.lambdaworks.redis.codec.Utf8StringCodec; -import com.lambdaworks.redis.commands.rx.RxSyncInvocationHandler; -import com.lambdaworks.redis.internal.LettuceLists; - -/** - * @author Mark Paluch - */ -public class AdvancedClusterReactiveTest extends AbstractClusterTest { - - public static final String KEY_ON_NODE_1 = "a"; - public static final String KEY_ON_NODE_2 = "b"; - - private RedisAdvancedClusterReactiveCommands commands; - private RedisCommands syncCommands; - - @Before - public void before() throws Exception { - commands = clusterClient.connectClusterAsync().getStatefulConnection().reactive(); - syncCommands = RxSyncInvocationHandler.sync(commands.getStatefulConnection()); - } - - @After - public void after() throws Exception { - commands.close(); - } - - @Test(expected = RedisException.class) - public void unknownNodeId() throws Exception { - - commands.getConnection("unknown"); - } - - @Test(expected = RedisException.class) - public void invalidHost() throws Exception { - commands.getConnection("invalid-host", -1); - } - - @Test - public void doWeirdThingsWithClusterconnections() throws Exception { - - assertThat(clusterClient.getPartitions()).hasSize(4); - - for (RedisClusterNode redisClusterNode : clusterClient.getPartitions()) { - RedisClusterReactiveCommands nodeConnection = commands.getConnection(redisClusterNode.getNodeId()); - - nodeConnection.close(); - - RedisClusterReactiveCommands nextConnection = commands.getConnection(redisClusterNode.getNodeId()); - assertThat(commands).isNotSameAs(nextConnection); - } - } - - @Test - public void msetCrossSlot() throws Exception { - - Observable mset = commands.mset(RandomKeys.MAP); - List result = LettuceLists.newList(mset.toBlocking().toIterable()); - assertThat(result).hasSize(1).contains("OK"); - - for (String mykey : RandomKeys.KEYS) { - String s1 = syncCommands.get(mykey); - assertThat(s1).isEqualTo(RandomKeys.MAP.get(mykey)); - } - } - - @Test - public void msetnxCrossSlot() throws Exception { - - List result = LettuceLists.newList(commands.msetnx(RandomKeys.MAP).toBlocking().toIterable()); - - assertThat(result).hasSize(1).contains(true); - - for (String mykey : RandomKeys.KEYS) { - String s1 = syncCommands.get(mykey); - assertThat(s1).isEqualTo(RandomKeys.MAP.get(mykey)); - } - } - - @Test - public void mgetCrossSlot() throws Exception { - - msetCrossSlot(); - - Map> partitioned = SlotHash.partition(new Utf8StringCodec(), RandomKeys.KEYS); - assertThat(partitioned.size()).isGreaterThan(100); - - Observable observable = commands.mget(RandomKeys.KEYS.toArray(new String[RandomKeys.COUNT])); - List result = observable.toList().toBlocking().single(); - - assertThat(result).hasSize(RandomKeys.COUNT); - assertThat(result).isEqualTo(RandomKeys.VALUES); - } - - @Test - public void mgetCrossSlotStreaming() throws Exception { - - msetCrossSlot(); - - ListStreamingAdapter result = new ListStreamingAdapter<>(); - - Observable observable = commands.mget(result, RandomKeys.KEYS.toArray(new String[RandomKeys.COUNT])); - Long count = getSingle(observable); - - assertThat(result.getList()).hasSize(RandomKeys.COUNT); - assertThat(count).isEqualTo(RandomKeys.COUNT); - } 
- - @Test - public void delCrossSlot() throws Exception { - - msetCrossSlot(); - - Observable observable = commands.del(RandomKeys.KEYS.toArray(new String[RandomKeys.COUNT])); - Long result = getSingle(observable); - - assertThat(result).isEqualTo(RandomKeys.COUNT); - - for (String mykey : RandomKeys.KEYS) { - String s1 = syncCommands.get(mykey); - assertThat(s1).isNull(); - } - } - - @Test - public void unlinkCrossSlot() throws Exception { - - msetCrossSlot(); - - Observable observable = commands.unlink(RandomKeys.KEYS.toArray(new String[RandomKeys.COUNT])); - Long result = getSingle(observable); - - assertThat(result).isEqualTo(RandomKeys.COUNT); - - for (String mykey : RandomKeys.KEYS) { - String s1 = syncCommands.get(mykey); - assertThat(s1).isNull(); - } - } - - @Test - public void clientSetname() throws Exception { - - String name = "test-cluster-client"; - - assertThat(clusterClient.getPartitions().size()).isGreaterThan(0); - - getSingle(commands.clientSetname(name)); - - for (RedisClusterNode redisClusterNode : clusterClient.getPartitions()) { - RedisClusterCommands nodeConnection = commands.getStatefulConnection().sync() - .getConnection(redisClusterNode.getNodeId()); - assertThat(nodeConnection.clientList()).contains(name); - } - } - - @Test(expected = Exception.class) - public void clientSetnameRunOnError() throws Exception { - getSingle(commands.clientSetname("not allowed")); - } - - @Test - public void dbSize() throws Exception { - - writeKeysToTwoNodes(); - - Long dbsize = getSingle(commands.dbsize()); - assertThat(dbsize).isEqualTo(2); - } - - @Test - public void flushall() throws Exception { - - writeKeysToTwoNodes(); - - assertThat(getSingle(commands.flushall())).isEqualTo("OK"); - - Long dbsize = syncCommands.dbsize(); - assertThat(dbsize).isEqualTo(0); - } - - @Test - public void flushdb() throws Exception { - - writeKeysToTwoNodes(); - - assertThat(getSingle(commands.flushdb())).isEqualTo("OK"); - - Long dbsize = syncCommands.dbsize(); - assertThat(dbsize).isEqualTo(0); - } - - @Test - public void keys() throws Exception { - - writeKeysToTwoNodes(); - - List result = commands.keys("*").toList().toBlocking().single(); - - assertThat(result).contains(KEY_ON_NODE_1, KEY_ON_NODE_2); - } - - @Test - public void keysStreaming() throws Exception { - - writeKeysToTwoNodes(); - ListStreamingAdapter result = new ListStreamingAdapter<>(); - - assertThat(getSingle(commands.keys(result, "*"))).isEqualTo(2); - assertThat(result.getList()).contains(KEY_ON_NODE_1, KEY_ON_NODE_2); - } - - @Test - public void randomKey() throws Exception { - - writeKeysToTwoNodes(); - - assertThat(getSingle(commands.randomkey())).isIn(KEY_ON_NODE_1, KEY_ON_NODE_2); - } - - @Test - public void scriptFlush() throws Exception { - assertThat(getSingle(commands.scriptFlush())).isEqualTo("OK"); - } - - @Test - public void scriptKill() throws Exception { - assertThat(getSingle(commands.scriptKill())).isEqualTo("OK"); - } - - @Test - @Ignore("Run me manually, I will shutdown all your cluster nodes so you need to restart the Redis Cluster after this test") - public void shutdown() throws Exception { - commands.shutdown(true).subscribe(); - } - - @Test - public void readFromSlaves() throws Exception { - - RedisClusterReactiveCommands connection = commands.getConnection(host, port4); - connection.readOnly().toBlocking().first(); - commands.set(key, value).toBlocking().first(); - NodeSelectionAsyncTest.waitForReplication(commands.getStatefulConnection().async(), key, port4); - - AtomicBoolean error = new 
AtomicBoolean(); - connection.get(key).doOnError(throwable -> error.set(true)).toBlocking().toFuture().get(); - - assertThat(error.get()).isFalse(); - - connection.readWrite().toBlocking().first(); - - try { - connection.get(key).doOnError(throwable -> error.set(true)).toBlocking().first(); - fail("Missing exception"); - } catch (Exception e) { - assertThat(error.get()).isTrue(); - } - } - - @Test - public void clusterScan() throws Exception { - - RedisAdvancedClusterCommands sync = commands.getStatefulConnection().sync(); - sync.mset(RandomKeys.MAP); - - Set allKeys = new HashSet<>(); - - KeyScanCursor scanCursor = null; - do { - - if (scanCursor == null) { - scanCursor = getSingle(commands.scan()); - } else { - scanCursor = getSingle(commands.scan(scanCursor)); - } - allKeys.addAll(scanCursor.getKeys()); - } while (!scanCursor.isFinished()); - - assertThat(allKeys).containsAll(RandomKeys.KEYS); - - } - - @Test - public void clusterScanWithArgs() throws Exception { - - RedisAdvancedClusterCommands sync = commands.getStatefulConnection().sync(); - sync.mset(RandomKeys.MAP); - - Set allKeys = new HashSet<>(); - - KeyScanCursor scanCursor = null; - do { - - if (scanCursor == null) { - scanCursor = getSingle(commands.scan(ScanArgs.Builder.matches("a*"))); - } else { - scanCursor = getSingle(commands.scan(scanCursor, ScanArgs.Builder.matches("a*"))); - } - allKeys.addAll(scanCursor.getKeys()); - } while (!scanCursor.isFinished()); - - assertThat(allKeys).containsAll(RandomKeys.KEYS.stream().filter(k -> k.startsWith("a")).collect(Collectors.toList())); - - } - - @Test - public void clusterScanStreaming() throws Exception { - - RedisAdvancedClusterCommands sync = commands.getStatefulConnection().sync(); - sync.mset(RandomKeys.MAP); - - ListStreamingAdapter adapter = new ListStreamingAdapter<>(); - - StreamScanCursor scanCursor = null; - do { - - if (scanCursor == null) { - scanCursor = getSingle(commands.scan(adapter)); - } else { - scanCursor = getSingle(commands.scan(adapter, scanCursor)); - } - } while (!scanCursor.isFinished()); - - assertThat(adapter.getList()).containsAll(RandomKeys.KEYS); - - } - - @Test - public void clusterScanStreamingWithArgs() throws Exception { - - RedisAdvancedClusterCommands sync = commands.getStatefulConnection().sync(); - sync.mset(RandomKeys.MAP); - - ListStreamingAdapter adapter = new ListStreamingAdapter<>(); - - StreamScanCursor scanCursor = null; - do { - - if (scanCursor == null) { - scanCursor = getSingle(commands.scan(adapter, ScanArgs.Builder.matches("a*"))); - } else { - scanCursor = getSingle(commands.scan(adapter, scanCursor, ScanArgs.Builder.matches("a*"))); - } - } while (!scanCursor.isFinished()); - - assertThat(adapter.getList()).containsAll( - RandomKeys.KEYS.stream().filter(k -> k.startsWith("a")).collect(Collectors.toList())); - - } - - private T getSingle(Observable observable) { - return observable.toBlocking().single(); - } - - private void writeKeysToTwoNodes() { - syncCommands.set(KEY_ON_NODE_1, value); - syncCommands.set(KEY_ON_NODE_2, value); - } -} diff --git a/src/test/java/com/lambdaworks/redis/cluster/ByteCodecClusterTest.java b/src/test/java/com/lambdaworks/redis/cluster/ByteCodecClusterTest.java deleted file mode 100644 index 448d4f623c..0000000000 --- a/src/test/java/com/lambdaworks/redis/cluster/ByteCodecClusterTest.java +++ /dev/null @@ -1,43 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import static org.assertj.core.api.Assertions.assertThat; - -import org.junit.Test; - -import 
com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection; -import com.lambdaworks.redis.cluster.api.async.RedisAdvancedClusterAsyncCommands; -import com.lambdaworks.redis.cluster.api.sync.RedisAdvancedClusterCommands; -import com.lambdaworks.redis.codec.ByteArrayCodec; - -/** - * @author Mark Paluch - */ -public class ByteCodecClusterTest extends AbstractClusterTest { - - @Test - public void testByteCodec() throws Exception { - - StatefulRedisClusterConnection connection = clusterClient.connect(new ByteArrayCodec()); - - connection.sync().set(key.getBytes(), value.getBytes()); - assertThat(connection.sync().get(key.getBytes())).isEqualTo(value.getBytes()); - } - - @Test - public void deprecatedTestByteCodec() throws Exception { - - RedisAdvancedClusterCommands commands = clusterClient.connectCluster(new ByteArrayCodec()); - - commands.set(key.getBytes(), value.getBytes()); - assertThat(commands.get(key.getBytes())).isEqualTo(value.getBytes()); - } - - @Test - public void deprecatedTestAsyncByteCodec() throws Exception { - - RedisAdvancedClusterAsyncCommands commands = clusterClient.connectClusterAsync(new ByteArrayCodec()); - - commands.set(key.getBytes(), value.getBytes()).get(); - assertThat(commands.get(key.getBytes()).get()).isEqualTo(value.getBytes()); - } -} diff --git a/src/test/java/com/lambdaworks/redis/cluster/ClusterClientOptionsTest.java b/src/test/java/com/lambdaworks/redis/cluster/ClusterClientOptionsTest.java deleted file mode 100644 index 7697c4b78a..0000000000 --- a/src/test/java/com/lambdaworks/redis/cluster/ClusterClientOptionsTest.java +++ /dev/null @@ -1,45 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import static org.assertj.core.api.Assertions.assertThat; - -import java.util.concurrent.TimeUnit; - -import org.junit.Test; - -/** - * @author Mark Paluch - */ -public class ClusterClientOptionsTest { - - @Test - public void testCopy() throws Exception { - - ClusterClientOptions options = ClusterClientOptions.builder().closeStaleConnections(true).refreshClusterView(true) - .autoReconnect(false).requestQueueSize(100).suspendReconnectOnProtocolFailure(true).maxRedirects(1234) - .validateClusterNodeMembership(false).build(); - - ClusterClientOptions copy = ClusterClientOptions.copyOf(options); - - assertThat(copy.getRefreshPeriod()).isEqualTo(options.getRefreshPeriod()); - assertThat(copy.getRefreshPeriodUnit()).isEqualTo(options.getRefreshPeriodUnit()); - assertThat(copy.isCloseStaleConnections()).isEqualTo(options.isCloseStaleConnections()); - assertThat(copy.isRefreshClusterView()).isEqualTo(options.isRefreshClusterView()); - assertThat(copy.isValidateClusterNodeMembership()).isEqualTo(options.isValidateClusterNodeMembership()); - assertThat(copy.getRequestQueueSize()).isEqualTo(options.getRequestQueueSize()); - assertThat(copy.isAutoReconnect()).isEqualTo(options.isAutoReconnect()); - assertThat(copy.isCancelCommandsOnReconnectFailure()).isEqualTo(options.isCancelCommandsOnReconnectFailure()); - assertThat(copy.isSuspendReconnectOnProtocolFailure()).isEqualTo(options.isSuspendReconnectOnProtocolFailure()); - assertThat(copy.getMaxRedirects()).isEqualTo(options.getMaxRedirects()); - } - - @Test - public void enablesRefreshUsingDeprecatedMethods() throws Exception { - - ClusterClientOptions options = ClusterClientOptions.builder().refreshClusterView(true) - .refreshPeriod(10, TimeUnit.MINUTES).build(); - - assertThat(options.getRefreshPeriod()).isEqualTo(10); - assertThat(options.getRefreshPeriodUnit()).isEqualTo(TimeUnit.MINUTES); - 
assertThat(options.isRefreshClusterView()).isEqualTo(true); - } -} diff --git a/src/test/java/com/lambdaworks/redis/cluster/ClusterCommandInternalsTest.java b/src/test/java/com/lambdaworks/redis/cluster/ClusterCommandInternalsTest.java deleted file mode 100644 index 4b7d8580b3..0000000000 --- a/src/test/java/com/lambdaworks/redis/cluster/ClusterCommandInternalsTest.java +++ /dev/null @@ -1,101 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import static org.assertj.core.api.Assertions.assertThat; -import static org.mockito.Mockito.times; -import static org.mockito.Mockito.verify; - -import java.util.ArrayList; -import java.util.List; -import java.util.concurrent.TimeUnit; - -import com.lambdaworks.redis.RedisChannelWriter; -import org.junit.Before; -import org.junit.Test; - -import com.lambdaworks.redis.codec.Utf8StringCodec; -import com.lambdaworks.redis.output.StatusOutput; -import com.lambdaworks.redis.protocol.AsyncCommand; -import com.lambdaworks.redis.protocol.Command; -import com.lambdaworks.redis.protocol.CommandType; -import org.junit.runner.RunWith; -import org.mockito.Mock; -import org.mockito.runners.MockitoJUnitRunner; - -@RunWith(MockitoJUnitRunner.class) -public class ClusterCommandInternalsTest { - - @Mock - private RedisChannelWriter writerMock; - - private ClusterCommand sut; - private Command command = new Command(CommandType.TYPE, - new StatusOutput(new Utf8StringCodec()), null); - - @Before - public void before() throws Exception { - sut = new ClusterCommand(command, writerMock, 1); - } - - @Test - public void testException() throws Exception { - - sut.completeExceptionally(new Exception()); - assertThat(sut.isCompleted()); - } - - @Test - public void testCancel() throws Exception { - - assertThat(command.isCancelled()).isFalse(); - sut.cancel(); - assertThat(command.isCancelled()).isTrue(); - } - - @Test - public void testComplete() throws Exception { - - sut.complete(); - assertThat(sut.isCompleted()).isTrue(); - assertThat(sut.isCancelled()).isFalse(); - } - - @Test - public void testRedirect() throws Exception { - - sut.getOutput().setError("MOVED 1234 127.0.0.1:1000"); - sut.complete(); - - assertThat(sut.isCompleted()).isFalse(); - assertThat(sut.isCancelled()).isFalse(); - verify(writerMock).write(sut); - } - - @Test - public void testRedirectLimit() throws Exception { - - sut.getOutput().setError("MOVED 1234 127.0.0.1:1000"); - sut.complete(); - - sut.getOutput().setError("MOVED 1234 127.0.0.1:1000"); - sut.complete(); - - assertThat(sut.isCompleted()).isTrue(); - assertThat(sut.isCancelled()).isFalse(); - verify(writerMock).write(sut); - } - - @Test - public void testCompleteListener() throws Exception { - - final List someList = new ArrayList<>(); - - AsyncCommand asyncCommand = new AsyncCommand<>(sut); - - asyncCommand.thenRun(() -> someList.add("")); - asyncCommand.complete(); - asyncCommand.await(1, TimeUnit.MINUTES); - - assertThat(sut.isCompleted()).isTrue(); - assertThat(someList.size()).describedAs("Inner listener has to add one element").isEqualTo(1); - } -} diff --git a/src/test/java/com/lambdaworks/redis/cluster/ClusterCommandTest.java b/src/test/java/com/lambdaworks/redis/cluster/ClusterCommandTest.java deleted file mode 100644 index 3cdce47aa1..0000000000 --- a/src/test/java/com/lambdaworks/redis/cluster/ClusterCommandTest.java +++ /dev/null @@ -1,269 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import static com.google.code.tempusfugit.temporal.Duration.seconds; -import static com.google.code.tempusfugit.temporal.Timeout.timeout; 
-import static com.lambdaworks.redis.cluster.ClusterTestUtil.getNodeId; -import static org.assertj.core.api.Assertions.assertThat; - -import java.util.List; -import java.util.concurrent.TimeUnit; -import java.util.concurrent.TimeoutException; - -import org.junit.*; -import org.junit.runners.MethodSorters; - -import com.google.code.tempusfugit.temporal.WaitFor; -import com.lambdaworks.redis.*; -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.cluster.api.async.RedisClusterAsyncCommands; -import com.lambdaworks.redis.cluster.api.sync.RedisAdvancedClusterCommands; -import com.lambdaworks.redis.cluster.api.sync.RedisClusterCommands; -import com.lambdaworks.redis.cluster.models.slots.ClusterSlotRange; -import com.lambdaworks.redis.cluster.models.slots.ClusterSlotsParser; -import com.lambdaworks.redis.internal.LettuceLists; - -@FixMethodOrder(MethodSorters.NAME_ASCENDING) -public class ClusterCommandTest extends AbstractClusterTest { - - protected static RedisClient client; - - protected StatefulRedisConnection connection; - - protected RedisClusterAsyncCommands async; - - protected RedisClusterCommands sync; - - @BeforeClass - public static void setupClient() throws Exception { - client = RedisClient.create(RedisURI.Builder.redis(host, port1).build()); - clusterClient = RedisClusterClient.create(LettuceLists.newList(RedisURI.Builder.redis(host, port1).build())); - } - - @AfterClass - public static void shutdownClient() { - FastShutdown.shutdown(client); - FastShutdown.shutdown(clusterClient); - } - - @Before - public void before() throws Exception { - - clusterRule.getClusterClient().reloadPartitions(); - - connection = client.connect(RedisURI.Builder.redis(host, port1).build()); - sync = connection.sync(); - async = connection.async(); - - } - - @After - public void after() throws Exception { - connection.close(); - } - - @Test - public void statefulConnectionFromSync() throws Exception { - RedisAdvancedClusterConnection sync = clusterClient.connectCluster(); - assertThat(sync.getStatefulConnection().sync()).isSameAs(sync); - } - - @Test - public void statefulConnectionFromAsync() throws Exception { - RedisAsyncConnection async = client.connectAsync(); - assertThat(async.getStatefulConnection().async()).isSameAs(async); - } - - @Test - public void testClusterBumpEpoch() throws Exception { - - RedisFuture future = async.clusterBumpepoch(); - - String result = future.get(); - - assertThat(result).matches("(BUMPED|STILL).*"); - } - - @Test - public void testClusterInfo() throws Exception { - - RedisFuture future = async.clusterInfo(); - - String result = future.get(); - - assertThat(result).contains("cluster_known_nodes:"); - assertThat(result).contains("cluster_slots_fail:0"); - assertThat(result).contains("cluster_state:"); - } - - @Test - public void testClusterNodes() throws Exception { - - String result = sync.clusterNodes(); - - assertThat(result).contains("connected"); - assertThat(result).contains("master"); - assertThat(result).contains("myself"); - } - - @Test - public void testClusterNodesSync() throws Exception { - - RedisClusterConnection connection = clusterClient.connectCluster(); - - String string = connection.clusterNodes(); - connection.close(); - - assertThat(string).contains("connected"); - assertThat(string).contains("master"); - assertThat(string).contains("myself"); - } - - @Test - public void testClusterSlaves() throws Exception { - - sync.set("b", value); - RedisFuture replication = async.waitForReplication(1, 5); - 
assertThat(replication.get()).isGreaterThan(0L); - } - - @Test - public void testAsking() throws Exception { - assertThat(sync.asking()).isEqualTo("OK"); - } - - @Test - public void testReset() throws Exception { - - clusterClient.reloadPartitions(); - RedisAdvancedClusterAsyncCommandsImpl connection = (RedisAdvancedClusterAsyncCommandsImpl) clusterClient - .connectClusterAsync(); - - RedisFuture setA = connection.set("a", "myValue1"); - setA.get(); - - connection.reset(); - - setA = connection.set("a", "myValue1"); - - assertThat(setA.getError()).isNull(); - assertThat(setA.get()).isEqualTo("OK"); - - connection.close(); - - } - - @Test - public void testClusterSlots() throws Exception { - - List reply = sync.clusterSlots(); - assertThat(reply.size()).isGreaterThan(1); - - List parse = ClusterSlotsParser.parse(reply); - assertThat(parse).hasSize(2); - - ClusterSlotRange clusterSlotRange = parse.get(0); - assertThat(clusterSlotRange.getFrom()).isEqualTo(0); - assertThat(clusterSlotRange.getTo()).isEqualTo(11999); - - assertThat(clusterSlotRange.getMaster()).isNotNull(); - assertThat(clusterSlotRange.getSlaves()).isNotNull(); - assertThat(clusterSlotRange.toString()).contains(ClusterSlotRange.class.getSimpleName()); - - } - - @Test - public void readOnly() throws Exception { - - // cluster node 3 is a slave for key "b" - String key = "b"; - assertThat(SlotHash.getSlot(key)).isEqualTo(3300); - prepareReadonlyTest(key); - - // assume cluster node 3 is a slave for the master 1 - RedisConnection connect3 = client.connect(RedisURI.Builder.redis(host, port3).build()).sync(); - - assertThat(connect3.readOnly()).isEqualTo("OK"); - waitUntilValueIsVisible(key, connect3); - - String resultBViewedBySlave = connect3.get("b"); - assertThat(resultBViewedBySlave).isEqualTo(value); - connect3.quit(); - - resultBViewedBySlave = connect3.get("b"); - assertThat(resultBViewedBySlave).isEqualTo(value); - - } - - @Test - public void readOnlyWithReconnect() throws Exception { - - // cluster node 3 is a slave for key "b" - String key = "b"; - assertThat(SlotHash.getSlot(key)).isEqualTo(3300); - prepareReadonlyTest(key); - - // assume cluster node 3 is a slave for the master 1 - RedisConnection connect3 = client.connect(RedisURI.Builder.redis(host, port3).build()).sync(); - - assertThat(connect3.readOnly()).isEqualTo("OK"); - connect3.quit(); - waitUntilValueIsVisible(key, connect3); - - String resultViewedBySlave = connect3.get("b"); - assertThat(resultViewedBySlave).isEqualTo(value); - - } - - protected void waitUntilValueIsVisible(String key, RedisConnection connection) throws InterruptedException, - TimeoutException { - WaitFor.waitOrTimeout(() -> connection.get(key) != null, timeout(seconds(5))); - } - - protected void prepareReadonlyTest(String key) throws InterruptedException, TimeoutException, - java.util.concurrent.ExecutionException { - - async.set(key, value); - - String resultB = async.get(key).get(); - assertThat(resultB).isEqualTo(value); - Thread.sleep(500); // give some time to replicate - } - - @Test - public void readOnlyReadWrite() throws Exception { - - // cluster node 3 is a slave for key "b" - String key = "b"; - assertThat(SlotHash.getSlot(key)).isEqualTo(3300); - prepareReadonlyTest(key); - - // assume cluster node 3 is a slave for the master 1 - final RedisConnection connect3 = client.connect(RedisURI.Builder.redis(host, port3).build()).sync(); - - try { - connect3.get("b"); - } catch (Exception e) { - assertThat(e).hasMessageContaining("MOVED"); - } - - 
assertThat(connect3.readOnly()).isEqualTo("OK"); - waitUntilValueIsVisible(key, connect3); - - connect3.readWrite(); - try { - connect3.get("b"); - } catch (Exception e) { - assertThat(e).hasMessageContaining("MOVED"); - } - } - - @Test - public void clusterSlaves() throws Exception { - - String nodeId = getNodeId(sync); - List result = sync.clusterSlaves(nodeId); - - assertThat(result.size()).isGreaterThan(0); - } - -} diff --git a/src/test/java/com/lambdaworks/redis/cluster/ClusterDistributionChannelWriterTest.java b/src/test/java/com/lambdaworks/redis/cluster/ClusterDistributionChannelWriterTest.java deleted file mode 100644 index 0aa168270f..0000000000 --- a/src/test/java/com/lambdaworks/redis/cluster/ClusterDistributionChannelWriterTest.java +++ /dev/null @@ -1,49 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import static org.assertj.core.api.Assertions.assertThat; - -import org.junit.Test; - -import com.lambdaworks.redis.internal.HostAndPort; - -/** - * @author Mark Paluch - */ -public class ClusterDistributionChannelWriterTest { - - @Test - public void shouldParseAskTargetCorrectly() throws Exception { - - HostAndPort askTarget = ClusterDistributionChannelWriter.getAskTarget("ASK 1234 127.0.0.1:6381"); - - assertThat(askTarget.getHostText()).isEqualTo("127.0.0.1"); - assertThat(askTarget.getPort()).isEqualTo(6381); - } - - @Test - public void shouldParseIPv6AskTargetCorrectly() throws Exception { - - HostAndPort askTarget = ClusterDistributionChannelWriter.getAskTarget("ASK 1234 1:2:3:4::6:6381"); - - assertThat(askTarget.getHostText()).isEqualTo("1:2:3:4::6"); - assertThat(askTarget.getPort()).isEqualTo(6381); - } - - @Test - public void shouldParseMovedTargetCorrectly() throws Exception { - - HostAndPort moveTarget = ClusterDistributionChannelWriter.getMoveTarget("MOVED 1234 127.0.0.1:6381"); - - assertThat(moveTarget.getHostText()).isEqualTo("127.0.0.1"); - assertThat(moveTarget.getPort()).isEqualTo(6381); - } - - @Test - public void shouldParseIPv6MovedTargetCorrectly() throws Exception { - - HostAndPort moveTarget = ClusterDistributionChannelWriter.getMoveTarget("MOVED 1234 1:2:3:4::6:6381"); - - assertThat(moveTarget.getHostText()).isEqualTo("1:2:3:4::6"); - assertThat(moveTarget.getPort()).isEqualTo(6381); - } -} \ No newline at end of file diff --git a/src/test/java/com/lambdaworks/redis/cluster/ClusterNodeCommandHandlerTest.java b/src/test/java/com/lambdaworks/redis/cluster/ClusterNodeCommandHandlerTest.java deleted file mode 100644 index 0b5e640678..0000000000 --- a/src/test/java/com/lambdaworks/redis/cluster/ClusterNodeCommandHandlerTest.java +++ /dev/null @@ -1,155 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import static org.assertj.core.api.AssertionsForClassTypes.assertThat; -import static org.assertj.core.api.AssertionsForClassTypes.fail; -import static org.mockito.Matchers.any; -import static org.mockito.Mockito.verify; -import static org.mockito.Mockito.verifyZeroInteractions; -import static org.mockito.Mockito.when; - -import java.util.Queue; -import java.util.concurrent.ExecutionException; -import java.util.concurrent.LinkedBlockingQueue; - -import org.junit.Before; -import org.junit.Test; -import org.junit.runner.RunWith; -import org.mockito.Mock; -import org.mockito.runners.MockitoJUnitRunner; - -import com.lambdaworks.redis.ClientOptions; -import com.lambdaworks.redis.RedisChannelWriter; -import com.lambdaworks.redis.RedisException; -import com.lambdaworks.redis.codec.Utf8StringCodec; -import com.lambdaworks.redis.output.StatusOutput; -import 
com.lambdaworks.redis.protocol.AsyncCommand; -import com.lambdaworks.redis.protocol.Command; -import com.lambdaworks.redis.protocol.CommandType; -import com.lambdaworks.redis.protocol.RedisCommand; -import com.lambdaworks.redis.resource.ClientResources; - -/** - * @author Mark Paluch - */ -@RunWith(MockitoJUnitRunner.class) -public class ClusterNodeCommandHandlerTest { - - private AsyncCommand command = new AsyncCommand<>( - new Command<>(CommandType.APPEND, new StatusOutput(new Utf8StringCodec()), null)); - - private Queue> queue = new LinkedBlockingQueue<>(); - - @Mock - private ClientOptions clientOptions; - - @Mock - private ClientResources clientResources; - - @Mock - private RedisChannelWriter clusterChannelWriter; - - private ClusterNodeCommandHandler sut; - - @Before - public void before() throws Exception { - - sut = new ClusterNodeCommandHandler(clientOptions, clientResources, queue, clusterChannelWriter); - } - - @Test - public void closeWithoutCommands() throws Exception { - - sut.close(); - verifyZeroInteractions(clusterChannelWriter); - } - - @Test - public void closeWithQueuedCommands() throws Exception { - - when(clientOptions.isAutoReconnect()).thenReturn(true); - queue.add(command); - - sut.close(); - - verify(clusterChannelWriter).write(command); - } - - @Test - public void closeWithCancelledQueuedCommands() throws Exception { - - when(clientOptions.isAutoReconnect()).thenReturn(true); - queue.add(command); - command.cancel(); - - sut.close(); - - verifyZeroInteractions(clusterChannelWriter); - } - - @Test - public void closeWithQueuedCommandsFails() throws Exception { - - when(clientOptions.isAutoReconnect()).thenReturn(true); - queue.add(command); - when(clusterChannelWriter.write(any())).thenThrow(new RedisException("meh")); - - sut.close(); - - assertThat(command.isDone()).isTrue(); - - try { - - command.get(); - fail("Expected ExecutionException"); - } catch (ExecutionException e) { - assertThat(e).hasCauseExactlyInstanceOf(RedisException.class); - } - } - - @Test - public void closeWithBufferedCommands() throws Exception { - - when(clientOptions.isAutoReconnect()).thenReturn(true); - when(clientOptions.getRequestQueueSize()).thenReturn(1000); - when(clientOptions.getDisconnectedBehavior()).thenReturn(ClientOptions.DisconnectedBehavior.ACCEPT_COMMANDS); - sut.write(command); - - sut.close(); - - verify(clusterChannelWriter).write(command); - } - - @Test - public void closeWithCancelledBufferedCommands() throws Exception { - - when(clientOptions.isAutoReconnect()).thenReturn(true); - when(clientOptions.getRequestQueueSize()).thenReturn(1000); - when(clientOptions.getDisconnectedBehavior()).thenReturn(ClientOptions.DisconnectedBehavior.ACCEPT_COMMANDS); - sut.write(command); - command.cancel(); - - sut.close(); - - verifyZeroInteractions(clusterChannelWriter); - } - - @Test - public void closeWithBufferedCommandsFails() throws Exception { - - when(clientOptions.isAutoReconnect()).thenReturn(true); - when(clientOptions.getRequestQueueSize()).thenReturn(1000); - when(clientOptions.getDisconnectedBehavior()).thenReturn(ClientOptions.DisconnectedBehavior.ACCEPT_COMMANDS); - sut.write(command); - when(clusterChannelWriter.write(any())).thenThrow(new RedisException("")); - - sut.close(); - - try { - - command.get(); - fail("Expected ExecutionException"); - } catch (ExecutionException e) { - assertThat(e).hasCauseExactlyInstanceOf(RedisException.class); - } - } -} diff --git a/src/test/java/com/lambdaworks/redis/cluster/ClusterPartiallyDownTest.java 
b/src/test/java/com/lambdaworks/redis/cluster/ClusterPartiallyDownTest.java deleted file mode 100644 index a4cb0b0f2b..0000000000 --- a/src/test/java/com/lambdaworks/redis/cluster/ClusterPartiallyDownTest.java +++ /dev/null @@ -1,119 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import static org.assertj.core.api.Assertions.assertThat; -import static org.assertj.core.api.Fail.fail; - -import java.net.ConnectException; -import java.util.ArrayList; -import java.util.HashSet; -import java.util.List; - -import org.junit.After; -import org.junit.Before; -import org.junit.Test; - -import com.lambdaworks.TestClientResources; -import com.lambdaworks.redis.*; -import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection; -import com.lambdaworks.redis.cluster.models.partitions.Partitions; -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; -import com.lambdaworks.redis.internal.LettuceLists; -import com.lambdaworks.redis.resource.ClientResources; - -/** - * @author Mark Paluch - */ -public class ClusterPartiallyDownTest extends AbstractTest { - private static ClientResources clientResources = TestClientResources.create(); - - private static int port1 = 7579; - private static int port2 = 7580; - private static int port3 = 7581; - private static int port4 = 7582; - - private static final RedisURI URI_1 = RedisURI.create(TestSettings.host(), port1); - private static final RedisURI URI_2 = RedisURI.create(TestSettings.host(), port2); - private static final RedisURI URI_3 = RedisURI.create(TestSettings.host(), port3); - private static final RedisURI URI_4 = RedisURI.create(TestSettings.host(), port4); - - private RedisClusterClient redisClusterClient; - - @Before - public void before() throws Exception { - - } - - @After - public void after() throws Exception { - redisClusterClient.shutdown(); - } - - @Test - public void connectToPartiallyDownCluster() throws Exception { - - List seed = LettuceLists.unmodifiableList(URI_1, URI_2, URI_3, URI_4); - redisClusterClient = RedisClusterClient.create(clientResources, seed); - StatefulRedisClusterConnection connection = redisClusterClient.connect(); - - assertThat(connection.sync().ping()).isEqualTo("PONG"); - - connection.close(); - } - - @Test - public void operateOnPartiallyDownCluster() throws Exception { - - List seed = LettuceLists.unmodifiableList(URI_1, URI_2, URI_3, URI_4); - redisClusterClient = RedisClusterClient.create(clientResources, seed); - StatefulRedisClusterConnection connection = redisClusterClient.connect(); - - String key_10439 = "aaa"; - assertThat(SlotHash.getSlot(key_10439)).isEqualTo(10439); - - try { - connection.sync().get(key_10439); - fail("Missing RedisException"); - } catch (RedisConnectionException e) { - assertThat(e).hasRootCauseInstanceOf( - ConnectException.class); - } - - connection.close(); - } - - @Test - public void seedNodesAreOffline() throws Exception { - - List seed = LettuceLists.unmodifiableList(URI_1, URI_2, URI_3); - redisClusterClient = RedisClusterClient.create(clientResources, seed); - - try { - redisClusterClient.connect(); - fail("Missing RedisException"); - } catch (RedisException e) { - assertThat(e).hasNoCause(); - } - } - - @Test - public void partitionNodesAreOffline() throws Exception { - - List seed = LettuceLists.unmodifiableList(URI_1, URI_2, URI_3); - redisClusterClient = RedisClusterClient.create(clientResources, seed); - - Partitions partitions = new Partitions(); - partitions.addPartition( - new RedisClusterNode(URI_1, "a", true, null, 0, 0, 0, new 
ArrayList<>(), new HashSet<>())); - partitions.addPartition( - new RedisClusterNode(URI_2, "b", true, null, 0, 0, 0, new ArrayList<>(), new HashSet<>())); - - redisClusterClient.setPartitions(partitions); - - try { - redisClusterClient.connect(); - fail("Missing RedisConnectionException"); - } catch (RedisConnectionException e) { - assertThat(e).hasRootCauseInstanceOf(ConnectException.class); - } - } -} diff --git a/src/test/java/com/lambdaworks/redis/cluster/ClusterPartitionParserTest.java b/src/test/java/com/lambdaworks/redis/cluster/ClusterPartitionParserTest.java deleted file mode 100644 index d89abff6b6..0000000000 --- a/src/test/java/com/lambdaworks/redis/cluster/ClusterPartitionParserTest.java +++ /dev/null @@ -1,138 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import static org.assertj.core.api.Assertions.assertThat; -import static org.hamcrest.CoreMatchers.hasItem; -import static org.junit.Assert.assertThat; - -import java.util.Collections; -import java.util.HashSet; -import java.util.concurrent.TimeUnit; - -import org.junit.Test; - -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.cluster.models.partitions.ClusterPartitionParser; -import com.lambdaworks.redis.cluster.models.partitions.Partitions; -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; -import com.lambdaworks.redis.internal.LettuceLists; - -public class ClusterPartitionParserTest { - - private static String nodes = "c37ab8396be428403d4e55c0d317348be27ed973 127.0.0.1:7381 master - 111 1401258245007 222 connected 7000 12000 12002-16383\n" - + "3d005a179da7d8dc1adae6409d47b39c369e992b 127.0.0.1:7380 master - 0 1401258245007 2 disconnected 8000-11999 [8000->-4213a8dabb94f92eb6a860f4d0729e6a25d43e0c] [5461-<-c37ab8396be428403d4e55c0d317348be27ed973]\n" - + "4213a8dabb94f92eb6a860f4d0729e6a25d43e0c 127.0.0.1:7379 myself,slave 4213a8dabb94f92eb6a860f4d0729e6a25d43e0c 0 0 1 connected 0-6999 7001-7999 12001\n" - + "5f4a2236d00008fba7ac0dd24b95762b446767bd :0 myself,master - 0 0 1 connected [5460->-5f4a2236d00008fba7ac0dd24b95762b446767bd] [5461-<-5f4a2236d00008fba7ac0dd24b95762b446767bd]"; - - private static String nodesWithIPv6Addresses = "c37ab8396be428403d4e55c0d317348be27ed973 affe:affe:123:34::1:7381 master - 111 1401258245007 222 connected 7000 12000 12002-16383\n" - + "3d005a179da7d8dc1adae6409d47b39c369e992b [dead:beef:dead:beef::1]:7380 master - 0 1401258245007 2 disconnected 8000-11999 [8000->-4213a8dabb94f92eb6a860f4d0729e6a25d43e0c] [5461-<-c37ab8396be428403d4e55c0d317348be27ed973]\n" - + "4213a8dabb94f92eb6a860f4d0729e6a25d43e0c 127.0.0.1:7379 myself,slave 4213a8dabb94f92eb6a860f4d0729e6a25d43e0c 0 0 1 connected 0-6999 7001-7999 12001\n" - + "5f4a2236d00008fba7ac0dd24b95762b446767bd :0 myself,master - 0 0 1 connected [5460->-5f4a2236d00008fba7ac0dd24b95762b446767bd] [5461-<-5f4a2236d00008fba7ac0dd24b95762b446767bd]"; - - private static String nodesWithBusPort = "c37ab8396be428403d4e55c0d317348be27ed973 127.0.0.1:7381@17381 slave 4213a8dabb94f92eb6a860f4d0729e6a25d43e0c 0 1454482721690 3 connected\n" - + "3d005a179da7d8dc1adae6409d47b39c369e992b 127.0.0.1:7380@17380 master - 0 1454482721690 0 connected 12000-16383\n" - + "4213a8dabb94f92eb6a860f4d0729e6a25d43e0c 127.0.0.1:7379@17379 myself,master - 0 0 1 connected 0-11999\n" - + "5f4a2236d00008fba7ac0dd24b95762b446767bd 127.0.0.1:7382@17382 slave 3d005a179da7d8dc1adae6409d47b39c369e992b 0 1454482721690 2 connected"; - - @Test - public void shouldParseNodesCorrectly() throws Exception { - - Partitions result = 
ClusterPartitionParser.parse(nodes); - - assertThat(result.getPartitions()).hasSize(4); - - RedisClusterNode p1 = result.getPartitions().get(0); - - assertThat(p1.getNodeId()).isEqualTo("c37ab8396be428403d4e55c0d317348be27ed973"); - assertThat(p1.getUri().getHost()).isEqualTo("127.0.0.1"); - assertThat(p1.getUri().getPort()).isEqualTo(7381); - assertThat(p1.getSlaveOf()).isNull(); - assertThat(p1.getFlags()).isEqualTo(Collections.singleton(RedisClusterNode.NodeFlag.MASTER)); - assertThat(p1.getPingSentTimestamp()).isEqualTo(111); - assertThat(p1.getPongReceivedTimestamp()).isEqualTo(1401258245007L); - assertThat(p1.getConfigEpoch()).isEqualTo(222); - assertThat(p1.isConnected()).isTrue(); - - assertThat(p1.getSlots(), hasItem(7000)); - assertThat(p1.getSlots(), hasItem(12000)); - assertThat(p1.getSlots(), hasItem(12002)); - assertThat(p1.getSlots(), hasItem(12003)); - assertThat(p1.getSlots(), hasItem(16383)); - - RedisClusterNode p3 = result.getPartitions().get(2); - - assertThat(p3.getSlaveOf()).isEqualTo("4213a8dabb94f92eb6a860f4d0729e6a25d43e0c"); - assertThat(p3.toString()).contains(RedisClusterNode.class.getSimpleName()); - assertThat(result.toString()).contains(Partitions.class.getSimpleName()); - } - - @Test - public void shouldParseNodesWithBusPort() throws Exception { - - Partitions result = ClusterPartitionParser.parse(nodesWithBusPort); - - assertThat(result.getPartitions()).hasSize(4); - - RedisClusterNode p1 = result.getPartitions().get(0); - - assertThat(p1.getNodeId()).isEqualTo("c37ab8396be428403d4e55c0d317348be27ed973"); - assertThat(p1.getUri().getHost()).isEqualTo("127.0.0.1"); - assertThat(p1.getUri().getPort()).isEqualTo(7381); - } - - @Test - public void shouldParseNodesIPv6Address() throws Exception { - - Partitions result = ClusterPartitionParser.parse(nodesWithIPv6Addresses); - - assertThat(result.getPartitions()).hasSize(4); - - RedisClusterNode p1 = result.getPartitions().get(0); - - assertThat(p1.getUri().getHost()).isEqualTo("affe:affe:123:34::1"); - assertThat(p1.getUri().getPort()).isEqualTo(7381); - - RedisClusterNode p2 = result.getPartitions().get(1); - - assertThat(p2.getUri().getHost()).isEqualTo("dead:beef:dead:beef::1"); - assertThat(p2.getUri().getPort()).isEqualTo(7380); - } - - @Test - public void getNodeByHashShouldReturnCorrectNode() throws Exception { - - Partitions partitions = ClusterPartitionParser.parse(nodes); - assertThat(partitions.getPartitionBySlot(7000).getNodeId()).isEqualTo("c37ab8396be428403d4e55c0d317348be27ed973"); - assertThat(partitions.getPartitionBySlot(5460).getNodeId()).isEqualTo("4213a8dabb94f92eb6a860f4d0729e6a25d43e0c"); - } - - @Test - public void testModel() throws Exception { - RedisClusterNode node = mockRedisClusterNode(); - - assertThat(node.toString()).contains(RedisClusterNode.class.getSimpleName()); - assertThat(node.hasSlot(1)).isTrue(); - assertThat(node.hasSlot(9)).isFalse(); - } - - protected RedisClusterNode mockRedisClusterNode() { - RedisClusterNode node = new RedisClusterNode(); - node.setConfigEpoch(1); - node.setConnected(true); - node.setFlags(new HashSet<>()); - node.setNodeId("abcd"); - node.setPingSentTimestamp(2); - node.setPongReceivedTimestamp(3); - node.setSlaveOf("me"); - node.setSlots(LettuceLists.unmodifiableList(1, 2, 3)); - node.setUri(new RedisURI("localhost", 1, 1, TimeUnit.DAYS)); - return node; - } - - @Test - public void createNode() throws Exception { - RedisClusterNode original = mockRedisClusterNode(); - RedisClusterNode created = RedisClusterNode.of(original.getNodeId()); - - 
assertThat(original).isEqualTo(created); - } -} diff --git a/src/test/java/com/lambdaworks/redis/cluster/ClusterReactiveCommandTest.java b/src/test/java/com/lambdaworks/redis/cluster/ClusterReactiveCommandTest.java deleted file mode 100644 index cd37113a31..0000000000 --- a/src/test/java/com/lambdaworks/redis/cluster/ClusterReactiveCommandTest.java +++ /dev/null @@ -1,146 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import static com.lambdaworks.redis.cluster.ClusterTestUtil.getNodeId; -import static org.assertj.core.api.Assertions.assertThat; - -import java.util.List; - -import com.lambdaworks.redis.internal.LettuceLists; -import org.junit.After; -import org.junit.AfterClass; -import org.junit.Before; -import org.junit.BeforeClass; -import org.junit.FixMethodOrder; -import org.junit.Test; -import org.junit.runners.MethodSorters; - -import rx.Observable; - -import com.lambdaworks.redis.FastShutdown; -import com.lambdaworks.redis.RedisClient; -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.api.async.RedisAsyncCommands; -import com.lambdaworks.redis.cluster.api.rx.RedisClusterReactiveCommands; -import com.lambdaworks.redis.cluster.models.slots.ClusterSlotRange; -import com.lambdaworks.redis.cluster.models.slots.ClusterSlotsParser; - -@FixMethodOrder(MethodSorters.NAME_ASCENDING) -@SuppressWarnings("unchecked") -public class ClusterReactiveCommandTest extends AbstractClusterTest { - - protected static RedisClient client; - - protected RedisClusterReactiveCommands reactive; - protected RedisAsyncCommands async; - - @BeforeClass - public static void setupClient() throws Exception { - setupClusterClient(); - client = RedisClient.create(RedisURI.Builder.redis(host, port1).build()); - clusterClient = RedisClusterClient.create(LettuceLists.unmodifiableList(RedisURI.Builder.redis(host, port1).build())); - - } - - @AfterClass - public static void shutdownClient() { - shutdownClusterClient(); - FastShutdown.shutdown(client); - FastShutdown.shutdown(clusterClient); - } - - @Before - public void before() throws Exception { - - clusterRule.getClusterClient().reloadPartitions(); - - async = client.connectAsync(RedisURI.Builder.redis(host, port1).build()); - reactive = async.getStatefulConnection().reactive(); - } - - @After - public void after() throws Exception { - async.close(); - } - - @Test - public void testClusterBumpEpoch() throws Exception { - - String result = first(reactive.clusterBumpepoch()); - - assertThat(result).matches("(BUMPED|STILL).*"); - } - - @Test - public void testClusterInfo() throws Exception { - - String status = first(reactive.clusterInfo()); - - assertThat(status).contains("cluster_known_nodes:"); - assertThat(status).contains("cluster_slots_fail:0"); - assertThat(status).contains("cluster_state:"); - } - - @Test - public void testClusterNodes() throws Exception { - - String string = first(reactive.clusterNodes()); - - assertThat(string).contains("connected"); - assertThat(string).contains("master"); - assertThat(string).contains("myself"); - } - - @Test - public void testClusterNodesSync() throws Exception { - - String string = first(reactive.clusterNodes()); - - assertThat(string).contains("connected"); - assertThat(string).contains("master"); - assertThat(string).contains("myself"); - } - - @Test - public void testClusterSlaves() throws Exception { - - Long replication = first(reactive.waitForReplication(1, 5)); - assertThat(replication).isNotNull(); - } - - @Test - public void testAsking() throws Exception { - 
assertThat(first(reactive.asking())).isEqualTo("OK"); - } - - @Test - public void testClusterSlots() throws Exception { - - List reply = reactive.clusterSlots().toList().toBlocking().first(); - assertThat(reply.size()).isGreaterThan(1); - - List parse = ClusterSlotsParser.parse(reply); - assertThat(parse).hasSize(2); - - ClusterSlotRange clusterSlotRange = parse.get(0); - assertThat(clusterSlotRange.getFrom()).isEqualTo(0); - assertThat(clusterSlotRange.getTo()).isEqualTo(11999); - - assertThat(clusterSlotRange.getMaster()).isNotNull(); - assertThat(clusterSlotRange.getSlaves()).isNotNull(); - assertThat(clusterSlotRange.toString()).contains(ClusterSlotRange.class.getSimpleName()); - } - - @Test - public void clusterSlaves() throws Exception { - - String nodeId = getNodeId(async.getStatefulConnection().sync()); - List result = reactive.clusterSlaves(nodeId).toList().toBlocking().first(); - - assertThat(result.size()).isGreaterThan(0); - } - - private T first(Observable observable) { - return observable.toBlocking().first(); - } - -} diff --git a/src/test/java/com/lambdaworks/redis/cluster/ClusterSetup.java b/src/test/java/com/lambdaworks/redis/cluster/ClusterSetup.java deleted file mode 100644 index 6ed809d13d..0000000000 --- a/src/test/java/com/lambdaworks/redis/cluster/ClusterSetup.java +++ /dev/null @@ -1,152 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import java.util.concurrent.ExecutionException; -import java.util.concurrent.TimeoutException; -import java.util.stream.Stream; - -import com.lambdaworks.Wait; -import com.lambdaworks.redis.TestSettings; -import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection; -import com.lambdaworks.redis.cluster.api.async.RedisAdvancedClusterAsyncCommands; -import com.lambdaworks.redis.cluster.api.async.RedisClusterAsyncCommands; -import com.lambdaworks.redis.cluster.api.sync.RedisClusterCommands; -import com.lambdaworks.redis.cluster.models.partitions.Partitions; -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; - -/** - * @author Mark Paluch - */ -public class ClusterSetup { - - /** - * Setup a cluster consisting of two members (see {@link AbstractClusterTest#port5} to {@link AbstractClusterTest#port6}). 
- * Two masters (0-11999 and 12000-16383) - * - * @param clusterRule - * @throws InterruptedException - * @throws ExecutionException - * @throws TimeoutException - */ - public static void setup2Masters(ClusterRule clusterRule) throws InterruptedException, ExecutionException, TimeoutException { - - clusterRule.clusterReset(); - clusterRule.meet(AbstractClusterTest.host, AbstractClusterTest.port5); - clusterRule.meet(AbstractClusterTest.host, AbstractClusterTest.port6); - - RedisAdvancedClusterAsyncCommands connection = clusterRule.getClusterClient().connectClusterAsync(); - Wait.untilTrue(() -> { - - clusterRule.getClusterClient().reloadPartitions(); - return clusterRule.getClusterClient().getPartitions().size() == 2; - - }).waitOrTimeout(); - - Partitions partitions = clusterRule.getClusterClient().getPartitions(); - for (RedisClusterNode partition : partitions) { - - if (!partition.getSlots().isEmpty()) { - RedisClusterAsyncCommands nodeConnection = connection.getConnection(partition.getNodeId()); - - for (Integer slot : partition.getSlots()) { - - try { - nodeConnection.clusterDelSlots(slot); - } catch (Exception e) { - } - } - } - } - - - RedisClusterAsyncCommands node1 = connection.getConnection(AbstractClusterTest.host, - AbstractClusterTest.port5); - node1.clusterAddSlots(AbstractClusterTest.createSlots(0, 12000)); - - RedisClusterAsyncCommands node2 = connection.getConnection(AbstractClusterTest.host, - AbstractClusterTest.port6); - node2.clusterAddSlots(AbstractClusterTest.createSlots(12000, 16384)); - - Wait.untilTrue(clusterRule::isStable).waitOrTimeout(); - - Wait.untilEquals( - 2L, - () -> { - clusterRule.getClusterClient().reloadPartitions(); - - return partitionStream(clusterRule).filter( - redisClusterNode -> redisClusterNode.is(RedisClusterNode.NodeFlag.MASTER)).count(); - }).waitOrTimeout(); - - connection.close(); - } - - /** - * Setup a cluster consisting of two members (see {@link AbstractClusterTest#port5} to {@link AbstractClusterTest#port6}). - * One master (0-16383) and one slave. 
- * - * @param clusterRule - * @throws InterruptedException - * @throws ExecutionException - * @throws TimeoutException - */ - public static void setupMasterWithSlave(ClusterRule clusterRule) throws InterruptedException, ExecutionException, - TimeoutException { - - clusterRule.clusterReset(); - clusterRule.meet(AbstractClusterTest.host, AbstractClusterTest.port5); - clusterRule.meet(AbstractClusterTest.host, AbstractClusterTest.port6); - - RedisAdvancedClusterAsyncCommands connection = clusterRule.getClusterClient().connectClusterAsync(); - StatefulRedisClusterConnection statefulConnection = connection.getStatefulConnection(); - - Wait.untilEquals(2, () -> { - clusterRule.getClusterClient().reloadPartitions(); - return clusterRule.getClusterClient().getPartitions().size(); - }).waitOrTimeout(); - - RedisClusterCommands node1 = statefulConnection.getConnection(TestSettings.hostAddr(), - AbstractClusterTest.port5).sync(); - node1.clusterAddSlots(AbstractClusterTest.createSlots(0, 16384)); - - Wait.untilTrue(clusterRule::isStable).waitOrTimeout(); - - connection.getConnection(AbstractClusterTest.host, AbstractClusterTest.port6).clusterReplicate(node1.clusterMyId()) - .get(); - - clusterRule.getClusterClient().reloadPartitions(); - - Wait.untilEquals( - 1L, - () -> { - clusterRule.getClusterClient().reloadPartitions(); - return partitionStream(clusterRule).filter( - redisClusterNode -> redisClusterNode.is(RedisClusterNode.NodeFlag.MASTER)).count(); - }).waitOrTimeout(); - - Wait.untilEquals( - 1L, - () -> { - clusterRule.getClusterClient().reloadPartitions(); - return partitionStream(clusterRule).filter( - redisClusterNode -> redisClusterNode.is(RedisClusterNode.NodeFlag.SLAVE)).count(); - }).waitOrTimeout(); - - connection.close(); - } - - protected static Stream partitionStream(ClusterRule clusterRule) { - return clusterRule.getClusterClient().getPartitions().getPartitions().stream(); - } - - private static boolean is2Masters2Slaves(ClusterRule clusterRule) { - RedisClusterClient clusterClient = clusterRule.getClusterClient(); - - long slaves = clusterClient.getPartitions().stream() - .filter(redisClusterNode -> redisClusterNode.is(RedisClusterNode.NodeFlag.SLAVE)).count(); - long masters = clusterClient.getPartitions().stream() - .filter(redisClusterNode -> redisClusterNode.is(RedisClusterNode.NodeFlag.MASTER)).count(); - - return slaves == 2 && masters == 2; - } - -} diff --git a/src/test/java/com/lambdaworks/redis/cluster/ClusterTestUtil.java b/src/test/java/com/lambdaworks/redis/cluster/ClusterTestUtil.java deleted file mode 100644 index 4803a39788..0000000000 --- a/src/test/java/com/lambdaworks/redis/cluster/ClusterTestUtil.java +++ /dev/null @@ -1,81 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import java.lang.reflect.InvocationHandler; -import java.lang.reflect.Proxy; - -import com.lambdaworks.redis.RedisClusterConnection; -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection; -import com.lambdaworks.redis.cluster.api.sync.RedisClusterCommands; -import com.lambdaworks.redis.cluster.models.partitions.ClusterPartitionParser; -import com.lambdaworks.redis.cluster.models.partitions.Partitions; -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; - -/** - * @author Mark Paluch - * @since 3.0 - */ -public class ClusterTestUtil { - - /** - * Retrieve the cluster node Id from the {@code connection}. 
- * - * @param connection - * @return - */ - public static String getNodeId(RedisClusterCommands connection) { - RedisClusterNode ownPartition = getOwnPartition(connection); - if (ownPartition != null) { - return ownPartition.getNodeId(); - } - - return null; - } - - /** - * Retrieve the {@link RedisClusterNode} from the {@code connection}. - * - * @param connection - * @return - */ - public static RedisClusterNode getOwnPartition(RedisClusterCommands connection) { - Partitions partitions = ClusterPartitionParser.parse(connection.clusterNodes()); - - for (RedisClusterNode partition : partitions) { - if (partition.getFlags().contains(RedisClusterNode.NodeFlag.MYSELF)) { - return partition; - } - } - return null; - } - - /** - * Flush databases of all cluster nodes. - * - * @param connection the cluster connection - */ - public static void flushDatabaseOfAllNodes(StatefulRedisClusterConnection connection) { - for (RedisClusterNode node : connection.getPartitions()) { - try { - connection.getConnection(node.getNodeId()).sync().flushall(); - connection.getConnection(node.getNodeId()).sync().flushdb(); - } catch (Exception o_O) { - // ignore - } - } - } - - /** - * Create an API wrapper which exposes the {@link RedisCommands} API by using internally a cluster connection. - * - * @param connection - * @return - */ - public static RedisCommands redisCommandsOverCluster( - StatefulRedisClusterConnection connection) { - StatefulRedisClusterConnectionImpl clusterConnection = (StatefulRedisClusterConnectionImpl) connection; - InvocationHandler h = clusterConnection.syncInvocationHandler(); - return (RedisCommands) Proxy.newProxyInstance(ClusterTestUtil.class.getClassLoader(), - new Class[] { RedisCommands.class }, h); - } -} diff --git a/src/test/java/com/lambdaworks/redis/cluster/ClusterTopologyRefreshOptionsTest.java b/src/test/java/com/lambdaworks/redis/cluster/ClusterTopologyRefreshOptionsTest.java deleted file mode 100644 index a69c254bb6..0000000000 --- a/src/test/java/com/lambdaworks/redis/cluster/ClusterTopologyRefreshOptionsTest.java +++ /dev/null @@ -1,95 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import static org.assertj.core.api.Assertions.assertThat; - -import java.util.concurrent.TimeUnit; - -import org.junit.Test; - -import com.lambdaworks.redis.cluster.ClusterTopologyRefreshOptions.RefreshTrigger; - -/** - * @author Mark Paluch - */ -public class ClusterTopologyRefreshOptionsTest { - - @Test - public void testBuilder() throws Exception { - - ClusterTopologyRefreshOptions options = ClusterTopologyRefreshOptions.builder()// - .enablePeriodicRefresh(true).refreshPeriod(10, TimeUnit.MINUTES)// - .dynamicRefreshSources(false) // - .enableAdaptiveRefreshTrigger(RefreshTrigger.MOVED_REDIRECT)// - .adaptiveRefreshTriggersTimeout(15, TimeUnit.MILLISECONDS)// - .closeStaleConnections(false)// - .refreshTriggersReconnectAttempts(2)// - .build(); - - assertThat(options.getRefreshPeriod()).isEqualTo(10); - assertThat(options.getRefreshPeriodUnit()).isEqualTo(TimeUnit.MINUTES); - assertThat(options.isCloseStaleConnections()).isEqualTo(false); - assertThat(options.isPeriodicRefreshEnabled()).isTrue(); - assertThat(options.useDynamicRefreshSources()).isFalse(); - assertThat(options.getAdaptiveRefreshTimeout()).isEqualTo(15); - assertThat(options.getAdaptiveRefreshTimeoutUnit()).isEqualTo(TimeUnit.MILLISECONDS); - assertThat(options.getAdaptiveRefreshTriggers()).containsOnly(RefreshTrigger.MOVED_REDIRECT); - assertThat(options.getRefreshTriggersReconnectAttempts()).isEqualTo(2); - } - - @Test 
- public void testCopy() throws Exception { - - ClusterTopologyRefreshOptions master = ClusterTopologyRefreshOptions.builder()// - .enablePeriodicRefresh(true).refreshPeriod(10, TimeUnit.MINUTES)// - .dynamicRefreshSources(false) // - .enableAdaptiveRefreshTrigger(RefreshTrigger.MOVED_REDIRECT)// - .adaptiveRefreshTriggersTimeout(15, TimeUnit.MILLISECONDS)// - .closeStaleConnections(false)// - .refreshTriggersReconnectAttempts(2)// - .build(); - - ClusterTopologyRefreshOptions options = ClusterTopologyRefreshOptions.copyOf(master); - - assertThat(options.getRefreshPeriod()).isEqualTo(10); - assertThat(options.getRefreshPeriodUnit()).isEqualTo(TimeUnit.MINUTES); - assertThat(options.isCloseStaleConnections()).isEqualTo(false); - assertThat(options.isPeriodicRefreshEnabled()).isTrue(); - assertThat(options.useDynamicRefreshSources()).isFalse(); - assertThat(options.getAdaptiveRefreshTimeout()).isEqualTo(15); - assertThat(options.getAdaptiveRefreshTimeoutUnit()).isEqualTo(TimeUnit.MILLISECONDS); - assertThat(options.getAdaptiveRefreshTriggers()).containsOnly(RefreshTrigger.MOVED_REDIRECT); - assertThat(options.getRefreshTriggersReconnectAttempts()).isEqualTo(2); - } - - @Test - public void testDefault() throws Exception { - - ClusterTopologyRefreshOptions options = ClusterTopologyRefreshOptions.create(); - - assertThat(options.getRefreshPeriod()).isEqualTo(ClusterTopologyRefreshOptions.DEFAULT_REFRESH_PERIOD); - assertThat(options.getRefreshPeriodUnit()).isEqualTo(ClusterTopologyRefreshOptions.DEFAULT_REFRESH_PERIOD_UNIT); - assertThat(options.isCloseStaleConnections()).isEqualTo(ClusterTopologyRefreshOptions.DEFAULT_CLOSE_STALE_CONNECTIONS); - assertThat(options.isPeriodicRefreshEnabled()).isEqualTo(ClusterTopologyRefreshOptions.DEFAULT_PERIODIC_REFRESH_ENABLED).isFalse(); - assertThat(options.useDynamicRefreshSources()).isEqualTo(ClusterTopologyRefreshOptions.DEFAULT_DYNAMIC_REFRESH_SOURCES) - .isTrue(); - assertThat(options.getAdaptiveRefreshTimeout()) - .isEqualTo(ClusterTopologyRefreshOptions.DEFAULT_ADAPTIVE_REFRESH_TIMEOUT); - assertThat(options.getAdaptiveRefreshTimeoutUnit()) - .isEqualTo(ClusterTopologyRefreshOptions.DEFAULT_ADAPTIVE_REFRESH_TIMEOUT_UNIT); - assertThat(options.getAdaptiveRefreshTriggers()) - .isEqualTo(ClusterTopologyRefreshOptions.DEFAULT_ADAPTIVE_REFRESH_TRIGGERS); - assertThat(options.getRefreshTriggersReconnectAttempts()) - .isEqualTo(ClusterTopologyRefreshOptions.DEFAULT_REFRESH_TRIGGERS_RECONNECT_ATTEMPTS); - } - - @Test - public void testEnabled() throws Exception { - - ClusterTopologyRefreshOptions options = ClusterTopologyRefreshOptions.enabled(); - - assertThat(options.isPeriodicRefreshEnabled()).isTrue(); - assertThat(options.useDynamicRefreshSources()).isTrue(); - assertThat(options.getAdaptiveRefreshTriggers()).contains(RefreshTrigger.ASK_REDIRECT, RefreshTrigger.MOVED_REDIRECT, - RefreshTrigger.PERSISTENT_RECONNECTS); - } -} \ No newline at end of file diff --git a/src/test/java/com/lambdaworks/redis/cluster/ClusterTopologyRefreshSchedulerTest.java b/src/test/java/com/lambdaworks/redis/cluster/ClusterTopologyRefreshSchedulerTest.java deleted file mode 100644 index b51813a7c7..0000000000 --- a/src/test/java/com/lambdaworks/redis/cluster/ClusterTopologyRefreshSchedulerTest.java +++ /dev/null @@ -1,198 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import static org.mockito.Matchers.any; -import static org.mockito.Mockito.never; -import static org.mockito.Mockito.times; -import static org.mockito.Mockito.verify; -import static 
org.mockito.Mockito.when; - -import java.util.concurrent.TimeUnit; - -import org.junit.Before; -import org.junit.Test; -import org.junit.runner.RunWith; -import org.mockito.Mock; -import org.mockito.runners.MockitoJUnitRunner; - -import com.lambdaworks.redis.resource.ClientResources; - -import io.netty.util.concurrent.EventExecutorGroup; - -/** - * @author Mark Paluch - */ -@RunWith(MockitoJUnitRunner.class) -public class ClusterTopologyRefreshSchedulerTest { - - private ClusterTopologyRefreshScheduler sut; - - private ClusterTopologyRefreshOptions immediateRefresh = ClusterTopologyRefreshOptions.builder().enablePeriodicRefresh(1, TimeUnit.MILLISECONDS) - .enableAllAdaptiveRefreshTriggers().build(); - - private ClusterClientOptions clusterClientOptions = ClusterClientOptions.builder() - .topologyRefreshOptions(immediateRefresh).build(); - - @Mock - private ClientResources clientResources; - - @Mock - private RedisClusterClient clusterClient; - - @Mock - private EventExecutorGroup eventExecutors; - - @Before - public void before() throws Exception { - - when(clientResources.eventExecutorGroup()).thenReturn(eventExecutors); - - sut = new ClusterTopologyRefreshScheduler(clusterClient, clientResources); - } - - @Test - public void runShouldSubmitRefreshShouldTrigger() throws Exception { - - when(clusterClient.getClusterClientOptions()).thenReturn(clusterClientOptions); - - sut.run(); - verify(eventExecutors).submit(any(Runnable.class)); - } - - @Test - public void runnableShouldCallPartitionRefresh() throws Exception { - - when(clusterClient.getClusterClientOptions()).thenReturn(clusterClientOptions); - - when(eventExecutors.submit(any(Runnable.class))).then(invocation -> { - ((Runnable) invocation.getArguments()[0]).run(); - return null; - }); - - sut.run(); - - verify(clusterClient).reloadPartitions(); - } - - @Test - public void shouldNotSubmitIfOptionsNotSet() throws Exception { - - sut.run(); - verify(eventExecutors, never()).submit(any(Runnable.class)); - } - - @Test - public void shouldNotSubmitIfExecutorIsShuttingDown() throws Exception { - - when(clusterClient.getClusterClientOptions()).thenReturn(clusterClientOptions); - when(eventExecutors.isShuttingDown()).thenReturn(true); - - sut.run(); - verify(eventExecutors, never()).submit(any(Runnable.class)); - } - - @Test - public void shouldNotSubmitIfExecutorIsShutdown() throws Exception { - - when(clusterClient.getClusterClientOptions()).thenReturn(clusterClientOptions); - when(eventExecutors.isShutdown()).thenReturn(true); - - sut.run(); - verify(eventExecutors, never()).submit(any(Runnable.class)); - } - - @Test - public void shouldNotSubmitIfExecutorIsTerminated() throws Exception { - - when(clusterClient.getClusterClientOptions()).thenReturn(clusterClientOptions); - when(eventExecutors.isTerminated()).thenReturn(true); - - sut.run(); - verify(eventExecutors, never()).submit(any(Runnable.class)); - } - - @Test - public void shouldTriggerRefreshOnAskRedirection() throws Exception { - - ClusterTopologyRefreshOptions clusterTopologyRefreshOptions = ClusterTopologyRefreshOptions.builder() - .enableAllAdaptiveRefreshTriggers().build(); - - ClusterClientOptions clusterClientOptions = ClusterClientOptions.builder() - .topologyRefreshOptions(clusterTopologyRefreshOptions).build(); - - when(clusterClient.getClusterClientOptions()).thenReturn(clusterClientOptions); - - sut.onAskRedirection(); - verify(eventExecutors).submit(any(Runnable.class)); - } - - @Test - public void shouldNotTriggerAdaptiveRefreshUsingDefaults() throws Exception { - - 
ClusterTopologyRefreshOptions clusterTopologyRefreshOptions = ClusterTopologyRefreshOptions.create(); - - ClusterClientOptions clusterClientOptions = ClusterClientOptions.builder() - .topologyRefreshOptions(clusterTopologyRefreshOptions).build(); - - when(clusterClient.getClusterClientOptions()).thenReturn(clusterClientOptions); - - sut.onAskRedirection(); - verify(eventExecutors, never()).submit(any(Runnable.class)); - } - - @Test - public void shouldTriggerRefreshOnMovedRedirection() throws Exception { - - ClusterClientOptions clusterClientOptions = ClusterClientOptions.builder().topologyRefreshOptions(immediateRefresh) - .build(); - - when(clusterClient.getClusterClientOptions()).thenReturn(clusterClientOptions); - - sut.onMovedRedirection(); - verify(eventExecutors).submit(any(Runnable.class)); - } - - @Test - public void shouldTriggerRefreshOnReconnect() throws Exception { - - ClusterClientOptions clusterClientOptions = ClusterClientOptions.builder().topologyRefreshOptions(immediateRefresh) - .build(); - - when(clusterClient.getClusterClientOptions()).thenReturn(clusterClientOptions); - - sut.onReconnection(10); - verify(eventExecutors).submit(any(Runnable.class)); - } - - @Test - public void shouldNotTriggerRefreshOnFirstReconnect() throws Exception { - - ClusterClientOptions clusterClientOptions = ClusterClientOptions.builder().topologyRefreshOptions(immediateRefresh) - .build(); - - when(clusterClient.getClusterClientOptions()).thenReturn(clusterClientOptions); - - sut.onReconnection(1); - verify(eventExecutors, never()).submit(any(Runnable.class)); - } - - @Test - public void shouldRateLimitAdaptiveRequests() throws Exception { - - ClusterTopologyRefreshOptions adaptiveTimeout = ClusterTopologyRefreshOptions.builder().enablePeriodicRefresh(false) - .enableAllAdaptiveRefreshTriggers().adaptiveRefreshTriggersTimeout(50, TimeUnit.MILLISECONDS).build(); - - ClusterClientOptions clusterClientOptions = ClusterClientOptions.builder().topologyRefreshOptions(adaptiveTimeout) - .build(); - - when(clusterClient.getClusterClientOptions()).thenReturn(clusterClientOptions); - - for (int i = 0; i < 10; i++) { - sut.onAskRedirection(); - } - - Thread.sleep(100); - sut.onAskRedirection(); - - verify(eventExecutors, times(2)).submit(any(Runnable.class)); - } -} diff --git a/src/test/java/com/lambdaworks/redis/cluster/NodeSelectionAsyncTest.java b/src/test/java/com/lambdaworks/redis/cluster/NodeSelectionAsyncTest.java deleted file mode 100644 index 1544c755dc..0000000000 --- a/src/test/java/com/lambdaworks/redis/cluster/NodeSelectionAsyncTest.java +++ /dev/null @@ -1,249 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import static com.lambdaworks.redis.ScriptOutputType.STATUS; -import static org.assertj.core.api.Assertions.assertThat; - -import java.util.*; -import java.util.concurrent.CompletableFuture; -import java.util.concurrent.CompletionStage; - -import com.google.code.tempusfugit.temporal.WaitFor; -import com.lambdaworks.redis.internal.LettuceSets; -import org.junit.After; -import org.junit.Before; -import org.junit.Test; - -import com.lambdaworks.Wait; -import com.lambdaworks.redis.api.async.RedisAsyncCommands; -import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection; -import com.lambdaworks.redis.cluster.api.async.AsyncExecutions; -import com.lambdaworks.redis.cluster.api.async.AsyncNodeSelection; -import com.lambdaworks.redis.cluster.api.async.RedisAdvancedClusterAsyncCommands; -import com.lambdaworks.redis.cluster.api.async.RedisClusterAsyncCommands; -import 
com.lambdaworks.redis.cluster.api.sync.RedisAdvancedClusterCommands; -import com.lambdaworks.redis.cluster.models.partitions.Partitions; -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; - -/** - * @author Mark Paluch - */ -public class NodeSelectionAsyncTest extends AbstractClusterTest { - - private RedisAdvancedClusterAsyncCommands commands; - private RedisAdvancedClusterCommands syncCommands; - private StatefulRedisClusterConnection clusterConnection; - - @Before - public void before() throws Exception { - clusterClient.reloadPartitions(); - clusterConnection = clusterClient.connect(); - commands = clusterConnection.async(); - syncCommands = clusterConnection.sync(); - } - - @After - public void after() throws Exception { - commands.close(); - } - - @Test - public void testMultiNodeOperations() throws Exception { - - List expectation = new ArrayList<>(); - for (char c = 'a'; c < 'z'; c++) { - String key = new String(new char[] { c, c, c }); - expectation.add(key); - commands.set(key, value).get(); - } - - List result = new Vector<>(); - - CompletableFuture.allOf(commands.masters().commands().keys(result::add, "*").futures()).get(); - - assertThat(result).hasSize(expectation.size()); - - Collections.sort(expectation); - Collections.sort(result); - - assertThat(result).isEqualTo(expectation); - } - - @Test - public void testNodeSelectionCount() throws Exception { - assertThat(commands.all().size()).isEqualTo(4); - assertThat(commands.slaves().size()).isEqualTo(2); - assertThat(commands.masters().size()).isEqualTo(2); - - assertThat(commands.nodes(redisClusterNode -> redisClusterNode.is(RedisClusterNode.NodeFlag.MYSELF)).size()).isEqualTo( - 1); - } - - @Test - public void testNodeSelection() throws Exception { - - AsyncNodeSelection onlyMe = commands.nodes(redisClusterNode -> redisClusterNode.getFlags().contains( - RedisClusterNode.NodeFlag.MYSELF)); - Map> map = onlyMe.asMap(); - - assertThat(map).hasSize(1); - - RedisClusterAsyncCommands node = onlyMe.commands(0); - assertThat(node).isNotNull(); - - RedisClusterNode redisClusterNode = onlyMe.node(0); - assertThat(redisClusterNode.getFlags()).contains(RedisClusterNode.NodeFlag.MYSELF); - - assertThat(onlyMe.asMap()).hasSize(1); - } - - @Test - public void testDynamicNodeSelection() throws Exception { - - Partitions partitions = commands.getStatefulConnection().getPartitions(); - partitions.forEach(redisClusterNode -> redisClusterNode.setFlags(Collections.singleton(RedisClusterNode.NodeFlag.MASTER))); - - AsyncNodeSelection selection = commands.nodes( - redisClusterNode -> redisClusterNode.getFlags().contains(RedisClusterNode.NodeFlag.MYSELF), true); - - assertThat(selection.asMap()).hasSize(0); - partitions.getPartition(0) - .setFlags(LettuceSets.unmodifiableSet(RedisClusterNode.NodeFlag.MYSELF, RedisClusterNode.NodeFlag.MASTER)); - assertThat(selection.asMap()).hasSize(1); - - partitions.getPartition(1) - .setFlags(LettuceSets.unmodifiableSet(RedisClusterNode.NodeFlag.MYSELF, RedisClusterNode.NodeFlag.MASTER)); - assertThat(selection.asMap()).hasSize(2); - - } - - @Test - public void testNodeSelectionAsyncPing() throws Exception { - - AsyncNodeSelection onlyMe = commands.nodes(redisClusterNode -> redisClusterNode.getFlags().contains( - RedisClusterNode.NodeFlag.MYSELF)); - Map> map = onlyMe.asMap(); - - assertThat(map).hasSize(1); - - AsyncExecutions ping = onlyMe.commands().ping(); - CompletionStage completionStage = ping.get(onlyMe.node(0)); - - 
assertThat(completionStage.toCompletableFuture().get()).isEqualTo("PONG"); - } - - @Test - public void testStaticNodeSelection() throws Exception { - - AsyncNodeSelection selection = commands.nodes( - redisClusterNode -> redisClusterNode.getFlags().contains(RedisClusterNode.NodeFlag.MYSELF), false); - - assertThat(selection.asMap()).hasSize(1); - - commands.getStatefulConnection().getPartitions().getPartition(2) - .setFlags(Collections.singleton(RedisClusterNode.NodeFlag.MYSELF)); - - assertThat(selection.asMap()).hasSize(1); - } - - @Test - public void testAsynchronicityOfMultiNodeExecution() throws Exception { - - RedisAdvancedClusterAsyncCommands connection2 = clusterClient.connectClusterAsync(); - - AsyncNodeSelection masters = connection2.masters(); - CompletableFuture.allOf(masters.commands().configSet("lua-time-limit", "10").futures()).get(); - AsyncExecutions eval = masters.commands().eval("while true do end", STATUS, new String[0]); - - for (CompletableFuture future : eval.futures()) { - assertThat(future.isDone()).isFalse(); - assertThat(future.isCancelled()).isFalse(); - } - Thread.sleep(200); - - AsyncExecutions kill = commands.masters().commands().scriptKill(); - CompletableFuture.allOf(kill.futures()).get(); - - for (CompletionStage execution : kill) { - assertThat(execution.toCompletableFuture().get()).isEqualTo("OK"); - } - - CompletableFuture.allOf(eval.futures()).exceptionally(throwable -> null).get(); - for (CompletableFuture future : eval.futures()) { - assertThat(future.isDone()).isTrue(); - } - } - - @Test - public void testSlavesReadWrite() throws Exception { - - AsyncNodeSelection nodes = commands.nodes(redisClusterNode -> redisClusterNode.getFlags().contains( - RedisClusterNode.NodeFlag.SLAVE)); - - assertThat(nodes.size()).isEqualTo(2); - - commands.set(key, value).get(); - waitForReplication(key, port4); - - List t = new ArrayList<>(); - AsyncExecutions keys = nodes.commands().get(key); - keys.stream().forEach(lcs -> { - lcs.toCompletableFuture().exceptionally(throwable -> { - t.add(throwable); - return null; - }); - }); - - CompletableFuture.allOf(keys.futures()).exceptionally(throwable -> null).get(); - - assertThat(t.size()).isGreaterThan(0); - } - - @Test - public void testSlavesWithReadOnly() throws Exception { - - AsyncNodeSelection nodes = commands.slaves(redisClusterNode -> redisClusterNode - .is(RedisClusterNode.NodeFlag.SLAVE)); - - assertThat(nodes.size()).isEqualTo(2); - - commands.set(key, value).get(); - waitForReplication(key, port4); - - List t = new ArrayList<>(); - List strings = new ArrayList<>(); - AsyncExecutions keys = nodes.commands().get(key); - keys.stream().forEach(lcs -> { - lcs.toCompletableFuture().exceptionally(throwable -> { - t.add(throwable); - return null; - }); - lcs.thenAccept(strings::add); - }); - - CompletableFuture.allOf(keys.futures()).exceptionally(throwable -> null).get(); - Wait.untilEquals(1, () -> t.size()).waitOrTimeout(); - - assertThat(t).hasSize(1); - assertThat(strings).hasSize(1).contains(value); - } - - protected void waitForReplication(String key, int port) throws Exception { - waitForReplication(commands, key, port); - } - - protected static void waitForReplication(RedisAdvancedClusterAsyncCommands commands, String key, int port) - throws Exception { - - AsyncNodeSelection selection = commands - .slaves(redisClusterNode -> redisClusterNode.getUri().getPort() == port); - Wait.untilNotEquals(null, () -> { - for (CompletableFuture future : selection.commands().get(key).futures()) { - if (future.get() != null) { 
- return future.get(); - } - } - return null; - }).waitOrTimeout(); - } - -} diff --git a/src/test/java/com/lambdaworks/redis/cluster/NodeSelectionSyncTest.java b/src/test/java/com/lambdaworks/redis/cluster/NodeSelectionSyncTest.java deleted file mode 100644 index c14a497ea3..0000000000 --- a/src/test/java/com/lambdaworks/redis/cluster/NodeSelectionSyncTest.java +++ /dev/null @@ -1,222 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import static com.lambdaworks.redis.ScriptOutputType.STATUS; -import static org.assertj.core.api.Assertions.assertThat; -import static org.assertj.core.api.Fail.fail; - -import java.util.*; -import java.util.concurrent.TimeUnit; - -import com.lambdaworks.redis.internal.LettuceSets; -import org.junit.After; -import org.junit.Before; -import org.junit.Test; - -import com.lambdaworks.Wait; -import com.lambdaworks.redis.RedisCommandExecutionException; -import com.lambdaworks.redis.RedisCommandTimeoutException; -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection; -import com.lambdaworks.redis.cluster.api.sync.Executions; -import com.lambdaworks.redis.cluster.api.sync.NodeSelection; -import com.lambdaworks.redis.cluster.api.sync.RedisAdvancedClusterCommands; -import com.lambdaworks.redis.cluster.models.partitions.Partitions; -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; - -/** - * @author Mark Paluch - */ -public class NodeSelectionSyncTest extends AbstractClusterTest { - - private RedisAdvancedClusterCommands commands; - private StatefulRedisClusterConnection clusterConnection; - - @Before - public void before() throws Exception { - clusterClient.reloadPartitions(); - clusterConnection = clusterClient.connect(); - commands = clusterConnection.sync(); - } - - @After - public void after() throws Exception { - commands.close(); - } - - @Test - public void testMultiNodeOperations() throws Exception { - - List expectation = new ArrayList<>(); - for (char c = 'a'; c < 'z'; c++) { - String key = new String(new char[] { c, c, c }); - expectation.add(key); - commands.set(key, value); - } - - List result = new Vector<>(); - - Executions executions = commands.masters().commands().keys(result::add, "*"); - - assertThat(executions).hasSize(2); - - Collections.sort(expectation); - Collections.sort(result); - - assertThat(result).isEqualTo(expectation); - } - - @Test - public void testNodeSelectionCount() throws Exception { - assertThat(commands.all().size()).isEqualTo(4); - assertThat(commands.slaves().size()).isEqualTo(2); - assertThat(commands.masters().size()).isEqualTo(2); - - assertThat(commands.nodes(redisClusterNode -> redisClusterNode.is(RedisClusterNode.NodeFlag.MYSELF)).size()).isEqualTo( - 1); - } - - @Test - public void testNodeSelection() throws Exception { - - NodeSelection onlyMe = commands.nodes(redisClusterNode -> redisClusterNode.getFlags().contains( - RedisClusterNode.NodeFlag.MYSELF)); - Map> map = onlyMe.asMap(); - - assertThat(map).hasSize(1); - - RedisCommands node = onlyMe.commands(0); - assertThat(node).isNotNull(); - - RedisClusterNode redisClusterNode = onlyMe.node(0); - assertThat(redisClusterNode.getFlags()).contains(RedisClusterNode.NodeFlag.MYSELF); - - assertThat(onlyMe.asMap()).hasSize(1); - } - - @Test - public void testDynamicNodeSelection() throws Exception { - - Partitions partitions = commands.getStatefulConnection().getPartitions(); - partitions.forEach(redisClusterNode -> 
redisClusterNode.setFlags(Collections.singleton(RedisClusterNode.NodeFlag.MASTER))); - - NodeSelection selection = commands.nodes( - redisClusterNode -> redisClusterNode.getFlags().contains(RedisClusterNode.NodeFlag.MYSELF), true); - - assertThat(selection.asMap()).hasSize(0); - partitions.getPartition(0) - .setFlags(LettuceSets.unmodifiableSet(RedisClusterNode.NodeFlag.MYSELF, RedisClusterNode.NodeFlag.MASTER)); - assertThat(selection.asMap()).hasSize(1); - - partitions.getPartition(1) - .setFlags(LettuceSets.unmodifiableSet(RedisClusterNode.NodeFlag.MYSELF, RedisClusterNode.NodeFlag.MASTER)); - assertThat(selection.asMap()).hasSize(2); - - } - - @Test - public void testNodeSelectionPing() throws Exception { - - NodeSelection onlyMe = commands.nodes(redisClusterNode -> redisClusterNode.getFlags().contains( - RedisClusterNode.NodeFlag.MYSELF)); - Map> map = onlyMe.asMap(); - - assertThat(map).hasSize(1); - - Executions ping = onlyMe.commands().ping(); - - assertThat(ping.get(onlyMe.node(0))).isEqualTo("PONG"); - } - - @Test - public void testStaticNodeSelection() throws Exception { - - NodeSelection selection = commands.nodes( - redisClusterNode -> redisClusterNode.getFlags().contains(RedisClusterNode.NodeFlag.MYSELF), false); - - assertThat(selection.asMap()).hasSize(1); - - commands.getStatefulConnection().getPartitions().getPartition(2) - .setFlags(Collections.singleton(RedisClusterNode.NodeFlag.MYSELF)); - - assertThat(selection.asMap()).hasSize(1); - } - - @Test - public void testAsynchronicityOfMultiNodeExecution() throws Exception { - - RedisAdvancedClusterCommands connection2 = clusterClient.connect().sync(); - - connection2.setTimeout(1, TimeUnit.SECONDS); - NodeSelection masters = connection2.masters(); - masters.commands().configSet("lua-time-limit", "10"); - - Executions eval = null; - try { - eval = masters.commands().eval("while true do end", STATUS, new String[0]); - fail("missing exception"); - } catch (RedisCommandTimeoutException e) { - assertThat(e).hasMessageContaining("Command timed out for node(s)"); - } - - Executions kill = commands.masters().commands().scriptKill(); - } - - @Test - public void testSlavesReadWrite() throws Exception { - - NodeSelection nodes = commands.nodes(redisClusterNode -> redisClusterNode.getFlags().contains( - RedisClusterNode.NodeFlag.SLAVE)); - - assertThat(nodes.size()).isEqualTo(2); - - commands.set(key, value); - waitForReplication(key, port4); - - try { - - nodes.commands().get(key); - fail("Missing RedisCommandExecutionException: MOVED"); - } catch (RedisCommandExecutionException e) { - assertThat(e.getSuppressed().length).isGreaterThan(0); - } - } - - @Test - public void testSlavesWithReadOnly() throws Exception { - - int slot = SlotHash.getSlot(key); - Optional master = clusterConnection.getPartitions().getPartitions().stream() - .filter(redisClusterNode -> redisClusterNode.hasSlot(slot)).findFirst(); - - NodeSelection nodes = commands.slaves(redisClusterNode -> redisClusterNode - .is(RedisClusterNode.NodeFlag.SLAVE) && redisClusterNode.getSlaveOf().equals(master.get().getNodeId())); - - assertThat(nodes.size()).isEqualTo(1); - - commands.set(key, value); - waitForReplication(key, port4); - - Executions keys = nodes.commands().get(key); - assertThat(keys).hasSize(1).contains(value); - } - - protected void waitForReplication(String key, int port) throws Exception { - waitForReplication(commands, key, port); - } - - protected static void waitForReplication(RedisAdvancedClusterCommands commands, String key, int port) - throws Exception 
{ - - NodeSelection selection = commands - .slaves(redisClusterNode -> redisClusterNode.getUri().getPort() == port); - Wait.untilNotEquals(null, () -> { - - Executions strings = selection.commands().get(key); - if (strings.stream().filter(s -> s != null).findFirst().isPresent()) { - return "OK"; - } - - return null; - }).waitOrTimeout(); - } -} diff --git a/src/test/java/com/lambdaworks/redis/cluster/PipelinedRedisFutureTest.java b/src/test/java/com/lambdaworks/redis/cluster/PipelinedRedisFutureTest.java deleted file mode 100644 index d6a770d2f8..0000000000 --- a/src/test/java/com/lambdaworks/redis/cluster/PipelinedRedisFutureTest.java +++ /dev/null @@ -1,41 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import static org.assertj.core.api.Assertions.assertThat; - -import org.junit.Test; - -import java.util.HashMap; - -/** - * @author Mark Paluch - */ -public class PipelinedRedisFutureTest { - - private PipelinedRedisFuture sut; - - @Test - public void testComplete() throws Exception { - - String other = "other"; - - sut = new PipelinedRedisFuture<>(new HashMap<>(), o -> other); - - sut.complete(""); - assertThat(sut.get()).isEqualTo(other); - assertThat(sut.getError()).isNull(); - - } - - @Test - public void testCompleteExceptionally() throws Exception { - - String other = "other"; - - sut = new PipelinedRedisFuture<>(new HashMap<>(), o -> other); - - sut.completeExceptionally(new Exception()); - assertThat(sut.get()).isEqualTo(other); - assertThat(sut.getError()).isNull(); - - } -} diff --git a/src/test/java/com/lambdaworks/redis/cluster/PooledClusterConnectionProviderTest.java b/src/test/java/com/lambdaworks/redis/cluster/PooledClusterConnectionProviderTest.java deleted file mode 100644 index 1cd9a2c0eb..0000000000 --- a/src/test/java/com/lambdaworks/redis/cluster/PooledClusterConnectionProviderTest.java +++ /dev/null @@ -1,141 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import static org.assertj.core.api.Assertions.assertThat; -import static org.junit.Assert.fail; -import static org.mockito.Matchers.any; -import static org.mockito.Matchers.eq; -import static org.mockito.Mockito.*; - -import java.util.Collections; -import java.util.List; -import java.util.stream.Collectors; -import java.util.stream.IntStream; - -import org.junit.Before; -import org.junit.Test; -import org.junit.runner.RunWith; -import org.mockito.Mock; -import org.mockito.runners.MockitoJUnitRunner; - -import com.lambdaworks.redis.ReadFrom; -import com.lambdaworks.redis.RedisChannelWriter; -import com.lambdaworks.redis.RedisException; -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.cluster.ClusterConnectionProvider.Intent; -import com.lambdaworks.redis.cluster.models.partitions.Partitions; -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; -import com.lambdaworks.redis.codec.Utf8StringCodec; - -/** - * @author Mark Paluch - */ -@RunWith(MockitoJUnitRunner.class) -public class PooledClusterConnectionProviderTest { - - public static final Utf8StringCodec CODEC = new Utf8StringCodec(); - - private PooledClusterConnectionProvider sut; - - @Mock - private RedisClusterClient clientMock; - - @Mock - private RedisChannelWriter writerMock; - - @Mock - StatefulRedisConnection nodeConnectionMock; - - @Mock - RedisCommands commandsMock; - - private Partitions partitions = new Partitions(); - - @Before - public void before() throws Exception { - - sut = new 
PooledClusterConnectionProvider<>(clientMock, writerMock, CODEC); - - List slots1 = IntStream.range(0, 8192).boxed().collect(Collectors.toList()); - List slots2 = IntStream.range(8192, SlotHash.SLOT_COUNT).boxed().collect(Collectors.toList()); - - partitions.add(new RedisClusterNode(RedisURI.create("localhost", 1), "1", true, null, 0, 0, 0, slots1, - Collections.singleton(RedisClusterNode.NodeFlag.MASTER))); - partitions.add(new RedisClusterNode(RedisURI.create("localhost", 2), "2", true, "1", 0, 0, 0, slots2, - Collections.singleton(RedisClusterNode.NodeFlag.SLAVE))); - - sut.setPartitions(partitions); - - when(nodeConnectionMock.sync()).thenReturn(commandsMock); - } - - @Test - public void shouldObtainConnection() throws Exception { - - when(clientMock.connectToNode(eq(CODEC), eq("localhost:1"), any(), any())).thenReturn(nodeConnectionMock); - - StatefulRedisConnection connection = sut.getConnection(Intent.READ, 1); - - assertThat(connection).isSameAs(nodeConnectionMock); - verify(connection).setAutoFlushCommands(true); - verifyNoMoreInteractions(connection); - } - - @Test - public void shouldObtainConnectionReadFromSlave() throws Exception { - - when(clientMock.connectToNode(eq(CODEC), eq("localhost:2"), any(), any())).thenReturn(nodeConnectionMock); - - sut.setReadFrom(ReadFrom.SLAVE); - - StatefulRedisConnection connection = sut.getConnection(Intent.READ, 1); - - assertThat(connection).isSameAs(nodeConnectionMock); - verify(connection).sync(); - verify(commandsMock).readOnly(); - verify(connection).setAutoFlushCommands(true); - } - - @Test - public void shouldCloseConnectionOnConnectFailure() throws Exception { - - when(clientMock.connectToNode(eq(CODEC), eq("localhost:2"), any(), any())).thenReturn(nodeConnectionMock); - doThrow(new RuntimeException()).when(commandsMock).readOnly(); - - sut.setReadFrom(ReadFrom.SLAVE); - - try { - sut.getConnection(Intent.READ, 1); - fail("Missing RedisException"); - } catch (RedisException e) { - assertThat(e).hasRootCauseInstanceOf(RuntimeException.class); - } - - verify(nodeConnectionMock).close(); - verify(clientMock).connectToNode(eq(CODEC), eq("localhost:2"), any(), any()); - } - - @Test - public void shouldRetryConnectionAttemptAfterConnectionAttemptWasBroken() throws Exception { - - when(clientMock.connectToNode(eq(CODEC), eq("localhost:2"), any(), any())).thenReturn(nodeConnectionMock); - doThrow(new RuntimeException()).when(commandsMock).readOnly(); - - sut.setReadFrom(ReadFrom.SLAVE); - - try { - sut.getConnection(Intent.READ, 1); - fail("Missing RedisException"); - } catch (RedisException e) { - assertThat(e).hasRootCauseInstanceOf(RuntimeException.class); - } - verify(nodeConnectionMock).close(); - - doReturn("OK").when(commandsMock).readOnly(); - - sut.getConnection(Intent.READ, 1); - - verify(clientMock, times(2)).connectToNode(eq(CODEC), eq("localhost:2"), any(), any()); - } -} \ No newline at end of file diff --git a/src/test/java/com/lambdaworks/redis/cluster/ReadFromTest.java b/src/test/java/com/lambdaworks/redis/cluster/ReadFromTest.java deleted file mode 100644 index 8837029a88..0000000000 --- a/src/test/java/com/lambdaworks/redis/cluster/ReadFromTest.java +++ /dev/null @@ -1,106 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import static org.assertj.core.api.Assertions.assertThat; - -import java.util.Collections; -import java.util.Iterator; -import java.util.List; - -import org.junit.Before; -import org.junit.Test; - -import com.lambdaworks.redis.ReadFrom; -import com.lambdaworks.redis.cluster.models.partitions.Partitions; 
-import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; -import com.lambdaworks.redis.models.role.RedisNodeDescription; - -/** - * @author Mark Paluch - */ -public class ReadFromTest { - - private Partitions sut = new Partitions(); - private RedisClusterNode nearest = new RedisClusterNode(); - private RedisClusterNode master = new RedisClusterNode(); - private RedisClusterNode slave = new RedisClusterNode(); - - @Before - public void before() throws Exception { - master.setFlags(Collections.singleton(RedisClusterNode.NodeFlag.MASTER)); - nearest.setFlags(Collections.singleton(RedisClusterNode.NodeFlag.SLAVE)); - slave.setFlags(Collections.singleton(RedisClusterNode.NodeFlag.SLAVE)); - - sut.addPartition(nearest); - sut.addPartition(master); - sut.addPartition(slave); - } - - @Test - public void master() throws Exception { - List result = ReadFrom.MASTER.select(getNodes()); - assertThat(result).hasSize(1).containsOnly(master); - } - - @Test - public void masterPreferred() throws Exception { - List result = ReadFrom.MASTER_PREFERRED.select(getNodes()); - assertThat(result).hasSize(3).containsExactly(master, nearest, slave); - } - - @Test - public void slave() throws Exception { - List result = ReadFrom.SLAVE.select(getNodes()); - assertThat(result).hasSize(2).contains(nearest, slave); - } - - @Test - public void nearest() throws Exception { - List result = ReadFrom.NEAREST.select(getNodes()); - assertThat(result).hasSize(3).containsExactly(nearest, master, slave); - } - - @Test(expected = IllegalArgumentException.class) - public void valueOfNull() throws Exception { - ReadFrom.valueOf(null); - } - - @Test(expected = IllegalArgumentException.class) - public void valueOfUnknown() throws Exception { - ReadFrom.valueOf("unknown"); - } - - @Test - public void valueOfNearest() throws Exception { - assertThat(ReadFrom.valueOf("nearest")).isEqualTo(ReadFrom.NEAREST); - } - - @Test - public void valueOfMaster() throws Exception { - assertThat(ReadFrom.valueOf("master")).isEqualTo(ReadFrom.MASTER); - } - - @Test - public void valueOfMasterPreferred() throws Exception { - assertThat(ReadFrom.valueOf("masterPreferred")).isEqualTo(ReadFrom.MASTER_PREFERRED); - } - - @Test - public void valueOfSlave() throws Exception { - assertThat(ReadFrom.valueOf("slave")).isEqualTo(ReadFrom.SLAVE); - } - - private ReadFrom.Nodes getNodes() { - return new ReadFrom.Nodes() { - @Override - public List getNodes() { - return (List) sut.getPartitions(); - } - - @Override - public Iterator iterator() { - return getNodes().iterator(); - } - }; - - } -} diff --git a/src/test/java/com/lambdaworks/redis/cluster/ReadOnlyCommandsTest.java b/src/test/java/com/lambdaworks/redis/cluster/ReadOnlyCommandsTest.java deleted file mode 100644 index 447074e19d..0000000000 --- a/src/test/java/com/lambdaworks/redis/cluster/ReadOnlyCommandsTest.java +++ /dev/null @@ -1,25 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import static org.assertj.core.api.Assertions.assertThat; - -import com.lambdaworks.redis.protocol.CommandType; -import com.lambdaworks.redis.protocol.ProtocolKeyword; -import org.junit.Test; - -/** - * @author Mark Paluch - */ -public class ReadOnlyCommandsTest { - - @Test - public void testCount() throws Exception { - assertThat(ReadOnlyCommands.READ_ONLY_COMMANDS).hasSize(69); - } - - @Test - public void testResolvableCommandNames() throws Exception { - for (ProtocolKeyword readOnlyCommand : ReadOnlyCommands.READ_ONLY_COMMANDS) { - 
assertThat(readOnlyCommand.name()).isEqualTo(CommandType.valueOf(readOnlyCommand.name()).name()); - } - } -} diff --git a/src/test/java/com/lambdaworks/redis/cluster/RedisClusterClientFactoryTest.java b/src/test/java/com/lambdaworks/redis/cluster/RedisClusterClientFactoryTest.java deleted file mode 100644 index 7dafc83502..0000000000 --- a/src/test/java/com/lambdaworks/redis/cluster/RedisClusterClientFactoryTest.java +++ /dev/null @@ -1,130 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import java.util.Arrays; -import java.util.List; -import java.util.concurrent.TimeUnit; - -import org.junit.AfterClass; -import org.junit.BeforeClass; -import org.junit.Test; - -import com.lambdaworks.TestClientResources; -import com.lambdaworks.redis.FastShutdown; -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.TestSettings; -import com.lambdaworks.redis.internal.LettuceLists; -import com.lambdaworks.redis.resource.ClientResources; - -/** - * @author Mark Paluch - */ -public class RedisClusterClientFactoryTest { - - private final static String URI = "redis://" + TestSettings.host() + ":" + TestSettings.port(); - private final static RedisURI REDIS_URI = RedisURI.create(URI); - private static final List REDIS_URIS = LettuceLists.newList(REDIS_URI); - private static ClientResources DEFAULT_RESOURCES; - - @BeforeClass - public static void beforeClass() throws Exception { - DEFAULT_RESOURCES = TestClientResources.create(); - } - - @AfterClass - public static void afterClass() throws Exception { - DEFAULT_RESOURCES.shutdown(100, 100, TimeUnit.MILLISECONDS).get(); - } - - @Test - public void withStringUri() throws Exception { - FastShutdown.shutdown(RedisClusterClient.create(URI)); - } - - @Test(expected = IllegalArgumentException.class) - public void withStringUriNull() throws Exception { - RedisClusterClient.create((String) null); - } - - @Test - public void withUri() throws Exception { - FastShutdown.shutdown(RedisClusterClient.create(REDIS_URI)); - } - - @Test(expected = IllegalArgumentException.class) - public void withUriUri() throws Exception { - RedisClusterClient.create((RedisURI) null); - } - - @Test - public void withUriIterable() throws Exception { - FastShutdown.shutdown(RedisClusterClient.create(LettuceLists.newList(REDIS_URI))); - } - - @Test(expected = IllegalArgumentException.class) - public void withUriIterableNull() throws Exception { - RedisClusterClient.create((Iterable) null); - } - - @Test - public void clientResourcesWithStringUri() throws Exception { - FastShutdown.shutdown(RedisClusterClient.create(DEFAULT_RESOURCES, URI)); - } - - @Test(expected = IllegalArgumentException.class) - public void clientResourcesWithStringUriNull() throws Exception { - RedisClusterClient.create(DEFAULT_RESOURCES, (String) null); - } - - @Test(expected = IllegalArgumentException.class) - public void clientResourcesNullWithStringUri() throws Exception { - RedisClusterClient.create(null, URI); - } - - @Test - public void clientResourcesWithUri() throws Exception { - FastShutdown.shutdown(RedisClusterClient.create(DEFAULT_RESOURCES, REDIS_URI)); - } - - @Test(expected = IllegalArgumentException.class) - public void clientResourcesWithUriNull() throws Exception { - RedisClusterClient.create(DEFAULT_RESOURCES, (RedisURI) null); - } - - @Test(expected = IllegalArgumentException.class) - public void clientResourcesWithUriUri() throws Exception { - RedisClusterClient.create(null, REDIS_URI); - } - - @Test - public void clientResourcesWithUriIterable() throws Exception { - 
FastShutdown.shutdown(RedisClusterClient.create(DEFAULT_RESOURCES, LettuceLists.newList(REDIS_URI))); - } - - @Test(expected = IllegalArgumentException.class) - public void clientResourcesWithUriIterableNull() throws Exception { - RedisClusterClient.create(DEFAULT_RESOURCES, (Iterable) null); - } - - @Test(expected = IllegalArgumentException.class) - public void clientResourcesNullWithUriIterable() throws Exception { - RedisClusterClient.create(null, REDIS_URIS); - } - - @Test(expected = IllegalArgumentException.class) - public void clientWithDifferentSslSettings() throws Exception { - RedisClusterClient.create(Arrays.asList(RedisURI.create("redis://host1"), RedisURI.create("redis+ssl://host1"))); - } - - @Test(expected = IllegalArgumentException.class) - public void clientWithDifferentTlsSettings() throws Exception { - RedisClusterClient.create(Arrays.asList(RedisURI.create("rediss://host1"), RedisURI.create("redis+tls://host1"))); - } - - @Test(expected = IllegalArgumentException.class) - public void clientWithDifferentVerifyPeerSettings() throws Exception { - RedisURI redisURI = RedisURI.create("rediss://host1"); - redisURI.setVerifyPeer(false); - - RedisClusterClient.create(Arrays.asList(redisURI, RedisURI.create("rediss://host1"))); - } -} diff --git a/src/test/java/com/lambdaworks/redis/cluster/RedisClusterClientTest.java b/src/test/java/com/lambdaworks/redis/cluster/RedisClusterClientTest.java deleted file mode 100644 index 8d1a9fc6b9..0000000000 --- a/src/test/java/com/lambdaworks/redis/cluster/RedisClusterClientTest.java +++ /dev/null @@ -1,548 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import static com.lambdaworks.redis.RedisClientConnectionTest.CODEC; -import static com.lambdaworks.redis.cluster.ClusterTestUtil.getOwnPartition; -import static org.assertj.core.api.Assertions.assertThat; -import static org.assertj.core.api.Assertions.fail; - -import java.util.ArrayList; -import java.util.Collections; -import java.util.List; -import java.util.concurrent.TimeUnit; - -import org.assertj.core.api.AssertionsForClassTypes; -import org.junit.*; -import org.junit.runners.MethodSorters; -import org.springframework.test.util.ReflectionTestUtils; - -import com.lambdaworks.redis.*; -import com.lambdaworks.redis.api.StatefulConnection; -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection; -import com.lambdaworks.redis.cluster.api.async.RedisAdvancedClusterAsyncCommands; -import com.lambdaworks.redis.cluster.api.sync.RedisAdvancedClusterCommands; -import com.lambdaworks.redis.cluster.api.sync.RedisClusterCommands; -import com.lambdaworks.redis.cluster.models.partitions.Partitions; -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; -import com.lambdaworks.redis.protocol.AsyncCommand; -import com.lambdaworks.redis.pubsub.StatefulRedisPubSubConnection; - -@FixMethodOrder(MethodSorters.NAME_ASCENDING) -@SuppressWarnings("unchecked") -public class RedisClusterClientTest extends AbstractClusterTest { - - protected static RedisClient client; - - protected StatefulRedisConnection redis1; - protected StatefulRedisConnection redis2; - protected StatefulRedisConnection redis3; - protected StatefulRedisConnection redis4; - - protected RedisClusterCommands redissync1; - protected RedisClusterCommands redissync2; - protected RedisClusterCommands redissync3; - protected RedisClusterCommands redissync4; - - protected RedisAdvancedClusterCommands sync; - - @BeforeClass - public static void 
setupClient() throws Exception { - setupClusterClient(); - client = RedisClient.create(RedisURI.Builder.redis(host, port1).build()); - clusterClient = RedisClusterClient.create(Collections.singletonList(RedisURI.Builder.redis(host, port1).build())); - } - - @AfterClass - public static void shutdownClient() { - shutdownClusterClient(); - FastShutdown.shutdown(client); - FastShutdown.shutdown(clusterClient); - } - - @Before - public void before() throws Exception { - - clusterClient.setOptions(ClusterClientOptions.create()); - clusterRule.getClusterClient().reloadPartitions(); - - redis1 = client.connect(RedisURI.Builder.redis(host, port1).build()); - redis2 = client.connect(RedisURI.Builder.redis(host, port2).build()); - redis3 = client.connect(RedisURI.Builder.redis(host, port3).build()); - redis4 = client.connect(RedisURI.Builder.redis(host, port4).build()); - - redissync1 = redis1.sync(); - redissync2 = redis2.sync(); - redissync3 = redis3.sync(); - redissync4 = redis4.sync(); - - clusterClient.reloadPartitions(); - sync = clusterClient.connectCluster(); - } - - @After - public void after() throws Exception { - sync.close(); - redis1.close(); - - redissync1.close(); - redissync2.close(); - redissync3.close(); - redissync4.close(); - } - - @Test - public void statefulConnectionFromSync() throws Exception { - RedisAdvancedClusterConnection sync = clusterClient.connectCluster(); - assertThat(sync.getStatefulConnection().sync()).isSameAs(sync); - sync.close(); - } - - @Test - public void statefulConnectionFromAsync() throws Exception { - RedisAsyncConnection async = client.connectAsync(); - assertThat(async.getStatefulConnection().async()).isSameAs(async); - async.close(); - } - - @Test - public void shouldApplyTimeoutOnRegularConnection() throws Exception { - - clusterClient.setDefaultTimeout(1, TimeUnit.MINUTES); - - StatefulRedisClusterConnection connection = clusterClient.connect(); - - assertTimeout(connection, 1, TimeUnit.MINUTES); - assertTimeout(connection.getConnection(host, port1), 1, TimeUnit.MINUTES); - - connection.close(); - } - - @Test - public void shouldApplyTimeoutOnRegularConnectionUsingCodec() throws Exception { - - clusterClient.setDefaultTimeout(1, TimeUnit.MINUTES); - - StatefulRedisClusterConnection connection = clusterClient.connect(CODEC); - - assertTimeout(connection, 1, TimeUnit.MINUTES); - assertTimeout(connection.getConnection(host, port1), 1, TimeUnit.MINUTES); - - connection.close(); - } - - @Test - public void shouldApplyTimeoutOnPubSubConnection() throws Exception { - - clusterClient.setDefaultTimeout(1, TimeUnit.MINUTES); - - StatefulRedisPubSubConnection connection = clusterClient.connectPubSub(); - - assertTimeout(connection, 1, TimeUnit.MINUTES); - connection.close(); - } - - @Test - public void shouldApplyTimeoutOnPubSubConnectionUsingCodec() throws Exception { - - clusterClient.setDefaultTimeout(1, TimeUnit.MINUTES); - - StatefulRedisPubSubConnection connection = clusterClient.connectPubSub(CODEC); - - assertTimeout(connection, 1, TimeUnit.MINUTES); - connection.close(); - } - - @Test - public void reloadPartitions() throws Exception { - assertThat(clusterClient.getPartitions()).hasSize(4); - - assertThat(clusterClient.getPartitions().getPartition(0).getUri()); - assertThat(clusterClient.getPartitions().getPartition(1).getUri()); - assertThat(clusterClient.getPartitions().getPartition(2).getUri()); - assertThat(clusterClient.getPartitions().getPartition(3).getUri()); - - clusterClient.reloadPartitions(); - - 
assertThat(clusterClient.getPartitions().getPartition(0).getUri()); - assertThat(clusterClient.getPartitions().getPartition(1).getUri()); - assertThat(clusterClient.getPartitions().getPartition(2).getUri()); - assertThat(clusterClient.getPartitions().getPartition(3).getUri()); - } - - @Test - public void testClusteredOperations() throws Exception { - - SlotHash.getSlot(KEY_B.getBytes()); // 3300 -> Node 1 and Slave (Node 3) - SlotHash.getSlot(KEY_A.getBytes()); // 15495 -> Node 2 - - RedisFuture result = redis1.async().set(KEY_B, value); - assertThat(result.getError()).isEqualTo(null); - assertThat(redissync1.set(KEY_B, "value")).isEqualTo("OK"); - - RedisFuture resultMoved = redis1.async().set(KEY_A, value); - try { - resultMoved.get(); - } catch (Exception e) { - assertThat(e.getMessage()).contains("MOVED 15495"); - } - - clusterClient.reloadPartitions(); - RedisClusterAsyncConnection connection = clusterClient.connectClusterAsync(); - - RedisFuture setA = connection.set(KEY_A, value); - setA.get(); - - assertThat(setA.getError()).isNull(); - assertThat(setA.get()).isEqualTo("OK"); - - RedisFuture setB = connection.set(KEY_B, "myValue2"); - assertThat(setB.get()).isEqualTo("OK"); - - RedisFuture setD = connection.set("d", "myValue2"); - assertThat(setD.get()).isEqualTo("OK"); - - connection.close(); - - } - - @Test - public void testReset() throws Exception { - - clusterClient.reloadPartitions(); - RedisAdvancedClusterAsyncCommandsImpl connection = (RedisAdvancedClusterAsyncCommandsImpl) clusterClient - .connectClusterAsync(); - - RedisFuture setA = connection.set(KEY_A, value); - setA.get(); - - connection.reset(); - - setA = connection.set(KEY_A, "myValue1"); - - assertThat(setA.getError()).isNull(); - assertThat(setA.get()).isEqualTo("OK"); - - connection.close(); - - } - - @Test - @SuppressWarnings({ "rawtypes" }) - public void testClusterCommandRedirection() throws Exception { - - RedisAdvancedClusterAsyncCommands connection = clusterClient.connect().async(); - - // Command on node within the default connection - assertThat(connection.set(KEY_B, value).get()).isEqualTo("OK"); - - // gets redirection to node 3 - assertThat(connection.set(KEY_A, value).get()).isEqualTo("OK"); - connection.close(); - } - - @Test - @SuppressWarnings({ "rawtypes" }) - public void testClusterRedirection() throws Exception { - - RedisAdvancedClusterAsyncCommands connection = clusterClient.connect().async(); - Partitions partitions = clusterClient.getPartitions(); - - for (RedisClusterNode partition : partitions) { - partition.setSlots(new ArrayList<>()); - if (partition.getFlags().contains(RedisClusterNode.NodeFlag.MYSELF)) { - - int[] slots = createSlots(0, 16384); - for (int i = 0; i < slots.length; i++) { - partition.getSlots().add(i); - } - } - } - partitions.updateCache(); - - // appropriate cluster node - RedisFuture setB = connection.set(KEY_B, value); - - assertThat(setB).isInstanceOf(AsyncCommand.class); - - setB.get(10, TimeUnit.SECONDS); - assertThat(setB.getError()).isNull(); - assertThat(setB.get()).isEqualTo("OK"); - - // gets redirection to node 3 - RedisFuture setA = connection.set(KEY_A, value); - - assertThat(setA instanceof AsyncCommand).isTrue(); - - setA.get(10, TimeUnit.SECONDS); - assertThat(setA.getError()).isNull(); - assertThat(setA.get()).isEqualTo("OK"); - - connection.close(); - } - - @Test - @SuppressWarnings({ "rawtypes" }) - public void testClusterRedirectionLimit() throws Exception { - - clusterClient.setOptions(ClusterClientOptions.builder().maxRedirects(0).build()); - 
RedisAdvancedClusterAsyncCommands connection = clusterClient.connect().async(); - Partitions partitions = clusterClient.getPartitions(); - - for (RedisClusterNode partition : partitions) { - partition.setSlots(new ArrayList<>()); - if (partition.getFlags().contains(RedisClusterNode.NodeFlag.MYSELF)) { - int[] slots = createSlots(0, 16384); - for (int i = 0; i < slots.length; i++) { - partition.getSlots().add(i); - } - } - } - partitions.updateCache(); - - // gets redirection to node 3 - RedisFuture setA = connection.set(KEY_A, value); - - assertThat(setA instanceof AsyncCommand).isTrue(); - - setA.await(10, TimeUnit.SECONDS); - assertThat(setA.getError()).isEqualTo("MOVED 15495 127.0.0.1:7380"); - - connection.close(); - } - - @Test(expected = RedisException.class) - public void closeConnection() throws Exception { - - try (RedisAdvancedClusterCommands connection = clusterClient.connect().sync()) { - - List time = connection.time(); - assertThat(time).hasSize(2); - - connection.close(); - - connection.time(); - } - } - - @Test - public void clusterAuth() throws Exception { - - RedisClusterClient clusterClient = new RedisClusterClient( - RedisURI.Builder.redis(TestSettings.host(), port7).withPassword("foobared").build()); - - try (RedisAdvancedClusterConnection connection = clusterClient.connectCluster()) { - - List time = connection.time(); - assertThat(time).hasSize(2); - - connection.getStatefulConnection().async().quit().get(); - - time = connection.time(); - assertThat(time).hasSize(2); - - char[] password = (char[]) ReflectionTestUtils.getField(connection.getStatefulConnection(), "password"); - assertThat(new String(password)).isEqualTo("foobared"); - } finally { - FastShutdown.shutdown(clusterClient); - - } - } - - @Test(expected = RedisException.class) - public void clusterNeedsAuthButNotSupplied() throws Exception { - - RedisClusterClient clusterClient = new RedisClusterClient(RedisURI.Builder.redis(TestSettings.host(), port7).build()); - - try (RedisClusterCommands connection = clusterClient.connectCluster()) { - - List time = connection.time(); - assertThat(time).hasSize(2); - } finally { - FastShutdown.shutdown(clusterClient); - } - } - - @Test - public void noClusterNodeAvailable() throws Exception { - - RedisClusterClient clusterClient = new RedisClusterClient(RedisURI.Builder.redis(host, 40400).build()); - try { - clusterClient.connectCluster(); - fail("Missing RedisException"); - } catch (RedisException e) { - assertThat(e).isInstanceOf(RedisException.class); - } - } - - @Test - public void getClusterNodeConnection() throws Exception { - - RedisClusterNode redis1Node = getOwnPartition(redissync2); - - RedisClusterCommands connection = sync.getConnection(TestSettings.hostAddr(), port2); - - String result = connection.clusterMyId(); - assertThat(result).isEqualTo(redis1Node.getNodeId()); - - } - - @Test - public void operateOnNodeConnection() throws Exception { - - sync.set(KEY_A, value); - sync.set(KEY_B, "d"); - - StatefulRedisConnection statefulRedisConnection = sync.getStatefulConnection() - .getConnection(TestSettings.hostAddr(), port2); - - RedisClusterCommands connection = statefulRedisConnection.sync(); - - assertThat(connection.get(KEY_A)).isEqualTo(value); - try { - connection.get(KEY_B); - fail("missing RedisCommandExecutionException: MOVED"); - } catch (RedisException e) { - assertThat(e).hasMessageContaining("MOVED"); - } - } - - @Test - public void testStatefulConnection() throws Exception { - RedisAdvancedClusterAsyncCommands async = 
sync.getStatefulConnection().async(); - - assertThat(async.ping().get()).isEqualTo("PONG"); - } - - @Test(expected = RedisException.class) - public void getButNoPartitionForSlothash() throws Exception { - - for (RedisClusterNode redisClusterNode : clusterClient.getPartitions()) { - redisClusterNode.setSlots(new ArrayList<>()); - } - RedisChannelHandler rch = (RedisChannelHandler) sync.getStatefulConnection(); - ClusterDistributionChannelWriter writer = (ClusterDistributionChannelWriter) rch - .getChannelWriter(); - writer.setPartitions(clusterClient.getPartitions()); - clusterClient.getPartitions().reload(clusterClient.getPartitions().getPartitions()); - - sync.get(key); - } - - @Test - public void readOnlyOnCluster() throws Exception { - - sync.readOnly(); - // commands are dispatched to a different connection, therefore it works for us. - sync.set(KEY_B, value); - - sync.getStatefulConnection().async().quit().get(); - - assertThat(ReflectionTestUtils.getField(sync.getStatefulConnection(), "readOnly")).isEqualTo(Boolean.TRUE); - - sync.readWrite(); - - assertThat(ReflectionTestUtils.getField(sync.getStatefulConnection(), "readOnly")).isEqualTo(Boolean.FALSE); - RedisClusterClient clusterClient = new RedisClusterClient(RedisURI.Builder.redis(host, 40400).build()); - try { - clusterClient.connectCluster(); - fail("Missing RedisException"); - } catch (RedisException e) { - assertThat(e).isInstanceOf(RedisException.class); - } - } - - @Test - public void getKeysInSlot() throws Exception { - - sync.set(KEY_A, value); - sync.set(KEY_B, value); - - List keysA = sync.clusterGetKeysInSlot(SLOT_A, 10); - assertThat(keysA).isEqualTo(Collections.singletonList(KEY_A)); - - List keysB = sync.clusterGetKeysInSlot(SLOT_B, 10); - assertThat(keysB).isEqualTo(Collections.singletonList(KEY_B)); - - } - - @Test - public void countKeysInSlot() throws Exception { - - sync.set(KEY_A, value); - sync.set(KEY_B, value); - - Long result = sync.clusterCountKeysInSlot(SLOT_A); - assertThat(result).isEqualTo(1L); - - result = sync.clusterCountKeysInSlot(SLOT_B); - assertThat(result).isEqualTo(1L); - - int slotZZZ = SlotHash.getSlot("ZZZ".getBytes()); - result = sync.clusterCountKeysInSlot(slotZZZ); - assertThat(result).isEqualTo(0L); - - } - - @Test - public void testClusterCountFailureReports() throws Exception { - RedisClusterNode ownPartition = getOwnPartition(redissync1); - assertThat(redissync1.clusterCountFailureReports(ownPartition.getNodeId())).isGreaterThanOrEqualTo(0); - } - - @Test - public void testClusterKeyslot() throws Exception { - assertThat(redissync1.clusterKeyslot(KEY_A)).isEqualTo(SLOT_A); - assertThat(SlotHash.getSlot(KEY_A)).isEqualTo(SLOT_A); - } - - @Test - public void testClusterSaveconfig() throws Exception { - assertThat(redissync1.clusterSaveconfig()).isEqualTo("OK"); - } - - @Test - public void testClusterSetConfigEpoch() throws Exception { - try { - redissync1.clusterSetConfigEpoch(1L); - } catch (RedisException e) { - assertThat(e).hasMessageContaining("ERR The user can assign a config epoch only"); - } - } - - @Test - public void testReadFrom() throws Exception { - StatefulRedisClusterConnection statefulConnection = sync.getStatefulConnection(); - - assertThat(statefulConnection.getReadFrom()).isEqualTo(ReadFrom.MASTER); - - statefulConnection.setReadFrom(ReadFrom.NEAREST); - assertThat(statefulConnection.getReadFrom()).isEqualTo(ReadFrom.NEAREST); - } - - @Test(expected = IllegalArgumentException.class) - public void testReadFromNull() throws Exception { - 
sync.getStatefulConnection().setReadFrom(null); - } - - @Test - public void testPfmerge() throws Exception { - RedisAdvancedClusterConnection connection = clusterClient.connectCluster(); - - assertThat(SlotHash.getSlot("key2660")).isEqualTo(SlotHash.getSlot("key7112")).isEqualTo(SlotHash.getSlot("key8885")); - - connection.pfadd("key2660", "rand", "mat"); - connection.pfadd("key7112", "mat", "perrin"); - - connection.pfmerge("key8885", "key2660", "key7112"); - - assertThat(connection.pfcount("key8885")).isEqualTo(3); - - connection.close(); - } - - private void assertTimeout(StatefulConnection connection, long expectedTimeout, TimeUnit expectedTimeUnit) { - - AssertionsForClassTypes.assertThat(ReflectionTestUtils.getField(connection, "timeout")).isEqualTo(expectedTimeout); - AssertionsForClassTypes.assertThat(ReflectionTestUtils.getField(connection, "unit")).isEqualTo(expectedTimeUnit); - } -} diff --git a/src/test/java/com/lambdaworks/redis/cluster/RedisClusterPasswordSecuredSslTest.java b/src/test/java/com/lambdaworks/redis/cluster/RedisClusterPasswordSecuredSslTest.java deleted file mode 100644 index 111b1b0304..0000000000 --- a/src/test/java/com/lambdaworks/redis/cluster/RedisClusterPasswordSecuredSslTest.java +++ /dev/null @@ -1,161 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import static com.lambdaworks.redis.TestSettings.*; -import static org.assertj.core.api.Assertions.*; -import static org.junit.Assume.*; - -import java.io.File; -import java.util.List; -import java.util.stream.Collectors; - -import org.junit.AfterClass; -import org.junit.Before; -import org.junit.Test; - -import com.lambdaworks.Sockets; -import com.lambdaworks.redis.AbstractTest; -import com.lambdaworks.redis.FastShutdown; -import com.lambdaworks.redis.RedisCommandExecutionException; -import com.lambdaworks.redis.RedisException; -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection; -import com.lambdaworks.redis.cluster.api.sync.Executions; -import com.lambdaworks.redis.cluster.api.sync.RedisAdvancedClusterCommands; - -/** - * @author Mark Paluch - */ -public class RedisClusterPasswordSecuredSslTest extends AbstractTest { - - public static final String KEYSTORE = "work/keystore.jks"; - - public static final int CLUSTER_PORT_SSL_1 = 7443; - public static final int CLUSTER_PORT_SSL_2 = 7444; - public static final int CLUSTER_PORT_SSL_3 = 7445; - - public static final String SLOT_1_KEY = "8HMdi"; - public static final String SLOT_16352_KEY = "UyAa4KqoWgPGKa"; - - public static RedisURI redisURI = RedisURI.builder().redis(host(), CLUSTER_PORT_SSL_1).withPassword("foobared") - .withSsl(true).withVerifyPeer(false).build(); - public static RedisClusterClient redisClient = RedisClusterClient.create(redisURI); - - @Before - public void before() throws Exception { - assumeTrue("Assume that stunnel runs on port 7443", Sockets.isOpen(host(), CLUSTER_PORT_SSL_1)); - assumeTrue("Assume that stunnel runs on port 7444", Sockets.isOpen(host(), CLUSTER_PORT_SSL_2)); - assumeTrue("Assume that stunnel runs on port 7445", Sockets.isOpen(host(), CLUSTER_PORT_SSL_3)); - assertThat(new File(KEYSTORE)).exists(); - System.setProperty("javax.net.ssl.trustStore", KEYSTORE); - } - - @AfterClass - public static void afterClass() { - FastShutdown.shutdown(redisClient); - } - - @Test - public void defaultClusterConnectionShouldWork() throws Exception { - - StatefulRedisClusterConnection connection = redisClient.connect(); 
- assertThat(connection.sync().ping()).isEqualTo("PONG"); - - connection.close(); - } - - @Test - public void partitionViewShouldContainClusterPorts() throws Exception { - - StatefulRedisClusterConnection connection = redisClient.connect(); - List ports = connection.getPartitions().stream().map(redisClusterNode -> redisClusterNode.getUri().getPort()) - .collect(Collectors.toList()); - connection.close(); - - assertThat(ports).contains(CLUSTER_PORT_SSL_1, CLUSTER_PORT_SSL_2, CLUSTER_PORT_SSL_3); - } - - @Test - public void routedOperationsAreWorking() throws Exception { - - StatefulRedisClusterConnection connection = redisClient.connect(); - RedisAdvancedClusterCommands sync = connection.sync(); - - sync.set(SLOT_1_KEY, "value1"); - sync.set(SLOT_16352_KEY, "value2"); - - assertThat(sync.get(SLOT_1_KEY)).isEqualTo("value1"); - assertThat(sync.get(SLOT_16352_KEY)).isEqualTo("value2"); - - connection.close(); - } - - @Test - public void nodeConnectionsShouldWork() throws Exception { - - StatefulRedisClusterConnection connection = redisClient.connect(); - - // slave - StatefulRedisConnection node2Connection = connection.getConnection(hostAddr(), 7444); - - try { - node2Connection.sync().get(SLOT_1_KEY); - } catch (RedisCommandExecutionException e) { - assertThat(e).hasMessage("MOVED 1 127.0.0.1:7443"); - } - - connection.close(); - } - - @Test - public void nodeSelectionApiShouldWork() throws Exception { - - StatefulRedisClusterConnection connection = redisClient.connect(); - - Executions ping = connection.sync().all().commands().ping(); - assertThat(ping).hasSize(3).contains("PONG"); - - connection.close(); - } - - @Test - public void connectionWithoutPasswordShouldFail() throws Exception { - - RedisURI redisURI = RedisURI.builder().redis(host(), CLUSTER_PORT_SSL_1).withSsl(true).withVerifyPeer(false) - .build(); - RedisClusterClient redisClusterClient = RedisClusterClient.create(redisURI); - - try { - redisClusterClient.reloadPartitions(); - } catch (RedisException e) { - assertThat(e).hasMessageContaining("Cannot retrieve initial cluster"); - } finally { - FastShutdown.shutdown(redisClusterClient); - } - } - - @Test - public void connectionWithoutPasswordShouldFail2() throws Exception { - - RedisURI redisURI = RedisURI.builder().redis(host(), CLUSTER_PORT_SSL_1).withSsl(true).withVerifyPeer(false) - .build(); - RedisClusterClient redisClusterClient = RedisClusterClient.create(redisURI); - - try { - redisClusterClient.connect(); - } catch (RedisException e) { - assertThat(e).hasMessageContaining("Cannot retrieve initial cluster"); - } finally { - FastShutdown.shutdown(redisClusterClient); - } - } - - @Test - public void clusterNodeRefreshWorksForMultipleIterations() throws Exception { - - redisClient.reloadPartitions(); - redisClient.reloadPartitions(); - redisClient.reloadPartitions(); - redisClient.reloadPartitions(); - } -} diff --git a/src/test/java/com/lambdaworks/redis/cluster/RedisClusterReadFromTest.java b/src/test/java/com/lambdaworks/redis/cluster/RedisClusterReadFromTest.java deleted file mode 100644 index e5898beef5..0000000000 --- a/src/test/java/com/lambdaworks/redis/cluster/RedisClusterReadFromTest.java +++ /dev/null @@ -1,90 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import static org.assertj.core.api.Assertions.assertThat; - -import com.lambdaworks.redis.ReadFrom; -import org.junit.After; -import org.junit.AfterClass; -import org.junit.Before; -import org.junit.BeforeClass; -import org.junit.FixMethodOrder; -import org.junit.Test; -import 
org.junit.runners.MethodSorters; - -import com.lambdaworks.redis.FastShutdown; -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection; -import com.lambdaworks.redis.cluster.api.sync.RedisAdvancedClusterCommands; - -import java.util.Collections; - -@FixMethodOrder(MethodSorters.NAME_ASCENDING) -@SuppressWarnings("unchecked") -public class RedisClusterReadFromTest extends AbstractClusterTest { - - protected RedisAdvancedClusterCommands sync; - protected StatefulRedisClusterConnection connection; - - @BeforeClass - public static void setupClient() throws Exception { - setupClusterClient(); - clusterClient = new RedisClusterClient(Collections.singletonList(RedisURI.Builder.redis(host, port1).build())); - } - - @AfterClass - public static void shutdownClient() { - shutdownClusterClient(); - FastShutdown.shutdown(clusterClient); - } - - @Before - public void before() throws Exception { - clusterRule.getClusterClient().reloadPartitions(); - connection = clusterClient.connect(); - sync = connection.sync(); - } - - @After - public void after() throws Exception { - sync.close(); - } - - @Test - public void defaultTest() throws Exception { - assertThat(connection.getReadFrom()).isEqualTo(ReadFrom.MASTER); - } - - @Test - public void readWriteMaster() throws Exception { - connection.setReadFrom(ReadFrom.MASTER); - sync.set(key, value); - assertThat(sync.get(key)).isEqualTo(value); - } - - @Test - public void readWriteMasterPreferred() throws Exception { - connection.setReadFrom(ReadFrom.MASTER_PREFERRED); - sync.set(key, value); - assertThat(sync.get(key)).isEqualTo(value); - } - - @Test - public void readWriteSlave() throws Exception { - connection.setReadFrom(ReadFrom.SLAVE); - - sync.set(key, "value1"); - - connection.getConnection(host, port2).sync().waitForReplication(1, 1000); - assertThat(sync.get(key)).isEqualTo("value1"); - } - - @Test - public void readWriteNearest() throws Exception { - connection.setReadFrom(ReadFrom.NEAREST); - - sync.set(key, "value1"); - - connection.getConnection(host, port2).sync().waitForReplication(1, 1000); - assertThat(sync.get(key)).isEqualTo("value1"); - } -} diff --git a/src/test/java/com/lambdaworks/redis/cluster/RedisClusterSetupTest.java b/src/test/java/com/lambdaworks/redis/cluster/RedisClusterSetupTest.java deleted file mode 100644 index c69617085b..0000000000 --- a/src/test/java/com/lambdaworks/redis/cluster/RedisClusterSetupTest.java +++ /dev/null @@ -1,574 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import static com.google.code.tempusfugit.temporal.Duration.seconds; -import static com.google.code.tempusfugit.temporal.Timeout.timeout; -import static com.lambdaworks.redis.cluster.ClusterTestUtil.getNodeId; -import static com.lambdaworks.redis.cluster.ClusterTestUtil.getOwnPartition; -import static org.assertj.core.api.Assertions.assertThat; -import static org.assertj.core.api.Fail.fail; - -import java.util.ArrayList; -import java.util.List; -import java.util.concurrent.ExecutionException; -import java.util.concurrent.TimeUnit; -import java.util.concurrent.TimeoutException; - -import org.junit.*; - -import com.google.code.tempusfugit.temporal.Condition; -import com.google.code.tempusfugit.temporal.WaitFor; -import com.lambdaworks.Connections; -import com.lambdaworks.Futures; -import com.lambdaworks.Wait; -import com.lambdaworks.category.SlowTests; -import com.lambdaworks.redis.*; -import com.lambdaworks.redis.api.async.RedisAsyncCommands; -import 
com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection; -import com.lambdaworks.redis.cluster.api.async.RedisAdvancedClusterAsyncCommands; -import com.lambdaworks.redis.cluster.api.async.RedisClusterAsyncCommands; -import com.lambdaworks.redis.cluster.api.sync.RedisAdvancedClusterCommands; -import com.lambdaworks.redis.cluster.api.sync.RedisClusterCommands; -import com.lambdaworks.redis.cluster.models.partitions.ClusterPartitionParser; -import com.lambdaworks.redis.cluster.models.partitions.Partitions; -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; - -/** - * Test for mutable cluster setup scenarios. - * - * @author Mark Paluch - * @since 3.0 - */ -@SuppressWarnings({ "unchecked" }) -@SlowTests -public class RedisClusterSetupTest extends AbstractTest { - - public static final String host = TestSettings.hostAddr(); - - private static RedisClusterClient clusterClient; - private static RedisClient client = DefaultRedisClient.get(); - - private RedisClusterCommands redis1; - private RedisClusterCommands redis2; - - @Rule - public ClusterRule clusterRule = new ClusterRule(clusterClient, AbstractClusterTest.port5, AbstractClusterTest.port6); - - @BeforeClass - public static void setupClient() { - clusterClient = RedisClusterClient.create(RedisURI.Builder.redis(host, AbstractClusterTest.port5).build()); - } - - @AfterClass - public static void shutdownClient() { - FastShutdown.shutdown(clusterClient); - } - - @Before - public void openConnection() throws Exception { - redis1 = client.connect(RedisURI.Builder.redis(AbstractClusterTest.host, AbstractClusterTest.port5).build()).sync(); - redis2 = client.connect(RedisURI.Builder.redis(AbstractClusterTest.host, AbstractClusterTest.port6).build()).sync(); - clusterRule.clusterReset(); - } - - @After - public void closeConnection() throws Exception { - redis1.close(); - redis2.close(); - } - - @Test - public void clusterMeet() throws Exception { - - clusterRule.clusterReset(); - - Partitions partitionsBeforeMeet = ClusterPartitionParser.parse(redis1.clusterNodes()); - assertThat(partitionsBeforeMeet.getPartitions()).hasSize(1); - - String result = redis1.clusterMeet(host, AbstractClusterTest.port6); - assertThat(result).isEqualTo("OK"); - - Wait.untilEquals(2, () -> ClusterPartitionParser.parse(redis1.clusterNodes()).size()).waitOrTimeout(); - - Partitions partitionsAfterMeet = ClusterPartitionParser.parse(redis1.clusterNodes()); - assertThat(partitionsAfterMeet.getPartitions()).hasSize(2); - } - - @Test - public void clusterForget() throws Exception { - - clusterRule.clusterReset(); - - String result = redis1.clusterMeet(host, AbstractClusterTest.port6); - assertThat(result).isEqualTo("OK"); - Wait.untilTrue(() -> redis1.clusterNodes().contains(redis2.clusterMyId())).waitOrTimeout(); - Wait.untilTrue(() -> redis2.clusterNodes().contains(redis1.clusterMyId())).waitOrTimeout(); - Wait.untilTrue(() -> { - Partitions partitions = ClusterPartitionParser.parse(redis1.clusterNodes()); - if (partitions.size() != 2) { - return false; - } - for (RedisClusterNode redisClusterNode : partitions) { - if (redisClusterNode.is(RedisClusterNode.NodeFlag.HANDSHAKE)) { - return false; - } - } - return true; - }).waitOrTimeout(); - - redis1.clusterForget(redis2.clusterMyId()); - - Wait.untilEquals(1, () -> ClusterPartitionParser.parse(redis1.clusterNodes()).size()).waitOrTimeout(); - - Partitions partitionsAfterForget = ClusterPartitionParser.parse(redis1.clusterNodes()); - assertThat(partitionsAfterForget.getPartitions()).hasSize(1); - 
} - - @Test - public void clusterDelSlots() throws Exception { - - ClusterSetup.setup2Masters(clusterRule); - - redis1.clusterDelSlots(1, 2, 5, 6); - - Wait.untilEquals(11996, () -> getOwnPartition(redis1).getSlots().size()).waitOrTimeout(); - } - - @Test - public void clusterSetSlots() throws Exception { - - ClusterSetup.setup2Masters(clusterRule); - - redis1.clusterSetSlotNode(6, getNodeId(redis2)); - - waitForSlots(redis1, 11999); - waitForSlots(redis2, 4384); - - Partitions partitions = ClusterPartitionParser.parse(redis1.clusterNodes()); - for (RedisClusterNode redisClusterNode : partitions.getPartitions()) { - if (redisClusterNode.getFlags().contains(RedisClusterNode.NodeFlag.MYSELF)) { - assertThat(redisClusterNode.getSlots()).contains(1, 2, 3, 4, 5).doesNotContain(6); - } - } - } - - @Test - public void clusterSlotMigrationImport() throws Exception { - - ClusterSetup.setup2Masters(clusterRule); - - String nodeId2 = getNodeId(redis2); - assertThat(redis1.clusterSetSlotMigrating(6, nodeId2)).isEqualTo("OK"); - assertThat(redis1.clusterSetSlotImporting(15000, nodeId2)).isEqualTo("OK"); - - assertThat(redis1.clusterSetSlotStable(6)).isEqualTo("OK"); - } - - @Test - public void clusterTopologyRefresh() throws Exception { - - clusterClient.setOptions( - ClusterClientOptions.builder().refreshClusterView(true).refreshPeriod(5, TimeUnit.SECONDS).build()); - clusterClient.reloadPartitions(); - - RedisAdvancedClusterAsyncCommands clusterConnection = clusterClient.connect().async(); - assertThat(clusterClient.getPartitions()).hasSize(1); - - ClusterSetup.setup2Masters(clusterRule); - assertThat(clusterClient.getPartitions()).hasSize(2); - - clusterConnection.close(); - } - - @Test - public void changeTopologyWhileOperations() throws Exception { - - ClusterSetup.setup2Masters(clusterRule); - - ClusterTopologyRefreshOptions clusterTopologyRefreshOptions = ClusterTopologyRefreshOptions.builder() - .enableAllAdaptiveRefreshTriggers().build(); - - clusterClient - .setOptions(ClusterClientOptions.builder().topologyRefreshOptions(clusterTopologyRefreshOptions).build()); - StatefulRedisClusterConnection connection = clusterClient.connect(); - RedisAdvancedClusterCommands sync = connection.sync(); - RedisAdvancedClusterAsyncCommands async = connection.async(); - - Partitions partitions = connection.getPartitions(); - assertThat(partitions.getPartitionBySlot(0).getSlots().size()).isEqualTo(12000); - assertThat(partitions.getPartitionBySlot(16380).getSlots().size()).isEqualTo(4384); - assertRoutedExecution(async); - - sync.del("A"); - sync.del("t"); - sync.del("p"); - - shiftAllSlotsToNode1(); - assertRoutedExecution(async); - - Wait.untilTrue(() -> { - if (clusterClient.getPartitions().size() == 2) { - for (RedisClusterNode redisClusterNode : clusterClient.getPartitions()) { - if (redisClusterNode.getSlots().size() > 16380) { - return true; - } - } - } - - return false; - }).waitOrTimeout(); - - assertThat(partitions.getPartitionBySlot(0).getSlots().size()).isEqualTo(16384); - - assertThat(sync.get("A")).isEqualTo("value"); - assertThat(sync.get("t")).isEqualTo("value"); - assertThat(sync.get("p")).isEqualTo("value"); - - async.close(); - } - - @Test - public void slotMigrationShouldUseAsking() throws Exception { - - ClusterSetup.setup2Masters(clusterRule); - - StatefulRedisClusterConnection connection = clusterClient.connect(); - - RedisAdvancedClusterCommands sync = connection.sync(); - RedisAdvancedClusterAsyncCommands async = connection.async(); - - Partitions partitions = 
connection.getPartitions(); - assertThat(partitions.getPartitionBySlot(0).getSlots().size()).isEqualTo(12000); - assertThat(partitions.getPartitionBySlot(16380).getSlots().size()).isEqualTo(4384); - - redis1.clusterSetSlotMigrating(3300, redis2.clusterMyId()); - redis2.clusterSetSlotImporting(3300, redis1.clusterMyId()); - - assertThat(sync.get("b")).isNull(); - - async.close(); - } - - @Test - public void disconnectedConnectionRejectTest() throws Exception { - - clusterClient.setOptions(ClusterClientOptions.builder().refreshClusterView(true).refreshPeriod(1, TimeUnit.SECONDS) - .disconnectedBehavior(ClientOptions.DisconnectedBehavior.REJECT_COMMANDS).build()); - RedisAdvancedClusterAsyncCommands clusterConnection = clusterClient.connect().async(); - clusterClient.setOptions(ClusterClientOptions.builder() - .disconnectedBehavior(ClientOptions.DisconnectedBehavior.REJECT_COMMANDS).refreshClusterView(false).build()); - ClusterSetup.setup2Masters(clusterRule); - - assertRoutedExecution(clusterConnection); - - RedisClusterNode partition1 = getOwnPartition(redis1); - RedisClusterAsyncCommands node1Connection = clusterConnection - .getConnection(partition1.getUri().getHost(), partition1.getUri().getPort()); - - shiftAllSlotsToNode1(); - - suspendConnection(node1Connection); - - RedisFuture set = clusterConnection.set("t", "value"); // 15891 - - set.await(5, TimeUnit.SECONDS); - try { - set.get(); - fail("Missing RedisException"); - } catch (ExecutionException e) { - assertThat(e).hasRootCauseInstanceOf(RedisException.class).hasMessageContaining("not connected"); - } finally { - clusterConnection.close(); - } - } - - @Test - public void atLeastOnceForgetNodeFailover() throws Exception { - - clusterClient.setOptions( - ClusterClientOptions.builder().refreshClusterView(true).refreshPeriod(1, TimeUnit.SECONDS).build()); - RedisAdvancedClusterAsyncCommands clusterConnection = clusterClient.connectClusterAsync(); - clusterClient.setOptions(ClusterClientOptions.builder().refreshClusterView(false).build()); - ClusterSetup.setup2Masters(clusterRule); - - assertRoutedExecution(clusterConnection); - - RedisClusterNode partition1 = getOwnPartition(redis1); - RedisClusterNode partition2 = getOwnPartition(redis2); - RedisClusterAsyncCommands node2Connection = clusterConnection - .getConnection(partition2.getUri().getHost(), partition2.getUri().getPort()); - - shiftAllSlotsToNode1(); - - suspendConnection(node2Connection); - - List> futures = new ArrayList<>(); - - futures.add(clusterConnection.set("t", "value")); // 15891 - futures.add(clusterConnection.set("p", "value")); // 16023 - - clusterConnection.set("A", "value").get(1, TimeUnit.SECONDS); // 6373 - - for (RedisFuture future : futures) { - assertThat(future.isDone()).isFalse(); - assertThat(future.isCancelled()).isFalse(); - } - redis1.clusterForget(partition2.getNodeId()); - redis2.clusterForget(partition1.getNodeId()); - - clusterClient.setOptions(ClusterClientOptions.builder().refreshClusterView(true).build()); - waitUntilOnlyOnePartition(); - - Wait.untilTrue(() -> Futures.areAllCompleted(futures)).waitOrTimeout(); - - assertRoutedExecution(clusterConnection); - - clusterConnection.close(); - - } - - @Test - public void expireStaleNodeIdConnections() throws Exception { - - clusterClient.setOptions( - ClusterClientOptions.builder().refreshClusterView(true).refreshPeriod(1, TimeUnit.SECONDS).build()); - RedisAdvancedClusterAsyncCommands clusterConnection = clusterClient.connectClusterAsync(); - - ClusterSetup.setup2Masters(clusterRule); - - 
PooledClusterConnectionProvider clusterConnectionProvider = getPooledClusterConnectionProvider( - clusterConnection); - - assertThat(clusterConnectionProvider.getConnectionCount()).isEqualTo(0); - - assertRoutedExecution(clusterConnection); - - assertThat(clusterConnectionProvider.getConnectionCount()).isEqualTo(2); - - Partitions partitions = ClusterPartitionParser.parse(redis1.clusterNodes()); - for (RedisClusterNode redisClusterNode : partitions.getPartitions()) { - if (!redisClusterNode.getFlags().contains(RedisClusterNode.NodeFlag.MYSELF)) { - redis1.clusterForget(redisClusterNode.getNodeId()); - } - } - - partitions = ClusterPartitionParser.parse(redis2.clusterNodes()); - for (RedisClusterNode redisClusterNode : partitions.getPartitions()) { - if (!redisClusterNode.getFlags().contains(RedisClusterNode.NodeFlag.MYSELF)) { - redis2.clusterForget(redisClusterNode.getNodeId()); - } - } - - Wait.untilEquals(1, () -> clusterClient.getPartitions().size()).waitOrTimeout(); - Wait.untilEquals(1, () -> clusterConnectionProvider.getConnectionCount()).waitOrTimeout(); - - clusterConnection.close(); - - } - - private void assertRoutedExecution(RedisClusterAsyncCommands clusterConnection) throws Exception { - assertExecuted(clusterConnection.set("A", "value")); // 6373 - assertExecuted(clusterConnection.set("t", "value")); // 15891 - assertExecuted(clusterConnection.set("p", "value")); // 16023 - } - - @Test - public void doNotExpireStaleNodeIdConnections() throws Exception { - - clusterClient.setOptions(ClusterClientOptions.builder().closeStaleConnections(false).build()); - RedisAdvancedClusterAsyncCommands clusterConnection = clusterClient.connect().async(); - - ClusterSetup.setup2Masters(clusterRule); - - PooledClusterConnectionProvider clusterConnectionProvider = getPooledClusterConnectionProvider( - clusterConnection); - - assertThat(clusterConnectionProvider.getConnectionCount()).isEqualTo(0); - - assertRoutedExecution(clusterConnection); - - assertThat(clusterConnectionProvider.getConnectionCount()).isEqualTo(2); - - Partitions partitions = ClusterPartitionParser.parse(redis1.clusterNodes()); - for (RedisClusterNode redisClusterNode : partitions.getPartitions()) { - if (!redisClusterNode.getFlags().contains(RedisClusterNode.NodeFlag.MYSELF)) { - redis1.clusterForget(redisClusterNode.getNodeId()); - } - } - - partitions = ClusterPartitionParser.parse(redis2.clusterNodes()); - for (RedisClusterNode redisClusterNode : partitions.getPartitions()) { - if (!redisClusterNode.getFlags().contains(RedisClusterNode.NodeFlag.MYSELF)) { - redis2.clusterForget(redisClusterNode.getNodeId()); - } - } - - clusterClient.reloadPartitions(); - - assertThat(clusterConnectionProvider.getConnectionCount()).isEqualTo(2); - - clusterConnection.close(); - - } - - @Test - public void expireStaleHostAndPortConnections() throws Exception { - - clusterClient.setOptions(ClusterClientOptions.builder().build()); - RedisAdvancedClusterAsyncCommands clusterConnection = clusterClient.connectClusterAsync(); - - ClusterSetup.setup2Masters(clusterRule); - - final PooledClusterConnectionProvider clusterConnectionProvider = getPooledClusterConnectionProvider( - clusterConnection); - - assertThat(clusterConnectionProvider.getConnectionCount()).isEqualTo(0); - - assertRoutedExecution(clusterConnection); - assertThat(clusterConnectionProvider.getConnectionCount()).isEqualTo(2); - - for (RedisClusterNode redisClusterNode : clusterClient.getPartitions()) { - clusterConnection.getConnection(redisClusterNode.getUri().getHost(), 
redisClusterNode.getUri().getPort()); - clusterConnection.getConnection(redisClusterNode.getNodeId()); - } - - assertThat(clusterConnectionProvider.getConnectionCount()).isEqualTo(4); - - Partitions partitions = ClusterPartitionParser.parse(redis1.clusterNodes()); - for (RedisClusterNode redisClusterNode : partitions.getPartitions()) { - if (!redisClusterNode.getFlags().contains(RedisClusterNode.NodeFlag.MYSELF)) { - redis1.clusterForget(redisClusterNode.getNodeId()); - } - } - - partitions = ClusterPartitionParser.parse(redis2.clusterNodes()); - for (RedisClusterNode redisClusterNode : partitions.getPartitions()) { - if (!redisClusterNode.getFlags().contains(RedisClusterNode.NodeFlag.MYSELF)) { - redis2.clusterForget(redisClusterNode.getNodeId()); - } - } - - clusterClient.reloadPartitions(); - - Wait.untilEquals(1, () -> clusterClient.getPartitions().size()).waitOrTimeout(); - Wait.untilEquals(2L, () -> clusterConnectionProvider.getConnectionCount()).waitOrTimeout(); - - clusterConnection.close(); - } - - @Test - public void readFromSlaveTest() throws Exception { - - ClusterSetup.setup2Masters(clusterRule); - RedisAdvancedClusterAsyncCommands clusterConnection = clusterClient.connect().async(); - clusterConnection.getStatefulConnection().setReadFrom(ReadFrom.SLAVE); - - clusterConnection.set(key, value).get(); - - try { - clusterConnection.get(key); - } catch (RedisException e) { - assertThat(e).hasMessageContaining("Cannot determine a partition to read for slot"); - } - - clusterConnection.close(); - } - - @Test - public void readFromNearestTest() throws Exception { - - ClusterSetup.setup2Masters(clusterRule); - RedisAdvancedClusterCommands clusterConnection = clusterClient.connect().sync(); - clusterConnection.getStatefulConnection().setReadFrom(ReadFrom.NEAREST); - - clusterConnection.set(key, value); - - assertThat(clusterConnection.get(key)).isEqualTo(value); - - clusterConnection.close(); - } - - protected PooledClusterConnectionProvider getPooledClusterConnectionProvider( - RedisAdvancedClusterAsyncCommands clusterAsyncConnection) { - - RedisChannelHandler channelHandler = getChannelHandler(clusterAsyncConnection); - ClusterDistributionChannelWriter writer = (ClusterDistributionChannelWriter) channelHandler.getChannelWriter(); - return (PooledClusterConnectionProvider) writer.getClusterConnectionProvider(); - } - - private RedisChannelHandler getChannelHandler( - RedisAdvancedClusterAsyncCommands clusterAsyncConnection) { - return (RedisChannelHandler) clusterAsyncConnection.getStatefulConnection(); - } - - private void assertExecuted(RedisFuture set) throws Exception { - set.get(5, TimeUnit.SECONDS); - assertThat(set.getError()).isNull(); - assertThat(set.get()).isEqualTo("OK"); - } - - private void waitUntilOnlyOnePartition() throws InterruptedException, TimeoutException { - Wait.untilEquals(1, () -> clusterClient.getPartitions().size()).waitOrTimeout(); - Wait.untilTrue(() -> { - for (RedisClusterNode redisClusterNode : clusterClient.getPartitions()) { - if (redisClusterNode.getSlots().size() > 16380) { - return true; - } - } - - return false; - }).waitOrTimeout(); - } - - private void suspendConnection(RedisClusterAsyncCommands asyncCommands) - throws InterruptedException, TimeoutException { - Connections.getConnectionWatchdog(((RedisAsyncCommands) asyncCommands).getStatefulConnection()) - .setReconnectSuspended(true); - asyncCommands.quit(); - WaitFor.waitOrTimeout(() -> !asyncCommands.isOpen(), timeout(seconds(6))); - } - - protected void shiftAllSlotsToNode1() throws 
InterruptedException, TimeoutException { - - redis1.clusterDelSlots(AbstractClusterTest.createSlots(12000, 16384)); - redis2.clusterDelSlots(AbstractClusterTest.createSlots(12000, 16384)); - - waitForSlots(redis2, 0); - - final RedisClusterNode redis2Partition = getOwnPartition(redis2); - WaitFor.waitOrTimeout(new Condition() { - @Override - public boolean isSatisfied() { - Partitions partitions = ClusterPartitionParser.parse(redis1.clusterNodes()); - RedisClusterNode partition = partitions.getPartitionByNodeId(redis2Partition.getNodeId()); - - if (!partition.getSlots().isEmpty()) { - removeRemaining(partition); - } - - return partition.getSlots().size() == 0; - } - - private void removeRemaining(RedisClusterNode partition) { - try { - redis1.clusterDelSlots(toIntArray(partition.getSlots())); - } catch (Exception o_O) { - // ignore - } - } - }, timeout(seconds(10))); - - redis1.clusterAddSlots(RedisClusterClientTest.createSlots(12000, 16384)); - waitForSlots(redis1, 16384); - - Wait.untilTrue(clusterRule::isStable).waitOrTimeout(); - } - - private int[] toIntArray(List list) { - return list.parallelStream().mapToInt(Integer::intValue).toArray(); - } - - private void waitForSlots(RedisClusterCommands connection, int slotCount) - throws InterruptedException, TimeoutException { - Wait.untilEquals(slotCount, () -> getOwnPartition(connection).getSlots().size()).waitOrTimeout(); - } -} diff --git a/src/test/java/com/lambdaworks/redis/cluster/RedisClusterStressScenariosTest.java b/src/test/java/com/lambdaworks/redis/cluster/RedisClusterStressScenariosTest.java deleted file mode 100644 index 11a946771b..0000000000 --- a/src/test/java/com/lambdaworks/redis/cluster/RedisClusterStressScenariosTest.java +++ /dev/null @@ -1,214 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import static com.google.code.tempusfugit.temporal.Duration.seconds; -import static com.google.code.tempusfugit.temporal.Timeout.timeout; -import static com.lambdaworks.redis.cluster.ClusterTestUtil.getOwnPartition; -import static org.assertj.core.api.Assertions.assertThat; - -import java.util.ArrayList; -import java.util.Collections; -import java.util.List; - -import org.apache.logging.log4j.LogManager; -import org.apache.logging.log4j.Logger; -import org.junit.*; -import org.junit.runners.MethodSorters; - -import com.google.code.tempusfugit.temporal.Duration; -import com.google.code.tempusfugit.temporal.ThreadSleep; -import com.google.code.tempusfugit.temporal.WaitFor; -import com.lambdaworks.Wait; -import com.lambdaworks.category.SlowTests; -import com.lambdaworks.redis.*; -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.cluster.api.sync.RedisClusterCommands; -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; - -@FixMethodOrder(MethodSorters.NAME_ASCENDING) -@SuppressWarnings("unchecked") -@SlowTests -public class RedisClusterStressScenariosTest extends AbstractTest { - - public static final String host = TestSettings.hostAddr(); - - protected static RedisClient client; - protected static RedisClusterClient clusterClient; - - protected Logger log = LogManager.getLogger(getClass()); - - protected StatefulRedisConnection redis5; - protected StatefulRedisConnection redis6; - - protected RedisClusterCommands redissync5; - protected RedisClusterCommands redissync6; - - protected String key = "key"; - protected String value = "value"; - - @Rule - public ClusterRule clusterRule = new ClusterRule(clusterClient, AbstractClusterTest.port5, AbstractClusterTest.port6); - - 
@BeforeClass - public static void setupClient() throws Exception { - client = RedisClient.create(RedisURI.Builder.redis(host, AbstractClusterTest.port5).build()); - clusterClient = RedisClusterClient.create( - Collections.singletonList(RedisURI.Builder.redis(host, AbstractClusterTest.port5) - .build())); - } - - @AfterClass - public static void shutdownClient() { - FastShutdown.shutdown(client); - } - - @Before - public void before() throws Exception { - - ClusterSetup.setupMasterWithSlave(clusterRule); - - redis5 = client.connect(RedisURI.Builder.redis(host, AbstractClusterTest.port5).build()); - redis6 = client.connect(RedisURI.Builder.redis(host, AbstractClusterTest.port6).build()); - - redissync5 = redis5.sync(); - redissync6 = redis6.sync(); - clusterClient.reloadPartitions(); - - WaitFor.waitOrTimeout(() -> { - return clusterRule.isStable(); - }, timeout(seconds(5)), new ThreadSleep(Duration.millis(500))); - - } - - @After - public void after() throws Exception { - redis5.close(); - - redissync5.close(); - redissync6.close(); - } - - @Test - public void testClusterFailover() throws Exception { - - log.info("Cluster node 5 is master"); - log.info("Cluster nodes seen from node 5:\n" + redissync5.clusterNodes()); - log.info("Cluster nodes seen from node 6:\n" + redissync6.clusterNodes()); - - Wait.untilTrue(() -> getOwnPartition(redissync5).is(RedisClusterNode.NodeFlag.MASTER)).waitOrTimeout(); - Wait.untilTrue(() -> getOwnPartition(redissync6).is(RedisClusterNode.NodeFlag.SLAVE)).waitOrTimeout(); - - String failover = redissync6.clusterFailover(true); - assertThat(failover).isEqualTo("OK"); - - Wait.untilTrue(() -> getOwnPartition(redissync6).is(RedisClusterNode.NodeFlag.MASTER)).waitOrTimeout(); - Wait.untilTrue(() -> getOwnPartition(redissync5).is(RedisClusterNode.NodeFlag.SLAVE)).waitOrTimeout(); - - log.info("Cluster nodes seen from node 5 after clusterFailover:\n" + redissync5.clusterNodes()); - log.info("Cluster nodes seen from node 6 after clusterFailover:\n" + redissync6.clusterNodes()); - - RedisClusterNode redis5Node = getOwnPartition(redissync5); - RedisClusterNode redis6Node = getOwnPartition(redissync6); - - assertThat(redis5Node.getFlags()).contains(RedisClusterNode.NodeFlag.SLAVE); - assertThat(redis6Node.getFlags()).contains(RedisClusterNode.NodeFlag.MASTER); - - } - - @Test - public void testClusterConnectionStability() throws Exception { - - RedisAdvancedClusterAsyncCommandsImpl connection = (RedisAdvancedClusterAsyncCommandsImpl) clusterClient - .connectClusterAsync(); - - RedisChannelHandler statefulConnection = (RedisChannelHandler) connection.getStatefulConnection(); - - connection.set("a", "b"); - ClusterDistributionChannelWriter writer = (ClusterDistributionChannelWriter) statefulConnection - .getChannelWriter(); - - StatefulRedisConnectionImpl statefulSlotConnection = (StatefulRedisConnectionImpl) writer - .getClusterConnectionProvider().getConnection(ClusterConnectionProvider.Intent.WRITE, 3300); - - final RedisAsyncConnection slotConnection = statefulSlotConnection.async(); - - slotConnection.set("a", "b"); - slotConnection.close(); - - WaitFor.waitOrTimeout(() -> !slotConnection.isOpen(), timeout(seconds(5))); - - assertThat(statefulSlotConnection.isClosed()).isTrue(); - assertThat(statefulSlotConnection.isOpen()).isFalse(); - - assertThat(connection.isOpen()).isTrue(); - assertThat(statefulConnection.isOpen()).isTrue(); - assertThat(statefulConnection.isClosed()).isFalse(); - - try { - connection.set("a", "b"); - } catch (RedisException e) { - 
assertThat(e).hasMessageContaining("Connection is closed"); - } - - connection.close(); - - } - - @Test(timeout = 20000) - public void distributedClusteredAccessAsync() throws Exception { - - RedisClusterAsyncConnection connection = clusterClient.connectClusterAsync(); - - List> futures = new ArrayList<>(); - for (int i = 0; i < 100; i++) { - futures.add(connection.set("a" + i, "myValue1" + i)); - futures.add(connection.set("b" + i, "myValue2" + i)); - futures.add(connection.set("d" + i, "myValue3" + i)); - } - - for (RedisFuture future : futures) { - future.get(); - } - - for (int i = 0; i < 100; i++) { - RedisFuture setA = connection.get("a" + i); - RedisFuture setB = connection.get("b" + i); - RedisFuture setD = connection.get("d" + i); - - setA.get(); - setB.get(); - setD.get(); - - assertThat(setA.getError()).isNull(); - assertThat(setB.getError()).isNull(); - assertThat(setD.getError()).isNull(); - - assertThat(setA.get()).isEqualTo("myValue1" + i); - assertThat(setB.get()).isEqualTo("myValue2" + i); - assertThat(setD.get()).isEqualTo("myValue3" + i); - } - - connection.close(); - } - - @Test - public void distributedClusteredAccessSync() throws Exception { - - RedisClusterConnection connection = clusterClient.connectCluster(); - - for (int i = 0; i < 100; i++) { - connection.set("a" + i, "myValue1" + i); - connection.set("b" + i, "myValue2" + i); - connection.set("d" + i, "myValue3" + i); - } - - for (int i = 0; i < 100; i++) { - - assertThat(connection.get("a" + i)).isEqualTo("myValue1" + i); - assertThat(connection.get("b" + i)).isEqualTo("myValue2" + i); - assertThat(connection.get("d" + i)).isEqualTo("myValue3" + i); - } - - connection.close(); - } - -} diff --git a/src/test/java/com/lambdaworks/redis/cluster/RedisRxClusterClientTest.java b/src/test/java/com/lambdaworks/redis/cluster/RedisRxClusterClientTest.java deleted file mode 100644 index 626dac4ef7..0000000000 --- a/src/test/java/com/lambdaworks/redis/cluster/RedisRxClusterClientTest.java +++ /dev/null @@ -1,137 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import static com.lambdaworks.redis.cluster.ClusterTestUtil.getOwnPartition; -import static org.assertj.core.api.Assertions.assertThat; - -import java.util.Collections; -import java.util.List; - -import org.junit.After; -import org.junit.AfterClass; -import org.junit.Before; -import org.junit.BeforeClass; -import org.junit.FixMethodOrder; -import org.junit.Test; -import org.junit.runners.MethodSorters; - -import rx.Observable; - -import com.lambdaworks.redis.FastShutdown; -import com.lambdaworks.redis.RedisClient; -import com.lambdaworks.redis.RedisException; -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection; -import com.lambdaworks.redis.cluster.api.rx.RedisAdvancedClusterReactiveCommands; -import com.lambdaworks.redis.cluster.api.sync.RedisAdvancedClusterCommands; -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; - -@FixMethodOrder(MethodSorters.NAME_ASCENDING) -@SuppressWarnings("unchecked") -public class RedisRxClusterClientTest extends AbstractClusterTest { - - protected static RedisClient client; - - protected StatefulRedisClusterConnection connection; - protected RedisAdvancedClusterCommands sync; - protected RedisAdvancedClusterReactiveCommands rx; - - @BeforeClass - public static void setupClient() throws Exception { - setupClusterClient(); - client = RedisClient.create(RedisURI.Builder.redis(host, port1).build()); - clusterClient = 
RedisClusterClient.create(Collections.singletonList(RedisURI.Builder.redis(host, port1).build())); - } - - @AfterClass - public static void shutdownClient() { - shutdownClusterClient(); - FastShutdown.shutdown(client); - FastShutdown.shutdown(clusterClient); - } - - @Before - public void before() throws Exception { - - clusterRule.getClusterClient().reloadPartitions(); - - clusterClient.reloadPartitions(); - connection = clusterClient.connect(); - sync = connection.sync(); - rx = connection.reactive(); - } - - @After - public void after() throws Exception { - connection.close(); - } - - @Test - public void testClusterCommandRedirection() throws Exception { - // Command on node within the default connection - assertThat(getSingle(rx.set(KEY_B, "myValue1"))).isEqualTo("OK"); - - // gets redirection to node 3 - assertThat(getSingle(rx.set(KEY_A, "myValue1"))).isEqualTo("OK"); - } - - @Test - public void getKeysInSlot() throws Exception { - - sync.set(KEY_A, value); - sync.set(KEY_B, value); - - List keysA = getSingle(rx.clusterGetKeysInSlot(SLOT_A, 10).toList()); - assertThat(keysA).isEqualTo(Collections.singletonList(KEY_A)); - - List keysB = getSingle(rx.clusterGetKeysInSlot(SLOT_B, 10).toList()); - assertThat(keysB).isEqualTo(Collections.singletonList(KEY_B)); - } - - @Test - public void countKeysInSlot() throws Exception { - - sync.set(KEY_A, value); - sync.set(KEY_B, value); - - Long result = getSingle(rx.clusterCountKeysInSlot(SLOT_A)); - assertThat(result).isEqualTo(1L); - - result = getSingle(rx.clusterCountKeysInSlot(SLOT_B)); - assertThat(result).isEqualTo(1L); - - int slotZZZ = SlotHash.getSlot("ZZZ".getBytes()); - result = getSingle(rx.clusterCountKeysInSlot(slotZZZ)); - assertThat(result).isEqualTo(0L); - } - - @Test - public void testClusterCountFailureReports() throws Exception { - RedisClusterNode ownPartition = getOwnPartition(sync); - assertThat(getSingle(rx.clusterCountFailureReports(ownPartition.getNodeId()))).isGreaterThanOrEqualTo(0); - } - - @Test - public void testClusterKeyslot() throws Exception { - assertThat(getSingle(rx.clusterKeyslot(KEY_A))).isEqualTo(SLOT_A); - assertThat(SlotHash.getSlot(KEY_A)).isEqualTo(SLOT_A); - } - - @Test - public void testClusterSaveconfig() throws Exception { - assertThat(getSingle(rx.clusterSaveconfig())).isEqualTo("OK"); - } - - @Test - public void testClusterSetConfigEpoch() throws Exception { - try { - getSingle(rx.clusterSetConfigEpoch(1L)); - } catch (RedisException e) { - assertThat(e).hasMessageContaining("ERR The user can assign a config epoch only"); - } - } - - private T getSingle(Observable observable) { - return observable.toBlocking().single(); - } - -} diff --git a/src/test/java/com/lambdaworks/redis/cluster/RoundRobinSocketAddressSupplierTest.java b/src/test/java/com/lambdaworks/redis/cluster/RoundRobinSocketAddressSupplierTest.java deleted file mode 100644 index fa8c9ce5a1..0000000000 --- a/src/test/java/com/lambdaworks/redis/cluster/RoundRobinSocketAddressSupplierTest.java +++ /dev/null @@ -1,94 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import static org.assertj.core.api.Assertions.assertThat; -import static org.mockito.Mockito.when; - -import java.util.ArrayList; -import java.util.HashSet; -import java.util.concurrent.TimeUnit; - -import com.lambdaworks.redis.resource.ClientResources; -import com.lambdaworks.redis.resource.DnsResolvers; -import org.junit.Before; -import org.junit.BeforeClass; -import org.junit.Test; - -import com.lambdaworks.redis.RedisURI; -import 
com.lambdaworks.redis.cluster.models.partitions.Partitions; -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; -import org.junit.runner.RunWith; -import org.mockito.Mock; -import org.mockito.runners.MockitoJUnitRunner; - -/** - * @author Mark Paluch - */ -@RunWith(MockitoJUnitRunner.class) -public class RoundRobinSocketAddressSupplierTest { - - private static RedisURI hap1 = new RedisURI("127.0.0.1", 1, 1, TimeUnit.SECONDS); - private static RedisURI hap2 = new RedisURI("127.0.0.1", 2, 1, TimeUnit.SECONDS); - private static RedisURI hap3 = new RedisURI("127.0.0.1", 3, 1, TimeUnit.SECONDS); - private static RedisURI hap4 = new RedisURI("127.0.0.1", 4, 1, TimeUnit.SECONDS); - private static Partitions partitions; - - @Mock - private ClientResources clientResourcesMock; - - @BeforeClass - public static void beforeClass() throws Exception { - hap1.getResolvedAddress(); - hap2.getResolvedAddress(); - hap3.getResolvedAddress(); - } - - @Before - public void before() throws Exception { - - when(clientResourcesMock.dnsResolver()).thenReturn(DnsResolvers.JVM_DEFAULT); - - partitions = new Partitions(); - partitions.addPartition( - new RedisClusterNode(hap1, "1", true, "", 0, 0, 0, new ArrayList<>(), new HashSet<>())); - partitions.addPartition( - new RedisClusterNode(hap2, "2", true, "", 0, 0, 0, new ArrayList<>(), new HashSet<>())); - partitions.addPartition( - new RedisClusterNode(hap3, "3", true, "", 0, 0, 0, new ArrayList<>(), new HashSet<>())); - - partitions.updateCache(); - } - - @Test - public void noOffset() throws Exception { - - RoundRobinSocketAddressSupplier sut = new RoundRobinSocketAddressSupplier(partitions, redisClusterNodes -> redisClusterNodes, - clientResourcesMock); - - assertThat(sut.get()).isEqualTo(hap1.getResolvedAddress()); - assertThat(sut.get()).isEqualTo(hap2.getResolvedAddress()); - assertThat(sut.get()).isEqualTo(hap3.getResolvedAddress()); - assertThat(sut.get()).isEqualTo(hap1.getResolvedAddress()); - - assertThat(sut.get()).isNotEqualTo(hap3.getResolvedAddress()); - } - - @Test - public void partitionTableChanges() throws Exception { - - RoundRobinSocketAddressSupplier sut = new RoundRobinSocketAddressSupplier(partitions, redisClusterNodes -> redisClusterNodes, - clientResourcesMock); - - assertThat(sut.get()).isEqualTo(hap1.getResolvedAddress()); - assertThat(sut.get()).isEqualTo(hap2.getResolvedAddress()); - - partitions.add( - new RedisClusterNode(hap4, "4", true, "", 0, 0, 0, new ArrayList<>(), new HashSet<>())); - - assertThat(sut.get()).isEqualTo(hap1.getResolvedAddress()); - assertThat(sut.get()).isEqualTo(hap2.getResolvedAddress()); - assertThat(sut.get()).isEqualTo(hap3.getResolvedAddress()); - assertThat(sut.get()).isEqualTo(hap4.getResolvedAddress()); - assertThat(sut.get()).isEqualTo(hap1.getResolvedAddress()); - } - -} diff --git a/src/test/java/com/lambdaworks/redis/cluster/SlotHashTest.java b/src/test/java/com/lambdaworks/redis/cluster/SlotHashTest.java deleted file mode 100644 index 39c5010e2e..0000000000 --- a/src/test/java/com/lambdaworks/redis/cluster/SlotHashTest.java +++ /dev/null @@ -1,26 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import static org.assertj.core.api.Assertions.*; - -import org.junit.Test; - -/** - * @author Mark Paluch - * @since 3.0 - */ -public class SlotHashTest { - - @Test - public void testHash() throws Exception { - int result = SlotHash.getSlot("123456789".getBytes()); - assertThat(result).isEqualTo(0x31C3); - - } - - @Test - public void testHashWithHash() throws Exception { - int result = 
SlotHash.getSlot("key{123456789}a".getBytes()); - assertThat(result).isEqualTo(0x31C3); - - } -} diff --git a/src/test/java/com/lambdaworks/redis/cluster/commands/CustomClusterCommandTest.java b/src/test/java/com/lambdaworks/redis/cluster/commands/CustomClusterCommandTest.java deleted file mode 100644 index aa4ce06e5f..0000000000 --- a/src/test/java/com/lambdaworks/redis/cluster/commands/CustomClusterCommandTest.java +++ /dev/null @@ -1,126 +0,0 @@ -package com.lambdaworks.redis.cluster.commands; - -import static org.assertj.core.api.Assertions.assertThat; - -import org.junit.AfterClass; -import org.junit.Before; -import org.junit.BeforeClass; -import org.junit.Test; - -import com.lambdaworks.redis.*; -import com.lambdaworks.redis.cluster.AbstractClusterTest; -import com.lambdaworks.redis.cluster.ClusterTestUtil; -import com.lambdaworks.redis.cluster.RedisClusterClient; -import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection; -import com.lambdaworks.redis.cluster.api.sync.RedisAdvancedClusterCommands; -import com.lambdaworks.redis.codec.Utf8StringCodec; -import com.lambdaworks.redis.commands.CustomCommandTest; -import com.lambdaworks.redis.output.StatusOutput; -import com.lambdaworks.redis.protocol.*; - -import rx.Observable; - -/** - * @author Mark Paluch - */ -public class CustomClusterCommandTest extends AbstractClusterTest { - - private static final Utf8StringCodec utf8StringCodec = new Utf8StringCodec(); - private static RedisClusterClient redisClusterClient; - private StatefulRedisClusterConnection redisClusterConnection; - private RedisAdvancedClusterCommands redis; - - @BeforeClass - public static void setupClient() { - redisClusterClient = new RedisClusterClient( - RedisURI.Builder.redis(TestSettings.host(), TestSettings.port(900)).build()); - } - - @AfterClass - public static void closeClient() { - FastShutdown.shutdown(redisClusterClient); - } - - @Before - public void openConnection() throws Exception { - redisClusterConnection = redisClusterClient.connect(); - redis = redisClusterConnection.sync(); - ClusterTestUtil.flushDatabaseOfAllNodes(redisClusterConnection); - } - - @Test - public void dispatchSet() throws Exception { - - String response = redis.dispatch(CustomCommandTest.MyCommands.SET, new StatusOutput<>(utf8StringCodec), - new CommandArgs<>(utf8StringCodec).addKey(key).addValue(value)); - - assertThat(response).isEqualTo("OK"); - } - - @Test - public void dispatchWithoutArgs() throws Exception { - - String response = redis.dispatch(CustomCommandTest.MyCommands.INFO, new StatusOutput<>(utf8StringCodec)); - - assertThat(response).contains("connected_clients"); - } - - @Test(expected = RedisCommandExecutionException.class) - public void dispatchShouldFailForWrongDataType() throws Exception { - - redis.hset(key, key, value); - redis.dispatch(CommandType.GET, new StatusOutput<>(utf8StringCodec), new CommandArgs<>(utf8StringCodec).addKey(key)); - } - - @Test - public void standaloneAsyncPing() throws Exception { - - RedisCommand command = new Command<>(CustomCommandTest.MyCommands.PING, - new StatusOutput<>(new Utf8StringCodec()), null); - - AsyncCommand async = new AsyncCommand<>(command); - redisClusterConnection.dispatch(async); - - assertThat(async.get()).isEqualTo("PONG"); - } - - @Test - public void standaloneFireAndForget() throws Exception { - - RedisCommand command = new Command<>(CustomCommandTest.MyCommands.PING, - new StatusOutput<>(new Utf8StringCodec()), null); - redisClusterConnection.dispatch(command); - 
assertThat(command.isCancelled()).isFalse(); - - } - - @Test - public void standaloneReactivePing() throws Exception { - - RedisCommand command = new Command<>(CustomCommandTest.MyCommands.PING, - new StatusOutput<>(new Utf8StringCodec()), null); - ReactiveCommandDispatcher dispatcher = new ReactiveCommandDispatcher<>(command, - redisClusterConnection, false); - - String result = Observable.create(dispatcher).toBlocking().first(); - - assertThat(result).isEqualTo("PONG"); - } - - public enum MyCommands implements ProtocolKeyword { - PING, SET, INFO; - - private final byte name[]; - - MyCommands() { - // cache the bytes for the command name. Reduces memory and cpu pressure when using commands. - name = name().getBytes(); - } - - @Override - public byte[] getBytes() { - return name; - } - } - -} diff --git a/src/test/java/com/lambdaworks/redis/cluster/commands/GeoClusterCommandTest.java b/src/test/java/com/lambdaworks/redis/cluster/commands/GeoClusterCommandTest.java deleted file mode 100644 index a5db0e2d21..0000000000 --- a/src/test/java/com/lambdaworks/redis/cluster/commands/GeoClusterCommandTest.java +++ /dev/null @@ -1,89 +0,0 @@ -package com.lambdaworks.redis.cluster.commands; - -import static com.lambdaworks.redis.cluster.ClusterTestUtil.flushDatabaseOfAllNodes; - -import org.junit.AfterClass; -import org.junit.Before; -import org.junit.BeforeClass; -import org.junit.Ignore; - -import com.lambdaworks.redis.FastShutdown; -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.TestSettings; -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.cluster.ClusterTestUtil; -import com.lambdaworks.redis.cluster.RedisClusterClient; -import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection; -import com.lambdaworks.redis.commands.GeoCommandTest; - -/** - * @author Mark Paluch - */ -public class GeoClusterCommandTest extends GeoCommandTest { - private static RedisClusterClient redisClusterClient; - private StatefulRedisClusterConnection clusterConnection; - - @BeforeClass - public static void setupClient() { - redisClusterClient = new RedisClusterClient( - RedisURI.Builder.redis(TestSettings.host(), TestSettings.port(900)).build()); - } - - @AfterClass - public static void closeClient() { - FastShutdown.shutdown(redisClusterClient); - } - - @Before - public void openConnection() throws Exception { - redis = connect(); - flushDatabaseOfAllNodes(clusterConnection); - } - - @Override - @SuppressWarnings("unchecked") - protected RedisCommands connect() { - clusterConnection = redisClusterClient.connectCluster().getStatefulConnection(); - return ClusterTestUtil.redisCommandsOverCluster(clusterConnection); - } - - @Ignore("MULTI not available on Redis Cluster") - @Override - public void geoaddWithTransaction() throws Exception { - } - - @Ignore("MULTI not available on Redis Cluster") - @Override - public void geoaddMultiWithTransaction() throws Exception { - } - - @Ignore("MULTI not available on Redis Cluster") - @Override - public void georadiusWithTransaction() throws Exception { - } - - @Ignore("MULTI not available on Redis Cluster") - @Override - public void geodistWithTransaction() throws Exception { - } - - @Ignore("MULTI not available on Redis Cluster") - @Override - public void georadiusWithArgsAndTransaction() throws Exception { - } - - @Ignore("MULTI not available on Redis Cluster") - @Override - public void georadiusbymemberWithArgsAndTransaction() throws Exception { - } - - @Ignore("MULTI not available on Redis Cluster") - @Override 
-    public void geoposWithTransaction() throws Exception {
-    }
-
-    @Ignore("MULTI not available on Redis Cluster")
-    @Override
-    public void geohashWithTransaction() throws Exception {
-    }
-}
diff --git a/src/test/java/com/lambdaworks/redis/cluster/commands/HashClusterCommandTest.java b/src/test/java/com/lambdaworks/redis/cluster/commands/HashClusterCommandTest.java
deleted file mode 100644
index 0e51560adf..0000000000
--- a/src/test/java/com/lambdaworks/redis/cluster/commands/HashClusterCommandTest.java
+++ /dev/null
@@ -1,46 +0,0 @@
-package com.lambdaworks.redis.cluster.commands;
-
-import com.lambdaworks.redis.FastShutdown;
-import org.junit.AfterClass;
-import org.junit.Before;
-import org.junit.BeforeClass;
-
-import com.lambdaworks.redis.RedisURI;
-import com.lambdaworks.redis.TestSettings;
-import com.lambdaworks.redis.api.sync.RedisCommands;
-import com.lambdaworks.redis.cluster.ClusterTestUtil;
-import com.lambdaworks.redis.cluster.RedisClusterClient;
-import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection;
-import com.lambdaworks.redis.commands.HashCommandTest;
-
-/**
- * @author Mark Paluch
- */
-public class HashClusterCommandTest extends HashCommandTest {
-
-    private static RedisClusterClient redisClusterClient;
-    private StatefulRedisClusterConnection clusterConnection;
-
-    @BeforeClass
-    public static void setupClient() {
-        redisClusterClient = new RedisClusterClient(RedisURI.Builder.redis(TestSettings.host(), TestSettings.port(900)).build());
-    }
-
-    @AfterClass
-    public static void closeClient() {
-        FastShutdown.shutdown(redisClusterClient);
-    }
-
-    @Before
-    public void openConnection() throws Exception {
-        redis = connect();
-        ClusterTestUtil.flushDatabaseOfAllNodes(clusterConnection);
-    }
-
-    @Override
-    @SuppressWarnings("unchecked")
-    protected RedisCommands connect() {
-        clusterConnection = redisClusterClient.connectCluster().getStatefulConnection();
-        return ClusterTestUtil.redisCommandsOverCluster(clusterConnection);
-    }
-}
diff --git a/src/test/java/com/lambdaworks/redis/cluster/commands/KeyClusterCommandTest.java b/src/test/java/com/lambdaworks/redis/cluster/commands/KeyClusterCommandTest.java
deleted file mode 100644
index 0539530639..0000000000
--- a/src/test/java/com/lambdaworks/redis/cluster/commands/KeyClusterCommandTest.java
+++ /dev/null
@@ -1,98 +0,0 @@
-package com.lambdaworks.redis.cluster.commands;
-
-import static org.assertj.core.api.Assertions.assertThat;
-
-import org.junit.AfterClass;
-import org.junit.Before;
-import org.junit.BeforeClass;
-import org.junit.Test;
-
-import com.lambdaworks.redis.AbstractRedisClientTest;
-import com.lambdaworks.redis.FastShutdown;
-import com.lambdaworks.redis.RedisURI;
-import com.lambdaworks.redis.TestSettings;
-import com.lambdaworks.redis.api.sync.RedisCommands;
-import com.lambdaworks.redis.cluster.ClusterTestUtil;
-import com.lambdaworks.redis.cluster.RedisClusterClient;
-import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection;
-
-/**
- * @author Mark Paluch
- */
-public class KeyClusterCommandTest extends AbstractRedisClientTest {
-
-    private static RedisClusterClient redisClusterClient;
-    private StatefulRedisClusterConnection clusterConnection;
-
-    @BeforeClass
-    public static void setupClient() {
-        redisClusterClient = RedisClusterClient
-                .create(RedisURI.Builder.redis(TestSettings.host(), TestSettings.port(900)).build());
-    }
-
-    @AfterClass
-    public static void closeClient() {
-        FastShutdown.shutdown(redisClusterClient);
-    }
-
-    @Before
-    public void openConnection()
throws Exception { - redis = connect(); - ClusterTestUtil.flushDatabaseOfAllNodes(clusterConnection); - } - - @Override - @SuppressWarnings("unchecked") - protected RedisCommands connect() { - clusterConnection = redisClusterClient.connect(); - return ClusterTestUtil.redisCommandsOverCluster(clusterConnection); - } - - @Test - public void del() throws Exception { - - redis.set(key, "value"); - redis.set("a", "value"); - redis.set("b", "value"); - - assertThat(redis.del(key, "a", "b")).isEqualTo(3); - assertThat(redis.exists(key)).isFalse(); - assertThat(redis.exists("a")).isFalse(); - assertThat(redis.exists("b")).isFalse(); - } - - @Test - public void exists() throws Exception { - - assertThat(redis.exists(key, "a", "b")).isEqualTo(0); - - redis.set(key, "value"); - redis.set("a", "value"); - redis.set("b", "value"); - - assertThat(redis.exists(key, "a", "b")).isEqualTo(3); - } - - @Test - public void touch() throws Exception { - - redis.set(key, "value"); - redis.set("a", "value"); - redis.set("b", "value"); - - assertThat(redis.touch(key, "a", "b")).isEqualTo(3); - assertThat(redis.exists(key, "a", "b")).isEqualTo(3); - } - - @Test - public void unlink() throws Exception { - - redis.set(key, "value"); - redis.set("a", "value"); - redis.set("b", "value"); - - assertThat(redis.unlink(key, "a", "b")).isEqualTo(3); - assertThat(redis.exists(key)).isFalse(); - } - -} diff --git a/src/test/java/com/lambdaworks/redis/cluster/commands/ListClusterCommandTest.java b/src/test/java/com/lambdaworks/redis/cluster/commands/ListClusterCommandTest.java deleted file mode 100644 index 7fafa83c66..0000000000 --- a/src/test/java/com/lambdaworks/redis/cluster/commands/ListClusterCommandTest.java +++ /dev/null @@ -1,88 +0,0 @@ -package com.lambdaworks.redis.cluster.commands; - -import com.lambdaworks.redis.FastShutdown; -import org.assertj.core.api.Assertions; -import org.junit.AfterClass; -import org.junit.Before; -import org.junit.BeforeClass; -import org.junit.Test; - -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.TestSettings; -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.cluster.ClusterTestUtil; -import com.lambdaworks.redis.cluster.RedisClusterClient; -import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection; -import com.lambdaworks.redis.commands.ListCommandTest; - -import static org.assertj.core.api.Assertions.assertThat; - -/** - * @author Mark Paluch - */ -public class ListClusterCommandTest extends ListCommandTest { - private static RedisClusterClient redisClusterClient; - private StatefulRedisClusterConnection clusterConnection; - - @BeforeClass - public static void setupClient() { - redisClusterClient = new RedisClusterClient(RedisURI.Builder.redis(TestSettings.host(), TestSettings.port(900)).build()); - } - - @AfterClass - public static void closeClient() { - FastShutdown.shutdown(redisClusterClient); - } - - @Before - public void openConnection() throws Exception { - redis = connect(); - ClusterTestUtil.flushDatabaseOfAllNodes(clusterConnection); - } - - @Override - @SuppressWarnings("unchecked") - protected RedisCommands connect() { - clusterConnection = redisClusterClient.connectCluster().getStatefulConnection(); - return ClusterTestUtil.redisCommandsOverCluster(clusterConnection); - } - - // re-implementation because keys have to be on the same slot - @Test - public void brpoplpush() throws Exception { - - redis.rpush("UKPDHs8Zlp", "1", "2"); - redis.rpush("br7EPz9bbj", "3", "4"); - assertThat(redis.brpoplpush(1, 
"UKPDHs8Zlp", "br7EPz9bbj")).isEqualTo("2"); - assertThat(redis.lrange("UKPDHs8Zlp", 0, -1)).isEqualTo(list("1")); - assertThat(redis.lrange("br7EPz9bbj", 0, -1)).isEqualTo(list("2", "3", "4")); - } - - @Test - public void brpoplpushTimeout() throws Exception { - assertThat(redis.brpoplpush(1, "UKPDHs8Zlp", "br7EPz9bbj")).isNull(); - } - - @Test - public void blpop() throws Exception { - redis.rpush("br7EPz9bbj", "2", "3"); - assertThat(redis.blpop(1, "UKPDHs8Zlp", "br7EPz9bbj")).isEqualTo(kv("br7EPz9bbj", "2")); - } - - @Test - public void brpop() throws Exception { - redis.rpush("br7EPz9bbj", "2", "3"); - assertThat(redis.brpop(1, "UKPDHs8Zlp", "br7EPz9bbj")).isEqualTo(kv("br7EPz9bbj", "3")); - } - - @Test - public void rpoplpush() throws Exception { - assertThat(redis.rpoplpush("UKPDHs8Zlp", "br7EPz9bbj")).isNull(); - redis.rpush("UKPDHs8Zlp", "1", "2"); - redis.rpush("br7EPz9bbj", "3", "4"); - assertThat(redis.rpoplpush("UKPDHs8Zlp", "br7EPz9bbj")).isEqualTo("2"); - assertThat(redis.lrange("UKPDHs8Zlp", 0, -1)).isEqualTo(list("1")); - assertThat(redis.lrange("br7EPz9bbj", 0, -1)).isEqualTo(list("2", "3", "4")); - } - -} diff --git a/src/test/java/com/lambdaworks/redis/cluster/commands/StringClusterCommandTest.java b/src/test/java/com/lambdaworks/redis/cluster/commands/StringClusterCommandTest.java deleted file mode 100644 index 9d4691090c..0000000000 --- a/src/test/java/com/lambdaworks/redis/cluster/commands/StringClusterCommandTest.java +++ /dev/null @@ -1,77 +0,0 @@ -package com.lambdaworks.redis.cluster.commands; - -import static org.assertj.core.api.Assertions.assertThat; - -import java.util.LinkedHashMap; -import java.util.Map; - -import org.junit.AfterClass; -import org.junit.Before; -import org.junit.BeforeClass; -import org.junit.Test; - -import com.lambdaworks.redis.FastShutdown; -import com.lambdaworks.redis.ListStreamingAdapter; -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.TestSettings; -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.cluster.ClusterTestUtil; -import com.lambdaworks.redis.cluster.RedisClusterClient; -import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection; -import com.lambdaworks.redis.commands.StringCommandTest; -import com.lambdaworks.redis.internal.LettuceSets; - -/** - * @author Mark Paluch - */ -public class StringClusterCommandTest extends StringCommandTest { - private static RedisClusterClient redisClusterClient; - private StatefulRedisClusterConnection clusterConnection; - - @BeforeClass - public static void setupClient() { - redisClusterClient = new RedisClusterClient(RedisURI.Builder.redis(TestSettings.host(), TestSettings.port(900)).build()); - } - - @AfterClass - public static void closeClient() { - FastShutdown.shutdown(redisClusterClient); - } - - @Before - public void openConnection() throws Exception { - redis = connect(); - ClusterTestUtil.flushDatabaseOfAllNodes(clusterConnection); - } - - @Override - @SuppressWarnings("unchecked") - protected RedisCommands connect() { - clusterConnection = redisClusterClient.connectCluster().getStatefulConnection(); - return ClusterTestUtil.redisCommandsOverCluster(clusterConnection); - } - - @Test - public void msetnx() throws Exception { - redis.set("one", "1"); - Map map = new LinkedHashMap<>(); - map.put("one", "1"); - map.put("two", "2"); - assertThat(redis.msetnx(map)).isTrue(); - redis.del("one"); - assertThat(redis.msetnx(map)).isTrue(); - assertThat(redis.get("two")).isEqualTo("2"); - } - - @Test - public void 
mgetStreaming() throws Exception { - setupMget(); - - ListStreamingAdapter streamingAdapter = new ListStreamingAdapter(); - Long count = redis.mget(streamingAdapter, "one", "two"); - - assertThat(LettuceSets.newHashSet(streamingAdapter.getList())).isEqualTo(LettuceSets.newHashSet(list("1", "2"))); - - assertThat(count.intValue()).isEqualTo(2); - } -} diff --git a/src/test/java/com/lambdaworks/redis/cluster/commands/rx/HashClusterRxCommandTest.java b/src/test/java/com/lambdaworks/redis/cluster/commands/rx/HashClusterRxCommandTest.java deleted file mode 100644 index b597b9ebaa..0000000000 --- a/src/test/java/com/lambdaworks/redis/cluster/commands/rx/HashClusterRxCommandTest.java +++ /dev/null @@ -1,46 +0,0 @@ -package com.lambdaworks.redis.cluster.commands.rx; - -import com.lambdaworks.redis.FastShutdown; -import com.lambdaworks.redis.cluster.ClusterTestUtil; -import org.junit.AfterClass; -import org.junit.Before; -import org.junit.BeforeClass; - -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.TestSettings; -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.cluster.RedisClusterClient; -import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection; -import com.lambdaworks.redis.commands.HashCommandTest; -import com.lambdaworks.redis.commands.rx.RxSyncInvocationHandler; - -/** - * @author Mark Paluch - */ -public class HashClusterRxCommandTest extends HashCommandTest { - - private static RedisClusterClient redisClusterClient; - private StatefulRedisClusterConnection clusterConnection; - - @BeforeClass - public static void setupClient() { - redisClusterClient = new RedisClusterClient(RedisURI.Builder.redis(TestSettings.host(), TestSettings.port(900)).build()); - } - - @AfterClass - public static void closeClient() { - FastShutdown.shutdown(redisClusterClient); - } - - @Before - public void openConnection() throws Exception { - redis = connect(); - ClusterTestUtil.flushDatabaseOfAllNodes(clusterConnection); - } - - @Override - protected RedisCommands connect() { - clusterConnection = redisClusterClient.connectCluster().getStatefulConnection(); - return RxSyncInvocationHandler.sync(redisClusterClient.connectCluster().getStatefulConnection()); - } -} diff --git a/src/test/java/com/lambdaworks/redis/cluster/commands/rx/KeyClusterRxCommandTest.java b/src/test/java/com/lambdaworks/redis/cluster/commands/rx/KeyClusterRxCommandTest.java deleted file mode 100644 index 7b79a2baf6..0000000000 --- a/src/test/java/com/lambdaworks/redis/cluster/commands/rx/KeyClusterRxCommandTest.java +++ /dev/null @@ -1,46 +0,0 @@ -package com.lambdaworks.redis.cluster.commands.rx; - -import org.junit.AfterClass; -import org.junit.Before; -import org.junit.BeforeClass; - -import com.lambdaworks.redis.FastShutdown; -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.TestSettings; -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.cluster.ClusterTestUtil; -import com.lambdaworks.redis.cluster.RedisClusterClient; -import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection; -import com.lambdaworks.redis.cluster.commands.KeyClusterCommandTest; -import com.lambdaworks.redis.commands.rx.RxSyncInvocationHandler; - -/** - * @author Mark Paluch - */ -public class KeyClusterRxCommandTest extends KeyClusterCommandTest { - private static RedisClusterClient redisClusterClient; - private StatefulRedisClusterConnection clusterConnection; - - @BeforeClass - public static void setupClient() { - redisClusterClient = 
new RedisClusterClient( - RedisURI.Builder.redis(TestSettings.host(), TestSettings.port(900)).build()); - } - - @AfterClass - public static void closeClient() { - FastShutdown.shutdown(redisClusterClient); - } - - @Before - public void openConnection() throws Exception { - redis = connect(); - ClusterTestUtil.flushDatabaseOfAllNodes(clusterConnection); - } - - protected RedisCommands connect() { - clusterConnection = redisClusterClient.connect(); - return RxSyncInvocationHandler.sync(redisClusterClient.connect()); - } - -} diff --git a/src/test/java/com/lambdaworks/redis/cluster/commands/rx/ListClusterRxCommandTest.java b/src/test/java/com/lambdaworks/redis/cluster/commands/rx/ListClusterRxCommandTest.java deleted file mode 100644 index 24c76669a3..0000000000 --- a/src/test/java/com/lambdaworks/redis/cluster/commands/rx/ListClusterRxCommandTest.java +++ /dev/null @@ -1,88 +0,0 @@ -package com.lambdaworks.redis.cluster.commands.rx; - -import com.lambdaworks.redis.FastShutdown; -import org.assertj.core.api.Assertions; -import org.junit.AfterClass; -import org.junit.Before; -import org.junit.BeforeClass; -import org.junit.Test; - -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.TestSettings; -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.cluster.ClusterTestUtil; -import com.lambdaworks.redis.cluster.RedisClusterClient; -import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection; -import com.lambdaworks.redis.commands.ListCommandTest; -import com.lambdaworks.redis.commands.rx.RxSyncInvocationHandler; - -import static org.assertj.core.api.Assertions.assertThat; - -/** - * @author Mark Paluch - */ -public class ListClusterRxCommandTest extends ListCommandTest { - private static RedisClusterClient redisClusterClient; - private StatefulRedisClusterConnection clusterConnection; - - @BeforeClass - public static void setupClient() { - redisClusterClient = new RedisClusterClient(RedisURI.Builder.redis(TestSettings.host(), TestSettings.port(900)).build()); - } - - @AfterClass - public static void closeClient() { - FastShutdown.shutdown(redisClusterClient); - } - - @Before - public void openConnection() throws Exception { - redis = connect(); - ClusterTestUtil.flushDatabaseOfAllNodes(clusterConnection); - } - - @Override - protected RedisCommands connect() { - clusterConnection = redisClusterClient.connectCluster().getStatefulConnection(); - return RxSyncInvocationHandler.sync(redisClusterClient.connectCluster().getStatefulConnection()); - } - - // re-implementation because keys have to be on the same slot - @Test - public void brpoplpush() throws Exception { - - redis.rpush("UKPDHs8Zlp", "1", "2"); - redis.rpush("br7EPz9bbj", "3", "4"); - assertThat(redis.brpoplpush(1, "UKPDHs8Zlp", "br7EPz9bbj")).isEqualTo("2"); - assertThat(redis.lrange("UKPDHs8Zlp", 0, -1)).isEqualTo(list("1")); - assertThat(redis.lrange("br7EPz9bbj", 0, -1)).isEqualTo(list("2", "3", "4")); - } - - @Test - public void brpoplpushTimeout() throws Exception { - assertThat(redis.brpoplpush(1, "UKPDHs8Zlp", "br7EPz9bbj")).isNull(); - } - - @Test - public void blpop() throws Exception { - redis.rpush("br7EPz9bbj", "2", "3"); - assertThat(redis.blpop(1, "UKPDHs8Zlp", "br7EPz9bbj")).isEqualTo(kv("br7EPz9bbj", "2")); - } - - @Test - public void brpop() throws Exception { - redis.rpush("br7EPz9bbj", "2", "3"); - assertThat(redis.brpop(1, "UKPDHs8Zlp", "br7EPz9bbj")).isEqualTo(kv("br7EPz9bbj", "3")); - } - - @Test - public void rpoplpush() throws Exception { - 
assertThat(redis.rpoplpush("UKPDHs8Zlp", "br7EPz9bbj")).isNull(); - redis.rpush("UKPDHs8Zlp", "1", "2"); - redis.rpush("br7EPz9bbj", "3", "4"); - assertThat(redis.rpoplpush("UKPDHs8Zlp", "br7EPz9bbj")).isEqualTo("2"); - assertThat(redis.lrange("UKPDHs8Zlp", 0, -1)).isEqualTo(list("1")); - assertThat(redis.lrange("br7EPz9bbj", 0, -1)).isEqualTo(list("2", "3", "4")); - } - -} diff --git a/src/test/java/com/lambdaworks/redis/cluster/commands/rx/StringClusterRxCommandTest.java b/src/test/java/com/lambdaworks/redis/cluster/commands/rx/StringClusterRxCommandTest.java deleted file mode 100644 index f9fcff9f30..0000000000 --- a/src/test/java/com/lambdaworks/redis/cluster/commands/rx/StringClusterRxCommandTest.java +++ /dev/null @@ -1,81 +0,0 @@ -package com.lambdaworks.redis.cluster.commands.rx; - -import static org.assertj.core.api.Assertions.assertThat; - -import java.util.LinkedHashMap; -import java.util.Map; - -import org.junit.AfterClass; -import org.junit.Before; -import org.junit.BeforeClass; -import org.junit.Test; - -import rx.Observable; - -import com.lambdaworks.redis.FastShutdown; -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.TestSettings; -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.cluster.ClusterTestUtil; -import com.lambdaworks.redis.cluster.RedisClusterClient; -import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection; -import com.lambdaworks.redis.cluster.api.rx.RedisAdvancedClusterReactiveCommands; -import com.lambdaworks.redis.commands.StringCommandTest; -import com.lambdaworks.redis.commands.rx.RxSyncInvocationHandler; - -/** - * @author Mark Paluch - */ -public class StringClusterRxCommandTest extends StringCommandTest { - private static RedisClusterClient redisClusterClient; - private StatefulRedisClusterConnection clusterConnection; - - @BeforeClass - public static void setupClient() { - redisClusterClient = new RedisClusterClient(RedisURI.Builder.redis(TestSettings.host(), TestSettings.port(900)).build()); - } - - @AfterClass - public static void closeClient() { - FastShutdown.shutdown(redisClusterClient); - } - - @Before - public void openConnection() throws Exception { - redis = connect(); - ClusterTestUtil.flushDatabaseOfAllNodes(clusterConnection); - } - - @Override - protected RedisCommands connect() { - clusterConnection = redisClusterClient.connectCluster().getStatefulConnection(); - return RxSyncInvocationHandler.sync(redisClusterClient.connectCluster().getStatefulConnection()); - } - - @Test - public void msetnx() throws Exception { - redis.set("one", "1"); - Map map = new LinkedHashMap<>(); - map.put("one", "1"); - map.put("two", "2"); - assertThat(redis.msetnx(map)).isTrue(); - redis.del("one"); - assertThat(redis.msetnx(map)).isTrue(); - assertThat(redis.get("two")).isEqualTo("2"); - } - - @Test - public void mget() throws Exception { - - redis.set(key, value); - redis.set("key1", value); - redis.set("key2", value); - - RedisAdvancedClusterReactiveCommands reactive = clusterConnection.reactive(); - - Observable mget = reactive.mget(key, "key1", "key2"); - String first = mget.toBlocking().first(); - assertThat(first).isEqualTo(value); - } - -} diff --git a/src/test/java/com/lambdaworks/redis/cluster/models/partitions/PartitionsTest.java b/src/test/java/com/lambdaworks/redis/cluster/models/partitions/PartitionsTest.java deleted file mode 100644 index 17b3317a18..0000000000 --- a/src/test/java/com/lambdaworks/redis/cluster/models/partitions/PartitionsTest.java +++ /dev/null @@ -1,320 +0,0 
@@ -package com.lambdaworks.redis.cluster.models.partitions; - -import static org.assertj.core.api.AssertionsForInterfaceTypes.assertThat; - -import java.util.Arrays; -import java.util.HashSet; -import java.util.Iterator; - -import org.junit.Test; - -import com.lambdaworks.redis.RedisURI; - -/** - * @author Mark Paluch - */ -public class PartitionsTest { - - private RedisClusterNode node1 = new RedisClusterNode(RedisURI.create("localhost", 6379), "a", true, "", 0, 0, 0, - Arrays.asList(1, 2, 3), new HashSet<>()); - private RedisClusterNode node2 = new RedisClusterNode(RedisURI.create("localhost", 6380), "b", true, "", 0, 0, 0, - Arrays.asList(4, 5, 6), new HashSet<>()); - - @Test - public void contains() throws Exception { - - Partitions partitions = new Partitions(); - partitions.add(node1); - - assertThat(partitions.contains(node1)).isTrue(); - assertThat(partitions.contains(node2)).isFalse(); - } - - @Test - public void containsUsesReadView() throws Exception { - - Partitions partitions = new Partitions(); - partitions.getPartitions().add(node1); - - assertThat(partitions.contains(node1)).isFalse(); - partitions.updateCache(); - assertThat(partitions.contains(node1)).isTrue(); - } - - @Test - public void containsAll() throws Exception { - - Partitions partitions = new Partitions(); - partitions.add(node1); - partitions.add(node2); - - assertThat(partitions.containsAll(Arrays.asList(node1, node2))).isTrue(); - } - - @Test - public void containsAllUsesReadView() throws Exception { - - Partitions partitions = new Partitions(); - partitions.getPartitions().add(node1); - - assertThat(partitions.containsAll(Arrays.asList(node1))).isFalse(); - partitions.updateCache(); - assertThat(partitions.containsAll(Arrays.asList(node1))).isTrue(); - } - - @Test - public void add() throws Exception { - - Partitions partitions = new Partitions(); - partitions.add(node1); - partitions.add(node2); - - assertThat(partitions.getPartitionBySlot(1)).isEqualTo(node1); - assertThat(partitions.getPartitionBySlot(5)).isEqualTo(node2); - } - - @Test - public void addPartitionClearsCache() throws Exception { - - Partitions partitions = new Partitions(); - partitions.addPartition(node1); - - assertThat(partitions.getPartitionBySlot(1)).isNull(); - } - - @Test - public void addAll() throws Exception { - - Partitions partitions = new Partitions(); - partitions.addAll(Arrays.asList(node1, node2)); - - assertThat(partitions.getPartitionBySlot(1)).isEqualTo(node1); - assertThat(partitions.getPartitionBySlot(5)).isEqualTo(node2); - } - - @Test - public void getPartitionBySlot() throws Exception { - - Partitions partitions = new Partitions(); - - assertThat(partitions.getPartitionBySlot(1)).isNull(); - - partitions.add(node1); - assertThat(partitions.getPartitionBySlot(1)).isEqualTo(node1); - } - - @Test - public void remove() throws Exception { - - Partitions partitions = new Partitions(); - partitions.addAll(Arrays.asList(node1, node2)); - partitions.remove(node1); - - assertThat(partitions.getPartitionBySlot(1)).isNull(); - assertThat(partitions.getPartitionBySlot(5)).isEqualTo(node2); - } - - @Test - public void removeAll() throws Exception { - - Partitions partitions = new Partitions(); - partitions.addAll(Arrays.asList(node1, node2)); - partitions.removeAll(Arrays.asList(node1)); - - assertThat(partitions.getPartitionBySlot(1)).isNull(); - assertThat(partitions.getPartitionBySlot(5)).isEqualTo(node2); - } - - @Test - public void clear() throws Exception { - - Partitions partitions = new Partitions(); - 
partitions.addAll(Arrays.asList(node1, node2)); - partitions.clear(); - - assertThat(partitions.getPartitionBySlot(1)).isNull(); - assertThat(partitions.getPartitionBySlot(5)).isNull(); - } - - @Test - public void retainAll() throws Exception { - - Partitions partitions = new Partitions(); - partitions.addAll(Arrays.asList(node1, node2)); - partitions.retainAll(Arrays.asList(node2)); - - assertThat(partitions.getPartitionBySlot(1)).isNull(); - assertThat(partitions.getPartitionBySlot(5)).isEqualTo(node2); - } - - @Test - public void toArray() throws Exception { - - Partitions partitions = new Partitions(); - partitions.addAll(Arrays.asList(node1, node2)); - - assertThat(partitions.toArray()).contains(node1, node2); - } - - @Test - public void toArrayUsesReadView() throws Exception { - - Partitions partitions = new Partitions(); - partitions.getPartitions().addAll(Arrays.asList(node1, node2)); - - assertThat(partitions.toArray()).doesNotContain(node1, node2); - partitions.updateCache(); - assertThat(partitions.toArray()).contains(node1, node2); - } - - @Test - public void toArray2() throws Exception { - - Partitions partitions = new Partitions(); - partitions.addAll(Arrays.asList(node1, node2)); - - assertThat(partitions.toArray(new RedisClusterNode[2])).contains(node1, node2); - } - - @Test - public void toArray2UsesReadView() throws Exception { - - Partitions partitions = new Partitions(); - partitions.getPartitions().addAll(Arrays.asList(node1, node2)); - - assertThat(partitions.toArray(new RedisClusterNode[2])).doesNotContain(node1, node2); - - partitions.updateCache(); - - assertThat(partitions.toArray(new RedisClusterNode[2])).contains(node1, node2); - } - - @Test - public void getPartitionByNodeId() throws Exception { - - Partitions partitions = new Partitions(); - partitions.add(node1); - partitions.add(node2); - - assertThat(partitions.getPartitionByNodeId("a")).isEqualTo(node1); - assertThat(partitions.getPartitionByNodeId("c")).isNull(); - } - - @Test - public void reload() throws Exception { - - RedisClusterNode other = new RedisClusterNode(RedisURI.create("localhost", 6666), "c", true, "", 0, 0, 0, - Arrays.asList(1, 2, 3, 4, 5, 6), new HashSet<>()); - - Partitions partitions = new Partitions(); - partitions.add(other); - - partitions.reload(Arrays.asList(node1, node1)); - - assertThat(partitions.getPartitionByNodeId("a")).isEqualTo(node1); - assertThat(partitions.getPartitionBySlot(1)).isEqualTo(node1); - } - - @Test - public void reloadEmpty() throws Exception { - - Partitions partitions = new Partitions(); - partitions.reload(Arrays.asList()); - - assertThat(partitions.getPartitionBySlot(1)).isNull(); - } - - @Test - public void isEmpty() throws Exception { - - Partitions partitions = new Partitions(); - partitions.add(node1); - - assertThat(partitions.isEmpty()).isFalse(); - } - - @Test - public void size() throws Exception { - - Partitions partitions = new Partitions(); - partitions.add(node1); - - assertThat(partitions.size()).isEqualTo(1); - } - - @Test - public void sizeUsesReadView() throws Exception { - - Partitions partitions = new Partitions(); - partitions.getPartitions().add(node1); - - assertThat(partitions.size()).isEqualTo(0); - - partitions.updateCache(); - - assertThat(partitions.size()).isEqualTo(1); - } - - @Test - public void getPartition() throws Exception { - - Partitions partitions = new Partitions(); - partitions.add(node1); - - assertThat(partitions.getPartition(0)).isEqualTo(node1); - } - - @Test - public void iterator() throws Exception { - - 
Partitions partitions = new Partitions(); - partitions.add(node1); - - assertThat(partitions.iterator().next()).isEqualTo(node1); - } - - @Test - public void iteratorUsesReadView() throws Exception { - - Partitions partitions = new Partitions(); - partitions.getPartitions().add(node1); - - assertThat(partitions.iterator().hasNext()).isFalse(); - partitions.updateCache(); - - assertThat(partitions.iterator().hasNext()).isTrue(); - } - - @Test - public void iteratorIsSafeDuringUpdate() throws Exception { - - Partitions partitions = new Partitions(); - partitions.add(node1); - partitions.add(node2); - - Iterator iterator = partitions.iterator(); - - partitions.remove(node2); - - assertThat(iterator.hasNext()).isTrue(); - assertThat(iterator.next()).isEqualTo(node1); - assertThat(iterator.next()).isEqualTo(node2); - - iterator = partitions.iterator(); - - partitions.remove(node2); - - assertThat(iterator.hasNext()).isTrue(); - assertThat(iterator.next()).isEqualTo(node1); - assertThat(iterator.hasNext()).isFalse(); - } - - @Test - public void testToString() throws Exception { - - Partitions partitions = new Partitions(); - partitions.add(node1); - - assertThat(partitions.toString()).startsWith("Partitions ["); - } -} \ No newline at end of file diff --git a/src/test/java/com/lambdaworks/redis/cluster/models/partitions/RedisClusterNodeTest.java b/src/test/java/com/lambdaworks/redis/cluster/models/partitions/RedisClusterNodeTest.java deleted file mode 100644 index 759a965dee..0000000000 --- a/src/test/java/com/lambdaworks/redis/cluster/models/partitions/RedisClusterNodeTest.java +++ /dev/null @@ -1,28 +0,0 @@ -package com.lambdaworks.redis.cluster.models.partitions; - -import static org.assertj.core.api.Assertions.*; - -import org.junit.Test; - -import com.lambdaworks.redis.RedisURI; - -public class RedisClusterNodeTest { - @Test - public void testEquality() throws Exception { - RedisClusterNode node = new RedisClusterNode(); - - assertThat(node).isEqualTo(new RedisClusterNode()); - assertThat(node.hashCode()).isEqualTo(new RedisClusterNode().hashCode()); - - node.setUri(new RedisURI()); - assertThat(node.hashCode()).isNotEqualTo(new RedisClusterNode()); - - } - - @Test - public void testToString() throws Exception { - RedisClusterNode node = new RedisClusterNode(); - - assertThat(node.toString()).contains(RedisClusterNode.class.getSimpleName()); - } -} diff --git a/src/test/java/com/lambdaworks/redis/cluster/models/slots/ClusterSlotsParserTest.java b/src/test/java/com/lambdaworks/redis/cluster/models/slots/ClusterSlotsParserTest.java deleted file mode 100644 index 7a27c29b40..0000000000 --- a/src/test/java/com/lambdaworks/redis/cluster/models/slots/ClusterSlotsParserTest.java +++ /dev/null @@ -1,168 +0,0 @@ -package com.lambdaworks.redis.cluster.models.slots; - -import static org.assertj.core.api.Assertions.assertThat; - -import java.util.ArrayList; -import java.util.Arrays; -import java.util.List; - -import org.junit.Test; - -import com.google.common.net.HostAndPort; -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; -import com.lambdaworks.redis.internal.LettuceLists; - -@SuppressWarnings("unchecked") -public class ClusterSlotsParserTest { - - @Test - public void testEmpty() throws Exception { - List result = ClusterSlotsParser.parse(new ArrayList<>()); - assertThat(result).isNotNull().isEmpty(); - } - - @Test - public void testOneString() throws Exception { - List result = ClusterSlotsParser.parse(LettuceLists.newList("")); - assertThat(result).isNotNull().isEmpty(); - 
} - - @Test - public void testOneStringInList() throws Exception { - List list = Arrays.asList(LettuceLists.newList("0")); - List result = ClusterSlotsParser.parse(list); - assertThat(result).isNotNull().isEmpty(); - } - - @Test - public void testParse() throws Exception { - List list = Arrays.asList(LettuceLists.newList("0", "1", LettuceLists.newList("1", "2"))); - List result = ClusterSlotsParser.parse(list); - assertThat(result).hasSize(1); - - assertThat(result.get(0).getMaster()).isNotNull(); - assertThat(result.get(0).getMasterNode()).isNotNull(); - } - - @Test - public void testParseWithSlave() throws Exception { - List list = Arrays.asList(LettuceLists.newList("100", "200", LettuceLists.newList("1", "2", "nodeId1"), - LettuceLists.newList("1", 2, "nodeId2"))); - List result = ClusterSlotsParser.parse(list); - assertThat(result).hasSize(1); - ClusterSlotRange clusterSlotRange = result.get(0); - - assertThat(clusterSlotRange.getMaster()).isNotNull(); - assertThat(clusterSlotRange.getMaster().getHostText()).isEqualTo("1"); - assertThat(clusterSlotRange.getMaster().getPort()).isEqualTo(2); - - RedisClusterNode masterNode = clusterSlotRange.getMasterNode(); - assertThat(masterNode).isNotNull(); - assertThat(masterNode.getNodeId()).isEqualTo("nodeId1"); - assertThat(masterNode.getUri().getHost()).isEqualTo("1"); - assertThat(masterNode.getUri().getPort()).isEqualTo(2); - assertThat(masterNode.getFlags()).contains(RedisClusterNode.NodeFlag.MASTER); - assertThat(masterNode.getSlots()).contains(100, 101, 199, 200); - assertThat(masterNode.getSlots()).doesNotContain(99, 201); - assertThat(masterNode.getSlots()).hasSize(101); - - - assertThat(clusterSlotRange.getSlaves()).hasSize(1); - assertThat(clusterSlotRange.getSlaveNodes()).hasSize(1); - - HostAndPort slave = clusterSlotRange.getSlaves().get(0); - assertThat(slave.getHostText()).isEqualTo("1"); - assertThat(slave.getPort()).isEqualTo(2); - - RedisClusterNode slaveNode = clusterSlotRange.getSlaveNodes().get(0); - - assertThat(slaveNode.getNodeId()).isEqualTo("nodeId2"); - assertThat(slaveNode.getSlaveOf()).isEqualTo("nodeId1"); - assertThat(slaveNode.getFlags()).contains(RedisClusterNode.NodeFlag.SLAVE); - } - - @Test - public void testSameNode() throws Exception { - List list = Arrays.asList( - LettuceLists.newList("100", "200", LettuceLists.newList("1", "2", "nodeId1"), - LettuceLists.newList("1", 2, "nodeId2")), - LettuceLists.newList("200", "300", LettuceLists.newList("1", "2", "nodeId1"), - LettuceLists.newList("1", 2, "nodeId2"))); - - List result = ClusterSlotsParser.parse(list); - assertThat(result).hasSize(2); - - assertThat(result.get(0).getMasterNode()).isSameAs(result.get(1).getMasterNode()); - - RedisClusterNode masterNode = result.get(0).getMasterNode(); - assertThat(masterNode).isNotNull(); - assertThat(masterNode.getNodeId()).isEqualTo("nodeId1"); - assertThat(masterNode.getUri().getHost()).isEqualTo("1"); - assertThat(masterNode.getUri().getPort()).isEqualTo(2); - assertThat(masterNode.getFlags()).contains(RedisClusterNode.NodeFlag.MASTER); - assertThat(masterNode.getSlots()).contains(100, 101, 199, 200, 203); - assertThat(masterNode.getSlots()).doesNotContain(99, 301); - assertThat(masterNode.getSlots()).hasSize(201); - } - - @Test - public void testHostAndPortConstructor() throws Exception { - - ClusterSlotRange clusterSlotRange = new ClusterSlotRange(100, 200, HostAndPort.fromParts("1", 2), LettuceLists.newList( - HostAndPort.fromParts("1", 2))); - - RedisClusterNode masterNode = clusterSlotRange.getMasterNode(); - 
assertThat(masterNode).isNotNull(); - assertThat(masterNode.getNodeId()).isNull(); - assertThat(masterNode.getUri().getHost()).isEqualTo("1"); - assertThat(masterNode.getUri().getPort()).isEqualTo(2); - assertThat(masterNode.getFlags()).contains(RedisClusterNode.NodeFlag.MASTER); - - assertThat(clusterSlotRange.getSlaves()).hasSize(1); - assertThat(clusterSlotRange.getSlaveNodes()).hasSize(1); - - HostAndPort slave = clusterSlotRange.getSlaves().get(0); - assertThat(slave.getHostText()).isEqualTo("1"); - assertThat(slave.getPort()).isEqualTo(2); - - RedisClusterNode slaveNode = clusterSlotRange.getSlaveNodes().get(0); - - assertThat(slaveNode.getNodeId()).isNull(); - assertThat(slaveNode.getSlaveOf()).isNull(); - assertThat(slaveNode.getFlags()).contains(RedisClusterNode.NodeFlag.SLAVE); - - } - - @Test - public void testParseWithSlaveAndNodeIds() throws Exception { - List list = Arrays.asList(LettuceLists.newList("0", "1", LettuceLists.newList("1", "2"), LettuceLists.newList("1", 2))); - List result = ClusterSlotsParser.parse(list); - assertThat(result).hasSize(1); - assertThat(result.get(0).getMaster()).isNotNull(); - assertThat(result.get(0).getSlaves()).hasSize(1); - } - - @Test(expected = IllegalArgumentException.class) - public void testParseInvalidMaster() throws Exception { - List list = Arrays.asList(LettuceLists.newList("0", "1", LettuceLists.newList("1"))); - ClusterSlotsParser.parse(list); - } - - @Test(expected = IllegalArgumentException.class) - public void testParseInvalidMaster2() throws Exception { - List list = Arrays.asList(LettuceLists.newList("0", "1", "")); - ClusterSlotsParser.parse(list); - } - - @Test - public void testModel() throws Exception { - - ClusterSlotRange range = new ClusterSlotRange(); - range.setFrom(1); - range.setTo(2); - range.setSlaves(new ArrayList<>()); - range.setMaster(HostAndPort.fromHost("localhost")); - - assertThat(range.toString()).contains(ClusterSlotRange.class.getSimpleName()); - } -} diff --git a/src/test/java/com/lambdaworks/redis/cluster/pubsub/PubSubClusterTest.java b/src/test/java/com/lambdaworks/redis/cluster/pubsub/PubSubClusterTest.java deleted file mode 100644 index 98530cbdcd..0000000000 --- a/src/test/java/com/lambdaworks/redis/cluster/pubsub/PubSubClusterTest.java +++ /dev/null @@ -1,175 +0,0 @@ -package com.lambdaworks.redis.cluster.pubsub; - -import static org.assertj.core.api.Assertions.assertThat; - -import java.util.List; -import java.util.concurrent.BlockingQueue; - -import org.junit.After; -import org.junit.Before; -import org.junit.Test; - -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.cluster.AbstractClusterTest; -import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection; -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; -import com.lambdaworks.redis.internal.LettuceFactories; -import com.lambdaworks.redis.pubsub.RedisPubSubListener; -import com.lambdaworks.redis.pubsub.StatefulRedisPubSubConnection; - -/** - * @author Mark Paluch - */ -public class PubSubClusterTest extends AbstractClusterTest implements RedisPubSubListener { - - private BlockingQueue channels; - private BlockingQueue patterns; - private BlockingQueue messages; - private BlockingQueue counts; - - private StatefulRedisClusterConnection connection; - private StatefulRedisPubSubConnection pubSubConnection; - private StatefulRedisPubSubConnection pubSubConnection2; - - @Before - public void openPubSubConnection() throws Exception { - connection = clusterClient.connect(); - 
pubSubConnection = clusterClient.connectPubSub(); - pubSubConnection2 = clusterClient.connectPubSub(); - channels = LettuceFactories.newBlockingQueue(); - patterns = LettuceFactories.newBlockingQueue(); - messages = LettuceFactories.newBlockingQueue(); - counts = LettuceFactories.newBlockingQueue(); - } - - @After - public void closePubSubConnection() throws Exception { - connection.close(); - pubSubConnection.close(); - pubSubConnection2.close(); - } - - @Test - public void testRegularClientPubSubChannels() throws Exception { - - String nodeId = pubSubConnection.sync().clusterMyId(); - RedisClusterNode otherNode = getOtherThan(nodeId); - pubSubConnection.sync().subscribe(key); - - List channelsOnSubscribedNode = connection.getConnection(nodeId).sync().pubsubChannels(); - assertThat(channelsOnSubscribedNode).hasSize(1); - - List channelsOnOtherNode = connection.getConnection(otherNode.getNodeId()).sync().pubsubChannels(); - assertThat(channelsOnOtherNode).isEmpty(); - } - - @Test - public void testRegularClientPublish() throws Exception { - - String nodeId = pubSubConnection.sync().clusterMyId(); - RedisClusterNode otherNode = getOtherThan(nodeId); - pubSubConnection.sync().subscribe(key); - pubSubConnection.addListener(this); - - connection.getConnection(nodeId).sync().publish(key, value); - assertThat(messages.take()).isEqualTo(value); - - connection.getConnection(otherNode.getNodeId()).sync().publish(key, value); - assertThat(messages.take()).isEqualTo(value); - } - - - @Test - public void testPubSubClientPublish() throws Exception { - - String nodeId = pubSubConnection.sync().clusterMyId(); - pubSubConnection.sync().subscribe(key); - pubSubConnection.addListener(this); - - assertThat(pubSubConnection2.sync().clusterMyId()).isEqualTo(nodeId); - - pubSubConnection2.sync().publish(key, value); - assertThat(messages.take()).isEqualTo(value); - } - - @Test - public void testConnectToLeastClientsNode() throws Exception { - - clusterClient.reloadPartitions(); - String nodeId = pubSubConnection.sync().clusterMyId(); - - StatefulRedisPubSubConnection connectionAfterPartitionReload = clusterClient.connectPubSub(); - String newConnectionNodeId = connectionAfterPartitionReload.sync().clusterMyId(); - connectionAfterPartitionReload.close(); - - assertThat(nodeId).isNotEqualTo(newConnectionNodeId); - } - - @Test - public void testRegularClientPubSubPublish() throws Exception { - - String nodeId = pubSubConnection.sync().clusterMyId(); - RedisClusterNode otherNode = getOtherThan(nodeId); - pubSubConnection.sync().subscribe(key); - pubSubConnection.addListener(this); - - List channelsOnSubscribedNode = connection.getConnection(nodeId).sync().pubsubChannels(); - assertThat(channelsOnSubscribedNode).hasSize(1); - - RedisCommands otherNodeConnection = connection.getConnection(otherNode.getNodeId()).sync(); - otherNodeConnection.publish(key, value); - assertThat(channels.take()).isEqualTo(key); - - } - - private RedisClusterNode getOtherThan(String nodeId) { - for (RedisClusterNode redisClusterNode : clusterClient.getPartitions()) { - if (redisClusterNode.getNodeId().equals(nodeId)) { - continue; - } - return redisClusterNode; - } - - throw new IllegalStateException("No other nodes than " + nodeId + " available"); - } - - // RedisPubSubListener implementation - - @Override - public void message(String channel, String message) { - channels.add(channel); - messages.add(message); - } - - @Override - public void message(String pattern, String channel, String message) { - patterns.add(pattern); - 
channels.add(channel); - messages.add(message); - } - - @Override - public void subscribed(String channel, long count) { - channels.add(channel); - counts.add(count); - } - - @Override - public void psubscribed(String pattern, long count) { - patterns.add(pattern); - counts.add(count); - } - - @Override - public void unsubscribed(String channel, long count) { - channels.add(channel); - counts.add(count); - } - - @Override - public void punsubscribed(String pattern, long count) { - patterns.add(pattern); - counts.add(count); - } - -} diff --git a/src/test/java/com/lambdaworks/redis/cluster/topology/ClusterTopologyRefreshTest.java b/src/test/java/com/lambdaworks/redis/cluster/topology/ClusterTopologyRefreshTest.java deleted file mode 100644 index a0216c668e..0000000000 --- a/src/test/java/com/lambdaworks/redis/cluster/topology/ClusterTopologyRefreshTest.java +++ /dev/null @@ -1,362 +0,0 @@ -package com.lambdaworks.redis.cluster.topology; - -import static org.assertj.core.api.Assertions.assertThat; -import static org.mockito.Matchers.any; -import static org.mockito.Matchers.eq; -import static org.mockito.Mockito.verify; -import static org.mockito.Mockito.verifyNoMoreInteractions; -import static org.mockito.Mockito.when; - -import java.net.InetSocketAddress; -import java.nio.ByteBuffer; -import java.util.*; -import java.util.concurrent.TimeUnit; - -import org.junit.Before; -import org.junit.Test; -import org.junit.runner.RunWith; -import org.mockito.Mock; -import org.mockito.runners.MockitoJUnitRunner; - -import com.lambdaworks.redis.RedisException; -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.api.async.RedisAsyncCommands; -import com.lambdaworks.redis.cluster.RedisClusterClient; -import com.lambdaworks.redis.cluster.models.partitions.Partitions; -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; -import com.lambdaworks.redis.codec.RedisCodec; -import com.lambdaworks.redis.protocol.CommandType; -import com.lambdaworks.redis.resource.ClientResources; -import com.lambdaworks.redis.resource.DnsResolvers; - -/** - * @author Mark Paluch - */ -@RunWith(MockitoJUnitRunner.class) -public class ClusterTopologyRefreshTest { - - public final static long COMMAND_TIMEOUT_NS = TimeUnit.MILLISECONDS.toNanos(10); - - public static final String NODE_1_VIEW = "1 127.0.0.1:7380 master,myself - 0 1401258245007 2 disconnected 8000-11999\n" - + "2 127.0.0.1:7381 master - 111 1401258245007 222 connected 7000 12000 12002-16383\n"; - public static final String NODE_2_VIEW = "1 127.0.0.1:7380 master - 0 1401258245007 2 disconnected 8000-11999\n" - + "2 127.0.0.1:7381 master,myself - 111 1401258245007 222 connected 7000 12000 12002-16383\n"; - - private ClusterTopologyRefresh sut; - - @Mock - private RedisClusterClient client; - - @Mock - private StatefulRedisConnection connection; - - @Mock - private ClientResources clientResources; - - @Mock - private NodeConnectionFactory nodeConnectionFactory; - - @Mock - private StatefulRedisConnection connection1; - - @Mock - private RedisAsyncCommands asyncCommands1; - - @Mock - private StatefulRedisConnection connection2; - - @Mock - private RedisAsyncCommands asyncCommands2; - - @Before - public void before() throws Exception { - - when(clientResources.dnsResolver()).thenReturn(DnsResolvers.JVM_DEFAULT); - when(connection1.async()).thenReturn(asyncCommands1); - when(connection2.async()).thenReturn(asyncCommands2); - - when(connection1.dispatch(any())).thenAnswer(invocation 
-> { - - TimedAsyncCommand command = (TimedAsyncCommand) invocation.getArguments()[0]; - if (command.getType() == CommandType.CLUSTER) { - command.getOutput().set(ByteBuffer.wrap(NODE_1_VIEW.getBytes())); - command.complete(); - } - - if (command.getType() == CommandType.CLIENT) { - command.getOutput().set(ByteBuffer.wrap("c1\nc2\n".getBytes())); - command.complete(); - } - - command.encodedAtNs = 10; - command.completedAtNs = 50; - - return command; - }); - - when(connection2.dispatch(any())).thenAnswer(invocation -> { - - TimedAsyncCommand command = (TimedAsyncCommand) invocation.getArguments()[0]; - if (command.getType() == CommandType.CLUSTER) { - command.getOutput().set(ByteBuffer.wrap(NODE_2_VIEW.getBytes())); - command.complete(); - } - - if (command.getType() == CommandType.CLIENT) { - command.getOutput().set(ByteBuffer.wrap("".getBytes())); - command.complete(); - } - - command.encodedAtNs = 10; - command.completedAtNs = 20; - - return command; - }); - - sut = new ClusterTopologyRefresh(nodeConnectionFactory, clientResources); - } - - @Test - public void getNodeSpecificViewsNode1IsFasterThanNode2() throws Exception { - - Requests requests = createClusterNodesRequests(1, NODE_1_VIEW); - requests = createClusterNodesRequests(2, NODE_2_VIEW).mergeWith(requests); - - Requests clientRequests = createClientListRequests(1, "c1\nc2\n").mergeWith(createClientListRequests(2, "c1\nc2\n")); - - NodeTopologyViews nodeSpecificViews = sut.getNodeSpecificViews(requests, clientRequests, COMMAND_TIMEOUT_NS); - - Collection values = nodeSpecificViews.toMap().values(); - - assertThat(values).hasSize(2); - - for (Partitions value : values) { - assertThat(value).extracting("nodeId").containsExactly("1", "2"); - } - } - - @Test - public void getNodeSpecificViewTestingNoAddrFilter() throws Exception { - - String nodes1 = "n1 10.37.110.63:7000 slave n3 0 1452553664848 43 connected\n" - + "n2 10.37.110.68:7000 slave n6 0 1452553664346 45 connected\n" - + "badSlave :0 slave,fail,noaddr n5 1449160058028 1449160053146 46 disconnected\n" - + "n3 10.37.110.69:7000 master - 0 1452553662842 43 connected 3829-6787 7997-9999\n" - + "n4 10.37.110.62:7000 slave n3 0 1452553663844 43 connected\n" - + "n5 10.37.110.70:7000 myself,master - 0 0 46 connected 10039-14999\n" - + "n6 10.37.110.65:7000 master - 0 1452553663844 45 connected 0-3828 6788-7996 10000-10038 15000-16383"; - - Requests clusterNodesRequests = createClusterNodesRequests(1, nodes1); - Requests clientRequests = createClientListRequests(1, "c1\nc2\n"); - - NodeTopologyViews nodeSpecificViews = sut.getNodeSpecificViews(clusterNodesRequests, clientRequests, - COMMAND_TIMEOUT_NS); - - List values = new ArrayList<>(nodeSpecificViews.toMap().values()); - - assertThat(values).hasSize(1); - - for (Partitions value : values) { - assertThat(value).extracting("nodeId").containsOnly("n1", "n2", "n3", "n4", "n5", "n6"); - } - - RedisClusterNodeSnapshot firstPartition = (RedisClusterNodeSnapshot) values.get(0).getPartition(0); - RedisClusterNodeSnapshot selfPartition = (RedisClusterNodeSnapshot) values.get(0).getPartition(4); - assertThat(firstPartition.getConnectedClients()).isEqualTo(2); - assertThat(selfPartition.getConnectedClients()).isNull(); - - } - - @Test - public void getNodeSpecificViewsNode2IsFasterThanNode1() throws Exception { - - Requests clusterNodesRequests = createClusterNodesRequests(5, NODE_1_VIEW); - clusterNodesRequests = createClusterNodesRequests(1, NODE_2_VIEW).mergeWith(clusterNodesRequests); - - Requests clientRequests = 
createClientListRequests(5, "c1\nc2\n").mergeWith(createClientListRequests(1, "c1\nc2\n")); - - NodeTopologyViews nodeSpecificViews = sut.getNodeSpecificViews(clusterNodesRequests, clientRequests, - COMMAND_TIMEOUT_NS); - List values = new ArrayList<>(nodeSpecificViews.toMap().values()); - - assertThat(values).hasSize(2); - - for (Partitions value : values) { - assertThat(value).extracting("nodeId").containsExactly("2", "1"); - } - } - - @Test - public void shouldAttemptToConnectOnlyOnce() throws Exception { - - List seed = Arrays.asList(RedisURI.create("127.0.0.1", 7380), RedisURI.create("127.0.0.1", 7381)); - - when(nodeConnectionFactory.connectToNode(any(RedisCodec.class), eq(new InetSocketAddress("127.0.0.1", 7380)))) - .thenReturn((StatefulRedisConnection) connection1); - when(nodeConnectionFactory.connectToNode(any(RedisCodec.class), eq(new InetSocketAddress("127.0.0.1", 7381)))) - .thenThrow(new RedisException("connection failed")); - - sut.loadViews(seed, true); - - verify(nodeConnectionFactory).connectToNode(any(RedisCodec.class), eq(new InetSocketAddress("127.0.0.1", 7380))); - verify(nodeConnectionFactory).connectToNode(any(RedisCodec.class), eq(new InetSocketAddress("127.0.0.1", 7381))); - } - - @Test - public void shouldShouldDiscoverNodes() throws Exception { - - List seed = Arrays.asList(RedisURI.create("127.0.0.1", 7380)); - - when(nodeConnectionFactory.connectToNode(any(RedisCodec.class), eq(new InetSocketAddress("127.0.0.1", 7380)))) - .thenReturn((StatefulRedisConnection) connection1); - when(nodeConnectionFactory.connectToNode(any(RedisCodec.class), eq(new InetSocketAddress("127.0.0.1", 7381)))) - .thenReturn((StatefulRedisConnection) connection2); - - sut.loadViews(seed, true); - - verify(nodeConnectionFactory).connectToNode(any(RedisCodec.class), eq(new InetSocketAddress("127.0.0.1", 7380))); - verify(nodeConnectionFactory).connectToNode(any(RedisCodec.class), eq(new InetSocketAddress("127.0.0.1", 7381))); - } - - @Test - public void shouldShouldNotDiscoverNodes() throws Exception { - - List seed = Arrays.asList(RedisURI.create("127.0.0.1", 7380)); - - when(nodeConnectionFactory.connectToNode(any(RedisCodec.class), eq(new InetSocketAddress("127.0.0.1", 7380)))) - .thenReturn((StatefulRedisConnection) connection1); - - sut.loadViews(seed, false); - - verify(nodeConnectionFactory).connectToNode(any(RedisCodec.class), eq(new InetSocketAddress("127.0.0.1", 7380))); - verifyNoMoreInteractions(nodeConnectionFactory); - } - - @Test - public void shouldNotFailOnDuplicateSeedNodes() throws Exception { - - List seed = Arrays.asList(RedisURI.create("127.0.0.1", 7380), RedisURI.create("127.0.0.1", 7381), - RedisURI.create("127.0.0.1", 7381)); - - when(nodeConnectionFactory.connectToNode(any(RedisCodec.class), eq(new InetSocketAddress("127.0.0.1", 7380)))) - .thenReturn((StatefulRedisConnection) connection1); - - when(nodeConnectionFactory.connectToNode(any(RedisCodec.class), eq(new InetSocketAddress("127.0.0.1", 7381)))) - .thenReturn((StatefulRedisConnection) connection2); - - sut.loadViews(seed, true); - - verify(nodeConnectionFactory).connectToNode(any(RedisCodec.class), eq(new InetSocketAddress("127.0.0.1", 7380))); - verify(nodeConnectionFactory).connectToNode(any(RedisCodec.class), eq(new InetSocketAddress("127.0.0.1", 7381))); - } - - @Test - public void undiscoveredAdditionalNodesShouldBeLastUsingClientCount() throws Exception { - - List seed = Arrays.asList(RedisURI.create("127.0.0.1", 7380)); - - when(nodeConnectionFactory.connectToNode(any(RedisCodec.class), eq(new 
InetSocketAddress("127.0.0.1", 7380)))) - .thenReturn((StatefulRedisConnection) connection1); - - Map partitionsMap = sut.loadViews(seed, false); - - Partitions partitions = partitionsMap.values().iterator().next(); - - List nodes = TopologyComparators.sortByClientCount(partitions); - - assertThat(nodes).hasSize(2).extracting(RedisClusterNode::getUri).containsSequence(seed.get(0), - RedisURI.create("127.0.0.1", 7381)); - } - - @Test - public void discoveredAdditionalNodesShouldBeOrderedUsingClientCount() throws Exception { - - List seed = Arrays.asList(RedisURI.create("127.0.0.1", 7380)); - - when(nodeConnectionFactory.connectToNode(any(RedisCodec.class), eq(new InetSocketAddress("127.0.0.1", 7380)))) - .thenReturn((StatefulRedisConnection) connection1); - when(nodeConnectionFactory.connectToNode(any(RedisCodec.class), eq(new InetSocketAddress("127.0.0.1", 7381)))) - .thenReturn((StatefulRedisConnection) connection2); - - Map partitionsMap = sut.loadViews(seed, true); - - Partitions partitions = partitionsMap.values().iterator().next(); - - List nodes = TopologyComparators.sortByClientCount(partitions); - - assertThat(nodes).hasSize(2).extracting(RedisClusterNode::getUri).containsSequence(RedisURI.create("127.0.0.1", 7381), - seed.get(0)); - } - - @Test - public void undiscoveredAdditionalNodesShouldBeLastUsingLatency() throws Exception { - - List seed = Arrays.asList(RedisURI.create("127.0.0.1", 7380)); - - when(nodeConnectionFactory.connectToNode(any(RedisCodec.class), eq(new InetSocketAddress("127.0.0.1", 7380)))) - .thenReturn((StatefulRedisConnection) connection1); - - Map partitionsMap = sut.loadViews(seed, false); - - Partitions partitions = partitionsMap.values().iterator().next(); - - List nodes = TopologyComparators.sortByLatency(partitions); - - assertThat(nodes).hasSize(2).extracting(RedisClusterNode::getUri).containsSequence(seed.get(0), - RedisURI.create("127.0.0.1", 7381)); - } - - @Test - public void discoveredAdditionalNodesShouldBeOrderedUsingLatency() throws Exception { - - List seed = Arrays.asList(RedisURI.create("127.0.0.1", 7380)); - - when(nodeConnectionFactory.connectToNode(any(RedisCodec.class), eq(new InetSocketAddress("127.0.0.1", 7380)))) - .thenReturn((StatefulRedisConnection) connection1); - when(nodeConnectionFactory.connectToNode(any(RedisCodec.class), eq(new InetSocketAddress("127.0.0.1", 7381)))) - .thenReturn((StatefulRedisConnection) connection2); - - Map partitionsMap = sut.loadViews(seed, true); - - Partitions partitions = partitionsMap.values().iterator().next(); - - List nodes = TopologyComparators.sortByLatency(partitions); - - assertThat(nodes).hasSize(2).extracting(RedisClusterNode::getUri).containsSequence(RedisURI.create("127.0.0.1", 7381), - seed.get(0)); - } - - protected Requests createClusterNodesRequests(int duration, String nodes) { - - RedisURI redisURI = RedisURI.create("redis://localhost:" + duration); - Connections connections = new Connections(); - connections.addConnection(redisURI, connection); - - Requests requests = connections.requestTopology(); - TimedAsyncCommand command = requests.rawViews.get(redisURI); - - command.getOutput().set(ByteBuffer.wrap(nodes.getBytes())); - command.complete(); - command.encodedAtNs = 0; - command.completedAtNs = duration; - - return requests; - - } - - protected Requests createClientListRequests(int duration, String response) { - - RedisURI redisURI = RedisURI.create("redis://localhost:" + duration); - Connections connections = new Connections(); - connections.addConnection(redisURI, connection); 
- - Requests requests = connections.requestTopology(); - TimedAsyncCommand command = requests.rawViews.get(redisURI); - - command.getOutput().set(ByteBuffer.wrap(response.getBytes())); - command.complete(); - - return requests; - } -} diff --git a/src/test/java/com/lambdaworks/redis/cluster/topology/NodeTopologyViewsTest.java b/src/test/java/com/lambdaworks/redis/cluster/topology/NodeTopologyViewsTest.java deleted file mode 100644 index 3f926654da..0000000000 --- a/src/test/java/com/lambdaworks/redis/cluster/topology/NodeTopologyViewsTest.java +++ /dev/null @@ -1,51 +0,0 @@ -package com.lambdaworks.redis.cluster.topology; - -import static org.assertj.core.api.Assertions.assertThat; - -import java.util.Arrays; -import java.util.Set; - -import org.junit.Test; - -import com.lambdaworks.redis.RedisURI; - -/** - * @author Mark Paluch - */ -public class NodeTopologyViewsTest { - - @Test - public void shouldReuseKnownUris() throws Exception { - - RedisURI localhost = RedisURI.create("localhost", 6479); - RedisURI otherhost = RedisURI.create("127.0.0.2", 7000); - - RedisURI host3 = RedisURI.create("127.0.0.3", 7000); - - String viewByLocalhost = "1 127.0.0.1:6479 master,myself - 0 1401258245007 2 connected 8000-11999\n" - + "2 127.0.0.2:7000 master - 111 1401258245007 222 connected 7000 12000 12002-16383\n" - + "3 127.0.0.3:7000 master - 111 1401258245007 222 connected 7000 12000 12002-16383\n"; - - String viewByOtherhost = "1 127.0.0.1:6479 master - 0 1401258245007 2 connected 8000-11999\n" - + "2 127.0.0.2:7000 master,myself - 111 1401258245007 222 connected 7000 12000 12002-16383\n" - + "3 127.0.0.3:7000 master - 111 1401258245007 222 connected 7000 12000 12002-16383\n"; - - NodeTopologyView localhostView = new NodeTopologyView(localhost, viewByLocalhost, "", 0); - NodeTopologyView otherhostView = new NodeTopologyView(otherhost, viewByOtherhost, "", 0); - - NodeTopologyViews nodeTopologyViews = new NodeTopologyViews(Arrays.asList(localhostView, otherhostView)); - - Set clusterNodes = nodeTopologyViews.getClusterNodes(); - assertThat(clusterNodes).contains(localhost, otherhost, host3); - } - - @Test(expected = IllegalStateException.class) - public void shouldFailWithoutOwnPartition() throws Exception { - - RedisURI localhost = RedisURI.create("localhost", 6479); - - String viewByLocalhost = "1 127.0.0.1:6479 master - 0 1401258245007 2 connected 8000-11999\n"; - - new NodeTopologyView(localhost, viewByLocalhost, "", 0); - } -} \ No newline at end of file diff --git a/src/test/java/com/lambdaworks/redis/cluster/topology/RequestsTest.java b/src/test/java/com/lambdaworks/redis/cluster/topology/RequestsTest.java deleted file mode 100644 index 52835654db..0000000000 --- a/src/test/java/com/lambdaworks/redis/cluster/topology/RequestsTest.java +++ /dev/null @@ -1,100 +0,0 @@ -package com.lambdaworks.redis.cluster.topology; - -import static org.assertj.core.api.Assertions.assertThat; - -import java.nio.ByteBuffer; -import java.util.concurrent.TimeUnit; - -import org.junit.Test; - -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.codec.Utf8StringCodec; -import com.lambdaworks.redis.output.StatusOutput; -import com.lambdaworks.redis.protocol.Command; -import com.lambdaworks.redis.protocol.CommandType; - -/** - * @author Mark Paluch - */ -public class RequestsTest { - - @Test - public void shouldCreateTopologyView() throws Exception { - - RedisURI redisURI = RedisURI.create("localhost", 6379); - - Requests clusterNodesRequests = new Requests(); - String clusterNodesOutput = "1 
127.0.0.1:7380 master,myself - 0 1401258245007 2 disconnected 8000-11999\n"; - clusterNodesRequests.addRequest(redisURI, getCommand(clusterNodesOutput)); - - Requests clientListRequests = new Requests(); - String clientListOutput = "id=2 addr=127.0.0.1:58919 fd=6 name= age=3 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=32768 obl=0 oll=0 omem=0 events=r cmd=client\n"; - clientListRequests.addRequest(redisURI, getCommand(clientListOutput)); - - NodeTopologyView nodeTopologyView = NodeTopologyView.from(redisURI, clusterNodesRequests, clientListRequests); - - assertThat(nodeTopologyView.isAvailable()).isTrue(); - assertThat(nodeTopologyView.getConnectedClients()).isEqualTo(1); - assertThat(nodeTopologyView.getPartitions()).hasSize(1); - assertThat(nodeTopologyView.getClusterNodes()).isEqualTo(clusterNodesOutput); - assertThat(nodeTopologyView.getClientList()).isEqualTo(clientListOutput); - } - - @Test - public void shouldCreateTopologyViewWithoutClientCount() throws Exception { - - RedisURI redisURI = RedisURI.create("localhost", 6379); - - Requests clusterNodesRequests = new Requests(); - String clusterNodesOutput = "1 127.0.0.1:7380 master,myself - 0 1401258245007 2 disconnected 8000-11999\n"; - clusterNodesRequests.addRequest(redisURI, getCommand(clusterNodesOutput)); - - Requests clientListRequests = new Requests(); - - NodeTopologyView nodeTopologyView = NodeTopologyView.from(redisURI, clusterNodesRequests, clientListRequests); - - assertThat(nodeTopologyView.isAvailable()).isFalse(); - assertThat(nodeTopologyView.getConnectedClients()).isEqualTo(0); - assertThat(nodeTopologyView.getPartitions()).isEmpty(); - assertThat(nodeTopologyView.getClusterNodes()).isNull(); - } - - @Test - public void awaitShouldReturnAwaitedTime() throws Exception { - - RedisURI redisURI = RedisURI.create("localhost", 6379); - Requests requests = new Requests(); - Command command = new Command(CommandType.TYPE, - new StatusOutput<>(new Utf8StringCodec())); - TimedAsyncCommand timedAsyncCommand = new TimedAsyncCommand(command); - - requests.addRequest(redisURI, timedAsyncCommand); - - assertThat(requests.await(100, TimeUnit.MILLISECONDS)).isGreaterThan(TimeUnit.MILLISECONDS.toNanos(90)); - } - - @Test - public void awaitShouldReturnAwaitedTimeIfNegative() throws Exception { - - RedisURI redisURI = RedisURI.create("localhost", 6379); - Requests requests = new Requests(); - Command command = new Command(CommandType.TYPE, - new StatusOutput<>(new Utf8StringCodec())); - TimedAsyncCommand timedAsyncCommand = new TimedAsyncCommand(command); - - requests.addRequest(redisURI, timedAsyncCommand); - - assertThat(requests.await(-1, TimeUnit.MILLISECONDS)).isEqualTo(0); - - } - - private TimedAsyncCommand getCommand(String response) { - Command command = new Command(CommandType.TYPE, - new StatusOutput<>(new Utf8StringCodec())); - TimedAsyncCommand timedAsyncCommand = new TimedAsyncCommand(command); - - command.getOutput().set(ByteBuffer.wrap(response.getBytes())); - timedAsyncCommand.complete(); - return timedAsyncCommand; - } -} \ No newline at end of file diff --git a/src/test/java/com/lambdaworks/redis/cluster/topology/TopologyComparatorsTest.java b/src/test/java/com/lambdaworks/redis/cluster/topology/TopologyComparatorsTest.java deleted file mode 100644 index 2859785e7c..0000000000 --- a/src/test/java/com/lambdaworks/redis/cluster/topology/TopologyComparatorsTest.java +++ /dev/null @@ -1,314 +0,0 @@ -package com.lambdaworks.redis.cluster.topology; - -import static 
com.lambdaworks.redis.cluster.topology.TopologyComparators.isChanged; -import static org.assertj.core.api.Assertions.assertThat; -import static org.assertj.core.util.Lists.newArrayList; - -import java.util.*; - -import com.lambdaworks.redis.RedisURI; -import org.junit.Test; - -import com.lambdaworks.redis.cluster.models.partitions.ClusterPartitionParser; -import com.lambdaworks.redis.cluster.models.partitions.Partitions; -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; -import com.lambdaworks.redis.internal.LettuceLists; - -/** - * @author Mark Paluch - */ -public class TopologyComparatorsTest { - - private RedisClusterNodeSnapshot node1 = createNode("1"); - private RedisClusterNodeSnapshot node2 = createNode("2"); - private RedisClusterNodeSnapshot node3 = createNode("3"); - - private static RedisClusterNodeSnapshot createNode(String nodeId) { - RedisClusterNodeSnapshot result = new RedisClusterNodeSnapshot(); - result.setNodeId(nodeId); - result.setUri(RedisURI.create("localhost", Integer.parseInt(nodeId))); - return result; - } - - @Test - public void latenciesForAllNodes() throws Exception { - - Map map = new HashMap<>(); - map.put(node1.getNodeId(), 1L); - map.put(node2.getNodeId(), 2L); - map.put(node3.getNodeId(), 3L); - - runTest(map, newArrayList(node1, node2, node3), newArrayList(node3, node1, node2)); - runTest(map, newArrayList(node1, node2, node3), newArrayList(node1, node2, node3)); - runTest(map, newArrayList(node1, node2, node3), newArrayList(node3, node2, node1)); - } - - @Test - public void latenciesForTwoNodes_N1_N2() throws Exception { - - Map map = new HashMap<>(); - map.put(node1.getNodeId(), 1L); - map.put(node2.getNodeId(), 2L); - - runTest(map, newArrayList(node1, node2, node3), newArrayList(node3, node1, node2)); - runTest(map, newArrayList(node1, node2, node3), newArrayList(node1, node2, node3)); - runTest(map, newArrayList(node1, node2, node3), newArrayList(node3, node2, node1)); - } - - @Test - public void latenciesForTwoNodes_N2_N3() throws Exception { - - Map map = new HashMap<>(); - map.put(node3.getNodeId(), 1L); - map.put(node2.getNodeId(), 2L); - - runTest(map, newArrayList(node3, node2, node1), newArrayList(node3, node1, node2)); - runTest(map, newArrayList(node3, node2, node1), newArrayList(node1, node2, node3)); - runTest(map, newArrayList(node3, node2, node1), newArrayList(node3, node2, node1)); - } - - @Test - public void latenciesForOneNode() throws Exception { - - Map map = Collections.singletonMap(node2.getNodeId(), 2L); - - runTest(map, newArrayList(node2, node3, node1), newArrayList(node3, node1, node2)); - runTest(map, newArrayList(node2, node1, node3), newArrayList(node1, node2, node3)); - runTest(map, newArrayList(node2, node3, node1), newArrayList(node3, node2, node1)); - } - - @Test(expected = AssertionError.class) - public void shouldFail() throws Exception { - - Map map = Collections.singletonMap(node2.getNodeId(), 2L); - - runTest(map, newArrayList(node2, node1, node3), newArrayList(node3, node1, node2)); - } - - @Test - public void testLatencyComparator() throws Exception { - - RedisClusterNodeSnapshot node1 = new RedisClusterNodeSnapshot(); - node1.setLatencyNs(1L); - - RedisClusterNodeSnapshot node2 = new RedisClusterNodeSnapshot(); - node2.setLatencyNs(2L); - - RedisClusterNodeSnapshot node3 = new RedisClusterNodeSnapshot(); - node3.setLatencyNs(3L); - - List list = LettuceLists.newList(node2, node3, node1); - Collections.sort(list, TopologyComparators.LatencyComparator.INSTANCE); - - 
assertThat(list).containsSequence(node1, node2, node3); - } - - @Test - public void testLatencyComparatorWithSomeNodesWithoutStats() throws Exception { - - RedisClusterNodeSnapshot node1 = new RedisClusterNodeSnapshot(); - node1.setLatencyNs(1L); - - RedisClusterNodeSnapshot node2 = new RedisClusterNodeSnapshot(); - node2.setLatencyNs(2L); - - RedisClusterNodeSnapshot node3 = new RedisClusterNodeSnapshot(); - RedisClusterNodeSnapshot node4 = new RedisClusterNodeSnapshot(); - - List list = LettuceLists.newList(node2, node3, node4, node1); - Collections.sort(list, TopologyComparators.LatencyComparator.INSTANCE); - - assertThat(list).containsSequence(node1, node2, node3, node4); - } - - @Test - public void testClientComparator() throws Exception { - - RedisClusterNodeSnapshot node1 = new RedisClusterNodeSnapshot(); - node1.setConnectedClients(1); - - RedisClusterNodeSnapshot node2 = new RedisClusterNodeSnapshot(); - node2.setConnectedClients(2); - - RedisClusterNodeSnapshot node3 = new RedisClusterNodeSnapshot(); - node3.setConnectedClients(3); - - List list = LettuceLists.newList(node2, node3, node1); - Collections.sort(list, TopologyComparators.ClientCountComparator.INSTANCE); - - assertThat(list).containsSequence(node1, node2, node3); - } - - @Test - public void testClientComparatorWithSomeNodesWithoutStats() throws Exception { - - RedisClusterNodeSnapshot node1 = new RedisClusterNodeSnapshot(); - node1.setConnectedClients(1); - - RedisClusterNodeSnapshot node2 = new RedisClusterNodeSnapshot(); - node2.setConnectedClients(2); - - RedisClusterNodeSnapshot node3 = new RedisClusterNodeSnapshot(); - RedisClusterNodeSnapshot node4 = new RedisClusterNodeSnapshot(); - - List list = LettuceLists.newList(node2, node3, node4, node1); - Collections.sort(list, TopologyComparators.ClientCountComparator.INSTANCE); - - assertThat(list).containsSequence(node1, node2, node3, node4); - } - - @Test - public void testLatencyComparatorWithoutClients() throws Exception { - - RedisClusterNodeSnapshot node1 = new RedisClusterNodeSnapshot(); - node1.setConnectedClients(1); - - RedisClusterNodeSnapshot node2 = new RedisClusterNodeSnapshot(); - node2.setConnectedClients(null); - - RedisClusterNodeSnapshot node3 = new RedisClusterNodeSnapshot(); - node3.setConnectedClients(3); - - List list = LettuceLists.newList(node2, node3, node1); - Collections.sort(list, TopologyComparators.LatencyComparator.INSTANCE); - - assertThat(list).containsSequence(node1, node3, node2); - } - - @Test - public void testFixedOrdering1() throws Exception { - - List list = LettuceLists.newList(node2, node3, node1); - List fixedOrder = LettuceLists.newList(node1.getUri(), node2.getUri(), node3.getUri()); - - assertThat(TopologyComparators.predefinedSort(list, fixedOrder)).containsSequence(node1, node2, node3); - } - - @Test - public void testFixedOrdering2() throws Exception { - - List list = LettuceLists.newList(node2, node3, node1); - List fixedOrder = LettuceLists.newList(node3.getUri(), node2.getUri(), node1.getUri()); - - assertThat(TopologyComparators.predefinedSort(list, fixedOrder)).containsSequence(node3, node2, node1); - } - - @Test - public void testFixedOrderingNoFixedPart() throws Exception { - - List list = LettuceLists.newList(node2, node3, node1); - List fixedOrder = LettuceLists.newList(); - - assertThat(TopologyComparators.predefinedSort(list, fixedOrder)).containsSequence(node1, node2, node3); - } - - @Test - public void testFixedOrderingPartiallySpecifiedOrder() throws Exception { - - List list = 
LettuceLists.newList(node2, node3, node1); - List fixedOrder = LettuceLists.newList(node3.getUri(), node1.getUri()); - - assertThat(TopologyComparators.predefinedSort(list, fixedOrder)).containsSequence(node3, node1, node2); - } - - @Test - public void isChangedSamePartitions() throws Exception { - - String nodes = "c37ab8396be428403d4e55c0d317348be27ed973 127.0.0.1:7381 master - 111 1401258245007 222 connected 7000 12000 12002-16383\n" - + "3d005a179da7d8dc1adae6409d47b39c369e992b 127.0.0.1:7380 master - 0 1401258245007 2 disconnected 8000-11999\n"; - - Partitions partitions1 = ClusterPartitionParser.parse(nodes); - Partitions partitions2 = ClusterPartitionParser.parse(nodes); - assertThat(isChanged(partitions1, partitions2)).isFalse(); - } - - @Test - public void isChangedDifferentOrder() throws Exception { - String nodes1 = "3d005a179da7d8dc1adae6409d47b39c369e992b 127.0.0.1:7380 master,myself - 0 1401258245007 2 disconnected 8000-11999\n" - + "c37ab8396be428403d4e55c0d317348be27ed973 127.0.0.1:7381 master - 111 1401258245007 222 connected 7000 12000 12002-16383\n"; - - String nodes2 = "c37ab8396be428403d4e55c0d317348be27ed973 127.0.0.1:7381 master,myself - 111 1401258245007 222 connected 7000 12000 12002-16383\n" - + "3d005a179da7d8dc1adae6409d47b39c369e992b 127.0.0.1:7380 master - 0 1401258245007 2 disconnected 8000-11999\n"; - - assertThat(nodes1).isNotEqualTo(nodes2); - Partitions partitions1 = ClusterPartitionParser.parse(nodes1); - Partitions partitions2 = ClusterPartitionParser.parse(nodes2); - assertThat(isChanged(partitions1, partitions2)).isFalse(); - } - - @Test - public void isChangedPortChanged() throws Exception { - String nodes1 = "3d005a179da7d8dc1adae6409d47b39c369e992b 127.0.0.1:7382 master - 0 1401258245007 2 disconnected 8000-11999\n" - + "c37ab8396be428403d4e55c0d317348be27ed973 127.0.0.1:7381 master - 111 1401258245007 222 connected 7000 12000 12002-16383\n"; - - String nodes2 = "c37ab8396be428403d4e55c0d317348be27ed973 127.0.0.1:7381 master - 111 1401258245007 222 connected 7000 12000 12002-16383\n" - + "3d005a179da7d8dc1adae6409d47b39c369e992b 127.0.0.1:7380 master - 0 1401258245007 2 disconnected 8000-11999\n"; - - Partitions partitions1 = ClusterPartitionParser.parse(nodes1); - Partitions partitions2 = ClusterPartitionParser.parse(nodes2); - assertThat(isChanged(partitions1, partitions2)).isFalse(); - } - - @Test - public void isChangedSlotsChanged() throws Exception { - String nodes1 = "3d005a179da7d8dc1adae6409d47b39c369e992b 127.0.0.1:7380 master - 0 1401258245007 2 disconnected 8000-11999\n" - + "c37ab8396be428403d4e55c0d317348be27ed973 127.0.0.1:7381 master - 111 1401258245007 222 connected 7000 12000 12002-16383\n"; - - String nodes2 = "3d005a179da7d8dc1adae6409d47b39c369e992b 127.0.0.1:7380 master - 0 1401258245007 2 disconnected 8000-11999\n" - + "c37ab8396be428403d4e55c0d317348be27ed973 127.0.0.1:7381 master - 111 1401258245007 222 connected 7000 12000 12001-16383\n"; - - Partitions partitions1 = ClusterPartitionParser.parse(nodes1); - Partitions partitions2 = ClusterPartitionParser.parse(nodes2); - assertThat(isChanged(partitions1, partitions2)).isTrue(); - } - - @Test - public void isChangedNodeIdChanged() throws Exception { - String nodes1 = "3d005a179da7d8dc1adae6409d47b39c369e992b 127.0.0.1:7380 master - 0 1401258245007 2 disconnected 8000-11999\n" - + "c37ab8396be428403d4e55c0d317348be27ed973 127.0.0.1:7381 master - 111 1401258245007 222 connected 7000 12000 12002-16383\n"; - - String nodes2 = "3d005a179da7d8dc1adae6409d47b39c369e992aa 
127.0.0.1:7380 master - 0 1401258245007 2 disconnected 8000-11999\n" - + "c37ab8396be428403d4e55c0d317348be27ed973 127.0.0.1:7381 master - 111 1401258245007 222 connected 7000 12000 12002-16383\n"; - - Partitions partitions1 = ClusterPartitionParser.parse(nodes1); - Partitions partitions2 = ClusterPartitionParser.parse(nodes2); - assertThat(isChanged(partitions1, partitions2)).isTrue(); - } - - @Test - public void isChangedFlagsChangedSlaveToMaster() throws Exception { - String nodes1 = "3d005a179da7d8dc1adae6409d47b39c369e992b 127.0.0.1:7380 slave - 0 1401258245007 2 disconnected 8000-11999\n" - + "c37ab8396be428403d4e55c0d317348be27ed973 127.0.0.1:7381 master - 111 1401258245007 222 connected 7000 12000 12002-16383\n"; - - String nodes2 = "3d005a179da7d8dc1adae6409d47b39c369e992b 127.0.0.1:7380 master - 0 1401258245007 2 disconnected 8000-11999\n" - + "c37ab8396be428403d4e55c0d317348be27ed973 127.0.0.1:7381 master - 111 1401258245007 222 connected 7000 12000 12002-16383\n"; - - Partitions partitions1 = ClusterPartitionParser.parse(nodes1); - Partitions partitions2 = ClusterPartitionParser.parse(nodes2); - assertThat(isChanged(partitions1, partitions2)).isTrue(); - } - - @Test - public void isChangedFlagsChangedMasterToSlave() throws Exception { - String nodes1 = "3d005a179da7d8dc1adae6409d47b39c369e992b 127.0.0.1:7380 master - 0 1401258245007 2 disconnected 8000-11999\n" - + "c37ab8396be428403d4e55c0d317348be27ed973 127.0.0.1:7381 master - 111 1401258245007 222 connected 7000 12000 12002-16383\n"; - - String nodes2 = "3d005a179da7d8dc1adae6409d47b39c369e992b 127.0.0.1:7380 slave - 0 1401258245007 2 disconnected 8000-11999\n" - + "c37ab8396be428403d4e55c0d317348be27ed973 127.0.0.1:7381 master - 111 1401258245007 222 connected 7000 12000 12002-16383\n"; - - Partitions partitions1 = ClusterPartitionParser.parse(nodes1); - Partitions partitions2 = ClusterPartitionParser.parse(nodes2); - assertThat(isChanged(partitions1, partitions2)).isTrue(); - } - - protected void runTest(Map map, List expectation, - List nodes) { - - for (RedisClusterNodeSnapshot node : nodes) { - node.setLatencyNs(map.get(node.getNodeId())); - } - List result = TopologyComparators.sortByLatency((Iterable) nodes); - - assertThat(result).containsExactly(expectation.toArray(new RedisClusterNodeSnapshot[expectation.size()])); - } -} diff --git a/src/test/java/com/lambdaworks/redis/cluster/topology/TopologyRefreshTest.java b/src/test/java/com/lambdaworks/redis/cluster/topology/TopologyRefreshTest.java deleted file mode 100644 index a16e817159..0000000000 --- a/src/test/java/com/lambdaworks/redis/cluster/topology/TopologyRefreshTest.java +++ /dev/null @@ -1,311 +0,0 @@ -package com.lambdaworks.redis.cluster.topology; - -import static org.assertj.core.api.Assertions.assertThat; - -import java.util.concurrent.TimeUnit; -import java.util.concurrent.atomic.AtomicBoolean; -import java.util.concurrent.atomic.AtomicReference; -import java.util.function.BiFunction; - -import io.netty.util.concurrent.ScheduledFuture; -import org.junit.After; -import org.junit.Before; -import org.junit.Test; - -import com.lambdaworks.Wait; -import com.lambdaworks.category.SlowTests; -import com.lambdaworks.redis.*; -import com.lambdaworks.redis.api.async.BaseRedisAsyncCommands; -import com.lambdaworks.redis.cluster.AbstractClusterTest; -import com.lambdaworks.redis.cluster.ClusterClientOptions; -import com.lambdaworks.redis.cluster.ClusterTopologyRefreshOptions; -import com.lambdaworks.redis.cluster.RedisClusterClient; -import 
com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection; -import com.lambdaworks.redis.cluster.api.async.RedisAdvancedClusterAsyncCommands; -import com.lambdaworks.redis.cluster.api.async.RedisClusterAsyncCommands; -import com.lambdaworks.redis.cluster.api.sync.RedisClusterCommands; -import com.lambdaworks.redis.cluster.models.partitions.Partitions; -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; -import org.springframework.test.util.ReflectionTestUtils; - -/** - * Test for topology refreshing. - * - * @author Mark Paluch - */ -@SuppressWarnings({ "unchecked" }) -@SlowTests -public class TopologyRefreshTest extends AbstractTest { - - public static final String host = TestSettings.hostAddr(); - private static RedisClient client = DefaultRedisClient.get(); - - private RedisClusterClient clusterClient; - private RedisClusterCommands redis1; - private RedisClusterCommands redis2; - - @Before - public void openConnection() throws Exception { - clusterClient = RedisClusterClient.create(client.getResources(), - RedisURI.Builder.redis(host, AbstractClusterTest.port1).build()); - redis1 = client.connect(RedisURI.Builder.redis(AbstractClusterTest.host, AbstractClusterTest.port1).build()).sync(); - redis2 = client.connect(RedisURI.Builder.redis(AbstractClusterTest.host, AbstractClusterTest.port2).build()).sync(); - } - - @After - public void closeConnection() throws Exception { - redis1.close(); - redis2.close(); - FastShutdown.shutdown(clusterClient); - } - - @Test - public void shouldUnsubscribeTopologyRefresh() throws Exception { - - ClusterTopologyRefreshOptions topologyRefreshOptions = ClusterTopologyRefreshOptions.builder() - .enablePeriodicRefresh(true) // - .build(); - clusterClient.setOptions(ClusterClientOptions.builder().topologyRefreshOptions(topologyRefreshOptions).build()); - - RedisAdvancedClusterAsyncCommands clusterConnection = clusterClient.connect().async(); - - AtomicBoolean clusterTopologyRefreshActivated = (AtomicBoolean) ReflectionTestUtils - .getField(clusterClient, "clusterTopologyRefreshActivated"); - - AtomicReference> clusterTopologyRefreshFuture = (AtomicReference) ReflectionTestUtils - .getField(clusterClient, "clusterTopologyRefreshFuture"); - - assertThat(clusterTopologyRefreshActivated.get()).isTrue(); - assertThat(clusterTopologyRefreshFuture.get()).isNotNull(); - - ScheduledFuture scheduledFuture = clusterTopologyRefreshFuture.get(); - - clusterConnection.close(); - - FastShutdown.shutdown(clusterClient); - - assertThat(clusterTopologyRefreshActivated.get()).isFalse(); - assertThat(clusterTopologyRefreshFuture.get()).isNull(); - assertThat(scheduledFuture.isCancelled()).isTrue(); - - } - - @Test - public void changeTopologyWhileOperations() throws Exception { - - ClusterTopologyRefreshOptions topologyRefreshOptions = ClusterTopologyRefreshOptions.builder() - .enablePeriodicRefresh(true)// - .refreshPeriod(1, TimeUnit.SECONDS)// - .build(); - clusterClient.setOptions(ClusterClientOptions.builder().topologyRefreshOptions(topologyRefreshOptions).build()); - RedisAdvancedClusterAsyncCommands clusterConnection = clusterClient.connect().async(); - - clusterClient.getPartitions().clear(); - - Wait.untilTrue(() -> { - return !clusterClient.getPartitions().isEmpty(); - }).waitOrTimeout(); - - clusterConnection.close(); - } - - @Test - public void dynamicSourcesProvidesClientCountForAllNodes() throws Exception { - - ClusterTopologyRefreshOptions topologyRefreshOptions = ClusterTopologyRefreshOptions.create(); - 
clusterClient.setOptions(ClusterClientOptions.builder().topologyRefreshOptions(topologyRefreshOptions).build()); - RedisAdvancedClusterAsyncCommands clusterConnection = clusterClient.connect().async(); - - for (RedisClusterNode redisClusterNode : clusterClient.getPartitions()) { - assertThat(redisClusterNode).isInstanceOf(RedisClusterNodeSnapshot.class); - - RedisClusterNodeSnapshot snapshot = (RedisClusterNodeSnapshot) redisClusterNode; - assertThat(snapshot.getConnectedClients()).isNotNull().isGreaterThanOrEqualTo(0); - } - - clusterConnection.close(); - } - - @Test - public void staticSourcesProvidesClientCountForSeedNodes() throws Exception { - - ClusterTopologyRefreshOptions topologyRefreshOptions = ClusterTopologyRefreshOptions.builder() - .dynamicRefreshSources(false).build(); - clusterClient.setOptions(ClusterClientOptions.builder().topologyRefreshOptions(topologyRefreshOptions).build()); - RedisAdvancedClusterAsyncCommands clusterConnection = clusterClient.connect().async(); - - Partitions partitions = clusterClient.getPartitions(); - RedisClusterNodeSnapshot node1 = (RedisClusterNodeSnapshot) partitions.getPartitionBySlot(0); - assertThat(node1.getConnectedClients()).isGreaterThanOrEqualTo(1); - - RedisClusterNodeSnapshot node2 = (RedisClusterNodeSnapshot) partitions.getPartitionBySlot(15000); - assertThat(node2.getConnectedClients()).isNull(); - - clusterConnection.close(); - } - - @Test - public void adaptiveTopologyUpdateOnDisconnectNodeIdConnection() throws Exception { - - runReconnectTest((clusterConnection, node) -> { - RedisClusterAsyncCommands connection = clusterConnection.getConnection(node.getUri().getHost(), - node.getUri().getPort()); - - return connection; - }); - } - - @Test - public void adaptiveTopologyUpdateOnDisconnectHostAndPortConnection() throws Exception { - - runReconnectTest((clusterConnection, node) -> { - RedisClusterAsyncCommands connection = clusterConnection.getConnection(node.getUri().getHost(), - node.getUri().getPort()); - - return connection; - }); - } - - @Test - public void adaptiveTopologyUpdateOnDisconnectDefaultConnection() throws Exception { - - runReconnectTest((clusterConnection, node) -> { - return clusterConnection; - }); - } - - @Test - public void adaptiveTopologyUpdateIsRateLimited() throws Exception { - - ClusterTopologyRefreshOptions topologyRefreshOptions = ClusterTopologyRefreshOptions.builder()// - .adaptiveRefreshTriggersTimeout(1, TimeUnit.HOURS)// - .refreshTriggersReconnectAttempts(0)// - .enableAllAdaptiveRefreshTriggers()// - .build(); - clusterClient.setOptions(ClusterClientOptions.builder().topologyRefreshOptions(topologyRefreshOptions).build()); - RedisAdvancedClusterAsyncCommands clusterConnection = clusterClient.connect().async(); - - clusterClient.getPartitions().clear(); - clusterConnection.quit(); - - Wait.untilTrue(() -> { - return !clusterClient.getPartitions().isEmpty(); - }).waitOrTimeout(); - - clusterClient.getPartitions().clear(); - clusterConnection.quit(); - - Thread.sleep(1000); - - assertThat(clusterClient.getPartitions()).isEmpty(); - - clusterConnection.close(); - } - - @Test - public void adaptiveTopologyUpdatetUsesTimeout() throws Exception { - - ClusterTopologyRefreshOptions topologyRefreshOptions = ClusterTopologyRefreshOptions.builder()// - .adaptiveRefreshTriggersTimeout(500, TimeUnit.MILLISECONDS)// - .refreshTriggersReconnectAttempts(0)// - .enableAllAdaptiveRefreshTriggers()// - .build(); - 
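
// Roughly how the tests above wire topology refresh onto a RedisClusterClient
// (lettuce 4.x, com.lambdaworks packages). Host, port and periods are placeholders.
import java.util.concurrent.TimeUnit;

import com.lambdaworks.redis.RedisURI;
import com.lambdaworks.redis.cluster.ClusterClientOptions;
import com.lambdaworks.redis.cluster.ClusterTopologyRefreshOptions;
import com.lambdaworks.redis.cluster.RedisClusterClient;

class TopologyRefreshConfigSketch {

    public static void main(String[] args) {

        RedisClusterClient clusterClient = RedisClusterClient.create(RedisURI.create("localhost", 7379));

        ClusterTopologyRefreshOptions refreshOptions = ClusterTopologyRefreshOptions.builder()
                .enablePeriodicRefresh(true)                          // poll the topology on a fixed period
                .refreshPeriod(30, TimeUnit.SECONDS)
                .enableAllAdaptiveRefreshTriggers()                   // also refresh on redirects and reconnects
                .adaptiveRefreshTriggersTimeout(30, TimeUnit.SECONDS) // rate limit for adaptive refreshes
                .build();

        clusterClient.setOptions(ClusterClientOptions.builder().topologyRefreshOptions(refreshOptions).build());

        clusterClient.shutdown();
    }
}
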
clusterClient.setOptions(ClusterClientOptions.builder().topologyRefreshOptions(topologyRefreshOptions).build()); - RedisAdvancedClusterAsyncCommands clusterConnection = clusterClient.connect().async(); - - clusterConnection.quit(); - Thread.sleep(1000); - - Wait.untilTrue(() -> { - return !clusterClient.getPartitions().isEmpty(); - }).waitOrTimeout(); - - clusterClient.getPartitions().clear(); - clusterConnection.quit(); - - Wait.untilTrue(() -> { - return !clusterClient.getPartitions().isEmpty(); - }).waitOrTimeout(); - - clusterConnection.close(); - } - - @Test - public void adaptiveTriggerDoesNotFireOnSingleReconnect() throws Exception { - - ClusterTopologyRefreshOptions topologyRefreshOptions = ClusterTopologyRefreshOptions.builder()// - .enableAllAdaptiveRefreshTriggers()// - .build(); - clusterClient.setOptions(ClusterClientOptions.builder().topologyRefreshOptions(topologyRefreshOptions).build()); - RedisAdvancedClusterAsyncCommands clusterConnection = clusterClient.connect().async(); - - clusterClient.getPartitions().clear(); - - clusterConnection.quit(); - Thread.sleep(500); - - assertThat(clusterClient.getPartitions()).isEmpty(); - clusterConnection.close(); - } - - @Test - public void adaptiveTriggerOnMoveRedirection() throws Exception { - - ClusterTopologyRefreshOptions topologyRefreshOptions = ClusterTopologyRefreshOptions.builder()// - .enableAdaptiveRefreshTrigger(ClusterTopologyRefreshOptions.RefreshTrigger.MOVED_REDIRECT)// - .build(); - clusterClient.setOptions(ClusterClientOptions.builder().topologyRefreshOptions(topologyRefreshOptions).build()); - - StatefulRedisClusterConnection connection = clusterClient.connect(); - RedisAdvancedClusterAsyncCommands clusterConnection = connection.async(); - - Partitions partitions = connection.getPartitions(); - RedisClusterNode node1 = partitions.getPartitionBySlot(0); - RedisClusterNode node2 = partitions.getPartitionBySlot(12000); - - node2.getSlots().addAll(node1.getSlots()); - node1.getSlots().clear(); - partitions.updateCache(); - - assertThat(clusterClient.getPartitions().getPartitionByNodeId(node1.getNodeId()).getSlots()).hasSize(0); - assertThat(clusterClient.getPartitions().getPartitionByNodeId(node2.getNodeId()).getSlots()).hasSize(16384); - - clusterConnection.set("b", value); // slot 3300 - - Wait.untilEquals(12000, new Wait.Supplier() { - @Override - public Integer get() throws Exception { - return clusterClient.getPartitions().getPartitionByNodeId(node1.getNodeId()).getSlots().size(); - } - }).waitOrTimeout(); - - assertThat(clusterClient.getPartitions().getPartitionByNodeId(node1.getNodeId()).getSlots()).hasSize(12000); - assertThat(clusterClient.getPartitions().getPartitionByNodeId(node2.getNodeId()).getSlots()).hasSize(4384); - clusterConnection.close(); - } - - private void runReconnectTest( - BiFunction, RedisClusterNode, BaseRedisAsyncCommands> function) - throws Exception { - - ClusterTopologyRefreshOptions topologyRefreshOptions = ClusterTopologyRefreshOptions.builder()// - .refreshTriggersReconnectAttempts(0)// - .enableAllAdaptiveRefreshTriggers()// - .build(); - clusterClient.setOptions(ClusterClientOptions.builder().topologyRefreshOptions(topologyRefreshOptions).build()); - RedisAdvancedClusterAsyncCommands clusterConnection = clusterClient.connect().async(); - - RedisClusterNode node = clusterClient.getPartitions().getPartition(0); - BaseRedisAsyncCommands closeable = function.apply(clusterConnection, node); - clusterClient.getPartitions().clear(); - - closeable.quit(); - - Wait.untilTrue(() -> { - return 
!clusterClient.getPartitions().isEmpty(); - }).waitOrTimeout(); - - closeable.close(); - clusterConnection.close(); - } -} diff --git a/src/test/java/com/lambdaworks/redis/codec/CompressionCodecTest.java b/src/test/java/com/lambdaworks/redis/codec/CompressionCodecTest.java deleted file mode 100644 index c058e0c04e..0000000000 --- a/src/test/java/com/lambdaworks/redis/codec/CompressionCodecTest.java +++ /dev/null @@ -1,77 +0,0 @@ -package com.lambdaworks.redis.codec; - -import static org.assertj.core.api.Assertions.assertThat; - -import java.io.IOException; -import java.nio.ByteBuffer; - -import org.junit.Test; - -/** - * @author Mark Paluch - */ -public class CompressionCodecTest { - - private String key = "key"; - private byte[] keyGzipBytes = new byte[] { 31, -117, 8, 0, 0, 0, 0, 0, 0, 0, -53, 78, -83, 4, 0, -87, -85, -112, -118, 3, - 0, 0, 0 }; - private byte[] keyDeflateBytes = new byte[] { 120, -100, -53, 78, -83, 4, 0, 2, -121, 1, 74 }; - private String value = "value"; - - @Test - public void keyPassthroughTest() throws Exception { - RedisCodec sut = CompressionCodec.valueCompressor(new Utf8StringCodec(), - CompressionCodec.CompressionType.GZIP); - ByteBuffer byteBuffer = sut.encodeKey(value); - assertThat(toString(byteBuffer.duplicate())).isEqualTo(value); - - String s = sut.decodeKey(byteBuffer); - assertThat(s).isEqualTo(value); - } - - @Test - public void gzipValueTest() throws Exception { - RedisCodec sut = CompressionCodec.valueCompressor(new Utf8StringCodec(), - CompressionCodec.CompressionType.GZIP); - ByteBuffer byteBuffer = sut.encodeValue(key); - assertThat(toBytes(byteBuffer.duplicate())).isEqualTo(keyGzipBytes); - - String s = sut.decodeValue(ByteBuffer.wrap(keyGzipBytes)); - assertThat(s).isEqualTo(key); - } - - @Test - public void deflateValueTest() throws Exception { - RedisCodec sut = CompressionCodec.valueCompressor(new Utf8StringCodec(), - CompressionCodec.CompressionType.DEFLATE); - ByteBuffer byteBuffer = sut.encodeValue(key); - assertThat(toBytes(byteBuffer.duplicate())).isEqualTo(keyDeflateBytes); - - String s = sut.decodeValue(ByteBuffer.wrap(keyDeflateBytes)); - assertThat(s).isEqualTo(key); - } - - @Test(expected = IllegalStateException.class) - public void wrongCompressionTypeOnDecode() throws Exception { - RedisCodec sut = CompressionCodec.valueCompressor(new Utf8StringCodec(), - CompressionCodec.CompressionType.DEFLATE); - - sut.decodeValue(ByteBuffer.wrap(keyGzipBytes)); - } - - private String toString(ByteBuffer buffer) throws IOException { - byte[] bytes = toBytes(buffer); - return new String(bytes, "UTF-8"); - } - - private byte[] toBytes(ByteBuffer buffer) { - byte[] bytes; - if (buffer.hasArray()) { - bytes = buffer.array(); - } else { - bytes = new byte[buffer.remaining()]; - buffer.get(bytes); - } - return bytes; - } -} diff --git a/src/test/java/com/lambdaworks/redis/codec/StringCodecTest.java b/src/test/java/com/lambdaworks/redis/codec/StringCodecTest.java deleted file mode 100644 index ec067ee3fa..0000000000 --- a/src/test/java/com/lambdaworks/redis/codec/StringCodecTest.java +++ /dev/null @@ -1,105 +0,0 @@ -package com.lambdaworks.redis.codec; - -import static org.assertj.core.api.Assertions.assertThat; - -import java.nio.ByteBuffer; -import java.nio.charset.StandardCharsets; - -import org.junit.Test; - -import com.lambdaworks.redis.protocol.LettuceCharsets; - -import io.netty.buffer.ByteBuf; -import io.netty.buffer.Unpooled; - -/** - * @author Mark Paluch - */ -public class StringCodecTest { - - String teststring = "hello üäü~∑†®†ª€∂‚¶¢ 
Wørld"; - String teststringPlain = "hello uufadsfasdfadssdfadfs"; - - @Test - public void encodeUtf8Buf() throws Exception { - - StringCodec codec = new StringCodec(LettuceCharsets.UTF8); - - ByteBuf buffer = Unpooled.buffer(1234); - codec.encode(teststring, buffer); - - assertThat(buffer.toString(StandardCharsets.UTF_8)).isEqualTo(teststring); - } - - @Test - public void encodeAsciiBuf() throws Exception { - - StringCodec codec = new StringCodec(LettuceCharsets.ASCII); - - ByteBuf buffer = Unpooled.buffer(1234); - codec.encode(teststringPlain, buffer); - - assertThat(buffer.toString(LettuceCharsets.ASCII)).isEqualTo(teststringPlain); - } - - @Test - public void encodeIso88591Buf() throws Exception { - - StringCodec codec = new StringCodec(StandardCharsets.ISO_8859_1); - - ByteBuf buffer = Unpooled.buffer(1234); - codec.encodeValue(teststringPlain, buffer); - - assertThat(buffer.toString(StandardCharsets.ISO_8859_1)).isEqualTo(teststringPlain); - } - - @Test - public void encodeAndDecodeUtf8Buf() throws Exception { - - StringCodec codec = new StringCodec(LettuceCharsets.UTF8); - - ByteBuf buffer = Unpooled.buffer(1234); - codec.encodeKey(teststring, buffer); - - assertThat(codec.decodeKey(buffer.nioBuffer())).isEqualTo(teststring); - } - - @Test - public void encodeAndDecodeUtf8() throws Exception { - - StringCodec codec = new StringCodec(LettuceCharsets.UTF8); - ByteBuffer byteBuffer = codec.encodeKey(teststring); - - assertThat(codec.decodeKey(byteBuffer)).isEqualTo(teststring); - } - - @Test - public void encodeAndDecodeAsciiBuf() throws Exception { - - StringCodec codec = new StringCodec(LettuceCharsets.ASCII); - - ByteBuf buffer = Unpooled.buffer(1234); - codec.encode(teststringPlain, buffer); - - assertThat(codec.decodeKey(buffer.nioBuffer())).isEqualTo(teststringPlain); - } - - @Test - public void encodeAndDecodeIso88591Buf() throws Exception { - - StringCodec codec = new StringCodec(StandardCharsets.ISO_8859_1); - - ByteBuf buffer = Unpooled.buffer(1234); - codec.encode(teststringPlain, buffer); - - assertThat(codec.decodeKey(buffer.nioBuffer())).isEqualTo(teststringPlain); - } - - @Test - public void estimateSize() throws Exception { - - assertThat(new StringCodec(LettuceCharsets.UTF8).estimateSize(teststring)).isEqualTo((int) (teststring.length() * 1.1)); - assertThat(new StringCodec(LettuceCharsets.ASCII).estimateSize(teststring)).isEqualTo(teststring.length()); - assertThat(new StringCodec(StandardCharsets.ISO_8859_1).estimateSize(teststring)).isEqualTo(teststring.length()); - } -} \ No newline at end of file diff --git a/src/test/java/com/lambdaworks/redis/commands/BitCommandPoolTest.java b/src/test/java/com/lambdaworks/redis/commands/BitCommandPoolTest.java deleted file mode 100644 index 879ef88349..0000000000 --- a/src/test/java/com/lambdaworks/redis/commands/BitCommandPoolTest.java +++ /dev/null @@ -1,29 +0,0 @@ -// Copyright (C) 2012 - Will Glozer. All rights reserved. 
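
// A minimal sketch of the value-compression codec exercised by CompressionCodecTest above:
// keys pass through unchanged while values are GZIP-compressed on encode and expanded on decode.
import java.nio.ByteBuffer;

import com.lambdaworks.redis.codec.CompressionCodec;
import com.lambdaworks.redis.codec.RedisCodec;
import com.lambdaworks.redis.codec.Utf8StringCodec;

class CompressionCodecSketch {

    public static void main(String[] args) {

        RedisCodec<String, String> codec = CompressionCodec.valueCompressor(new Utf8StringCodec(),
                CompressionCodec.CompressionType.GZIP);

        ByteBuffer compressed = codec.encodeValue("some repetitive value value value");
        String roundTripped = codec.decodeValue(compressed);

        System.out.println(roundTripped); // prints the original value
    }
}
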
- -package com.lambdaworks.redis.commands; - -import com.lambdaworks.redis.RedisConnection; -import com.lambdaworks.redis.RedisConnectionPool; -import com.lambdaworks.redis.api.sync.RedisCommands; - -public class BitCommandPoolTest extends BitCommandTest { - RedisConnectionPool> pool; - RedisConnectionPool> bitpool; - - @Override - protected RedisCommands connect() { - pool = client.pool(new BitStringCodec(), 1, 5); - bitpool = client.pool(new BitStringCodec(), 1, 5); - bitstring = bitpool.allocateConnection(); - return pool.allocateConnection(); - } - - @Override - public void closeConnection() throws Exception { - pool.freeConnection(redis); - bitpool.freeConnection(bitstring); - - pool.close(); - bitpool.close(); - } -} diff --git a/src/test/java/com/lambdaworks/redis/commands/BitCommandTest.java b/src/test/java/com/lambdaworks/redis/commands/BitCommandTest.java deleted file mode 100644 index 2365c6597d..0000000000 --- a/src/test/java/com/lambdaworks/redis/commands/BitCommandTest.java +++ /dev/null @@ -1,211 +0,0 @@ -// Copyright (C) 2012 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis.commands; - -import static com.lambdaworks.redis.BitFieldArgs.OverflowType.WRAP; -import static com.lambdaworks.redis.BitFieldArgs.signed; -import static com.lambdaworks.redis.BitFieldArgs.unsigned; -import static org.assertj.core.api.Assertions.assertThat; - -import java.nio.ByteBuffer; -import java.util.List; - -import org.assertj.core.api.Assertions; -import org.junit.Test; - -import com.lambdaworks.redis.AbstractRedisClientTest; -import com.lambdaworks.redis.BitFieldArgs; -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.codec.Utf8StringCodec; - -public class BitCommandTest extends AbstractRedisClientTest { - - protected RedisCommands bitstring; - - @Override - protected RedisCommands connect() { - connectBitString(); - return super.connect(); - } - - protected void connectBitString() { - bitstring = client.connect(new BitStringCodec()).sync(); - } - - @Override - public void closeConnection() throws Exception { - bitstring.close(); - super.closeConnection(); - } - - @Test - public void bitcount() throws Exception { - assertThat((long) redis.bitcount(key)).isEqualTo(0); - - redis.setbit(key, 0, 1); - redis.setbit(key, 1, 1); - redis.setbit(key, 2, 1); - - assertThat((long) redis.bitcount(key)).isEqualTo(3); - assertThat(redis.bitcount(key, 3, -1)).isEqualTo(0); - } - - @Test - public void bitfieldType() throws Exception { - assertThat(signed(64).getBits()).isEqualTo(64); - assertThat(signed(64).isSigned()).isTrue(); - assertThat(unsigned(63).getBits()).isEqualTo(63); - assertThat(unsigned(63).isSigned()).isFalse(); - } - - @Test(expected = IllegalArgumentException.class) - public void bitfieldTypeSigned65() throws Exception { - signed(65); - } - - @Test(expected = IllegalArgumentException.class) - public void bitfieldTypeUnsigned64() throws Exception { - unsigned(64); - } - - @Test(expected = IllegalStateException.class) - public void bitfieldBuilderEmptyPreviousType() throws Exception { - new BitFieldArgs().overflow(WRAP).get(); - } - - @Test - public void bitfield() throws Exception { - - BitFieldArgs bitFieldArgs = BitFieldArgs.Builder.set(signed(8), 0, 1).set(5, 1).incrBy(2, 3).get().get(2); - - List values = redis.bitfield(key, bitFieldArgs); - - assertThat(values).containsExactly(0L, 32L, 3L, 0L, 3L); - assertThat(bitstring.get(key)).isEqualTo("0000000000010011"); - } - - @Test - public void bitfieldSet() throws Exception { - - BitFieldArgs 
bitFieldArgs = BitFieldArgs.Builder.set(signed(8), 0, 5).set(5); - - List values = redis.bitfield(key, bitFieldArgs); - - assertThat(values).containsExactly(0L, 5L); - assertThat(bitstring.get(key)).isEqualTo("10100000"); - } - - @Test - public void bitfieldIncrBy() throws Exception { - - BitFieldArgs bitFieldArgs = BitFieldArgs.Builder.set(signed(8), 0, 5).incrBy(1); - - List values = redis.bitfield(key, bitFieldArgs); - - assertThat(values).containsExactly(0L, 6L); - assertThat(bitstring.get(key)).isEqualTo("01100000"); - } - - @Test - public void bitfieldOverflow() throws Exception { - - BitFieldArgs bitFieldArgs = BitFieldArgs.Builder.overflow(WRAP).set(signed(8), 9, Integer.MAX_VALUE).get(signed(8)); - - List values = redis.bitfield(key, bitFieldArgs); - assertThat(values).containsExactly(0L, 0L); - assertThat(bitstring.get(key)).isEqualTo("000000001111111000000001"); - } - - @Test - public void bitpos() throws Exception { - assertThat((long) redis.bitcount(key)).isEqualTo(0); - redis.setbit(key, 0, 0); - redis.setbit(key, 1, 1); - - assertThat(bitstring.get(key)).isEqualTo("00000010"); - assertThat((long) redis.bitpos(key, true)).isEqualTo(1); - } - - @Test - public void bitposOffset() throws Exception { - assertThat((long) redis.bitcount(key)).isEqualTo(0); - redis.setbit(key, 0, 1); - redis.setbit(key, 1, 1); - redis.setbit(key, 2, 0); - redis.setbit(key, 3, 0); - redis.setbit(key, 4, 0); - redis.setbit(key, 5, 1); - - assertThat((long) bitstring.getbit(key, 1)).isEqualTo(1); - assertThat((long) bitstring.getbit(key, 4)).isEqualTo(0); - assertThat((long) bitstring.getbit(key, 5)).isEqualTo(1); - assertThat(bitstring.get(key)).isEqualTo("00100011"); - assertThat((long) redis.bitpos(key, false, 0, 0)).isEqualTo(2); - } - - @Test - public void bitopAnd() throws Exception { - redis.setbit("foo", 0, 1); - redis.setbit("bar", 1, 1); - redis.setbit("baz", 2, 1); - assertThat(redis.bitopAnd(key, "foo", "bar", "baz")).isEqualTo(1); - assertThat((long) redis.bitcount(key)).isEqualTo(0); - assertThat(bitstring.get(key)).isEqualTo("00000000"); - } - - @Test - public void bitopNot() throws Exception { - redis.setbit("foo", 0, 1); - redis.setbit("foo", 2, 1); - - assertThat(redis.bitopNot(key, "foo")).isEqualTo(1); - assertThat((long) redis.bitcount(key)).isEqualTo(6); - assertThat(bitstring.get(key)).isEqualTo("11111010"); - } - - @Test - public void bitopOr() throws Exception { - redis.setbit("foo", 0, 1); - redis.setbit("bar", 1, 1); - redis.setbit("baz", 2, 1); - assertThat(redis.bitopOr(key, "foo", "bar", "baz")).isEqualTo(1); - assertThat(bitstring.get(key)).isEqualTo("00000111"); - } - - @Test - public void bitopXor() throws Exception { - redis.setbit("foo", 0, 1); - redis.setbit("bar", 0, 1); - redis.setbit("baz", 2, 1); - assertThat(redis.bitopXor(key, "foo", "bar", "baz")).isEqualTo(1); - assertThat(bitstring.get(key)).isEqualTo("00000100"); - } - - @Test - public void getbit() throws Exception { - assertThat(redis.getbit(key, 0)).isEqualTo(0); - redis.setbit(key, 0, 1); - assertThat(redis.getbit(key, 0)).isEqualTo(1); - } - - @Test - public void setbit() throws Exception { - - assertThat(redis.setbit(key, 0, 1)).isEqualTo(0); - assertThat(redis.setbit(key, 0, 0)).isEqualTo(1); - } - - public static class BitStringCodec extends Utf8StringCodec { - @Override - public String decodeValue(ByteBuffer bytes) { - StringBuilder bits = new StringBuilder(bytes.remaining() * 8); - while (bytes.remaining() > 0) { - byte b = bytes.get(); - for (int i = 0; i < 8; i++) { - 
bits.append(Integer.valueOf(b >>> i & 1)); - } - } - return bits.toString(); - } - } -} diff --git a/src/test/java/com/lambdaworks/redis/commands/CustomCommandTest.java b/src/test/java/com/lambdaworks/redis/commands/CustomCommandTest.java deleted file mode 100644 index 41d7ef2160..0000000000 --- a/src/test/java/com/lambdaworks/redis/commands/CustomCommandTest.java +++ /dev/null @@ -1,121 +0,0 @@ -package com.lambdaworks.redis.commands; - -import static org.assertj.core.api.Assertions.assertThat; -import static org.junit.Assume.assumeTrue; - -import java.util.List; - -import org.junit.Test; - -import com.lambdaworks.redis.AbstractRedisClientTest; -import com.lambdaworks.redis.ReactiveCommandDispatcher; -import com.lambdaworks.redis.RedisCommandExecutionException; -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection; -import com.lambdaworks.redis.codec.Utf8StringCodec; -import com.lambdaworks.redis.output.StatusOutput; -import com.lambdaworks.redis.protocol.*; - -import rx.Observable; - -/** - * @author Mark Paluch - */ -public class CustomCommandTest extends AbstractRedisClientTest { - - protected final Utf8StringCodec utf8StringCodec = new Utf8StringCodec(); - - @Test - public void dispatchSet() throws Exception { - - String response = redis.dispatch(MyCommands.SET, new StatusOutput<>(utf8StringCodec), - new CommandArgs<>(utf8StringCodec).addKey(key).addValue(value)); - - assertThat(response).isEqualTo("OK"); - } - - @Test - public void dispatchWithoutArgs() throws Exception { - - String response = redis.dispatch(MyCommands.INFO, new StatusOutput<>(utf8StringCodec)); - - assertThat(response).contains("connected_clients"); - } - - @Test(expected = RedisCommandExecutionException.class) - public void dispatchShouldFailForWrongDataType() throws Exception { - - redis.hset(key, key, value); - redis.dispatch(CommandType.GET, new StatusOutput<>(utf8StringCodec), new CommandArgs<>(utf8StringCodec).addKey(key)); - } - - @Test - public void dispatchTransactions() throws Exception { - - redis.multi(); - String response = redis.dispatch(CommandType.SET, new StatusOutput<>(utf8StringCodec), - new CommandArgs<>(utf8StringCodec).addKey(key).addValue(value)); - - List exec = redis.exec(); - - assertThat(response).isNull(); - assertThat(exec).hasSize(1).contains("OK"); - } - - @Test - public void standaloneAsyncPing() throws Exception { - - RedisCommand command = new Command<>(MyCommands.PING, new StatusOutput<>(new Utf8StringCodec()), - null); - - AsyncCommand async = new AsyncCommand<>(command); - getStandaloneConnection().dispatch(async); - - assertThat(async.get()).isEqualTo("PONG"); - } - - @Test - public void standaloneFireAndForget() throws Exception { - - RedisCommand command = new Command<>(MyCommands.PING, new StatusOutput<>(new Utf8StringCodec()), - null); - getStandaloneConnection().dispatch(command); - assertThat(command.isCancelled()).isFalse(); - - } - - @Test - public void standaloneReactivePing() throws Exception { - - RedisCommand command = new Command<>(MyCommands.PING, new StatusOutput<>(new Utf8StringCodec()), - null); - ReactiveCommandDispatcher dispatcher = new ReactiveCommandDispatcher<>(command, - getStandaloneConnection(), false); - - String result = Observable.create(dispatcher).toBlocking().first(); - - assertThat(result).isEqualTo("PONG"); - } - - private StatefulRedisConnection getStandaloneConnection() { - - assumeTrue(redis.getStatefulConnection() instanceof StatefulRedisConnection); - return 
redis.getStatefulConnection(); - } - - public enum MyCommands implements ProtocolKeyword { - PING, SET, INFO; - - private final byte name[]; - - MyCommands() { - // cache the bytes for the command name. Reduces memory and cpu pressure when using commands. - name = name().getBytes(); - } - - @Override - public byte[] getBytes() { - return name; - } - } -} diff --git a/src/test/java/com/lambdaworks/redis/commands/GeoCommandTest.java b/src/test/java/com/lambdaworks/redis/commands/GeoCommandTest.java deleted file mode 100644 index 7f90e0d2da..0000000000 --- a/src/test/java/com/lambdaworks/redis/commands/GeoCommandTest.java +++ /dev/null @@ -1,444 +0,0 @@ -package com.lambdaworks.redis.commands; - -import static org.assertj.core.api.Assertions.assertThat; -import static org.assertj.core.api.Assertions.offset; - -import java.util.List; -import java.util.Set; - -import org.junit.Rule; -import org.junit.Test; -import org.junit.rules.ExpectedException; - -import com.lambdaworks.redis.*; - -public class GeoCommandTest extends AbstractRedisClientTest { - - @Rule - public ExpectedException expectedException = ExpectedException.none(); - - @Test - public void geoadd() throws Exception { - - Long result = redis.geoadd(key, -73.9454966, 40.747533, "lic market"); - assertThat(result).isEqualTo(1); - - Long readd = redis.geoadd(key, -73.9454966, 40.747533, "lic market"); - assertThat(readd).isEqualTo(0); - } - - @Test - public void geoaddWithTransaction() throws Exception { - - redis.multi(); - redis.geoadd(key, -73.9454966, 40.747533, "lic market"); - redis.geoadd(key, -73.9454966, 40.747533, "lic market"); - - assertThat(redis.exec()).containsSequence(1L, 0L); - } - - @Test - public void geoaddMulti() throws Exception { - - Long result = redis.geoadd(key, 8.6638775, 49.5282537, "Weinheim", 8.3796281, 48.9978127, "EFS9", 8.665351, 49.553302, - "Bahn"); - assertThat(result).isEqualTo(3); - } - - @Test - public void geoaddMultiWithTransaction() throws Exception { - - redis.multi(); - redis.geoadd(key, 8.6638775, 49.5282537, "Weinheim", 8.3796281, 48.9978127, "EFS9", 8.665351, 49.553302, "Bahn"); - - assertThat(redis.exec()).contains(3L); - } - - @Test(expected = IllegalArgumentException.class) - public void geoaddMultiWrongArgument() throws Exception { - redis.geoadd(key, 49.528253); - } - - protected void prepareGeo() { - redis.geoadd(key, 8.6638775, 49.5282537, "Weinheim"); - redis.geoadd(key, 8.3796281, 48.9978127, "EFS9", 8.665351, 49.553302, "Bahn"); - } - - @Test - public void georadius() throws Exception { - - prepareGeo(); - - Set georadius = redis.georadius(key, 8.6582861, 49.5285695, 1, GeoArgs.Unit.km); - assertThat(georadius).hasSize(1).contains("Weinheim"); - - Set largerGeoradius = redis.georadius(key, 8.6582861, 49.5285695, 5, GeoArgs.Unit.km); - assertThat(largerGeoradius).hasSize(2).contains("Weinheim").contains("Bahn"); - } - - @Test - public void georadiusWithTransaction() throws Exception { - - prepareGeo(); - - redis.multi(); - redis.georadius(key, 8.6582861, 49.5285695, 1, GeoArgs.Unit.km); - redis.georadius(key, 8.6582861, 49.5285695, 5, GeoArgs.Unit.km); - - List exec = redis.exec(); - Set georadius = (Set) exec.get(0); - Set largerGeoradius = (Set) exec.get(1); - - assertThat(georadius).hasSize(1).contains("Weinheim"); - assertThat(largerGeoradius).hasSize(2).contains("Weinheim").contains("Bahn"); - } - - @Test - public void geodist() throws Exception { - - prepareGeo(); - - Double result = redis.geodist(key, "Weinheim", "Bahn", GeoArgs.Unit.km); - // 10 mins with the bike - 
assertThat(result).isGreaterThan(2.5).isLessThan(2.9); - } - - @Test - public void geodistWithTransaction() throws Exception { - - prepareGeo(); - - redis.multi(); - redis.geodist(key, "Weinheim", "Bahn", GeoArgs.Unit.km); - Double result = (Double) redis.exec().get(0); - - // 10 mins with the bike - assertThat(result).isGreaterThan(2.5).isLessThan(2.9); - - } - - @Test - public void geopos() throws Exception { - - prepareGeo(); - - List geopos = redis.geopos(key, "Weinheim", "foobar", "Bahn"); - - assertThat(geopos).hasSize(3); - assertThat(geopos.get(0).x.doubleValue()).isEqualTo(8.6638, offset(0.001)); - assertThat(geopos.get(1)).isNull(); - assertThat(geopos.get(2)).isNotNull(); - } - - @Test - public void geoposWithTransaction() throws Exception { - - prepareGeo(); - - redis.multi(); - redis.geopos(key, "Weinheim", "foobar", "Bahn"); - redis.geopos(key, "Weinheim", "foobar", "Bahn"); - List geopos = (List) redis.exec().get(1); - - assertThat(geopos).hasSize(3); - assertThat(geopos.get(0).x.doubleValue()).isEqualTo(8.6638, offset(0.001)); - assertThat(geopos.get(1)).isNull(); - assertThat(geopos.get(2)).isNotNull(); - } - - @Test - public void georadiusWithArgs() throws Exception { - - prepareGeo(); - - GeoArgs geoArgs = new GeoArgs().withHash().withCoordinates().withDistance().withCount(1).desc(); - - List> result = redis.georadius(key, 8.665351, 49.553302, 5, GeoArgs.Unit.km, geoArgs); - assertThat(result).hasSize(1); - - GeoWithin weinheim = result.get(0); - - assertThat(weinheim.member).isEqualTo("Weinheim"); - assertThat(weinheim.geohash).isEqualTo(3666615932941099L); - - assertThat(weinheim.distance).isEqualTo(2.7882, offset(0.5)); - assertThat(weinheim.coordinates.x.doubleValue()).isEqualTo(8.663875, offset(0.5)); - assertThat(weinheim.coordinates.y.doubleValue()).isEqualTo(49.52825, offset(0.5)); - - result = redis.georadius(key, 8.665351, 49.553302, 1, GeoArgs.Unit.km, new GeoArgs()); - assertThat(result).hasSize(1); - - GeoWithin bahn = result.get(0); - - assertThat(bahn.member).isEqualTo("Bahn"); - assertThat(bahn.geohash).isNull(); - - assertThat(bahn.distance).isNull(); - assertThat(bahn.coordinates).isNull(); - } - - @Test - public void georadiusWithArgsAndTransaction() throws Exception { - - prepareGeo(); - - redis.multi(); - GeoArgs geoArgs = new GeoArgs().withHash().withCoordinates().withDistance().withCount(1).desc(); - redis.georadius(key, 8.665351, 49.553302, 5, GeoArgs.Unit.km, geoArgs); - redis.georadius(key, 8.665351, 49.553302, 5, GeoArgs.Unit.km, geoArgs); - List exec = redis.exec(); - - assertThat(exec).hasSize(2); - - List> result = (List) exec.get(1); - assertThat(result).hasSize(1); - - GeoWithin weinheim = result.get(0); - - assertThat(weinheim.member).isEqualTo("Weinheim"); - assertThat(weinheim.geohash).isEqualTo(3666615932941099L); - - assertThat(weinheim.distance).isEqualTo(2.7882, offset(0.5)); - assertThat(weinheim.coordinates.x.doubleValue()).isEqualTo(8.663875, offset(0.5)); - assertThat(weinheim.coordinates.y.doubleValue()).isEqualTo(49.52825, offset(0.5)); - - result = redis.georadius(key, 8.665351, 49.553302, 1, GeoArgs.Unit.km, new GeoArgs()); - assertThat(result).hasSize(1); - - GeoWithin bahn = result.get(0); - - assertThat(bahn.member).isEqualTo("Bahn"); - assertThat(bahn.geohash).isNull(); - - assertThat(bahn.distance).isNull(); - assertThat(bahn.coordinates).isNull(); - } - - @Test - public void geohash() throws Exception { - - prepareGeo(); - - List geohash = redis.geohash(key, "Weinheim", "Bahn", "dunno"); - - 
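
// A short, self-contained sketch of the GEO commands these tests cover. Host, port and key
// names are placeholders; the coordinates reuse the Weinheim/Bahn fixtures from the tests above.
import java.util.List;

import com.lambdaworks.redis.GeoArgs;
import com.lambdaworks.redis.GeoWithin;
import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.RedisURI;
import com.lambdaworks.redis.api.sync.RedisCommands;

class GeoCommandsSketch {

    public static void main(String[] args) {

        RedisClient client = RedisClient.create(RedisURI.create("localhost", 6379));
        RedisCommands<String, String> redis = client.connect().sync();

        redis.geoadd("geo", 8.6638775, 49.5282537, "Weinheim", 8.665351, 49.553302, "Bahn");

        // Members within 5 km of the given point, enriched with distance and coordinates.
        List<GeoWithin<String>> nearby = redis.georadius("geo", 8.665351, 49.553302, 5, GeoArgs.Unit.km,
                new GeoArgs().withDistance().withCoordinates());

        for (GeoWithin<String> hit : nearby) {
            System.out.println(hit.member + " " + hit.distance + " km");
        }

        client.shutdown();
    }
}
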
assertThat(geohash).containsSequence("u0y1v0kffz0", "u0y1vhvuvm0", null); - } - - @Test - public void geohashUnknownKey() throws Exception { - - prepareGeo(); - - List geohash = redis.geohash("dunno", "member"); - - assertThat(geohash).isEmpty(); - } - - @Test - public void geohashWithTransaction() throws Exception { - - prepareGeo(); - - redis.multi(); - redis.geohash(key, "Weinheim", "Bahn", "dunno"); - redis.geohash(key, "Weinheim", "Bahn", "dunno"); - List exec = redis.exec(); - - List geohash = (List) exec.get(1); - - assertThat(geohash).containsSequence("u0y1v0kffz0", "u0y1vhvuvm0", null); - } - - @Test - public void georadiusStore() throws Exception { - - prepareGeo(); - - String resultKey = "38o54"; // yields in same slot as "key" - Long result = redis.georadius(key, 8.665351, 49.553302, 5, GeoArgs.Unit.km, - new GeoRadiusStoreArgs<>().withStore(resultKey)); - assertThat(result).isEqualTo(2); - - List> results = redis.zrangeWithScores(resultKey, 0, -1); - assertThat(results).hasSize(2); - } - - @Test - public void georadiusStoreWithCountAndSort() throws Exception { - - prepareGeo(); - - String resultKey = "38o54"; // yields in same slot as "key" - Long result = redis.georadius(key, 8.665351, 49.553302, 5, GeoArgs.Unit.km, new GeoRadiusStoreArgs<>().withCount(1) - .desc().withStore(resultKey)); - assertThat(result).isEqualTo(1); - - List> results = redis.zrangeWithScores(resultKey, 0, -1); - assertThat(results).hasSize(1); - assertThat(results.get(0).score).isGreaterThan(99999); - } - - @Test - public void georadiusStoreDist() throws Exception { - - prepareGeo(); - - String resultKey = "38o54"; // yields in same slot as "key" - Long result = redis.georadius(key, 8.665351, 49.553302, 5, GeoArgs.Unit.km, - new GeoRadiusStoreArgs<>().withStoreDist("38o54")); - assertThat(result).isEqualTo(2); - - List> dist = redis.zrangeWithScores(resultKey, 0, -1); - assertThat(dist).hasSize(2); - } - - @Test - public void georadiusStoreDistWithCountAndSort() throws Exception { - - prepareGeo(); - - String resultKey = "38o54"; // yields in same slot as "key" - Long result = redis.georadius(key, 8.665351, 49.553302, 5, GeoArgs.Unit.km, new GeoRadiusStoreArgs<>().withCount(1) - .desc().withStoreDist("38o54")); - assertThat(result).isEqualTo(1); - - List> dist = redis.zrangeWithScores(resultKey, 0, -1); - assertThat(dist).hasSize(1); - - assertThat(dist.get(0).score).isBetween(2d, 3d); - } - - @Test(expected = IllegalArgumentException.class) - public void georadiusWithNullArgs() throws Exception { - redis.georadius(key, 8.665351, 49.553302, 5, GeoArgs.Unit.km, (GeoArgs) null); - } - - @Test(expected = IllegalArgumentException.class) - public void georadiusStoreWithNullArgs() throws Exception { - redis.georadius(key, 8.665351, 49.553302, 5, GeoArgs.Unit.km, (GeoRadiusStoreArgs) null); - } - - @Test - public void georadiusbymember() throws Exception { - - prepareGeo(); - - Set empty = redis.georadiusbymember(key, "Bahn", 1, GeoArgs.Unit.km); - assertThat(empty).hasSize(1).contains("Bahn"); - - Set georadiusbymember = redis.georadiusbymember(key, "Bahn", 5, GeoArgs.Unit.km); - assertThat(georadiusbymember).hasSize(2).contains("Bahn", "Weinheim"); - } - - @Test - public void georadiusbymemberStoreDistWithCountAndSort() throws Exception { - - prepareGeo(); - - String resultKey = "38o54"; // yields in same slot as "key" - Long result = redis.georadiusbymember(key, "Bahn", 5, GeoArgs.Unit.km, new GeoRadiusStoreArgs<>().withCount(1).desc() - .withStoreDist("38o54")); - assertThat(result).isEqualTo(1); - - List> 
dist = redis.zrangeWithScores(resultKey, 0, -1); - assertThat(dist).hasSize(1); - - assertThat(dist.get(0).score).isBetween(2d, 3d); - } - - @Test - public void georadiusbymemberWithArgs() throws Exception { - - prepareGeo(); - - List> empty = redis.georadiusbymember(key, "Bahn", 1, GeoArgs.Unit.km, - new GeoArgs().withHash().withCoordinates().withDistance().desc()); - assertThat(empty).isNotEmpty(); - - List> withDistanceAndCoordinates = redis.georadiusbymember(key, "Bahn", 5, GeoArgs.Unit.km, - new GeoArgs().withCoordinates().withDistance().desc()); - assertThat(withDistanceAndCoordinates).hasSize(2); - - GeoWithin weinheim = withDistanceAndCoordinates.get(0); - assertThat(weinheim.member).isEqualTo("Weinheim"); - assertThat(weinheim.geohash).isNull(); - assertThat(weinheim.distance).isNotNull(); - assertThat(weinheim.coordinates).isNotNull(); - - List> withDistanceAndHash = redis.georadiusbymember(key, "Bahn", 5, GeoArgs.Unit.km, - new GeoArgs().withDistance().withHash().desc()); - assertThat(withDistanceAndHash).hasSize(2); - - GeoWithin weinheimDistanceHash = withDistanceAndHash.get(0); - assertThat(weinheimDistanceHash.member).isEqualTo("Weinheim"); - assertThat(weinheimDistanceHash.geohash).isNotNull(); - assertThat(weinheimDistanceHash.distance).isNotNull(); - assertThat(weinheimDistanceHash.coordinates).isNull(); - - List> withCoordinates = redis.georadiusbymember(key, "Bahn", 5, GeoArgs.Unit.km, - new GeoArgs().withCoordinates().desc()); - assertThat(withCoordinates).hasSize(2); - - GeoWithin weinheimCoordinates = withCoordinates.get(0); - assertThat(weinheimCoordinates.member).isEqualTo("Weinheim"); - assertThat(weinheimCoordinates.geohash).isNull(); - assertThat(weinheimCoordinates.distance).isNull(); - assertThat(weinheimCoordinates.coordinates).isNotNull(); - } - - @Test - public void georadiusbymemberWithArgsAndTransaction() throws Exception { - - prepareGeo(); - - redis.multi(); - redis.georadiusbymember(key, "Bahn", 1, GeoArgs.Unit.km, - new GeoArgs().withHash().withCoordinates().withDistance().desc()); - redis.georadiusbymember(key, "Bahn", 5, GeoArgs.Unit.km, new GeoArgs().withCoordinates().withDistance().desc()); - redis.georadiusbymember(key, "Bahn", 5, GeoArgs.Unit.km, new GeoArgs().withDistance().withHash().desc()); - redis.georadiusbymember(key, "Bahn", 5, GeoArgs.Unit.km, new GeoArgs().withCoordinates().desc()); - - List exec = redis.exec(); - - List> empty = (List) exec.get(0); - assertThat(empty).isNotEmpty(); - - List> withDistanceAndCoordinates = (List) exec.get(1); - assertThat(withDistanceAndCoordinates).hasSize(2); - - GeoWithin weinheim = withDistanceAndCoordinates.get(0); - assertThat(weinheim.member).isEqualTo("Weinheim"); - assertThat(weinheim.geohash).isNull(); - assertThat(weinheim.distance).isNotNull(); - assertThat(weinheim.coordinates).isNotNull(); - - List> withDistanceAndHash = (List) exec.get(2); - assertThat(withDistanceAndHash).hasSize(2); - - GeoWithin weinheimDistanceHash = withDistanceAndHash.get(0); - assertThat(weinheimDistanceHash.member).isEqualTo("Weinheim"); - assertThat(weinheimDistanceHash.geohash).isNotNull(); - assertThat(weinheimDistanceHash.distance).isNotNull(); - assertThat(weinheimDistanceHash.coordinates).isNull(); - - List> withCoordinates = (List) exec.get(3); - assertThat(withCoordinates).hasSize(2); - - GeoWithin weinheimCoordinates = withCoordinates.get(0); - assertThat(weinheimCoordinates.member).isEqualTo("Weinheim"); - assertThat(weinheimCoordinates.geohash).isNull(); - 
assertThat(weinheimCoordinates.distance).isNull(); - assertThat(weinheimCoordinates.coordinates).isNotNull(); - } - - @Test(expected = IllegalArgumentException.class) - public void georadiusbymemberWithNullArgs() throws Exception { - redis.georadiusbymember(key, "Bahn", 1, GeoArgs.Unit.km, (GeoArgs) null); - } - - @Test(expected = IllegalArgumentException.class) - public void georadiusStorebymemberWithNullArgs() throws Exception { - redis.georadiusbymember(key, "Bahn", 1, GeoArgs.Unit.km, (GeoRadiusStoreArgs) null); - } - -} diff --git a/src/test/java/com/lambdaworks/redis/commands/HLLCommandTest.java b/src/test/java/com/lambdaworks/redis/commands/HLLCommandTest.java deleted file mode 100644 index 5213d51cf6..0000000000 --- a/src/test/java/com/lambdaworks/redis/commands/HLLCommandTest.java +++ /dev/null @@ -1,112 +0,0 @@ -package com.lambdaworks.redis.commands; - -import static org.assertj.core.api.Assertions.assertThat; -import static org.assertj.core.api.Fail.fail; - -import org.junit.Rule; -import org.junit.Test; -import org.junit.rules.ExpectedException; - -import com.lambdaworks.redis.AbstractRedisClientTest; -import com.lambdaworks.redis.RedisHLLConnection; -import com.lambdaworks.redis.api.sync.RedisHLLCommands; - -public class HLLCommandTest extends AbstractRedisClientTest { - @Rule - public ExpectedException exception = ExpectedException.none(); - - private RedisHLLCommands commands() { - return redis; - } - - private RedisHLLConnection connection() { - return redis; - } - - @Test - public void pfadd() throws Exception { - - assertThat(commands().pfadd(key, value, value)).isEqualTo(1); - assertThat(commands().pfadd(key, value, value)).isEqualTo(0); - assertThat(commands().pfadd(key, value)).isEqualTo(0); - } - - @Test - public void pfaddDeprecated() throws Exception { - assertThat(connection().pfadd(key, value, value)).isEqualTo(1); - assertThat(connection().pfadd(key, value, value)).isEqualTo(0); - assertThat(connection().pfadd(key, value)).isEqualTo(0); - } - - @Test(expected = IllegalArgumentException.class) - public void pfaddNoValues() throws Exception { - commands().pfadd(key); - } - - @Test - public void pfaddNullValues() throws Exception { - try { - commands().pfadd(key, null); - fail("Missing IllegalArgumentException"); - } catch (IllegalArgumentException e) { - } - try { - commands().pfadd(key, value, null); - fail("Missing IllegalArgumentException"); - } catch (IllegalArgumentException e) { - } - } - - @Test - public void pfmerge() throws Exception { - commands().pfadd(key, value); - commands().pfadd("key2", "value2"); - commands().pfadd("key3", "value3"); - - assertThat(commands().pfmerge(key, "key2", "key3")).isEqualTo("OK"); - assertThat(commands().pfcount(key)).isEqualTo(3); - - commands().pfadd("key2660", "rand", "mat"); - commands().pfadd("key7112", "mat", "perrin"); - - commands().pfmerge("key8885", "key2660", "key7112"); - - assertThat(commands().pfcount("key8885")).isEqualTo(3); - } - - @Test - public void pfmergeDeprecated() throws Exception { - connection().pfadd(key, value); - connection().pfadd("key2", "value2"); - connection().pfadd("key3", "value3"); - - assertThat(connection().pfmerge(key, "key2", "key3")).isEqualTo("OK"); - } - - @Test(expected = IllegalArgumentException.class) - public void pfmergeNoKeys() throws Exception { - commands().pfmerge(key); - } - - @Test - public void pfcount() throws Exception { - commands().pfadd(key, value); - commands().pfadd("key2", "value2"); - assertThat(commands().pfcount(key)).isEqualTo(1); - 
assertThat(commands().pfcount(key, "key2")).isEqualTo(2); - } - - @Test - public void pfcountDeprecated() throws Exception { - connection().pfadd(key, value); - connection().pfadd("key2", "value2"); - assertThat(connection().pfcount(key)).isEqualTo(1); - assertThat(connection().pfcount(key, "key2")).isEqualTo(2); - } - - @Test(expected = IllegalArgumentException.class) - public void pfcountNoKeys() throws Exception { - commands().pfcount(); - } - -} diff --git a/src/test/java/com/lambdaworks/redis/commands/HashCommandPoolTest.java b/src/test/java/com/lambdaworks/redis/commands/HashCommandPoolTest.java deleted file mode 100644 index 7c72df3061..0000000000 --- a/src/test/java/com/lambdaworks/redis/commands/HashCommandPoolTest.java +++ /dev/null @@ -1,25 +0,0 @@ -package com.lambdaworks.redis.commands; - -import com.lambdaworks.redis.RedisConnection; -import com.lambdaworks.redis.RedisConnectionPool; -import com.lambdaworks.redis.api.sync.RedisCommands; - -/** - * @author Mark Paluch - */ -public class HashCommandPoolTest extends HashCommandTest { - - RedisConnectionPool> pool; - - @Override - protected RedisCommands connect() { - pool = client.pool(); - return pool.allocateConnection(); - } - - @Override - public void closeConnection() throws Exception { - pool.freeConnection(redis); - pool.close(); - } -} diff --git a/src/test/java/com/lambdaworks/redis/commands/HashCommandTest.java b/src/test/java/com/lambdaworks/redis/commands/HashCommandTest.java deleted file mode 100644 index 430fa5e06a..0000000000 --- a/src/test/java/com/lambdaworks/redis/commands/HashCommandTest.java +++ /dev/null @@ -1,340 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis.commands; - -import static org.assertj.core.api.Assertions.assertThat; -import static org.assertj.core.api.Assertions.offset; - -import java.util.Collections; -import java.util.LinkedHashMap; -import java.util.List; -import java.util.Map; - -import org.junit.Test; - -import com.lambdaworks.redis.*; - -public class HashCommandTest extends AbstractRedisClientTest { - - @Test - public void hdel() throws Exception { - assertThat(redis.hdel(key, "one")).isEqualTo(0); - redis.hset(key, "two", "2"); - assertThat(redis.hdel(key, "one")).isEqualTo(0); - redis.hset(key, "one", "1"); - assertThat(redis.hdel(key, "one")).isEqualTo(1); - redis.hset(key, "one", "1"); - assertThat(redis.hdel(key, "one", "two")).isEqualTo(2); - } - - @Test - public void hexists() throws Exception { - assertThat(redis.hexists(key, "one")).isFalse(); - redis.hset(key, "two", "2"); - assertThat(redis.hexists(key, "one")).isFalse(); - redis.hset(key, "one", "1"); - assertThat(redis.hexists(key, "one")).isTrue(); - } - - @Test - public void hget() throws Exception { - assertThat(redis.hget(key, "one")).isNull(); - redis.hset(key, "one", "1"); - assertThat(redis.hget(key, "one")).isEqualTo("1"); - } - - @Test - public void hgetall() throws Exception { - assertThat(redis.hgetall(key).isEmpty()).isTrue(); - - redis.hset(key, "zero", "0"); - redis.hset(key, "one", "1"); - redis.hset(key, "two", "2"); - - Map map = redis.hgetall(key); - - assertThat(map).hasSize(3); - assertThat(map.keySet()).containsExactly("zero", "one", "two"); - } - - @Test - public void hgetallStreaming() throws Exception { - - KeyValueStreamingAdapter adapter = new KeyValueStreamingAdapter(); - - assertThat(redis.hgetall(key).isEmpty()).isTrue(); - redis.hset(key, "one", "1"); - redis.hset(key, "two", "2"); - Long count = redis.hgetall(adapter, key); - Map map = 
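
// A minimal sketch of the HyperLogLog commands covered above: PFADD records elements,
// PFMERGE unions several HLLs, PFCOUNT estimates the distinct cardinality. Key names are placeholders.
import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.RedisURI;
import com.lambdaworks.redis.api.sync.RedisCommands;

class HyperLogLogSketch {

    public static void main(String[] args) {

        RedisClient client = RedisClient.create(RedisURI.create("localhost", 6379));
        RedisCommands<String, String> redis = client.connect().sync();

        redis.pfadd("hll:a", "rand", "mat");
        redis.pfadd("hll:b", "mat", "perrin");

        redis.pfmerge("hll:all", "hll:a", "hll:b");

        // Approximate number of distinct elements across both source keys: 3.
        System.out.println(redis.pfcount("hll:all"));

        client.shutdown();
    }
}
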
adapter.getMap(); - assertThat(count.intValue()).isEqualTo(2); - assertThat(map).hasSize(2); - assertThat(map.get("one")).isEqualTo("1"); - assertThat(map.get("two")).isEqualTo("2"); - } - - @Test - public void hincrby() throws Exception { - assertThat(redis.hincrby(key, "one", 1)).isEqualTo(1); - assertThat(redis.hincrby(key, "one", -2)).isEqualTo(-1); - } - - @Test - public void hincrbyfloat() throws Exception { - assertThat(redis.hincrbyfloat(key, "one", 1.0)).isEqualTo(1.0); - assertThat(redis.hincrbyfloat(key, "one", -2.0)).isEqualTo(-1.0); - assertThat(redis.hincrbyfloat(key, "one", 1.23)).isEqualTo(0.23, offset(0.001)); - } - - @Test - public void hkeys() throws Exception { - setup(); - List keys = redis.hkeys(key); - assertThat(keys).hasSize(2); - assertThat(keys.containsAll(list("one", "two"))).isTrue(); - } - - @Test - public void hkeysStreaming() throws Exception { - setup(); - ListStreamingAdapter streamingAdapter = new ListStreamingAdapter(); - - Long count = redis.hkeys(streamingAdapter, key); - assertThat(count.longValue()).isEqualTo(2); - - List keys = streamingAdapter.getList(); - assertThat(keys).hasSize(2); - assertThat(keys.containsAll(list("one", "two"))).isTrue(); - } - - private void setup() { - assertThat(redis.hkeys(key)).isEqualTo(list()); - redis.hset(key, "one", "1"); - redis.hset(key, "two", "2"); - } - - @Test - public void hlen() throws Exception { - assertThat((long) redis.hlen(key)).isEqualTo(0); - redis.hset(key, "one", "1"); - assertThat((long) redis.hlen(key)).isEqualTo(1); - } - - @Test - public void hstrlen() throws Exception { - assertThat((long) redis.hstrlen(key, "one")).isEqualTo(0); - redis.hset(key, "one", value); - assertThat((long) redis.hstrlen(key, "one")).isEqualTo(value.length()); - } - - @Test - public void hmget() throws Exception { - setupHmget(); - List values = redis.hmget(key, "one", "two"); - assertThat(values).hasSize(2); - assertThat(values.containsAll(list("1", "1"))).isTrue(); - } - - private void setupHmget() { - assertThat(redis.hmget(key, "one", "two")).isEqualTo(list(null, null)); - redis.hset(key, "one", "1"); - redis.hset(key, "two", "2"); - } - - @Test - public void hmgetStreaming() throws Exception { - setupHmget(); - - ListStreamingAdapter streamingAdapter = new ListStreamingAdapter(); - Long count = redis.hmget(streamingAdapter, key, "one", "two"); - List values = streamingAdapter.getList(); - assertThat(count.intValue()).isEqualTo(2); - assertThat(values).hasSize(2); - assertThat(values.containsAll(list("1", "1"))).isTrue(); - } - - @Test - public void hmset() throws Exception { - Map hash = new LinkedHashMap<>(); - hash.put("one", "1"); - hash.put("two", "2"); - assertThat(redis.hmset(key, hash)).isEqualTo("OK"); - assertThat(redis.hmget(key, "one", "two")).isEqualTo(list("1", "2")); - } - - @Test - public void hmsetWithNulls() throws Exception { - Map hash = new LinkedHashMap<>(); - hash.put("one", null); - assertThat(redis.hmset(key, hash)).isEqualTo("OK"); - assertThat(redis.hmget(key, "one")).isEqualTo(list("")); - - hash.put("one", ""); - assertThat(redis.hmset(key, hash)).isEqualTo("OK"); - assertThat(redis.hmget(key, "one")).isEqualTo(list("")); - } - - @Test - public void hset() throws Exception { - assertThat(redis.hset(key, "one", "1")).isTrue(); - assertThat(redis.hset(key, "one", "1")).isFalse(); - } - - @Test - public void hsetnx() throws Exception { - redis.hset(key, "one", "1"); - assertThat(redis.hsetnx(key, "one", "2")).isFalse(); - assertThat(redis.hget(key, "one")).isEqualTo("1"); - } - - @Test - 
public void hvals() throws Exception { - assertThat(redis.hvals(key)).isEqualTo(list()); - redis.hset(key, "one", "1"); - redis.hset(key, "two", "2"); - List values = redis.hvals(key); - assertThat(values).hasSize(2); - assertThat(values.containsAll(list("1", "1"))).isTrue(); - } - - @Test - public void hvalsStreaming() throws Exception { - assertThat(redis.hvals(key)).isEqualTo(list()); - redis.hset(key, "one", "1"); - redis.hset(key, "two", "2"); - - ListStreamingAdapter channel = new ListStreamingAdapter(); - Long count = redis.hvals(channel, key); - assertThat(count.intValue()).isEqualTo(2); - assertThat(channel.getList()).hasSize(2); - assertThat(channel.getList().containsAll(list("1", "1"))).isTrue(); - } - - @Test - public void hscan() throws Exception { - redis.hset(key, key, value); - MapScanCursor cursor = redis.hscan(key); - - assertThat(cursor.getCursor()).isEqualTo("0"); - assertThat(cursor.isFinished()).isTrue(); - assertThat(cursor.getMap()).isEqualTo(Collections.singletonMap(key, value)); - } - - @Test - public void hscanWithCursor() throws Exception { - redis.hset(key, key, value); - - MapScanCursor cursor = redis.hscan(key, ScanCursor.INITIAL); - - assertThat(cursor.getCursor()).isEqualTo("0"); - assertThat(cursor.isFinished()).isTrue(); - assertThat(cursor.getMap()).isEqualTo(Collections.singletonMap(key, value)); - } - - @Test - public void hscanWithCursorAndArgs() throws Exception { - redis.hset(key, key, value); - - MapScanCursor cursor = redis.hscan(key, ScanCursor.INITIAL, ScanArgs.Builder.limit(2)); - - assertThat(cursor.getCursor()).isEqualTo("0"); - assertThat(cursor.isFinished()).isTrue(); - assertThat(cursor.getMap()).isEqualTo(Collections.singletonMap(key, value)); - } - - @Test - public void hscanStreaming() throws Exception { - redis.hset(key, key, value); - KeyValueStreamingAdapter adapter = new KeyValueStreamingAdapter(); - - StreamScanCursor cursor = redis.hscan(adapter, key, ScanArgs.Builder.limit(100).match("*")); - - assertThat(cursor.getCount()).isEqualTo(1); - assertThat(cursor.getCursor()).isEqualTo("0"); - assertThat(cursor.isFinished()).isTrue(); - assertThat(adapter.getMap()).isEqualTo(Collections.singletonMap(key, value)); - } - - @Test - public void hscanStreamingWithCursor() throws Exception { - redis.hset(key, key, value); - KeyValueStreamingAdapter adapter = new KeyValueStreamingAdapter(); - - StreamScanCursor cursor = redis.hscan(adapter, key, ScanCursor.INITIAL); - - assertThat(cursor.getCount()).isEqualTo(1); - assertThat(cursor.getCursor()).isEqualTo("0"); - assertThat(cursor.isFinished()).isTrue(); - } - - @Test - public void hscanStreamingWithCursorAndArgs() throws Exception { - redis.hset(key, key, value); - KeyValueStreamingAdapter adapter = new KeyValueStreamingAdapter(); - - StreamScanCursor cursor3 = redis.hscan(adapter, key, ScanCursor.INITIAL, ScanArgs.Builder.limit(100).match("*")); - - assertThat(cursor3.getCount()).isEqualTo(1); - assertThat(cursor3.getCursor()).isEqualTo("0"); - assertThat(cursor3.isFinished()).isTrue(); - } - - @Test - public void hscanStreamingWithArgs() throws Exception { - redis.hset(key, key, value); - KeyValueStreamingAdapter adapter = new KeyValueStreamingAdapter(); - - StreamScanCursor cursor = redis.hscan(adapter, key); - - assertThat(cursor.getCount()).isEqualTo(1); - assertThat(cursor.getCursor()).isEqualTo("0"); - assertThat(cursor.isFinished()).isTrue(); - } - - @Test - public void hscanMultiple() throws Exception { - - Map expect = new LinkedHashMap<>(); - Map check = new LinkedHashMap<>(); - 
setup100KeyValues(expect); - - MapScanCursor cursor = redis.hscan(key, ScanArgs.Builder.limit(5)); - - assertThat(cursor.getCursor()).isNotNull(); - assertThat(cursor.getMap()).hasSize(100); - - assertThat(cursor.getCursor()).isEqualTo("0"); - assertThat(cursor.isFinished()).isTrue(); - - check.putAll(cursor.getMap()); - - while (!cursor.isFinished()) { - cursor = redis.hscan(key, cursor); - check.putAll(cursor.getMap()); - } - - assertThat(check).isEqualTo(expect); - } - - @Test - public void hscanMatch() throws Exception { - - Map expect = new LinkedHashMap<>(); - setup100KeyValues(expect); - - MapScanCursor cursor = redis.hscan(key, ScanArgs.Builder.limit(100).match("key1*")); - - assertThat(cursor.getCursor()).isEqualTo("0"); - assertThat(cursor.isFinished()).isTrue(); - - assertThat(cursor.getMap()).hasSize(11); - } - - protected void setup100KeyValues(Map expect) { - for (int i = 0; i < 100; i++) { - expect.put(key + i, value + 1); - } - - redis.hmset(key, expect); - } -} diff --git a/src/test/java/com/lambdaworks/redis/commands/KeyCommandTest.java b/src/test/java/com/lambdaworks/redis/commands/KeyCommandTest.java deleted file mode 100644 index 993b5f5e37..0000000000 --- a/src/test/java/com/lambdaworks/redis/commands/KeyCommandTest.java +++ /dev/null @@ -1,410 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis.commands; - -import static org.assertj.core.api.Assertions.assertThat; -import static org.junit.Assert.assertNotEquals; - -import java.util.*; - -import org.junit.Rule; -import org.junit.Test; -import org.junit.rules.ExpectedException; - -import com.lambdaworks.redis.*; - -public class KeyCommandTest extends AbstractRedisClientTest { - - @Rule - public ExpectedException exception = ExpectedException.none(); - - @Test - public void del() throws Exception { - redis.set(key, value); - assertThat((long) redis.del(key)).isEqualTo(1); - redis.set(key + "1", value); - redis.set(key + "2", value); - - assertThat(redis.del(key + "1", key + "2")).isEqualTo(2); - } - - @Test - public void unlink() throws Exception { - redis.set(key, value); - assertThat((long) redis.unlink(key)).isEqualTo(1); - redis.set(key + "1", value); - redis.set(key + "2", value); - assertThat(redis.unlink(key + "1", key + "2")).isEqualTo(2); - } - - @Test - public void dump() throws Exception { - assertThat(redis.dump("invalid")).isNull(); - redis.set(key, value); - assertThat(redis.dump(key).length > 0).isTrue(); - } - - @Test - public void exists() throws Exception { - assertThat(redis.exists(key)).isFalse(); - redis.set(key, value); - assertThat(redis.exists(key)).isTrue(); - } - - @Test - public void existsVariadic() throws Exception { - assertThat(redis.exists(key, "key2", "key3")).isEqualTo(0); - redis.set(key, value); - redis.set("key2", value); - assertThat(redis.exists(key, "key2", "key3")).isEqualTo(2); - } - - @Test - public void expire() throws Exception { - assertThat(redis.expire(key, 10)).isFalse(); - redis.set(key, value); - assertThat(redis.expire(key, 10)).isTrue(); - assertThat((long) redis.ttl(key)).isEqualTo(10); - } - - @Test - public void expireat() throws Exception { - Date expiration = new Date(System.currentTimeMillis() + 10000); - assertThat(redis.expireat(key, expiration)).isFalse(); - redis.set(key, value); - assertThat(redis.expireat(key, expiration)).isTrue(); - - assertThat(redis.ttl(key)).isGreaterThanOrEqualTo(8); - } - - @Test - public void keys() throws Exception { - assertThat(redis.keys("*")).isEqualTo(list()); - Map map = new 
LinkedHashMap<>(); - map.put("one", "1"); - map.put("two", "2"); - map.put("three", "3"); - redis.mset(map); - List keys = redis.keys("???"); - assertThat(keys).hasSize(2); - assertThat(keys.contains("one")).isTrue(); - assertThat(keys.contains("two")).isTrue(); - } - - @Test - public void keysStreaming() throws Exception { - ListStreamingAdapter adapter = new ListStreamingAdapter(); - - assertThat(redis.keys("*")).isEqualTo(list()); - Map map = new LinkedHashMap<>(); - map.put("one", "1"); - map.put("two", "2"); - map.put("three", "3"); - redis.mset(map); - Long count = redis.keys(adapter, "???"); - assertThat(count.intValue()).isEqualTo(2); - - List keys = adapter.getList(); - assertThat(keys).hasSize(2); - assertThat(keys.contains("one")).isTrue(); - assertThat(keys.contains("two")).isTrue(); - } - - @Test - public void move() throws Exception { - redis.set(key, value); - redis.move(key, 1); - assertThat(redis.get(key)).isNull(); - redis.select(1); - assertThat(redis.get(key)).isEqualTo(value); - } - - @Test - public void objectEncoding() throws Exception { - redis.set(key, value); - assertThat(redis.objectEncoding(key)).isEqualTo("embstr"); - redis.set(key, String.valueOf(1)); - assertThat(redis.objectEncoding(key)).isEqualTo("int"); - } - - @Test - public void objectIdletime() throws Exception { - redis.set(key, value); - assertThat((long) redis.objectIdletime(key)).isLessThan(2); - } - - @Test - public void objectRefcount() throws Exception { - redis.set(key, value); - assertThat(redis.objectRefcount(key)).isGreaterThan(0); - } - - @Test - public void persist() throws Exception { - assertThat(redis.persist(key)).isFalse(); - redis.set(key, value); - assertThat(redis.persist(key)).isFalse(); - redis.expire(key, 10); - assertThat(redis.persist(key)).isTrue(); - } - - @Test - public void pexpire() throws Exception { - assertThat(redis.pexpire(key, 5000)).isFalse(); - redis.set(key, value); - assertThat(redis.pexpire(key, 5000)).isTrue(); - assertThat(redis.pttl(key)).isGreaterThan(0).isLessThanOrEqualTo(5000); - } - - @Test - public void pexpireat() throws Exception { - Date expiration = new Date(System.currentTimeMillis() + 5000); - assertThat(redis.pexpireat(key, expiration)).isFalse(); - redis.set(key, value); - assertThat(redis.pexpireat(key, expiration)).isTrue(); - assertThat(redis.pttl(key)).isGreaterThan(0).isLessThanOrEqualTo(5000); - } - - @Test - public void pttl() throws Exception { - assertThat((long) redis.pttl(key)).isEqualTo(-2); - redis.set(key, value); - assertThat((long) redis.pttl(key)).isEqualTo(-1); - redis.pexpire(key, 5000); - assertThat(redis.pttl(key)).isGreaterThan(0).isLessThanOrEqualTo(5000); - } - - @Test - public void randomkey() throws Exception { - assertThat(redis.randomkey()).isNull(); - redis.set(key, value); - assertThat(redis.randomkey()).isEqualTo(key); - } - - @Test - public void rename() throws Exception { - redis.set(key, value); - - assertThat(redis.rename(key, key + "X")).isEqualTo("OK"); - assertThat(redis.get(key)).isNull(); - assertThat(redis.get(key + "X")).isEqualTo(value); - redis.set(key, value + "X"); - assertThat(redis.rename(key + "X", key)).isEqualTo("OK"); - assertThat(redis.get(key)).isEqualTo(value); - } - - @Test(expected = RedisException.class) - public void renameNonexistentKey() throws Exception { - redis.rename(key, key + "X"); - } - - @Test - public void renamenx() throws Exception { - redis.set(key, value); - assertThat(redis.renamenx(key, key + "X")).isTrue(); - assertThat(redis.get(key + "X")).isEqualTo(value); - 
redis.set(key, value); - assertThat(redis.renamenx(key + "X", key)).isFalse(); - } - - @Test(expected = RedisException.class) - public void renamenxNonexistentKey() throws Exception { - redis.renamenx(key, key + "X"); - } - - @Test - public void renamenxIdenticalKeys() throws Exception { - redis.set(key, value); - assertThat(redis.renamenx(key, key)).isFalse(); - } - - @Test - public void restore() throws Exception { - redis.set(key, value); - byte[] bytes = redis.dump(key); - redis.del(key); - - assertThat(redis.restore(key, 0, bytes)).isEqualTo("OK"); - assertThat(redis.get(key)).isEqualTo(value); - assertThat(redis.pttl(key).longValue()).isEqualTo(-1); - - redis.del(key); - assertThat(redis.restore(key, 1000, bytes)).isEqualTo("OK"); - assertThat(redis.get(key)).isEqualTo(value); - - assertThat(redis.pttl(key)).isGreaterThan(0).isLessThanOrEqualTo(1000); - } - - @Test - public void touch() throws Exception { - assertThat((long) redis.touch(key)).isEqualTo(0); - redis.set(key, value); - assertThat((long) redis.touch(key, "key2")).isEqualTo(1); - } - - @Test - public void ttl() throws Exception { - assertThat((long) redis.ttl(key)).isEqualTo(-2); - redis.set(key, value); - assertThat((long) redis.ttl(key)).isEqualTo(-1); - redis.expire(key, 10); - assertThat((long) redis.ttl(key)).isEqualTo(10); - } - - @Test - public void type() throws Exception { - assertThat(redis.type(key)).isEqualTo("none"); - - redis.set(key, value); - assertThat(redis.type(key)).isEqualTo("string"); - - redis.hset(key + "H", value, "1"); - assertThat(redis.type(key + "H")).isEqualTo("hash"); - - redis.lpush(key + "L", "1"); - assertThat(redis.type(key + "L")).isEqualTo("list"); - - redis.sadd(key + "S", "1"); - assertThat(redis.type(key + "S")).isEqualTo("set"); - - redis.zadd(key + "Z", 1, "1"); - assertThat(redis.type(key + "Z")).isEqualTo("zset"); - } - - @Test - public void scan() throws Exception { - redis.set(key, value); - - KeyScanCursor cursor = redis.scan(); - assertThat(cursor.getCursor()).isEqualTo("0"); - assertThat(cursor.isFinished()).isTrue(); - assertThat(cursor.getKeys()).isEqualTo(list(key)); - } - - @Test - public void scanWithArgs() throws Exception { - redis.set(key, value); - - KeyScanCursor cursor = redis.scan(ScanArgs.Builder.limit(10)); - assertThat(cursor.getCursor()).isEqualTo("0"); - assertThat(cursor.isFinished()).isTrue(); - - } - - @Test - public void scanInitialCursor() throws Exception { - redis.set(key, value); - - KeyScanCursor cursor = redis.scan(ScanCursor.INITIAL); - assertThat(cursor.getCursor()).isEqualTo("0"); - assertThat(cursor.isFinished()).isTrue(); - assertThat(cursor.getKeys()).isEqualTo(list(key)); - } - - @Test(expected = IllegalArgumentException.class) - public void scanFinishedCursor() throws Exception { - redis.set(key, value); - redis.scan(ScanCursor.FINISHED); - } - - @Test(expected = IllegalArgumentException.class) - public void scanNullCursor() throws Exception { - redis.set(key, value); - redis.scan((ScanCursor) null); - } - - @Test - public void scanStreaming() throws Exception { - redis.set(key, value); - ListStreamingAdapter adapter = new ListStreamingAdapter(); - - StreamScanCursor cursor = redis.scan(adapter); - - assertThat(cursor.getCount()).isEqualTo(1); - assertThat(cursor.getCursor()).isEqualTo("0"); - assertThat(cursor.isFinished()).isTrue(); - assertThat(adapter.getList()).isEqualTo(list(key)); - } - - @Test - public void scanStreamingWithCursor() throws Exception { - redis.set(key, value); - ListStreamingAdapter adapter = new 
ListStreamingAdapter(); - - StreamScanCursor cursor = redis.scan(adapter, ScanCursor.INITIAL); - - assertThat(cursor.getCount()).isEqualTo(1); - assertThat(cursor.getCursor()).isEqualTo("0"); - assertThat(cursor.isFinished()).isTrue(); - } - - @Test - public void scanStreamingWithCursorAndArgs() throws Exception { - redis.set(key, value); - ListStreamingAdapter adapter = new ListStreamingAdapter(); - - StreamScanCursor cursor = redis.scan(adapter, ScanCursor.INITIAL, ScanArgs.Builder.limit(5)); - - assertThat(cursor.getCount()).isEqualTo(1); - assertThat(cursor.getCursor()).isEqualTo("0"); - assertThat(cursor.isFinished()).isTrue(); - } - - @Test - public void scanStreamingArgs() throws Exception { - redis.set(key, value); - ListStreamingAdapter adapter = new ListStreamingAdapter(); - - StreamScanCursor cursor = redis.scan(adapter, ScanArgs.Builder.limit(100).match("*")); - - assertThat(cursor.getCount()).isEqualTo(1); - assertThat(cursor.getCursor()).isEqualTo("0"); - assertThat(cursor.isFinished()).isTrue(); - assertThat(adapter.getList()).isEqualTo(list(key)); - } - - @Test - public void scanMultiple() throws Exception { - - Set expect = new HashSet<>(); - Set check = new HashSet<>(); - setup100KeyValues(expect); - - KeyScanCursor cursor = redis.scan(ScanArgs.Builder.limit(12)); - - assertThat(cursor.getCursor()).isNotNull(); - assertNotEquals("0", cursor.getCursor()); - assertThat(cursor.isFinished()).isFalse(); - - check.addAll(cursor.getKeys()); - - while (!cursor.isFinished()) { - cursor = redis.scan(cursor); - check.addAll(cursor.getKeys()); - } - - assertThat(check).isEqualTo(expect); - assertThat(check).hasSize(100); - } - - @Test - public void scanMatch() throws Exception { - - Set expect = new HashSet<>(); - setup100KeyValues(expect); - - KeyScanCursor cursor = redis.scan(ScanArgs.Builder.limit(200).match("key1*")); - - assertThat(cursor.getCursor()).isEqualTo("0"); - assertThat(cursor.isFinished()).isTrue(); - - assertThat(cursor.getKeys()).hasSize(11); - } - - protected void setup100KeyValues(Set expect) { - for (int i = 0; i < 100; i++) { - redis.set(key + i, value + i); - expect.add(key + i); - } - } -} diff --git a/src/test/java/com/lambdaworks/redis/commands/ListCommandTest.java b/src/test/java/com/lambdaworks/redis/commands/ListCommandTest.java deleted file mode 100644 index 8f9e07c1c3..0000000000 --- a/src/test/java/com/lambdaworks/redis/commands/ListCommandTest.java +++ /dev/null @@ -1,208 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. 
- -package com.lambdaworks.redis.commands; - -import static org.assertj.core.api.Assertions.assertThat; - -import java.util.List; -import java.util.concurrent.TimeUnit; - -import com.lambdaworks.redis.AbstractRedisClientTest; -import com.lambdaworks.redis.ListStreamingAdapter; -import org.assertj.core.api.Assertions; -import org.junit.Test; - -public class ListCommandTest extends AbstractRedisClientTest { - @Test - public void blpop() throws Exception { - redis.rpush("two", "2", "3"); - assertThat(redis.blpop(1, "one", "two")).isEqualTo(kv("two", "2")); - } - - @Test - public void blpopTimeout() throws Exception { - redis.setTimeout(10, TimeUnit.SECONDS); - assertThat(redis.blpop(1, key)).isNull(); - } - - @Test - public void brpop() throws Exception { - redis.rpush("two", "2", "3"); - assertThat(redis.brpop(1, "one", "two")).isEqualTo(kv("two", "3")); - } - - @Test - public void brpoplpush() throws Exception { - redis.rpush("one", "1", "2"); - redis.rpush("two", "3", "4"); - assertThat(redis.brpoplpush(1, "one", "two")).isEqualTo("2"); - assertThat(redis.lrange("one", 0, -1)).isEqualTo(list("1")); - assertThat(redis.lrange("two", 0, -1)).isEqualTo(list("2", "3", "4")); - } - - @Test - public void brpoplpushTimeout() throws Exception { - assertThat(redis.brpoplpush(1, "one", "two")).isNull(); - } - - @Test - public void lindex() throws Exception { - assertThat(redis.lindex(key, 0)).isNull(); - redis.rpush(key, "one"); - assertThat(redis.lindex(key, 0)).isEqualTo("one"); - } - - @Test - public void linsert() throws Exception { - assertThat(redis.linsert(key, false, "one", "two")).isEqualTo(0); - redis.rpush(key, "one"); - redis.rpush(key, "three"); - assertThat(redis.linsert(key, true, "three", "two")).isEqualTo(3); - assertThat(redis.lrange(key, 0, -1)).isEqualTo(list("one", "two", "three")); - } - - @Test - public void llen() throws Exception { - assertThat((long) redis.llen(key)).isEqualTo(0); - redis.lpush(key, "one"); - assertThat((long) redis.llen(key)).isEqualTo(1); - } - - @Test - public void lpop() throws Exception { - assertThat(redis.lpop(key)).isNull(); - redis.rpush(key, "one", "two"); - assertThat(redis.lpop(key)).isEqualTo("one"); - assertThat(redis.lrange(key, 0, -1)).isEqualTo(list("two")); - } - - @Test - public void lpush() throws Exception { - assertThat((long) redis.lpush(key, "two")).isEqualTo(1); - assertThat((long) redis.lpush(key, "one")).isEqualTo(2); - assertThat(redis.lrange(key, 0, -1)).isEqualTo(list("one", "two")); - assertThat((long) redis.lpush(key, "three", "four")).isEqualTo(4); - assertThat(redis.lrange(key, 0, -1)).isEqualTo(list("four", "three", "one", "two")); - } - - @Test - public void lpushx() throws Exception { - assertThat((long) redis.lpushx(key, "two")).isEqualTo(0); - redis.lpush(key, "two"); - assertThat((long) redis.lpushx(key, "one")).isEqualTo(2); - assertThat(redis.lrange(key, 0, -1)).isEqualTo(list("one", "two")); - } - - @Test - public void lpushxVariadic() throws Exception { - assertThat((long) redis.lpushx(key, "one", "two")).isEqualTo(0); - redis.lpush(key, "two"); - assertThat((long) redis.lpushx(key, "one", "zero")).isEqualTo(3); - assertThat(redis.lrange(key, 0, -1)).isEqualTo(list("zero", "one", "two")); - } - - @Test - public void lrange() throws Exception { - assertThat(redis.lrange(key, 0, 10).isEmpty()).isTrue(); - redis.rpush(key, "one", "two", "three"); - List range = redis.lrange(key, 0, 1); - assertThat(range).hasSize(2); - assertThat(range.get(0)).isEqualTo("one"); - assertThat(range.get(1)).isEqualTo("two"); - 
assertThat(redis.lrange(key, 0, -1)).hasSize(3); - } - - @Test - public void lrangeStreaming() throws Exception { - assertThat(redis.lrange(key, 0, 10).isEmpty()).isTrue(); - redis.rpush(key, "one", "two", "three"); - - ListStreamingAdapter adapter = new ListStreamingAdapter(); - - Long count = redis.lrange(adapter, key, 0, 1); - assertThat(count.longValue()).isEqualTo(2); - - List range = adapter.getList(); - - assertThat(range).hasSize(2); - assertThat(range.get(0)).isEqualTo("one"); - assertThat(range.get(1)).isEqualTo("two"); - assertThat(redis.lrange(key, 0, -1)).hasSize(3); - } - - @Test - public void lrem() throws Exception { - assertThat(redis.lrem(key, 0, value)).isEqualTo(0); - - redis.rpush(key, "1", "2", "1", "2", "1"); - assertThat((long) redis.lrem(key, 1, "1")).isEqualTo(1); - assertThat(redis.lrange(key, 0, -1)).isEqualTo(list("2", "1", "2", "1")); - - redis.lpush(key, "1"); - assertThat((long) redis.lrem(key, -1, "1")).isEqualTo(1); - assertThat(redis.lrange(key, 0, -1)).isEqualTo(list("1", "2", "1", "2")); - - redis.lpush(key, "1"); - assertThat((long) redis.lrem(key, 0, "1")).isEqualTo(3); - assertThat(redis.lrange(key, 0, -1)).isEqualTo(list("2", "2")); - } - - @Test - public void lset() throws Exception { - redis.rpush(key, "one", "two", "three"); - assertThat(redis.lset(key, 2, "san")).isEqualTo("OK"); - assertThat(redis.lrange(key, 0, -1)).isEqualTo(list("one", "two", "san")); - } - - @Test - public void ltrim() throws Exception { - redis.rpush(key, "1", "2", "3", "4", "5", "6"); - assertThat(redis.ltrim(key, 0, 3)).isEqualTo("OK"); - assertThat(redis.lrange(key, 0, -1)).isEqualTo(list("1", "2", "3", "4")); - assertThat(redis.ltrim(key, -2, -1)).isEqualTo("OK"); - assertThat(redis.lrange(key, 0, -1)).isEqualTo(list("3", "4")); - } - - @Test - public void rpop() throws Exception { - assertThat(redis.rpop(key)).isNull(); - redis.rpush(key, "one", "two"); - assertThat(redis.rpop(key)).isEqualTo("two"); - assertThat(redis.lrange(key, 0, -1)).isEqualTo(list("one")); - } - - @Test - public void rpoplpush() throws Exception { - assertThat(redis.rpoplpush("one", "two")).isNull(); - redis.rpush("one", "1", "2"); - redis.rpush("two", "3", "4"); - assertThat(redis.rpoplpush("one", "two")).isEqualTo("2"); - assertThat(redis.lrange("one", 0, -1)).isEqualTo(list("1")); - assertThat(redis.lrange("two", 0, -1)).isEqualTo(list("2", "3", "4")); - } - - @Test - public void rpush() throws Exception { - assertThat((long) redis.rpush(key, "one")).isEqualTo(1); - assertThat((long) redis.rpush(key, "two")).isEqualTo(2); - assertThat(redis.lrange(key, 0, -1)).isEqualTo(list("one", "two")); - assertThat((long) redis.rpush(key, "three", "four")).isEqualTo(4); - assertThat(redis.lrange(key, 0, -1)).isEqualTo(list("one", "two", "three", "four")); - } - - @Test - public void rpushx() throws Exception { - assertThat((long) redis.rpushx(key, "one")).isEqualTo(0); - redis.rpush(key, "one"); - assertThat((long) redis.rpushx(key, "two")).isEqualTo(2); - assertThat(redis.lrange(key, 0, -1)).isEqualTo(list("one", "two")); - } - - @Test - public void rpushxVariadic() throws Exception { - assertThat((long) redis.rpushx(key, "two", "three")).isEqualTo(0); - redis.rpush(key, "one"); - assertThat((long) redis.rpushx(key, "two", "three")).isEqualTo(3); - assertThat(redis.lrange(key, 0, -1)).isEqualTo(list("one", "two", "three")); - } -} diff --git a/src/test/java/com/lambdaworks/redis/commands/NumericCommandTest.java b/src/test/java/com/lambdaworks/redis/commands/NumericCommandTest.java deleted file mode 
100644 index 1c3ef506e1..0000000000 --- a/src/test/java/com/lambdaworks/redis/commands/NumericCommandTest.java +++ /dev/null @@ -1,44 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis.commands; - -import static org.assertj.core.api.Assertions.*; -import static org.assertj.core.api.Assertions.assertThat; -import static org.assertj.core.api.Assertions.offset; - -import com.lambdaworks.redis.AbstractRedisClientTest; -import org.assertj.core.api.Assertions; -import org.junit.Test; - -public class NumericCommandTest extends AbstractRedisClientTest { - @Test - public void decr() throws Exception { - assertThat((long) redis.decr(key)).isEqualTo(-1); - assertThat((long) redis.decr(key)).isEqualTo(-2); - } - - @Test - public void decrby() throws Exception { - assertThat(redis.decrby(key, 3)).isEqualTo(-3); - assertThat(redis.decrby(key, 3)).isEqualTo(-6); - } - - @Test - public void incr() throws Exception { - assertThat((long) redis.incr(key)).isEqualTo(1); - assertThat((long) redis.incr(key)).isEqualTo(2); - } - - @Test - public void incrby() throws Exception { - assertThat(redis.incrby(key, 3)).isEqualTo(3); - assertThat(redis.incrby(key, 3)).isEqualTo(6); - } - - @Test - public void incrbyfloat() throws Exception { - - assertThat(redis.incrbyfloat(key, 3.0)).isEqualTo(3.0, offset(0.1)); - assertThat(redis.incrbyfloat(key, 0.2)).isEqualTo(3.2, offset(0.1)); - } -} diff --git a/src/test/java/com/lambdaworks/redis/commands/RunOnlyOnceServerCommandTest.java b/src/test/java/com/lambdaworks/redis/commands/RunOnlyOnceServerCommandTest.java deleted file mode 100644 index d7264296bb..0000000000 --- a/src/test/java/com/lambdaworks/redis/commands/RunOnlyOnceServerCommandTest.java +++ /dev/null @@ -1,113 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis.commands; - -import static com.google.code.tempusfugit.temporal.Duration.seconds; -import static com.google.code.tempusfugit.temporal.Timeout.timeout; -import static com.lambdaworks.redis.TestSettings.host; -import static com.lambdaworks.redis.TestSettings.port; -import static org.assertj.core.api.Assertions.assertThat; -import static org.junit.Assume.assumeTrue; - -import com.lambdaworks.CanConnect; -import com.lambdaworks.redis.*; -import org.junit.FixMethodOrder; -import org.junit.Ignore; -import org.junit.Test; -import org.junit.runners.MethodSorters; - -import com.google.code.tempusfugit.temporal.Condition; -import com.google.code.tempusfugit.temporal.WaitFor; -import org.springframework.util.SocketUtils; - -import java.util.Arrays; - -@FixMethodOrder(MethodSorters.NAME_ASCENDING) -public class RunOnlyOnceServerCommandTest extends AbstractRedisClientTest { - - /** - * Executed in order: 1 this test causes a stop of the redis. This means, you cannot repeat the test without restarting your - * redis. 
- * - * @throws Exception - */ - @Test - public void debugSegfault() throws Exception { - - assumeTrue(CanConnect.to(host(), port(1))); - - final RedisAsyncConnection connection = client.connectAsync(RedisURI.Builder.redis(host(), port(1)) - .build()); - try { - connection.debugSegfault(); - - WaitFor.waitOrTimeout(() -> !connection.isOpen(), timeout(seconds(5))); - assertThat(connection.isOpen()).isFalse(); - } finally { - connection.close(); - } - } - - /** - * Executed in order: 2 - * - * @throws Exception - */ - @Test - public void migrate() throws Exception { - - assumeTrue(CanConnect.to(host(), port(2))); - - redis.set(key, value); - - String result = redis.migrate("localhost", TestSettings.port(2), key, 0, 10); - assertThat(result).isEqualTo("OK"); - } - - /** - * Executed in order: 3 - * - * @throws Exception - */ - @Test - public void migrateCopyReplace() throws Exception { - - assumeTrue(CanConnect.to(host(), port(2))); - - redis.set(key, value); - redis.set("key1", value); - redis.set("key2", value); - - String result = redis.migrate("localhost", TestSettings.port(2), 0, 10, MigrateArgs.Builder.keys(key).copy().replace()); - assertThat(result).isEqualTo("OK"); - - result = redis.migrate("localhost", TestSettings.port(2), 0, 10, MigrateArgs.Builder.keys(Arrays.asList("key1", "key2")).replace()); - assertThat(result).isEqualTo("OK"); - } - - /** - * Executed in order: 4 this test causes a stop of the redis. This means, you cannot repeat the test without restarting your - * redis. - * - * @throws Exception - */ - @Test - public void shutdown() throws Exception { - - assumeTrue(CanConnect.to(host(), port(2))); - - final RedisAsyncConnection connection = client.connectAsync(RedisURI.Builder.redis(host(), port(2)) - .build()); - try { - - connection.shutdown(true); - connection.shutdown(false); - WaitFor.waitOrTimeout(() -> !connection.isOpen(), timeout(seconds(5))); - - assertThat(connection.isOpen()).isFalse(); - - } finally { - connection.close(); - } - } -} diff --git a/src/test/java/com/lambdaworks/redis/commands/ScriptingCommandTest.java b/src/test/java/com/lambdaworks/redis/commands/ScriptingCommandTest.java deleted file mode 100644 index 4cb1ffdbd9..0000000000 --- a/src/test/java/com/lambdaworks/redis/commands/ScriptingCommandTest.java +++ /dev/null @@ -1,124 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. 
- -package com.lambdaworks.redis.commands; - -import static com.lambdaworks.redis.ScriptOutputType.*; -import static org.assertj.core.api.Assertions.assertThat; - -import java.util.List; - -import org.junit.After; -import org.junit.FixMethodOrder; -import org.junit.Rule; -import org.junit.Test; -import org.junit.rules.ExpectedException; -import org.junit.runners.MethodSorters; - -import com.lambdaworks.Wait; -import com.lambdaworks.redis.AbstractRedisClientTest; -import com.lambdaworks.redis.RedisAsyncConnection; -import com.lambdaworks.redis.RedisException; - -@FixMethodOrder(MethodSorters.NAME_ASCENDING) -public class ScriptingCommandTest extends AbstractRedisClientTest { - @Rule - public ExpectedException exception = ExpectedException.none(); - - @After - public void tearDown() throws Exception { - - Wait.untilNoException(() -> { - try { - redis.scriptKill(); - } catch (RedisException e) { - // ignore - } - redis.ping(); - }).waitOrTimeout(); - - } - - @Test - public void eval() throws Exception { - assertThat((Boolean) redis.eval("return 1 + 1 == 4", BOOLEAN)).isEqualTo(false); - assertThat((Number) redis.eval("return 1 + 1", INTEGER)).isEqualTo(2L); - assertThat((String) redis.eval("return {ok='status'}", STATUS)).isEqualTo("status"); - assertThat((String) redis.eval("return 'one'", VALUE)).isEqualTo("one"); - assertThat((List) redis.eval("return {1, 'one', {2}}", MULTI)).isEqualTo(list(1L, "one", list(2L))); - exception.expectMessage("Oops!"); - redis.eval("return {err='Oops!'}", STATUS); - } - - @Test - public void evalWithKeys() throws Exception { - assertThat((List) redis.eval("return {KEYS[1], KEYS[2]}", MULTI, "one", "two")).isEqualTo(list("one", "two")); - } - - @Test - public void evalWithArgs() throws Exception { - String[] keys = new String[0]; - assertThat((List) redis.eval("return {ARGV[1], ARGV[2]}", MULTI, keys, "a", "b")).isEqualTo(list("a", "b")); - } - - @Test - public void evalsha() throws Exception { - redis.scriptFlush(); - String script = "return 1 + 1"; - String digest = redis.digest(script); - assertThat((Number) redis.eval(script, INTEGER)).isEqualTo(2L); - assertThat((Number) redis.evalsha(digest, INTEGER)).isEqualTo(2L); - exception.expectMessage("NOSCRIPT No matching script. Please use EVAL."); - redis.evalsha(redis.digest("return 1 + 1 == 4"), INTEGER); - } - - @Test - public void evalshaWithMulti() throws Exception { - redis.scriptFlush(); - String digest = redis.digest("return {1234, 5678}"); - exception.expectMessage("NOSCRIPT No matching script. 
Please use EVAL."); - redis.evalsha(digest, MULTI); - } - - @Test - public void evalshaWithKeys() throws Exception { - redis.scriptFlush(); - String digest = redis.scriptLoad("return {KEYS[1], KEYS[2]}"); - assertThat((Object) redis.evalsha(digest, MULTI, "one", "two")).isEqualTo(list("one", "two")); - } - - @Test - public void evalshaWithArgs() throws Exception { - redis.scriptFlush(); - String digest = redis.scriptLoad("return {ARGV[1], ARGV[2]}"); - String[] keys = new String[0]; - assertThat((Object) redis.evalsha(digest, MULTI, keys, "a", "b")).isEqualTo(list("a", "b")); - } - - @Test - public void script() throws Exception { - assertThat(redis.scriptFlush()).isEqualTo("OK"); - - String script1 = "return 1 + 1"; - String digest1 = redis.digest(script1); - String script2 = "return 1 + 1 == 4"; - String digest2 = redis.digest(script2); - - assertThat(redis.scriptExists(digest1, digest2)).isEqualTo(list(false, false)); - assertThat(redis.scriptLoad(script1)).isEqualTo(digest1); - assertThat((Object) redis.evalsha(digest1, INTEGER)).isEqualTo(2L); - assertThat(redis.scriptExists(digest1, digest2)).isEqualTo(list(true, false)); - - assertThat(redis.scriptFlush()).isEqualTo("OK"); - assertThat(redis.scriptExists(digest1, digest2)).isEqualTo(list(false, false)); - - redis.configSet("lua-time-limit", "10"); - RedisAsyncConnection async = client.connectAsync(); - try { - async.eval("while true do end", STATUS, new String[0]); - Thread.sleep(100); - assertThat(redis.scriptKill()).isEqualTo("OK"); - } finally { - async.close(); - } - } -} diff --git a/src/test/java/com/lambdaworks/redis/commands/ServerCommandPoolTest.java b/src/test/java/com/lambdaworks/redis/commands/ServerCommandPoolTest.java deleted file mode 100644 index 494fd43a5b..0000000000 --- a/src/test/java/com/lambdaworks/redis/commands/ServerCommandPoolTest.java +++ /dev/null @@ -1,27 +0,0 @@ -package com.lambdaworks.redis.commands; - -import com.lambdaworks.redis.api.sync.RedisCommands; -import org.junit.Test; - -import com.lambdaworks.redis.RedisConnection; -import com.lambdaworks.redis.RedisConnectionPool; - -/** - * @author Mark Paluch - */ -public class ServerCommandPoolTest extends ServerCommandTest { - - RedisConnectionPool> pool; - - @Override - protected RedisCommands connect() { - pool = client.pool(); - return pool.allocateConnection(); - } - - @Override - public void closeConnection() throws Exception { - pool.freeConnection(redis); - pool.close(); - } -} diff --git a/src/test/java/com/lambdaworks/redis/commands/ServerCommandTest.java b/src/test/java/com/lambdaworks/redis/commands/ServerCommandTest.java deleted file mode 100644 index 641f6b8a24..0000000000 --- a/src/test/java/com/lambdaworks/redis/commands/ServerCommandTest.java +++ /dev/null @@ -1,331 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. 
- -package com.lambdaworks.redis.commands; - -import static org.assertj.core.api.Assertions.assertThat; -import static org.hamcrest.CoreMatchers.*; -import static org.junit.Assert.assertThat; - -import java.util.Date; -import java.util.List; -import java.util.regex.Matcher; -import java.util.regex.Pattern; - -import org.junit.FixMethodOrder; -import org.junit.Ignore; -import org.junit.Test; -import org.junit.runners.MethodSorters; - -import com.lambdaworks.redis.AbstractRedisClientTest; -import com.lambdaworks.redis.KillArgs; -import com.lambdaworks.redis.RedisConnection; -import com.lambdaworks.redis.TestSettings; -import com.lambdaworks.redis.models.command.CommandDetail; -import com.lambdaworks.redis.models.command.CommandDetailParser; -import com.lambdaworks.redis.models.role.RedisInstance; -import com.lambdaworks.redis.models.role.RoleParser; -import com.lambdaworks.redis.protocol.CommandType; - -@FixMethodOrder(MethodSorters.NAME_ASCENDING) -public class ServerCommandTest extends AbstractRedisClientTest { - @Test - public void bgrewriteaof() throws Exception { - String msg = "Background append only file rewriting"; - assertThat(redis.bgrewriteaof(), containsString(msg)); - } - - @Test - public void bgsave() throws Exception { - while (redis.info().contains("aof_rewrite_in_progress:1")) { - Thread.sleep(100); - } - String msg = "Background saving started"; - assertThat(redis.bgsave()).isEqualTo(msg); - } - - @Test - public void clientGetSetname() throws Exception { - assertThat(redis.clientGetname()).isNull(); - assertThat(redis.clientSetname("test")).isEqualTo("OK"); - assertThat(redis.clientGetname()).isEqualTo("test"); - assertThat(redis.clientSetname("")).isEqualTo("OK"); - assertThat(redis.clientGetname()).isNull(); - } - - @Test - public void clientPause() throws Exception { - assertThat(redis.clientPause(10)).isEqualTo("OK"); - } - - @Test - public void clientKill() throws Exception { - Pattern p = Pattern.compile(".*addr=([^ ]+).*"); - String clients = redis.clientList(); - Matcher m = p.matcher(clients); - - assertThat(m.lookingAt()).isTrue(); - assertThat(redis.clientKill(m.group(1))).isEqualTo("OK"); - } - - @Test - public void clientKillExtended() throws Exception { - - RedisConnection connection2 = client.connect().sync(); - connection2.clientSetname("killme"); - - Pattern p = Pattern.compile("^.*addr=([^ ]+).*name=killme.*$", Pattern.MULTILINE | Pattern.DOTALL); - String clients = redis.clientList(); - Matcher m = p.matcher(clients); - - assertThat(m.matches()).isTrue(); - String addr = m.group(1); - assertThat(redis.clientKill(KillArgs.Builder.addr(addr).skipme())).isGreaterThan(0); - - assertThat(redis.clientKill(KillArgs.Builder.id(4234))).isEqualTo(0); - assertThat(redis.clientKill(KillArgs.Builder.typeSlave().id(4234))).isEqualTo(0); - assertThat(redis.clientKill(KillArgs.Builder.typeNormal().id(4234))).isEqualTo(0); - assertThat(redis.clientKill(KillArgs.Builder.typePubsub().id(4234))).isEqualTo(0); - - connection2.close(); - } - - @Test - public void clientList() throws Exception { - assertThat(redis.clientList().contains("addr=")).isTrue(); - } - - @Test - public void commandCount() throws Exception { - assertThat(redis.commandCount()).isGreaterThan(100); - } - - @Test - public void command() throws Exception { - - List result = redis.command(); - - assertThat(result.size()).isGreaterThan(100); - - List commands = CommandDetailParser.parse(result); - assertThat(commands).hasSameSizeAs(result); - } - - @Test - public void commandInfo() throws Exception { - - List 
result = redis.commandInfo(CommandType.GETRANGE, CommandType.SET); - - assertThat(result.size()).isEqualTo(2); - - List commands = CommandDetailParser.parse(result); - assertThat(commands).hasSameSizeAs(result); - - result = redis.commandInfo("a missing command"); - - assertThat(result.size()).isEqualTo(0); - - } - - @Test - public void configGet() throws Exception { - assertThat(redis.configGet("maxmemory")).isEqualTo(list("maxmemory", "0")); - } - - @Test - public void configResetstat() throws Exception { - redis.get(key); - redis.get(key); - assertThat(redis.configResetstat()).isEqualTo("OK"); - assertThat(redis.info().contains("keyspace_misses:0")).isTrue(); - } - - @Test - public void configSet() throws Exception { - String maxmemory = redis.configGet("maxmemory").get(1); - assertThat(redis.configSet("maxmemory", "1024")).isEqualTo("OK"); - assertThat(redis.configGet("maxmemory").get(1)).isEqualTo("1024"); - redis.configSet("maxmemory", maxmemory); - } - - @Test - public void configRewrite() throws Exception { - - String result = redis.configRewrite(); - assertThat(result).isEqualTo("OK"); - } - - @Test - public void dbsize() throws Exception { - assertThat(redis.dbsize()).isEqualTo(0); - redis.set(key, value); - assertThat(redis.dbsize()).isEqualTo(1); - } - - @Test - @Ignore("Causes instabilities") - public void debugCrashAndRecover() throws Exception { - try { - assertThat(redis.debugCrashAndRecover(1L)).isNotNull(); - } catch (Exception e) { - assertThat(e).hasMessageContaining("ERR failed to restart the server"); - } - } - - @Test - public void debugHtstats() throws Exception { - redis.set(key, value); - String result = redis.debugHtstats(0); - assertThat(result).contains("table size"); - } - - @Test - public void debugObject() throws Exception { - redis.set(key, value); - redis.debugObject(key); - } - - @Test - public void debugReload() throws Exception { - assertThat(redis.debugReload()).isEqualTo("OK"); - } - - @Test - @Ignore("Causes instabilities") - public void debugRestart() throws Exception { - try { - assertThat(redis.debugRestart(1L)).isNotNull(); - } catch (Exception e) { - assertThat(e).hasMessageContaining("ERR failed to restart the server"); - } - } - - @Test - public void debugSdslen() throws Exception { - redis.set(key, value); - String result = redis.debugSdslen(key); - assertThat(result).contains("key_sds_len"); - } - - /** - * this test causes a stop of the redis. This means, you cannot repeat the test without restarting your redis. 
- * - * @throws Exception - */ - @Test - public void flushall() throws Exception { - redis.set(key, value); - assertThat(redis.flushall()).isEqualTo("OK"); - assertThat(redis.get(key)).isNull(); - } - - @Test - public void flushallAsync() throws Exception { - redis.set(key, value); - assertThat(redis.flushallAsync()).isEqualTo("OK"); - assertThat(redis.get(key)).isNull(); - } - - @Test - public void flushdb() throws Exception { - redis.set(key, value); - redis.select(1); - redis.set(key, value + "X"); - assertThat(redis.flushdb()).isEqualTo("OK"); - assertThat(redis.get(key)).isNull(); - redis.select(0); - assertThat(redis.get(key)).isEqualTo(value); - } - - @Test - public void flushdbAsync() throws Exception { - redis.set(key, value); - redis.select(1); - redis.set(key, value + "X"); - assertThat(redis.flushdbAsync()).isEqualTo("OK"); - assertThat(redis.get(key)).isNull(); - redis.select(0); - assertThat(redis.get(key)).isEqualTo(value); - } - - @Test - public void info() throws Exception { - assertThat(redis.info().contains("redis_version")).isTrue(); - assertThat(redis.info("server").contains("redis_version")).isTrue(); - } - - @Test - public void lastsave() throws Exception { - Date start = new Date(System.currentTimeMillis() / 1000); - assertThat(start.compareTo(redis.lastsave()) <= 0).isTrue(); - } - - @Test - public void save() throws Exception { - - while (redis.info().contains("aof_rewrite_in_progress:1")) { - Thread.sleep(100); - } - assertThat(redis.save()).isEqualTo("OK"); - } - - @Test - public void slaveof() throws Exception { - - assertThat(redis.slaveof(TestSettings.host(), 0)).isEqualTo("OK"); - redis.slaveofNoOne(); - } - - @Test(expected = IllegalArgumentException.class) - public void slaveofEmptyHost() throws Exception { - redis.slaveof("", 0); - } - - @Test - public void role() throws Exception { - - List objects = redis.role(); - - assertThat(objects.get(0)).isEqualTo("master"); - assertThat(objects.get(1).getClass()).isEqualTo(Long.class); - - RedisInstance redisInstance = RoleParser.parse(objects); - assertThat(redisInstance.getRole()).isEqualTo(RedisInstance.Role.MASTER); - } - - @Test - public void slaveofNoOne() throws Exception { - assertThat(redis.slaveofNoOne()).isEqualTo("OK"); - } - - @Test - @SuppressWarnings("unchecked") - public void slowlog() throws Exception { - long start = System.currentTimeMillis() / 1000; - - assertThat(redis.configSet("slowlog-log-slower-than", "0")).isEqualTo("OK"); - assertThat(redis.slowlogReset()).isEqualTo("OK"); - redis.set(key, value); - - List log = redis.slowlogGet(); - assertThat(log).hasSize(2); - - List entry = (List) log.get(0); - assertThat(entry).hasSize(4); - assertThat(entry.get(0) instanceof Long).isTrue(); - assertThat((Long) entry.get(1) >= start).isTrue(); - assertThat(entry.get(2) instanceof Long).isTrue(); - assertThat(entry.get(3)).isEqualTo(list("SET", key, value)); - - entry = (List) log.get(1); - assertThat(entry).hasSize(4); - assertThat(entry.get(0) instanceof Long).isTrue(); - assertThat((Long) entry.get(1) >= start).isTrue(); - assertThat(entry.get(2) instanceof Long).isTrue(); - assertThat(entry.get(3)).isEqualTo(list("SLOWLOG", "RESET")); - - assertThat(redis.slowlogGet(1)).hasSize(1); - assertThat((long) redis.slowlogLen()).isGreaterThanOrEqualTo(4); - - redis.configSet("slowlog-log-slower-than", "10000"); - } -} diff --git a/src/test/java/com/lambdaworks/redis/commands/SetCommandTest.java b/src/test/java/com/lambdaworks/redis/commands/SetCommandTest.java deleted file mode 100644 index 
f390738fe7..0000000000 --- a/src/test/java/com/lambdaworks/redis/commands/SetCommandTest.java +++ /dev/null @@ -1,342 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. - -package com.lambdaworks.redis.commands; - -import static org.assertj.core.api.Assertions.assertThat; - -import java.util.HashSet; -import java.util.List; -import java.util.Set; -import java.util.TreeSet; - -import org.junit.Test; - -import com.lambdaworks.redis.*; - -public class SetCommandTest extends AbstractRedisClientTest { - - @Test - public void sadd() throws Exception { - assertThat(redis.sadd(key, "a")).isEqualTo(1L); - assertThat(redis.sadd(key, "a")).isEqualTo(0); - assertThat(redis.smembers(key)).isEqualTo(set("a")); - assertThat(redis.sadd(key, "b", "c")).isEqualTo(2); - assertThat(redis.smembers(key)).isEqualTo(set("a", "b", "c")); - } - - @Test - public void scard() throws Exception { - assertThat((long) redis.scard(key)).isEqualTo(0); - redis.sadd(key, "a"); - assertThat((long) redis.scard(key)).isEqualTo(1); - } - - @Test - public void sdiff() throws Exception { - setupSet(); - assertThat(redis.sdiff("key1", "key2", "key3")).isEqualTo(set("b", "d")); - } - - @Test - public void sdiffStreaming() throws Exception { - setupSet(); - - ListStreamingAdapter streamingAdapter = new ListStreamingAdapter(); - - Long count = redis.sdiff(streamingAdapter, "key1", "key2", "key3"); - assertThat(count.intValue()).isEqualTo(2); - assertThat(streamingAdapter.getList()).containsOnly("b", "d"); - } - - @Test - public void sdiffstore() throws Exception { - setupSet(); - assertThat(redis.sdiffstore("newset", "key1", "key2", "key3")).isEqualTo(2); - assertThat(redis.smembers("newset")).containsOnly("b", "d"); - } - - @Test - public void sinter() throws Exception { - setupSet(); - assertThat(redis.sinter("key1", "key2", "key3")).isEqualTo(set("c")); - } - - @Test - public void sinterStreaming() throws Exception { - setupSet(); - - ListStreamingAdapter streamingAdapter = new ListStreamingAdapter(); - Long count = redis.sinter(streamingAdapter, "key1", "key2", "key3"); - - assertThat(count.intValue()).isEqualTo(1); - assertThat(streamingAdapter.getList()).containsExactly("c"); - } - - @Test - public void sinterstore() throws Exception { - setupSet(); - assertThat(redis.sinterstore("newset", "key1", "key2", "key3")).isEqualTo(1); - assertThat(redis.smembers("newset")).containsExactly("c"); - } - - @Test - public void sismember() throws Exception { - assertThat(redis.sismember(key, "a")).isFalse(); - redis.sadd(key, "a"); - assertThat(redis.sismember(key, "a")).isTrue(); - } - - @Test - public void smove() throws Exception { - redis.sadd(key, "a", "b", "c"); - assertThat(redis.smove(key, "key1", "d")).isFalse(); - assertThat(redis.smove(key, "key1", "a")).isTrue(); - assertThat(redis.smembers(key)).isEqualTo(set("b", "c")); - assertThat(redis.smembers("key1")).isEqualTo(set("a")); - } - - @Test - public void smembers() throws Exception { - setupSet(); - assertThat(redis.smembers(key)).isEqualTo(set("a", "b", "c")); - } - - @Test - public void smembersStreaming() throws Exception { - setupSet(); - ListStreamingAdapter streamingAdapter = new ListStreamingAdapter(); - Long count = redis.smembers(streamingAdapter, key); - assertThat(count.longValue()).isEqualTo(3); - assertThat(streamingAdapter.getList()).containsOnly("a", "b", "c"); - } - - @Test - public void spop() throws Exception { - assertThat(redis.spop(key)).isNull(); - redis.sadd(key, "a", "b", "c"); - String rand = redis.spop(key); - assertThat(set("a", "b", 
"c").contains(rand)).isTrue(); - assertThat(redis.smembers(key).contains(rand)).isFalse(); - } - - @Test - public void spopMultiple() throws Exception { - assertThat(redis.spop(key)).isNull(); - redis.sadd(key, "a", "b", "c"); - Set rand = redis.spop(key, 2); - assertThat(rand).hasSize(2); - assertThat(set("a", "b", "c").containsAll(rand)).isTrue(); - } - - @Test - public void srandmember() throws Exception { - assertThat(redis.spop(key)).isNull(); - redis.sadd(key, "a", "b", "c", "d"); - assertThat(set("a", "b", "c", "d").contains(redis.srandmember(key))).isTrue(); - assertThat(redis.smembers(key)).isEqualTo(set("a", "b", "c", "d")); - List rand = redis.srandmember(key, 3); - assertThat(rand).hasSize(3); - assertThat(set("a", "b", "c", "d").containsAll(rand)).isTrue(); - List randWithDuplicates = redis.srandmember(key, -10); - assertThat(randWithDuplicates).hasSize(10); - } - - @Test - public void srandmemberStreaming() throws Exception { - assertThat(redis.spop(key)).isNull(); - redis.sadd(key, "a", "b", "c", "d"); - - ListStreamingAdapter streamingAdapter = new ListStreamingAdapter(); - - Long count = redis.srandmember(streamingAdapter, key, 2); - - assertThat(count.longValue()).isEqualTo(2); - - assertThat(set("a", "b", "c", "d").containsAll(streamingAdapter.getList())).isTrue(); - - } - - @Test - public void srem() throws Exception { - redis.sadd(key, "a", "b", "c"); - assertThat(redis.srem(key, "d")).isEqualTo(0); - assertThat(redis.srem(key, "b")).isEqualTo(1); - assertThat(redis.smembers(key)).isEqualTo(set("a", "c")); - assertThat(redis.srem(key, "a", "c")).isEqualTo(2); - assertThat(redis.smembers(key)).isEqualTo(set()); - } - - @Test(expected = IllegalArgumentException.class) - public void sremEmpty() throws Exception { - redis.srem(key); - } - - @Test(expected = IllegalArgumentException.class) - public void sremNulls() throws Exception { - redis.srem(key, new String[0]); - } - - @Test - public void sunion() throws Exception { - setupSet(); - assertThat(redis.sunion("key1", "key2", "key3")).isEqualTo(set("a", "b", "c", "d", "e")); - } - - @Test(expected = IllegalArgumentException.class) - public void sunionEmpty() throws Exception { - redis.sunion(); - } - - @Test - public void sunionStreaming() throws Exception { - setupSet(); - - ListStreamingAdapter adapter = new ListStreamingAdapter(); - - Long count = redis.sunion(adapter, "key1", "key2", "key3"); - - assertThat(count.longValue()).isEqualTo(5); - - assertThat(new TreeSet(adapter.getList())).isEqualTo(new TreeSet(list("c", "a", "b", "e", "d"))); - } - - @Test - public void sunionstore() throws Exception { - setupSet(); - assertThat(redis.sunionstore("newset", "key1", "key2", "key3")).isEqualTo(5); - assertThat(redis.smembers("newset")).isEqualTo(set("a", "b", "c", "d", "e")); - } - - @Test - public void sscan() throws Exception { - redis.sadd(key, value); - ValueScanCursor cursor = redis.sscan(key); - - assertThat(cursor.getCursor()).isEqualTo("0"); - assertThat(cursor.isFinished()).isTrue(); - assertThat(cursor.getValues()).isEqualTo(list(value)); - } - - @Test - public void sscanWithCursor() throws Exception { - redis.sadd(key, value); - ValueScanCursor cursor = redis.sscan(key, ScanCursor.INITIAL); - - assertThat(cursor.getValues()).hasSize(1); - assertThat(cursor.getCursor()).isEqualTo("0"); - assertThat(cursor.isFinished()).isTrue(); - } - - @Test - public void sscanWithCursorAndArgs() throws Exception { - redis.sadd(key, value); - - ValueScanCursor cursor = redis.sscan(key, ScanCursor.INITIAL, 
ScanArgs.Builder.limit(5)); - - assertThat(cursor.getValues()).hasSize(1); - assertThat(cursor.getCursor()).isEqualTo("0"); - assertThat(cursor.isFinished()).isTrue(); - - } - - @Test - public void sscanStreaming() throws Exception { - redis.sadd(key, value); - ListStreamingAdapter adapter = new ListStreamingAdapter(); - - StreamScanCursor cursor = redis.sscan(adapter, key); - - assertThat(cursor.getCount()).isEqualTo(1); - assertThat(cursor.getCursor()).isEqualTo("0"); - assertThat(cursor.isFinished()).isTrue(); - assertThat(adapter.getList()).isEqualTo(list(value)); - } - - @Test - public void sscanStreamingWithCursor() throws Exception { - redis.sadd(key, value); - ListStreamingAdapter adapter = new ListStreamingAdapter(); - - StreamScanCursor cursor = redis.sscan(adapter, key, ScanCursor.INITIAL); - - assertThat(cursor.getCount()).isEqualTo(1); - assertThat(cursor.getCursor()).isEqualTo("0"); - assertThat(cursor.isFinished()).isTrue(); - } - - @Test - public void sscanStreamingWithCursorAndArgs() throws Exception { - redis.sadd(key, value); - ListStreamingAdapter adapter = new ListStreamingAdapter(); - - StreamScanCursor cursor = redis.sscan(adapter, key, ScanCursor.INITIAL, ScanArgs.Builder.limit(5)); - - assertThat(cursor.getCount()).isEqualTo(1); - assertThat(cursor.getCursor()).isEqualTo("0"); - assertThat(cursor.isFinished()).isTrue(); - } - - @Test - public void sscanStreamingArgs() throws Exception { - redis.sadd(key, value); - ListStreamingAdapter adapter = new ListStreamingAdapter(); - - StreamScanCursor cursor = redis.sscan(adapter, key, ScanArgs.Builder.limit(100).match("*")); - - assertThat(cursor.getCount()).isEqualTo(1); - assertThat(cursor.getCursor()).isEqualTo("0"); - assertThat(cursor.isFinished()).isTrue(); - assertThat(adapter.getList()).isEqualTo(list(value)); - } - - @Test - public void sscanMultiple() throws Exception { - - Set expect = new HashSet<>(); - Set check = new HashSet<>(); - setup100KeyValues(expect); - - ValueScanCursor cursor = redis.sscan(key, ScanArgs.Builder.limit(5)); - - assertThat(cursor.getCursor()).isNotNull().isNotEqualTo("0"); - assertThat(cursor.isFinished()).isFalse(); - - check.addAll(cursor.getValues()); - - while (!cursor.isFinished()) { - cursor = redis.sscan(key, cursor); - check.addAll(cursor.getValues()); - } - - assertThat(new TreeSet(check)).isEqualTo(new TreeSet(expect)); - } - - @Test - public void scanMatch() throws Exception { - - Set expect = new HashSet<>(); - setup100KeyValues(expect); - - ValueScanCursor cursor = redis.sscan(key, ScanArgs.Builder.limit(200).match("value1*")); - - assertThat(cursor.getCursor()).isEqualTo("0"); - assertThat(cursor.isFinished()).isTrue(); - - assertThat(cursor.getValues()).hasSize(11); - } - - protected void setup100KeyValues(Set expect) { - for (int i = 0; i < 100; i++) { - redis.sadd(key, value + i); - expect.add(value + i); - } - } - - private void setupSet() { - redis.sadd(key, "a", "b", "c"); - redis.sadd("key1", "a", "b", "c", "d"); - redis.sadd("key2", "c"); - redis.sadd("key3", "a", "c", "e"); - } - -} diff --git a/src/test/java/com/lambdaworks/redis/commands/SortCommandTest.java b/src/test/java/com/lambdaworks/redis/commands/SortCommandTest.java deleted file mode 100644 index 7e0665a993..0000000000 --- a/src/test/java/com/lambdaworks/redis/commands/SortCommandTest.java +++ /dev/null @@ -1,83 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. 
- -package com.lambdaworks.redis.commands; - -import static com.lambdaworks.redis.SortArgs.Builder.alpha; -import static com.lambdaworks.redis.SortArgs.Builder.asc; -import static com.lambdaworks.redis.SortArgs.Builder.by; -import static com.lambdaworks.redis.SortArgs.Builder.desc; -import static com.lambdaworks.redis.SortArgs.Builder.get; -import static com.lambdaworks.redis.SortArgs.Builder.limit; -import static org.assertj.core.api.Assertions.assertThat; - -import com.lambdaworks.redis.AbstractRedisClientTest; -import com.lambdaworks.redis.ListStreamingAdapter; -import org.assertj.core.api.Assertions; -import org.junit.Test; - -public class SortCommandTest extends AbstractRedisClientTest { - @Test - public void sort() throws Exception { - redis.rpush(key, "3", "2", "1"); - assertThat(redis.sort(key)).isEqualTo(list("1", "2", "3")); - assertThat(redis.sort(key, asc())).isEqualTo(list("1", "2", "3")); - } - - @Test - public void sortStreaming() throws Exception { - redis.rpush(key, "3", "2", "1"); - - ListStreamingAdapter streamingAdapter = new ListStreamingAdapter(); - Long count = redis.sort(streamingAdapter, key); - - assertThat(count.longValue()).isEqualTo(3); - assertThat(streamingAdapter.getList()).isEqualTo(list("1", "2", "3")); - streamingAdapter.getList().clear(); - - count = redis.sort(streamingAdapter, key, desc()); - assertThat(count.longValue()).isEqualTo(3); - assertThat(streamingAdapter.getList()).isEqualTo(list("3", "2", "1")); - } - - @Test - public void sortAlpha() throws Exception { - redis.rpush(key, "A", "B", "C"); - assertThat(redis.sort(key, alpha().desc())).isEqualTo(list("C", "B", "A")); - } - - @Test - public void sortBy() throws Exception { - redis.rpush(key, "foo", "bar", "baz"); - redis.set("weight_foo", "8"); - redis.set("weight_bar", "4"); - redis.set("weight_baz", "2"); - assertThat(redis.sort(key, by("weight_*"))).isEqualTo(list("baz", "bar", "foo")); - } - - @Test - public void sortDesc() throws Exception { - redis.rpush(key, "1", "2", "3"); - assertThat(redis.sort(key, desc())).isEqualTo(list("3", "2", "1")); - } - - @Test - public void sortGet() throws Exception { - redis.rpush(key, "1", "2"); - redis.set("obj_1", "foo"); - redis.set("obj_2", "bar"); - assertThat(redis.sort(key, get("obj_*"))).isEqualTo(list("foo", "bar")); - } - - @Test - public void sortLimit() throws Exception { - redis.rpush(key, "3", "2", "1"); - assertThat(redis.sort(key, limit(1, 2))).isEqualTo(list("2", "3")); - } - - @Test - public void sortStore() throws Exception { - redis.rpush("one", "1", "2", "3"); - assertThat(redis.sortStore("one", desc(), "two")).isEqualTo(3); - assertThat(redis.lrange("two", 0, -1)).isEqualTo(list("3", "2", "1")); - } -} diff --git a/src/test/java/com/lambdaworks/redis/commands/SortedSetCommandTest.java b/src/test/java/com/lambdaworks/redis/commands/SortedSetCommandTest.java deleted file mode 100644 index 884c5b0db2..0000000000 --- a/src/test/java/com/lambdaworks/redis/commands/SortedSetCommandTest.java +++ /dev/null @@ -1,574 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. 
- -package com.lambdaworks.redis.commands; - -import static com.lambdaworks.redis.ZStoreArgs.Builder.max; -import static com.lambdaworks.redis.ZStoreArgs.Builder.min; -import static com.lambdaworks.redis.ZStoreArgs.Builder.sum; -import static com.lambdaworks.redis.ZStoreArgs.Builder.weights; -import static java.lang.Double.NEGATIVE_INFINITY; -import static java.lang.Double.POSITIVE_INFINITY; -import static org.assertj.core.api.Assertions.assertThat; -import static org.assertj.core.data.Offset.offset; - -import java.util.HashSet; -import java.util.List; -import java.util.Set; - -import org.junit.Test; - -import com.lambdaworks.redis.*; - -public class SortedSetCommandTest extends AbstractRedisClientTest { - - @Test - public void zadd() throws Exception { - assertThat(redis.zadd(key, 1.0, "a")).isEqualTo(1); - assertThat(redis.zadd(key, 1.0, "a")).isEqualTo(0); - - assertThat(redis.zrange(key, 0, -1)).isEqualTo(list("a")); - assertThat(redis.zadd(key, 2.0, "b", 3.0, "c")).isEqualTo(2); - assertThat(redis.zrange(key, 0, -1)).isEqualTo(list("a", "b", "c")); - } - - @Test - public void zaddScoredValue() throws Exception { - assertThat(redis.zadd(key, new ScoredValue(1.0, "a"))).isEqualTo(1); - assertThat(redis.zadd(key, new ScoredValue(1.0, "a"))).isEqualTo(0); - - assertThat(redis.zrange(key, 0, -1)).isEqualTo(list("a")); - assertThat(redis.zadd(key, new ScoredValue(2.0, "b"), new ScoredValue(3.0, "c"))).isEqualTo(2); - assertThat(redis.zrange(key, 0, -1)).isEqualTo(list("a", "b", "c")); - } - - @Test - public void zaddnx() throws Exception { - assertThat(redis.zadd(key, 1.0, "a")).isEqualTo(1); - assertThat(redis.zadd(key, ZAddArgs.Builder.nx(), new ScoredValue(2.0, "a"))).isEqualTo(0); - - assertThat(redis.zadd(key, ZAddArgs.Builder.nx(), new ScoredValue(2.0, "b"))).isEqualTo(1); - - assertThat(redis.zadd(key, ZAddArgs.Builder.nx(), new Object[] { 2.0, "b", 3.0, "c" })).isEqualTo(1); - - assertThat(redis.zrangeWithScores(key, 0, -1)).isEqualTo(svlist(sv(1.0, "a"), sv(2.0, "b"), sv(3.0, "c"))); - } - - @Test(expected = IllegalArgumentException.class) - public void zaddWrongArguments() throws Exception { - assertThat(redis.zadd(key, 2.0, "b", 3.0)).isEqualTo(2); - } - - @Test(expected = IllegalArgumentException.class) - public void zaddnxWrongArguments() throws Exception { - assertThat(redis.zadd(key, ZAddArgs.Builder.nx(), new Object[] { 2.0, "b", 3.0 })).isEqualTo(1); - } - - @Test - public void zaddxx() throws Exception { - assertThat(redis.zadd(key, 1.0, "a")).isEqualTo(1); - assertThat(redis.zadd(key, ZAddArgs.Builder.xx(), 2.0, "a")).isEqualTo(0); - - assertThat(redis.zadd(key, ZAddArgs.Builder.xx(), 2.0, "b")).isEqualTo(0); - - assertThat(redis.zrangeWithScores(key, 0, -1)).isEqualTo(svlist(sv(2.0, "a"))); - } - - @Test - public void zaddch() throws Exception { - assertThat(redis.zadd(key, 1.0, "a")).isEqualTo(1); - assertThat(redis.zadd(key, ZAddArgs.Builder.ch(), 2.0, "a")).isEqualTo(1); - - assertThat(redis.zadd(key, ZAddArgs.Builder.ch(), 2.0, "b")).isEqualTo(1); - - assertThat(redis.zrangeWithScores(key, 0, -1)).isEqualTo(svlist(sv(2.0, "a"), sv(2.0, "b"))); - } - - @Test - public void zaddincr() throws Exception { - assertThat(redis.zadd(key, 1.0, "a").longValue()).isEqualTo(1); - assertThat(redis.zaddincr(key, 2.0, "a").longValue()).isEqualTo(3); - - assertThat(redis.zaddincr(key, 2.0, "b").longValue()).isEqualTo(2); - - assertThat(redis.zrangeWithScores(key, 0, -1)).isEqualTo(svlist(sv(2.0, "b"), sv(3.0, "a"))); - } - - @Test - public void zcard() throws Exception { - 
assertThat(redis.zcard(key)).isEqualTo(0); - redis.zadd(key, 1.0, "a"); - assertThat(redis.zcard(key)).isEqualTo(1); - } - - @Test - public void zcount() throws Exception { - assertThat(redis.zcount(key, 0, 0)).isEqualTo(0); - - redis.zadd(key, 1.0, "a", 2.0, "b", 2.1, "c"); - - assertThat(redis.zcount(key, 1.0, 3.0)).isEqualTo(3); - assertThat(redis.zcount(key, 1.0, 2.0)).isEqualTo(2); - assertThat(redis.zcount(key, NEGATIVE_INFINITY, POSITIVE_INFINITY)).isEqualTo(3); - - assertThat(redis.zcount(key, "(1.0", "3.0")).isEqualTo(2); - assertThat(redis.zcount(key, "-inf", "+inf")).isEqualTo(3); - } - - @Test - public void zincrby() throws Exception { - assertThat(redis.zincrby(key, 0.0, "a")).isEqualTo(0, offset(0.1)); - assertThat(redis.zincrby(key, 1.1, "a")).isEqualTo(1.1, offset(0.1)); - assertThat(redis.zscore(key, "a")).isEqualTo(1.1, offset(0.1)); - assertThat(redis.zincrby(key, -1.2, "a")).isEqualTo(-0.1, offset(0.1)); - } - - @Test - @SuppressWarnings({ "unchecked" }) - public void zinterstore() throws Exception { - redis.zadd("zset1", 1.0, "a", 2.0, "b"); - redis.zadd("zset2", 2.0, "a", 3.0, "b", 4.0, "c"); - assertThat(redis.zinterstore(key, "zset1", "zset2")).isEqualTo(2); - assertThat(redis.zrange(key, 0, -1)).isEqualTo(list("a", "b")); - assertThat(redis.zrangeWithScores(key, 0, -1)).isEqualTo(svlist(sv(3.0, "a"), sv(5.0, "b"))); - } - - @Test - public void zrange() throws Exception { - setup(); - assertThat(redis.zrange(key, 0, -1)).isEqualTo(list("a", "b", "c")); - } - - @Test - public void zrangeStreaming() throws Exception { - setup(); - - ListStreamingAdapter streamingAdapter = new ListStreamingAdapter(); - Long count = redis.zrange(streamingAdapter, key, 0, -1); - assertThat(count.longValue()).isEqualTo(3); - - assertThat(streamingAdapter.getList()).isEqualTo(list("a", "b", "c")); - } - - private void setup() { - redis.zadd(key, 1.0, "a", 2.0, "b", 3.0, "c"); - } - - @Test - @SuppressWarnings({ "unchecked" }) - public void zrangeWithScores() throws Exception { - setup(); - assertThat(redis.zrangeWithScores(key, 0, -1)).isEqualTo(svlist(sv(1.0, "a"), sv(2.0, "b"), sv(3.0, "c"))); - } - - @Test - @SuppressWarnings({ "unchecked" }) - public void zrangeWithScoresStreaming() throws Exception { - setup(); - ScoredValueStreamingAdapter streamingAdapter = new ScoredValueStreamingAdapter(); - Long count = redis.zrangeWithScores(streamingAdapter, key, 0, -1); - assertThat(count.longValue()).isEqualTo(3); - assertThat(streamingAdapter.getList()).isEqualTo(svlist(sv(1.0, "a"), sv(2.0, "b"), sv(3.0, "c"))); - } - - @Test - public void zrangebyscore() throws Exception { - redis.zadd(key, 1.0, "a", 2.0, "b", 3.0, "c", 4.0, "d"); - assertThat(redis.zrangebyscore(key, 2.0, 3.0)).isEqualTo(list("b", "c")); - assertThat(redis.zrangebyscore(key, "(1.0", "(4.0")).isEqualTo(list("b", "c")); - assertThat(redis.zrangebyscore(key, NEGATIVE_INFINITY, POSITIVE_INFINITY)).isEqualTo(list("a", "b", "c", "d")); - assertThat(redis.zrangebyscore(key, "-inf", "+inf")).isEqualTo(list("a", "b", "c", "d")); - assertThat(redis.zrangebyscore(key, 0.0, 4.0, 1, 3)).isEqualTo(list("b", "c", "d")); - assertThat(redis.zrangebyscore(key, "-inf", "+inf", 2, 2)).isEqualTo(list("c", "d")); - } - - @Test - public void zrangebyscoreStreaming() throws Exception { - redis.zadd(key, 1.0, "a", 2.0, "b", 3.0, "c", 4.0, "d"); - ListStreamingAdapter streamingAdapter = new ListStreamingAdapter(); - - assertThat(redis.zrangebyscore(streamingAdapter, key, 2.0, 3.0)).isEqualTo(2); - assertThat(redis.zrangebyscore(streamingAdapter, 
key, "(1.0", "(4.0")).isEqualTo(2); - assertThat(redis.zrangebyscore(streamingAdapter, key, NEGATIVE_INFINITY, POSITIVE_INFINITY)).isEqualTo(4); - assertThat(redis.zrangebyscore(streamingAdapter, key, "-inf", "+inf")).isEqualTo(4); - assertThat(redis.zrangebyscore(streamingAdapter, key, "-inf", "+inf")).isEqualTo(4); - assertThat(redis.zrangebyscore(streamingAdapter, key, 0.0, 4.0, 1, 3)).isEqualTo(3); - assertThat(redis.zrangebyscore(streamingAdapter, key, "-inf", "+inf", 2, 2)).isEqualTo(2); - - } - - @Test - @SuppressWarnings({ "unchecked" }) - public void zrangebyscoreWithScores() throws Exception { - redis.zadd(key, 1.0, "a", 2.0, "b", 3.0, "c", 4.0, "d"); - assertThat(redis.zrangebyscoreWithScores(key, 2.0, 3.0)).isEqualTo(svlist(sv(2.0, "b"), sv(3.0, "c"))); - assertThat(redis.zrangebyscoreWithScores(key, "(1.0", "(4.0")).isEqualTo(svlist(sv(2.0, "b"), sv(3.0, "c"))); - assertThat(redis.zrangebyscoreWithScores(key, NEGATIVE_INFINITY, POSITIVE_INFINITY)).isEqualTo( - svlist(sv(1.0, "a"), sv(2.0, "b"), sv(3.0, "c"), sv(4.0, "d"))); - assertThat(redis.zrangebyscoreWithScores(key, "-inf", "+inf")).isEqualTo( - svlist(sv(1.0, "a"), sv(2.0, "b"), sv(3.0, "c"), sv(4.0, "d"))); - assertThat(redis.zrangebyscoreWithScores(key, 0.0, 4.0, 1, 3)).isEqualTo( - svlist(sv(2.0, "b"), sv(3.0, "c"), sv(4.0, "d"))); - assertThat(redis.zrangebyscoreWithScores(key, "-inf", "+inf", 2, 2)).isEqualTo(svlist(sv(3.0, "c"), sv(4.0, "d"))); - } - - @Test - public void zrangebyscoreWithScoresStreaming() throws Exception { - redis.zadd(key, 1.0, "a", 2.0, "b", 3.0, "c", 4.0, "d"); - ListStreamingAdapter streamingAdapter = new ListStreamingAdapter(); - - assertThat(redis.zrangebyscoreWithScores(streamingAdapter, key, 2.0, 3.0).longValue()).isEqualTo(2); - assertThat(redis.zrangebyscoreWithScores(streamingAdapter, key, "(1.0", "(4.0").longValue()).isEqualTo(2); - assertThat(redis.zrangebyscoreWithScores(streamingAdapter, key, NEGATIVE_INFINITY, POSITIVE_INFINITY).longValue()) - .isEqualTo(4); - assertThat(redis.zrangebyscoreWithScores(streamingAdapter, key, "-inf", "+inf").longValue()).isEqualTo(4); - assertThat(redis.zrangebyscoreWithScores(streamingAdapter, key, "-inf", "+inf").longValue()).isEqualTo(4); - assertThat(redis.zrangebyscoreWithScores(streamingAdapter, key, 0.0, 4.0, 1, 3).longValue()).isEqualTo(3); - assertThat(redis.zrangebyscoreWithScores(streamingAdapter, key, "-inf", "+inf", 2, 2).longValue()).isEqualTo(2); - - } - - @Test - public void zrank() throws Exception { - assertThat(redis.zrank(key, "a")).isNull(); - setup(); - assertThat(redis.zrank(key, "a")).isEqualTo(0); - assertThat(redis.zrank(key, "c")).isEqualTo(2); - } - - @Test - public void zrem() throws Exception { - assertThat(redis.zrem(key, "a")).isEqualTo(0); - setup(); - assertThat(redis.zrem(key, "b")).isEqualTo(1); - assertThat(redis.zrange(key, 0, -1)).isEqualTo(list("a", "c")); - assertThat(redis.zrem(key, "a", "c")).isEqualTo(2); - assertThat(redis.zrange(key, 0, -1)).isEqualTo(list()); - } - - @Test - public void zremrangebyscore() throws Exception { - setup(); - assertThat(redis.zremrangebyscore(key, 1.0, 2.0)).isEqualTo(2); - assertThat(redis.zrange(key, 0, -1)).isEqualTo(list("c")); - - setup(); - assertThat(redis.zremrangebyscore(key, "(1.0", "(3.0")).isEqualTo(1); - assertThat(redis.zrange(key, 0, -1)).isEqualTo(list("a", "c")); - } - - @Test - public void zremrangebyrank() throws Exception { - redis.zadd(key, 1.0, "a", 2.0, "b", 3.0, "c", 4.0, "d"); - assertThat(redis.zremrangebyrank(key, 1, 2)).isEqualTo(2); - 
assertThat(redis.zrange(key, 0, -1)).isEqualTo(list("a", "d")); - - redis.zadd(key, 1.0, "a", 2.0, "b", 3.0, "c", 4.0, "d"); - assertThat(redis.zremrangebyrank(key, 0, -1)).isEqualTo(4); - assertThat(redis.zcard(key)).isEqualTo(0); - } - - @Test - public void zrevrange() throws Exception { - setup(); - assertThat(redis.zrevrange(key, 0, -1)).isEqualTo(list("c", "b", "a")); - } - - @Test - @SuppressWarnings({ "unchecked" }) - public void zrevrangeWithScores() throws Exception { - setup(); - assertThat(redis.zrevrangeWithScores(key, 0, -1)).isEqualTo(svlist(sv(3.0, "c"), sv(2.0, "b"), sv(1.0, "a"))); - } - - @Test - public void zrevrangeStreaming() throws Exception { - setup(); - ListStreamingAdapter streamingAdapter = new ListStreamingAdapter(); - Long count = redis.zrevrange(streamingAdapter, key, 0, -1); - assertThat(count).isEqualTo(3); - assertThat(streamingAdapter.getList()).isEqualTo(list("c", "b", "a")); - } - - @Test - @SuppressWarnings({ "unchecked" }) - public void zrevrangeWithScoresStreaming() throws Exception { - setup(); - ScoredValueStreamingAdapter streamingAdapter = new ScoredValueStreamingAdapter(); - Long count = redis.zrevrangeWithScores(streamingAdapter, key, 0, -1); - assertThat(count).isEqualTo(3); - assertThat(streamingAdapter.getList()).isEqualTo(svlist(sv(3.0, "c"), sv(2.0, "b"), sv(1.0, "a"))); - } - - @Test - public void zrevrangebyscore() throws Exception { - redis.zadd(key, 1.0, "a", 2.0, "b", 3.0, "c", 4.0, "d"); - assertThat(redis.zrevrangebyscore(key, 3.0, 2.0)).isEqualTo(list("c", "b")); - assertThat(redis.zrevrangebyscore(key, "(4.0", "(1.0")).isEqualTo(list("c", "b")); - assertThat(redis.zrevrangebyscore(key, POSITIVE_INFINITY, NEGATIVE_INFINITY)).isEqualTo(list("d", "c", "b", "a")); - assertThat(redis.zrevrangebyscore(key, "+inf", "-inf")).isEqualTo(list("d", "c", "b", "a")); - assertThat(redis.zrevrangebyscore(key, 4.0, 0.0, 1, 3)).isEqualTo(list("c", "b", "a")); - assertThat(redis.zrevrangebyscore(key, "+inf", "-inf", 2, 2)).isEqualTo(list("b", "a")); - } - - @Test - @SuppressWarnings({ "unchecked" }) - public void zrevrangebyscoreWithScores() throws Exception { - redis.zadd(key, 1.0, "a", 2.0, "b", 3.0, "c", 4.0, "d"); - assertThat(redis.zrevrangebyscoreWithScores(key, 3.0, 2.0)).isEqualTo(svlist(sv(3.0, "c"), sv(2.0, "b"))); - assertThat(redis.zrevrangebyscoreWithScores(key, "(4.0", "(1.0")).isEqualTo(svlist(sv(3.0, "c"), sv(2.0, "b"))); - assertThat(redis.zrevrangebyscoreWithScores(key, POSITIVE_INFINITY, NEGATIVE_INFINITY)).isEqualTo( - svlist(sv(4.0, "d"), sv(3.0, "c"), sv(2.0, "b"), sv(1.0, "a"))); - assertThat(redis.zrevrangebyscoreWithScores(key, "+inf", "-inf")).isEqualTo( - svlist(sv(4.0, "d"), sv(3.0, "c"), sv(2.0, "b"), sv(1.0, "a"))); - assertThat(redis.zrevrangebyscoreWithScores(key, 4.0, 0.0, 1, 3)).isEqualTo( - svlist(sv(3.0, "c"), sv(2.0, "b"), sv(1.0, "a"))); - assertThat(redis.zrevrangebyscoreWithScores(key, "+inf", "-inf", 2, 2)).isEqualTo(svlist(sv(2.0, "b"), sv(1.0, "a"))); - } - - @Test - public void zrevrangebyscoreStreaming() throws Exception { - redis.zadd(key, 1.0, "a", 2.0, "b", 3.0, "c", 4.0, "d"); - ListStreamingAdapter streamingAdapter = new ListStreamingAdapter(); - - assertThat(redis.zrevrangebyscore(streamingAdapter, key, 3.0, 2.0).longValue()).isEqualTo(2); - assertThat(redis.zrevrangebyscore(streamingAdapter, key, "(4.0", "(1.0").longValue()).isEqualTo(2); - assertThat(redis.zrevrangebyscore(streamingAdapter, key, POSITIVE_INFINITY, NEGATIVE_INFINITY).longValue()) - .isEqualTo(4); - 
assertThat(redis.zrevrangebyscore(streamingAdapter, key, "+inf", "-inf").longValue()).isEqualTo(4); - assertThat(redis.zrevrangebyscore(streamingAdapter, key, 4.0, 0.0, 1, 3).longValue()).isEqualTo(3); - assertThat(redis.zrevrangebyscore(streamingAdapter, key, "+inf", "-inf", 2, 2).longValue()).isEqualTo(2); - } - - @Test - @SuppressWarnings({ "unchecked" }) - public void zrevrangebyscoreWithScoresStreaming() throws Exception { - redis.zadd(key, 1.0, "a", 2.0, "b", 3.0, "c", 4.0, "d"); - - ScoredValueStreamingAdapter streamingAdapter = new ScoredValueStreamingAdapter(); - - assertThat(redis.zrevrangebyscoreWithScores(streamingAdapter, key, 3.0, 2.0)).isEqualTo(2); - assertThat(redis.zrevrangebyscoreWithScores(streamingAdapter, key, "(4.0", "(1.0")).isEqualTo(2); - assertThat(redis.zrevrangebyscoreWithScores(streamingAdapter, key, POSITIVE_INFINITY, NEGATIVE_INFINITY)).isEqualTo(4); - assertThat(redis.zrevrangebyscoreWithScores(streamingAdapter, key, "+inf", "-inf")).isEqualTo(4); - assertThat(redis.zrevrangebyscoreWithScores(streamingAdapter, key, 4.0, 0.0, 1, 3)).isEqualTo(3); - assertThat(redis.zrevrangebyscoreWithScores(streamingAdapter, key, "+inf", "-inf", 2, 2)).isEqualTo(2); - } - - @Test - public void zrevrank() throws Exception { - assertThat(redis.zrevrank(key, "a")).isNull(); - setup(); - assertThat(redis.zrevrank(key, "c")).isEqualTo(0); - assertThat(redis.zrevrank(key, "a")).isEqualTo(2); - } - - @Test - public void zscore() throws Exception { - assertThat(redis.zscore(key, "a")).isNull(); - redis.zadd(key, 1.0, "a"); - assertThat(redis.zscore(key, "a")).isEqualTo(1.0); - } - - @Test - @SuppressWarnings({ "unchecked" }) - public void zunionstore() throws Exception { - redis.zadd("zset1", 1.0, "a", 2.0, "b"); - redis.zadd("zset2", 2.0, "a", 3.0, "b", 4.0, "c"); - assertThat(redis.zunionstore(key, "zset1", "zset2")).isEqualTo(3); - assertThat(redis.zrange(key, 0, -1)).isEqualTo(list("a", "c", "b")); - assertThat(redis.zrangeWithScores(key, 0, -1)).isEqualTo(svlist(sv(3.0, "a"), sv(4.0, "c"), sv(5.0, "b"))); - - assertThat(redis.zunionstore(key, weights(2, 3), "zset1", "zset2")).isEqualTo(3); - assertThat(redis.zrangeWithScores(key, 0, -1)).isEqualTo(svlist(sv(8.0, "a"), sv(12.0, "c"), sv(13.0, "b"))); - - assertThat(redis.zunionstore(key, weights(2, 3).sum(), "zset1", "zset2")).isEqualTo(3); - assertThat(redis.zrangeWithScores(key, 0, -1)).isEqualTo(svlist(sv(8.0, "a"), sv(12.0, "c"), sv(13.0, "b"))); - - assertThat(redis.zunionstore(key, weights(2, 3).min(), "zset1", "zset2")).isEqualTo(3); - assertThat(redis.zrangeWithScores(key, 0, -1)).isEqualTo(svlist(sv(2.0, "a"), sv(4.0, "b"), sv(12.0, "c"))); - - assertThat(redis.zunionstore(key, weights(2, 3).max(), "zset1", "zset2")).isEqualTo(3); - assertThat(redis.zrangeWithScores(key, 0, -1)).isEqualTo(svlist(sv(6.0, "a"), sv(9.0, "b"), sv(12.0, "c"))); - } - - @Test - @SuppressWarnings({ "unchecked" }) - public void zStoreArgs() throws Exception { - redis.zadd("zset1", 1.0, "a", 2.0, "b"); - redis.zadd("zset2", 2.0, "a", 3.0, "b", 4.0, "c"); - - assertThat(redis.zinterstore(key, sum(), "zset1", "zset2")).isEqualTo(2); - assertThat(redis.zrangeWithScores(key, 0, -1)).isEqualTo(svlist(sv(3.0, "a"), sv(5.0, "b"))); - - assertThat(redis.zinterstore(key, min(), "zset1", "zset2")).isEqualTo(2); - assertThat(redis.zrangeWithScores(key, 0, -1)).isEqualTo(svlist(sv(1.0, "a"), sv(2.0, "b"))); - - assertThat(redis.zinterstore(key, max(), "zset1", "zset2")).isEqualTo(2); - assertThat(redis.zrangeWithScores(key, 0, -1)).isEqualTo(svlist(sv(2.0, 
"a"), sv(3.0, "b"))); - - assertThat(redis.zinterstore(key, weights(2, 3), "zset1", "zset2")).isEqualTo(2); - assertThat(redis.zrangeWithScores(key, 0, -1)).isEqualTo(svlist(sv(8.0, "a"), sv(13.0, "b"))); - - assertThat(redis.zinterstore(key, weights(2, 3).sum(), "zset1", "zset2")).isEqualTo(2); - assertThat(redis.zrangeWithScores(key, 0, -1)).isEqualTo(svlist(sv(8.0, "a"), sv(13.0, "b"))); - - assertThat(redis.zinterstore(key, weights(2, 3).min(), "zset1", "zset2")).isEqualTo(2); - assertThat(redis.zrangeWithScores(key, 0, -1)).isEqualTo(svlist(sv(2.0, "a"), sv(4.0, "b"))); - - assertThat(redis.zinterstore(key, weights(2, 3).max(), "zset1", "zset2")).isEqualTo(2); - assertThat(redis.zrangeWithScores(key, 0, -1)).isEqualTo(svlist(sv(6.0, "a"), sv(9.0, "b"))); - } - - @Test - public void zsscan() throws Exception { - redis.zadd(key, 1, value); - ScoredValueScanCursor cursor = redis.zscan(key); - - assertThat(cursor.getCursor()).isEqualTo("0"); - assertThat(cursor.isFinished()).isTrue(); - assertThat(cursor.getValues().get(0)).isEqualTo(sv(1, value)); - } - - @Test - public void zsscanWithCursor() throws Exception { - redis.zadd(key, 1, value); - - ScoredValueScanCursor cursor = redis.zscan(key, ScanCursor.INITIAL); - - assertThat(cursor.getCursor()).isEqualTo("0"); - assertThat(cursor.isFinished()).isTrue(); - assertThat(cursor.getValues().get(0)).isEqualTo(sv(1, value)); - } - - @Test - public void zsscanWithCursorAndArgs() throws Exception { - redis.zadd(key, 1, value); - - ScoredValueScanCursor cursor = redis.zscan(key, ScanCursor.INITIAL, ScanArgs.Builder.limit(5)); - - assertThat(cursor.getCursor()).isEqualTo("0"); - assertThat(cursor.isFinished()).isTrue(); - assertThat(cursor.getValues().get(0)).isEqualTo(sv(1, value)); - } - - @Test - public void zscanStreaming() throws Exception { - redis.zadd(key, 1, value); - ListStreamingAdapter adapter = new ListStreamingAdapter(); - - StreamScanCursor cursor = redis.zscan(adapter, key); - - assertThat(cursor.getCount()).isEqualTo(1); - assertThat(cursor.getCursor()).isEqualTo("0"); - assertThat(cursor.isFinished()).isTrue(); - assertThat(adapter.getList().get(0)).isEqualTo(value); - } - - @Test - public void zscanStreamingWithCursor() throws Exception { - redis.zadd(key, 1, value); - ListStreamingAdapter adapter = new ListStreamingAdapter(); - - StreamScanCursor cursor = redis.zscan(adapter, key, ScanCursor.INITIAL); - - assertThat(cursor.getCount()).isEqualTo(1); - assertThat(cursor.getCursor()).isEqualTo("0"); - assertThat(cursor.isFinished()).isTrue(); - } - - @Test - public void zscanStreamingWithCursorAndArgs() throws Exception { - redis.zadd(key, 1, value); - ListStreamingAdapter adapter = new ListStreamingAdapter(); - - StreamScanCursor cursor = redis.zscan(adapter, key, ScanCursor.INITIAL, ScanArgs.Builder.matches("*").limit(100)); - - assertThat(cursor.getCount()).isEqualTo(1); - assertThat(cursor.getCursor()).isEqualTo("0"); - assertThat(cursor.isFinished()).isTrue(); - } - - @Test - public void zscanStreamingWithArgs() throws Exception { - redis.zadd(key, 1, value); - ListStreamingAdapter adapter = new ListStreamingAdapter(); - - StreamScanCursor cursor = redis.zscan(adapter, key, ScanArgs.Builder.limit(100).match("*")); - - assertThat(cursor.getCount()).isEqualTo(1); - assertThat(cursor.getCursor()).isEqualTo("0"); - assertThat(cursor.isFinished()).isTrue(); - - } - - @Test - public void zscanMultiple() throws Exception { - - Set expect = new HashSet<>(); - setup100KeyValues(expect); - - ScoredValueScanCursor cursor = 
redis.zscan(key, ScanArgs.Builder.limit(5)); - - assertThat(cursor.getCursor()).isNotNull(); - assertThat(cursor.getCursor()).isEqualTo("0"); - assertThat(cursor.isFinished()).isTrue(); - - assertThat(cursor.getValues()).hasSize(100); - - } - - @Test - public void zscanMatch() throws Exception { - - Set expect = new HashSet<>(); - setup100KeyValues(expect); - - ScoredValueScanCursor cursor = redis.zscan(key, ScanArgs.Builder.limit(10).match("val*")); - - assertThat(cursor.getCursor()).isEqualTo("0"); - assertThat(cursor.isFinished()).isTrue(); - - assertThat(cursor.getValues()).hasSize(100); - } - - @Test - public void zlexcount() throws Exception { - setup100KeyValues(new HashSet<>()); - Long result = redis.zlexcount(key, "-", "+"); - - assertThat(result.longValue()).isEqualTo(100); - - Long resultFromTo = redis.zlexcount(key, "[value", "[zzz"); - assertThat(resultFromTo.longValue()).isEqualTo(100); - } - - @Test - public void zrangebylex() throws Exception { - setup100KeyValues(new HashSet<>()); - List result = redis.zrangebylex(key, "-", "+"); - - assertThat(result).hasSize(100); - - List result2 = redis.zrangebylex(key, "-", "+", 10, 10); - - assertThat(result2).hasSize(10); - } - - @Test - public void zremrangebylex() throws Exception { - setup100KeyValues(new HashSet<>()); - Long result = redis.zremrangebylex(key, "(aaa", "[zzz"); - - assertThat(result.longValue()).isEqualTo(100); - - } - - protected void setup100KeyValues(Set expect) { - for (int i = 0; i < 100; i++) { - redis.zadd(key + 1, i, value + i); - redis.zadd(key, i, value + i); - expect.add(value + i); - } - - } -} diff --git a/src/test/java/com/lambdaworks/redis/commands/StringCommandTest.java b/src/test/java/com/lambdaworks/redis/commands/StringCommandTest.java deleted file mode 100644 index 533da3a708..0000000000 --- a/src/test/java/com/lambdaworks/redis/commands/StringCommandTest.java +++ /dev/null @@ -1,205 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. 
- -package com.lambdaworks.redis.commands; - -import static com.lambdaworks.redis.SetArgs.Builder.ex; -import static com.lambdaworks.redis.SetArgs.Builder.nx; -import static com.lambdaworks.redis.SetArgs.Builder.px; -import static com.lambdaworks.redis.SetArgs.Builder.xx; -import static org.assertj.core.api.Assertions.assertThat; - -import java.util.LinkedHashMap; -import java.util.List; -import java.util.Map; - -import com.lambdaworks.redis.codec.ByteArrayCodec; -import org.junit.Rule; -import org.junit.Test; -import org.junit.rules.ExpectedException; - -import com.lambdaworks.redis.AbstractRedisClientTest; -import com.lambdaworks.redis.ListStreamingAdapter; -import com.lambdaworks.redis.RedisCommandExecutionException; -import com.lambdaworks.redis.RedisException; - -public class StringCommandTest extends AbstractRedisClientTest { - @Rule - public ExpectedException exception = ExpectedException.none(); - - @Test - public void append() throws Exception { - assertThat(redis.append(key, value)).isEqualTo( value.length() ); - assertThat(redis.append(key, "X")).isEqualTo( value.length() + 1 ); - } - - @Test - public void get() throws Exception { - assertThat(redis.get(key)).isNull(); - redis.set(key, value); - assertThat(redis.get(key)).isEqualTo(value); - } - - @Test - public void getbit() throws Exception { - assertThat(redis.getbit(key, 0)).isEqualTo(0); - redis.setbit(key, 0, 1); - assertThat(redis.getbit(key, 0)).isEqualTo(1); - } - - @Test - public void getrange() throws Exception { - assertThat(redis.getrange(key, 0, -1)).isEqualTo( "" ); - redis.set(key, "foobar"); - assertThat(redis.getrange(key, 2, 4)).isEqualTo( "oba" ); - assertThat(redis.getrange(key, 3, -1)).isEqualTo( "bar" ); - } - - @Test - public void getset() throws Exception { - assertThat(redis.getset(key, value)).isNull(); - assertThat(redis.getset(key, "two")).isEqualTo( value ); - assertThat(redis.get(key)).isEqualTo("two"); - } - - @Test - public void mget() throws Exception { - setupMget(); - assertThat(redis.mget("one", "two")).isEqualTo(list("1", "2") ); - } - - protected void setupMget() { - assertThat(redis.mget(key)).isEqualTo(list((String) null)); - redis.set("one", "1"); - redis.set("two", "2"); - } - - @Test - public void mgetStreaming() throws Exception { - setupMget(); - - ListStreamingAdapter streamingAdapter = new ListStreamingAdapter(); - Long count = redis.mget(streamingAdapter, "one", "two"); - assertThat(count.intValue()).isEqualTo(2); - - assertThat(streamingAdapter.getList()).isEqualTo(list("1", "2")); - } - - @Test - public void mset() throws Exception { - assertThat(redis.mget("one", "two")).isEqualTo(list(null, null)); - Map map = new LinkedHashMap<>(); - map.put("one", "1"); - map.put("two", "2"); - assertThat(redis.mset(map)).isEqualTo("OK"); - assertThat(redis.mget("one", "two")).isEqualTo(list("1", "2")); - } - - @Test - public void msetnx() throws Exception { - redis.set("one", "1"); - Map map = new LinkedHashMap<>(); - map.put("one", "1"); - map.put("two", "2"); - assertThat(redis.msetnx(map)).isFalse(); - redis.del("one"); - assertThat(redis.msetnx(map)).isTrue(); - assertThat(redis.get("two")).isEqualTo("2"); - } - - @Test - public void set() throws Exception { - assertThat(redis.get(key)).isNull(); - assertThat(redis.set(key, value)).isEqualTo("OK"); - assertThat(redis.get(key)).isEqualTo(value); - - assertThat(redis.set(key, value, px(20000))).isEqualTo("OK"); - assertThat(redis.set(key, value, ex(10))).isEqualTo("OK"); - assertThat(redis.get(key)).isEqualTo(value); - 
assertThat(redis.ttl(key)).isGreaterThanOrEqualTo( 9 ); - - assertThat(redis.set(key, value, px(10000))).isEqualTo("OK"); - assertThat(redis.get(key)).isEqualTo(value); - assertThat(redis.ttl(key)).isGreaterThanOrEqualTo( 9 ); - - assertThat(redis.set(key, value, nx())).isNull(); - assertThat(redis.set(key, value, xx())).isEqualTo("OK"); - assertThat(redis.get(key)).isEqualTo(value); - - redis.del(key); - assertThat(redis.set(key, value, nx())).isEqualTo("OK"); - assertThat(redis.get(key)).isEqualTo(value); - - redis.del(key); - - assertThat(redis.set(key, value, px(20000).nx())).isEqualTo("OK"); - assertThat(redis.get(key)).isEqualTo(value); - assertThat(redis.ttl(key) >= 19).isTrue(); - } - - @Test(expected = RedisException.class) - public void setNegativeEX() throws Exception { - redis.set(key, value, ex(-10)); - } - - @Test(expected = RedisException.class) - public void setNegativePX() throws Exception { - redis.set(key, value, px(-1000)); - } - - @Test - public void setExWithPx() throws Exception { - exception.expect(RedisCommandExecutionException.class); - exception.expectMessage("ERR syntax error"); - redis.set(key, value, ex(10).px(20000).nx()); - } - - @Test - public void setbit() throws Exception { - assertThat(redis.setbit(key, 0, 1)).isEqualTo(0); - assertThat(redis.setbit(key, 0, 0)).isEqualTo(1); - } - - @Test - public void setex() throws Exception { - assertThat(redis.setex(key, 10, value)).isEqualTo("OK"); - assertThat(redis.get(key)).isEqualTo(value); - assertThat(redis.ttl(key) >= 9).isTrue(); - } - - @Test - public void psetex() throws Exception { - assertThat(redis.psetex(key, 20000, value)).isEqualTo("OK"); - assertThat(redis.get(key)).isEqualTo(value); - assertThat(redis.pttl(key) >= 19000).isTrue(); - } - - @Test - public void setnx() throws Exception { - assertThat(redis.setnx(key, value)).isTrue(); - assertThat(redis.setnx(key, value)).isFalse(); - } - - @Test - public void setrange() throws Exception { - assertThat(redis.setrange(key, 0, "foo")).isEqualTo("foo".length()); - assertThat(redis.setrange(key, 3, "bar")).isEqualTo(6); - assertThat(redis.get(key)).isEqualTo("foobar"); - } - - @Test - public void strlen() throws Exception { - assertThat((long) redis.strlen(key)).isEqualTo(0); - redis.set(key, value); - assertThat((long) redis.strlen(key)).isEqualTo(value.length()); - } - - @Test - public void time() throws Exception { - - List time = redis.time(); - assertThat(time).hasSize(2); - - Long.parseLong(time.get(0)); - Long.parseLong(time.get(1)); - } -} diff --git a/src/test/java/com/lambdaworks/redis/commands/TransactionCommandTest.java b/src/test/java/com/lambdaworks/redis/commands/TransactionCommandTest.java deleted file mode 100644 index aca7ad5316..0000000000 --- a/src/test/java/com/lambdaworks/redis/commands/TransactionCommandTest.java +++ /dev/null @@ -1,96 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. 
- -package com.lambdaworks.redis.commands; - -import static org.assertj.core.api.Assertions.*; - -import java.util.List; - -import org.assertj.core.api.Assertions; -import org.junit.Rule; -import org.junit.Test; -import org.junit.rules.ExpectedException; - -import com.lambdaworks.redis.AbstractRedisClientTest; -import com.lambdaworks.redis.RedisCommandExecutionException; -import com.lambdaworks.redis.RedisConnection; -import com.lambdaworks.redis.RedisException; - -public class TransactionCommandTest extends AbstractRedisClientTest { - @Rule - public ExpectedException exception = ExpectedException.none(); - - @Test - public void discard() throws Exception { - assertThat(redis.multi()).isEqualTo("OK"); - redis.set(key, value); - assertThat(redis.discard()).isEqualTo("OK"); - assertThat(redis.get(key)).isNull(); - } - - @Test - public void exec() throws Exception { - assertThat(redis.multi()).isEqualTo("OK"); - redis.set(key, value); - assertThat(redis.exec()).isEqualTo(list("OK")); - assertThat(redis.get(key)).isEqualTo(value); - } - - @Test - public void watch() throws Exception { - assertThat(redis.watch(key)).isEqualTo("OK"); - - RedisConnection redis2 = client.connect().sync(); - redis2.set(key, value + "X"); - redis2.close(); - - redis.multi(); - redis.append(key, "foo"); - assertThat(redis.exec()).isEqualTo(list()); - - } - - @Test - public void unwatch() throws Exception { - assertThat(redis.unwatch()).isEqualTo("OK"); - } - - @Test - public void commandsReturnNullInMulti() throws Exception { - assertThat(redis.multi()).isEqualTo("OK"); - assertThat(redis.set(key, value)).isNull(); - assertThat(redis.get(key)).isNull(); - assertThat(redis.exec()).isEqualTo(list("OK", value)); - assertThat(redis.get(key)).isEqualTo(value); - } - - @Test - public void execmulti() throws Exception { - redis.multi(); - redis.set("one", "1"); - redis.set("two", "2"); - redis.mget("one", "two"); - redis.llen(key); - assertThat(redis.exec()).isEqualTo(list("OK", "OK", list("1", "2"), 0L)); - } - - @Test - public void errorInMulti() throws Exception { - redis.multi(); - redis.set(key, value); - redis.lpop(key); - redis.get(key); - List values = redis.exec(); - assertThat(values.get(0)).isEqualTo("OK"); - assertThat(values.get(1) instanceof RedisException).isTrue(); - assertThat(values.get(2)).isEqualTo(value); - } - - @Test - public void execWithoutMulti() throws Exception { - exception.expect(RedisCommandExecutionException.class); - exception.expectMessage("ERR EXEC without MULTI"); - redis.exec(); - } - -} diff --git a/src/test/java/com/lambdaworks/redis/commands/rx/BitRxCommandTest.java b/src/test/java/com/lambdaworks/redis/commands/rx/BitRxCommandTest.java deleted file mode 100644 index 89dcfca12a..0000000000 --- a/src/test/java/com/lambdaworks/redis/commands/rx/BitRxCommandTest.java +++ /dev/null @@ -1,16 +0,0 @@ -package com.lambdaworks.redis.commands.rx; - -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.commands.BitCommandTest; - -/** - * @author Mark Paluch - */ -public class BitRxCommandTest extends BitCommandTest { - @Override - protected RedisCommands connect() { - bitstring = RxSyncInvocationHandler.sync(client.connectAsync(new BitStringCodec()).getStatefulConnection()); - return RxSyncInvocationHandler.sync(client.connectAsync().getStatefulConnection()); - } - -} diff --git a/src/test/java/com/lambdaworks/redis/commands/rx/CustomRxCommandTest.java b/src/test/java/com/lambdaworks/redis/commands/rx/CustomRxCommandTest.java deleted file mode 100644 index 
62db9f778e..0000000000 --- a/src/test/java/com/lambdaworks/redis/commands/rx/CustomRxCommandTest.java +++ /dev/null @@ -1,60 +0,0 @@ -package com.lambdaworks.redis.commands.rx; - -import org.junit.Test; - -import com.lambdaworks.redis.api.rx.RedisReactiveCommands; -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.commands.CustomCommandTest; -import com.lambdaworks.redis.output.ValueListOutput; -import com.lambdaworks.redis.output.ValueOutput; -import com.lambdaworks.redis.protocol.CommandArgs; -import com.lambdaworks.redis.protocol.CommandType; - -import rx.Observable; -import rx.observers.TestSubscriber; - -/** - * @author Mark Paluch - */ -public class CustomRxCommandTest extends CustomCommandTest { - - @Override - protected RedisCommands connect() { - return RxSyncInvocationHandler.sync(client.connectAsync().getStatefulConnection()); - } - - @Test - public void dispatchGetAndSet() throws Exception { - - redis.set(key, value); - RedisReactiveCommands reactive = redis.getStatefulConnection().reactive(); - - Observable observable = reactive.dispatch(CommandType.GET, new ValueOutput<>(utf8StringCodec), - new CommandArgs<>(utf8StringCodec).addKey(key)); - - TestSubscriber testSubscriber = TestSubscriber.create(); - observable.subscribe(testSubscriber); - - testSubscriber.awaitTerminalEvent(); - testSubscriber.assertCompleted(); - testSubscriber.assertValue(value); - } - - @Test - public void dispatchList() throws Exception { - - redis.rpush(key, "a", "b", "c"); - RedisReactiveCommands reactive = redis.getStatefulConnection().reactive(); - - Observable observable = reactive.dispatch(CommandType.LRANGE, new ValueListOutput<>(utf8StringCodec), - new CommandArgs<>(utf8StringCodec).addKey(key).add(0).add(-1)); - - TestSubscriber testSubscriber = TestSubscriber.create(); - observable.subscribe(testSubscriber); - - testSubscriber.awaitTerminalEvent(); - testSubscriber.assertCompleted(); - testSubscriber.assertValues("a", "b", "c"); - } - -} diff --git a/src/test/java/com/lambdaworks/redis/commands/rx/GeoRxCommandTest.java b/src/test/java/com/lambdaworks/redis/commands/rx/GeoRxCommandTest.java deleted file mode 100644 index 605bcae85e..0000000000 --- a/src/test/java/com/lambdaworks/redis/commands/rx/GeoRxCommandTest.java +++ /dev/null @@ -1,12 +0,0 @@ -package com.lambdaworks.redis.commands.rx; - -import com.lambdaworks.redis.commands.GeoCommandTest; -import com.lambdaworks.redis.api.sync.RedisCommands; - -public class GeoRxCommandTest extends GeoCommandTest { - - @Override - protected RedisCommands connect() { - return RxSyncInvocationHandler.sync(client.connectAsync().getStatefulConnection()); - } -} diff --git a/src/test/java/com/lambdaworks/redis/commands/rx/HLLRxCommandTest.java b/src/test/java/com/lambdaworks/redis/commands/rx/HLLRxCommandTest.java deleted file mode 100644 index 7d122d6f29..0000000000 --- a/src/test/java/com/lambdaworks/redis/commands/rx/HLLRxCommandTest.java +++ /dev/null @@ -1,27 +0,0 @@ -package com.lambdaworks.redis.commands.rx; - -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.commands.HLLCommandTest; - -public class HLLRxCommandTest extends HLLCommandTest { - - @Override - protected RedisCommands connect() { - return RxSyncInvocationHandler.sync(client.connectAsync().getStatefulConnection()); - } - - @Override - public void pfaddDeprecated() throws Exception { - // Not available on reactive connection - } - - @Override - public void pfmergeDeprecated() throws Exception { - // Not available on reactive 
connection - } - - @Override - public void pfcountDeprecated() throws Exception { - // Not available on reactive connection - } -} diff --git a/src/test/java/com/lambdaworks/redis/commands/rx/HashRxCommandTest.java b/src/test/java/com/lambdaworks/redis/commands/rx/HashRxCommandTest.java deleted file mode 100644 index cc97c4c67e..0000000000 --- a/src/test/java/com/lambdaworks/redis/commands/rx/HashRxCommandTest.java +++ /dev/null @@ -1,13 +0,0 @@ -package com.lambdaworks.redis.commands.rx; - -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.commands.HashCommandTest; - -public class HashRxCommandTest extends HashCommandTest { - - @Override - protected RedisCommands connect() { - return RxSyncInvocationHandler.sync(client.connectAsync().getStatefulConnection()); - } - -} diff --git a/src/test/java/com/lambdaworks/redis/commands/rx/KeyRxCommandTest.java b/src/test/java/com/lambdaworks/redis/commands/rx/KeyRxCommandTest.java deleted file mode 100644 index 668d687a84..0000000000 --- a/src/test/java/com/lambdaworks/redis/commands/rx/KeyRxCommandTest.java +++ /dev/null @@ -1,11 +0,0 @@ -package com.lambdaworks.redis.commands.rx; - -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.commands.KeyCommandTest; - -public class KeyRxCommandTest extends KeyCommandTest { - @Override - protected RedisCommands connect() { - return RxSyncInvocationHandler.sync(client.connectAsync().getStatefulConnection()); - } -} diff --git a/src/test/java/com/lambdaworks/redis/commands/rx/ListRxCommandTest.java b/src/test/java/com/lambdaworks/redis/commands/rx/ListRxCommandTest.java deleted file mode 100644 index 9b702b7bc7..0000000000 --- a/src/test/java/com/lambdaworks/redis/commands/rx/ListRxCommandTest.java +++ /dev/null @@ -1,11 +0,0 @@ -package com.lambdaworks.redis.commands.rx; - -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.commands.ListCommandTest; - -public class ListRxCommandTest extends ListCommandTest { - @Override - protected RedisCommands connect() { - return RxSyncInvocationHandler.sync(client.connectAsync().getStatefulConnection()); - } -} diff --git a/src/test/java/com/lambdaworks/redis/commands/rx/NumericRxCommandTest.java b/src/test/java/com/lambdaworks/redis/commands/rx/NumericRxCommandTest.java deleted file mode 100644 index 5c29c38bc0..0000000000 --- a/src/test/java/com/lambdaworks/redis/commands/rx/NumericRxCommandTest.java +++ /dev/null @@ -1,11 +0,0 @@ -package com.lambdaworks.redis.commands.rx; - -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.commands.NumericCommandTest; - -public class NumericRxCommandTest extends NumericCommandTest { - @Override - protected RedisCommands connect() { - return RxSyncInvocationHandler.sync(client.connectAsync().getStatefulConnection()); - } -} diff --git a/src/test/java/com/lambdaworks/redis/commands/rx/RxSyncInvocationHandler.java b/src/test/java/com/lambdaworks/redis/commands/rx/RxSyncInvocationHandler.java deleted file mode 100644 index ea5ef08d62..0000000000 --- a/src/test/java/com/lambdaworks/redis/commands/rx/RxSyncInvocationHandler.java +++ /dev/null @@ -1,100 +0,0 @@ -package com.lambdaworks.redis.commands.rx; - -import java.lang.reflect.InvocationTargetException; -import java.lang.reflect.Method; -import java.lang.reflect.Proxy; -import java.util.Iterator; -import java.util.List; -import java.util.Set; - -import com.lambdaworks.redis.internal.AbstractInvocationHandler; -import rx.Observable; - -import 
com.lambdaworks.redis.api.StatefulConnection; -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection; -import com.lambdaworks.redis.internal.LettuceLists; -import com.lambdaworks.redis.internal.LettuceSets; -import com.lambdaworks.redis.sentinel.api.StatefulRedisSentinelConnection; -import com.lambdaworks.redis.sentinel.api.sync.RedisSentinelCommands; - -/** - * Invocation handler for testing purposes. - * @param - * @param - */ -public class RxSyncInvocationHandler extends AbstractInvocationHandler { - - private final StatefulConnection connection; - private final Object rxApi; - - public RxSyncInvocationHandler(StatefulConnection connection, Object rxApi) { - this.connection = connection; - this.rxApi = rxApi; - } - - @Override - @SuppressWarnings("unchecked") - protected Object handleInvocation(Object proxy, Method method, Object[] args) throws Throwable { - - try { - - Method targetMethod = rxApi.getClass().getMethod(method.getName(), method.getParameterTypes()); - - Object result = targetMethod.invoke(rxApi, args); - - if (result == null || !(result instanceof Observable)) { - return result; - } - Observable observable = (Observable) result; - - if (!method.getName().equals("exec") && !method.getName().equals("multi")) { - if (connection instanceof StatefulRedisConnection && ((StatefulRedisConnection) connection).isMulti()) { - observable.subscribe(); - return null; - } - } - - List value = observable.toList().toBlocking().first(); - - if (method.getReturnType().equals(List.class)) { - return value; - } - - if (method.getReturnType().equals(Set.class)) { - return LettuceSets.newHashSet(value); - } - - if (!value.isEmpty()) { - return value.get(0); - } - - return null; - - } catch (InvocationTargetException e) { - throw e.getTargetException(); - } - } - - public static RedisCommands sync(StatefulRedisConnection connection) { - - RxSyncInvocationHandler handler = new RxSyncInvocationHandler<>(connection, connection.reactive()); - return (RedisCommands) Proxy.newProxyInstance(handler.getClass().getClassLoader(), - new Class[] { RedisCommands.class }, handler); - } - - public static RedisCommands sync(StatefulRedisClusterConnection connection) { - - RxSyncInvocationHandler handler = new RxSyncInvocationHandler<>(connection, connection.reactive()); - return (RedisCommands) Proxy.newProxyInstance(handler.getClass().getClassLoader(), - new Class[] { RedisCommands.class }, handler); - } - - public static RedisSentinelCommands sync(StatefulRedisSentinelConnection connection) { - - RxSyncInvocationHandler handler = new RxSyncInvocationHandler<>(connection, connection.reactive()); - return (RedisSentinelCommands) Proxy.newProxyInstance(handler.getClass().getClassLoader(), - new Class[] { RedisSentinelCommands.class }, handler); - } -} \ No newline at end of file diff --git a/src/test/java/com/lambdaworks/redis/commands/rx/ScriptingRxCommandTest.java b/src/test/java/com/lambdaworks/redis/commands/rx/ScriptingRxCommandTest.java deleted file mode 100644 index e38f75ff99..0000000000 --- a/src/test/java/com/lambdaworks/redis/commands/rx/ScriptingRxCommandTest.java +++ /dev/null @@ -1,14 +0,0 @@ -package com.lambdaworks.redis.commands.rx; - -import com.lambdaworks.redis.api.rx.RedisReactiveCommands; -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.commands.ScriptingCommandTest; - -public class ScriptingRxCommandTest extends 
ScriptingCommandTest { - - - @Override - protected RedisCommands connect() { - return RxSyncInvocationHandler.sync(client.connectAsync().getStatefulConnection()); - } -} diff --git a/src/test/java/com/lambdaworks/redis/commands/rx/ServerRxCommandTest.java b/src/test/java/com/lambdaworks/redis/commands/rx/ServerRxCommandTest.java deleted file mode 100644 index f5e011f6da..0000000000 --- a/src/test/java/com/lambdaworks/redis/commands/rx/ServerRxCommandTest.java +++ /dev/null @@ -1,59 +0,0 @@ -package com.lambdaworks.redis.commands.rx; - -import static org.assertj.core.api.Assertions.*; - -import org.junit.Before; -import org.junit.Test; - -import com.lambdaworks.redis.api.rx.RedisReactiveCommands; -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.commands.ServerCommandTest; - -public class ServerRxCommandTest extends ServerCommandTest { - - private RedisReactiveCommands reactive; - - @Before - public void openConnection() throws Exception { - super.openConnection(); - reactive = redis.getStatefulConnection().reactive(); - } - - @Override - protected RedisCommands connect() { - return RxSyncInvocationHandler.sync(client.connectAsync().getStatefulConnection()); - } - - /** - * Luckily these commands do not destroy anything in contrast to sync/async. - */ - @Test - public void shutdown() throws Exception { - reactive.shutdown(true); - assertThat(reactive.isOpen()).isTrue(); - } - - @Test - public void debugOom() throws Exception { - reactive.debugOom(); - assertThat(reactive.isOpen()).isTrue(); - } - - @Test - public void debugSegfault() throws Exception { - reactive.debugSegfault(); - assertThat(reactive.isOpen()).isTrue(); - } - - @Test - public void debugRestart() throws Exception { - reactive.debugRestart(1L); - assertThat(reactive.isOpen()).isTrue(); - } - - @Test - public void migrate() throws Exception { - reactive.migrate("host", 1234, "key", 1, 10); - assertThat(reactive.isOpen()).isTrue(); - } -} diff --git a/src/test/java/com/lambdaworks/redis/commands/rx/SetRxCommandTest.java b/src/test/java/com/lambdaworks/redis/commands/rx/SetRxCommandTest.java deleted file mode 100644 index 6d585e2b8c..0000000000 --- a/src/test/java/com/lambdaworks/redis/commands/rx/SetRxCommandTest.java +++ /dev/null @@ -1,13 +0,0 @@ -package com.lambdaworks.redis.commands.rx; - -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.commands.SetCommandTest; - -public class SetRxCommandTest extends SetCommandTest { - - @Override - protected RedisCommands connect() { - return RxSyncInvocationHandler.sync(client.connectAsync().getStatefulConnection()); - } - -} diff --git a/src/test/java/com/lambdaworks/redis/commands/rx/SortRxCommandTest.java b/src/test/java/com/lambdaworks/redis/commands/rx/SortRxCommandTest.java deleted file mode 100644 index 78aee42251..0000000000 --- a/src/test/java/com/lambdaworks/redis/commands/rx/SortRxCommandTest.java +++ /dev/null @@ -1,11 +0,0 @@ -package com.lambdaworks.redis.commands.rx; - -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.commands.SortCommandTest; - -public class SortRxCommandTest extends SortCommandTest { - @Override - protected RedisCommands connect() { - return RxSyncInvocationHandler.sync(client.connectAsync().getStatefulConnection()); - } -} diff --git a/src/test/java/com/lambdaworks/redis/commands/rx/SortedSetRxCommandTest.java b/src/test/java/com/lambdaworks/redis/commands/rx/SortedSetRxCommandTest.java deleted file mode 100644 index ff5acd805f..0000000000 --- 
a/src/test/java/com/lambdaworks/redis/commands/rx/SortedSetRxCommandTest.java +++ /dev/null @@ -1,11 +0,0 @@ -package com.lambdaworks.redis.commands.rx; - -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.commands.SortedSetCommandTest; - -public class SortedSetRxCommandTest extends SortedSetCommandTest { - @Override - protected RedisCommands connect() { - return RxSyncInvocationHandler.sync(client.connectAsync().getStatefulConnection()); - } -} diff --git a/src/test/java/com/lambdaworks/redis/commands/rx/StringRxCommandTest.java b/src/test/java/com/lambdaworks/redis/commands/rx/StringRxCommandTest.java deleted file mode 100644 index ce2552f0ee..0000000000 --- a/src/test/java/com/lambdaworks/redis/commands/rx/StringRxCommandTest.java +++ /dev/null @@ -1,32 +0,0 @@ -package com.lambdaworks.redis.commands.rx; - -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.commands.StringCommandTest; -import org.junit.Test; -import rx.Observable; - -import static org.assertj.core.api.Assertions.assertThat; - -public class StringRxCommandTest extends StringCommandTest { - @Override - protected RedisCommands connect() { - return RxSyncInvocationHandler.sync(client.connectAsync().getStatefulConnection()); - } - - @Test - public void mget() throws Exception { - - StatefulRedisConnection connection = client.connect(); - - connection.sync().set(key, value); - connection.sync().set("key1", value); - connection.sync().set("key2", value); - - Observable mget = connection.reactive().mget(key, "key1", "key2"); - String first = mget.toBlocking().first(); - assertThat(first).isEqualTo(value); - - connection.close(); - } -} diff --git a/src/test/java/com/lambdaworks/redis/commands/rx/TransactionRxCommandTest.java b/src/test/java/com/lambdaworks/redis/commands/rx/TransactionRxCommandTest.java deleted file mode 100644 index 0649a67602..0000000000 --- a/src/test/java/com/lambdaworks/redis/commands/rx/TransactionRxCommandTest.java +++ /dev/null @@ -1,136 +0,0 @@ -package com.lambdaworks.redis.commands.rx; - -import static org.assertj.core.api.Assertions.assertThat; - -import java.util.Iterator; -import java.util.List; - -import org.junit.After; -import org.junit.Before; -import org.junit.Test; - -import rx.Observable; -import rx.observables.BlockingObservable; -import rx.observers.TestSubscriber; - -import com.lambdaworks.redis.ClientOptions; -import com.lambdaworks.redis.RedisException; -import com.lambdaworks.redis.api.rx.RedisReactiveCommands; -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.commands.TransactionCommandTest; -import com.lambdaworks.redis.internal.LettuceLists; - -public class TransactionRxCommandTest extends TransactionCommandTest { - - private RedisReactiveCommands commands; - - @Override - protected RedisCommands connect() { - return RxSyncInvocationHandler.sync(client.connectAsync().getStatefulConnection()); - } - - @Before - public void openConnection() throws Exception { - client.setOptions(ClientOptions.builder().build()); - redis = connect(); - redis.flushall(); - redis.flushdb(); - - commands = redis.getStatefulConnection().reactive(); - } - - @After - public void closeConnection() throws Exception { - redis.close(); - } - - @Test - public void discard() throws Exception { - assertThat(first(commands.multi())).isEqualTo("OK"); - - commands.set(key, value); - - assertThat(first(commands.discard())).isEqualTo("OK"); - 
assertThat(first(commands.get(key))).isNull(); - } - - @Test - public void execSingular() throws Exception { - - assertThat(first(commands.multi())).isEqualTo("OK"); - - redis.set(key, value); - - assertThat(first(commands.exec())).isEqualTo("OK"); - assertThat(first(commands.get(key))).isEqualTo(value); - } - - @Test - public void errorInMulti() throws Exception { - commands.multi().subscribe(); - commands.set(key, value).subscribe(); - commands.lpop(key).onExceptionResumeNext(Observable. empty()).subscribe(); - commands.get(key).subscribe(); - - List values = all(commands.exec()); - assertThat(values.get(0)).isEqualTo("OK"); - assertThat(values.get(1) instanceof RedisException).isTrue(); - assertThat(values.get(2)).isEqualTo(value); - } - - @Test - public void resultOfMultiIsContainedInCommandObservables() throws Exception { - - TestSubscriber set1 = TestSubscriber.create(); - TestSubscriber set2 = TestSubscriber.create(); - TestSubscriber mget = TestSubscriber.create(); - TestSubscriber llen = TestSubscriber.create(); - TestSubscriber exec = TestSubscriber.create(); - - commands.multi().subscribe(); - commands.set("key1", "value1").subscribe(set1); - commands.set("key2", "value2").subscribe(set2); - commands.mget("key1", "key2").subscribe(mget); - commands.llen("something").subscribe(llen); - commands.exec().subscribe(exec); - - exec.awaitTerminalEvent(); - - set1.assertValue("OK"); - set2.assertValue("OK"); - mget.assertValues("value1", "value2"); - llen.assertValue(0L); - } - - @Test - public void resultOfMultiIsContainedInExecObservable() throws Exception { - - TestSubscriber exec = TestSubscriber.create(); - - commands.multi().subscribe(); - commands.set("key1", "value1").subscribe(); - commands.set("key2", "value2").subscribe(); - commands.mget("key1", "key2").subscribe(); - commands.llen("something").subscribe(); - commands.exec().subscribe(exec); - - exec.awaitTerminalEvent(); - - assertThat(exec.getOnNextEvents()).hasSize(4).containsExactly("OK", "OK", list("value1", "value2"), 0L); - } - - protected T first(Observable observable) { - BlockingObservable blocking = observable.toBlocking(); - Iterator iterator = blocking.getIterator(); - if (iterator.hasNext()) { - return iterator.next(); - } - return null; - } - - protected List all(Observable observable) { - BlockingObservable blocking = observable.toBlocking(); - Iterator iterator = blocking.getIterator(); - return LettuceLists.newList(iterator); - } -} diff --git a/src/test/java/com/lambdaworks/redis/event/ConnectionEventsTriggeredTest.java b/src/test/java/com/lambdaworks/redis/event/ConnectionEventsTriggeredTest.java deleted file mode 100644 index a6ba8c04ae..0000000000 --- a/src/test/java/com/lambdaworks/redis/event/ConnectionEventsTriggeredTest.java +++ /dev/null @@ -1,63 +0,0 @@ -package com.lambdaworks.redis.event; - -import static org.assertj.core.api.Assertions.assertThat; - -import java.util.List; - -import org.assertj.core.api.Condition; -import org.junit.Test; - -import rx.Subscription; -import rx.observers.TestSubscriber; - -import com.lambdaworks.Wait; -import com.lambdaworks.redis.AbstractRedisClientTest; -import com.lambdaworks.redis.event.connection.*; - -/** - * @author Mark Paluch - */ -public class ConnectionEventsTriggeredTest extends AbstractRedisClientTest { - - @Test - public void testConnectionEvents() throws Exception { - - TestSubscriber subscriber = TestSubscriber.create(); - - Subscription subscription = client.getResources().eventBus().get().filter( - event -> event instanceof ConnectionEvent) - 
.subscribe(subscriber); - - try { - client.connect().close(); - Wait.untilTrue(() -> subscriber.getOnNextEvents().size() > 3).waitOrTimeout(); - } finally { - subscription.unsubscribe(); - } - - List events = subscriber.getOnNextEvents(); - assertThat(events).areAtLeast(1, new ExpectedClassCondition(ConnectedEvent.class)); - assertThat(events).areAtLeast(1, new ExpectedClassCondition(ConnectionActivatedEvent.class)); - assertThat(events).areAtLeast(1, new ExpectedClassCondition(DisconnectedEvent.class)); - assertThat(events).areAtLeast(1, new ExpectedClassCondition(ConnectionDeactivatedEvent.class)); - - ConnectionEvent event = (ConnectionEvent) events.get(0); - assertThat(event.remoteAddress()).isNotNull(); - assertThat(event.localAddress()).isNotNull(); - - assertThat(event.toString()).contains("->"); - } - - private static class ExpectedClassCondition extends Condition { - private final Class expectedClass; - - public ExpectedClassCondition(Class expectedClass) { - this.expectedClass = expectedClass; - } - - @Override - public boolean matches(Object value) { - return value != null && value.getClass().equals(expectedClass); - } - } -} diff --git a/src/test/java/com/lambdaworks/redis/event/DefaultEventBusTest.java b/src/test/java/com/lambdaworks/redis/event/DefaultEventBusTest.java deleted file mode 100644 index b344aef133..0000000000 --- a/src/test/java/com/lambdaworks/redis/event/DefaultEventBusTest.java +++ /dev/null @@ -1,39 +0,0 @@ -package com.lambdaworks.redis.event; - -import static org.assertj.core.api.Assertions.assertThat; - -import java.util.concurrent.TimeUnit; - -import org.junit.Test; -import org.junit.runner.RunWith; -import org.mockito.Mock; -import org.mockito.runners.MockitoJUnitRunner; - -import rx.observers.TestSubscriber; -import rx.schedulers.Schedulers; -import rx.schedulers.TestScheduler; - -/** - * @author Mark Paluch - */ -@RunWith(MockitoJUnitRunner.class) -public class DefaultEventBusTest { - - @Mock - private Event event; - - @Test - public void publishToSubscriber() throws Exception { - TestScheduler testScheduler = Schedulers.test(); - EventBus sut = new DefaultEventBus(testScheduler); - - TestSubscriber subscriber = new TestSubscriber(); - sut.get().subscribe(subscriber); - - sut.publish(event); - - testScheduler.advanceTimeBy(1, TimeUnit.SECONDS); - - assertThat(subscriber.getOnNextEvents()).hasSize(1).contains(event); - } -} diff --git a/src/test/java/com/lambdaworks/redis/event/DefaultEventPublisherOptionsTest.java b/src/test/java/com/lambdaworks/redis/event/DefaultEventPublisherOptionsTest.java deleted file mode 100644 index 004e909587..0000000000 --- a/src/test/java/com/lambdaworks/redis/event/DefaultEventPublisherOptionsTest.java +++ /dev/null @@ -1,41 +0,0 @@ -package com.lambdaworks.redis.event; - -import static org.assertj.core.api.Assertions.assertThat; - -import java.util.concurrent.TimeUnit; - -import org.junit.Test; - -/** - * @author Mark Paluch - */ -public class DefaultEventPublisherOptionsTest { - - @Test - public void testDefault() throws Exception { - - DefaultEventPublisherOptions sut = DefaultEventPublisherOptions.create(); - - assertThat(sut.eventEmitInterval()).isEqualTo(10); - assertThat(sut.eventEmitIntervalUnit()).isEqualTo(TimeUnit.MINUTES); - } - - @Test - public void testDisabled() throws Exception { - - DefaultEventPublisherOptions sut = DefaultEventPublisherOptions.disabled(); - - assertThat(sut.eventEmitInterval()).isEqualTo(0); - assertThat(sut.eventEmitIntervalUnit()).isEqualTo(TimeUnit.SECONDS); - } - - @Test - public 
void testBuilder() throws Exception { - - DefaultEventPublisherOptions sut = new DefaultEventPublisherOptions.Builder().eventEmitInterval(1, TimeUnit.SECONDS) - .build(); - - assertThat(sut.eventEmitInterval()).isEqualTo(1); - assertThat(sut.eventEmitIntervalUnit()).isEqualTo(TimeUnit.SECONDS); - } -} diff --git a/src/test/java/com/lambdaworks/redis/internal/AbstractInvocationHandlerTest.java b/src/test/java/com/lambdaworks/redis/internal/AbstractInvocationHandlerTest.java deleted file mode 100644 index 410875e257..0000000000 --- a/src/test/java/com/lambdaworks/redis/internal/AbstractInvocationHandlerTest.java +++ /dev/null @@ -1,66 +0,0 @@ -package com.lambdaworks.redis.internal; - -import static org.assertj.core.api.AssertionsForInterfaceTypes.assertThat; - -import java.lang.reflect.Method; -import java.lang.reflect.Proxy; -import java.util.Collection; - -import org.junit.Test; - -/** - * @author Mark Paluch - */ -public class AbstractInvocationHandlerTest { - - @Test - public void shouldHandleInterfaceMethod() throws Exception { - - ReturnOne proxy = createProxy(); - assertThat(proxy.returnOne()).isEqualTo(1); - } - - @Test - public void shouldBeEqualToSelf() throws Exception { - - ReturnOne proxy1 = createProxy(); - ReturnOne proxy2 = createProxy(); - - assertThat(proxy1).isEqualTo(proxy1); - assertThat(proxy1.hashCode()).isEqualTo(proxy1.hashCode()); - - assertThat(proxy1).isNotEqualTo(proxy2); - assertThat(proxy1.hashCode()).isNotEqualTo(proxy2.hashCode()); - } - - @Test - public void shouldBeNotEqualToProxiesWithDifferentInterfaces() throws Exception { - - ReturnOne proxy1 = createProxy(); - Object proxy2 = Proxy.newProxyInstance(getClass().getClassLoader(), new Class[] { ReturnOne.class, Collection.class }, - new InvocationHandler()); - - assertThat(proxy1).isNotEqualTo(proxy2); - assertThat(proxy1.hashCode()).isNotEqualTo(proxy2.hashCode()); - } - - private ReturnOne createProxy() { - - return (ReturnOne) Proxy.newProxyInstance(getClass().getClassLoader(), new Class[] { ReturnOne.class }, - new InvocationHandler()); - - } - - static class InvocationHandler extends AbstractInvocationHandler { - - @Override - protected Object handleInvocation(Object proxy, Method method, Object[] args) throws Throwable { - return 1; - } - } - - static interface ReturnOne { - int returnOne(); - } - -} \ No newline at end of file diff --git a/src/test/java/com/lambdaworks/redis/internal/HostAndPortTest.java b/src/test/java/com/lambdaworks/redis/internal/HostAndPortTest.java deleted file mode 100644 index c78cab34b6..0000000000 --- a/src/test/java/com/lambdaworks/redis/internal/HostAndPortTest.java +++ /dev/null @@ -1,171 +0,0 @@ -package com.lambdaworks.redis.internal; - -import static org.assertj.core.api.Assertions.assertThat; -import static org.assertj.core.api.Assertions.fail; - -import org.junit.Test; - -/** - * @author Mark Paluch - */ -public class HostAndPortTest { - - @Test - public void testFromStringWellFormed() { - // Well-formed inputs. - checkFromStringCase("google.com", 80, "google.com", 80, false); - checkFromStringCase("google.com", 80, "google.com", 80, false); - checkFromStringCase("192.0.2.1", 82, "192.0.2.1", 82, false); - checkFromStringCase("[2001::1]", 84, "2001::1", 84, false); - checkFromStringCase("2001::3", 86, "2001::3", 86, false); - checkFromStringCase("host:", 80, "host", 80, false); - } - - @Test - public void testFromStringBadDefaultPort() { - // Well-formed strings with bad default ports. 
- checkFromStringCase("gmail.com:81", -1, "gmail.com", 81, true); - checkFromStringCase("192.0.2.2:83", -1, "192.0.2.2", 83, true); - checkFromStringCase("[2001::2]:85", -1, "2001::2", 85, true); - checkFromStringCase("goo.gl:65535", 65536, "goo.gl", 65535, true); - // No port, bad default. - checkFromStringCase("google.com", -1, "google.com", -1, false); - checkFromStringCase("192.0.2.1", 65536, "192.0.2.1", -1, false); - checkFromStringCase("[2001::1]", -1, "2001::1", -1, false); - checkFromStringCase("2001::3", 65536, "2001::3", -1, false); - } - - @Test - public void testFromStringUnusedDefaultPort() { - // Default port, but unused. - checkFromStringCase("gmail.com:81", 77, "gmail.com", 81, true); - checkFromStringCase("192.0.2.2:83", 77, "192.0.2.2", 83, true); - checkFromStringCase("[2001::2]:85", 77, "2001::2", 85, true); - } - - @Test - public void testFromStringBadPort() { - // Out-of-range ports. - checkFromStringCase("google.com:65536", 1, null, 99, false); - checkFromStringCase("google.com:9999999999", 1, null, 99, false); - // Invalid port parts. - checkFromStringCase("google.com:port", 1, null, 99, false); - checkFromStringCase("google.com:-25", 1, null, 99, false); - checkFromStringCase("google.com:+25", 1, null, 99, false); - checkFromStringCase("google.com:25 ", 1, null, 99, false); - checkFromStringCase("google.com:25\t", 1, null, 99, false); - checkFromStringCase("google.com:0x25 ", 1, null, 99, false); - } - - @Test - public void testFromStringUnparseableNonsense() { - // Some nonsense that causes parse failures. - checkFromStringCase("[goo.gl]", 1, null, 99, false); - checkFromStringCase("[goo.gl]:80", 1, null, 99, false); - checkFromStringCase("[", 1, null, 99, false); - checkFromStringCase("[]:", 1, null, 99, false); - checkFromStringCase("[]:80", 1, null, 99, false); - checkFromStringCase("[]bad", 1, null, 99, false); - } - - @Test - public void testFromStringParseableNonsense() { - // Examples of nonsense that gets through. 
- checkFromStringCase("[[:]]", 86, "[:]", 86, false); - checkFromStringCase("x:y:z", 87, "x:y:z", 87, false); - checkFromStringCase("", 88, "", 88, false); - checkFromStringCase(":", 99, "", 99, false); - checkFromStringCase(":123", -1, "", 123, true); - checkFromStringCase("\nOMG\t", 89, "\nOMG\t", 89, false); - } - - @Test - public void shouldCreateHostAndPortFromParts() { - HostAndPort hp = HostAndPort.of("gmail.com", 81); - assertThat(hp.getHostText()).isEqualTo("gmail.com"); - assertThat(hp.hasPort()).isTrue(); - assertThat(hp.getPort()).isEqualTo(81); - - try { - HostAndPort.of("gmail.com:80", 81); - fail("Expected IllegalArgumentException"); - } catch (IllegalArgumentException expected) { - } - - try { - HostAndPort.of("gmail.com", -1); - fail("Expected IllegalArgumentException"); - } catch (IllegalArgumentException expected) { - } - } - - @Test - public void shouldCompare() { - HostAndPort hp1 = HostAndPort.parse("foo::123"); - HostAndPort hp2 = HostAndPort.parse("foo::123"); - HostAndPort hp3 = HostAndPort.parse("[foo::124]"); - HostAndPort hp4 = HostAndPort.of("[foo::123]", 80); - HostAndPort hp5 = HostAndPort.parse("[foo::123]:80"); - assertThat(hp1.hashCode()).isEqualTo(hp1.hashCode()); - assertThat(hp2.hashCode()).isEqualTo(hp1.hashCode()); - assertThat(hp3.hashCode()).isNotEqualTo(hp1.hashCode()); - assertThat(hp3.hashCode()).isNotEqualTo(hp4.hashCode()); - assertThat(hp5.hashCode()).isNotEqualTo(hp4.hashCode()); - - assertThat(hp1.equals(hp1)).isTrue(); - assertThat(hp1).isEqualTo(hp1); - assertThat(hp1.equals(hp2)).isTrue(); - assertThat(hp1.equals(hp3)).isFalse(); - assertThat(hp1).isNotEqualTo(hp3); - assertThat(hp3.equals(hp4)).isFalse(); - assertThat(hp4.equals(hp5)).isFalse(); - assertThat(hp1.equals(null)).isFalse(); - } - - @Test - public void shouldApplyCompatibilityParsing() throws Exception { - - checkFromCompatCase("affe::123:6379", "affe::123", 6379); - checkFromCompatCase("1:2:3:4:5:6:7:8:6379", "1:2:3:4:5:6:7:8", 6379); - checkFromCompatCase("[affe::123]:6379", "affe::123", 6379); - checkFromCompatCase("127.0.0.1:6379", "127.0.0.1", 6379); - } - - private static void checkFromStringCase(String hpString, int defaultPort, String expectHost, int expectPort, - boolean expectHasExplicitPort) { - HostAndPort hp; - try { - hp = HostAndPort.parse(hpString); - } catch (IllegalArgumentException e) { - // Make sure we expected this. - assertThat(expectHost).isNull(); - return; - } - assertThat(expectHost).isNotNull(); - - // Apply withDefaultPort(), yielding hp2. - final boolean badDefaultPort = (defaultPort < 0 || defaultPort > 65535); - - // Check the pre-withDefaultPort() instance. 
- if (expectHasExplicitPort) { - assertThat(hp.hasPort()).isTrue(); - assertThat(hp.getPort()).isEqualTo(expectPort); - } else { - assertThat(hp.hasPort()).isFalse(); - try { - hp.getPort(); - fail("Expected IllegalStateException"); - } catch (IllegalStateException expected) { - } - } - assertThat(hp.getHostText()).isEqualTo(expectHost); - } - - private static void checkFromCompatCase(String hpString, String expectHost, int expectPort) { - - HostAndPort hostAndPort = HostAndPort.parseCompat(hpString); - assertThat(hostAndPort.getHostText()).isEqualTo(expectHost); - assertThat(hostAndPort.getPort()).isEqualTo(expectPort); - - } -} \ No newline at end of file diff --git a/src/test/java/com/lambdaworks/redis/issue42/BreakClientBase.java b/src/test/java/com/lambdaworks/redis/issue42/BreakClientBase.java deleted file mode 100644 index 8baa2f0597..0000000000 --- a/src/test/java/com/lambdaworks/redis/issue42/BreakClientBase.java +++ /dev/null @@ -1,99 +0,0 @@ -package com.lambdaworks.redis.issue42; - -import static org.assertj.core.api.Assertions.assertThat; -import static org.junit.Assert.fail; - -import java.nio.ByteBuffer; -import java.util.concurrent.TimeUnit; - -import org.apache.logging.log4j.LogManager; -import org.apache.logging.log4j.Logger; - -import com.lambdaworks.redis.RedisCommandTimeoutException; -import com.lambdaworks.redis.api.sync.RedisHashCommands; -import com.lambdaworks.redis.codec.Utf8StringCodec; - -/** - * Base for simulating slow connections/commands running into timeouts. - */ -public abstract class BreakClientBase { - - public static int TIMEOUT = 1; - - public static final String TEST_KEY = "taco"; - public volatile boolean sleep = false; - - protected Logger log = LogManager.getLogger(getClass()); - - public void testSingle(RedisHashCommands client) throws InterruptedException { - populateTest(0, client); - - assertThat(client.hvals(TEST_KEY)).hasSize(16385); - - breakClient(client); - - assertThat(client.hvals(TEST_KEY)).hasSize(16385); - } - - public void testLoop(RedisHashCommands client) throws InterruptedException { - populateTest(100, client); - - assertThat(client.hvals(TEST_KEY)).hasSize(16485); - - breakClient(client); - - assertExtraKeys(100, client); - } - - public void assertExtraKeys(int howmany, RedisHashCommands target) { - for (int x = 0; x < howmany; x++) { - int i = Integer.parseInt(target.hget(TEST_KEY, "GET-" + x)); - assertThat(x).isEqualTo(i); - } - } - - protected void breakClient(RedisHashCommands target) throws InterruptedException { - try { - this.sleep = true; - log.info("This should timeout"); - target.hgetall(TEST_KEY); - fail(); - } catch (RedisCommandTimeoutException expected) { - log.info("got expected timeout"); - } - } - - protected void populateTest(int loopFor, RedisHashCommands target) { - log.info("populating hash"); - target.hset(TEST_KEY, TEST_KEY, TEST_KEY); - - for (int x = 0; x < loopFor; x++) { - target.hset(TEST_KEY, "GET-" + x, Integer.toString(x)); - } - - for (int i = 0; i < 16384; i++) { - target.hset(TEST_KEY, Integer.toString(i), TEST_KEY); - } - assertThat(target.hvals(TEST_KEY)).hasSize(16385 + loopFor); - log.info("done"); - - } - - public Utf8StringCodec slowCodec = new Utf8StringCodec() { - public String decodeValue(ByteBuffer bytes) { - - if (sleep) { - log.info("Sleeping for " + (TIMEOUT + 2) + " seconds in slowCodec"); - sleep = false; - try { - TimeUnit.SECONDS.sleep(TIMEOUT + 2); - } catch (InterruptedException e) { - throw new RuntimeException(e); - } - log.info("Done sleeping in slowCodec"); - } - - 
return super.decodeValue(bytes); - } - }; -} diff --git a/src/test/java/com/lambdaworks/redis/issue42/BreakClientTest.java b/src/test/java/com/lambdaworks/redis/issue42/BreakClientTest.java deleted file mode 100644 index 70c85c7da0..0000000000 --- a/src/test/java/com/lambdaworks/redis/issue42/BreakClientTest.java +++ /dev/null @@ -1,45 +0,0 @@ -package com.lambdaworks.redis.issue42; - -import java.util.concurrent.TimeUnit; - -import com.lambdaworks.category.SlowTests; -import com.lambdaworks.redis.DefaultRedisClient; -import com.lambdaworks.redis.RedisClient; -import com.lambdaworks.redis.api.sync.RedisCommands; -import org.junit.After; -import org.junit.Before; -import org.junit.Ignore; -import org.junit.Test; - -@SlowTests -@Ignore("Run me manually") -public class BreakClientTest extends BreakClientBase { - - protected static RedisClient client = DefaultRedisClient.get(); - - protected RedisCommands redis; - - @Before - public void setUp() throws Exception { - client.setDefaultTimeout(TIMEOUT, TimeUnit.SECONDS); - redis = client.connect(this.slowCodec).sync(); - redis.flushall(); - redis.flushdb(); - } - - @After - public void tearDown() throws Exception { - redis.close(); - } - - @Test - public void testStandAlone() throws Exception { - testSingle(redis); - } - - @Test - public void testLooping() throws Exception { - testLoop(redis); - } - -} diff --git a/src/test/java/com/lambdaworks/redis/issue42/BreakClusterClientTest.java b/src/test/java/com/lambdaworks/redis/issue42/BreakClusterClientTest.java deleted file mode 100644 index d1685cb2a7..0000000000 --- a/src/test/java/com/lambdaworks/redis/issue42/BreakClusterClientTest.java +++ /dev/null @@ -1,76 +0,0 @@ -package com.lambdaworks.redis.issue42; - -import static com.google.code.tempusfugit.temporal.Duration.*; -import static com.google.code.tempusfugit.temporal.Timeout.*; - -import java.util.concurrent.TimeUnit; - -import com.lambdaworks.category.SlowTests; -import com.lambdaworks.redis.FastShutdown; -import org.junit.*; - -import com.google.code.tempusfugit.temporal.Condition; -import com.google.code.tempusfugit.temporal.Duration; -import com.google.code.tempusfugit.temporal.ThreadSleep; -import com.google.code.tempusfugit.temporal.WaitFor; -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.TestSettings; -import com.lambdaworks.redis.cluster.ClusterRule; -import com.lambdaworks.redis.cluster.RedisClusterClient; -import com.lambdaworks.redis.cluster.api.sync.RedisClusterCommands; - -@SlowTests -@Ignore("Run me manually") -public class BreakClusterClientTest extends BreakClientBase { - public static final String host = TestSettings.hostAddr(); - public static final int port1 = 7379; - public static final int port2 = 7380; - public static final int port3 = 7381; - public static final int port4 = 7382; - - private static RedisClusterClient clusterClient; - private RedisClusterCommands clusterConnection; - - @Rule - public ClusterRule clusterRule = new ClusterRule(clusterClient, port1, port2, port3, port4); - - @BeforeClass - public static void setupClient() { - clusterClient = new RedisClusterClient(RedisURI.Builder.redis(host, port1).withTimeout(TIMEOUT, TimeUnit.SECONDS) - .build()); - } - - @AfterClass - public static void shutdownClient() { - FastShutdown.shutdown(clusterClient); - } - - @Before - public void setUp() throws Exception { - WaitFor.waitOrTimeout(new Condition() { - @Override - public boolean isSatisfied() { - return clusterRule.isStable(); - } - }, timeout(seconds(5)), new 
ThreadSleep(Duration.millis(500))); - - clusterConnection = clusterClient.connectCluster(this.slowCodec); - - } - - @After - public void tearDown() throws Exception { - clusterConnection.close(); - } - - @Test - public void testStandAlone() throws Exception { - testSingle(clusterConnection); - } - - @Test - public void testLooping() throws Exception { - testLoop(clusterConnection); - } - -} diff --git a/src/test/java/com/lambdaworks/redis/masterslave/MasterSlaveSentinelTest.java b/src/test/java/com/lambdaworks/redis/masterslave/MasterSlaveSentinelTest.java deleted file mode 100644 index 523f3adcf8..0000000000 --- a/src/test/java/com/lambdaworks/redis/masterslave/MasterSlaveSentinelTest.java +++ /dev/null @@ -1,131 +0,0 @@ -package com.lambdaworks.redis.masterslave; - -import static com.lambdaworks.redis.TestSettings.port; -import static com.lambdaworks.redis.masterslave.MasterSlaveTest.slaveCall; -import static org.assertj.core.api.Assertions.assertThat; -import static org.junit.Assert.fail; - -import java.net.ConnectException; -import java.util.regex.Matcher; -import java.util.regex.Pattern; - -import org.junit.Before; -import org.junit.Rule; -import org.junit.Test; -import org.springframework.test.util.ReflectionTestUtils; - -import com.lambdaworks.TestClientResources; -import com.lambdaworks.redis.*; -import com.lambdaworks.redis.codec.Utf8StringCodec; -import com.lambdaworks.redis.sentinel.AbstractSentinelTest; -import com.lambdaworks.redis.sentinel.SentinelRule; - -import io.netty.channel.group.ChannelGroup; - -/** - * @author Mark Paluch - */ -public class MasterSlaveSentinelTest extends AbstractSentinelTest { - - static { - sentinelClient = RedisClient.create(TestClientResources.create(), - RedisURI.Builder.sentinel(TestSettings.host(), MASTER_ID).build()); - } - - @Rule - public SentinelRule sentinelRule = new SentinelRule(sentinelClient, false, 26379, 26380); - - private RedisURI sentinelUri = RedisURI.Builder.sentinel(TestSettings.host(), 26379, MASTER_ID).build(); - private Pattern pattern = Pattern.compile("role:(\\w+)"); - - @Before - public void before() throws Exception { - sentinelRule.needMasterWithSlave(MASTER_ID, port(3), port(4)); - } - - @Test - public void testMasterSlaveSentinelBasic() throws Exception { - - RedisURI uri = RedisURI.create( - "redis-sentinel://127.0.0.1:21379,127.0.0.1:22379,127.0.0.1:26379?sentinelMasterId=mymaster&timeout=5s"); - StatefulRedisMasterSlaveConnection connection = MasterSlave.connect(sentinelClient, - new Utf8StringCodec(), uri); - - connection.setReadFrom(ReadFrom.MASTER); - String server = slaveCall(connection); - assertThatServerIs(server, "master"); - - connection.close(); - } - - @Test - public void testMasterSlaveSentinelWithTwoUnavailableSentinels() throws Exception { - - RedisURI uri = RedisURI.create( - "redis-sentinel://127.0.0.1:21379,127.0.0.1:22379,127.0.0.1:26379?sentinelMasterId=mymaster&timeout=5s"); - StatefulRedisMasterSlaveConnection connection = MasterSlave.connect(sentinelClient, - new Utf8StringCodec(), uri); - - connection.setReadFrom(ReadFrom.MASTER); - String server = connection.sync().info("replication"); - assertThatServerIs(server, "master"); - - connection.close(); - } - - @Test - public void testMasterSlaveSentinelWithUnavailableSentinels() throws Exception { - - RedisURI uri = RedisURI - .create("redis-sentinel://127.0.0.1:21379,127.0.0.1:21379?sentinelMasterId=mymaster&timeout=5s"); - - try { - MasterSlave.connect(sentinelClient, new Utf8StringCodec(), uri); - fail("Missing RedisConnectionException"); - } 
catch (RedisConnectionException e) { - assertThat(e.getCause()).hasCauseInstanceOf(ConnectException.class); - } - } - - @Test - public void testMasterSlaveSentinelConnectionCount() throws Exception { - - ChannelGroup channels = (ChannelGroup) ReflectionTestUtils.getField(sentinelClient, "channels"); - int count = channels.size(); - - StatefulRedisMasterSlaveConnection connection = MasterSlave.connect(sentinelClient, - new Utf8StringCodec(), sentinelUri); - - connection.sync().ping(); - connection.setReadFrom(ReadFrom.SLAVE); - slaveCall(connection); - - assertThat(channels.size()).isEqualTo(count + 2 /* connections */ + 1 /* sentinel connections */); - - connection.close(); - } - - @Test - public void testMasterSlaveSentinelClosesSentinelConnections() throws Exception { - - ChannelGroup channels = (ChannelGroup) ReflectionTestUtils.getField(sentinelClient, "channels"); - int count = channels.size(); - - StatefulRedisMasterSlaveConnection connection = MasterSlave.connect(sentinelClient, - new Utf8StringCodec(), sentinelUri); - - connection.sync().ping(); - connection.setReadFrom(ReadFrom.SLAVE); - slaveCall(connection); - connection.close(); - - assertThat(channels.size()).isEqualTo(count); - } - - protected void assertThatServerIs(String server, String expectation) { - Matcher matcher = pattern.matcher(server); - - assertThat(matcher.find()).isTrue(); - assertThat(matcher.group(1)).isEqualTo(expectation); - } -} diff --git a/src/test/java/com/lambdaworks/redis/masterslave/MasterSlaveTest.java b/src/test/java/com/lambdaworks/redis/masterslave/MasterSlaveTest.java deleted file mode 100644 index 5b072f18fc..0000000000 --- a/src/test/java/com/lambdaworks/redis/masterslave/MasterSlaveTest.java +++ /dev/null @@ -1,185 +0,0 @@ -package com.lambdaworks.redis.masterslave; - -import static org.assertj.core.api.Assertions.assertThat; -import static org.junit.Assume.assumeTrue; - -import java.util.Collections; -import java.util.List; -import java.util.concurrent.TimeUnit; -import java.util.regex.Matcher; -import java.util.regex.Pattern; - -import org.junit.After; -import org.junit.Before; -import org.junit.Test; - -import com.lambdaworks.redis.*; -import com.lambdaworks.redis.api.async.RedisAsyncCommands; -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.codec.Utf8StringCodec; -import com.lambdaworks.redis.models.role.RedisInstance; -import com.lambdaworks.redis.models.role.RedisNodeDescription; -import com.lambdaworks.redis.models.role.RoleParser; - -/** - * @author Mark Paluch - */ -public class MasterSlaveTest extends AbstractRedisClientTest { - - private RedisURI masterURI = RedisURI.Builder.redis(host, TestSettings.port(3)).withPassword(passwd).withDatabase(5) - .build(); - private StatefulRedisMasterSlaveConnectionImpl connection; - private RedisAsyncCommands connectionToNode1; - private RedisAsyncCommands connectionToNode2; - - private RedisURI master; - private RedisURI slave; - - @Before - public void before() throws Exception { - - RedisURI node1 = RedisURI.Builder.redis(host, TestSettings.port(3)).withDatabase(2).build(); - RedisURI node2 = RedisURI.Builder.redis(host, TestSettings.port(4)).withDatabase(2).build(); - - connectionToNode1 = client.connect(node1).async(); - connectionToNode2 = client.connect(node2).async(); - - RedisInstance node1Instance = RoleParser.parse(connectionToNode1.role().get(2, TimeUnit.SECONDS)); - RedisInstance node2Instance = RoleParser.parse(connectionToNode2.role().get(2, TimeUnit.SECONDS)); - - if (node1Instance.getRole() == 
RedisInstance.Role.MASTER && node2Instance.getRole() == RedisInstance.Role.SLAVE) { - master = node1; - slave = node2; - } else if (node2Instance.getRole() == RedisInstance.Role.MASTER - && node1Instance.getRole() == RedisInstance.Role.SLAVE) { - master = node2; - slave = node1; - } else { - assumeTrue(String.format("Cannot run the test because I don't have a distinct master and slave but %s and %s", - node1Instance, node2Instance), false); - } - - connectionToNode1.configSet("requirepass", passwd); - connectionToNode1.configSet("masterauth", passwd); - connectionToNode1.auth(passwd); - - connectionToNode2.configSet("requirepass", passwd); - connectionToNode2.configSet("masterauth", passwd); - connectionToNode2.auth(passwd); - - connection = (StatefulRedisMasterSlaveConnectionImpl) MasterSlave.connect(client, new Utf8StringCodec(), masterURI); - connection.setReadFrom(ReadFrom.SLAVE); - } - - @After - public void after() throws Exception { - - if (connectionToNode1 != null) { - connectionToNode1.configSet("requirepass", ""); - connectionToNode1.configSet("masterauth", "").get(1, TimeUnit.SECONDS); - connectionToNode1.close(); - } - - if (connectionToNode2 != null) { - connectionToNode2.configSet("requirepass", ""); - connectionToNode2.configSet("masterauth", "").get(1, TimeUnit.SECONDS); - connectionToNode2.close(); - } - - if (connection != null) { - connection.close(); - } - } - - @Test - public void testMasterSlaveReadFromMaster() throws Exception { - - connection.setReadFrom(ReadFrom.MASTER); - String server = connection.sync().info("server"); - - Pattern pattern = Pattern.compile("tcp_port:(\\d+)"); - Matcher matcher = pattern.matcher(server); - - assertThat(matcher.find()).isTrue(); - assertThat(matcher.group(1)).isEqualTo("" + master.getPort()); - } - - @Test - public void testMasterSlaveReadFromSlave() throws Exception { - - String server = connection.sync().info("server"); - - Pattern pattern = Pattern.compile("tcp_port:(\\d+)"); - Matcher matcher = pattern.matcher(server); - - assertThat(matcher.find()).isTrue(); - assertThat(matcher.group(1)).isEqualTo("" + slave.getPort()); - assertThat(connection.getReadFrom()).isEqualTo(ReadFrom.SLAVE); - } - - @Test - public void testMasterSlaveReadWrite() throws Exception { - - RedisCommands redisCommands = connection.sync(); - redisCommands.set(key, value); - redisCommands.waitForReplication(1, 100); - - assertThat(redisCommands.get(key)).isEqualTo(value); - } - - @Test - public void testConnectToSlave() throws Exception { - - connection.close(); - - RedisURI slaveUri = RedisURI.Builder.redis(host, TestSettings.port(4)).withPassword(passwd).build(); - connection = (StatefulRedisMasterSlaveConnectionImpl) MasterSlave.connect(client, new Utf8StringCodec(), slaveUri); - - RedisCommands sync = connection.sync(); - sync.set(key, value); - } - - @Test(expected = RedisException.class) - public void noSlaveForRead() throws Exception { - - connection.setReadFrom(new ReadFrom() { - @Override - public List select(Nodes nodes) { - return Collections.emptyList(); - } - }); - - slaveCall(connection); - } - - @Test - public void testConnectionCount() throws Exception { - - MasterSlaveConnectionProvider connectionProvider = getConnectionProvider(); - - assertThat(connectionProvider.getConnectionCount()).isEqualTo(1); - slaveCall(connection); - - assertThat(connectionProvider.getConnectionCount()).isEqualTo(2); - } - - @Test - public void testReconfigureTopology() throws Exception { - MasterSlaveConnectionProvider connectionProvider = 
getConnectionProvider(); - - slaveCall(connection); - - connectionProvider.setKnownNodes(Collections.emptyList()); - - assertThat(connectionProvider.getConnectionCount()).isEqualTo(0); - } - - protected static String slaveCall(StatefulRedisMasterSlaveConnection connection) { - return connection.sync().info("replication"); - } - - protected MasterSlaveConnectionProvider getConnectionProvider() { - MasterSlaveChannelWriter writer = connection.getChannelWriter(); - return writer.getMasterSlaveConnectionProvider(); - } -} diff --git a/src/test/java/com/lambdaworks/redis/masterslave/MasterSlaveTopologyProviderTest.java b/src/test/java/com/lambdaworks/redis/masterslave/MasterSlaveTopologyProviderTest.java deleted file mode 100644 index 53fdc8e3b3..0000000000 --- a/src/test/java/com/lambdaworks/redis/masterslave/MasterSlaveTopologyProviderTest.java +++ /dev/null @@ -1,135 +0,0 @@ -package com.lambdaworks.redis.masterslave; - -import static org.assertj.core.api.Assertions.assertThat; -import static org.mockito.Mockito.mock; - -import java.util.List; - -import org.junit.Test; - -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.TestSettings; -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.models.role.RedisInstance; -import com.lambdaworks.redis.models.role.RedisNodeDescription; - -/** - * @author Mark Paluch - */ -public class MasterSlaveTopologyProviderTest { - - private StatefulRedisConnection connectionMock = mock(StatefulRedisConnection.class); - - private MasterSlaveTopologyProvider sut = new MasterSlaveTopologyProvider(connectionMock, - RedisURI.Builder.redis(TestSettings.host(), TestSettings.port()).build()); - - @Test - public void shouldParseMaster() throws Exception { - - String info = "# Replication\r\n" + "role:master\r\n" + "connected_slaves:1\r\n" + "master_repl_offset:56276\r\n" - + "repl_backlog_active:1\r\n"; - - List result = sut.getNodesFromInfo(info); - assertThat(result).hasSize(1); - - RedisNodeDescription redisNodeDescription = result.get(0); - - assertThat(redisNodeDescription.getRole()).isEqualTo(RedisInstance.Role.MASTER); - assertThat(redisNodeDescription.getUri().getHost()).isEqualTo(TestSettings.host()); - assertThat(redisNodeDescription.getUri().getPort()).isEqualTo(TestSettings.port()); - } - - @Test - public void shouldParseMasterAndSlave() throws Exception { - - String info = "# Replication\r\n" + "role:slave\r\n" + "connected_slaves:1\r\n" + "master_host:127.0.0.1\r\n" - + "master_port:1234\r\n" + "master_repl_offset:56276\r\n" + "repl_backlog_active:1\r\n"; - - List result = sut.getNodesFromInfo(info); - assertThat(result).hasSize(2); - - RedisNodeDescription slave = result.get(0); - assertThat(slave.getRole()).isEqualTo(RedisInstance.Role.SLAVE); - - RedisNodeDescription master = result.get(1); - assertThat(master.getRole()).isEqualTo(RedisInstance.Role.MASTER); - assertThat(master.getUri().getHost()).isEqualTo("127.0.0.1"); - } - - @Test - public void shouldParseIPv6MasterAddress() throws Exception { - - String info = "# Replication\r\n" + "role:slave\r\n" + "connected_slaves:1\r\n" + "master_host:::20f8:1400:0:0\r\n" - + "master_port:1234\r\n" + "master_repl_offset:56276\r\n" + "repl_backlog_active:1\r\n"; - - List result = sut.getNodesFromInfo(info); - assertThat(result).hasSize(2); - - - RedisNodeDescription slave = result.get(0); - assertThat(slave.getRole()).isEqualTo(RedisInstance.Role.SLAVE); - - RedisNodeDescription master = result.get(1); - 
assertThat(master.getRole()).isEqualTo(RedisInstance.Role.MASTER); - assertThat(master.getUri().getHost()).isEqualTo("::20f8:1400:0:0"); - } - - @Test(expected = IllegalStateException.class) - public void shouldFailWithoutRole() throws Exception { - - String info = "# Replication\r\n" + "connected_slaves:1\r\n" + "master_repl_offset:56276\r\n" - + "repl_backlog_active:1\r\n"; - - sut.getNodesFromInfo(info); - } - - @Test(expected = IllegalStateException.class) - public void shouldFailWithInvalidRole() throws Exception { - - String info = "# Replication\r\n" + "role:abc\r\n" + "master_repl_offset:56276\r\n" + "repl_backlog_active:1\r\n"; - - sut.getNodesFromInfo(info); - } - - @Test - public void shouldParseSlaves() throws Exception { - - String info = "# Replication\r\n" + "role:master\r\n" - + "slave0:ip=127.0.0.1,port=6483,state=online,offset=56276,lag=0\r\n" - + "slave1:ip=127.0.0.1,port=6484,state=online,offset=56276,lag=0\r\n" + "master_repl_offset:56276\r\n" - + "repl_backlog_active:1\r\n"; - - List result = sut.getNodesFromInfo(info); - assertThat(result).hasSize(3); - - RedisNodeDescription slave1 = result.get(1); - - assertThat(slave1.getRole()).isEqualTo(RedisInstance.Role.SLAVE); - assertThat(slave1.getUri().getHost()).isEqualTo("127.0.0.1"); - assertThat(slave1.getUri().getPort()).isEqualTo(6483); - - RedisNodeDescription slave2 = result.get(2); - - assertThat(slave2.getRole()).isEqualTo(RedisInstance.Role.SLAVE); - assertThat(slave2.getUri().getHost()).isEqualTo("127.0.0.1"); - assertThat(slave2.getUri().getPort()).isEqualTo(6484); - } - - @Test - public void shouldParseIPv6SlaveAddress() throws Exception { - - String info = "# Replication\r\n" + "role:master\r\n" - + "slave0:ip=::20f8:1400:0:0,port=6483,state=online,offset=56276,lag=0\r\n" - + "master_repl_offset:56276\r\n" - + "repl_backlog_active:1\r\n"; - - List result = sut.getNodesFromInfo(info); - assertThat(result).hasSize(2); - - RedisNodeDescription slave1 = result.get(1); - - assertThat(slave1.getRole()).isEqualTo(RedisInstance.Role.SLAVE); - assertThat(slave1.getUri().getHost()).isEqualTo("::20f8:1400:0:0"); - assertThat(slave1.getUri().getPort()).isEqualTo(6483); - } -} diff --git a/src/test/java/com/lambdaworks/redis/masterslave/MasterSlaveUtilsTest.java b/src/test/java/com/lambdaworks/redis/masterslave/MasterSlaveUtilsTest.java deleted file mode 100644 index 70149343eb..0000000000 --- a/src/test/java/com/lambdaworks/redis/masterslave/MasterSlaveUtilsTest.java +++ /dev/null @@ -1,86 +0,0 @@ -package com.lambdaworks.redis.masterslave; - -import static org.assertj.core.api.AssertionsForInterfaceTypes.assertThat; - -import java.util.Arrays; - -import org.junit.Test; - -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.models.role.RedisInstance; - -/** - * @author Mark Paluch - */ -public class MasterSlaveUtilsTest { - - @Test - public void isChangedShouldReturnFalse() throws Exception { - - RedisMasterSlaveNode master = new RedisMasterSlaveNode("host", 1234, RedisURI.create("host", 111), - RedisInstance.Role.MASTER); - RedisMasterSlaveNode slave = new RedisMasterSlaveNode("host", 234, RedisURI.create("host", 234), - RedisInstance.Role.SLAVE); - - RedisMasterSlaveNode newmaster = new RedisMasterSlaveNode("host", 1234, RedisURI.create("host", 555), - RedisInstance.Role.MASTER); - RedisMasterSlaveNode newslave = new RedisMasterSlaveNode("host", 234, RedisURI.create("host", 666), - RedisInstance.Role.SLAVE); - - assertThat(MasterSlaveUtils.isChanged(Arrays.asList(master, slave), 
Arrays.asList(newmaster, newslave))).isFalse(); - assertThat(MasterSlaveUtils.isChanged(Arrays.asList(slave, master), Arrays.asList(newmaster, newslave))).isFalse(); - - assertThat(MasterSlaveUtils.isChanged(Arrays.asList(newmaster, newslave), Arrays.asList(master, slave))).isFalse(); - assertThat(MasterSlaveUtils.isChanged(Arrays.asList(newmaster, newslave), Arrays.asList(slave, master))).isFalse(); - } - - @Test - public void isChangedShouldReturnTrueBecauseSlaveIsGone() throws Exception { - - RedisMasterSlaveNode master = new RedisMasterSlaveNode("host", 1234, RedisURI.create("host", 111), - RedisInstance.Role.MASTER); - RedisMasterSlaveNode slave = new RedisMasterSlaveNode("host", 234, RedisURI.create("host", 234), - RedisInstance.Role.MASTER); - - RedisMasterSlaveNode newmaster = new RedisMasterSlaveNode("host", 1234, RedisURI.create("host", 111), - RedisInstance.Role.MASTER); - - assertThat(MasterSlaveUtils.isChanged(Arrays.asList(master, slave), Arrays.asList(newmaster))).isTrue(); - } - - @Test - public void isChangedShouldReturnTrueBecauseHostWasMigrated() throws Exception { - - RedisMasterSlaveNode master = new RedisMasterSlaveNode("host", 1234, RedisURI.create("host", 111), - RedisInstance.Role.MASTER); - RedisMasterSlaveNode slave = new RedisMasterSlaveNode("host", 234, RedisURI.create("host", 234), - RedisInstance.Role.SLAVE); - - RedisMasterSlaveNode newmaster = new RedisMasterSlaveNode("host", 1234, RedisURI.create("host", 555), - RedisInstance.Role.MASTER); - RedisMasterSlaveNode newslave = new RedisMasterSlaveNode("newhost", 234, RedisURI.create("newhost", 666), - RedisInstance.Role.SLAVE); - - assertThat(MasterSlaveUtils.isChanged(Arrays.asList(master, slave), Arrays.asList(newmaster, newslave))).isTrue(); - assertThat(MasterSlaveUtils.isChanged(Arrays.asList(slave, master), Arrays.asList(newmaster, newslave))).isTrue(); - assertThat(MasterSlaveUtils.isChanged(Arrays.asList(newmaster, newslave), Arrays.asList(master, slave))).isTrue(); - assertThat(MasterSlaveUtils.isChanged(Arrays.asList(newslave, newmaster), Arrays.asList(master, slave))).isTrue(); - } - - @Test - public void isChangedShouldReturnTrueBecauseRolesSwitched() throws Exception { - - RedisMasterSlaveNode master = new RedisMasterSlaveNode("host", 1234, RedisURI.create("host", 111), - RedisInstance.Role.MASTER); - RedisMasterSlaveNode slave = new RedisMasterSlaveNode("host", 234, RedisURI.create("host", 234), - RedisInstance.Role.MASTER); - - RedisMasterSlaveNode newslave = new RedisMasterSlaveNode("host", 1234, RedisURI.create("host", 111), - RedisInstance.Role.SLAVE); - RedisMasterSlaveNode newmaster = new RedisMasterSlaveNode("host", 234, RedisURI.create("host", 234), - RedisInstance.Role.MASTER); - - assertThat(MasterSlaveUtils.isChanged(Arrays.asList(master, slave), Arrays.asList(newmaster, newslave))).isTrue(); - assertThat(MasterSlaveUtils.isChanged(Arrays.asList(master, slave), Arrays.asList(newslave, newmaster))).isTrue(); - } -} \ No newline at end of file diff --git a/src/test/java/com/lambdaworks/redis/masterslave/SentinelTopologyRefreshTest.java b/src/test/java/com/lambdaworks/redis/masterslave/SentinelTopologyRefreshTest.java deleted file mode 100644 index 7ac7ec3a53..0000000000 --- a/src/test/java/com/lambdaworks/redis/masterslave/SentinelTopologyRefreshTest.java +++ /dev/null @@ -1,175 +0,0 @@ -package com.lambdaworks.redis.masterslave; - -import static org.mockito.Matchers.any; -import static org.mockito.Matchers.anyLong; -import static org.mockito.Mockito.never; -import static 
org.mockito.Mockito.times; -import static org.mockito.Mockito.verify; -import static org.mockito.Mockito.verifyNoMoreInteractions; -import static org.mockito.Mockito.when; - -import java.util.Arrays; - -import org.junit.Before; -import org.junit.Test; -import org.junit.runner.RunWith; -import org.mockito.Mock; -import org.mockito.runners.MockitoJUnitRunner; -import org.springframework.test.util.ReflectionTestUtils; - -import com.lambdaworks.redis.RedisClient; -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.codec.Utf8StringCodec; -import com.lambdaworks.redis.pubsub.RedisPubSubAdapter; -import com.lambdaworks.redis.pubsub.StatefulRedisPubSubConnection; -import com.lambdaworks.redis.pubsub.api.async.RedisPubSubAsyncCommands; -import com.lambdaworks.redis.resource.ClientResources; - -import io.netty.util.concurrent.EventExecutorGroup; - -/** - * @author Mark Paluch - */ -@RunWith(MockitoJUnitRunner.class) -public class SentinelTopologyRefreshTest { - - @Mock - private RedisClient redisClient; - - @Mock - private StatefulRedisPubSubConnection connection; - - @Mock - private RedisPubSubAsyncCommands pubSubAsyncCommands; - - @Mock - private ClientResources clientResources; - - @Mock - private EventExecutorGroup eventExecutors; - - @Mock - private Runnable refreshRunnable; - - private SentinelTopologyRefresh sut; - - @Before - public void before() throws Exception { - - sut = new SentinelTopologyRefresh(redisClient, "mymaster", Arrays.asList(RedisURI.create("localhost", 1234))); - - when(redisClient.connectPubSub(any(Utf8StringCodec.class), any())).thenReturn(connection); - when(clientResources.eventExecutorGroup()).thenReturn(eventExecutors); - when(redisClient.getResources()).thenReturn(clientResources); - when(connection.async()).thenReturn(pubSubAsyncCommands); - } - - @Test - public void bind() throws Exception { - - sut.bind(refreshRunnable); - - verify(redisClient).connectPubSub(any(), any()); - verify(pubSubAsyncCommands).psubscribe("*"); - } - - @Test - public void close() throws Exception { - - sut.bind(refreshRunnable); - sut.close(); - - verify(connection).removeListener(any()); - verify(connection).close(); - } - - @Test - public void shouldNotProcessOtherEvents() throws Exception { - - RedisPubSubAdapter adapter = getAdapter(); - - adapter.message("*", "*", "irreleval"); - - verify(eventExecutors).isShuttingDown(); - verifyNoMoreInteractions(eventExecutors); - } - - @Test - public void shouldProcessElectedLeader() throws Exception { - - RedisPubSubAdapter adapter = getAdapter(); - - adapter.message("*", "+elected-leader", "master mymaster 127.0.0.1"); - - verify(eventExecutors).schedule(any(Runnable.class), anyLong(), any()); - } - - @Test - public void shouldProcessSwitchMaster() throws Exception { - - RedisPubSubAdapter adapter = getAdapter(); - - adapter.message("*", "+switch-master", "mymaster 127.0.0.1"); - - verify(eventExecutors).schedule(any(Runnable.class), anyLong(), any()); - } - - @Test - public void shouldProcessFixSlaveConfig() throws Exception { - - RedisPubSubAdapter adapter = getAdapter(); - - adapter.message("*", "fix-slave-config", "@ mymaster 127.0.0.1"); - - verify(eventExecutors).schedule(any(Runnable.class), anyLong(), any()); - } - - @Test - public void shouldProcessFailoverEnd() throws Exception { - - RedisPubSubAdapter adapter = getAdapter(); - - adapter.message("*", "failover-end", ""); - - verify(eventExecutors).schedule(any(Runnable.class), anyLong(), any()); - } - - @Test - public void shouldProcessFailoverTimeout() throws 
Exception { - - RedisPubSubAdapter adapter = getAdapter(); - - adapter.message("*", "failover-end-for-timeout", ""); - - verify(eventExecutors).schedule(any(Runnable.class), anyLong(), any()); - } - - @Test - public void shouldExecuteOnceWithinATimeout() throws Exception { - - RedisPubSubAdapter adapter = getAdapter(); - - adapter.message("*", "failover-end-for-timeout", ""); - adapter.message("*", "failover-end-for-timeout", ""); - - verify(eventExecutors, times(1)).schedule(any(Runnable.class), anyLong(), any()); - } - - @Test - public void shouldNotProcessIfExecutorIsShuttingDown() throws Exception { - - RedisPubSubAdapter adapter = getAdapter(); - when(eventExecutors.isShuttingDown()).thenReturn(true); - - adapter.message("*", "failover-end-for-timeout", ""); - - verify(eventExecutors, never()).schedule(any(Runnable.class), anyLong(), any()); - } - - private RedisPubSubAdapter getAdapter() { - - sut.bind(refreshRunnable); - return (RedisPubSubAdapter) ReflectionTestUtils.getField(sut, - "adapter"); - } -} \ No newline at end of file diff --git a/src/test/java/com/lambdaworks/redis/masterslave/StaticMasterSlaveTest.java b/src/test/java/com/lambdaworks/redis/masterslave/StaticMasterSlaveTest.java deleted file mode 100644 index 2a6c6399ee..0000000000 --- a/src/test/java/com/lambdaworks/redis/masterslave/StaticMasterSlaveTest.java +++ /dev/null @@ -1,199 +0,0 @@ -package com.lambdaworks.redis.masterslave; - -import static org.assertj.core.api.Assertions.assertThat; -import static org.junit.Assume.assumeTrue; - -import java.util.Arrays; -import java.util.Collections; -import java.util.concurrent.TimeUnit; -import java.util.regex.Matcher; -import java.util.regex.Pattern; - -import org.junit.After; -import org.junit.Before; -import org.junit.Test; - -import com.lambdaworks.redis.*; -import com.lambdaworks.redis.api.async.RedisAsyncCommands; -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.codec.Utf8StringCodec; -import com.lambdaworks.redis.models.role.RedisInstance; -import com.lambdaworks.redis.models.role.RoleParser; - -/** - * @author Mark Paluch - */ -public class StaticMasterSlaveTest extends AbstractRedisClientTest { - - private StatefulRedisMasterSlaveConnectionImpl connection; - - private RedisURI master; - private RedisURI slave; - - private RedisAsyncCommands connectionToNode1; - private RedisAsyncCommands connectionToNode2; - - @Before - public void before() throws Exception { - - RedisURI node1 = RedisURI.Builder.redis(host, TestSettings.port(3)).withDatabase(2).build(); - RedisURI node2 = RedisURI.Builder.redis(host, TestSettings.port(4)).withDatabase(2).build(); - - connectionToNode1 = client.connect(node1).async(); - connectionToNode2 = client.connect(node2).async(); - - RedisInstance node1Instance = RoleParser.parse(connectionToNode1.role().get(2, TimeUnit.SECONDS)); - RedisInstance node2Instance = RoleParser.parse(connectionToNode2.role().get(2, TimeUnit.SECONDS)); - - if (node1Instance.getRole() == RedisInstance.Role.MASTER && node2Instance.getRole() == RedisInstance.Role.SLAVE) { - master = node1; - slave = node2; - } else if (node2Instance.getRole() == RedisInstance.Role.MASTER - && node1Instance.getRole() == RedisInstance.Role.SLAVE) { - master = node2; - slave = node1; - } else { - assumeTrue(String.format("Cannot run the test because I don't have a distinct master and slave but %s and %s", - node1Instance, node2Instance), false); - } - - connectionToNode1.configSet("requirepass", passwd); - connectionToNode1.configSet("masterauth", 
passwd); - connectionToNode1.auth(passwd); - - connectionToNode2.configSet("requirepass", passwd); - connectionToNode2.configSet("masterauth", passwd); - connectionToNode2.auth(passwd); - - node1.setPassword(passwd); - node2.setPassword(passwd); - - connection = (StatefulRedisMasterSlaveConnectionImpl) MasterSlave.connect(client, new Utf8StringCodec(), - Arrays.asList(master, slave)); - connection.setReadFrom(ReadFrom.SLAVE); - } - - @After - public void after() throws Exception { - - if (connectionToNode1 != null) { - connectionToNode1.configSet("requirepass", ""); - connectionToNode1.configSet("masterauth", "").get(1, TimeUnit.SECONDS); - connectionToNode1.close(); - } - - if (connectionToNode2 != null) { - connectionToNode2.configSet("requirepass", ""); - connectionToNode2.configSet("masterauth", "").get(1, TimeUnit.SECONDS); - connectionToNode2.close(); - } - - if (connection != null) { - connection.close(); - } - } - - @Test - public void testMasterSlaveStandaloneBasic() throws Exception { - - String server = connection.sync().info("server"); - - Pattern pattern = Pattern.compile("tcp_port:(\\d+)"); - Matcher matcher = pattern.matcher(server); - - assertThat(matcher.find()).isTrue(); - assertThat(matcher.group(1)).isEqualTo("6483"); - assertThat(connection.getReadFrom()).isEqualTo(ReadFrom.SLAVE); - } - - @Test - public void testMasterSlaveReadWrite() throws Exception { - - RedisCommands redisCommands = connection.sync(); - redisCommands.set(key, value); - redisCommands.waitForReplication(1, 100); - - assertThat(redisCommands.get(key)).isEqualTo(value); - } - - @Test(expected = RedisException.class) - public void noSlaveForRead() throws Exception { - - connection.close(); - - connection = (StatefulRedisMasterSlaveConnectionImpl) MasterSlave.connect(client, new Utf8StringCodec(), - Arrays.asList(master)); - connection.setReadFrom(ReadFrom.SLAVE); - - slaveCall(connection); - } - - @Test - public void shouldWorkWithMasterOnly() throws Exception { - - connection.close(); - - connection = (StatefulRedisMasterSlaveConnectionImpl) MasterSlave.connect(client, new Utf8StringCodec(), - Arrays.asList(master)); - - connection.sync().set(key, value); - assertThat(connection.sync().get(key)).isEqualTo("value"); - } - - @Test - public void shouldWorkWithSlaveOnly() throws Exception { - - connection.close(); - - connection = (StatefulRedisMasterSlaveConnectionImpl) MasterSlave.connect(client, new Utf8StringCodec(), - Arrays.asList(slave)); - connection.setReadFrom(ReadFrom.MASTER_PREFERRED); - - assertThat(connection.sync().info()).isNotEmpty(); - } - - @Test(expected = RedisException.class) - public void noMasterForWrite() throws Exception { - - connection.close(); - - connection = (StatefulRedisMasterSlaveConnectionImpl) MasterSlave.connect(client, new Utf8StringCodec(), - Arrays.asList(slave)); - - connection.sync().set(key, value); - } - - @Test - public void testConnectionCount() throws Exception { - - MasterSlaveConnectionProvider connectionProvider = getConnectionProvider(); - - assertThat(connectionProvider.getConnectionCount()).isEqualTo(0); - slaveCall(connection); - - assertThat(connectionProvider.getConnectionCount()).isEqualTo(1); - - connection.sync().set(key, value); - assertThat(connectionProvider.getConnectionCount()).isEqualTo(2); - } - - @Test - public void testReconfigureTopology() throws Exception { - MasterSlaveConnectionProvider connectionProvider = getConnectionProvider(); - - slaveCall(connection); - - connectionProvider.setKnownNodes(Collections.emptyList()); - - 
assertThat(connectionProvider.getConnectionCount()).isEqualTo(0); - } - - protected static String slaveCall(StatefulRedisMasterSlaveConnection connection) { - return connection.sync().info("replication"); - } - - protected MasterSlaveConnectionProvider getConnectionProvider() { - MasterSlaveChannelWriter writer = connection.getChannelWriter(); - return writer.getMasterSlaveConnectionProvider(); - } -} diff --git a/src/test/java/com/lambdaworks/redis/metrics/CommandLatencyIdTest.java b/src/test/java/com/lambdaworks/redis/metrics/CommandLatencyIdTest.java deleted file mode 100644 index e25ef6b65d..0000000000 --- a/src/test/java/com/lambdaworks/redis/metrics/CommandLatencyIdTest.java +++ /dev/null @@ -1,26 +0,0 @@ -package com.lambdaworks.redis.metrics; - -import static org.assertj.core.api.Assertions.assertThat; - -import com.lambdaworks.redis.protocol.CommandKeyword; -import io.netty.channel.local.LocalAddress; -import org.junit.Test; - -/** - * @author Mark Paluch - */ -public class CommandLatencyIdTest { - - private CommandLatencyId sut = CommandLatencyId.create(LocalAddress.ANY, new LocalAddress("me"), CommandKeyword.ADDR); - - @Test - public void testToString() throws Exception { - assertThat(sut.toString()).contains("local:any -> local:me"); - } - - @Test - public void testValues() throws Exception { - assertThat(sut.localAddress()).isEqualTo(LocalAddress.ANY); - assertThat(sut.remoteAddress()).isEqualTo(new LocalAddress("me")); - } -} diff --git a/src/test/java/com/lambdaworks/redis/metrics/DefaultCommandLatencyCollectorTest.java b/src/test/java/com/lambdaworks/redis/metrics/DefaultCommandLatencyCollectorTest.java deleted file mode 100644 index 5b51a802d1..0000000000 --- a/src/test/java/com/lambdaworks/redis/metrics/DefaultCommandLatencyCollectorTest.java +++ /dev/null @@ -1,67 +0,0 @@ -package com.lambdaworks.redis.metrics; - -import static java.util.concurrent.TimeUnit.MICROSECONDS; -import static java.util.concurrent.TimeUnit.MILLISECONDS; -import static org.assertj.core.api.Assertions.assertThat; - -import java.util.Map; - -import org.junit.Test; -import org.junit.runner.RunWith; -import org.mockito.runners.MockitoJUnitRunner; - -import com.lambdaworks.redis.protocol.CommandType; -import io.netty.channel.local.LocalAddress; - -/** - * @author Mark Paluch - */ -@RunWith(MockitoJUnitRunner.class) -public class DefaultCommandLatencyCollectorTest { - - private static CommandLatencyCollectorOptions options = DefaultCommandLatencyCollectorOptions.create(); - - private DefaultCommandLatencyCollector sut = new DefaultCommandLatencyCollector(options); - - @Test - public void shutdown() throws Exception { - sut.shutdown(); - - assertThat(sut.isEnabled()).isFalse(); - } - - @Test - public void verifyMetrics() throws Exception { - - setupData(); - - Map latencies = sut.retrieveMetrics(); - assertThat(latencies).hasSize(1); - - Map.Entry entry = latencies.entrySet().iterator().next(); - - assertThat(entry.getKey().commandType()).isSameAs(CommandType.BGSAVE); - - CommandMetrics metrics = entry.getValue(); - - assertThat(metrics.getCount()).isEqualTo(3); - assertThat(metrics.getCompletion().getMin()).isBetween(990000L, 1100000L); - assertThat(metrics.getCompletion().getPercentiles()).hasSize(5); - - assertThat(metrics.getFirstResponse().getMin()).isBetween(90000L, 110000L); - assertThat(metrics.getFirstResponse().getMax()).isBetween(290000L, 310000L); - assertThat(metrics.getCompletion().getPercentiles()).containsKey(50.0d); - - assertThat(metrics.getTimeUnit()).isEqualTo(MICROSECONDS); - - } - 
- private void setupData() { - sut.recordCommandLatency(LocalAddress.ANY, LocalAddress.ANY, CommandType.BGSAVE, MILLISECONDS.toNanos(100), - MILLISECONDS.toNanos(1000)); - sut.recordCommandLatency(LocalAddress.ANY, LocalAddress.ANY, CommandType.BGSAVE, MILLISECONDS.toNanos(200), - MILLISECONDS.toNanos(1000)); - sut.recordCommandLatency(LocalAddress.ANY, LocalAddress.ANY, CommandType.BGSAVE, MILLISECONDS.toNanos(300), - MILLISECONDS.toNanos(1000)); - } -} diff --git a/src/test/java/com/lambdaworks/redis/metrics/DefaultDefaultCommandLatencyCollectorOptionsTest.java b/src/test/java/com/lambdaworks/redis/metrics/DefaultDefaultCommandLatencyCollectorOptionsTest.java deleted file mode 100644 index 65fe4760c1..0000000000 --- a/src/test/java/com/lambdaworks/redis/metrics/DefaultDefaultCommandLatencyCollectorOptionsTest.java +++ /dev/null @@ -1,40 +0,0 @@ -package com.lambdaworks.redis.metrics; - -import static org.assertj.core.api.Assertions.assertThat; - -import java.util.concurrent.TimeUnit; - -import org.junit.Test; - -/** - * @author Mark Paluch - */ -public class DefaultDefaultCommandLatencyCollectorOptionsTest { - - @Test - public void testDefault() throws Exception { - - DefaultCommandLatencyCollectorOptions sut = DefaultCommandLatencyCollectorOptions.create(); - - assertThat(sut.targetPercentiles()).hasSize(5); - assertThat(sut.targetUnit()).isEqualTo(TimeUnit.MICROSECONDS); - } - - @Test - public void testDisabled() throws Exception { - - DefaultCommandLatencyCollectorOptions sut = DefaultCommandLatencyCollectorOptions.disabled(); - - assertThat(sut.isEnabled()).isEqualTo(false); - } - - @Test - public void testBuilder() throws Exception { - - DefaultCommandLatencyCollectorOptions sut = DefaultCommandLatencyCollectorOptions.builder() - .targetUnit(TimeUnit.HOURS).targetPercentiles(new double[] { 1, 2, 3 }).build(); - - assertThat(sut.targetPercentiles()).hasSize(3); - assertThat(sut.targetUnit()).isEqualTo(TimeUnit.HOURS); - } -} diff --git a/src/test/java/com/lambdaworks/redis/models/command/CommandDetailParserTest.java b/src/test/java/com/lambdaworks/redis/models/command/CommandDetailParserTest.java deleted file mode 100644 index f4a7daf50a..0000000000 --- a/src/test/java/com/lambdaworks/redis/models/command/CommandDetailParserTest.java +++ /dev/null @@ -1,61 +0,0 @@ -package com.lambdaworks.redis.models.command; - -import static org.assertj.core.api.Assertions.assertThat; - -import java.util.ArrayList; -import java.util.HashSet; -import java.util.List; - -import org.junit.Test; - -import com.lambdaworks.redis.internal.LettuceLists; - -public class CommandDetailParserTest { - - @Test - public void testMappings() throws Exception { - assertThat(CommandDetailParser.FLAG_MAPPING).hasSameSizeAs(CommandDetail.Flag.values()); - } - - @Test - public void testEmptyList() throws Exception { - - List result = CommandDetailParser.parse(new ArrayList<>()); - assertThat(result).isEmpty(); - } - - @Test - public void testMalformedList() throws Exception { - Object o = LettuceLists.newList("", "", ""); - List result = CommandDetailParser.parse(LettuceLists.newList(o)); - assertThat(result).isEmpty(); - } - - @Test - public void testParse() throws Exception { - Object o = LettuceLists.newList("get", "1", LettuceLists.newList("fast", "loading"), 1L, 2L, 3L); - List result = CommandDetailParser.parse(LettuceLists.newList(o)); - assertThat(result).hasSize(1); - - CommandDetail commandDetail = result.get(0); - assertThat(commandDetail.getName()).isEqualTo("get"); - 
assertThat(commandDetail.getArity()).isEqualTo(1); - assertThat(commandDetail.getFlags()).hasSize(2); - assertThat(commandDetail.getFirstKeyPosition()).isEqualTo(1); - assertThat(commandDetail.getLastKeyPosition()).isEqualTo(2); - assertThat(commandDetail.getKeyStepCount()).isEqualTo(3); - } - - @Test - public void testModel() throws Exception { - CommandDetail commandDetail = new CommandDetail(); - commandDetail.setArity(1); - commandDetail.setFirstKeyPosition(2); - commandDetail.setLastKeyPosition(3); - commandDetail.setKeyStepCount(4); - commandDetail.setName("theName"); - commandDetail.setFlags(new HashSet<>()); - - assertThat(commandDetail.toString()).contains(CommandDetail.class.getSimpleName()); - } -} diff --git a/src/test/java/com/lambdaworks/redis/models/role/RoleParserTest.java b/src/test/java/com/lambdaworks/redis/models/role/RoleParserTest.java deleted file mode 100644 index 436cef5b57..0000000000 --- a/src/test/java/com/lambdaworks/redis/models/role/RoleParserTest.java +++ /dev/null @@ -1,154 +0,0 @@ -package com.lambdaworks.redis.models.role; - -import static org.assertj.core.api.Assertions.assertThat; - -import java.util.ArrayList; -import java.util.List; - -import org.junit.Test; - -import com.google.common.net.HostAndPort; -import com.lambdaworks.redis.internal.LettuceLists; - -public class RoleParserTest { - public static final long REPLICATION_OFFSET_1 = 3167038L; - public static final long REPLICATION_OFFSET_2 = 3167039L; - public static final String LOCALHOST = "127.0.0.1"; - - @Test - public void testMappings() throws Exception { - assertThat(RoleParser.ROLE_MAPPING).hasSameSizeAs(RedisInstance.Role.values()); - assertThat(RoleParser.SLAVE_STATE_MAPPING).hasSameSizeAs(RedisSlaveInstance.State.values()); - } - - @Test(expected = IllegalArgumentException.class) - public void emptyList() throws Exception { - RoleParser.parse(new ArrayList<>()); - - } - - @Test(expected = IllegalArgumentException.class) - public void invalidFirstElement() throws Exception { - RoleParser.parse(LettuceLists.newList(new Object())); - - } - - @Test(expected = IllegalArgumentException.class) - public void invalidRole() throws Exception { - RoleParser.parse(LettuceLists.newList("blubb")); - - } - - @Test - public void master() throws Exception { - - List> slaves = LettuceLists.newList(LettuceLists.newList(LOCALHOST, "9001", "" + REPLICATION_OFFSET_2), - LettuceLists.newList(LOCALHOST, "9002", "3129543")); - - List input = LettuceLists.newList("master", REPLICATION_OFFSET_1, slaves); - - RedisInstance result = RoleParser.parse(input); - - assertThat(result.getRole()).isEqualTo(RedisInstance.Role.MASTER); - assertThat(result instanceof RedisMasterInstance).isTrue(); - - RedisMasterInstance instance = (RedisMasterInstance) result; - - assertThat(instance.getReplicationOffset()).isEqualTo(REPLICATION_OFFSET_1); - assertThat(instance.getSlaves()).hasSize(2); - - ReplicationPartner slave1 = instance.getSlaves().get(0); - assertThat(slave1.getHost().getHostText()).isEqualTo(LOCALHOST); - assertThat(slave1.getHost().getPort()).isEqualTo(9001); - assertThat(slave1.getReplicationOffset()).isEqualTo(REPLICATION_OFFSET_2); - - assertThat(instance.toString()).startsWith(RedisMasterInstance.class.getSimpleName()); - assertThat(slave1.toString()).startsWith(ReplicationPartner.class.getSimpleName()); - - } - - @Test - public void slave() throws Exception { - - List input = LettuceLists.newList("slave", LOCALHOST, 9000L, "connected", REPLICATION_OFFSET_1); - - RedisInstance result = RoleParser.parse(input); - 
- assertThat(result.getRole()).isEqualTo(RedisInstance.Role.SLAVE); - assertThat(result instanceof RedisSlaveInstance).isTrue(); - - RedisSlaveInstance instance = (RedisSlaveInstance) result; - - assertThat(instance.getMaster().getReplicationOffset()).isEqualTo(REPLICATION_OFFSET_1); - assertThat(instance.getState()).isEqualTo(RedisSlaveInstance.State.CONNECTED); - - assertThat(instance.toString()).startsWith(RedisSlaveInstance.class.getSimpleName()); - - } - - @Test - public void sentinel() throws Exception { - - List input = LettuceLists.newList("sentinel", LettuceLists.newList("resque-master", "html-fragments-master", "stats-master")); - - RedisInstance result = RoleParser.parse(input); - - assertThat(result.getRole()).isEqualTo(RedisInstance.Role.SENTINEL); - assertThat(result instanceof RedisSentinelInstance).isTrue(); - - RedisSentinelInstance instance = (RedisSentinelInstance) result; - - assertThat(instance.getMonitoredMasters()).hasSize(3); - - assertThat(instance.toString()).startsWith(RedisSentinelInstance.class.getSimpleName()); - - } - - @Test - public void sentinelWithoutMasters() throws Exception { - - List input = LettuceLists.newList("sentinel"); - - RedisInstance result = RoleParser.parse(input); - RedisSentinelInstance instance = (RedisSentinelInstance) result; - - assertThat(instance.getMonitoredMasters()).hasSize(0); - - } - - @Test - public void sentinelMastersIsNotAList() throws Exception { - - List input = LettuceLists.newList("sentinel", ""); - - RedisInstance result = RoleParser.parse(input); - RedisSentinelInstance instance = (RedisSentinelInstance) result; - - assertThat(instance.getMonitoredMasters()).hasSize(0); - - } - - @Test - public void testModelTest() throws Exception { - - RedisMasterInstance master = new RedisMasterInstance(); - master.setReplicationOffset(1); - master.setSlaves(new ArrayList<>()); - assertThat(master.toString()).contains(RedisMasterInstance.class.getSimpleName()); - - RedisSlaveInstance slave = new RedisSlaveInstance(); - slave.setMaster(new ReplicationPartner()); - slave.setState(RedisSlaveInstance.State.CONNECT); - assertThat(slave.toString()).contains(RedisSlaveInstance.class.getSimpleName()); - - RedisSentinelInstance sentinel = new RedisSentinelInstance(); - sentinel.setMonitoredMasters(new ArrayList<>()); - assertThat(sentinel.toString()).contains(RedisSentinelInstance.class.getSimpleName()); - - ReplicationPartner partner = new ReplicationPartner(); - partner.setHost(HostAndPort.fromHost("localhost")); - partner.setReplicationOffset(12); - - assertThat(partner.toString()).contains(ReplicationPartner.class.getSimpleName()); - } -} diff --git a/src/test/java/com/lambdaworks/redis/output/BooleanListOutputTest.java b/src/test/java/com/lambdaworks/redis/output/BooleanListOutputTest.java deleted file mode 100644 index 9be0a51607..0000000000 --- a/src/test/java/com/lambdaworks/redis/output/BooleanListOutputTest.java +++ /dev/null @@ -1,37 +0,0 @@ -package com.lambdaworks.redis.output; - -import static org.assertj.core.api.Assertions.*; - -import java.nio.ByteBuffer; - -import org.junit.Test; - -import com.lambdaworks.redis.codec.Utf8StringCodec; - -/** - * @author Mark Paluch - */ -public class BooleanListOutputTest { - - private BooleanListOutput sut = new BooleanListOutput<>(new Utf8StringCodec()); - - @Test - public void defaultSubscriberIsSet() throws Exception { - assertThat(sut.getSubscriber()).isNotNull().isInstanceOf(ListSubscriber.class); - } - - @Test - public void commandOutputCorrectlyDecoded() throws Exception { - - 
sut.set(1L); - sut.set(0L); - sut.set(2L); - - assertThat(sut.get()).contains(true, false, false); - } - - @Test(expected = IllegalStateException.class) - public void setByteNotImplemented() throws Exception { - sut.set(ByteBuffer.wrap("4.567".getBytes())); - } -} \ No newline at end of file diff --git a/src/test/java/com/lambdaworks/redis/output/GeoCoordinatesListOutputTest.java b/src/test/java/com/lambdaworks/redis/output/GeoCoordinatesListOutputTest.java deleted file mode 100644 index 4b250a1f2a..0000000000 --- a/src/test/java/com/lambdaworks/redis/output/GeoCoordinatesListOutputTest.java +++ /dev/null @@ -1,38 +0,0 @@ -package com.lambdaworks.redis.output; - -import static org.assertj.core.api.Assertions.assertThat; - -import java.nio.ByteBuffer; - -import org.junit.Test; - -import com.lambdaworks.redis.GeoCoordinates; -import com.lambdaworks.redis.codec.Utf8StringCodec; - -/** - * @author Mark Paluch - */ -public class GeoCoordinatesListOutputTest { - - private GeoCoordinatesListOutput sut = new GeoCoordinatesListOutput<>(new Utf8StringCodec()); - - @Test - public void defaultSubscriberIsSet() throws Exception { - assertThat(sut.getSubscriber()).isNotNull().isInstanceOf(ListSubscriber.class); - } - - @Test(expected = IllegalStateException.class) - public void setIntegerShouldFail() throws Exception { - sut.set(123L); - } - - @Test - public void commandOutputCorrectlyDecoded() throws Exception { - - sut.set(ByteBuffer.wrap("1.234".getBytes())); - sut.set(ByteBuffer.wrap("4.567".getBytes())); - sut.multi(-1); - - assertThat(sut.get()).contains(new GeoCoordinates(1.234, 4.567)); - } -} \ No newline at end of file diff --git a/src/test/java/com/lambdaworks/redis/output/GeoWithinListOutputTest.java b/src/test/java/com/lambdaworks/redis/output/GeoWithinListOutputTest.java deleted file mode 100644 index 59703a7cf6..0000000000 --- a/src/test/java/com/lambdaworks/redis/output/GeoWithinListOutputTest.java +++ /dev/null @@ -1,84 +0,0 @@ -package com.lambdaworks.redis.output; - -import static org.assertj.core.api.Assertions.*; - -import java.nio.ByteBuffer; - -import org.junit.Test; - -import com.lambdaworks.redis.GeoCoordinates; -import com.lambdaworks.redis.GeoWithin; -import com.lambdaworks.redis.codec.RedisCodec; -import com.lambdaworks.redis.codec.Utf8StringCodec; - -/** - * @author Mark Paluch - */ -public class GeoWithinListOutputTest { - - private GeoWithinListOutput sut = new GeoWithinListOutput<>(new Utf8StringCodec(), false, false, false); - - @Test - public void defaultSubscriberIsSet() throws Exception { - assertThat(sut.getSubscriber()).isNotNull().isInstanceOf(ListSubscriber.class); - } - - @Test - public void commandOutputKeyOnlyDecoded() throws Exception { - - sut.set(ByteBuffer.wrap("key".getBytes())); - sut.set(ByteBuffer.wrap("4.567".getBytes())); - sut.complete(1); - - assertThat(sut.get()).contains(new GeoWithin<>("key", null, null, null)); - } - - @Test - public void commandOutputKeyAndDistanceDecoded() throws Exception { - - sut = new GeoWithinListOutput<>(new Utf8StringCodec(), true, false, false); - - sut.set(ByteBuffer.wrap("key".getBytes())); - sut.set(ByteBuffer.wrap("4.567".getBytes())); - sut.complete(1); - - assertThat(sut.get()).contains(new GeoWithin<>("key", 4.567, null, null)); - } - - @Test - public void commandOutputKeyAndHashDecoded() throws Exception { - - sut = new GeoWithinListOutput<>(new Utf8StringCodec(), false, true, false); - - sut.set(ByteBuffer.wrap("key".getBytes())); - sut.set(4567); - sut.complete(1); - - assertThat(sut.get()).contains(new 
GeoWithin<>("key", null, 4567L, null)); - } - - @Test - public void commandOutputLongKeyAndHashDecoded() throws Exception { - - GeoWithinListOutput sut = new GeoWithinListOutput<>((RedisCodec) new Utf8StringCodec(), false, true, false); - - sut.set(1234); - sut.set(4567); - sut.complete(1); - - assertThat(sut.get()).contains(new GeoWithin<>(1234L, null, 4567L, null)); - } - - @Test - public void commandOutputKeyAndCoordinatesDecoded() throws Exception { - - sut = new GeoWithinListOutput<>(new Utf8StringCodec(), false, false, true); - - sut.set(ByteBuffer.wrap("key".getBytes())); - sut.set(ByteBuffer.wrap("1.234".getBytes())); - sut.set(ByteBuffer.wrap("4.567".getBytes())); - sut.complete(1); - - assertThat(sut.get()).contains(new GeoWithin<>("key", null, null, new GeoCoordinates(1.234, 4.567))); - } -} \ No newline at end of file diff --git a/src/test/java/com/lambdaworks/redis/output/ListOutputTest.java b/src/test/java/com/lambdaworks/redis/output/ListOutputTest.java deleted file mode 100644 index 53893bfd08..0000000000 --- a/src/test/java/com/lambdaworks/redis/output/ListOutputTest.java +++ /dev/null @@ -1,76 +0,0 @@ -package com.lambdaworks.redis.output; - -import static org.assertj.core.api.Assertions.*; - -import java.nio.ByteBuffer; -import java.util.Arrays; -import java.util.Collection; -import java.util.List; - -import org.junit.Test; -import org.junit.runner.RunWith; -import org.junit.runners.Parameterized; -import org.junit.runners.Parameterized.Parameter; -import org.junit.runners.Parameterized.Parameters; - -import com.lambdaworks.redis.codec.Utf8StringCodec; - -/** - * @author Mark Paluch - */ -@RunWith(Parameterized.class) -public class ListOutputTest { - - @Parameter(0) - public CommandOutput> commandOutput; - - @Parameter(1) - public StreamingOutput streamingOutput; - - @Parameter(2) - public byte[] valueBytes; - - @Parameter(3) - public Object value; - - @Parameters - public static Collection parameters() { - - Utf8StringCodec codec = new Utf8StringCodec(); - - KeyListOutput keyListOutput = new KeyListOutput<>(codec); - Object[] keyList = new Object[] { keyListOutput, keyListOutput, "hello world".getBytes(), "hello world" }; - - ValueListOutput valueListOutput = new ValueListOutput<>(codec); - Object[] valueList = new Object[] { valueListOutput, valueListOutput, "hello world".getBytes(), "hello world" }; - - StringListOutput stringListOutput = new StringListOutput<>(codec); - Object[] stringList = new Object[] { stringListOutput, stringListOutput, "hello world".getBytes(), "hello world" }; - - return Arrays.asList(keyList, valueList, stringList); - - } - - @Test(expected = IllegalArgumentException.class) - public void settingEmptySubscriberShouldFail() throws Exception { - streamingOutput.setSubscriber(null); - } - - @Test - public void defaultSubscriberIsSet() throws Exception { - assertThat(streamingOutput.getSubscriber()).isNotNull().isInstanceOf(ListSubscriber.class); - } - - @Test(expected = IllegalStateException.class) - public void setIntegerShouldFail() throws Exception { - commandOutput.set(123L); - } - - @Test - public void setValueShouldConvert() throws Exception { - commandOutput.set(ByteBuffer.wrap(valueBytes)); - - assertThat(commandOutput.get()).contains(value); - } - -} \ No newline at end of file diff --git a/src/test/java/com/lambdaworks/redis/output/NestedMultiOutputTest.java b/src/test/java/com/lambdaworks/redis/output/NestedMultiOutputTest.java deleted file mode 100644 index df9b626173..0000000000 --- 
a/src/test/java/com/lambdaworks/redis/output/NestedMultiOutputTest.java +++ /dev/null @@ -1,26 +0,0 @@ -package com.lambdaworks.redis.output; - -import static com.lambdaworks.redis.protocol.LettuceCharsets.buffer; -import static org.assertj.core.api.Assertions.assertThat; - -import org.junit.Test; - -import com.lambdaworks.redis.codec.RedisCodec; -import com.lambdaworks.redis.codec.Utf8StringCodec; - -/** - * @author Mark Paluch - */ -public class NestedMultiOutputTest { - - private RedisCodec codec = new Utf8StringCodec(); - - @Test - public void nestedMultiError() throws Exception { - - NestedMultiOutput output = new NestedMultiOutput(codec); - output.setError(buffer("Oops!")); - assertThat(output.getError()).isNotNull(); - } - -} \ No newline at end of file diff --git a/src/test/java/com/lambdaworks/redis/output/ScoredValueListOutputTest.java b/src/test/java/com/lambdaworks/redis/output/ScoredValueListOutputTest.java deleted file mode 100644 index 44050ecbed..0000000000 --- a/src/test/java/com/lambdaworks/redis/output/ScoredValueListOutputTest.java +++ /dev/null @@ -1,38 +0,0 @@ -package com.lambdaworks.redis.output; - -import static org.assertj.core.api.Assertions.*; - -import java.nio.ByteBuffer; - -import org.junit.Test; - -import com.lambdaworks.redis.ScoredValue; -import com.lambdaworks.redis.codec.Utf8StringCodec; - -/** - * @author Mark Paluch - */ -public class ScoredValueListOutputTest { - - private ScoredValueListOutput sut = new ScoredValueListOutput<>(new Utf8StringCodec()); - - @Test - public void defaultSubscriberIsSet() throws Exception { - assertThat(sut.getSubscriber()).isNotNull().isInstanceOf(ListSubscriber.class); - } - - @Test(expected = IllegalStateException.class) - public void setIntegerShouldFail() throws Exception { - sut.set(123L); - } - - @Test - public void commandOutputCorrectlyDecoded() throws Exception { - - sut.set(ByteBuffer.wrap("key".getBytes())); - sut.set(ByteBuffer.wrap("4.567".getBytes())); - sut.multi(-1); - - assertThat(sut.get()).contains(new ScoredValue<>(4.567, "key")); - } -} \ No newline at end of file diff --git a/src/test/java/com/lambdaworks/redis/protocol/AsyncCommandInternalsTest.java b/src/test/java/com/lambdaworks/redis/protocol/AsyncCommandInternalsTest.java deleted file mode 100644 index 0e26b7d365..0000000000 --- a/src/test/java/com/lambdaworks/redis/protocol/AsyncCommandInternalsTest.java +++ /dev/null @@ -1,199 +0,0 @@ -package com.lambdaworks.redis.protocol; - -import static com.lambdaworks.redis.protocol.LettuceCharsets.buffer; -import static org.assertj.core.api.Assertions.assertThat; - -import java.util.concurrent.CancellationException; -import java.util.concurrent.ExecutionException; -import java.util.concurrent.TimeUnit; -import java.util.concurrent.TimeoutException; - -import org.junit.Before; -import org.junit.Test; - -import com.lambdaworks.redis.*; -import com.lambdaworks.redis.codec.RedisCodec; -import com.lambdaworks.redis.codec.Utf8StringCodec; -import com.lambdaworks.redis.output.CommandOutput; -import com.lambdaworks.redis.output.StatusOutput; - -public class AsyncCommandInternalsTest { - protected RedisCodec codec = new Utf8StringCodec(); - protected Command internal; - protected AsyncCommand sut; - - @Before - public final void createCommand() throws Exception { - CommandOutput output = new StatusOutput(codec); - internal = new Command(CommandType.INFO, output, null); - sut = new AsyncCommand<>(internal); - } - - @Test - public void isCancelled() throws Exception { - assertThat(sut.isCancelled()).isFalse(); - 
assertThat(sut.cancel(true)).isTrue(); - assertThat(sut.isCancelled()).isTrue(); - assertThat(sut.cancel(true)).isTrue(); - } - - @Test - public void isDone() throws Exception { - assertThat(sut.isDone()).isFalse(); - sut.complete(); - assertThat(sut.isDone()).isTrue(); - } - - @Test - public void awaitAllCompleted() throws Exception { - sut.complete(); - assertThat(LettuceFutures.awaitAll(5, TimeUnit.MILLISECONDS, sut)).isTrue(); - } - - @Test - public void awaitAll() throws Exception { - assertThat(LettuceFutures.awaitAll(-1, TimeUnit.NANOSECONDS, sut)).isFalse(); - } - - @Test(expected = RedisCommandTimeoutException.class) - public void awaitNotCompleted() throws Exception { - LettuceFutures.awaitOrCancel(sut, 0, TimeUnit.NANOSECONDS); - } - - @Test(expected = RedisException.class) - public void awaitWithExecutionException() throws Exception { - sut.completeExceptionally(new RedisException("error")); - LettuceFutures.awaitOrCancel(sut, 1, TimeUnit.SECONDS); - } - - @Test(expected = CancellationException.class) - public void awaitWithCancelledCommand() throws Exception { - sut.cancel(); - LettuceFutures.awaitOrCancel(sut, 5, TimeUnit.SECONDS); - } - - @Test(expected = RedisException.class) - public void awaitAllWithExecutionException() throws Exception { - sut.completeExceptionally(new RedisCommandExecutionException("error")); - - assertThat(LettuceFutures.awaitAll(0, TimeUnit.SECONDS, sut)); - } - - @Test - public void getError() throws Exception { - sut.getOutput().setError("error"); - assertThat(internal.getError()).isEqualTo("error"); - } - - @Test(expected = ExecutionException.class) - public void getErrorAsync() throws Exception { - sut.getOutput().setError("error"); - sut.complete(); - sut.get(); - } - - @Test(expected = ExecutionException.class) - public void completeExceptionally() throws Exception { - sut.completeExceptionally(new RuntimeException("test")); - assertThat(internal.getError()).isEqualTo("test"); - - sut.get(); - } - - @Test - public void asyncGet() throws Exception { - sut.getOutput().set(buffer("one")); - sut.complete(); - assertThat(sut.get()).isEqualTo("one"); - sut.getOutput().toString(); - } - - @Test - public void customKeyword() throws Exception { - sut = new AsyncCommand<>( - new Command(MyKeywords.DUMMY, new StatusOutput(codec), null)); - - assertThat(sut.toString()).contains(MyKeywords.DUMMY.name()); - } - - @Test - public void customKeywordWithArgs() throws Exception { - sut = new AsyncCommand<>( - new Command(MyKeywords.DUMMY, null, new CommandArgs(codec))); - sut.getArgs().add(MyKeywords.DUMMY); - assertThat(sut.getArgs().toString()).contains(MyKeywords.DUMMY.name()); - } - - @Test - public void getWithTimeout() throws Exception { - sut.getOutput().set(buffer("one")); - sut.complete(); - - assertThat(sut.get(0, TimeUnit.MILLISECONDS)).isEqualTo("one"); - } - - @Test(expected = TimeoutException.class, timeout = 100) - public void getTimeout() throws Exception { - assertThat(sut.get(2, TimeUnit.MILLISECONDS)).isNull(); - } - - @Test(timeout = 100) - public void awaitTimeout() throws Exception { - assertThat(sut.await(2, TimeUnit.MILLISECONDS)).isFalse(); - } - - @Test(expected = InterruptedException.class, timeout = 100) - public void getInterrupted() throws Exception { - Thread.currentThread().interrupt(); - sut.get(); - } - - @Test(expected = InterruptedException.class, timeout = 100) - public void getInterrupted2() throws Exception { - Thread.currentThread().interrupt(); - sut.get(5, TimeUnit.MILLISECONDS); - } - - @Test(expected = 
RedisCommandInterruptedException.class, timeout = 100) - public void awaitInterrupted2() throws Exception { - Thread.currentThread().interrupt(); - sut.await(5, TimeUnit.MILLISECONDS); - } - - @Test(expected = IllegalStateException.class) - public void outputSubclassOverride1() { - CommandOutput output = new CommandOutput(codec, null) { - @Override - public String get() throws RedisException { - return null; - } - }; - output.set(null); - } - - @Test(expected = IllegalStateException.class) - public void outputSubclassOverride2() { - CommandOutput output = new CommandOutput(codec, null) { - @Override - public String get() throws RedisException { - return null; - } - }; - output.set(0); - } - - @Test - public void sillyTestsForEmmaCoverage() throws Exception { - assertThat(CommandType.valueOf("APPEND")).isEqualTo(CommandType.APPEND); - assertThat(CommandKeyword.valueOf("AFTER")).isEqualTo(CommandKeyword.AFTER); - } - - private enum MyKeywords implements ProtocolKeyword { - DUMMY; - - @Override - public byte[] getBytes() { - return name().getBytes(); - } - } -} diff --git a/src/test/java/com/lambdaworks/redis/protocol/CommandArgsTest.java b/src/test/java/com/lambdaworks/redis/protocol/CommandArgsTest.java deleted file mode 100644 index 3b7cfa1e4f..0000000000 --- a/src/test/java/com/lambdaworks/redis/protocol/CommandArgsTest.java +++ /dev/null @@ -1,188 +0,0 @@ -package com.lambdaworks.redis.protocol; - -import static org.assertj.core.api.Assertions.assertThat; - -import java.nio.ByteBuffer; -import java.util.Arrays; - -import com.lambdaworks.redis.codec.ByteArrayCodec; -import org.junit.Test; - -import com.lambdaworks.redis.codec.Utf8StringCodec; - -import io.netty.buffer.ByteBuf; -import io.netty.buffer.Unpooled; - -/** - * @author Mark Paluch - */ -public class CommandArgsTest { - - private Utf8StringCodec codec = new Utf8StringCodec(); - - @Test - public void getFirstIntegerShouldReturnNull() throws Exception { - - CommandArgs args = new CommandArgs<>(codec); - - assertThat(args.getFirstInteger()).isNull(); - } - - @Test - public void getFirstIntegerShouldReturnFirstInteger() throws Exception { - - CommandArgs args = new CommandArgs<>(codec).add(1L).add(127).add(128).add(129).add(0).add(-1); - - assertThat(args.getFirstInteger()).isEqualTo(1L); - } - - @Test - public void getFirstStringShouldReturnNull() throws Exception { - - CommandArgs args = new CommandArgs<>(codec); - - assertThat(args.getFirstString()).isNull(); - } - - @Test - public void getFirstStringShouldReturnFirstString() throws Exception { - - CommandArgs args = new CommandArgs<>(codec).add("one").add("two"); - - assertThat(args.getFirstString()).isEqualTo("one"); - } - - @Test - public void getFirstEncodedKeyShouldReturnNull() throws Exception { - - CommandArgs args = new CommandArgs<>(codec); - - assertThat(args.getFirstString()).isNull(); - } - - @Test - public void getFirstEncodedKeyShouldReturnFirstKey() throws Exception { - - CommandArgs args = new CommandArgs<>(codec).addKey("one").addKey("two"); - - assertThat(args.getFirstEncodedKey()).isEqualTo(ByteBuffer.wrap("one".getBytes())); - } - - @Test - public void addValues() throws Exception { - - CommandArgs args = new CommandArgs<>(codec).addValues(Arrays.asList("1", "2")); - - ByteBuf buffer = Unpooled.buffer(); - args.encode(buffer); - - ByteBuf expected = Unpooled.buffer(); - expected.writeBytes(("$1\r\n" + "1\r\n" + "$1\r\n" + "2\r\n").getBytes()); - - assertThat(buffer.toString(LettuceCharsets.ASCII)).isEqualTo(expected.toString(LettuceCharsets.ASCII)); - } - - 
@Test - public void addByte() throws Exception { - - CommandArgs args = new CommandArgs<>(codec).add("one".getBytes()); - - ByteBuf buffer = Unpooled.buffer(); - args.encode(buffer); - - ByteBuf expected = Unpooled.buffer(); - expected.writeBytes(("$3\r\n" + "one\r\n").getBytes()); - - assertThat(buffer.toString(LettuceCharsets.ASCII)).isEqualTo(expected.toString(LettuceCharsets.ASCII)); - } - - @Test - public void addByteUsingDirectByteCodec() throws Exception { - - CommandArgs args = new CommandArgs<>(CommandArgs.ExperimentalByteArrayCodec.INSTANCE) - .add("one".getBytes()); - - ByteBuf buffer = Unpooled.buffer(); - args.encode(buffer); - - ByteBuf expected = Unpooled.buffer(); - expected.writeBytes(("$3\r\n" + "one\r\n").getBytes()); - - assertThat(buffer.toString(LettuceCharsets.ASCII)).isEqualTo(expected.toString(LettuceCharsets.ASCII)); - } - - @Test - public void addValueUsingDirectByteCodec() throws Exception { - - CommandArgs args = new CommandArgs<>(CommandArgs.ExperimentalByteArrayCodec.INSTANCE) - .addValue("one".getBytes()); - - ByteBuf buffer = Unpooled.buffer(); - args.encode(buffer); - - ByteBuf expected = Unpooled.buffer(); - expected.writeBytes(("$3\r\n" + "one\r\n").getBytes()); - - assertThat(buffer.toString(LettuceCharsets.ASCII)).isEqualTo(expected.toString(LettuceCharsets.ASCII)); - } - - @Test - public void addKeyUsingDirectByteCodec() throws Exception { - - CommandArgs args = new CommandArgs<>(CommandArgs.ExperimentalByteArrayCodec.INSTANCE) - .addValue("one".getBytes()); - - ByteBuf buffer = Unpooled.buffer(); - args.encode(buffer); - - ByteBuf expected = Unpooled.buffer(); - expected.writeBytes(("$3\r\n" + "one\r\n").getBytes()); - - assertThat(buffer.toString(LettuceCharsets.ASCII)).isEqualTo(expected.toString(LettuceCharsets.ASCII)); - } - - @Test - public void addByteUsingByteCodec() throws Exception { - - CommandArgs args = new CommandArgs<>(ByteArrayCodec.INSTANCE) - .add("one".getBytes()); - - ByteBuf buffer = Unpooled.buffer(); - args.encode(buffer); - - ByteBuf expected = Unpooled.buffer(); - expected.writeBytes(("$3\r\n" + "one\r\n").getBytes()); - - assertThat(buffer.toString(LettuceCharsets.ASCII)).isEqualTo(expected.toString(LettuceCharsets.ASCII)); - } - - @Test - public void addValueUsingByteCodec() throws Exception { - - CommandArgs args = new CommandArgs<>(ByteArrayCodec.INSTANCE) - .addValue("one".getBytes()); - - ByteBuf buffer = Unpooled.buffer(); - args.encode(buffer); - - ByteBuf expected = Unpooled.buffer(); - expected.writeBytes(("$3\r\n" + "one\r\n").getBytes()); - - assertThat(buffer.toString(LettuceCharsets.ASCII)).isEqualTo(expected.toString(LettuceCharsets.ASCII)); - } - - @Test - public void addKeyUsingByteCodec() throws Exception { - - CommandArgs args = new CommandArgs<>(ByteArrayCodec.INSTANCE) - .addValue("one".getBytes()); - - ByteBuf buffer = Unpooled.buffer(); - args.encode(buffer); - - ByteBuf expected = Unpooled.buffer(); - expected.writeBytes(("$3\r\n" + "one\r\n").getBytes()); - - assertThat(buffer.toString(LettuceCharsets.ASCII)).isEqualTo(expected.toString(LettuceCharsets.ASCII)); - } -} \ No newline at end of file diff --git a/src/test/java/com/lambdaworks/redis/protocol/CommandHandlerTest.java b/src/test/java/com/lambdaworks/redis/protocol/CommandHandlerTest.java deleted file mode 100644 index 9034d3b5e9..0000000000 --- a/src/test/java/com/lambdaworks/redis/protocol/CommandHandlerTest.java +++ /dev/null @@ -1,774 +0,0 @@ -package com.lambdaworks.redis.protocol; - -import static 
org.assertj.core.api.Assertions.assertThat; -import static org.assertj.core.api.Fail.fail; -import static org.mockito.Matchers.any; -import static org.mockito.Mockito.*; - -import java.io.IOException; -import java.util.*; -import java.util.concurrent.atomic.AtomicLong; - -import com.lambdaworks.redis.metrics.DefaultCommandLatencyCollector; -import com.lambdaworks.redis.metrics.DefaultCommandLatencyCollectorOptions; -import org.apache.logging.log4j.Level; -import org.apache.logging.log4j.LogManager; -import org.apache.logging.log4j.core.LoggerContext; -import org.apache.logging.log4j.core.config.Configuration; -import org.apache.logging.log4j.core.config.LoggerConfig; -import org.junit.AfterClass; -import org.junit.Before; -import org.junit.BeforeClass; -import org.junit.Test; -import org.junit.runner.RunWith; -import org.mockito.ArgumentCaptor; -import org.mockito.Mock; -import org.mockito.invocation.InvocationOnMock; -import org.mockito.runners.MockitoJUnitRunner; -import org.mockito.stubbing.Answer; -import org.springframework.test.util.ReflectionTestUtils; - -import com.lambdaworks.redis.ClientOptions; -import com.lambdaworks.redis.ConnectionEvents; -import com.lambdaworks.redis.RedisChannelHandler; -import com.lambdaworks.redis.RedisException; -import com.lambdaworks.redis.codec.Utf8StringCodec; -import com.lambdaworks.redis.output.StatusOutput; -import com.lambdaworks.redis.resource.ClientResources; - -import edu.umd.cs.mtc.MultithreadedTestCase; -import edu.umd.cs.mtc.TestFramework; -import io.netty.buffer.ByteBuf; -import io.netty.buffer.ByteBufAllocator; -import io.netty.channel.*; -import io.netty.util.concurrent.ImmediateEventExecutor; - -@RunWith(MockitoJUnitRunner.class) -public class CommandHandlerTest { - - private Queue> q = new ArrayDeque<>(10); - - private CommandHandler sut; - - private final Command command = new Command<>(CommandType.APPEND, - new StatusOutput(new Utf8StringCodec()), null); - - @Mock - private ChannelHandlerContext context; - - @Mock - private Channel channel; - - @Mock - private ByteBufAllocator byteBufAllocator; - - @Mock - private ChannelPipeline pipeline; - - @Mock - private EventLoop eventLoop; - - @Mock - private ClientResources clientResources; - - @Mock - private RedisChannelHandler channelHandler; - - @BeforeClass - public static void beforeClass() { - LoggerContext ctx = (LoggerContext) LogManager.getContext(); - Configuration config = ctx.getConfiguration(); - LoggerConfig loggerConfig = config.getLoggerConfig(CommandHandler.class.getName()); - loggerConfig.setLevel(Level.ALL); - } - - @AfterClass - public static void afterClass() { - LoggerContext ctx = (LoggerContext) LogManager.getContext(); - Configuration config = ctx.getConfiguration(); - LoggerConfig loggerConfig = config.getLoggerConfig(CommandHandler.class.getName()); - loggerConfig.setLevel(null); - } - - @Before - public void before() throws Exception { - when(context.channel()).thenReturn(channel); - when(context.alloc()).thenReturn(byteBufAllocator); - when(channel.pipeline()).thenReturn(pipeline); - when(channel.eventLoop()).thenReturn(eventLoop); - when(eventLoop.submit(any(Runnable.class))).thenAnswer(invocation -> { - Runnable r = (Runnable) invocation.getArguments()[0]; - r.run(); - return null; - }); - - when(clientResources.commandLatencyCollector()).thenReturn(new DefaultCommandLatencyCollector( - DefaultCommandLatencyCollectorOptions.create())); - - when(channel.write(any())).thenAnswer(invocation -> { - - if (invocation.getArguments()[0] instanceof RedisCommand) { - 
q.add((RedisCommand) invocation.getArguments()[0]); - } - - if (invocation.getArguments()[0] instanceof Collection) { - q.addAll((Collection) invocation.getArguments()[0]); - } - - return new DefaultChannelPromise(channel); - }); - - when(channel.writeAndFlush(any())).thenAnswer(invocation -> { - if (invocation.getArguments()[0] instanceof RedisCommand) { - q.add((RedisCommand) invocation.getArguments()[0]); - } - - if (invocation.getArguments()[0] instanceof Collection) { - q.addAll((Collection) invocation.getArguments()[0]); - } - return new DefaultChannelPromise(channel); - }); - - sut = new CommandHandler(ClientOptions.create(), clientResources, q); - sut.setRedisChannelHandler(channelHandler); - } - - @Test - public void testChannelActive() throws Exception { - sut.channelRegistered(context); - - sut.channelActive(context); - - verify(pipeline).fireUserEventTriggered(any(ConnectionEvents.Activated.class)); - - } - - @Test - public void testChannelActiveFailureShouldCancelCommands() throws Exception { - - ClientOptions clientOptions = ClientOptions.builder().cancelCommandsOnReconnectFailure(true).build(); - - sut = new CommandHandler(clientOptions, clientResources, q); - sut.setRedisChannelHandler(channelHandler); - - sut.channelRegistered(context); - sut.write(command); - - reset(context); - when(context.channel()).thenThrow(new RuntimeException()); - try { - sut.channelActive(context); - fail("Missing RuntimeException"); - } catch (RuntimeException e) { - } - - assertThat(command.isCancelled()).isTrue(); - } - - @Test - public void testChannelActiveWithBufferedAndQueuedCommands() throws Exception { - - Command bufferedCommand = new Command<>(CommandType.GET, - new StatusOutput(new Utf8StringCodec()), null); - - Command pingCommand = new Command<>(CommandType.PING, - new StatusOutput(new Utf8StringCodec()), null); - q.add(bufferedCommand); - - AtomicLong atomicLong = (AtomicLong) ReflectionTestUtils.getField(sut, "writers"); - doAnswer(new Answer() { - @Override - public Object answer(InvocationOnMock invocation) throws Throwable { - - assertThat(atomicLong.get()).isEqualTo(-1); - assertThat(ReflectionTestUtils.getField(sut, "exclusiveLockOwner")).isNotNull(); - - sut.write(pingCommand); - - return null; - } - }).when(channelHandler).activated(); - when(channel.isActive()).thenReturn(true); - - sut.channelRegistered(context); - sut.channelActive(context); - - assertThat(atomicLong.get()).isEqualTo(0); - assertThat(ReflectionTestUtils.getField(sut, "exclusiveLockOwner")).isNull(); - assertThat(q).containsSequence(pingCommand, bufferedCommand); - - verify(pipeline).fireUserEventTriggered(any(ConnectionEvents.Activated.class)); - } - - @Test - public void testChannelActiveWithBufferedAndQueuedCommandsRetainsOrder() throws Exception { - - Command bufferedCommand1 = new Command<>(CommandType.SET, - new StatusOutput(new Utf8StringCodec()), null); - - Command bufferedCommand2 = new Command<>(CommandType.GET, - new StatusOutput(new Utf8StringCodec()), null); - - Command queuedCommand1 = new Command<>(CommandType.PING, - new StatusOutput(new Utf8StringCodec()), null); - - Command queuedCommand2 = new Command<>(CommandType.AUTH, - new StatusOutput(new Utf8StringCodec()), null); - - q.add(queuedCommand1); - q.add(queuedCommand2); - - Collection buffer = (Collection) ReflectionTestUtils.getField(sut, "commandBuffer"); - buffer.add(bufferedCommand1); - buffer.add(bufferedCommand2); - - reset(channel); - when(channel.writeAndFlush(any())).thenAnswer(invocation -> new 
DefaultChannelPromise(channel)); - when(channel.eventLoop()).thenReturn(eventLoop); - when(channel.pipeline()).thenReturn(pipeline); - - sut.channelRegistered(context); - sut.channelActive(context); - - assertThat(q).isEmpty(); - assertThat(buffer).isEmpty(); - - ArgumentCaptor objectArgumentCaptor = ArgumentCaptor.forClass(Object.class); - verify(channel).writeAndFlush(objectArgumentCaptor.capture()); - - assertThat((Collection) objectArgumentCaptor.getValue()).containsSequence(queuedCommand1, queuedCommand2, - bufferedCommand1, bufferedCommand2); - } - - @Test - public void testChannelActiveReplayBufferedCommands() throws Exception { - - Command bufferedCommand1 = new Command<>(CommandType.SET, - new StatusOutput(new Utf8StringCodec()), null); - - Command bufferedCommand2 = new Command<>(CommandType.GET, - new StatusOutput(new Utf8StringCodec()), null); - - Command queuedCommand1 = new Command<>(CommandType.PING, - new StatusOutput(new Utf8StringCodec()), null); - - Command queuedCommand2 = new Command<>(CommandType.AUTH, - new StatusOutput(new Utf8StringCodec()), null); - - q.add(queuedCommand1); - q.add(queuedCommand2); - - Collection buffer = (Collection) ReflectionTestUtils.getField(sut, "commandBuffer"); - buffer.add(bufferedCommand1); - buffer.add(bufferedCommand2); - - sut.channelRegistered(context); - sut.channelActive(context); - - assertThat(q).containsSequence(queuedCommand1, queuedCommand2, bufferedCommand1, bufferedCommand2); - assertThat(buffer).isEmpty(); - } - - @Test - public void testExceptionChannelActive() throws Exception { - sut.setState(CommandHandler.LifecycleState.ACTIVE); - - when(channel.isActive()).thenReturn(true); - - sut.channelActive(context); - sut.exceptionCaught(context, new Exception()); - } - - @Test - public void testIOExceptionChannelActive() throws Exception { - sut.setState(CommandHandler.LifecycleState.ACTIVE); - - when(channel.isActive()).thenReturn(true); - - sut.channelActive(context); - sut.exceptionCaught(context, new IOException("Connection timed out")); - } - - @Test - public void testWriteChannelDisconnected() throws Exception { - - when(channel.isActive()).thenReturn(true); - sut.channelRegistered(context); - sut.channelActive(context); - - sut.setState(CommandHandler.LifecycleState.DISCONNECTED); - - sut.write(command); - - Collection buffer = (Collection) ReflectionTestUtils.getField(sut, "commandBuffer"); - assertThat(buffer).containsOnly(command); - } - - @Test(expected = RedisException.class) - public void testWriteChannelDisconnectedWithoutReconnect() throws Exception { - - sut = new CommandHandler(ClientOptions.builder().autoReconnect(false).build(), clientResources, - q); - sut.setRedisChannelHandler(channelHandler); - - when(channel.isActive()).thenReturn(true); - sut.channelRegistered(context); - sut.channelActive(context); - - sut.setState(CommandHandler.LifecycleState.DISCONNECTED); - - sut.write(command); - } - - @Test - public void testExceptionChannelInactive() throws Exception { - sut.setState(CommandHandler.LifecycleState.DISCONNECTED); - sut.exceptionCaught(context, new Exception()); - verify(context, never()).fireExceptionCaught(any(Exception.class)); - } - - @Test - public void testExceptionWithQueue() throws Exception { - sut.setState(CommandHandler.LifecycleState.ACTIVE); - q.clear(); - - sut.channelActive(context); - when(channel.isActive()).thenReturn(true); - - q.add(command); - sut.exceptionCaught(context, new Exception()); - - assertThat(q).isEmpty(); - command.get(); - - 
assertThat(ReflectionTestUtils.getField(command, "exception")).isNotNull(); - } - - @Test(expected = RedisException.class) - public void testWriteWhenClosed() throws Exception { - - sut.setState(CommandHandler.LifecycleState.CLOSED); - - sut.write(command); - } - - @Test - public void testExceptionWhenClosed() throws Exception { - - sut.setState(CommandHandler.LifecycleState.CLOSED); - - sut.exceptionCaught(context, new Exception()); - verifyZeroInteractions(context); - } - - @Test - public void isConnectedShouldReportFalseForNOT_CONNECTED() throws Exception { - - sut.setState(CommandHandler.LifecycleState.NOT_CONNECTED); - assertThat(sut.isConnected()).isFalse(); - } - - @Test - public void isConnectedShouldReportFalseForREGISTERED() throws Exception { - - sut.setState(CommandHandler.LifecycleState.REGISTERED); - assertThat(sut.isConnected()).isFalse(); - } - - @Test - public void isConnectedShouldReportTrueForCONNECTED() throws Exception { - - sut.setState(CommandHandler.LifecycleState.CONNECTED); - assertThat(sut.isConnected()).isTrue(); - } - - @Test - public void isConnectedShouldReportTrueForACTIVATING() throws Exception { - - sut.setState(CommandHandler.LifecycleState.ACTIVATING); - assertThat(sut.isConnected()).isTrue(); - } - - @Test - public void isConnectedShouldReportTrueForACTIVE() throws Exception { - - sut.setState(CommandHandler.LifecycleState.ACTIVE); - assertThat(sut.isConnected()).isTrue(); - } - - @Test - public void isConnectedShouldReportFalseForDISCONNECTED() throws Exception { - - sut.setState(CommandHandler.LifecycleState.DISCONNECTED); - assertThat(sut.isConnected()).isFalse(); - } - - @Test - public void isConnectedShouldReportFalseForDEACTIVATING() throws Exception { - - sut.setState(CommandHandler.LifecycleState.DEACTIVATING); - assertThat(sut.isConnected()).isFalse(); - } - - @Test - public void isConnectedShouldReportFalseForDEACTIVATED() throws Exception { - - sut.setState(CommandHandler.LifecycleState.DEACTIVATED); - assertThat(sut.isConnected()).isFalse(); - } - - @Test - public void isConnectedShouldReportFalseForCLOSED() throws Exception { - - sut.setState(CommandHandler.LifecycleState.CLOSED); - assertThat(sut.isConnected()).isFalse(); - } - - @Test - public void shouldNotWriteCancelledCommands() throws Exception { - - command.cancel(); - sut.write(context, command, null); - - verifyZeroInteractions(context); - assertThat((Collection) ReflectionTestUtils.getField(sut, "queue")).isEmpty(); - } - - @Test - public void shouldCancelCommandOnQueueSingleFailure() throws Exception { - - Command commandMock = mock(Command.class); - - RuntimeException exception = new RuntimeException(); - when(commandMock.getOutput()).thenThrow(exception); - - ChannelPromise channelPromise = new DefaultChannelPromise(null, ImmediateEventExecutor.INSTANCE); - try { - sut.write(context, commandMock, channelPromise); - fail("Missing RuntimeException"); - } catch (RuntimeException e) { - assertThat(e).isSameAs(exception); - } - - assertThat((Collection) ReflectionTestUtils.getField(sut, "queue")).isEmpty(); - verify(commandMock).completeExceptionally(exception); - } - - @Test - public void shouldCancelCommandOnQueueBatchFailure() throws Exception { - - Command commandMock = mock(Command.class); - - RuntimeException exception = new RuntimeException(); - when(commandMock.getOutput()).thenThrow(exception); - - ChannelPromise channelPromise = new DefaultChannelPromise(null, ImmediateEventExecutor.INSTANCE); - try { - sut.write(context, Arrays.asList(commandMock), channelPromise); - 
fail("Missing RuntimeException"); - } catch (RuntimeException e) { - assertThat(e).isSameAs(exception); - } - - assertThat((Collection) ReflectionTestUtils.getField(sut, "queue")).isEmpty(); - verify(commandMock).completeExceptionally(exception); - } - - @Test - public void shouldWriteActiveCommands() throws Exception { - - sut.write(context, command, null); - - verify(context).write(command, null); - assertThat((Collection) ReflectionTestUtils.getField(sut, "queue")).containsOnly(command); - } - - @Test - public void shouldNotWriteCancelledCommandBatch() throws Exception { - - command.cancel(); - sut.write(context, Arrays.asList(command), null); - - verifyZeroInteractions(context); - assertThat((Collection) ReflectionTestUtils.getField(sut, "queue")).isEmpty(); - } - - @Test - public void shouldWriteActiveCommandsInBatch() throws Exception { - - List> commands = Arrays.asList(command); - sut.write(context, commands, null); - - verify(context).write(commands, null); - assertThat((Collection) ReflectionTestUtils.getField(sut, "queue")).containsOnly(command); - } - - @Test - public void shouldWriteActiveCommandsInMixedBatch() throws Exception { - - Command command2 = new Command<>(CommandType.APPEND, - new StatusOutput(new Utf8StringCodec()), null); - - command.cancel(); - - sut.write(context, Arrays.asList(command, command2), null); - - ArgumentCaptor captor = ArgumentCaptor.forClass(List.class); - verify(context).write(captor.capture(), any()); - - assertThat(captor.getValue()).containsOnly(command2); - assertThat((Collection) ReflectionTestUtils.getField(sut, "queue")).containsOnly(command2); - } - - @Test - public void shouldIgnoreNonReadableBuffers() throws Exception { - - ByteBuf byteBufMock = mock(ByteBuf.class); - when(byteBufMock.isReadable()).thenReturn(false); - - sut.channelRead(context, byteBufMock); - - verify(byteBufMock, never()).release(); - } - - @Test - public void shouldSetLatency() throws Exception { - - sut.write(context, Arrays.asList(command), null); - - assertThat(command.sentNs).isNotEqualTo(-1); - assertThat(command.firstResponseNs).isEqualTo(-1); - } - - @Test - public void testMTCConcurrentWriteThenReset() throws Throwable { - TestFramework.runOnce(new MTCConcurrentWriteThenReset(clientResources, q, command)); - } - - @Test - public void testMTCConcurrentResetThenWrite() throws Throwable { - TestFramework.runOnce(new MTCConcurrentResetThenWrite(clientResources, q, command)); - } - - @Test - public void testMTCConcurrentConcurrentWrite() throws Throwable { - TestFramework.runOnce(new MTCConcurrentConcurrentWrite(clientResources, q, command)); - } - - /** - * Test of concurrent access to locks. write call wins over reset call. 
- */ - static class MTCConcurrentWriteThenReset extends MultithreadedTestCase { - - private final Command command; - private TestableCommandHandler handler; - private List expectedThreadOrder = Collections.synchronizedList(new ArrayList<>()); - private List entryThreadOrder = Collections.synchronizedList(new ArrayList<>()); - private List exitThreadOrder = Collections.synchronizedList(new ArrayList<>()); - - public MTCConcurrentWriteThenReset(ClientResources clientResources, - Queue> queue, - Command command) { - this.command = command; - handler = new TestableCommandHandler(ClientOptions.create(), clientResources, queue) { - - @Override - protected void incrementWriters() { - - waitForTick(2); - super.incrementWriters(); - waitForTick(4); - } - - @Override - protected void lockWritersExclusive() { - - waitForTick(4); - super.lockWritersExclusive(); - } - - @Override - protected , T> void writeToBuffer(C command) { - - entryThreadOrder.add(Thread.currentThread()); - super.writeToBuffer(command); - } - - @Override - protected List> prepareReset() { - - entryThreadOrder.add(Thread.currentThread()); - return super.prepareReset(); - } - - @Override - protected void unlockWritersExclusive() { - - exitThreadOrder.add(Thread.currentThread()); - super.unlockWritersExclusive(); - } - - @Override - protected void decrementWriters() { - - exitThreadOrder.add(Thread.currentThread()); - super.decrementWriters(); - } - }; - } - - public void thread1() throws InterruptedException { - - waitForTick(1); - expectedThreadOrder.add(Thread.currentThread()); - handler.write(command); - - } - - public void thread2() throws InterruptedException { - - waitForTick(3); - expectedThreadOrder.add(Thread.currentThread()); - handler.reset(); - } - - @Override - public void finish() { - - assertThat(entryThreadOrder).containsExactlyElementsOf(expectedThreadOrder); - assertThat(exitThreadOrder).containsExactlyElementsOf(expectedThreadOrder); - } - } - - /** - * Test of concurrent access to locks. write call wins over flush call. 
- */ - static class MTCConcurrentResetThenWrite extends MultithreadedTestCase { - - private final Command command; - private TestableCommandHandler handler; - private List expectedThreadOrder = Collections.synchronizedList(new ArrayList<>()); - private List entryThreadOrder = Collections.synchronizedList(new ArrayList<>()); - private List exitThreadOrder = Collections.synchronizedList(new ArrayList<>()); - - public MTCConcurrentResetThenWrite(ClientResources clientResources, - Queue> queue, - Command command) { - this.command = command; - handler = new TestableCommandHandler(ClientOptions.create(), clientResources, queue) { - - @Override - protected void incrementWriters() { - - waitForTick(4); - super.incrementWriters(); - } - - @Override - protected void lockWritersExclusive() { - - waitForTick(2); - super.lockWritersExclusive(); - waitForTick(4); - } - - @Override - protected , T> void writeToBuffer(C command) { - - entryThreadOrder.add(Thread.currentThread()); - super.writeToBuffer(command); - } - - @Override - protected List> prepareReset() { - - entryThreadOrder.add(Thread.currentThread()); - return super.prepareReset(); - } - - @Override - protected void unlockWritersExclusive() { - - exitThreadOrder.add(Thread.currentThread()); - super.unlockWritersExclusive(); - } - - @Override - protected void decrementWriters() { - - exitThreadOrder.add(Thread.currentThread()); - super.decrementWriters(); - } - }; - } - - public void thread1() throws InterruptedException { - - waitForTick(1); - expectedThreadOrder.add(Thread.currentThread()); - handler.reset(); - } - - public void thread2() throws InterruptedException { - - waitForTick(3); - expectedThreadOrder.add(Thread.currentThread()); - handler.write(command); - } - - @Override - public void finish() { - - assertThat(entryThreadOrder).containsExactlyElementsOf(expectedThreadOrder); - assertThat(exitThreadOrder).containsExactlyElementsOf(expectedThreadOrder); - } - } - - /** - * Test of concurrent access to locks. Two concurrent writes. - */ - static class MTCConcurrentConcurrentWrite extends MultithreadedTestCase { - - private final Command command; - private TestableCommandHandler handler; - - public MTCConcurrentConcurrentWrite(ClientResources clientResources, - Queue> queue, - Command command) { - this.command = command; - handler = new TestableCommandHandler(ClientOptions.create(), clientResources, queue) { - - @Override - protected , T> void writeToBuffer(C command) { - - waitForTick(2); - assertThat(writers.get()).isEqualTo(2); - waitForTick(3); - super.writeToBuffer(command); - } - - }; - } - - public void thread1() throws InterruptedException { - - waitForTick(1); - handler.write(command); - } - - public void thread2() throws InterruptedException { - - waitForTick(1); - handler.write(command); - } - - } - - static class TestableCommandHandler extends CommandHandler { - public TestableCommandHandler(ClientOptions clientOptions, ClientResources clientResources, - Queue> queue) { - super(clientOptions, clientResources, queue); - } - } - -} diff --git a/src/test/java/com/lambdaworks/redis/protocol/CommandInternalsTest.java b/src/test/java/com/lambdaworks/redis/protocol/CommandInternalsTest.java deleted file mode 100644 index c06b7a49c9..0000000000 --- a/src/test/java/com/lambdaworks/redis/protocol/CommandInternalsTest.java +++ /dev/null @@ -1,128 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. 
- -package com.lambdaworks.redis.protocol; - -import static com.lambdaworks.redis.protocol.LettuceCharsets.buffer; -import static org.assertj.core.api.Assertions.assertThat; - -import org.junit.Before; -import org.junit.Test; - -import com.lambdaworks.redis.RedisException; -import com.lambdaworks.redis.codec.RedisCodec; -import com.lambdaworks.redis.codec.Utf8StringCodec; -import com.lambdaworks.redis.output.CommandOutput; -import com.lambdaworks.redis.output.NestedMultiOutput; -import com.lambdaworks.redis.output.StatusOutput; - -public class CommandInternalsTest { - protected RedisCodec codec = new Utf8StringCodec(); - protected Command sut; - - @Before - public final void createCommand() throws Exception { - CommandOutput output = new StatusOutput(codec); - sut = new Command<>(CommandType.INFO, output, null); - } - - @Test - public void isCancelled() throws Exception { - assertThat(sut.isCancelled()).isFalse(); - sut.cancel(); - - assertThat(sut.isCancelled()).isTrue(); - sut.cancel(); - } - - @Test - public void isDone() throws Exception { - assertThat(sut.isDone()).isFalse(); - sut.complete(); - assertThat(sut.isDone()).isTrue(); - } - - @Test - public void get() throws Exception { - assertThat(sut.get()).isNull(); - sut.getOutput().set(buffer("one")); - assertThat(sut.get()).isEqualTo("one"); - } - - @Test - public void getError() throws Exception { - sut.getOutput().setError("error"); - assertThat(sut.getError()).isEqualTo("error"); - } - - @Test(expected = IllegalStateException.class) - public void setOutputAfterCompleted() throws Exception { - sut.complete(); - sut.setOutput(new StatusOutput<>(codec)); - } - - @Test - public void testToString() throws Exception { - assertThat(sut.toString()).contains("Command"); - } - - @Test - public void customKeyword() throws Exception { - - sut = new Command(MyKeywords.DUMMY, null, null); - sut.setOutput(new StatusOutput(codec)); - - assertThat(sut.toString()).contains(MyKeywords.DUMMY.name()); - } - - @Test - public void customKeywordWithArgs() throws Exception { - sut = new Command(MyKeywords.DUMMY, null, new CommandArgs(codec)); - sut.getArgs().add(MyKeywords.DUMMY); - assertThat(sut.getArgs().toString()).contains(MyKeywords.DUMMY.name()); - } - - @Test - public void getWithTimeout() throws Exception { - sut.getOutput().set(buffer("one")); - sut.complete(); - - assertThat(sut.get()).isEqualTo("one"); - } - - @Test(expected = IllegalStateException.class) - public void outputSubclassOverride1() { - CommandOutput output = new CommandOutput(codec, null) { - @Override - public String get() throws RedisException { - return null; - } - }; - output.set(null); - } - - @Test(expected = IllegalStateException.class) - public void outputSubclassOverride2() { - CommandOutput output = new CommandOutput(codec, null) { - @Override - public String get() throws RedisException { - return null; - } - }; - output.set(0); - } - - @Test - public void sillyTestsForEmmaCoverage() throws Exception { - assertThat(CommandType.valueOf("APPEND")).isEqualTo(CommandType.APPEND); - assertThat(CommandKeyword.valueOf("AFTER")).isEqualTo(CommandKeyword.AFTER); - } - - private enum MyKeywords implements ProtocolKeyword { - DUMMY; - - @Override - public byte[] getBytes() { - return name().getBytes(); - } - } -} diff --git a/src/test/java/com/lambdaworks/redis/protocol/ConnectionFailureTest.java b/src/test/java/com/lambdaworks/redis/protocol/ConnectionFailureTest.java deleted file mode 100644 index e611f35fe6..0000000000 --- 
a/src/test/java/com/lambdaworks/redis/protocol/ConnectionFailureTest.java +++ /dev/null @@ -1,256 +0,0 @@ -package com.lambdaworks.redis.protocol; - -import static org.assertj.core.api.Assertions.assertThat; - -import java.util.concurrent.*; - -import com.lambdaworks.redis.*; -import com.lambdaworks.redis.api.StatefulRedisConnection; -import org.junit.Test; -import org.springframework.test.util.ReflectionTestUtils; - -import com.lambdaworks.Connections; -import com.lambdaworks.Wait; -import com.lambdaworks.redis.api.async.RedisAsyncCommands; -import com.lambdaworks.redis.server.RandomResponseServer; - -/** - * @author Mark Paluch - */ -public class ConnectionFailureTest extends AbstractRedisClientTest { - - private RedisURI defaultRedisUri = RedisURI.Builder.redis(TestSettings.host(), TestSettings.port()).build(); - - /** - * Expect to run into Invalid first byte exception instead of timeout. - * - * @throws Exception - */ - @Test(timeout = 10000) - public void pingBeforeConnectFails() throws Exception { - - client.setOptions(ClientOptions.builder().pingBeforeActivateConnection(true).build()); - - RandomResponseServer ts = getRandomResponseServer(); - - RedisURI redisUri = RedisURI.Builder.redis(TestSettings.host(), TestSettings.nonexistentPort()) - .withTimeout(10, TimeUnit.MINUTES).build(); - - try { - client.connect(redisUri); - } catch (Exception e) { - assertThat(e).isExactlyInstanceOf(RedisConnectionException.class); - assertThat(e.getCause()).hasMessageContaining("Invalid first byte:"); - } finally { - ts.shutdown(); - } - } - - /** - * Simulates a failure on reconnect by changing the port to a invalid server and triggering a reconnect. Meanwhile a command - * is fired to the connection and the watchdog is triggered afterwards to reconnect. - * - * Expectation: Command after failed reconnect contains the reconnect exception. - * - * @throws Exception - */ - @Test(timeout = 120000) - public void pingBeforeConnectFailOnReconnect() throws Exception { - - ClientOptions clientOptions = ClientOptions.builder().pingBeforeActivateConnection(true) - .suspendReconnectOnProtocolFailure(true).build(); - client.setOptions(clientOptions); - - RandomResponseServer ts = getRandomResponseServer(); - - RedisURI redisUri = RedisURI.Builder.redis(TestSettings.host(), TestSettings.port()).build(); - redisUri.setTimeout(5); - redisUri.setUnit(TimeUnit.SECONDS); - - try { - RedisAsyncCommands connection = client.connectAsync(redisUri); - ConnectionWatchdog connectionWatchdog = Connections.getConnectionWatchdog(connection.getStatefulConnection()); - - assertThat(connectionWatchdog.isListenOnChannelInactive()).isTrue(); - assertThat(connectionWatchdog.isReconnectSuspended()).isFalse(); - assertThat(clientOptions.isSuspendReconnectOnProtocolFailure()).isTrue(); - assertThat(connectionWatchdog.getReconnectionHandler().getClientOptions()).isSameAs(clientOptions); - - redisUri.setPort(TestSettings.nonexistentPort()); - - connection.quit(); - Wait.untilTrue(() -> connectionWatchdog.isReconnectSuspended()).waitOrTimeout(); - - assertThat(connectionWatchdog.isListenOnChannelInactive()).isTrue(); - - try { - connection.info().get(1, TimeUnit.MINUTES); - } catch (ExecutionException e) { - assertThat(e).hasRootCauseExactlyInstanceOf(RedisException.class); - assertThat(e.getCause()).hasMessageStartingWith("Invalid first byte"); - } - connection.close(); - } finally { - ts.shutdown(); - } - } - - /** - * Simulates a failure on reconnect by changing the port to a invalid server and triggering a reconnect. 
- * - * Expectation: {@link com.lambdaworks.redis.ConnectionEvents.Reconnect} events are sent. - * - * @throws Exception - */ - @Test(timeout = 120000) - public void pingBeforeConnectFailOnReconnectShouldSendEvents() throws Exception { - - client.setOptions(ClientOptions.builder().pingBeforeActivateConnection(true) - .suspendReconnectOnProtocolFailure(false).build()); - - RandomResponseServer ts = getRandomResponseServer(); - - RedisURI redisUri = RedisURI.create(defaultRedisUri.toURI()); - redisUri.setTimeout(5); - redisUri.setUnit(TimeUnit.SECONDS); - - try { - final BlockingQueue events = new LinkedBlockingDeque<>(); - - RedisAsyncCommands connection = client.connectAsync(redisUri); - ConnectionWatchdog connectionWatchdog = Connections.getConnectionWatchdog(connection.getStatefulConnection()); - - ReconnectionListener reconnectionListener = new ReconnectionListener() { - @Override - public void onReconnect(ConnectionEvents.Reconnect reconnect) { - events.add(reconnect); - } - }; - - ReflectionTestUtils.setField(connectionWatchdog, "reconnectionListener", reconnectionListener); - - redisUri.setPort(TestSettings.nonexistentPort()); - - connection.quit(); - Wait.untilTrue(() -> events.size() > 1).waitOrTimeout(); - connection.close(); - - ConnectionEvents.Reconnect event1 = events.take(); - assertThat(event1.getAttempt()).isEqualTo(1); - - ConnectionEvents.Reconnect event2 = events.take(); - assertThat(event2.getAttempt()).isEqualTo(2); - - } finally { - ts.shutdown(); - } - } - - /** - * Simulates a failure on reconnect by changing the port to a invalid server and triggering a reconnect. Meanwhile a command - * is fired to the connection and the watchdog is triggered afterwards to reconnect. - * - * Expectation: Queued commands are canceled (reset), subsequent commands contain the connection exception. 
- * - * @throws Exception - */ - @Test(timeout = 10000) - public void cancelCommandsOnReconnectFailure() throws Exception { - - client.setOptions( - ClientOptions.builder().pingBeforeActivateConnection(true).cancelCommandsOnReconnectFailure(true).build()); - - RandomResponseServer ts = getRandomResponseServer(); - - RedisURI redisUri = RedisURI.create(defaultRedisUri.toURI()); - - try { - RedisAsyncCommandsImpl connection = (RedisAsyncCommandsImpl) client - .connectAsync(redisUri); - ConnectionWatchdog connectionWatchdog = Connections.getConnectionWatchdog(connection.getStatefulConnection()); - - assertThat(connectionWatchdog.isListenOnChannelInactive()).isTrue(); - - connectionWatchdog.setReconnectSuspended(true); - redisUri.setPort(TestSettings.nonexistentPort()); - - connection.quit(); - Wait.untilTrue(() -> !connection.isOpen()).waitOrTimeout(); - - RedisFuture set1 = connection.set(key, value); - RedisFuture set2 = connection.set(key, value); - - assertThat(set1.isDone()).isFalse(); - assertThat(set1.isCancelled()).isFalse(); - - assertThat(connection.isOpen()).isFalse(); - connectionWatchdog.setReconnectSuspended(false); - connectionWatchdog.run(null); - Thread.sleep(500); - assertThat(connection.isOpen()).isFalse(); - - try { - set1.get(); - } catch (CancellationException e) { - assertThat(e).hasNoCause(); - } - - try { - set2.get(); - } catch (CancellationException e) { - assertThat(e).hasNoCause(); - } - - try { - connection.info().get(); - } catch (ExecutionException e) { - assertThat(e).hasRootCauseExactlyInstanceOf(RedisException.class); - assertThat(e.getCause()).hasMessageStartingWith("Invalid first byte"); - } - - connection.close(); - } finally { - ts.shutdown(); - } - } - - /** - * Expect to disable {@link ConnectionWatchdog} when closing a broken connection. - * - * @throws Exception - */ - @Test - public void closingDisconnectedConnectionShouldDisableConnectionWatchdog() throws Exception { - - client.setOptions(ClientOptions.create()); - - - RedisURI redisUri = RedisURI.Builder.redis(TestSettings.host(), TestSettings.port()) - .withTimeout(10, TimeUnit.MINUTES).build(); - - StatefulRedisConnection connection = client.connect(redisUri); - - ConnectionWatchdog connectionWatchdog = Connections.getConnectionWatchdog(connection); - - assertThat(connectionWatchdog.isReconnectSuspended()).isFalse(); - assertThat(connectionWatchdog.isListenOnChannelInactive()).isTrue(); - - connection.sync().ping(); - - redisUri.setPort(TestSettings.nonexistentPort() + 5); - - connection.async().quit(); - Wait.untilTrue(() -> !connection.isOpen()).waitOrTimeout(); - - connection.close(); - - assertThat(connectionWatchdog.isReconnectSuspended()).isTrue(); - assertThat(connectionWatchdog.isListenOnChannelInactive()).isFalse(); - } - - protected RandomResponseServer getRandomResponseServer() throws InterruptedException { - RandomResponseServer ts = new RandomResponseServer(); - ts.initialize(TestSettings.nonexistentPort()); - return ts; - } -} diff --git a/src/test/java/com/lambdaworks/redis/protocol/StateMachineTest.java b/src/test/java/com/lambdaworks/redis/protocol/StateMachineTest.java deleted file mode 100644 index 8d8e3c325c..0000000000 --- a/src/test/java/com/lambdaworks/redis/protocol/StateMachineTest.java +++ /dev/null @@ -1,154 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. 
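The ConnectionFailureTest deleted here exercises connection validation and reconnect handling through ClientOptions: PING before activating a connection, suspending the reconnect loop after a protocol failure, and cancelling queued commands when a reconnect attempt fails. A hedged configuration sketch restricted to the options visible in the deleted test (host, port and timeout are placeholders for the TestSettings values the test uses):

import java.util.concurrent.TimeUnit;

import com.lambdaworks.redis.ClientOptions;
import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.RedisURI;
import com.lambdaworks.redis.api.StatefulRedisConnection;

public class ConnectionFailureOptionsSketch {

    public static void main(String[] args) {

        RedisClient client = RedisClient.create();

        // Validate connections with PING before handing them out, suspend the
        // reconnect loop on protocol failures, and cancel queued commands when
        // a reconnect attempt does not succeed.
        client.setOptions(ClientOptions.builder()
                .pingBeforeActivateConnection(true)
                .suspendReconnectOnProtocolFailure(true)
                .cancelCommandsOnReconnectFailure(true)
                .build());

        RedisURI redisUri = RedisURI.Builder.redis("localhost", 6379)
                .withTimeout(10, TimeUnit.SECONDS).build();

        StatefulRedisConnection<String, String> connection = client.connect(redisUri);
        System.out.println(connection.sync().ping()); // PONG

        connection.close();
        client.shutdown();
    }
}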
- -package com.lambdaworks.redis.protocol; - -import static com.lambdaworks.redis.protocol.RedisStateMachine.State; -import static org.assertj.core.api.Assertions.assertThat; - -import java.nio.charset.Charset; -import java.util.Arrays; -import java.util.List; - -import org.apache.logging.log4j.Level; -import org.apache.logging.log4j.LogManager; -import org.apache.logging.log4j.core.LoggerContext; -import org.apache.logging.log4j.core.config.Configuration; -import org.apache.logging.log4j.core.config.LoggerConfig; -import org.junit.AfterClass; -import org.junit.Before; -import org.junit.BeforeClass; -import org.junit.Test; - -import com.lambdaworks.redis.RedisException; -import com.lambdaworks.redis.codec.RedisCodec; -import com.lambdaworks.redis.codec.Utf8StringCodec; -import com.lambdaworks.redis.output.*; - -import io.netty.buffer.ByteBuf; -import io.netty.buffer.Unpooled; - -public class StateMachineTest { - protected RedisCodec codec = new Utf8StringCodec(); - protected Charset charset = Charset.forName("UTF-8"); - protected CommandOutput output; - protected RedisStateMachine rsm; - - @BeforeClass - public static void beforeClass() { - - LoggerContext ctx = (LoggerContext) LogManager.getContext(); - Configuration config = ctx.getConfiguration(); - LoggerConfig loggerConfig = config.getLoggerConfig(RedisStateMachine.class.getName()); - loggerConfig.setLevel(Level.ALL); - } - - @AfterClass - public static void afterClass() { - LoggerContext ctx = (LoggerContext) LogManager.getContext(); - Configuration config = ctx.getConfiguration(); - LoggerConfig loggerConfig = config.getLoggerConfig(RedisStateMachine.class.getName()); - loggerConfig.setLevel(null); - } - - @Before - public final void createStateMachine() throws Exception { - output = new StatusOutput(codec); - rsm = new RedisStateMachine(); - } - - @Test - public void single() throws Exception { - assertThat(rsm.decode(buffer("+OK\r\n"), output)).isTrue(); - assertThat(output.get()).isEqualTo("OK"); - } - - @Test - public void error() throws Exception { - assertThat(rsm.decode(buffer("-ERR\r\n"), output)).isTrue(); - assertThat(output.getError()).isEqualTo("ERR"); - } - - @Test - public void errorWithoutLineBreak() throws Exception { - assertThat(rsm.decode(buffer("-ERR"), output)).isFalse(); - assertThat(rsm.decode(buffer("\r\n"), output)).isTrue(); - assertThat(output.getError()).isEqualTo(""); - } - - @Test - public void integer() throws Exception { - CommandOutput output = new IntegerOutput(codec); - assertThat(rsm.decode(buffer(":1\r\n"), output)).isTrue(); - assertThat((long) output.get()).isEqualTo(1); - } - - @Test - public void bulk() throws Exception { - CommandOutput output = new ValueOutput(codec); - assertThat(rsm.decode(buffer("$-1\r\n"), output)).isTrue(); - assertThat(output.get()).isNull(); - assertThat(rsm.decode(buffer("$3\r\nfoo\r\n"), output)).isTrue(); - assertThat(output.get()).isEqualTo("foo"); - } - - @Test - public void multi() throws Exception { - CommandOutput> output = new ValueListOutput(codec); - ByteBuf buffer = buffer("*2\r\n$-1\r\n$2\r\nok\r\n"); - assertThat(rsm.decode(buffer, output)).isTrue(); - assertThat(output.get()).isEqualTo(Arrays.asList(null, "ok")); - } - - @Test - public void multiEmptyArray1() throws Exception { - CommandOutput> output = new NestedMultiOutput(codec); - ByteBuf buffer = buffer("*2\r\n$3\r\nABC\r\n*0\r\n"); - assertThat(rsm.decode(buffer, output)).isTrue(); - assertThat(output.get().get(0)).isEqualTo("ABC"); - assertThat(output.get().get(1)).isEqualTo(Arrays.asList()); - 
assertThat(output.get().size()).isEqualTo(2); - } - - @Test - public void multiEmptyArray2() throws Exception { - CommandOutput> output = new NestedMultiOutput(codec); - ByteBuf buffer = buffer("*2\r\n*0\r\n$3\r\nABC\r\n"); - assertThat(rsm.decode(buffer, output)).isTrue(); - assertThat(output.get().get(0)).isEqualTo(Arrays.asList()); - assertThat(output.get().get(1)).isEqualTo("ABC"); - assertThat(output.get().size()).isEqualTo(2); - } - - @Test - public void multiEmptyArray3() throws Exception { - CommandOutput> output = new NestedMultiOutput(codec); - ByteBuf buffer = buffer("*2\r\n*2\r\n$2\r\nAB\r\n$2\r\nXY\r\n*0\r\n"); - assertThat(rsm.decode(buffer, output)).isTrue(); - assertThat(output.get().get(0)).isEqualTo(Arrays.asList("AB", "XY")); - assertThat(output.get().get(1)).isEqualTo(Arrays.asList()); - assertThat(output.get().size()).isEqualTo(2); - } - - @Test - public void partialFirstLine() throws Exception { - assertThat(rsm.decode(buffer("+"), output)).isFalse(); - assertThat(rsm.decode(buffer("-"), output)).isFalse(); - assertThat(rsm.decode(buffer(":"), output)).isFalse(); - assertThat(rsm.decode(buffer("$"), output)).isFalse(); - assertThat(rsm.decode(buffer("*"), output)).isFalse(); - } - - @Test(expected = RedisException.class) - public void invalidReplyType() throws Exception { - rsm.decode(buffer("="), output); - } - - @Test - public void sillyTestsForEmmaCoverage() throws Exception { - assertThat(State.Type.valueOf("SINGLE")).isEqualTo(State.Type.SINGLE); - } - - protected ByteBuf buffer(String content) { - return Unpooled.copiedBuffer(content, charset); - } -} diff --git a/src/test/java/com/lambdaworks/redis/pubsub/PubSubCommandTest.java b/src/test/java/com/lambdaworks/redis/pubsub/PubSubCommandTest.java deleted file mode 100644 index 2dc501bd9c..0000000000 --- a/src/test/java/com/lambdaworks/redis/pubsub/PubSubCommandTest.java +++ /dev/null @@ -1,403 +0,0 @@ -// Copyright (C) 2011 - Will Glozer. All rights reserved. 
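The StateMachineTest removed above is effectively a specification of RESP decoding: each reply type marker (+, -, :, $, *) is fed to RedisStateMachine.decode(...) together with a CommandOutput, and decode(...) only returns true once a complete reply has been consumed. A compact sketch mirroring the simple cases from the deleted test (RedisStateMachine is written without type parameters, matching the removed code):

import java.nio.charset.Charset;

import com.lambdaworks.redis.codec.Utf8StringCodec;
import com.lambdaworks.redis.output.IntegerOutput;
import com.lambdaworks.redis.output.StatusOutput;
import com.lambdaworks.redis.protocol.RedisStateMachine;

import io.netty.buffer.Unpooled;

public class RespDecodeSketch {

    public static void main(String[] args) {

        Charset charset = Charset.forName("UTF-8");
        Utf8StringCodec codec = new Utf8StringCodec();
        RedisStateMachine rsm = new RedisStateMachine();

        // "+OK\r\n" is a complete status reply.
        StatusOutput<String, String> status = new StatusOutput<>(codec);
        System.out.println(rsm.decode(Unpooled.copiedBuffer("+OK\r\n", charset), status)); // true
        System.out.println(status.get()); // OK

        // "-ERR\r\n" is decoded into the output's error slot.
        StatusOutput<String, String> error = new StatusOutput<>(codec);
        rsm.decode(Unpooled.copiedBuffer("-ERR\r\n", charset), error);
        System.out.println(error.getError()); // ERR

        // ":1\r\n" is an integer reply.
        IntegerOutput<String, String> integer = new IntegerOutput<>(codec);
        rsm.decode(Unpooled.copiedBuffer(":1\r\n", charset), integer);
        System.out.println(integer.get()); // 1
    }
}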
- -package com.lambdaworks.redis.pubsub; - -import static org.assertj.core.api.Assertions.assertThat; -import static org.assertj.core.api.Fail.fail; -import static org.hamcrest.CoreMatchers.hasItem; -import static org.junit.Assert.assertThat; - -import java.util.ArrayList; -import java.util.List; -import java.util.Map; -import java.util.concurrent.BlockingQueue; -import java.util.concurrent.TimeUnit; - -import org.junit.After; -import org.junit.Before; -import org.junit.Test; - -import com.lambdaworks.Wait; -import com.lambdaworks.redis.*; -import com.lambdaworks.redis.api.async.RedisAsyncCommands; -import com.lambdaworks.redis.internal.LettuceFactories; -import com.lambdaworks.redis.pubsub.api.async.RedisPubSubAsyncCommands; - -public class PubSubCommandTest extends AbstractRedisClientTest implements RedisPubSubListener { - private RedisPubSubAsyncCommands pubsub; - - private BlockingQueue channels; - private BlockingQueue patterns; - private BlockingQueue messages; - private BlockingQueue counts; - - private String channel = "channel0"; - private String pattern = "channel*"; - private String message = "msg!"; - - @Before - public void openPubSubConnection() throws Exception { - pubsub = client.connectPubSub().async(); - pubsub.addListener(this); - channels = LettuceFactories.newBlockingQueue(); - patterns = LettuceFactories.newBlockingQueue(); - messages = LettuceFactories.newBlockingQueue(); - counts = LettuceFactories.newBlockingQueue(); - } - - @After - public void closePubSubConnection() throws Exception { - pubsub.close(); - } - - @Test - public void auth() throws Exception { - new WithPasswordRequired() { - @Override - protected void run(RedisClient client) throws Exception { - RedisPubSubAsyncCommands connection = client.connectPubSub().async(); - connection.addListener(PubSubCommandTest.this); - connection.auth(passwd); - - connection.subscribe(channel); - assertThat(channels.take()).isEqualTo(channel); - } - }; - } - - @Test - public void authWithReconnect() throws Exception { - new WithPasswordRequired() { - @Override - protected void run(RedisClient client) throws Exception { - RedisPubSubAsyncCommands connection = client.connectPubSub().async(); - connection.addListener(PubSubCommandTest.this); - connection.auth(passwd); - connection.quit(); - Wait.untilTrue(() -> { - return !connection.isOpen(); - }).waitOrTimeout(); - - connection.subscribe(channel); - assertThat(channels.take()).isEqualTo(channel); - } - }; - } - - @Test(timeout = 2000) - public void message() throws Exception { - pubsub.subscribe(channel); - assertThat(channels.take()).isEqualTo(channel); - - redis.publish(channel, message); - assertThat(channels.take()).isEqualTo(channel); - assertThat(messages.take()).isEqualTo(message); - } - - @Test(timeout = 2000) - public void pipelinedMessage() throws Exception { - pubsub.subscribe(channel); - assertThat(channels.take()).isEqualTo(channel); - RedisAsyncCommands connection = client.connectAsync(); - - connection.setAutoFlushCommands(false); - connection.publish(channel, message); - Thread.sleep(100); - - assertThat(channels).isEmpty(); - connection.flushCommands(); - - assertThat(channels.take()).isEqualTo(channel); - assertThat(messages.take()).isEqualTo(message); - - connection.close(); - } - - @Test(timeout = 2000) - public void pmessage() throws Exception { - pubsub.psubscribe(pattern).await(1, TimeUnit.MINUTES); - assertThat(patterns.take()).isEqualTo(pattern); - - redis.publish(channel, message); - assertThat(patterns.take()).isEqualTo(pattern); - 
assertThat(channels.take()).isEqualTo(channel); - assertThat(messages.take()).isEqualTo(message); - - redis.publish("channel2", "msg 2!"); - assertThat(patterns.take()).isEqualTo(pattern); - assertThat(channels.take()).isEqualTo("channel2"); - assertThat(messages.take()).isEqualTo("msg 2!"); - } - - @Test(timeout = 2000) - public void pipelinedSubscribe() throws Exception { - - pubsub.setAutoFlushCommands(false); - pubsub.subscribe(channel); - Thread.sleep(100); - assertThat(channels).isEmpty(); - pubsub.flushCommands(); - - assertThat(channels.take()).isEqualTo(channel); - - redis.publish(channel, message); - - assertThat(channels.take()).isEqualTo(channel); - assertThat(messages.take()).isEqualTo(message); - - } - - @Test(timeout = 2000) - public void psubscribe() throws Exception { - RedisFuture psubscribe = pubsub.psubscribe(pattern); - assertThat(psubscribe.get()).isNull(); - assertThat(psubscribe.getError()).isNull(); - assertThat(psubscribe.isCancelled()).isFalse(); - assertThat(psubscribe.isDone()).isTrue(); - - assertThat(patterns.take()).isEqualTo(pattern); - assertThat((long) counts.take()).isEqualTo(1); - } - - @Test(timeout = 2000) - public void psubscribeWithListener() throws Exception { - RedisFuture psubscribe = pubsub.psubscribe(pattern); - final List listener = new ArrayList<>(); - - psubscribe.thenAccept(aVoid -> listener.add("done")); - psubscribe.await(1, TimeUnit.MINUTES); - - assertThat(patterns.take()).isEqualTo(pattern); - assertThat((long) counts.take()).isEqualTo(1); - assertThat(listener).hasSize(1); - } - - @Test(expected = IllegalArgumentException.class) - public void pubsubEmptyChannels() throws Exception { - pubsub.subscribe(); - fail("Missing IllegalArgumentException: channels must not be empty"); - } - - @Test - public void pubsubChannels() throws Exception { - RedisFuture future = pubsub.subscribe(channel); - future.get(1, TimeUnit.MINUTES); - List result = redis.pubsubChannels(); - assertThat(result).contains(channel); - - } - - @Test - public void pubsubMultipleChannels() throws Exception { - RedisFuture future = pubsub.subscribe(channel, "channel1", "channel3"); - future.get(); - - List result = redis.pubsubChannels(); - assertThat(result).contains(channel, "channel1", "channel3"); - - } - - @Test - public void pubsubChannelsWithArg() throws Exception { - pubsub.subscribe(channel).get(); - List result = redis.pubsubChannels(pattern); - assertThat(result, hasItem(channel)); - } - - @Test - public void pubsubNumsub() throws Exception { - - pubsub.subscribe(channel); - Thread.sleep(100); - - Map result = redis.pubsubNumsub(channel); - assertThat(result.size()).isGreaterThan(0); - assertThat(result.get(channel)).isGreaterThan(0); // Redis sometimes keeps old references - } - - @Test - public void pubsubNumpat() throws Exception { - - pubsub.psubscribe(pattern).get(); - Long result = redis.pubsubNumpat(); - assertThat(result.longValue()).isGreaterThan(0); // Redis sometimes keeps old references - } - - @Test - public void punsubscribe() throws Exception { - pubsub.punsubscribe(pattern).get(); - assertThat(patterns.take()).isEqualTo(pattern); - assertThat((long) counts.take()).isEqualTo(0); - - } - - @Test(timeout = 2000) - public void subscribe() throws Exception { - pubsub.subscribe(channel); - assertThat(channels.take()).isEqualTo(channel); - assertThat((long) counts.take()).isEqualTo(1); - } - - @Test(timeout = 2000) - public void unsubscribe() throws Exception { - pubsub.unsubscribe(channel).get(); - assertThat(channels.take()).isEqualTo(channel); - 
assertThat((long) counts.take()).isEqualTo(0); - - RedisFuture future = pubsub.unsubscribe(); - - assertThat(future.get()).isNull(); - assertThat(future.getError()).isNull(); - - assertThat(channels).isEmpty(); - assertThat(patterns).isEmpty(); - - } - - @Test - public void pubsubCloseOnClientShutdown() throws Exception { - - RedisClient redisClient = RedisClient.create(RedisURI.Builder.redis(host, port).build()); - - RedisPubSubAsyncCommands connection = redisClient.connectPubSub().async(); - - FastShutdown.shutdown(redisClient); - - assertThat(connection.isOpen()).isFalse(); - } - - @Test(timeout = 2000) - public void utf8Channel() throws Exception { - String channel = "channelλ"; - String message = "αβγ"; - - pubsub.subscribe(channel); - assertThat(channels.take()).isEqualTo(channel); - - redis.publish(channel, message); - assertThat(channels.take()).isEqualTo(channel); - assertThat(messages.take()).isEqualTo(message); - } - - @Test(timeout = 10000) - public void resubscribeChannelsOnReconnect() throws Exception { - pubsub.subscribe(channel); - assertThat(channels.take()).isEqualTo(channel); - assertThat((long) counts.take()).isEqualTo(1); - - pubsub.quit(); - - assertThat(channels.take()).isEqualTo(channel); - assertThat((long) counts.take()).isEqualTo(1); - - Wait.untilTrue(pubsub::isOpen).waitOrTimeout(); - - redis.publish(channel, message); - assertThat(channels.take()).isEqualTo(channel); - assertThat(messages.take()).isEqualTo(message); - } - - @Test(timeout = 10000) - public void resubscribePatternsOnReconnect() throws Exception { - pubsub.psubscribe(pattern); - assertThat(patterns.take()).isEqualTo(pattern); - assertThat((long) counts.take()).isEqualTo(1); - - pubsub.quit(); - - assertThat(patterns.take()).isEqualTo(pattern); - assertThat((long) counts.take()).isEqualTo(1); - - Wait.untilTrue(pubsub::isOpen).waitOrTimeout(); - - redis.publish(channel, message); - assertThat(channels.take()).isEqualTo(channel); - assertThat(messages.take()).isEqualTo(message); - } - - @Test(timeout = 2000) - public void adapter() throws Exception { - final BlockingQueue localCounts = LettuceFactories.newBlockingQueue(); - - RedisPubSubAdapter adapter = new RedisPubSubAdapter() { - @Override - public void subscribed(String channel, long count) { - super.subscribed(channel, count); - localCounts.add(count); - } - - @Override - public void unsubscribed(String channel, long count) { - super.unsubscribed(channel, count); - localCounts.add(count); - } - }; - - pubsub.addListener(adapter); - pubsub.subscribe(channel); - pubsub.psubscribe(pattern); - - assertThat((long) localCounts.take()).isEqualTo(1L); - - redis.publish(channel, message); - pubsub.punsubscribe(pattern); - pubsub.unsubscribe(channel); - - assertThat((long) localCounts.take()).isEqualTo(0L); - } - - @Test(timeout = 2000) - public void removeListener() throws Exception { - pubsub.subscribe(channel); - assertThat(channels.take()).isEqualTo(channel); - - redis.publish(channel, message); - assertThat(channels.take()).isEqualTo(channel); - assertThat(messages.take()).isEqualTo(message); - - pubsub.removeListener(this); - - redis.publish(channel, message); - assertThat(channels.poll(10, TimeUnit.MILLISECONDS)).isNull(); - assertThat(messages.poll(10, TimeUnit.MILLISECONDS)).isNull(); - } - - // RedisPubSubListener implementation - - @Override - public void message(String channel, String message) { - channels.add(channel); - messages.add(message); - } - - @Override - public void message(String pattern, String channel, String message) { - 
patterns.add(pattern); - channels.add(channel); - messages.add(message); - } - - @Override - public void subscribed(String channel, long count) { - channels.add(channel); - counts.add(count); - } - - @Override - public void psubscribed(String pattern, long count) { - patterns.add(pattern); - counts.add(count); - } - - @Override - public void unsubscribed(String channel, long count) { - channels.add(channel); - counts.add(count); - } - - @Override - public void punsubscribed(String pattern, long count) { - patterns.add(pattern); - counts.add(count); - } -} diff --git a/src/test/java/com/lambdaworks/redis/pubsub/PubSubRxTest.java b/src/test/java/com/lambdaworks/redis/pubsub/PubSubRxTest.java deleted file mode 100644 index 135ce84ec9..0000000000 --- a/src/test/java/com/lambdaworks/redis/pubsub/PubSubRxTest.java +++ /dev/null @@ -1,455 +0,0 @@ -package com.lambdaworks.redis.pubsub; - -import static com.google.code.tempusfugit.temporal.Duration.millis; -import static org.assertj.core.api.Assertions.assertThat; -import static org.assertj.core.api.Fail.fail; - -import java.util.Iterator; -import java.util.List; -import java.util.Map; -import java.util.concurrent.BlockingQueue; -import java.util.concurrent.TimeUnit; - -import org.junit.After; -import org.junit.Before; -import org.junit.Test; - -import com.lambdaworks.Delay; -import com.lambdaworks.Wait; -import com.lambdaworks.redis.AbstractRedisClientTest; -import com.lambdaworks.redis.FastShutdown; -import com.lambdaworks.redis.RedisClient; -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.api.rx.Success; -import com.lambdaworks.redis.internal.LettuceFactories; -import com.lambdaworks.redis.internal.LettuceLists; -import com.lambdaworks.redis.pubsub.api.rx.ChannelMessage; -import com.lambdaworks.redis.pubsub.api.rx.PatternMessage; -import com.lambdaworks.redis.pubsub.api.rx.RedisPubSubReactiveCommands; -import com.lambdaworks.redis.pubsub.api.sync.RedisPubSubCommands; - -import rx.Observable; -import rx.Subscription; -import rx.observables.BlockingObservable; - -/** - * @author Mark Paluch - */ -public class PubSubRxTest extends AbstractRedisClientTest implements RedisPubSubListener { - - private RedisPubSubReactiveCommands pubsub; - private RedisPubSubReactiveCommands pubsub2; - - private BlockingQueue channels; - private BlockingQueue patterns; - private BlockingQueue messages; - private BlockingQueue counts; - - private String channel = "channel0"; - private String pattern = "channel*"; - private String message = "msg!"; - - @Before - public void openPubSubConnection() throws Exception { - - pubsub = client.connectPubSub().reactive(); - pubsub2 = client.connectPubSub().reactive(); - pubsub.addListener(this); - channels = LettuceFactories.newBlockingQueue(); - patterns = LettuceFactories.newBlockingQueue(); - messages = LettuceFactories.newBlockingQueue(); - counts = LettuceFactories.newBlockingQueue(); - } - - @After - public void closePubSubConnection() throws Exception { - pubsub.close(); - pubsub2.close(); - } - - @Test - public void observeChannels() throws Exception { - - block(pubsub.subscribe(channel)); - - BlockingQueue> channelMessages = LettuceFactories.newBlockingQueue(); - - Subscription subscription = pubsub.observeChannels().doOnNext(channelMessages::add).subscribe(); - - redis.publish(channel, message); - redis.publish(channel, message); - redis.publish(channel, message); - - Wait.untilEquals(3, () -> channelMessages.size()).waitOrTimeout(); - assertThat(channelMessages).hasSize(3); - - 
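The PubSubCommandTest deleted above documents the pub/sub contract: a dedicated pub/sub connection registers a RedisPubSubListener (or the RedisPubSubAdapter convenience base class), subscribes to channels or patterns, and receives messages published from a regular connection; subscriptions are re-established after a reconnect. A short usage sketch based only on the calls shown in the deleted test (the URI is a placeholder; channel and message values are the ones the test uses):

import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.RedisURI;
import com.lambdaworks.redis.api.sync.RedisCommands;
import com.lambdaworks.redis.pubsub.RedisPubSubAdapter;
import com.lambdaworks.redis.pubsub.api.async.RedisPubSubAsyncCommands;

public class PubSubSketch {

    public static void main(String[] args) throws Exception {

        RedisClient client = RedisClient.create(RedisURI.Builder.redis("localhost", 6379).build());

        RedisPubSubAsyncCommands<String, String> pubsub = client.connectPubSub().async();

        // RedisPubSubAdapter lets us override only the callbacks we care about.
        pubsub.addListener(new RedisPubSubAdapter<String, String>() {
            @Override
            public void message(String channel, String message) {
                System.out.println(channel + ": " + message);
            }
        });

        pubsub.subscribe("channel0");

        // Publish from a separate, regular connection.
        RedisCommands<String, String> redis = client.connect().sync();
        redis.publish("channel0", "msg!");

        Thread.sleep(200); // give the listener a moment to receive the message

        redis.close();
        pubsub.close();
        client.shutdown();
    }
}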
subscription.unsubscribe(); - redis.publish(channel, message); - Delay.delay(millis(500)); - assertThat(channelMessages).hasSize(3); - - ChannelMessage channelMessage = channelMessages.take(); - assertThat(channelMessage.getChannel()).isEqualTo(channel); - assertThat(channelMessage.getMessage()).isEqualTo(message); - } - - @Test - public void observeChannelsUnsubscribe() throws Exception { - - block(pubsub.subscribe(channel)); - - BlockingQueue> channelMessages = LettuceFactories.newBlockingQueue(); - - pubsub.observeChannels().doOnNext(channelMessages::add).subscribe().unsubscribe(); - - block(redis.getStatefulConnection().reactive().publish(channel, message)); - block(redis.getStatefulConnection().reactive().publish(channel, message)); - - Delay.delay(millis(500)); - assertThat(channelMessages).isEmpty(); - } - - @Test - public void observePatterns() throws Exception { - - block(pubsub.psubscribe(pattern)); - - BlockingQueue> patternMessages = LettuceFactories.newBlockingQueue(); - - pubsub.observePatterns().doOnNext(patternMessages::add).subscribe(); - - redis.publish(channel, message); - redis.publish(channel, message); - redis.publish(channel, message); - - Wait.untilTrue(() -> patternMessages.size() == 3).waitOrTimeout(); - assertThat(patternMessages).hasSize(3); - - PatternMessage patternMessage = patternMessages.take(); - assertThat(patternMessage.getChannel()).isEqualTo(channel); - assertThat(patternMessage.getMessage()).isEqualTo(message); - assertThat(patternMessage.getPattern()).isEqualTo(pattern); - } - - @Test - public void observePatternsWithUnsubscribe() throws Exception { - - block(pubsub.psubscribe(pattern)); - - BlockingQueue> patternMessages = LettuceFactories.newBlockingQueue(); - - Subscription subscription = pubsub.observePatterns().doOnNext(patternMessages::add).subscribe(); - - redis.publish(channel, message); - redis.publish(channel, message); - redis.publish(channel, message); - - Wait.untilTrue(() -> patternMessages.size() == 3).waitOrTimeout(); - assertThat(patternMessages).hasSize(3); - subscription.unsubscribe(); - - redis.publish(channel, message); - redis.publish(channel, message); - redis.publish(channel, message); - - Delay.delay(millis(500)); - - assertThat(patternMessages).hasSize(3); - } - - @Test(timeout = 2000) - public void message() throws Exception { - - block(pubsub.subscribe(channel)); - assertThat(channels.take()).isEqualTo(channel); - - redis.publish(channel, message); - assertThat(channels.take()).isEqualTo(channel); - assertThat(messages.take()).isEqualTo(message); - } - - @Test(timeout = 2000) - public void pmessage() throws Exception { - - block(pubsub.psubscribe(pattern)); - assertThat(patterns.take()).isEqualTo(pattern); - - redis.publish(channel, message); - assertThat(patterns.take()).isEqualTo(pattern); - assertThat(channels.take()).isEqualTo(channel); - assertThat(messages.take()).isEqualTo(message); - - redis.publish("channel2", "msg 2!"); - assertThat(patterns.take()).isEqualTo(pattern); - assertThat(channels.take()).isEqualTo("channel2"); - assertThat(messages.take()).isEqualTo("msg 2!"); - } - - @Test(timeout = 2000) - public void psubscribe() throws Exception { - - Success sucess = first(pubsub.psubscribe(pattern)); - assertThat(sucess).isEqualTo(Success.Success); - - assertThat(patterns.take()).isEqualTo(pattern); - assertThat((long) counts.take()).isEqualTo(1); - } - - @Test(expected = IllegalArgumentException.class) - public void pubsubEmptyChannels() throws Exception { - - pubsub.subscribe(); - fail("Missing 
IllegalArgumentException: channels must not be empty"); - } - - @Test - public void pubsubChannels() throws Exception { - - block(pubsub.subscribe(channel)); - List result = first(pubsub2.pubsubChannels().toList()); - assertThat(result).contains(channel); - } - - @Test - public void pubsubMultipleChannels() throws Exception { - - block(pubsub.subscribe(channel, "channel1", "channel3")); - - List result = first(pubsub2.pubsubChannels().toList()); - assertThat(result).contains(channel, "channel1", "channel3"); - } - - @Test - public void pubsubChannelsWithArg() throws Exception { - - pubsub.subscribe(channel).subscribe(); - Wait.untilTrue(() -> first(pubsub2.pubsubChannels(pattern).filter(s -> channel.equals(s))) != null).waitOrTimeout(); - - String result = first(pubsub2.pubsubChannels(pattern).filter(s -> channel.equals(s))); - assertThat(result).isEqualToIgnoringCase(channel); - } - - @Test - public void pubsubNumsub() throws Exception { - - pubsub.subscribe(channel).subscribe(); - Wait.untilEquals(1, () -> first(pubsub2.pubsubNumsub(channel).toList()).size()).waitOrTimeout(); - - Map result = first(pubsub2.pubsubNumsub(channel)); - assertThat(result).hasSize(1); - assertThat(result.get(channel)).isGreaterThan(0); - } - - @Test - public void pubsubNumpat() throws Exception { - - Wait.untilEquals(0L, () -> first(pubsub2.pubsubNumpat())).waitOrTimeout(); - - pubsub.psubscribe(pattern).subscribe(); - Wait.untilEquals(1L, () -> redis.pubsubNumpat()).waitOrTimeout(); - - Long result = first(pubsub2.pubsubNumpat()); - assertThat(result.longValue()).isGreaterThan(0); - } - - @Test(timeout = 2000) - public void punsubscribe() throws Exception { - - pubsub.punsubscribe(pattern).subscribe(); - assertThat(patterns.take()).isEqualTo(pattern); - assertThat((long) counts.take()).isEqualTo(0); - - } - - @Test(timeout = 2000) - public void subscribe() throws Exception { - - pubsub.subscribe(channel).subscribe(); - assertThat(channels.take()).isEqualTo(channel); - assertThat((long) counts.take()).isGreaterThan(0); - } - - @Test(timeout = 2000) - public void unsubscribe() throws Exception { - - pubsub.unsubscribe(channel).subscribe(); - assertThat(channels.take()).isEqualTo(channel); - assertThat((long) counts.take()).isEqualTo(0); - - block(pubsub.unsubscribe()); - - assertThat(channels).isEmpty(); - assertThat(patterns).isEmpty(); - - } - - @Test - public void pubsubCloseOnClientShutdown() throws Exception { - - RedisClient redisClient = RedisClient.create(RedisURI.Builder.redis(host, port).build()); - RedisPubSubCommands connection = redisClient.connectPubSub().sync(); - FastShutdown.shutdown(redisClient); - - assertThat(connection.isOpen()).isFalse(); - } - - @Test(timeout = 2000) - public void utf8Channel() throws Exception { - - String channel = "channelλ"; - String message = "αβγ"; - - block(pubsub.subscribe(channel)); - assertThat(channels.take()).isEqualTo(channel); - - pubsub2.publish(channel, message).subscribe(); - assertThat(channels.take()).isEqualTo(channel); - assertThat(messages.take()).isEqualTo(message); - } - - @Test(timeout = 2000) - public void resubscribeChannelsOnReconnect() throws Exception { - - pubsub.subscribe(channel).subscribe(); - assertThat(channels.take()).isEqualTo(channel); - assertThat((long) counts.take()).isEqualTo(1); - - block(pubsub.quit()); - assertThat(channels.take()).isEqualTo(channel); - assertThat((long) counts.take()).isEqualTo(1); - - Wait.untilTrue(pubsub::isOpen).waitOrTimeout(); - - redis.publish(channel, message); - 
assertThat(channels.take()).isEqualTo(channel); - assertThat(messages.take()).isEqualTo(message); - } - - @Test(timeout = 2000) - public void resubscribePatternsOnReconnect() throws Exception { - - pubsub.psubscribe(pattern).subscribe(); - assertThat(patterns.take()).isEqualTo(pattern); - assertThat((long) counts.take()).isEqualTo(1); - - block(pubsub.quit()); - - assertThat(patterns.take()).isEqualTo(pattern); - assertThat((long) counts.take()).isEqualTo(1); - - Wait.untilTrue(pubsub::isOpen).waitOrTimeout(); - - pubsub2.publish(channel, message).subscribe(); - assertThat(channels.take()).isEqualTo(channel); - assertThat(messages.take()).isEqualTo(message); - } - - @Test(timeout = 2000) - public void adapter() throws Exception { - - final BlockingQueue localCounts = LettuceFactories.newBlockingQueue(); - - RedisPubSubAdapter adapter = new RedisPubSubAdapter() { - @Override - public void subscribed(String channel, long count) { - super.subscribed(channel, count); - localCounts.add(count); - } - - @Override - public void unsubscribed(String channel, long count) { - super.unsubscribed(channel, count); - localCounts.add(count); - } - }; - - pubsub.addListener(adapter); - pubsub.subscribe(channel).subscribe(); - pubsub.psubscribe(pattern).subscribe(); - - assertThat((long) localCounts.take()).isEqualTo(1L); - - pubsub2.publish(channel, message).subscribe(); - pubsub.punsubscribe(pattern).subscribe(); - pubsub.unsubscribe(channel).subscribe(); - - assertThat((long) localCounts.take()).isEqualTo(0L); - } - - @Test(timeout = 2000) - public void removeListener() throws Exception { - - pubsub.subscribe(channel).subscribe(); - assertThat(channels.take()).isEqualTo(channel); - - pubsub2.publish(channel, message).subscribe(); - assertThat(channels.take()).isEqualTo(channel); - assertThat(messages.take()).isEqualTo(message); - - pubsub.removeListener(this); - - pubsub2.publish(channel, message).subscribe(); - assertThat(channels.poll(10, TimeUnit.MILLISECONDS)).isNull(); - assertThat(messages.poll(10, TimeUnit.MILLISECONDS)).isNull(); - } - - // RedisPubSubListener implementation - @Override - public void message(String channel, String message) { - - channels.add(channel); - messages.add(message); - } - - @Override - public void message(String pattern, String channel, String message) { - patterns.add(pattern); - channels.add(channel); - messages.add(message); - } - - @Override - public void subscribed(String channel, long count) { - channels.add(channel); - counts.add(count); - } - - @Override - public void psubscribed(String pattern, long count) { - patterns.add(pattern); - counts.add(count); - } - - @Override - public void unsubscribed(String channel, long count) { - channels.add(channel); - counts.add(count); - } - - @Override - public void punsubscribed(String pattern, long count) { - patterns.add(pattern); - counts.add(count); - } - - protected void block(Observable observable) { - observable.toBlocking().last(); - } - - protected T first(Observable observable) { - - BlockingObservable blocking = observable.toBlocking(); - Iterator iterator = blocking.getIterator(); - if (iterator.hasNext()) { - return iterator.next(); - } - return null; - } - - protected List all(Observable observable) { - - BlockingObservable blocking = observable.toBlocking(); - Iterator iterator = blocking.getIterator(); - return LettuceLists.newList(iterator); - } -} diff --git a/src/test/java/com/lambdaworks/redis/reliability/AtLeastOnceTest.java b/src/test/java/com/lambdaworks/redis/reliability/AtLeastOnceTest.java deleted 
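PubSubRxTest, removed here, covers the reactive variant of the same contract: observeChannels() and observePatterns() expose incoming messages as Rx Observables of ChannelMessage and PatternMessage, and unsubscribing from the Observable stops delivery without tearing down the Redis subscription. A brief sketch assembled from the calls visible in the deleted test (URI and channel values are placeholders or taken from the test):

import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.pubsub.api.rx.RedisPubSubReactiveCommands;

import rx.Subscription;

public class PubSubRxSketch {

    public static void main(String[] args) throws Exception {

        RedisClient client = RedisClient.create("redis://localhost");

        RedisPubSubReactiveCommands<String, String> pubsub = client.connectPubSub().reactive();
        pubsub.subscribe("channel0").toBlocking().last(); // block until the subscription is confirmed

        // Each published message arrives as a ChannelMessage carrying channel and payload.
        Subscription subscription = pubsub.observeChannels()
                .doOnNext(msg -> System.out.println(msg.getChannel() + ": " + msg.getMessage()))
                .subscribe();

        client.connect().sync().publish("channel0", "msg!");

        Thread.sleep(200);
        subscription.unsubscribe(); // stop observing; further messages are no longer delivered here
        pubsub.close();
        client.shutdown();
    }
}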
file mode 100644 index fe3818e2d3..0000000000 --- a/src/test/java/com/lambdaworks/redis/reliability/AtLeastOnceTest.java +++ /dev/null @@ -1,408 +0,0 @@ -package com.lambdaworks.redis.reliability; - -import static org.assertj.core.api.Assertions.assertThat; -import static org.junit.Assume.assumeTrue; - -import java.lang.reflect.InvocationHandler; -import java.lang.reflect.Proxy; -import java.util.Queue; -import java.util.concurrent.CountDownLatch; -import java.util.concurrent.ExecutionException; -import java.util.concurrent.TimeUnit; - -import com.lambdaworks.Connections; -import io.netty.handler.codec.EncoderException; -import io.netty.util.Version; -import org.junit.Before; -import org.junit.Test; -import org.springframework.test.util.ReflectionTestUtils; - -import com.lambdaworks.Wait; -import com.lambdaworks.redis.AbstractRedisClientTest; -import com.lambdaworks.redis.ClientOptions; -import com.lambdaworks.redis.RedisAsyncConnection; -import com.lambdaworks.redis.RedisChannelHandler; -import com.lambdaworks.redis.RedisChannelWriter; -import com.lambdaworks.redis.RedisCommandTimeoutException; -import com.lambdaworks.redis.RedisConnection; -import com.lambdaworks.redis.RedisException; -import com.lambdaworks.redis.RedisFuture; -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.codec.Utf8StringCodec; -import com.lambdaworks.redis.output.IntegerOutput; -import com.lambdaworks.redis.output.StatusOutput; -import com.lambdaworks.redis.protocol.AsyncCommand; -import com.lambdaworks.redis.protocol.Command; -import com.lambdaworks.redis.protocol.CommandArgs; -import com.lambdaworks.redis.protocol.CommandType; -import com.lambdaworks.redis.protocol.ConnectionWatchdog; - -import io.netty.buffer.ByteBuf; -import io.netty.channel.Channel; - -/** - * @author Mark Paluch - */ -public class AtLeastOnceTest extends AbstractRedisClientTest { - - protected final Utf8StringCodec CODEC = new Utf8StringCodec(); - protected String key = "key"; - - @Before - public void before() throws Exception { - client.setOptions(ClientOptions.builder().autoReconnect(true).build()); - - // needs to be increased on slow systems...perhaps... 
- client.setDefaultTimeout(3, TimeUnit.SECONDS); - - RedisCommands connection = client.connect().sync(); - connection.flushall(); - connection.flushdb(); - connection.close(); - } - - @Test - public void connectionIsConnectedAfterConnect() throws Exception { - - RedisCommands connection = client.connect().sync(); - - assertThat(getConnectionState(getRedisChannelHandler(connection))); - - connection.close(); - } - - @Test - public void reconnectIsActiveHandler() throws Exception { - - RedisCommands connection = client.connect().sync(); - - ConnectionWatchdog connectionWatchdog = Connections.getConnectionWatchdog(connection.getStatefulConnection()); - assertThat(connectionWatchdog).isNotNull(); - assertThat(connectionWatchdog.isListenOnChannelInactive()).isTrue(); - assertThat(connectionWatchdog.isReconnectSuspended()).isFalse(); - - connection.close(); - } - - @Test - public void basicOperations() throws Exception { - - RedisCommands connection = client.connect().sync(); - - connection.set(key, "1"); - assertThat(connection.get("key")).isEqualTo("1"); - - connection.close(); - } - - @Test - public void noBufferedCommandsAfterExecute() throws Exception { - - RedisCommands connection = client.connect().sync(); - - connection.set(key, "1"); - - assertThat(getQueue(getRedisChannelHandler(connection))).isEmpty(); - assertThat(getCommandBuffer(getRedisChannelHandler(connection))).isEmpty(); - - connection.close(); - } - - @Test - public void commandIsExecutedOnce() throws Exception { - - RedisCommands connection = client.connect().sync(); - - connection.set(key, "1"); - connection.incr(key); - assertThat(connection.get(key)).isEqualTo("2"); - - connection.incr(key); - assertThat(connection.get(key)).isEqualTo("3"); - - connection.incr(key); - assertThat(connection.get(key)).isEqualTo("4"); - - connection.close(); - } - - @Test - public void commandFailsWhenFailOnEncode() throws Exception { - - RedisCommands connection = client.connect().sync(); - RedisChannelWriter channelWriter = getRedisChannelHandler(connection).getChannelWriter(); - RedisCommands verificationConnection = client.connect().sync(); - - connection.set(key, "1"); - AsyncCommand working = new AsyncCommand<>(new Command<>(CommandType.INCR, new IntegerOutput( - CODEC), new CommandArgs<>(CODEC).addKey(key))); - channelWriter.write(working); - assertThat(working.await(2, TimeUnit.SECONDS)).isTrue(); - assertThat(connection.get(key)).isEqualTo("2"); - - AsyncCommand command = new AsyncCommand(new Command<>(CommandType.INCR, - new IntegerOutput(CODEC), new CommandArgs<>(CODEC).addKey(key))) { - - @Override - public void encode(ByteBuf buf) { - throw new IllegalStateException("I want to break free"); - } - }; - - channelWriter.write(command); - - assertThat(command.await(2, TimeUnit.SECONDS)).isTrue(); - assertThat(command.isCancelled()).isFalse(); - assertThat(getException(command)).isInstanceOf(EncoderException.class); - - assertThat(verificationConnection.get(key)).isEqualTo("2"); - - assertThat(getQueue(getRedisChannelHandler(connection))).isNotEmpty(); - - connection.close(); - } - - @Test - public void commandNotFailedChannelClosesWhileFlush() throws Exception { - - assumeTrue(Version.identify().get("netty-transport").artifactVersion().startsWith("4.0.2")); - - RedisCommands connection = client.connect().sync(); - RedisCommands verificationConnection = client.connect().sync(); - RedisChannelWriter channelWriter = getRedisChannelHandler(connection).getChannelWriter(); - - connection.set(key, "1"); - 
assertThat(verificationConnection.get(key)).isEqualTo("1"); - - final CountDownLatch block = new CountDownLatch(1); - - ConnectionWatchdog connectionWatchdog = Connections.getConnectionWatchdog(connection.getStatefulConnection()); - - AsyncCommand command = getBlockOnEncodeCommand(block); - - channelWriter.write(command); - - connectionWatchdog.setReconnectSuspended(true); - - Channel channel = getChannel(getRedisChannelHandler(connection)); - channel.unsafe().disconnect(channel.newPromise()); - - assertThat(channel.isOpen()).isFalse(); - assertThat(command.isCancelled()).isFalse(); - assertThat(command.isDone()).isFalse(); - block.countDown(); - assertThat(command.await(2, TimeUnit.SECONDS)).isFalse(); - assertThat(command.isCancelled()).isFalse(); - assertThat(command.isDone()).isFalse(); - - assertThat(verificationConnection.get(key)).isEqualTo("1"); - - assertThat(getQueue(getRedisChannelHandler(connection))).isEmpty(); - assertThat(getCommandBuffer(getRedisChannelHandler(connection))).isNotEmpty().contains(command); - - connection.close(); - } - - @Test - public void commandRetriedChannelClosesWhileFlush() throws Exception { - - assumeTrue(Version.identify().get("netty-transport").artifactVersion().startsWith("4.0.2")); - - RedisCommands connection = client.connect().sync(); - RedisCommands verificationConnection = client.connect().sync(); - RedisChannelWriter channelWriter = getRedisChannelHandler(connection).getChannelWriter(); - - connection.set(key, "1"); - assertThat(verificationConnection.get(key)).isEqualTo("1"); - - final CountDownLatch block = new CountDownLatch(1); - - ConnectionWatchdog connectionWatchdog = Connections.getConnectionWatchdog(connection.getStatefulConnection()); - - AsyncCommand command = getBlockOnEncodeCommand(block); - - channelWriter.write(command); - - connectionWatchdog.setReconnectSuspended(true); - - Channel channel = getChannel(getRedisChannelHandler(connection)); - channel.unsafe().disconnect(channel.newPromise()); - - assertThat(channel.isOpen()).isFalse(); - assertThat(command.isCancelled()).isFalse(); - assertThat(command.isDone()).isFalse(); - block.countDown(); - assertThat(command.await(2, TimeUnit.SECONDS)).isFalse(); - - connectionWatchdog.setReconnectSuspended(false); - connectionWatchdog.scheduleReconnect(); - - assertThat(command.await(2, TimeUnit.SECONDS)).isTrue(); - assertThat(command.isCancelled()).isFalse(); - assertThat(command.isDone()).isTrue(); - - assertThat(verificationConnection.get(key)).isEqualTo("2"); - - assertThat(getQueue(getRedisChannelHandler(connection))).isEmpty(); - assertThat(getCommandBuffer(getRedisChannelHandler(connection))).isEmpty(); - - connection.close(); - verificationConnection.close(); - } - - protected AsyncCommand getBlockOnEncodeCommand(final CountDownLatch block) { - return new AsyncCommand(new Command<>(CommandType.INCR, new IntegerOutput(CODEC), - new CommandArgs<>(CODEC).addKey(key))) { - - @Override - public void encode(ByteBuf buf) { - try { - block.await(); - } catch (InterruptedException e) { - } - super.encode(buf); - } - }; - } - - @Test - public void commandFailsDuringDecode() throws Exception { - - RedisCommands connection = client.connect().sync(); - RedisChannelWriter channelWriter = getRedisChannelHandler(connection).getChannelWriter(); - RedisCommands verificationConnection = client.connect().sync(); - - connection.set(key, "1"); - - AsyncCommand command = new AsyncCommand(new Command<>(CommandType.INCR, new StatusOutput<>( - CODEC), new CommandArgs<>(CODEC).addKey(key))); - - 
channelWriter.write(command); - - assertThat(command.await(2, TimeUnit.SECONDS)).isTrue(); - assertThat(command.isCancelled()).isFalse(); - assertThat(command.isDone()).isTrue(); - assertThat(getException(command)).isInstanceOf(IllegalStateException.class); - - assertThat(verificationConnection.get(key)).isEqualTo("2"); - assertThat(connection.get(key)).isEqualTo("2"); - - connection.close(); - verificationConnection.close(); - } - - @Test - public void commandCancelledOverSyncAPIAfterConnectionIsDisconnected() throws Exception { - - RedisCommands connection = client.connect().sync(); - RedisCommands verificationConnection = client.connect().sync(); - - connection.set(key, "1"); - - ConnectionWatchdog connectionWatchdog = Connections.getConnectionWatchdog(connection.getStatefulConnection()); - connectionWatchdog.setListenOnChannelInactive(false); - - connection.quit(); - Wait.untilTrue(() -> !connection.isOpen()).waitOrTimeout(); - - try { - connection.incr(key); - } catch (RedisException e) { - assertThat(e).isExactlyInstanceOf(RedisCommandTimeoutException.class); - } - - assertThat(verificationConnection.get("key")).isEqualTo("1"); - - assertThat(getQueue(getRedisChannelHandler(connection))).isEmpty(); - assertThat(getCommandBuffer(getRedisChannelHandler(connection)).size()).isGreaterThan(0); - - connectionWatchdog.setListenOnChannelInactive(true); - connectionWatchdog.scheduleReconnect(); - - while (!getCommandBuffer(getRedisChannelHandler(connection)).isEmpty() - || !getQueue(getRedisChannelHandler(connection)).isEmpty()) { - Thread.sleep(10); - } - - assertThat(connection.get(key)).isEqualTo("1"); - - connection.close(); - verificationConnection.close(); - } - - @Test - public void retryAfterConnectionIsDisconnected() throws Exception { - - RedisAsyncConnection connection = client.connectAsync(); - RedisChannelHandler redisChannelHandler = (RedisChannelHandler) connection.getStatefulConnection(); - RedisCommands verificationConnection = client.connect().sync(); - - connection.set(key, "1").get(); - - ConnectionWatchdog connectionWatchdog = Connections.getConnectionWatchdog(connection.getStatefulConnection()); - connectionWatchdog.setListenOnChannelInactive(false); - - connection.quit(); - while (connection.isOpen()) { - Thread.sleep(100); - } - - assertThat(connection.incr(key).await(1, TimeUnit.SECONDS)).isFalse(); - - assertThat(verificationConnection.get("key")).isEqualTo("1"); - - assertThat(getQueue(redisChannelHandler)).isEmpty(); - assertThat(getCommandBuffer(redisChannelHandler).size()).isGreaterThan(0); - - connectionWatchdog.setListenOnChannelInactive(true); - connectionWatchdog.scheduleReconnect(); - - while (!getCommandBuffer(redisChannelHandler).isEmpty() || !getQueue(redisChannelHandler).isEmpty()) { - Thread.sleep(10); - } - - assertThat(connection.get(key).get()).isEqualTo("2"); - assertThat(verificationConnection.get(key)).isEqualTo("2"); - - connection.close(); - verificationConnection.close(); - } - - private Throwable getException(RedisFuture command) { - try { - command.get(); - } catch (InterruptedException e) { - return e; - } catch (ExecutionException e) { - return e.getCause(); - } - return null; - } - - private RedisChannelHandler getRedisChannelHandler(RedisConnection sync) { - - InvocationHandler invocationHandler = Proxy.getInvocationHandler(sync); - return (RedisChannelHandler) ReflectionTestUtils.getField(invocationHandler, "connection"); - } - - private T getHandler(Class handlerType, RedisChannelHandler channelHandler) { - Channel channel = 
getChannel(channelHandler); - return (T) channel.pipeline().get((Class) handlerType); - } - - private Channel getChannel(RedisChannelHandler channelHandler) { - return (Channel) ReflectionTestUtils.getField(channelHandler.getChannelWriter(), "channel"); - } - - private Queue getQueue(RedisChannelHandler channelHandler) { - return (Queue) ReflectionTestUtils.getField(channelHandler.getChannelWriter(), "queue"); - } - - private Queue getCommandBuffer(RedisChannelHandler channelHandler) { - return (Queue) ReflectionTestUtils.getField(channelHandler.getChannelWriter(), "commandBuffer"); - } - - private String getConnectionState(RedisChannelHandler channelHandler) { - return ReflectionTestUtils.getField(channelHandler.getChannelWriter(), "lifecycleState").toString(); - } -} diff --git a/src/test/java/com/lambdaworks/redis/reliability/AtMostOnceTest.java b/src/test/java/com/lambdaworks/redis/reliability/AtMostOnceTest.java deleted file mode 100644 index b3e738f930..0000000000 --- a/src/test/java/com/lambdaworks/redis/reliability/AtMostOnceTest.java +++ /dev/null @@ -1,305 +0,0 @@ -package com.lambdaworks.redis.reliability; - -import static org.assertj.core.api.Assertions.assertThat; -import static org.junit.Assume.assumeTrue; - -import java.lang.reflect.InvocationHandler; -import java.lang.reflect.Proxy; -import java.util.Queue; -import java.util.concurrent.CountDownLatch; -import java.util.concurrent.ExecutionException; -import java.util.concurrent.TimeUnit; -import java.util.concurrent.atomic.AtomicLong; - -import io.netty.handler.codec.EncoderException; -import io.netty.util.Version; -import org.junit.Before; -import org.junit.Test; -import org.springframework.test.util.ReflectionTestUtils; - -import com.lambdaworks.Wait; -import com.lambdaworks.redis.AbstractRedisClientTest; -import com.lambdaworks.redis.ClientOptions; -import com.lambdaworks.redis.RedisChannelHandler; -import com.lambdaworks.redis.RedisChannelWriter; -import com.lambdaworks.redis.RedisConnection; -import com.lambdaworks.redis.RedisException; -import com.lambdaworks.redis.RedisFuture; -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.codec.Utf8StringCodec; -import com.lambdaworks.redis.output.IntegerOutput; -import com.lambdaworks.redis.output.StatusOutput; -import com.lambdaworks.redis.protocol.AsyncCommand; -import com.lambdaworks.redis.protocol.Command; -import com.lambdaworks.redis.protocol.CommandArgs; -import com.lambdaworks.redis.protocol.CommandType; -import com.lambdaworks.redis.protocol.ConnectionWatchdog; - -import io.netty.buffer.ByteBuf; -import io.netty.channel.Channel; - -/** - * @author Mark Paluch - */ -@SuppressWarnings("rawtypes") -public class AtMostOnceTest extends AbstractRedisClientTest { - - protected final Utf8StringCodec CODEC = new Utf8StringCodec(); - protected String key = "key"; - - @Before - public void before() throws Exception { - client.setOptions(ClientOptions.builder().autoReconnect(false).build()); - - // needs to be increased on slow systems...perhaps... 
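The AtLeastOnceTest removed above pins down the at-least-once delivery mode: with autoReconnect(true) a ConnectionWatchdog is installed, commands that could not be written or acknowledged are kept in the command buffer across a disconnect, and they are retried once the connection has been re-established. A hedged configuration sketch using only the options shown in the deleted test (the URI is a placeholder):

import java.util.concurrent.TimeUnit;

import com.lambdaworks.redis.ClientOptions;
import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.api.sync.RedisCommands;

public class AtLeastOnceSketch {

    public static void main(String[] args) {

        RedisClient client = RedisClient.create("redis://localhost");

        // autoReconnect(true) enables the ConnectionWatchdog: buffered commands are
        // replayed after a successful reconnect (at-least-once semantics).
        client.setOptions(ClientOptions.builder().autoReconnect(true).build());
        client.setDefaultTimeout(3, TimeUnit.SECONDS);

        RedisCommands<String, String> redis = client.connect().sync();
        redis.set("key", "1");
        redis.incr("key");
        System.out.println(redis.get("key")); // "2"

        redis.close();
        client.shutdown();
    }
}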
- client.setDefaultTimeout(3, TimeUnit.SECONDS); - - RedisCommands connection = client.connect().sync(); - connection.flushall(); - connection.flushdb(); - connection.close(); - } - - @Test - public void connectionIsConnectedAfterConnect() throws Exception { - - RedisCommands connection = client.connect().sync(); - - assertThat(getConnectionState(getRedisChannelHandler(connection))); - - connection.close(); - } - - @Test - public void noReconnectHandler() throws Exception { - - RedisCommands connection = client.connect().sync(); - - assertThat(getHandler(RedisChannelWriter.class, getRedisChannelHandler(connection))).isNotNull(); - assertThat(getHandler(ConnectionWatchdog.class, getRedisChannelHandler(connection))).isNull(); - - connection.close(); - } - - @Test - public void basicOperations() throws Exception { - - RedisCommands connection = client.connect().sync(); - - connection.set(key, "1"); - assertThat(connection.get("key")).isEqualTo("1"); - - connection.close(); - } - - @Test - public void noBufferedCommandsAfterExecute() throws Exception { - - RedisCommands connection = client.connect().sync(); - - connection.set(key, "1"); - - assertThat(getQueue(getRedisChannelHandler(connection))).isEmpty(); - assertThat(getCommandBuffer(getRedisChannelHandler(connection))).isEmpty(); - - connection.close(); - } - - @Test - public void commandIsExecutedOnce() throws Exception { - - RedisCommands connection = client.connect().sync(); - - connection.set(key, "1"); - connection.incr(key); - assertThat(connection.get(key)).isEqualTo("2"); - - connection.incr(key); - assertThat(connection.get(key)).isEqualTo("3"); - - connection.incr(key); - assertThat(connection.get(key)).isEqualTo("4"); - - connection.close(); - } - - @Test - public void commandNotExecutedFailsOnEncode() throws Exception { - - RedisCommands connection = client.connect().sync(); - RedisChannelWriter channelWriter = getRedisChannelHandler(connection).getChannelWriter(); - - connection.set(key, "1"); - AsyncCommand working = new AsyncCommand<>(new Command(CommandType.INCR, - new IntegerOutput(CODEC), new CommandArgs(CODEC).addKey(key))); - channelWriter.write(working); - assertThat(working.await(2, TimeUnit.SECONDS)).isTrue(); - assertThat(connection.get(key)).isEqualTo("2"); - - AsyncCommand command = new AsyncCommand( - new Command(CommandType.INCR, new IntegerOutput(CODEC), - new CommandArgs(CODEC).addKey(key))) { - - @Override - public void encode(ByteBuf buf) { - throw new IllegalStateException("I want to break free"); - } - }; - - channelWriter.write(command); - - assertThat(command.await(2, TimeUnit.SECONDS)).isTrue(); - assertThat(command.isCancelled()).isFalse(); - assertThat(getException(command)).isInstanceOf(EncoderException.class); - assertThat(getQueue(getRedisChannelHandler(connection))).isNotEmpty(); - getQueue(getRedisChannelHandler(connection)).clear(); - - assertThat(connection.get(key)).isEqualTo("2"); - - assertThat(getQueue(getRedisChannelHandler(connection))).isEmpty(); - assertThat(getCommandBuffer(getRedisChannelHandler(connection))).isEmpty(); - - connection.close(); - } - - @Test - public void commandNotExecutedChannelClosesWhileFlush() throws Exception { - - assumeTrue(Version.identify().get("netty-transport").artifactVersion().startsWith("4.0.2")); - - RedisCommands connection = client.connect().sync(); - RedisCommands verificationConnection = client.connect().sync(); - RedisChannelWriter channelWriter = getRedisChannelHandler(connection).getChannelWriter(); - - connection.set(key, "1"); - 
assertThat(verificationConnection.get(key)).isEqualTo("1"); - - final CountDownLatch block = new CountDownLatch(1); - - AsyncCommand command = new AsyncCommand(new Command<>(CommandType.INCR, - new IntegerOutput(CODEC), new CommandArgs<>(CODEC).addKey(key))) { - - @Override - public void encode(ByteBuf buf) { - try { - block.await(); - } catch (InterruptedException e) { - } - super.encode(buf); - } - }; - - channelWriter.write(command); - - Channel channel = getChannel(getRedisChannelHandler(connection)); - channel.unsafe().disconnect(channel.newPromise()); - - assertThat(channel.isOpen()).isFalse(); - assertThat(command.isCancelled()).isFalse(); - assertThat(command.isDone()).isFalse(); - block.countDown(); - assertThat(command.await(2, TimeUnit.SECONDS)).isTrue(); - assertThat(command.isCancelled()).isFalse(); - assertThat(command.isDone()).isTrue(); - - assertThat(verificationConnection.get(key)).isEqualTo("1"); - - assertThat(getQueue(getRedisChannelHandler(connection))).isEmpty(); - assertThat(getCommandBuffer(getRedisChannelHandler(connection))).isEmpty(); - - connection.close(); - } - - @Test - public void commandFailsDuringDecode() throws Exception { - - RedisCommands connection = client.connect().sync(); - RedisChannelWriter channelWriter = getRedisChannelHandler(connection).getChannelWriter(); - RedisCommands verificationConnection = client.connect().sync(); - - connection.set(key, "1"); - - AsyncCommand command = new AsyncCommand<>(new Command<>(CommandType.INCR, new StatusOutput<>( - CODEC), new CommandArgs<>(CODEC).addKey(key))); - - channelWriter.write(command); - - assertThat(command.await(2, TimeUnit.SECONDS)).isTrue(); - assertThat(command.isCancelled()).isFalse(); - assertThat(getException(command)).isInstanceOf(IllegalStateException.class); - - assertThat(verificationConnection.get(key)).isEqualTo("2"); - assertThat(connection.get(key)).isEqualTo("2"); - - connection.close(); - } - - @Test - public void noCommandsExecutedAfterConnectionIsDisconnected() throws Exception { - - RedisCommands connection = client.connect().sync(); - connection.quit(); - - Wait.untilTrue(() -> !connection.isOpen()).waitOrTimeout(); - - try { - connection.incr(key); - } catch (RedisException e) { - assertThat(e).isInstanceOf(RedisException.class); - } - - connection.close(); - - RedisCommands connection2 = client.connect().sync(); - connection2.quit(); - - try { - - Wait.untilTrue(() -> !connection.isOpen()).waitOrTimeout(); - - connection2.incr(key); - } catch (Exception e) { - assertThat(e).isExactlyInstanceOf(RedisException.class).hasMessageContaining("not connected"); - } - - connection2.close(); - } - - private Throwable getException(RedisFuture command) { - try { - command.get(); - } catch (InterruptedException e) { - return e; - } catch (ExecutionException e) { - return e.getCause(); - } - return null; - } - - private RedisChannelHandler getRedisChannelHandler(RedisConnection sync) { - - InvocationHandler invocationHandler = Proxy.getInvocationHandler(sync); - return (RedisChannelHandler) ReflectionTestUtils.getField(invocationHandler, "connection"); - } - - private T getHandler(Class handlerType, RedisChannelHandler channelHandler) { - Channel channel = getChannel(channelHandler); - return (T) channel.pipeline().get((Class) handlerType); - } - - private Channel getChannel(RedisChannelHandler channelHandler) { - return (Channel) ReflectionTestUtils.getField(channelHandler.getChannelWriter(), "channel"); - } - - private Queue getQueue(RedisChannelHandler channelHandler) { - return (Queue) 
ReflectionTestUtils.getField(channelHandler.getChannelWriter(), "queue"); - } - - private Queue getCommandBuffer(RedisChannelHandler channelHandler) { - return (Queue) ReflectionTestUtils.getField(channelHandler.getChannelWriter(), "commandBuffer"); - } - - private String getConnectionState(RedisChannelHandler channelHandler) { - return ReflectionTestUtils.getField(channelHandler.getChannelWriter(), "lifecycleState").toString(); - } -} diff --git a/src/test/java/com/lambdaworks/redis/resource/ConstantDelayTest.java b/src/test/java/com/lambdaworks/redis/resource/ConstantDelayTest.java deleted file mode 100644 index 200492beeb..0000000000 --- a/src/test/java/com/lambdaworks/redis/resource/ConstantDelayTest.java +++ /dev/null @@ -1,36 +0,0 @@ -package com.lambdaworks.redis.resource; - -import static org.assertj.core.api.Assertions.assertThat; - -import java.util.concurrent.TimeUnit; - -import org.junit.Test; - -/** - * @author Mark Paluch - */ -public class ConstantDelayTest { - - @Test(expected = IllegalArgumentException.class) - public void shouldNotCreateIfDelayIsNegative() throws Exception { - Delay.constant(-1, TimeUnit.MILLISECONDS); - } - - @Test - public void shouldCreateZeroDelay() throws Exception { - - Delay delay = Delay.constant(0, TimeUnit.MILLISECONDS); - - assertThat(delay.createDelay(0)).isEqualTo(0); - assertThat(delay.createDelay(5)).isEqualTo(0); - } - - @Test - public void shouldCreateConstantDelay() throws Exception { - - Delay delay = Delay.constant(100, TimeUnit.MILLISECONDS); - - assertThat(delay.createDelay(0)).isEqualTo(100); - assertThat(delay.createDelay(5)).isEqualTo(100); - } -} \ No newline at end of file diff --git a/src/test/java/com/lambdaworks/redis/resource/DefaultClientResourcesTest.java b/src/test/java/com/lambdaworks/redis/resource/DefaultClientResourcesTest.java deleted file mode 100644 index f52cda10e7..0000000000 --- a/src/test/java/com/lambdaworks/redis/resource/DefaultClientResourcesTest.java +++ /dev/null @@ -1,150 +0,0 @@ -package com.lambdaworks.redis.resource; - -import static com.google.code.tempusfugit.temporal.Duration.seconds; -import static com.google.code.tempusfugit.temporal.Timeout.timeout; -import static org.assertj.core.api.Assertions.assertThat; -import static org.mockito.Mockito.mock; -import static org.mockito.Mockito.verify; -import static org.mockito.Mockito.verifyNoMoreInteractions; -import static org.mockito.Mockito.verifyZeroInteractions; - -import java.util.concurrent.TimeUnit; - -import org.junit.Test; - -import com.google.code.tempusfugit.temporal.Condition; -import com.google.code.tempusfugit.temporal.WaitFor; -import com.lambdaworks.redis.event.Event; -import com.lambdaworks.redis.event.EventBus; -import com.lambdaworks.redis.metrics.CommandLatencyCollector; -import com.lambdaworks.redis.metrics.DefaultCommandLatencyCollectorOptions; - -import io.netty.channel.nio.NioEventLoopGroup; -import io.netty.util.concurrent.EventExecutorGroup; -import io.netty.util.concurrent.Future; -import rx.observers.TestSubscriber; - -/** - * @author Mark Paluch - */ -public class DefaultClientResourcesTest { - - @Test - public void testDefaults() throws Exception { - - DefaultClientResources sut = DefaultClientResources.create(); - - assertThat(sut.commandLatencyCollector()).isNotNull(); - assertThat(sut.commandLatencyCollector().isEnabled()).isTrue(); - - EventExecutorGroup eventExecutors = sut.eventExecutorGroup(); - NioEventLoopGroup eventLoopGroup = sut.eventLoopGroupProvider().allocate(NioEventLoopGroup.class); - - 
eventExecutors.next().submit(mock(Runnable.class)); - eventLoopGroup.next().submit(mock(Runnable.class)); - - assertThat(sut.shutdown(0, 0, TimeUnit.SECONDS).get()).isTrue(); - - assertThat(eventExecutors.isTerminated()).isTrue(); - assertThat(eventLoopGroup.isTerminated()).isTrue(); - - Future shutdown = sut.eventLoopGroupProvider().shutdown(0, 0, TimeUnit.SECONDS); - assertThat(shutdown.get()).isTrue(); - - assertThat(sut.commandLatencyCollector().isEnabled()).isFalse(); - } - - @Test - public void testBuilder() throws Exception { - - DefaultClientResources sut = new DefaultClientResources.Builder().ioThreadPoolSize(4).computationThreadPoolSize(4) - .commandLatencyCollectorOptions(DefaultCommandLatencyCollectorOptions.disabled()).build(); - - EventExecutorGroup eventExecutors = sut.eventExecutorGroup(); - NioEventLoopGroup eventLoopGroup = sut.eventLoopGroupProvider().allocate(NioEventLoopGroup.class); - - assertThat(eventExecutors.iterator()).hasSize(4); - assertThat(eventLoopGroup.executorCount()).isEqualTo(4); - assertThat(sut.ioThreadPoolSize()).isEqualTo(4); - assertThat(sut.commandLatencyCollector()).isNotNull(); - assertThat(sut.commandLatencyCollector().isEnabled()).isFalse(); - - assertThat(sut.shutdown(0, 0, TimeUnit.MILLISECONDS).get()).isTrue(); - } - - @Test - public void testDnsResolver() throws Exception { - - DirContextDnsResolver dirContextDnsResolver = new DirContextDnsResolver("8.8.8.8"); - - DefaultClientResources sut = new DefaultClientResources.Builder().dnsResolver(dirContextDnsResolver).build(); - - assertThat(sut.dnsResolver()).isEqualTo(dirContextDnsResolver); - } - - @Test - public void testProvidedResources() throws Exception { - - EventExecutorGroup executorMock = mock(EventExecutorGroup.class); - EventLoopGroupProvider groupProviderMock = mock(EventLoopGroupProvider.class); - EventBus eventBusMock = mock(EventBus.class); - CommandLatencyCollector latencyCollectorMock = mock(CommandLatencyCollector.class); - - DefaultClientResources sut = new DefaultClientResources.Builder().eventExecutorGroup(executorMock) - .eventLoopGroupProvider(groupProviderMock).eventBus(eventBusMock).commandLatencyCollector(latencyCollectorMock) - .build(); - - assertThat(sut.eventExecutorGroup()).isSameAs(executorMock); - assertThat(sut.eventLoopGroupProvider()).isSameAs(groupProviderMock); - assertThat(sut.eventBus()).isSameAs(eventBusMock); - - assertThat(sut.shutdown().get()).isTrue(); - - verifyZeroInteractions(executorMock); - verifyZeroInteractions(groupProviderMock); - verify(latencyCollectorMock).isEnabled(); - verifyNoMoreInteractions(latencyCollectorMock); - } - - @Test - public void testSmallPoolSize() throws Exception { - - DefaultClientResources sut = new DefaultClientResources.Builder().ioThreadPoolSize(1).computationThreadPoolSize(1) - .build(); - - EventExecutorGroup eventExecutors = sut.eventExecutorGroup(); - NioEventLoopGroup eventLoopGroup = sut.eventLoopGroupProvider().allocate(NioEventLoopGroup.class); - - assertThat(eventExecutors.iterator()).hasSize(3); - assertThat(eventLoopGroup.executorCount()).isEqualTo(3); - assertThat(sut.ioThreadPoolSize()).isEqualTo(3); - - assertThat(sut.shutdown(0, 0, TimeUnit.MILLISECONDS).get()).isTrue(); - } - - @Test - public void testEventBus() throws Exception { - - DefaultClientResources sut = DefaultClientResources.create(); - - EventBus eventBus = sut.eventBus(); - - final TestSubscriber subject = new TestSubscriber(); - - eventBus.get().subscribe(subject); - - Event event = mock(Event.class); - eventBus.publish(event); - - 
WaitFor.waitOrTimeout(new Condition() { - @Override - public boolean isSatisfied() { - return !subject.getOnNextEvents().isEmpty(); - } - }, timeout(seconds(2))); - - assertThat(subject.getOnNextEvents()).contains(event); - assertThat(sut.shutdown(0, 0, TimeUnit.MILLISECONDS).get()).isTrue(); - } - -} diff --git a/src/test/java/com/lambdaworks/redis/resource/DefaultEventLoopGroupProviderTest.java b/src/test/java/com/lambdaworks/redis/resource/DefaultEventLoopGroupProviderTest.java deleted file mode 100644 index 0677d335c4..0000000000 --- a/src/test/java/com/lambdaworks/redis/resource/DefaultEventLoopGroupProviderTest.java +++ /dev/null @@ -1,35 +0,0 @@ -package com.lambdaworks.redis.resource; - -import java.util.concurrent.TimeUnit; - -import org.junit.Test; - -import io.netty.channel.nio.NioEventLoopGroup; -import io.netty.util.concurrent.Future; - -/** - * @author Mark Paluch - */ -public class DefaultEventLoopGroupProviderTest { - - @Test - public void shutdownTerminatedEventLoopGroup() throws Exception { - DefaultEventLoopGroupProvider sut = new DefaultEventLoopGroupProvider(1); - - NioEventLoopGroup eventLoopGroup = sut.allocate(NioEventLoopGroup.class); - - Future shutdown = sut.release(eventLoopGroup, 10, 10, TimeUnit.MILLISECONDS); - shutdown.get(); - - Future shutdown2 = sut.release(eventLoopGroup, 10, 10, TimeUnit.MILLISECONDS); - shutdown2.get(); - } - - @Test(expected = IllegalStateException.class) - public void getAfterShutdown() throws Exception { - DefaultEventLoopGroupProvider sut = new DefaultEventLoopGroupProvider(1); - - sut.shutdown(10, 10, TimeUnit.MILLISECONDS).get(); - sut.allocate(NioEventLoopGroup.class); - } -} diff --git a/src/test/java/com/lambdaworks/redis/resource/DirContextDnsResolverTest.java b/src/test/java/com/lambdaworks/redis/resource/DirContextDnsResolverTest.java deleted file mode 100644 index 1a22c49c69..0000000000 --- a/src/test/java/com/lambdaworks/redis/resource/DirContextDnsResolverTest.java +++ /dev/null @@ -1,172 +0,0 @@ -package com.lambdaworks.redis.resource; - -import static org.assertj.core.api.AssertionsForClassTypes.assertThat; - -import java.net.Inet4Address; -import java.net.Inet6Address; -import java.net.InetAddress; -import java.net.UnknownHostException; -import java.util.Arrays; -import java.util.Properties; - -import org.junit.After; -import org.junit.Before; -import org.junit.Test; - -/** - * @author Mark Paluch - */ -public class DirContextDnsResolverTest { - - DirContextDnsResolver resolver; - - @Before - public void before() throws Exception { - - System.getProperties().remove(DirContextDnsResolver.PREFER_IPV4_KEY); - System.getProperties().remove(DirContextDnsResolver.PREFER_IPV6_KEY); - } - - @After - public void tearDown() throws Exception { - - if (resolver != null) { - resolver.close(); - } - } - - @Test - public void shouldResolveDefault() throws Exception { - - resolver = new DirContextDnsResolver(); - InetAddress[] resolved = resolver.resolve("google.com"); - - assertThat(resolved.length).isGreaterThan(1); - assertThat(resolved[0]).isInstanceOf(Inet6Address.class); - assertThat(resolved[0].getHostName()).isEqualTo("google.com"); - assertThat(resolved[resolved.length - 1]).isInstanceOf(Inet4Address.class); - } - - @Test - public void shouldResolvePreferIpv4WithProperties() throws Exception { - - resolver = new DirContextDnsResolver(true, false, new Properties()); - - InetAddress[] resolved = resolver.resolve("google.com"); - - assertThat(resolved.length).isGreaterThan(1); - 
assertThat(resolved[0]).isInstanceOf(Inet4Address.class); - } - - @Test - public void shouldResolveWithDnsServer() throws Exception { - - resolver = new DirContextDnsResolver(Arrays.asList("[2001:4860:4860::8888]", "8.8.8.8")); - - InetAddress[] resolved = resolver.resolve("google.com"); - - assertThat(resolved.length).isGreaterThan(1); - } - - @Test - public void shouldPreferIpv4() throws Exception { - - System.setProperty(DirContextDnsResolver.PREFER_IPV4_KEY, "true"); - - resolver = new DirContextDnsResolver(); - InetAddress[] resolved = resolver.resolve("google.com"); - - assertThat(resolved.length).isGreaterThan(0); - assertThat(resolved[0]).isInstanceOf(Inet4Address.class); - } - - @Test - public void shouldPreferIpv4AndNotIpv6() throws Exception { - - System.setProperty(DirContextDnsResolver.PREFER_IPV4_KEY, "true"); - System.setProperty(DirContextDnsResolver.PREFER_IPV6_KEY, "false"); - - resolver = new DirContextDnsResolver(); - InetAddress[] resolved = resolver.resolve("google.com"); - - assertThat(resolved.length).isGreaterThan(0); - assertThat(resolved[0]).isInstanceOf(Inet4Address.class); - } - - @Test - public void shouldPreferIpv6AndNotIpv4() throws Exception { - - System.setProperty(DirContextDnsResolver.PREFER_IPV4_KEY, "false"); - System.setProperty(DirContextDnsResolver.PREFER_IPV6_KEY, "true"); - - resolver = new DirContextDnsResolver(); - InetAddress[] resolved = resolver.resolve("google.com"); - - assertThat(resolved.length).isGreaterThan(0); - assertThat(resolved[0]).isInstanceOf(Inet6Address.class); - } - - @Test(expected = UnknownHostException.class) - public void shouldFailWithUnknownHost() throws Exception { - - resolver = new DirContextDnsResolver("8.8.8.8"); - - resolver.resolve("unknown-domain-name"); - } - - @Test - public void shouldResolveCname() throws Exception { - - resolver = new DirContextDnsResolver(); - InetAddress[] resolved = resolver.resolve("www.github.io"); - - assertThat(resolved.length).isGreaterThan(0); - assertThat(resolved[0]).isInstanceOf(InetAddress.class); - assertThat(resolved[0].getHostName()).isEqualTo("www.github.io"); - } - - @Test - public void shouldResolveWithoutSubdomain() throws Exception { - - resolver = new DirContextDnsResolver(); - InetAddress[] resolved = resolver.resolve("paluch.biz"); - - assertThat(resolved.length).isGreaterThan(0); - assertThat(resolved[0]).isInstanceOf(InetAddress.class); - assertThat(resolved[0].getHostName()).isEqualTo("paluch.biz"); - - resolved = resolver.resolve("gmail.com"); - - assertThat(resolved.length).isGreaterThan(0); - assertThat(resolved[0]).isInstanceOf(InetAddress.class); - assertThat(resolved[0].getHostName()).isEqualTo("gmail.com"); - } - - @Test - public void shouldWorkWithIpv4Address() throws Exception { - - resolver = new DirContextDnsResolver(); - InetAddress[] resolved = resolver.resolve("127.0.0.1"); - - assertThat(resolved.length).isGreaterThan(0); - assertThat(resolved[0]).isInstanceOf(Inet4Address.class); - assertThat(resolved[0].getHostAddress()).isEqualTo("127.0.0.1"); - } - - @Test - public void shouldWorkWithIpv6Addresses() throws Exception { - - resolver = new DirContextDnsResolver(); - InetAddress[] resolved = resolver.resolve("::1"); - - assertThat(resolved.length).isGreaterThan(0); - assertThat(resolved[0]).isInstanceOf(Inet6Address.class); - assertThat(resolved[0].getHostAddress()).isEqualTo("0:0:0:0:0:0:0:1"); - - resolved = resolver.resolve("2a00:1450:4001:816::200e"); - - assertThat(resolved.length).isGreaterThan(0); - 
assertThat(resolved[0]).isInstanceOf(Inet6Address.class); - assertThat(resolved[0].getHostAddress()).isEqualTo("2a00:1450:4001:816:0:0:0:200e"); - } -} diff --git a/src/test/java/com/lambdaworks/redis/resource/ExponentialDelayTest.java b/src/test/java/com/lambdaworks/redis/resource/ExponentialDelayTest.java deleted file mode 100644 index 6d8d16164c..0000000000 --- a/src/test/java/com/lambdaworks/redis/resource/ExponentialDelayTest.java +++ /dev/null @@ -1,84 +0,0 @@ -package com.lambdaworks.redis.resource; - -import static org.assertj.core.api.Assertions.assertThat; - -import java.util.concurrent.TimeUnit; - -import org.junit.Test; - -/** - * @author Mark Paluch - */ -public class ExponentialDelayTest { - - @Test(expected = IllegalArgumentException.class) - public void shouldNotCreateIfLowerBoundIsNegative() throws Exception { - Delay.exponential(-1, 100, TimeUnit.MILLISECONDS, 10); - } - - @Test(expected = IllegalArgumentException.class) - public void shouldNotCreateIfLowerBoundIsSameAsUpperBound() throws Exception { - Delay.exponential(100, 100, TimeUnit.MILLISECONDS, 10); - } - - @Test(expected = IllegalArgumentException.class) - public void shouldNotCreateIfPowerIsOne() throws Exception { - Delay.exponential(100, 1000, TimeUnit.MILLISECONDS, 1); - } - - @Test - public void negativeAttemptShouldReturnZero() throws Exception { - - Delay delay = Delay.exponential(); - - assertThat(delay.createDelay(-1)).isEqualTo(0); - } - - @Test - public void zeroShouldReturnZero() throws Exception { - - Delay delay = Delay.exponential(); - - assertThat(delay.createDelay(0)).isEqualTo(0); - } - - @Test - public void testDefaultDelays() throws Exception { - - Delay delay = Delay.exponential(); - - assertThat(delay.getTimeUnit()).isEqualTo(TimeUnit.MILLISECONDS); - - assertThat(delay.createDelay(1)).isEqualTo(1); - assertThat(delay.createDelay(2)).isEqualTo(2); - assertThat(delay.createDelay(3)).isEqualTo(4); - assertThat(delay.createDelay(4)).isEqualTo(8); - assertThat(delay.createDelay(5)).isEqualTo(16); - assertThat(delay.createDelay(6)).isEqualTo(32); - assertThat(delay.createDelay(7)).isEqualTo(64); - assertThat(delay.createDelay(8)).isEqualTo(128); - assertThat(delay.createDelay(9)).isEqualTo(256); - assertThat(delay.createDelay(10)).isEqualTo(512); - assertThat(delay.createDelay(11)).isEqualTo(1024); - assertThat(delay.createDelay(12)).isEqualTo(2048); - assertThat(delay.createDelay(13)).isEqualTo(4096); - assertThat(delay.createDelay(14)).isEqualTo(8192); - assertThat(delay.createDelay(15)).isEqualTo(16384); - assertThat(delay.createDelay(16)).isEqualTo(30000); - assertThat(delay.createDelay(17)).isEqualTo(30000); - assertThat(delay.createDelay(Integer.MAX_VALUE)).isEqualTo(30000); - } - - @Test - public void testPow10Delays() throws Exception { - - Delay delay = Delay.exponential(100, 10000, TimeUnit.MILLISECONDS, 10); - - assertThat(delay.createDelay(1)).isEqualTo(100); - assertThat(delay.createDelay(2)).isEqualTo(100); - assertThat(delay.createDelay(3)).isEqualTo(100); - assertThat(delay.createDelay(4)).isEqualTo(1000); - assertThat(delay.createDelay(5)).isEqualTo(10000); - assertThat(delay.createDelay(Integer.MAX_VALUE)).isEqualTo(10000); - } -} \ No newline at end of file diff --git a/src/test/java/com/lambdaworks/redis/resource/FuturesTest.java b/src/test/java/com/lambdaworks/redis/resource/FuturesTest.java deleted file mode 100644 index bd82907df9..0000000000 --- a/src/test/java/com/lambdaworks/redis/resource/FuturesTest.java +++ /dev/null @@ -1,75 +0,0 @@ -package 
com.lambdaworks.redis.resource; - -import static com.google.code.tempusfugit.temporal.Duration.seconds; -import static com.google.code.tempusfugit.temporal.Timeout.timeout; -import static org.assertj.core.api.Assertions.assertThat; - -import org.junit.Test; - -import com.google.code.tempusfugit.temporal.Condition; -import com.google.code.tempusfugit.temporal.WaitFor; - -import io.netty.util.concurrent.DefaultPromise; -import io.netty.util.concurrent.GlobalEventExecutor; -import io.netty.util.concurrent.ImmediateEventExecutor; -import io.netty.util.concurrent.Promise; - -/** - * @author Mark Paluch - */ -public class FuturesTest { - - @Test(expected = IllegalArgumentException.class) - public void testPromise() throws Exception { - new Futures.PromiseAggregator(null); - } - - @Test(expected = IllegalStateException.class) - public void notArmed() throws Exception { - Futures.PromiseAggregator> sut = new Futures.PromiseAggregator>( - new DefaultPromise(ImmediateEventExecutor.INSTANCE)); - sut.add(new DefaultPromise(ImmediateEventExecutor.INSTANCE)); - } - - @Test(expected = IllegalStateException.class) - public void expectAfterArmed() throws Exception { - Futures.PromiseAggregator> sut = new Futures.PromiseAggregator>( - new DefaultPromise(ImmediateEventExecutor.INSTANCE)); - sut.arm(); - - sut.expectMore(1); - } - - @Test(expected = IllegalStateException.class) - public void armTwice() throws Exception { - Futures.PromiseAggregator> sut = new Futures.PromiseAggregator>( - new DefaultPromise(ImmediateEventExecutor.INSTANCE)); - sut.arm(); - sut.arm(); - } - - @Test - public void regularUse() throws Exception { - final DefaultPromise target = new DefaultPromise(GlobalEventExecutor.INSTANCE); - Futures.PromiseAggregator> sut = new Futures.PromiseAggregator>( - target); - - sut.expectMore(1); - sut.arm(); - DefaultPromise part = new DefaultPromise(GlobalEventExecutor.INSTANCE); - sut.add(part); - - assertThat(target.isDone()).isFalse(); - - part.setSuccess(true); - - WaitFor.waitOrTimeout(new Condition() { - @Override - public boolean isSatisfied() { - return target.isDone(); - } - }, timeout(seconds(5))); - - assertThat(target.isDone()).isTrue(); - } -} diff --git a/src/test/java/com/lambdaworks/redis/sentinel/AbstractSentinelTest.java b/src/test/java/com/lambdaworks/redis/sentinel/AbstractSentinelTest.java deleted file mode 100644 index b55e077c37..0000000000 --- a/src/test/java/com/lambdaworks/redis/sentinel/AbstractSentinelTest.java +++ /dev/null @@ -1,36 +0,0 @@ -package com.lambdaworks.redis.sentinel; - -import org.junit.After; -import org.junit.AfterClass; -import org.junit.Before; - -import com.lambdaworks.redis.AbstractTest; -import com.lambdaworks.redis.FastShutdown; -import com.lambdaworks.redis.RedisClient; -import com.lambdaworks.redis.sentinel.api.sync.RedisSentinelCommands; - -public abstract class AbstractSentinelTest extends AbstractTest { - - public static final String MASTER_ID = "mymaster"; - - protected static RedisClient sentinelClient; - protected RedisSentinelCommands sentinel; - - @AfterClass - public static void shutdownClient() { - FastShutdown.shutdown(sentinelClient); - } - - @Before - public void openConnection() throws Exception { - sentinel = sentinelClient.connectSentinel().sync(); - } - - @After - public void closeConnection() throws Exception { - if (sentinel != null) { - sentinel.close(); - } - } - -} diff --git a/src/test/java/com/lambdaworks/redis/sentinel/SentinelCommandTest.java b/src/test/java/com/lambdaworks/redis/sentinel/SentinelCommandTest.java deleted 
file mode 100644 index f60fa730e4..0000000000 --- a/src/test/java/com/lambdaworks/redis/sentinel/SentinelCommandTest.java +++ /dev/null @@ -1,216 +0,0 @@ -package com.lambdaworks.redis.sentinel; - -import static com.lambdaworks.redis.TestSettings.hostAddr; -import static org.assertj.core.api.Assertions.assertThat; -import static org.assertj.core.api.Assertions.fail; - -import java.net.InetSocketAddress; -import java.net.SocketAddress; -import java.util.List; -import java.util.Map; - -import org.junit.Before; -import org.junit.BeforeClass; -import org.junit.Rule; -import org.junit.Test; - -import com.lambdaworks.Wait; -import com.lambdaworks.redis.*; -import com.lambdaworks.redis.api.async.RedisAsyncCommands; -import com.lambdaworks.redis.sentinel.api.async.RedisSentinelAsyncCommands; - -public class SentinelCommandTest extends AbstractSentinelTest { - - @Rule - public SentinelRule sentinelRule = new SentinelRule(sentinelClient, false, 26379, 26380); - - @BeforeClass - public static void setupClient() { - sentinelClient = new RedisClient(RedisURI.Builder.sentinel(TestSettings.host(), MASTER_ID).build()); - } - - @Before - public void openConnection() throws Exception { - super.openConnection(); - - try { - sentinel.master(MASTER_ID); - } catch (Exception e) { - sentinelRule.monitor(MASTER_ID, hostAddr(), TestSettings.port(3), 1, true); - } - } - - @Test - public void getMasterAddr() throws Exception { - SocketAddress result = sentinel.getMasterAddrByName(MASTER_ID); - InetSocketAddress socketAddress = (InetSocketAddress) result; - assertThat(socketAddress.getHostName()).contains(TestSettings.host()); - } - - @Test - public void getMasterAddrButNoMasterPresent() throws Exception { - InetSocketAddress socketAddress = (InetSocketAddress) sentinel.getMasterAddrByName("unknown"); - assertThat(socketAddress).isNull(); - } - - @Test - public void getMasterAddrByName() throws Exception { - InetSocketAddress socketAddress = (InetSocketAddress) sentinel.getMasterAddrByName(MASTER_ID); - assertThat(socketAddress.getPort()).isBetween(6479, 6485); - } - - @Test - public void masters() throws Exception { - - List> result = sentinel.masters(); - - assertThat(result.size()).isGreaterThan(0); - - Map map = result.get(0); - assertThat(map.get("flags")).isNotNull(); - assertThat(map.get("config-epoch")).isNotNull(); - assertThat(map.get("port")).isNotNull(); - } - - @Test - public void sentinelConnectWith() throws Exception { - - RedisClient client = new RedisClient( - RedisURI.Builder.sentinel(TestSettings.host(), 1234, MASTER_ID).withSentinel(TestSettings.host()).build()); - - RedisSentinelAsyncCommands sentinelConnection = client.connectSentinelAsync(); - assertThat(sentinelConnection.ping().get()).isEqualTo("PONG"); - - sentinelConnection.close(); - - RedisConnection connection2 = client.connect().sync(); - assertThat(connection2.ping()).isEqualTo("PONG"); - connection2.quit(); - - Wait.untilTrue(connection2::isOpen).waitOrTimeout(); - - assertThat(connection2.ping()).isEqualTo("PONG"); - connection2.close(); - FastShutdown.shutdown(client); - } - - @Test - public void sentinelConnectWrongMaster() throws Exception { - - RedisClient client = new RedisClient( - RedisURI.Builder.sentinel(TestSettings.host(), 1234, "nonexistent").withSentinel(TestSettings.host()).build()); - try { - client.connect(); - fail("missing RedisConnectionException"); - } catch (RedisConnectionException e) { - } - - FastShutdown.shutdown(client); - } - - @Test - public void sentinelConnect() throws Exception { - - RedisClient client 
= new RedisClient(RedisURI.Builder.redis(TestSettings.host(), TestSettings.port()).build()); - - RedisSentinelAsyncCommands connection = client.connectSentinelAsync(); - assertThat(connection.ping().get()).isEqualTo("PONG"); - - connection.close(); - FastShutdown.shutdown(client); - } - - @Test - public void getMaster() throws Exception { - - Map result = sentinel.master(MASTER_ID); - assertThat(result.get("ip")).isEqualTo(hostAddr()); // !! IPv4/IPv6 - assertThat(result).containsKey("role-reported"); - } - - @Test - public void role() throws Exception { - - RedisAsyncCommands connection = sentinelClient.connect(RedisURI.Builder.redis(host, 26380).build()) - .async(); - try { - - RedisFuture> role = connection.role(); - List objects = role.get(); - - assertThat(objects).hasSize(2); - - assertThat(objects.get(0)).isEqualTo("sentinel"); - assertThat(objects.get(1).toString()).isEqualTo("[" + MASTER_ID + "]"); - } finally { - connection.close(); - } - } - - @Test - public void getSlaves() throws Exception { - - List> result = sentinel.slaves(MASTER_ID); - assertThat(result).hasSize(1); - assertThat(result.get(0)).containsKey("port"); - } - - @Test - public void reset() throws Exception { - - Long result = sentinel.reset("other"); - assertThat(result.intValue()).isEqualTo(0); - } - - @Test - public void failover() throws Exception { - - try { - sentinel.failover("other"); - } catch (Exception e) { - assertThat(e).hasMessageContaining("ERR No such master with that name"); - } - } - - @Test - public void monitor() throws Exception { - - try { - sentinel.remove("mymaster2"); - } catch (Exception e) { - } - - String result = sentinel.monitor("mymaster2", hostAddr(), 8989, 2); - assertThat(result).isEqualTo("OK"); - } - - @Test - public void ping() throws Exception { - - String result = sentinel.ping(); - assertThat(result).isEqualTo("PONG"); - } - - @Test - public void set() throws Exception { - - String result = sentinel.set(MASTER_ID, "down-after-milliseconds", "1000"); - assertThat(result).isEqualTo("OK"); - } - - @Test - public void connectToRedisUsingSentinel() throws Exception { - RedisConnection connect = sentinelClient.connect().sync(); - connect.ping(); - connect.close(); - } - - @Test - public void connectToRedisUsingSentinelWithReconnect() throws Exception { - RedisConnection connect = sentinelClient.connect().sync(); - connect.ping(); - connect.quit(); - connect.ping(); - connect.close(); - } -} diff --git a/src/test/java/com/lambdaworks/redis/sentinel/SentinelConnectionTest.java b/src/test/java/com/lambdaworks/redis/sentinel/SentinelConnectionTest.java deleted file mode 100644 index 7b76e46d59..0000000000 --- a/src/test/java/com/lambdaworks/redis/sentinel/SentinelConnectionTest.java +++ /dev/null @@ -1,146 +0,0 @@ -package com.lambdaworks.redis.sentinel; - -import static org.assertj.core.api.Assertions.assertThat; - -import java.util.List; -import java.util.Map; -import java.util.concurrent.TimeUnit; -import java.util.concurrent.atomic.AtomicBoolean; - -import org.junit.Before; -import org.junit.BeforeClass; -import org.junit.Test; - -import com.lambdaworks.Wait; -import com.lambdaworks.redis.RedisClient; -import com.lambdaworks.redis.RedisFuture; -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.TestSettings; -import com.lambdaworks.redis.codec.ByteArrayCodec; -import com.lambdaworks.redis.sentinel.api.StatefulRedisSentinelConnection; -import com.lambdaworks.redis.sentinel.api.async.RedisSentinelAsyncCommands; -import 
com.lambdaworks.redis.sentinel.api.sync.RedisSentinelCommands; - -public class SentinelConnectionTest extends AbstractSentinelTest { - - private StatefulRedisSentinelConnection connection; - private RedisSentinelAsyncCommands sentinelAsync; - - @BeforeClass - public static void setupClient() { - sentinelClient = new RedisClient(RedisURI.Builder.sentinel(TestSettings.host(), MASTER_ID).build()); - } - - @Before - public void openConnection() throws Exception { - connection = sentinelClient.connectSentinel(); - sentinel = connection.sync(); - sentinelAsync = connection.async(); - } - - @Test - public void testAsync() throws Exception { - - RedisFuture>> future = sentinelAsync.masters(); - - assertThat(future.get()).isNotNull(); - assertThat(future.isDone()).isTrue(); - assertThat(future.isCancelled()).isFalse(); - - } - - @Test - public void testFuture() throws Exception { - - RedisFuture> future = sentinelAsync.master("unknown master"); - - AtomicBoolean state = new AtomicBoolean(); - - future.exceptionally(throwable -> { - state.set(true); - return null; - }); - - assertThat(future.await(5, TimeUnit.SECONDS)).isTrue(); - assertThat(state.get()).isTrue(); - } - - @Test - public void testStatefulConnection() throws Exception { - - StatefulRedisSentinelConnection statefulConnection = sentinel.getStatefulConnection(); - assertThat(statefulConnection).isSameAs(statefulConnection.async().getStatefulConnection()); - - } - - @Test - public void testSyncConnection() throws Exception { - - StatefulRedisSentinelConnection statefulConnection = sentinel.getStatefulConnection(); - RedisSentinelCommands sync = statefulConnection.sync(); - assertThat(sync.ping()).isEqualTo("PONG"); - - } - - @Test - public void testSyncAsyncConversion() throws Exception { - - StatefulRedisSentinelConnection statefulConnection = sentinel.getStatefulConnection(); - assertThat(statefulConnection.sync().getStatefulConnection()).isSameAs(statefulConnection); - assertThat(statefulConnection.sync().getStatefulConnection().sync()).isSameAs(statefulConnection.sync()); - - } - - @Test - public void testSyncClose() throws Exception { - - StatefulRedisSentinelConnection statefulConnection = sentinel.getStatefulConnection(); - statefulConnection.sync().close(); - - Wait.untilTrue(() -> !sentinel.isOpen()).waitOrTimeout(); - - assertThat(sentinel.isOpen()).isFalse(); - assertThat(statefulConnection.isOpen()).isFalse(); - } - - @Test - public void testAsyncClose() throws Exception { - StatefulRedisSentinelConnection statefulConnection = sentinel.getStatefulConnection(); - statefulConnection.async().close(); - - Wait.untilTrue(() -> !sentinel.isOpen()).waitOrTimeout(); - - assertThat(sentinel.isOpen()).isFalse(); - assertThat(statefulConnection.isOpen()).isFalse(); - } - - @Test - public void connectToOneNode() throws Exception { - RedisSentinelCommands connection = sentinelClient - .connectSentinel(RedisURI.Builder.sentinel(TestSettings.host(), MASTER_ID).build()).sync(); - assertThat(connection.ping()).isEqualTo("PONG"); - connection.close(); - } - - @Test - public void deprecatedConnectToOneNode() throws Exception { - RedisSentinelAsyncCommands connection = sentinelClient - .connectSentinelAsync(RedisURI.Builder.sentinel(TestSettings.host(), MASTER_ID).build()); - assertThat(connection.ping().get()).isEqualTo("PONG"); - connection.close(); - } - - @Test - public void connectWithByteCodec() throws Exception { - RedisSentinelCommands connection = sentinelClient.connectSentinel(new ByteArrayCodec()).sync(); - 
assertThat(connection.master(MASTER_ID.getBytes())).isNotNull(); - connection.close(); - } - - @Test - public void deprecatedConnectWithByteCodec() throws Exception { - RedisSentinelAsyncCommands connection = sentinelClient.connectSentinelAsync(new ByteArrayCodec()); - assertThat(connection.master(MASTER_ID.getBytes())).isNotNull(); - connection.close(); - } -} diff --git a/src/test/java/com/lambdaworks/redis/sentinel/SentinelFailoverTest.java b/src/test/java/com/lambdaworks/redis/sentinel/SentinelFailoverTest.java deleted file mode 100644 index b5f1f10c27..0000000000 --- a/src/test/java/com/lambdaworks/redis/sentinel/SentinelFailoverTest.java +++ /dev/null @@ -1,85 +0,0 @@ -package com.lambdaworks.redis.sentinel; - -import static com.google.code.tempusfugit.temporal.Duration.seconds; -import static com.lambdaworks.Delay.delay; -import static com.lambdaworks.redis.TestSettings.port; -import static org.assertj.core.api.Assertions.assertThat; - -import java.util.regex.Matcher; -import java.util.regex.Pattern; - -import com.lambdaworks.redis.FastShutdown; -import org.junit.Before; -import org.junit.BeforeClass; -import org.junit.Ignore; -import org.junit.Rule; -import org.junit.Test; - -import com.lambdaworks.redis.RedisClient; -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.TestSettings; -import com.lambdaworks.redis.api.sync.RedisCommands; - -@Ignore("For manual runs only. Fails too often due to slow sentinel sync") -public class SentinelFailoverTest extends AbstractSentinelTest { - - @Rule - public SentinelRule sentinelRule = new SentinelRule(sentinelClient, false, 26379, 26380); - - @BeforeClass - public static void setupClient() { - sentinelClient = new RedisClient(RedisURI.Builder.sentinel(TestSettings.host(), 26380, MASTER_ID).build()); - } - - @Before - public void openConnection() throws Exception { - sentinel = sentinelClient.connectSentinelAsync().getStatefulConnection().sync(); - sentinelRule.needMasterWithSlave(MASTER_ID, port(3), port(4)); - } - - @Test - public void connectToRedisUsingSentinel() throws Exception { - - RedisCommands connect = sentinelClient.connect().sync(); - assertThat(connect.ping()).isEqualToIgnoringCase("PONG"); - - connect.close(); - } - - @Test - public void failover() throws Exception { - - RedisClient redisClient = new RedisClient(RedisURI.Builder.redis(TestSettings.host(), port(3)).build()); - - String tcpPort1 = connectUsingSentinelAndGetPort(); - - sentinelRule.waitForConnectedSlaves(MASTER_ID); - sentinel.failover(MASTER_ID); - - delay(seconds(5)); - - sentinelRule.waitForConnectedSlaves(MASTER_ID); - - String tcpPort2 = connectUsingSentinelAndGetPort(); - assertThat(tcpPort1).isNotEqualTo(tcpPort2); - FastShutdown.shutdown(redisClient); - } - - protected String connectUsingSentinelAndGetPort() { - RedisCommands connectAfterFailover = sentinelClient.connect().sync(); - String tcpPort2 = getTcpPort(connectAfterFailover); - connectAfterFailover.close(); - return tcpPort2; - } - - protected String getTcpPort(RedisCommands commands) { - Pattern pattern = Pattern.compile(".*tcp_port\\:(\\d+).*", Pattern.DOTALL); - - Matcher matcher = pattern.matcher(commands.info("server")); - if (matcher.lookingAt()) { - return matcher.group(1); - } - return null; - } - -} diff --git a/src/test/java/com/lambdaworks/redis/sentinel/SentinelRule.java b/src/test/java/com/lambdaworks/redis/sentinel/SentinelRule.java deleted file mode 100644 index da16d629be..0000000000 --- a/src/test/java/com/lambdaworks/redis/sentinel/SentinelRule.java +++ /dev/null @@ 
-1,342 +0,0 @@ -package com.lambdaworks.redis.sentinel; - -import static com.google.code.tempusfugit.temporal.Duration.seconds; - -import java.util.Arrays; -import java.util.HashMap; -import java.util.List; -import java.util.Map; - -import org.apache.logging.log4j.LogManager; -import org.apache.logging.log4j.Logger; -import org.junit.rules.TestRule; -import org.junit.runner.Description; -import org.junit.runners.model.Statement; - -import com.lambdaworks.Wait; -import com.lambdaworks.redis.RedisClient; -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.TestSettings; -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.models.role.RedisInstance; -import com.lambdaworks.redis.models.role.RoleParser; -import com.lambdaworks.redis.sentinel.api.async.RedisSentinelAsyncCommands; -import com.lambdaworks.redis.sentinel.api.sync.RedisSentinelCommands; - -/** - * Rule to simplify Redis Sentinel handling. - * - * This rule allows to: - *
- * <ul>
- * <li>Flush masters before test</li>
- * <li>Check for slave/alive slaves to a master</li>
- * <li>Wait for slave/alive slaves to a master</li>
- * <li>Find a master on a given set of ports</li>
- * <li>Setup a master/slave combination</li>
- * </ul>
    - * - * - * @author Mark Paluch - */ -public class SentinelRule implements TestRule { - - private RedisClient redisClient; - private final boolean flushBeforeTest; - private Map> sentinelConnections = new HashMap<>(); - protected Logger log = LogManager.getLogger(getClass()); - - /** - * - * @param redisClient - * @param flushBeforeTest - * @param sentinelPorts - */ - public SentinelRule(RedisClient redisClient, boolean flushBeforeTest, int... sentinelPorts) { - this.redisClient = redisClient; - this.flushBeforeTest = flushBeforeTest; - - log.info("[Sentinel] Connecting to sentinels: " + Arrays.toString(sentinelPorts)); - for (int port : sentinelPorts) { - RedisSentinelAsyncCommands connection = redisClient - .connectSentinelAsync(RedisURI.Builder.redis(TestSettings.host(), port).build()); - sentinelConnections.put(port, connection.getStatefulConnection().sync()); - } - } - - @Override - public Statement apply(final Statement base, Description description) { - - final Statement before = new Statement() { - @Override - public void evaluate() throws Exception { - if (flushBeforeTest) { - flush(); - } - } - }; - - return new Statement() { - @Override - public void evaluate() throws Throwable { - before.evaluate(); - base.evaluate(); - - for (RedisSentinelCommands commands : sentinelConnections.values()) { - commands.close(); - } - } - }; - } - - /** - * Flush Sentinel masters. - */ - public void flush() { - log.info("[Sentinel] Flushing masters of sentinels"); - for (RedisSentinelCommands connection : sentinelConnections.values()) { - List> masters = connection.masters(); - - for (Map master : masters) { - connection.remove(master.get("name")); - connection.reset(master.get("name")); - } - } - - for (Map.Entry> entry : sentinelConnections.entrySet()) { - Wait.untilTrue(() -> entry.getValue().masters().isEmpty()) - .message("Sentinel on " + entry.getKey() + " has still masters").waitOrTimeout(); - } - } - - /** - * Requires a master with a slave. If no master or slave is present, the rule flushes known masters and sets up a master - * with a slave. - * - * @param masterId - * @param redisPorts - */ - public void needMasterWithSlave(String masterId, int... redisPorts) { - - if (!hasSlaves(masterId) || !hasMaster(redisPorts)) { - flush(); - int masterPort = setupMasterSlave(redisPorts); - monitor(masterId, TestSettings.hostAddr(), masterPort, 1, true); - } - - waitForConnectedSlaves(masterId); - } - - /** - * Wait until the master has a connected slave. - * - * @param masterId - */ - public void waitForConnectedSlaves(String masterId) { - log.info("[Sentinel] Waiting until master " + masterId + " has at least one connected slave"); - Wait.untilTrue(() -> hasConnectedSlaves(masterId)).during(seconds(20)).message("No slave found").waitOrTimeout(); - log.info("[Sentinel] Found a connected slave for master " + masterId); - } - - /** - * Wait until sentinel can provide an address for the master. 
- * - * @param masterId - */ - public void waitForMaster(String masterId) { - log.info("[Sentinel] Waiting until master " + masterId + " can provide a socket address"); - Wait.untilNoException(() -> { - - for (RedisSentinelCommands commands : sentinelConnections.values()) { - if (commands.getMasterAddrByName(masterId) == null) { - throw new IllegalStateException("No address"); - } - } - - }).during(seconds(20)).message("Cannot provide an address for " + masterId).waitOrTimeout(); - log.info("[Sentinel] Found master " + masterId); - - } - - /** - * Monitor a master and wait until all sentinels ACK'd by checking last-ping-reply - * - * @param key - * @param ip - * @param port - * @param quorum - */ - public void monitor(final String key, String ip, int port, int quorum, boolean sync) { - - log.info("[Sentinel] Monitoring master " + key + " (" + ip + ":" + port + ")"); - for (RedisSentinelCommands connection : sentinelConnections.values()) { - connection.monitor(key, ip, port, quorum); - } - - if (sync) { - Wait.untilTrue(() -> { - for (RedisSentinelCommands connection : sentinelConnections.values()) { - Map map = connection.master(key); - String reply = map.get("last-ping-reply"); - if (reply == null || "0".equals(reply)) { - return false; - } - } - return true; - }).waitOrTimeout(); - - log.info("[Sentinel] Master " + key + " (" + ip + ":" + port + ") is monitored now"); - } - } - - /** - * Check if the master has slaves at all (no check for connection/alive). - * - * @param masterId - * @return - */ - public boolean hasSlaves(String masterId) { - try { - for (RedisSentinelCommands connection : sentinelConnections.values()) { - - return !connection.slaves(masterId).isEmpty(); - } - } catch (Exception e) { - if (e.getMessage().contains("No such master with that name")) { - return false; - } - } - - return false; - } - - /** - * Check if a master runs on any of the given ports. - * - * @param redisPorts - * @return - */ - public boolean hasMaster(int... redisPorts) { - - Map> connections = new HashMap<>(); - for (int redisPort : redisPorts) { - connections.put(redisPort, - redisClient.connect(RedisURI.Builder.redis(TestSettings.hostAddr(), redisPort).build()).sync()); - } - - try { - Integer masterPort = getMasterPort(connections); - if (masterPort != null) { - return true; - } - } finally { - for (RedisCommands commands : connections.values()) { - commands.close(); - } - } - - return false; - } - - /** - * Check if the master has connected slaves. - * - * @param masterId - * @return - */ - public boolean hasConnectedSlaves(String masterId) { - for (RedisSentinelCommands connection : sentinelConnections.values()) { - List> slaves = connection.slaves(masterId); - for (Map slave : slaves) { - - String masterLinkStatus = slave.get("master-link-status"); - if (masterLinkStatus == null || !masterLinkStatus.contains("ok")) { - continue; - } - - String masterPort = slave.get("master-port"); - if (masterPort == null || masterPort.contains("?")) { - continue; - } - - String roleReported = slave.get("role-reported"); - if (roleReported == null || !roleReported.contains("slave")) { - continue; - } - - String flags = slave.get("flags"); - if (flags == null || flags.contains("disconnected") || flags.contains("down") | !flags.contains("slave")) { - continue; - } - - return true; - } - - return false; - } - - return false; - } - - /** - * Setup a master with one or more slaves (depending on port count). - * - * @param redisPorts - * @return - */ - public int setupMasterSlave(int... 
redisPorts) { - - log.info("[Sentinel] Create a master with slaves on ports " + Arrays.toString(redisPorts)); - Map> connections = new HashMap<>(); - for (int redisPort : redisPorts) { - connections.put(redisPort, - redisClient.connect(RedisURI.Builder.redis(TestSettings.hostAddr(), redisPort).build()).sync()); - } - - for (RedisCommands commands : connections.values()) { - commands.slaveofNoOne(); - } - - for (Map.Entry> entry : connections.entrySet()) { - if (entry.getKey().intValue() != redisPorts[0]) { - entry.getValue().slaveof(TestSettings.hostAddr(), redisPorts[0]); - } - } - - try { - - Wait.untilTrue(() -> getMasterPort(connections) != null).message("Cannot find master").waitOrTimeout(); - Integer masterPort = getMasterPort(connections); - log.info("[Sentinel] Master on port " + masterPort); - if (masterPort != null) { - return masterPort; - } - } finally { - for (RedisCommands commands : connections.values()) { - commands.close(); - } - } - - throw new IllegalStateException("No master available on ports: " + connections.keySet()); - } - - /** - * Retrieve the port of the first found master. - * - * @param connections - * @return - */ - public Integer getMasterPort(Map> connections) { - - for (Map.Entry> entry : connections.entrySet()) { - - List role = entry.getValue().role(); - - RedisInstance redisInstance = RoleParser.parse(role); - if (redisInstance.getRole() == RedisInstance.Role.MASTER) { - return entry.getKey(); - } - } - return null; - } - -} diff --git a/src/test/java/com/lambdaworks/redis/sentinel/rx/SentinelRxCommandTest.java b/src/test/java/com/lambdaworks/redis/sentinel/rx/SentinelRxCommandTest.java deleted file mode 100644 index 1d796863b2..0000000000 --- a/src/test/java/com/lambdaworks/redis/sentinel/rx/SentinelRxCommandTest.java +++ /dev/null @@ -1,33 +0,0 @@ -package com.lambdaworks.redis.sentinel.rx; - -import static com.lambdaworks.redis.TestSettings.hostAddr; -import static org.assertj.core.api.Assertions.assertThat; - -import com.lambdaworks.redis.TestSettings; -import com.lambdaworks.redis.commands.rx.RxSyncInvocationHandler; -import com.lambdaworks.redis.sentinel.SentinelCommandTest; -import com.lambdaworks.redis.sentinel.api.async.RedisSentinelAsyncCommands; -import com.lambdaworks.redis.sentinel.api.rx.RedisSentinelReactiveCommands; - -/** - * @author Mark Paluch - */ -public class SentinelRxCommandTest extends SentinelCommandTest { - - @Override - public void openConnection() throws Exception { - - RedisSentinelAsyncCommands async = sentinelClient.connectSentinelAsync(); - RedisSentinelReactiveCommands reactive = async.getStatefulConnection().reactive(); - sentinel = RxSyncInvocationHandler.sync(async.getStatefulConnection()); - - try { - sentinel.master(MASTER_ID); - } catch (Exception e) { - sentinelRule.monitor(MASTER_ID, hostAddr(), TestSettings.port(3), 1, true); - } - - assertThat(reactive.isOpen()).isTrue(); - assertThat(reactive.getStatefulConnection()).isSameAs(async.getStatefulConnection()); - } -} diff --git a/src/test/java/com/lambdaworks/redis/server/RandomResponseServer.java b/src/test/java/com/lambdaworks/redis/server/RandomResponseServer.java deleted file mode 100644 index 027861ffbe..0000000000 --- a/src/test/java/com/lambdaworks/redis/server/RandomResponseServer.java +++ /dev/null @@ -1,49 +0,0 @@ -package com.lambdaworks.redis.server; - -import java.util.concurrent.TimeUnit; - -import io.netty.bootstrap.ServerBootstrap; -import io.netty.channel.*; -import io.netty.channel.nio.NioEventLoopGroup; -import 
io.netty.channel.socket.SocketChannel; -import io.netty.channel.socket.nio.NioServerSocketChannel; - -/** - * Tiny netty server to generate random base64 data on message reception. - * - * @author Mark Paluch - */ -public class RandomResponseServer { - - private EventLoopGroup bossGroup; - private EventLoopGroup workerGroup; - private Channel channel; - - public void initialize(int port) throws InterruptedException { - - bossGroup = new NioEventLoopGroup(1); - workerGroup = new NioEventLoopGroup(); - - ServerBootstrap b = new ServerBootstrap(); - b.group(bossGroup, workerGroup).channel(NioServerSocketChannel.class).option(ChannelOption.SO_BACKLOG, 100) - .childHandler(new ChannelInitializer() { - @Override - public void initChannel(SocketChannel ch) throws Exception { - ChannelPipeline p = ch.pipeline(); - // p.addLast(new LoggingHandler(LogLevel.INFO)); - p.addLast(new RandomServerHandler()); - } - }); - - // Start the server. - ChannelFuture f = b.bind(port).sync(); - - channel = f.channel(); - } - - public void shutdown() { - channel.close(); - bossGroup.shutdownGracefully(100, 100, TimeUnit.MILLISECONDS); - workerGroup.shutdownGracefully(100, 100, TimeUnit.MILLISECONDS); - } -} diff --git a/src/test/java/com/lambdaworks/redis/server/RandomServerHandler.java b/src/test/java/com/lambdaworks/redis/server/RandomServerHandler.java deleted file mode 100644 index d1bf180094..0000000000 --- a/src/test/java/com/lambdaworks/redis/server/RandomServerHandler.java +++ /dev/null @@ -1,46 +0,0 @@ -package com.lambdaworks.redis.server; - -import java.security.SecureRandom; -import java.util.Arrays; - -import io.netty.buffer.ByteBuf; -import io.netty.channel.ChannelHandler; -import io.netty.channel.ChannelHandlerContext; -import io.netty.channel.ChannelInboundHandlerAdapter; -import io.netty.handler.codec.base64.Base64; - -/** - * Handler to generate random base64 data. - */ -@ChannelHandler.Sharable -public class RandomServerHandler extends ChannelInboundHandlerAdapter { - - private SecureRandom random = new SecureRandom(); - - @Override - public void channelRead(ChannelHandlerContext ctx, Object msg) { - byte initial[] = new byte[1]; - random.nextBytes(initial); - - byte[] response = new byte[Math.abs((int) initial[0])]; - - Arrays.fill(response, "A".getBytes()[0]); - - ByteBuf buf = ctx.alloc().heapBuffer(response.length); - - ByteBuf encoded = buf.writeBytes(response); - ctx.write(encoded); - } - - @Override - public void channelReadComplete(ChannelHandlerContext ctx) { - ctx.flush(); - } - - @Override - public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) { - // Close the connection when an exception is raised. 
- cause.printStackTrace(); - ctx.close(); - } -} \ No newline at end of file diff --git a/src/test/java/com/lambdaworks/redis/support/CdiTest.java b/src/test/java/com/lambdaworks/redis/support/CdiTest.java deleted file mode 100644 index 863bf7f31e..0000000000 --- a/src/test/java/com/lambdaworks/redis/support/CdiTest.java +++ /dev/null @@ -1,89 +0,0 @@ -package com.lambdaworks.redis.support; - -import static org.assertj.core.api.Assertions.assertThat; -import static org.mockito.Mockito.mock; - -import javax.enterprise.inject.Disposes; -import javax.enterprise.inject.Produces; - -import com.lambdaworks.redis.AbstractRedisClientTest; -import com.lambdaworks.redis.FastShutdown; -import org.apache.webbeans.cditest.CdiTestContainer; -import org.apache.webbeans.cditest.CdiTestContainerLoader; -import org.junit.AfterClass; -import org.junit.BeforeClass; -import org.junit.Test; - -import com.lambdaworks.redis.RedisConnectionStateListener; -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.resource.ClientResources; -import com.lambdaworks.redis.resource.DefaultClientResources; - -/** - * @author Mark Paluch - * @since 3.0 - */ -public class CdiTest { - - static CdiTestContainer container; - - @BeforeClass - public static void setUp() throws Exception { - - container = CdiTestContainerLoader.getCdiContainer(); - container.bootContainer(); - container.startApplicationScope(); - } - - @Produces - public RedisURI redisURI() { - return RedisURI.Builder.redis(AbstractRedisClientTest.host, AbstractRedisClientTest.port).build(); - } - - @Produces - @PersonDB - public ClientResources clientResources() { - return DefaultClientResources.create(); - } - - public void shutdownClientResources(@Disposes ClientResources clientResources) throws Exception { - FastShutdown.shutdown(clientResources); - } - - @PersonDB - @Produces - public RedisURI redisURIQualified() { - return RedisURI.Builder.redis(AbstractRedisClientTest.host, AbstractRedisClientTest.port + 1).build(); - } - - @Test - public void testInjection() { - - InjectedClient injectedClient = container.getInstance(InjectedClient.class); - assertThat(injectedClient.redisClient).isNotNull(); - assertThat(injectedClient.redisClusterClient).isNotNull(); - - assertThat(injectedClient.qualifiedRedisClient).isNotNull(); - assertThat(injectedClient.qualifiedRedisClusterClient).isNotNull(); - - RedisConnectionStateListener mock = mock(RedisConnectionStateListener.class); - - // do some interaction to force the container a creation of the repositories. 
- injectedClient.redisClient.addListener(mock); - injectedClient.redisClusterClient.addListener(mock); - - injectedClient.qualifiedRedisClient.addListener(mock); - injectedClient.qualifiedRedisClusterClient.addListener(mock); - - injectedClient.pingRedis(); - } - - @AfterClass - public static void afterClass() throws Exception { - - container.stopApplicationScope(); - container.shutdownContainer(); - - } - -} diff --git a/src/test/java/com/lambdaworks/redis/support/InjectedClient.java b/src/test/java/com/lambdaworks/redis/support/InjectedClient.java deleted file mode 100644 index b1897c7ec3..0000000000 --- a/src/test/java/com/lambdaworks/redis/support/InjectedClient.java +++ /dev/null @@ -1,48 +0,0 @@ -package com.lambdaworks.redis.support; - -import javax.annotation.PostConstruct; -import javax.annotation.PreDestroy; -import javax.inject.Inject; - -import com.lambdaworks.redis.RedisClient; -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.cluster.RedisClusterClient; - -/** - * @author Mark Paluch - * @since 3.0 - */ -public class InjectedClient { - - @Inject - public RedisClient redisClient; - - @Inject - public RedisClusterClient redisClusterClient; - - @Inject - @PersonDB - public RedisClient qualifiedRedisClient; - - @Inject - @PersonDB - public RedisClusterClient qualifiedRedisClusterClient; - - private RedisCommands connection; - - @PostConstruct - public void postConstruct() { - connection = redisClient.connect().sync(); - } - - public void pingRedis() { - connection.ping(); - } - - @PreDestroy - public void preDestroy() { - if (connection != null) { - connection.close(); - } - } -} diff --git a/src/test/java/com/lambdaworks/redis/support/PersonDB.java b/src/test/java/com/lambdaworks/redis/support/PersonDB.java deleted file mode 100644 index 004935fb48..0000000000 --- a/src/test/java/com/lambdaworks/redis/support/PersonDB.java +++ /dev/null @@ -1,16 +0,0 @@ -package com.lambdaworks.redis.support; - -import java.lang.annotation.Retention; -import java.lang.annotation.RetentionPolicy; - -import javax.inject.Qualifier; - -/** - * @author Mark Paluch - * @since 3.0 - */ -@Retention(RetentionPolicy.RUNTIME) -@Qualifier -public @interface PersonDB { - -} diff --git a/src/test/java/com/lambdaworks/redis/support/PoolingProxyFactoryTest.java b/src/test/java/com/lambdaworks/redis/support/PoolingProxyFactoryTest.java deleted file mode 100644 index ab64a9868c..0000000000 --- a/src/test/java/com/lambdaworks/redis/support/PoolingProxyFactoryTest.java +++ /dev/null @@ -1,55 +0,0 @@ -package com.lambdaworks.redis.support; - -import static org.assertj.core.api.Assertions.assertThat; -import static org.assertj.core.api.Assertions.fail; - -import com.lambdaworks.redis.api.sync.RedisCommands; -import org.junit.Test; - -import com.lambdaworks.redis.AbstractRedisClientTest; -import com.lambdaworks.redis.RedisConnection; -import com.lambdaworks.redis.RedisConnectionPool; -import com.lambdaworks.redis.RedisException; - -public class PoolingProxyFactoryTest extends AbstractRedisClientTest { - - @Test - public void testCreateDefault() throws Exception { - - RedisConnectionPool> pool = client.pool(); - RedisConnection connection = PoolingProxyFactory.create(pool); - - connection.set("a", "b"); - connection.set("x", "y"); - - pool.close(); - } - - @Test - public void testCloseReturnsConnection() throws Exception { - - RedisConnectionPool> pool = client.pool(); - assertThat(pool.getNumActive()).isEqualTo(0); - RedisConnection connection = pool.allocateConnection(); - 
assertThat(pool.getNumActive()).isEqualTo(1); - connection.close(); - assertThat(pool.getNumActive()).isEqualTo(0); - } - - @Test - public void testCreate() throws Exception { - - RedisConnection connection = PoolingProxyFactory.create(client.pool()); - - connection.set("a", "b"); - connection.close(); - - try { - connection.set("x", "y"); - fail("missing exception"); - } catch (RedisException e) { - assertThat(e.getMessage()).isEqualTo("Connection pool is closed"); - - } - } -} diff --git a/src/test/java/com/lambdaworks/redis/support/RedisClusterClientFactoryBeanTest.java b/src/test/java/com/lambdaworks/redis/support/RedisClusterClientFactoryBeanTest.java deleted file mode 100644 index 2d0be0017a..0000000000 --- a/src/test/java/com/lambdaworks/redis/support/RedisClusterClientFactoryBeanTest.java +++ /dev/null @@ -1,62 +0,0 @@ -package com.lambdaworks.redis.support; - -import static org.assertj.core.api.Assertions.*; - -import java.net.URI; - -import org.junit.Test; - -import com.lambdaworks.redis.RedisURI; - -/** - * @author Mark Paluch - */ -public class RedisClusterClientFactoryBeanTest { - - private RedisClusterClientFactoryBean sut = new RedisClusterClientFactoryBean(); - - @Test(expected = IllegalArgumentException.class) - public void invalidUri() throws Exception { - - sut.setUri(URI.create("http://www.web.de")); - sut.afterPropertiesSet(); - } - - @Test(expected = IllegalArgumentException.class) - public void sentinelUri() throws Exception { - - sut.setUri(URI.create(RedisURI.URI_SCHEME_REDIS_SENTINEL + "://www.web.de")); - sut.afterPropertiesSet(); - } - - @Test - public void validUri() throws Exception { - - sut.setUri(URI.create(RedisURI.URI_SCHEME_REDIS + "://password@host")); - sut.afterPropertiesSet(); - assertThat(sut.getRedisURI().getHost()).isEqualTo("host"); - assertThat(sut.getRedisURI().getPassword()).isEqualTo("password".toCharArray()); - } - - @Test - public void validUriPasswordOverride() throws Exception { - - sut.setUri(URI.create(RedisURI.URI_SCHEME_REDIS + "://password@host")); - sut.setPassword("thepassword"); - - sut.afterPropertiesSet(); - assertThat(sut.getRedisURI().getHost()).isEqualTo("host"); - assertThat(sut.getRedisURI().getPassword()).isEqualTo("thepassword".toCharArray()); - } - - @Test - public void supportsSsl() throws Exception { - - sut.setUri(URI.create(RedisURI.URI_SCHEME_REDIS_SECURE + "://password@host")); - sut.afterPropertiesSet(); - assertThat(sut.getRedisURI().getHost()).isEqualTo("host"); - assertThat(sut.getRedisURI().getPassword()).isEqualTo("password".toCharArray()); - assertThat(sut.getRedisURI().isVerifyPeer()).isFalse(); - assertThat(sut.getRedisURI().isSsl()).isTrue(); - } -} diff --git a/src/test/java/com/lambdaworks/redis/support/SpringTest.java b/src/test/java/com/lambdaworks/redis/support/SpringTest.java deleted file mode 100644 index d63d5a95f1..0000000000 --- a/src/test/java/com/lambdaworks/redis/support/SpringTest.java +++ /dev/null @@ -1,52 +0,0 @@ -package com.lambdaworks.redis.support; - -import static org.assertj.core.api.Assertions.*; - -import org.junit.Test; -import org.junit.runner.RunWith; -import org.springframework.beans.factory.annotation.Autowired; -import org.springframework.beans.factory.annotation.Qualifier; -import org.springframework.test.context.ContextConfiguration; -import org.springframework.test.context.junit4.SpringJUnit4ClassRunner; - -import com.lambdaworks.redis.RedisClient; -import com.lambdaworks.redis.cluster.RedisClusterClient; - -/** - * @author Mark Paluch - * @since 3.0 - */ 
-@RunWith(SpringJUnit4ClassRunner.class) -@ContextConfiguration -public class SpringTest { - - @Autowired - @Qualifier("RedisClient1") - private RedisClient redisClient1; - - @Autowired - @Qualifier("RedisClient2") - private RedisClient redisClient2; - - @Autowired - @Qualifier("RedisClient3") - private RedisClient redisClient3; - - @Autowired - @Qualifier("RedisClusterClient1") - private RedisClusterClient redisClusterClient1; - - @Autowired - @Qualifier("RedisClusterClient2") - private RedisClusterClient redisClusterClient2; - - @Test - public void testSpring() throws Exception { - - assertThat(redisClient1).isNotNull(); - assertThat(redisClient2).isNotNull(); - assertThat(redisClient3).isNotNull(); - assertThat(redisClusterClient1).isNotNull(); - assertThat(redisClusterClient2).isNotNull(); - } -} diff --git a/src/test/java/com/lambdaworks/redis/support/WithConnectionTest.java b/src/test/java/com/lambdaworks/redis/support/WithConnectionTest.java deleted file mode 100644 index 74ef53584f..0000000000 --- a/src/test/java/com/lambdaworks/redis/support/WithConnectionTest.java +++ /dev/null @@ -1,65 +0,0 @@ -package com.lambdaworks.redis.support; - -import static org.assertj.core.api.Assertions.assertThat; -import static org.assertj.core.api.Assertions.fail; - -import com.lambdaworks.redis.api.sync.RedisCommands; -import org.junit.Test; - -import com.lambdaworks.redis.AbstractRedisClientTest; -import com.lambdaworks.redis.RedisConnection; -import com.lambdaworks.redis.RedisConnectionPool; - -public class WithConnectionTest extends AbstractRedisClientTest { - - @Test - public void testPooling() throws Exception { - final RedisConnectionPool> pool = client.pool(); - - assertThat(pool.getNumActive()).isEqualTo(0); - assertThat(pool.getNumIdle()).isEqualTo(0); - - new WithConnection>(pool) { - - @Override - protected void run(RedisCommands connection) { - connection.set("key", "value"); - String result = connection.get("key"); - assertThat(result).isEqualTo("value"); - - assertThat(pool.getNumActive()).isEqualTo(1); - assertThat(pool.getNumIdle()).isEqualTo(0); - } - }; - - assertThat(pool.getNumActive()).isEqualTo(0); - assertThat(pool.getNumIdle()).isEqualTo(1); - - } - - @Test - public void testPoolingWithException() throws Exception { - final RedisConnectionPool> pool = client.pool(); - - assertThat(pool.getNumActive()).isEqualTo(0); - assertThat(pool.getNumIdle()).isEqualTo(0); - - try { - new WithConnection>(pool) { - - @Override - protected void run(RedisCommands connection) { - connection.set("key", "value"); - throw new IllegalStateException("test"); - } - }; - - fail("Missing Exception"); - } catch (Exception e) { - } - - assertThat(pool.getNumActive()).isEqualTo(0); - assertThat(pool.getNumIdle()).isEqualTo(1); - - } -} diff --git a/src/test/java/io/lettuce/RedisBug.java b/src/test/java/io/lettuce/RedisBug.java new file mode 100644 index 0000000000..9d09278994 --- /dev/null +++ b/src/test/java/io/lettuce/RedisBug.java @@ -0,0 +1,33 @@ +/* + * Copyright 2019-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce; + +import java.lang.annotation.*; + +import org.junit.jupiter.api.Disabled; + +/** + * Annotations for tests disabled due to a Redis bug. + * + * @author Mark Paluch + */ +@Target({ ElementType.TYPE, ElementType.METHOD }) +@Retention(RetentionPolicy.RUNTIME) +@Documented +@Disabled("Redis Bug") +public @interface RedisBug { + String value() default ""; +} diff --git a/src/test/java/io/lettuce/apigenerator/CompilationUnitFactory.java b/src/test/java/io/lettuce/apigenerator/CompilationUnitFactory.java new file mode 100644 index 0000000000..87e039c888 --- /dev/null +++ b/src/test/java/io/lettuce/apigenerator/CompilationUnitFactory.java @@ -0,0 +1,217 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.apigenerator; + +import java.io.File; +import java.io.FileOutputStream; +import java.io.IOException; +import java.util.*; +import java.util.function.Consumer; +import java.util.function.Function; +import java.util.function.Predicate; +import java.util.function.Supplier; + +import org.springframework.util.StringUtils; + +import com.github.javaparser.JavaParser; +import com.github.javaparser.ast.*; +import com.github.javaparser.ast.body.ClassOrInterfaceDeclaration; +import com.github.javaparser.ast.body.MethodDeclaration; +import com.github.javaparser.ast.comments.Comment; +import com.github.javaparser.ast.comments.JavadocComment; +import com.github.javaparser.ast.expr.Name; +import com.github.javaparser.ast.expr.SimpleName; +import com.github.javaparser.ast.type.ClassOrInterfaceType; +import com.github.javaparser.ast.type.Type; +import com.github.javaparser.ast.type.TypeParameter; +import com.github.javaparser.ast.visitor.VoidVisitorAdapter; + +/** + * @author Mark Paluch + */ +class CompilationUnitFactory { + + private File templateFile; + private File sources; + private File target; + private String targetPackage; + private String targetName; + + private Function typeDocFunction; + private Map, Function> methodReturnTypeMutation; + private Predicate methodFilter; + private Supplier> importSupplier; + private Consumer typeMutator; + private Function methodCommentMutator; + + private CompilationUnit template; + private CompilationUnit result = new CompilationUnit(); + private ClassOrInterfaceDeclaration resultType; + + public CompilationUnitFactory(File templateFile, File sources, String targetPackage, String targetName, + Function typeDocFunction, Function methodReturnTypeFunction, + Predicate methodFilter, Supplier> importSupplier, + Consumer typeMutator, Function methodCommentMutator) { + + this.templateFile = templateFile; + this.sources = sources; + this.targetPackage = targetPackage; + this.targetName = targetName; + this.typeDocFunction = typeDocFunction; + this.methodFilter = methodFilter; + this.importSupplier = importSupplier; + this.typeMutator = typeMutator; + 
this.methodCommentMutator = methodCommentMutator; + this.methodReturnTypeMutation = new LinkedHashMap<>(); + + this.methodReturnTypeMutation.put(it -> true, methodReturnTypeFunction); + + this.target = new File(sources, targetPackage.replace('.', '/') + "/" + targetName + ".java"); + } + + public void createInterface() throws Exception { + + result.setPackageDeclaration(new PackageDeclaration(new Name(targetPackage))); + + template = JavaParser.parse(templateFile); + + ClassOrInterfaceDeclaration templateTypeDeclaration = (ClassOrInterfaceDeclaration) template.getTypes().get(0); + resultType = new ClassOrInterfaceDeclaration(EnumSet.of(Modifier.PUBLIC), true, targetName); + if (templateTypeDeclaration.getExtendedTypes() != null) { + resultType.setExtendedTypes(templateTypeDeclaration.getExtendedTypes()); + } + + if (!templateTypeDeclaration.getTypeParameters().isEmpty()) { + resultType.setTypeParameters(new NodeList<>()); + for (TypeParameter typeParameter : templateTypeDeclaration.getTypeParameters()) { + resultType.getTypeParameters().add( + new TypeParameter(typeParameter.getName().getIdentifier(), typeParameter.getTypeBound())); + } + } + + resultType.setComment(new JavadocComment(typeDocFunction.apply(templateTypeDeclaration.getComment().orElse(null) + .getContent()))); + result.setComment(template.getComment().orElse(null)); + + result.setImports(new NodeList<>()); + + result.addType(resultType); + resultType.setParentNode(result); + + if (template.getImports() != null) { + result.getImports().addAll(template.getImports()); + } + List importLines = importSupplier.get(); + for (String importLine : importLines) { + result.getImports().add(new ImportDeclaration(importLine, false, false)); + } + + new MethodVisitor().visit(template, null); + + if (typeMutator != null) { + typeMutator.accept(resultType); + } + + writeResult(); + + } + + public void keepMethodSignaturesFor(Set methodSignaturesToKeep) { + + this.methodReturnTypeMutation.put(methodDeclaration -> contains(methodSignaturesToKeep, methodDeclaration), + MethodDeclaration::getType); + } + + private void writeResult() throws IOException { + + FileOutputStream fos = new FileOutputStream(target); + fos.write(result.toString().getBytes()); + fos.close(); + } + + public static Type createParametrizedType(String baseType, String... typeArguments) { + + NodeList args = new NodeList<>(); + + Arrays.stream(typeArguments).map(it -> { + + if (it.contains("[]")) { + return it; + } + + return StringUtils.capitalize(it); + }).map(it -> new ClassOrInterfaceType(null, it)).forEach(args::add); + + return new ClassOrInterfaceType(null, new SimpleName(baseType), args); + } + + public static boolean contains(Collection haystack, MethodDeclaration needle) { + + ClassOrInterfaceDeclaration declaringClass = (ClassOrInterfaceDeclaration) needle.getParentNode().get(); + + return haystack.contains(needle.getNameAsString()) + || haystack.contains(declaringClass.getNameAsString() + "." + needle.getNameAsString()); + } + + /** + * Simple visitor implementation for visiting MethodDeclaration nodes. 
+ */ + private class MethodVisitor extends VoidVisitorAdapter { + + @Override + public void visit(MethodDeclaration parsedDeclaration, Object arg) { + + if (!methodFilter.test(parsedDeclaration)) { + return; + } + + if (parsedDeclaration.getNameAsString().equals("close")) { + System.out.println(); + } + Type returnType = getMethodReturnType(parsedDeclaration); + + MethodDeclaration method = new MethodDeclaration(parsedDeclaration.getModifiers(), + parsedDeclaration.getAnnotations(), parsedDeclaration.getTypeParameters(), returnType, + parsedDeclaration.getName(), parsedDeclaration.getParameters(), parsedDeclaration.getThrownExceptions(), + null); + + if (methodCommentMutator != null) { + method.setComment(methodCommentMutator.apply(parsedDeclaration.getComment().orElse(null))); + } else { + method.setComment(parsedDeclaration.getComment().orElse(null)); + } + + resultType.addMember(method); + } + + private Type getMethodReturnType(MethodDeclaration parsedDeclaration) { + + List, Function>> entries = new ArrayList<>( + methodReturnTypeMutation.entrySet()); + + Collections.reverse(entries); + + for (Map.Entry, Function> entry : entries) { + + if (entry.getKey().test(parsedDeclaration)) { + return entry.getValue().apply(parsedDeclaration); + } + } + + return null; + } + } +} diff --git a/src/test/java/io/lettuce/apigenerator/Constants.java b/src/test/java/io/lettuce/apigenerator/Constants.java new file mode 100644 index 0000000000..2caf4a9586 --- /dev/null +++ b/src/test/java/io/lettuce/apigenerator/Constants.java @@ -0,0 +1,32 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.apigenerator; + +import java.io.File; + +/** + * @author Mark Paluch + */ +class Constants { + + public static final String[] TEMPLATE_NAMES = { "BaseRedisCommands", "RedisGeoCommands", "RedisHashCommands", + "RedisHLLCommands", "RedisKeyCommands", "RedisListCommands", "RedisScriptingCommands", "RedisSentinelCommands", + "RedisServerCommands", "RedisSetCommands", "RedisSortedSetCommands", "RedisStreamCommands", "RedisStringCommands", + "RedisTransactionalCommands" }; + + public static final File TEMPLATES = new File("src/main/templates"); + public static final File SOURCES = new File("src/main/java"); +} diff --git a/src/test/java/io/lettuce/apigenerator/CreateAsyncApi.java b/src/test/java/io/lettuce/apigenerator/CreateAsyncApi.java new file mode 100644 index 0000000000..7589cc5f85 --- /dev/null +++ b/src/test/java/io/lettuce/apigenerator/CreateAsyncApi.java @@ -0,0 +1,114 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.apigenerator; + +import java.io.File; +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; +import java.util.Set; +import java.util.function.Function; +import java.util.function.Supplier; + +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.Parameterized; + +import com.github.javaparser.ast.body.MethodDeclaration; +import com.github.javaparser.ast.type.Type; + +import io.lettuce.core.internal.LettuceSets; + +/** + * Create async API based on the templates. + * + * @author Mark Paluch + */ +@RunWith(Parameterized.class) +public class CreateAsyncApi { + + private Set KEEP_METHOD_RESULT_TYPE = LettuceSets.unmodifiableSet("shutdown", "debugOom", "debugSegfault", + "digest", "close", "isOpen", "BaseRedisCommands.reset", "getStatefulConnection", "setAutoFlushCommands", + "flushCommands"); + + private CompilationUnitFactory factory; + + @Parameterized.Parameters(name = "Create {0}") + public static List arguments() { + List result = new ArrayList<>(); + + for (String templateName : Constants.TEMPLATE_NAMES) { + result.add(new Object[] { templateName }); + } + + return result; + } + + /** + * @param templateName + */ + public CreateAsyncApi(String templateName) { + + String targetName = templateName.replace("Commands", "AsyncCommands"); + + File templateFile = new File(Constants.TEMPLATES, "io/lettuce/core/api/" + templateName + ".java"); + String targetPackage; + + if (templateName.contains("RedisSentinel")) { + targetPackage = "io.lettuce.core.sentinel.api.async"; + } else { + targetPackage = "io.lettuce.core.api.async"; + } + + factory = new CompilationUnitFactory(templateFile, Constants.SOURCES, targetPackage, targetName, commentMutator(), + methodTypeMutator(), methodDeclaration -> true, importSupplier(), null, null); + + factory.keepMethodSignaturesFor(KEEP_METHOD_RESULT_TYPE); + } + + /** + * Mutate type comment. + * + * @return + */ + Function commentMutator() { + return s -> s.replaceAll("\\$\\{intent\\}", "Asynchronous executed commands") + "* @generated by " + + getClass().getName() + "\r\n "; + } + + /** + * Mutate type to async result. + * + * @return + */ + Function methodTypeMutator() { + return method -> CompilationUnitFactory.createParametrizedType("RedisFuture", method.getType().toString()); + } + + /** + * Supply additional imports. + * + * @return + */ + Supplier> importSupplier() { + return () -> Collections.singletonList("io.lettuce.core.RedisFuture"); + } + + @Test + public void createInterface() throws Exception { + factory.createInterface(); + } +} diff --git a/src/test/java/io/lettuce/apigenerator/CreateAsyncNodeSelectionClusterApi.java b/src/test/java/io/lettuce/apigenerator/CreateAsyncNodeSelectionClusterApi.java new file mode 100644 index 0000000000..5f59e9c1d9 --- /dev/null +++ b/src/test/java/io/lettuce/apigenerator/CreateAsyncNodeSelectionClusterApi.java @@ -0,0 +1,120 @@ +/* + * Copyright 2011-2020 the original author or authors. 
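For orientation: `CreateAsyncApi` above reads a template interface from `src/main/templates`, wraps each return type in `RedisFuture` via `methodTypeMutator()` (except for the signatures listed in `KEEP_METHOD_RESULT_TYPE`, which keep their declared type), and writes the generated interface to `src/main/java`. A minimal before/after sketch of that transformation, using `del` from the key-commands template as an assumed example:

```java
// Template (src/main/templates, simplified excerpt):
public interface RedisKeyCommands<K, V> {

    /**
     * Delete one or more keys.
     */
    Long del(K... keys);
}

// Generated asynchronous API (io.lettuce.core.api.async, simplified;
// the io.lettuce.core.RedisFuture import is contributed by importSupplier()):
public interface RedisKeyAsyncCommands<K, V> {

    /**
     * Delete one or more keys.
     */
    RedisFuture<Long> del(K... keys);
}
```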
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.apigenerator; + +import java.io.File; +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; +import java.util.Set; +import java.util.function.Function; +import java.util.function.Predicate; +import java.util.function.Supplier; + +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.Parameterized; + +import com.github.javaparser.ast.body.MethodDeclaration; +import com.github.javaparser.ast.type.Type; + +import io.lettuce.core.internal.LettuceSets; + +/** + * Create async API based on the templates. + * + * @author Mark Paluch + */ +@RunWith(Parameterized.class) +public class CreateAsyncNodeSelectionClusterApi { + + private Set FILTER_METHODS = LettuceSets.unmodifiableSet("shutdown", "debugOom", "debugSegfault", "digest", "close", + "isOpen", "BaseRedisCommands.reset", "readOnly", "readWrite", "setAutoFlushCommands", "flushCommands"); + + private CompilationUnitFactory factory; + + @Parameterized.Parameters(name = "Create {0}") + public static List arguments() { + List result = new ArrayList<>(); + + for (String templateName : Constants.TEMPLATE_NAMES) { + if (templateName.contains("Transactional") || templateName.contains("Sentinel")) { + continue; + } + result.add(new Object[] { templateName }); + } + + return result; + } + + /** + * @param templateName + */ + public CreateAsyncNodeSelectionClusterApi(String templateName) { + + String targetName = templateName.replace("Commands", "AsyncCommands").replace("Redis", "NodeSelection"); + File templateFile = new File(Constants.TEMPLATES, "io/lettuce/core/api/" + templateName + ".java"); + String targetPackage = "io.lettuce.core.cluster.api.async"; + + factory = new CompilationUnitFactory(templateFile, Constants.SOURCES, targetPackage, targetName, commentMutator(), + methodTypeMutator(), methodFilter(), importSupplier(), null, null); + factory.keepMethodSignaturesFor(FILTER_METHODS); + } + + /** + * Mutate type comment. + * + * @return + */ + Function commentMutator() { + return s -> s.replaceAll("\\$\\{intent\\}", "Asynchronous executed commands on a node selection") + "* @generated by " + + getClass().getName() + "\r\n "; + } + + /** + * Method filter + * + * @return + */ + Predicate methodFilter() { + return method -> !CompilationUnitFactory.contains(FILTER_METHODS, method); + } + + /** + * Mutate type to async result. + * + * @return + */ + Function methodTypeMutator() { + return method -> { + return CompilationUnitFactory.createParametrizedType("AsyncExecutions", method.getType().toString()); + }; + } + + /** + * Supply additional imports. 
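For orientation: the `NodeSelection*AsyncCommands` interfaces generated by this class wrap every return type in `AsyncExecutions`, so a command issued against a selection of cluster nodes yields one result per selected node. A rough usage sketch, assuming an existing `StatefulRedisClusterConnection<String, String>`; the surrounding class and key are illustrative:

```java
import io.lettuce.core.cluster.api.StatefulRedisClusterConnection;
import io.lettuce.core.cluster.api.async.AsyncExecutions;
import io.lettuce.core.cluster.api.async.AsyncNodeSelection;

class NodeSelectionSketch {

    void deleteOnAllMasters(StatefulRedisClusterConnection<String, String> connection) {

        // Select all master nodes of the cluster
        AsyncNodeSelection<String, String> masters = connection.async().masters();

        // DEL runs on every selected node; AsyncExecutions holds one future per node
        AsyncExecutions<Long> deleted = masters.commands().del("key");
    }
}
```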
+ * + * @return + */ + Supplier> importSupplier() { + return () -> Collections.singletonList("io.lettuce.core.RedisFuture"); + } + + @Test + public void createInterface() throws Exception { + factory.createInterface(); + } +} diff --git a/src/test/java/io/lettuce/apigenerator/CreateReactiveApi.java b/src/test/java/io/lettuce/apigenerator/CreateReactiveApi.java new file mode 100644 index 0000000000..fe5a221ce5 --- /dev/null +++ b/src/test/java/io/lettuce/apigenerator/CreateReactiveApi.java @@ -0,0 +1,181 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.apigenerator; + +import java.io.File; +import java.util.*; +import java.util.function.Function; +import java.util.function.Supplier; + +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.Parameterized; + +import com.github.javaparser.ast.body.ClassOrInterfaceDeclaration; +import com.github.javaparser.ast.body.MethodDeclaration; +import com.github.javaparser.ast.comments.Comment; +import com.github.javaparser.ast.type.Type; + +import io.lettuce.core.internal.LettuceSets; + +/** + * Create reactive API based on the templates. + * + * @author Mark Paluch + */ +@RunWith(Parameterized.class) +public class CreateReactiveApi { + + private static Set KEEP_METHOD_RESULT_TYPE = LettuceSets.unmodifiableSet("digest", "close", "isOpen", + "BaseRedisCommands.reset", "getStatefulConnection", "setAutoFlushCommands", "flushCommands"); + + private static Set FORCE_FLUX_RESULT = LettuceSets.unmodifiableSet("eval", "evalsha", "dispatch"); + + private static Set VALUE_WRAP = LettuceSets.unmodifiableSet("geopos", "bitfield"); + + private static final Map RESULT_SPEC; + + static { + + Map resultSpec = new HashMap<>(); + resultSpec.put("geopos", "Flux>"); + resultSpec.put("bitfield", "Flux>"); + + RESULT_SPEC = resultSpec; + } + + private CompilationUnitFactory factory; + + @Parameterized.Parameters(name = "Create {0}") + public static List arguments() { + List result = new ArrayList<>(); + + for (String templateName : Constants.TEMPLATE_NAMES) { + result.add(new Object[] { templateName }); + } + + return result; + } + + /** + * + * @param templateName + */ + public CreateReactiveApi(String templateName) { + + String targetName = templateName.replace("Commands", "ReactiveCommands"); + File templateFile = new File(Constants.TEMPLATES, "io/lettuce/core/api/" + templateName + ".java"); + String targetPackage; + + if (templateName.contains("RedisSentinel")) { + targetPackage = "io.lettuce.core.sentinel.api.reactive"; + } else { + targetPackage = "io.lettuce.core.api.reactive"; + } + + factory = new CompilationUnitFactory(templateFile, Constants.SOURCES, targetPackage, targetName, commentMutator(), + methodTypeMutator(), methodDeclaration -> true, importSupplier(), null, methodCommentMutator()); + factory.keepMethodSignaturesFor(KEEP_METHOD_RESULT_TYPE); + } + + /** + * Mutate type comment. 
+ * + * @return + */ + Function commentMutator() { + return s -> s.replaceAll("\\$\\{intent\\}", "Reactive executed commands").replaceAll("@since 3.0", "@since 4.0") + + "* @generated by " + getClass().getName() + "\r\n "; + } + + Function methodCommentMutator() { + return comment -> { + if (comment != null && comment.getContent() != null) { + comment.setContent( + comment.getContent().replaceAll("List<(.*)>", "$1").replaceAll("Set<(.*)>", "$1")); + } + return comment; + }; + } + + /** + * Mutate type to async result. + * + * @return + */ + Function methodTypeMutator() { + return method -> { + + ClassOrInterfaceDeclaration declaringClass = (ClassOrInterfaceDeclaration) method.getParentNode().get(); + + String baseType = "Mono"; + String typeArgument = method.getType().toString().trim(); + + if (getResultType(method, declaringClass) != null) { + typeArgument = getResultType(method, declaringClass); + } else if (CompilationUnitFactory.contains(FORCE_FLUX_RESULT, method)) { + baseType = "Flux"; + } else if (typeArgument.startsWith("List<")) { + baseType = "Flux"; + typeArgument = typeArgument.substring(5, typeArgument.length() - 1); + } else if (typeArgument.startsWith("Set<")) { + baseType = "Flux"; + typeArgument = typeArgument.substring(4, typeArgument.length() - 1); + } else { + baseType = "Mono"; + } + + if (CompilationUnitFactory.contains(VALUE_WRAP, method)) { + typeArgument = String.format("Value<%s>", typeArgument); + } + + return CompilationUnitFactory.createParametrizedType(baseType, typeArgument); + }; + } + + + + private String getResultType(MethodDeclaration method, + ClassOrInterfaceDeclaration classOfMethod) { + + if(RESULT_SPEC.containsKey(method.getName())){ + return RESULT_SPEC.get(method.getName()); + } + + String key = classOfMethod.getName() + "." + method.getName(); + + if(RESULT_SPEC.containsKey(key)){ + return RESULT_SPEC.get(key); + } + + return null; + } + + + /** + * Supply additional imports. + * + * @return + */ + Supplier> importSupplier() { + return () -> Arrays.asList("reactor.core.publisher.Flux", "reactor.core.publisher.Mono"); + } + + @Test + public void createInterface() throws Exception { + factory.createInterface(); + } +} diff --git a/src/test/java/io/lettuce/apigenerator/CreateSyncApi.java b/src/test/java/io/lettuce/apigenerator/CreateSyncApi.java new file mode 100644 index 0000000000..5539a50bf8 --- /dev/null +++ b/src/test/java/io/lettuce/apigenerator/CreateSyncApi.java @@ -0,0 +1,120 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
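For orientation: the reactive type mapping implemented by `methodTypeMutator()` above is easiest to read from examples: scalar template types become `Mono`, `List`/`Set` element types become `Flux`, methods in `FORCE_FLUX_RESULT` are forced to `Flux`, and `geopos`/`bitfield` take the `Value`-wrapped signatures from `RESULT_SPEC`. A simplified sketch of generated signatures, collected here as assumed excerpts rather than a real library interface:

```java
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

import io.lettuce.core.GeoCoordinates;
import io.lettuce.core.Value;

// Collects assumed examples of the template-to-reactive mapping; not a real library interface.
public interface ReactiveMappingSketch<K, V> {

    Mono<V> get(K key);                 // template: V get(K key)

    Flux<K> keys(K pattern);            // template: List<K> keys(K pattern)

    Flux<V> smembers(K key);            // template: Set<V> smembers(K key)

    Flux<Value<GeoCoordinates>> geopos(K key, V... members); // explicit RESULT_SPEC entry
}
```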
+ */ +package io.lettuce.apigenerator; + +import java.io.File; +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; +import java.util.Set; +import java.util.function.Function; +import java.util.function.Predicate; +import java.util.function.Supplier; + +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.Parameterized; + +import com.github.javaparser.ast.body.MethodDeclaration; +import com.github.javaparser.ast.type.Type; + +import io.lettuce.core.internal.LettuceSets; + +/** + * Create sync API based on the templates. + * + * @author Mark Paluch + */ +@RunWith(Parameterized.class) +public class CreateSyncApi { + + private Set FILTER_METHODS = LettuceSets.unmodifiableSet("setAutoFlushCommands", "flushCommands"); + + private CompilationUnitFactory factory; + + @Parameterized.Parameters(name = "Create {0}") + public static List arguments() { + List result = new ArrayList<>(); + + for (String templateName : Constants.TEMPLATE_NAMES) { + result.add(new Object[] { templateName }); + } + + return result; + } + + /** + * + * @param templateName + */ + public CreateSyncApi(String templateName) { + + String targetName = templateName; + File templateFile = new File(Constants.TEMPLATES, "io/lettuce/core/api/" + templateName + ".java"); + String targetPackage; + + if (templateName.contains("RedisSentinel")) { + targetPackage = "io.lettuce.core.sentinel.api.sync"; + } else { + targetPackage = "io.lettuce.core.api.sync"; + } + + factory = new CompilationUnitFactory(templateFile, Constants.SOURCES, targetPackage, targetName, commentMutator(), + methodTypeMutator(), methodFilter(), importSupplier(), null, null); + } + + /** + * Mutate type comment. + * + * @return + */ + Function commentMutator() { + return s -> s.replaceAll("\\$\\{intent\\}", "Synchronous executed commands") + "* @generated by " + + getClass().getName() + "\r\n "; + } + + /** + * Method filter + * + * @return + */ + Predicate methodFilter() { + return method -> !CompilationUnitFactory.contains(FILTER_METHODS, method); + } + + /** + * Mutate type to async result. + * + * @return + */ + Function methodTypeMutator() { + return MethodDeclaration::getType; + } + + /** + * Supply additional imports. + * + * @return + */ + Supplier> importSupplier() { + return Collections::emptyList; + } + + @Test + public void createInterface() throws Exception { + factory.createInterface(); + } +} diff --git a/src/test/java/io/lettuce/apigenerator/CreateSyncNodeSelectionClusterApi.java b/src/test/java/io/lettuce/apigenerator/CreateSyncNodeSelectionClusterApi.java new file mode 100644 index 0000000000..13c206000f --- /dev/null +++ b/src/test/java/io/lettuce/apigenerator/CreateSyncNodeSelectionClusterApi.java @@ -0,0 +1,133 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.apigenerator; + +import java.io.File; +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; +import java.util.Set; +import java.util.function.Function; +import java.util.function.Predicate; +import java.util.function.Supplier; + +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.Parameterized; + +import com.github.javaparser.ast.body.ClassOrInterfaceDeclaration; +import com.github.javaparser.ast.body.MethodDeclaration; +import com.github.javaparser.ast.type.Type; + +import io.lettuce.core.internal.LettuceSets; + +/** + * Create sync API based on the templates. + * + * @author Mark Paluch + */ +@RunWith(Parameterized.class) +public class CreateSyncNodeSelectionClusterApi { + + private Set FILTER_METHODS = LettuceSets.unmodifiableSet("shutdown", "debugOom", "debugSegfault", "digest", + "close", "isOpen", "BaseRedisCommands.reset", "readOnly", "readWrite", "dispatch", "setAutoFlushCommands", "flushCommands"); + + private CompilationUnitFactory factory; + + @Parameterized.Parameters(name = "Create {0}") + public static List arguments() { + List result = new ArrayList<>(); + + for (String templateName : Constants.TEMPLATE_NAMES) { + if (templateName.contains("Transactional") || templateName.contains("Sentinel")) { + continue; + } + result.add(new Object[] { templateName }); + } + + return result; + } + + /** + * @param templateName + */ + public CreateSyncNodeSelectionClusterApi(String templateName) { + + String targetName = templateName.replace("Redis", "NodeSelection"); + File templateFile = new File(Constants.TEMPLATES, "io/lettuce/core/api/" + templateName + ".java"); + String targetPackage = "io.lettuce.core.cluster.api.sync"; + + // todo: remove AutoCloseable from BaseNodeSelectionAsyncCommands + factory = new CompilationUnitFactory(templateFile, Constants.SOURCES, targetPackage, targetName, commentMutator(), + methodTypeMutator(), methodFilter(), importSupplier(), null, null); + factory.keepMethodSignaturesFor(FILTER_METHODS); + } + + /** + * Mutate type comment. + * + * @return + */ + Function commentMutator() { + return s -> s.replaceAll("\\$\\{intent\\}", "Synchronous executed commands on a node selection") + "* @generated by " + + getClass().getName() + "\r\n "; + } + + /** + * Mutate type to async result. + * + * @return + */ + Predicate methodFilter() { + + return method -> { + + ClassOrInterfaceDeclaration classOfMethod = (ClassOrInterfaceDeclaration) method.getParentNode().orElse(null); + if (FILTER_METHODS.contains(method.getName().getIdentifier()) + || FILTER_METHODS.contains(classOfMethod.getName().getIdentifier() + "." + method.getName())) { + return false; + } + + return true; + }; + } + + /** + * Mutate type to async result. + * + * @return + */ + Function methodTypeMutator() { + + return method -> { + return CompilationUnitFactory.createParametrizedType("Executions", method.getType().toString()); + }; + } + + /** + * Supply additional imports. 
+ * + * @return + */ + Supplier> importSupplier() { + return Collections::emptyList; + } + + @Test + public void createInterface() throws Exception { + factory.createInterface(); + } +} diff --git a/src/test/java/io/lettuce/apigenerator/GenerateCommandInterfaces.java b/src/test/java/io/lettuce/apigenerator/GenerateCommandInterfaces.java new file mode 100644 index 0000000000..bfde68d1ba --- /dev/null +++ b/src/test/java/io/lettuce/apigenerator/GenerateCommandInterfaces.java @@ -0,0 +1,31 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.apigenerator; + +import org.junit.runner.RunWith; +import org.junit.runners.Suite; + +/** + * Entrypoint to generate all Redis command interfaces from {@code src/main/templates}. + * + * @author Mark Paluch + */ +@RunWith(Suite.class) +@Suite.SuiteClasses({ CreateAsyncApi.class, CreateSyncApi.class, CreateReactiveApi.class, + CreateAsyncNodeSelectionClusterApi.class, CreateSyncNodeSelectionClusterApi.class }) +public class GenerateCommandInterfaces { + +} diff --git a/src/test/java/io/lettuce/category/SlowTests.java b/src/test/java/io/lettuce/category/SlowTests.java new file mode 100644 index 0000000000..84e7cab627 --- /dev/null +++ b/src/test/java/io/lettuce/category/SlowTests.java @@ -0,0 +1,23 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.category; + +/** + * @author Mark Paluch + */ +public @interface SlowTests { + +} diff --git a/src/test/java/io/lettuce/codec/CRC16UnitTests.java b/src/test/java/io/lettuce/codec/CRC16UnitTests.java new file mode 100644 index 0000000000..2f9f8d9549 --- /dev/null +++ b/src/test/java/io/lettuce/codec/CRC16UnitTests.java @@ -0,0 +1,70 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
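For orientation: the `CRC16UnitTests` fixtures that follow pin the checksum implementation to the reference vectors from the Redis Cluster specification, and cluster key slots are derived from this checksum modulo 16384. A small sanity-check sketch against the well-known `123456789` vector (class name and output format are illustrative):

```java
import io.lettuce.core.codec.CRC16;

public class Crc16SlotSketch {

    public static void main(String[] args) {

        byte[] key = "123456789".getBytes();

        int checksum = CRC16.crc16(key); // 0x31C3 per the reference vector
        int slot = checksum % 16384;     // Redis Cluster hash slot (hash tags ignored here)

        System.out.printf("crc16=0x%X slot=%d%n", checksum, slot);
    }
}
```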
+ */ +package io.lettuce.codec; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.util.ArrayList; +import java.util.List; + +import org.junit.jupiter.params.ParameterizedTest; +import org.junit.jupiter.params.provider.MethodSource; + +import io.lettuce.core.codec.CRC16; + +/** + * @author Mark Paluch + */ +class CRC16UnitTests { + + static List parameters() { + + List parameters = new ArrayList<>(); + + parameters.add(new Fixture("".getBytes(), 0x0)); + parameters.add(new Fixture("123456789".getBytes(), 0x31C3)); + parameters.add(new Fixture("sfger132515".getBytes(), 0xA45C)); + parameters.add(new Fixture("hae9Napahngaikeethievubaibogiech".getBytes(), 0x58CE)); + parameters.add(new Fixture("AAAAAAAAAAAAAAAAAAAAAA".getBytes(), 0x92cd)); + parameters.add(new Fixture("Hello, World!".getBytes(), 0x4FD6)); + + return parameters; + } + + @ParameterizedTest + @MethodSource("parameters") + void testCRC16(Fixture fixture) { + + int result = CRC16.crc16(fixture.bytes); + assertThat(result).describedAs("Expects " + Integer.toHexString(fixture.expected)).isEqualTo(fixture.expected); + } + + static class Fixture { + + final byte[] bytes; + final int expected; + + Fixture(byte[] bytes, int expected) { + this.bytes = bytes; + this.expected = expected; + } + + @Override + public String toString() { + return "Expects 0x" + Integer.toHexString(expected).toUpperCase(); + } + } +} diff --git a/src/test/java/io/lettuce/core/AbstractRedisClientTest.java b/src/test/java/io/lettuce/core/AbstractRedisClientTest.java new file mode 100644 index 0000000000..82eacea4f8 --- /dev/null +++ b/src/test/java/io/lettuce/core/AbstractRedisClientTest.java @@ -0,0 +1,80 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core; + +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeAll; +import org.junit.jupiter.api.BeforeEach; + +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.test.resource.DefaultRedisClient; +import io.lettuce.test.resource.TestClientResources; + +/** + * @author Will Glozer + * @author Mark Paluch + */ +public abstract class AbstractRedisClientTest extends TestSupport { + + protected static RedisClient client; + protected RedisCommands redis; + + @BeforeAll + public static void setupClient() { + client = DefaultRedisClient.get(); + client.setOptions(ClientOptions.create()); + } + + private static RedisClient newRedisClient() { + return RedisClient.create(TestClientResources.get(), RedisURI.Builder.redis(host, port).build()); + } + + protected RedisCommands connect() { + RedisCommands connect = client.connect().sync(); + return connect; + } + + @BeforeEach + public void openConnection() throws Exception { + client.setOptions(ClientOptions.builder().build()); + redis = connect(); + boolean scriptRunning; + do { + + scriptRunning = false; + + try { + redis.flushall(); + redis.flushdb(); + } catch (RedisBusyException e) { + scriptRunning = true; + try { + redis.scriptKill(); + } catch (RedisException e1) { + // I know, it sounds crazy, but there is a possibility where one of the commands above raises BUSY. + // Meanwhile the script ends and a call to SCRIPT KILL says NOTBUSY. + } + } + } while (scriptRunning); + } + + @AfterEach + public void closeConnection() throws Exception { + if (redis != null) { + redis.getStatefulConnection().close(); + } + } +} diff --git a/src/test/java/io/lettuce/core/AsyncConnectionIntegrationTests.java b/src/test/java/io/lettuce/core/AsyncConnectionIntegrationTests.java new file mode 100644 index 0000000000..2a09ee8f75 --- /dev/null +++ b/src/test/java/io/lettuce/core/AsyncConnectionIntegrationTests.java @@ -0,0 +1,172 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.time.Duration; +import java.util.ArrayList; +import java.util.List; +import java.util.concurrent.Future; +import java.util.concurrent.TimeUnit; + +import javax.inject.Inject; + +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.async.RedisAsyncCommands; +import io.lettuce.core.internal.Futures; +import io.lettuce.test.Delay; +import io.lettuce.test.LettuceExtension; +import io.lettuce.test.TestFutures; + +/** + * @author Will Glozer + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +class AsyncConnectionIntegrationTests extends TestSupport { + + private final RedisClient client; + private final StatefulRedisConnection connection; + private final RedisAsyncCommands async; + + @Inject + AsyncConnectionIntegrationTests(RedisClient client, StatefulRedisConnection connection) { + this.client = client; + this.connection = connection; + this.async = connection.async(); + this.connection.sync().flushall(); + } + + @Test + void multi() { + assertThat(TestFutures.getOrTimeout(async.multi())).isEqualTo("OK"); + Future set = async.set(key, value); + Future rpush = async.rpush("list", "1", "2"); + Future> lrange = async.lrange("list", 0, -1); + + assertThat(!set.isDone() && !rpush.isDone() && !rpush.isDone()).isTrue(); + assertThat(TestFutures.getOrTimeout(async.exec())).contains("OK", 2L, list("1", "2")); + + assertThat(TestFutures.getOrTimeout(set)).isEqualTo("OK"); + assertThat(TestFutures.getOrTimeout(rpush)).isEqualTo(2L); + assertThat(TestFutures.getOrTimeout(lrange)).isEqualTo(list("1", "2")); + } + + @Test + void watch() { + assertThat(TestFutures.getOrTimeout(async.watch(key))).isEqualTo("OK"); + + async.set(key, value + "X"); + + async.multi(); + Future set = async.set(key, value); + Future append = async.append(key, "foo"); + assertThat(TestFutures.getOrTimeout(async.exec())).isEmpty(); + assertThat(TestFutures.getOrTimeout(set)).isNull(); + assertThat(TestFutures.getOrTimeout(append)).isNull(); + } + + @Test + void futureListener() { + + final List run = new ArrayList<>(); + + Runnable listener = () -> run.add(new Object()); + + List> futures = new ArrayList<>(); + + for (int i = 0; i < 1000; i++) { + futures.add(async.lpush(key, "" + i)); + } + + TestFutures.awaitOrTimeout(futures); + + RedisAsyncCommands connection = client.connect().async(); + + Long len = TestFutures.getOrTimeout(connection.llen(key)); + assertThat(len.intValue()).isEqualTo(1000); + + RedisFuture> sort = connection.sort(key); + assertThat(sort.isCancelled()).isFalse(); + + sort.thenRun(listener); + + TestFutures.awaitOrTimeout(sort); + Delay.delay(Duration.ofMillis(100)); + + assertThat(run).hasSize(1); + + connection.getStatefulConnection().close(); + } + + @Test + void futureListenerCompleted() { + + final List run = new ArrayList<>(); + + Runnable listener = new Runnable() { + @Override + public void run() { + run.add(new Object()); + } + }; + + RedisAsyncCommands connection = client.connect().async(); + + RedisFuture set = connection.set(key, value); + TestFutures.awaitOrTimeout(set); + + set.thenRun(listener); + + assertThat(run).hasSize(1); + + connection.getStatefulConnection().close(); + } + + @Test + void discardCompletesFutures() { + async.multi(); + Future set = async.set(key, value); + async.discard(); + assertThat(TestFutures.getOrTimeout(set)).isNull(); + } + + @Test 
+ void awaitAll() { + + Future get1 = async.get(key); + Future set = async.set(key, value); + Future get2 = async.get(key); + Future append = async.append(key, value); + + assertThat(Futures.awaitAll(1, TimeUnit.SECONDS, get1, set, get2, append)).isTrue(); + + assertThat(TestFutures.getOrTimeout(get1)).isNull(); + assertThat(TestFutures.getOrTimeout(set)).isEqualTo("OK"); + assertThat(TestFutures.getOrTimeout(get2)).isEqualTo(value); + assertThat(TestFutures.getOrTimeout(append).longValue()).isEqualTo(value.length() * 2); + } + + @Test + void awaitAllTimeout() { + Future> blpop = async.blpop(1, key); + assertThat(Futures.await(1, TimeUnit.NANOSECONDS, blpop)).isFalse(); + } +} diff --git a/src/test/java/io/lettuce/core/AuthenticationIntegrationTests.java b/src/test/java/io/lettuce/core/AuthenticationIntegrationTests.java new file mode 100644 index 0000000000..de93bf5548 --- /dev/null +++ b/src/test/java/io/lettuce/core/AuthenticationIntegrationTests.java @@ -0,0 +1,65 @@ +/* + * Copyright 2019-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import static org.assertj.core.api.Assertions.assertThatThrownBy; + +import javax.inject.Inject; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.output.StatusOutput; +import io.lettuce.core.protocol.CommandArgs; +import io.lettuce.core.protocol.CommandType; +import io.lettuce.test.LettuceExtension; +import io.lettuce.test.condition.EnabledOnCommand; +import io.lettuce.test.settings.TestSettings; + +/** + * Integration test for authentication. + * + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +@EnabledOnCommand("ACL") +class AuthenticationIntegrationTests extends TestSupport { + + @BeforeEach + @Inject + void setUp(StatefulRedisConnection connection) { + + connection.sync().dispatch(CommandType.ACL, new StatusOutput<>(StringCodec.UTF8), + new CommandArgs<>(StringCodec.UTF8).add("SETUSER").add("john").add("on").add(">foobared").add("-@all")); + } + + @Test + @Inject + void authAsJohn(RedisClient client) { + + RedisURI uri = RedisURI.builder().withHost(TestSettings.host()).withPort(TestSettings.port()) + .withAuthentication("john", "foobared").build(); + + StatefulRedisConnection connection = client.connect(uri); + + assertThatThrownBy(() -> connection.sync().info()).hasMessageContaining("NOPERM"); + + connection.close(); + } +} diff --git a/src/test/java/io/lettuce/core/ByteBufferCodec.java b/src/test/java/io/lettuce/core/ByteBufferCodec.java new file mode 100644 index 0000000000..99980d6af3 --- /dev/null +++ b/src/test/java/io/lettuce/core/ByteBufferCodec.java @@ -0,0 +1,52 @@ +/* + * Copyright 2018-2020 the original author or authors. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.nio.ByteBuffer; + +import io.lettuce.core.codec.RedisCodec; + +/** + * @author Mark Paluch + */ +public class ByteBufferCodec implements RedisCodec { + + @Override + public ByteBuffer decodeKey(ByteBuffer bytes) { + + ByteBuffer decoupled = ByteBuffer.allocate(bytes.remaining()); + decoupled.put(bytes); + return (ByteBuffer) decoupled.flip(); + } + + @Override + public ByteBuffer decodeValue(ByteBuffer bytes) { + + ByteBuffer decoupled = ByteBuffer.allocate(bytes.remaining()); + decoupled.put(bytes); + return (ByteBuffer) decoupled.flip(); + } + + @Override + public ByteBuffer encodeKey(ByteBuffer key) { + return key.asReadOnlyBuffer(); + } + + @Override + public ByteBuffer encodeValue(ByteBuffer value) { + return value.asReadOnlyBuffer(); + } +} diff --git a/src/test/java/io/lettuce/core/ClientIntegrationTests.java b/src/test/java/io/lettuce/core/ClientIntegrationTests.java new file mode 100644 index 0000000000..c466a211ab --- /dev/null +++ b/src/test/java/io/lettuce/core/ClientIntegrationTests.java @@ -0,0 +1,235 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
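For orientation: `ByteBufferCodec` above is a test helper implementing `RedisCodec<ByteBuffer, ByteBuffer>`; a codec like this takes effect by passing it to `connect(...)`. A short usage sketch, assuming a locally running Redis; host and payloads are illustrative:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

import io.lettuce.core.ByteBufferCodec;
import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;

public class ByteBufferCodecSketch {

    public static void main(String[] args) {

        RedisClient client = RedisClient.create("redis://localhost:6379");

        // The codec determines how keys and values are (de)serialized on this connection
        StatefulRedisConnection<ByteBuffer, ByteBuffer> connection = client.connect(new ByteBufferCodec());

        connection.sync().set(StandardCharsets.UTF_8.encode("key"), StandardCharsets.UTF_8.encode("value"));

        connection.close();
        client.shutdown();
    }
}
```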
+ */ +package io.lettuce.core; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; + +import java.time.Duration; +import java.util.concurrent.TimeUnit; + +import javax.enterprise.inject.New; +import javax.inject.Inject; + +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.async.RedisAsyncCommands; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.resource.ClientResources; +import io.lettuce.test.Delay; +import io.lettuce.test.LettuceExtension; +import io.lettuce.test.Wait; +import io.lettuce.test.resource.FastShutdown; + +/** + * @author Will Glozer + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +class ClientIntegrationTests extends TestSupport { + + private final RedisClient client; + private final RedisCommands redis; + + @Inject + ClientIntegrationTests(RedisClient client, StatefulRedisConnection connection) { + this.client = client; + this.redis = connection.sync(); + this.redis.flushall(); + } + + @Test + @Inject + void close(@New StatefulRedisConnection connection) { + + connection.close(); + assertThatThrownBy(() -> connection.sync().get(key)).isInstanceOf(RedisException.class); + } + + @Test + void statefulConnectionFromSync() { + assertThat(redis.getStatefulConnection().sync()).isSameAs(redis); + } + + @Test + void statefulConnectionFromAsync() { + RedisAsyncCommands async = client.connect().async(); + assertThat(async.getStatefulConnection().async()).isSameAs(async); + async.getStatefulConnection().close(); + } + + @Test + void statefulConnectionFromReactive() { + RedisAsyncCommands async = client.connect().async(); + assertThat(async.getStatefulConnection().reactive().getStatefulConnection()).isSameAs(async.getStatefulConnection()); + async.getStatefulConnection().close(); + } + + @Test + void timeout() { + + redis.setTimeout(Duration.ofNanos(100)); + assertThatThrownBy(() -> redis.blpop(1, "unknown")).isInstanceOf(RedisCommandTimeoutException.class); + + redis.setTimeout(Duration.ofSeconds(60)); + } + + @Test + void reconnect() { + + redis.set(key, value); + + redis.quit(); + Delay.delay(Duration.ofMillis(100)); + assertThat(redis.get(key)).isEqualTo(value); + redis.quit(); + Delay.delay(Duration.ofMillis(100)); + assertThat(redis.get(key)).isEqualTo(value); + redis.quit(); + Delay.delay(Duration.ofMillis(100)); + assertThat(redis.get(key)).isEqualTo(value); + } + + @Test + void interrupt() { + + StatefulRedisConnection connection = client.connect(); + Thread.currentThread().interrupt(); + assertThatThrownBy(() -> connection.sync().blpop(0, key)).isInstanceOf(RedisCommandInterruptedException.class); + Thread.interrupted(); + + connection.closeAsync(); + } + + @Test + @Inject + void connectFailure(ClientResources clientResources) { + + RedisClient client = RedisClient.create(clientResources, "redis://invalid"); + + assertThatThrownBy(client::connect).isInstanceOf(RedisConnectionException.class) + .hasMessageContaining("Unable to connect"); + + FastShutdown.shutdown(client); + } + + @Test + @Inject + void connectPubSubFailure(ClientResources clientResources) { + + RedisClient client = RedisClient.create(clientResources, "redis://invalid"); + + assertThatThrownBy(client::connectPubSub).isInstanceOf(RedisConnectionException.class) + .hasMessageContaining("Unable to connect"); + FastShutdown.shutdown(client); + } + + @Test + void emptyClient() { + + 
try { + client.connect(); + } catch (IllegalStateException e) { + assertThat(e).hasMessageContaining("RedisURI"); + } + + try { + client.connect().async(); + } catch (IllegalStateException e) { + assertThat(e).hasMessageContaining("RedisURI"); + } + + try { + client.connect((RedisURI) null); + } catch (IllegalArgumentException e) { + assertThat(e).hasMessageContaining("RedisURI"); + } + } + + @Test + void testExceptionWithCause() { + RedisException e = new RedisException(new RuntimeException()); + assertThat(e).hasCauseExactlyInstanceOf(RuntimeException.class); + } + + @Test + void reset() { + + StatefulRedisConnection connection = client.connect(); + RedisAsyncCommands async = connection.async(); + + connection.sync().set(key, value); + async.reset(); + connection.sync().set(key, value); + connection.sync().flushall(); + + RedisFuture> eval = async.blpop(5, key); + + Delay.delay(Duration.ofMillis(500)); + + assertThat(eval.isDone()).isFalse(); + assertThat(eval.isCancelled()).isFalse(); + + async.reset(); + + Wait.untilTrue(eval::isCancelled).waitOrTimeout(); + + assertThat(eval.isCancelled()).isTrue(); + assertThat(eval.isDone()).isTrue(); + + connection.close(); + } + + @Test + void standaloneConnectionShouldSetClientName() { + + RedisURI redisURI = RedisURI.create(host, port); + redisURI.setClientName("my-client"); + + StatefulRedisConnection connection = client.connect(redisURI); + + assertThat(connection.sync().clientGetname()).isEqualTo(redisURI.getClientName()); + + connection.sync().quit(); + Delay.delay(Duration.ofMillis(100)); + Wait.untilTrue(connection::isOpen).waitOrTimeout(); + + assertThat(connection.sync().clientGetname()).isEqualTo(redisURI.getClientName()); + + connection.close(); + } + + @Test + void pubSubConnectionShouldSetClientName() { + + RedisURI redisURI = RedisURI.create(host, port); + redisURI.setClientName("my-client"); + + StatefulRedisConnection connection = client.connectPubSub(redisURI); + + assertThat(connection.sync().clientGetname()).isEqualTo(redisURI.getClientName()); + + connection.sync().quit(); + Delay.delay(Duration.ofMillis(100)); + Wait.untilTrue(connection::isOpen).waitOrTimeout(); + + assertThat(connection.sync().clientGetname()).isEqualTo(redisURI.getClientName()); + + connection.close(); + } +} diff --git a/src/test/java/io/lettuce/core/ClientMetricsIntegrationTests.java b/src/test/java/io/lettuce/core/ClientMetricsIntegrationTests.java new file mode 100644 index 0000000000..03cbecbd79 --- /dev/null +++ b/src/test/java/io/lettuce/core/ClientMetricsIntegrationTests.java @@ -0,0 +1,81 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */
+package io.lettuce.core;
+
+import static org.assertj.core.api.Assertions.assertThat;
+
+import java.util.Collection;
+import java.util.concurrent.LinkedBlockingQueue;
+
+import javax.inject.Inject;
+
+import org.junit.jupiter.api.Test;
+import org.junit.jupiter.api.extension.ExtendWith;
+import org.springframework.test.util.ReflectionTestUtils;
+
+import reactor.core.Disposable;
+import io.lettuce.core.api.StatefulRedisConnection;
+import io.lettuce.core.api.sync.RedisCommands;
+import io.lettuce.core.event.EventBus;
+import io.lettuce.core.event.metrics.CommandLatencyEvent;
+import io.lettuce.core.event.metrics.MetricEventPublisher;
+import io.lettuce.test.LettuceExtension;
+import io.lettuce.test.Wait;
+
+/**
+ * @author Mark Paluch
+ */
+@ExtendWith(LettuceExtension.class)
+class ClientMetricsIntegrationTests extends TestSupport {
+
+    @Test
+    @Inject
+    void testMetricsEvent(RedisClient client, StatefulRedisConnection<String, String> connection) {
+
+        Collection<CommandLatencyEvent> events = new LinkedBlockingQueue<>();
+        EventBus eventBus = client.getResources().eventBus();
+        MetricEventPublisher publisher = (MetricEventPublisher) ReflectionTestUtils.getField(client.getResources(),
+                "metricEventPublisher");
+        publisher.emitMetricsEvent();
+
+        Disposable disposable = eventBus.get().filter(redisEvent -> redisEvent instanceof CommandLatencyEvent)
+                .cast(CommandLatencyEvent.class).doOnNext(events::add).subscribe();
+
+        generateTestData(connection.sync());
+        publisher.emitMetricsEvent();
+
+        Wait.untilTrue(() -> !events.isEmpty()).waitOrTimeout();
+
+        assertThat(events).isNotEmpty();
+
+        disposable.dispose();
+    }
+
+    private void generateTestData(RedisCommands<String, String> redis) {
+        redis.set(key, value);
+        redis.set(key, value);
+        redis.set(key, value);
+        redis.set(key, value);
+        redis.set(key, value);
+        redis.set(key, value);
+
+        redis.get(key);
+        redis.get(key);
+        redis.get(key);
+        redis.get(key);
+        redis.get(key);
+    }
+}
diff --git a/src/test/java/io/lettuce/core/ClientOptionsIntegrationTests.java b/src/test/java/io/lettuce/core/ClientOptionsIntegrationTests.java
new file mode 100644
index 0000000000..17ba804ba4
--- /dev/null
+++ b/src/test/java/io/lettuce/core/ClientOptionsIntegrationTests.java
@@ -0,0 +1,547 @@
+/*
+ * Copyright 2011-2020 the original author or authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * https://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */ +package io.lettuce.core; + +import static io.lettuce.test.ConnectionTestUtil.getChannel; +import static io.lettuce.test.ConnectionTestUtil.getConnectionWatchdog; +import static io.lettuce.test.ConnectionTestUtil.getStack; +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; +import static org.assertj.core.api.Assertions.fail; + +import java.net.ServerSocket; +import java.time.Duration; +import java.util.ArrayList; +import java.util.List; +import java.util.Queue; +import java.util.concurrent.ExecutionException; +import java.util.concurrent.TimeUnit; + +import javax.inject.Inject; + +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.test.condition.EnabledOnCommand; +import reactor.core.publisher.Mono; +import reactor.test.StepVerifier; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.async.RedisAsyncCommands; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.output.StatusOutput; +import io.lettuce.core.protocol.*; +import io.lettuce.test.LettuceExtension; +import io.lettuce.test.TestFutures; +import io.lettuce.test.Wait; +import io.lettuce.test.WithPassword; +import io.lettuce.test.settings.TestSettings; +import io.netty.channel.Channel; + +/** + * Integration tests for effects configured via {@link ClientOptions}. + * + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +class ClientOptionsIntegrationTests extends TestSupport { + + private final RedisClient client; + + @Inject + ClientOptionsIntegrationTests(RedisClient client) { + this.client = client; + } + + @Test + void variousClientOptions() { + + StatefulRedisConnection connection1 = client.connect(); + + assertThat(connection1.getOptions().isAutoReconnect()).isTrue(); + connection1.close(); + + client.setOptions(ClientOptions.builder().autoReconnect(false).build()); + + StatefulRedisConnection connection2 = client.connect(); + assertThat(connection2.getOptions().isAutoReconnect()).isFalse(); + + assertThat(connection1.getOptions().isAutoReconnect()).isTrue(); + + connection1.close(); + connection2.close(); + } + + @Test + void requestQueueSize() { + + client.setOptions(ClientOptions.builder().requestQueueSize(10).build()); + + StatefulRedisConnection connection = client.connect(); + getConnectionWatchdog(connection).setListenOnChannelInactive(false); + + connection.async().quit(); + + Wait.untilTrue(() -> !connection.isOpen()).waitOrTimeout(); + + for (int i = 0; i < 10; i++) { + connection.async().ping(); + } + + assertThatThrownBy(() -> connection.async().ping().toCompletableFuture().join()) + .hasMessageContaining("Request queue size exceeded"); + assertThatThrownBy(() -> connection.sync().ping()).hasMessageContaining("Request queue size exceeded"); + + connection.close(); + } + + @Test + void requestQueueSizeAppliedForReconnect() { + + client.setOptions(ClientOptions.builder().requestQueueSize(10).build()); + + RedisAsyncCommands connection = client.connect().async(); + testHitRequestQueueLimit(connection); + } + + @Test + void testHitRequestQueueLimitReconnectWithAuthCommand() { + + WithPassword.run(client, () -> { + + client.setOptions(ClientOptions.builder().protocolVersion(ProtocolVersion.RESP2).pingBeforeActivateConnection(false) + .requestQueueSize(10).build()); + + RedisAsyncCommands connection = client.connect().async(); + connection.auth(passwd); + 
testHitRequestQueueLimit(connection);
+        });
+    }
+
+    @Test
+    @EnabledOnCommand("ACL")
+    void testHitRequestQueueLimitReconnectWithAuthUsernamePasswordCommand() {
+
+        WithPassword.run(client, () -> {
+
+            client.setOptions(ClientOptions.builder().protocolVersion(ProtocolVersion.RESP2).pingBeforeActivateConnection(false)
+                    .requestQueueSize(10).build());
+
+            RedisAsyncCommands<String, String> connection = client.connect().async();
+            connection.auth(username, passwd);
+            testHitRequestQueueLimit(connection);
+        });
+    }
+
+    @Test
+    void testHitRequestQueueLimitReconnectWithUriAuth() {
+
+        WithPassword.run(client, () -> {
+            client.setOptions(ClientOptions.builder().requestQueueSize(10).build());
+
+            RedisURI redisURI = RedisURI.create(host, port);
+            redisURI.setPassword(passwd);
+
+            RedisAsyncCommands<String, String> connection = client.connect(redisURI).async();
+            testHitRequestQueueLimit(connection);
+        });
+    }
+
+    @Test
+    void testHitRequestQueueLimitReconnectWithUriAuthPingCommand() {
+
+        WithPassword.run(client, () -> {
+
+            client.setOptions(ClientOptions.builder().requestQueueSize(10).build());
+
+            RedisURI redisURI = RedisURI.create(host, port);
+            redisURI.setPassword(passwd);
+
+            RedisAsyncCommands<String, String> connection = client.connect(redisURI).async();
+            testHitRequestQueueLimit(connection);
+        });
+    }
+
+    private void testHitRequestQueueLimit(RedisAsyncCommands<String, String> connection) {
+
+        ConnectionWatchdog watchdog = getConnectionWatchdog(connection.getStatefulConnection());
+
+        watchdog.setListenOnChannelInactive(false);
+
+        connection.quit();
+
+        Wait.untilTrue(() -> !connection.getStatefulConnection().isOpen()).waitOrTimeout();
+
+        List<RedisFuture<String>> pings = new ArrayList<>();
+        for (int i = 0; i < 10; i++) {
+            pings.add(connection.ping());
+        }
+
+        watchdog.setListenOnChannelInactive(true);
+        watchdog.scheduleReconnect();
+
+        for (RedisFuture<String> ping : pings) {
+            assertThat(TestFutures.getOrTimeout(ping)).isEqualTo("PONG");
+        }
+
+        connection.getStatefulConnection().close();
+    }
+
+    @Test
+    void requestQueueSizeOvercommittedReconnect() {
+
+        client.setOptions(ClientOptions.builder().requestQueueSize(10).build());
+
+        StatefulRedisConnection<String, String> connection = client.connect();
+        ConnectionWatchdog watchdog = getConnectionWatchdog(connection);
+
+        watchdog.setListenOnChannelInactive(false);
+
+        Queue<Object> buffer = getStack(connection);
+        List<AsyncCommand<String, String, String>> pings = new ArrayList<>();
+        for (int i = 0; i < 11; i++) {
+
+            AsyncCommand<String, String, String> command = new AsyncCommand<>(
+                    new Command<>(CommandType.PING, new StatusOutput<>(StringCodec.UTF8)));
+            pings.add(command);
+            buffer.add(command);
+        }
+
+        getChannel(connection).disconnect();
+
+        Wait.untilTrue(() -> !connection.isOpen()).waitOrTimeout();
+
+        watchdog.setListenOnChannelInactive(true);
+        watchdog.scheduleReconnect();
+
+        for (int i = 0; i < 10; i++) {
+            assertThat(TestFutures.getOrTimeout(pings.get(i))).isEqualTo("PONG");
+        }
+
+        assertThatThrownBy(() -> TestFutures.awaitOrTimeout(pings.get(10))).hasCauseInstanceOf(IllegalStateException.class)
+                .hasMessage("java.lang.IllegalStateException: Queue full");
+
+        connection.close();
+    }
+
+    @Test
+    void disconnectedWithoutReconnect() {
+
+        client.setOptions(ClientOptions.builder().autoReconnect(false).build());
+
+        RedisAsyncCommands<String, String> connection = client.connect().async();
+
+        connection.quit();
+        Wait.untilTrue(() -> !connection.getStatefulConnection().isOpen()).waitOrTimeout();
+        try {
+            connection.get(key);
+        } catch (Exception e) {
+            assertThat(e).isInstanceOf(RedisException.class).hasMessageContaining("not connected");
+        } finally {
+            connection.getStatefulConnection().close();
+        }
+ } + + @Test + void disconnectedRejectCommands() { + + client.setOptions( + ClientOptions.builder().disconnectedBehavior(ClientOptions.DisconnectedBehavior.REJECT_COMMANDS).build()); + + RedisAsyncCommands connection = client.connect().async(); + + getConnectionWatchdog(connection.getStatefulConnection()).setListenOnChannelInactive(false); + connection.quit(); + Wait.untilTrue(() -> !connection.getStatefulConnection().isOpen()).waitOrTimeout(); + try { + connection.get(key); + } catch (Exception e) { + assertThat(e).isInstanceOf(RedisException.class).hasMessageContaining("not connected"); + } finally { + connection.getStatefulConnection().close(); + } + } + + @Test + void disconnectedAcceptCommands() { + + client.setOptions(ClientOptions.builder().autoReconnect(false) + .disconnectedBehavior(ClientOptions.DisconnectedBehavior.ACCEPT_COMMANDS).build()); + + RedisAsyncCommands connection = client.connect().async(); + + connection.quit(); + Wait.untilTrue(() -> !connection.getStatefulConnection().isOpen()).waitOrTimeout(); + connection.get(key); + connection.getStatefulConnection().close(); + } + + @Test + @Inject + void pingBeforeConnect(StatefulRedisConnection sharedConnection) { + + sharedConnection.sync().set(key, value); + RedisCommands connection = client.connect().sync(); + + try { + String result = connection.get(key); + assertThat(result).isEqualTo(value); + } finally { + connection.getStatefulConnection().close(); + } + } + + @Test + void connectTimeout() throws Exception { + + try (ServerSocket serverSocket = new ServerSocket(0)) { + + RedisURI redisURI = RedisURI.Builder.redis(TestSettings.host(), serverSocket.getLocalPort()) + .withTimeout(Duration.ofMillis(500)).build(); + + try { + client.connect(redisURI); + fail("Missing RedisConnectionException"); + } catch (RedisException e) { + assertThat(e).isInstanceOf(RedisConnectionException.class) + .hasRootCauseInstanceOf(RedisCommandTimeoutException.class); + } + } + } + + @Test + void connectWithAuthentication() { + + WithPassword.run(client, () -> { + RedisURI redisURI = RedisURI.Builder.redis(host, port).withPassword(passwd).build(); + + RedisCommands connection = client.connect(redisURI).sync(); + + try { + String result = connection.info(); + assertThat(result).contains("memory"); + } finally { + connection.getStatefulConnection().close(); + } + }); + } + + @Test + void authenticationTimeout() { + + WithPassword.run(client, () -> { + + try (ServerSocket serverSocket = new ServerSocket(0)) { + + RedisURI redisURI = RedisURI.Builder.redis(TestSettings.host(), serverSocket.getLocalPort()) + .withPassword(passwd).withTimeout(Duration.ofMillis(500)).build(); + + try { + client.connect(redisURI); + fail("Missing RedisConnectionException"); + } catch (RedisException e) { + assertThat(e).isInstanceOf(RedisConnectionException.class) + .hasRootCauseInstanceOf(RedisCommandTimeoutException.class); + } + } + }); + } + + @Test + void sslAndAuthentication() { + + WithPassword.run(client, () -> { + + RedisURI redisURI = RedisURI.Builder.redis(host, 6443).withPassword(passwd).withVerifyPeer(false).withSsl(true) + .build(); + + RedisCommands connection = client.connect(redisURI).sync(); + + try { + String result = connection.info(); + assertThat(result).contains("memory"); + } finally { + connection.getStatefulConnection().close(); + } + + }); + } + + @Test + void authenticationFails() { + + WithPassword.run(client, () -> { + + RedisURI redisURI = RedisURI.Builder.redis(host, port).build(); + + try { + client.connect(redisURI); + fail("Missing 
RedisConnectionException"); + } catch (RedisException e) { + assertThat(e).isInstanceOf(RedisConnectionException.class); + } + }); + } + + @Test + void pingBeforeConnectWithSslAndAuthenticationFails() { + + WithPassword.run(client, () -> { + + RedisURI redisURI = RedisURI.Builder.redis(host, 6443).withVerifyPeer(false).withSsl(true).build(); + + try { + client.connect(redisURI); + fail("Missing RedisConnectionException"); + } catch (RedisException e) { + assertThat(e).isInstanceOf(RedisConnectionException.class) + .hasRootCauseInstanceOf(RedisCommandExecutionException.class); + } + }); + } + + @Test + void appliesCommandTimeoutToAsyncCommands() { + + client.setOptions(ClientOptions.builder().timeoutOptions(TimeoutOptions.enabled()).build()); + + try (StatefulRedisConnection connection = client.connect()) { + connection.setTimeout(Duration.ofMillis(100)); + + connection.async().clientPause(300); + + RedisFuture future = connection.async().ping(); + + assertThatThrownBy(future::get).isInstanceOf(ExecutionException.class) + .hasCauseInstanceOf(RedisCommandTimeoutException.class).hasMessageContaining("100 milli"); + } + } + + @Test + void appliesCommandTimeoutToReactiveCommands() { + + client.setOptions(ClientOptions.builder().timeoutOptions(TimeoutOptions.enabled()).build()); + + try (StatefulRedisConnection connection = client.connect()) { + connection.setTimeout(Duration.ofMillis(100)); + + connection.async().clientPause(300); + + Mono mono = connection.reactive().ping(); + + StepVerifier.create(mono).expectError(RedisCommandTimeoutException.class).verify(); + } + } + + @Test + void timeoutExpiresBatchedCommands() { + + client.setOptions(ClientOptions.builder() + .timeoutOptions(TimeoutOptions.builder().fixedTimeout(Duration.ofMillis(1)).build()).build()); + + try (StatefulRedisConnection connection = client.connect()) { + + connection.setAutoFlushCommands(false); + RedisFuture future = connection.async().ping(); + Wait.untilTrue(future::isDone).waitOrTimeout(); + + assertThatThrownBy(future::get).isInstanceOf(ExecutionException.class) + .hasCauseInstanceOf(RedisCommandTimeoutException.class).hasMessageContaining("1 milli"); + + connection.flushCommands(); + } + } + + @Test + void pingBeforeConnectWithQueuedCommandsAndReconnect() throws Exception { + + StatefulRedisConnection controlConnection = client.connect(); + + StatefulRedisConnection redisConnection = client.connect(RedisURI.create("redis://localhost:6479/5")); + redisConnection.async().set("key1", "value1"); + redisConnection.async().set("key2", "value2"); + + RedisFuture sleep = (RedisFuture) controlConnection + .dispatch(new AsyncCommand<>(new Command<>(CommandType.DEBUG, new StatusOutput<>(StringCodec.UTF8), + new CommandArgs<>(StringCodec.UTF8).add("SLEEP").add(2)))); + + sleep.await(100, TimeUnit.MILLISECONDS); + + Channel channel = getChannel(redisConnection); + ConnectionWatchdog connectionWatchdog = getConnectionWatchdog(redisConnection); + connectionWatchdog.setReconnectSuspended(true); + + TestFutures.awaitOrTimeout(channel.close()); + TestFutures.awaitOrTimeout(sleep); + + redisConnection.async().get(key).cancel(true); + + RedisFuture getFuture1 = redisConnection.async().get("key1"); + RedisFuture getFuture2 = redisConnection.async().get("key2"); + getFuture1.await(100, TimeUnit.MILLISECONDS); + + connectionWatchdog.setReconnectSuspended(false); + connectionWatchdog.scheduleReconnect(); + + assertThat(TestFutures.getOrTimeout(getFuture1)).isEqualTo("value1"); + 
assertThat(TestFutures.getOrTimeout(getFuture2)).isEqualTo("value2"); + + controlConnection.close(); + redisConnection.close(); + } + + @Test + void authenticatedPingBeforeConnectWithQueuedCommandsAndReconnect() { + + WithPassword.run(client, () -> { + + RedisURI redisURI = RedisURI.Builder.redis(host, port).withPassword(passwd).withDatabase(5).build(); + StatefulRedisConnection controlConnection = client.connect(redisURI); + + StatefulRedisConnection redisConnection = client.connect(redisURI); + redisConnection.async().set("key1", "value1"); + redisConnection.async().set("key2", "value2"); + + RedisFuture sleep = (RedisFuture) controlConnection + .dispatch(new AsyncCommand<>(new Command<>(CommandType.DEBUG, new StatusOutput<>(StringCodec.UTF8), + new CommandArgs<>(StringCodec.UTF8).add("SLEEP").add(2)))); + + sleep.await(100, TimeUnit.MILLISECONDS); + + Channel channel = getChannel(redisConnection); + ConnectionWatchdog connectionWatchdog = getConnectionWatchdog(redisConnection); + connectionWatchdog.setReconnectSuspended(true); + + TestFutures.awaitOrTimeout(channel.close()); + TestFutures.awaitOrTimeout(sleep); + + redisConnection.async().get(key).cancel(true); + + RedisFuture getFuture1 = redisConnection.async().get("key1"); + RedisFuture getFuture2 = redisConnection.async().get("key2"); + getFuture1.await(100, TimeUnit.MILLISECONDS); + + connectionWatchdog.setReconnectSuspended(false); + connectionWatchdog.scheduleReconnect(); + + assertThat(TestFutures.getOrTimeout(getFuture1)).isEqualTo("value1"); + assertThat(TestFutures.getOrTimeout(getFuture2)).isEqualTo("value2"); + + controlConnection.close(); + redisConnection.close(); + }); + } +} diff --git a/src/test/java/io/lettuce/core/ClientOptionsUnitTests.java b/src/test/java/io/lettuce/core/ClientOptionsUnitTests.java new file mode 100644 index 0000000000..b322f29ca2 --- /dev/null +++ b/src/test/java/io/lettuce/core/ClientOptionsUnitTests.java @@ -0,0 +1,66 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import static org.assertj.core.api.Assertions.assertThat; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.protocol.ProtocolVersion; + +import java.nio.charset.StandardCharsets; + +/** + * Unit tests for {@link ClientOptions}. 
+ * + * @author Mark Paluch + */ +class ClientOptionsUnitTests { + + @Test + void testNew() { + checkAssertions(ClientOptions.create()); + } + + @Test + void testBuilder() { + ClientOptions options = ClientOptions.builder().scriptCharset(StandardCharsets.US_ASCII).build(); + checkAssertions(options); + assertThat(options.getScriptCharset()).isEqualTo(StandardCharsets.US_ASCII); + } + + @Test + void testCopy() { + + ClientOptions original = ClientOptions.builder().scriptCharset(StandardCharsets.US_ASCII).build(); + ClientOptions copy = ClientOptions.copyOf(original); + + checkAssertions(copy); + assertThat(copy.getScriptCharset()).isEqualTo(StandardCharsets.US_ASCII); + assertThat(copy.mutate().build().getScriptCharset()).isEqualTo(StandardCharsets.US_ASCII); + + assertThat(original.mutate()).isNotSameAs(copy.mutate()); + } + + void checkAssertions(ClientOptions sut) { + assertThat(sut.isAutoReconnect()).isEqualTo(true); + assertThat(sut.isCancelCommandsOnReconnectFailure()).isEqualTo(false); + assertThat(sut.getProtocolVersion()).isEqualTo(ProtocolVersion.RESP3); + assertThat(sut.isSuspendReconnectOnProtocolFailure()).isEqualTo(false); + assertThat(sut.getDisconnectedBehavior()).isEqualTo(ClientOptions.DisconnectedBehavior.DEFAULT); + assertThat(sut.getBufferUsageRatio()).isEqualTo(ClientOptions.DEFAULT_BUFFER_USAGE_RATIO); + } +} diff --git a/src/test/java/io/lettuce/core/ConnectMethodsIntegrationTests.java b/src/test/java/io/lettuce/core/ConnectMethodsIntegrationTests.java new file mode 100644 index 0000000000..084f291e68 --- /dev/null +++ b/src/test/java/io/lettuce/core/ConnectMethodsIntegrationTests.java @@ -0,0 +1,185 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core; + +import javax.inject.Inject; + +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.core.cluster.RedisClusterClient; +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; +import io.lettuce.core.cluster.api.async.AsyncNodeSelection; +import io.lettuce.test.LettuceExtension; + +/** + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +class ConnectMethodsIntegrationTests { + + private final RedisClient redisClient; + private final RedisClusterClient clusterClient; + + @Inject + ConnectMethodsIntegrationTests(RedisClient redisClient, RedisClusterClient clusterClient) { + this.redisClient = redisClient; + this.clusterClient = clusterClient; + } + + // Standalone + @Test + void standaloneSync() { + redisClient.connect().close(); + } + + @Test + void standaloneAsync() { + redisClient.connect().async().getStatefulConnection().close(); + } + + @Test + void standaloneReactive() { + redisClient.connect().reactive().getStatefulConnection().close(); + } + + @Test + void standaloneStateful() { + redisClient.connect().close(); + } + + // PubSub + @Test + void pubsubSync() { + redisClient.connectPubSub().close(); + } + + @Test + void pubsubAsync() { + redisClient.connectPubSub().close(); + } + + @Test + void pubsubReactive() { + redisClient.connectPubSub().close(); + } + + @Test + void pubsubStateful() { + redisClient.connectPubSub().close(); + } + + // Sentinel + @Test + void sentinelSync() { + redisClient.connectSentinel().sync().getStatefulConnection().close(); + } + + @Test + void sentinelAsync() { + redisClient.connectSentinel().async().getStatefulConnection().close(); + } + + @Test + void sentinelReactive() { + redisClient.connectSentinel().reactive().getStatefulConnection().close(); + } + + @Test + void sentinelStateful() { + redisClient.connectSentinel().close(); + } + + // Cluster + @Test + void clusterSync() { + clusterClient.connect().sync().getStatefulConnection().close(); + } + + @Test + void clusterAsync() { + clusterClient.connect().async().getStatefulConnection().close(); + } + + @Test + void clusterReactive() { + clusterClient.connect().reactive().getStatefulConnection().close(); + } + + @Test + void clusterStateful() { + clusterClient.connect().close(); + } + + @Test + void clusterPubSubSync() { + clusterClient.connectPubSub().sync().getStatefulConnection().close(); + } + + @Test + void clusterPubSubAsync() { + clusterClient.connectPubSub().async().getStatefulConnection().close(); + } + + @Test + void clusterPubSubReactive() { + clusterClient.connectPubSub().reactive().getStatefulConnection().close(); + } + + @Test + void clusterPubSubStateful() { + clusterClient.connectPubSub().close(); + } + + // Advanced Cluster + @Test + void advancedClusterSync() { + StatefulRedisClusterConnection statefulConnection = clusterClient.connect(); + RedisURI uri = clusterClient.getPartitions().getPartition(0).getUri(); + statefulConnection.getConnection(uri.getHost(), uri.getPort()).sync(); + statefulConnection.close(); + } + + @Test + void advancedClusterAsync() { + StatefulRedisClusterConnection statefulConnection = clusterClient.connect(); + RedisURI uri = clusterClient.getPartitions().getPartition(0).getUri(); + statefulConnection.getConnection(uri.getHost(), uri.getPort()).sync(); + statefulConnection.close(); + } + + @Test + void advancedClusterReactive() { + StatefulRedisClusterConnection statefulConnection = clusterClient.connect(); + RedisURI uri = 
clusterClient.getPartitions().getPartition(0).getUri(); + statefulConnection.getConnection(uri.getHost(), uri.getPort()).reactive(); + statefulConnection.close(); + } + + @Test + void advancedClusterStateful() { + clusterClient.connect().close(); + } + + // Cluster node selection + @Test + void nodeSelectionClusterAsync() { + StatefulRedisClusterConnection statefulConnection = clusterClient.connect(); + AsyncNodeSelection masters = statefulConnection.async().masters(); + statefulConnection.close(); + } + +} diff --git a/src/test/java/io/lettuce/core/ConnectionCommandIntegrationTests.java b/src/test/java/io/lettuce/core/ConnectionCommandIntegrationTests.java new file mode 100644 index 0000000000..c56bb5d93b --- /dev/null +++ b/src/test/java/io/lettuce/core/ConnectionCommandIntegrationTests.java @@ -0,0 +1,304 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import static junit.framework.TestCase.assertNotNull; +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; +import static org.assertj.core.api.Assertions.fail; + +import java.time.Duration; +import java.util.concurrent.Future; + +import javax.inject.Inject; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.async.RedisAsyncCommands; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.protocol.ProtocolVersion; +import io.lettuce.test.*; +import io.lettuce.test.condition.EnabledOnCommand; + +/** + * @author Will Glozer + * @author Mark Paluch + * @author Tugdual Grall + */ +@ExtendWith(LettuceExtension.class) +class ConnectionCommandIntegrationTests extends TestSupport { + + private final RedisClient client; + private final RedisCommands redis; + + @Inject + ConnectionCommandIntegrationTests(RedisClient client, StatefulRedisConnection connection) { + this.client = client; + this.redis = connection.sync(); + } + + @BeforeEach + void setUp() { + redis.flushall(); + } + + @Test + void auth() { + + WithPassword.run(client, () -> { + client.setOptions( + ClientOptions.builder().pingBeforeActivateConnection(false).protocolVersion(ProtocolVersion.RESP2).build()); + RedisCommands connection = client.connect().sync(); + + assertThatThrownBy(connection::ping).isInstanceOf(RedisException.class) + .hasMessageContaining("NOAUTH Authentication required"); + + assertThat(connection.auth(passwd)).isEqualTo("OK"); + assertThat(connection.set(key, value)).isEqualTo("OK"); + + RedisURI redisURI = RedisURI.Builder.redis(host, port).withDatabase(2).withPassword(passwd).build(); + RedisCommands authConnection = client.connect(redisURI).sync(); + authConnection.ping(); + authConnection.getStatefulConnection().close(); + }); + } + + @Test + @EnabledOnCommand("ACL") + void authWithUsername() { + + 
WithPassword.run(client, () -> { + client.setOptions( + ClientOptions.builder().pingBeforeActivateConnection(false).protocolVersion(ProtocolVersion.RESP2).build()); + RedisCommands connection = client.connect().sync(); + + assertThatThrownBy(connection::ping).isInstanceOf(RedisException.class) + .hasMessageContaining("NOAUTH Authentication required"); + + assertThat(connection.auth(passwd)).isEqualTo("OK"); + assertThat(connection.set(key, value)).isEqualTo("OK"); + + // Aut with the same user & password (default) + assertThat(connection.auth(username, passwd)).isEqualTo("OK"); + assertThat(connection.set(key, value)).isEqualTo("OK"); + + // Switch to another user + assertThat(connection.auth(aclUsername, aclPasswd)).isEqualTo("OK"); + assertThat(connection.set("cached:demo", value)).isEqualTo("OK"); + assertThatThrownBy(() -> connection.get(key)).isInstanceOf(RedisCommandExecutionException.class); + assertThat(connection.del("cached:demo")).isEqualTo(1); + + RedisURI redisURI = RedisURI.Builder.redis(host, port).withDatabase(2).withPassword(passwd).build(); + RedisCommands authConnection = client.connect(redisURI).sync(); + authConnection.ping(); + authConnection.getStatefulConnection().close(); + }); + } + + @Test + @EnabledOnCommand("ACL") + void resp2HandShakeWithUsernamePassword() { + + RedisURI redisURI = RedisURI.Builder.redis(host, port).withAuthentication(username, passwd).build(); + RedisClient clientResp2 = RedisClient.create(redisURI); + clientResp2.setOptions( + ClientOptions.builder().pingBeforeActivateConnection(false).protocolVersion(ProtocolVersion.RESP2).build()); + RedisCommands connTestResp2 = null; + + try { + connTestResp2 = clientResp2.connect().sync(); + assertThat(redis.ping()).isEqualTo("PONG"); + } catch (Exception e) { + } finally { + assertNotNull(connTestResp2); + if (connTestResp2 != null) { + connTestResp2.getStatefulConnection().close(); + } + } + clientResp2.shutdown(); + } + + @Test + void echo() { + assertThat(redis.echo("hello")).isEqualTo("hello"); + } + + @Test + void ping() { + assertThat(redis.ping()).isEqualTo("PONG"); + } + + @Test + void select() { + redis.set(key, value); + assertThat(redis.select(1)).isEqualTo("OK"); + assertThat(redis.get(key)).isNull(); + } + + @Test + void authNull() { + assertThatThrownBy(() -> redis.auth(null)).isInstanceOf(IllegalArgumentException.class); + assertThatThrownBy(() -> redis.auth(null, "x")).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void authEmpty() { + assertThatThrownBy(() -> redis.auth("")).isInstanceOf(IllegalArgumentException.class); + assertThatThrownBy(() -> redis.auth("", "x")).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void authReconnect() { + WithPassword.run(client, () -> { + + client.setOptions( + ClientOptions.builder().protocolVersion(ProtocolVersion.RESP2).pingBeforeActivateConnection(false).build()); + RedisCommands connection = client.connect().sync(); + assertThat(connection.auth(passwd)).isEqualTo("OK"); + assertThat(connection.set(key, value)).isEqualTo("OK"); + connection.quit(); + + Delay.delay(Duration.ofMillis(100)); + assertThat(connection.get(key)).isEqualTo(value); + + connection.getStatefulConnection().close(); + }); + } + + @Test + @EnabledOnCommand("ACL") + void authReconnectRedis6() { + WithPassword.run(client, () -> { + + client.setOptions( + ClientOptions.builder().protocolVersion(ProtocolVersion.RESP2).pingBeforeActivateConnection(false).build()); + RedisCommands connection = client.connect().sync(); + 
assertThat(connection.auth(passwd)).isEqualTo("OK"); + assertThat(connection.set(key, value)).isEqualTo("OK"); + connection.quit(); + + Delay.delay(Duration.ofMillis(100)); + assertThat(connection.get(key)).isEqualTo(value); + + // reconnect with username/password + assertThat(connection.auth(username, passwd)).isEqualTo("OK"); + assertThat(connection.set(key, value)).isEqualTo("OK"); + connection.quit(); + + Delay.delay(Duration.ofMillis(100)); + assertThat(connection.get(key)).isEqualTo(value); + + connection.getStatefulConnection().close(); + }); + } + + @Test + void selectReconnect() { + redis.select(1); + redis.set(key, value); + redis.quit(); + + Wait.untilTrue(redis::isOpen).waitOrTimeout(); + assertThat(redis.get(key)).isEqualTo(value); + } + + @Test + void getSetReconnect() { + redis.set(key, value); + redis.quit(); + Wait.untilTrue(redis::isOpen).waitOrTimeout(); + assertThat(redis.get(key)).isEqualTo(value); + } + + @Test + void authInvalidPassword() { + RedisAsyncCommands async = client.connect().async(); + try { + TestFutures.awaitOrTimeout(async.auth("invalid")); + fail("Authenticated with invalid password"); + } catch (RedisException e) { + assertThat(e.getMessage()).startsWith("ERR").contains("AUTH"); + StatefulRedisConnectionImpl statefulRedisCommands = (StatefulRedisConnectionImpl) async + .getStatefulConnection(); + assertThat(statefulRedisCommands.getConnectionState()).extracting("password").isNull(); + } finally { + async.getStatefulConnection().close(); + } + } + + @Test + @EnabledOnCommand("ACL") + void authInvalidUsernamePassword() { + + WithPassword.run(client, () -> { + client.setOptions( + ClientOptions.builder().protocolVersion(ProtocolVersion.RESP2).pingBeforeActivateConnection(false).build()); + RedisCommands connection = client.connect().sync(); + + assertThat(connection.auth(username, passwd)).isEqualTo("OK"); + + assertThatThrownBy(() -> connection.auth(username, "invalid")) + .hasMessage("WRONGPASS invalid username-password pair"); + + assertThat(connection.auth(aclUsername, aclPasswd)).isEqualTo("OK"); + + assertThatThrownBy(() -> connection.auth(aclUsername, "invalid")) + .hasMessage("WRONGPASS invalid username-password pair"); + + connection.getStatefulConnection().close(); + }); + } + + @Test + @EnabledOnCommand("ACL") + void authInvalidDefaultPasswordNoACL() { + RedisAsyncCommands async = client.connect().async(); + // When the database is not secured the AUTH default invalid command returns OK + try { + Future auth = async.auth(username, "invalid"); + assertThat(TestFutures.getOrTimeout(auth)).isEqualTo("OK"); + } finally { + async.getStatefulConnection().close(); + } + } + + @Test + void authInvalidUsernamePasswordNoACL() { + RedisAsyncCommands async = client.connect().async(); + try { + TestFutures.awaitOrTimeout(async.select(1024)); + fail("Selected invalid db index"); + } catch (RedisException e) { + assertThat(e.getMessage()).startsWith("ERR"); + StatefulRedisConnectionImpl statefulRedisCommands = (StatefulRedisConnectionImpl) async + .getStatefulConnection(); + assertThat(statefulRedisCommands.getConnectionState()).extracting("db").isEqualTo(0); + } finally { + async.getStatefulConnection().close(); + } + } + + @Test + void testDoubleToString() { + + assertThat(LettuceStrings.string(1.1)).isEqualTo("1.1"); + assertThat(LettuceStrings.string(Double.POSITIVE_INFINITY)).isEqualTo("+inf"); + assertThat(LettuceStrings.string(Double.NEGATIVE_INFINITY)).isEqualTo("-inf"); + } +} diff --git a/src/test/java/io/lettuce/core/ConnectionFutureUnitTests.java 
b/src/test/java/io/lettuce/core/ConnectionFutureUnitTests.java new file mode 100644 index 0000000000..22b51aef67 --- /dev/null +++ b/src/test/java/io/lettuce/core/ConnectionFutureUnitTests.java @@ -0,0 +1,127 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; + +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.CompletionException; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.internal.Futures; + +/** + * @author Mark Paluch + */ +class ConnectionFutureUnitTests { + + @Test + void shouldComposeTransformToError() { + + CompletableFuture foo = new CompletableFuture<>(); + + ConnectionFuture transformed = ConnectionFuture.from(null, foo).thenCompose((s, t) -> { + + if (t != null) { + return Futures.failed(new IllegalStateException(t)); + } + return Futures.failed(new IllegalStateException()); + }); + + foo.complete("foo"); + + assertThat(transformed.toCompletableFuture()).isDone(); + assertThat(transformed.toCompletableFuture()).isCompletedExceptionally(); + assertThatThrownBy(transformed::join).hasRootCauseInstanceOf(IllegalStateException.class); + } + + @Test + void composeTransformShouldFailWhileTransformation() { + + CompletableFuture foo = new CompletableFuture<>(); + + ConnectionFuture transformed = ConnectionFuture.from(null, foo).thenCompose((s, t) -> { + throw new IllegalStateException(); + }); + + foo.complete("foo"); + + assertThat(transformed.toCompletableFuture()).isDone(); + assertThat(transformed.toCompletableFuture()).isCompletedExceptionally(); + assertThatThrownBy(transformed::join).hasRootCauseInstanceOf(IllegalStateException.class); + } + + @Test + void composeTransformShouldFailWhileTransformationRetainOriginalException() { + + CompletableFuture foo = new CompletableFuture<>(); + + ConnectionFuture transformed = ConnectionFuture.from(null, foo).thenCompose((s, t) -> { + throw new IllegalStateException(); + }); + + Throwable t = new Throwable(); + foo.completeExceptionally(t); + + + assertThat(transformed.toCompletableFuture()).isDone(); + assertThat(transformed.toCompletableFuture()).isCompletedExceptionally(); + + try { + transformed.join(); + } catch (CompletionException e) { + + assertThat(e).hasRootCauseInstanceOf(IllegalStateException.class); + assertThat(e.getCause()).hasSuppressedException(t); + } + } + + @Test + void shouldComposeWithErrorFlow() { + + CompletableFuture foo = new CompletableFuture<>(); + CompletableFuture exceptional = new CompletableFuture<>(); + + ConnectionFuture transformed1 = ConnectionFuture.from(null, foo).thenCompose((s, t) -> { + + if (t != null) { + return Futures.failed(new IllegalStateException(t)); + } + return CompletableFuture.completedFuture(s); + }); + + ConnectionFuture transformed2 = ConnectionFuture.from(null, exceptional).thenCompose((s, t) -> { + + if (t != 
null) { + return Futures.failed(new IllegalStateException(t)); + } + return CompletableFuture.completedFuture(s); + }); + + foo.complete("foo"); + exceptional.completeExceptionally(new IllegalArgumentException("foo")); + + assertThat(transformed1.toCompletableFuture()).isDone(); + assertThat(transformed1.toCompletableFuture()).isCompletedWithValue("foo"); + + assertThat(transformed2.toCompletableFuture()).isDone(); + assertThat(transformed2.toCompletableFuture()).isCompletedExceptionally(); + assertThatThrownBy(transformed2::join).hasCauseInstanceOf(IllegalStateException.class).hasRootCauseInstanceOf( + IllegalArgumentException.class); + } +} diff --git a/src/test/java/io/lettuce/core/CustomCodecIntegrationTests.java b/src/test/java/io/lettuce/core/CustomCodecIntegrationTests.java new file mode 100644 index 0000000000..b5511a580c --- /dev/null +++ b/src/test/java/io/lettuce/core/CustomCodecIntegrationTests.java @@ -0,0 +1,228 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.io.*; +import java.nio.ByteBuffer; +import java.nio.charset.Charset; +import java.nio.charset.StandardCharsets; +import java.util.List; + +import javax.crypto.Cipher; +import javax.crypto.spec.IvParameterSpec; +import javax.crypto.spec.SecretKeySpec; +import javax.inject.Inject; + +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; + +import reactor.test.StepVerifier; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.codec.*; +import io.lettuce.test.LettuceExtension; + +/** + * @author Will Glozer + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +class CustomCodecIntegrationTests extends TestSupport { + + private final SecretKeySpec secretKey = new SecretKeySpec("1234567890123456".getBytes(), "AES"); + private final IvParameterSpec iv = new IvParameterSpec("1234567890123456".getBytes()); + // Creates a CryptoCipher instance with the transformation and properties. 
+ private final String transform = "AES/CBC/PKCS5Padding"; + + CipherCodec.CipherSupplier encrypt = (CipherCodec.KeyDescriptor keyDescriptor) -> { + + Cipher cipher = Cipher.getInstance(transform); + cipher.init(Cipher.ENCRYPT_MODE, secretKey, iv); + return cipher; + }; + + CipherCodec.CipherSupplier decrypt = (CipherCodec.KeyDescriptor keyDescriptor) -> { + + Cipher cipher = Cipher.getInstance(transform); + cipher.init(Cipher.DECRYPT_MODE, secretKey, iv); + return cipher; + }; + + private final RedisClient client; + + @Inject + CustomCodecIntegrationTests(RedisClient client) { + this.client = client; + } + + @Test + void testJavaSerializer() { + StatefulRedisConnection redisConnection = client.connect(new SerializedObjectCodec()); + RedisCommands sync = redisConnection.sync(); + List list = list("one", "two"); + sync.set(key, list); + + assertThat(sync.get(key)).isEqualTo(list); + assertThat(sync.set(key, list)).isEqualTo("OK"); + assertThat(sync.set(key, list, SetArgs.Builder.ex(1))).isEqualTo("OK"); + + redisConnection.close(); + } + + @Test + void testJavaSerializerReactive() { + + StatefulRedisConnection redisConnection = client.connect(new SerializedObjectCodec()); + List list = list("one", "two"); + + StepVerifier.create(redisConnection.reactive().set(key, list, SetArgs.Builder.ex(1))).expectNext("OK").verifyComplete(); + redisConnection.close(); + } + + @Test + void testDeflateCompressedJavaSerializer() { + RedisCommands connection = client + .connect( + CompressionCodec.valueCompressor(new SerializedObjectCodec(), CompressionCodec.CompressionType.DEFLATE)) + .sync(); + List list = list("one", "two"); + connection.set(key, list); + assertThat(connection.get(key)).isEqualTo(list); + + connection.getStatefulConnection().close(); + } + + @Test + void testGzipompressedJavaSerializer() { + RedisCommands connection = client + .connect(CompressionCodec.valueCompressor(new SerializedObjectCodec(), CompressionCodec.CompressionType.GZIP)) + .sync(); + List list = list("one", "two"); + connection.set(key, list); + assertThat(connection.get(key)).isEqualTo(list); + + connection.getStatefulConnection().close(); + } + + @Test + void testEncryptedCodec() { + + RedisCommands connection = client.connect(CipherCodec.forValues(StringCodec.UTF8, encrypt, decrypt)) + .sync(); + + connection.set(key, "foobar"); + assertThat(connection.get(key)).isEqualTo("foobar"); + + connection.getStatefulConnection().close(); + } + + @Test + void testByteCodec() { + RedisCommands connection = client.connect(new ByteArrayCodec()).sync(); + String value = "üöäü+#"; + connection.set(key.getBytes(), value.getBytes()); + assertThat(connection.get(key.getBytes())).isEqualTo(value.getBytes()); + connection.set(key.getBytes(), null); + assertThat(connection.get(key.getBytes())).isEqualTo(new byte[0]); + + List keys = connection.keys(key.getBytes()); + assertThat(keys).contains(key.getBytes()); + + connection.getStatefulConnection().close(); + } + + @Test + void testByteBufferCodec() { + + RedisCommands connection = client.connect(new ByteBufferCodec()).sync(); + String value = "üöäü+#"; + + ByteBuffer wrap = ByteBuffer.wrap(value.getBytes()); + + connection.set(wrap, wrap); + + List keys = connection.keys(wrap); + assertThat(keys).hasSize(1); + ByteBuffer byteBuffer = keys.get(0); + byte[] bytes = new byte[byteBuffer.remaining()]; + byteBuffer.get(bytes); + + assertThat(bytes).isEqualTo(value.getBytes()); + + connection.getStatefulConnection().close(); + } + + @Test + void testComposedCodec() { + + RedisCodec composed = 
RedisCodec.of(StringCodec.ASCII, new SerializedObjectCodec()); + RedisCommands connection = client.connect(composed).sync(); + + connection.set(key, new Person()); + + List keys = connection.keys(key); + assertThat(keys).hasSize(1); + + assertThat(connection.get(key)).isInstanceOf(Person.class); + + connection.getStatefulConnection().close(); + } + + class SerializedObjectCodec implements RedisCodec { + + private Charset charset = StandardCharsets.UTF_8; + + @Override + public String decodeKey(ByteBuffer bytes) { + return charset.decode(bytes).toString(); + } + + @Override + public Object decodeValue(ByteBuffer bytes) { + try { + byte[] array = new byte[bytes.remaining()]; + bytes.get(array); + ObjectInputStream is = new ObjectInputStream(new ByteArrayInputStream(array)); + return is.readObject(); + } catch (Exception e) { + return null; + } + } + + @Override + public ByteBuffer encodeKey(String key) { + return charset.encode(key); + } + + @Override + public ByteBuffer encodeValue(Object value) { + try { + ByteArrayOutputStream bytes = new ByteArrayOutputStream(); + ObjectOutputStream os = new ObjectOutputStream(bytes); + os.writeObject(value); + return ByteBuffer.wrap(bytes.toByteArray()); + } catch (IOException e) { + return null; + } + } + } + + static class Person implements Serializable { + + } +} diff --git a/src/test/java/io/lettuce/core/ExceptionFactoryUnitTests.java b/src/test/java/io/lettuce/core/ExceptionFactoryUnitTests.java new file mode 100644 index 0000000000..ce3e9026cd --- /dev/null +++ b/src/test/java/io/lettuce/core/ExceptionFactoryUnitTests.java @@ -0,0 +1,98 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core; + +import io.lettuce.core.internal.ExceptionFactory; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.time.Duration; + +import org.junit.jupiter.api.Test; + +/** + * @author Mark Paluch + */ +class ExceptionFactoryUnitTests { + + @Test + void shouldCreateBusyException() { + + assertThat(ExceptionFactory.createExecutionException("BUSY foo bar")).isInstanceOf(RedisBusyException.class) + .hasMessage("BUSY foo bar").hasNoCause(); + assertThat(ExceptionFactory.createExecutionException("BUSY foo bar", new IllegalStateException())) + .isInstanceOf(RedisBusyException.class).hasMessage("BUSY foo bar") + .hasRootCauseInstanceOf(IllegalStateException.class); + } + + @Test + void shouldCreateNoscriptException() { + + assertThat(ExceptionFactory.createExecutionException("NOSCRIPT foo bar")).isInstanceOf(RedisNoScriptException.class) + .hasMessage("NOSCRIPT foo bar").hasNoCause(); + assertThat(ExceptionFactory.createExecutionException("NOSCRIPT foo bar", new IllegalStateException())) + .isInstanceOf(RedisNoScriptException.class).hasMessage("NOSCRIPT foo bar") + .hasRootCauseInstanceOf(IllegalStateException.class); + } + + @Test + void shouldCreateExecutionException() { + + assertThat(ExceptionFactory.createExecutionException("ERR foo bar")).isInstanceOf(RedisCommandExecutionException.class) + .hasMessage("ERR foo bar").hasNoCause(); + assertThat(ExceptionFactory.createExecutionException("ERR foo bar", new IllegalStateException())) + .isInstanceOf(RedisCommandExecutionException.class).hasMessage("ERR foo bar") + .hasRootCauseInstanceOf(IllegalStateException.class); + assertThat(ExceptionFactory.createExecutionException(null, new IllegalStateException())).isInstanceOf( + RedisCommandExecutionException.class).hasRootCauseInstanceOf(IllegalStateException.class); + } + + @Test + void shouldCreateLoadingException() { + + assertThat(ExceptionFactory.createExecutionException("LOADING foo bar")).isInstanceOf(RedisLoadingException.class) + .hasMessage("LOADING foo bar").hasNoCause(); + assertThat(ExceptionFactory.createExecutionException("LOADING foo bar", new IllegalStateException())) + .isInstanceOf(RedisLoadingException.class).hasMessage("LOADING foo bar") + .hasRootCauseInstanceOf(IllegalStateException.class); + } + + @Test + void shouldFormatExactUnits() { + + assertThat(ExceptionFactory.formatTimeout(Duration.ofMinutes(2))).isEqualTo("2 minute(s)"); + assertThat(ExceptionFactory.formatTimeout(Duration.ofMinutes(1))).isEqualTo("1 minute(s)"); + assertThat(ExceptionFactory.formatTimeout(Duration.ofMinutes(0))).isEqualTo("no timeout"); + + assertThat(ExceptionFactory.formatTimeout(Duration.ofSeconds(2))).isEqualTo("2 second(s)"); + assertThat(ExceptionFactory.formatTimeout(Duration.ofSeconds(1))).isEqualTo("1 second(s)"); + assertThat(ExceptionFactory.formatTimeout(Duration.ofSeconds(0))).isEqualTo("no timeout"); + + assertThat(ExceptionFactory.formatTimeout(Duration.ofMillis(2))).isEqualTo("2 millisecond(s)"); + assertThat(ExceptionFactory.formatTimeout(Duration.ofMillis(1))).isEqualTo("1 millisecond(s)"); + assertThat(ExceptionFactory.formatTimeout(Duration.ofMillis(0))).isEqualTo("no timeout"); + } + + @Test + void shouldFormatToMinmalApplicableTimeunit() { + + assertThat(ExceptionFactory.formatTimeout(Duration.ofMinutes(2).plus(Duration.ofSeconds(10)))).isEqualTo( + "130 second(s)"); + assertThat(ExceptionFactory.formatTimeout(Duration.ofSeconds(2).plus(Duration.ofMillis(5)))).isEqualTo( + "2005 millisecond(s)"); + 
assertThat(ExceptionFactory.formatTimeout(Duration.ofNanos(2))).isEqualTo("2 ns");
+    }
+}
diff --git a/src/test/java/io/lettuce/core/GeoModelUnitTests.java b/src/test/java/io/lettuce/core/GeoModelUnitTests.java
new file mode 100644
index 0000000000..ad8acd383b
--- /dev/null
+++ b/src/test/java/io/lettuce/core/GeoModelUnitTests.java
@@ -0,0 +1,100 @@
+/*
+ * Copyright 2011-2020 the original author or authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * https://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package io.lettuce.core;
+
+import static org.assertj.core.api.Assertions.assertThat;
+
+import java.util.Collections;
+import java.util.Map;
+
+import org.junit.jupiter.api.Test;
+
+/**
+ * @author Mark Paluch
+ */
+class GeoModelUnitTests {
+
+    @Test
+    void geoWithin() {
+
+        GeoWithin<String> sut = new GeoWithin<>("me", 1.0, 1234L, new GeoCoordinates(1, 2));
+        GeoWithin<String> equalsToSut = new GeoWithin<>("me", 1.0, 1234L, new GeoCoordinates(1, 2));
+
+        Map<GeoWithin<String>, String> map = Collections.singletonMap(sut, "value");
+
+        assertThat(map.get(equalsToSut)).isEqualTo("value");
+        assertThat(sut).isEqualTo(equalsToSut);
+        assertThat(sut.hashCode()).isEqualTo(equalsToSut.hashCode());
+        assertThat(sut.toString()).isEqualTo(equalsToSut.toString());
+    }
+
+    @Test
+    void geoWithinSlightlyDifferent() {
+
+        GeoWithin<String> sut = new GeoWithin<>("me", 1.0, 1234L, new GeoCoordinates(1, 2));
+        GeoWithin<String> slightlyDifferent = new GeoWithin<>("me", 1.0, 1234L, new GeoCoordinates(1.1, 2));
+
+        Map<GeoWithin<String>, String> map = Collections.singletonMap(sut, "value");
+
+        assertThat(map.get(slightlyDifferent)).isNull();
+        assertThat(sut).isNotEqualTo(slightlyDifferent);
+        assertThat(sut.hashCode()).isNotEqualTo(slightlyDifferent.hashCode());
+        assertThat(sut.toString()).isNotEqualTo(slightlyDifferent.toString());
+
+        slightlyDifferent = new GeoWithin<>("me1", 1.0, 1234L, new GeoCoordinates(1, 2));
+        assertThat(sut).isNotEqualTo(slightlyDifferent);
+    }
+
+    @Test
+    void geoWithinEmpty() {
+
+        GeoWithin<String> sut = new GeoWithin<>(null, null, null, null);
+        GeoWithin<String> equalsToSut = new GeoWithin<>(null, null, null, null);
+
+        assertThat(sut).isEqualTo(equalsToSut);
+        assertThat(sut.hashCode()).isEqualTo(equalsToSut.hashCode());
+    }
+
+    @Test
+    void geoCoordinates() {
+
+        GeoCoordinates sut = new GeoCoordinates(1, 2);
+        GeoCoordinates equalsToSut = new GeoCoordinates(1, 2);
+
+        Map<GeoCoordinates, String> map = Collections.singletonMap(sut, "value");
+
+        assertThat(map.get(equalsToSut)).isEqualTo("value");
+        assertThat(sut).isEqualTo(equalsToSut);
+        assertThat(sut.hashCode()).isEqualTo(equalsToSut.hashCode());
+        assertThat(sut.toString()).isEqualTo(equalsToSut.toString());
+
+    }
+
+    @Test
+    void geoCoordinatesSlightlyDifferent() {
+
+        GeoCoordinates sut = new GeoCoordinates(1, 2);
+        GeoCoordinates slightlyDifferent = new GeoCoordinates(1.1, 2);
+
+        Map<GeoCoordinates, String> map = Collections.singletonMap(sut, "value");
+
+        assertThat(map.get(slightlyDifferent)).isNull();
+        assertThat(sut).isNotEqualTo(slightlyDifferent);
+        assertThat(sut.hashCode()).isNotEqualTo(slightlyDifferent.hashCode());
+
assertThat(sut.toString()).isNotEqualTo(slightlyDifferent.toString()); + + } +} diff --git a/src/test/java/io/lettuce/core/JavaRuntimeUnitTests.java b/src/test/java/io/lettuce/core/JavaRuntimeUnitTests.java new file mode 100644 index 0000000000..9472cb0b31 --- /dev/null +++ b/src/test/java/io/lettuce/core/JavaRuntimeUnitTests.java @@ -0,0 +1,47 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.junit.jupiter.api.Assumptions.assumeTrue; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.internal.LettuceClassUtils; + +class JavaRuntimeUnitTests { + + @Test + void testJava8() { + + assumeTrue(System.getProperty("java.version").startsWith("1.8")); + + assertThat(JavaRuntime.AT_LEAST_JDK_8).isTrue(); + } + + @Test + void testJava9() { + + assumeTrue(System.getProperty("java.version").startsWith("9")); + + assertThat(JavaRuntime.AT_LEAST_JDK_8).isTrue(); + } + + @Test + void testNotPresentClass() { + assertThat(LettuceClassUtils.isPresent("total.fancy.class.name")).isFalse(); + } +} diff --git a/src/test/java/io/lettuce/core/KeyValueUnitTests.java b/src/test/java/io/lettuce/core/KeyValueUnitTests.java new file mode 100644 index 0000000000..ada65e474b --- /dev/null +++ b/src/test/java/io/lettuce/core/KeyValueUnitTests.java @@ -0,0 +1,125 @@ + +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import static io.lettuce.core.Value.just; +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; + +import java.util.Optional; + +import org.junit.jupiter.api.Test; + +/** + * @author Will Glozer + * @author Mark Paluch + */ +class KeyValueUnitTests { + + @Test + void shouldCreateEmptyKeyValueFromOptional() { + + KeyValue value = KeyValue.from("key", Optional. 
empty()); + + assertThat(value.hasValue()).isFalse(); + } + + @Test + void shouldCreateEmptyValue() { + + KeyValue value = KeyValue.empty("key"); + + assertThat(value.hasValue()).isFalse(); + } + + @Test + void shouldCreateNonEmptyValueFromOptional() { + + KeyValue value = KeyValue.from(1L, Optional.of("hello")); + + assertThat(value.hasValue()).isTrue(); + assertThat(value.getValue()).isEqualTo("hello"); + } + + @Test + void shouldCreateEmptyValueFromValue() { + + KeyValue value = KeyValue.fromNullable("key", null); + + assertThat(value.hasValue()).isFalse(); + } + + @Test + void shouldCreateNonEmptyValueFromValue() { + + KeyValue value = KeyValue.fromNullable("key", "hello"); + + assertThat(value.hasValue()).isTrue(); + assertThat(value.getValue()).isEqualTo("hello"); + } + + @Test + void justShouldCreateValueFromValue() { + + KeyValue value = KeyValue.just("key", "hello"); + + assertThat(value.hasValue()).isTrue(); + assertThat(value.getKey()).isEqualTo("key"); + } + + @Test + void justShouldRejectEmptyValueFromValue() { + assertThatThrownBy(() -> just(null)).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void shouldCreateNonEmptyValue() { + + KeyValue value = KeyValue.from("key", Optional.of("hello")); + + assertThat(value.hasValue()).isTrue(); + assertThat(value.getValue()).isEqualTo("hello"); + } + + @Test + void equals() { + KeyValue kv = kv("key", "value"); + assertThat(kv.equals(kv("key", "value"))).isTrue(); + assertThat(kv.equals(null)).isFalse(); + assertThat(kv.equals(kv("a", "value"))).isFalse(); + assertThat(kv.equals(kv("key", "b"))).isFalse(); + } + + @Test + void testHashCode() { + assertThat(kv("key", "value").hashCode() != 0).isTrue(); + } + + @Test + void toStringShouldRenderCorrectly() { + + KeyValue value = KeyValue.from("key", Optional.of("hello")); + KeyValue empty = KeyValue.fromNullable("key", null); + + assertThat(value.toString()).isEqualTo("KeyValue[key, hello]"); + assertThat(empty.toString()).isEqualTo("KeyValue[key].empty"); + } + + KeyValue kv(String key, String value) { + return KeyValue.just(key, value); + } +} diff --git a/src/test/java/io/lettuce/core/LettuceFuturesUnitTests.java b/src/test/java/io/lettuce/core/LettuceFuturesUnitTests.java new file mode 100644 index 0000000000..5d1b9ec0ca --- /dev/null +++ b/src/test/java/io/lettuce/core/LettuceFuturesUnitTests.java @@ -0,0 +1,71 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */
+package io.lettuce.core;
+
+import static io.lettuce.core.LettuceFutures.awaitAll;
+import static java.util.concurrent.TimeUnit.SECONDS;
+import static org.assertj.core.api.Assertions.assertThat;
+import static org.assertj.core.api.Assertions.assertThatThrownBy;
+
+import java.util.concurrent.CompletableFuture;
+
+import org.junit.jupiter.api.BeforeEach;
+import org.junit.jupiter.api.Test;
+
+import io.lettuce.core.internal.Futures;
+
+/**
+ * @author Mark Paluch
+ */
+class LettuceFuturesUnitTests {
+
+    @BeforeEach
+    void setUp() {
+        Thread.interrupted();
+    }
+
+    @Test
+    void awaitAllShouldThrowRedisCommandExecutionException() {
+
+        CompletableFuture<String> f = new CompletableFuture<>();
+        f.completeExceptionally(new RedisCommandExecutionException("error"));
+
+        assertThatThrownBy(() -> Futures.await(1, SECONDS, f)).isInstanceOf(RedisCommandExecutionException.class);
+    }
+
+    @Test
+    void awaitAllShouldThrowRedisCommandInterruptedException() {
+
+        CompletableFuture<String> f = new CompletableFuture<>();
+        Thread.currentThread().interrupt();
+
+        assertThatThrownBy(() -> Futures.await(1, SECONDS, f)).isInstanceOf(RedisCommandInterruptedException.class);
+    }
+
+    @Test
+    void awaitAllShouldSetInterruptedBit() {
+
+        CompletableFuture<String> f = new CompletableFuture<>();
+        Thread.currentThread().interrupt();
+
+        try {
+            Futures.await(1, SECONDS, f);
+        } catch (Exception e) {
+        }
+
+        assertThat(Thread.currentThread().isInterrupted()).isTrue();
+    }
+}
diff --git a/src/test/java/io/lettuce/core/LimitUnitTests.java b/src/test/java/io/lettuce/core/LimitUnitTests.java
new file mode 100644
index 0000000000..74a25330e7
--- /dev/null
+++ b/src/test/java/io/lettuce/core/LimitUnitTests.java
@@ -0,0 +1,46 @@
+/*
+ * Copyright 2011-2020 the original author or authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *      https://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package io.lettuce.core;
+
+import static org.assertj.core.api.Assertions.assertThat;
+
+import org.junit.jupiter.api.Test;
+
+/**
+ * @author Mark Paluch
+ */
+class LimitUnitTests {
+
+    @Test
+    void create() {
+
+        Limit limit = Limit.create(1, 2);
+
+        assertThat(limit.getOffset()).isEqualTo(1);
+        assertThat(limit.getCount()).isEqualTo(2);
+        assertThat(limit.isLimited()).isTrue();
+    }
+
+    @Test
+    void unlimited() {
+
+        Limit limit = Limit.unlimited();
+
+        assertThat(limit.getOffset()).isEqualTo(-1);
+        assertThat(limit.getCount()).isEqualTo(-1);
+        assertThat(limit.isLimited()).isFalse();
+    }
+}
diff --git a/src/test/java/io/lettuce/core/PipeliningIntegrationTests.java b/src/test/java/io/lettuce/core/PipeliningIntegrationTests.java
new file mode 100644
index 0000000000..2a63922653
--- /dev/null
+++ b/src/test/java/io/lettuce/core/PipeliningIntegrationTests.java
@@ -0,0 +1,127 @@
+/*
+ * Copyright 2011-2020 the original author or authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *      https://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package io.lettuce.core;
+
+import static org.assertj.core.api.Assertions.assertThat;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.TimeUnit;
+
+import javax.inject.Inject;
+
+import org.junit.jupiter.api.BeforeEach;
+import org.junit.jupiter.api.Test;
+import org.junit.jupiter.api.TestInstance;
+import org.junit.jupiter.api.extension.ExtendWith;
+
+import io.lettuce.core.api.StatefulRedisConnection;
+import io.lettuce.core.api.async.RedisAsyncCommands;
+import io.lettuce.core.internal.Futures;
+import io.lettuce.test.LettuceExtension;
+
+/**
+ * @author Mark Paluch
+ */
+@SuppressWarnings("rawtypes")
+@ExtendWith(LettuceExtension.class)
+@TestInstance(TestInstance.Lifecycle.PER_CLASS)
+class PipeliningIntegrationTests extends TestSupport {
+
+    private final RedisClient client;
+    private final StatefulRedisConnection<String, String> connection;
+
+    @Inject
+    PipeliningIntegrationTests(RedisClient client, StatefulRedisConnection<String, String> connection) {
+        this.client = client;
+        this.connection = connection;
+    }
+
+    @BeforeEach
+    void setUp() {
+        this.connection.async().flushall();
+    }
+
+    @Test
+    void basic() {
+
+        StatefulRedisConnection<String, String> connection = client.connect();
+        connection.setAutoFlushCommands(false);
+
+        int iterations = 100;
+        List<RedisFuture<String>> futures = triggerSet(connection.async(), iterations);
+
+        verifyNotExecuted(iterations);
+
+        connection.flushCommands();
+
+        Futures.awaitAll(5, TimeUnit.SECONDS, futures.toArray(new RedisFuture[futures.size()]));
+
+        verifyExecuted(iterations);
+
+        connection.close();
+    }
+
+    void verifyExecuted(int iterations) {
+        for (int i = 0; i < iterations; i++) {
+            assertThat(connection.sync().get(key(i))).as("Key " + key(i) + " must be " + value(i)).isEqualTo(value(i));
+        }
+    }
+
+    @Test
+    void setAutoFlushTrueDoesNotFlush() {
+
+        StatefulRedisConnection<String, String> connection = client.connect();
+        connection.setAutoFlushCommands(false);
+
+        int iterations = 100;
+        List<RedisFuture<String>> futures = triggerSet(connection.async(), iterations);
+
+        verifyNotExecuted(iterations);
+
+        connection.setAutoFlushCommands(true);
+
+        verifyNotExecuted(iterations);
+
+        connection.flushCommands();
+        boolean result = Futures.awaitAll(5, TimeUnit.SECONDS, futures.toArray(new RedisFuture[futures.size()]));
+        assertThat(result).isTrue();
+
+        connection.close();
+    }
+
+    void verifyNotExecuted(int iterations) {
+        for (int i = 0; i < iterations; i++) {
+            assertThat(connection.sync().get(key(i))).as("Key " + key(i) + " must be null").isNull();
+        }
+    }
+
+    List<RedisFuture<String>> triggerSet(RedisAsyncCommands<String, String> connection, int iterations) {
+        List<RedisFuture<String>> futures = new ArrayList<>();
+        for (int i = 0; i < iterations; i++) {
+            futures.add(connection.set(key(i), value(i)));
+        }
+        return futures;
+    }
+
+    String value(int i) {
+        return value + "-" + i;
+    }
+
+    String key(int i) {
+        return key + "-" + i;
+    }
+}
diff --git a/src/test/java/io/lettuce/core/ProtectedModeTests.java b/src/test/java/io/lettuce/core/ProtectedModeTests.java
new file mode 100644
index 0000000000..7524ff1d54
--- /dev/null
+++ b/src/test/java/io/lettuce/core/ProtectedModeTests.java
@@ -0,0 +1,143 @@
+/*
+ * Copyright 2017-2020 the
original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; + +import java.io.IOException; +import java.nio.charset.StandardCharsets; + +import org.junit.jupiter.api.AfterAll; +import org.junit.jupiter.api.BeforeAll; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; + +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.test.Wait; +import io.lettuce.test.resource.FastShutdown; +import io.lettuce.test.resource.TestClientResources; +import io.lettuce.test.server.MockTcpServer; +import io.lettuce.test.settings.TestSettings; +import io.netty.buffer.ByteBuf; +import io.netty.channel.ChannelHandlerContext; +import io.netty.channel.ChannelInboundHandlerAdapter; + +/** + * @author Mark Paluch + */ +class ProtectedModeTests { + + private static MockTcpServer server; + private static RedisClient client; + + @BeforeAll + static void beforeClass() throws Exception { + + server = new MockTcpServer(); + + server.addHandler(() -> { + return new ChannelInboundHandlerAdapter() { + @Override + public void channelActive(ChannelHandlerContext ctx) { + + String message = getMessage(); + ByteBuf buffer = ctx.alloc().buffer(message.length() + 3); + buffer.writeCharSequence("-", StandardCharsets.US_ASCII); + buffer.writeCharSequence(message, StandardCharsets.US_ASCII); + buffer.writeByte('\r').writeByte('\n'); + + ctx.writeAndFlush(buffer).addListener(future -> { + ctx.close(); + }); + } + }; + }); + + server.initialize(TestSettings.nonexistentPort()); + + client = RedisClient.create(TestClientResources.get(), + RedisURI.create(TestSettings.host(), TestSettings.nonexistentPort())); + } + + @AfterAll + static void afterClass() { + + server.shutdown(); + FastShutdown.shutdown(client); + } + + @BeforeEach + void before() { + client.setOptions(ClientOptions.create()); + } + + @Test + void regularClientFailsOnFirstCommand() { + + try (StatefulRedisConnection connect = client.connect()) { + + connect.sync().ping(); + } catch (RedisException e) { + if (e.getCause() instanceof IOException) { + assertThat(e).hasCauseInstanceOf(IOException.class); + } else { + assertThat(e.getCause()).hasMessageContaining("DENIED"); + } + } + } + + @Test + void regularClientFailsOnFirstCommandWithDelay() { + + try (StatefulRedisConnection connect = client.connect()) { + + Wait.untilEquals(false, connect::isOpen).waitOrTimeout(); + + connect.sync().ping(); + } catch (RedisException e) { + if (e.getCause() instanceof IOException) { + assertThat(e).hasCauseInstanceOf(IOException.class); + } else { + assertThat(e.getCause()).hasMessageContaining("DENIED"); + } + } + } + + @Test + void connectFailsOnPing() { + + client.setOptions(ClientOptions.builder().build()); + assertThatThrownBy(() -> client.connect()).isInstanceOf(RedisConnectionException.class).hasCauseInstanceOf( + RedisConnectionException.class); + } + + 
private static String getMessage() { + + return "DENIED Redis is running in protected mode because protected mode is enabled, no bind address was specified, " + + "no authentication password is requested to clients. In this mode connections are only accepted from the " + + "loopback interface. If you want to connect from external computers to Redis you may adopt one of the " + + "following solutions: 1) Just disable protected mode sending the command 'CONFIG SET protected-mode no' " + + "from the loopback interface by connecting to Redis from the same host the server is running, however " + + "MAKE SURE Redis is not publicly accessible from internet if you do so. Use CONFIG REWRITE to make this " + + "change permanent. 2) Alternatively you can just disable the protected mode by editing the Redis " + + "configuration file, and setting the protected mode option to 'no', and then restarting the server. " + + "3) If you started the server manually just for testing, restart it with the '--protected-mode no' option. " + + "4) Setup a bind address or an authentication password. NOTE: You only need to do one of the above " + + "things in order for the server to start accepting connections from the outside."; + + } +} diff --git a/src/test/java/io/lettuce/core/RangeUnitTests.java b/src/test/java/io/lettuce/core/RangeUnitTests.java new file mode 100644 index 0000000000..4f485f6bf5 --- /dev/null +++ b/src/test/java/io/lettuce/core/RangeUnitTests.java @@ -0,0 +1,107 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core; + +import static io.lettuce.core.Range.Boundary.excluding; +import static io.lettuce.core.Range.Boundary.including; +import static org.assertj.core.api.Assertions.assertThat; + +import org.junit.jupiter.api.Test; + +/** + * @author Mark Paluch + */ +class RangeUnitTests { + + @Test + void unbounded() { + + Range unbounded = Range.unbounded(); + + assertThat(unbounded.getLower().isIncluding()).isTrue(); + assertThat(unbounded.getLower().getValue()).isNull(); + assertThat(unbounded.getUpper().isIncluding()).isTrue(); + assertThat(unbounded.getUpper().getValue()).isNull(); + } + + @Test + void createIncluded() { + + Range range = Range.create("ze", "ro"); + + assertThat(range.getLower().isIncluding()).isTrue(); + assertThat(range.getLower().getValue()).isEqualTo("ze"); + assertThat(range.getUpper().isIncluding()).isTrue(); + assertThat(range.getUpper().getValue()).isEqualTo("ro"); + } + + @Test + void fromBoundaries() { + + Range range = Range.from(including("ze"), excluding("ro")); + + assertThat(range.getLower().isIncluding()).isTrue(); + assertThat(range.getLower().getValue()).isEqualTo("ze"); + assertThat(range.getUpper().isIncluding()).isFalse(); + assertThat(range.getUpper().getValue()).isEqualTo("ro"); + } + + @Test + void greater() { + + Range gt = Range.unbounded().gt("zero"); + + assertThat(gt.getLower().isIncluding()).isFalse(); + assertThat(gt.getLower().getValue()).isEqualTo("zero"); + assertThat(gt.getUpper().isIncluding()).isTrue(); + assertThat(gt.getUpper().getValue()).isNull(); + } + + @Test + void greaterOrEquals() { + + Range gte = Range.unbounded().gte("zero"); + + assertThat(gte.getLower().isIncluding()).isTrue(); + assertThat(gte.getLower().getValue()).isEqualTo("zero"); + assertThat(gte.getUpper().isIncluding()).isTrue(); + assertThat(gte.getUpper().getValue()).isNull(); + } + + @Test + void less() { + + Range lt = Range.unbounded().lt("zero"); + + assertThat(lt.getLower().isIncluding()).isTrue(); + assertThat(lt.getLower().getValue()).isNull(); + assertThat(lt.getUpper().isIncluding()).isFalse(); + assertThat(lt.getUpper().getValue()).isEqualTo("zero"); + assertThat(lt.toString()).isEqualTo("Range [[unbounded] to (zero]"); + } + + @Test + void lessOrEquals() { + + Range lte = Range.unbounded().lte("zero"); + + assertThat(lte.getLower().isIncluding()).isTrue(); + assertThat(lte.getLower().getValue()).isNull(); + assertThat(lte.getUpper().isIncluding()).isTrue(); + assertThat(lte.getUpper().getValue()).isEqualTo("zero"); + assertThat(lte.toString()).isEqualTo("Range [[unbounded] to [zero]"); + } +} diff --git a/src/test/java/io/lettuce/core/ReactiveBackpressurePropagationUnitTests.java b/src/test/java/io/lettuce/core/ReactiveBackpressurePropagationUnitTests.java new file mode 100644 index 0000000000..193a906863 --- /dev/null +++ b/src/test/java/io/lettuce/core/ReactiveBackpressurePropagationUnitTests.java @@ -0,0 +1,199 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.mockito.Matchers.any; +import static org.mockito.Mockito.when; + +import java.util.List; +import java.util.concurrent.CountDownLatch; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; +import org.mockito.Mock; +import org.mockito.junit.jupiter.MockitoExtension; + +import reactor.core.Disposable; +import reactor.core.publisher.Flux; +import reactor.core.scheduler.Schedulers; +import io.lettuce.core.api.StatefulConnection; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.metrics.CommandLatencyCollector; +import io.lettuce.core.output.ValueListOutput; +import io.lettuce.core.protocol.*; +import io.lettuce.core.resource.ClientResources; +import io.lettuce.core.tracing.Tracing; +import io.netty.buffer.Unpooled; +import io.netty.channel.embedded.EmbeddedChannel; +import io.netty.channel.local.LocalAddress; +import io.netty.util.concurrent.ImmediateEventExecutor; + +/** + * @author Mark Paluch + */ +@ExtendWith(MockitoExtension.class) +class ReactiveBackpressurePropagationUnitTests { + + private CommandHandler commandHandler; + private EmbeddedChannel embeddedChannel; + + @Mock + private Endpoint endpoint; + + @Mock + private ClientResources clientResources; + + @Mock + private CommandLatencyCollector latencyCollector; + + @Mock + private StatefulConnection statefulConnection; + + @BeforeEach + void before() { + + when(clientResources.commandLatencyCollector()).thenReturn(latencyCollector); + when(clientResources.tracing()).thenReturn(Tracing.disabled()); + when(statefulConnection.dispatch(any(RedisCommand.class))).thenAnswer(invocation -> { + + RedisCommand command = (RedisCommand) invocation.getArguments()[0]; + embeddedChannel.writeOutbound(command); + return command; + }); + + commandHandler = new CommandHandler(ClientOptions.create(), clientResources, endpoint); + + embeddedChannel = new EmbeddedChannel(commandHandler); + embeddedChannel.connect(new LocalAddress("remote")); + } + + @Test + void writeCommand() throws Exception { + + Command> lrange = new Command<>(CommandType.LRANGE, + new ValueListOutput<>(StringCodec.UTF8)); + RedisPublisher publisher = new RedisPublisher<>((Command) lrange, statefulConnection, true, + ImmediateEventExecutor.INSTANCE); + + CountDownLatch pressureArrived = new CountDownLatch(1); + CountDownLatch buildPressure = new CountDownLatch(1); + CountDownLatch waitForPressureReduced = new CountDownLatch(2); + CountDownLatch waitForWorkCompleted = new CountDownLatch(4); + + Flux.from(publisher).limitRate(2).publishOn(Schedulers.single()).doOnNext(s -> { + + try { + pressureArrived.countDown(); + buildPressure.await(); + } catch (InterruptedException e) { + } + + waitForPressureReduced.countDown(); + waitForWorkCompleted.countDown(); + + }).subscribe(); + + assertThat(embeddedChannel.config().isAutoRead()).isTrue(); + + // produce some back pressure + embeddedChannel.writeInbound(Unpooled.wrappedBuffer(RESP.arrayHeader(4))); + embeddedChannel.writeInbound(Unpooled.wrappedBuffer(RESP.bulkString("one"))); + pressureArrived.await(); + assertThat(embeddedChannel.config().isAutoRead()).isTrue(); + + embeddedChannel.writeInbound(Unpooled.wrappedBuffer(RESP.bulkString("two"))); + embeddedChannel.writeInbound(Unpooled.wrappedBuffer(RESP.bulkString("three"))); + assertThat(embeddedChannel.config().isAutoRead()).isFalse(); + + // allow processing + 
buildPressure.countDown(); + + // wait until processing caught up + waitForPressureReduced.await(); + assertThat(embeddedChannel.config().isAutoRead()).isTrue(); + + // emit the last item + embeddedChannel.writeInbound(Unpooled.wrappedBuffer(RESP.bulkString("four"))); + + // done + waitForWorkCompleted.await(); + assertThat(embeddedChannel.config().isAutoRead()).isTrue(); + } + + @Test + void writeCommandAndCancelInTheMiddle() throws Exception { + + Command> lrange = new Command<>(CommandType.LRANGE, + new ValueListOutput<>(StringCodec.UTF8)); + RedisPublisher publisher = new RedisPublisher<>(lrange, statefulConnection, true, + ImmediateEventExecutor.INSTANCE); + + CountDownLatch pressureArrived = new CountDownLatch(1); + CountDownLatch buildPressure = new CountDownLatch(1); + CountDownLatch waitForPressureReduced = new CountDownLatch(2); + + Disposable cancellation = Flux.from(publisher).limitRate(2).publishOn(Schedulers.single()).doOnNext(s -> { + + try { + pressureArrived.countDown(); + buildPressure.await(); + } catch (InterruptedException e) { + } + + waitForPressureReduced.countDown(); + + }).subscribe(); + + assertThat(embeddedChannel.config().isAutoRead()).isTrue(); + + // produce some back pressure + embeddedChannel.writeInbound(Unpooled.wrappedBuffer(RESP.arrayHeader(4))); + embeddedChannel.writeInbound(Unpooled.wrappedBuffer(RESP.bulkString("one"))); + pressureArrived.await(); + assertThat(embeddedChannel.config().isAutoRead()).isTrue(); + + embeddedChannel.writeInbound(Unpooled.wrappedBuffer(RESP.bulkString("two"))); + embeddedChannel.writeInbound(Unpooled.wrappedBuffer(RESP.bulkString("three"))); + assertThat(embeddedChannel.config().isAutoRead()).isFalse(); + + cancellation.dispose(); + + assertThat(embeddedChannel.config().isAutoRead()).isTrue(); + + // allow processing + buildPressure.countDown(); + + // emit the last item + embeddedChannel.writeInbound(Unpooled.wrappedBuffer(RESP.bulkString("four"))); + + // done + assertThat(embeddedChannel.config().isAutoRead()).isTrue(); + } + + static class RESP { + + static byte[] arrayHeader(int count) { + return String.format("*%d\r\n", count).getBytes(); + } + + static byte[] bulkString(String string) { + return String.format("$%d\r\n%s\r\n", string.getBytes().length, string).getBytes(); + } + } + +} diff --git a/src/test/java/io/lettuce/core/ReactiveConnectionIntegrationTests.java b/src/test/java/io/lettuce/core/ReactiveConnectionIntegrationTests.java new file mode 100644 index 0000000000..b7fae489c0 --- /dev/null +++ b/src/test/java/io/lettuce/core/ReactiveConnectionIntegrationTests.java @@ -0,0 +1,338 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core; + +import static io.lettuce.core.ClientOptions.DisconnectedBehavior.REJECT_COMMANDS; +import static io.lettuce.core.ScriptOutputType.INTEGER; +import static org.assertj.core.api.Assertions.assertThat; + +import java.time.Duration; +import java.util.ArrayList; +import java.util.List; +import java.util.concurrent.CountDownLatch; +import java.util.concurrent.TimeUnit; + +import javax.enterprise.inject.New; +import javax.inject.Inject; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.TestInstance; +import org.junit.jupiter.api.extension.ExtendWith; +import org.reactivestreams.Subscriber; +import org.reactivestreams.Subscription; + +import reactor.core.publisher.Flux; +import reactor.core.publisher.Mono; +import reactor.test.StepVerifier; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.reactive.RedisReactiveCommands; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.test.Delay; +import io.lettuce.test.LettuceExtension; +import io.lettuce.test.Wait; +import io.lettuce.test.WithPassword; +import io.lettuce.test.condition.EnabledOnCommand; + +/** + * @author Mark Paluch + * @author Nikolai Perevozchikov + * @author Tugdual Grall + */ +@ExtendWith(LettuceExtension.class) +@TestInstance(TestInstance.Lifecycle.PER_CLASS) +class ReactiveConnectionIntegrationTests extends TestSupport { + + private final StatefulRedisConnection connection; + private final RedisCommands redis; + private final RedisReactiveCommands reactive; + + @Inject + ReactiveConnectionIntegrationTests(StatefulRedisConnection connection) { + this.connection = connection; + this.redis = connection.sync(); + this.reactive = connection.reactive(); + } + + @BeforeEach + void setUp() { + this.connection.async().flushall(); + } + + @Test + void doNotFireCommandUntilObservation() { + + RedisReactiveCommands reactive = connection.reactive(); + Mono set = reactive.set(key, value); + Delay.delay(Duration.ofMillis(50)); + assertThat(redis.get(key)).isNull(); + set.subscribe(); + Wait.untilEquals(value, () -> redis.get(key)).waitOrTimeout(); + + assertThat(redis.get(key)).isEqualTo(value); + } + + @Test + void fireCommandAfterObserve() { + StepVerifier.create(reactive.set(key, value)).expectNext("OK").verifyComplete(); + assertThat(redis.get(key)).isEqualTo(value); + } + + @Test + void isOpen() { + assertThat(reactive.isOpen()).isTrue(); + } + + @Test + void getStatefulConnection() { + assertThat(reactive.getStatefulConnection()).isSameAs(connection); + } + + @Test + @Inject + void testCancelCommand(@New StatefulRedisConnection connection) { + + RedisReactiveCommands reactive = connection.reactive(); + List result = new ArrayList<>(); + reactive.clientPause(2000).subscribe(); + Delay.delay(Duration.ofMillis(50)); + + reactive.set(key, value).subscribe(new CompletionSubscriber(result)); + Delay.delay(Duration.ofMillis(50)); + + reactive.reset(); + assertThat(result).isEmpty(); + } + + @Test + void testEcho() { + StepVerifier.create(reactive.echo("echo")).expectNext("echo").verifyComplete(); + } + + @Test + @Inject + void testMonoMultiCancel(@New StatefulRedisConnection connection) { + + RedisReactiveCommands reactive = connection.reactive(); + + List result = new ArrayList<>(); + reactive.clientPause(1000).subscribe(); + Delay.delay(Duration.ofMillis(50)); + + Mono set = reactive.set(key, value); + set.subscribe(new CompletionSubscriber(result)); + set.subscribe(new CompletionSubscriber(result)); + 
set.subscribe(new CompletionSubscriber(result)); + Delay.delay(Duration.ofMillis(50)); + + reactive.reset(); + assertThat(result).isEmpty(); + } + + @Test + @Inject + void testFluxCancel(@New StatefulRedisConnection connection) { + + RedisReactiveCommands reactive = connection.reactive(); + + List result = new ArrayList<>(); + reactive.clientPause(1000).subscribe(); + Delay.delay(Duration.ofMillis(100)); + + Flux> set = reactive.mget(key, value); + set.subscribe(new CompletionSubscriber(result)); + set.subscribe(new CompletionSubscriber(result)); + set.subscribe(new CompletionSubscriber(result)); + Delay.delay(Duration.ofMillis(100)); + + reactive.reset(); + assertThat(result).isEmpty(); + } + + @Test + void multiSubscribe() throws Exception { + + CountDownLatch latch = new CountDownLatch(4); + reactive.set(key, "1").subscribe(s -> latch.countDown()); + Mono incr = reactive.incr(key); + incr.subscribe(s -> latch.countDown()); + incr.subscribe(s -> latch.countDown()); + incr.subscribe(s -> latch.countDown()); + + latch.await(); + + Wait.untilEquals("4", () -> redis.get(key)).waitOrTimeout(); + + assertThat(redis.get(key)).isEqualTo("4"); + } + + @Test + @Inject + void transactional(RedisClient client) throws Exception { + + final CountDownLatch sync = new CountDownLatch(1); + + RedisReactiveCommands reactive = client.connect().reactive(); + + reactive.multi().subscribe(multiResponse -> { + reactive.set(key, "1").subscribe(); + reactive.incr(key).subscribe(getResponse -> { + sync.countDown(); + }); + reactive.exec().subscribe(); + }); + + sync.await(5, TimeUnit.SECONDS); + + String result = redis.get(key); + assertThat(result).isEqualTo("2"); + + reactive.getStatefulConnection().close(); + } + + @Test + void auth() { + WithPassword.enableAuthentication(this.connection.sync()); + + try { + StepVerifier.create(reactive.auth("error")).expectError().verify(); + } finally { + WithPassword.disableAuthentication(this.connection.sync()); + } + } + + @Test + @EnabledOnCommand("ACL") + void authWithUsername() { + + try { + + StepVerifier.create(reactive.auth(username, "error")).expectNext("OK").verifyComplete(); + + WithPassword.enableAuthentication(this.connection.sync()); + + StepVerifier.create(reactive.auth(username, "error")).expectError().verify(); + StepVerifier.create(reactive.auth(aclUsername, aclPasswd)).expectNext("OK").verifyComplete(); + StepVerifier.create(reactive.auth(aclUsername, "error")).expectError().verify(); + } finally { + WithPassword.disableAuthentication(this.connection.sync()); + } + } + + @Test + void subscriberCompletingWithExceptionShouldBeHandledSafely() { + + StepVerifier.create(Flux.concat(reactive.set("keyA", "valueA"), reactive.set("keyB", "valueB"))).expectNextCount(2) + .verifyComplete(); + + reactive.get("keyA").subscribe(createSubscriberWithExceptionOnComplete()); + reactive.get("keyA").subscribe(createSubscriberWithExceptionOnComplete()); + + StepVerifier.create(reactive.get("keyB")).expectNext("valueB").verifyComplete(); + } + + @Test + @Inject + void subscribeWithDisconnectedClient(RedisClient client) { + + client.setOptions(ClientOptions.builder().disconnectedBehavior(REJECT_COMMANDS).autoReconnect(false).build()); + + StatefulRedisConnection connection = client.connect(); + + connection.async().quit(); + Wait.untilTrue(() -> !connection.isOpen()).waitOrTimeout(); + + StepVerifier.create(connection.reactive().ping()).consumeErrorWith(throwable -> { + assertThat(throwable).isInstanceOf(RedisException.class) + .hasMessageContaining("not connected. 
Commands are rejected"); + + }).verify(); + + connection.close(); + } + + @Test + @Inject + void publishOnSchedulerTest(RedisClient client) { + + client.setOptions(ClientOptions.builder().publishOnScheduler(true).build()); + + RedisReactiveCommands reactive = client.connect().reactive(); + + int counter = 0; + for (int i = 0; i < 1000; i++) { + if (reactive.eval("return 1", INTEGER).next().block() == null) { + counter++; + } + } + + assertThat(counter).isZero(); + + reactive.getStatefulConnection().close(); + } + + private static Subscriber createSubscriberWithExceptionOnComplete() { + return new Subscriber() { + + @Override + public void onSubscribe(Subscription s) { + s.request(1000); + } + + @Override + public void onComplete() { + throw new RuntimeException("throwing something"); + } + + @Override + public void onError(Throwable e) { + } + + @Override + public void onNext(String s) { + } + }; + } + + private static class CompletionSubscriber implements Subscriber { + + private final List result; + + CompletionSubscriber(List result) { + this.result = result; + } + + @Override + public void onSubscribe(Subscription s) { + s.request(1000); + } + + @Override + public void onComplete() { + result.add("completed"); + } + + @Override + public void onError(Throwable e) { + result.add(e); + } + + @Override + public void onNext(Object o) { + result.add(o); + } + } +} diff --git a/src/test/java/io/lettuce/core/ReactiveStreamingOutputIntegrationTests.java b/src/test/java/io/lettuce/core/ReactiveStreamingOutputIntegrationTests.java new file mode 100644 index 0000000000..782e225b1d --- /dev/null +++ b/src/test/java/io/lettuce/core/ReactiveStreamingOutputIntegrationTests.java @@ -0,0 +1,121 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.util.ArrayList; +import java.util.Arrays; + +import javax.inject.Inject; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.TestInstance; +import org.junit.jupiter.api.extension.ExtendWith; + +import reactor.test.StepVerifier; +import io.lettuce.core.GeoArgs.Unit; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.reactive.RedisReactiveCommands; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.test.KeysAndValues; +import io.lettuce.test.LettuceExtension; +import io.lettuce.test.condition.EnabledOnCommand; + +/** + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +@TestInstance(TestInstance.Lifecycle.PER_CLASS) +class ReactiveStreamingOutputIntegrationTests extends TestSupport { + + private final RedisCommands redis; + private final RedisReactiveCommands reactive; + + @Inject + ReactiveStreamingOutputIntegrationTests(StatefulRedisConnection connection) { + this.redis = connection.sync(); + this.reactive = connection.reactive(); + } + + @BeforeEach + void setUp() { + this.redis.flushall(); + } + + @Test + void keyListCommandShouldReturnAllElements() { + + redis.mset(KeysAndValues.MAP); + + StepVerifier.create(reactive.keys("*")).recordWith(ArrayList::new).expectNextCount(KeysAndValues.COUNT) + .expectRecordedMatches(strings -> strings.containsAll(KeysAndValues.KEYS)).verifyComplete(); + } + + @Test + void valueListCommandShouldReturnAllElements() { + + redis.mset(KeysAndValues.MAP); + + StepVerifier.create(reactive.mget(KeysAndValues.KEYS.toArray(new String[KeysAndValues.COUNT]))) + .expectNextCount(KeysAndValues.COUNT).verifyComplete(); + } + + @Test + void booleanListCommandShouldReturnAllElements() { + StepVerifier.create(reactive.scriptExists("a", "b", "c")).expectNextCount(3).verifyComplete(); + } + + @Test + void scoredValueListCommandShouldReturnAllElements() { + + redis.zadd(key, 1d, "v1", 2d, "v2", 3d, "v3"); + + StepVerifier.create(reactive.zrangeWithScores(key, 0, -1)).recordWith(ArrayList::new).expectNextCount(3) + .expectRecordedMatches(values -> values.containsAll(Arrays.asList(sv(1, "v1"), sv(2, "v2"), sv(3, "v3")))) + .verifyComplete(); + } + + @Test + @EnabledOnCommand("GEORADIUS") + void geoWithinListCommandShouldReturnAllElements() { + + redis.geoadd(key, 50, 20, "value1"); + redis.geoadd(key, 50, 21, "value2"); + + StepVerifier + .create(reactive.georadius(key, 50, 20, 1000, Unit.km, new GeoArgs().withHash())) + .recordWith(ArrayList::new) + .expectNextCount(2) + .consumeRecordedWith( + values -> { + assertThat(values).hasSize(2).contains(new GeoWithin<>("value1", null, 3542523898362974L, null), + new GeoWithin<>("value2", null, 3542609801095198L, null)); + + }).verifyComplete(); + } + + @Test + @EnabledOnCommand("GEOPOS") + void geoCoordinatesListCommandShouldReturnAllElements() { + + redis.geoadd(key, 50, 20, "value1"); + redis.geoadd(key, 50, 21, "value2"); + + StepVerifier.create(reactive.geopos(key, "value1", "value2")).expectNextCount(2).verifyComplete(); + } +} diff --git a/src/test/java/io/lettuce/core/RedisClientConnectIntegrationTests.java b/src/test/java/io/lettuce/core/RedisClientConnectIntegrationTests.java new file mode 100644 index 0000000000..6e8e6c8cd7 --- /dev/null +++ b/src/test/java/io/lettuce/core/RedisClientConnectIntegrationTests.java @@ -0,0 +1,277 @@ +/* + * Copyright 2011-2020 the original author or authors. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import static io.lettuce.core.RedisURI.Builder.redis; +import static io.lettuce.core.codec.StringCodec.UTF8; +import static java.util.concurrent.TimeUnit.SECONDS; +import static org.assertj.core.api.Assertions.assertThatThrownBy; +import static org.assertj.core.api.AssertionsForClassTypes.assertThat; + +import java.time.Duration; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.TimeoutException; + +import javax.inject.Inject; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Disabled; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.pubsub.StatefulRedisPubSubConnection; +import io.lettuce.core.sentinel.api.StatefulRedisSentinelConnection; +import io.lettuce.test.TestFutures; +import io.lettuce.test.LettuceExtension; + +/** + * @author Mark Paluch + * @author Jongyeol Choi + */ +@ExtendWith(LettuceExtension.class) +class RedisClientConnectIntegrationTests extends TestSupport { + + private static final Duration EXPECTED_TIMEOUT = Duration.ofMillis(500); + + private final RedisClient client; + + @Inject + RedisClientConnectIntegrationTests(RedisClient client) { + this.client = client; + } + + @BeforeEach + void before() { + client.setDefaultTimeout(EXPECTED_TIMEOUT); + } + + /* + * Standalone/Stateful + */ + @Test + void connectClientUri() { + + StatefulRedisConnection connection = client.connect(); + assertThat(connection.getTimeout()).isEqualTo(EXPECTED_TIMEOUT); + connection.close(); + } + + @Test + void connectCodecClientUri() { + StatefulRedisConnection connection = client.connect(UTF8); + assertThat(connection.getTimeout()).isEqualTo(EXPECTED_TIMEOUT); + connection.close(); + } + + @Test + void connectOwnUri() { + RedisURI redisURI = redis(host, port).build(); + StatefulRedisConnection connection = client.connect(redisURI); + assertThat(connection.getTimeout()).isEqualTo(redisURI.getTimeout()); + connection.close(); + } + + @Test + void connectMissingHostAndSocketUri() { + assertThatThrownBy(() -> client.connect(new RedisURI())).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void connectSentinelMissingHostAndSocketUri() { + assertThatThrownBy(() -> client.connect(invalidSentinel())).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void connectCodecOwnUri() { + RedisURI redisURI = redis(host, port).build(); + StatefulRedisConnection connection = client.connect(UTF8, redisURI); + assertThat(connection.getTimeout()).isEqualTo(redisURI.getTimeout()); + connection.close(); + } + + @Test + void connectAsyncCodecOwnUri() { + RedisURI redisURI = redis(host, port).build(); + ConnectionFuture> future = client.connectAsync(UTF8, redisURI); + StatefulRedisConnection connection = TestFutures.getOrTimeout(future.toCompletableFuture()); + assertThat(connection.getTimeout()).isEqualTo(redisURI.getTimeout()); + 
connection.close(); + } + + @Test + void connectCodecMissingHostAndSocketUri() { + assertThatThrownBy(() -> client.connect(UTF8, new RedisURI())).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void connectcodecSentinelMissingHostAndSocketUri() { + assertThatThrownBy(() -> client.connect(UTF8, invalidSentinel())).isInstanceOf(IllegalArgumentException.class); + } + + @Test + @Disabled("Non-deterministic behavior. Can cause a deadlock") + void shutdownSyncInRedisFutureTest() { + + RedisClient redisClient = RedisClient.create(); + StatefulRedisConnection connection = redisClient.connect(redis(host, port).build()); + + CompletableFuture f = connection.async().get("key1").whenComplete((result, e) -> { + connection.close(); + redisClient.shutdown(0, 0, SECONDS); // deadlock expected. + }).toCompletableFuture(); + + assertThatThrownBy(() -> TestFutures.awaitOrTimeout(f)).isInstanceOf(TimeoutException.class); + } + + @Test + void shutdownAsyncInRedisFutureTest() { + + RedisClient redisClient = RedisClient.create(); + StatefulRedisConnection connection = redisClient.connect(redis(host, port).build()); + CompletableFuture f = connection.async().get("key1").thenCompose(result -> { + connection.close(); + return redisClient.shutdownAsync(0, 0, SECONDS); + }).toCompletableFuture(); + + TestFutures.awaitOrTimeout(f); + } + + /* + * Standalone/PubSub Stateful + */ + @Test + void connectPubSubClientUri() { + StatefulRedisPubSubConnection connection = client.connectPubSub(); + assertThat(connection.getTimeout()).isEqualTo(EXPECTED_TIMEOUT); + connection.close(); + } + + @Test + void connectPubSubCodecClientUri() { + StatefulRedisPubSubConnection connection = client.connectPubSub(UTF8); + assertThat(connection.getTimeout()).isEqualTo(EXPECTED_TIMEOUT); + connection.close(); + } + + @Test + void connectPubSubOwnUri() { + RedisURI redisURI = redis(host, port).build(); + StatefulRedisPubSubConnection connection = client.connectPubSub(redisURI); + assertThat(connection.getTimeout()).isEqualTo(redisURI.getTimeout()); + connection.close(); + } + + @Test + void connectPubSubMissingHostAndSocketUri() { + assertThatThrownBy(() -> client.connectPubSub(new RedisURI())).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void connectPubSubSentinelMissingHostAndSocketUri() { + assertThatThrownBy(() -> client.connectPubSub(invalidSentinel())).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void connectPubSubCodecOwnUri() { + RedisURI redisURI = redis(host, port).build(); + StatefulRedisPubSubConnection connection = client.connectPubSub(UTF8, redisURI); + assertThat(connection.getTimeout()).isEqualTo(redisURI.getTimeout()); + connection.close(); + } + + @Test + void connectPubSubAsync() { + RedisURI redisURI = redis(host, port).build(); + ConnectionFuture> future = client.connectPubSubAsync( +UTF8, redisURI); + StatefulRedisPubSubConnection connection = future.join(); + assertThat(connection.getTimeout()).isEqualTo(redisURI.getTimeout()); + connection.close(); + } + + @Test + void connectPubSubCodecMissingHostAndSocketUri() { + assertThatThrownBy(() -> client.connectPubSub(UTF8, new RedisURI())).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void connectPubSubCodecSentinelMissingHostAndSocketUri() { + assertThatThrownBy(() -> client.connectPubSub(UTF8, invalidSentinel())).isInstanceOf(IllegalArgumentException.class); + } + + /* + * Sentinel Stateful + */ + @Test + void connectSentinelClientUri() { + StatefulRedisSentinelConnection connection = client.connectSentinel(); 
+ assertThat(connection.getTimeout()).isEqualTo(EXPECTED_TIMEOUT); + connection.close(); + } + + @Test + void connectSentinelCodecClientUri() { + StatefulRedisSentinelConnection connection = client.connectSentinel(UTF8); + assertThat(connection.getTimeout()).isEqualTo(EXPECTED_TIMEOUT); + connection.close(); + } + + @Test + void connectSentinelAndMissingHostAndSocketUri() { + assertThatThrownBy(() -> client.connectSentinel(new RedisURI())).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void connectSentinelSentinelMissingHostAndSocketUri() { + assertThatThrownBy(() -> client.connectSentinel(invalidSentinel())).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void connectSentinelOwnUri() { + RedisURI redisURI = redis(host, port).build(); + StatefulRedisSentinelConnection connection = client.connectSentinel(redisURI); + assertThat(connection.getTimeout()).isEqualTo(Duration.ofMinutes(1)); + connection.close(); + } + + @Test + void connectSentinelCodecOwnUri() { + + RedisURI redisURI = redis(host, port).build(); + StatefulRedisSentinelConnection connection = client.connectSentinel(UTF8, redisURI); + assertThat(connection.getTimeout()).isEqualTo(redisURI.getTimeout()); + connection.close(); + } + + @Test + void connectSentinelCodecMissingHostAndSocketUri() { + assertThatThrownBy(() -> client.connectSentinel(UTF8, new RedisURI())).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void connectSentinelCodecSentinelMissingHostAndSocketUri() { + assertThatThrownBy(() -> client.connectSentinel(UTF8, invalidSentinel())).isInstanceOf(IllegalArgumentException.class); + } + + private static RedisURI invalidSentinel() { + + RedisURI redisURI = new RedisURI(); + redisURI.getSentinels().add(new RedisURI()); + + return redisURI; + } +} diff --git a/src/test/java/io/lettuce/core/RedisClientFactoryBeanUnitTests.java b/src/test/java/io/lettuce/core/RedisClientFactoryBeanUnitTests.java new file mode 100644 index 0000000000..6c6559ba1b --- /dev/null +++ b/src/test/java/io/lettuce/core/RedisClientFactoryBeanUnitTests.java @@ -0,0 +1,176 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.net.URI; + +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.Test; + +import io.lettuce.core.support.RedisClientFactoryBean; +import io.lettuce.test.resource.FastShutdown; + +/** + * @author Mark Paluch + */ +class RedisClientFactoryBeanUnitTests { + + private RedisClientFactoryBean sut = new RedisClientFactoryBean(); + + @AfterEach + void tearDown() throws Exception { + FastShutdown.shutdown(sut.getObject()); + sut.destroy(); + } + + @Test + void testSimpleUri() throws Exception { + String uri = "redis://localhost/2"; + + sut.setUri(URI.create(uri)); + sut.setPassword("password"); + sut.afterPropertiesSet(); + + RedisURI redisURI = sut.getRedisURI(); + + assertThat(redisURI.getDatabase()).isEqualTo(2); + assertThat(redisURI.getHost()).isEqualTo("localhost"); + assertThat(redisURI.getPort()).isEqualTo(RedisURI.DEFAULT_REDIS_PORT); + assertThat(new String(redisURI.getPassword())).isEqualTo("password"); + } + + @Test + void testSimpleUriWithoutDB() throws Exception { + String uri = "redis://localhost/"; + + sut.setUri(URI.create(uri)); + sut.afterPropertiesSet(); + + RedisURI redisURI = sut.getRedisURI(); + + assertThat(redisURI.getDatabase()).isEqualTo(0); + } + + @Test + void testSimpleUriWithoutDB2() throws Exception { + String uri = "redis://localhost/"; + + sut.setUri(URI.create(uri)); + sut.afterPropertiesSet(); + + RedisURI redisURI = sut.getRedisURI(); + + assertThat(redisURI.getDatabase()).isEqualTo(0); + } + + @Test + void testSimpleUriWithPort() throws Exception { + String uri = "redis://localhost:1234/0"; + + sut.setUri(URI.create(uri)); + sut.setPassword("password"); + sut.afterPropertiesSet(); + + RedisURI redisURI = sut.getRedisURI(); + + assertThat(redisURI.getDatabase()).isEqualTo(0); + assertThat(redisURI.getHost()).isEqualTo("localhost"); + assertThat(redisURI.getPort()).isEqualTo(1234); + assertThat(new String(redisURI.getPassword())).isEqualTo("password"); + } + + @Test + void testSentinelUri() throws Exception { + String uri = "redis-sentinel://localhost/1#myMaster"; + + sut.setUri(URI.create(uri)); + sut.setPassword("password"); + sut.afterPropertiesSet(); + + RedisURI redisURI = sut.getRedisURI(); + + assertThat(redisURI.getDatabase()).isEqualTo(1); + + RedisURI sentinelUri = redisURI.getSentinels().get(0); + assertThat(sentinelUri.getHost()).isEqualTo("localhost"); + assertThat(sentinelUri.getPort()).isEqualTo(RedisURI.DEFAULT_SENTINEL_PORT); + assertThat(new String(redisURI.getPassword())).isEqualTo("password"); + assertThat(redisURI.getSentinelMasterId()).isEqualTo("myMaster"); + } + + @Test + void testSentinelUriWithPort() throws Exception { + String uri = "redis-sentinel://localhost:1234/1#myMaster"; + + sut.setUri(URI.create(uri)); + sut.setPassword("password"); + sut.afterPropertiesSet(); + + RedisURI redisURI = sut.getRedisURI(); + + assertThat(redisURI.getDatabase()).isEqualTo(1); + + RedisURI sentinelUri = redisURI.getSentinels().get(0); + assertThat(sentinelUri.getHost()).isEqualTo("localhost"); + assertThat(sentinelUri.getPort()).isEqualTo(1234); + assertThat(new String(redisURI.getPassword())).isEqualTo("password"); + assertThat(redisURI.getSentinelMasterId()).isEqualTo("myMaster"); + } + + @Test + void testMultipleSentinelUri() throws Exception { + String uri = "redis-sentinel://localhost,localhost2,localhost3/1#myMaster"; + + sut.setUri(URI.create(uri)); + sut.setPassword("password"); + sut.afterPropertiesSet(); + + RedisURI 
redisURI = sut.getRedisURI(); + + assertThat(redisURI.getDatabase()).isEqualTo(1); + assertThat(redisURI.getSentinels()).hasSize(3); + + RedisURI sentinelUri = redisURI.getSentinels().get(0); + assertThat(sentinelUri.getHost()).isEqualTo("localhost"); + assertThat(sentinelUri.getPort()).isEqualTo(RedisURI.DEFAULT_SENTINEL_PORT); + assertThat(redisURI.getSentinelMasterId()).isEqualTo("myMaster"); + } + + @Test + void testMultipleSentinelUriWithPorts() throws Exception { + String uri = "redis-sentinel://localhost,localhost2:1234,localhost3/1#myMaster"; + + sut.setUri(URI.create(uri)); + sut.setPassword("password"); + sut.afterPropertiesSet(); + + RedisURI redisURI = sut.getRedisURI(); + + assertThat(redisURI.getDatabase()).isEqualTo(1); + assertThat(redisURI.getSentinels()).hasSize(3); + + RedisURI sentinelUri1 = redisURI.getSentinels().get(0); + assertThat(sentinelUri1.getHost()).isEqualTo("localhost"); + assertThat(sentinelUri1.getPort()).isEqualTo(RedisURI.DEFAULT_SENTINEL_PORT); + + RedisURI sentinelUri2 = redisURI.getSentinels().get(1); + assertThat(sentinelUri2.getHost()).isEqualTo("localhost2"); + assertThat(sentinelUri2.getPort()).isEqualTo(1234); + assertThat(redisURI.getSentinelMasterId()).isEqualTo("myMaster"); + } +} diff --git a/src/test/java/io/lettuce/core/RedisClientFactoryUnitTests.java b/src/test/java/io/lettuce/core/RedisClientFactoryUnitTests.java new file mode 100644 index 0000000000..9942f4a510 --- /dev/null +++ b/src/test/java/io/lettuce/core/RedisClientFactoryUnitTests.java @@ -0,0 +1,101 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core; + +import static org.assertj.core.api.Assertions.assertThatThrownBy; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.resource.ClientResources; +import io.lettuce.test.resource.FastShutdown; +import io.lettuce.test.resource.TestClientResources; +import io.lettuce.test.settings.TestSettings; + +/** + * @author Mark Paluch + */ +class RedisClientFactoryUnitTests { + + private static final String URI = "redis://" + TestSettings.host() + ":" + TestSettings.port(); + private static final RedisURI REDIS_URI = RedisURI.create(URI); + + @Test + void plain() { + FastShutdown.shutdown(RedisClient.create()); + } + + @Test + void withStringUri() { + FastShutdown.shutdown(RedisClient.create(URI)); + } + + @Test + void withStringUriNull() { + assertThatThrownBy(() -> RedisClient.create((String) null)).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void withUri() { + FastShutdown.shutdown(RedisClient.create(REDIS_URI)); + } + + @Test + void withUriNull() { + assertThatThrownBy(() -> RedisClient.create((RedisURI) null)).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void clientResources() { + FastShutdown.shutdown(RedisClient.create(TestClientResources.get())); + } + + @Test + void clientResourcesNull() { + assertThatThrownBy(() -> RedisClient.create((ClientResources) null)).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void clientResourcesWithStringUri() { + FastShutdown.shutdown(RedisClient.create(TestClientResources.get(), URI)); + } + + @Test + void clientResourcesWithStringUriNull() { + assertThatThrownBy(() -> RedisClient.create(TestClientResources.get(), (String) null)).isInstanceOf( + IllegalArgumentException.class); + } + + @Test + void clientResourcesNullWithStringUri() { + assertThatThrownBy(() -> RedisClient.create(null, URI)).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void clientResourcesWithUri() { + FastShutdown.shutdown(RedisClient.create(TestClientResources.get(), REDIS_URI)); + } + + @Test + void clientResourcesWithUriNull() { + assertThatThrownBy(() -> RedisClient.create(TestClientResources.get(), (RedisURI) null)).isInstanceOf( + IllegalArgumentException.class); + } + + @Test + void clientResourcesNullWithUri() { + assertThatThrownBy(() -> RedisClient.create(null, REDIS_URI)).isInstanceOf(IllegalArgumentException.class); + } +} diff --git a/src/test/java/io/lettuce/core/RedisClientIntegrationTests.java b/src/test/java/io/lettuce/core/RedisClientIntegrationTests.java new file mode 100644 index 0000000000..75aee2f708 --- /dev/null +++ b/src/test/java/io/lettuce/core/RedisClientIntegrationTests.java @@ -0,0 +1,217 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.lang.reflect.Field; +import java.net.SocketAddress; +import java.util.Map; +import java.util.concurrent.TimeUnit; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.resource.ClientResources; +import io.lettuce.core.resource.DefaultClientResources; +import io.lettuce.core.resource.DefaultEventLoopGroupProvider; +import io.lettuce.test.TestFutures; +import io.lettuce.test.Wait; +import io.lettuce.test.resource.FastShutdown; +import io.lettuce.test.resource.TestClientResources; +import io.lettuce.test.settings.TestSettings; +import io.netty.util.concurrent.EventExecutorGroup; + +/** + * @author Mark Paluch + */ +class RedisClientIntegrationTests extends TestSupport { + + private final ClientResources clientResources = TestClientResources.get(); + + @Test + void shouldNotifyListener() { + + final TestConnectionListener listener = new TestConnectionListener(); + + RedisClient client = RedisClient.create(clientResources, RedisURI.Builder.redis(host, port).build()); + + client.addListener(listener); + + assertThat(listener.onConnected).isNull(); + assertThat(listener.onDisconnected).isNull(); + assertThat(listener.onException).isNull(); + + StatefulRedisConnection<String, String> connection = client.connect(); + + Wait.untilTrue(() -> listener.onConnected != null).waitOrTimeout(); + assertThat(listener.onConnectedSocketAddress).isNotNull(); + + assertThat(listener.onConnected).isEqualTo(connection); + assertThat(listener.onDisconnected).isNull(); + + connection.sync().set(key, value); + connection.close(); + + Wait.untilTrue(() -> listener.onDisconnected != null).waitOrTimeout(); + + assertThat(listener.onConnected).isEqualTo(connection); + assertThat(listener.onDisconnected).isEqualTo(connection); + + FastShutdown.shutdown(client); + } + + @Test + void shouldNotNotifyListenerAfterRemoval() { + + final TestConnectionListener removedListener = new TestConnectionListener(); + final TestConnectionListener retainedListener = new TestConnectionListener(); + + RedisClient client = RedisClient.create(clientResources, RedisURI.Builder.redis(host, port).build()); + client.addListener(removedListener); + client.addListener(retainedListener); + client.removeListener(removedListener); + + // that's the sut call + client.connect().close(); + + Wait.untilTrue(() -> retainedListener.onConnected != null).waitOrTimeout(); + + assertThat(retainedListener.onConnected).isNotNull(); + + assertThat(removedListener.onConnected).isNull(); + assertThat(removedListener.onConnectedSocketAddress).isNull(); + assertThat(removedListener.onDisconnected).isNull(); + assertThat(removedListener.onException).isNull(); + + FastShutdown.shutdown(client); + } + + @Test + void reuseClientConnections() throws Exception { + + // given + DefaultClientResources clientResources = DefaultClientResources.create(); + Map<Class<? extends EventExecutorGroup>, EventExecutorGroup> eventLoopGroups = getExecutors(clientResources); + + RedisClient redisClient1 = newClient(clientResources); + RedisClient redisClient2 = newClient(clientResources); + connectAndClose(redisClient1); + connectAndClose(redisClient2); + + // when + EventExecutorGroup executor = eventLoopGroups.values().iterator().next(); + redisClient1.shutdown(0, 0, TimeUnit.MILLISECONDS); + + // then + connectAndClose(redisClient2); + + TestFutures.awaitOrTimeout(clientResources.shutdown(0, 0, TimeUnit.MILLISECONDS)); + + assertThat(eventLoopGroups).isEmpty(); +
assertThat(executor.isShuttingDown()).isTrue(); + assertThat(clientResources.eventExecutorGroup().isShuttingDown()).isTrue(); + } + + @Test + void reuseClientConnectionsShutdownTwoClients() throws Exception { + + // given + DefaultClientResources clientResources = DefaultClientResources.create(); + Map<Class<? extends EventExecutorGroup>, EventExecutorGroup> eventLoopGroups = getExecutors(clientResources); + + RedisClient redisClient1 = newClient(clientResources); + RedisClient redisClient2 = newClient(clientResources); + connectAndClose(redisClient1); + connectAndClose(redisClient2); + + // when + EventExecutorGroup executor = eventLoopGroups.values().iterator().next(); + + redisClient1.shutdown(0, 0, TimeUnit.MILLISECONDS); + assertThat(executor.isShutdown()).isFalse(); + connectAndClose(redisClient2); + redisClient2.shutdown(0, 0, TimeUnit.MILLISECONDS); + + // then + assertThat(eventLoopGroups).isEmpty(); + assertThat(executor.isShutdown()).isTrue(); + assertThat(clientResources.eventExecutorGroup().isShuttingDown()).isFalse(); + + // cleanup + TestFutures.awaitOrTimeout(clientResources.shutdown(0, 0, TimeUnit.MILLISECONDS)); + assertThat(clientResources.eventExecutorGroup().isShuttingDown()).isTrue(); + } + + @Test + void managedClientResources() throws Exception { + + // given + RedisClient redisClient1 = RedisClient.create(RedisURI.create(TestSettings.host(), TestSettings.port())); + ClientResources clientResources = redisClient1.getResources(); + Map<Class<? extends EventExecutorGroup>, EventExecutorGroup> eventLoopGroups = getExecutors(clientResources); + connectAndClose(redisClient1); + + // when + EventExecutorGroup executor = eventLoopGroups.values().iterator().next(); + + redisClient1.shutdown(0, 0, TimeUnit.MILLISECONDS); + + // then + assertThat(eventLoopGroups).isEmpty(); + assertThat(executor.isShuttingDown()).isTrue(); + assertThat(clientResources.eventExecutorGroup().isShuttingDown()).isTrue(); + } + + private void connectAndClose(RedisClient client) { + client.connect().close(); + } + + private RedisClient newClient(DefaultClientResources clientResources) { + return RedisClient.create(clientResources, RedisURI.create(TestSettings.host(), TestSettings.port())); + } + + private Map<Class<? extends EventExecutorGroup>, EventExecutorGroup> getExecutors(ClientResources clientResources) + throws Exception { + Field eventLoopGroupsField = DefaultEventLoopGroupProvider.class.getDeclaredField("eventLoopGroups"); + eventLoopGroupsField.setAccessible(true); + return (Map) eventLoopGroupsField.get(clientResources.eventLoopGroupProvider()); + } + + private class TestConnectionListener implements RedisConnectionStateListener { + + volatile SocketAddress onConnectedSocketAddress; + volatile RedisChannelHandler<?, ?> onConnected; + volatile RedisChannelHandler<?, ?> onDisconnected; + volatile RedisChannelHandler<?, ?> onException; + + @Override + public void onRedisConnected(RedisChannelHandler<?, ?> connection, SocketAddress socketAddress) { + onConnected = connection; + onConnectedSocketAddress = socketAddress; + } + + @Override + public void onRedisDisconnected(RedisChannelHandler<?, ?> connection) { + onDisconnected = connection; + } + + @Override + public void onRedisExceptionCaught(RedisChannelHandler<?, ?> connection, Throwable cause) { + onException = connection; + } + } +} diff --git a/src/test/java/io/lettuce/core/RedisClientListenerIntegrationTests.java b/src/test/java/io/lettuce/core/RedisClientListenerIntegrationTests.java new file mode 100644 index 0000000000..4e197fc2cc --- /dev/null +++ b/src/test/java/io/lettuce/core/RedisClientListenerIntegrationTests.java @@ -0,0 +1,31 @@ +/* + * Copyright 2016-2020 the
original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.test.LettuceExtension; + +/** + * Integration tests for {@link RedisConnectionStateListener} via {@link RedisClient}. + * + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +class RedisClientListenerIntegrationTests extends TestSupport { + + +} diff --git a/src/test/java/io/lettuce/core/RedisClientUnitTests.java b/src/test/java/io/lettuce/core/RedisClientUnitTests.java new file mode 100644 index 0000000000..e121dc4b14 --- /dev/null +++ b/src/test/java/io/lettuce/core/RedisClientUnitTests.java @@ -0,0 +1,99 @@ +/* + * Copyright 2019-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.mockito.ArgumentMatchers.any; +import static org.mockito.ArgumentMatchers.anyLong; +import static org.mockito.Mockito.never; +import static org.mockito.Mockito.verify; +import static org.mockito.Mockito.when; + +import java.io.Closeable; +import java.util.Set; +import java.util.concurrent.CompletableFuture; + +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; +import org.mockito.Mock; +import org.mockito.junit.jupiter.MockitoExtension; +import org.springframework.test.util.ReflectionTestUtils; + +import io.lettuce.core.internal.AsyncCloseable; +import io.lettuce.core.resource.ClientResources; +import io.netty.util.concurrent.ImmediateEventExecutor; + +/** + * Unit tests for {@link RedisClient}. 
+ * + * @author Mark Paluch + */ +@SuppressWarnings("unchecked") +@ExtendWith(MockitoExtension.class) +class RedisClientUnitTests { + + @Mock + ClientResources clientResources; + + @Mock(extraInterfaces = Closeable.class) + AsyncCloseable asyncCloseable; + + @Test + void shutdownShouldDeferResourcesShutdown() { + + when(clientResources.eventExecutorGroup()).thenReturn(ImmediateEventExecutor.INSTANCE); + + CompletableFuture completableFuture = new CompletableFuture<>(); + when(asyncCloseable.closeAsync()).thenReturn(completableFuture); + + RedisClient redisClient = RedisClient.create(clientResources, "redis://foo"); + ReflectionTestUtils.setField(redisClient, "sharedResources", false); + + Set closeableResources = (Set) ReflectionTestUtils.getField(redisClient, "closeableResources"); + closeableResources.add(asyncCloseable); + + CompletableFuture future = redisClient.shutdownAsync(); + + verify(asyncCloseable).closeAsync(); + verify(clientResources, never()).shutdown(anyLong(), anyLong(), any()); + assertThat(future).isNotDone(); + } + + @Test + void shutdownShutsDownResourcesAfterChannels() { + + when(clientResources.eventExecutorGroup()).thenReturn(ImmediateEventExecutor.INSTANCE); + + CompletableFuture completableFuture = new CompletableFuture<>(); + when(asyncCloseable.closeAsync()).thenReturn(completableFuture); + + RedisClient redisClient = RedisClient.create(clientResources, "redis://foo"); + ReflectionTestUtils.setField(redisClient, "sharedResources", false); + + Set closeableResources = (Set) ReflectionTestUtils.getField(redisClient, "closeableResources"); + closeableResources.add(asyncCloseable); + + CompletableFuture future = redisClient.shutdownAsync(); + + verify(asyncCloseable).closeAsync(); + verify(clientResources, never()).shutdown(anyLong(), anyLong(), any()); + + completableFuture.complete(null); + + verify(clientResources).shutdown(anyLong(), anyLong(), any()); + assertThat(future).isDone(); + } +} diff --git a/src/test/java/io/lettuce/core/RedisURIBuilderUnitTests.java b/src/test/java/io/lettuce/core/RedisURIBuilderUnitTests.java new file mode 100644 index 0000000000..d6695434af --- /dev/null +++ b/src/test/java/io/lettuce/core/RedisURIBuilderUnitTests.java @@ -0,0 +1,278 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; + +import java.io.File; +import java.io.IOException; +import java.time.Duration; + +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.condition.DisabledOnOs; +import org.junit.jupiter.api.condition.EnabledOnOs; +import org.junit.jupiter.api.condition.OS; + +/** + * Unit tests for {@link RedisURI.Builder}. 
+ * + * @author Mark Paluch + * @author Guy Korland + */ +class RedisURIBuilderUnitTests { + + @Test + void sentinel() { + RedisURI result = RedisURI.Builder.sentinel("localhost").withTimeout(Duration.ofHours(2)).build(); + assertThat(result.getSentinels()).hasSize(1); + assertThat(result.getTimeout()).isEqualTo(Duration.ofHours(2)); + } + + @Test + void sentinelWithHostShouldFail() { + assertThatThrownBy(() -> RedisURI.Builder.sentinel("localhost").withHost("localhost")).isInstanceOf( + IllegalStateException.class); + } + + @Test + void sentinelWithPort() { + RedisURI result = RedisURI.Builder.sentinel("localhost", 1).withTimeout(Duration.ofHours(2)).build(); + assertThat(result.getSentinels()).hasSize(1); + assertThat(result.getTimeout()).isEqualTo(Duration.ofHours(2)); + } + + @Test + void shouldFailIfBuilderIsEmpty() { + assertThatThrownBy(() -> RedisURI.builder().build()).isInstanceOf(IllegalStateException.class); + } + + @Test + void redisWithHostAndPort() { + RedisURI result = RedisURI.builder().withHost("localhost").withPort(1234).build(); + + assertThat(result.getSentinels()).isEmpty(); + assertThat(result.getHost()).isEqualTo("localhost"); + assertThat(result.getPort()).isEqualTo(1234); + } + + @Test + void redisWithPort() { + RedisURI result = RedisURI.Builder.redis("localhost").withPort(1234).build(); + + assertThat(result.getSentinels()).isEmpty(); + assertThat(result.getHost()).isEqualTo("localhost"); + assertThat(result.getPort()).isEqualTo(1234); + } + + @Test + void redisWithClientName() { + RedisURI result = RedisURI.Builder.redis("localhost").withClientName("hello").build(); + + assertThat(result.getHost()).isEqualTo("localhost"); + assertThat(result.getClientName()).isEqualTo("hello"); + } + + @Test + void redisHostAndPortWithInvalidPort() { + assertThatThrownBy(() -> RedisURI.Builder.redis("localhost", -1)).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void redisWithInvalidPort() { + assertThatThrownBy(() -> RedisURI.Builder.redis("localhost").withPort(65536)).isInstanceOf( + IllegalArgumentException.class); + } + + @Test + void redisFromUrl() { + RedisURI result = RedisURI.create(RedisURI.URI_SCHEME_REDIS + "://password@localhost/21"); + + assertThat(result.getSentinels()).isEmpty(); + assertThat(result.getHost()).isEqualTo("localhost"); + assertThat(result.getPort()).isEqualTo(RedisURI.DEFAULT_REDIS_PORT); + assertThat(result.getPassword()).isEqualTo("password".toCharArray()); + assertThat(result.getDatabase()).isEqualTo(21); + assertThat(result.isSsl()).isFalse(); + } + + @Test + void redisFromUrlNoPassword() { + RedisURI redisURI = RedisURI.create("redis://localhost:1234/5"); + assertThat(redisURI.getPassword()).isNull(); + assertThat(redisURI.getUsername()).isNull(); + + redisURI = RedisURI.create("redis://h:@localhost.com:14589"); + assertThat(redisURI.getPassword()).isNull(); + assertThat(redisURI.getUsername()).isNull(); + } + + @Test + void redisFromUrlPassword() { + RedisURI redisURI = RedisURI.create("redis://h:password@localhost.com:14589"); + assertThat(redisURI.getPassword()).isEqualTo("password".toCharArray()); + assertThat(redisURI.getUsername()).isEqualTo("h"); + } + + @Test + void redisWithSSL() { + RedisURI result = RedisURI.Builder.redis("localhost").withSsl(true).withStartTls(true).build(); + + assertThat(result.getSentinels()).isEmpty(); + assertThat(result.getHost()).isEqualTo("localhost"); + assertThat(result.isSsl()).isTrue(); + assertThat(result.isStartTls()).isTrue(); + } + + @Test + void redisSslFromUrl() { + RedisURI result 
= RedisURI.create(RedisURI.URI_SCHEME_REDIS_SECURE + "://:password@localhost/1"); + + assertThat(result.getSentinels()).isEmpty(); + assertThat(result.getHost()).isEqualTo("localhost"); + assertThat(result.getPort()).isEqualTo(RedisURI.DEFAULT_REDIS_PORT); + assertThat(result.getPassword()).isEqualTo("password".toCharArray()); + assertThat(result.getUsername()).isNull(); + assertThat(result.isSsl()).isTrue(); + } + + @Test + void redisSentinelFromUrl() { + RedisURI result = RedisURI.create(RedisURI.URI_SCHEME_REDIS_SENTINEL + "://password@localhost/1#master"); + + assertThat(result.getSentinels()).hasSize(1); + assertThat(result.getHost()).isNull(); + assertThat(result.getPort()).isEqualTo(RedisURI.DEFAULT_REDIS_PORT); + assertThat(result.getPassword()).isEqualTo("password".toCharArray()); + assertThat(result.getSentinelMasterId()).isEqualTo("master"); + assertThat(result.toString()).contains("master"); + + result = RedisURI.create(RedisURI.URI_SCHEME_REDIS_SENTINEL + "://password@host1:1,host2:3423,host3/1#master"); + + assertThat(result.getSentinels()).hasSize(3); + assertThat(result.getHost()).isNull(); + assertThat(result.getPort()).isEqualTo(RedisURI.DEFAULT_REDIS_PORT); + assertThat(result.getPassword()).isEqualTo("password".toCharArray()); + assertThat(result.getSentinelMasterId()).isEqualTo("master"); + + RedisURI sentinel1 = result.getSentinels().get(0); + assertThat(sentinel1.getPort()).isEqualTo(1); + assertThat(sentinel1.getHost()).isEqualTo("host1"); + + RedisURI sentinel2 = result.getSentinels().get(1); + assertThat(sentinel2.getPort()).isEqualTo(3423); + assertThat(sentinel2.getHost()).isEqualTo("host2"); + + RedisURI sentinel3 = result.getSentinels().get(2); + assertThat(sentinel3.getPort()).isEqualTo(RedisURI.DEFAULT_SENTINEL_PORT); + assertThat(sentinel3.getHost()).isEqualTo("host3"); + } + + @Test + void withAuthenticatedSentinel() { + + RedisURI result = RedisURI.Builder.sentinel("host", 1234, "master", "foo").build(); + + RedisURI sentinel = result.getSentinels().get(0); + assertThat(new String(sentinel.getPassword())).isEqualTo("foo"); + } + + @Test + void withTlsSentinel() { + + RedisURI result = RedisURI.Builder.sentinel("host", 1234, "master", "foo").withSsl(true).withStartTls(true) + .withVerifyPeer(false).build(); + + RedisURI sentinel = result.getSentinels().get(0); + assertThat(new String(sentinel.getPassword())).isEqualTo("foo"); + assertThat(sentinel.isSsl()).isTrue(); + assertThat(sentinel.isStartTls()).isTrue(); + assertThat(sentinel.isVerifyPeer()).isFalse(); + } + + @Test + void withAuthenticatedSentinelUri() { + + RedisURI sentinel = new RedisURI("host", 1234, Duration.ZERO); + sentinel.setPassword("bar"); + RedisURI result = RedisURI.Builder.sentinel("host", 1234, "master").withSentinel(sentinel).build(); + + assertThat(result.getSentinels().get(0).getPassword()).isNull(); + assertThat(new String(result.getSentinels().get(1).getPassword())).isEqualTo("bar"); + } + + @Test + void withAuthenticatedSentinelWithSentinel() { + + RedisURI result = RedisURI.Builder.sentinel("host", 1234, "master", "foo").withSentinel("bar").build(); + + assertThat(new String(result.getSentinels().get(0).getPassword())).isEqualTo("foo"); + assertThat(new String(result.getSentinels().get(1).getPassword())).isEqualTo("foo"); + + result = RedisURI.Builder.sentinel("host", 1234, "master", "foo").withSentinel("bar", 1234, "baz").build(); + + assertThat(new String(result.getSentinels().get(0).getPassword())).isEqualTo("foo"); + assertThat(new 
String(result.getSentinels().get(1).getPassword())).isEqualTo("baz"); + } + + @Test + void redisSentinelWithInvalidPort() { + assertThatThrownBy(() -> RedisURI.Builder.sentinel("a", 65536)).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void redisSentinelWithMasterIdAndInvalidPort() { + assertThatThrownBy(() -> RedisURI.Builder.sentinel("a", 65536, "")).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void redisSentinelWithNullMasterId() { + assertThatThrownBy(() -> RedisURI.Builder.sentinel("a", 1, null)).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void invalidScheme() { + assertThatThrownBy(() -> RedisURI.create("http://www.web.de")).isInstanceOf(IllegalArgumentException.class); + } + + @Test + @DisabledOnOs(OS.WINDOWS) + void redisSocket() throws IOException { + File file = new File("work/socket-6479").getCanonicalFile(); + RedisURI result = RedisURI.create(RedisURI.URI_SCHEME_REDIS_SOCKET + "://" + file.getCanonicalPath()); + + assertThat(result.getSocket()).isEqualTo(file.getCanonicalPath()); + assertThat(result.getSentinels()).isEmpty(); + assertThat(result.getPassword()).isNull(); + assertThat(result.getHost()).isNull(); + assertThat(result.getPort()).isEqualTo(RedisURI.DEFAULT_REDIS_PORT); + assertThat(result.isSsl()).isFalse(); + } + + @Test + @DisabledOnOs(OS.WINDOWS) + void redisSocketWithPassword() throws IOException { + File file = new File("work/socket-6479").getCanonicalFile(); + RedisURI result = RedisURI.create(RedisURI.URI_SCHEME_REDIS_SOCKET + "://password@" + file.getCanonicalPath()); + + assertThat(result.getSocket()).isEqualTo(file.getCanonicalPath()); + assertThat(result.getSentinels()).isEmpty(); + assertThat(result.getPassword()).isEqualTo("password".toCharArray()); + assertThat(result.getHost()).isNull(); + assertThat(result.getPort()).isEqualTo(RedisURI.DEFAULT_REDIS_PORT); + assertThat(result.isSsl()).isFalse(); + } +} diff --git a/src/test/java/io/lettuce/core/RedisURIUnitTests.java b/src/test/java/io/lettuce/core/RedisURIUnitTests.java new file mode 100644 index 0000000000..4227492334 --- /dev/null +++ b/src/test/java/io/lettuce/core/RedisURIUnitTests.java @@ -0,0 +1,247 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; + +import java.time.Duration; +import java.util.LinkedHashMap; +import java.util.Map; +import java.util.Set; +import java.util.concurrent.TimeUnit; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.internal.LettuceSets; + +/** + * Unit tests for {@link RedisURI}. + * + * @author Mark Paluch + */ +class RedisURIUnitTests { + + @Test + void equalsTest() { + + RedisURI redisURI1 = RedisURI.create("redis://auth@localhost:1234/5"); + RedisURI redisURI2 = RedisURI.create("redis://auth@localhost:1234/5"); + RedisURI redisURI3 = RedisURI.create("redis://auth@localhost:1231/5"); + + assertThat(redisURI1).isEqualTo(redisURI2); + assertThat(redisURI1.hashCode()).isEqualTo(redisURI2.hashCode()); + assertThat(redisURI1).hasToString("redis://auth@localhost:1234/5"); + + assertThat(redisURI3).isNotEqualTo(redisURI2); + assertThat(redisURI3.hashCode()).isNotEqualTo(redisURI2.hashCode()); + } + + @Test + void setUsage() { + + RedisURI redisURI1 = RedisURI.create("redis://auth@localhost:1234/5"); + RedisURI redisURI2 = RedisURI.create("redis://auth@localhost:1234/5"); + RedisURI redisURI3 = RedisURI.create("redis://auth@localhost:1234/6"); + + Set<RedisURI> set = LettuceSets.unmodifiableSet(redisURI1, redisURI2, redisURI3); + + assertThat(set).hasSize(2); + } + + @Test + void mapUsage() { + + RedisURI redisURI1 = RedisURI.create("redis://auth@localhost:1234/5"); + RedisURI redisURI2 = RedisURI.create("redis://auth@localhost:1234/5"); + + Map<RedisURI, String> map = new LinkedHashMap<>(); + map.put(redisURI1, "something"); + + assertThat(map.get(redisURI2)).isEqualTo("something"); + } + + @Test + void simpleUriTest() { + RedisURI redisURI = RedisURI.create("redis://localhost:6379"); + assertThat(redisURI).hasToString("redis://localhost"); + } + + @Test + void shouldThrowIllegalArgumentExceptionOnMalformedUri() { + assertThatThrownBy(() -> RedisURI.create("localhost")).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void sslUriTest() { + RedisURI redisURI = RedisURI.create("redis+ssl://localhost:6379"); + assertThat(redisURI).hasToString("rediss://localhost:6379"); + } + + @Test + void tlsUriTest() { + RedisURI redisURI = RedisURI.create("redis+tls://localhost:6379"); + assertThat(redisURI).hasToString("redis+tls://localhost:6379"); + } + + @Test + void multipleClusterNodesTest() { + RedisURI redisURI = RedisURI.create("redis+ssl://password@host1:6379,host2:6380"); + assertThat(redisURI).hasToString("rediss://password@host1:6379,host2:6380"); + } + + @Test + void sentinelEqualsTest() { + + RedisURI redisURI1 = RedisURI.create("redis-sentinel://auth@h1:222,h2,h3:1234/5?sentinelMasterId=masterId"); + RedisURI redisURI2 = RedisURI.create("redis-sentinel://auth@h1:222,h2,h3:1234/5#masterId"); + RedisURI redisURI3 = RedisURI.create("redis-sentinel://auth@h1,h2,h3:1234/5#OtherMasterId"); + + assertThat(redisURI1).isEqualTo(redisURI2); + assertThat(redisURI1.hashCode()).isEqualTo(redisURI2.hashCode()); + assertThat(redisURI1.toString()).contains("h1"); + + assertThat(redisURI3).isNotEqualTo(redisURI2); + assertThat(redisURI3.hashCode()).isNotEqualTo(redisURI2.hashCode()); + } + + @Test + void sentinelUriTest() { + + RedisURI redisURI = RedisURI.create("redis-sentinel://auth@h1:222,h2,h3:1234/5?sentinelMasterId=masterId"); + assertThat(redisURI.getSentinelMasterId()).isEqualTo("masterId"); +
assertThat(redisURI.getSentinels().get(0).getPort()).isEqualTo(222); + assertThat(redisURI.getSentinels().get(1).getPort()).isEqualTo(RedisURI.DEFAULT_SENTINEL_PORT); + assertThat(redisURI.getSentinels().get(2).getPort()).isEqualTo(1234); + assertThat(redisURI.getDatabase()).isEqualTo(5); + + assertThat(redisURI).hasToString("redis-sentinel://auth@h1:222,h2,h3:1234/5?sentinelMasterId=masterId"); + } + + @Test + void sentinelSecureUriTest() { + + RedisURI redisURI = RedisURI.create("rediss-sentinel://auth@h1:222,h2,h3:1234/5?sentinelMasterId=masterId"); + assertThat(redisURI.isSsl()).isTrue(); + + assertThat(redisURI).hasToString("rediss-sentinel://auth@h1:222,h2,h3:1234/5?sentinelMasterId=masterId"); + } + + @Test + void socketEqualsTest() { + + RedisURI redisURI1 = RedisURI.create("redis-socket:///var/tmp/socket"); + RedisURI redisURI2 = RedisURI.create("redis-socket:///var/tmp/socket"); + RedisURI redisURI3 = RedisURI.create("redis-socket:///var/tmp/other-socket?database=2"); + + assertThat(redisURI1).isEqualTo(redisURI2); + assertThat(redisURI1.hashCode()).isEqualTo(redisURI2.hashCode()); + assertThat(redisURI1.toString()).contains("/var/tmp/socket"); + + assertThat(redisURI3).isNotEqualTo(redisURI2); + assertThat(redisURI3.hashCode()).isNotEqualTo(redisURI2.hashCode()); + assertThat(redisURI3).hasToString("redis-socket:///var/tmp/other-socket?database=2"); + } + + @Test + void socketUriTest() { + + RedisURI redisURI = RedisURI.create("redis-socket:///var/tmp/other-socket?db=2"); + + assertThat(redisURI.getDatabase()).isEqualTo(2); + assertThat(redisURI.getSocket()).isEqualTo("/var/tmp/other-socket"); + assertThat(redisURI).hasToString("redis-socket:///var/tmp/other-socket?database=2"); + } + + @Test + void socketAltUriTest() { + + RedisURI redisURI = RedisURI.create("redis+socket:///var/tmp/other-socket?db=2"); + + assertThat(redisURI.getDatabase()).isEqualTo(2); + assertThat(redisURI.getSocket()).isEqualTo("/var/tmp/other-socket"); + assertThat(redisURI).hasToString("redis-socket:///var/tmp/other-socket?database=2"); + } + + @Test + void timeoutParsingTest() { + + checkUriTimeout("redis://auth@localhost:1234/5?timeout=5000", 5000, TimeUnit.MILLISECONDS); + checkUriTimeout("redis://auth@localhost:1234/5?timeout=5000ms", 5000, TimeUnit.MILLISECONDS); + checkUriTimeout("redis://auth@localhost:1234/5?timeout=5s", 5, TimeUnit.SECONDS); + checkUriTimeout("redis://auth@localhost:1234/5?timeout=100us", 100, TimeUnit.MICROSECONDS); + checkUriTimeout("redis://auth@localhost:1234/5?TIMEOUT=1000000NS", 1000000, TimeUnit.NANOSECONDS); + checkUriTimeout("redis://auth@localhost:1234/5?timeout=60m", 60, TimeUnit.MINUTES); + checkUriTimeout("redis://auth@localhost:1234/5?timeout=24h", 24, TimeUnit.HOURS); + checkUriTimeout("redis://auth@localhost:1234/5?timeout=1d", 1, TimeUnit.DAYS); + + checkUriTimeout("redis://auth@localhost:1234/5?timeout=-1", 0, TimeUnit.MILLISECONDS); + + RedisURI defaultUri = new RedisURI(); + checkUriTimeout("redis://auth@localhost:1234/5?timeout=junk", defaultUri.getTimeout().getSeconds(), TimeUnit.SECONDS); + + RedisURI redisURI = RedisURI.create("redis://auth@localhost:1234/5?timeout=5000ms"); + assertThat(redisURI).hasToString("redis://auth@localhost:1234/5?timeout=5s"); + } + + @Test + void queryStringDecodingTest() { + String timeout = "%74%69%6D%65%6F%75%74"; + String eq = "%3d"; + String s = "%73"; + checkUriTimeout("redis://auth@localhost:1234/5?" 
+ timeout + eq + "5" + s, 5, TimeUnit.SECONDS); + } + + @Test + void timeoutParsingWithJunkParamTest() { + RedisURI redisURI1 = RedisURI.create("redis-sentinel://auth@localhost:1234/5?timeout=5s;junkparam=#master-instance"); + assertThat(redisURI1.getTimeout()).isEqualTo(Duration.ofSeconds(5)); + assertThat(redisURI1.getSentinelMasterId()).isEqualTo("master-instance"); + } + + private RedisURI checkUriTimeout(String uri, long expectedTimeout, TimeUnit expectedUnit) { + RedisURI redisURI = RedisURI.create(uri); + assertThat(expectedUnit.convert(redisURI.getTimeout().toNanos(), TimeUnit.NANOSECONDS)).isEqualTo(expectedTimeout); + return redisURI; + } + + @Test + void databaseParsingTest() { + RedisURI redisURI = RedisURI.create("redis://auth@localhost:1234/?database=21"); + assertThat(redisURI.getDatabase()).isEqualTo(21); + + assertThat(redisURI).hasToString("redis://auth@localhost:1234/21"); + } + + @Test + void clientNameParsingTest() { + RedisURI redisURI = RedisURI.create("redis://auth@localhost:1234/?clientName=hello"); + assertThat(redisURI.getClientName()).isEqualTo("hello"); + + assertThat(redisURI).hasToString("redis://auth@localhost:1234?clientName=hello"); + } + + @Test + void parsingWithInvalidValuesTest() { + RedisURI redisURI = RedisURI + .create("redis://@host:1234/?database=AAA&database=&timeout=&timeout=XYZ&sentinelMasterId="); + assertThat(redisURI.getDatabase()).isEqualTo(0); + assertThat(redisURI.getSentinelMasterId()).isNull(); + + assertThat(redisURI).hasToString("redis://host:1234"); + } +} diff --git a/src/test/java/io/lettuce/core/ScanArgsUnitTests.java b/src/test/java/io/lettuce/core/ScanArgsUnitTests.java new file mode 100644 index 0000000000..67e2697ce8 --- /dev/null +++ b/src/test/java/io/lettuce/core/ScanArgsUnitTests.java @@ -0,0 +1,40 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import static org.assertj.core.api.Assertions.assertThat; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.protocol.CommandArgs; + +/** + * @author Mark Paluch + */ +class ScanArgsUnitTests { + + @Test + void shouldEncodeMatchUsingUtf8() { + + ScanArgs args = ScanArgs.Builder.matches("ö"); + + CommandArgs commandArgs = new CommandArgs<>(StringCodec.UTF8); + args.build(commandArgs); + + assertThat(commandArgs.toCommandString()).isEqualTo("MATCH w7Y="); + } +} diff --git a/src/test/java/io/lettuce/core/ScanCursorUnitTests.java b/src/test/java/io/lettuce/core/ScanCursorUnitTests.java new file mode 100644 index 0000000000..0c6eab728b --- /dev/null +++ b/src/test/java/io/lettuce/core/ScanCursorUnitTests.java @@ -0,0 +1,44 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; + +import org.junit.jupiter.api.Test; + +/** + * @author Mark Paluch + */ +class ScanCursorUnitTests { + + @Test + void testFactory() { + ScanCursor scanCursor = ScanCursor.of("dummy"); + assertThat(scanCursor.getCursor()).isEqualTo("dummy"); + assertThat(scanCursor.isFinished()).isFalse(); + } + + @Test + void setCursorOnImmutableInstance() { + assertThatThrownBy(() -> ScanCursor.INITIAL.setCursor("")).isInstanceOf(UnsupportedOperationException.class); + } + + @Test + void setFinishedOnImmutableInstance() { + assertThatThrownBy(() -> ScanCursor.INITIAL.setFinished(false)).isInstanceOf(UnsupportedOperationException.class); + } +} diff --git a/src/test/java/io/lettuce/core/ScanIteratorIntegrationTests.java b/src/test/java/io/lettuce/core/ScanIteratorIntegrationTests.java new file mode 100644 index 0000000000..47ce2e61df --- /dev/null +++ b/src/test/java/io/lettuce/core/ScanIteratorIntegrationTests.java @@ -0,0 +1,257 @@ +/* + * Copyright 2016-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core; + +import static org.assertj.core.api.AssertionsForClassTypes.fail; +import static org.assertj.core.api.AssertionsForInterfaceTypes.assertThat; + +import java.util.ArrayList; +import java.util.List; +import java.util.NoSuchElementException; +import java.util.stream.Collectors; + +import javax.inject.Inject; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.TestInstance; +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.test.KeysAndValues; +import io.lettuce.test.LettuceExtension; + +/** + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +@TestInstance(TestInstance.Lifecycle.PER_CLASS) +class ScanIteratorIntegrationTests extends TestSupport { + + private final RedisCommands<String, String> redis; + + @Inject + ScanIteratorIntegrationTests(StatefulRedisConnection<String, String> connection) { + this.redis = connection.sync(); + } + + @BeforeEach + void setUp() { + this.redis.flushall(); + } + + @Test + void scanShouldThrowNoSuchElementExceptionOnEmpty() { + + redis.mset(KeysAndValues.MAP); + + ScanIterator<String> scan = ScanIterator.scan(redis, ScanArgs.Builder.limit(50).match("key-foo")); + + assertThat(scan.hasNext()).isFalse(); + try { + scan.next(); + fail("Missing NoSuchElementException"); + } catch (NoSuchElementException e) { + assertThat(e).isInstanceOf(NoSuchElementException.class); + } + } + + @Test + void keysSinglePass() { + + redis.mset(KeysAndValues.MAP); + + ScanIterator<String> scan = ScanIterator.scan(redis, ScanArgs.Builder.limit(50).match("key-11*")); + + assertThat(scan.hasNext()).isTrue(); + assertThat(scan.hasNext()).isTrue(); + + for (int i = 0; i < 11; i++) { + assertThat(scan.hasNext()).isTrue(); + assertThat(scan.next()).isNotNull(); + } + + assertThat(scan.hasNext()).isFalse(); + } + + @Test + void keysMultiPass() { + + redis.mset(KeysAndValues.MAP); + + ScanIterator<String> scan = ScanIterator.scan(redis); + + List<String> keys = scan.stream().collect(Collectors.toList()); + + assertThat(keys).containsAll(KeysAndValues.KEYS); + } + + @Test + void hscanShouldThrowNoSuchElementExceptionOnEmpty() { + + redis.mset(KeysAndValues.MAP); + + ScanIterator<KeyValue<String, String>> scan = ScanIterator.hscan(redis, "none", + ScanArgs.Builder.limit(50).match("key-foo")); + + assertThat(scan.hasNext()).isFalse(); + try { + scan.next(); + fail("Missing NoSuchElementException"); + } catch (NoSuchElementException e) { + assertThat(e).isInstanceOf(NoSuchElementException.class); + } + } + + @Test + void hashSinglePass() { + + redis.hmset(key, KeysAndValues.MAP); + + ScanIterator<KeyValue<String, String>> scan = ScanIterator.hscan(redis, key, + ScanArgs.Builder.limit(50).match("key-11*")); + + assertThat(scan.hasNext()).isTrue(); + assertThat(scan.hasNext()).isTrue(); + + for (int i = 0; i < 11; i++) { + assertThat(scan.hasNext()).isTrue(); + assertThat(scan.next()).isNotNull(); + } + + assertThat(scan.hasNext()).isFalse(); + } + + @Test + void hashMultiPass() { + + redis.hmset(key, KeysAndValues.MAP); + + ScanIterator<KeyValue<String, String>> scan = ScanIterator.hscan(redis, key); + + List<KeyValue<String, String>> keys = scan.stream().collect(Collectors.toList()); + + assertThat(keys).containsAll( + KeysAndValues.KEYS.stream().map(s -> KeyValue.fromNullable(s, KeysAndValues.MAP.get(s))).collect(Collectors.toList())); + } + + @Test + void sscanShouldThrowNoSuchElementExceptionOnEmpty() { + + redis.sadd(key, KeysAndValues.VALUES.toArray(new String[0])); + + ScanIterator<String> scan = ScanIterator.sscan(redis, "none", +
ScanArgs.Builder.limit(50).match("key-foo")); + + assertThat(scan.hasNext()).isFalse(); + try { + scan.next(); + fail("Missing NoSuchElementException"); + } catch (NoSuchElementException e) { + assertThat(e).isInstanceOf(NoSuchElementException.class); + } + } + + @Test + void setSinglePass() { + + redis.sadd(key, KeysAndValues.KEYS.toArray(new String[0])); + + ScanIterator<String> scan = ScanIterator.sscan(redis, key, + ScanArgs.Builder.limit(50).match("key-11*")); + + assertThat(scan.hasNext()).isTrue(); + assertThat(scan.hasNext()).isTrue(); + + for (int i = 0; i < 11; i++) { + assertThat(scan.hasNext()).isTrue(); + assertThat(scan.next()).isNotNull(); + } + + assertThat(scan.hasNext()).isFalse(); + } + + @Test + void setMultiPass() { + + redis.sadd(key, KeysAndValues.KEYS.toArray(new String[0])); + + ScanIterator<String> scan = ScanIterator.sscan(redis, key); + + List<String> values = scan.stream().collect(Collectors.toList()); + + assertThat(values).containsAll(KeysAndValues.KEYS); + } + + @Test + void zscanShouldThrowNoSuchElementExceptionOnEmpty() { + + for (int i = 0; i < KeysAndValues.COUNT; i++) { + redis.zadd(key, ScoredValue.just(i, KeysAndValues.KEYS.get(i))); + } + + + ScanIterator<ScoredValue<String>> scan = ScanIterator.zscan(redis, "none", + ScanArgs.Builder.limit(50).match("key-foo")); + + assertThat(scan.hasNext()).isFalse(); + try { + scan.next(); + fail("Missing NoSuchElementException"); + } catch (NoSuchElementException e) { + assertThat(e).isInstanceOf(NoSuchElementException.class); + } + } + + @Test + void zsetSinglePass() { + + for (int i = 0; i < KeysAndValues.COUNT; i++) { + redis.zadd(key, ScoredValue.just(i, KeysAndValues.KEYS.get(i))); + } + + ScanIterator<ScoredValue<String>> scan = ScanIterator.zscan(redis, key, + ScanArgs.Builder.limit(50).match("key-11*")); + + assertThat(scan.hasNext()).isTrue(); + assertThat(scan.hasNext()).isTrue(); + + for (int i = 0; i < 11; i++) { + assertThat(scan.hasNext()).isTrue(); + assertThat(scan.next()).isNotNull(); + } + + assertThat(scan.hasNext()).isFalse(); + } + + @Test + void zsetMultiPass() { + + List<ScoredValue<String>> expected = new ArrayList<>(); + for (int i = 0; i < KeysAndValues.COUNT; i++) { + ScoredValue<String> scoredValue = ScoredValue.just(i, KeysAndValues.KEYS.get(i)); + expected.add(scoredValue); + redis.zadd(key, scoredValue); + } + + ScanIterator<ScoredValue<String>> scan = ScanIterator.zscan(redis, key); + + List<ScoredValue<String>> values = scan.stream().collect(Collectors.toList()); + + assertThat(values).containsAll(expected); + } +} diff --git a/src/test/java/io/lettuce/core/ScanStreamIntegrationTests.java b/src/test/java/io/lettuce/core/ScanStreamIntegrationTests.java new file mode 100644 index 0000000000..c115565812 --- /dev/null +++ b/src/test/java/io/lettuce/core/ScanStreamIntegrationTests.java @@ -0,0 +1,138 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ */ +package io.lettuce.core; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.util.List; +import java.util.stream.IntStream; + +import javax.inject.Inject; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.TestInstance; +import org.junit.jupiter.api.extension.ExtendWith; + +import reactor.core.publisher.Flux; +import reactor.test.StepVerifier; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.reactive.RedisReactiveCommands; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.test.LettuceExtension; + +/** + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +@TestInstance(TestInstance.Lifecycle.PER_CLASS) +class ScanStreamIntegrationTests extends TestSupport { + + private final StatefulRedisConnection connection; + private final RedisCommands redis; + + @Inject + ScanStreamIntegrationTests(StatefulRedisConnection connection) { + this.connection = connection; + this.redis = connection.sync(); + } + + @BeforeEach + void setUp() { + this.redis.flushall(); + } + + @Test + void shouldScanIteratively() { + + for (int i = 0; i < 1000; i++) { + redis.set("key-" + i, value); + } + ScanIterator scan = ScanIterator.scan(redis); + List list = Flux.fromIterable(() -> scan).collectList().block(); + + RedisReactiveCommands reactive = redis.getStatefulConnection().reactive(); + + StepVerifier.create(ScanStream.scan(reactive, ScanArgs.Builder.limit(200)).take(250)).expectNextCount(250) + .verifyComplete(); + StepVerifier.create(ScanStream.scan(reactive)).expectNextSequence(list).verifyComplete(); + } + + @Test + void shouldHscanIteratively() { + + for (int i = 0; i < 1000; i++) { + redis.hset(key, "field-" + i, "value-" + i); + } + + RedisReactiveCommands reactive = redis.getStatefulConnection().reactive(); + + StepVerifier.create(ScanStream.hscan(reactive, key, ScanArgs.Builder.limit(200)).take(250)).expectNextCount(250) + .verifyComplete(); + StepVerifier.create(ScanStream.hscan(reactive, key)).expectNextCount(1000).verifyComplete(); + } + + @Test + void shouldSscanIteratively() { + + for (int i = 0; i < 1000; i++) { + redis.sadd(key, "value-" + i); + } + + RedisReactiveCommands reactive = redis.getStatefulConnection().reactive(); + + StepVerifier.create(ScanStream.sscan(reactive, key, ScanArgs.Builder.limit(200)), 0).thenRequest(250) + .expectNextCount(250).thenCancel().verify(); + StepVerifier.create(ScanStream.sscan(reactive, key).count()).expectNext(1000L).verifyComplete(); + } + + @Test + void shouldZscanIteratively() { + + for (int i = 0; i < 1000; i++) { + redis.zadd(key, (double) i, "value-" + i); + } + + RedisReactiveCommands reactive = redis.getStatefulConnection().reactive(); + + StepVerifier.create(ScanStream.zscan(reactive, key, ScanArgs.Builder.limit(200)).take(250)).expectNextCount(250) + .verifyComplete(); + StepVerifier.create(ScanStream.zscan(reactive, key)).expectNextCount(1000).verifyComplete(); + } + + @Test + void shouldCorrectlyEmitItemsWithConcurrentPoll() { + + RedisReactiveCommands commands = connection.reactive(); + + String sourceKey = "source"; + String targetKey = "target"; + + IntStream.range(0, 10_000).forEach(num -> connection.async().hset(sourceKey, String.valueOf(num), String.valueOf(num))); + + redis.del(targetKey); + + ScanStream.hscan(commands, sourceKey).map(KeyValue::getValue) // + .map(Integer::parseInt) // + .filter(num -> num % 2 == 0) // + .concatMap(item -> commands.sadd(targetKey, String.valueOf(item))) // + 
.as(StepVerifier::create) // + .expectNextCount(5000) // + .verifyComplete(); + + assertThat(redis.scard(targetKey)).isEqualTo(5_000); + } +} diff --git a/src/test/java/io/lettuce/core/ScoredValueStreamingAdapter.java b/src/test/java/io/lettuce/core/ScoredValueStreamingAdapter.java new file mode 100644 index 0000000000..f661d8a628 --- /dev/null +++ b/src/test/java/io/lettuce/core/ScoredValueStreamingAdapter.java @@ -0,0 +1,38 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.util.ArrayList; +import java.util.List; + +import io.lettuce.core.output.ScoredValueStreamingChannel; + +/** + * @author Mark Paluch + * @since 3.0 + */ +public class ScoredValueStreamingAdapter<T> implements ScoredValueStreamingChannel<T> { + private List<ScoredValue<T>> list = new ArrayList<>(); + + @Override + public void onValue(ScoredValue<T> value) { + list.add(value); + } + + public List<ScoredValue<T>> getList() { + return list; + } +} diff --git a/src/test/java/io/lettuce/core/ScoredValueUnitTests.java b/src/test/java/io/lettuce/core/ScoredValueUnitTests.java new file mode 100644 index 0000000000..baa0c605ba --- /dev/null +++ b/src/test/java/io/lettuce/core/ScoredValueUnitTests.java @@ -0,0 +1,124 @@ + +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; +import static org.assertj.core.api.Assertions.offset; + +import java.util.Optional; + +import org.junit.jupiter.api.Test; + +/** + * @author Will Glozer + * @author Mark Paluch + */ +class ScoredValueUnitTests { + + @Test + void shouldCreateEmptyScoredValueFromOptional() { + + ScoredValue<String> value = ScoredValue.from(42, Optional.<String>
empty()); + + assertThat(value.hasValue()).isFalse(); + } + + @Test + void shouldCreateEmptyValue() { + + ScoredValue value = ScoredValue.empty(); + + assertThat(value.hasValue()).isFalse(); + } + + @Test + void shouldCreateNonEmptyValueFromOptional() { + + ScoredValue value = ScoredValue.from(4.2, Optional.of("hello")); + + assertThat(value.hasValue()).isTrue(); + assertThat(value.getValue()).isEqualTo("hello"); + assertThat(value.getScore()).isCloseTo(4.2, offset(0.01)); + } + + @Test + void shouldCreateEmptyValueFromValue() { + + ScoredValue value = ScoredValue.fromNullable(42, null); + + assertThat(value.hasValue()).isFalse(); + } + + @Test + void shouldCreateNonEmptyValueFromValue() { + + ScoredValue value = ScoredValue.fromNullable(42, "hello"); + + assertThat(value.hasValue()).isTrue(); + assertThat(value.getValue()).isEqualTo("hello"); + } + + @Test + void justShouldCreateValueFromValue() { + + ScoredValue value = ScoredValue.just(42, "hello"); + + assertThat(value.hasValue()).isTrue(); + assertThat(value.getValue()).isEqualTo("hello"); + } + + @Test + void justShouldRejectEmptyValueFromValue() { + assertThatThrownBy(() -> ScoredValue.just(null)).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void shouldCreateNonEmptyValue() { + + ScoredValue value = ScoredValue.from(12, Optional.of("hello")); + + assertThat(value.hasValue()).isTrue(); + assertThat(value.getValue()).isEqualTo("hello"); + } + + @Test + void equals() { + ScoredValue sv1 = ScoredValue.fromNullable(1.0, "a"); + assertThat(sv1.equals(ScoredValue.fromNullable(1.0, "a"))).isTrue(); + assertThat(sv1.equals(null)).isFalse(); + assertThat(sv1.equals(ScoredValue.fromNullable(1.1, "a"))).isFalse(); + assertThat(sv1.equals(ScoredValue.fromNullable(1.0, "b"))).isFalse(); + } + + @Test + void testHashCode() { + assertThat(ScoredValue.fromNullable(1.0, "a").hashCode() != 0).isTrue(); + assertThat(ScoredValue.fromNullable(0.0, "a").hashCode() != 0).isTrue(); + assertThat(ScoredValue.fromNullable(0.0, null).hashCode() == 0).isTrue(); + } + + @Test + void toStringShouldRenderCorrectly() { + + ScoredValue value = ScoredValue.from(12.34, Optional.of("hello")); + ScoredValue empty = ScoredValue.fromNullable(34, null); + + assertThat(value.toString()).contains("ScoredValue[12").contains("340000, hello]"); + assertThat(empty.toString()).contains("ScoredValue[34").contains("000000].empty"); + } +} diff --git a/src/test/java/io/lettuce/core/SocketOptionsIntegrationTests.java b/src/test/java/io/lettuce/core/SocketOptionsIntegrationTests.java new file mode 100644 index 0000000000..bee357c734 --- /dev/null +++ b/src/test/java/io/lettuce/core/SocketOptionsIntegrationTests.java @@ -0,0 +1,68 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.fail; + +import java.net.SocketException; +import java.util.concurrent.TimeUnit; + +import javax.inject.Inject; + +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.test.LettuceExtension; +import io.netty.channel.ConnectTimeoutException; + +/** + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +class SocketOptionsIntegrationTests extends TestSupport { + + private final RedisClient client; + + @Inject + SocketOptionsIntegrationTests(RedisClient client) { + this.client = client; + } + + @Test + void testConnectTimeout() { + + SocketOptions socketOptions = SocketOptions.builder().connectTimeout(100, TimeUnit.MILLISECONDS).build(); + client.setOptions(ClientOptions.builder().socketOptions(socketOptions).build()); + + try { + client.connect(RedisURI.create("2:4:5:5::1", 60000)); + fail("Missing RedisConnectionException"); + } catch (RedisConnectionException e) { + + if (e.getCause() instanceof ConnectTimeoutException) { + assertThat(e).hasRootCauseInstanceOf(ConnectTimeoutException.class); + assertThat(e.getCause()).hasMessageContaining("connection timed out"); + return; + } + + if (e.getCause() instanceof SocketException) { + // Network is unreachable or No route to host are OK as well. + return; + } + } + } +} diff --git a/src/test/java/io/lettuce/core/SocketOptionsUnitTests.java b/src/test/java/io/lettuce/core/SocketOptionsUnitTests.java new file mode 100644 index 0000000000..e7f5300f21 --- /dev/null +++ b/src/test/java/io/lettuce/core/SocketOptionsUnitTests.java @@ -0,0 +1,73 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.time.Duration; +import java.util.concurrent.TimeUnit; + +import org.junit.jupiter.api.Test; + +/** + * Unit tests for {@link SocketOptions}. 
+ * + * @author Mark Paluch + */ +class SocketOptionsUnitTests { + + @Test + void testNew() { + checkAssertions(SocketOptions.create()); + } + + @Test + void testBuilder() { + + SocketOptions sut = SocketOptions.builder().connectTimeout(1, TimeUnit.MINUTES).keepAlive(true).tcpNoDelay(true) + .build(); + + assertThat(sut.isKeepAlive()).isTrue(); + assertThat(sut.isTcpNoDelay()).isTrue(); + assertThat(sut.getConnectTimeout()).isEqualTo(Duration.ofMinutes(1)); + } + + @Test + void mutateShouldConfigureNewOptions() { + + SocketOptions sut = SocketOptions.builder().connectTimeout(Duration.ofSeconds(1)).keepAlive(true).tcpNoDelay(true) + .build(); + + SocketOptions reconfigured = sut.mutate().tcpNoDelay(false).build(); + + assertThat(sut.isKeepAlive()).isTrue(); + assertThat(sut.isTcpNoDelay()).isTrue(); + assertThat(sut.getConnectTimeout()).isEqualTo(Duration.ofSeconds(1)); + + assertThat(reconfigured.isTcpNoDelay()).isFalse(); + } + + @Test + void testCopy() { + checkAssertions(SocketOptions.copyOf(SocketOptions.builder().build())); + } + + void checkAssertions(SocketOptions sut) { + assertThat(sut.isKeepAlive()).isFalse(); + assertThat(sut.isTcpNoDelay()).isFalse(); + assertThat(sut.getConnectTimeout()).isEqualTo(Duration.ofSeconds(10)); + } +} diff --git a/src/test/java/io/lettuce/core/SslIntegrationTests.java b/src/test/java/io/lettuce/core/SslIntegrationTests.java new file mode 100644 index 0000000000..1ff865df9e --- /dev/null +++ b/src/test/java/io/lettuce/core/SslIntegrationTests.java @@ -0,0 +1,397 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import static io.lettuce.test.settings.TestSettings.sslPort; +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; +import static org.junit.jupiter.api.Assumptions.assumeTrue; + +import java.io.File; +import java.net.MalformedURLException; +import java.net.URL; +import java.time.Duration; +import java.util.List; +import java.util.function.Function; +import java.util.stream.Collectors; +import java.util.stream.IntStream; + +import javax.inject.Inject; + +import org.junit.jupiter.api.BeforeAll; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.masterslave.MasterSlave; +import io.lettuce.core.pubsub.api.sync.RedisPubSubCommands; +import io.lettuce.test.CanConnect; +import io.lettuce.test.Delay; +import io.lettuce.test.LettuceExtension; +import io.lettuce.test.Wait; +import io.lettuce.test.settings.TestSettings; +import io.netty.handler.ssl.OpenSsl; + +/** + * Tests using SSL via {@link RedisClient}. 
+ * + * @author Mark Paluch + * @author Adam McElwee + */ +@ExtendWith(LettuceExtension.class) +class SslIntegrationTests extends TestSupport { + + private static final String KEYSTORE = "work/keystore.jks"; + private static final String TRUSTSTORE = "work/truststore.jks"; + private static final File TRUSTSTORE_FILE = new File(TRUSTSTORE); + private static final File CA_CERT_FILE = new File("work/ca/certs/ca.cert.pem"); + private static final int MASTER_SLAVE_BASE_PORT_OFFSET = 2000; + + private static final RedisURI URI_VERIFY = sslURIBuilder(0) // + .withVerifyPeer(true) // + .build(); + + private static final RedisURI URI_NO_VERIFY = sslURIBuilder(1) // + .withVerifyPeer(false) // + .build(); + + private static final RedisURI URI_CLIENT_CERT_AUTH = sslURIBuilder(2) // + .withVerifyPeer(true) // + .build(); + + private static final List MASTER_SLAVE_URIS_NO_VERIFY = sslUris(IntStream.of(0, 1), + builder -> builder.withVerifyPeer(false)); + + private static final List MASTER_SLAVE_URIS_VERIFY = sslUris(IntStream.of(0, 1), + builder -> builder.withVerifyPeer(true)); + + private static final List MASTER_SLAVE_URIS_WITH_ONE_INVALID = sslUris(IntStream.of(0, 1, 2), + builder -> builder.withVerifyPeer(true)); + + private static final List MASTER_SLAVE_URIS_WITH_ALL_INVALID = sslUris(IntStream.of(2, 3), + builder -> builder.withVerifyPeer(true)); + + private final RedisClient redisClient; + + @Inject + SslIntegrationTests(RedisClient redisClient) { + this.redisClient = redisClient; + } + + @BeforeAll + static void beforeClass() { + + assumeTrue(CanConnect.to(TestSettings.host(), sslPort()), "Assume that stunnel runs on port 6443"); + assertThat(TRUSTSTORE_FILE).exists(); + } + + @Test + void standaloneWithSsl() { + + RedisCommands connection = redisClient.connect(URI_NO_VERIFY).sync(); + connection.set("key", "value"); + assertThat(connection.get("key")).isEqualTo("value"); + connection.getStatefulConnection().close(); + } + + @Test + void standaloneWithJdkSsl() { + + SslOptions sslOptions = SslOptions.builder() // + .jdkSslProvider() // + .truststore(TRUSTSTORE_FILE) // + .build(); + setOptions(sslOptions); + + verifyConnection(URI_VERIFY); + } + + @Test + void standaloneWithPemCert() { + + SslOptions sslOptions = SslOptions.builder() // + .trustManager(CA_CERT_FILE) // + .build(); + setOptions(sslOptions); + + verifyConnection(URI_VERIFY); + } + + @Test + void standaloneWithJdkSslUsingTruststoreUrl() throws Exception { + + SslOptions sslOptions = SslOptions.builder() // + .jdkSslProvider() // + .truststore(truststoreURL()) // + .build(); + setOptions(sslOptions); + + verifyConnection(URI_VERIFY); + } + + @Test + void standaloneWithClientCertificates() { + + SslOptions sslOptions = SslOptions.builder() // + .jdkSslProvider() // + .keystore(new File(KEYSTORE), "changeit".toCharArray()) // + .truststore(TRUSTSTORE_FILE) // + .build(); + setOptions(sslOptions); + + verifyConnection(URI_CLIENT_CERT_AUTH); + } + + @Test + void standaloneWithClientCertificatesWithoutKeystore() { + + SslOptions sslOptions = SslOptions.builder() // + .jdkSslProvider() // + .truststore(TRUSTSTORE_FILE) // + .build(); + setOptions(sslOptions); + + assertThatThrownBy(() -> verifyConnection(URI_CLIENT_CERT_AUTH)).isInstanceOf(RedisConnectionException.class); + } + + @Test + void standaloneWithJdkSslUsingTruststoreUrlWithWrongPassword() throws Exception { + + SslOptions sslOptions = SslOptions.builder() // + .jdkSslProvider() // + .truststore(truststoreURL(), "knödel") // + .build(); + setOptions(sslOptions); + + 
assertThatThrownBy(() -> verifyConnection(URI_VERIFY)).isInstanceOf(RedisConnectionException.class); + } + + @Test + void standaloneWithJdkSslFailsWithWrongTruststore() { + + SslOptions sslOptions = SslOptions.builder() // + .jdkSslProvider() // + .build(); + setOptions(sslOptions); + + assertThatThrownBy(() -> verifyConnection(URI_VERIFY)).isInstanceOf(RedisConnectionException.class); + } + + @Test + void standaloneWithOpenSsl() { + + assumeTrue(OpenSsl.isAvailable()); + + SslOptions sslOptions = SslOptions.builder() // + .openSslProvider() // + .truststore(TRUSTSTORE_FILE) // + .build(); + setOptions(sslOptions); + + verifyConnection(URI_VERIFY); + } + + @Test + void standaloneWithOpenSslFailsWithWrongTruststore() { + + assumeTrue(OpenSsl.isAvailable()); + + SslOptions sslOptions = SslOptions.builder() // + .openSslProvider() // + .build(); + setOptions(sslOptions); + + assertThatThrownBy(() -> verifyConnection(URI_VERIFY)).isInstanceOf(RedisConnectionException.class); + } + + @Test + void regularSslWithReconnect() { + + RedisCommands connection = redisClient.connect(URI_NO_VERIFY).sync(); + connection.quit(); + Delay.delay(Duration.ofMillis(200)); + assertThat(connection.ping()).isEqualTo("PONG"); + connection.getStatefulConnection().close(); + } + + @Test + void sslWithVerificationWillFail() { + + RedisURI redisUri = RedisURI.create("rediss://" + TestSettings.host() + ":" + sslPort()); + + assertThatThrownBy(() -> redisClient.connect(redisUri).sync()).isInstanceOf(RedisConnectionException.class); + } + + @Test + void masterSlaveWithSsl() { + + RedisCommands connection = MasterSlave + .connect(redisClient, StringCodec.UTF8, MASTER_SLAVE_URIS_NO_VERIFY).sync(); + connection.set("key", "value"); + assertThat(connection.get("key")).isEqualTo("value"); + connection.getStatefulConnection().close(); + } + + @Test + void masterSlaveWithJdkSsl() { + + SslOptions sslOptions = SslOptions.builder() // + .jdkSslProvider() // + .truststore(TRUSTSTORE_FILE) // + .build(); + setOptions(sslOptions); + + verifyMasterSlaveConnection(MASTER_SLAVE_URIS_VERIFY); + } + + @Test + void masterSlaveWithJdkSslUsingTruststoreUrl() throws Exception { + + SslOptions sslOptions = SslOptions.builder() // + .jdkSslProvider() // + .truststore(truststoreURL()) // + .build(); + setOptions(sslOptions); + + verifyMasterSlaveConnection(MASTER_SLAVE_URIS_VERIFY); + } + + @Test + void masterSlaveWithJdkSslUsingTruststoreUrlWithWrongPassword() throws Exception { + + SslOptions sslOptions = SslOptions.builder() // + .jdkSslProvider() // + .truststore(truststoreURL(), "knödel") // + .build(); + setOptions(sslOptions); + + assertThatThrownBy(() -> verifyMasterSlaveConnection(MASTER_SLAVE_URIS_VERIFY)) + .isInstanceOf(RedisConnectionException.class); + } + + @Test + void masterSlaveWithJdkSslFailsWithWrongTruststore() { + + SslOptions sslOptions = SslOptions.builder() // + .jdkSslProvider() // + .build(); + setOptions(sslOptions); + + assertThatThrownBy(() -> verifyMasterSlaveConnection(MASTER_SLAVE_URIS_VERIFY)) + .isInstanceOf(RedisConnectionException.class); + } + + @Test + void masterSlaveSslWithReconnect() { + RedisCommands connection = MasterSlave + .connect(redisClient, StringCodec.UTF8, MASTER_SLAVE_URIS_NO_VERIFY).sync(); + connection.quit(); + Delay.delay(Duration.ofMillis(200)); + assertThat(connection.ping()).isEqualTo("PONG"); + connection.getStatefulConnection().close(); + } + + @Test + void masterSlaveSslWithVerificationWillFail() { + assertThatThrownBy(() -> MasterSlave.connect(redisClient, StringCodec.UTF8, 
MASTER_SLAVE_URIS_VERIFY)) + .isInstanceOf(RedisConnectionException.class); + } + + @Test + void masterSlaveSslWithOneInvalidHostWillSucceed() { + + SslOptions sslOptions = SslOptions.builder() // + .jdkSslProvider() // + .truststore(TRUSTSTORE_FILE) // + .build(); + setOptions(sslOptions); + + verifyMasterSlaveConnection(MASTER_SLAVE_URIS_WITH_ONE_INVALID); + } + + @Test + void masterSlaveSslWithAllInvalidHostsWillFail() { + + SslOptions sslOptions = SslOptions.builder() // + .jdkSslProvider() // + .truststore(TRUSTSTORE_FILE) // + .build(); + setOptions(sslOptions); + + assertThatThrownBy(() -> verifyMasterSlaveConnection(MASTER_SLAVE_URIS_WITH_ALL_INVALID)) + .isInstanceOf(RedisConnectionException.class); + } + + @Test + void pubSubSsl() { + + RedisPubSubCommands connection = redisClient.connectPubSub(URI_NO_VERIFY).sync(); + connection.subscribe("c1"); + connection.subscribe("c2"); + Delay.delay(Duration.ofMillis(200)); + + RedisPubSubCommands connection2 = redisClient.connectPubSub(URI_NO_VERIFY).sync(); + + assertThat(connection2.pubsubChannels()).contains("c1", "c2"); + connection.quit(); + Delay.delay(Duration.ofMillis(200)); + Wait.untilTrue(connection::isOpen).waitOrTimeout(); + Wait.untilEquals(2, () -> connection2.pubsubChannels().size()).waitOrTimeout(); + + assertThat(connection2.pubsubChannels()).contains("c1", "c2"); + + connection.getStatefulConnection().close(); + connection2.getStatefulConnection().close(); + } + + private static RedisURI.Builder sslURIBuilder(int portOffset) { + return RedisURI.Builder.redis(TestSettings.host(), sslPort(portOffset)).withSsl(true); + } + + private static List sslUris(IntStream masterSlaveOffsets, + Function builderCustomizer) { + + return masterSlaveOffsets.map(it -> it + MASTER_SLAVE_BASE_PORT_OFFSET) + .mapToObj(offset -> RedisURI.Builder.redis(TestSettings.host(), sslPort(offset)).withSsl(true)) + .map(builderCustomizer).map(RedisURI.Builder::build).collect(Collectors.toList()); + } + + private URL truststoreURL() throws MalformedURLException { + return TRUSTSTORE_FILE.toURI().toURL(); + } + + private void setOptions(SslOptions sslOptions) { + ClientOptions clientOptions = ClientOptions.builder().sslOptions(sslOptions).build(); + redisClient.setOptions(clientOptions); + } + + private void verifyConnection(RedisURI redisUri) { + + try (StatefulRedisConnection connection = redisClient.connect(redisUri)) { + connection.sync().ping(); + } + } + + private void verifyMasterSlaveConnection(List redisUris) { + + try (StatefulRedisConnection connection = MasterSlave.connect(redisClient, StringCodec.UTF8, + redisUris)) { + connection.sync().ping(); + } + } +} diff --git a/src/test/java/io/lettuce/core/SslOptionsUnitTests.java b/src/test/java/io/lettuce/core/SslOptionsUnitTests.java new file mode 100644 index 0000000000..cfb052ac73 --- /dev/null +++ b/src/test/java/io/lettuce/core/SslOptionsUnitTests.java @@ -0,0 +1,91 @@ +/* + * Copyright 2019-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.util.Collections; + +import javax.net.ssl.SSLParameters; + +import org.junit.jupiter.api.Test; + +import io.netty.handler.ssl.SslContext; + +/** + * Unit tests for {@link SslOptions}. + * + * @author Mark Paluch + */ +class SslOptionsUnitTests { + + @Test + void shouldCreateEmptySslOptions() throws Exception { + + SslOptions options = SslOptions.builder().build(); + + assertThat(options.createSSLParameters()).isNotNull(); + assertThat(options.createSslContextBuilder()).isNotNull(); + } + + @Test + void shouldConfigureCipherSuiteAndProtocol() { + + SslOptions options = SslOptions.builder().cipherSuites("Foo", "Bar").protocols("TLSv1").build(); + + SSLParameters parameters = options.createSSLParameters(); + assertThat(parameters.getCipherSuites()).contains("Foo", "Bar"); + assertThat(parameters.getProtocols()).contains("TLSv1"); + } + + @Test + void shouldMutateOptions() { + + SslOptions options = SslOptions.builder().cipherSuites("Foo", "Bar").protocols("TLSv1").build(); + + SslOptions reconfigured = options.mutate().protocols("Baz").build(); + + assertThat(options.createSSLParameters().getProtocols()).contains("TLSv1"); + assertThat(reconfigured.createSSLParameters().getProtocols()).contains("Baz"); + } + + @Test + void shouldUseParameterSupplier() { + + SslOptions options = SslOptions.builder().sslParameters(() -> { + + SSLParameters parameters = new SSLParameters(); + parameters.setNeedClientAuth(true); + return parameters; + }).build(); + + SSLParameters parameters = options.createSSLParameters(); + assertThat(parameters.getNeedClientAuth()).isTrue(); + } + + @Test + void shouldApplyContextCustomizer() throws Exception { + + SslOptions options = SslOptions.builder().sslContext(sslContextBuilder -> { + + sslContextBuilder.ciphers(Collections.singletonList("TLS_RSA_WITH_AES_128_CBC_SHA")); + + }).build(); + + SslContext context = options.createSslContextBuilder().build(); + assertThat(context.cipherSuites()).containsOnly("TLS_RSA_WITH_AES_128_CBC_SHA"); + } +} diff --git a/src/test/java/io/lettuce/core/SyncAsyncApiConvergenceUnitTests.java b/src/test/java/io/lettuce/core/SyncAsyncApiConvergenceUnitTests.java new file mode 100644 index 0000000000..acf376b0f0 --- /dev/null +++ b/src/test/java/io/lettuce/core/SyncAsyncApiConvergenceUnitTests.java @@ -0,0 +1,82 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.lang.reflect.*; +import java.util.Arrays; +import java.util.stream.Stream; + +import org.junit.jupiter.params.ParameterizedTest; +import org.junit.jupiter.params.provider.MethodSource; + +import io.lettuce.core.api.async.RedisAsyncCommands; +import io.lettuce.core.api.reactive.RedisReactiveCommands; +import io.lettuce.core.api.sync.RedisCommands; + +/** + * @author Mark Paluch + * @since 3.0 + */ +class SyncAsyncApiConvergenceUnitTests { + + @SuppressWarnings("rawtypes") + private Class asyncClass = RedisAsyncCommands.class; + + static Stream parameters() { + return Arrays.stream(RedisCommands.class.getMethods()); + } + + @ParameterizedTest + @MethodSource("parameters") + void testMethodPresentOnAsyncApi(Method syncMethod) throws Exception { + + Method method = RedisAsyncCommands.class.getMethod(syncMethod.getName(), syncMethod.getParameterTypes()); + assertThat(method).isNotNull(); + } + + @ParameterizedTest + @MethodSource("parameters") + void testMethodPresentOnReactiveApi(Method syncMethod) throws Exception { + + Method method = RedisReactiveCommands.class.getMethod(syncMethod.getName(), syncMethod.getParameterTypes()); + assertThat(method).isNotNull(); + } + + @ParameterizedTest + @MethodSource("parameters") + void testSameResultType(Method syncMethod) throws Exception { + + Method method = asyncClass.getMethod(syncMethod.getName(), syncMethod.getParameterTypes()); + Type returnType = method.getGenericReturnType(); + + if (method.getReturnType().equals(RedisFuture.class)) { + ParameterizedType genericReturnType = (ParameterizedType) method.getGenericReturnType(); + Type[] actualTypeArguments = genericReturnType.getActualTypeArguments(); + + if (actualTypeArguments[0] instanceof GenericArrayType) { + GenericArrayType arrayType = (GenericArrayType) actualTypeArguments[0]; + returnType = Array.newInstance((Class) arrayType.getGenericComponentType(), 0).getClass(); + } else { + returnType = actualTypeArguments[0]; + } + } + + assertThat(returnType.toString()).describedAs(syncMethod.toString()).isEqualTo( + syncMethod.getGenericReturnType().toString()); + } +} diff --git a/src/test/java/io/lettuce/core/TestRedisPublisher.java b/src/test/java/io/lettuce/core/TestRedisPublisher.java new file mode 100644 index 0000000000..653603eba7 --- /dev/null +++ b/src/test/java/io/lettuce/core/TestRedisPublisher.java @@ -0,0 +1,37 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core; + +import java.util.function.Supplier; + +import io.lettuce.core.api.StatefulConnection; +import io.lettuce.core.protocol.RedisCommand; +import io.netty.util.concurrent.ImmediateEventExecutor; + +/** + * @author Mark Paluch + */ +public class TestRedisPublisher extends RedisPublisher { + + public TestRedisPublisher(RedisCommand staticCommand, StatefulConnection connection, boolean dissolve) { + super(staticCommand, connection, dissolve, ImmediateEventExecutor.INSTANCE); + } + + public TestRedisPublisher(Supplier> redisCommandSupplier, StatefulConnection connection, + boolean dissolve) { + super(redisCommandSupplier, connection, dissolve, ImmediateEventExecutor.INSTANCE); + } +} diff --git a/src/test/java/io/lettuce/core/TestSupport.java b/src/test/java/io/lettuce/core/TestSupport.java new file mode 100644 index 0000000000..61cdb7b1c5 --- /dev/null +++ b/src/test/java/io/lettuce/core/TestSupport.java @@ -0,0 +1,65 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.util.Arrays; +import java.util.List; +import java.util.Set; + +import io.lettuce.core.internal.LettuceSets; +import io.lettuce.test.settings.TestSettings; + +/** + * @author Mark Paluch + * @author Tugdual Grall + */ +public abstract class TestSupport { + + public static final String host = TestSettings.hostAddr(); + public static final int port = TestSettings.port(); + public static final String username = TestSettings.username(); + public static final String passwd = TestSettings.password(); + + public static final String aclUsername = TestSettings.aclUsername(); + public static final String aclPasswd = TestSettings.aclPassword(); + + public static final String key = "key"; + public static final String value = "value"; + + protected static List list(String... args) { + return Arrays.asList(args); + } + + protected static List list(Object... args) { + return Arrays.asList(args); + } + + protected static List> svlist(ScoredValue... args) { + return Arrays.asList(args); + } + + protected static KeyValue kv(String key, String value) { + return KeyValue.fromNullable(key, value); + } + + protected static ScoredValue sv(double score, String value) { + return ScoredValue.fromNullable(score, value); + } + + protected static Set set(String... args) { + return LettuceSets.newHashSet(args); + } +} diff --git a/src/test/java/io/lettuce/core/TimeoutOptionsUnitTests.java b/src/test/java/io/lettuce/core/TimeoutOptionsUnitTests.java new file mode 100644 index 0000000000..1ac400251b --- /dev/null +++ b/src/test/java/io/lettuce/core/TimeoutOptionsUnitTests.java @@ -0,0 +1,61 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import static io.lettuce.core.TimeoutOptions.TimeoutSource; +import static org.assertj.core.api.Assertions.assertThat; + +import java.time.Duration; +import java.util.concurrent.TimeUnit; + +import org.junit.jupiter.api.Test; + +/** + * @author Mark Paluch + */ +class TimeoutOptionsUnitTests { + + @Test + void noTimeoutByDefault() { + + TimeoutOptions timeoutOptions = TimeoutOptions.create(); + + assertThat(timeoutOptions.isTimeoutCommands()).isFalse(); + assertThat(timeoutOptions.getSource()).isNull(); + } + + @Test + void defaultConnectionTimeout() { + + TimeoutOptions timeoutOptions = TimeoutOptions.enabled(); + + TimeoutSource source = timeoutOptions.getSource(); + assertThat(timeoutOptions.isTimeoutCommands()).isTrue(); + assertThat(timeoutOptions.isApplyConnectionTimeout()).isTrue(); + assertThat(source.getTimeout(null)).isEqualTo(-1); + } + + @Test + void fixedConnectionTimeout() { + + TimeoutOptions timeoutOptions = TimeoutOptions.enabled(Duration.ofMinutes(1)); + + TimeoutSource source = timeoutOptions.getSource(); + assertThat(timeoutOptions.isTimeoutCommands()).isTrue(); + assertThat(timeoutOptions.isApplyConnectionTimeout()).isFalse(); + assertThat(source.getTimeout(null)).isEqualTo(TimeUnit.MINUTES.toNanos(1)); + } +} diff --git a/src/test/java/io/lettuce/core/UnixDomainSocketIntegrationTests.java b/src/test/java/io/lettuce/core/UnixDomainSocketIntegrationTests.java new file mode 100644 index 0000000000..be7a7d0ac1 --- /dev/null +++ b/src/test/java/io/lettuce/core/UnixDomainSocketIntegrationTests.java @@ -0,0 +1,201 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.fail; +import static org.junit.jupiter.api.Assumptions.assumeTrue; + +import java.io.File; +import java.io.IOException; +import java.util.Locale; + +import org.apache.logging.log4j.LogManager; +import org.apache.logging.log4j.Logger; +import org.junit.jupiter.api.AfterAll; +import org.junit.jupiter.api.BeforeAll; +import org.junit.jupiter.api.Disabled; +import org.junit.jupiter.api.Test; + +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.sentinel.api.StatefulRedisSentinelConnection; +import io.lettuce.test.resource.FastShutdown; +import io.lettuce.test.resource.TestClientResources; +import io.lettuce.test.settings.TestSettings; +import io.netty.util.internal.SystemPropertyUtil; + +/** + * @author Mark Paluch + */ +class UnixDomainSocketIntegrationTests { + + private static final String MASTER_ID = "mymaster"; + + private static RedisClient sentinelClient; + + private Logger log = LogManager.getLogger(getClass()); + private String key = "key"; + private String value = "value"; + + @BeforeAll + static void setupClient() { + sentinelClient = getRedisSentinelClient(); + } + + @AfterAll + static void shutdownClient() { + FastShutdown.shutdown(sentinelClient); + } + + @Test + void standalone_RedisClientWithSocket() throws Exception { + + assumeTestSupported(); + + RedisURI redisURI = getSocketRedisUri(); + + RedisClient redisClient = RedisClient.create(TestClientResources.get(), redisURI); + + StatefulRedisConnection connection = redisClient.connect(); + someRedisAction(connection.sync()); + connection.close(); + + FastShutdown.shutdown(redisClient); + } + + @Test + void standalone_ConnectToSocket() throws Exception { + + assumeTestSupported(); + + RedisURI redisURI = getSocketRedisUri(); + + RedisClient redisClient = RedisClient.create(TestClientResources.get()); + + StatefulRedisConnection connection = redisClient.connect(redisURI); + + someRedisAction(connection.sync()); + connection.close(); + + FastShutdown.shutdown(redisClient); + } + + @Test + void sentinel_RedisClientWithSocket() throws Exception { + + assumeTestSupported(); + + RedisURI uri = new RedisURI(); + uri.getSentinels().add(getSentinelSocketRedisUri()); + uri.setSentinelMasterId("mymaster"); + + RedisClient redisClient = RedisClient.create(TestClientResources.get(), uri); + + StatefulRedisConnection connection = redisClient.connect(); + + someRedisAction(connection.sync()); + + connection.close(); + + StatefulRedisSentinelConnection sentinelConnection = redisClient.connectSentinel(); + + assertThat(sentinelConnection.sync().ping()).isEqualTo("PONG"); + sentinelConnection.close(); + + FastShutdown.shutdown(redisClient); + } + + @Test + void sentinel_ConnectToSocket() throws Exception { + + assumeTestSupported(); + + RedisURI uri = new RedisURI(); + uri.getSentinels().add(getSentinelSocketRedisUri()); + uri.setSentinelMasterId("mymaster"); + + RedisClient redisClient = RedisClient.create(TestClientResources.get()); + + StatefulRedisConnection connection = redisClient.connect(uri); + + someRedisAction(connection.sync()); + + connection.close(); + + StatefulRedisSentinelConnection sentinelConnection = redisClient.connectSentinel(uri); + + assertThat(sentinelConnection.sync().ping()).isEqualTo("PONG"); + sentinelConnection.close(); + + FastShutdown.shutdown(redisClient); + } + + @Test + void sentinel_socket_and_inet() 
throws Exception { + + assumeTestSupported(); + + RedisURI uri = new RedisURI(); + uri.getSentinels().add(getSentinelSocketRedisUri()); + uri.getSentinels().add(RedisURI.create(RedisURI.URI_SCHEME_REDIS + "://" + TestSettings.host() + ":26379")); + uri.setSentinelMasterId(MASTER_ID); + + RedisClient redisClient = RedisClient.create(TestClientResources.get(), uri); + + StatefulRedisSentinelConnection sentinelConnection = redisClient + .connectSentinel(getSentinelSocketRedisUri()); + log.info("Masters: " + sentinelConnection.sync().masters()); + + try { + redisClient.connect(); + fail("Missing validation exception"); + } catch (RedisConnectionException e) { + assertThat(e).hasMessageContaining("You cannot mix unix domain socket and IP socket URI's"); + } finally { + FastShutdown.shutdown(redisClient); + } + + } + + private void someRedisAction(RedisCommands connection) { + connection.set(key, value); + String result = connection.get(key); + + assertThat(result).isEqualTo(value); + } + + private static RedisClient getRedisSentinelClient() { + return RedisClient.create(TestClientResources.get(), RedisURI.Builder.sentinel(TestSettings.host(), MASTER_ID).build()); + } + + private void assumeTestSupported() { + String osName = SystemPropertyUtil.get("os.name").toLowerCase(Locale.UK).trim(); + assumeTrue(Transports.NativeTransports.isSocketSupported(), "Only supported on Linux/OSX, your os is " + osName + + " with epoll/kqueue support."); + } + + private static RedisURI getSocketRedisUri() throws IOException { + File file = new File(TestSettings.socket()).getCanonicalFile(); + return RedisURI.create(RedisURI.URI_SCHEME_REDIS_SOCKET + "://" + file.getCanonicalPath()); + } + + private static RedisURI getSentinelSocketRedisUri() throws IOException { + File file = new File(TestSettings.sentinelSocket()).getCanonicalFile(); + return RedisURI.create(RedisURI.URI_SCHEME_REDIS_SOCKET + "://" + file.getCanonicalPath()); + } + +} diff --git a/src/test/java/io/lettuce/core/Utf8StringCodecIntegrationTests.java b/src/test/java/io/lettuce/core/Utf8StringCodecIntegrationTests.java new file mode 100644 index 0000000000..87367a378f --- /dev/null +++ b/src/test/java/io/lettuce/core/Utf8StringCodecIntegrationTests.java @@ -0,0 +1,50 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.util.Arrays; + +import javax.inject.Inject; + +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.test.LettuceExtension; + +/** + * @author Will Glozer + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +class Utf8StringCodecIntegrationTests extends TestSupport { + + @Test + @Inject + void decodeHugeBuffer(StatefulRedisConnection connection) { + + RedisCommands redis = connection.sync(); + + char[] huge = new char[8192]; + Arrays.fill(huge, 'A'); + String value = new String(huge); + redis.set(key, value); + assertThat(redis.get(key)).isEqualTo(value); + } +} diff --git a/src/test/java/io/lettuce/core/ValueUnitTests.java b/src/test/java/io/lettuce/core/ValueUnitTests.java new file mode 100644 index 0000000000..954e2bc093 --- /dev/null +++ b/src/test/java/io/lettuce/core/ValueUnitTests.java @@ -0,0 +1,261 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; + +import java.util.NoSuchElementException; +import java.util.Optional; +import java.util.concurrent.atomic.AtomicBoolean; + +import org.junit.jupiter.api.Test; + +/** + * @author Mark Paluch + */ +class ValueUnitTests { + + @Test + void shouldCreateEmptyValueFromOptional() { + + Value value = Value.from(Optional. 
empty()); + + assertThat(value.hasValue()).isFalse(); + } + + @Test + void shouldCreateEmptyValue() { + + Value value = Value.empty(); + + assertThat(value.hasValue()).isFalse(); + } + + @Test + void shouldCreateNonEmptyValueFromOptional() { + + Value value = Value.from(Optional.of("hello")); + + assertThat(value.hasValue()).isTrue(); + assertThat(value.getValue()).isEqualTo("hello"); + } + + @Test + void shouldCreateEmptyValueFromValue() { + + Value value = Value.fromNullable(null); + + assertThat(value.hasValue()).isFalse(); + } + + @Test + void shouldCreateNonEmptyValueFromValue() { + + Value value = Value.fromNullable("hello"); + + assertThat(value.hasValue()).isTrue(); + assertThat(value.getValue()).isEqualTo("hello"); + } + + @Test + void justShouldCreateValueFromValue() { + + Value value = Value.just("hello"); + + assertThat(value.hasValue()).isTrue(); + } + + @Test + void justShouldRejectEmptyValueFromValue() { + assertThatThrownBy(() -> Value.just(null)).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void shouldCreateNonEmptyValue() { + + Value value = Value.from(Optional.of("hello")); + + assertThat(value.hasValue()).isTrue(); + assertThat(value.getValue()).isEqualTo("hello"); + } + + @Test + void optionalShouldReturnOptional() { + + Value value = Value.from(Optional.of("hello")); + + assertThat(value.optional()).hasValue("hello"); + } + + @Test + void emptyValueOptionalShouldReturnOptional() { + + Value value = Value.from(Optional.empty()); + + assertThat(value.optional()).isEmpty(); + } + + @Test + void getValueOrElseShouldReturnValue() { + + Value value = Value.from(Optional.of("hello")); + + assertThat(value.getValueOrElse("world")).isEqualTo("hello"); + } + + @Test + void getValueOrElseShouldReturnOtherValue() { + + Value value = Value.from(Optional.empty()); + + assertThat(value.getValueOrElse("world")).isEqualTo("world"); + } + + @Test + void orElseThrowShouldReturnValue() { + + Value value = Value.from(Optional.of("hello")); + + assertThat(value.getValueOrElseThrow(IllegalArgumentException::new)).isEqualTo("hello"); + } + + @Test + void emptyValueGetValueOrElseShouldThrowException() { + + Value value = Value.from(Optional.empty()); + + assertThatThrownBy(() -> value.getValueOrElseThrow(IllegalArgumentException::new)).isInstanceOf( + IllegalArgumentException.class); + } + + @Test + void getValueOrElseGetShouldReturnValue() { + + Value value = Value.from(Optional.of("hello")); + + assertThat(value.getValueOrElseGet(() -> "world")).isEqualTo("hello"); + } + + @Test + void emptyValueGetValueOrElseGetShouldReturnOtherValue() { + + Value value = Value.from(Optional.empty()); + + assertThat(value.getValueOrElseGet(() -> "world")).isEqualTo("world"); + } + + @Test + void mapShouldMapValue() { + + Value value = Value.from(Optional.of("hello")); + + assertThat(value.map(s -> s + "-world").getValue()).isEqualTo("hello-world"); + } + + @Test + void ifHasValueShouldExecuteCallback() { + + Value value = Value.just("hello"); + AtomicBoolean atomicBoolean = new AtomicBoolean(); + value.ifHasValue(s -> atomicBoolean.set(true)); + + assertThat(atomicBoolean.get()).isTrue(); + } + + @Test + void emptyValueShouldNotExecuteIfHasValueCallback() { + + Value value = Value.empty(); + AtomicBoolean atomicBoolean = new AtomicBoolean(); + value.ifHasValue(s -> atomicBoolean.set(true)); + + assertThat(atomicBoolean.get()).isFalse(); + } + + @Test + void ifEmptyShouldExecuteCallback() { + + Value value = Value.empty(); + AtomicBoolean atomicBoolean = new AtomicBoolean(); + 
value.ifEmpty(() -> atomicBoolean.set(true)); + + assertThat(atomicBoolean.get()).isTrue(); + } + + @Test + void valueShouldNotExecuteIfEmptyCallback() { + + Value value = Value.just("hello"); + AtomicBoolean atomicBoolean = new AtomicBoolean(); + value.ifEmpty(() -> atomicBoolean.set(true)); + + assertThat(atomicBoolean.get()).isFalse(); + } + + @Test + void emptyValueMapShouldNotMapEmptyValue() { + + Value value = Value.from(Optional.empty()); + + assertThat(value.map(s -> s + "-world")).isSameAs(value); + } + + @Test + void emptyValueGetEmptyValueShouldThrowException() { + assertThatThrownBy(() -> Value.from(Optional. empty()).getValue()).isInstanceOf(NoSuchElementException.class); + } + + @Test + void shouldBeEquals() { + + Value value = Value.from(Optional.of("hello")); + Value other = Value.fromNullable("hello"); + Value different = Value.fromNullable("different"); + + assertThat(value).isEqualTo(other); + assertThat(value).isNotEqualTo(different); + + assertThat(value.hashCode()).isEqualTo(other.hashCode()); + assertThat(value.hashCode()).isNotEqualTo(different.hashCode()); + } + + @Test + void toStringShouldRenderCorrectly() { + + Value value = Value.from(Optional.of("hello")); + Value empty = Value.fromNullable(null); + + assertThat(value.toString()).isEqualTo("Value[hello]"); + assertThat(empty.toString()).isEqualTo("Value.empty"); + } + + @Test + void emptyValueStreamShouldCreateEmptyStream() { + + Value empty = Value.fromNullable(null); + + assertThat(empty.stream().count()).isEqualTo(0); + } + + @Test + void streamShouldCreateAStream() { + + Value empty = Value.fromNullable("hello"); + + assertThat(empty.stream().count()).isEqualTo(1); + } +} diff --git a/src/test/java/io/lettuce/core/ZStoreArgsUnitTests.java b/src/test/java/io/lettuce/core/ZStoreArgsUnitTests.java new file mode 100644 index 0000000000..389ba07d8a --- /dev/null +++ b/src/test/java/io/lettuce/core/ZStoreArgsUnitTests.java @@ -0,0 +1,47 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core; + +import static org.assertj.core.api.Assertions.assertThat; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.protocol.CommandArgs; + +/** + * @author Mark Paluch + */ +class ZStoreArgsUnitTests { + + @Test + void shouldRenderWeights() { + + CommandArgs args = new CommandArgs<>(StringCodec.UTF8); + ZStoreArgs.Builder.weights(1, 2, 3).build(args); + + assertThat(args.toString()).contains("WEIGHTS"); + } + + @Test + void shouldOmitWeights() { + + CommandArgs args = new CommandArgs<>(StringCodec.UTF8); + ZStoreArgs.Builder.weights().build(args); + + assertThat(args.toString()).doesNotContain("WEIGHTS"); + } +} diff --git a/src/test/java/io/lettuce/core/cluster/AdvancedClusterClientIntegrationTests.java b/src/test/java/io/lettuce/core/cluster/AdvancedClusterClientIntegrationTests.java new file mode 100644 index 0000000000..a6652c9c08 --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/AdvancedClusterClientIntegrationTests.java @@ -0,0 +1,642 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import static io.lettuce.test.LettuceExtension.Connection; +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; + +import java.util.*; +import java.util.stream.Collectors; + +import javax.enterprise.inject.New; +import javax.inject.Inject; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Disabled; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.core.*; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; +import io.lettuce.core.cluster.api.async.RedisAdvancedClusterAsyncCommands; +import io.lettuce.core.cluster.api.async.RedisClusterAsyncCommands; +import io.lettuce.core.cluster.api.reactive.RedisAdvancedClusterReactiveCommands; +import io.lettuce.core.cluster.api.reactive.RedisClusterReactiveCommands; +import io.lettuce.core.cluster.api.sync.RedisAdvancedClusterCommands; +import io.lettuce.core.cluster.api.sync.RedisClusterCommands; +import io.lettuce.core.cluster.models.partitions.Partitions; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; +import io.lettuce.test.TestFutures; +import io.lettuce.test.KeysAndValues; +import io.lettuce.test.LettuceExtension; +import io.lettuce.test.ListStreamingAdapter; +import io.lettuce.test.condition.EnabledOnCommand; +import io.lettuce.test.settings.TestSettings; + +/** + * @author Mark Paluch + */ +@SuppressWarnings("rawtypes") +@ExtendWith(LettuceExtension.class) +class AdvancedClusterClientIntegrationTests extends TestSupport { + + private static final String KEY_ON_NODE_1 = "a"; + private static final String KEY_ON_NODE_2 = "b"; + + private final RedisClusterClient clusterClient; + private final 
StatefulRedisClusterConnection clusterConnection; + private final RedisAdvancedClusterAsyncCommands async; + private final RedisAdvancedClusterCommands sync; + + @Inject + AdvancedClusterClientIntegrationTests(RedisClusterClient clusterClient, + StatefulRedisClusterConnection clusterConnection) { + this.clusterClient = clusterClient; + + this.clusterConnection = clusterConnection; + this.async = clusterConnection.async(); + this.sync = clusterConnection.sync(); + } + + @BeforeEach + void setUp() { + this.sync.flushall(); + } + + @Test + void nodeConnections() { + + assertThat(clusterClient.getPartitions()).hasSize(4); + + for (RedisClusterNode redisClusterNode : clusterClient.getPartitions()) { + RedisClusterAsyncCommands nodeConnection = async.getConnection(redisClusterNode.getNodeId()); + + String myid = TestFutures.getOrTimeout(nodeConnection.clusterMyId()); + assertThat(myid).isEqualTo(redisClusterNode.getNodeId()); + } + } + + @Test + void unknownNodeId() { + assertThatThrownBy(() -> async.getConnection("unknown")).isInstanceOf(RedisException.class); + } + + @Test + void invalidHost() { + assertThatThrownBy(() -> async.getConnection("invalid-host", -1)).isInstanceOf(RedisException.class); + } + + @Test + void partitions() { + + Partitions partitions = async.getStatefulConnection().getPartitions(); + assertThat(partitions).hasSize(4); + } + + @Test + void differentConnections() { + + for (RedisClusterNode redisClusterNode : clusterClient.getPartitions()) { + RedisClusterAsyncCommands nodeId = async.getConnection(redisClusterNode.getNodeId()); + RedisClusterAsyncCommands hostAndPort = async.getConnection(redisClusterNode.getUri().getHost(), + redisClusterNode.getUri().getPort()); + + assertThat(nodeId).isNotSameAs(hostAndPort); + } + + StatefulRedisClusterConnection statefulConnection = async.getStatefulConnection(); + for (RedisClusterNode redisClusterNode : clusterClient.getPartitions()) { + + StatefulRedisConnection nodeId = statefulConnection.getConnection(redisClusterNode.getNodeId()); + StatefulRedisConnection hostAndPort = statefulConnection.getConnection(redisClusterNode.getUri() + .getHost(), redisClusterNode.getUri().getPort()); + + assertThat(nodeId).isNotSameAs(hostAndPort); + } + + RedisAdvancedClusterCommands sync = statefulConnection.sync(); + for (RedisClusterNode redisClusterNode : clusterClient.getPartitions()) { + + RedisClusterCommands nodeId = sync.getConnection(redisClusterNode.getNodeId()); + RedisClusterCommands hostAndPort = sync.getConnection(redisClusterNode.getUri().getHost(), + redisClusterNode.getUri().getPort()); + + assertThat(nodeId).isNotSameAs(hostAndPort); + } + + RedisAdvancedClusterReactiveCommands rx = statefulConnection.reactive(); + for (RedisClusterNode redisClusterNode : clusterClient.getPartitions()) { + + RedisClusterReactiveCommands nodeId = rx.getConnection(redisClusterNode.getNodeId()); + RedisClusterReactiveCommands hostAndPort = rx.getConnection(redisClusterNode.getUri().getHost(), + redisClusterNode.getUri().getPort()); + + assertThat(nodeId).isNotSameAs(hostAndPort); + } + } + + @Test + void msetRegular() { + + Map mset = Collections.singletonMap(key, value); + + String result = sync.mset(mset); + + assertThat(result).isEqualTo("OK"); + assertThat(sync.get(key)).isEqualTo(value); + } + + @Test + void msetCrossSlot() { + + Map mset = prepareMset(); + + String result = sync.mset(mset); + + assertThat(result).isEqualTo("OK"); + + for (String mykey : mset.keySet()) { + String s1 = sync.get(mykey); + assertThat(s1).isEqualTo("value-" + 
mykey); + } + } + + @Test + void msetnxCrossSlot() { + + Map mset = prepareMset(); + + String key = mset.keySet().iterator().next(); + Map submap = Collections.singletonMap(key, mset.get(key)); + + assertThat(sync.msetnx(submap)).isTrue(); + assertThat(sync.msetnx(mset)).isFalse(); + + for (String mykey : mset.keySet()) { + String s1 = sync.get(mykey); + assertThat(s1).isEqualTo("value-" + mykey); + } + } + + @Test + void mgetRegular() { + + msetRegular(); + List> result = sync.mget(key); + + assertThat(result).hasSize(1); + } + + @Test + void mgetCrossSlot() { + + msetCrossSlot(); + List keys = new ArrayList<>(); + List> expectation = new ArrayList<>(); + for (char c = 'a'; c < 'z'; c++) { + String key = new String(new char[] { c, c, c }); + keys.add(key); + expectation.add(kv(key, "value-" + key)); + } + + List> result = sync.mget(keys.toArray(new String[keys.size()])); + + assertThat(result).hasSize(keys.size()); + assertThat(result).isEqualTo(expectation); + } + + @Test + @EnabledOnCommand("UNLINK") + void delRegular() { + + msetRegular(); + Long result = sync.unlink(key); + + assertThat(result).isEqualTo(1); + assertThat(TestFutures.getOrTimeout(async.get(key))).isNull(); + } + + @Test + void delCrossSlot() { + + List keys = prepareKeys(); + + Long result = sync.del(keys.toArray(new String[keys.size()])); + + assertThat(result).isEqualTo(25); + + for (String mykey : keys) { + String s1 = sync.get(mykey); + assertThat(s1).isNull(); + } + } + + @Test + @EnabledOnCommand("UNLINK") + void unlinkRegular() { + + msetRegular(); + Long result = sync.unlink(key); + + assertThat(result).isEqualTo(1); + assertThat(sync.get(key)).isNull(); + } + + @Test + @EnabledOnCommand("UNLINK") + void unlinkCrossSlot() { + + List keys = prepareKeys(); + + Long result = sync.unlink(keys.toArray(new String[keys.size()])); + + assertThat(result).isEqualTo(25); + + for (String mykey : keys) { + String s1 = sync.get(mykey); + assertThat(s1).isNull(); + } + } + + private List prepareKeys() { + + msetCrossSlot(); + List keys = new ArrayList<>(); + for (char c = 'a'; c < 'z'; c++) { + String key = new String(new char[] { c, c, c }); + keys.add(key); + } + return keys; + } + + @Test + void clientSetname() { + + String name = "test-cluster-client"; + + assertThat(clusterClient.getPartitions().size()).isGreaterThan(0); + + sync.clientSetname(name); + + for (RedisClusterNode redisClusterNode : clusterClient.getPartitions()) { + RedisClusterCommands nodeConnection = async.getStatefulConnection().sync() + .getConnection(redisClusterNode.getNodeId()); + assertThat(nodeConnection.clientList()).contains(name); + } + + assertThat(sync.clientGetname()).isEqualTo(name); + } + + @Test + void clientSetnameRunOnError() { + assertThatThrownBy(() -> sync.clientSetname("not allowed")).isInstanceOf(RedisCommandExecutionException.class); + } + + @Test + void dbSize() { + + writeKeysToTwoNodes(); + + RedisClusterCommands nodeConnection1 = clusterConnection.getConnection(ClusterTestSettings.host, + ClusterTestSettings.port1).sync(); + RedisClusterCommands nodeConnection2 = clusterConnection.getConnection(ClusterTestSettings.host, + ClusterTestSettings.port1).sync(); + + assertThat(nodeConnection1.dbsize()).isEqualTo(1); + assertThat(nodeConnection2.dbsize()).isEqualTo(1); + + Long dbsize = sync.dbsize(); + assertThat(dbsize).isEqualTo(2); + } + + @Test + void flushall() { + + writeKeysToTwoNodes(); + + assertThat(sync.flushall()).isEqualTo("OK"); + + Long dbsize = sync.dbsize(); + assertThat(dbsize).isEqualTo(0); + } + + @Test + void 
flushdb() { + + writeKeysToTwoNodes(); + + assertThat(sync.flushdb()).isEqualTo("OK"); + + Long dbsize = sync.dbsize(); + assertThat(dbsize).isEqualTo(0); + } + + @Test + void keys() { + + writeKeysToTwoNodes(); + + assertThat(sync.keys("*")).contains(KEY_ON_NODE_1, KEY_ON_NODE_2); + } + + @Test + void keysStreaming() { + + writeKeysToTwoNodes(); + ListStreamingAdapter result = new ListStreamingAdapter<>(); + + assertThat(sync.keys(result, "*")).isEqualTo(2); + assertThat(result.getList()).contains(KEY_ON_NODE_1, KEY_ON_NODE_2); + } + + @Test + void randomKey() { + + writeKeysToTwoNodes(); + + assertThat(sync.randomkey()).isIn(KEY_ON_NODE_1, KEY_ON_NODE_2); + } + + @Test + void scriptFlush() { + assertThat(sync.scriptFlush()).isEqualTo("OK"); + } + + @Test + void scriptKill() { + assertThat(sync.scriptKill()).isEqualTo("OK"); + } + + @Test + void scriptLoad() { + + assertThat(sync.scriptFlush()).isEqualTo("OK"); + + String script = "return true"; + + String sha = LettuceStrings.digest(script.getBytes()); + assertThat(sync.scriptExists(sha)).contains(false); + + String returnedSha = sync.scriptLoad(script); + + assertThat(returnedSha).isEqualTo(sha); + assertThat(sync.scriptExists(sha)).contains(true); + } + + @Test + @Disabled("Run me manually, I will shutdown all your cluster nodes so you need to restart the Redis Cluster after this test") + void shutdown() { + sync.shutdown(true); + } + + @Test + void testSync() { + + RedisAdvancedClusterCommands sync = async.getStatefulConnection().sync(); + sync.set(key, value); + assertThat(sync.get(key)).isEqualTo(value); + + RedisClusterCommands node2Connection = sync.getConnection(ClusterTestSettings.host, + ClusterTestSettings.port2); + assertThat(node2Connection.get(key)).isEqualTo(value); + + assertThat(sync.getStatefulConnection()).isSameAs(async.getStatefulConnection()); + } + + @Test + @Inject + void routeCommandToNoAddrPartition(@New StatefulRedisClusterConnection connectionUnderTest) { + + RedisAdvancedClusterCommands sync = connectionUnderTest.sync(); + try { + + Partitions partitions = clusterClient.getPartitions(); + for (RedisClusterNode partition : partitions) { + partition.setUri(RedisURI.create("redis://non.existent.host:1234")); + } + + sync.set("A", "value");// 6373 + } catch (Exception e) { + assertThat(e).isInstanceOf(RedisException.class).hasMessageContaining("Unable to connect to"); + } finally { + clusterClient.getPartitions().clear(); + clusterClient.reloadPartitions(); + } + } + + @Test + @Inject + void routeCommandToForbiddenHostOnRedirect( + @Connection(requiresNew = true) StatefulRedisClusterConnection connectionUnderTest) { + + RedisAdvancedClusterCommands sync = connectionUnderTest.sync(); + try { + + Partitions partitions = clusterClient.getPartitions(); + for (RedisClusterNode partition : partitions) { + partition.setSlots(Collections.singletonList(0)); + if (partition.getUri().getPort() == 7380) { + partition.setSlots(Collections.singletonList(6373)); + } else { + partition.setUri(RedisURI.create("redis://non.existent.host:1234")); + } + } + + partitions.updateCache(); + + sync.set("A", "value");// 6373 + } catch (Exception e) { + assertThat(e).isInstanceOf(RedisException.class).hasMessageContaining("not allowed"); + } finally { + clusterClient.getPartitions().clear(); + clusterClient.reloadPartitions(); + } + } + + @Test + void getConnectionToNotAClusterMemberForbidden() { + + StatefulRedisClusterConnection sync = clusterClient.connect(); + try { + sync.getConnection(TestSettings.host(), TestSettings.port()); + } 
catch (RedisException e) { + assertThat(e).hasRootCauseExactlyInstanceOf(IllegalArgumentException.class); + } + sync.close(); + } + + @Test + void getConnectionToNotAClusterMemberAllowed() { + + clusterClient.setOptions(ClusterClientOptions.builder().validateClusterNodeMembership(false).build()); + StatefulRedisClusterConnection connection = clusterClient.connect(); + connection.getConnection(TestSettings.host(), TestSettings.port()); + connection.close(); + } + + @Test + @Inject + void pipelining(@New StatefulRedisClusterConnection connectionUnderTest) { + + RedisAdvancedClusterAsyncCommands async = connectionUnderTest.async(); + // preheat the first connection + TestFutures.awaitOrTimeout(async.get(key(0))); + + int iterations = 1000; + async.setAutoFlushCommands(false); + List> futures = new ArrayList<>(); + for (int i = 0; i < iterations; i++) { + futures.add(async.set(key(i), value(i))); + } + + for (int i = 0; i < iterations; i++) { + assertThat(this.sync.get(key(i))).as("Key " + key(i) + " must be null").isNull(); + } + + async.flushCommands(); + + boolean result = TestFutures.awaitOrTimeout(futures); + assertThat(result).isTrue(); + + for (int i = 0; i < iterations; i++) { + assertThat(this.sync.get(key(i))).as("Key " + key(i) + " must be " + value(i)).isEqualTo(value(i)); + } + } + + @Test + void clusterScan() { + + RedisAdvancedClusterCommands sync = async.getStatefulConnection().sync(); + sync.mset(KeysAndValues.MAP); + + Set allKeys = new HashSet<>(); + + KeyScanCursor scanCursor = null; + + do { + if (scanCursor == null) { + scanCursor = sync.scan(); + } else { + scanCursor = sync.scan(scanCursor); + } + allKeys.addAll(scanCursor.getKeys()); + } while (!scanCursor.isFinished()); + + assertThat(allKeys).containsAll(KeysAndValues.KEYS); + } + + @Test + void clusterScanWithArgs() { + + RedisAdvancedClusterCommands sync = async.getStatefulConnection().sync(); + sync.mset(KeysAndValues.MAP); + + Set allKeys = new HashSet<>(); + + KeyScanCursor scanCursor = null; + + do { + if (scanCursor == null) { + scanCursor = sync.scan(ScanArgs.Builder.matches("a*")); + } else { + scanCursor = sync.scan(scanCursor, ScanArgs.Builder.matches("a*")); + } + allKeys.addAll(scanCursor.getKeys()); + } while (!scanCursor.isFinished()); + + assertThat(allKeys) + .containsAll(KeysAndValues.KEYS.stream().filter(k -> k.startsWith("a")).collect(Collectors.toList())); + } + + @Test + void clusterScanStreaming() { + + RedisAdvancedClusterCommands sync = async.getStatefulConnection().sync(); + sync.mset(KeysAndValues.MAP); + + ListStreamingAdapter adapter = new ListStreamingAdapter<>(); + + StreamScanCursor scanCursor = null; + + do { + if (scanCursor == null) { + scanCursor = sync.scan(adapter); + } else { + scanCursor = sync.scan(adapter, scanCursor); + } + } while (!scanCursor.isFinished()); + + assertThat(adapter.getList()).containsAll(KeysAndValues.KEYS); + + } + + @Test + void clusterScanStreamingWithArgs() { + + RedisAdvancedClusterCommands sync = async.getStatefulConnection().sync(); + sync.mset(KeysAndValues.MAP); + + ListStreamingAdapter adapter = new ListStreamingAdapter<>(); + + StreamScanCursor scanCursor = null; + do { + if (scanCursor == null) { + scanCursor = sync.scan(adapter, ScanArgs.Builder.matches("a*")); + } else { + scanCursor = sync.scan(adapter, scanCursor, ScanArgs.Builder.matches("a*")); + } + } while (!scanCursor.isFinished()); + + assertThat(adapter.getList()).containsAll( + KeysAndValues.KEYS.stream().filter(k -> k.startsWith("a")).collect(Collectors.toList())); + + } + + @Test + 
void clusterScanCursorFinished() { + assertThatThrownBy(() -> sync.scan(ScanCursor.FINISHED)).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void clusterScanCursorNotReused() { + assertThatThrownBy(() -> sync.scan(ScanCursor.of("dummy"))).isInstanceOf(IllegalArgumentException.class); + } + + String value(int i) { + return value + "-" + i; + } + + String key(int i) { + return key + "-" + i; + } + + private void writeKeysToTwoNodes() { + sync.set(KEY_ON_NODE_1, value); + sync.set(KEY_ON_NODE_2, value); + } + + Map prepareMset() { + Map mset = new HashMap<>(); + for (char c = 'a'; c < 'z'; c++) { + String key = new String(new char[] { c, c, c }); + mset.put(key, "value-" + key); + } + return mset; + } + +} diff --git a/src/test/java/io/lettuce/core/cluster/AdvancedClusterReactiveIntegrationTests.java b/src/test/java/io/lettuce/core/cluster/AdvancedClusterReactiveIntegrationTests.java new file mode 100644 index 0000000000..48662d7ecb --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/AdvancedClusterReactiveIntegrationTests.java @@ -0,0 +1,434 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; + +import java.util.*; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.atomic.AtomicBoolean; +import java.util.stream.Collectors; + +import javax.inject.Inject; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Disabled; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; + +import reactor.core.publisher.Flux; +import reactor.test.StepVerifier; +import io.lettuce.core.*; +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; +import io.lettuce.core.cluster.api.async.RedisClusterAsyncCommands; +import io.lettuce.core.cluster.api.reactive.RedisAdvancedClusterReactiveCommands; +import io.lettuce.core.cluster.api.reactive.RedisClusterReactiveCommands; +import io.lettuce.core.cluster.api.sync.RedisAdvancedClusterCommands; +import io.lettuce.core.cluster.api.sync.RedisClusterCommands; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.test.*; +import io.lettuce.test.condition.EnabledOnCommand; +import io.netty.util.internal.ConcurrentSet; + +/** + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +class AdvancedClusterReactiveIntegrationTests extends TestSupport { + + private static final String KEY_ON_NODE_1 = "a"; + private static final String KEY_ON_NODE_2 = "b"; + + private final RedisClusterClient clusterClient; + private final RedisAdvancedClusterReactiveCommands commands; + private final RedisAdvancedClusterCommands syncCommands; + + @Inject + AdvancedClusterReactiveIntegrationTests(RedisClusterClient clusterClient, + StatefulRedisClusterConnection connection) 
{ + this.clusterClient = clusterClient; + this.commands = connection.reactive(); + this.syncCommands = connection.sync(); + } + + @BeforeEach + void setUp() { + syncCommands.flushall(); + } + + @Test + void unknownNodeId() { + assertThatThrownBy(() -> commands.getConnection("unknown")).isInstanceOf(RedisException.class); + } + + @Test + void invalidHost() { + assertThatThrownBy(() -> commands.getConnection("invalid-host", -1)).isInstanceOf(RedisException.class); + } + + @Test + void msetCrossSlot() { + + StepVerifier.create(commands.mset(KeysAndValues.MAP)).expectNext("OK").verifyComplete(); + + for (String mykey : KeysAndValues.KEYS) { + String s1 = syncCommands.get(mykey); + assertThat(s1).isEqualTo(KeysAndValues.MAP.get(mykey)); + } + } + + @Test + void msetnxCrossSlot() { + + Map mset = prepareMset(); + + String key = mset.keySet().iterator().next(); + Map submap = Collections.singletonMap(key, mset.get(key)); + + StepVerifier.create(commands.msetnx(submap)).expectNext(true).verifyComplete(); + StepVerifier.create(commands.msetnx(mset)).expectNext(false).verifyComplete(); + + for (String mykey : mset.keySet()) { + String s1 = syncCommands.get(mykey); + assertThat(s1).isEqualTo(mset.get(mykey)); + } + } + + @Test + void mgetCrossSlot() { + + msetCrossSlot(); + + Map> partitioned = SlotHash.partition(StringCodec.UTF8, KeysAndValues.KEYS); + assertThat(partitioned.size()).isGreaterThan(100); + + Flux> flux = commands.mget(KeysAndValues.KEYS.toArray(new String[KeysAndValues.COUNT])); + List> result = flux.collectList().block(); + + assertThat(result).hasSize(KeysAndValues.COUNT); + assertThat(result.stream().map(Value::getValue).collect(Collectors.toList())).isEqualTo(KeysAndValues.VALUES); + } + + @Test + void mgetCrossSlotStreaming() { + + msetCrossSlot(); + + KeyValueStreamingAdapter result = new KeyValueStreamingAdapter<>(); + + StepVerifier.create(commands.mget(result, KeysAndValues.KEYS.toArray(new String[KeysAndValues.COUNT]))) + .expectNext((long) KeysAndValues.COUNT).verifyComplete(); + } + + @Test + void delCrossSlot() { + + msetCrossSlot(); + + StepVerifier.create(commands.del(KeysAndValues.KEYS.toArray(new String[KeysAndValues.COUNT]))) + .expectNext((long) KeysAndValues.COUNT).verifyComplete(); + + for (String mykey : KeysAndValues.KEYS) { + String s1 = syncCommands.get(mykey); + assertThat(s1).isNull(); + } + } + + @Test + @EnabledOnCommand("UNLINK") + void unlinkCrossSlot() { + + msetCrossSlot(); + + StepVerifier.create(commands.unlink(KeysAndValues.KEYS.toArray(new String[KeysAndValues.COUNT]))) + .expectNext((long) KeysAndValues.COUNT).verifyComplete(); + + for (String mykey : KeysAndValues.KEYS) { + String s1 = syncCommands.get(mykey); + assertThat(s1).isNull(); + } + } + + @Test + void clientSetname() { + + String name = "test-cluster-client"; + + assertThat(clusterClient.getPartitions().size()).isGreaterThan(0); + + StepVerifier.create(commands.clientSetname(name)).expectNext("OK").verifyComplete(); + + for (RedisClusterNode redisClusterNode : clusterClient.getPartitions()) { + RedisClusterCommands nodeConnection = commands.getStatefulConnection().sync() + .getConnection(redisClusterNode.getNodeId()); + assertThat(nodeConnection.clientList()).contains(name); + } + + StepVerifier.create(commands.clientGetname()).expectNext(name).verifyComplete(); + } + + @Test + void clientSetnameRunOnError() { + + try { + StepVerifier.create(commands.clientSetname("not allowed")).expectError().verify(); + } catch (RuntimeException e) { + + // sometimes 
reactor.core.Exceptions$CancelException: The subscriber has denied dispatching happens + if (!e.getClass().getSimpleName().contains("CancelException")) { + throw e; + } + } + } + + @Test + void dbSize() { + + writeKeysToTwoNodes(); + + StepVerifier.create(commands.dbsize()).expectNext(2L).verifyComplete(); + } + + @Test + void flushall() { + + writeKeysToTwoNodes(); + + StepVerifier.create(commands.flushall()).expectNext("OK").verifyComplete(); + + Long dbsize = syncCommands.dbsize(); + assertThat(dbsize).isEqualTo(0); + } + + @Test + void flushdb() { + + writeKeysToTwoNodes(); + + StepVerifier.create(commands.flushdb()).expectNext("OK").verifyComplete(); + + Long dbsize = syncCommands.dbsize(); + assertThat(dbsize).isEqualTo(0); + } + + @Test + void keys() { + + writeKeysToTwoNodes(); + + StepVerifier.create(commands.keys("*")).recordWith(ConcurrentSet::new).expectNextCount(2) + .consumeRecordedWith(actual -> assertThat(actual).contains(KEY_ON_NODE_1, KEY_ON_NODE_2)).verifyComplete(); + } + + @Test + void keysDoesNotRunIntoRaceConditions() { + + List> futures = new ArrayList<>(); + RedisClusterAsyncCommands async = commands.getStatefulConnection().async(); + TestFutures.awaitOrTimeout(async.flushall()); + + for (int i = 0; i < 1000; i++) { + futures.add(async.set("key-" + i, "value-" + i)); + } + + TestFutures.awaitOrTimeout(futures); + + for (int i = 0; i < 1000; i++) { + CompletableFuture future = commands.keys("*").count().toFuture(); + TestFutures.awaitOrTimeout(future); + assertThat(future).isCompletedWithValue(1000L); + } + } + + @Test + void keysStreaming() { + + writeKeysToTwoNodes(); + ListStreamingAdapter result = new ListStreamingAdapter<>(); + + StepVerifier.create(commands.keys(result, "*")).expectNext(2L).verifyComplete(); + assertThat(result.getList()).contains(KEY_ON_NODE_1, KEY_ON_NODE_2); + } + + @Test + void randomKey() { + + writeKeysToTwoNodes(); + + StepVerifier.create(commands.randomkey()) + .consumeNextWith(actual -> assertThat(actual).isIn(KEY_ON_NODE_1, KEY_ON_NODE_2)).verifyComplete(); + } + + @Test + void scriptFlush() { + StepVerifier.create(commands.scriptFlush()).expectNext("OK").verifyComplete(); + } + + @Test + void scriptKill() { + StepVerifier.create(commands.scriptKill()).expectNext("OK").verifyComplete(); + } + + @Test + void scriptLoad() { + + scriptFlush(); + + String script = "return true"; + + String sha = LettuceStrings.digest(script.getBytes()); + + StepVerifier.create(commands.scriptExists(sha)).expectNext(false).verifyComplete(); + + StepVerifier.create(commands.scriptLoad(script)).expectNext(sha).verifyComplete(); + + StepVerifier.create(commands.scriptExists(sha)).expectNext(true).verifyComplete(); + } + + @Test + @Disabled("Run me manually, I will shutdown all your cluster nodes so you need to restart the Redis Cluster after this test") + void shutdown() { + commands.shutdown(true).subscribe(); + } + + @Test + void readFromReplicas() { + + RedisClusterReactiveCommands connection = commands.getConnection(ClusterTestSettings.host, + ClusterTestSettings.port4); + connection.readOnly().subscribe(); + commands.set(key, value).subscribe(); + + NodeSelectionAsyncIntegrationTests.waitForReplication(commands.getStatefulConnection().async(), ClusterTestSettings.key, + ClusterTestSettings.port4); + + AtomicBoolean error = new AtomicBoolean(); + connection.get(key).doOnError(throwable -> error.set(true)).block(); + + assertThat(error.get()).isFalse(); + + connection.readWrite().subscribe(); + + 
StepVerifier.create(connection.get(key)).expectError(RedisCommandExecutionException.class).verify(); + } + + @Test + void clusterScan() { + + RedisAdvancedClusterCommands sync = commands.getStatefulConnection().sync(); + sync.mset(KeysAndValues.MAP); + + Set allKeys = new HashSet<>(); + + KeyScanCursor scanCursor = null; + do { + + if (scanCursor == null) { + scanCursor = commands.scan().block(); + } else { + scanCursor = commands.scan(scanCursor).block(); + } + allKeys.addAll(scanCursor.getKeys()); + } while (!scanCursor.isFinished()); + + assertThat(allKeys).containsAll(KeysAndValues.KEYS); + + } + + @Test + void clusterScanWithArgs() { + + RedisAdvancedClusterCommands sync = commands.getStatefulConnection().sync(); + sync.mset(KeysAndValues.MAP); + + Set allKeys = new HashSet<>(); + + KeyScanCursor scanCursor = null; + do { + + if (scanCursor == null) { + scanCursor = commands.scan(ScanArgs.Builder.matches("a*")).block(); + } else { + scanCursor = commands.scan(scanCursor, ScanArgs.Builder.matches("a*")).block(); + } + allKeys.addAll(scanCursor.getKeys()); + } while (!scanCursor.isFinished()); + + assertThat(allKeys) + .containsAll(KeysAndValues.KEYS.stream().filter(k -> k.startsWith("a")).collect(Collectors.toList())); + + } + + @Test + void clusterScanStreaming() { + + RedisAdvancedClusterCommands sync = commands.getStatefulConnection().sync(); + sync.mset(KeysAndValues.MAP); + + ListStreamingAdapter adapter = new ListStreamingAdapter<>(); + + StreamScanCursor scanCursor = null; + do { + + if (scanCursor == null) { + scanCursor = commands.scan(adapter).block(); + } else { + scanCursor = commands.scan(adapter, scanCursor).block(); + } + } while (!scanCursor.isFinished()); + + assertThat(adapter.getList()).containsAll(KeysAndValues.KEYS); + + } + + @Test + void clusterScanStreamingWithArgs() { + + RedisAdvancedClusterCommands sync = commands.getStatefulConnection().sync(); + sync.mset(KeysAndValues.MAP); + + ListStreamingAdapter adapter = new ListStreamingAdapter<>(); + + StreamScanCursor scanCursor = null; + do { + + if (scanCursor == null) { + scanCursor = commands.scan(adapter, ScanArgs.Builder.matches("a*")).block(); + } else { + scanCursor = commands.scan(adapter, scanCursor, ScanArgs.Builder.matches("a*")).block(); + } + } while (!scanCursor.isFinished()); + + assertThat(adapter.getList()) + .containsAll(KeysAndValues.KEYS.stream().filter(k -> k.startsWith("a")).collect(Collectors.toList())); + } + + private void writeKeysToTwoNodes() { + syncCommands.set(KEY_ON_NODE_1, value); + syncCommands.set(KEY_ON_NODE_2, value); + } + + Map prepareMset() { + Map mset = new HashMap<>(); + for (char c = 'a'; c < 'z'; c++) { + String key = new String(new char[] { c, c, c }); + mset.put(key, "value-" + key); + } + return mset; + } +} diff --git a/src/test/java/io/lettuce/core/cluster/AsyncConnectionProviderIntegrationTests.java b/src/test/java/io/lettuce/core/cluster/AsyncConnectionProviderIntegrationTests.java new file mode 100644 index 0000000000..ca9e4ba710 --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/AsyncConnectionProviderIntegrationTests.java @@ -0,0 +1,207 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; + +import java.io.IOException; +import java.net.InetSocketAddress; +import java.net.ServerSocket; +import java.net.Socket; +import java.time.Duration; +import java.util.concurrent.CountDownLatch; +import java.util.concurrent.TimeUnit; + +import javax.inject.Inject; + +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; +import org.springframework.util.SocketUtils; +import org.springframework.util.StopWatch; + +import reactor.core.publisher.Mono; +import io.lettuce.core.ConnectionFuture; +import io.lettuce.core.RedisURI; +import io.lettuce.core.SocketOptions; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.cluster.ClusterNodeConnectionFactory.ConnectionKey; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.internal.AsyncConnectionProvider; +import io.lettuce.core.protocol.ProtocolVersion; +import io.lettuce.core.resource.ClientResources; +import io.lettuce.test.LettuceExtension; +import io.lettuce.test.TestFutures; +import io.lettuce.test.settings.TestSettings; +import io.netty.channel.ConnectTimeoutException; + +/** + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +class AsyncConnectionProviderIntegrationTests { + + private final ClientResources resources; + private RedisClusterClient client; + private ServerSocket serverSocket; + private CountDownLatch connectInitiated = new CountDownLatch(1); + + private AsyncConnectionProvider, ConnectionFuture>> sut; + + @Inject + AsyncConnectionProviderIntegrationTests(ClientResources resources) { + this.resources = resources; + } + + @BeforeEach + void before() throws Exception { + + serverSocket = new ServerSocket(SocketUtils.findAvailableTcpPort(), 1); + + client = RedisClusterClient.create(resources, "redis://localhost"); + client.setOptions(ClusterClientOptions.builder().protocolVersion(ProtocolVersion.RESP2).build()); + sut = new AsyncConnectionProvider<>(new AbstractClusterNodeConnectionFactory(resources) { + @Override + public ConnectionFuture> apply(ConnectionKey connectionKey) { + + RedisURI redisURI = RedisURI.create(TestSettings.host(), serverSocket.getLocalPort()); + redisURI.setTimeout(Duration.ofSeconds(5)); + + ConnectionFuture> future = client.connectToNodeAsync(StringCodec.UTF8, + "", null, Mono.just(new InetSocketAddress(connectionKey.host, serverSocket.getLocalPort()))); + + connectInitiated.countDown(); + + return future; + } + }); + } + + @AfterEach + void after() throws Exception { + serverSocket.close(); + } + + @Test + void shouldCloseConnectionByKey() throws IOException { + + ConnectionKey connectionKey = new ConnectionKey(ClusterConnectionProvider.Intent.READ, TestSettings.host(), + TestSettings.port()); + + sut.getConnection(connectionKey); + sut.close(connectionKey); + + assertThat(sut.getConnectionCount()).isEqualTo(0); + sut.close(); + + serverSocket.accept(); + 
} + + @Test + void shouldCloseConnections() throws IOException { + + ConnectionKey connectionKey = new ConnectionKey(ClusterConnectionProvider.Intent.READ, TestSettings.host(), + TestSettings.port()); + + sut.getConnection(connectionKey); + TestFutures.awaitOrTimeout(sut.close()); + + assertThat(sut.getConnectionCount()).isEqualTo(0); + TestFutures.awaitOrTimeout(sut.close()); + + serverSocket.accept(); + } + + @Test + void connectShouldFail() throws Exception { + + Socket socket = new Socket(TestSettings.host(), serverSocket.getLocalPort()); + + ClusterClientOptions clientOptions = ClusterClientOptions.builder().protocolVersion(ProtocolVersion.RESP2) + .socketOptions(SocketOptions.builder().connectTimeout(1, TimeUnit.SECONDS).build()).build(); + + client.setOptions(clientOptions); + + ConnectionKey connectionKey = new ConnectionKey(ClusterConnectionProvider.Intent.READ, "8.8.8.8", TestSettings.port()); + + StopWatch stopWatch = new StopWatch(); + + assertThatThrownBy(() -> TestFutures.awaitOrTimeout(sut.getConnection(connectionKey))) + .hasCauseInstanceOf( + ConnectTimeoutException.class); + + stopWatch.start(); + + assertThatThrownBy(() -> TestFutures.awaitOrTimeout(sut.getConnection(connectionKey))) + .hasCauseInstanceOf( + ConnectTimeoutException.class); + + stopWatch.stop(); + + assertThat(stopWatch.getLastTaskTimeMillis()).isBetween(0L, 1200L); + + sut.close(); + + socket.close(); + } + + @Test + void connectShouldFailConcurrently() throws Exception { + + Socket socket = new Socket(TestSettings.host(), serverSocket.getLocalPort()); + + ClusterClientOptions clientOptions = ClusterClientOptions.builder().protocolVersion(ProtocolVersion.RESP2) + .socketOptions(SocketOptions.builder().connectTimeout(1, TimeUnit.SECONDS).build()).build(); + + client.setOptions(clientOptions); + + ConnectionKey connectionKey = new ConnectionKey(ClusterConnectionProvider.Intent.READ, "8.8.8.8", TestSettings.port()); + + Thread t1 = new Thread(() -> { + try { + sut.getConnection(connectionKey); + } catch (Exception e) { + } + }); + + Thread t2 = new Thread(() -> { + try { + sut.getConnection(connectionKey); + } catch (Exception e) { + } + }); + + t1.start(); + t2.start(); + + connectInitiated.await(); + + StopWatch stopWatch = new StopWatch(); + stopWatch.start(); + + t1.join(2000); + t2.join(2000); + + stopWatch.stop(); + + assertThat(stopWatch.getLastTaskTimeMillis()).isBetween(0L, 1300L); + + sut.close(); + socket.close(); + } +} diff --git a/src/test/java/io/lettuce/core/cluster/ByteCodecClusterIntegrationTests.java b/src/test/java/io/lettuce/core/cluster/ByteCodecClusterIntegrationTests.java new file mode 100644 index 0000000000..cc591d521b --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/ByteCodecClusterIntegrationTests.java @@ -0,0 +1,45 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.cluster; + +import static org.assertj.core.api.Assertions.assertThat; + +import javax.inject.Inject; + +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.core.TestSupport; +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; +import io.lettuce.core.codec.ByteArrayCodec; +import io.lettuce.test.LettuceExtension; + +/** + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +class ByteCodecClusterIntegrationTests extends TestSupport { + + @Test + @Inject + void testByteCodec(RedisClusterClient clusterClient) { + + StatefulRedisClusterConnection connection = clusterClient.connect(new ByteArrayCodec()); + + connection.sync().set(key.getBytes(), value.getBytes()); + assertThat(connection.sync().get(key.getBytes())).isEqualTo(value.getBytes()); + } +} diff --git a/src/test/java/io/lettuce/core/cluster/ClusterClientOptionsIntegrationTests.java b/src/test/java/io/lettuce/core/cluster/ClusterClientOptionsIntegrationTests.java new file mode 100644 index 0000000000..8635dbebe1 --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/ClusterClientOptionsIntegrationTests.java @@ -0,0 +1,94 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.cluster; + +import static org.assertj.core.api.Assertions.assertThatThrownBy; + +import java.time.Duration; +import java.util.concurrent.ExecutionException; + +import javax.inject.Inject; + +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.core.RedisCommandTimeoutException; +import io.lettuce.core.RedisFuture; +import io.lettuce.core.TestSupport; +import io.lettuce.core.TimeoutOptions; +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; +import io.lettuce.core.cluster.pubsub.StatefulRedisClusterPubSubConnection; +import io.lettuce.test.LettuceExtension; + +/** + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +class ClusterClientOptionsIntegrationTests extends TestSupport { + + private final RedisClusterClient clusterClient; + + @Inject + ClusterClientOptionsIntegrationTests(RedisClusterClient clusterClient) { + this.clusterClient = clusterClient; + } + + @AfterEach + void tearDown() { + clusterClient.setOptions(ClusterClientOptions.create()); + } + + @Test + void shouldApplyTimeoutOptionsToClusterConnection() throws InterruptedException { + + clusterClient.setOptions(ClusterClientOptions.builder().timeoutOptions(TimeoutOptions.enabled(Duration.ofMillis(100))) + .build()); + + try (StatefulRedisClusterConnection connection = clusterClient.connect()) { + + connection.setTimeout(Duration.ZERO); + connection.async().clientPause(300); + + RedisFuture future = connection.async().ping(); + + assertThatThrownBy(future::get).isInstanceOf(ExecutionException.class) + .hasCauseInstanceOf(RedisCommandTimeoutException.class).hasMessageContaining("100 milli"); + } + + Thread.sleep(300); + } + + @Test + void shouldApplyTimeoutOptionsToPubSubClusterConnection() throws InterruptedException { + + clusterClient.setOptions(ClusterClientOptions.builder().timeoutOptions(TimeoutOptions.enabled(Duration.ofMillis(100))) + .build()); + + try (StatefulRedisClusterPubSubConnection connection = clusterClient.connectPubSub()) { + connection.setTimeout(Duration.ofMillis(100)); + + connection.async().clientPause(300); + + RedisFuture future = connection.async().ping(); + + assertThatThrownBy(future::get).isInstanceOf(ExecutionException.class) + .hasCauseInstanceOf(RedisCommandTimeoutException.class).hasMessageContaining("100 milli"); + } + + Thread.sleep(300); + } +} diff --git a/src/test/java/io/lettuce/core/cluster/ClusterClientOptionsUnitTests.java b/src/test/java/io/lettuce/core/cluster/ClusterClientOptionsUnitTests.java new file mode 100644 index 0000000000..0ba2f9e661 --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/ClusterClientOptionsUnitTests.java @@ -0,0 +1,98 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.cluster; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.nio.charset.StandardCharsets; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.ClientOptions; +import io.lettuce.core.protocol.ProtocolVersion; + +/** + * Unit tests for {@link ClusterClientOptions}. + * + * @author Mark Paluch + */ +class ClusterClientOptionsUnitTests { + + @Test + void testCopy() { + + ClusterClientOptions options = ClusterClientOptions.builder().autoReconnect(false).requestQueueSize(100) + .suspendReconnectOnProtocolFailure(true).maxRedirects(1234).validateClusterNodeMembership(false) + .protocolVersion(ProtocolVersion.RESP2).build(); + + ClusterClientOptions copy = ClusterClientOptions.copyOf(options); + + assertThat(copy.getProtocolVersion()).isEqualTo(options.getProtocolVersion()); + assertThat(copy.getRefreshPeriod()).isEqualTo(options.getRefreshPeriod()); + assertThat(copy.isCloseStaleConnections()).isEqualTo(options.isCloseStaleConnections()); + assertThat(copy.isRefreshClusterView()).isEqualTo(options.isRefreshClusterView()); + assertThat(copy.isValidateClusterNodeMembership()).isEqualTo(options.isValidateClusterNodeMembership()); + assertThat(copy.getRequestQueueSize()).isEqualTo(options.getRequestQueueSize()); + assertThat(copy.isAutoReconnect()).isEqualTo(options.isAutoReconnect()); + assertThat(copy.isCancelCommandsOnReconnectFailure()).isEqualTo(options.isCancelCommandsOnReconnectFailure()); + assertThat(copy.isSuspendReconnectOnProtocolFailure()).isEqualTo(options.isSuspendReconnectOnProtocolFailure()); + assertThat(copy.getMaxRedirects()).isEqualTo(options.getMaxRedirects()); + assertThat(copy.getScriptCharset()).isEqualTo(StandardCharsets.UTF_8); + } + + @Test + void builderFromDefaultClientOptions() { + + ClientOptions clientOptions = ClientOptions.builder().build(); + ClusterClientOptions clusterClientOptions = ClusterClientOptions.builder(clientOptions).build(); + + assertThat(clusterClientOptions.getProtocolVersion()).isEqualTo(clusterClientOptions.getProtocolVersion()); + assertThat(clusterClientOptions.getDisconnectedBehavior()).isEqualTo(clusterClientOptions.getDisconnectedBehavior()); + assertThat(clusterClientOptions.getSslOptions()).isEqualTo(clusterClientOptions.getSslOptions()); + assertThat(clusterClientOptions.getTimeoutOptions()).isEqualTo(clusterClientOptions.getTimeoutOptions()); + assertThat(clusterClientOptions.getRequestQueueSize()).isEqualTo(clusterClientOptions.getRequestQueueSize()); + assertThat(clusterClientOptions.isAutoReconnect()).isEqualTo(clusterClientOptions.isAutoReconnect()); + assertThat(clusterClientOptions.isCloseStaleConnections()).isEqualTo(clusterClientOptions.isCloseStaleConnections()); + assertThat(clusterClientOptions.isCancelCommandsOnReconnectFailure()) + .isEqualTo(clusterClientOptions.isCancelCommandsOnReconnectFailure()); + assertThat(clusterClientOptions.isPublishOnScheduler()).isEqualTo(clusterClientOptions.isPublishOnScheduler()); + assertThat(clusterClientOptions.isSuspendReconnectOnProtocolFailure()) + .isEqualTo(clusterClientOptions.isSuspendReconnectOnProtocolFailure()); + assertThat(clusterClientOptions.getScriptCharset()).isEqualTo(clusterClientOptions.getScriptCharset()); + assertThat(clusterClientOptions.mutate()).isNotNull(); + } + + @Test + void builderFromClusterClientOptions() { + + ClusterClientOptions options = ClusterClientOptions.builder().maxRedirects(1234).validateClusterNodeMembership(false) + .scriptCharset(StandardCharsets.US_ASCII).build(); + + 
ClusterClientOptions copy = ClusterClientOptions.builder(options).build(); + + assertThat(copy.getRefreshPeriod()).isEqualTo(options.getRefreshPeriod()); + assertThat(copy.isCloseStaleConnections()).isEqualTo(options.isCloseStaleConnections()); + assertThat(copy.isRefreshClusterView()).isEqualTo(options.isRefreshClusterView()); + assertThat(copy.isValidateClusterNodeMembership()).isEqualTo(options.isValidateClusterNodeMembership()); + assertThat(copy.getRequestQueueSize()).isEqualTo(options.getRequestQueueSize()); + assertThat(copy.isAutoReconnect()).isEqualTo(options.isAutoReconnect()); + assertThat(copy.isCancelCommandsOnReconnectFailure()).isEqualTo(options.isCancelCommandsOnReconnectFailure()); + assertThat(copy.isSuspendReconnectOnProtocolFailure()).isEqualTo(options.isSuspendReconnectOnProtocolFailure()); + assertThat(copy.getMaxRedirects()).isEqualTo(options.getMaxRedirects()); + assertThat(copy.getScriptCharset()).isEqualTo(options.getScriptCharset()); + assertThat(options.mutate()).isNotSameAs(copy.mutate()); + } +} diff --git a/src/test/java/io/lettuce/core/cluster/ClusterCommandIntegrationTests.java b/src/test/java/io/lettuce/core/cluster/ClusterCommandIntegrationTests.java new file mode 100644 index 0000000000..075bb6680d --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/ClusterCommandIntegrationTests.java @@ -0,0 +1,259 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.cluster; + +import static io.lettuce.core.cluster.ClusterTestUtil.getNodeId; +import static org.assertj.core.api.Assertions.assertThat; + +import java.time.Duration; +import java.util.List; + +import javax.inject.Inject; + +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.core.RedisClient; +import io.lettuce.core.RedisFuture; +import io.lettuce.core.RedisURI; +import io.lettuce.core.TestSupport; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; +import io.lettuce.core.cluster.api.async.RedisClusterAsyncCommands; +import io.lettuce.core.cluster.api.sync.RedisClusterCommands; +import io.lettuce.core.cluster.models.slots.ClusterSlotRange; +import io.lettuce.core.cluster.models.slots.ClusterSlotsParser; +import io.lettuce.test.Delay; +import io.lettuce.test.TestFutures; +import io.lettuce.test.LettuceExtension; +import io.lettuce.test.Wait; + +/** + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +class ClusterCommandIntegrationTests extends TestSupport { + + private final RedisClient client; + private final RedisClusterClient clusterClient; + private final StatefulRedisConnection connection; + private final RedisClusterAsyncCommands async; + private final RedisClusterCommands sync; + + @Inject + ClusterCommandIntegrationTests(RedisClient client, RedisClusterClient clusterClient) { + + this.client = client; + this.clusterClient = clusterClient; + + this.connection = client.connect(RedisURI.Builder.redis(host, ClusterTestSettings.port1).build()); + this.sync = connection.sync(); + this.async = connection.async(); + } + + @AfterEach + void after() { + connection.close(); + } + + @Test + void testClusterBumpEpoch() { + + RedisFuture future = async.clusterBumpepoch(); + + String result = TestFutures.getOrTimeout(future); + + assertThat(result).matches("(BUMPED|STILL).*"); + } + + @Test + void testClusterInfo() { + + String result = sync.clusterInfo(); + + assertThat(result).contains("cluster_known_nodes:"); + assertThat(result).contains("cluster_slots_fail:0"); + assertThat(result).contains("cluster_state:"); + } + + @Test + void testClusterNodes() { + + String result = sync.clusterNodes(); + + assertThat(result).contains("connected"); + assertThat(result).contains("master"); + assertThat(result).contains("myself"); + } + + @Test + void testClusterNodesSync() { + + StatefulRedisClusterConnection connection = clusterClient.connect(); + + String string = connection.sync().clusterNodes(); + connection.close(); + + assertThat(string).contains("connected"); + assertThat(string).contains("master"); + assertThat(string).contains("myself"); + } + + @Test + void testClusterReplicas() { + + sync.set("b", value); + RedisFuture replication = async.waitForReplication(1, 5); + assertThat(TestFutures.getOrTimeout(replication)).isGreaterThan(0L); + } + + @Test + void testAsking() { + assertThat(sync.asking()).isEqualTo("OK"); + } + + @Test + void testReset() { + + clusterClient.reloadPartitions(); + + StatefulRedisClusterConnection clusterConnection = clusterClient.connect(); + + TestFutures.awaitOrTimeout(clusterConnection.async().set("a", "myValue1")); + + clusterConnection.reset(); + + RedisFuture setA = clusterConnection.async().set("a", "myValue1"); + + assertThat(TestFutures.getOrTimeout(setA)).isEqualTo("OK"); + 
assertThat(setA.getError()).isNull(); + + connection.close(); + + } + + @Test + void testClusterSlots() { + + List reply = sync.clusterSlots(); + assertThat(reply.size()).isGreaterThan(1); + + List parse = ClusterSlotsParser.parse(reply); + assertThat(parse).hasSize(2); + + ClusterSlotRange clusterSlotRange = parse.get(0); + assertThat(clusterSlotRange.getFrom()).isEqualTo(0); + assertThat(clusterSlotRange.getTo()).isEqualTo(11999); + + assertThat(clusterSlotRange.toString()).contains(ClusterSlotRange.class.getSimpleName()); + } + + @Test + void readOnly() throws Exception { + + // cluster node 3 is a replica for key "b" + String key = "b"; + assertThat(SlotHash.getSlot(key)).isEqualTo(3300); + prepareReadonlyTest(key); + + // assume cluster node 3 is a replica for the master 1 + RedisCommands connect3 = client + .connect(RedisURI.Builder.redis(host, ClusterTestSettings.port3).build()).sync(); + + assertThat(connect3.readOnly()).isEqualTo("OK"); + waitUntilValueIsVisible(key, connect3); + + String resultBViewedByReplica = connect3.get("b"); + assertThat(resultBViewedByReplica).isEqualTo(value); + connect3.quit(); + + resultBViewedByReplica = connect3.get("b"); + assertThat(resultBViewedByReplica).isEqualTo(value); + } + + @Test + void readOnlyWithReconnect() throws Exception { + + // cluster node 3 is a replica for key "b" + String key = "b"; + assertThat(SlotHash.getSlot(key)).isEqualTo(3300); + prepareReadonlyTest(key); + + // assume cluster node 3 is a replica for the master 1 + RedisCommands connect3 = client + .connect(RedisURI.Builder.redis(host, ClusterTestSettings.port3).build()).sync(); + + assertThat(connect3.readOnly()).isEqualTo("OK"); + connect3.quit(); + waitUntilValueIsVisible(key, connect3); + + String resultViewedByReplica = connect3.get("b"); + assertThat(resultViewedByReplica).isEqualTo(value); + } + + @Test + void readOnlyReadWrite() throws Exception { + + // cluster node 3 is a replica for key "b" + String key = "b"; + assertThat(SlotHash.getSlot(key)).isEqualTo(3300); + prepareReadonlyTest(key); + + // assume cluster node 3 is a replica for the master 1 + final RedisCommands connect3 = client.connect( + RedisURI.Builder.redis(host, ClusterTestSettings.port3).build()).sync(); + + try { + connect3.get("b"); + } catch (Exception e) { + assertThat(e).hasMessageContaining("MOVED"); + } + + assertThat(connect3.readOnly()).isEqualTo("OK"); + waitUntilValueIsVisible(key, connect3); + + connect3.readWrite(); + try { + connect3.get("b"); + } catch (Exception e) { + assertThat(e).hasMessageContaining("MOVED"); + } + } + + @Test + void clusterSlaves() { + + String nodeId = getNodeId(sync); + List result = sync.clusterSlaves(nodeId); + + assertThat(result.size()).isGreaterThan(0); + } + + private void prepareReadonlyTest(String key) { + + async.set(key, value); + + String resultB = TestFutures.getOrTimeout(async.get(key)); + assertThat(resultB).isEqualTo(value); + Delay.delay(Duration.ofMillis(500)); // give some time to replicate + } + + private static void waitUntilValueIsVisible(String key, RedisCommands commands) { + Wait.untilTrue(() -> commands.get(key) != null).waitOrTimeout(); + } +} diff --git a/src/test/java/io/lettuce/core/cluster/ClusterCommandUnitTests.java b/src/test/java/io/lettuce/core/cluster/ClusterCommandUnitTests.java new file mode 100644 index 0000000000..c34b2dfe75 --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/ClusterCommandUnitTests.java @@ -0,0 +1,118 @@ +/* + * Copyright 2011-2020 the original author or authors. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.mockito.Mockito.verify; + +import java.util.ArrayList; +import java.util.List; +import java.util.concurrent.TimeUnit; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; +import org.mockito.Mock; +import org.mockito.junit.jupiter.MockitoExtension; + +import io.lettuce.core.RedisChannelWriter; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.output.StatusOutput; +import io.lettuce.core.protocol.AsyncCommand; +import io.lettuce.core.protocol.Command; +import io.lettuce.core.protocol.CommandType; + +/** + * @author Mark Paluch + */ +@ExtendWith(MockitoExtension.class) +class ClusterCommandUnitTests { + + @Mock + private RedisChannelWriter writerMock; + + private ClusterCommand sut; + private Command command = new Command<>(CommandType.TYPE, new StatusOutput<>(StringCodec.UTF8), + null); + + @BeforeEach + void before() { + sut = new ClusterCommand<>(command, writerMock, 1); + } + + @Test + void testException() { + + sut.completeExceptionally(new Exception()); + assertThat(sut.isCompleted()); + } + + @Test + void testCancel() { + + assertThat(command.isCancelled()).isFalse(); + sut.cancel(); + assertThat(command.isCancelled()).isTrue(); + } + + @Test + void testComplete() { + + sut.complete(); + assertThat(sut.isCompleted()).isTrue(); + assertThat(sut.isCancelled()).isFalse(); + } + + @Test + void testRedirect() { + + sut.getOutput().setError("MOVED 1234-2020 127.0.0.1:1000"); + sut.complete(); + + assertThat(sut.isCompleted()).isFalse(); + assertThat(sut.isCancelled()).isFalse(); + verify(writerMock).write(sut); + } + + @Test + void testRedirectLimit() { + + sut.getOutput().setError("MOVED 1234-2020 127.0.0.1:1000"); + sut.complete(); + + sut.getOutput().setError("MOVED 1234-2020 127.0.0.1:1000"); + sut.complete(); + + assertThat(sut.isCompleted()).isTrue(); + assertThat(sut.isCancelled()).isFalse(); + verify(writerMock).write(sut); + } + + @Test + void testCompleteListener() { + + final List someList = new ArrayList<>(); + + AsyncCommand asyncCommand = new AsyncCommand<>(sut); + + asyncCommand.thenRun(() -> someList.add("")); + asyncCommand.complete(); + asyncCommand.await(1, TimeUnit.MINUTES); + + assertThat(sut.isCompleted()).isTrue(); + assertThat(someList.size()).describedAs("Inner listener has to add one element").isEqualTo(1); + } +} diff --git a/src/test/java/io/lettuce/core/cluster/ClusterDistributionChannelWriterUnitTests.java b/src/test/java/io/lettuce/core/cluster/ClusterDistributionChannelWriterUnitTests.java new file mode 100644 index 0000000000..e7564ee1c1 --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/ClusterDistributionChannelWriterUnitTests.java @@ -0,0 +1,192 @@ +/* + * Copyright 2011-2020 the original author or authors. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.mockito.ArgumentMatchers.any; +import static org.mockito.ArgumentMatchers.anyInt; +import static org.mockito.ArgumentMatchers.anyList; +import static org.mockito.ArgumentMatchers.anyString; +import static org.mockito.Mockito.never; +import static org.mockito.Mockito.times; +import static org.mockito.Mockito.verify; +import static org.mockito.Mockito.when; + +import java.util.Arrays; +import java.util.Collections; +import java.util.concurrent.CompletableFuture; + +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; +import org.mockito.ArgumentMatchers; +import org.mockito.InjectMocks; +import org.mockito.Mock; +import org.mockito.junit.jupiter.MockitoExtension; + +import io.lettuce.core.RedisChannelWriter; +import io.lettuce.core.StatefulRedisConnectionImpl; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.cluster.ClusterConnectionProvider.Intent; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.internal.HostAndPort; +import io.lettuce.core.output.ValueOutput; +import io.lettuce.core.protocol.*; + +/** + * Unit tests for {@link ClusterDistributionChannelWriter}. 
+ * + * @author Mark Paluch + * @author koisyu + */ +@ExtendWith(MockitoExtension.class) +class ClusterDistributionChannelWriterUnitTests { + + @Mock + private RedisChannelWriter defaultWriter; + + @Mock + private ClusterEventListener clusterEventListener; + + @Mock + private StatefulRedisConnectionImpl connection; + + @Mock + private ClusterNodeEndpoint clusterNodeEndpoint; + + @Mock + private CompletableFuture> connectFuture; + + @Mock + private PooledClusterConnectionProvider pooledClusterConnectionProvider; + + @InjectMocks + private ClusterDistributionChannelWriter clusterDistributionChannelWriter; + + @Test + void shouldParseAskTargetCorrectly() { + + HostAndPort askTarget = ClusterDistributionChannelWriter.getAskTarget("ASK 1234-2020 127.0.0.1:6381"); + + assertThat(askTarget.getHostText()).isEqualTo("127.0.0.1"); + assertThat(askTarget.getPort()).isEqualTo(6381); + } + + @Test + void shouldParseIPv6AskTargetCorrectly() { + + HostAndPort askTarget = ClusterDistributionChannelWriter.getAskTarget("ASK 1234-2020 1:2:3:4::6:6381"); + + assertThat(askTarget.getHostText()).isEqualTo("1:2:3:4::6"); + assertThat(askTarget.getPort()).isEqualTo(6381); + } + + @Test + void shouldParseMovedTargetCorrectly() { + + HostAndPort moveTarget = ClusterDistributionChannelWriter.getMoveTarget("MOVED 1234-2020 127.0.0.1:6381"); + + assertThat(moveTarget.getHostText()).isEqualTo("127.0.0.1"); + assertThat(moveTarget.getPort()).isEqualTo(6381); + } + + @Test + void shouldParseIPv6MovedTargetCorrectly() { + + HostAndPort moveTarget = ClusterDistributionChannelWriter.getMoveTarget("MOVED 1234-2020 1:2:3:4::6:6381"); + + assertThat(moveTarget.getHostText()).isEqualTo("1:2:3:4::6"); + assertThat(moveTarget.getPort()).isEqualTo(6381); + } + + @Test + void shouldReturnIntentForWriteCommand() { + + RedisCommand set = new Command<>(CommandType.SET, null); + RedisCommand mset = new Command<>(CommandType.MSET, null); + + assertThat(ClusterDistributionChannelWriter.getIntent(Arrays.asList(set, mset))).isEqualTo(Intent.WRITE); + + assertThat(ClusterDistributionChannelWriter.getIntent(Collections.singletonList(set))).isEqualTo(Intent.WRITE); + } + + @Test + void shouldReturnDefaultIntentForNoCommands() { + + assertThat(ClusterDistributionChannelWriter.getIntent(Collections.emptyList())).isEqualTo(Intent.WRITE); + } + + @Test + void shouldReturnIntentForReadCommand() { + + RedisCommand get = new Command<>(CommandType.GET, null); + RedisCommand mget = new Command<>(CommandType.MGET, null); + + assertThat(ClusterDistributionChannelWriter.getIntent(Arrays.asList(get, mget))).isEqualTo(Intent.READ); + + assertThat(ClusterDistributionChannelWriter.getIntent(Collections.singletonList(get))).isEqualTo(Intent.READ); + } + + @Test + void shouldReturnIntentForMixedCommands() { + + RedisCommand set = new Command<>(CommandType.SET, null); + RedisCommand mget = new Command<>(CommandType.MGET, null); + + assertThat(ClusterDistributionChannelWriter.getIntent(Arrays.asList(set, mget))).isEqualTo(Intent.WRITE); + + assertThat(ClusterDistributionChannelWriter.getIntent(Collections.singletonList(set))).isEqualTo(Intent.WRITE); + } + + @Test + void shouldWriteCommandListWhenAsking() { + verifyWriteCommandCountWhenRedirecting(false); + } + + @Test + void shouldWriteOneCommandWhenMoved() { + verifyWriteCommandCountWhenRedirecting(true); + } + + private void verifyWriteCommandCountWhenRedirecting(boolean isMoved) { + + String outputError = isMoved ? 
"MOVED 1234 127.0.0.1:6379" : "ASK 1234 127.0.0.1:6379"; + + CommandArgs commandArgs = new CommandArgs<>(StringCodec.UTF8).addKey("KEY"); + ValueOutput valueOutput = new ValueOutput<>(StringCodec.UTF8); + Command command = new Command<>(CommandType.GET, valueOutput, commandArgs); + AsyncCommand asyncCommand = new AsyncCommand<>(command); + ClusterCommand clusterCommand = new ClusterCommand<>(asyncCommand, defaultWriter, 2); + clusterCommand.getOutput().setError(outputError); + clusterDistributionChannelWriter.setClusterConnectionProvider(pooledClusterConnectionProvider); + + when(connectFuture.isDone()).thenReturn(true); + when(connectFuture.isCompletedExceptionally()).thenReturn(false); + when(connectFuture.join()).thenReturn(connection); + when(pooledClusterConnectionProvider.getConnectionAsync(any(Intent.class), anyString(), anyInt())) + .thenReturn(connectFuture); + when(connection.getChannelWriter()).thenReturn(clusterNodeEndpoint); + + clusterDistributionChannelWriter.write(clusterCommand); + + if (isMoved) { + verify(clusterNodeEndpoint, never()).write(anyList()); + verify(clusterNodeEndpoint, times(1)).write(ArgumentMatchers.> any()); + } else { + verify(clusterNodeEndpoint, times(1)).write(anyList()); + verify(clusterNodeEndpoint, never()).write(ArgumentMatchers.> any()); + } + } +} diff --git a/src/test/java/io/lettuce/core/cluster/ClusterNodeEndpointUnitTests.java b/src/test/java/io/lettuce/core/cluster/ClusterNodeEndpointUnitTests.java new file mode 100644 index 0000000000..9ecba5f1d0 --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/ClusterNodeEndpointUnitTests.java @@ -0,0 +1,162 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + */ +package io.lettuce.core.cluster; + +import static org.assertj.core.api.Assertions.assertThatThrownBy; +import static org.assertj.core.api.AssertionsForClassTypes.assertThat; +import static org.mockito.Matchers.any; +import static org.mockito.Mockito.verify; +import static org.mockito.Mockito.verifyZeroInteractions; +import static org.mockito.Mockito.when; + +import java.util.Queue; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; +import org.mockito.Mock; +import org.mockito.junit.jupiter.MockitoExtension; +import org.springframework.test.util.ReflectionTestUtils; + +import io.lettuce.core.ClientOptions; +import io.lettuce.core.RedisChannelWriter; +import io.lettuce.core.RedisException; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.output.StatusOutput; +import io.lettuce.core.protocol.AsyncCommand; +import io.lettuce.core.protocol.Command; +import io.lettuce.core.protocol.CommandType; +import io.lettuce.core.protocol.RedisCommand; +import io.lettuce.core.resource.ClientResources; +import io.lettuce.test.TestFutures; + +/** + * @author Mark Paluch + */ +@ExtendWith(MockitoExtension.class) +class ClusterNodeEndpointUnitTests { + + private AsyncCommand command = new AsyncCommand<>( + new Command<>(CommandType.APPEND, new StatusOutput<>(StringCodec.UTF8), null)); + + private Queue> disconnectedBuffer; + + @Mock + private ClientOptions clientOptions; + + @Mock + private ClientResources clientResources; + + @Mock + private RedisChannelWriter clusterChannelWriter; + + private ClusterNodeEndpoint sut; + + @BeforeEach + void before() { + + when(clientOptions.getRequestQueueSize()).thenReturn(1000); + when(clientOptions.getDisconnectedBehavior()).thenReturn(ClientOptions.DisconnectedBehavior.DEFAULT); + + prepareNewEndpoint(); + } + + @Test + void closeWithoutCommands() { + + sut.close(); + verifyZeroInteractions(clusterChannelWriter); + } + + @Test + void closeWithQueuedCommands() { + + disconnectedBuffer.add(command); + + sut.close(); + + verify(clusterChannelWriter).write(command); + } + + @Test + void closeWithCancelledQueuedCommands() { + + disconnectedBuffer.add(command); + command.cancel(); + + sut.close(); + + verifyZeroInteractions(clusterChannelWriter); + } + + @Test + void closeWithQueuedCommandsFails() { + + disconnectedBuffer.add(command); + when(clusterChannelWriter.write(any(RedisCommand.class))).thenThrow(new RedisException("meh")); + + sut.close(); + + assertThat(command.isDone()).isTrue(); + + assertThatThrownBy(() -> TestFutures.awaitOrTimeout(command)).isInstanceOf(RedisException.class); + } + + @Test + void closeWithBufferedCommands() { + + when(clientOptions.getDisconnectedBehavior()).thenReturn(ClientOptions.DisconnectedBehavior.ACCEPT_COMMANDS); + prepareNewEndpoint(); + + sut.write(command); + + sut.close(); + + verify(clusterChannelWriter).write(command); + } + + @Test + void closeWithCancelledBufferedCommands() { + + when(clientOptions.getDisconnectedBehavior()).thenReturn(ClientOptions.DisconnectedBehavior.ACCEPT_COMMANDS); + prepareNewEndpoint(); + + sut.write(command); + command.cancel(); + + sut.close(); + + verifyZeroInteractions(clusterChannelWriter); + } + + @Test + void closeWithBufferedCommandsFails() { + + when(clientOptions.getDisconnectedBehavior()).thenReturn(ClientOptions.DisconnectedBehavior.ACCEPT_COMMANDS); + prepareNewEndpoint(); + + sut.write(command); + 
when(clusterChannelWriter.write(any(RedisCommand.class))).thenThrow(new RedisException("")); + + sut.close(); + + assertThatThrownBy(() -> TestFutures.awaitOrTimeout(command)).isInstanceOf(RedisException.class); + } + + private void prepareNewEndpoint() { + sut = new ClusterNodeEndpoint(clientOptions, clientResources, clusterChannelWriter); + disconnectedBuffer = (Queue) ReflectionTestUtils.getField(sut, "disconnectedBuffer"); + } +} diff --git a/src/test/java/io/lettuce/core/cluster/ClusterPartiallyDownIntegrationTests.java b/src/test/java/io/lettuce/core/cluster/ClusterPartiallyDownIntegrationTests.java new file mode 100644 index 0000000000..0ba124f71b --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/ClusterPartiallyDownIntegrationTests.java @@ -0,0 +1,138 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Fail.fail; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.HashSet; +import java.util.List; + +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeAll; +import org.junit.jupiter.api.Test; + +import io.lettuce.core.RedisConnectionException; +import io.lettuce.core.RedisException; +import io.lettuce.core.RedisURI; +import io.lettuce.core.TestSupport; +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; +import io.lettuce.core.cluster.models.partitions.Partitions; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; +import io.lettuce.core.internal.LettuceLists; +import io.lettuce.core.resource.ClientResources; +import io.lettuce.test.resource.TestClientResources; +import io.lettuce.test.settings.TestSettings; + +/** + * @author Mark Paluch + */ +class ClusterPartiallyDownIntegrationTests extends TestSupport { + + private static ClientResources clientResources; + + private static int port1 = 7579; + private static int port2 = 7580; + private static int port3 = 7581; + private static int port4 = 7582; + + private static final RedisURI URI_1 = RedisURI.create(TestSettings.host(), port1); + private static final RedisURI URI_2 = RedisURI.create(TestSettings.host(), port2); + private static final RedisURI URI_3 = RedisURI.create(TestSettings.host(), port3); + private static final RedisURI URI_4 = RedisURI.create(TestSettings.host(), port4); + + private RedisClusterClient redisClusterClient; + + @BeforeAll + static void beforeClass() { + clientResources = TestClientResources.get(); + } + + @AfterEach + void after() { + redisClusterClient.shutdown(); + } + + @Test + void connectToPartiallyDownCluster() { + + List seed = LettuceLists.unmodifiableList(URI_1, URI_2, URI_3, URI_4); + redisClusterClient = RedisClusterClient.create(clientResources, seed); + StatefulRedisClusterConnection connection = redisClusterClient.connect(); + + assertThat(connection.sync().ping()).isEqualTo("PONG"); + + 
connection.close(); + } + + @Test + void operateOnPartiallyDownCluster() { + + List seed = LettuceLists.unmodifiableList(URI_1, URI_2, URI_3, URI_4); + redisClusterClient = RedisClusterClient.create(clientResources, seed); + StatefulRedisClusterConnection connection = redisClusterClient.connect(); + + String key_10439 = "aaa"; + assertThat(SlotHash.getSlot(key_10439)).isEqualTo(10439); + + try { + connection.sync().get(key_10439); + fail("Missing RedisException"); + } catch (RedisException e) { + assertThat(e).hasRootCauseInstanceOf(IOException.class); + } + + connection.close(); + } + + @Test + void seedNodesAreOffline() { + + List seed = LettuceLists.unmodifiableList(URI_1, URI_2, URI_3); + redisClusterClient = RedisClusterClient.create(clientResources, seed); + + try { + redisClusterClient.connect(); + fail("Missing RedisException"); + } catch (RedisException e) { + + assertThat(e).isInstanceOf(RedisConnectionException.class) + .hasMessageStartingWith("Unable to establish a connection to Redis Cluster"); + } + } + + @Test + void partitionNodesAreOffline() { + + List seed = LettuceLists.unmodifiableList(URI_1, URI_2, URI_3); + redisClusterClient = RedisClusterClient.create(clientResources, seed); + + Partitions partitions = new Partitions(); + partitions.addPartition(new RedisClusterNode(URI_1, "a", true, null, 0, 0, 0, new ArrayList<>(), new HashSet<>())); + partitions.addPartition(new RedisClusterNode(URI_2, "b", true, null, 0, 0, 0, new ArrayList<>(), new HashSet<>())); + + redisClusterClient.setPartitions(partitions); + + try { + redisClusterClient.connect(); + fail("Missing RedisConnectionException"); + } catch (RedisConnectionException e) { + assertThat(e).hasRootCauseInstanceOf(IOException.class); + } + } +} diff --git a/src/test/java/io/lettuce/core/cluster/ClusterReactiveCommandIntegrationTests.java b/src/test/java/io/lettuce/core/cluster/ClusterReactiveCommandIntegrationTests.java new file mode 100644 index 0000000000..34ea207a40 --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/ClusterReactiveCommandIntegrationTests.java @@ -0,0 +1,112 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.cluster; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.util.List; + +import javax.inject.Inject; + +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; + +import reactor.test.StepVerifier; +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; +import io.lettuce.core.cluster.api.reactive.RedisClusterReactiveCommands; +import io.lettuce.core.cluster.api.sync.RedisClusterCommands; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; +import io.lettuce.core.cluster.models.slots.ClusterSlotRange; +import io.lettuce.core.cluster.models.slots.ClusterSlotsParser; +import io.lettuce.test.LettuceExtension; + +/** + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +class ClusterReactiveCommandIntegrationTests { + + private final RedisClusterClient clusterClient; + private final RedisClusterReactiveCommands reactive; + private final RedisClusterCommands sync; + + @Inject + ClusterReactiveCommandIntegrationTests(RedisClusterClient clusterClient, + StatefulRedisClusterConnection connection) { + this.clusterClient = clusterClient; + + this.reactive = connection.reactive(); + this.sync = connection.sync(); + } + + @Test + void testClusterBumpEpoch() { + StepVerifier.create(reactive.clusterBumpepoch()) + .consumeNextWith(actual -> assertThat(actual).matches("(BUMPED|STILL).*")).verifyComplete(); + } + + @Test + void testClusterInfo() { + + StepVerifier.create(reactive.clusterInfo()).consumeNextWith(actual -> { + assertThat(actual).contains("cluster_known_nodes:"); + assertThat(actual).contains("cluster_slots_fail:0"); + assertThat(actual).contains("cluster_state:"); + }).verifyComplete(); + } + + @Test + void testClusterNodes() { + + StepVerifier.create(reactive.clusterNodes()).consumeNextWith(actual -> { + assertThat(actual).contains("connected"); + assertThat(actual).contains("master"); + assertThat(actual).contains("myself"); + }).verifyComplete(); + } + + @Test + void testAsking() { + StepVerifier.create(reactive.asking()).expectNext("OK").verifyComplete(); + } + + @Test + void testClusterSlots() { + + List reply = reactive.clusterSlots().collectList().block(); + assertThat(reply.size()).isGreaterThan(1); + + List parse = ClusterSlotsParser.parse(reply); + assertThat(parse).hasSize(2); + + ClusterSlotRange clusterSlotRange = parse.get(0); + assertThat(clusterSlotRange.getFrom()).isEqualTo(0); + assertThat(clusterSlotRange.getTo()).isEqualTo(11999); + + assertThat(clusterSlotRange.toString()).contains(ClusterSlotRange.class.getSimpleName()); + } + + @Test + void clusterSlaves() { + + RedisClusterNode master = clusterClient.getPartitions().stream().filter(it -> it.is(RedisClusterNode.NodeFlag.MASTER)) + .findFirst().get(); + + List result = reactive.clusterSlaves(master.getNodeId()).collectList().block(); + + assertThat(result.size()).isGreaterThan(0); + } +} diff --git a/src/test/java/com/lambdaworks/redis/cluster/ClusterRule.java b/src/test/java/io/lettuce/core/cluster/ClusterRule.java similarity index 79% rename from src/test/java/com/lambdaworks/redis/cluster/ClusterRule.java rename to src/test/java/io/lettuce/core/cluster/ClusterRule.java index d3c8d3cb9a..df475f95f6 100644 --- a/src/test/java/com/lambdaworks/redis/cluster/ClusterRule.java +++ b/src/test/java/io/lettuce/core/cluster/ClusterRule.java @@ -1,4 +1,19 @@ -package com.lambdaworks.redis.cluster; +/* + * Copyright 2011-2020 the original author or authors. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; import java.net.InetSocketAddress; import java.util.ArrayList; @@ -15,12 +30,13 @@ import org.junit.runner.Description; import org.junit.runners.model.Statement; -import com.lambdaworks.redis.api.async.RedisAsyncCommands; -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.cluster.api.async.RedisClusterAsyncCommands; -import com.lambdaworks.redis.cluster.models.partitions.ClusterPartitionParser; -import com.lambdaworks.redis.cluster.models.partitions.Partitions; -import com.lambdaworks.redis.cluster.models.partitions.RedisClusterNode; +import io.lettuce.core.api.async.RedisAsyncCommands; +import io.lettuce.core.api.async.RedisServerAsyncCommands; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.cluster.api.async.RedisClusterAsyncCommands; +import io.lettuce.core.cluster.models.partitions.ClusterPartitionParser; +import io.lettuce.core.cluster.models.partitions.Partitions; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; /** * @author Mark Paluch @@ -47,7 +63,7 @@ public Statement apply(final Statement base, Description description) { final Statement beforeCluster = new Statement() { @Override - public void evaluate() throws Throwable { + public void evaluate() { flushdb(); } }; @@ -64,7 +80,7 @@ public void evaluate() throws Throwable { } /** - * + * * @return true if the cluster state is {@code ok} and there are no failing nodes */ public boolean isStable() { @@ -100,7 +116,7 @@ public boolean isStable() { /** * Flush data on all nodes, ignore failures. */ - public void flushdb() { + private void flushdb() { onAllConnections(c -> c.flushdb(), true); } @@ -108,13 +124,14 @@ public void flushdb() { * Cluster reset on all nodes. */ public void clusterReset() { + onAllConnections(RedisServerAsyncCommands::flushall, true); onAllConnections(c -> c.clusterReset(true)); onAllConnections(RedisClusterAsyncCommands::clusterFlushslots); } /** * Meet on all nodes. - * + * * @param host * @param port */ @@ -147,7 +164,6 @@ private void onAllConnections(Function, Futu } } - private void await(List> futures, boolean ignoreExecutionException) throws InterruptedException, java.util.concurrent.ExecutionException, java.util.concurrent.TimeoutException { for (Future future : futures) { diff --git a/src/test/java/io/lettuce/core/cluster/ClusterSetup.java b/src/test/java/io/lettuce/core/cluster/ClusterSetup.java new file mode 100644 index 0000000000..d70c0f77d3 --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/ClusterSetup.java @@ -0,0 +1,138 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import java.util.stream.Stream; + +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; +import io.lettuce.core.cluster.api.async.RedisAdvancedClusterAsyncCommands; +import io.lettuce.core.cluster.api.async.RedisClusterAsyncCommands; +import io.lettuce.core.cluster.api.sync.RedisClusterCommands; +import io.lettuce.core.cluster.models.partitions.Partitions; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; +import io.lettuce.test.TestFutures; +import io.lettuce.test.Wait; +import io.lettuce.test.settings.TestSettings; + +/** + * @author Mark Paluch + */ +class ClusterSetup { + + /** + * Setup a cluster consisting of two members (see {@link ClusterTestSettings#port5} to {@link ClusterTestSettings#port6}). + * Two masters (0-11999 and 12000-16383) + * + * @param clusterRule + */ + public static void setup2Masters(ClusterRule clusterRule) { + + clusterRule.clusterReset(); + clusterRule.meet(ClusterTestSettings.host, ClusterTestSettings.port5); + clusterRule.meet(ClusterTestSettings.host, ClusterTestSettings.port6); + + RedisAdvancedClusterAsyncCommands connection = clusterRule.getClusterClient().connect().async(); + Wait.untilTrue(() -> { + + clusterRule.getClusterClient().reloadPartitions(); + return clusterRule.getClusterClient().getPartitions().size() == 2; + + }).waitOrTimeout(); + + Partitions partitions = clusterRule.getClusterClient().getPartitions(); + for (RedisClusterNode partition : partitions) { + + if (!partition.getSlots().isEmpty()) { + RedisClusterAsyncCommands nodeConnection = connection.getConnection(partition.getNodeId()); + + for (Integer slot : partition.getSlots()) { + nodeConnection.clusterDelSlots(slot); + } + } + } + + RedisClusterAsyncCommands node1 = connection.getConnection(ClusterTestSettings.host, + ClusterTestSettings.port5); + node1.clusterAddSlots(ClusterTestSettings.createSlots(0, 12000)); + + RedisClusterAsyncCommands node2 = connection.getConnection(ClusterTestSettings.host, + ClusterTestSettings.port6); + node2.clusterAddSlots(ClusterTestSettings.createSlots(12000, 16384)); + + Wait.untilTrue(clusterRule::isStable).waitOrTimeout(); + + Wait.untilEquals(2L, () -> { + clusterRule.getClusterClient().reloadPartitions(); + + return partitionStream(clusterRule) + .filter(redisClusterNode -> redisClusterNode.is(RedisClusterNode.NodeFlag.MASTER)).count(); + }).waitOrTimeout(); + + connection.getStatefulConnection().close(); + } + + /** + * Setup a cluster consisting of two members (see {@link ClusterTestSettings#port5} to {@link ClusterTestSettings#port6}). + * One master (0-16383) and one replica. 
+ * + * @param clusterRule + */ + public static void setupMasterWithReplica(ClusterRule clusterRule) { + + clusterRule.clusterReset(); + clusterRule.meet(ClusterTestSettings.host, ClusterTestSettings.port5); + clusterRule.meet(ClusterTestSettings.host, ClusterTestSettings.port6); + + RedisAdvancedClusterAsyncCommands connection = clusterRule.getClusterClient().connect().async(); + StatefulRedisClusterConnection statefulConnection = connection.getStatefulConnection(); + + Wait.untilEquals(2, () -> { + clusterRule.getClusterClient().reloadPartitions(); + return clusterRule.getClusterClient().getPartitions().size(); + }).waitOrTimeout(); + + RedisClusterCommands node1 = statefulConnection + .getConnection(TestSettings.hostAddr(), ClusterTestSettings.port5).sync(); + node1.clusterAddSlots(ClusterTestSettings.createSlots(0, 16384)); + + Wait.untilTrue(clusterRule::isStable).waitOrTimeout(); + + TestFutures.awaitOrTimeout(connection.getConnection(ClusterTestSettings.host, ClusterTestSettings.port6) + .clusterReplicate( + node1.clusterMyId())); + + clusterRule.getClusterClient().reloadPartitions(); + + Wait.untilEquals(1L, () -> { + clusterRule.getClusterClient().reloadPartitions(); + return partitionStream(clusterRule) + .filter(redisClusterNode -> redisClusterNode.is(RedisClusterNode.NodeFlag.MASTER)).count(); + }).waitOrTimeout(); + + Wait.untilEquals(1L, () -> { + clusterRule.getClusterClient().reloadPartitions(); + return partitionStream(clusterRule).filter(redisClusterNode -> redisClusterNode.is(RedisClusterNode.NodeFlag.SLAVE)) + .count(); + }).waitOrTimeout(); + + connection.getStatefulConnection().close(); + } + + private static Stream partitionStream(ClusterRule clusterRule) { + return clusterRule.getClusterClient().getPartitions().getPartitions().stream(); + } + +} diff --git a/src/test/java/io/lettuce/core/cluster/ClusterTestSettings.java b/src/test/java/io/lettuce/core/cluster/ClusterTestSettings.java new file mode 100644 index 0000000000..ceefae308d --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/ClusterTestSettings.java @@ -0,0 +1,63 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.cluster; + +import io.lettuce.core.TestSupport; +import io.lettuce.test.settings.TestSettings; + +/** + * @author Mark Paluch + */ +public abstract class ClusterTestSettings extends TestSupport { + + public static final String host = TestSettings.hostAddr(); + + public static final int SLOT_A = SlotHash.getSlot("a".getBytes()); + public static final int SLOT_B = SlotHash.getSlot("b".getBytes()); + + // default test cluster 2 masters + 2 slaves + public static final int port1 = TestSettings.port(900); + public static final int port2 = port1 + 1; + public static final int port3 = port1 + 2; + public static final int port4 = port1 + 3; + + // master+replica or master+master + public static final int port5 = port1 + 4; + public static final int port6 = port1 + 5; + + // auth cluster + public static final int port7 = port1 + 6; + public static final String KEY_A = "a"; + public static final String KEY_B = "b"; + public static final String KEY_D = "d"; + + /** + * Don't allow instances. + */ + private ClusterTestSettings() { + } + + public static int[] createSlots(int from, int to) { + int[] result = new int[to - from]; + int counter = 0; + for (int i = from; i < to; i++) { + result[counter++] = i; + + } + return result; + } + +} diff --git a/src/test/java/io/lettuce/core/cluster/ClusterTestUtil.java b/src/test/java/io/lettuce/core/cluster/ClusterTestUtil.java new file mode 100644 index 0000000000..48c233b879 --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/ClusterTestUtil.java @@ -0,0 +1,98 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import java.lang.reflect.InvocationHandler; +import java.lang.reflect.Proxy; + +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; +import io.lettuce.core.cluster.api.sync.RedisClusterCommands; +import io.lettuce.core.cluster.models.partitions.ClusterPartitionParser; +import io.lettuce.core.cluster.models.partitions.Partitions; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; +import io.lettuce.test.RoutingInvocationHandler; + +/** + * @author Mark Paluch + * @since 3.0 + */ +public class ClusterTestUtil { + + /** + * Retrieve the cluster node Id from the {@code connection}. + * + * @param connection + * @return + */ + public static String getNodeId(RedisClusterCommands connection) { + RedisClusterNode ownPartition = getOwnPartition(connection); + if (ownPartition != null) { + return ownPartition.getNodeId(); + } + + return null; + } + + /** + * Retrieve the {@link RedisClusterNode} from the {@code connection}. 
+ * + * @param connection + * @return + */ + public static RedisClusterNode getOwnPartition(RedisClusterCommands connection) { + Partitions partitions = ClusterPartitionParser.parse(connection.clusterNodes()); + + for (RedisClusterNode partition : partitions) { + if (partition.getFlags().contains(RedisClusterNode.NodeFlag.MYSELF)) { + return partition; + } + } + return null; + } + + /** + * Flush databases of all cluster nodes. + * + * @param connection the cluster connection + */ + public static void flushDatabaseOfAllNodes(StatefulRedisClusterConnection connection) { + for (RedisClusterNode node : connection.getPartitions()) { + try { + connection.getConnection(node.getNodeId()).sync().flushall(); + connection.getConnection(node.getNodeId()).sync().flushdb(); + } catch (Exception o_O) { + // ignore + } + } + } + + /** + * Create an API wrapper which exposes the {@link RedisCommands} API by using internally a cluster connection. + * + * @param connection + * @return + */ + public static RedisCommands redisCommandsOverCluster( + StatefulRedisClusterConnection connection) { + StatefulRedisClusterConnectionImpl clusterConnection = (StatefulRedisClusterConnectionImpl) connection; + + InvocationHandler h = new RoutingInvocationHandler(connection.async(), + clusterConnection.syncInvocationHandler()); + return (RedisCommands) Proxy.newProxyInstance(ClusterTestUtil.class.getClassLoader(), + new Class[] { RedisCommands.class }, h); + } +} diff --git a/src/test/java/io/lettuce/core/cluster/ClusterTopologyRefreshOptionsUnitTests.java b/src/test/java/io/lettuce/core/cluster/ClusterTopologyRefreshOptionsUnitTests.java new file mode 100644 index 0000000000..57ffa43caa --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/ClusterTopologyRefreshOptionsUnitTests.java @@ -0,0 +1,105 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.cluster; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.time.Duration; +import java.util.concurrent.TimeUnit; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.cluster.ClusterTopologyRefreshOptions.RefreshTrigger; + +/** + * @author Mark Paluch + */ +class ClusterTopologyRefreshOptionsUnitTests { + + @Test + void testBuilder() { + + ClusterTopologyRefreshOptions options = ClusterTopologyRefreshOptions.builder()// + .enablePeriodicRefresh(true).refreshPeriod(10, TimeUnit.MINUTES)// + .dynamicRefreshSources(false) // + .enableAdaptiveRefreshTrigger(RefreshTrigger.MOVED_REDIRECT)// + .adaptiveRefreshTriggersTimeout(15, TimeUnit.MILLISECONDS)// + .closeStaleConnections(false)// + .refreshTriggersReconnectAttempts(2)// + .build(); + + assertThat(options.getRefreshPeriod()).isEqualTo(Duration.ofMinutes(10)); + assertThat(options.isCloseStaleConnections()).isEqualTo(false); + assertThat(options.isPeriodicRefreshEnabled()).isTrue(); + assertThat(options.useDynamicRefreshSources()).isFalse(); + assertThat(options.getAdaptiveRefreshTimeout()).isEqualTo(Duration.ofMillis(15)); + assertThat(options.getAdaptiveRefreshTriggers()).containsOnly(RefreshTrigger.MOVED_REDIRECT); + assertThat(options.getRefreshTriggersReconnectAttempts()).isEqualTo(2); + } + + @Test + void testCopy() { + + ClusterTopologyRefreshOptions master = ClusterTopologyRefreshOptions.builder()// + .enablePeriodicRefresh(true).refreshPeriod(10, TimeUnit.MINUTES)// + .dynamicRefreshSources(false) // + .enableAdaptiveRefreshTrigger(RefreshTrigger.MOVED_REDIRECT)// + .adaptiveRefreshTriggersTimeout(15, TimeUnit.MILLISECONDS)// + .closeStaleConnections(false)// + .refreshTriggersReconnectAttempts(2)// + .build(); + + ClusterTopologyRefreshOptions options = ClusterTopologyRefreshOptions.copyOf(master); + + assertThat(options.getRefreshPeriod()).isEqualTo(Duration.ofMinutes(10)); + assertThat(options.isCloseStaleConnections()).isEqualTo(false); + assertThat(options.isPeriodicRefreshEnabled()).isTrue(); + assertThat(options.useDynamicRefreshSources()).isFalse(); + assertThat(options.getAdaptiveRefreshTimeout()).isEqualTo(Duration.ofMillis(15)); + assertThat(options.getAdaptiveRefreshTriggers()).containsOnly(RefreshTrigger.MOVED_REDIRECT); + assertThat(options.getRefreshTriggersReconnectAttempts()).isEqualTo(2); + } + + @Test + void testDefault() { + + ClusterTopologyRefreshOptions options = ClusterTopologyRefreshOptions.create(); + + assertThat(options.getRefreshPeriod()).isEqualTo(ClusterTopologyRefreshOptions.DEFAULT_REFRESH_PERIOD_DURATION); + assertThat(options.isCloseStaleConnections()).isEqualTo(ClusterTopologyRefreshOptions.DEFAULT_CLOSE_STALE_CONNECTIONS); + assertThat(options.isPeriodicRefreshEnabled()) + .isEqualTo(ClusterTopologyRefreshOptions.DEFAULT_PERIODIC_REFRESH_ENABLED).isFalse(); + assertThat(options.useDynamicRefreshSources()).isEqualTo(ClusterTopologyRefreshOptions.DEFAULT_DYNAMIC_REFRESH_SOURCES) + .isTrue(); + assertThat(options.getAdaptiveRefreshTimeout()).isEqualTo( + ClusterTopologyRefreshOptions.DEFAULT_ADAPTIVE_REFRESH_TIMEOUT_DURATION); + assertThat(options.getAdaptiveRefreshTriggers()).isEqualTo( + ClusterTopologyRefreshOptions.DEFAULT_ADAPTIVE_REFRESH_TRIGGERS); + assertThat(options.getRefreshTriggersReconnectAttempts()).isEqualTo( + ClusterTopologyRefreshOptions.DEFAULT_REFRESH_TRIGGERS_RECONNECT_ATTEMPTS); + } + + @Test + void testEnabled() { + + ClusterTopologyRefreshOptions options = ClusterTopologyRefreshOptions.enabled(); + + 
assertThat(options.isPeriodicRefreshEnabled()).isTrue(); + assertThat(options.useDynamicRefreshSources()).isTrue(); + assertThat(options.getAdaptiveRefreshTriggers()).contains(RefreshTrigger.ASK_REDIRECT, RefreshTrigger.MOVED_REDIRECT, + RefreshTrigger.PERSISTENT_RECONNECTS); + } +} diff --git a/src/test/java/io/lettuce/core/cluster/ClusterTopologyRefreshSchedulerUnitTests.java b/src/test/java/io/lettuce/core/cluster/ClusterTopologyRefreshSchedulerUnitTests.java new file mode 100644 index 0000000000..60b1ee475d --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/ClusterTopologyRefreshSchedulerUnitTests.java @@ -0,0 +1,294 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.mockito.Matchers.any; +import static org.mockito.Mockito.never; +import static org.mockito.Mockito.times; +import static org.mockito.Mockito.verify; +import static org.mockito.Mockito.when; + +import java.time.Duration; +import java.util.concurrent.TimeUnit; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; +import org.mockito.ArgumentCaptor; +import org.mockito.Mock; +import org.mockito.junit.jupiter.MockitoExtension; +import org.mockito.junit.jupiter.MockitoSettings; +import org.mockito.quality.Strictness; + +import io.lettuce.core.cluster.models.partitions.Partitions; +import io.lettuce.core.event.EventBus; +import io.lettuce.core.event.cluster.AdaptiveRefreshTriggeredEvent; +import io.lettuce.core.resource.ClientResources; +import io.lettuce.test.Delay; +import io.netty.util.concurrent.EventExecutorGroup; + +/** + * Unit test for {@link ClusterTopologyRefreshScheduler}. 
+ * + * @author Mark Paluch + */ +@ExtendWith(MockitoExtension.class) +@MockitoSettings(strictness = Strictness.LENIENT) +class ClusterTopologyRefreshSchedulerUnitTests { + + private ClusterTopologyRefreshScheduler sut; + + private ClusterTopologyRefreshOptions immediateRefresh = ClusterTopologyRefreshOptions.builder() + .enablePeriodicRefresh(1, TimeUnit.MILLISECONDS).enableAllAdaptiveRefreshTriggers().build(); + + private ClusterClientOptions clusterClientOptions = ClusterClientOptions.builder().topologyRefreshOptions(immediateRefresh) + .build(); + + @Mock + private ClientResources clientResources; + + @Mock + private RedisClusterClient clusterClient; + + @Mock + private EventExecutorGroup eventExecutors; + + @Mock + private EventBus eventBus; + + @BeforeEach + void before() { + + when(clientResources.eventBus()).thenReturn(eventBus); + when(clientResources.eventExecutorGroup()).thenReturn(eventExecutors); + + sut = new ClusterTopologyRefreshScheduler(clusterClient::getClusterClientOptions, clusterClient::getPartitions, + clusterClient::refreshPartitionsAsync, clientResources); + } + + @Test + void runShouldSubmitRefreshShouldTrigger() { + + when(clusterClient.getClusterClientOptions()).thenReturn(clusterClientOptions); + + sut.run(); + verify(eventExecutors).submit(any(Runnable.class)); + } + + @Test + void runnableShouldCallPartitionRefresh() { + + when(clusterClient.getClusterClientOptions()).thenReturn(clusterClientOptions); + + when(eventExecutors.submit(any(Runnable.class))).then(invocation -> { + ((Runnable) invocation.getArguments()[0]).run(); + return null; + }); + + sut.run(); + + verify(clusterClient).refreshPartitionsAsync(); + } + + @Test + void shouldNotSubmitIfExecutorIsShuttingDown() { + + when(eventExecutors.isShuttingDown()).thenReturn(true); + + sut.run(); + verify(eventExecutors, never()).submit(any(Runnable.class)); + } + + @Test + void shouldNotSubmitIfExecutorIsShutdown() { + + when(eventExecutors.isShutdown()).thenReturn(true); + + sut.run(); + verify(eventExecutors, never()).submit(any(Runnable.class)); + } + + @Test + void shouldNotSubmitIfExecutorIsTerminated() { + + when(eventExecutors.isTerminated()).thenReturn(true); + + sut.run(); + verify(eventExecutors, never()).submit(any(Runnable.class)); + } + + @Test + void shouldTriggerRefreshOnAskRedirection() { + + ClusterTopologyRefreshOptions clusterTopologyRefreshOptions = ClusterTopologyRefreshOptions.builder() + .enableAllAdaptiveRefreshTriggers().build(); + + ClusterClientOptions clusterClientOptions = ClusterClientOptions.builder() + .topologyRefreshOptions(clusterTopologyRefreshOptions).build(); + + when(clusterClient.getClusterClientOptions()).thenReturn(clusterClientOptions); + + sut.onAskRedirection(); + verify(eventExecutors).submit(any(Runnable.class)); + } + + @Test + void shouldNotTriggerAdaptiveRefreshUsingDefaults() { + + ClusterTopologyRefreshOptions clusterTopologyRefreshOptions = ClusterTopologyRefreshOptions.create(); + + ClusterClientOptions clusterClientOptions = ClusterClientOptions.builder() + .topologyRefreshOptions(clusterTopologyRefreshOptions).build(); + + when(clusterClient.getClusterClientOptions()).thenReturn(clusterClientOptions); + + sut.onAskRedirection(); + verify(eventExecutors, never()).submit(any(Runnable.class)); + } + + @Test + void shouldTriggerRefreshOnMovedRedirection() { + + ClusterClientOptions clusterClientOptions = ClusterClientOptions.builder().topologyRefreshOptions(immediateRefresh) + .build(); + + 
when(clusterClient.getClusterClientOptions()).thenReturn(clusterClientOptions); + + sut.onMovedRedirection(); + verify(eventExecutors).submit(any(Runnable.class)); + } + + @Test + void shouldTriggerRefreshOnReconnect() { + + ClusterClientOptions clusterClientOptions = ClusterClientOptions.builder().topologyRefreshOptions(immediateRefresh) + .build(); + + when(clusterClient.getClusterClientOptions()).thenReturn(clusterClientOptions); + + sut.onReconnectAttempt(10); + verify(eventExecutors).submit(any(Runnable.class)); + } + + @Test + void shouldTriggerRefreshOnUncoveredSlot() { + + ClusterClientOptions clusterClientOptions = ClusterClientOptions.builder().topologyRefreshOptions(immediateRefresh) + .build(); + + when(clusterClient.getClusterClientOptions()).thenReturn(clusterClientOptions); + + sut.onUncoveredSlot(1234); + verify(eventExecutors).submit(any(Runnable.class)); + } + + @Test + void shouldTriggerRefreshOnUnknownNode() { + + ClusterClientOptions clusterClientOptions = ClusterClientOptions.builder().topologyRefreshOptions(immediateRefresh) + .build(); + + when(clusterClient.getClusterClientOptions()).thenReturn(clusterClientOptions); + + sut.onUnknownNode(); + verify(eventExecutors).submit(any(Runnable.class)); + } + + @Test + void shouldNotTriggerRefreshOnFirstReconnect() { + + ClusterClientOptions clusterClientOptions = ClusterClientOptions.builder().topologyRefreshOptions(immediateRefresh) + .build(); + + when(clusterClient.getClusterClientOptions()).thenReturn(clusterClientOptions); + + sut.onReconnectAttempt(1); + verify(eventExecutors, never()).submit(any(Runnable.class)); + } + + @Test + void shouldRateLimitAdaptiveRequests() { + + ClusterTopologyRefreshOptions adaptiveTimeout = ClusterTopologyRefreshOptions.builder().enablePeriodicRefresh(false) + .enableAllAdaptiveRefreshTriggers().adaptiveRefreshTriggersTimeout(50, TimeUnit.MILLISECONDS).build(); + + ClusterClientOptions clusterClientOptions = ClusterClientOptions.builder().topologyRefreshOptions(adaptiveTimeout) + .build(); + + when(clusterClient.getClusterClientOptions()).thenReturn(clusterClientOptions); + + for (int i = 0; i < 10; i++) { + sut.onAskRedirection(); + } + + Delay.delay(Duration.ofMillis(100)); + sut.onAskRedirection(); + + verify(eventExecutors, times(2)).submit(any(Runnable.class)); + } + + @Test + void shouldEmitAdaptiveRefreshEventOnSchedule() { + + ClusterClientOptions clusterClientOptions = ClusterClientOptions.builder().topologyRefreshOptions(immediateRefresh) + .build(); + + when(clusterClient.getClusterClientOptions()).thenReturn(clusterClientOptions); + + sut.onMovedRedirection(); + verify(eventExecutors).submit(any(Runnable.class)); + verify(eventBus).publish(any(AdaptiveRefreshTriggeredEvent.class)); + } + + @Test + void shouldScheduleRefreshViaAdaptiveRefreshTriggeredEvent() { + + ClusterClientOptions clusterClientOptions = ClusterClientOptions.builder().topologyRefreshOptions(immediateRefresh) + .build(); + when(clusterClient.getClusterClientOptions()).thenReturn(clusterClientOptions); + + sut.onMovedRedirection(); + + ArgumentCaptor captor = ArgumentCaptor.forClass(AdaptiveRefreshTriggeredEvent.class); + verify(eventBus).publish(captor.capture()); + + AdaptiveRefreshTriggeredEvent capture = captor.getValue(); + + capture.scheduleRefresh(); + verify(eventExecutors, times(2)).submit(any(Runnable.class)); + } + + @Test + void shouldRetrievePartitionsViaAdaptiveRefreshTriggeredEvent() { + + ClusterClientOptions clusterClientOptions = 
ClusterClientOptions.builder().topologyRefreshOptions(immediateRefresh) + .build(); + when(clusterClient.getClusterClientOptions()).thenReturn(clusterClientOptions); + + sut.onMovedRedirection(); + + ArgumentCaptor captor = ArgumentCaptor.forClass(AdaptiveRefreshTriggeredEvent.class); + verify(eventBus).publish(captor.capture()); + + AdaptiveRefreshTriggeredEvent capture = captor.getValue(); + + Partitions partitions = new Partitions(); + when(clusterClient.getPartitions()).thenReturn(partitions); + + assertThat(capture.getPartitions()).isSameAs(partitions); + } +} diff --git a/src/test/java/io/lettuce/core/cluster/CommandSetIntegrationTests.java b/src/test/java/io/lettuce/core/cluster/CommandSetIntegrationTests.java new file mode 100644 index 0000000000..eda77481fe --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/CommandSetIntegrationTests.java @@ -0,0 +1,67 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.util.List; + +import javax.inject.Inject; + +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.models.command.CommandDetail; +import io.lettuce.core.models.command.CommandDetailParser; +import io.lettuce.core.protocol.CommandType; +import io.lettuce.core.protocol.ProtocolKeyword; +import io.lettuce.test.LettuceExtension; + +/** + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +public class CommandSetIntegrationTests { + + private final RedisCommands redis; + + @Inject + CommandSetIntegrationTests(StatefulRedisConnection connection) { + this.redis = connection.sync(); + } + + @Test + void shouldDiscoverCommands() { + + List commandDetails = CommandDetailParser.parse(redis.command()); + CommandSet state = new CommandSet(commandDetails); + + assertThat(state.hasCommand(CommandType.GEOADD)).isTrue(); + assertThat(state.hasCommand(UnknownCommand.FOO)).isFalse(); + } + + enum UnknownCommand implements ProtocolKeyword { + + FOO; + + @Override + public byte[] getBytes() { + return name().getBytes(); + } + } +} diff --git a/src/test/java/io/lettuce/core/cluster/HealthyMajorityPartitionsConsensusUnitTests.java b/src/test/java/io/lettuce/core/cluster/HealthyMajorityPartitionsConsensusUnitTests.java new file mode 100644 index 0000000000..566ee907c1 --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/HealthyMajorityPartitionsConsensusUnitTests.java @@ -0,0 +1,112 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import static io.lettuce.core.cluster.PartitionsConsensusTestSupport.createMap; +import static io.lettuce.core.cluster.PartitionsConsensusTestSupport.createNode; +import static io.lettuce.core.cluster.PartitionsConsensusTestSupport.createPartitions; +import static org.assertj.core.api.Assertions.assertThat; + +import java.util.Arrays; +import java.util.Collections; +import java.util.Map; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.RedisURI; +import io.lettuce.core.cluster.models.partitions.Partitions; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; + +/** + * @author Mark Paluch + */ +class HealthyMajorityPartitionsConsensusUnitTests { + + private RedisClusterNode node1 = createNode(1); + private RedisClusterNode node2 = createNode(2); + private RedisClusterNode node3 = createNode(3); + private RedisClusterNode node4 = createNode(4); + private RedisClusterNode node5 = createNode(5); + + @Test + void sameSharedViewShouldDecideForHealthyNodes() { + + Partitions partitions1 = createPartitions(node1, node2, node3, node4, node5); + Partitions partitions2 = createPartitions(node1, node2, node3, node4, node5); + Partitions partitions3 = createPartitions(node1, node2, node3, node4, node5); + + Map map = createMap(partitions1, partitions2, partitions3); + + Partitions result = PartitionsConsensus.HEALTHY_MAJORITY.getPartitions(null, map); + + assertThat(Arrays.asList(partitions1, partitions2, partitions3)).contains(result); + } + + @Test + void unhealthyNodeViewShouldDecideForHealthyNodes() { + + Partitions partitions1 = createPartitions(node1, node2); + Partitions partitions2 = createPartitions(node2, node3, node4, node5); + Partitions partitions3 = createPartitions(node2, node3, node4, node5); + + Map map = createMap(partitions1, partitions2, partitions3); + + node2.setFlags(Collections.singleton(RedisClusterNode.NodeFlag.FAIL)); + node3.setFlags(Collections.singleton(RedisClusterNode.NodeFlag.FAIL)); + node4.setFlags(Collections.singleton(RedisClusterNode.NodeFlag.FAIL)); + node5.setFlags(Collections.singleton(RedisClusterNode.NodeFlag.FAIL)); + + Partitions result = PartitionsConsensus.HEALTHY_MAJORITY.getPartitions(null, map); + + assertThat(result).isSameAs(partitions1); + } + + @Test + void splitNodeViewShouldDecideForHealthyNodes() { + + Partitions partitions1 = createPartitions(node1, node2, node3); + Partitions partitions2 = createPartitions(); + Partitions partitions3 = createPartitions(node3, node4, node5); + + Map map = createMap(partitions1, partitions2, partitions3); + + node1.setFlags(Collections.singleton(RedisClusterNode.NodeFlag.FAIL)); + node2.setFlags(Collections.singleton(RedisClusterNode.NodeFlag.FAIL)); + node3.setFlags(Collections.singleton(RedisClusterNode.NodeFlag.FAIL)); + + Partitions result = PartitionsConsensus.HEALTHY_MAJORITY.getPartitions(null, map); + + assertThat(result).isSameAs(partitions3); + } + + @Test + void splitUnhealthyNodeViewShouldDecideForHealthyNodes() { + + Partitions partitions1 = createPartitions(node1, node2); + Partitions partitions2 = 
createPartitions(node2, node3); + Partitions partitions3 = createPartitions(node3, node4, node5); + + Map map = createMap(partitions1, partitions2, partitions3); + + node2.setFlags(Collections.singleton(RedisClusterNode.NodeFlag.FAIL)); + node3.setFlags(Collections.singleton(RedisClusterNode.NodeFlag.FAIL)); + node4.setFlags(Collections.singleton(RedisClusterNode.NodeFlag.FAIL)); + + Partitions result = PartitionsConsensus.HEALTHY_MAJORITY.getPartitions(null, map); + + assertThat(Arrays.asList(partitions1, partitions3)).contains(result); + } +} diff --git a/src/test/java/io/lettuce/core/cluster/KnownMajorityPartitionsConsensusUnitTests.java b/src/test/java/io/lettuce/core/cluster/KnownMajorityPartitionsConsensusUnitTests.java new file mode 100644 index 0000000000..305c259b40 --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/KnownMajorityPartitionsConsensusUnitTests.java @@ -0,0 +1,138 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import static io.lettuce.core.cluster.PartitionsConsensusTestSupport.createMap; +import static io.lettuce.core.cluster.PartitionsConsensusTestSupport.createNode; +import static io.lettuce.core.cluster.PartitionsConsensusTestSupport.createPartitions; +import static org.assertj.core.api.Assertions.assertThat; + +import java.util.Arrays; +import java.util.Map; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.RedisURI; +import io.lettuce.core.cluster.models.partitions.Partitions; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; + +/** + * @author Mark Paluch + */ +class KnownMajorityPartitionsConsensusUnitTests { + + private RedisClusterNode node1 = createNode(1); + private RedisClusterNode node2 = createNode(2); + private RedisClusterNode node3 = createNode(3); + private RedisClusterNode node4 = createNode(4); + private RedisClusterNode node5 = createNode(5); + + @Test + void sameSharedViewShouldDecideForKnownMajority() { + + Partitions current = createPartitions(node1, node2, node3, node4, node5); + + Partitions partitions1 = createPartitions(node1, node2, node3, node4, node5); + Partitions partitions2 = createPartitions(node1, node2, node3, node4, node5); + Partitions partitions3 = createPartitions(node1, node2, node3, node4, node5); + + Map map = createMap(partitions1, partitions2, partitions3); + + Partitions result = PartitionsConsensus.KNOWN_MAJORITY.getPartitions(current, map); + + assertThat(Arrays.asList(partitions1, partitions2, partitions3)).contains(result); + } + + @Test + void addedNodeViewShouldDecideForKnownMajority() { + + Partitions current = createPartitions(node1, node2, node3, node4); + + Partitions partitions1 = createPartitions(node1, node2, node3, node4, node5); + Partitions partitions2 = createPartitions(node1, node2, node3, node4, node5); + Partitions partitions3 = createPartitions(node1, node2, node3, node4, node5); + + Map map = createMap(partitions1, partitions2, partitions3); + + 
Partitions result = PartitionsConsensus.KNOWN_MAJORITY.getPartitions(current, map); + + assertThat(Arrays.asList(partitions1, partitions2, partitions3)).contains(result); + } + + @Test + void removedNodeViewShouldDecideForKnownMajority() { + + Partitions current = createPartitions(node1, node2, node3, node4, node5); + + Partitions partitions1 = createPartitions(node1, node2, node3, node4); + Partitions partitions2 = createPartitions(node1, node2, node3, node4); + Partitions partitions3 = createPartitions(node1, node2, node3, node4); + + Map map = createMap(partitions1, partitions2, partitions3); + + Partitions result = PartitionsConsensus.KNOWN_MAJORITY.getPartitions(current, map); + + assertThat(Arrays.asList(partitions1, partitions2, partitions3)).contains(result); + } + + @Test + void mixedViewShouldDecideForKnownMajority() { + + Partitions current = createPartitions(node1, node2, node3, node4, node5); + + Partitions partitions1 = createPartitions(node1, node2, node3, node4); + Partitions partitions2 = createPartitions(node1, node2, node3, node5); + Partitions partitions3 = createPartitions(node1, node2, node4, node5); + + Map map = createMap(partitions1, partitions2, partitions3); + + Partitions result = PartitionsConsensus.KNOWN_MAJORITY.getPartitions(current, map); + + assertThat(Arrays.asList(partitions1, partitions2, partitions3)).contains(result); + } + + @Test + void clusterSplitViewShouldDecideForKnownMajority() { + + Partitions current = createPartitions(node1, node2, node3, node4, node5); + + Partitions partitions1 = createPartitions(node1, node2); + Partitions partitions2 = createPartitions(node1, node2); + Partitions partitions3 = createPartitions(node1, node2, node3, node4, node5); + + Map map = createMap(partitions1, partitions2, partitions3); + + Partitions result = PartitionsConsensus.KNOWN_MAJORITY.getPartitions(current, map); + + assertThat(result).isEqualTo(partitions3).isNotEqualTo(partitions1); + } + + @Test + void strangeClusterSplitViewShouldDecideForKnownMajority() { + + Partitions current = createPartitions(node1, node2, node3, node4, node5); + + Partitions partitions1 = createPartitions(node1); + Partitions partitions2 = createPartitions(node2); + Partitions partitions3 = createPartitions(node3); + + Map map = createMap(partitions1, partitions2, partitions3); + + Partitions result = PartitionsConsensus.KNOWN_MAJORITY.getPartitions(current, map); + + assertThat(Arrays.asList(partitions1, partitions2, partitions3)).contains(result); + } +} diff --git a/src/test/java/io/lettuce/core/cluster/NodeSelectionAsyncIntegrationTests.java b/src/test/java/io/lettuce/core/cluster/NodeSelectionAsyncIntegrationTests.java new file mode 100644 index 0000000000..99af26897c --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/NodeSelectionAsyncIntegrationTests.java @@ -0,0 +1,316 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.cluster; + +import static io.lettuce.core.ScriptOutputType.STATUS; +import static org.assertj.core.api.Assertions.assertThat; + +import java.time.Duration; +import java.util.*; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.CompletionStage; +import java.util.concurrent.Future; +import java.util.stream.Collector; + +import javax.inject.Inject; + +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.core.TestSupport; +import io.lettuce.core.api.async.RedisAsyncCommands; +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; +import io.lettuce.core.cluster.api.async.AsyncExecutions; +import io.lettuce.core.cluster.api.async.AsyncNodeSelection; +import io.lettuce.core.cluster.api.async.RedisAdvancedClusterAsyncCommands; +import io.lettuce.core.cluster.api.async.RedisClusterAsyncCommands; +import io.lettuce.core.cluster.models.partitions.Partitions; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; +import io.lettuce.core.internal.LettuceSets; +import io.lettuce.test.Delay; +import io.lettuce.test.TestFutures; +import io.lettuce.test.LettuceExtension; +import io.lettuce.test.Wait; + +/** + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +class NodeSelectionAsyncIntegrationTests extends TestSupport { + + private final RedisClusterClient clusterClient; + private final RedisAdvancedClusterAsyncCommands commands; + + @Inject + NodeSelectionAsyncIntegrationTests(RedisClusterClient clusterClient, + StatefulRedisClusterConnection connection) { + + this.clusterClient = clusterClient; + this.commands = connection.async(); + connection.sync().flushall(); + } + + @Test + void testMultiNodeOperations() { + + List expectation = new ArrayList<>(); + for (char c = 'a'; c < 'z'; c++) { + String key = new String(new char[] { c, c, c }); + expectation.add(key); + TestFutures.awaitOrTimeout(commands.set(key, value)); + } + + List result = new Vector<>(); + + TestFutures.awaitOrTimeout(commands.masters().commands().keys(result::add, "*")); + + assertThat(result).hasSize(expectation.size()); + + Collections.sort(expectation); + Collections.sort(result); + + assertThat(result).isEqualTo(expectation); + } + + @Test + void testThenCollect() { + + List expectation = new ArrayList<>(); + for (char c = 'a'; c < 'z'; c++) { + String key = new String(new char[] { c, c, c }); + expectation.add(key); + TestFutures.awaitOrTimeout(commands.set(key, value)); + } + + Collector, List, List> collector = Collector.of(ArrayList::new, List::addAll, (a, b) -> a, + it -> it); + + CompletableFuture> future = commands.masters().commands().keys("*").thenCollect(collector) + .toCompletableFuture(); + + TestFutures.awaitOrTimeout(future); + List result = future.join(); + + assertThat(result).hasSize(expectation.size()); + + Collections.sort(expectation); + Collections.sort(result); + + assertThat(result).isEqualTo(expectation); + } + + @Test + void testCompletionStageTransformation() { + + CompletableFuture transformed = commands.masters().commands().ping() + .thenApply(it -> String.join(" ", it.toArray(new String[0]))).toCompletableFuture(); + + TestFutures.awaitOrTimeout(transformed); + + assertThat(transformed.join()).isEqualTo("PONG PONG"); + } + + @Test + void testNodeSelectionCount() { + assertThat(commands.all().size()).isEqualTo(4); + assertThat(commands.slaves().size()).isEqualTo(2); + assertThat(commands.masters().size()).isEqualTo(2); + + 
assertThat(commands.nodes(redisClusterNode -> redisClusterNode.is(RedisClusterNode.NodeFlag.MYSELF)).size()).isEqualTo( + 1); + } + + @Test + void testNodeSelection() { + + AsyncNodeSelection onlyMe = commands.nodes(redisClusterNode -> redisClusterNode.getFlags().contains( + RedisClusterNode.NodeFlag.MYSELF)); + Map> map = onlyMe.asMap(); + + assertThat(map).hasSize(1); + + RedisClusterAsyncCommands node = onlyMe.commands(0); + assertThat(node).isNotNull(); + + RedisClusterNode redisClusterNode = onlyMe.node(0); + assertThat(redisClusterNode.getFlags()).contains(RedisClusterNode.NodeFlag.MYSELF); + + assertThat(onlyMe.asMap()).hasSize(1); + } + + @Test + void testDynamicNodeSelection() { + + Partitions partitions = commands.getStatefulConnection().getPartitions(); + partitions.forEach(redisClusterNode -> redisClusterNode.setFlags(Collections + .singleton(RedisClusterNode.NodeFlag.MASTER))); + + AsyncNodeSelection selection = commands.nodes( + redisClusterNode -> redisClusterNode.getFlags().contains(RedisClusterNode.NodeFlag.MYSELF), true); + + assertThat(selection.asMap()).hasSize(0); + partitions.getPartition(0).setFlags( + LettuceSets.unmodifiableSet(RedisClusterNode.NodeFlag.MYSELF, RedisClusterNode.NodeFlag.MASTER)); + assertThat(selection.asMap()).hasSize(1); + + partitions.getPartition(1).setFlags( + LettuceSets.unmodifiableSet(RedisClusterNode.NodeFlag.MYSELF, RedisClusterNode.NodeFlag.MASTER)); + assertThat(selection.asMap()).hasSize(2); + + clusterClient.reloadPartitions(); + } + + @Test + void testNodeSelectionAsyncPing() { + + AsyncNodeSelection onlyMe = commands.nodes(redisClusterNode -> redisClusterNode.getFlags().contains( + RedisClusterNode.NodeFlag.MYSELF)); + Map> map = onlyMe.asMap(); + + assertThat(map).hasSize(1); + + AsyncExecutions ping = onlyMe.commands().ping(); + CompletionStage completionStage = ping.get(onlyMe.node(0)); + + assertThat(TestFutures.getOrTimeout(completionStage)).isEqualTo("PONG"); + } + + @Test + void testStaticNodeSelection() { + + AsyncNodeSelection selection = commands.nodes( + redisClusterNode -> redisClusterNode.getFlags().contains(RedisClusterNode.NodeFlag.MYSELF), false); + + assertThat(selection.asMap()).hasSize(1); + + commands.getStatefulConnection().getPartitions().getPartition(2) + .setFlags(Collections.singleton(RedisClusterNode.NodeFlag.MYSELF)); + + assertThat(selection.asMap()).hasSize(1); + + clusterClient.reloadPartitions(); + } + + @Test + void testAsynchronicityOfMultiNodeExecution() { + + StatefulRedisClusterConnection connection2 = clusterClient.connect(); + RedisAdvancedClusterAsyncCommands async2 = connection2.async(); + + AsyncNodeSelection masters = async2.masters(); + TestFutures.awaitOrTimeout(masters.commands().configSet("lua-time-limit", "10")); + + AsyncExecutions eval = masters.commands().eval("while true do end", STATUS, new String[0]); + + for (CompletableFuture future : eval.futures()) { + assertThat(future.isDone()).isFalse(); + assertThat(future.isCancelled()).isFalse(); + } + Delay.delay(Duration.ofMillis(200)); + + AsyncExecutions kill = commands.masters().commands().scriptKill(); + TestFutures.awaitOrTimeout(kill); + + for (CompletionStage execution : kill) { + assertThat(TestFutures.getOrTimeout(execution)).isEqualTo("OK"); + } + + TestFutures.awaitOrTimeout(CompletableFuture.allOf(eval.futures()).exceptionally(throwable -> null)); + for (CompletableFuture future : eval.futures()) { + assertThat(future.isDone()).isTrue(); + } + + connection2.close(); + } + + @Test + void testReplicaReadWrite() { + + 
AsyncNodeSelection nodes = commands.nodes(redisClusterNode -> redisClusterNode.getFlags().contains( + RedisClusterNode.NodeFlag.REPLICA)); + + assertThat(nodes.size()).isEqualTo(2); + + TestFutures.awaitOrTimeout(commands.set(key, value)); + + waitForReplication(key, ClusterTestSettings.port4); + + List t = new ArrayList<>(); + AsyncExecutions keys = nodes.commands().get(key); + keys.stream().forEach(lcs -> { + lcs.toCompletableFuture().exceptionally(throwable -> { + t.add(throwable); + return null; + }); + }); + + TestFutures.awaitOrTimeout(CompletableFuture.allOf(keys.futures()).exceptionally(throwable -> null)); + + assertThat(t.size()).isGreaterThan(0); + } + + @Test + void testReplicasWithReadOnly() { + + AsyncNodeSelection nodes = commands.replicas(redisClusterNode -> redisClusterNode + .is(RedisClusterNode.NodeFlag.REPLICA)); + + assertThat(nodes.size()).isEqualTo(2); + + TestFutures.awaitOrTimeout(commands.set(key, value)); + waitForReplication(key, ClusterTestSettings.port4); + + List t = new ArrayList<>(); + List strings = new ArrayList<>(); + AsyncExecutions keys = nodes.commands().get(key); + keys.stream().forEach(lcs -> { + lcs.toCompletableFuture().exceptionally(throwable -> { + t.add(throwable); + return null; + }); + lcs.thenAccept(strings::add); + }); + + TestFutures.awaitOrTimeout(CompletableFuture.allOf(keys.futures()).exceptionally(throwable -> null)); + Wait.untilEquals(1, t::size).waitOrTimeout(); + + assertThat(t).hasSize(1); + assertThat(strings).hasSize(1).contains(value); + } + + void waitForReplication(String key, int port) { + waitForReplication(commands, key, port); + } + + static void waitForReplication(RedisAdvancedClusterAsyncCommands commands, String key, int port) + { + + AsyncNodeSelection selection = commands + .replicas(redisClusterNode -> redisClusterNode.getUri().getPort() == port); + Wait.untilNotEquals(null, () -> { + for (CompletableFuture future : selection.commands().get(key).futures()) { + + TestFutures.awaitOrTimeout(future); + + String result = TestFutures.getOrTimeout((Future) future); + if (result != null) { + return result; + } + } + return null; + }).waitOrTimeout(); + } +} diff --git a/src/test/java/io/lettuce/core/cluster/NodeSelectionSyncIntegrationTests.java b/src/test/java/io/lettuce/core/cluster/NodeSelectionSyncIntegrationTests.java new file mode 100644 index 0000000000..d268de6ab5 --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/NodeSelectionSyncIntegrationTests.java @@ -0,0 +1,246 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.cluster; + +import static io.lettuce.core.ScriptOutputType.STATUS; +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Fail.fail; + +import java.time.Duration; +import java.util.*; + +import javax.inject.Inject; + +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.core.RedisCommandExecutionException; +import io.lettuce.core.RedisCommandTimeoutException; +import io.lettuce.core.TestSupport; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; +import io.lettuce.core.cluster.api.sync.Executions; +import io.lettuce.core.cluster.api.sync.NodeSelection; +import io.lettuce.core.cluster.api.sync.RedisAdvancedClusterCommands; +import io.lettuce.core.cluster.models.partitions.Partitions; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; +import io.lettuce.core.internal.LettuceSets; +import io.lettuce.test.LettuceExtension; +import io.lettuce.test.Wait; + +/** + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +class NodeSelectionSyncIntegrationTests extends TestSupport { + + private final RedisClusterClient clusterClient; + private final RedisAdvancedClusterCommands commands; + + @Inject + NodeSelectionSyncIntegrationTests(RedisClusterClient clusterClient, + StatefulRedisClusterConnection connection) { + + this.clusterClient = clusterClient; + this.commands = connection.sync(); + connection.sync().flushall(); + } + + @Test + void testMultiNodeOperations() { + + List expectation = new ArrayList<>(); + for (char c = 'a'; c < 'z'; c++) { + String key = new String(new char[] { c, c, c }); + expectation.add(key); + commands.set(key, value); + } + + List result = new Vector<>(); + + Executions executions = commands.masters().commands().keys(result::add, "*"); + + assertThat(executions).hasSize(2); + + Collections.sort(expectation); + Collections.sort(result); + + assertThat(result).isEqualTo(expectation); + } + + @Test + void testNodeSelectionCount() { + assertThat(commands.all().size()).isEqualTo(4); + assertThat(commands.slaves().size()).isEqualTo(2); + assertThat(commands.masters().size()).isEqualTo(2); + + assertThat(commands.nodes(redisClusterNode -> redisClusterNode.is(RedisClusterNode.NodeFlag.MYSELF)).size()) + .isEqualTo(1); + } + + @Test + void testNodeSelection() { + + NodeSelection onlyMe = commands + .nodes(redisClusterNode -> redisClusterNode.getFlags().contains(RedisClusterNode.NodeFlag.MYSELF)); + Map> map = onlyMe.asMap(); + + assertThat(map).hasSize(1); + + RedisCommands node = onlyMe.commands(0); + assertThat(node).isNotNull(); + + RedisClusterNode redisClusterNode = onlyMe.node(0); + assertThat(redisClusterNode.getFlags()).contains(RedisClusterNode.NodeFlag.MYSELF); + + assertThat(onlyMe.asMap()).hasSize(1); + } + + @Test + void testDynamicNodeSelection() { + + Partitions partitions = commands.getStatefulConnection().getPartitions(); + partitions.forEach( + redisClusterNode -> redisClusterNode.setFlags(Collections.singleton(RedisClusterNode.NodeFlag.MASTER))); + + NodeSelection selection = commands + .nodes(redisClusterNode -> redisClusterNode.getFlags().contains(RedisClusterNode.NodeFlag.MYSELF), true); + + assertThat(selection.asMap()).hasSize(0); + partitions.getPartition(0) + .setFlags(LettuceSets.unmodifiableSet(RedisClusterNode.NodeFlag.MYSELF, RedisClusterNode.NodeFlag.MASTER)); + assertThat(selection.asMap()).hasSize(1); + + partitions.getPartition(1) + 
.setFlags(LettuceSets.unmodifiableSet(RedisClusterNode.NodeFlag.MYSELF, RedisClusterNode.NodeFlag.MASTER)); + assertThat(selection.asMap()).hasSize(2); + + clusterClient.reloadPartitions(); + } + + @Test + void testNodeSelectionPing() { + + NodeSelection onlyMe = commands + .nodes(redisClusterNode -> redisClusterNode.getFlags().contains(RedisClusterNode.NodeFlag.MYSELF)); + Map> map = onlyMe.asMap(); + + assertThat(map).hasSize(1); + + Executions ping = onlyMe.commands().ping(); + + assertThat(ping.get(onlyMe.node(0))).isEqualTo("PONG"); + } + + @Test + void testStaticNodeSelection() { + + NodeSelection selection = commands + .nodes(redisClusterNode -> redisClusterNode.getFlags().contains(RedisClusterNode.NodeFlag.MYSELF), false); + + assertThat(selection.asMap()).hasSize(1); + + commands.getStatefulConnection().getPartitions().getPartition(2) + .setFlags(Collections.singleton(RedisClusterNode.NodeFlag.MYSELF)); + + assertThat(selection.asMap()).hasSize(1); + + clusterClient.reloadPartitions(); + } + + @Test + void testAsynchronicityOfMultiNodeExecution() { + + RedisAdvancedClusterCommands connection2 = clusterClient.connect().sync(); + + connection2.setTimeout(Duration.ofSeconds(1)); + NodeSelection masters = connection2.masters(); + masters.commands().configSet("lua-time-limit", "10"); + + Executions eval = null; + try { + eval = masters.commands().eval("while true do end", STATUS, new String[0]); + fail("missing exception"); + } catch (RedisCommandTimeoutException e) { + assertThat(e).hasMessageContaining("Command timed out for node(s)"); + } + + commands.masters().commands().scriptKill(); + } + + @Test + void testReplicasReadWrite() { + + NodeSelection nodes = commands + .nodes(redisClusterNode -> redisClusterNode.getFlags().contains(RedisClusterNode.NodeFlag.SLAVE)); + + assertThat(nodes.size()).isEqualTo(2); + + commands.set(key, value); + waitForReplication(key, ClusterTestSettings.port4); + + try { + + nodes.commands().get(key); + fail("Missing RedisCommandExecutionException: MOVED"); + } catch (RedisCommandExecutionException e) { + if (e.getMessage().startsWith("MOVED")) { + assertThat(e.getSuppressed()).isEmpty(); + } else { + assertThat(e.getSuppressed()).isNotEmpty(); + } + } + } + + @Test + void testSlavesWithReadOnly() { + + int slot = SlotHash.getSlot(key); + Optional master = clusterClient.getPartitions().getPartitions().stream() + .filter(redisClusterNode -> redisClusterNode.hasSlot(slot)).findFirst(); + + NodeSelection nodes = commands + .slaves(redisClusterNode -> redisClusterNode.is(RedisClusterNode.NodeFlag.SLAVE) + && redisClusterNode.getSlaveOf().equals(master.get().getNodeId())); + + assertThat(nodes.size()).isEqualTo(1); + + commands.set(key, value); + waitForReplication(key, ClusterTestSettings.port4); + + Executions keys = nodes.commands().get(key); + assertThat(keys).hasSize(1).contains(value); + } + + void waitForReplication(String key, int port) { + waitForReplication(commands, key, port); + } + + static void waitForReplication(RedisAdvancedClusterCommands commands, String key, int port) { + + NodeSelection selection = commands + .slaves(redisClusterNode -> redisClusterNode.getUri().getPort() == port); + Wait.untilNotEquals(null, () -> { + + Executions strings = selection.commands().get(key); + if (strings.stream().filter(s -> s != null).findFirst().isPresent()) { + return "OK"; + } + + return null; + }).waitOrTimeout(); + } +} diff --git a/src/test/java/io/lettuce/core/cluster/PartitionsConsensusTestSupport.java 
b/src/test/java/io/lettuce/core/cluster/PartitionsConsensusTestSupport.java new file mode 100644 index 0000000000..88a23fb40b --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/PartitionsConsensusTestSupport.java @@ -0,0 +1,52 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import java.util.*; + +import io.lettuce.core.RedisURI; +import io.lettuce.core.cluster.models.partitions.Partitions; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; + +/** + * @author Mark Paluch + */ +class PartitionsConsensusTestSupport { + + static RedisClusterNode createNode(int nodeId) { + return new RedisClusterNode(RedisURI.create("localhost", 6379 + nodeId), "" + nodeId, true, "", 0, 0, 0, + Collections.emptyList(), new HashSet<>()); + } + + static Partitions createPartitions(RedisClusterNode... nodes) { + + Partitions partitions = new Partitions(); + partitions.addAll(Arrays.asList(nodes)); + return partitions; + } + + static Map<RedisURI, Partitions> createMap(Partitions... partitionses) { + + Map<RedisURI, Partitions> partitionsMap = new HashMap<>(); + + int counter = 0; + for (Partitions partitions : partitionses) { + partitionsMap.put(createNode(counter++).getUri(), partitions); + } + + return partitionsMap; + } +} diff --git a/src/test/java/io/lettuce/core/cluster/PipelinedRedisFutureUnitTests.java b/src/test/java/io/lettuce/core/cluster/PipelinedRedisFutureUnitTests.java new file mode 100644 index 0000000000..8d850da4e1 --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/PipelinedRedisFutureUnitTests.java @@ -0,0 +1,56 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ */ +package io.lettuce.core.cluster; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.util.HashMap; + +import org.junit.jupiter.api.Test; + +import io.lettuce.test.TestFutures; + +/** + * @author Mark Paluch + */ +class PipelinedRedisFutureUnitTests { + + private PipelinedRedisFuture sut; + + @Test + void testComplete() { + + String other = "other"; + + sut = new PipelinedRedisFuture<>(new HashMap<>(), o -> other); + + sut.complete(""); + assertThat(TestFutures.getOrTimeout(sut.toCompletableFuture())).isEqualTo(other); + assertThat(sut.getError()).isNull(); + } + + @Test + void testCompleteExceptionally() { + + String other = "other"; + + sut = new PipelinedRedisFuture<>(new HashMap<>(), o -> other); + + sut.completeExceptionally(new Exception()); + assertThat(TestFutures.getOrTimeout(sut.toCompletableFuture())).isEqualTo(other); + assertThat(sut.getError()).isNull(); + } +} diff --git a/src/test/java/io/lettuce/core/cluster/PooledClusterConnectionProviderUnitTests.java b/src/test/java/io/lettuce/core/cluster/PooledClusterConnectionProviderUnitTests.java new file mode 100644 index 0000000000..6221b98b9a --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/PooledClusterConnectionProviderUnitTests.java @@ -0,0 +1,405 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.cluster; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; +import static org.junit.Assert.fail; +import static org.mockito.Matchers.any; +import static org.mockito.Matchers.eq; +import static org.mockito.Mockito.*; + +import java.net.SocketAddress; +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; +import java.util.concurrent.CompletableFuture; +import java.util.stream.Collectors; +import java.util.stream.IntStream; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; +import org.mockito.Mock; +import org.mockito.junit.jupiter.MockitoExtension; +import org.mockito.junit.jupiter.MockitoSettings; +import org.mockito.quality.Strictness; + +import io.lettuce.core.*; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.async.RedisAsyncCommands; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.cluster.ClusterConnectionProvider.Intent; +import io.lettuce.core.cluster.models.partitions.Partitions; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.models.role.RedisNodeDescription; +import io.lettuce.core.protocol.AsyncCommand; +import io.lettuce.core.protocol.Command; +import io.lettuce.core.protocol.CommandType; +import io.lettuce.core.resource.ClientResources; + +/** + * @author Mark Paluch + */ +@ExtendWith(MockitoExtension.class) +@MockitoSettings(strictness = Strictness.LENIENT) +class PooledClusterConnectionProviderUnitTests { + + private PooledClusterConnectionProvider sut; + + @Mock + SocketAddress socketAddressMock; + + @Mock + RedisClusterClient clientMock; + + @Mock + RedisChannelWriter writerMock; + + @Mock(extraInterfaces = StatefulRedisConnection.class) + RedisChannelHandler channelHandlerMock; + + private StatefulRedisConnection nodeConnectionMock; + + @Mock + RedisCommands commandsMock; + + @Mock + RedisAsyncCommands asyncCommandsMock; + + @Mock + ClientResources clientResourcesMock; + + @Mock + ClusterEventListener clusterEventListener; + + private Partitions partitions = new Partitions(); + + @BeforeEach + void before() { + + nodeConnectionMock = (StatefulRedisConnection) channelHandlerMock; + + sut = new PooledClusterConnectionProvider<>(clientMock, writerMock, StringCodec.UTF8, clusterEventListener); + + List slots1 = IntStream.range(0, 8192).boxed().collect(Collectors.toList()); + List slots2 = IntStream.range(8192, SlotHash.SLOT_COUNT).boxed().collect(Collectors.toList()); + + partitions.add(new RedisClusterNode(RedisURI.create("localhost", 1), "1", true, null, 0, 0, 0, slots1, Collections + .singleton(RedisClusterNode.NodeFlag.MASTER))); + partitions.add(new RedisClusterNode(RedisURI.create("localhost", 2), "2", true, "1", 0, 0, 0, slots2, Collections + .singleton(RedisClusterNode.NodeFlag.SLAVE))); + + sut.setPartitions(partitions); + + when(nodeConnectionMock.async()).thenReturn(asyncCommandsMock); + } + + @Test + void shouldObtainConnection() { + + when(clientMock.connectToNodeAsync(eq(StringCodec.UTF8), eq("localhost:1"), any(), any())) + .thenReturn( + ConnectionFuture.from(socketAddressMock, CompletableFuture.completedFuture(nodeConnectionMock))); + + StatefulRedisConnection connection = sut.getConnection(Intent.READ, 1); + + assertThat(connection).isSameAs(nodeConnectionMock); + 
verify(connection).setAutoFlushCommands(true); + verifyNoMoreInteractions(connection); + } + + @Test + void shouldReuseMasterConnectionForReadFromMaster() { + + when(clientMock.connectToNodeAsync(eq(StringCodec.UTF8), eq("localhost:1"), any(), any())) + .thenReturn( + ConnectionFuture.from(socketAddressMock, CompletableFuture.completedFuture(nodeConnectionMock))); + + sut.setReadFrom(ReadFrom.MASTER); + + StatefulRedisConnection connection = sut.getConnection(Intent.READ, 1); + + assertThat(connection).isSameAs(nodeConnectionMock); + verify(connection).setAutoFlushCommands(true); + verifyNoMoreInteractions(connection); + } + + @Test + void shouldObtainConnectionReadFromSlave() { + + when(clientMock.connectToNodeAsync(eq(StringCodec.UTF8), eq("localhost:2"), any(), any())) + .thenReturn(ConnectionFuture.from(socketAddressMock, CompletableFuture.completedFuture(nodeConnectionMock))); + + AsyncCommand async = new AsyncCommand<>(new Command<>(CommandType.READONLY, null, null)); + async.complete(); + + when(asyncCommandsMock.readOnly()).thenReturn(async); + + sut.setReadFrom(ReadFrom.REPLICA); + + StatefulRedisConnection connection = sut.getConnection(Intent.READ, 1); + + assertThat(connection).isSameAs(nodeConnectionMock); + verify(connection).async(); + verify(asyncCommandsMock).readOnly(); + verify(connection).setAutoFlushCommands(true); + } + + @Test + void shouldRandomizeReadNode() { + + StatefulRedisConnection nodeConnectionMock2 = mock(StatefulRedisConnection.class); + when(nodeConnectionMock.isOpen()).thenReturn(true); + when(nodeConnectionMock2.isOpen()).thenReturn(true); + + when(clientMock.connectToNodeAsync(eq(StringCodec.UTF8), eq("localhost:1"), any(), any())) + .thenReturn(ConnectionFuture.from(socketAddressMock, CompletableFuture.completedFuture(nodeConnectionMock))); + + when(clientMock.connectToNodeAsync(eq(StringCodec.UTF8), eq("localhost:2"), any(), any())) + .thenReturn(ConnectionFuture.from(socketAddressMock, CompletableFuture.completedFuture(nodeConnectionMock2))); + + AsyncCommand async = new AsyncCommand<>(new Command<>(CommandType.READONLY, null, null)); + async.complete(); + + when(asyncCommandsMock.readOnly()).thenReturn(async); + when(nodeConnectionMock2.async()).thenReturn(asyncCommandsMock); + + sut.setReadFrom(ReadFrom.ANY); + + List> readCandidates = new ArrayList<>(); + + for (int i = 0; i < 10; i++) { + readCandidates.add(sut.getConnection(Intent.READ, 1)); + } + + assertThat(readCandidates).contains(nodeConnectionMock, nodeConnectionMock2); + } + + @Test + void shouldNotRandomizeReadNode() { + + StatefulRedisConnection nodeConnectionMock2 = mock(StatefulRedisConnection.class); + when(nodeConnectionMock.isOpen()).thenReturn(true); + when(nodeConnectionMock2.isOpen()).thenReturn(true); + + when(clientMock.connectToNodeAsync(eq(StringCodec.UTF8), eq("localhost:1"), any(), any())) + .thenReturn(ConnectionFuture.from(socketAddressMock, CompletableFuture.completedFuture(nodeConnectionMock))); + + when(clientMock.connectToNodeAsync(eq(StringCodec.UTF8), eq("localhost:2"), any(), any())) + .thenReturn(ConnectionFuture.from(socketAddressMock, CompletableFuture.completedFuture(nodeConnectionMock2))); + + AsyncCommand async = new AsyncCommand<>(new Command<>(CommandType.READONLY, null, null)); + async.complete(); + + when(asyncCommandsMock.readOnly()).thenReturn(async); + when(nodeConnectionMock2.async()).thenReturn(asyncCommandsMock); + + sut.setReadFrom(ReadFrom.REPLICA); + + List> readCandidates = new ArrayList<>(); + + for (int i = 0; i < 10; i++) { + 
readCandidates.add(sut.getConnection(Intent.READ, 1)); + } + + assertThat(readCandidates).contains(nodeConnectionMock2).doesNotContain(nodeConnectionMock); + } + + @Test + void shouldCloseConnectionOnConnectFailure() { + + when(clientMock.connectToNodeAsync(eq(StringCodec.UTF8), eq("localhost:2"), any(), any())) + .thenReturn(ConnectionFuture.from(socketAddressMock, CompletableFuture.completedFuture(nodeConnectionMock))); + + AsyncCommand async = new AsyncCommand<>(new Command<>(CommandType.READONLY, null, null)); + async.completeExceptionally(new RuntimeException()); + + when(asyncCommandsMock.readOnly()).thenReturn(async); + + sut.setReadFrom(ReadFrom.REPLICA); + + try { + sut.getConnection(Intent.READ, 1); + fail("Missing RedisException"); + } catch (RedisException e) { + assertThat(e).hasRootCauseInstanceOf(RuntimeException.class); + } + + verify(nodeConnectionMock).close(); + verify(clientMock).connectToNodeAsync(eq(StringCodec.UTF8), eq("localhost:2"), any(), any()); + } + + @Test + void shouldRetryConnectionAttemptAfterConnectionAttemptWasBroken() { + + when(clientMock.connectToNodeAsync(eq(StringCodec.UTF8), eq("localhost:2"), any(), any())) + .thenReturn(ConnectionFuture.from(socketAddressMock, CompletableFuture.completedFuture(nodeConnectionMock))); + + AsyncCommand async = new AsyncCommand<>(new Command<>(CommandType.READONLY, null, null)); + async.completeExceptionally(new RuntimeException()); + + when(asyncCommandsMock.readOnly()).thenReturn(async); + + sut.setReadFrom(ReadFrom.REPLICA); + + try { + sut.getConnection(Intent.READ, 1); + fail("Missing RedisException"); + } catch (RedisException e) { + assertThat(e).hasRootCauseInstanceOf(RuntimeException.class); + } + verify(nodeConnectionMock).close(); + + async = new AsyncCommand<>(new Command<>(CommandType.READONLY, null, null)); + async.complete(); + + when(asyncCommandsMock.readOnly()).thenReturn(async); + + sut.getConnection(Intent.READ, 1); + + verify(clientMock, times(2)).connectToNodeAsync(eq(StringCodec.UTF8), eq("localhost:2"), any(), any()); + } + + @Test + void shouldSelectSuccessfulConnectionIfOtherNodesFailed() { + + CompletableFuture> failed = new CompletableFuture<>(); + failed.completeExceptionally(new IllegalStateException()); + + when(clientMock.connectToNodeAsync(eq(StringCodec.UTF8), eq("localhost:1"), any(), any())) + .thenReturn(ConnectionFuture.from(socketAddressMock, failed)); + + when(clientMock.connectToNodeAsync(eq(StringCodec.UTF8), eq("localhost:2"), any(), any())) + .thenReturn(ConnectionFuture.from(socketAddressMock, CompletableFuture.completedFuture(nodeConnectionMock))); + + AsyncCommand async = new AsyncCommand<>(new Command<>(CommandType.READONLY, null, null)); + async.complete("OK"); + + when(asyncCommandsMock.readOnly()).thenReturn(async); + + sut.setReadFrom(ReadFrom.MASTER_PREFERRED); + + assertThat(sut.getConnection(Intent.READ, 1)).isNotNull().isSameAs(nodeConnectionMock); + + // cache access + assertThat(sut.getConnection(Intent.READ, 1)).isNotNull().isSameAs(nodeConnectionMock); + + verify(clientMock).connectToNodeAsync(eq(StringCodec.UTF8), eq("localhost:1"), any(), any()); + verify(clientMock).connectToNodeAsync(eq(StringCodec.UTF8), eq("localhost:2"), any(), any()); + } + + @Test + void shouldFailIfAllReadCandidateNodesFail() { + + CompletableFuture> failed1 = new CompletableFuture<>(); + failed1.completeExceptionally(new IllegalStateException()); + + CompletableFuture> failed2 = new CompletableFuture<>(); + failed2.completeExceptionally(new IllegalStateException()); + + 
when(clientMock.connectToNodeAsync(eq(StringCodec.UTF8), eq("localhost:1"), any(), any())) + .thenReturn(ConnectionFuture.from(socketAddressMock, failed2)); + + when(clientMock.connectToNodeAsync(eq(StringCodec.UTF8), eq("localhost:2"), any(), any())) + .thenReturn(ConnectionFuture.from(socketAddressMock, failed2)); + + AsyncCommand async = new AsyncCommand<>(new Command<>(CommandType.READONLY, null, null)); + async.complete("OK"); + + sut.setReadFrom(ReadFrom.MASTER_PREFERRED); + + try { + sut.getConnection(Intent.READ, 1); + fail("Missing RedisException"); + } catch (RedisException e) { + assertThat(e).isInstanceOf(RedisConnectionException.class) + .hasRootCauseExactlyInstanceOf(IllegalStateException.class); + } + + verify(clientMock).connectToNodeAsync(eq(StringCodec.UTF8), eq("localhost:1"), any(), any()); + verify(clientMock).connectToNodeAsync(eq(StringCodec.UTF8), eq("localhost:2"), any(), any()); + } + + @Test + void shouldNotifyListerOnUncoveredWriteSlot() { + + partitions.clear(); + + sut.getConnectionAsync(Intent.WRITE, 2); + + verify(clusterEventListener).onUncoveredSlot(2); + } + + @Test + void shouldNotifyListerOnUncoveredReadSlot() { + + partitions.clear(); + + sut.getConnectionAsync(Intent.WRITE, 2); + + verify(clusterEventListener).onUncoveredSlot(2); + } + + @Test + void shouldNotifyListerOnUncoveredReadSlotAfterSelection() { + + sut.setReadFrom(new ReadFrom() { + @Override + public List select(Nodes nodes) { + return Collections.emptyList(); + } + }); + + sut.getConnectionAsync(Intent.READ, 2); + + verify(clusterEventListener).onUncoveredSlot(2); + } + + @Test + void shouldCloseConnections() { + + when(channelHandlerMock.closeAsync()).thenReturn(CompletableFuture.completedFuture(null)); + + when(clientMock.connectToNodeAsync(eq(StringCodec.UTF8), eq("localhost:1"), any(), any())) + .thenReturn(ConnectionFuture.from(socketAddressMock, CompletableFuture.completedFuture(nodeConnectionMock))); + + StatefulRedisConnection connection = sut.getConnection(Intent.READ, 1); + assertThat(connection).isNotNull(); + + sut.close(); + + verify(channelHandlerMock).closeAsync(); + } + + @Test + void shouldRejectConnectionsToUnknownNodeId() { + + assertThatThrownBy(() -> sut.getConnection(Intent.READ, "foobar")).isInstanceOf(UnknownPartitionException.class); + + verify(clusterEventListener).onUnknownNode(); + } + + @Test + void shouldRejectConnectionsToUnknownNodeHostAndPort() { + + assertThatThrownBy(() -> sut.getConnection(Intent.READ, "localhost", 1234)) + .isInstanceOf(UnknownPartitionException.class); + + verify(clusterEventListener).onUnknownNode(); + } +} diff --git a/src/test/java/io/lettuce/core/cluster/ReadFromUnitTests.java b/src/test/java/io/lettuce/core/cluster/ReadFromUnitTests.java new file mode 100644 index 0000000000..0a4fb01da1 --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/ReadFromUnitTests.java @@ -0,0 +1,135 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.cluster; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; + +import java.util.Collections; +import java.util.Iterator; +import java.util.List; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; + +import io.lettuce.core.ReadFrom; +import io.lettuce.core.cluster.models.partitions.Partitions; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; +import io.lettuce.core.models.role.RedisNodeDescription; + +/** + * @author Mark Paluch + * @author Ryosuke Hasebe + */ +class ReadFromUnitTests { + + private Partitions sut = new Partitions(); + private RedisClusterNode nearest = new RedisClusterNode(); + private RedisClusterNode master = new RedisClusterNode(); + private RedisClusterNode replica = new RedisClusterNode(); + + @BeforeEach + void before() { + + master.setFlags(Collections.singleton(RedisClusterNode.NodeFlag.MASTER)); + nearest.setFlags(Collections.singleton(RedisClusterNode.NodeFlag.SLAVE)); + replica.setFlags(Collections.singleton(RedisClusterNode.NodeFlag.SLAVE)); + + sut.addPartition(nearest); + sut.addPartition(master); + sut.addPartition(replica); + } + + @Test + void master() { + List result = ReadFrom.MASTER.select(getNodes()); + assertThat(result).hasSize(1).containsOnly(master); + } + + @Test + void masterPreferred() { + List result = ReadFrom.MASTER_PREFERRED.select(getNodes()); + assertThat(result).hasSize(3).containsExactly(master, nearest, replica); + } + + @Test + void replica() { + List result = ReadFrom.REPLICA.select(getNodes()); + assertThat(result).hasSize(2).contains(nearest, replica); + } + + @Test + void replicaPreferred() { + List result = ReadFrom.REPLICA_PREFERRED.select(getNodes()); + assertThat(result).hasSize(3).containsExactly(nearest, replica, master); + } + + @Test + void nearest() { + List result = ReadFrom.NEAREST.select(getNodes()); + assertThat(result).hasSize(3).containsExactly(nearest, master, replica); + } + + @Test + void valueOfNull() { + assertThatThrownBy(() -> ReadFrom.valueOf(null)).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void valueOfUnknown() { + assertThatThrownBy(() -> ReadFrom.valueOf("unknown")).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void valueOfNearest() { + assertThat(ReadFrom.valueOf("nearest")).isEqualTo(ReadFrom.NEAREST); + } + + @Test + void valueOfMaster() { + assertThat(ReadFrom.valueOf("master")).isEqualTo(ReadFrom.MASTER); + } + + @Test + void valueOfMasterPreferred() { + assertThat(ReadFrom.valueOf("masterPreferred")).isEqualTo(ReadFrom.MASTER_PREFERRED); + } + + @Test + void valueOfSlave() { + assertThat(ReadFrom.valueOf("slave")).isEqualTo(ReadFrom.REPLICA); + } + + @Test + void valueOfSlavePreferred() { + assertThat(ReadFrom.valueOf("slavePreferred")).isEqualTo(ReadFrom.REPLICA_PREFERRED); + } + + private ReadFrom.Nodes getNodes() { + return new ReadFrom.Nodes() { + @Override + public List getNodes() { + return (List) sut.getPartitions(); + } + + @Override + public Iterator iterator() { + return getNodes().iterator(); + } + }; + + } +} diff --git a/src/test/java/io/lettuce/core/cluster/ReadOnlyCommandsUnitTests.java b/src/test/java/io/lettuce/core/cluster/ReadOnlyCommandsUnitTests.java new file mode 100644 index 0000000000..453307908b --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/ReadOnlyCommandsUnitTests.java @@ -0,0 +1,44 @@ +/* + * Copyright 2011-2020 the original author or authors. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import static org.assertj.core.api.Assertions.assertThat; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.protocol.CommandType; +import io.lettuce.core.protocol.ProtocolKeyword; + +/** + * Tests for {@link ReadOnlyCommands}. + * + * @author Mark Paluch + */ +class ReadOnlyCommandsUnitTests { + + @Test + void testCount() { + assertThat(ReadOnlyCommands.getReadOnlyCommands()).hasSize(78); + } + + @Test + void testResolvableCommandNames() { + + for (ProtocolKeyword readOnlyCommand : ReadOnlyCommands.getReadOnlyCommands()) { + assertThat(readOnlyCommand.name()).isEqualTo(CommandType.valueOf(readOnlyCommand.name()).name()); + } + } +} diff --git a/src/test/java/io/lettuce/core/cluster/RedisClusterClientFactoryTests.java b/src/test/java/io/lettuce/core/cluster/RedisClusterClientFactoryTests.java new file mode 100644 index 0000000000..0a340d1bf5 --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/RedisClusterClientFactoryTests.java @@ -0,0 +1,141 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.cluster; + +import static org.assertj.core.api.Assertions.assertThatThrownBy; + +import java.util.Arrays; +import java.util.List; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.RedisURI; +import io.lettuce.core.internal.LettuceLists; +import io.lettuce.test.resource.FastShutdown; +import io.lettuce.test.resource.TestClientResources; +import io.lettuce.test.settings.TestSettings; + +/** + * @author Mark Paluch + */ +class RedisClusterClientFactoryTests { + + private static final String URI = "redis://" + TestSettings.host() + ":" + TestSettings.port(); + private static final RedisURI REDIS_URI = RedisURI.create(URI); + private static final List REDIS_URIS = LettuceLists.newList(REDIS_URI); + + @Test + void withStringUri() { + FastShutdown.shutdown(RedisClusterClient.create(TestClientResources.get(), URI)); + } + + @Test + void withStringUriNull() { + assertThatThrownBy(() -> RedisClusterClient.create((String) null)).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void withUri() { + FastShutdown.shutdown(RedisClusterClient.create(REDIS_URI)); + } + + @Test + void withUriUri() { + assertThatThrownBy(() -> RedisClusterClient.create((RedisURI) null)).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void withUriIterable() { + FastShutdown.shutdown(RedisClusterClient.create(LettuceLists.newList(REDIS_URI))); + } + + @Test + void withUriIterableNull() { + assertThatThrownBy(() -> RedisClusterClient.create((Iterable) null)).isInstanceOf( + IllegalArgumentException.class); + } + + @Test + void clientResourcesWithStringUri() { + FastShutdown.shutdown(RedisClusterClient.create(TestClientResources.get(), URI)); + } + + @Test + void clientResourcesWithStringUriNull() { + assertThatThrownBy(() -> RedisClusterClient.create(TestClientResources.get(), (String) null)).isInstanceOf( + IllegalArgumentException.class); + } + + @Test + void clientResourcesNullWithStringUri() { + assertThatThrownBy(() -> RedisClusterClient.create(null, URI)).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void clientResourcesWithUri() { + FastShutdown.shutdown(RedisClusterClient.create(TestClientResources.get(), REDIS_URI)); + } + + @Test + void clientResourcesWithUriNull() { + assertThatThrownBy(() -> RedisClusterClient.create(TestClientResources.get(), (RedisURI) null)).isInstanceOf( + IllegalArgumentException.class); + } + + @Test + void clientResourcesWithUriUri() { + assertThatThrownBy(() -> RedisClusterClient.create(null, REDIS_URI)).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void clientResourcesWithUriIterable() { + FastShutdown.shutdown(RedisClusterClient.create(TestClientResources.get(), LettuceLists.newList(REDIS_URI))); + } + + @Test + void clientResourcesWithUriIterableNull() { + assertThatThrownBy(() -> RedisClusterClient.create(TestClientResources.get(), (Iterable) null)).isInstanceOf( + IllegalArgumentException.class); + } + + @Test + void clientResourcesNullWithUriIterable() { + assertThatThrownBy(() -> RedisClusterClient.create(null, REDIS_URIS)).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void clientWithDifferentSslSettings() { + assertThatThrownBy( + () -> RedisClusterClient.create(Arrays.asList(RedisURI.create("redis://host1"), + RedisURI.create("redis+ssl://host1")))).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void clientWithDifferentTlsSettings() { + assertThatThrownBy( + () -> RedisClusterClient.create(Arrays.asList(RedisURI.create("rediss://host1"), + 
RedisURI.create("redis+tls://host1")))).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void clientWithDifferentVerifyPeerSettings() { + RedisURI redisURI = RedisURI.create("rediss://host1"); + redisURI.setVerifyPeer(false); + + assertThatThrownBy(() -> RedisClusterClient.create(Arrays.asList(redisURI, RedisURI.create("rediss://host1")))) + .isInstanceOf(IllegalArgumentException.class); + } +} diff --git a/src/test/java/io/lettuce/core/cluster/RedisClusterClientIntegrationTests.java b/src/test/java/io/lettuce/core/cluster/RedisClusterClientIntegrationTests.java new file mode 100644 index 0000000000..0a2edc96ee --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/RedisClusterClientIntegrationTests.java @@ -0,0 +1,625 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import static io.lettuce.core.cluster.ClusterTestUtil.getOwnPartition; +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; +import static org.assertj.core.api.Assertions.fail; + +import java.time.Duration; +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; +import java.util.concurrent.CompletionStage; +import java.util.concurrent.TimeUnit; +import java.util.stream.Collectors; +import java.util.stream.IntStream; + +import javax.inject.Inject; + +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.core.*; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; +import io.lettuce.core.cluster.api.async.RedisAdvancedClusterAsyncCommands; +import io.lettuce.core.cluster.api.sync.RedisAdvancedClusterCommands; +import io.lettuce.core.cluster.api.sync.RedisClusterCommands; +import io.lettuce.core.cluster.models.partitions.Partitions; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; +import io.lettuce.core.cluster.pubsub.StatefulRedisClusterPubSubConnection; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.protocol.AsyncCommand; +import io.lettuce.core.pubsub.StatefulRedisPubSubConnection; +import io.lettuce.test.Delay; +import io.lettuce.test.LettuceExtension; +import io.lettuce.test.TestFutures; +import io.lettuce.test.Wait; +import io.lettuce.test.resource.FastShutdown; +import io.lettuce.test.resource.TestClientResources; +import io.lettuce.test.settings.TestSettings; + +/** + * @author Mark Paluch + */ +@SuppressWarnings("unchecked") +@ExtendWith(LettuceExtension.class) +class RedisClusterClientIntegrationTests extends TestSupport { + + private final RedisClient client; + private final RedisClusterClient clusterClient; + + private StatefulRedisConnection redis1; + private StatefulRedisConnection redis2; + private 
StatefulRedisConnection redis3; + private StatefulRedisConnection redis4; + + private RedisCommands redissync1; + private RedisCommands redissync2; + private RedisCommands redissync3; + private RedisCommands redissync4; + + private RedisAdvancedClusterCommands sync; + private StatefulRedisClusterConnection connection; + + @Inject + RedisClusterClientIntegrationTests(RedisClient client, RedisClusterClient clusterClient) { + this.client = client; + this.clusterClient = clusterClient; + } + + @BeforeEach + void before() { + + clusterClient.setOptions(ClusterClientOptions.create()); + + redis1 = client.connect(RedisURI.Builder.redis(host, ClusterTestSettings.port1).build()); + redis2 = client.connect(RedisURI.Builder.redis(host, ClusterTestSettings.port2).build()); + redis3 = client.connect(RedisURI.Builder.redis(host, ClusterTestSettings.port3).build()); + redis4 = client.connect(RedisURI.Builder.redis(host, ClusterTestSettings.port4).build()); + + redissync1 = redis1.sync(); + redissync2 = redis2.sync(); + redissync3 = redis3.sync(); + redissync4 = redis4.sync(); + + clusterClient.reloadPartitions(); + connection = clusterClient.connect(); + sync = connection.sync(); + } + + @AfterEach + void after() { + connection.close(); + redis1.close(); + + redissync1.getStatefulConnection().close(); + redissync2.getStatefulConnection().close(); + redissync3.getStatefulConnection().close(); + redissync4.getStatefulConnection().close(); + } + + @Test + void statefulConnectionFromSync() { + RedisAdvancedClusterCommands sync = clusterClient.connect().sync(); + assertThat(sync.getStatefulConnection().sync()).isSameAs(sync); + connection.close(); + } + + @Test + void statefulConnectionFromAsync() { + RedisAdvancedClusterAsyncCommands async = clusterClient.connect().async(); + assertThat(async.getStatefulConnection().async()).isSameAs(async); + connection.close(); + } + + @Test + void shouldApplyTimeoutOnRegularConnection() { + + StatefulRedisClusterConnection connection = clusterClient.connect(); + + assertThat(connection.getTimeout()).isEqualTo(Duration.ofMinutes(1)); + assertThat(connection.getConnection(host, ClusterTestSettings.port1).getTimeout()).isEqualTo(Duration.ofMinutes(1)); + + connection.close(); + } + + @Test + void shouldApplyTimeoutOnRegularConnectionUsingCodec() { + + clusterClient.setDefaultTimeout(2, TimeUnit.MINUTES); + + StatefulRedisClusterConnection connection = clusterClient.connect(StringCodec.UTF8); + + assertThat(connection.getTimeout()).isEqualTo(Duration.ofMinutes(2)); + assertThat(connection.getConnection(host, ClusterTestSettings.port1).getTimeout()).isEqualTo(Duration.ofMinutes(2)); + + connection.close(); + } + + @Test + void shouldApplyTimeoutOnPubSubConnection() { + + clusterClient.setDefaultTimeout(Duration.ofMinutes(1)); + + StatefulRedisPubSubConnection connection = clusterClient.connectPubSub(); + + assertThat(connection.getTimeout()).isEqualTo(Duration.ofMinutes(1)); + connection.close(); + } + + @Test + void shouldApplyTimeoutOnPubSubConnectionUsingCodec() { + + clusterClient.setDefaultTimeout(Duration.ofMinutes(1)); + StatefulRedisPubSubConnection connection = clusterClient.connectPubSub(StringCodec.UTF8); + + assertThat(connection.getTimeout()).isEqualTo(Duration.ofMinutes(1)); + connection.close(); + } + + @Test + void clusterConnectionShouldSetClientName() { + + StatefulRedisClusterConnection connection = clusterClient.connect(); + + assertThat(connection.sync().clientGetname()).isEqualTo("my-client"); + Delay.delay(Duration.ofMillis(10)); + 
connection.sync().quit(); + Wait.untilTrue(connection::isOpen).waitOrTimeout(); + assertThat(connection.sync().clientGetname()).isEqualTo("my-client"); + + StatefulRedisConnection nodeConnection = connection + .getConnection(connection.getPartitions().getPartition(0).getNodeId()); + assertThat(nodeConnection.sync().clientGetname()).isEqualTo("my-client"); + + connection.close(); + } + + @Test + void pubSubclusterConnectionShouldSetClientName() { + + StatefulRedisClusterPubSubConnection connection = clusterClient.connectPubSub(); + + assertThat(connection.sync().clientGetname()).isEqualTo("my-client"); + Delay.delay(Duration.ofMillis(10)); + connection.sync().quit(); + Wait.untilTrue(connection::isOpen).waitOrTimeout(); + + assertThat(connection.sync().clientGetname()).isEqualTo("my-client"); + + StatefulRedisConnection nodeConnection = connection + .getConnection(connection.getPartitions().getPartition(0).getNodeId()); + assertThat(nodeConnection.sync().clientGetname()).isEqualTo("my-client"); + + connection.close(); + } + + @Test + void reloadPartitions() { + + clusterClient.reloadPartitions(); + assertThat(clusterClient.getPartitions()).hasSize(4); + } + + @Test + void reloadPartitionsWithDynamicSourcesFallsBackToInitialSeedNodes() { + + client.setOptions(ClusterClientOptions.builder() + .topologyRefreshOptions(ClusterTopologyRefreshOptions.builder().dynamicRefreshSources(true).build()).build()); + + Partitions partitions = clusterClient.getPartitions(); + partitions.clear(); + partitions.add(new RedisClusterNode(RedisURI.create("localhost", 1), "foo", false, null, 0, 0, 0, + Collections.emptyList(), Collections.emptySet())); + + Partitions reloaded = clusterClient.loadPartitions(); + + assertThat(reloaded).hasSize(4); + } + + @Test + void testClusteredOperations() { + + SlotHash.getSlot(ClusterTestSettings.KEY_B.getBytes()); // 3300 -> Node 1 and Slave (Node 3) + SlotHash.getSlot(ClusterTestSettings.KEY_A.getBytes()); // 15495 -> Node 2 + + RedisFuture result = redis1.async().set(ClusterTestSettings.KEY_B, value); + assertThat(result.getError()).isEqualTo(null); + assertThat(redissync1.set(ClusterTestSettings.KEY_B, "value")).isEqualTo("OK"); + + RedisFuture resultMoved = redis1.async().set(ClusterTestSettings.KEY_A, value); + + assertThatThrownBy(() -> TestFutures.awaitOrTimeout(resultMoved)).hasMessageContaining("MOVED 15495"); + + clusterClient.reloadPartitions(); + RedisAdvancedClusterCommands connection = clusterClient.connect().sync(); + + assertThat(connection.set(ClusterTestSettings.KEY_A, value)).isEqualTo("OK"); + assertThat(connection.set(ClusterTestSettings.KEY_B, "myValue2")).isEqualTo("OK"); + assertThat(connection.set(ClusterTestSettings.KEY_D, "myValue2")).isEqualTo("OK"); + + connection.getStatefulConnection().close(); + } + + @Test + void testReset() { + + clusterClient.reloadPartitions(); + StatefulRedisClusterConnection connection = clusterClient.connect(); + + connection.sync().set(ClusterTestSettings.KEY_A, value); + connection.reset(); + + assertThat(connection.sync().set(ClusterTestSettings.KEY_A, value)).isEqualTo("OK"); + connection.close(); + } + + @Test + @SuppressWarnings({ "rawtypes" }) + void testClusterCommandRedirection() { + + RedisAdvancedClusterCommands connection = clusterClient.connect().sync(); + + // Command on node within the default connection + assertThat(connection.set(ClusterTestSettings.KEY_B, value)).isEqualTo("OK"); + + // gets redirection to node 3 + assertThat(connection.set(ClusterTestSettings.KEY_A, value)).isEqualTo("OK"); +
connection.getStatefulConnection().close(); + } + + @Test + @SuppressWarnings({ "rawtypes" }) + void testClusterRedirection() { + + RedisAdvancedClusterAsyncCommands connection = clusterClient.connect().async(); + Partitions partitions = clusterClient.getPartitions(); + + for (RedisClusterNode partition : partitions) { + partition.setSlots(Collections.emptyList()); + if (partition.getFlags().contains(RedisClusterNode.NodeFlag.MYSELF)) { + partition.setSlots(IntStream.range(0, SlotHash.SLOT_COUNT).boxed().collect(Collectors.toList())); + } + } + partitions.updateCache(); + + // appropriate cluster node + RedisFuture setB = connection.set(ClusterTestSettings.KEY_B, value); + + assertThat(setB.toCompletableFuture()).isInstanceOf(AsyncCommand.class); + + TestFutures.awaitOrTimeout(setB); + assertThat(setB.getError()).isNull(); + assertThat(TestFutures.getOrTimeout(setB)).isEqualTo("OK"); + + // gets redirection to node 3 + RedisFuture setA = connection.set(ClusterTestSettings.KEY_A, value); + + assertThat((CompletionStage) setA).isInstanceOf(AsyncCommand.class); + + TestFutures.awaitOrTimeout(setA); + assertThat(setA.getError()).isNull(); + assertThat(TestFutures.getOrTimeout(setA)).isEqualTo("OK"); + + connection.getStatefulConnection().close(); + } + + @Test + @SuppressWarnings({ "rawtypes" }) + void testClusterRedirectionLimit() throws Exception { + + clusterClient.setOptions(ClusterClientOptions.builder().maxRedirects(0).build()); + RedisAdvancedClusterAsyncCommands connection = clusterClient.connect().async(); + Partitions partitions = clusterClient.getPartitions(); + + for (RedisClusterNode partition : partitions) { + + if (partition.getSlots().contains(15495)) { + partition.setSlots(Collections.emptyList()); + } else { + partition.setSlots(IntStream.range(0, SlotHash.SLOT_COUNT).boxed().collect(Collectors.toList())); + } + + } + partitions.updateCache(); + + // gets redirection to node 3 + RedisFuture setA = connection.set(ClusterTestSettings.KEY_A, value); + + assertThat(setA instanceof AsyncCommand).isTrue(); + + setA.await(10, TimeUnit.SECONDS); + assertThat(setA.getError()).isEqualTo("MOVED 15495 127.0.0.1:7380"); + + connection.getStatefulConnection().close(); + } + + @Test + void closeConnection() { + + RedisAdvancedClusterCommands connection = clusterClient.connect().sync(); + + List time = connection.time(); + assertThat(time).hasSize(2); + + connection.getStatefulConnection().close(); + + assertThatThrownBy(connection::time).isInstanceOf(RedisException.class); + } + + @Test + void clusterAuth() { + + RedisClusterClient clusterClient = RedisClusterClient.create(TestClientResources.get(), + RedisURI.Builder.redis(TestSettings.host(), ClusterTestSettings.port7).withPassword("foobared").build()); + + StatefulRedisClusterConnection connection = clusterClient.connect(); + RedisAdvancedClusterCommands sync = connection.sync(); + + List time = sync.time(); + assertThat(time).hasSize(2); + + TestFutures.awaitOrTimeout(connection.async().quit()); + + Wait.untilTrue(connection::isOpen).waitOrTimeout(); + + time = sync.time(); + assertThat(time).hasSize(2); + + connection.close(); + FastShutdown.shutdown(clusterClient); + } + + @Test + void partitionRetrievalShouldFail() { + + RedisClusterClient clusterClient = RedisClusterClient.create(TestClientResources.get(), + RedisURI.Builder.redis(TestSettings.host(), ClusterTestSettings.port7).build()); + + assertThatThrownBy(clusterClient::getPartitions).isInstanceOf(RedisException.class) + .hasMessageContaining("Cannot obtain initial Redis 
Cluster topology"); + + FastShutdown.shutdown(clusterClient); + } + + @Test + void clusterNeedsAuthButNotSupplied() { + + RedisClusterClient clusterClient = RedisClusterClient.create(TestClientResources.get(), + RedisURI.Builder.redis(TestSettings.host(), ClusterTestSettings.port7).build()); + + try { + assertThatThrownBy(clusterClient::connect).isInstanceOf(RedisException.class); + } finally { + connection.close(); + FastShutdown.shutdown(clusterClient); + } + } + + @Test + void noClusterNodeAvailable() { + + RedisClusterClient clusterClient = RedisClusterClient.create(TestClientResources.get(), + RedisURI.Builder.redis(host, 40400).build()); + try { + clusterClient.connect(); + fail("Missing RedisException"); + } catch (RedisException e) { + assertThat(e).isInstanceOf(RedisException.class); + } finally { + FastShutdown.shutdown(clusterClient); + } + } + + @Test + void getClusterNodeConnection() { + + RedisClusterNode redis1Node = getOwnPartition(redissync2); + + RedisClusterCommands connection = sync.getConnection(TestSettings.hostAddr(), + ClusterTestSettings.port2); + + String result = connection.clusterMyId(); + assertThat(result).isEqualTo(redis1Node.getNodeId()); + + } + + @Test + void operateOnNodeConnection() { + + sync.set(ClusterTestSettings.KEY_A, value); + sync.set(ClusterTestSettings.KEY_B, "d"); + + StatefulRedisConnection statefulRedisConnection = connection.getConnection(TestSettings.hostAddr(), + ClusterTestSettings.port2); + + RedisClusterCommands connection = statefulRedisConnection.sync(); + + assertThat(connection.get(ClusterTestSettings.KEY_A)).isEqualTo(value); + try { + connection.get(ClusterTestSettings.KEY_B); + fail("missing RedisCommandExecutionException: MOVED"); + } catch (RedisException e) { + assertThat(e).hasMessageContaining("MOVED"); + } + } + + @Test + void testGetConnectionAsyncByNodeId() { + + RedisClusterNode partition = connection.getPartitions().getPartition(0); + + StatefulRedisConnection node = TestFutures + .getOrTimeout(connection.getConnectionAsync(partition.getNodeId())); + + assertThat(node.sync().ping()).isEqualTo("PONG"); + } + + @Test + void testGetConnectionAsyncByHostAndPort() { + + RedisClusterNode partition = connection.getPartitions().getPartition(0); + + RedisURI uri = partition.getUri(); + StatefulRedisConnection node = connection.getConnectionAsync(uri.getHost(), uri.getPort()).join(); + + assertThat(node.sync().ping()).isEqualTo("PONG"); + } + + @Test + void testStatefulConnection() { + RedisAdvancedClusterAsyncCommands async = connection.async(); + + assertThat(TestFutures.getOrTimeout(async.ping())).isEqualTo("PONG"); + } + + @Test + void getButNoPartitionForSlothash() { + + for (RedisClusterNode redisClusterNode : clusterClient.getPartitions()) { + redisClusterNode.setSlots(new ArrayList<>()); + + } + RedisChannelHandler rch = (RedisChannelHandler) connection; + ClusterDistributionChannelWriter writer = (ClusterDistributionChannelWriter) rch.getChannelWriter(); + writer.setPartitions(clusterClient.getPartitions()); + clusterClient.getPartitions().reload(clusterClient.getPartitions().getPartitions()); + + assertThatThrownBy(() -> sync.get(key)).isInstanceOf(RedisException.class); + } + + @Test + void readOnlyOnCluster() { + + sync.readOnly(); + // commands are dispatched to a different connection, therefore it works for us. 
+ sync.set(ClusterTestSettings.KEY_B, value); + + TestFutures.awaitOrTimeout(connection.async().quit()); + + assertThat(connection).extracting("connectionState").extracting("readOnly").isEqualTo(Boolean.TRUE); + + sync.readWrite(); + + assertThat(connection).extracting("connectionState").extracting("readOnly").isEqualTo(Boolean.FALSE); + RedisClusterClient clusterClient = RedisClusterClient.create(TestClientResources.get(), + RedisURI.Builder.redis(host, 40400).build()); + try { + clusterClient.connect(); + fail("Missing RedisException"); + } catch (RedisException e) { + assertThat(e).isInstanceOf(RedisException.class); + } finally { + FastShutdown.shutdown(clusterClient); + } + } + + @Test + void getKeysInSlot() { + + sync.set(ClusterTestSettings.KEY_A, value); + sync.set(ClusterTestSettings.KEY_B, value); + + List keysA = sync.clusterGetKeysInSlot(ClusterTestSettings.SLOT_A, 10); + assertThat(keysA).isEqualTo(Collections.singletonList(ClusterTestSettings.KEY_A)); + + List keysB = sync.clusterGetKeysInSlot(ClusterTestSettings.SLOT_B, 10); + assertThat(keysB).isEqualTo(Collections.singletonList(ClusterTestSettings.KEY_B)); + + } + + @Test + void countKeysInSlot() { + + sync.set(ClusterTestSettings.KEY_A, value); + sync.set(ClusterTestSettings.KEY_B, value); + + Long result = sync.clusterCountKeysInSlot(ClusterTestSettings.SLOT_A); + assertThat(result).isEqualTo(1L); + + result = sync.clusterCountKeysInSlot(ClusterTestSettings.SLOT_B); + assertThat(result).isEqualTo(1L); + + int slotZZZ = SlotHash.getSlot("ZZZ".getBytes()); + result = sync.clusterCountKeysInSlot(slotZZZ); + assertThat(result).isEqualTo(0L); + + } + + @Test + void testClusterCountFailureReports() { + RedisClusterNode ownPartition = getOwnPartition(redissync1); + assertThat(redissync1.clusterCountFailureReports(ownPartition.getNodeId())).isGreaterThanOrEqualTo(0); + } + + @Test + void testClusterKeyslot() { + assertThat(redissync1.clusterKeyslot(ClusterTestSettings.KEY_A)).isEqualTo(ClusterTestSettings.SLOT_A); + assertThat(SlotHash.getSlot(ClusterTestSettings.KEY_A)).isEqualTo(ClusterTestSettings.SLOT_A); + } + + @Test + void testClusterSaveconfig() { + assertThat(redissync1.clusterSaveconfig()).isEqualTo("OK"); + } + + @Test + void testClusterSetConfigEpoch() { + try { + redissync1.clusterSetConfigEpoch(1L); + } catch (RedisException e) { + assertThat(e).hasMessageContaining("ERR The user can assign a config epoch only"); + } + } + + @Test + void testReadFrom() { + StatefulRedisClusterConnection statefulConnection = connection; + + assertThat(statefulConnection.getReadFrom()).isEqualTo(ReadFrom.MASTER); + + statefulConnection.setReadFrom(ReadFrom.NEAREST); + assertThat(statefulConnection.getReadFrom()).isEqualTo(ReadFrom.NEAREST); + } + + @Test + void testReadFromNull() { + assertThatThrownBy(() -> connection.setReadFrom(null)).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void testPfmerge() { + + RedisAdvancedClusterCommands connection = clusterClient.connect().sync(); + + assertThat(SlotHash.getSlot("key2660")).isEqualTo(SlotHash.getSlot("key7112")).isEqualTo(SlotHash.getSlot("key8885")); + + connection.pfadd("key2660", "rand", "mat"); + connection.pfadd("key7112", "mat", "perrin"); + + connection.pfmerge("key8885", "key2660", "key7112"); + + assertThat(connection.pfcount("key8885")).isEqualTo(3); + + connection.getStatefulConnection().close(); + } +} diff --git a/src/test/java/io/lettuce/core/cluster/RedisClusterPasswordSecuredSslIntegrationTests.java 
b/src/test/java/io/lettuce/core/cluster/RedisClusterPasswordSecuredSslIntegrationTests.java new file mode 100644 index 0000000000..11e0b76a70 --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/RedisClusterPasswordSecuredSslIntegrationTests.java @@ -0,0 +1,171 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import static io.lettuce.test.settings.TestSettings.host; +import static io.lettuce.test.settings.TestSettings.hostAddr; +import static org.assertj.core.api.Assertions.assertThat; +import static org.junit.jupiter.api.Assumptions.assumeTrue; + +import java.util.List; +import java.util.stream.Collectors; + +import org.junit.jupiter.api.AfterAll; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; + +import io.lettuce.core.*; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; +import io.lettuce.core.cluster.api.sync.Executions; +import io.lettuce.core.cluster.api.sync.RedisAdvancedClusterCommands; +import io.lettuce.test.CanConnect; +import io.lettuce.test.resource.FastShutdown; +import io.lettuce.test.resource.TestClientResources; + +/** + * @author Mark Paluch + */ +class RedisClusterPasswordSecuredSslIntegrationTests extends TestSupport { + + private static final int CLUSTER_PORT_SSL_1 = 7443; + private static final int CLUSTER_PORT_SSL_2 = 7444; + private static final int CLUSTER_PORT_SSL_3 = 7445; + + private static final String SLOT_1_KEY = "8HMdi"; + private static final String SLOT_16352_KEY = "UyAa4KqoWgPGKa"; + + private static RedisURI redisURI = RedisURI.Builder.redis(host(), CLUSTER_PORT_SSL_1).withPassword("foobared").withSsl(true) + .withVerifyPeer(false).build(); + private static RedisClusterClient redisClient = RedisClusterClient.create(TestClientResources.get(), redisURI); + + @BeforeEach + void before() { + assumeTrue(CanConnect.to(host(), CLUSTER_PORT_SSL_1), "Assume that stunnel runs on port 7443"); + assumeTrue(CanConnect.to(host(), CLUSTER_PORT_SSL_2), "Assume that stunnel runs on port 7444"); + assumeTrue(CanConnect.to(host(), CLUSTER_PORT_SSL_3), "Assume that stunnel runs on port 7445"); + assumeTrue(CanConnect.to(host(), 7479), "Assume that Redis runs on port 7479"); + assumeTrue(CanConnect.to(host(), 7480), "Assume that Redis runs on port 7480"); + assumeTrue(CanConnect.to(host(), 7481), "Assume that Redis runs on port 7481"); + } + + @AfterAll + static void afterClass() { + FastShutdown.shutdown(redisClient); + } + + @Test + void defaultClusterConnectionShouldWork() { + + StatefulRedisClusterConnection connection = redisClient.connect(); + assertThat(connection.sync().ping()).isEqualTo("PONG"); + + connection.close(); + } + + @Test + void partitionViewShouldContainClusterPorts() { + + StatefulRedisClusterConnection connection = redisClient.connect(); + List ports = connection.getPartitions().stream().map(redisClusterNode -> 
redisClusterNode.getUri().getPort()) + .collect(Collectors.toList()); + connection.close(); + + assertThat(ports).contains(CLUSTER_PORT_SSL_1, CLUSTER_PORT_SSL_2, CLUSTER_PORT_SSL_3); + } + + @Test + void routedOperationsAreWorking() { + + StatefulRedisClusterConnection connection = redisClient.connect(); + RedisAdvancedClusterCommands sync = connection.sync(); + + sync.set(SLOT_1_KEY, "value1"); + sync.set(SLOT_16352_KEY, "value2"); + + assertThat(sync.get(SLOT_1_KEY)).isEqualTo("value1"); + assertThat(sync.get(SLOT_16352_KEY)).isEqualTo("value2"); + + connection.close(); + } + + @Test + void nodeConnectionsShouldWork() { + + StatefulRedisClusterConnection connection = redisClient.connect(); + + // replica + StatefulRedisConnection node2Connection = connection.getConnection(hostAddr(), 7444); + + try { + node2Connection.sync().get(SLOT_1_KEY); + } catch (RedisCommandExecutionException e) { + assertThat(e).hasMessage("MOVED 1 127.0.0.1:7443"); + } + + connection.close(); + } + + @Test + void nodeSelectionApiShouldWork() { + + StatefulRedisClusterConnection connection = redisClient.connect(); + + Executions ping = connection.sync().all().commands().ping(); + assertThat(ping).hasSize(3).contains("PONG"); + + connection.close(); + } + + @Test + void connectionWithoutPasswordShouldFail() { + + RedisURI redisURI = RedisURI.Builder.redis(host(), CLUSTER_PORT_SSL_1).withSsl(true).withVerifyPeer(false).build(); + RedisClusterClient redisClusterClient = RedisClusterClient.create(TestClientResources.get(), redisURI); + + try { + redisClusterClient.reloadPartitions(); + } catch (RedisException e) { + assertThat(e).hasMessageContaining("Cannot reload Redis Cluster topology"); + } finally { + FastShutdown.shutdown(redisClusterClient); + } + } + + @Test + void connectionWithoutPasswordShouldFail2() { + + RedisURI redisURI = RedisURI.Builder.redis(host(), CLUSTER_PORT_SSL_1).withSsl(true).withVerifyPeer(false).build(); + RedisClusterClient redisClusterClient = RedisClusterClient.create(TestClientResources.get(), redisURI); + + try { + redisClusterClient.connect(); + } catch (RedisConnectionException e) { + assertThat(e).hasMessageContaining("Unable to establish a connection to Redis Cluster"); + } finally { + FastShutdown.shutdown(redisClusterClient); + } + } + + @Test + void clusterNodeRefreshWorksForMultipleIterations() { + + redisClient.reloadPartitions(); + redisClient.reloadPartitions(); + redisClient.reloadPartitions(); + redisClient.reloadPartitions(); + } +} diff --git a/src/test/java/io/lettuce/core/cluster/RedisClusterReadFromIntegrationTests.java b/src/test/java/io/lettuce/core/cluster/RedisClusterReadFromIntegrationTests.java new file mode 100644 index 0000000000..0eec456274 --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/RedisClusterReadFromIntegrationTests.java @@ -0,0 +1,115 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.cluster; + +import static org.assertj.core.api.Assertions.assertThat; + +import javax.inject.Inject; + +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.core.ReadFrom; +import io.lettuce.core.TestSupport; +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; +import io.lettuce.core.cluster.api.sync.RedisAdvancedClusterCommands; +import io.lettuce.test.LettuceExtension; + +/** + * @author Mark Paluch + */ +@SuppressWarnings("unchecked") +@ExtendWith(LettuceExtension.class) +class RedisClusterReadFromIntegrationTests extends TestSupport { + + private final RedisClusterClient clusterClient; + private StatefulRedisClusterConnection connection; + private RedisAdvancedClusterCommands sync; + + @Inject + RedisClusterReadFromIntegrationTests(RedisClusterClient clusterClient) { + this.clusterClient = clusterClient; + } + + @BeforeEach + void before() { + connection = clusterClient.connect(); + sync = connection.sync(); + } + + @AfterEach + void after() { + connection.close(); + } + + @Test + void defaultTest() { + assertThat(connection.getReadFrom()).isEqualTo(ReadFrom.MASTER); + } + + @Test + void readWriteMaster() { + + connection.setReadFrom(ReadFrom.MASTER); + + sync.set(key, value); + assertThat(sync.get(key)).isEqualTo(value); + } + + @Test + void readWriteMasterPreferred() { + + connection.setReadFrom(ReadFrom.MASTER_PREFERRED); + + sync.set(key, value); + assertThat(sync.get(key)).isEqualTo(value); + } + + @Test + void readWriteReplica() { + + connection.setReadFrom(ReadFrom.REPLICA); + + sync.set(key, "value1"); + + connection.getConnection(ClusterTestSettings.host, ClusterTestSettings.port2).sync().waitForReplication(1, 1000); + assertThat(sync.get(key)).isEqualTo("value1"); + } + + @Test + void readWriteReplicaPreferred() { + + connection.setReadFrom(ReadFrom.REPLICA_PREFERRED); + + sync.set(key, "value1"); + + connection.getConnection(ClusterTestSettings.host, ClusterTestSettings.port2).sync().waitForReplication(1, 1000); + assertThat(sync.get(key)).isEqualTo("value1"); + } + + @Test + void readWriteNearest() { + + connection.setReadFrom(ReadFrom.NEAREST); + + sync.set(key, "value1"); + + connection.getConnection(ClusterTestSettings.host, ClusterTestSettings.port2).sync().waitForReplication(1, 1000); + assertThat(sync.get(key)).isEqualTo("value1"); + } +} diff --git a/src/test/java/io/lettuce/core/cluster/RedisClusterSetupTest.java b/src/test/java/io/lettuce/core/cluster/RedisClusterSetupTest.java new file mode 100644 index 0000000000..459251e6cc --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/RedisClusterSetupTest.java @@ -0,0 +1,578 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.cluster; + +import static io.lettuce.core.cluster.ClusterTestSettings.createSlots; +import static io.lettuce.core.cluster.ClusterTestUtil.getNodeId; +import static io.lettuce.core.cluster.ClusterTestUtil.getOwnPartition; +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; + +import java.util.ArrayList; +import java.util.Arrays; +import java.util.List; +import java.util.concurrent.TimeUnit; +import java.util.function.Supplier; +import java.util.stream.Collectors; + +import org.junit.*; + +import io.lettuce.category.SlowTests; +import io.lettuce.core.*; +import io.lettuce.core.api.async.RedisAsyncCommands; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; +import io.lettuce.core.cluster.api.async.RedisAdvancedClusterAsyncCommands; +import io.lettuce.core.cluster.api.async.RedisClusterAsyncCommands; +import io.lettuce.core.cluster.api.sync.RedisAdvancedClusterCommands; +import io.lettuce.core.cluster.api.sync.RedisClusterCommands; +import io.lettuce.core.cluster.models.partitions.ClusterPartitionParser; +import io.lettuce.core.cluster.models.partitions.Partitions; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; +import io.lettuce.test.ConnectionTestUtil; +import io.lettuce.test.TestFutures; +import io.lettuce.test.Wait; +import io.lettuce.test.resource.DefaultRedisClient; +import io.lettuce.test.resource.FastShutdown; +import io.lettuce.test.resource.TestClientResources; +import io.lettuce.test.settings.TestSettings; + +/** + * Test for mutable cluster setup scenarios. + * + * @author Mark Paluch + * @since 3.0 + */ +@SuppressWarnings({ "unchecked" }) +@SlowTests +public class RedisClusterSetupTest extends TestSupport { + + private static final String host = TestSettings.hostAddr(); + + private static final ClusterTopologyRefreshOptions PERIODIC_REFRESH_ENABLED = ClusterTopologyRefreshOptions.builder() + .enablePeriodicRefresh(1, TimeUnit.SECONDS).dynamicRefreshSources(false).build(); + + private static RedisClusterClient clusterClient; + private static RedisClient client = DefaultRedisClient.get(); + + private RedisCommands redis1; + private RedisCommands redis2; + + @Rule + public ClusterRule clusterRule = new ClusterRule(clusterClient, ClusterTestSettings.port5, ClusterTestSettings.port6); + + @BeforeClass + public static void setupClient() { + clusterClient = RedisClusterClient.create(TestClientResources.get(), + RedisURI.Builder.redis(host, ClusterTestSettings.port5).build()); + } + + @AfterClass + public static void shutdownClient() { + FastShutdown.shutdown(clusterClient); + } + + @Before + public void openConnection() { + redis1 = client.connect(RedisURI.Builder.redis(ClusterTestSettings.host, ClusterTestSettings.port5).build()).sync(); + redis2 = client.connect(RedisURI.Builder.redis(ClusterTestSettings.host, ClusterTestSettings.port6).build()).sync(); + clusterRule.clusterReset(); + } + + @After + public void closeConnection() { + redis1.getStatefulConnection().close(); + redis2.getStatefulConnection().close(); + } + + @Test + public void clusterMeet() { + + clusterRule.clusterReset(); + + Partitions partitionsBeforeMeet = ClusterPartitionParser.parse(redis1.clusterNodes()); + assertThat(partitionsBeforeMeet.getPartitions()).hasSize(1); + + String result = redis1.clusterMeet(host, ClusterTestSettings.port6); + assertThat(result).isEqualTo("OK"); + + Wait.untilEquals(2, () -> 
ClusterPartitionParser.parse(redis1.clusterNodes()).size()).waitOrTimeout(); + + Partitions partitionsAfterMeet = ClusterPartitionParser.parse(redis1.clusterNodes()); + assertThat(partitionsAfterMeet.getPartitions()).hasSize(2); + } + + @Test + public void clusterForget() { + + clusterRule.clusterReset(); + + String result = redis1.clusterMeet(host, ClusterTestSettings.port6); + assertThat(result).isEqualTo("OK"); + Wait.untilTrue(() -> redis1.clusterNodes().contains(redis2.clusterMyId())).waitOrTimeout(); + Wait.untilTrue(() -> redis2.clusterNodes().contains(redis1.clusterMyId())).waitOrTimeout(); + Wait.untilTrue(() -> { + Partitions partitions = ClusterPartitionParser.parse(redis1.clusterNodes()); + if (partitions.size() != 2) { + return false; + } + for (RedisClusterNode redisClusterNode : partitions) { + if (redisClusterNode.is(RedisClusterNode.NodeFlag.HANDSHAKE)) { + return false; + } + } + return true; + }).waitOrTimeout(); + + redis1.clusterForget(redis2.clusterMyId()); + + Wait.untilEquals(1, () -> ClusterPartitionParser.parse(redis1.clusterNodes()).size()).waitOrTimeout(); + + Partitions partitionsAfterForget = ClusterPartitionParser.parse(redis1.clusterNodes()); + assertThat(partitionsAfterForget.getPartitions()).hasSize(1); + } + + @Test + public void clusterDelSlots() { + + ClusterSetup.setup2Masters(clusterRule); + + redis1.clusterDelSlots(1, 2, 5, 6); + + Wait.untilEquals(11996, () -> getOwnPartition(redis1).getSlots().size()).waitOrTimeout(); + } + + @Test + public void clusterSetSlots() { + + ClusterSetup.setup2Masters(clusterRule); + + redis1.clusterSetSlotNode(6, getNodeId(redis2)); + + waitForSlots(redis1, 11999); + waitForSlots(redis2, 4384); + + Partitions partitions = ClusterPartitionParser.parse(redis1.clusterNodes()); + for (RedisClusterNode redisClusterNode : partitions.getPartitions()) { + if (redisClusterNode.getFlags().contains(RedisClusterNode.NodeFlag.MYSELF)) { + assertThat(redisClusterNode.getSlots()).contains(1, 2, 3, 4, 5).doesNotContain(6); + } + } + } + + @Test + public void clusterSlotMigrationImport() { + + ClusterSetup.setup2Masters(clusterRule); + + String nodeId2 = getNodeId(redis2); + assertThat(redis1.clusterSetSlotMigrating(6, nodeId2)).isEqualTo("OK"); + assertThat(redis1.clusterSetSlotImporting(15000, nodeId2)).isEqualTo("OK"); + + assertThat(redis1.clusterSetSlotStable(6)).isEqualTo("OK"); + } + + @Test + public void clusterTopologyRefresh() { + + clusterClient.setOptions(ClusterClientOptions.builder().topologyRefreshOptions(PERIODIC_REFRESH_ENABLED).build()); + clusterClient.reloadPartitions(); + + RedisAdvancedClusterAsyncCommands clusterConnection = clusterClient.connect().async(); + assertThat(clusterClient.getPartitions()).hasSize(1); + + ClusterSetup.setup2Masters(clusterRule); + assertThat(clusterClient.getPartitions()).hasSize(2); + + clusterConnection.getStatefulConnection().close(); + } + + @Test + public void changeTopologyWhileOperations() throws Exception { + + ClusterSetup.setup2Masters(clusterRule); + + ClusterTopologyRefreshOptions clusterTopologyRefreshOptions = ClusterTopologyRefreshOptions.builder() + .enableAllAdaptiveRefreshTriggers().build(); + + clusterClient.setOptions(ClusterClientOptions.builder().topologyRefreshOptions(clusterTopologyRefreshOptions).build()); + StatefulRedisClusterConnection connection = clusterClient.connect(); + RedisAdvancedClusterCommands sync = connection.sync(); + RedisAdvancedClusterAsyncCommands async = connection.async(); + + Partitions partitions = connection.getPartitions(); + 
assertThat(partitions.getPartitionBySlot(0).getSlots().size()).isEqualTo(12000); + assertThat(partitions.getPartitionBySlot(16380).getSlots().size()).isEqualTo(4384); + assertRoutedExecution(async); + + sync.del("A"); + sync.del("t"); + sync.del("p"); + + shiftAllSlotsToNode1(); + assertRoutedExecution(async); + + Wait.untilTrue(() -> { + if (clusterClient.getPartitions().size() == 2) { + for (RedisClusterNode redisClusterNode : clusterClient.getPartitions()) { + if (redisClusterNode.getSlots().size() > 16380) { + return true; + } + } + } + + return false; + }).waitOrTimeout(); + + assertThat(partitions.getPartitionBySlot(0).getSlots().size()).isEqualTo(16384); + + assertThat(sync.get("A")).isEqualTo("value"); + assertThat(sync.get("t")).isEqualTo("value"); + assertThat(sync.get("p")).isEqualTo("value"); + + async.getStatefulConnection().close(); + } + + @Test + public void slotMigrationShouldUseAsking() { + + ClusterSetup.setup2Masters(clusterRule); + + StatefulRedisClusterConnection connection = clusterClient.connect(); + + RedisAdvancedClusterCommands sync = connection.sync(); + RedisAdvancedClusterAsyncCommands async = connection.async(); + + Partitions partitions = connection.getPartitions(); + assertThat(partitions.getPartitionBySlot(0).getSlots().size()).isEqualTo(12000); + assertThat(partitions.getPartitionBySlot(16380).getSlots().size()).isEqualTo(4384); + + redis1.clusterSetSlotMigrating(3300, redis2.clusterMyId()); + redis2.clusterSetSlotImporting(3300, redis1.clusterMyId()); + + assertThat(sync.get("b")).isNull(); + + async.getStatefulConnection().close(); + } + + @Test + public void disconnectedConnectionRejectTest() throws Exception { + + clusterClient.setOptions(ClusterClientOptions.builder().topologyRefreshOptions(PERIODIC_REFRESH_ENABLED) + .disconnectedBehavior(ClientOptions.DisconnectedBehavior.REJECT_COMMANDS).build()); + RedisAdvancedClusterAsyncCommands clusterConnection = clusterClient.connect().async(); + clusterClient.setOptions(ClusterClientOptions.builder() + .disconnectedBehavior(ClientOptions.DisconnectedBehavior.REJECT_COMMANDS).build()); + ClusterSetup.setup2Masters(clusterRule); + + assertRoutedExecution(clusterConnection); + + RedisClusterNode partition1 = getOwnPartition(redis1); + RedisClusterAsyncCommands node1Connection = clusterConnection.getConnection(partition1.getUri() + .getHost(), partition1.getUri().getPort()); + + shiftAllSlotsToNode1(); + + suspendConnection(node1Connection); + + RedisFuture set = clusterConnection.set("t", "value"); // 15891 + + set.await(5, TimeUnit.SECONDS); + + assertThatThrownBy(() -> TestFutures.awaitOrTimeout(set)).hasRootCauseInstanceOf(RedisException.class); + clusterConnection.getStatefulConnection().close(); + } + + @Test + public void atLeastOnceForgetNodeFailover() throws Exception { + + clusterClient.setOptions(ClusterClientOptions.builder().topologyRefreshOptions(PERIODIC_REFRESH_ENABLED).build()); + RedisAdvancedClusterAsyncCommands clusterConnection = clusterClient.connect().async(); + clusterClient.setOptions(ClusterClientOptions.create()); + ClusterSetup.setup2Masters(clusterRule); + + assertRoutedExecution(clusterConnection); + + RedisClusterNode partition1 = getOwnPartition(redis1); + RedisClusterNode partition2 = getOwnPartition(redis2); + RedisClusterAsyncCommands node1Connection = clusterConnection.getConnection(partition1.getUri() + .getHost(), partition1.getUri().getPort()); + + RedisClusterAsyncCommands node2Connection = clusterConnection.getConnection(partition2.getUri() + .getHost(), 
partition2.getUri().getPort()); + + shiftAllSlotsToNode1(); + + suspendConnection(node2Connection); + + List> futures = new ArrayList<>(); + + futures.add(clusterConnection.set("t", "value")); // 15891 + futures.add(clusterConnection.set("p", "value")); // 16023 + + clusterConnection.set("A", "value").get(1, TimeUnit.SECONDS); // 6373 + + for (RedisFuture future : futures) { + assertThat(future.isDone()).isFalse(); + assertThat(future.isCancelled()).isFalse(); + } + redis1.clusterForget(partition2.getNodeId()); + redis2.clusterForget(partition1.getNodeId()); + + Partitions partitions = clusterClient.getPartitions(); + partitions.remove(partition2); + partitions.getPartition(0).setSlots(Arrays.stream(createSlots(0, 16384)).boxed().collect(Collectors.toList())); + partitions.updateCache(); + clusterClient.updatePartitionsInConnections(); + + Wait.untilTrue(() -> TestFutures.areAllDone(futures)).waitOrTimeout(); + + assertRoutedExecution(clusterConnection); + + clusterConnection.getStatefulConnection().close(); + } + + @Test + public void expireStaleNodeIdConnections() throws Exception { + + clusterClient.setOptions(ClusterClientOptions.builder().topologyRefreshOptions(PERIODIC_REFRESH_ENABLED).build()); + RedisAdvancedClusterAsyncCommands clusterConnection = clusterClient.connect().async(); + + ClusterSetup.setup2Masters(clusterRule); + + PooledClusterConnectionProvider clusterConnectionProvider = getPooledClusterConnectionProvider(clusterConnection); + + assertThat(clusterConnectionProvider.getConnectionCount()).isEqualTo(0); + + assertRoutedExecution(clusterConnection); + + assertThat(clusterConnectionProvider.getConnectionCount()).isEqualTo(2); + + Partitions partitions = ClusterPartitionParser.parse(redis1.clusterNodes()); + for (RedisClusterNode redisClusterNode : partitions.getPartitions()) { + if (!redisClusterNode.getFlags().contains(RedisClusterNode.NodeFlag.MYSELF)) { + redis1.clusterForget(redisClusterNode.getNodeId()); + } + } + + partitions = ClusterPartitionParser.parse(redis2.clusterNodes()); + for (RedisClusterNode redisClusterNode : partitions.getPartitions()) { + if (!redisClusterNode.getFlags().contains(RedisClusterNode.NodeFlag.MYSELF)) { + redis2.clusterForget(redisClusterNode.getNodeId()); + } + } + + Wait.untilEquals(1, () -> clusterClient.getPartitions().size()).waitOrTimeout(); + Wait.untilEquals(1, () -> clusterConnectionProvider.getConnectionCount()).waitOrTimeout(); + + clusterConnection.getStatefulConnection().close(); + + } + + private void assertRoutedExecution(RedisClusterAsyncCommands clusterConnection) { + assertExecuted(clusterConnection.set("A", "value")); // 6373 + assertExecuted(clusterConnection.set("t", "value")); // 15891 + assertExecuted(clusterConnection.set("p", "value")); // 16023 + } + + @Test + public void doNotExpireStaleNodeIdConnections() throws Exception { + + clusterClient.setOptions(ClusterClientOptions.builder() + .topologyRefreshOptions(ClusterTopologyRefreshOptions.builder().closeStaleConnections(false).build()).build()); + RedisAdvancedClusterAsyncCommands clusterConnection = clusterClient.connect().async(); + + ClusterSetup.setup2Masters(clusterRule); + + PooledClusterConnectionProvider clusterConnectionProvider = getPooledClusterConnectionProvider(clusterConnection); + + assertThat(clusterConnectionProvider.getConnectionCount()).isEqualTo(0); + + assertRoutedExecution(clusterConnection); + + assertThat(clusterConnectionProvider.getConnectionCount()).isEqualTo(2); + + Partitions partitions = 
ClusterPartitionParser.parse(redis1.clusterNodes()); + for (RedisClusterNode redisClusterNode : partitions.getPartitions()) { + if (!redisClusterNode.getFlags().contains(RedisClusterNode.NodeFlag.MYSELF)) { + redis1.clusterForget(redisClusterNode.getNodeId()); + } + } + + partitions = ClusterPartitionParser.parse(redis2.clusterNodes()); + for (RedisClusterNode redisClusterNode : partitions.getPartitions()) { + if (!redisClusterNode.getFlags().contains(RedisClusterNode.NodeFlag.MYSELF)) { + redis2.clusterForget(redisClusterNode.getNodeId()); + } + } + + clusterClient.reloadPartitions(); + + assertThat(clusterConnectionProvider.getConnectionCount()).isEqualTo(2); + + clusterConnection.getStatefulConnection().close(); + + } + + @Test + public void expireStaleHostAndPortConnections() throws Exception { + + clusterClient.setOptions(ClusterClientOptions.builder().build()); + RedisAdvancedClusterAsyncCommands clusterConnection = clusterClient.connect().async(); + + ClusterSetup.setup2Masters(clusterRule); + + final PooledClusterConnectionProvider clusterConnectionProvider = getPooledClusterConnectionProvider(clusterConnection); + + assertThat(clusterConnectionProvider.getConnectionCount()).isEqualTo(0); + + assertRoutedExecution(clusterConnection); + assertThat(clusterConnectionProvider.getConnectionCount()).isEqualTo(2); + + for (RedisClusterNode redisClusterNode : clusterClient.getPartitions()) { + clusterConnection.getConnection(redisClusterNode.getUri().getHost(), redisClusterNode.getUri().getPort()); + clusterConnection.getConnection(redisClusterNode.getNodeId()); + } + + assertThat(clusterConnectionProvider.getConnectionCount()).isEqualTo(4); + + Partitions partitions = ClusterPartitionParser.parse(redis1.clusterNodes()); + for (RedisClusterNode redisClusterNode : partitions.getPartitions()) { + if (!redisClusterNode.getFlags().contains(RedisClusterNode.NodeFlag.MYSELF)) { + redis1.clusterForget(redisClusterNode.getNodeId()); + } + } + + partitions = ClusterPartitionParser.parse(redis2.clusterNodes()); + for (RedisClusterNode redisClusterNode : partitions.getPartitions()) { + if (!redisClusterNode.getFlags().contains(RedisClusterNode.NodeFlag.MYSELF)) { + redis2.clusterForget(redisClusterNode.getNodeId()); + } + } + + clusterClient.reloadPartitions(); + + Wait.untilEquals(1, () -> clusterClient.getPartitions().size()).waitOrTimeout(); + Wait.untilEquals(2L, () -> clusterConnectionProvider.getConnectionCount()).waitOrTimeout(); + + clusterConnection.getStatefulConnection().close(); + } + + @Test + public void readFromReplicaTest() { + + ClusterSetup.setup2Masters(clusterRule); + RedisAdvancedClusterAsyncCommands clusterConnection = clusterClient.connect().async(); + clusterConnection.getStatefulConnection().setReadFrom(ReadFrom.REPLICA); + + TestFutures.awaitOrTimeout(clusterConnection.set(key, value)); + + try { + clusterConnection.get(key); + } catch (RedisException e) { + assertThat(e).hasMessageContaining("Cannot determine a partition to read for slot"); + } + + clusterConnection.getStatefulConnection().close(); + } + + @Test + public void readFromNearestTest() { + + ClusterSetup.setup2Masters(clusterRule); + RedisAdvancedClusterCommands clusterConnection = clusterClient.connect().sync(); + clusterConnection.getStatefulConnection().setReadFrom(ReadFrom.NEAREST); + + clusterConnection.set(key, value); + + assertThat(clusterConnection.get(key)).isEqualTo(value); + + clusterConnection.getStatefulConnection().close(); + } + + private PooledClusterConnectionProvider 
getPooledClusterConnectionProvider( + RedisAdvancedClusterAsyncCommands clusterAsyncConnection) { + + RedisChannelHandler channelHandler = getChannelHandler(clusterAsyncConnection); + ClusterDistributionChannelWriter writer = (ClusterDistributionChannelWriter) channelHandler.getChannelWriter(); + return (PooledClusterConnectionProvider) writer.getClusterConnectionProvider(); + } + + private RedisChannelHandler getChannelHandler( + RedisAdvancedClusterAsyncCommands clusterAsyncConnection) { + return (RedisChannelHandler) clusterAsyncConnection.getStatefulConnection(); + } + + private void assertExecuted(RedisFuture set) { + TestFutures.awaitOrTimeout(set); + assertThat(set.getError()).isNull(); + assertThat(TestFutures.getOrTimeout(set)).isEqualTo("OK"); + } + + private void suspendConnection(RedisClusterAsyncCommands asyncCommands) { + + ConnectionTestUtil.getConnectionWatchdog(((RedisAsyncCommands) asyncCommands).getStatefulConnection()) + .setReconnectSuspended(true); + asyncCommands.quit(); + + Wait.untilTrue(() -> !asyncCommands.isOpen()).waitOrTimeout(); + } + + private void shiftAllSlotsToNode1() { + + redis1.clusterDelSlots(createSlots(12000, 16384)); + redis2.clusterDelSlots(createSlots(12000, 16384)); + + waitForSlots(redis2, 0); + + RedisClusterNode redis2Partition = getOwnPartition(redis2); + + Wait.untilTrue(new Supplier() { + @Override + public Boolean get() { + Partitions partitions = ClusterPartitionParser.parse(redis1.clusterNodes()); + RedisClusterNode partition = partitions.getPartitionByNodeId(redis2Partition.getNodeId()); + + if (!partition.getSlots().isEmpty()) { + removeRemaining(partition); + } + + return partition.getSlots().size() == 0; + } + + private void removeRemaining(RedisClusterNode partition) { + try { + redis1.clusterDelSlots(toIntArray(partition.getSlots())); + } catch (Exception o_O) { + // ignore + } + } + }).waitOrTimeout(); + + redis1.clusterAddSlots(createSlots(12000, 16384)); + waitForSlots(redis1, 16384); + + Wait.untilTrue(clusterRule::isStable).waitOrTimeout(); + } + + private int[] toIntArray(List list) { + return list.parallelStream().mapToInt(Integer::intValue).toArray(); + } + + private void waitForSlots(RedisClusterCommands connection, int slotCount) { + Wait.untilEquals(slotCount, () -> getOwnPartition(connection).getSlots().size()).waitOrTimeout(); + } +} diff --git a/src/test/java/io/lettuce/core/cluster/RedisClusterStressScenariosTest.java b/src/test/java/io/lettuce/core/cluster/RedisClusterStressScenariosTest.java new file mode 100644 index 0000000000..fc2555c8f8 --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/RedisClusterStressScenariosTest.java @@ -0,0 +1,164 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.cluster; + +import static io.lettuce.core.cluster.ClusterTestUtil.getOwnPartition; +import static org.assertj.core.api.Assertions.assertThat; + +import java.util.Collections; + +import org.apache.logging.log4j.LogManager; +import org.apache.logging.log4j.Logger; +import org.junit.*; +import org.junit.runners.MethodSorters; + +import io.lettuce.category.SlowTests; +import io.lettuce.core.*; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.async.RedisAsyncCommands; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; +import io.lettuce.test.Wait; +import io.lettuce.test.resource.FastShutdown; +import io.lettuce.test.resource.TestClientResources; +import io.lettuce.test.settings.TestSettings; + +@FixMethodOrder(MethodSorters.NAME_ASCENDING) +@SuppressWarnings("unchecked") +@SlowTests +public class RedisClusterStressScenariosTest extends TestSupport { + + private static final String host = TestSettings.hostAddr(); + + private static RedisClient client; + private static RedisClusterClient clusterClient; + + private Logger log = LogManager.getLogger(getClass()); + + private StatefulRedisConnection redis5; + private StatefulRedisConnection redis6; + + private RedisCommands redissync5; + private RedisCommands redissync6; + + protected String key = "key"; + protected String value = "value"; + + @Rule + public ClusterRule clusterRule = new ClusterRule(clusterClient, ClusterTestSettings.port5, ClusterTestSettings.port6); + + @BeforeClass + public static void setupClient() { + client = RedisClient.create(TestClientResources.get(), RedisURI.Builder.redis(host, ClusterTestSettings.port5).build()); + clusterClient = RedisClusterClient.create(TestClientResources.get(), + Collections.singletonList(RedisURI.Builder.redis(host, ClusterTestSettings.port5).build())); + } + + @AfterClass + public static void shutdownClient() { + FastShutdown.shutdown(client); + } + + @Before + public void before() { + + ClusterSetup.setupMasterWithReplica(clusterRule); + + redis5 = client.connect(RedisURI.Builder.redis(host, ClusterTestSettings.port5).build()); + redis6 = client.connect(RedisURI.Builder.redis(host, ClusterTestSettings.port6).build()); + + redissync5 = redis5.sync(); + redissync6 = redis6.sync(); + clusterClient.reloadPartitions(); + + Wait.untilTrue(clusterRule::isStable).waitOrTimeout(); + + } + + @After + public void after() { + redis5.close(); + + redissync5.getStatefulConnection().close(); + redissync6.getStatefulConnection().close(); + } + + @Test + public void testClusterFailover() { + + log.info("Cluster node 5 is master"); + log.info("Cluster nodes seen from node 5:\n" + redissync5.clusterNodes()); + log.info("Cluster nodes seen from node 6:\n" + redissync6.clusterNodes()); + + Wait.untilTrue(() -> getOwnPartition(redissync5).is(RedisClusterNode.NodeFlag.MASTER)).waitOrTimeout(); + Wait.untilTrue(() -> getOwnPartition(redissync6).is(RedisClusterNode.NodeFlag.SLAVE)).waitOrTimeout(); + + String failover = redissync6.clusterFailover(true); + assertThat(failover).isEqualTo("OK"); + + Wait.untilTrue(() -> getOwnPartition(redissync6).is(RedisClusterNode.NodeFlag.MASTER)).waitOrTimeout(); + Wait.untilTrue(() -> getOwnPartition(redissync5).is(RedisClusterNode.NodeFlag.SLAVE)).waitOrTimeout(); + + log.info("Cluster nodes seen from node 5 after clusterFailover:\n" + redissync5.clusterNodes()); + log.info("Cluster nodes seen from node 6 after clusterFailover:\n" + 
redissync6.clusterNodes()); + + RedisClusterNode redis5Node = getOwnPartition(redissync5); + RedisClusterNode redis6Node = getOwnPartition(redissync6); + + assertThat(redis5Node.getFlags()).contains(RedisClusterNode.NodeFlag.SLAVE); + assertThat(redis6Node.getFlags()).contains(RedisClusterNode.NodeFlag.MASTER); + + } + + @Test + public void testClusterConnectionStability() { + + RedisAdvancedClusterAsyncCommandsImpl connection = (RedisAdvancedClusterAsyncCommandsImpl) clusterClient + .connect().async(); + + RedisChannelHandler statefulConnection = (RedisChannelHandler) connection.getStatefulConnection(); + + connection.set("a", "b"); + ClusterDistributionChannelWriter writer = (ClusterDistributionChannelWriter) statefulConnection.getChannelWriter(); + + StatefulRedisConnectionImpl statefulSlotConnection = (StatefulRedisConnectionImpl) writer + .getClusterConnectionProvider().getConnection(ClusterConnectionProvider.Intent.WRITE, 3300); + + final RedisAsyncCommands slotConnection = statefulSlotConnection.async(); + + slotConnection.set("a", "b"); + slotConnection.getStatefulConnection().close(); + + Wait.untilTrue(() -> !slotConnection.isOpen()).waitOrTimeout(); + + assertThat(statefulSlotConnection.isClosed()).isTrue(); + assertThat(statefulSlotConnection.isOpen()).isFalse(); + + assertThat(connection.isOpen()).isTrue(); + assertThat(statefulConnection.isOpen()).isTrue(); + assertThat(statefulConnection.isClosed()).isFalse(); + + try { + connection.set("a", "b"); + } catch (RedisException e) { + assertThat(e).hasMessageContaining("Connection is closed"); + } + + connection.getStatefulConnection().close(); + + } + +} diff --git a/src/test/java/io/lettuce/core/cluster/RedisClusterURIUtilUnitTests.java b/src/test/java/io/lettuce/core/cluster/RedisClusterURIUtilUnitTests.java new file mode 100644 index 0000000000..b7ae06736d --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/RedisClusterURIUtilUnitTests.java @@ -0,0 +1,112 @@ +/* + * Copyright 2016-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.cluster; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.net.URI; +import java.util.List; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.RedisURI; + +/** + * @author Mark Paluch + */ +class RedisClusterURIUtilUnitTests { + + @Test + void testSimpleUri() { + + List redisURIs = RedisClusterURIUtil.toRedisURIs(URI.create("redis://host:7479")); + + assertThat(redisURIs).hasSize(1); + + RedisURI host1 = redisURIs.get(0); + assertThat(host1.getHost()).isEqualTo("host"); + assertThat(host1.getPort()).isEqualTo(7479); + } + + @Test + void testMultipleHosts() { + + List redisURIs = RedisClusterURIUtil.toRedisURIs(URI.create("redis://host1,host2")); + + assertThat(redisURIs).hasSize(2); + + RedisURI host1 = redisURIs.get(0); + assertThat(host1.getHost()).isEqualTo("host1"); + assertThat(host1.getPort()).isEqualTo(6379); + + RedisURI host2 = redisURIs.get(1); + assertThat(host2.getHost()).isEqualTo("host2"); + assertThat(host2.getPort()).isEqualTo(6379); + } + + @Test + void testMultipleHostsWithPorts() { + + List redisURIs = RedisClusterURIUtil.toRedisURIs(URI.create("redis://host1:6379,host2:6380")); + + assertThat(redisURIs).hasSize(2); + + RedisURI host1 = redisURIs.get(0); + assertThat(host1.getHost()).isEqualTo("host1"); + assertThat(host1.getPort()).isEqualTo(6379); + + RedisURI host2 = redisURIs.get(1); + assertThat(host2.getHost()).isEqualTo("host2"); + assertThat(host2.getPort()).isEqualTo(6380); + } + + @Test + void testSslWithPasswordSingleHost() { + + List redisURIs = RedisClusterURIUtil.toRedisURIs(URI.create("redis+tls://password@host1")); + + assertThat(redisURIs).hasSize(1); + + RedisURI host1 = redisURIs.get(0); + assertThat(host1.isSsl()).isTrue(); + assertThat(host1.isStartTls()).isTrue(); + assertThat(new String(host1.getPassword())).isEqualTo("password"); + assertThat(host1.getHost()).isEqualTo("host1"); + assertThat(host1.getPort()).isEqualTo(6379); + } + + @Test + void testSslWithPasswordMultipleHosts() { + + List redisURIs = RedisClusterURIUtil.toRedisURIs(URI.create("redis+tls://password@host1:6379,host2:6380")); + + assertThat(redisURIs).hasSize(2); + + RedisURI host1 = redisURIs.get(0); + assertThat(host1.isSsl()).isTrue(); + assertThat(host1.isStartTls()).isTrue(); + assertThat(new String(host1.getPassword())).isEqualTo("password"); + assertThat(host1.getHost()).isEqualTo("host1"); + assertThat(host1.getPort()).isEqualTo(6379); + + RedisURI host2 = redisURIs.get(1); + assertThat(host2.isSsl()).isTrue(); + assertThat(host2.isStartTls()).isTrue(); + assertThat(new String(host2.getPassword())).isEqualTo("password"); + assertThat(host2.getHost()).isEqualTo("host2"); + assertThat(host2.getPort()).isEqualTo(6380); + } +} diff --git a/src/test/java/io/lettuce/core/cluster/RedisReactiveClusterClientIntegrationTests.java b/src/test/java/io/lettuce/core/cluster/RedisReactiveClusterClientIntegrationTests.java new file mode 100644 index 0000000000..3a16bc3099 --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/RedisReactiveClusterClientIntegrationTests.java @@ -0,0 +1,115 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import static io.lettuce.core.cluster.ClusterTestUtil.getOwnPartition; +import static org.assertj.core.api.Assertions.assertThat; + +import javax.inject.Inject; + +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; + +import reactor.test.StepVerifier; +import io.lettuce.core.TestSupport; +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; +import io.lettuce.core.cluster.api.reactive.RedisAdvancedClusterReactiveCommands; +import io.lettuce.core.cluster.api.sync.RedisAdvancedClusterCommands; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; +import io.lettuce.test.LettuceExtension; + +/** + * @author Mark Paluch + */ +@SuppressWarnings("unchecked") +@ExtendWith(LettuceExtension.class) +class RedisReactiveClusterClientIntegrationTests extends TestSupport { + + private final RedisAdvancedClusterCommands sync; + private final RedisAdvancedClusterReactiveCommands reactive; + + @Inject + RedisReactiveClusterClientIntegrationTests(StatefulRedisClusterConnection connection) { + this.sync = connection.sync(); + this.reactive = connection.reactive(); + } + + @Test + void testClusterCommandRedirection() { + + // Command on node within the default connection + StepVerifier.create(reactive.set(ClusterTestSettings.KEY_B, "myValue1")).expectNext("OK").verifyComplete(); + + // gets redirection to node 3 + StepVerifier.create(reactive.set(ClusterTestSettings.KEY_A, "myValue1")).expectNext("OK").verifyComplete(); + } + + @Test + void getKeysInSlot() { + + sync.flushall(); + + sync.set(ClusterTestSettings.KEY_A, value); + sync.set(ClusterTestSettings.KEY_B, value); + + StepVerifier.create(reactive.clusterGetKeysInSlot(ClusterTestSettings.SLOT_A, 10)) + .expectNext(ClusterTestSettings.KEY_A).verifyComplete(); + StepVerifier.create(reactive.clusterGetKeysInSlot(ClusterTestSettings.SLOT_B, 10)) + .expectNext(ClusterTestSettings.KEY_B).verifyComplete(); + } + + @Test + void countKeysInSlot() { + + sync.flushall(); + + sync.set(ClusterTestSettings.KEY_A, value); + sync.set(ClusterTestSettings.KEY_B, value); + + StepVerifier.create(reactive.clusterCountKeysInSlot(ClusterTestSettings.SLOT_A)).expectNext(1L).verifyComplete(); + StepVerifier.create(reactive.clusterCountKeysInSlot(ClusterTestSettings.SLOT_B)).expectNext(1L).verifyComplete(); + + int slotZZZ = SlotHash.getSlot("ZZZ".getBytes()); + StepVerifier.create(reactive.clusterCountKeysInSlot(slotZZZ)).expectNext(0L).verifyComplete(); + } + + @Test + void testClusterCountFailureReports() { + RedisClusterNode ownPartition = getOwnPartition(sync); + StepVerifier.create(reactive.clusterCountFailureReports(ownPartition.getNodeId())).consumeNextWith(actual -> { + assertThat(actual).isGreaterThanOrEqualTo(0); + }).verifyComplete(); + } + + @Test + void testClusterKeyslot() { + StepVerifier.create(reactive.clusterKeyslot(ClusterTestSettings.KEY_A)).expectNext((long) ClusterTestSettings.SLOT_A) + .verifyComplete(); + assertThat(SlotHash.getSlot(ClusterTestSettings.KEY_A)).isEqualTo(ClusterTestSettings.SLOT_A); + } + + @Test + void 
testClusterSaveconfig() { + StepVerifier.create(reactive.clusterSaveconfig()).expectNext("OK").verifyComplete(); + } + + @Test + void testClusterSetConfigEpoch() { + StepVerifier.create(reactive.clusterSetConfigEpoch(1L)).consumeErrorWith(e -> { + assertThat(e).hasMessageContaining("ERR The user can assign a config epoch only"); + }).verify(); + } +} diff --git a/src/test/java/io/lettuce/core/cluster/RoundRobinSocketAddressSupplierUnitTests.java b/src/test/java/io/lettuce/core/cluster/RoundRobinSocketAddressSupplierUnitTests.java new file mode 100644 index 0000000000..a64fe3599c --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/RoundRobinSocketAddressSupplierUnitTests.java @@ -0,0 +1,121 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.mockito.Mockito.when; + +import java.net.InetSocketAddress; +import java.time.Duration; +import java.util.ArrayList; +import java.util.HashSet; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; +import org.mockito.Mock; +import org.mockito.junit.jupiter.MockitoExtension; + +import io.lettuce.core.RedisURI; +import io.lettuce.core.cluster.models.partitions.Partitions; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; +import io.lettuce.core.resource.ClientResources; +import io.lettuce.core.resource.DnsResolvers; +import io.lettuce.core.resource.SocketAddressResolver; + +/** + * @author Mark Paluch + */ +@ExtendWith(MockitoExtension.class) +class RoundRobinSocketAddressSupplierUnitTests { + + private static RedisURI hap1 = new RedisURI("127.0.0.1", 1, Duration.ofSeconds(1)); + private static RedisURI hap2 = new RedisURI("127.0.0.1", 2, Duration.ofSeconds(1)); + private static RedisURI hap3 = new RedisURI("127.0.0.1", 3, Duration.ofSeconds(1)); + private static RedisURI hap4 = new RedisURI("127.0.0.1", 4, Duration.ofSeconds(1)); + + private static InetSocketAddress addr1 = new InetSocketAddress(hap1.getHost(), hap1.getPort()); + private static InetSocketAddress addr2 = new InetSocketAddress(hap2.getHost(), hap2.getPort()); + private static InetSocketAddress addr3 = new InetSocketAddress(hap3.getHost(), hap3.getPort()); + private static InetSocketAddress addr4 = new InetSocketAddress(hap4.getHost(), hap4.getPort()); + + private static Partitions partitions; + + @Mock + private ClientResources clientResourcesMock; + + + @BeforeEach + void before() { + + when(clientResourcesMock.socketAddressResolver()).thenReturn(SocketAddressResolver.create(DnsResolvers.JVM_DEFAULT)); + + partitions = new Partitions(); + partitions.addPartition(new RedisClusterNode(hap1, "1", true, "", 0, 0, 0, new ArrayList<>(), new HashSet<>())); + partitions.addPartition(new RedisClusterNode(hap2, "2", true, "", 0, 0, 0, new ArrayList<>(), new HashSet<>())); + partitions.addPartition(new 
RedisClusterNode(hap3, "3", true, "", 0, 0, 0, new ArrayList<>(), new HashSet<>())); + + partitions.updateCache(); + } + + @Test + void noOffset() { + + RoundRobinSocketAddressSupplier sut = new RoundRobinSocketAddressSupplier(partitions, + redisClusterNodes -> redisClusterNodes, clientResourcesMock); + + assertThat(sut.get()).isEqualTo(addr1); + assertThat(sut.get()).isEqualTo(addr2); + assertThat(sut.get()).isEqualTo(addr3); + assertThat(sut.get()).isEqualTo(addr1); + + assertThat(sut.get()).isNotEqualTo(addr3); + } + + @Test + void partitionTableChangesNewNode() { + + RoundRobinSocketAddressSupplier sut = new RoundRobinSocketAddressSupplier(partitions, + redisClusterNodes -> redisClusterNodes, clientResourcesMock); + + assertThat(sut.get()).isEqualTo(addr1); + assertThat(sut.get()).isEqualTo(addr2); + + partitions.add(new RedisClusterNode(hap4, "4", true, "", 0, 0, 0, new ArrayList<>(), new HashSet<>())); + + assertThat(sut.get()).isEqualTo(addr1); + assertThat(sut.get()).isEqualTo(addr2); + assertThat(sut.get()).isEqualTo(addr3); + assertThat(sut.get()).isEqualTo(addr4); + assertThat(sut.get()).isEqualTo(addr1); + } + + @Test + void partitionTableChangesNodeRemoved() { + + RoundRobinSocketAddressSupplier sut = new RoundRobinSocketAddressSupplier(partitions, + redisClusterNodes -> redisClusterNodes, clientResourcesMock); + + assertThat(sut.get()).isEqualTo(addr1); + assertThat(sut.get()).isEqualTo(addr2); + + partitions.remove(partitions.getPartition(2)); + + assertThat(sut.get()).isEqualTo(addr1); + assertThat(sut.get()).isEqualTo(addr2); + assertThat(sut.get()).isEqualTo(addr1); + } +} diff --git a/src/test/java/io/lettuce/core/cluster/ScanIteratorIntegrationTests.java b/src/test/java/io/lettuce/core/cluster/ScanIteratorIntegrationTests.java new file mode 100644 index 0000000000..c9aff72241 --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/ScanIteratorIntegrationTests.java @@ -0,0 +1,269 @@ +/* + * Copyright 2016-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.cluster; + +import static org.assertj.core.api.AssertionsForClassTypes.fail; +import static org.assertj.core.api.AssertionsForInterfaceTypes.assertThat; + +import java.util.ArrayList; +import java.util.List; +import java.util.NoSuchElementException; +import java.util.stream.Collectors; + +import javax.inject.Inject; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.TestInstance; +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.core.*; +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; +import io.lettuce.core.cluster.api.sync.RedisClusterCommands; +import io.lettuce.test.KeysAndValues; +import io.lettuce.test.LettuceExtension; + +/** + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +@TestInstance(TestInstance.Lifecycle.PER_CLASS) +class ScanIteratorIntegrationTests extends TestSupport { + + private final StatefulRedisClusterConnection connection; + private final RedisClusterCommands redis; + + @Inject + ScanIteratorIntegrationTests(StatefulRedisClusterConnection connection) { + this.connection = connection; + this.redis = connection.sync(); + this.connection.sync().flushall(); + } + + @BeforeEach + void setUp() { + this.redis.flushall(); + this.connection.setReadFrom(ReadFrom.MASTER); + } + + @Test + void scanShouldThrowNoSuchElementExceptionOnEmpty() { + + redis.mset(KeysAndValues.MAP); + + ScanIterator scan = ScanIterator.scan(redis, ScanArgs.Builder.limit(50).match("key-foo")); + + assertThat(scan.hasNext()).isFalse(); + try { + scan.next(); + fail("Missing NoSuchElementException"); + } catch (NoSuchElementException e) { + assertThat(e).isInstanceOf(NoSuchElementException.class); + } + } + + @Test + void keysSinglePass() { + + redis.mset(KeysAndValues.MAP); + + ScanIterator scan = ScanIterator.scan(redis, ScanArgs.Builder.limit(50).match("key-11*")); + + assertThat(scan.hasNext()).isTrue(); + assertThat(scan.hasNext()).isTrue(); + + for (int i = 0; i < 11; i++) { + assertThat(scan.hasNext()).isTrue(); + assertThat(scan.next()).isNotNull(); + } + + assertThat(scan.hasNext()).isFalse(); + } + + @Test + void keysMultiPass() { + + redis.mset(KeysAndValues.MAP); + + ScanIterator scan = ScanIterator.scan(redis); + + List keys = scan.stream().collect(Collectors.toList()); + + assertThat(keys).containsAll(KeysAndValues.KEYS); + } + + @Test + void keysMultiPassFromAnyNode() { + + redis.mset(KeysAndValues.MAP); + this.connection.setReadFrom(ReadFrom.ANY); + + ScanIterator scan = ScanIterator.scan(redis); + + List keys = scan.stream().collect(Collectors.toList()); + + assertThat(keys).containsAll(KeysAndValues.KEYS); + } + + @Test + void hscanShouldThrowNoSuchElementExceptionOnEmpty() { + + redis.mset(KeysAndValues.MAP); + + ScanIterator> scan = ScanIterator.hscan(redis, "none", + ScanArgs.Builder.limit(50).match("key-foo")); + + assertThat(scan.hasNext()).isFalse(); + try { + scan.next(); + fail("Missing NoSuchElementException"); + } catch (NoSuchElementException e) { + assertThat(e).isInstanceOf(NoSuchElementException.class); + } + } + + @Test + void hashSinglePass() { + + redis.hmset(key, KeysAndValues.MAP); + + ScanIterator> scan = ScanIterator.hscan(redis, key, + ScanArgs.Builder.limit(50).match("key-11*")); + + assertThat(scan.hasNext()).isTrue(); + assertThat(scan.hasNext()).isTrue(); + + for (int i = 0; i < 11; i++) { + assertThat(scan.hasNext()).isTrue(); + assertThat(scan.next()).isNotNull(); + } + + assertThat(scan.hasNext()).isFalse(); + } + 
+ @Test + void hashMultiPass() { + + redis.hmset(key, KeysAndValues.MAP); + + ScanIterator> scan = ScanIterator.hscan(redis, key); + + List> keys = scan.stream().collect(Collectors.toList()); + + assertThat(keys).containsAll(KeysAndValues.KEYS.stream().map(s -> KeyValue.fromNullable(s, KeysAndValues.MAP.get(s))) + .collect(Collectors.toList())); + } + + @Test + void sscanShouldThrowNoSuchElementExceptionOnEmpty() { + + redis.sadd(key, KeysAndValues.VALUES.toArray(new String[0])); + + ScanIterator scan = ScanIterator.sscan(redis, "none", ScanArgs.Builder.limit(50).match("key-foo")); + + assertThat(scan.hasNext()).isFalse(); + try { + scan.next(); + fail("Missing NoSuchElementException"); + } catch (NoSuchElementException e) { + assertThat(e).isInstanceOf(NoSuchElementException.class); + } + } + + @Test + void setSinglePass() { + redis.sadd(key, KeysAndValues.KEYS.toArray(new String[0])); + + ScanIterator scan = ScanIterator.sscan(redis, key, ScanArgs.Builder.limit(50).match("key-11*")); + + assertThat(scan.hasNext()).isTrue(); + assertThat(scan.hasNext()).isTrue(); + + for (int i = 0; i < 11; i++) { + assertThat(scan.hasNext()).isTrue(); + assertThat(scan.next()).isNotNull(); + } + + assertThat(scan.hasNext()).isFalse(); + } + + @Test + void setMultiPass() { + + redis.sadd(key, KeysAndValues.KEYS.toArray(new String[0])); + + ScanIterator scan = ScanIterator.sscan(redis, key); + + List values = scan.stream().collect(Collectors.toList()); + + assertThat(values).containsAll(values); + } + + @Test + void zscanShouldThrowNoSuchElementExceptionOnEmpty() { + + for (int i = 0; i < KeysAndValues.COUNT; i++) { + redis.zadd(key, ScoredValue.just(i, KeysAndValues.KEYS.get(i))); + } + + ScanIterator> scan = ScanIterator.zscan(redis, "none", ScanArgs.Builder.limit(50).match("key-foo")); + + assertThat(scan.hasNext()).isFalse(); + try { + scan.next(); + fail("Missing NoSuchElementException"); + } catch (NoSuchElementException e) { + assertThat(e).isInstanceOf(NoSuchElementException.class); + } + } + + @Test + void zsetSinglePass() { + + for (int i = 0; i < KeysAndValues.COUNT; i++) { + redis.zadd(key, ScoredValue.just(i, KeysAndValues.KEYS.get(i))); + } + + ScanIterator> scan = ScanIterator.zscan(redis, key, ScanArgs.Builder.limit(50).match("key-11*")); + + assertThat(scan.hasNext()).isTrue(); + assertThat(scan.hasNext()).isTrue(); + + for (int i = 0; i < 11; i++) { + assertThat(scan.hasNext()).isTrue(); + assertThat(scan.next()).isNotNull(); + } + + assertThat(scan.hasNext()).isFalse(); + } + + @Test + void zsetMultiPass() { + + List> expected = new ArrayList<>(); + for (int i = 0; i < KeysAndValues.COUNT; i++) { + ScoredValue scoredValue = ScoredValue.just(i, KeysAndValues.KEYS.get(i)); + expected.add(scoredValue); + redis.zadd(key, scoredValue); + } + + ScanIterator> scan = ScanIterator.zscan(redis, key); + + List> values = scan.stream().collect(Collectors.toList()); + + assertThat(values).containsAll(values); + } +} diff --git a/src/test/java/io/lettuce/core/cluster/ScanStreamIntegrationTests.java b/src/test/java/io/lettuce/core/cluster/ScanStreamIntegrationTests.java new file mode 100644 index 0000000000..26abcf0ce1 --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/ScanStreamIntegrationTests.java @@ -0,0 +1,61 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at
+ *
+ * https://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package io.lettuce.core.cluster;
+
+import javax.inject.Inject;
+
+import org.junit.jupiter.api.Test;
+import org.junit.jupiter.api.extension.ExtendWith;
+
+import reactor.test.StepVerifier;
+import io.lettuce.core.ScanArgs;
+import io.lettuce.core.ScanStream;
+import io.lettuce.core.TestSupport;
+import io.lettuce.core.cluster.api.StatefulRedisClusterConnection;
+import io.lettuce.core.cluster.api.reactive.RedisAdvancedClusterReactiveCommands;
+import io.lettuce.core.cluster.api.sync.RedisClusterCommands;
+import io.lettuce.test.LettuceExtension;
+
+/**
+ * @author Mark Paluch
+ */
+@ExtendWith(LettuceExtension.class)
+class ScanStreamIntegrationTests extends TestSupport {
+
+    private final StatefulRedisClusterConnection<String, String> connection;
+    private final RedisClusterCommands<String, String> redis;
+
+    @Inject
+    ScanStreamIntegrationTests(StatefulRedisClusterConnection<String, String> connection) {
+        this.connection = connection;
+        this.redis = connection.sync();
+        this.redis.flushall();
+    }
+
+    @Test
+    void shouldScanIteratively() {
+
+        for (int i = 0; i < 1000; i++) {
+            redis.set("key-" + i, value);
+        }
+
+        RedisAdvancedClusterReactiveCommands<String, String> reactive = connection.reactive();
+
+        StepVerifier.create(ScanStream.scan(reactive, ScanArgs.Builder.limit(200)).take(250)).expectNextCount(250)
+                .verifyComplete();
+        StepVerifier.create(ScanStream.scan(reactive)).expectNextCount(1000).verifyComplete();
+    }
+}
diff --git a/src/test/java/io/lettuce/core/cluster/SingleThreadedReactiveClusterClientIntegrationTests.java b/src/test/java/io/lettuce/core/cluster/SingleThreadedReactiveClusterClientIntegrationTests.java
new file mode 100644
index 0000000000..539ba6fd4d
--- /dev/null
+++ b/src/test/java/io/lettuce/core/cluster/SingleThreadedReactiveClusterClientIntegrationTests.java
@@ -0,0 +1,73 @@
+/*
+ * Copyright 2017-2020 the original author or authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * https://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package io.lettuce.core.cluster;
+
+import static org.assertj.core.api.Assertions.assertThat;
+
+import java.util.List;
+
+import org.junit.jupiter.api.AfterEach;
+import org.junit.jupiter.api.BeforeEach;
+import org.junit.jupiter.api.Test;
+
+import io.lettuce.core.RedisURI;
+import io.lettuce.core.cluster.api.StatefulRedisClusterConnection;
+import io.lettuce.core.metrics.DefaultCommandLatencyCollectorOptions;
+import io.lettuce.core.resource.DefaultClientResources;
+import io.lettuce.core.resource.DefaultEventLoopGroupProvider;
+import io.lettuce.test.resource.FastShutdown;
+import io.netty.util.concurrent.ImmediateEventExecutor;
+
+/**
+ * @author Mark Paluch
+ */
+class SingleThreadedReactiveClusterClientIntegrationTests {
+
+    private RedisClusterClient client;
+
+    @BeforeEach
+    void before() {
+
+        DefaultClientResources clientResources = DefaultClientResources.builder()
+                .eventExecutorGroup(ImmediateEventExecutor.INSTANCE)
+                .eventLoopGroupProvider(new DefaultEventLoopGroupProvider(1))
+                .commandLatencyCollectorOptions(DefaultCommandLatencyCollectorOptions.disabled()).build();
+
+        client = RedisClusterClient.create(clientResources, RedisURI.create("localhost", 7379));
+    }
+
+    @AfterEach
+    void tearDown() {
+
+        FastShutdown.shutdown(client);
+        FastShutdown.shutdown(client.getResources());
+    }
+
+    @Test
+    void shouldPropagateAsynchronousConnections() {
+
+        StatefulRedisClusterConnection<String, String> connect = client.connect();
+        connect.sync().flushall();
+
+        List<String> keys = connect.reactive().set("key", "value").flatMap(s -> connect.reactive().set("foo", "bar"))
+                .flatMapMany(s -> connect.reactive().keys("*")) //
+                .doOnError(Throwable::printStackTrace) //
+                .collectList() //
+                .block();
+
+        assertThat(keys).contains("key", "foo");
+    }
+}
diff --git a/src/test/java/io/lettuce/core/cluster/SlotHashUnitTests.java b/src/test/java/io/lettuce/core/cluster/SlotHashUnitTests.java
new file mode 100644
index 0000000000..0b7f2bde5d
--- /dev/null
+++ b/src/test/java/io/lettuce/core/cluster/SlotHashUnitTests.java
@@ -0,0 +1,60 @@
+/*
+ * Copyright 2011-2020 the original author or authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * https://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package io.lettuce.core.cluster;
+
+import static org.assertj.core.api.Assertions.assertThat;
+
+import java.nio.ByteBuffer;
+
+import org.junit.jupiter.api.Test;
+
+/**
+ * @author Mark Paluch
+ * @since 3.0
+ */
+class SlotHashUnitTests {
+
+    private static final byte[] BYTES = "123456789".getBytes();
+    private static final byte[] TAGGED = "key{123456789}a".getBytes();
+
+    @Test
+    void shouldGetSlotHeap() {
+
+        int result = SlotHash.getSlot(BYTES);
+        assertThat(result).isEqualTo(0x31C3);
+    }
+
+    @Test
+    void shouldGetTaggedSlotHeap() {
+
+        int result = SlotHash.getSlot(TAGGED);
+        assertThat(result).isEqualTo(0x31C3);
+    }
+
+    @Test
+    void shouldGetSlotDirect() {
+
+        int result = SlotHash.getSlot((ByteBuffer) ByteBuffer.allocateDirect(BYTES.length).put(BYTES).flip());
+        assertThat(result).isEqualTo(0x31C3);
+    }
+
+    @Test
+    void testHashWithHash() {
+
+        int result = SlotHash.getSlot((ByteBuffer) ByteBuffer.allocateDirect(TAGGED.length).put(TAGGED).flip());
+        assertThat(result).isEqualTo(0x31C3);
+    }
+}
diff --git a/src/test/java/io/lettuce/core/cluster/commands/CustomClusterCommandIntegrationTests.java b/src/test/java/io/lettuce/core/cluster/commands/CustomClusterCommandIntegrationTests.java
new file mode 100644
index 0000000000..5b00c50cce
--- /dev/null
+++ b/src/test/java/io/lettuce/core/cluster/commands/CustomClusterCommandIntegrationTests.java
@@ -0,0 +1,146 @@
+/*
+ * Copyright 2011-2020 the original author or authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * https://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */ +package io.lettuce.core.cluster.commands; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; + +import java.util.Arrays; + +import javax.inject.Inject; + +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.core.RedisCommandExecutionException; +import io.lettuce.core.RedisFuture; +import io.lettuce.core.TestSupport; +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; +import io.lettuce.core.cluster.api.sync.RedisAdvancedClusterCommands; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.commands.CustomCommandIntegrationTests; +import io.lettuce.core.output.StatusOutput; +import io.lettuce.core.protocol.*; +import io.lettuce.test.TestFutures; +import io.lettuce.test.LettuceExtension; + +/** + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +class CustomClusterCommandIntegrationTests extends TestSupport { + + private final StatefulRedisClusterConnection connection; + private RedisAdvancedClusterCommands redis; + + @Inject + CustomClusterCommandIntegrationTests(StatefulRedisClusterConnection connection) { + this.connection = connection; + this.redis = connection.sync(); + this.redis.flushall(); + } + + @Test + void dispatchSet() { + + String response = redis.dispatch(CustomCommandIntegrationTests.MyCommands.SET, new StatusOutput<>(StringCodec.UTF8), + new CommandArgs<>(StringCodec.UTF8).addKey(key).addValue(value)); + + assertThat(response).isEqualTo("OK"); + } + + @Test + void dispatchWithoutArgs() { + + String response = redis.dispatch(CustomCommandIntegrationTests.MyCommands.INFO, new StatusOutput<>(StringCodec.UTF8)); + + assertThat(response).contains("connected_clients"); + } + + @Test + void dispatchShouldFailForWrongDataType() { + + redis.hset(key, key, value); + assertThatThrownBy( + () -> redis.dispatch(CommandType.GET, new StatusOutput<>(StringCodec.UTF8), + new CommandArgs<>(StringCodec.UTF8).addKey(key))).isInstanceOf(RedisCommandExecutionException.class); + } + + @Test + void clusterAsyncPing() { + + RedisCommand command = new Command<>(CustomCommandIntegrationTests.MyCommands.PING, + new StatusOutput<>( + StringCodec.UTF8), null); + + AsyncCommand async = new AsyncCommand<>(command); + connection.dispatch(async); + + assertThat(TestFutures.getOrTimeout((RedisFuture) async)).isEqualTo("PONG"); + } + + @Test + void clusterAsyncBatchPing() { + + RedisCommand command1 = new Command<>(CustomCommandIntegrationTests.MyCommands.PING, + new StatusOutput<>( + StringCodec.UTF8), null); + + RedisCommand command2 = new Command<>(CustomCommandIntegrationTests.MyCommands.PING, + new StatusOutput<>( + StringCodec.UTF8), null); + + AsyncCommand async1 = new AsyncCommand<>(command1); + AsyncCommand async2 = new AsyncCommand<>(command2); + connection.dispatch(Arrays.asList(async1, async2)); + + assertThat(TestFutures.getOrTimeout(async1.toCompletableFuture())).isEqualTo("PONG"); + assertThat(TestFutures.getOrTimeout(async2.toCompletableFuture())).isEqualTo("PONG"); + } + + @Test + void clusterAsyncBatchSet() { + + RedisCommand command1 = new Command<>(CommandType.SET, new StatusOutput<>(StringCodec.UTF8), + new CommandArgs<>(StringCodec.UTF8).addKey("key1").addValue("value")); + + RedisCommand command2 = new Command<>(CommandType.GET, new StatusOutput<>(StringCodec.UTF8), + new CommandArgs<>(StringCodec.UTF8).addKey("key1")); + + RedisCommand command3 = new Command<>(CommandType.SET, new 
StatusOutput<>(StringCodec.UTF8), + new CommandArgs<>(StringCodec.UTF8).addKey("other-key1").addValue("value")); + + AsyncCommand async1 = new AsyncCommand<>(command1); + AsyncCommand async2 = new AsyncCommand<>(command2); + AsyncCommand async3 = new AsyncCommand<>(command3); + connection.dispatch(Arrays.asList(async1, async2, async3)); + + assertThat(TestFutures.getOrTimeout(async1.toCompletableFuture())).isEqualTo("OK"); + assertThat(TestFutures.getOrTimeout(async2.toCompletableFuture())).isEqualTo("value"); + assertThat(TestFutures.getOrTimeout(async3.toCompletableFuture())).isEqualTo("OK"); + } + + @Test + void clusterFireAndForget() { + + RedisCommand command = new Command<>(CustomCommandIntegrationTests.MyCommands.PING, + new StatusOutput<>( + StringCodec.UTF8), null); + connection.dispatch(command); + assertThat(command.isCancelled()).isFalse(); + + } +} diff --git a/src/test/java/io/lettuce/core/cluster/commands/GeoClusterCommandIntegrationTests.java b/src/test/java/io/lettuce/core/cluster/commands/GeoClusterCommandIntegrationTests.java new file mode 100644 index 0000000000..de857c0a02 --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/commands/GeoClusterCommandIntegrationTests.java @@ -0,0 +1,75 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
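CustomClusterCommandIntegrationTests exercises Lettuce's generic dispatch mechanism: a command is described by its type, an output decoder and its arguments, and is either executed through the synchronous dispatch(...) method or wrapped in an AsyncCommand and written straight to the connection. The sketch below shows the same pattern outside a test fixture; the connection URI, class name and printed values are placeholder assumptions rather than part of the patch.

```java
import io.lettuce.core.cluster.RedisClusterClient;
import io.lettuce.core.cluster.api.StatefulRedisClusterConnection;
import io.lettuce.core.codec.StringCodec;
import io.lettuce.core.output.StatusOutput;
import io.lettuce.core.protocol.AsyncCommand;
import io.lettuce.core.protocol.Command;
import io.lettuce.core.protocol.CommandArgs;
import io.lettuce.core.protocol.CommandType;

public class CustomDispatchSketch {

    public static void main(String[] args) throws Exception {

        RedisClusterClient client = RedisClusterClient.create("redis://localhost:7379");
        StatefulRedisClusterConnection<String, String> connection = client.connect();

        // Synchronous dispatch: describe SET by command type, output decoder and arguments.
        String ok = connection.sync().dispatch(CommandType.SET, new StatusOutput<>(StringCodec.UTF8),
                new CommandArgs<>(StringCodec.UTF8).addKey("key").addValue("value"));
        System.out.println(ok); // OK

        // Asynchronous dispatch: wrap the command and hand it to the connection directly.
        AsyncCommand<String, String, String> ping = new AsyncCommand<>(
                new Command<>(CommandType.PING, new StatusOutput<>(StringCodec.UTF8), null));
        connection.dispatch(ping);
        System.out.println(ping.get()); // PONG

        connection.close();
        client.shutdown();
    }
}
```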
+ */ +package io.lettuce.core.cluster.commands; + +import javax.inject.Inject; + +import org.junit.jupiter.api.Disabled; + +import io.lettuce.core.cluster.ClusterTestUtil; +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; +import io.lettuce.core.commands.GeoCommandIntegrationTests; + +/** + * @author Mark Paluch + */ +class GeoClusterCommandIntegrationTests extends GeoCommandIntegrationTests { + + @Inject + GeoClusterCommandIntegrationTests(StatefulRedisClusterConnection clusterConnection) { + super(ClusterTestUtil.redisCommandsOverCluster(clusterConnection)); + } + + @Disabled("MULTI not available on Redis Cluster") + @Override + public void geoaddInTransaction() { + } + + @Disabled("MULTI not available on Redis Cluster") + @Override + public void geoaddMultiInTransaction() { + } + + @Disabled("MULTI not available on Redis Cluster") + @Override + public void georadiusInTransaction() { + } + + @Disabled("MULTI not available on Redis Cluster") + @Override + public void geodistInTransaction() { + } + + @Disabled("MULTI not available on Redis Cluster") + @Override + public void georadiusWithArgsAndTransaction() { + } + + @Disabled("MULTI not available on Redis Cluster") + @Override + public void georadiusbymemberWithArgsInTransaction() { + } + + @Disabled("MULTI not available on Redis Cluster") + @Override + public void geoposInTransaction() { + } + + @Disabled("MULTI not available on Redis Cluster") + @Override + public void geohashInTransaction() { + } +} diff --git a/src/test/java/io/lettuce/core/cluster/commands/HashClusterCommandIntegrationTests.java b/src/test/java/io/lettuce/core/cluster/commands/HashClusterCommandIntegrationTests.java new file mode 100644 index 0000000000..3a9b68298d --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/commands/HashClusterCommandIntegrationTests.java @@ -0,0 +1,33 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.commands; + +import javax.inject.Inject; + +import io.lettuce.core.cluster.ClusterTestUtil; +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; +import io.lettuce.core.commands.HashCommandIntegrationTests; + +/** + * @author Mark Paluch + */ +public class HashClusterCommandIntegrationTests extends HashCommandIntegrationTests { + + @Inject + public HashClusterCommandIntegrationTests(StatefulRedisClusterConnection connection) { + super(ClusterTestUtil.redisCommandsOverCluster(connection)); + } +} diff --git a/src/test/java/io/lettuce/core/cluster/commands/KeyClusterCommandIntegrationTests.java b/src/test/java/io/lettuce/core/cluster/commands/KeyClusterCommandIntegrationTests.java new file mode 100644 index 0000000000..31b6714305 --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/commands/KeyClusterCommandIntegrationTests.java @@ -0,0 +1,103 @@ +/* + * Copyright 2011-2020 the original author or authors. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.commands; + +import static org.assertj.core.api.Assertions.assertThat; + +import javax.inject.Inject; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.TestInstance; +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.core.TestSupport; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.cluster.ClusterTestUtil; +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; +import io.lettuce.test.LettuceExtension; +import io.lettuce.test.condition.EnabledOnCommand; + +/** + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +@TestInstance(TestInstance.Lifecycle.PER_CLASS) +class KeyClusterCommandIntegrationTests extends TestSupport { + + private final StatefulRedisClusterConnection clusterConnection; + private final RedisCommands redis; + + @Inject + KeyClusterCommandIntegrationTests(StatefulRedisClusterConnection clusterConnection) { + this.clusterConnection = clusterConnection; + this.redis = ClusterTestUtil.redisCommandsOverCluster(clusterConnection); + } + + @BeforeEach + void setUp() { + this.redis.flushall(); + } + + @Test + void del() { + + redis.set(key, "value"); + redis.set("a", "value"); + redis.set("b", "value"); + + assertThat(redis.del(key, "a", "b")).isEqualTo(3); + assertThat(redis.exists(key)).isEqualTo(0); + assertThat(redis.exists("a")).isEqualTo(0); + assertThat(redis.exists("b")).isEqualTo(0); + } + + @Test + void exists() { + + assertThat(redis.exists(key, "a", "b")).isEqualTo(0); + + redis.set(key, "value"); + redis.set("a", "value"); + redis.set("b", "value"); + + assertThat(redis.exists(key, "a", "b")).isEqualTo(3); + } + + @Test + @EnabledOnCommand("TOUCH") + void touch() { + + redis.set(key, "value"); + redis.set("a", "value"); + redis.set("b", "value"); + + assertThat(redis.touch(key, "a", "b")).isEqualTo(3); + assertThat(redis.exists(key, "a", "b")).isEqualTo(3); + } + + @Test + @EnabledOnCommand("UNLINK") + void unlink() { + + redis.set(key, "value"); + redis.set("a", "value"); + redis.set("b", "value"); + + assertThat(redis.unlink(key, "a", "b")).isEqualTo(3); + assertThat(redis.exists(key)).isEqualTo(0); + } +} diff --git a/src/test/java/io/lettuce/core/cluster/commands/ListClusterCommandIntegrationTests.java b/src/test/java/io/lettuce/core/cluster/commands/ListClusterCommandIntegrationTests.java new file mode 100644 index 0000000000..cc96f6f473 --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/commands/ListClusterCommandIntegrationTests.java @@ -0,0 +1,80 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
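The multi-key assertions in KeyClusterCommandIntegrationTests (DEL, EXISTS, TOUCH and UNLINK across several keys) rely on the advanced cluster API fanning a call out per slot and summing the per-node results. A minimal sketch of that usage follows; the connection URI and key names are made up for illustration.

```java
import io.lettuce.core.cluster.RedisClusterClient;
import io.lettuce.core.cluster.api.StatefulRedisClusterConnection;
import io.lettuce.core.cluster.api.sync.RedisAdvancedClusterCommands;

public class MultiKeyClusterSketch {

    public static void main(String[] args) {

        RedisClusterClient client = RedisClusterClient.create("redis://localhost:7379");
        StatefulRedisClusterConnection<String, String> connection = client.connect();
        RedisAdvancedClusterCommands<String, String> redis = connection.sync();

        redis.set("key", "value");
        redis.set("a", "value");
        redis.set("b", "value");

        // The advanced cluster API is expected to split these multi-key calls per slot
        // and merge the results, which is why the assertions above see a single count.
        System.out.println(redis.exists("key", "a", "b")); // 3
        System.out.println(redis.del("key", "a", "b"));    // 3

        connection.close();
        client.shutdown();
    }
}
```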
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.commands; + +import static org.assertj.core.api.Assertions.assertThat; + +import javax.inject.Inject; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.cluster.ClusterTestUtil; +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; +import io.lettuce.core.cluster.api.sync.RedisClusterCommands; +import io.lettuce.core.commands.ListCommandIntegrationTests; + +/** + * @author Mark Paluch + */ +class ListClusterCommandIntegrationTests extends ListCommandIntegrationTests { + + private final RedisClusterCommands redis; + + @Inject + ListClusterCommandIntegrationTests(StatefulRedisClusterConnection connection) { + super(ClusterTestUtil.redisCommandsOverCluster(connection)); + this.redis = connection.sync(); + } + + + // re-implementation because keys have to be on the same slot + @Test + void brpoplpush() { + + redis.rpush("UKPDHs8Zlp", "1", "2"); + redis.rpush("br7EPz9bbj", "3", "4"); + assertThat(redis.brpoplpush(1, "UKPDHs8Zlp", "br7EPz9bbj")).isEqualTo("2"); + assertThat(redis.lrange("UKPDHs8Zlp", 0, -1)).isEqualTo(list("1")); + assertThat(redis.lrange("br7EPz9bbj", 0, -1)).isEqualTo(list("2", "3", "4")); + } + + @Test + void brpoplpushTimeout() { + assertThat(redis.brpoplpush(1, "UKPDHs8Zlp", "br7EPz9bbj")).isNull(); + } + + @Test + void blpop() { + redis.rpush("br7EPz9bbj", "2", "3"); + assertThat(redis.blpop(1, "UKPDHs8Zlp", "br7EPz9bbj")).isEqualTo(kv("br7EPz9bbj", "2")); + } + + @Test + void brpop() { + redis.rpush("br7EPz9bbj", "2", "3"); + assertThat(redis.brpop(1, "UKPDHs8Zlp", "br7EPz9bbj")).isEqualTo(kv("br7EPz9bbj", "3")); + } + + @Test + void rpoplpush() { + assertThat(redis.rpoplpush("UKPDHs8Zlp", "br7EPz9bbj")).isNull(); + redis.rpush("UKPDHs8Zlp", "1", "2"); + redis.rpush("br7EPz9bbj", "3", "4"); + assertThat(redis.rpoplpush("UKPDHs8Zlp", "br7EPz9bbj")).isEqualTo("2"); + assertThat(redis.lrange("UKPDHs8Zlp", 0, -1)).isEqualTo(list("1")); + assertThat(redis.lrange("br7EPz9bbj", 0, -1)).isEqualTo(list("2", "3", "4")); + } +} diff --git a/src/test/java/io/lettuce/core/cluster/commands/StringClusterCommandIntegrationTests.java b/src/test/java/io/lettuce/core/cluster/commands/StringClusterCommandIntegrationTests.java new file mode 100644 index 0000000000..cb63938df1 --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/commands/StringClusterCommandIntegrationTests.java @@ -0,0 +1,70 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
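ListClusterCommandIntegrationTests re-implements the blocking list tests with fixed key names that map to the same slot, because BRPOPLPUSH and friends reject cross-slot key combinations on Redis Cluster. Outside of tests, the usual way to guarantee co-location is a shared hash tag; the sketch below illustrates that approach (URI and key names are illustrative assumptions, not taken from the patch).

```java
import io.lettuce.core.cluster.RedisClusterClient;
import io.lettuce.core.cluster.api.StatefulRedisClusterConnection;
import io.lettuce.core.cluster.api.sync.RedisAdvancedClusterCommands;

public class SameSlotListSketch {

    public static void main(String[] args) {

        RedisClusterClient client = RedisClusterClient.create("redis://localhost:7379");
        StatefulRedisClusterConnection<String, String> connection = client.connect();
        RedisAdvancedClusterCommands<String, String> redis = connection.sync();

        // Both keys hash only the "q" tag, so they live in the same slot and
        // may be combined in a single BRPOPLPUSH on Redis Cluster.
        String source = "{q}source";
        String target = "{q}target";

        redis.rpush(source, "1", "2");
        System.out.println(redis.brpoplpush(1, source, target)); // "2"
        System.out.println(redis.lrange(target, 0, -1));         // [2]

        connection.close();
        client.shutdown();
    }
}
```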
+ */ +package io.lettuce.core.cluster.commands; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.util.LinkedHashMap; +import java.util.Map; + +import javax.inject.Inject; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.cluster.ClusterTestUtil; +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; +import io.lettuce.core.cluster.api.sync.RedisClusterCommands; +import io.lettuce.core.commands.StringCommandIntegrationTests; +import io.lettuce.test.KeyValueStreamingAdapter; + +/** + * @author Mark Paluch + */ +class StringClusterCommandIntegrationTests extends StringCommandIntegrationTests { + + private final RedisClusterCommands redis; + + @Inject + StringClusterCommandIntegrationTests(StatefulRedisClusterConnection connection) { + super(ClusterTestUtil.redisCommandsOverCluster(connection)); + this.redis = connection.sync(); + } + + @Test + void msetnx() { + redis.set("one", "1"); + Map map = new LinkedHashMap<>(); + map.put("one", "1"); + map.put("two", "2"); + assertThat(redis.msetnx(map)).isFalse(); + redis.del("one"); + redis.del("two"); // probably set on a different node + assertThat(redis.msetnx(map)).isTrue(); + assertThat(redis.get("two")).isEqualTo("2"); + } + + @Test + void mgetStreaming() { + setupMget(); + + KeyValueStreamingAdapter streamingAdapter = new KeyValueStreamingAdapter<>(); + Long count = redis.mget(streamingAdapter, "one", "two"); + + assertThat(streamingAdapter.getMap()).containsEntry("one", "1").containsEntry("two", "2"); + + assertThat(count.intValue()).isEqualTo(2); + } +} diff --git a/src/test/java/io/lettuce/core/cluster/commands/reactive/HashClusterReactiveCommandIntegrationTests.java b/src/test/java/io/lettuce/core/cluster/commands/reactive/HashClusterReactiveCommandIntegrationTests.java new file mode 100644 index 0000000000..d933341e59 --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/commands/reactive/HashClusterReactiveCommandIntegrationTests.java @@ -0,0 +1,33 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.cluster.commands.reactive; + +import javax.inject.Inject; + +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; +import io.lettuce.core.commands.HashCommandIntegrationTests; +import io.lettuce.test.ReactiveSyncInvocationHandler; + +/** + * @author Mark Paluch + */ +class HashClusterReactiveCommandIntegrationTests extends HashCommandIntegrationTests { + + @Inject + HashClusterReactiveCommandIntegrationTests(StatefulRedisClusterConnection connection) { + super(ReactiveSyncInvocationHandler.sync(connection)); + } +} diff --git a/src/test/java/io/lettuce/core/cluster/commands/reactive/ListClusterReactiveCommandIntegrationTests.java b/src/test/java/io/lettuce/core/cluster/commands/reactive/ListClusterReactiveCommandIntegrationTests.java new file mode 100644 index 0000000000..79897e9d39 --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/commands/reactive/ListClusterReactiveCommandIntegrationTests.java @@ -0,0 +1,80 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.commands.reactive; + +import static org.assertj.core.api.Assertions.assertThat; + +import javax.inject.Inject; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; +import io.lettuce.core.cluster.api.sync.RedisClusterCommands; +import io.lettuce.core.commands.ListCommandIntegrationTests; +import io.lettuce.test.ReactiveSyncInvocationHandler; + +/** + * @author Mark Paluch + */ +class ListClusterReactiveCommandIntegrationTests extends ListCommandIntegrationTests { + + private final RedisClusterCommands redis; + + @Inject + ListClusterReactiveCommandIntegrationTests(StatefulRedisClusterConnection connection) { + super(ReactiveSyncInvocationHandler.sync(connection)); + this.redis = ReactiveSyncInvocationHandler.sync(connection); + } + + // re-implementation because keys have to be on the same slot + @Test + void brpoplpush() { + + redis.rpush("UKPDHs8Zlp", "1", "2"); + redis.rpush("br7EPz9bbj", "3", "4"); + assertThat(redis.brpoplpush(1, "UKPDHs8Zlp", "br7EPz9bbj")).isEqualTo("2"); + assertThat(redis.lrange("UKPDHs8Zlp", 0, -1)).isEqualTo(list("1")); + assertThat(redis.lrange("br7EPz9bbj", 0, -1)).isEqualTo(list("2", "3", "4")); + } + + @Test + void brpoplpushTimeout() { + assertThat(redis.brpoplpush(1, "UKPDHs8Zlp", "br7EPz9bbj")).isNull(); + } + + @Test + void blpop() { + redis.rpush("br7EPz9bbj", "2", "3"); + assertThat(redis.blpop(1, "UKPDHs8Zlp", "br7EPz9bbj")).isEqualTo(kv("br7EPz9bbj", "2")); + } + + @Test + void brpop() { + redis.rpush("br7EPz9bbj", "2", "3"); + assertThat(redis.brpop(1, "UKPDHs8Zlp", "br7EPz9bbj")).isEqualTo(kv("br7EPz9bbj", "3")); + } + + @Test + void rpoplpush() { + assertThat(redis.rpoplpush("UKPDHs8Zlp", "br7EPz9bbj")).isNull(); + redis.rpush("UKPDHs8Zlp", "1", "2"); + redis.rpush("br7EPz9bbj", "3", "4"); + assertThat(redis.rpoplpush("UKPDHs8Zlp", "br7EPz9bbj")).isEqualTo("2"); + 
assertThat(redis.lrange("UKPDHs8Zlp", 0, -1)).isEqualTo(list("1")); + assertThat(redis.lrange("br7EPz9bbj", 0, -1)).isEqualTo(list("2", "3", "4")); + } + +} diff --git a/src/test/java/io/lettuce/core/cluster/commands/reactive/StringClusterReactiveCommandIntegrationTests.java b/src/test/java/io/lettuce/core/cluster/commands/reactive/StringClusterReactiveCommandIntegrationTests.java new file mode 100644 index 0000000000..3c2cfdfa77 --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/commands/reactive/StringClusterReactiveCommandIntegrationTests.java @@ -0,0 +1,76 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.commands.reactive; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.util.LinkedHashMap; +import java.util.Map; + +import javax.inject.Inject; + +import org.junit.jupiter.api.Test; + +import reactor.core.publisher.Flux; +import reactor.test.StepVerifier; +import io.lettuce.core.KeyValue; +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; +import io.lettuce.core.cluster.api.reactive.RedisAdvancedClusterReactiveCommands; +import io.lettuce.core.cluster.api.sync.RedisClusterCommands; +import io.lettuce.core.commands.StringCommandIntegrationTests; +import io.lettuce.test.ReactiveSyncInvocationHandler; + +/** + * @author Mark Paluch + */ +class StringClusterReactiveCommandIntegrationTests extends StringCommandIntegrationTests { + + private final StatefulRedisClusterConnection connection; + private final RedisClusterCommands redis; + + @Inject + StringClusterReactiveCommandIntegrationTests(StatefulRedisClusterConnection connection) { + super(ReactiveSyncInvocationHandler.sync(connection)); + this.connection = connection; + this.redis = connection.sync(); + } + + @Test + void msetnx() { + redis.set("one", "1"); + Map map = new LinkedHashMap<>(); + map.put("one", "1"); + map.put("two", "2"); + assertThat(redis.msetnx(map)).isFalse(); + redis.del("one"); + redis.del("two"); // probably set on a different node + assertThat(redis.msetnx(map)).isTrue(); + assertThat(redis.get("two")).isEqualTo("2"); + } + + @Test + void mget() { + + redis.set(key, value); + redis.set("key1", value); + redis.set("key2", value); + + RedisAdvancedClusterReactiveCommands reactive = connection.reactive(); + + Flux> mget = reactive.mget(key, "key1", "key2"); + StepVerifier.create(mget.next()).expectNext(KeyValue.just(key, value)).verifyComplete(); + } +} diff --git a/src/test/java/io/lettuce/core/cluster/models/partitions/ClusterPartitionParserUnitTests.java b/src/test/java/io/lettuce/core/cluster/models/partitions/ClusterPartitionParserUnitTests.java new file mode 100644 index 0000000000..32de1fdb92 --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/models/partitions/ClusterPartitionParserUnitTests.java @@ -0,0 +1,150 @@ +/* + * Copyright 2018-2020 the original author or authors. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.models.partitions; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.hamcrest.CoreMatchers.hasItem; +import static org.junit.Assert.assertThat; + +import java.time.Duration; +import java.util.Collections; +import java.util.HashSet; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.RedisURI; +import io.lettuce.core.internal.LettuceLists; + +class ClusterPartitionParserUnitTests { + + private static String nodes = "c37ab8396be428403d4e55c0d317348be27ed973 127.0.0.1:7381 master - 111 1401258245007 222 connected 7000 12000 12002-16383\n" + + "3d005a179da7d8dc1adae6409d47b39c369e992b 127.0.0.1:7380 master - 0 1401258245007 2 disconnected 8000-11999 [8000->-4213a8dabb94f92eb6a860f4d0729e6a25d43e0c] [5461-<-c37ab8396be428403d4e55c0d317348be27ed973]\n" + + "4213a8dabb94f92eb6a860f4d0729e6a25d43e0c 127.0.0.1:7379 myself,slave 4213a8dabb94f92eb6a860f4d0729e6a25d43e0c 0 0 1 connected 0-6999 7001-7999 12001\n" + + "5f4a2236d00008fba7ac0dd24b95762b446767bd :0 myself,master - 0 0 1 connected [5460->-5f4a2236d00008fba7ac0dd24b95762b446767bd] [5461-<-5f4a2236d00008fba7ac0dd24b95762b446767bd]"; + + private static String nodesWithIPv6Addresses = "c37ab8396be428403d4e55c0d317348be27ed973 affe:affe:123:34::1:7381 master - 111 1401258245007 222 connected 7000 12000 12002-16383\n" + + "3d005a179da7d8dc1adae6409d47b39c369e992b [dead:beef:dead:beef::1]:7380 master - 0 1401258245007 2 disconnected 8000-11999 [8000->-4213a8dabb94f92eb6a860f4d0729e6a25d43e0c] [5461-<-c37ab8396be428403d4e55c0d317348be27ed973]\n" + + "4213a8dabb94f92eb6a860f4d0729e6a25d43e0c 127.0.0.1:7379 myself,slave 4213a8dabb94f92eb6a860f4d0729e6a25d43e0c 0 0 1 connected 0-6999 7001-7999 12001\n" + + "5f4a2236d00008fba7ac0dd24b95762b446767bd :0 myself,master - 0 0 1 connected [5460->-5f4a2236d00008fba7ac0dd24b95762b446767bd] [5461-<-5f4a2236d00008fba7ac0dd24b95762b446767bd]"; + + private static String nodesWithBusPort = "c37ab8396be428403d4e55c0d317348be27ed973 127.0.0.1:7381@17381 slave 4213a8dabb94f92eb6a860f4d0729e6a25d43e0c 0 1454482721690 3 connected\n" + + "3d005a179da7d8dc1adae6409d47b39c369e992b 127.0.0.1:7380@17380 master - 0 1454482721690 0 connected 12000-16383\n" + + "4213a8dabb94f92eb6a860f4d0729e6a25d43e0c 127.0.0.1:7379@17379 myself,master - 0 0 1 connected 0-11999\n" + + "5f4a2236d00008fba7ac0dd24b95762b446767bd 127.0.0.1:7382@17382 slave 3d005a179da7d8dc1adae6409d47b39c369e992b 0 1454482721690 2 connected"; + + @Test + void shouldParseNodesCorrectly() { + + Partitions result = ClusterPartitionParser.parse(nodes); + + assertThat(result.getPartitions()).hasSize(4); + + RedisClusterNode p1 = result.getPartitions().get(0); + + assertThat(p1.getNodeId()).isEqualTo("c37ab8396be428403d4e55c0d317348be27ed973"); + assertThat(p1.getUri().getHost()).isEqualTo("127.0.0.1"); + assertThat(p1.getUri().getPort()).isEqualTo(7381); + assertThat(p1.getSlaveOf()).isNull(); + 
assertThat(p1.getFlags()).isEqualTo(Collections.singleton(RedisClusterNode.NodeFlag.MASTER)); + assertThat(p1.getPingSentTimestamp()).isEqualTo(111); + assertThat(p1.getPongReceivedTimestamp()).isEqualTo(1401258245007L); + assertThat(p1.getConfigEpoch()).isEqualTo(222); + assertThat(p1.isConnected()).isTrue(); + + assertThat(p1.getSlots(), hasItem(7000)); + assertThat(p1.getSlots(), hasItem(12000)); + assertThat(p1.getSlots(), hasItem(12002)); + assertThat(p1.getSlots(), hasItem(12003)); + assertThat(p1.getSlots(), hasItem(16383)); + + RedisClusterNode p3 = result.getPartitions().get(2); + + assertThat(p3.getSlaveOf()).isEqualTo("4213a8dabb94f92eb6a860f4d0729e6a25d43e0c"); + assertThat(p3.toString()).contains(RedisClusterNode.class.getSimpleName()); + assertThat(result.toString()).contains(Partitions.class.getSimpleName()); + } + + @Test + void shouldParseNodesWithBusPort() { + + Partitions result = ClusterPartitionParser.parse(nodesWithBusPort); + + assertThat(result.getPartitions()).hasSize(4); + + RedisClusterNode p1 = result.getPartitions().get(0); + + assertThat(p1.getNodeId()).isEqualTo("c37ab8396be428403d4e55c0d317348be27ed973"); + assertThat(p1.getUri().getHost()).isEqualTo("127.0.0.1"); + assertThat(p1.getUri().getPort()).isEqualTo(7381); + } + + @Test + void shouldParseNodesIPv6Address() { + + Partitions result = ClusterPartitionParser.parse(nodesWithIPv6Addresses); + + assertThat(result.getPartitions()).hasSize(4); + + RedisClusterNode p1 = result.getPartitions().get(0); + + assertThat(p1.getUri().getHost()).isEqualTo("affe:affe:123:34::1"); + assertThat(p1.getUri().getPort()).isEqualTo(7381); + + RedisClusterNode p2 = result.getPartitions().get(1); + + assertThat(p2.getUri().getHost()).isEqualTo("dead:beef:dead:beef::1"); + assertThat(p2.getUri().getPort()).isEqualTo(7380); + } + + @Test + void getNodeByHashShouldReturnCorrectNode() { + + Partitions partitions = ClusterPartitionParser.parse(nodes); + assertThat(partitions.getPartitionBySlot(7000).getNodeId()).isEqualTo("c37ab8396be428403d4e55c0d317348be27ed973"); + assertThat(partitions.getPartitionBySlot(5460).getNodeId()).isEqualTo("4213a8dabb94f92eb6a860f4d0729e6a25d43e0c"); + } + + @Test + void testModel() { + RedisClusterNode node = mockRedisClusterNode(); + + assertThat(node.toString()).contains(RedisClusterNode.class.getSimpleName()); + assertThat(node.hasSlot(1)).isTrue(); + assertThat(node.hasSlot(9)).isFalse(); + } + + RedisClusterNode mockRedisClusterNode() { + RedisClusterNode node = new RedisClusterNode(); + node.setConfigEpoch(1); + node.setConnected(true); + node.setFlags(new HashSet<>()); + node.setNodeId("abcd"); + node.setPingSentTimestamp(2); + node.setPongReceivedTimestamp(3); + node.setSlaveOf("me"); + node.setSlots(LettuceLists.unmodifiableList(1, 2, 3)); + node.setUri(new RedisURI("localhost", 1, Duration.ofDays(1))); + return node; + } + + @Test + void createNode() { + RedisClusterNode original = mockRedisClusterNode(); + RedisClusterNode created = RedisClusterNode.of(original.getNodeId()); + + assertThat(original).isEqualTo(created); + } +} diff --git a/src/test/java/io/lettuce/core/cluster/models/partitions/PartitionsUnitTests.java b/src/test/java/io/lettuce/core/cluster/models/partitions/PartitionsUnitTests.java new file mode 100644 index 0000000000..4dd74001df --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/models/partitions/PartitionsUnitTests.java @@ -0,0 +1,347 @@ +/* + * Copyright 2011-2020 the original author or authors. 
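ClusterPartitionParserUnitTests feeds raw CLUSTER NODES output into the parser and then resolves nodes by slot. For orientation, this is how the same API is used outside a test; the two-node input below is a trimmed copy of the fixture above, and the class name is illustrative.

```java
import io.lettuce.core.cluster.models.partitions.ClusterPartitionParser;
import io.lettuce.core.cluster.models.partitions.Partitions;
import io.lettuce.core.cluster.models.partitions.RedisClusterNode;

public class PartitionParserSketch {

    public static void main(String[] args) {

        String clusterNodesOutput =
                "c37ab8396be428403d4e55c0d317348be27ed973 127.0.0.1:7381 master - 111 1401258245007 222 connected 7000 12000 12002-16383\n"
                        + "4213a8dabb94f92eb6a860f4d0729e6a25d43e0c 127.0.0.1:7379 myself,master - 0 0 1 connected 0-6999 7001-7999 12001\n";

        Partitions partitions = ClusterPartitionParser.parse(clusterNodesOutput);

        // Slot 7000 belongs to the first node, slot 5000 to the second.
        RedisClusterNode owner = partitions.getPartitionBySlot(7000);
        System.out.println(owner.getNodeId() + " @ " + owner.getUri());
        System.out.println(partitions.getPartitionBySlot(5000).getUri().getPort()); // 7379
    }
}
```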
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.models.partitions; + +import static org.assertj.core.api.AssertionsForInterfaceTypes.assertThat; + +import java.util.Arrays; +import java.util.HashSet; +import java.util.Iterator; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.RedisURI; + +/** + * @author Mark Paluch + */ +class PartitionsUnitTests { + + private RedisClusterNode node1 = new RedisClusterNode(RedisURI.create("localhost", 6379), "a", true, "", 0, 0, 0, + Arrays.asList(1, 2, 3), new HashSet<>()); + private RedisClusterNode node2 = new RedisClusterNode(RedisURI.create("localhost", 6380), "b", true, "", 0, 0, 0, + Arrays.asList(4, 5, 6), new HashSet<>()); + + @Test + void contains() { + + Partitions partitions = new Partitions(); + partitions.add(node1); + + assertThat(partitions.contains(node1)).isTrue(); + assertThat(partitions.contains(node2)).isFalse(); + } + + @Test + void containsUsesReadView() { + + Partitions partitions = new Partitions(); + partitions.getPartitions().add(node1); + + assertThat(partitions.contains(node1)).isFalse(); + partitions.updateCache(); + assertThat(partitions.contains(node1)).isTrue(); + } + + @Test + void containsAll() { + + Partitions partitions = new Partitions(); + partitions.add(node1); + partitions.add(node2); + + assertThat(partitions.containsAll(Arrays.asList(node1, node2))).isTrue(); + } + + @Test + void containsAllUsesReadView() { + + Partitions partitions = new Partitions(); + partitions.getPartitions().add(node1); + + assertThat(partitions.containsAll(Arrays.asList(node1))).isFalse(); + partitions.updateCache(); + assertThat(partitions.containsAll(Arrays.asList(node1))).isTrue(); + } + + @Test + void add() { + + Partitions partitions = new Partitions(); + partitions.add(node1); + partitions.add(node2); + + assertThat(partitions.getPartitionBySlot(1)).isEqualTo(node1); + assertThat(partitions.getPartitionBySlot(5)).isEqualTo(node2); + } + + @Test + void addPartitionClearsCache() { + + Partitions partitions = new Partitions(); + partitions.addPartition(node1); + + assertThat(partitions.getPartitionBySlot(1)).isNull(); + } + + @Test + void addAll() { + + Partitions partitions = new Partitions(); + partitions.addAll(Arrays.asList(node1, node2)); + + assertThat(partitions.getPartitionBySlot(1)).isEqualTo(node1); + assertThat(partitions.getPartitionBySlot(5)).isEqualTo(node2); + } + + @Test + void getPartitionBySlot() { + + Partitions partitions = new Partitions(); + + assertThat(partitions.getPartitionBySlot(1)).isNull(); + + partitions.add(node1); + assertThat(partitions.getPartitionBySlot(1)).isEqualTo(node1); + } + + @Test + void getPartitionByAlias() { + + Partitions partitions = new Partitions(); + node1.addAlias(RedisURI.create("foobar", 1234)); + partitions.add(node1); + + assertThat(partitions.getPartition(node1.getUri().getHost(), node1.getUri().getPort())).isEqualTo(node1); + assertThat(partitions.getPartition("foobar", 1234)).isEqualTo(node1); + 
assertThat(partitions.getPartition("unknown", 1234)).isNull(); + } + + @Test + void remove() { + + Partitions partitions = new Partitions(); + partitions.addAll(Arrays.asList(node1, node2)); + partitions.remove(node1); + + assertThat(partitions.getPartitionBySlot(1)).isNull(); + assertThat(partitions.getPartitionBySlot(5)).isEqualTo(node2); + } + + @Test + void removeAll() { + + Partitions partitions = new Partitions(); + partitions.addAll(Arrays.asList(node1, node2)); + partitions.removeAll(Arrays.asList(node1)); + + assertThat(partitions.getPartitionBySlot(1)).isNull(); + assertThat(partitions.getPartitionBySlot(5)).isEqualTo(node2); + } + + @Test + void clear() { + + Partitions partitions = new Partitions(); + partitions.addAll(Arrays.asList(node1, node2)); + partitions.clear(); + + assertThat(partitions.getPartitionBySlot(1)).isNull(); + assertThat(partitions.getPartitionBySlot(5)).isNull(); + } + + @Test + void retainAll() { + + Partitions partitions = new Partitions(); + partitions.addAll(Arrays.asList(node1, node2)); + partitions.retainAll(Arrays.asList(node2)); + + assertThat(partitions.getPartitionBySlot(1)).isNull(); + assertThat(partitions.getPartitionBySlot(5)).isEqualTo(node2); + } + + @Test + void toArray() { + + Partitions partitions = new Partitions(); + partitions.addAll(Arrays.asList(node1, node2)); + + assertThat(partitions.toArray()).contains(node1, node2); + } + + @Test + void toArrayUsesReadView() { + + Partitions partitions = new Partitions(); + partitions.getPartitions().addAll(Arrays.asList(node1, node2)); + + assertThat(partitions.toArray()).doesNotContain(node1, node2); + partitions.updateCache(); + assertThat(partitions.toArray()).contains(node1, node2); + } + + @Test + void toArray2() { + + Partitions partitions = new Partitions(); + partitions.addAll(Arrays.asList(node1, node2)); + + assertThat(partitions.toArray(new RedisClusterNode[2])).contains(node1, node2); + } + + @Test + void toArray2UsesReadView() { + + Partitions partitions = new Partitions(); + partitions.getPartitions().addAll(Arrays.asList(node1, node2)); + + assertThat(partitions.toArray(new RedisClusterNode[2])).doesNotContain(node1, node2); + + partitions.updateCache(); + + assertThat(partitions.toArray(new RedisClusterNode[2])).contains(node1, node2); + } + + @Test + void getPartitionByNodeId() { + + Partitions partitions = new Partitions(); + partitions.add(node1); + partitions.add(node2); + + assertThat(partitions.getPartitionByNodeId("a")).isEqualTo(node1); + assertThat(partitions.getPartitionByNodeId("c")).isNull(); + } + + @Test + void reload() { + + RedisClusterNode other = new RedisClusterNode(RedisURI.create("localhost", 6666), "c", true, "", 0, 0, 0, + Arrays.asList(1, 2, 3, 4, 5, 6), new HashSet<>()); + + Partitions partitions = new Partitions(); + partitions.add(other); + + partitions.reload(Arrays.asList(node1, node1)); + + assertThat(partitions.getPartitionByNodeId("a")).isEqualTo(node1); + assertThat(partitions.getPartitionBySlot(1)).isEqualTo(node1); + } + + @Test + void reloadEmpty() { + + Partitions partitions = new Partitions(); + partitions.reload(Arrays.asList()); + + assertThat(partitions.getPartitionBySlot(1)).isNull(); + } + + @Test + void isEmpty() { + + Partitions partitions = new Partitions(); + partitions.add(node1); + + assertThat(partitions.isEmpty()).isFalse(); + } + + @Test + void size() { + + Partitions partitions = new Partitions(); + partitions.add(node1); + + assertThat(partitions.size()).isEqualTo(1); + } + + @Test + void sizeUsesReadView() { + + Partitions 
partitions = new Partitions(); + partitions.getPartitions().add(node1); + + assertThat(partitions.size()).isEqualTo(0); + + partitions.updateCache(); + + assertThat(partitions.size()).isEqualTo(1); + } + + @Test + void getPartition() { + + Partitions partitions = new Partitions(); + partitions.add(node1); + + assertThat(partitions.getPartition(0)).isEqualTo(node1); + } + + @Test + void iterator() { + + Partitions partitions = new Partitions(); + partitions.add(node1); + + assertThat(partitions.iterator().next()).isEqualTo(node1); + } + + @Test + void iteratorUsesReadView() { + + Partitions partitions = new Partitions(); + partitions.getPartitions().add(node1); + + assertThat(partitions.iterator().hasNext()).isFalse(); + partitions.updateCache(); + + assertThat(partitions.iterator().hasNext()).isTrue(); + } + + @Test + void iteratorIsSafeDuringUpdate() { + + Partitions partitions = new Partitions(); + partitions.add(node1); + partitions.add(node2); + + Iterator iterator = partitions.iterator(); + + partitions.remove(node2); + + assertThat(iterator.hasNext()).isTrue(); + assertThat(iterator.next()).isEqualTo(node1); + assertThat(iterator.next()).isEqualTo(node2); + + iterator = partitions.iterator(); + + partitions.remove(node2); + + assertThat(iterator.hasNext()).isTrue(); + assertThat(iterator.next()).isEqualTo(node1); + assertThat(iterator.hasNext()).isFalse(); + } + + @Test + void testToString() { + + Partitions partitions = new Partitions(); + partitions.add(node1); + + assertThat(partitions.toString()).startsWith("Partitions ["); + } +} diff --git a/src/test/java/io/lettuce/core/cluster/models/partitions/RedisClusterNodeUnitTests.java b/src/test/java/io/lettuce/core/cluster/models/partitions/RedisClusterNodeUnitTests.java new file mode 100644 index 0000000000..3347175b88 --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/models/partitions/RedisClusterNodeUnitTests.java @@ -0,0 +1,66 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
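Several PartitionsUnitTests cases (containsUsesReadView, sizeUsesReadView, iteratorUsesReadView) pin down the read-view contract: mutations made through getPartitions() bypass the internal cache and only become visible after updateCache(), whereas add(...) and the other collection methods refresh the cache themselves. A compact sketch of that behaviour, using the same constructor as the test fixtures; the class name is illustrative.

```java
import java.util.Arrays;
import java.util.HashSet;

import io.lettuce.core.RedisURI;
import io.lettuce.core.cluster.models.partitions.Partitions;
import io.lettuce.core.cluster.models.partitions.RedisClusterNode;

public class PartitionsReadViewSketch {

    public static void main(String[] args) {

        RedisClusterNode node = new RedisClusterNode(RedisURI.create("localhost", 6379), "a", true, "", 0, 0, 0,
                Arrays.asList(1, 2, 3), new HashSet<>());

        Partitions partitions = new Partitions();

        partitions.getPartitions().add(node);          // mutates the backing list, bypassing the cache
        System.out.println(partitions.contains(node)); // false: the read view is stale

        partitions.updateCache();                      // rebuild the read view
        System.out.println(partitions.contains(node)); // true

        Partitions viaAdd = new Partitions();
        viaAdd.add(node);                              // add(...) refreshes the cache itself
        System.out.println(viaAdd.getPartitionBySlot(1).getNodeId()); // "a"
    }
}
```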
+ */ +package io.lettuce.core.cluster.models.partitions; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.util.Arrays; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.RedisURI; +import io.lettuce.core.cluster.SlotHash; + +/** + * @author Mark Paluch + */ +class RedisClusterNodeUnitTests { + + @Test + void shouldCopyNode() { + + RedisClusterNode node = new RedisClusterNode(); + node.setSlots(Arrays.asList(1, 2, 3, SlotHash.SLOT_COUNT - 1)); + node.addAlias(RedisURI.create("foo", 6379)); + + RedisClusterNode copy = new RedisClusterNode(node); + + assertThat(copy.getSlots()).containsExactly(1, 2, 3, SlotHash.SLOT_COUNT - 1); + assertThat(copy.hasSlot(1)).isTrue(); + assertThat(copy.hasSlot(SlotHash.SLOT_COUNT - 1)).isTrue(); + assertThat(copy.getAliases()).contains(RedisURI.create("foo", 6379)); + } + + @Test + void testEquality() { + + RedisClusterNode node = new RedisClusterNode(); + + assertThat(node).isEqualTo(new RedisClusterNode()); + assertThat(node.hashCode()).isEqualTo(new RedisClusterNode().hashCode()); + + node.setUri(new RedisURI()); + assertThat(node.hashCode()).isNotEqualTo(new RedisClusterNode()); + } + + @Test + void testToString() { + + RedisClusterNode node = new RedisClusterNode(); + + assertThat(node.toString()).contains(RedisClusterNode.class.getSimpleName()); + } +} diff --git a/src/test/java/io/lettuce/core/cluster/models/slots/ClusterSlotsParserUnitTests.java b/src/test/java/io/lettuce/core/cluster/models/slots/ClusterSlotsParserUnitTests.java new file mode 100644 index 0000000000..4cd306261c --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/models/slots/ClusterSlotsParserUnitTests.java @@ -0,0 +1,134 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
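The ClusterSlotsParserUnitTests further down build their input as nested lists that mimic the CLUSTER SLOTS reply: [from, to, [host, port, nodeId], [replicaHost, replicaPort, replicaNodeId], ...]. The sketch below shows that shape in isolation; it assumes the parser accepts any list structured like the test fixtures, and the host, ports and node ids are made up.

```java
import java.util.Arrays;
import java.util.List;

import io.lettuce.core.cluster.models.partitions.RedisClusterNode;
import io.lettuce.core.cluster.models.slots.ClusterSlotRange;
import io.lettuce.core.cluster.models.slots.ClusterSlotsParser;
import io.lettuce.core.internal.LettuceLists;

public class ClusterSlotsSketch {

    public static void main(String[] args) {

        // One slot range 0-5460 with a master and a single replica.
        List<?> reply = Arrays.asList(
                LettuceLists.newList("0", "5460",
                        LettuceLists.newList("127.0.0.1", 7379, "nodeId1"),
                        LettuceLists.newList("127.0.0.1", 7382, "nodeId2")));

        List<ClusterSlotRange> ranges = ClusterSlotsParser.parse(reply);

        RedisClusterNode master = ranges.get(0).getMasterNode();
        System.out.println(master.getNodeId() + " serves slots 0-5460");
    }
}
```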
+ */ +package io.lettuce.core.cluster.models.slots; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; + +import java.util.ArrayList; +import java.util.Arrays; +import java.util.List; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; +import io.lettuce.core.internal.LettuceLists; + +@SuppressWarnings("unchecked") +class ClusterSlotsParserUnitTests { + + @Test + void testEmpty() { + List result = ClusterSlotsParser.parse(new ArrayList<>()); + assertThat(result).isNotNull().isEmpty(); + } + + @Test + void testOneString() { + List result = ClusterSlotsParser.parse(LettuceLists.newList("")); + assertThat(result).isNotNull().isEmpty(); + } + + @Test + void testOneStringInList() { + List list = Arrays.asList(LettuceLists.newList("0")); + List result = ClusterSlotsParser.parse(list); + assertThat(result).isNotNull().isEmpty(); + } + + @Test + void testParse() { + List list = Arrays.asList(LettuceLists.newList("0", "1", LettuceLists.newList("1", "2"))); + List result = ClusterSlotsParser.parse(list); + assertThat(result).hasSize(1); + + assertThat(result.get(0).getMasterNode()).isNotNull(); + } + + @Test + void testParseWithReplica() { + List list = Arrays.asList(LettuceLists.newList("100", "200", LettuceLists.newList("1", "2", "nodeId1"), + LettuceLists.newList("1", 2, "nodeId2"))); + List result = ClusterSlotsParser.parse(list); + assertThat(result).hasSize(1); + ClusterSlotRange clusterSlotRange = result.get(0); + + RedisClusterNode masterNode = clusterSlotRange.getMasterNode(); + assertThat(masterNode).isNotNull(); + assertThat(masterNode.getNodeId()).isEqualTo("nodeId1"); + assertThat(masterNode.getUri().getHost()).isEqualTo("1"); + assertThat(masterNode.getUri().getPort()).isEqualTo(2); + assertThat(masterNode.getFlags()).contains(RedisClusterNode.NodeFlag.MASTER); + assertThat(masterNode.getSlots()).contains(100, 101, 199, 200); + assertThat(masterNode.getSlots()).doesNotContain(99, 201); + assertThat(masterNode.getSlots()).hasSize(101); + + assertThat(clusterSlotRange.getSlaveNodes()).hasSize(1); + + RedisClusterNode replica = clusterSlotRange.getReplicaNodes().get(0); + + assertThat(replica.getNodeId()).isEqualTo("nodeId2"); + assertThat(replica.getSlaveOf()).isEqualTo("nodeId1"); + assertThat(replica.getFlags()).contains(RedisClusterNode.NodeFlag.SLAVE); + } + + @Test + void testSameNode() { + List list = Arrays.asList( + LettuceLists.newList("100", "200", LettuceLists.newList("1", "2", "nodeId1"), + LettuceLists.newList("1", 2, "nodeId2")), + LettuceLists.newList("200", "300", LettuceLists.newList("1", "2", "nodeId1"), + LettuceLists.newList("1", 2, "nodeId2"))); + + List result = ClusterSlotsParser.parse(list); + assertThat(result).hasSize(2); + + assertThat(result.get(0).getMasterNode()).isSameAs(result.get(1).getMasterNode()); + + RedisClusterNode masterNode = result.get(0).getMasterNode(); + assertThat(masterNode).isNotNull(); + assertThat(masterNode.getNodeId()).isEqualTo("nodeId1"); + assertThat(masterNode.getUri().getHost()).isEqualTo("1"); + assertThat(masterNode.getUri().getPort()).isEqualTo(2); + assertThat(masterNode.getFlags()).contains(RedisClusterNode.NodeFlag.MASTER); + assertThat(masterNode.getSlots()).contains(100, 101, 199, 200, 203); + assertThat(masterNode.getSlots()).doesNotContain(99, 301); + assertThat(masterNode.getSlots()).hasSize(201); + } + + @Test + void testParseInvalidMaster() { + + List list = 
Arrays.asList(LettuceLists.newList("0", "1", LettuceLists.newList("1"))); + assertThatThrownBy(() -> ClusterSlotsParser.parse(list)).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void testParseInvalidMaster2() { + List list = Arrays.asList(LettuceLists.newList("0", "1", "")); + assertThatThrownBy(() -> ClusterSlotsParser.parse(list)).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void testModel() { + + ClusterSlotRange range = new ClusterSlotRange(); + range.setFrom(1); + range.setTo(2); + + assertThat(range.toString()).contains(ClusterSlotRange.class.getSimpleName()); + } +} diff --git a/src/test/java/io/lettuce/core/cluster/pubsub/RedisClusterPubSubConnectionIntegrationTests.java b/src/test/java/io/lettuce/core/cluster/pubsub/RedisClusterPubSubConnectionIntegrationTests.java new file mode 100644 index 0000000000..472a2fdf01 --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/pubsub/RedisClusterPubSubConnectionIntegrationTests.java @@ -0,0 +1,322 @@ +/* + * Copyright 2016-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.pubsub; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.util.List; +import java.util.concurrent.BlockingQueue; +import java.util.concurrent.LinkedBlockingQueue; + +import javax.inject.Inject; + +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.core.RedisURI; +import io.lettuce.core.TestSupport; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.cluster.RedisClusterClient; +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; +import io.lettuce.core.cluster.pubsub.api.async.NodeSelectionPubSubAsyncCommands; +import io.lettuce.core.cluster.pubsub.api.async.PubSubAsyncNodeSelection; +import io.lettuce.core.cluster.pubsub.api.reactive.NodeSelectionPubSubReactiveCommands; +import io.lettuce.core.cluster.pubsub.api.reactive.PubSubReactiveNodeSelection; +import io.lettuce.core.cluster.pubsub.api.sync.NodeSelectionPubSubCommands; +import io.lettuce.core.cluster.pubsub.api.sync.PubSubNodeSelection; +import io.lettuce.core.pubsub.StatefulRedisPubSubConnection; +import io.lettuce.core.support.PubSubTestListener; +import io.lettuce.test.TestFutures; +import io.lettuce.test.LettuceExtension; + +/** + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +class RedisClusterPubSubConnectionIntegrationTests extends TestSupport { + + private final RedisClusterClient clusterClient; + + private final PubSubTestListener connectionListener = new PubSubTestListener(); + private final PubSubTestListener nodeListener = new PubSubTestListener(); + + private StatefulRedisClusterConnection connection; + private StatefulRedisClusterPubSubConnection pubSubConnection; + private 
StatefulRedisClusterPubSubConnection pubSubConnection2; + + @Inject + RedisClusterPubSubConnectionIntegrationTests(RedisClusterClient clusterClient) { + this.clusterClient = clusterClient; + } + + @BeforeEach + void openPubSubConnection() { + connection = clusterClient.connect(); + pubSubConnection = clusterClient.connectPubSub(); + pubSubConnection2 = clusterClient.connectPubSub(); + + } + + @AfterEach + void closePubSubConnection() { + connection.close(); + pubSubConnection.close(); + pubSubConnection2.close(); + } + + @Test + void testRegularClientPubSubChannels() { + + String nodeId = pubSubConnection.sync().clusterMyId(); + RedisClusterNode otherNode = getOtherThan(nodeId); + pubSubConnection.sync().subscribe(key); + + List channelsOnSubscribedNode = connection.getConnection(nodeId).sync().pubsubChannels(); + assertThat(channelsOnSubscribedNode).hasSize(1); + + List channelsOnOtherNode = connection.getConnection(otherNode.getNodeId()).sync().pubsubChannels(); + assertThat(channelsOnOtherNode).isEmpty(); + } + + @Test + void testRegularClientPublish() throws Exception { + + String nodeId = pubSubConnection.sync().clusterMyId(); + RedisClusterNode otherNode = getOtherThan(nodeId); + pubSubConnection.sync().subscribe(key); + pubSubConnection.addListener(connectionListener); + + connection.getConnection(nodeId).sync().publish(key, value); + assertThat(connectionListener.getMessages().take()).isEqualTo(value); + + connection.getConnection(otherNode.getNodeId()).sync().publish(key, value); + assertThat(connectionListener.getMessages().take()).isEqualTo(value); + } + + @Test + void testPubSubClientPublish() throws Exception { + + String nodeId = pubSubConnection.sync().clusterMyId(); + pubSubConnection.sync().subscribe(key); + pubSubConnection.addListener(connectionListener); + + assertThat(pubSubConnection2.sync().clusterMyId()).isEqualTo(nodeId); + + pubSubConnection2.sync().publish(key, value); + assertThat(connectionListener.getMessages().take()).isEqualTo(value); + } + + @Test + void testConnectToLeastClientsNode() { + + clusterClient.reloadPartitions(); + String nodeId = pubSubConnection.sync().clusterMyId(); + + StatefulRedisPubSubConnection connectionAfterPartitionReload = clusterClient.connectPubSub(); + String newConnectionNodeId = connectionAfterPartitionReload.sync().clusterMyId(); + connectionAfterPartitionReload.close(); + + assertThat(nodeId).isNotEqualTo(newConnectionNodeId); + } + + @Test + void testRegularClientPubSubPublish() throws Exception { + + String nodeId = pubSubConnection.sync().clusterMyId(); + RedisClusterNode otherNode = getOtherThan(nodeId); + pubSubConnection.sync().subscribe(key); + pubSubConnection.addListener(connectionListener); + + List channelsOnSubscribedNode = connection.getConnection(nodeId).sync().pubsubChannels(); + assertThat(channelsOnSubscribedNode).hasSize(1); + + RedisCommands otherNodeConnection = connection.getConnection(otherNode.getNodeId()).sync(); + otherNodeConnection.publish(key, value); + assertThat(connectionListener.getChannels().take()).isEqualTo(key); + } + + @Test + void testGetConnectionAsyncByNodeId() { + + RedisClusterNode partition = pubSubConnection.getPartitions().getPartition(0); + + StatefulRedisPubSubConnection node = TestFutures + .getOrTimeout(pubSubConnection.getConnectionAsync(partition + .getNodeId())); + + assertThat(node.sync().ping()).isEqualTo("PONG"); + } + + @Test + void testGetConnectionAsyncByHostAndPort() { + + RedisClusterNode partition = pubSubConnection.getPartitions().getPartition(0); + + RedisURI uri = 
partition.getUri(); + StatefulRedisPubSubConnection node = TestFutures + .getOrTimeout(pubSubConnection.getConnectionAsync(uri.getHost(), + uri.getPort())); + + assertThat(node.sync().ping()).isEqualTo("PONG"); + } + + @Test + void testNodeIdSubscription() throws Exception { + + RedisClusterNode partition = pubSubConnection.getPartitions().getPartition(0); + + StatefulRedisPubSubConnection node = pubSubConnection.getConnection(partition.getNodeId()); + node.addListener(nodeListener); + + node.sync().subscribe("channel"); + + pubSubConnection2.sync().publish("channel", "message"); + + assertThat(nodeListener.getMessages().take()).isEqualTo("message"); + assertThat(connectionListener.getMessages().poll()).isNull(); + } + + @Test + void testNodeMessagePropagationSubscription() throws Exception { + + RedisClusterNode partition = pubSubConnection.getPartitions().getPartition(0); + pubSubConnection.setNodeMessagePropagation(true); + pubSubConnection.addListener(connectionListener); + + StatefulRedisPubSubConnection node = pubSubConnection.getConnection(partition.getNodeId()); + node.sync().subscribe("channel"); + + pubSubConnection2.sync().publish("channel", "message"); + + assertThat(connectionListener.getMessages().take()).isEqualTo("message"); + } + + @Test + void testNodeHostAndPortMessagePropagationSubscription() throws Exception { + + RedisClusterNode partition = pubSubConnection.getPartitions().getPartition(0); + pubSubConnection.setNodeMessagePropagation(true); + pubSubConnection.addListener(connectionListener); + + RedisURI uri = partition.getUri(); + StatefulRedisPubSubConnection node = pubSubConnection.getConnection(uri.getHost(), uri.getPort()); + node.sync().subscribe("channel"); + + pubSubConnection2.sync().publish("channel", "message"); + + assertThat(connectionListener.getMessages().take()).isEqualTo("message"); + } + + @Test + void testAsyncSubscription() throws Exception { + + pubSubConnection.setNodeMessagePropagation(true); + pubSubConnection.addListener(connectionListener); + + PubSubAsyncNodeSelection masters = pubSubConnection.async().masters(); + NodeSelectionPubSubAsyncCommands commands = masters.commands(); + + TestFutures.awaitOrTimeout(commands.psubscribe("chann*")); + + pubSubConnection2.sync().publish("channel", "message"); + + assertThat(masters.size()).isEqualTo(2); + assertThat(connectionListener.getMessages().take()).isEqualTo("message"); + assertThat(connectionListener.getMessages().take()).isEqualTo("message"); + } + + @Test + void testSyncSubscription() throws Exception { + + pubSubConnection.setNodeMessagePropagation(true); + pubSubConnection.addListener(connectionListener); + + PubSubNodeSelection masters = pubSubConnection.sync().masters(); + NodeSelectionPubSubCommands commands = masters.commands(); + + commands.psubscribe("chann*"); + + pubSubConnection2.sync().publish("channel", "message"); + + assertThat(masters.size()).isEqualTo(2); + assertThat(connectionListener.getMessages().take()).isEqualTo("message"); + assertThat(connectionListener.getMessages().take()).isEqualTo("message"); + } + + @Test + void testReactiveSubscription() throws Exception { + + pubSubConnection.setNodeMessagePropagation(true); + pubSubConnection.addListener(connectionListener); + + PubSubReactiveNodeSelection masters = pubSubConnection.reactive().masters(); + NodeSelectionPubSubReactiveCommands commands = masters.commands(); + + commands.psubscribe("chann*").flux().then().block(); + + pubSubConnection2.sync().publish("channel", "message"); + + 
assertThat(masters.size()).isEqualTo(2); + assertThat(connectionListener.getMessages().take()).isEqualTo("message"); + assertThat(connectionListener.getMessages().take()).isEqualTo("message"); + assertThat(connectionListener.getMessages().poll()).isNull(); + } + + @Test + void testClusterListener() throws Exception { + + BlockingQueue nodes = new LinkedBlockingQueue<>(); + pubSubConnection.setNodeMessagePropagation(true); + pubSubConnection.addListener(connectionListener); + pubSubConnection.addListener(new RedisClusterPubSubAdapter() { + + @Override + public void message(RedisClusterNode node, String pattern, String channel, String message) { + nodes.add(node); + } + }); + + PubSubNodeSelection masters = pubSubConnection.sync().masters(); + NodeSelectionPubSubCommands commands = masters.commands(); + + commands.psubscribe("chann*"); + + pubSubConnection2.sync().publish("channel", "message"); + + assertThat(masters.size()).isEqualTo(2); + assertThat(connectionListener.getMessages().take()).isEqualTo("message"); + assertThat(connectionListener.getMessages().take()).isEqualTo("message"); + assertThat(connectionListener.getMessages().poll()).isNull(); + + assertThat(nodes.take()).isNotNull(); + assertThat(nodes.take()).isNotNull(); + assertThat(nodes.poll()).isNull(); + } + + private RedisClusterNode getOtherThan(String nodeId) { + for (RedisClusterNode redisClusterNode : clusterClient.getPartitions()) { + if (redisClusterNode.getNodeId().equals(nodeId)) { + continue; + } + return redisClusterNode; + } + + throw new IllegalStateException("No other nodes than " + nodeId + " available"); + } +} diff --git a/src/test/java/io/lettuce/core/cluster/topology/ClusterTopologyRefreshUnitTests.java b/src/test/java/io/lettuce/core/cluster/topology/ClusterTopologyRefreshUnitTests.java new file mode 100644 index 0000000000..1900c1bb33 --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/topology/ClusterTopologyRefreshUnitTests.java @@ -0,0 +1,560 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
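RedisClusterPubSubConnectionIntegrationTests covers node message propagation: when enabled, messages received by per-node subscriptions are forwarded to listeners registered on the cluster pub/sub connection, and a RedisClusterPubSubAdapter additionally reports which node delivered them. A hedged sketch of that wiring follows; the URI is a placeholder and the sleep merely stands in for real application work.

```java
import io.lettuce.core.cluster.RedisClusterClient;
import io.lettuce.core.cluster.models.partitions.RedisClusterNode;
import io.lettuce.core.cluster.pubsub.RedisClusterPubSubAdapter;
import io.lettuce.core.cluster.pubsub.StatefulRedisClusterPubSubConnection;

public class ClusterPubSubSketch {

    public static void main(String[] args) throws InterruptedException {

        RedisClusterClient client = RedisClusterClient.create("redis://localhost:7379");
        StatefulRedisClusterPubSubConnection<String, String> pubSub = client.connectPubSub();

        // Forward notifications from the per-node subscriptions to listeners on this connection.
        pubSub.setNodeMessagePropagation(true);
        pubSub.addListener(new RedisClusterPubSubAdapter<String, String>() {

            @Override
            public void message(RedisClusterNode node, String pattern, String channel, String message) {
                System.out.println(node.getNodeId() + " " + channel + ": " + message);
            }
        });

        // Pattern-subscribe on every master, as the sync/async/reactive selection tests do.
        pubSub.sync().masters().commands().psubscribe("chann*");

        Thread.sleep(5000); // stand-in for application work while messages arrive

        pubSub.close();
        client.shutdown();
    }
}
```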
+ */ +package io.lettuce.core.cluster.topology; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.fail; +import static org.mockito.ArgumentMatchers.any; +import static org.mockito.ArgumentMatchers.eq; +import static org.mockito.Mockito.*; + +import java.net.InetSocketAddress; +import java.nio.ByteBuffer; +import java.time.Duration; +import java.util.*; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.CompletionException; +import java.util.concurrent.CompletionStage; +import java.util.concurrent.TimeUnit; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; +import org.mockito.Mock; +import org.mockito.junit.jupiter.MockitoExtension; +import org.mockito.junit.jupiter.MockitoSettings; +import org.mockito.quality.Strictness; + +import io.lettuce.core.ConnectionFuture; +import io.lettuce.core.RedisException; +import io.lettuce.core.RedisURI; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.async.RedisAsyncCommands; +import io.lettuce.core.cluster.RedisClusterClient; +import io.lettuce.core.cluster.models.partitions.Partitions; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.protocol.CommandType; +import io.lettuce.core.protocol.RedisCommand; +import io.lettuce.core.resource.ClientResources; +import io.lettuce.core.resource.DnsResolvers; +import io.lettuce.core.resource.SocketAddressResolver; +import io.lettuce.test.settings.TestSettings; +import io.netty.util.Timeout; +import io.netty.util.concurrent.EventExecutorGroup; + +/** + * @author Mark Paluch + * @author Christian Weitendorf + */ +@SuppressWarnings("unchecked") +@ExtendWith(MockitoExtension.class) +@MockitoSettings(strictness = Strictness.LENIENT) +class ClusterTopologyRefreshUnitTests { + + private static final String NODE_1_VIEW = "1 127.0.0.1:7380 master,myself - 0 1401258245007 2 disconnected 8000-11999\n" + + "2 127.0.0.1:7381 master - 111 1401258245007 222 connected 7000 12000 12002-16383\n"; + private static final String NODE_2_VIEW = "1 127.0.0.1:7380 master - 0 1401258245007 2 disconnected 8000-11999\n" + + "2 127.0.0.1:7381 master,myself - 111 1401258245007 222 connected 7000 12000 12002-16383\n"; + + private DefaultClusterTopologyRefresh sut; + + @Mock + private RedisClusterClient client; + + @Mock + private StatefulRedisConnection connection; + + @Mock + private ClientResources clientResources; + + @Mock + private NodeConnectionFactory nodeConnectionFactory; + + @Mock + private StatefulRedisConnection connection1; + + @Mock + private RedisAsyncCommands asyncCommands1; + + @Mock + private StatefulRedisConnection connection2; + + @Mock + private RedisAsyncCommands asyncCommands2; + + @Mock + private EventExecutorGroup eventExecutors; + + @BeforeEach + void before() { + + io.netty.util.Timer timer = mock(io.netty.util.Timer.class); + when(timer.newTimeout(any(), anyLong(), any())).thenReturn(mock(Timeout.class)); + when(clientResources.timer()).thenReturn(timer); + when(clientResources.socketAddressResolver()).thenReturn(SocketAddressResolver.create(DnsResolvers.JVM_DEFAULT)); + when(clientResources.eventExecutorGroup()).thenReturn(eventExecutors); + when(connection1.async()).thenReturn(asyncCommands1); + when(connection2.async()).thenReturn(asyncCommands2); + when(connection1.closeAsync()).thenReturn(CompletableFuture.completedFuture(null)); + 
when(connection2.closeAsync()).thenReturn(CompletableFuture.completedFuture(null)); + + when(connection1.dispatch(any(RedisCommand.class))).thenAnswer(invocation -> { + + TimedAsyncCommand command = (TimedAsyncCommand) invocation.getArguments()[0]; + if (command.getType() == CommandType.CLUSTER) { + command.getOutput().set(ByteBuffer.wrap(NODE_1_VIEW.getBytes())); + command.complete(); + } + + if (command.getType() == CommandType.CLIENT) { + command.getOutput().set(ByteBuffer.wrap("c1\nc2\n".getBytes())); + command.complete(); + } + + if (command.getType() == CommandType.INFO) { + command.getOutput().set(ByteBuffer.wrap( + "# Clients\nconnected_clients:2\nclient_longest_output_list:0\nclient_biggest_input_buf:0\nblocked_clients:0" + .getBytes())); + command.complete(); + } + + command.encodedAtNs = 10; + command.completedAtNs = 50; + + return command; + }); + + when(connection2.dispatch(any(RedisCommand.class))).thenAnswer(invocation -> { + + TimedAsyncCommand command = (TimedAsyncCommand) invocation.getArguments()[0]; + if (command.getType() == CommandType.CLUSTER) { + command.getOutput().set(ByteBuffer.wrap(NODE_2_VIEW.getBytes())); + command.complete(); + } + + if (command.getType() == CommandType.CLIENT) { + command.getOutput().set(ByteBuffer.wrap("".getBytes())); + command.complete(); + } + + if (command.getType() == CommandType.INFO) { + command.getOutput().set(ByteBuffer.wrap( + "# Clients\nconnected_clients:2\nclient_longest_output_list:0\nclient_biggest_input_buf:0\nblocked_clients:0" + .getBytes())); + command.complete(); + } + + command.encodedAtNs = 10; + command.completedAtNs = 20; + + return command; + }); + + sut = new DefaultClusterTopologyRefresh(nodeConnectionFactory, clientResources); + } + + @Test + void getNodeTopologyView() throws Exception { + Requests requestedTopology = createClusterNodesRequests(1, NODE_1_VIEW); + Requests requestedClients = createClientListRequests(1, + "# Clients\r\nconnected_clients:2438\r\nclient_longest_output_list:0\r\nclient_biggest_input_buf:0\r\nblocked_clients:0"); + RedisURI redisURI = RedisURI.create("redis://localhost:1"); + NodeTopologyView nodeTopologyView = NodeTopologyView.from(redisURI, requestedTopology, requestedClients); + assertThat(nodeTopologyView.getConnectedClients()).isEqualTo(2438); + } + + @Test + void getNodeSpecificViewsNode1IsFasterThanNode2() throws Exception { + + Requests requests = createClusterNodesRequests(1, NODE_1_VIEW); + requests = createClusterNodesRequests(2, NODE_2_VIEW).mergeWith(requests); + + Requests clientRequests = createClientListRequests(1, "c1\nc2\n").mergeWith(createClientListRequests(2, "c1\nc2\n")); + + NodeTopologyViews nodeSpecificViews = sut.getNodeSpecificViews(requests, clientRequests); + + Collection values = nodeSpecificViews.toMap().values(); + + assertThat(values).hasSize(2); + + for (Partitions value : values) { + assertThat(value).extracting("nodeId").containsSequence("1", "2"); + } + } + + @Test + void shouldNotRequestTopologyIfExecutorShutsDown() { + + when(eventExecutors.isShuttingDown()).thenReturn(true); + + List seed = Arrays.asList(RedisURI.create("127.0.0.1", 7380), RedisURI.create("127.0.0.1", 7381)); + + sut.loadViews(seed, Duration.ofSeconds(1), true); + + verifyZeroInteractions(nodeConnectionFactory); + } + + @Test + void partitionsReturnedAsReported() throws Exception { + + System.setProperty("io.lettuce.core.topology.sort", "none"); + + String NODE_1_VIEW = "2 127.0.0.1:7381 master - 111 1401258245007 222 connected 7000 12000 12002-16383\n" + + "1 127.0.0.1:7380 
master,myself - 0 1401258245007 2 disconnected 8000-11999\n"; + String NODE_2_VIEW = "2 127.0.0.1:7381 master,myself - 111 1401258245007 222 connected 7000 12000 12002-16383\n" + + "1 127.0.0.1:7380 master - 0 1401258245007 2 disconnected 8000-11999\n"; + + Requests requests = createClusterNodesRequests(1, NODE_1_VIEW); + requests = createClusterNodesRequests(2, NODE_2_VIEW).mergeWith(requests); + + Requests clientRequests = createClientListRequests(1, "c1\nc2\n").mergeWith(createClientListRequests(2, "c1\nc2\n")); + + NodeTopologyViews nodeSpecificViews = sut.getNodeSpecificViews(requests, clientRequests); + + Collection values = nodeSpecificViews.toMap().values(); + + assertThat(values).hasSize(2); + + for (Partitions value : values) { + assertThat(value).extracting("nodeId").containsSequence("2", "1"); + } + + System.getProperties().remove("io.lettuce.core.topology.sort"); + } + + @Test + void getNodeSpecificViewTestingNoAddrFilter() throws Exception { + + String nodes1 = "n1 10.37.110.63:7000 slave n3 0 1452553664848 43 connected\n" + + "n2 10.37.110.68:7000 slave n6 0 1452553664346 45 connected\n" + + "badSlave :0 slave,fail,noaddr n5 1449160058028 1449160053146 46 disconnected\n" + + "n3 10.37.110.69:7000 master - 0 1452553662842 43 connected 3829-6787 7997-9999\n" + + "n4 10.37.110.62:7000 slave n3 0 1452553663844 43 connected\n" + + "n5 10.37.110.70:7000 myself,master - 0 0 46 connected 10039-14999\n" + + "n6 10.37.110.65:7000 master - 0 1452553663844 45 connected 0-3828 6788-7996 10000-10038 15000-16383"; + + Requests clusterNodesRequests = createClusterNodesRequests(1, nodes1); + Requests clientRequests = createClientListRequests(1, + "# Clients\r\nconnected_clients:2\r\nclient_longest_output_list:0\r\nclient_biggest_input_buf:0\r\nblocked_clients:0"); + + NodeTopologyViews nodeSpecificViews = sut.getNodeSpecificViews(clusterNodesRequests, clientRequests); + + List values = new ArrayList<>(nodeSpecificViews.toMap().values()); + + assertThat(values).hasSize(1); + + for (Partitions value : values) { + assertThat(value).extracting("nodeId").containsOnly("n1", "n2", "n3", "n4", "n5", "n6"); + } + + RedisClusterNodeSnapshot firstPartition = (RedisClusterNodeSnapshot) values.get(0).getPartition(0); + RedisClusterNodeSnapshot selfPartition = (RedisClusterNodeSnapshot) values.get(0).getPartition(4); + assertThat(firstPartition.getConnectedClients()).isEqualTo(2); + assertThat(selfPartition.getConnectedClients()).isNull(); + + } + + @Test + void getNodeSpecificViewsNode2IsFasterThanNode1() throws Exception { + + Requests clusterNodesRequests = createClusterNodesRequests(5, NODE_1_VIEW); + clusterNodesRequests = createClusterNodesRequests(1, NODE_2_VIEW).mergeWith(clusterNodesRequests); + + Requests clientRequests = createClientListRequests(5, "c1\nc2\n").mergeWith(createClientListRequests(1, "c1\nc2\n")); + + NodeTopologyViews nodeSpecificViews = sut.getNodeSpecificViews(clusterNodesRequests, clientRequests); + List values = new ArrayList<>(nodeSpecificViews.toMap().values()); + + assertThat(values).hasSize(2); + + for (Partitions value : values) { + assertThat(value).extracting("nodeId").containsExactly("2", "1"); + } + } + + @Test + void shouldAttemptToConnectOnlyOnce() { + + List seed = Arrays.asList(RedisURI.create("127.0.0.1", 7380), RedisURI.create("127.0.0.1", 7381)); + + when(nodeConnectionFactory.connectToNodeAsync(any(RedisCodec.class), eq(new InetSocketAddress("127.0.0.1", 7380)))) + .thenReturn(completedFuture((StatefulRedisConnection) connection1)); + 
when(nodeConnectionFactory.connectToNodeAsync(any(RedisCodec.class), eq(new InetSocketAddress("127.0.0.1", 7381)))) + .thenReturn(completedWithException(new RedisException("connection failed"))); + + sut.loadViews(seed, Duration.ofSeconds(1), true); + + verify(nodeConnectionFactory).connectToNodeAsync(any(RedisCodec.class), eq(new InetSocketAddress("127.0.0.1", 7380))); + verify(nodeConnectionFactory).connectToNodeAsync(any(RedisCodec.class), eq(new InetSocketAddress("127.0.0.1", 7381))); + } + + @Test + void shouldFailIfNoNodeConnects() { + + List seed = Arrays.asList(RedisURI.create("127.0.0.1", 7380), RedisURI.create("127.0.0.1", 7381)); + + when(nodeConnectionFactory.connectToNodeAsync(any(RedisCodec.class), eq(new InetSocketAddress("127.0.0.1", 7380)))) + .thenReturn(completedWithException(new RedisException("connection failed"))); + when(nodeConnectionFactory.connectToNodeAsync(any(RedisCodec.class), eq(new InetSocketAddress("127.0.0.1", 7381)))) + .thenReturn(completedWithException(new RedisException("connection failed"))); + + try { + sut.loadViews(seed, Duration.ofSeconds(1), true).toCompletableFuture().join(); + fail("Missing RedisConnectionException"); + } catch (Exception e) { + + assertThat(e.getCause()).hasMessageStartingWith("Cannot retrieve cluster partitions from "); + assertThat(e.getCause().getSuppressed()).hasSize(2); + } + + verify(nodeConnectionFactory).connectToNodeAsync(any(RedisCodec.class), eq(new InetSocketAddress("127.0.0.1", 7380))); + verify(nodeConnectionFactory).connectToNodeAsync(any(RedisCodec.class), eq(new InetSocketAddress("127.0.0.1", 7381))); + } + + @Test + void shouldShouldDiscoverNodes() { + + List seed = Collections.singletonList(RedisURI.create("127.0.0.1", 7380)); + + when(nodeConnectionFactory.connectToNodeAsync(any(RedisCodec.class), eq(new InetSocketAddress("127.0.0.1", 7380)))) + .thenReturn(completedFuture((StatefulRedisConnection) connection1)); + when(nodeConnectionFactory.connectToNodeAsync(any(RedisCodec.class), eq(new InetSocketAddress("127.0.0.1", 7381)))) + .thenReturn(completedFuture((StatefulRedisConnection) connection2)); + + sut.loadViews(seed, Duration.ofSeconds(1), true); + + verify(nodeConnectionFactory).connectToNodeAsync(any(RedisCodec.class), eq(new InetSocketAddress("127.0.0.1", 7380))); + verify(nodeConnectionFactory).connectToNodeAsync(any(RedisCodec.class), eq(new InetSocketAddress("127.0.0.1", 7381))); + } + + @Test + void shouldShouldNotDiscoverNodes() { + + List seed = Collections.singletonList(RedisURI.create("127.0.0.1", 7380)); + + when(nodeConnectionFactory.connectToNodeAsync(any(RedisCodec.class), eq(new InetSocketAddress("127.0.0.1", 7380)))) + .thenReturn(completedFuture((StatefulRedisConnection) connection1)); + + sut.loadViews(seed, Duration.ofSeconds(1), false); + + verify(nodeConnectionFactory).connectToNodeAsync(any(RedisCodec.class), eq(new InetSocketAddress("127.0.0.1", 7380))); + verifyNoMoreInteractions(nodeConnectionFactory); + } + + @Test + void shouldNotFailOnDuplicateSeedNodes() { + + List seed = Arrays.asList(RedisURI.create("127.0.0.1", 7380), RedisURI.create("127.0.0.1", 7381), + RedisURI.create("127.0.0.1", 7381)); + + when(nodeConnectionFactory.connectToNodeAsync(any(RedisCodec.class), eq(new InetSocketAddress("127.0.0.1", 7380)))) + .thenReturn(completedFuture((StatefulRedisConnection) connection1)); + + when(nodeConnectionFactory.connectToNodeAsync(any(RedisCodec.class), eq(new InetSocketAddress("127.0.0.1", 7381)))) + .thenReturn(completedFuture((StatefulRedisConnection) connection2)); + + 
sut.loadViews(seed, Duration.ofSeconds(1), true); + + verify(nodeConnectionFactory).connectToNodeAsync(any(RedisCodec.class), eq(new InetSocketAddress("127.0.0.1", 7380))); + verify(nodeConnectionFactory).connectToNodeAsync(any(RedisCodec.class), eq(new InetSocketAddress("127.0.0.1", 7381))); + } + + @Test + void shouldCloseConnections() { + + List seed = Arrays.asList(RedisURI.create("127.0.0.1", 7380), RedisURI.create("127.0.0.1", 7381)); + + when(nodeConnectionFactory.connectToNodeAsync(any(RedisCodec.class), eq(new InetSocketAddress("127.0.0.1", 7380)))) + .thenReturn(completedFuture((StatefulRedisConnection) connection1)); + when(nodeConnectionFactory.connectToNodeAsync(any(RedisCodec.class), eq(new InetSocketAddress("127.0.0.1", 7381)))) + .thenReturn(completedFuture((StatefulRedisConnection) connection2)); + + sut.loadViews(seed, Duration.ofSeconds(1), true); + + verify(connection1).closeAsync(); + verify(connection2).closeAsync(); + } + + @Test + void undiscoveredAdditionalNodesShouldBeLastUsingClientCount() { + + List seed = Collections.singletonList(RedisURI.create("127.0.0.1", 7380)); + + when(nodeConnectionFactory.connectToNodeAsync(any(RedisCodec.class), eq(new InetSocketAddress("127.0.0.1", 7380)))) + .thenReturn(completedFuture((StatefulRedisConnection) connection1)); + + Map partitionsMap = sut.loadViews(seed, Duration.ofSeconds(1), false).toCompletableFuture() + .join(); + + Partitions partitions = partitionsMap.values().iterator().next(); + + List nodes = TopologyComparators.sortByClientCount(partitions); + + assertThat(nodes).hasSize(2).extracting(RedisClusterNode::getUri).containsSequence(seed.get(0), + RedisURI.create("127.0.0.1", 7381)); + } + + @Test + void discoveredAdditionalNodesShouldBeOrderedUsingClientCount() { + + List seed = Collections.singletonList(RedisURI.create("127.0.0.1", 7380)); + + when(nodeConnectionFactory.connectToNodeAsync(any(RedisCodec.class), eq(new InetSocketAddress("127.0.0.1", 7380)))) + .thenReturn(completedFuture((StatefulRedisConnection) connection1)); + when(nodeConnectionFactory.connectToNodeAsync(any(RedisCodec.class), eq(new InetSocketAddress("127.0.0.1", 7381)))) + .thenReturn(completedFuture((StatefulRedisConnection) connection2)); + + Map partitionsMap = sut.loadViews(seed, Duration.ofSeconds(1), true).toCompletableFuture().join(); + + Partitions partitions = partitionsMap.values().iterator().next(); + + List nodes = TopologyComparators.sortByClientCount(partitions); + + assertThat(nodes).hasSize(2).extracting(RedisClusterNode::getUri).containsSequence(RedisURI.create("127.0.0.1", 7381), + seed.get(0)); + } + + @Test + void undiscoveredAdditionalNodesShouldBeLastUsingLatency() { + + List seed = Collections.singletonList(RedisURI.create("127.0.0.1", 7380)); + + when(nodeConnectionFactory.connectToNodeAsync(any(RedisCodec.class), eq(new InetSocketAddress("127.0.0.1", 7380)))) + .thenReturn(completedFuture((StatefulRedisConnection) connection1)); + + Map partitionsMap = sut.loadViews(seed, Duration.ofSeconds(1), false).toCompletableFuture() + .join(); + + Partitions partitions = partitionsMap.values().iterator().next(); + + List nodes = TopologyComparators.sortByLatency(partitions); + + assertThat(nodes).hasSize(2).extracting(RedisClusterNode::getUri).containsSequence(seed.get(0), + RedisURI.create("127.0.0.1", 7381)); + } + + @Test + void discoveredAdditionalNodesShouldBeOrderedUsingLatency() { + + List seed = Collections.singletonList(RedisURI.create("127.0.0.1", 7380)); + + 
when(nodeConnectionFactory.connectToNodeAsync(any(RedisCodec.class), eq(new InetSocketAddress("127.0.0.1", 7380)))) + .thenReturn(completedFuture((StatefulRedisConnection) connection1)); + when(nodeConnectionFactory.connectToNodeAsync(any(RedisCodec.class), eq(new InetSocketAddress("127.0.0.1", 7381)))) + .thenReturn(completedFuture((StatefulRedisConnection) connection2)); + + Map<RedisURI, Partitions> partitionsMap = sut.loadViews(seed, Duration.ofSeconds(1), true).toCompletableFuture().join(); + + Partitions partitions = partitionsMap.values().iterator().next(); + + List<RedisClusterNode> nodes = TopologyComparators.sortByLatency(partitions); + + assertThat(nodes).hasSize(2).extracting(RedisClusterNode::getUri).containsSequence(RedisURI.create("127.0.0.1", 7381), + seed.get(0)); + } + + @Test + void shouldPropagateCommandFailures() { + + List<RedisURI> seed = Arrays.asList(RedisURI.create("127.0.0.1", 7380), RedisURI.create("127.0.0.1", 7381)); + + when(nodeConnectionFactory.connectToNodeAsync(any(RedisCodec.class), eq(new InetSocketAddress("127.0.0.1", 7380)))) + .thenReturn(completedFuture((StatefulRedisConnection) connection1)); + when(nodeConnectionFactory.connectToNodeAsync(any(RedisCodec.class), eq(new InetSocketAddress("127.0.0.1", 7381)))) + .thenReturn(completedFuture((StatefulRedisConnection) connection2)); + + reset(connection1, connection2); + + when(connection1.async()).thenReturn(asyncCommands1); + when(connection2.async()).thenReturn(asyncCommands2); + when(connection1.closeAsync()).thenReturn(CompletableFuture.completedFuture(null)); + when(connection2.closeAsync()).thenReturn(CompletableFuture.completedFuture(null)); + + when(connection1.dispatch(any(RedisCommand.class))).thenAnswer(invocation -> { + + TimedAsyncCommand command = invocation.getArgument(0); + command.completeExceptionally(new RedisException("AUTH")); + return command; + }); + + RedisException nestedException = new RedisException("NESTED"); + + when(connection2.dispatch(any(RedisCommand.class))).thenAnswer(invocation -> { + + TimedAsyncCommand command = invocation.getArgument(0); + command.completeExceptionally(nestedException); + return command; + }); + + CompletionStage<Map<RedisURI, Partitions>> actual = sut.loadViews(seed, Duration.ofSeconds(1), true); + assertThat(actual).isCompletedExceptionally(); + + try { + actual.toCompletableFuture().join(); + fail("Missing CompletionException"); + } catch (CompletionException e) { + + assertThat(e.getCause()).hasSuppressedException(nestedException); + } + } + + Requests createClusterNodesRequests(int duration, String nodes) { + + RedisURI redisURI = RedisURI.create("redis://localhost:" + duration); + Connections connections = new Connections(clientResources, new HashMap<>()); + connections.addConnection(redisURI, connection); + + Requests requests = connections.requestTopology(100, TimeUnit.SECONDS); + TimedAsyncCommand command = requests.getRequest(redisURI); + + command.getOutput().set(ByteBuffer.wrap(nodes.getBytes())); + command.complete(); + command.encodedAtNs = 0; + command.completedAtNs = duration; + + return requests; + } + + Requests createClientListRequests(int duration, String response) { + + RedisURI redisURI = RedisURI.create("redis://localhost:" + duration); + Connections connections = new Connections(clientResources, new HashMap<>()); + connections.addConnection(redisURI, connection); + + Requests requests = connections.requestTopology(100, TimeUnit.SECONDS); + TimedAsyncCommand command = requests.getRequest(redisURI); + + command.getOutput().set(ByteBuffer.wrap(response.getBytes())); + command.complete(); + + return
requests; + } + + private static <T> ConnectionFuture<T> completedFuture(T value) { + + return ConnectionFuture.from(InetSocketAddress.createUnresolved(TestSettings.host(), TestSettings.port()), + CompletableFuture.completedFuture(value)); + } + + private static <T> ConnectionFuture<T> completedWithException(Exception e) { + + CompletableFuture<T> future = new CompletableFuture<>(); + future.completeExceptionally(e); + + return ConnectionFuture.from(InetSocketAddress.createUnresolved(TestSettings.host(), TestSettings.port()), future); + } +} diff --git a/src/test/java/io/lettuce/core/cluster/topology/NodeTopologyViewsUnitTests.java b/src/test/java/io/lettuce/core/cluster/topology/NodeTopologyViewsUnitTests.java new file mode 100644 index 0000000000..05dcf15483 --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/topology/NodeTopologyViewsUnitTests.java @@ -0,0 +1,68 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.topology; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; + +import java.util.Arrays; +import java.util.Set; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.RedisURI; + +/** + * @author Mark Paluch + */ +class NodeTopologyViewsUnitTests { + + @Test + void shouldReuseKnownUris() { + + RedisURI localhost = RedisURI.create("127.0.0.1", 6479); + RedisURI otherhost = RedisURI.create("127.0.0.2", 7000); + + RedisURI host3 = RedisURI.create("127.0.0.3", 7000); + + String viewByLocalhost = "1 127.0.0.1:6479 master,myself - 0 1401258245007 2 connected 8000-11999\n" + + "2 127.0.0.2:7000 master - 111 1401258245007 222 connected 7000 12000 12002-16383\n" + + "3 127.0.0.3:7000 master - 111 1401258245007 222 connected 7000 12000 12002-16383\n"; + + String viewByOtherhost = "1 127.0.0.2:6479 master - 0 1401258245007 2 connected 8000-11999\n" + + "2 127.0.0.2:7000 master,myself - 111 1401258245007 222 connected 7000 12000 12002-16383\n" + + "3 127.0.0.3:7000 master - 111 1401258245007 222 connected 7000 12000 12002-16383\n"; + + NodeTopologyView localhostView = new NodeTopologyView(localhost, viewByLocalhost, "", 0); + NodeTopologyView otherhostView = new NodeTopologyView(otherhost, viewByOtherhost, "", 0); + + NodeTopologyViews nodeTopologyViews = new NodeTopologyViews(Arrays.asList(localhostView, otherhostView)); + + Set<RedisURI> clusterNodes = nodeTopologyViews.getClusterNodes(); + assertThat(clusterNodes).contains(localhost, otherhost, host3); + } + + @Test + void shouldFailWithoutOwnPartition() { + + RedisURI localhost = RedisURI.create("127.0.0.1", 6479); + + String viewByLocalhost = "1 127.0.0.1:6479 master - 0 1401258245007 2 connected 8000-11999\n"; + + assertThatThrownBy(() -> new NodeTopologyView(localhost, viewByLocalhost, "", 0).getOwnPartition()).isInstanceOf( + IllegalStateException.class); + } +} diff --git a/src/test/java/io/lettuce/core/cluster/topology/RequestsUnitTests.java
b/src/test/java/io/lettuce/core/cluster/topology/RequestsUnitTests.java new file mode 100644 index 0000000000..a33da4c4ac --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/topology/RequestsUnitTests.java @@ -0,0 +1,86 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.topology; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.nio.ByteBuffer; +import java.util.concurrent.TimeUnit; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.RedisURI; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.output.StatusOutput; +import io.lettuce.core.protocol.Command; +import io.lettuce.core.protocol.CommandType; + +/** + * @author Mark Paluch + * @author Xujs + */ +class RequestsUnitTests { + + @Test + void shouldCreateTopologyView() throws Exception { + + RedisURI redisURI = RedisURI.create("localhost", 6379); + + Requests clusterNodesRequests = new Requests(); + String clusterNodesOutput = "1 127.0.0.1:7380 master,myself - 0 1401258245007 2 disconnected 8000-11999\n"; + clusterNodesRequests.addRequest(redisURI, getCommand(clusterNodesOutput)); + + Requests infoClientRequests = new Requests(); + String infoClientOutput = "# Clients\r\nconnected_clients:100\r\nclient_longest_output_list:0\r\nclient_biggest_input_buf:0\r\nblocked_clients:0"; + infoClientRequests.addRequest(redisURI, getCommand(infoClientOutput)); + + NodeTopologyView nodeTopologyView = NodeTopologyView.from(redisURI, clusterNodesRequests, infoClientRequests); + + assertThat(nodeTopologyView.isAvailable()).isTrue(); + assertThat(nodeTopologyView.getConnectedClients()).isEqualTo(100); + assertThat(nodeTopologyView.getPartitions()).hasSize(1); + assertThat(nodeTopologyView.getClusterNodes()).isEqualTo(clusterNodesOutput); + assertThat(nodeTopologyView.getClientList()).isEqualTo(infoClientOutput); + } + + @Test + void shouldCreateTopologyViewWithoutClientCount() throws Exception { + + RedisURI redisURI = RedisURI.create("localhost", 6379); + + Requests clusterNodesRequests = new Requests(); + String clusterNodesOutput = "1 127.0.0.1:7380 master,myself - 0 1401258245007 2 disconnected 8000-11999\n"; + clusterNodesRequests.addRequest(redisURI, getCommand(clusterNodesOutput)); + + Requests clientListRequests = new Requests(); + + NodeTopologyView nodeTopologyView = NodeTopologyView.from(redisURI, clusterNodesRequests, clientListRequests); + + assertThat(nodeTopologyView.isAvailable()).isFalse(); + assertThat(nodeTopologyView.getConnectedClients()).isEqualTo(0); + assertThat(nodeTopologyView.getPartitions()).isEmpty(); + assertThat(nodeTopologyView.getClusterNodes()).isNull(); + } + + private TimedAsyncCommand getCommand(String response) { + Command command = new Command<>(CommandType.TYPE, new StatusOutput<>(StringCodec.UTF8)); + TimedAsyncCommand timedAsyncCommand = new TimedAsyncCommand(command); + + command.getOutput().set(ByteBuffer.wrap(response.getBytes())); + 
timedAsyncCommand.complete(); + return timedAsyncCommand; + } +} diff --git a/src/test/java/io/lettuce/core/cluster/topology/TopologyComparatorsUnitTests.java b/src/test/java/io/lettuce/core/cluster/topology/TopologyComparatorsUnitTests.java new file mode 100644 index 0000000000..ce14f53b97 --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/topology/TopologyComparatorsUnitTests.java @@ -0,0 +1,366 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.topology; + +import static io.lettuce.core.cluster.topology.TopologyComparators.isChanged; +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; +import static org.assertj.core.util.Lists.newArrayList; + +import java.util.*; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.RedisURI; +import io.lettuce.core.cluster.models.partitions.ClusterPartitionParser; +import io.lettuce.core.cluster.models.partitions.Partitions; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; +import io.lettuce.core.internal.LettuceLists; + +/** + * Unit tests for {@link TopologyComparators}. + * + * @author Mark Paluch + * @author Alessandro Simi + */ +class TopologyComparatorsUnitTests { + + private RedisClusterNodeSnapshot node1 = createNode("1"); + private RedisClusterNodeSnapshot node2 = createNode("2"); + private RedisClusterNodeSnapshot node3 = createNode("3"); + + private static RedisClusterNodeSnapshot createNode(String nodeId) { + RedisClusterNodeSnapshot result = new RedisClusterNodeSnapshot(); + result.setNodeId(nodeId); + result.setUri(RedisURI.create("localhost", Integer.parseInt(nodeId))); + return result; + } + + @Test + void latenciesForAllNodes() { + + Map map = new HashMap<>(); + map.put(node1.getNodeId(), 1L); + map.put(node2.getNodeId(), 2L); + map.put(node3.getNodeId(), 3L); + + runTest(map, newArrayList(node1, node2, node3), newArrayList(node3, node1, node2)); + runTest(map, newArrayList(node1, node2, node3), newArrayList(node1, node2, node3)); + runTest(map, newArrayList(node1, node2, node3), newArrayList(node3, node2, node1)); + } + + @Test + void latenciesForTwoNodes_N1_N2() { + + Map map = new HashMap<>(); + map.put(node1.getNodeId(), 1L); + map.put(node2.getNodeId(), 2L); + + runTest(map, newArrayList(node1, node2, node3), newArrayList(node3, node1, node2)); + runTest(map, newArrayList(node1, node2, node3), newArrayList(node1, node2, node3)); + runTest(map, newArrayList(node1, node2, node3), newArrayList(node3, node2, node1)); + } + + @Test + void latenciesForTwoNodes_N2_N3() { + + Map map = new HashMap<>(); + map.put(node3.getNodeId(), 1L); + map.put(node2.getNodeId(), 2L); + + runTest(map, newArrayList(node3, node2, node1), newArrayList(node3, node1, node2)); + runTest(map, newArrayList(node3, node2, node1), newArrayList(node1, node2, node3)); + runTest(map, newArrayList(node3, node2, node1), newArrayList(node3, node2, node1)); + 
} + + @Test + void latenciesForOneNode() { + + Map map = Collections.singletonMap(node2.getNodeId(), 2L); + + runTest(map, newArrayList(node2, node3, node1), newArrayList(node3, node1, node2)); + runTest(map, newArrayList(node2, node1, node3), newArrayList(node1, node2, node3)); + runTest(map, newArrayList(node2, node3, node1), newArrayList(node3, node2, node1)); + } + + @Test + void shouldFail() { + + Map map = Collections.singletonMap(node2.getNodeId(), 2L); + + assertThatThrownBy(() -> runTest(map, newArrayList(node2, node1, node3), newArrayList(node3, node1, node2))) + .isInstanceOf(AssertionError.class); + } + + @Test + void testLatencyComparator() { + + RedisClusterNodeSnapshot node1 = new RedisClusterNodeSnapshot(); + node1.setLatencyNs(1L); + + RedisClusterNodeSnapshot node2 = new RedisClusterNodeSnapshot(); + node2.setLatencyNs(2L); + + RedisClusterNodeSnapshot node3 = new RedisClusterNodeSnapshot(); + node3.setLatencyNs(3L); + + List list = LettuceLists.newList(node2, node3, node1); + Collections.sort(list, TopologyComparators.LatencyComparator.INSTANCE); + + assertThat(list).containsSequence(node1, node2, node3); + } + + @Test + void testLatencyComparatorWithSomeNodesWithoutStats() { + + RedisClusterNodeSnapshot node1 = new RedisClusterNodeSnapshot(); + node1.setLatencyNs(1L); + + RedisClusterNodeSnapshot node2 = new RedisClusterNodeSnapshot(); + node2.setLatencyNs(2L); + + RedisClusterNodeSnapshot node3 = new RedisClusterNodeSnapshot(); + RedisClusterNodeSnapshot node4 = new RedisClusterNodeSnapshot(); + + List list = LettuceLists.newList(node2, node3, node4, node1); + Collections.sort(list, TopologyComparators.LatencyComparator.INSTANCE); + + assertThat(list).containsSequence(node1, node2, node3, node4); + } + + @Test + void testClientComparator() { + + RedisClusterNodeSnapshot node1 = new RedisClusterNodeSnapshot(); + node1.setConnectedClients(1); + + RedisClusterNodeSnapshot node2 = new RedisClusterNodeSnapshot(); + node2.setConnectedClients(2); + + RedisClusterNodeSnapshot node3 = new RedisClusterNodeSnapshot(); + node3.setConnectedClients(3); + + List list = LettuceLists.newList(node2, node3, node1); + Collections.sort(list, TopologyComparators.ClientCountComparator.INSTANCE); + + assertThat(list).containsSequence(node1, node2, node3); + } + + @Test + void testClientComparatorWithSomeNodesWithoutStats() { + + RedisClusterNodeSnapshot node1 = new RedisClusterNodeSnapshot(); + node1.setConnectedClients(1); + + RedisClusterNodeSnapshot node2 = new RedisClusterNodeSnapshot(); + node2.setConnectedClients(2); + + RedisClusterNodeSnapshot node3 = new RedisClusterNodeSnapshot(); + RedisClusterNodeSnapshot node4 = new RedisClusterNodeSnapshot(); + + List list = LettuceLists.newList(node2, node3, node4, node1); + Collections.sort(list, TopologyComparators.ClientCountComparator.INSTANCE); + + assertThat(list).containsSequence(node1, node2, node3, node4); + } + + @Test + void testLatencyComparatorWithoutClients() { + + RedisClusterNodeSnapshot node1 = new RedisClusterNodeSnapshot(); + node1.setConnectedClients(1); + + RedisClusterNodeSnapshot node2 = new RedisClusterNodeSnapshot(); + node2.setConnectedClients(null); + + RedisClusterNodeSnapshot node3 = new RedisClusterNodeSnapshot(); + node3.setConnectedClients(3); + + List list = LettuceLists.newList(node2, node3, node1); + Collections.sort(list, TopologyComparators.LatencyComparator.INSTANCE); + + assertThat(list).containsSequence(node1, node3, node2); + } + + @Test + void testFixedOrdering1() { + + List list = 
LettuceLists.newList(node2, node3, node1); + List fixedOrder = LettuceLists.newList(node1.getUri(), node2.getUri(), node3.getUri()); + + assertThat(TopologyComparators.predefinedSort(list, fixedOrder)).containsSequence(node1, node2, node3); + } + + @Test + void testFixedOrdering2() { + + List list = LettuceLists.newList(node2, node3, node1); + List fixedOrder = LettuceLists.newList(node3.getUri(), node2.getUri(), node1.getUri()); + + assertThat(TopologyComparators.predefinedSort(list, fixedOrder)).containsSequence(node3, node2, node1); + } + + @Test + void testFixedOrderingNoFixedPart() { + + List list = LettuceLists.newList(node2, node3, node1); + List fixedOrder = LettuceLists.newList(); + + assertThat(TopologyComparators.predefinedSort(list, fixedOrder)).containsSequence(node1, node2, node3); + } + + @Test + void testFixedOrderingPartiallySpecifiedOrder() { + + List list = LettuceLists.newList(node2, node3, node1); + List fixedOrder = LettuceLists.newList(node3.getUri(), node1.getUri()); + + assertThat(TopologyComparators.predefinedSort(list, fixedOrder)).containsSequence(node3, node1, node2); + } + + @Test + void isChangedSamePartitions() { + + String nodes = "c37ab8396be428403d4e55c0d317348be27ed973 127.0.0.1:7381 master - 111 1401258245007 222 connected 7000 12000 12002-16383\n" + + "3d005a179da7d8dc1adae6409d47b39c369e992b 127.0.0.1:7380 master - 0 1401258245007 2 disconnected 8000-11999\n"; + + Partitions partitions1 = ClusterPartitionParser.parse(nodes); + Partitions partitions2 = ClusterPartitionParser.parse(nodes); + assertThat(isChanged(partitions1, partitions2)).isFalse(); + } + + @Test + void isChangedDifferentOrder() { + String nodes1 = "3d005a179da7d8dc1adae6409d47b39c369e992b 127.0.0.1:7380 master,myself - 0 1401258245007 2 disconnected 8000-11999\n" + + "c37ab8396be428403d4e55c0d317348be27ed973 127.0.0.1:7381 master - 111 1401258245007 222 connected 7000 12000 12002-16383\n"; + + String nodes2 = "c37ab8396be428403d4e55c0d317348be27ed973 127.0.0.1:7381 master,myself - 111 1401258245007 222 connected 7000 12000 12002-16383\n" + + "3d005a179da7d8dc1adae6409d47b39c369e992b 127.0.0.1:7380 master - 0 1401258245007 2 disconnected 8000-11999\n"; + + assertThat(nodes1).isNotEqualTo(nodes2); + Partitions partitions1 = ClusterPartitionParser.parse(nodes1); + Partitions partitions2 = ClusterPartitionParser.parse(nodes2); + assertThat(isChanged(partitions1, partitions2)).isFalse(); + } + + @Test + void isChangedPortChanged() { + String nodes1 = "3d005a179da7d8dc1adae6409d47b39c369e992b 127.0.0.1:7382 master - 0 1401258245007 2 disconnected 8000-11999\n" + + "c37ab8396be428403d4e55c0d317348be27ed973 127.0.0.1:7381 master - 111 1401258245007 222 connected 7000 12000 12002-16383\n"; + + String nodes2 = "c37ab8396be428403d4e55c0d317348be27ed973 127.0.0.1:7381 master - 111 1401258245007 222 connected 7000 12000 12002-16383\n" + + "3d005a179da7d8dc1adae6409d47b39c369e992b 127.0.0.1:7380 master - 0 1401258245007 2 disconnected 8000-11999\n"; + + Partitions partitions1 = ClusterPartitionParser.parse(nodes1); + Partitions partitions2 = ClusterPartitionParser.parse(nodes2); + assertThat(isChanged(partitions1, partitions2)).isFalse(); + } + + @Test + void isChangedSlotsChanged() { + String nodes1 = "3d005a179da7d8dc1adae6409d47b39c369e992b 127.0.0.1:7380 master - 0 1401258245007 2 disconnected 8000-11999\n" + + "c37ab8396be428403d4e55c0d317348be27ed973 127.0.0.1:7381 master - 111 1401258245007 222 connected 7000 12000 12002-16383\n"; + + String nodes2 = "3d005a179da7d8dc1adae6409d47b39c369e992b 
127.0.0.1:7380 master - 0 1401258245007 2 disconnected 8000-11999\n" + + "c37ab8396be428403d4e55c0d317348be27ed973 127.0.0.1:7381 master - 111 1401258245007 222 connected 7000 12000 12001-16383\n"; + + Partitions partitions1 = ClusterPartitionParser.parse(nodes1); + Partitions partitions2 = ClusterPartitionParser.parse(nodes2); + assertThat(isChanged(partitions1, partitions2)).isTrue(); + } + + @Test + void isChangedNodeIdChanged() { + String nodes1 = "3d005a179da7d8dc1adae6409d47b39c369e992b 127.0.0.1:7380 master - 0 1401258245007 2 disconnected 8000-11999\n" + + "c37ab8396be428403d4e55c0d317348be27ed973 127.0.0.1:7381 master - 111 1401258245007 222 connected 7000 12000 12002-16383\n"; + + String nodes2 = "3d005a179da7d8dc1adae6409d47b39c369e992aa 127.0.0.1:7380 master - 0 1401258245007 2 disconnected 8000-11999\n" + + "c37ab8396be428403d4e55c0d317348be27ed973 127.0.0.1:7381 master - 111 1401258245007 222 connected 7000 12000 12002-16383\n"; + + Partitions partitions1 = ClusterPartitionParser.parse(nodes1); + Partitions partitions2 = ClusterPartitionParser.parse(nodes2); + assertThat(isChanged(partitions1, partitions2)).isTrue(); + } + + @Test + void isChangedFlagsChangedReplicaToMaster() { + String nodes1 = "3d005a179da7d8dc1adae6409d47b39c369e992b 127.0.0.1:7380 slave - 0 1401258245007 2 disconnected 8000-11999\n" + + "c37ab8396be428403d4e55c0d317348be27ed973 127.0.0.1:7381 master - 111 1401258245007 222 connected 7000 12000 12002-16383\n"; + + String nodes2 = "3d005a179da7d8dc1adae6409d47b39c369e992b 127.0.0.1:7380 master - 0 1401258245007 2 disconnected 8000-11999\n" + + "c37ab8396be428403d4e55c0d317348be27ed973 127.0.0.1:7381 master - 111 1401258245007 222 connected 7000 12000 12002-16383\n"; + + Partitions partitions1 = ClusterPartitionParser.parse(nodes1); + Partitions partitions2 = ClusterPartitionParser.parse(nodes2); + assertThat(isChanged(partitions1, partitions2)).isTrue(); + } + + @Test + void shouldConsiderNodesWithoutSlotsUnchanged() { + + String nodes1 = "3d005a179da7d8dc1adae6409d47b39c369e992b 127.0.0.1:7380 slave - 0 1401258245007 2 disconnected\n" + + "c37ab8396be428403d4e55c0d317348be27ed973 127.0.0.1:7381 master - 111 1401258245007 222 connected 7000 12000 12002-16383\n"; + + Partitions partitions1 = ClusterPartitionParser.parse(nodes1); + Partitions partitions2 = ClusterPartitionParser.parse(nodes1); + assertThat(isChanged(partitions1, partitions2)).isFalse(); + } + + @Test + void nodesShouldHaveSameSlots() { + RedisClusterNode nodeA = createNode(1, 4, 36, 98); + RedisClusterNode nodeB = createNode(4, 36, 1, 98); + assertThat(nodeA.getSlots().containsAll(nodeB.getSlots())).isTrue(); + assertThat(nodeA.hasSameSlotsAs(nodeB)).isTrue(); + } + + @Test + void nodesShouldNotHaveSameSlots() { + RedisClusterNode nodeA = createNode(1, 4, 36, 99); + RedisClusterNode nodeB = createNode(4, 36, 1, 100); + assertThat(nodeA.getSlots().containsAll(nodeB.getSlots())).isFalse(); + assertThat(nodeA.hasSameSlotsAs(nodeB)).isFalse(); + } + + private RedisClusterNode createNode(Integer... 
slots) { + RedisClusterNode node = new RedisClusterNode(); + node.setSlots(Arrays.asList(slots)); + return node; + } + + @Test + void isChangedFlagsChangedMasterToReplica() { + String nodes1 = "3d005a179da7d8dc1adae6409d47b39c369e992b 127.0.0.1:7380 master - 0 1401258245007 2 disconnected 8000-11999\n" + + "c37ab8396be428403d4e55c0d317348be27ed973 127.0.0.1:7381 master - 111 1401258245007 222 connected 7000 12000 12002-16383\n"; + + String nodes2 = "3d005a179da7d8dc1adae6409d47b39c369e992b 127.0.0.1:7380 slave - 0 1401258245007 2 disconnected 8000-11999\n" + + "c37ab8396be428403d4e55c0d317348be27ed973 127.0.0.1:7381 master - 111 1401258245007 222 connected 7000 12000 12002-16383\n"; + + Partitions partitions1 = ClusterPartitionParser.parse(nodes1); + Partitions partitions2 = ClusterPartitionParser.parse(nodes2); + assertThat(isChanged(partitions1, partitions2)).isTrue(); + } + + void runTest(Map map, List expectation, List nodes) { + + for (RedisClusterNodeSnapshot node : nodes) { + node.setLatencyNs(map.get(node.getNodeId())); + } + List result = TopologyComparators.sortByLatency((Iterable) nodes); + + assertThat(result).containsExactly(expectation.toArray(new RedisClusterNodeSnapshot[expectation.size()])); + } +} diff --git a/src/test/java/io/lettuce/core/cluster/topology/TopologyRefreshIntegrationTests.java b/src/test/java/io/lettuce/core/cluster/topology/TopologyRefreshIntegrationTests.java new file mode 100644 index 0000000000..bd15113f07 --- /dev/null +++ b/src/test/java/io/lettuce/core/cluster/topology/TopologyRefreshIntegrationTests.java @@ -0,0 +1,314 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.cluster.topology; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.time.Duration; +import java.util.Collections; +import java.util.List; +import java.util.concurrent.Future; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicBoolean; +import java.util.concurrent.atomic.AtomicReference; +import java.util.function.BiFunction; + +import javax.inject.Inject; + +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; +import org.springframework.test.util.ReflectionTestUtils; + +import io.lettuce.category.SlowTests; +import io.lettuce.core.RedisClient; +import io.lettuce.core.RedisURI; +import io.lettuce.core.TestSupport; +import io.lettuce.core.api.async.BaseRedisAsyncCommands; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.cluster.ClusterClientOptions; +import io.lettuce.core.cluster.ClusterTestSettings; +import io.lettuce.core.cluster.ClusterTopologyRefreshOptions; +import io.lettuce.core.cluster.RedisClusterClient; +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; +import io.lettuce.core.cluster.api.async.RedisAdvancedClusterAsyncCommands; +import io.lettuce.core.cluster.api.async.RedisClusterAsyncCommands; +import io.lettuce.core.cluster.api.sync.RedisAdvancedClusterCommands; +import io.lettuce.core.cluster.models.partitions.Partitions; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; +import io.lettuce.test.Delay; +import io.lettuce.test.LettuceExtension; +import io.lettuce.test.Wait; +import io.lettuce.test.resource.FastShutdown; +import io.lettuce.test.settings.TestSettings; +import io.netty.util.concurrent.ScheduledFuture; + +/** + * Test for topology refreshing. 
+ * + * @author Mark Paluch + */ +@SuppressWarnings({ "unchecked" }) +@SlowTests +@ExtendWith(LettuceExtension.class) +class TopologyRefreshIntegrationTests extends TestSupport { + + private static final String host = TestSettings.hostAddr(); + private final RedisClient client; + + private RedisClusterClient clusterClient; + private RedisCommands redis1; + private RedisCommands redis2; + + @Inject + TopologyRefreshIntegrationTests(RedisClient client) { + this.client = client; + } + + @BeforeEach + void openConnection() { + clusterClient = RedisClusterClient.create(client.getResources(), RedisURI.Builder + .redis(host, ClusterTestSettings.port1).build()); + redis1 = client.connect(RedisURI.Builder.redis(ClusterTestSettings.host, ClusterTestSettings.port1).build()).sync(); + redis2 = client.connect(RedisURI.Builder.redis(ClusterTestSettings.host, ClusterTestSettings.port2).build()).sync(); + } + + @AfterEach + void closeConnection() { + redis1.getStatefulConnection().close(); + redis2.getStatefulConnection().close(); + FastShutdown.shutdown(clusterClient); + } + + @Test + void changeTopologyWhileOperations() { + + ClusterTopologyRefreshOptions topologyRefreshOptions = ClusterTopologyRefreshOptions.builder() + .enablePeriodicRefresh(true)// + .refreshPeriod(1, TimeUnit.SECONDS)// + .build(); + clusterClient.setOptions(ClusterClientOptions.builder().topologyRefreshOptions(topologyRefreshOptions).build()); + RedisAdvancedClusterAsyncCommands clusterConnection = clusterClient.connect().async(); + + clusterClient.getPartitions().clear(); + + Wait.untilTrue(() -> { + return !clusterClient.getPartitions().isEmpty(); + }).waitOrTimeout(); + + clusterConnection.getStatefulConnection().close(); + } + + @Test + void dynamicSourcesProvidesClientCountForAllNodes() { + + ClusterTopologyRefreshOptions topologyRefreshOptions = ClusterTopologyRefreshOptions.create(); + clusterClient.setOptions(ClusterClientOptions.builder().topologyRefreshOptions(topologyRefreshOptions).build()); + RedisAdvancedClusterAsyncCommands clusterConnection = clusterClient.connect().async(); + + for (RedisClusterNode redisClusterNode : clusterClient.getPartitions()) { + assertThat(redisClusterNode).isInstanceOf(RedisClusterNodeSnapshot.class); + + RedisClusterNodeSnapshot snapshot = (RedisClusterNodeSnapshot) redisClusterNode; + assertThat(snapshot.getConnectedClients()).isNotNull().isGreaterThanOrEqualTo(0); + } + + clusterConnection.getStatefulConnection().close(); + } + + @Test + void staticSourcesProvidesClientCountForSeedNodes() { + + ClusterTopologyRefreshOptions topologyRefreshOptions = ClusterTopologyRefreshOptions.builder() + .dynamicRefreshSources(false).build(); + clusterClient.setOptions(ClusterClientOptions.builder().topologyRefreshOptions(topologyRefreshOptions).build()); + RedisAdvancedClusterAsyncCommands clusterConnection = clusterClient.connect().async(); + + Partitions partitions = clusterClient.getPartitions(); + RedisClusterNodeSnapshot node1 = (RedisClusterNodeSnapshot) partitions.getPartitionBySlot(0); + assertThat(node1.getConnectedClients()).isGreaterThanOrEqualTo(1); + + RedisClusterNodeSnapshot node2 = (RedisClusterNodeSnapshot) partitions.getPartitionBySlot(15000); + assertThat(node2.getConnectedClients()).isNull(); + + clusterConnection.getStatefulConnection().close(); + } + + @Test + void adaptiveTopologyUpdateOnDisconnectNodeIdConnection() { + + runReconnectTest((clusterConnection, node) -> { + RedisClusterAsyncCommands connection = clusterConnection.getConnection(node.getUri().getHost(), + 
node.getUri().getPort()); + + return connection; + }); + } + + @Test + void adaptiveTopologyUpdateOnDisconnectHostAndPortConnection() { + + runReconnectTest((clusterConnection, node) -> { + RedisClusterAsyncCommands connection = clusterConnection.getConnection(node.getUri().getHost(), + node.getUri().getPort()); + + return connection; + }); + } + + @Test + void adaptiveTopologyUpdateOnDisconnectDefaultConnection() { + + runReconnectTest((clusterConnection, node) -> { + return clusterConnection; + }); + } + + @Test + void adaptiveTopologyUpdateIsRateLimited() { + + ClusterTopologyRefreshOptions topologyRefreshOptions = ClusterTopologyRefreshOptions.builder()// + .adaptiveRefreshTriggersTimeout(1, TimeUnit.HOURS)// + .refreshTriggersReconnectAttempts(0)// + .enableAllAdaptiveRefreshTriggers()// + .build(); + clusterClient.setOptions(ClusterClientOptions.builder().topologyRefreshOptions(topologyRefreshOptions).build()); + RedisAdvancedClusterAsyncCommands clusterConnection = clusterClient.connect().async(); + + clusterClient.getPartitions().clear(); + clusterConnection.quit(); + + Wait.untilTrue(() -> { + return !clusterClient.getPartitions().isEmpty(); + }).waitOrTimeout(); + + clusterClient.getPartitions().clear(); + clusterConnection.quit(); + + Delay.delay(Duration.ofMillis(200)); + + assertThat(clusterClient.getPartitions()).isEmpty(); + + clusterConnection.getStatefulConnection().close(); + } + + @Test + void adaptiveTopologyUpdatetUsesTimeout() { + + ClusterTopologyRefreshOptions topologyRefreshOptions = ClusterTopologyRefreshOptions.builder()// + .adaptiveRefreshTriggersTimeout(500, TimeUnit.MILLISECONDS)// + .refreshTriggersReconnectAttempts(0)// + .enableAllAdaptiveRefreshTriggers()// + .build(); + clusterClient.setOptions(ClusterClientOptions.builder().topologyRefreshOptions(topologyRefreshOptions).build()); + RedisAdvancedClusterAsyncCommands clusterConnection = clusterClient.connect().async(); + + clusterConnection.quit(); + Delay.delay(Duration.ofMillis(700)); + + Wait.untilTrue(() -> { + return !clusterClient.getPartitions().isEmpty(); + }).waitOrTimeout(); + + clusterClient.getPartitions().clear(); + clusterConnection.quit(); + + Wait.untilTrue(() -> { + return !clusterClient.getPartitions().isEmpty(); + }).waitOrTimeout(); + + clusterConnection.getStatefulConnection().close(); + } + + @Test + void adaptiveTriggerDoesNotFireOnSingleReconnect() { + + ClusterTopologyRefreshOptions topologyRefreshOptions = ClusterTopologyRefreshOptions.builder()// + .enableAllAdaptiveRefreshTriggers()// + .build(); + clusterClient.setOptions(ClusterClientOptions.builder().topologyRefreshOptions(topologyRefreshOptions).build()); + RedisAdvancedClusterAsyncCommands clusterConnection = clusterClient.connect().async(); + + clusterClient.getPartitions().clear(); + + clusterConnection.quit(); + Delay.delay(Duration.ofMillis(500)); + + assertThat(clusterClient.getPartitions()).isEmpty(); + clusterConnection.getStatefulConnection().close(); + } + + @Test + void adaptiveTriggerOnMoveRedirection() { + + ClusterTopologyRefreshOptions topologyRefreshOptions = ClusterTopologyRefreshOptions.builder()// + .enableAdaptiveRefreshTrigger(ClusterTopologyRefreshOptions.RefreshTrigger.MOVED_REDIRECT)// + .build(); + clusterClient.setOptions(ClusterClientOptions.builder().topologyRefreshOptions(topologyRefreshOptions).build()); + + StatefulRedisClusterConnection connection = clusterClient.connect(); + RedisAdvancedClusterAsyncCommands clusterConnection = connection.async(); + + Partitions partitions = 
connection.getPartitions(); + RedisClusterNode node1 = partitions.getPartitionBySlot(0); + RedisClusterNode node2 = partitions.getPartitionBySlot(12000); + + List<Integer> slots = node2.getSlots(); + slots.addAll(node1.getSlots()); + node2.setSlots(slots); + node1.setSlots(Collections.emptyList()); + partitions.updateCache(); + + assertThat(clusterClient.getPartitions().getPartitionByNodeId(node1.getNodeId()).getSlots()).hasSize(0); + assertThat(clusterClient.getPartitions().getPartitionByNodeId(node2.getNodeId()).getSlots()).hasSize(16384); + + clusterConnection.set("b", value); // slot 3300 + + Wait.untilEquals(12000, () -> clusterClient.getPartitions().getPartitionByNodeId(node1.getNodeId()).getSlots().size()) + .waitOrTimeout(); + + assertThat(clusterClient.getPartitions().getPartitionByNodeId(node1.getNodeId()).getSlots()).hasSize(12000); + assertThat(clusterClient.getPartitions().getPartitionByNodeId(node2.getNodeId()).getSlots()).hasSize(4384); + clusterConnection.getStatefulConnection().close(); + } + + private void runReconnectTest( + BiFunction<RedisAdvancedClusterAsyncCommands<String, String>, RedisClusterNode, BaseRedisAsyncCommands<String, String>> function) { + + ClusterTopologyRefreshOptions topologyRefreshOptions = ClusterTopologyRefreshOptions.builder()// + .refreshTriggersReconnectAttempts(0)// + .enableAllAdaptiveRefreshTriggers()// + .build(); + clusterClient.setOptions(ClusterClientOptions.builder().topologyRefreshOptions(topologyRefreshOptions).build()); + RedisAdvancedClusterAsyncCommands<String, String> clusterConnection = clusterClient.connect().async(); + + RedisClusterNode node = clusterClient.getPartitions().getPartition(0); + BaseRedisAsyncCommands<String, String> closeable = function.apply(clusterConnection, node); + clusterClient.getPartitions().clear(); + + closeable.quit(); + + Wait.untilTrue(() -> { + return !clusterClient.getPartitions().isEmpty(); + }).waitOrTimeout(); + + if (closeable instanceof RedisAdvancedClusterCommands) { + ((RedisAdvancedClusterCommands) closeable).getStatefulConnection().close(); + } + clusterConnection.getStatefulConnection().close(); + } +} diff --git a/src/test/java/io/lettuce/core/codec/CipherCodecUnitTests.java b/src/test/java/io/lettuce/core/codec/CipherCodecUnitTests.java new file mode 100644 index 0000000000..af2ba5c93f --- /dev/null +++ b/src/test/java/io/lettuce/core/codec/CipherCodecUnitTests.java @@ -0,0 +1,157 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ */ +package io.lettuce.core.codec; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; + +import java.nio.ByteBuffer; +import java.nio.charset.StandardCharsets; +import java.security.GeneralSecurityException; +import java.util.Arrays; +import java.util.List; +import java.util.UUID; + +import javax.crypto.Cipher; +import javax.crypto.spec.IvParameterSpec; +import javax.crypto.spec.SecretKeySpec; + +import org.junit.jupiter.api.Test; +import org.junit.jupiter.params.ParameterizedTest; +import org.junit.jupiter.params.provider.MethodSource; + +import io.netty.buffer.ByteBuf; +import io.netty.buffer.ByteBufAllocator; +import io.netty.buffer.Unpooled; + +/** + * Unit tests for {@link CipherCodec}. + * + * @author Mark Paluch + */ +class CipherCodecUnitTests { + + private final SecretKeySpec key = new SecretKeySpec("1234567890123456".getBytes(), "AES"); + private final IvParameterSpec iv = new IvParameterSpec("1234567890123456".getBytes()); + private final String transform = "AES/CBC/PKCS5Padding"; + + CipherCodec.CipherSupplier encrypt = new CipherCodec.CipherSupplier() { + @Override + public Cipher get(CipherCodec.KeyDescriptor keyDescriptor) throws GeneralSecurityException { + + Cipher cipher = Cipher.getInstance(transform); + cipher.init(Cipher.ENCRYPT_MODE, key, iv); + return cipher; + } + + @Override + public CipherCodec.KeyDescriptor encryptionKey() { + return CipherCodec.KeyDescriptor.create("foobar", 142); + } + }; + + CipherCodec.CipherSupplier decrypt = (CipherCodec.KeyDescriptor keyDescriptor) -> { + + Cipher cipher = Cipher.getInstance(transform); + cipher.init(Cipher.DECRYPT_MODE, key, iv); + return cipher; + }; + + @ParameterizedTest + @MethodSource("cryptoTestValues") + void shouldEncryptValue(CryptoTestArgs testArgs) { + + RedisCodec crypto = CipherCodec.forValues(StringCodec.UTF8, encrypt, decrypt); + + ByteBuffer encrypted = crypto.encodeValue(testArgs.content); + assertThat(encrypted).isNotEqualTo(ByteBuffer.wrap(testArgs.content.getBytes())); + + assertThat(new String(encrypted.array())).startsWith("$foobar+142$"); + + String decrypted = crypto.decodeValue(encrypted); + assertThat(decrypted).isEqualTo(testArgs.content); + assertThat(StringCodec.UTF8.encodeValue(testArgs.content)).isEqualTo(ByteBuffer.wrap(testArgs.content.getBytes())); + } + + @ParameterizedTest + @MethodSource("cryptoTestValues") + void shouldEncryptValueToByteBuf(CryptoTestArgs testArgs) { + + RedisCodec crypto = CipherCodec.forValues(StringCodec.UTF8, encrypt, decrypt); + + ToByteBufEncoder direct = (ToByteBufEncoder) crypto; + + ByteBufAllocator allocator = ByteBufAllocator.DEFAULT; + ByteBuf target = allocator.buffer(); + + direct.encodeValue(testArgs.content, target); + + assertThat(target).isNotEqualTo(Unpooled.wrappedBuffer(testArgs.content.getBytes())); + assertThat(target.toString(0, 20, StandardCharsets.US_ASCII)).startsWith("$foobar+142$"); + + String result = crypto.decodeValue(target.nioBuffer()); + assertThat(result).isEqualTo(testArgs.content); + } + + static List cryptoTestValues() { + + StringBuilder hugeString = new StringBuilder(); + for (int i = 0; i < 1000; i++) { + hugeString.append(UUID.randomUUID().toString()); + } + + return Arrays.asList(new CryptoTestArgs("foobar"), new CryptoTestArgs(hugeString.toString())); + } + + @Test + void shouldDecryptValue() { + + RedisCodec crypto = CipherCodec.forValues(StringCodec.UTF8, encrypt, decrypt); + + ByteBuffer encrypted = ByteBuffer.wrap(new byte[] { 36, 43, 48, 36, 
-99, -39, 126, -106, -7, -88, 118, -74, 42, 98, + 117, 81, 37, -124, 26, -88 });// crypto.encodeValue("foobar"); + + String result = crypto.decodeValue(encrypted); + assertThat(result).isEqualTo("foobar"); + } + + @Test + void shouldRejectPlusAndDollarKeyNames() { + + assertThatThrownBy(() -> CipherCodec.KeyDescriptor.create("my+key")).isInstanceOf(IllegalArgumentException.class); + assertThatThrownBy(() -> CipherCodec.KeyDescriptor.create("my$key")).isInstanceOf(IllegalArgumentException.class); + } + + static class CryptoTestArgs { + + private final int size; + private final String content; + + public CryptoTestArgs(String content) { + this.size = content.length(); + this.content = content; + } + + @Override + public String toString() { + final StringBuffer sb = new StringBuffer(); + sb.append(getClass().getSimpleName()); + sb.append(" [size=").append(size); + sb.append(']'); + return sb.toString(); + } + } +} diff --git a/src/test/java/io/lettuce/core/codec/CompressionCodecUnitTests.java b/src/test/java/io/lettuce/core/codec/CompressionCodecUnitTests.java new file mode 100644 index 0000000000..49a154c793 --- /dev/null +++ b/src/test/java/io/lettuce/core/codec/CompressionCodecUnitTests.java @@ -0,0 +1,89 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.codec; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; + +import java.nio.ByteBuffer; +import java.nio.charset.StandardCharsets; + +import org.junit.jupiter.api.Test; + +/** + * @author Mark Paluch + */ +class CompressionCodecUnitTests { + + private String key = "key"; + private byte[] keyGzipBytes = new byte[] { 31, -117, 8, 0, 0, 0, 0, 0, 0, 0, -53, 78, -83, 4, 0, -87, -85, -112, -118, 3, 0, + 0, 0 }; + private byte[] keyDeflateBytes = new byte[] { 120, -100, -53, 78, -83, 4, 0, 2, -121, 1, 74 }; + private String value = "value"; + + @Test + void keyPassthroughTest() { + RedisCodec sut = CompressionCodec.valueCompressor(StringCodec.UTF8, + CompressionCodec.CompressionType.GZIP); + ByteBuffer byteBuffer = sut.encodeKey(value); + assertThat(toString(byteBuffer.duplicate())).isEqualTo(value); + + String s = sut.decodeKey(byteBuffer); + assertThat(s).isEqualTo(value); + } + + @Test + void gzipValueTest() { + RedisCodec sut = CompressionCodec.valueCompressor(StringCodec.UTF8, + CompressionCodec.CompressionType.GZIP); + ByteBuffer byteBuffer = sut.encodeValue(key); + assertThat(toBytes(byteBuffer.duplicate())).isEqualTo(keyGzipBytes); + + String s = sut.decodeValue(ByteBuffer.wrap(keyGzipBytes)); + assertThat(s).isEqualTo(key); + } + + @Test + void deflateValueTest() { + RedisCodec sut = CompressionCodec.valueCompressor(StringCodec.UTF8, + CompressionCodec.CompressionType.DEFLATE); + ByteBuffer byteBuffer = sut.encodeValue(key); + assertThat(toBytes(byteBuffer.duplicate())).isEqualTo(keyDeflateBytes); + + String s = sut.decodeValue(ByteBuffer.wrap(keyDeflateBytes)); + assertThat(s).isEqualTo(key); + } + + @Test + void wrongCompressionTypeOnDecode() { + RedisCodec sut = CompressionCodec.valueCompressor(StringCodec.UTF8, + CompressionCodec.CompressionType.DEFLATE); + + assertThatThrownBy(() -> sut.decodeValue(ByteBuffer.wrap(keyGzipBytes))) + .isInstanceOf(IllegalStateException.class); + } + + private String toString(ByteBuffer buffer) { + byte[] bytes = toBytes(buffer); + return new String(bytes, StandardCharsets.UTF_8); + } + + private byte[] toBytes(ByteBuffer buffer) { + byte[] bytes = new byte[buffer.remaining()]; + buffer.get(bytes); + return bytes; + } +} diff --git a/src/test/java/io/lettuce/core/codec/StringCodecUnitTests.java b/src/test/java/io/lettuce/core/codec/StringCodecUnitTests.java new file mode 100644 index 0000000000..6099e64a9d --- /dev/null +++ b/src/test/java/io/lettuce/core/codec/StringCodecUnitTests.java @@ -0,0 +1,119 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.codec; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.nio.ByteBuffer; +import java.nio.charset.StandardCharsets; + +import org.junit.jupiter.api.Test; + +import io.netty.buffer.ByteBuf; +import io.netty.buffer.Unpooled; + +/** + * @author Mark Paluch + */ +class StringCodecUnitTests { + + private String teststring = "hello üäü~∑†®†ª€∂‚¶¢ Wørld"; + private String teststringPlain = "hello uufadsfasdfadssdfadfs"; + + @Test + void encodeUtf8Buf() { + + StringCodec codec = new StringCodec(StandardCharsets.UTF_8); + + ByteBuf buffer = Unpooled.buffer(1234); + codec.encode(teststring, buffer); + + assertThat(buffer.toString(StandardCharsets.UTF_8)).isEqualTo(teststring); + } + + @Test + void encodeAsciiBuf() { + + StringCodec codec = new StringCodec(StandardCharsets.US_ASCII); + + ByteBuf buffer = Unpooled.buffer(1234); + codec.encode(teststringPlain, buffer); + + assertThat(buffer.toString(StandardCharsets.US_ASCII)).isEqualTo(teststringPlain); + } + + @Test + void encodeIso88591Buf() { + + StringCodec codec = new StringCodec(StandardCharsets.ISO_8859_1); + + ByteBuf buffer = Unpooled.buffer(1234); + codec.encodeValue(teststringPlain, buffer); + + assertThat(buffer.toString(StandardCharsets.ISO_8859_1)).isEqualTo(teststringPlain); + } + + @Test + void encodeAndDecodeUtf8Buf() { + + StringCodec codec = new StringCodec(StandardCharsets.UTF_8); + + ByteBuf buffer = Unpooled.buffer(1234); + codec.encodeKey(teststring, buffer); + + assertThat(codec.decodeKey(buffer.nioBuffer())).isEqualTo(teststring); + } + + @Test + void encodeAndDecodeUtf8() { + + StringCodec codec = new StringCodec(StandardCharsets.UTF_8); + ByteBuffer byteBuffer = codec.encodeKey(teststring); + + assertThat(codec.decodeKey(byteBuffer)).isEqualTo(teststring); + } + + @Test + void encodeAndDecodeAsciiBuf() { + + StringCodec codec = new StringCodec(StandardCharsets.US_ASCII); + + ByteBuf buffer = Unpooled.buffer(1234); + codec.encode(teststringPlain, buffer); + + assertThat(codec.decodeKey(buffer.nioBuffer())).isEqualTo(teststringPlain); + } + + @Test + void encodeAndDecodeIso88591Buf() { + + StringCodec codec = new StringCodec(StandardCharsets.ISO_8859_1); + + ByteBuf buffer = Unpooled.buffer(1234); + codec.encode(teststringPlain, buffer); + + assertThat(codec.decodeKey(buffer.nioBuffer())).isEqualTo(teststringPlain); + } + + @Test + void estimateSize() { + + assertThat(new StringCodec(StandardCharsets.UTF_8).estimateSize(teststring)) + .isEqualTo((int) (teststring.length() * 1.1)); + assertThat(new StringCodec(StandardCharsets.US_ASCII).estimateSize(teststring)).isEqualTo(teststring.length()); + assertThat(new StringCodec(StandardCharsets.ISO_8859_1).estimateSize(teststring)).isEqualTo(teststring.length()); + } +} diff --git a/src/test/java/io/lettuce/core/commands/BitCommandIntegrationTests.java b/src/test/java/io/lettuce/core/commands/BitCommandIntegrationTests.java new file mode 100644 index 0000000000..c1c7398ad7 --- /dev/null +++ b/src/test/java/io/lettuce/core/commands/BitCommandIntegrationTests.java @@ -0,0 +1,282 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.commands; + +import static io.lettuce.core.BitFieldArgs.offset; +import static io.lettuce.core.BitFieldArgs.signed; +import static io.lettuce.core.BitFieldArgs.typeWidthBasedOffset; +import static io.lettuce.core.BitFieldArgs.unsigned; +import static io.lettuce.core.BitFieldArgs.OverflowType.WRAP; +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; + +import java.util.List; + +import javax.inject.Inject; + +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.TestInstance; +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.core.BitFieldArgs; +import io.lettuce.core.RedisClient; +import io.lettuce.core.TestSupport; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.test.LettuceExtension; +import io.lettuce.test.condition.EnabledOnCommand; + +/** + * @author Will Glozer + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +@TestInstance(TestInstance.Lifecycle.PER_CLASS) +public class BitCommandIntegrationTests extends TestSupport { + + private final RedisClient client; + private final RedisCommands redis; + + protected RedisCommands bitstring; + + @Inject + protected BitCommandIntegrationTests(RedisClient client, RedisCommands redis) { + this.client = client; + this.redis = redis; + } + + @BeforeEach + void setUp() { + this.redis.flushall(); + this.bitstring = client.connect(new BitStringCodec()).sync(); + } + + @AfterEach + void tearDown() { + this.bitstring.getStatefulConnection().close(); + } + + @Test + void bitcount() { + assertThat((long) redis.bitcount(key)).isEqualTo(0); + + redis.setbit(key, 0, 1); + redis.setbit(key, 1, 1); + redis.setbit(key, 2, 1); + + assertThat((long) redis.bitcount(key)).isEqualTo(3); + assertThat(redis.bitcount(key, 3, -1)).isEqualTo(0); + } + + @Test + void bitfieldType() { + assertThat(signed(64).getBits()).isEqualTo(64); + assertThat(signed(64).isSigned()).isTrue(); + assertThat(unsigned(63).getBits()).isEqualTo(63); + assertThat(unsigned(63).isSigned()).isFalse(); + } + + @Test + void bitfieldTypeSigned65() { + assertThatThrownBy(() -> signed(65)).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void bitfieldTypeUnsigned64() { + assertThatThrownBy(() -> unsigned(64)).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void bitfieldBuilderEmptyPreviousType() { + assertThatThrownBy(() -> new BitFieldArgs().overflow(WRAP).get()).isInstanceOf(IllegalStateException.class); + } + + @Test + void bitfieldArgsTest() { + + assertThat(signed(5).toString()).isEqualTo("i5"); + assertThat(unsigned(5).toString()).isEqualTo("u5"); + + assertThat(offset(5).toString()).isEqualTo("5"); + assertThat(typeWidthBasedOffset(5).toString()).isEqualTo("#5"); + } + + @Test + @EnabledOnCommand("BITFIELD") + void bitfield() { + + BitFieldArgs bitFieldArgs = BitFieldArgs.Builder.set(signed(8), 0, 1).set(5, 1).incrBy(2, 3).get().get(2); + + List values = redis.bitfield(key, bitFieldArgs); + + 
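+ // BITFIELD returns one element per subcommand: the two SETs yield the previous values (0, then 32), INCRBY yields the incremented value 3, and the two GETs read back 0 and 3.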
assertThat(values).containsExactly(0L, 32L, 3L, 0L, 3L); + assertThat(bitstring.get(key)).isEqualTo("0000000000010011"); + } + + @Test + @EnabledOnCommand("BITFIELD") + void bitfieldGetWithOffset() { + + BitFieldArgs bitFieldArgs = BitFieldArgs.Builder.set(signed(8), 0, 1).get(signed(2), typeWidthBasedOffset(1)); + + List values = redis.bitfield(key, bitFieldArgs); + + assertThat(values).containsExactly(0L, 0L); + assertThat(bitstring.get(key)).isEqualTo("10000000"); + } + + @Test + @EnabledOnCommand("BITFIELD") + void bitfieldSet() { + + BitFieldArgs bitFieldArgs = BitFieldArgs.Builder.set(signed(8), 0, 5).set(5); + + List values = redis.bitfield(key, bitFieldArgs); + + assertThat(values).containsExactly(0L, 5L); + assertThat(bitstring.get(key)).isEqualTo("10100000"); + } + + @Test + @EnabledOnCommand("BITFIELD") + void bitfieldWithOffsetSet() { + + redis.bitfield(key, BitFieldArgs.Builder.set(signed(8), typeWidthBasedOffset(2), 5)); + assertThat(bitstring.get(key)).isEqualTo("000000000000000010100000"); + + redis.del(key); + redis.bitfield(key, BitFieldArgs.Builder.set(signed(8), offset(2), 5)); + assertThat(bitstring.get(key)).isEqualTo("1000000000000010"); + } + + @Test + @EnabledOnCommand("BITFIELD") + void bitfieldIncrBy() { + + BitFieldArgs bitFieldArgs = BitFieldArgs.Builder.set(signed(8), 0, 5).incrBy(1); + + List values = redis.bitfield(key, bitFieldArgs); + + assertThat(values).containsExactly(0L, 6L); + assertThat(bitstring.get(key)).isEqualTo("01100000"); + } + + @Test + @EnabledOnCommand("BITFIELD") + void bitfieldWithOffsetIncrBy() { + + redis.bitfield(key, BitFieldArgs.Builder.incrBy(signed(8), typeWidthBasedOffset(2), 1)); + assertThat(bitstring.get(key)).isEqualTo("000000000000000010000000"); + + redis.del(key); + redis.bitfield(key, BitFieldArgs.Builder.incrBy(signed(8), offset(2), 1)); + assertThat(bitstring.get(key)).isEqualTo("0000000000000010"); + } + + @Test + @EnabledOnCommand("BITFIELD") + void bitfieldOverflow() { + + BitFieldArgs bitFieldArgs = BitFieldArgs.Builder.overflow(WRAP).set(signed(8), 9, Integer.MAX_VALUE).get(signed(8)); + + List values = redis.bitfield(key, bitFieldArgs); + assertThat(values).containsExactly(0L, 0L); + assertThat(bitstring.get(key)).isEqualTo("000000001111111000000001"); + } + + @Test + void bitpos() { + assertThat((long) redis.bitcount(key)).isEqualTo(0); + redis.setbit(key, 0, 0); + redis.setbit(key, 1, 1); + + assertThat(bitstring.get(key)).isEqualTo("00000010"); + assertThat((long) redis.bitpos(key, true)).isEqualTo(1); + } + + @Test + void bitposOffset() { + assertThat((long) redis.bitcount(key)).isEqualTo(0); + redis.setbit(key, 0, 1); + redis.setbit(key, 1, 1); + redis.setbit(key, 2, 0); + redis.setbit(key, 3, 0); + redis.setbit(key, 4, 0); + redis.setbit(key, 5, 1); + redis.setbit(key, 16, 1); + + assertThat((long) bitstring.getbit(key, 1)).isEqualTo(1); + assertThat((long) bitstring.getbit(key, 4)).isEqualTo(0); + assertThat((long) bitstring.getbit(key, 5)).isEqualTo(1); + assertThat(bitstring.get(key)).isEqualTo("001000110000000000000001"); + assertThat((long) redis.bitpos(key, true, 1)).isEqualTo(16); + assertThat((long) redis.bitpos(key, false, 0, 0)).isEqualTo(2); + } + + @Test + void bitopAnd() { + redis.setbit("foo", 0, 1); + redis.setbit("bar", 1, 1); + redis.setbit("baz", 2, 1); + assertThat(redis.bitopAnd(key, "foo", "bar", "baz")).isEqualTo(1); + assertThat((long) redis.bitcount(key)).isEqualTo(0); + assertThat(bitstring.get(key)).isEqualTo("00000000"); + } + + @Test + void bitopNot() { + redis.setbit("foo", 0, 1); 
+ redis.setbit("foo", 2, 1); + + assertThat(redis.bitopNot(key, "foo")).isEqualTo(1); + assertThat((long) redis.bitcount(key)).isEqualTo(6); + assertThat(bitstring.get(key)).isEqualTo("11111010"); + } + + @Test + void bitopOr() { + redis.setbit("foo", 0, 1); + redis.setbit("bar", 1, 1); + redis.setbit("baz", 2, 1); + assertThat(redis.bitopOr(key, "foo", "bar", "baz")).isEqualTo(1); + assertThat(bitstring.get(key)).isEqualTo("00000111"); + } + + @Test + void bitopXor() { + redis.setbit("foo", 0, 1); + redis.setbit("bar", 0, 1); + redis.setbit("baz", 2, 1); + assertThat(redis.bitopXor(key, "foo", "bar", "baz")).isEqualTo(1); + assertThat(bitstring.get(key)).isEqualTo("00000100"); + } + + @Test + void getbit() { + assertThat(redis.getbit(key, 0)).isEqualTo(0); + redis.setbit(key, 0, 1); + assertThat(redis.getbit(key, 0)).isEqualTo(1); + } + + @Test + void setbit() { + + assertThat(redis.setbit(key, 0, 1)).isEqualTo(0); + assertThat(redis.setbit(key, 0, 0)).isEqualTo(1); + } + +} diff --git a/src/test/java/io/lettuce/core/commands/BitStringCodec.java b/src/test/java/io/lettuce/core/commands/BitStringCodec.java new file mode 100644 index 0000000000..2bcb834f34 --- /dev/null +++ b/src/test/java/io/lettuce/core/commands/BitStringCodec.java @@ -0,0 +1,38 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.commands; + +import java.nio.ByteBuffer; + +import io.lettuce.core.codec.StringCodec; + +/** + * @author Mark Paluch + */ +public class BitStringCodec extends StringCodec { + + @Override + public String decodeValue(ByteBuffer bytes) { + StringBuilder bits = new StringBuilder(bytes.remaining() * 8); + while (bytes.remaining() > 0) { + byte b = bytes.get(); + for (int i = 0; i < 8; i++) { + bits.append(Integer.valueOf(b >>> i & 1)); + } + } + return bits.toString(); + } +} diff --git a/src/test/java/io/lettuce/core/commands/CustomCommandIntegrationTests.java b/src/test/java/io/lettuce/core/commands/CustomCommandIntegrationTests.java new file mode 100644 index 0000000000..3b18efc5f2 --- /dev/null +++ b/src/test/java/io/lettuce/core/commands/CustomCommandIntegrationTests.java @@ -0,0 +1,183 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.commands; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; +import static org.junit.jupiter.api.Assumptions.assumeTrue; + +import java.util.Arrays; + +import javax.inject.Inject; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.TestInstance; +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.core.RedisCommandExecutionException; +import io.lettuce.core.TestSupport; +import io.lettuce.core.TransactionResult; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.output.StatusOutput; +import io.lettuce.core.protocol.*; +import io.lettuce.test.TestFutures; +import io.lettuce.test.LettuceExtension; + +/** + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +@TestInstance(TestInstance.Lifecycle.PER_CLASS) +public class CustomCommandIntegrationTests extends TestSupport { + + private final RedisCommands redis; + + @Inject + CustomCommandIntegrationTests(StatefulRedisConnection connection) { + this.redis = connection.sync(); + } + + @BeforeEach + void setUp() { + this.redis.flushall(); + } + + @Test + void dispatchSet() { + + String response = redis.dispatch(MyCommands.SET, new StatusOutput<>(StringCodec.UTF8), new CommandArgs<>( + StringCodec.UTF8).addKey(key).addValue(value)); + + assertThat(response).isEqualTo("OK"); + } + + @Test + void dispatchWithoutArgs() { + + String response = redis.dispatch(MyCommands.INFO, new StatusOutput<>(StringCodec.UTF8)); + + assertThat(response).contains("connected_clients"); + } + + @Test + void dispatchShouldFailForWrongDataType() { + + redis.hset(key, key, value); + assertThatThrownBy( + () -> redis.dispatch(CommandType.GET, new StatusOutput<>(StringCodec.UTF8), + new CommandArgs<>(StringCodec.UTF8).addKey(key))).isInstanceOf(RedisCommandExecutionException.class); + } + + @Test + void dispatchTransactions() { + + redis.multi(); + String response = redis.dispatch(CommandType.SET, new StatusOutput<>(StringCodec.UTF8), new CommandArgs<>( + StringCodec.UTF8).addKey(key).addValue(value)); + + TransactionResult exec = redis.exec(); + + assertThat(response).isNull(); + assertThat(exec).hasSize(1).contains("OK"); + } + + @Test + void standaloneAsyncPing() { + + RedisCommand command = new Command<>(MyCommands.PING, new StatusOutput<>(StringCodec.UTF8), + null); + + AsyncCommand async = new AsyncCommand<>(command); + getStandaloneConnection().dispatch(async); + + assertThat(TestFutures.getOrTimeout(async.toCompletableFuture())).isEqualTo("PONG"); + } + + @Test + void standaloneAsyncBatchPing() { + + RedisCommand command1 = new Command<>(MyCommands.PING, new StatusOutput<>(StringCodec.UTF8), + null); + + RedisCommand command2 = new Command<>(MyCommands.PING, new StatusOutput<>(StringCodec.UTF8), + null); + + AsyncCommand async1 = new AsyncCommand<>(command1); + AsyncCommand async2 = new AsyncCommand<>(command2); + getStandaloneConnection().dispatch(Arrays.asList(async1, async2)); + + assertThat(TestFutures.getOrTimeout(async1.toCompletableFuture())).isEqualTo("PONG"); + assertThat(TestFutures.getOrTimeout(async2.toCompletableFuture())).isEqualTo("PONG"); + } + + @Test + void standaloneAsyncBatchTransaction() { + + RedisCommand multi = new Command<>(CommandType.MULTI, new StatusOutput<>(StringCodec.UTF8)); + + RedisCommand set = new Command<>(CommandType.SET, 
new StatusOutput<>(StringCodec.UTF8), + new CommandArgs<>(StringCodec.UTF8).addKey("key").add("value")); + + RedisCommand exec = new Command<>(CommandType.EXEC, null); + + AsyncCommand async1 = new AsyncCommand<>(multi); + AsyncCommand async2 = new AsyncCommand<>(set); + AsyncCommand async3 = new AsyncCommand<>(exec); + getStandaloneConnection().dispatch(Arrays.asList(async1, async2, async3)); + + assertThat(TestFutures.getOrTimeout(async1.toCompletableFuture())).isEqualTo("OK"); + assertThat(TestFutures.getOrTimeout(async2.toCompletableFuture())).isEqualTo("OK"); + + TransactionResult transactionResult = TestFutures.getOrTimeout(async3.toCompletableFuture()); + assertThat(transactionResult.wasDiscarded()).isFalse(); + assertThat(transactionResult. get(0)).isEqualTo("OK"); + } + + @Test + void standaloneFireAndForget() { + + RedisCommand command = new Command<>(MyCommands.PING, new StatusOutput<>(StringCodec.UTF8), + null); + getStandaloneConnection().dispatch(command); + assertThat(command.isCancelled()).isFalse(); + + } + + private StatefulRedisConnection getStandaloneConnection() { + + assumeTrue(redis.getStatefulConnection() instanceof StatefulRedisConnection); + return redis.getStatefulConnection(); + } + + public enum MyCommands implements ProtocolKeyword { + PING, SET, INFO; + + private final byte name[]; + + MyCommands() { + // cache the bytes for the command name. Reduces memory and cpu pressure when using commands. + name = name().getBytes(); + } + + @Override + public byte[] getBytes() { + return name; + } + } +} diff --git a/src/test/java/io/lettuce/core/commands/GeoCommandIntegrationTests.java b/src/test/java/io/lettuce/core/commands/GeoCommandIntegrationTests.java new file mode 100644 index 0000000000..57ab149f00 --- /dev/null +++ b/src/test/java/io/lettuce/core/commands/GeoCommandIntegrationTests.java @@ -0,0 +1,532 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.commands; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; +import static org.assertj.core.api.Assertions.offset; +import static org.junit.jupiter.api.Assumptions.assumeTrue; + +import java.util.List; +import java.util.Set; + +import javax.inject.Inject; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.TestInstance; +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.core.*; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.test.LettuceExtension; +import io.lettuce.test.condition.EnabledOnCommand; +import io.lettuce.test.condition.RedisConditions; + +/** + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +@EnabledOnCommand("GEOADD") +@TestInstance(TestInstance.Lifecycle.PER_CLASS) +public class GeoCommandIntegrationTests extends TestSupport { + + private final RedisCommands redis; + + @Inject + protected GeoCommandIntegrationTests(RedisCommands redis) { + this.redis = redis; + } + + @BeforeEach + void setUp() { + this.redis.flushall(); + } + + @Test + void geoadd() { + + Long result = redis.geoadd(key, -73.9454966, 40.747533, "lic market"); + assertThat(result).isEqualTo(1); + + Long readd = redis.geoadd(key, -73.9454966, 40.747533, "lic market"); + assertThat(readd).isEqualTo(0); + } + + @Test + public void geoaddInTransaction() { + + redis.multi(); + redis.geoadd(key, -73.9454966, 40.747533, "lic market"); + redis.geoadd(key, -73.9454966, 40.747533, "lic market"); + + assertThat(redis.exec()).containsSequence(1L, 0L); + } + + @Test + void geoaddMulti() { + + Long result = redis.geoadd(key, 8.6638775, 49.5282537, "Weinheim", 8.3796281, 48.9978127, "EFS9", 8.665351, 49.553302, + "Bahn"); + assertThat(result).isEqualTo(3); + } + + @Test + public void geoaddMultiInTransaction() { + + redis.multi(); + redis.geoadd(key, 8.6638775, 49.5282537, "Weinheim", 8.3796281, 48.9978127, "EFS9", 8.665351, 49.553302, "Bahn"); + + assertThat(redis.exec()).contains(3L); + } + + @Test + void geoaddMultiWrongArgument() { + assertThatThrownBy(() -> redis.geoadd(key, 49.528253)).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void georadius() { + + prepareGeo(); + + Set georadius = redis.georadius(key, 8.6582861, 49.5285695, 1, GeoArgs.Unit.km); + assertThat(georadius).hasSize(1).contains("Weinheim"); + + Set largerGeoradius = redis.georadius(key, 8.6582861, 49.5285695, 5, GeoArgs.Unit.km); + assertThat(largerGeoradius).hasSize(2).contains("Weinheim").contains("Bahn"); + } + + @Test + public void georadiusInTransaction() { + + prepareGeo(); + + redis.multi(); + redis.georadius(key, 8.6582861, 49.5285695, 1, GeoArgs.Unit.km); + redis.georadius(key, 8.6582861, 49.5285695, 5, GeoArgs.Unit.km); + + TransactionResult exec = redis.exec(); + Set georadius = exec.get(0); + Set largerGeoradius = exec.get(1); + + assertThat(georadius).hasSize(1).contains("Weinheim"); + assertThat(largerGeoradius).hasSize(2).contains("Weinheim").contains("Bahn"); + } + + @Test + void georadiusWithCoords() { + + prepareGeo(); + + List> georadius = redis.georadius(key, 8.6582861, 49.5285695, 100, GeoArgs.Unit.km, + GeoArgs.Builder.coordinates()); + + assertThat(georadius).hasSize(3); + assertThat(getX(georadius, 0)).isBetween(8.66, 8.67); + assertThat(getY(georadius, 0)).isBetween(49.52, 49.53); + + assertThat(getX(georadius, 2)).isBetween(8.37, 8.38); + assertThat(getY(georadius, 2)).isBetween(48.99, 
49.00); + } + + @Test + void geodist() { + + prepareGeo(); + + Double result = redis.geodist(key, "Weinheim", "Bahn", GeoArgs.Unit.km); + // 10 mins with the bike + assertThat(result).isGreaterThan(2.5).isLessThan(2.9); + } + + @Test + void geodistMissingElements() { + + assumeTrue(RedisConditions.of(redis).hasVersionGreaterOrEqualsTo("3.4")); + prepareGeo(); + + assertThat(redis.geodist("Unknown", "Unknown", "Bahn", GeoArgs.Unit.km)).isNull(); + assertThat(redis.geodist(key, "Unknown", "Bahn", GeoArgs.Unit.km)).isNull(); + assertThat(redis.geodist(key, "Weinheim", "Unknown", GeoArgs.Unit.km)).isNull(); + } + + @Test + public void geodistInTransaction() { + + prepareGeo(); + + redis.multi(); + redis.geodist(key, "Weinheim", "Bahn", GeoArgs.Unit.km); + Double result = (Double) redis.exec().get(0); + + // 10 mins with the bike + assertThat(result).isGreaterThan(2.5).isLessThan(2.9); + } + + @Test + public void geopos() { + + prepareGeo(); + + List geopos = redis.geopos(key, "Weinheim"); + + assertThat(geopos).hasSize(1); + assertThat(geopos.get(0).getX().doubleValue()).isEqualTo(8.6638, offset(0.001)); + + geopos = redis.geopos(key, "Weinheim", "foobar", "Bahn"); + + assertThat(geopos).hasSize(3); + assertThat(geopos.get(0).getX().doubleValue()).isEqualTo(8.6638, offset(0.001)); + assertThat(geopos.get(1)).isNull(); + assertThat(geopos.get(2)).isNotNull(); + } + + @Test + public void geoposInTransaction() { + + prepareGeo(); + + redis.multi(); + redis.geopos(key, "Weinheim", "foobar", "Bahn"); + redis.geopos(key, "Weinheim", "foobar", "Bahn"); + List geopos = redis.exec().get(1); + + assertThat(geopos).hasSize(3); + assertThat(geopos.get(0).getX().doubleValue()).isEqualTo(8.6638, offset(0.001)); + assertThat(geopos.get(1)).isNull(); + assertThat(geopos.get(2)).isNotNull(); + } + + @Test + void georadiusWithArgs() { + + prepareGeo(); + + GeoArgs geoArgs = new GeoArgs().withHash().withCoordinates().withDistance().withCount(1).desc(); + + List> result = redis.georadius(key, 8.665351, 49.553302, 5, GeoArgs.Unit.km, geoArgs); + assertThat(result).hasSize(1); + + GeoWithin weinheim = result.get(0); + + assertThat(weinheim.getMember()).isEqualTo("Weinheim"); + assertThat(weinheim.getGeohash()).isEqualTo(3666615932941099L); + + assertThat(weinheim.getDistance()).isEqualTo(2.7882, offset(0.5)); + assertThat(weinheim.getCoordinates().getX().doubleValue()).isEqualTo(8.663875, offset(0.5)); + assertThat(weinheim.getCoordinates().getY().doubleValue()).isEqualTo(49.52825, offset(0.5)); + + result = redis.georadius(key, 8.665351, 49.553302, 1, GeoArgs.Unit.km, new GeoArgs()); + assertThat(result).hasSize(1); + + GeoWithin bahn = result.get(0); + + assertThat(bahn.getMember()).isEqualTo("Bahn"); + assertThat(bahn.getGeohash()).isNull(); + + assertThat(bahn.getDistance()).isNull(); + assertThat(bahn.getCoordinates()).isNull(); + } + + @Test + public void georadiusWithArgsAndTransaction() { + + prepareGeo(); + + redis.multi(); + GeoArgs geoArgs = new GeoArgs().withHash().withCoordinates().withDistance().withCount(1).desc(); + redis.georadius(key, 8.665351, 49.553302, 5, GeoArgs.Unit.km, geoArgs); + redis.georadius(key, 8.665351, 49.553302, 5, GeoArgs.Unit.km, geoArgs); + TransactionResult exec = redis.exec(); + + assertThat(exec).hasSize(2); + + List> result = exec.get(1); + assertThat(result).hasSize(1); + + GeoWithin weinheim = result.get(0); + + assertThat(weinheim.getMember()).isEqualTo("Weinheim"); + assertThat(weinheim.getGeohash()).isEqualTo(3666615932941099L); + + 
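+ // GEO distances and coordinates are approximations, hence the offset(0.5) tolerance on the assertions below.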
assertThat(weinheim.getDistance()).isEqualTo(2.7882, offset(0.5)); + assertThat(weinheim.getCoordinates().getX().doubleValue()).isEqualTo(8.663875, offset(0.5)); + assertThat(weinheim.getCoordinates().getY().doubleValue()).isEqualTo(49.52825, offset(0.5)); + + result = redis.georadius(key, 8.665351, 49.553302, 1, GeoArgs.Unit.km, new GeoArgs()); + assertThat(result).hasSize(1); + + GeoWithin bahn = result.get(0); + + assertThat(bahn.getMember()).isEqualTo("Bahn"); + assertThat(bahn.getGeohash()).isNull(); + + assertThat(bahn.getDistance()).isNull(); + assertThat(bahn.getCoordinates()).isNull(); + } + + @Test + void geohash() { + + prepareGeo(); + + List> geohash = redis.geohash(key, "Weinheim", "Bahn", "dunno"); + + assertThat(geohash).filteredOn(Value::hasValue).extracting(Value::getValue).extracting(s -> s.substring(0, 10)) + .containsOnly("u0y1v0kffz", "u0y1vhvuvm"); + } + + @Test + void geohashUnknownKey() { + + assumeTrue(RedisConditions.of(redis).hasVersionGreaterOrEqualsTo("3.4")); + + prepareGeo(); + + List> geohash = redis.geohash("dunno", "member"); + + assertThat(geohash).hasSize(1); + assertThat(geohash.get(0)).isIn(null, Value.empty()); + } + + @Test + public void geohashInTransaction() { + + prepareGeo(); + + redis.multi(); + redis.geohash(key, "Weinheim", "Bahn", "dunno"); + redis.geohash(key, "Weinheim", "Bahn", "dunno"); + TransactionResult exec = redis.exec(); + + List> geohash = exec.get(1); + + assertThat(geohash).filteredOn(Value::hasValue).extracting(Value::getValue).extracting(s -> s.substring(0, 10)) + .containsOnly("u0y1v0kffz", "u0y1vhvuvm"); + } + + @Test + void georadiusStore() { + + prepareGeo(); + + String resultKey = "38o54"; // yields in same slot as "key" + Long result = redis.georadius(key, 8.665351, 49.553302, 5, GeoArgs.Unit.km, + new GeoRadiusStoreArgs<>().withStore(resultKey)); + assertThat(result).isEqualTo(2); + + List> results = redis.zrangeWithScores(resultKey, 0, -1); + assertThat(results).hasSize(2); + } + + @Test + void georadiusStoreWithCountAndSort() { + + prepareGeo(); + + String resultKey = "38o54"; // yields in same slot as "key" + Long result = redis.georadius(key, 8.665351, 49.553302, 5, GeoArgs.Unit.km, + new GeoRadiusStoreArgs<>().withCount(1).desc().withStore(resultKey)); + assertThat(result).isEqualTo(1); + + List> results = redis.zrangeWithScores(resultKey, 0, -1); + assertThat(results).hasSize(1); + assertThat(results.get(0).getScore()).isGreaterThan(99999); + } + + @Test + void georadiusStoreDist() { + + prepareGeo(); + + String resultKey = "38o54"; // yields in same slot as "key" + Long result = redis.georadius(key, 8.665351, 49.553302, 5, GeoArgs.Unit.km, + new GeoRadiusStoreArgs<>().withStoreDist("38o54")); + assertThat(result).isEqualTo(2); + + List> dist = redis.zrangeWithScores(resultKey, 0, -1); + assertThat(dist).hasSize(2); + } + + @Test + void georadiusStoreDistWithCountAndSort() { + + prepareGeo(); + + String resultKey = "38o54"; // yields in same slot as "key" + Long result = redis.georadius(key, 8.665351, 49.553302, 5, GeoArgs.Unit.km, + new GeoRadiusStoreArgs<>().withCount(1).desc().withStoreDist("38o54")); + assertThat(result).isEqualTo(1); + + List> dist = redis.zrangeWithScores(resultKey, 0, -1); + assertThat(dist).hasSize(1); + + assertThat(dist.get(0).getScore()).isBetween(2d, 3d); + } + + @Test + void georadiusWithNullArgs() { + assertThatThrownBy(() -> redis.georadius(key, 8.665351, 49.553302, 5, GeoArgs.Unit.km, (GeoArgs) null)) + .isInstanceOf(IllegalArgumentException.class); + } + + @Test + void 
georadiusStoreWithNullArgs() { + assertThatThrownBy( + () -> redis.georadius(key, 8.665351, 49.553302, 5, GeoArgs.Unit.km, (GeoRadiusStoreArgs) null)) + .isInstanceOf(IllegalArgumentException.class); + } + + @Test + void georadiusbymember() { + + prepareGeo(); + + Set empty = redis.georadiusbymember(key, "Bahn", 1, GeoArgs.Unit.km); + assertThat(empty).hasSize(1).contains("Bahn"); + + Set georadiusbymember = redis.georadiusbymember(key, "Bahn", 5, GeoArgs.Unit.km); + assertThat(georadiusbymember).hasSize(2).contains("Bahn", "Weinheim"); + } + + @Test + void georadiusbymemberStoreDistWithCountAndSort() { + + prepareGeo(); + + String resultKey = "38o54"; // yields in same slot as "key" + Long result = redis.georadiusbymember(key, "Bahn", 5, GeoArgs.Unit.km, + new GeoRadiusStoreArgs<>().withCount(1).desc().withStoreDist("38o54")); + assertThat(result).isEqualTo(1); + + List> dist = redis.zrangeWithScores(resultKey, 0, -1); + assertThat(dist).hasSize(1); + + assertThat(dist.get(0).getScore()).isBetween(2d, 3d); + } + + @Test + void georadiusbymemberWithArgs() { + + prepareGeo(); + + List> empty = redis.georadiusbymember(key, "Bahn", 1, GeoArgs.Unit.km, + new GeoArgs().withHash().withCoordinates().withDistance().desc()); + assertThat(empty).isNotEmpty(); + + List> withDistanceAndCoordinates = redis.georadiusbymember(key, "Bahn", 5, GeoArgs.Unit.km, + new GeoArgs().withCoordinates().withDistance().desc()); + assertThat(withDistanceAndCoordinates).hasSize(2); + + GeoWithin weinheim = withDistanceAndCoordinates.get(0); + assertThat(weinheim.getMember()).isEqualTo("Weinheim"); + assertThat(weinheim.getGeohash()).isNull(); + assertThat(weinheim.getDistance()).isNotNull(); + assertThat(weinheim.getCoordinates()).isNotNull(); + + List> withDistanceAndHash = redis.georadiusbymember(key, "Bahn", 5, GeoArgs.Unit.km, + new GeoArgs().withDistance().withHash().desc()); + assertThat(withDistanceAndHash).hasSize(2); + + GeoWithin weinheimDistanceHash = withDistanceAndHash.get(0); + assertThat(weinheimDistanceHash.getMember()).isEqualTo("Weinheim"); + assertThat(weinheimDistanceHash.getGeohash()).isNotNull(); + assertThat(weinheimDistanceHash.getDistance()).isNotNull(); + assertThat(weinheimDistanceHash.getCoordinates()).isNull(); + + List> withCoordinates = redis.georadiusbymember(key, "Bahn", 5, GeoArgs.Unit.km, + new GeoArgs().withCoordinates().desc()); + assertThat(withCoordinates).hasSize(2); + + GeoWithin weinheimCoordinates = withCoordinates.get(0); + assertThat(weinheimCoordinates.getMember()).isEqualTo("Weinheim"); + assertThat(weinheimCoordinates.getGeohash()).isNull(); + assertThat(weinheimCoordinates.getDistance()).isNull(); + assertThat(weinheimCoordinates.getCoordinates()).isNotNull(); + } + + @Test + public void georadiusbymemberWithArgsInTransaction() { + + prepareGeo(); + + redis.multi(); + redis.georadiusbymember(key, "Bahn", 1, GeoArgs.Unit.km, + new GeoArgs().withHash().withCoordinates().withDistance().desc()); + redis.georadiusbymember(key, "Bahn", 5, GeoArgs.Unit.km, new GeoArgs().withCoordinates().withDistance().desc()); + redis.georadiusbymember(key, "Bahn", 5, GeoArgs.Unit.km, new GeoArgs().withDistance().withHash().desc()); + redis.georadiusbymember(key, "Bahn", 5, GeoArgs.Unit.km, new GeoArgs().withCoordinates().desc()); + + TransactionResult exec = redis.exec(); + + List> empty = exec.get(0); + assertThat(empty).isNotEmpty(); + + List> withDistanceAndCoordinates = exec.get(1); + assertThat(withDistanceAndCoordinates).hasSize(2); + + GeoWithin weinheim = 
withDistanceAndCoordinates.get(0); + assertThat(weinheim.getMember()).isEqualTo("Weinheim"); + assertThat(weinheim.getGeohash()).isNull(); + assertThat(weinheim.getDistance()).isNotNull(); + assertThat(weinheim.getCoordinates()).isNotNull(); + + List> withDistanceAndHash = exec.get(2); + assertThat(withDistanceAndHash).hasSize(2); + + GeoWithin weinheimDistanceHash = withDistanceAndHash.get(0); + assertThat(weinheimDistanceHash.getMember()).isEqualTo("Weinheim"); + assertThat(weinheimDistanceHash.getGeohash()).isNotNull(); + assertThat(weinheimDistanceHash.getDistance()).isNotNull(); + assertThat(weinheimDistanceHash.getCoordinates()).isNull(); + + List> withCoordinates = exec.get(3); + assertThat(withCoordinates).hasSize(2); + + GeoWithin weinheimCoordinates = withCoordinates.get(0); + assertThat(weinheimCoordinates.getMember()).isEqualTo("Weinheim"); + assertThat(weinheimCoordinates.getGeohash()).isNull(); + assertThat(weinheimCoordinates.getDistance()).isNull(); + assertThat(weinheimCoordinates.getCoordinates()).isNotNull(); + } + + @Test + void georadiusbymemberWithNullArgs() { + assertThatThrownBy(() -> redis.georadiusbymember(key, "Bahn", 1, GeoArgs.Unit.km, (GeoArgs) null)) + .isInstanceOf(IllegalArgumentException.class); + } + + @Test + void georadiusStorebymemberWithNullArgs() { + assertThatThrownBy(() -> redis.georadiusbymember(key, "Bahn", 1, GeoArgs.Unit.km, (GeoRadiusStoreArgs) null)) + .isInstanceOf(IllegalArgumentException.class); + } + + protected void prepareGeo() { + redis.geoadd(key, 8.6638775, 49.5282537, "Weinheim"); + redis.geoadd(key, 8.3796281, 48.9978127, "EFS9", 8.665351, 49.553302, "Bahn"); + } + + private static double getY(List> georadius, int i) { + return georadius.get(i).getCoordinates().getY().doubleValue(); + } + + private static double getX(List> georadius, int i) { + return georadius.get(i).getCoordinates().getX().doubleValue(); + } + +} diff --git a/src/test/java/io/lettuce/core/commands/HLLCommandIntegrationTests.java b/src/test/java/io/lettuce/core/commands/HLLCommandIntegrationTests.java new file mode 100644 index 0000000000..7a8a85e202 --- /dev/null +++ b/src/test/java/io/lettuce/core/commands/HLLCommandIntegrationTests.java @@ -0,0 +1,124 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.commands; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; +import static org.assertj.core.api.Fail.fail; + +import javax.inject.Inject; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.TestInstance; +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.core.TestSupport; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.test.LettuceExtension; + +/** + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +@TestInstance(TestInstance.Lifecycle.PER_CLASS) +public class HLLCommandIntegrationTests extends TestSupport { + + private final RedisCommands redis; + + @Inject + protected HLLCommandIntegrationTests(RedisCommands redis) { + this.redis = redis; + } + + @BeforeEach + void setUp() { + this.redis.flushall(); + } + + @Test + void pfadd() { + + assertThat(redis.pfadd(key, value, value)).isEqualTo(1); + assertThat(redis.pfadd(key, value, value)).isEqualTo(0); + assertThat(redis.pfadd(key, value)).isEqualTo(0); + } + + @Test + void pfaddNoValues() { + assertThatThrownBy(() -> redis.pfadd(key)).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void pfaddNullValues() { + try { + redis.pfadd(key, null); + fail("Missing IllegalArgumentException"); + } catch (IllegalArgumentException e) { + } + try { + redis.pfadd(key, value, null); + fail("Missing IllegalArgumentException"); + } catch (IllegalArgumentException e) { + } + } + + @Test + void pfmerge() { + redis.pfadd(key, value); + redis.pfadd("key2", "value2"); + redis.pfadd("key3", "value3"); + + assertThat(redis.pfmerge(key, "key2", "key3")).isEqualTo("OK"); + assertThat(redis.pfcount(key)).isEqualTo(3); + + redis.pfadd("key2660", "rand", "mat"); + redis.pfadd("key7112", "mat", "perrin"); + + redis.pfmerge("key8885", "key2660", "key7112"); + + assertThat(redis.pfcount("key8885")).isEqualTo(3); + } + + @Test + void pfmergeNoKeys() { + assertThatThrownBy(() -> redis.pfmerge(key)).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void pfcount() { + redis.pfadd(key, value); + redis.pfadd("key2", "value2"); + assertThat(redis.pfcount(key)).isEqualTo(1); + assertThat(redis.pfcount(key, "key2")).isEqualTo(2); + } + + @Test + void pfcountNoKeys() { + assertThatThrownBy(() -> redis.pfcount()).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void pfaddPfmergePfCount() { + + redis.pfadd("key2660", "rand", "mat"); + redis.pfadd("key7112", "mat", "perrin"); + + redis.pfmerge("key8885", "key2660", "key7112"); + + assertThat(redis.pfcount("key8885")).isEqualTo(3); + } +} diff --git a/src/test/java/io/lettuce/core/commands/HashCommandIntegrationTests.java b/src/test/java/io/lettuce/core/commands/HashCommandIntegrationTests.java new file mode 100644 index 0000000000..43e2e02b9c --- /dev/null +++ b/src/test/java/io/lettuce/core/commands/HashCommandIntegrationTests.java @@ -0,0 +1,399 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.commands; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.offset; + +import java.util.Collections; +import java.util.LinkedHashMap; +import java.util.List; +import java.util.Map; + +import javax.inject.Inject; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.TestInstance; +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.core.*; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.test.KeyValueStreamingAdapter; +import io.lettuce.test.LettuceExtension; +import io.lettuce.test.ListStreamingAdapter; +import io.lettuce.test.condition.EnabledOnCommand; + +/** + * Integration tests for {@link io.lettuce.core.api.sync.RedisHashCommands}. + * + * @author Will Glozer + * @author Mark Paluch + * @author Hodur Heidarsson + */ +@ExtendWith(LettuceExtension.class) +@TestInstance(TestInstance.Lifecycle.PER_CLASS) +public class HashCommandIntegrationTests extends TestSupport { + + private final RedisCommands redis; + + @Inject + protected HashCommandIntegrationTests(RedisCommands redis) { + this.redis = redis; + } + + @BeforeEach + void setUp() { + this.redis.flushall(); + } + + @Test + void hdel() { + assertThat(redis.hdel(key, "one")).isEqualTo(0); + redis.hset(key, "two", "2"); + assertThat(redis.hdel(key, "one")).isEqualTo(0); + redis.hset(key, "one", "1"); + assertThat(redis.hdel(key, "one")).isEqualTo(1); + redis.hset(key, "one", "1"); + assertThat(redis.hdel(key, "one", "two")).isEqualTo(2); + } + + @Test + void hexists() { + assertThat(redis.hexists(key, "one")).isFalse(); + redis.hset(key, "two", "2"); + assertThat(redis.hexists(key, "one")).isFalse(); + redis.hset(key, "one", "1"); + assertThat(redis.hexists(key, "one")).isTrue(); + } + + @Test + void hget() { + assertThat(redis.hget(key, "one")).isNull(); + redis.hset(key, "one", "1"); + assertThat(redis.hget(key, "one")).isEqualTo("1"); + } + + @Test + void hgetall() { + assertThat(redis.hgetall(key).isEmpty()).isTrue(); + + redis.hset(key, "zero", "0"); + redis.hset(key, "one", "1"); + redis.hset(key, "two", "2"); + + Map map = redis.hgetall(key); + + assertThat(map).hasSize(3); + assertThat(map.keySet()).containsExactly("zero", "one", "two"); + } + + @Test + void hgetallStreaming() { + + KeyValueStreamingAdapter adapter = new KeyValueStreamingAdapter<>(); + + assertThat(redis.hgetall(key).isEmpty()).isTrue(); + redis.hset(key, "one", "1"); + redis.hset(key, "two", "2"); + Long count = redis.hgetall(adapter, key); + Map map = adapter.getMap(); + assertThat(count.intValue()).isEqualTo(2); + assertThat(map).hasSize(2); + assertThat(map.get("one")).isEqualTo("1"); + assertThat(map.get("two")).isEqualTo("2"); + } + + @Test + void hincrby() { + assertThat(redis.hincrby(key, "one", 1)).isEqualTo(1); + assertThat(redis.hincrby(key, "one", -2)).isEqualTo(-1); + } + + @Test + void hincrbyfloat() { + assertThat(redis.hincrbyfloat(key, "one", 1.0)).isEqualTo(1.0); + assertThat(redis.hincrbyfloat(key, "one", -2.0)).isEqualTo(-1.0); + assertThat(redis.hincrbyfloat(key, "one", 1.23)).isEqualTo(0.23, offset(0.001)); + } + + @Test + void hkeys() { + setup(); + List keys = redis.hkeys(key); + assertThat(keys).hasSize(2); + assertThat(keys.containsAll(list("one", "two"))).isTrue(); + } + + @Test + void hkeysStreaming() { + setup(); + ListStreamingAdapter 
streamingAdapter = new ListStreamingAdapter<>(); + + Long count = redis.hkeys(streamingAdapter, key); + assertThat(count.longValue()).isEqualTo(2); + + List keys = streamingAdapter.getList(); + assertThat(keys).hasSize(2); + assertThat(keys.containsAll(list("one", "two"))).isTrue(); + } + + private void setup() { + assertThat(redis.hkeys(key)).isEqualTo(list()); + redis.hset(key, "one", "1"); + redis.hset(key, "two", "2"); + } + + @Test + void hlen() { + assertThat((long) redis.hlen(key)).isEqualTo(0); + redis.hset(key, "one", "1"); + assertThat((long) redis.hlen(key)).isEqualTo(1); + } + + @Test + @EnabledOnCommand("HSTRLEN") + void hstrlen() { + + assertThat((long) redis.hstrlen(key, "one")).isEqualTo(0); + redis.hset(key, "one", value); + assertThat((long) redis.hstrlen(key, "one")).isEqualTo(value.length()); + } + + @Test + void hmget() { + setupHmget(); + List> values = redis.hmget(key, "one", "two"); + assertThat(values).hasSize(2); + assertThat(values.containsAll(list(kv("one", "1"), kv("two", "2")))).isTrue(); + } + + private void setupHmget() { + assertThat(redis.hmget(key, "one", "two")).isEqualTo(list(KeyValue.empty("one"), KeyValue.empty("two"))); + redis.hset(key, "one", "1"); + redis.hset(key, "two", "2"); + } + + @Test + void hmgetStreaming() { + setupHmget(); + + KeyValueStreamingAdapter streamingAdapter = new KeyValueStreamingAdapter<>(); + Long count = redis.hmget(streamingAdapter, key, "one", "two"); + Map values = streamingAdapter.getMap(); + assertThat(count.intValue()).isEqualTo(2); + assertThat(values).hasSize(2); + assertThat(values).containsEntry("one", "1").containsEntry("two", "2"); + } + + @Test + void hmset() { + Map hash = new LinkedHashMap<>(); + hash.put("one", "1"); + hash.put("two", "2"); + assertThat(redis.hmset(key, hash)).isEqualTo("OK"); + assertThat(redis.hmget(key, "one", "two")).isEqualTo(list(kv("one", "1"), kv("two", "2"))); + } + + @Test + void hmsetWithNulls() { + Map hash = new LinkedHashMap<>(); + hash.put("one", null); + assertThat(redis.hmset(key, hash)).isEqualTo("OK"); + assertThat(redis.hmget(key, "one")).isEqualTo(list(kv("one", ""))); + + hash.put("one", ""); + assertThat(redis.hmset(key, hash)).isEqualTo("OK"); + assertThat(redis.hmget(key, "one")).isEqualTo(list(kv("one", ""))); + } + + @Test + void hset() { + assertThat(redis.hset(key, "one", "1")).isTrue(); + assertThat(redis.hset(key, "one", "1")).isFalse(); + } + + @Test + @EnabledOnCommand("UNLINK") // version guard for Redis 4 + void hsetMap() { + Map hash = new LinkedHashMap<>(); + hash.put("two", "2"); + hash.put("three", "3"); + assertThat(redis.hset(key, hash)).isEqualTo(2); + + hash.put("two", "second"); + assertThat(redis.hset(key, hash)).isEqualTo(0); + assertThat(redis.hget(key, "two")).isEqualTo("second"); + } + + @Test + void hsetnx() { + redis.hset(key, "one", "1"); + assertThat(redis.hsetnx(key, "one", "2")).isFalse(); + assertThat(redis.hget(key, "one")).isEqualTo("1"); + } + + @Test + void hvals() { + assertThat(redis.hvals(key)).isEqualTo(list()); + redis.hset(key, "one", "1"); + redis.hset(key, "two", "2"); + List values = redis.hvals(key); + assertThat(values).hasSize(2); + assertThat(values.containsAll(list("1", "1"))).isTrue(); + } + + @Test + void hvalsStreaming() { + assertThat(redis.hvals(key)).isEqualTo(list()); + redis.hset(key, "one", "1"); + redis.hset(key, "two", "2"); + + ListStreamingAdapter channel = new ListStreamingAdapter<>(); + Long count = redis.hvals(channel, key); + assertThat(count.intValue()).isEqualTo(2); + 
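+ // The streaming overload pushes each value into the channel and returns the number of emitted elements.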
assertThat(channel.getList()).hasSize(2); + assertThat(channel.getList().containsAll(list("1", "1"))).isTrue(); + } + + @Test + void hscan() { + redis.hset(key, key, value); + MapScanCursor cursor = redis.hscan(key); + + assertThat(cursor.getCursor()).isEqualTo("0"); + assertThat(cursor.isFinished()).isTrue(); + assertThat(cursor.getMap()).isEqualTo(Collections.singletonMap(key, value)); + } + + @Test + void hscanWithCursor() { + redis.hset(key, key, value); + + MapScanCursor cursor = redis.hscan(key, ScanCursor.INITIAL); + + assertThat(cursor.getCursor()).isEqualTo("0"); + assertThat(cursor.isFinished()).isTrue(); + assertThat(cursor.getMap()).isEqualTo(Collections.singletonMap(key, value)); + } + + @Test + void hscanWithCursorAndArgs() { + redis.hset(key, key, value); + + MapScanCursor cursor = redis.hscan(key, ScanCursor.INITIAL, ScanArgs.Builder.limit(2)); + + assertThat(cursor.getCursor()).isEqualTo("0"); + assertThat(cursor.isFinished()).isTrue(); + assertThat(cursor.getMap()).isEqualTo(Collections.singletonMap(key, value)); + } + + @Test + void hscanStreaming() { + redis.hset(key, key, value); + KeyValueStreamingAdapter adapter = new KeyValueStreamingAdapter<>(); + + StreamScanCursor cursor = redis.hscan(adapter, key, ScanArgs.Builder.limit(100).match("*")); + + assertThat(cursor.getCount()).isEqualTo(1); + assertThat(cursor.getCursor()).isEqualTo("0"); + assertThat(cursor.isFinished()).isTrue(); + assertThat(adapter.getMap()).isEqualTo(Collections.singletonMap(key, value)); + } + + @Test + void hscanStreamingWithCursor() { + redis.hset(key, key, value); + KeyValueStreamingAdapter adapter = new KeyValueStreamingAdapter<>(); + + StreamScanCursor cursor = redis.hscan(adapter, key, ScanCursor.INITIAL); + + assertThat(cursor.getCount()).isEqualTo(1); + assertThat(cursor.getCursor()).isEqualTo("0"); + assertThat(cursor.isFinished()).isTrue(); + } + + @Test + void hscanStreamingWithCursorAndArgs() { + redis.hset(key, key, value); + KeyValueStreamingAdapter adapter = new KeyValueStreamingAdapter<>(); + + StreamScanCursor cursor3 = redis.hscan(adapter, key, ScanCursor.INITIAL, ScanArgs.Builder.limit(100).match("*")); + + assertThat(cursor3.getCount()).isEqualTo(1); + assertThat(cursor3.getCursor()).isEqualTo("0"); + assertThat(cursor3.isFinished()).isTrue(); + } + + @Test + void hscanStreamingWithArgs() { + redis.hset(key, key, value); + KeyValueStreamingAdapter adapter = new KeyValueStreamingAdapter<>(); + + StreamScanCursor cursor = redis.hscan(adapter, key); + + assertThat(cursor.getCount()).isEqualTo(1); + assertThat(cursor.getCursor()).isEqualTo("0"); + assertThat(cursor.isFinished()).isTrue(); + } + + @Test + void hscanMultiple() { + + Map expect = new LinkedHashMap<>(); + Map check = new LinkedHashMap<>(); + setup100KeyValues(expect); + + MapScanCursor cursor = redis.hscan(key, ScanArgs.Builder.limit(5)); + + assertThat(cursor.getCursor()).isNotNull(); + assertThat(cursor.getMap()).hasSize(100); + + assertThat(cursor.getCursor()).isEqualTo("0"); + assertThat(cursor.isFinished()).isTrue(); + + check.putAll(cursor.getMap()); + + while (!cursor.isFinished()) { + cursor = redis.hscan(key, cursor); + check.putAll(cursor.getMap()); + } + + assertThat(check).isEqualTo(expect); + } + + @Test + void hscanMatch() { + + Map expect = new LinkedHashMap<>(); + setup100KeyValues(expect); + + MapScanCursor cursor = redis.hscan(key, ScanArgs.Builder.limit(100).match("key1*")); + + assertThat(cursor.getCursor()).isEqualTo("0"); + assertThat(cursor.isFinished()).isTrue(); + + 
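+ // MATCH key1* selects key1 plus key10..key19 out of the 100 fields written by setup100KeyValues.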
assertThat(cursor.getMap()).hasSize(11); + } + + void setup100KeyValues(Map expect) { + for (int i = 0; i < 100; i++) { + expect.put(key + i, value + 1); + } + + redis.hmset(key, expect); + } +} diff --git a/src/test/java/io/lettuce/core/commands/KeyCommandIntegrationTests.java b/src/test/java/io/lettuce/core/commands/KeyCommandIntegrationTests.java new file mode 100644 index 0000000000..4eb0f2ea94 --- /dev/null +++ b/src/test/java/io/lettuce/core/commands/KeyCommandIntegrationTests.java @@ -0,0 +1,458 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.commands; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; +import static org.junit.Assert.assertNotEquals; + +import java.time.Duration; +import java.util.*; + +import javax.inject.Inject; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.TestInstance; +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.core.*; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.test.LettuceExtension; +import io.lettuce.test.ListStreamingAdapter; +import io.lettuce.test.condition.EnabledOnCommand; + +/** + * @author Will Glozer + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +@TestInstance(TestInstance.Lifecycle.PER_CLASS) +public class KeyCommandIntegrationTests extends TestSupport { + + private final RedisCommands redis; + + @Inject + protected KeyCommandIntegrationTests(RedisCommands redis) { + this.redis = redis; + } + + @BeforeEach + void setUp() { + this.redis.flushall(); + } + + @Test + void del() { + redis.set(key, value); + assertThat((long) redis.del(key)).isEqualTo(1); + redis.set(key + "1", value); + redis.set(key + "2", value); + + assertThat(redis.del(key + "1", key + "2")).isEqualTo(2); + } + + @Test + @EnabledOnCommand("UNLINK") + void unlink() { + + redis.set(key, value); + assertThat((long) redis.unlink(key)).isEqualTo(1); + redis.set(key + "1", value); + redis.set(key + "2", value); + assertThat(redis.unlink(key + "1", key + "2")).isEqualTo(2); + } + + @Test + void dump() { + assertThat(redis.dump("invalid")).isNull(); + redis.set(key, value); + assertThat(redis.dump(key).length > 0).isTrue(); + } + + @Test + void exists() { + assertThat(redis.exists(key)).isEqualTo(0); + redis.set(key, value); + assertThat(redis.exists(key)).isEqualTo(1); + } + + @Test + void existsVariadic() { + assertThat(redis.exists(key, "key2", "key3")).isEqualTo(0); + redis.set(key, value); + redis.set("key2", value); + assertThat(redis.exists(key, "key2", "key3")).isEqualTo(2); + } + + @Test + void expire() { + assertThat(redis.expire(key, 10)).isFalse(); + redis.set(key, value); + assertThat(redis.expire(key, 10)).isTrue(); + assertThat((long) redis.ttl(key)).isEqualTo(10); + } + + @Test + void expireat() { + Date expiration = new Date(System.currentTimeMillis() + 10000); + 
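+ // EXPIREAT fails for a missing key; once the key is set it succeeds and the remaining TTL stays close to ten seconds.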
assertThat(redis.expireat(key, expiration)).isFalse(); + redis.set(key, value); + assertThat(redis.expireat(key, expiration)).isTrue(); + + assertThat(redis.ttl(key)).isGreaterThanOrEqualTo(8); + } + + @Test + void keys() { + assertThat(redis.keys("*")).isEqualTo(list()); + Map map = new LinkedHashMap<>(); + map.put("one", "1"); + map.put("two", "2"); + map.put("three", "3"); + redis.mset(map); + List keys = redis.keys("???"); + assertThat(keys).hasSize(2); + assertThat(keys.contains("one")).isTrue(); + assertThat(keys.contains("two")).isTrue(); + } + + @Test + void keysStreaming() { + ListStreamingAdapter adapter = new ListStreamingAdapter<>(); + + assertThat(redis.keys("*")).isEqualTo(list()); + Map map = new LinkedHashMap<>(); + map.put("one", "1"); + map.put("two", "2"); + map.put("three", "3"); + redis.mset(map); + Long count = redis.keys(adapter, "???"); + assertThat(count.intValue()).isEqualTo(2); + + List keys = adapter.getList(); + assertThat(keys).hasSize(2); + assertThat(keys.contains("one")).isTrue(); + assertThat(keys.contains("two")).isTrue(); + } + + @Test + public void move() { + redis.set(key, value); + redis.move(key, 1); + assertThat(redis.get(key)).isNull(); + redis.select(1); + assertThat(redis.get(key)).isEqualTo(value); + } + + @Test + void objectEncoding() { + redis.set(key, value); + assertThat(redis.objectEncoding(key)).isEqualTo("embstr"); + redis.set(key, String.valueOf(1)); + assertThat(redis.objectEncoding(key)).isEqualTo("int"); + } + + @Test + void objectIdletime() { + redis.set(key, value); + assertThat((long) redis.objectIdletime(key)).isLessThan(2); + } + + @Test + void objectRefcount() { + redis.set(key, value); + assertThat(redis.objectRefcount(key)).isGreaterThan(0); + } + + @Test + void persist() { + assertThat(redis.persist(key)).isFalse(); + redis.set(key, value); + assertThat(redis.persist(key)).isFalse(); + redis.expire(key, 10); + assertThat(redis.persist(key)).isTrue(); + } + + @Test + void pexpire() { + assertThat(redis.pexpire(key, 5000)).isFalse(); + redis.set(key, value); + assertThat(redis.pexpire(key, 5000)).isTrue(); + assertThat(redis.pttl(key)).isGreaterThan(0).isLessThanOrEqualTo(5000); + } + + @Test + void pexpireat() { + Date expiration = new Date(System.currentTimeMillis() + 5000); + assertThat(redis.pexpireat(key, expiration)).isFalse(); + redis.set(key, value); + assertThat(redis.pexpireat(key, expiration)).isTrue(); + assertThat(redis.pttl(key)).isGreaterThan(0).isLessThanOrEqualTo(5000); + } + + @Test + void pttl() { + assertThat((long) redis.pttl(key)).isEqualTo(-2); + redis.set(key, value); + assertThat((long) redis.pttl(key)).isEqualTo(-1); + redis.pexpire(key, 5000); + assertThat(redis.pttl(key)).isGreaterThan(0).isLessThanOrEqualTo(5000); + } + + @Test + void randomkey() { + assertThat(redis.randomkey()).isNull(); + redis.set(key, value); + assertThat(redis.randomkey()).isEqualTo(key); + } + + @Test + void rename() { + redis.set(key, value); + + assertThat(redis.rename(key, key + "X")).isEqualTo("OK"); + assertThat(redis.get(key)).isNull(); + assertThat(redis.get(key + "X")).isEqualTo(value); + redis.set(key, value + "X"); + assertThat(redis.rename(key + "X", key)).isEqualTo("OK"); + assertThat(redis.get(key)).isEqualTo(value); + } + + @Test + void renameNonexistentKey() { + assertThatThrownBy(() -> redis.rename(key, key + "X")).isInstanceOf(RedisException.class); + } + + @Test + void renamenx() { + redis.set(key, value); + assertThat(redis.renamenx(key, key + "X")).isTrue(); + assertThat(redis.get(key + 
"X")).isEqualTo(value); + redis.set(key, value); + assertThat(redis.renamenx(key + "X", key)).isFalse(); + } + + @Test + void renamenxNonexistentKey() { + assertThatThrownBy(() -> redis.renamenx(key, key + "X")).isInstanceOf(RedisException.class); + } + + @Test + void restore() { + redis.set(key, value); + byte[] bytes = redis.dump(key); + redis.del(key); + + assertThat(redis.restore(key, 0, bytes)).isEqualTo("OK"); + assertThat(redis.get(key)).isEqualTo(value); + assertThat(redis.pttl(key).longValue()).isEqualTo(-1); + + redis.del(key); + assertThat(redis.restore(key, 1000, bytes)).isEqualTo("OK"); + assertThat(redis.get(key)).isEqualTo(value); + assertThat(redis.pttl(key)).isGreaterThan(0).isLessThanOrEqualTo(1000); + + assertThatThrownBy(() -> redis.restore(key, 0, bytes)).isInstanceOf(RedisException.class); + } + + @Test + void restoreReplace() { + + redis.set(key, value); + byte[] bytes = redis.dump(key); + redis.set(key, "foo"); + + assertThat(redis.restore(key, bytes, RestoreArgs.Builder.ttl(Duration.ofSeconds(1)).replace())).isEqualTo("OK"); + assertThat(redis.get(key)).isEqualTo(value); + assertThat(redis.pttl(key)).isGreaterThan(0).isLessThanOrEqualTo(1000); + } + + @Test + @EnabledOnCommand("TOUCH") + void touch() { + + assertThat((long) redis.touch(key)).isEqualTo(0); + redis.set(key, value); + assertThat((long) redis.touch(key, "key2")).isEqualTo(1); + } + + @Test + void ttl() { + assertThat((long) redis.ttl(key)).isEqualTo(-2); + redis.set(key, value); + assertThat((long) redis.ttl(key)).isEqualTo(-1); + redis.expire(key, 10); + assertThat((long) redis.ttl(key)).isEqualTo(10); + } + + @Test + void type() { + assertThat(redis.type(key)).isEqualTo("none"); + + redis.set(key, value); + assertThat(redis.type(key)).isEqualTo("string"); + + redis.hset(key + "H", value, "1"); + assertThat(redis.type(key + "H")).isEqualTo("hash"); + + redis.lpush(key + "L", "1"); + assertThat(redis.type(key + "L")).isEqualTo("list"); + + redis.sadd(key + "S", "1"); + assertThat(redis.type(key + "S")).isEqualTo("set"); + + redis.zadd(key + "Z", 1, "1"); + assertThat(redis.type(key + "Z")).isEqualTo("zset"); + } + + @Test + void scan() { + redis.set(key, value); + + KeyScanCursor cursor = redis.scan(); + assertThat(cursor.getCursor()).isEqualTo("0"); + assertThat(cursor.isFinished()).isTrue(); + assertThat(cursor.getKeys()).isEqualTo(list(key)); + } + + @Test + void scanWithArgs() { + redis.set(key, value); + + KeyScanCursor cursor = redis.scan(ScanArgs.Builder.limit(10)); + assertThat(cursor.getCursor()).isEqualTo("0"); + assertThat(cursor.isFinished()).isTrue(); + + } + + @Test + void scanInitialCursor() { + redis.set(key, value); + + KeyScanCursor cursor = redis.scan(ScanCursor.INITIAL); + assertThat(cursor.getCursor()).isEqualTo("0"); + assertThat(cursor.isFinished()).isTrue(); + assertThat(cursor.getKeys()).isEqualTo(list(key)); + } + + @Test + void scanFinishedCursor() { + redis.set(key, value); + assertThatThrownBy(() -> redis.scan(ScanCursor.FINISHED)).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void scanNullCursor() { + redis.set(key, value); + assertThatThrownBy(() -> redis.scan((ScanCursor) null)).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void scanStreaming() { + redis.set(key, value); + ListStreamingAdapter adapter = new ListStreamingAdapter<>(); + + StreamScanCursor cursor = redis.scan(adapter); + + assertThat(cursor.getCount()).isEqualTo(1); + assertThat(cursor.getCursor()).isEqualTo("0"); + assertThat(cursor.isFinished()).isTrue(); + 
assertThat(adapter.getList()).isEqualTo(list(key)); + } + + @Test + void scanStreamingWithCursor() { + redis.set(key, value); + ListStreamingAdapter adapter = new ListStreamingAdapter<>(); + + StreamScanCursor cursor = redis.scan(adapter, ScanCursor.INITIAL); + + assertThat(cursor.getCount()).isEqualTo(1); + assertThat(cursor.getCursor()).isEqualTo("0"); + assertThat(cursor.isFinished()).isTrue(); + } + + @Test + void scanStreamingWithCursorAndArgs() { + redis.set(key, value); + ListStreamingAdapter adapter = new ListStreamingAdapter<>(); + + StreamScanCursor cursor = redis.scan(adapter, ScanCursor.INITIAL, ScanArgs.Builder.limit(5)); + + assertThat(cursor.getCount()).isEqualTo(1); + assertThat(cursor.getCursor()).isEqualTo("0"); + assertThat(cursor.isFinished()).isTrue(); + } + + @Test + void scanStreamingArgs() { + redis.set(key, value); + ListStreamingAdapter adapter = new ListStreamingAdapter<>(); + + StreamScanCursor cursor = redis.scan(adapter, ScanArgs.Builder.limit(100).match("*")); + + assertThat(cursor.getCount()).isEqualTo(1); + assertThat(cursor.getCursor()).isEqualTo("0"); + assertThat(cursor.isFinished()).isTrue(); + assertThat(adapter.getList()).isEqualTo(list(key)); + } + + @Test + void scanMultiple() { + + Set expect = new HashSet<>(); + Set check = new HashSet<>(); + setup100KeyValues(expect); + + KeyScanCursor cursor = redis.scan(ScanArgs.Builder.limit(12)); + + assertThat(cursor.getCursor()).isNotNull(); + assertNotEquals("0", cursor.getCursor()); + assertThat(cursor.isFinished()).isFalse(); + + check.addAll(cursor.getKeys()); + + while (!cursor.isFinished()) { + cursor = redis.scan(cursor); + check.addAll(cursor.getKeys()); + } + + assertThat(check).isEqualTo(expect); + assertThat(check).hasSize(100); + } + + @Test + void scanMatch() { + + Set expect = new HashSet<>(); + setup100KeyValues(expect); + + KeyScanCursor cursor = redis.scan(ScanArgs.Builder.limit(200).match("key1*")); + + assertThat(cursor.getCursor()).isEqualTo("0"); + assertThat(cursor.isFinished()).isTrue(); + + assertThat(cursor.getKeys()).hasSize(11); + } + + void setup100KeyValues(Set expect) { + for (int i = 0; i < 100; i++) { + redis.set(key + i, value + i); + expect.add(key + i); + } + } +} diff --git a/src/test/java/io/lettuce/core/commands/ListCommandIntegrationTests.java b/src/test/java/io/lettuce/core/commands/ListCommandIntegrationTests.java new file mode 100644 index 0000000000..ba35d4c58d --- /dev/null +++ b/src/test/java/io/lettuce/core/commands/ListCommandIntegrationTests.java @@ -0,0 +1,251 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.commands; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.junit.jupiter.api.Assumptions.assumeTrue; + +import java.time.Duration; +import java.util.List; +import java.util.concurrent.TimeUnit; + +import javax.inject.Inject; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.TestInstance; +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.core.TestSupport; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.test.LettuceExtension; +import io.lettuce.test.ListStreamingAdapter; +import io.lettuce.test.condition.RedisConditions; + +/** + * @author Will Glozer + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +@TestInstance(TestInstance.Lifecycle.PER_CLASS) +public class ListCommandIntegrationTests extends TestSupport { + + private final RedisCommands redis; + + @Inject + protected ListCommandIntegrationTests(RedisCommands redis) { + this.redis = redis; + } + + @BeforeEach + void setUp() { + this.redis.flushall(); + } + + @Test + void blpop() { + redis.rpush("two", "2", "3"); + assertThat(redis.blpop(1, "one", "two")).isEqualTo(kv("two", "2")); + } + + @Test + void blpopTimeout() { + redis.setTimeout(Duration.ofSeconds(10)); + assertThat(redis.blpop(1, key)).isNull(); + } + + @Test + void brpop() { + redis.rpush("two", "2", "3"); + assertThat(redis.brpop(1, "one", "two")).isEqualTo(kv("two", "3")); + } + + @Test + void brpoplpush() { + redis.rpush("one", "1", "2"); + redis.rpush("two", "3", "4"); + assertThat(redis.brpoplpush(1, "one", "two")).isEqualTo("2"); + assertThat(redis.lrange("one", 0, -1)).isEqualTo(list("1")); + assertThat(redis.lrange("two", 0, -1)).isEqualTo(list("2", "3", "4")); + } + + @Test + void lindex() { + assertThat(redis.lindex(key, 0)).isNull(); + redis.rpush(key, "one"); + assertThat(redis.lindex(key, 0)).isEqualTo("one"); + } + + @Test + void linsert() { + assertThat(redis.linsert(key, false, "one", "two")).isEqualTo(0); + redis.rpush(key, "one"); + redis.rpush(key, "three"); + assertThat(redis.linsert(key, true, "three", "two")).isEqualTo(3); + assertThat(redis.lrange(key, 0, -1)).isEqualTo(list("one", "two", "three")); + } + + @Test + void llen() { + assertThat((long) redis.llen(key)).isEqualTo(0); + redis.lpush(key, "one"); + assertThat((long) redis.llen(key)).isEqualTo(1); + } + + @Test + void lpop() { + assertThat(redis.lpop(key)).isNull(); + redis.rpush(key, "one", "two"); + assertThat(redis.lpop(key)).isEqualTo("one"); + assertThat(redis.lrange(key, 0, -1)).isEqualTo(list("two")); + } + + @Test + void lpush() { + assertThat((long) redis.lpush(key, "two")).isEqualTo(1); + assertThat((long) redis.lpush(key, "one")).isEqualTo(2); + assertThat(redis.lrange(key, 0, -1)).isEqualTo(list("one", "two")); + assertThat((long) redis.lpush(key, "three", "four")).isEqualTo(4); + assertThat(redis.lrange(key, 0, -1)).isEqualTo(list("four", "three", "one", "two")); + } + + @Test + void lpushx() { + assertThat((long) redis.lpushx(key, "two")).isEqualTo(0); + redis.lpush(key, "two"); + assertThat((long) redis.lpushx(key, "one")).isEqualTo(2); + assertThat(redis.lrange(key, 0, -1)).isEqualTo(list("one", "two")); + } + + @Test + void lpushxVariadic() { + + assumeTrue(RedisConditions.of(redis).hasCommandArity("LPUSHX", -3)); + + assertThat((long) redis.lpushx(key, "one", "two")).isEqualTo(0); + redis.lpush(key, "two"); + assertThat((long) redis.lpushx(key, "one", "zero")).isEqualTo(3); + 
assertThat(redis.lrange(key, 0, -1)).isEqualTo(list("zero", "one", "two")); + } + + @Test + void lrange() { + assertThat(redis.lrange(key, 0, 10).isEmpty()).isTrue(); + redis.rpush(key, "one", "two", "three"); + List range = redis.lrange(key, 0, 1); + assertThat(range).hasSize(2); + assertThat(range.get(0)).isEqualTo("one"); + assertThat(range.get(1)).isEqualTo("two"); + assertThat(redis.lrange(key, 0, -1)).hasSize(3); + } + + @Test + void lrangeStreaming() { + assertThat(redis.lrange(key, 0, 10).isEmpty()).isTrue(); + redis.rpush(key, "one", "two", "three"); + + ListStreamingAdapter adapter = new ListStreamingAdapter<>(); + + Long count = redis.lrange(adapter, key, 0, 1); + assertThat(count.longValue()).isEqualTo(2); + + List range = adapter.getList(); + + assertThat(range).hasSize(2); + assertThat(range.get(0)).isEqualTo("one"); + assertThat(range.get(1)).isEqualTo("two"); + assertThat(redis.lrange(key, 0, -1)).hasSize(3); + } + + @Test + void lrem() { + assertThat(redis.lrem(key, 0, value)).isEqualTo(0); + + redis.rpush(key, "1", "2", "1", "2", "1"); + assertThat((long) redis.lrem(key, 1, "1")).isEqualTo(1); + assertThat(redis.lrange(key, 0, -1)).isEqualTo(list("2", "1", "2", "1")); + + redis.lpush(key, "1"); + assertThat((long) redis.lrem(key, -1, "1")).isEqualTo(1); + assertThat(redis.lrange(key, 0, -1)).isEqualTo(list("1", "2", "1", "2")); + + redis.lpush(key, "1"); + assertThat((long) redis.lrem(key, 0, "1")).isEqualTo(3); + assertThat(redis.lrange(key, 0, -1)).isEqualTo(list("2", "2")); + } + + @Test + void lset() { + redis.rpush(key, "one", "two", "three"); + assertThat(redis.lset(key, 2, "san")).isEqualTo("OK"); + assertThat(redis.lrange(key, 0, -1)).isEqualTo(list("one", "two", "san")); + } + + @Test + void ltrim() { + redis.rpush(key, "1", "2", "3", "4", "5", "6"); + assertThat(redis.ltrim(key, 0, 3)).isEqualTo("OK"); + assertThat(redis.lrange(key, 0, -1)).isEqualTo(list("1", "2", "3", "4")); + assertThat(redis.ltrim(key, -2, -1)).isEqualTo("OK"); + assertThat(redis.lrange(key, 0, -1)).isEqualTo(list("3", "4")); + } + + @Test + void rpop() { + assertThat(redis.rpop(key)).isNull(); + redis.rpush(key, "one", "two"); + assertThat(redis.rpop(key)).isEqualTo("two"); + assertThat(redis.lrange(key, 0, -1)).isEqualTo(list("one")); + } + + @Test + void rpoplpush() { + assertThat(redis.rpoplpush("one", "two")).isNull(); + redis.rpush("one", "1", "2"); + redis.rpush("two", "3", "4"); + assertThat(redis.rpoplpush("one", "two")).isEqualTo("2"); + assertThat(redis.lrange("one", 0, -1)).isEqualTo(list("1")); + assertThat(redis.lrange("two", 0, -1)).isEqualTo(list("2", "3", "4")); + } + + @Test + void rpush() { + assertThat((long) redis.rpush(key, "one")).isEqualTo(1); + assertThat((long) redis.rpush(key, "two")).isEqualTo(2); + assertThat(redis.lrange(key, 0, -1)).isEqualTo(list("one", "two")); + assertThat((long) redis.rpush(key, "three", "four")).isEqualTo(4); + assertThat(redis.lrange(key, 0, -1)).isEqualTo(list("one", "two", "three", "four")); + } + + @Test + void rpushx() { + assertThat((long) redis.rpushx(key, "one")).isEqualTo(0); + redis.rpush(key, "one"); + assertThat((long) redis.rpushx(key, "two")).isEqualTo(2); + assertThat(redis.lrange(key, 0, -1)).isEqualTo(list("one", "two")); + } + + @Test + void rpushxVariadic() { + + assumeTrue(RedisConditions.of(redis).hasCommandArity("RPUSHX", -3)); + + assertThat((long) redis.rpushx(key, "two", "three")).isEqualTo(0); + redis.rpush(key, "one"); + assertThat((long) redis.rpushx(key, "two", "three")).isEqualTo(3); + 
assertThat(redis.lrange(key, 0, -1)).isEqualTo(list("one", "two", "three")); + } +} diff --git a/src/test/java/io/lettuce/core/commands/NumericCommandIntegrationTests.java b/src/test/java/io/lettuce/core/commands/NumericCommandIntegrationTests.java new file mode 100644 index 0000000000..e7b86202fc --- /dev/null +++ b/src/test/java/io/lettuce/core/commands/NumericCommandIntegrationTests.java @@ -0,0 +1,82 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.commands; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.offset; + +import javax.inject.Inject; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.TestInstance; +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.core.TestSupport; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.test.LettuceExtension; + +/** + * @author Will Glozer + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +@TestInstance(TestInstance.Lifecycle.PER_CLASS) +public class NumericCommandIntegrationTests extends TestSupport { + + private final RedisCommands redis; + + @Inject + protected NumericCommandIntegrationTests(RedisCommands redis) { + this.redis = redis; + } + + @BeforeEach + void setUp() { + this.redis.flushall(); + } + + @Test + void decr() { + assertThat((long) redis.decr(key)).isEqualTo(-1); + assertThat((long) redis.decr(key)).isEqualTo(-2); + } + + @Test + void decrby() { + assertThat(redis.decrby(key, 3)).isEqualTo(-3); + assertThat(redis.decrby(key, 3)).isEqualTo(-6); + } + + @Test + void incr() { + assertThat((long) redis.incr(key)).isEqualTo(1); + assertThat((long) redis.incr(key)).isEqualTo(2); + } + + @Test + void incrby() { + assertThat(redis.incrby(key, 3)).isEqualTo(3); + assertThat(redis.incrby(key, 3)).isEqualTo(6); + } + + @Test + void incrbyfloat() { + + assertThat(redis.incrbyfloat(key, 3.0)).isEqualTo(3.0, offset(0.1)); + assertThat(redis.incrbyfloat(key, 0.2)).isEqualTo(3.2, offset(0.1)); + } +} diff --git a/src/test/java/io/lettuce/core/commands/RunOnlyOnceServerCommandIntegrationTests.java b/src/test/java/io/lettuce/core/commands/RunOnlyOnceServerCommandIntegrationTests.java new file mode 100644 index 0000000000..13246fe9b3 --- /dev/null +++ b/src/test/java/io/lettuce/core/commands/RunOnlyOnceServerCommandIntegrationTests.java @@ -0,0 +1,139 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.commands; + +import static io.lettuce.test.settings.TestSettings.host; +import static io.lettuce.test.settings.TestSettings.port; +import static org.assertj.core.api.Assertions.assertThat; +import static org.junit.jupiter.api.Assumptions.assumeTrue; + +import java.util.Arrays; + +import javax.inject.Inject; + +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.core.MigrateArgs; +import io.lettuce.core.RedisClient; +import io.lettuce.core.RedisURI; +import io.lettuce.core.TestSupport; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.async.RedisAsyncCommands; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.test.CanConnect; +import io.lettuce.test.LettuceExtension; +import io.lettuce.test.Wait; +import io.lettuce.test.settings.TestSettings; + +/** + * @author Will Glozer + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +class RunOnlyOnceServerCommandIntegrationTests extends TestSupport { + + private final RedisClient client; + private final StatefulRedisConnection connection; + private final RedisCommands redis; + + @Inject + RunOnlyOnceServerCommandIntegrationTests(RedisClient client, StatefulRedisConnection connection) { + + this.client = client; + this.connection = connection; + this.redis = connection.sync(); + } + + /** + * Executed in order: 1 this test causes a stop of the redis. This means, you cannot repeat the test without restarting your + * redis. + */ + @Test + void debugSegfault() { + + assumeTrue(CanConnect.to(host(), port(1))); + + final RedisAsyncCommands commands = client.connect(RedisURI.Builder.redis(host(), port(1)).build()) + .async(); + try { + commands.debugSegfault(); + + Wait.untilTrue(() -> !commands.getStatefulConnection().isOpen()).waitOrTimeout(); + assertThat(commands.getStatefulConnection().isOpen()).isFalse(); + } finally { + commands.getStatefulConnection().close(); + } + } + + /** + * Executed in order: 2 + */ + @Test + void migrate() { + + assumeTrue(CanConnect.to(host(), port(2))); + + redis.set(key, value); + + String result = redis.migrate("localhost", TestSettings.port(2), key, 0, 10); + assertThat(result).isEqualTo("OK"); + } + + /** + * Executed in order: 3 + */ + @Test + void migrateCopyReplace() { + + assumeTrue(CanConnect.to(host(), port(2))); + + redis.set(key, value); + redis.set("key1", value); + redis.set("key2", value); + + String result = redis.migrate("localhost", TestSettings.port(2), 0, 10, MigrateArgs.Builder.keys(key).copy().replace()); + assertThat(result).isEqualTo("OK"); + + result = redis.migrate("localhost", TestSettings.port(2), 0, 10, MigrateArgs.Builder + .keys(Arrays.asList("key1", "key2")).replace()); + assertThat(result).isEqualTo("OK"); + } + + /** + * Executed in order: 4 this test causes a stop of the redis. This means, you cannot repeat the test without restarting your + * redis. 
+ */ + @Test + void shutdown() { + + assumeTrue(CanConnect.to(host(), port(2))); + + final RedisAsyncCommands commands = client.connect(RedisURI.Builder.redis(host(), port(2)).build()) + .async(); + try { + + commands.shutdown(true); + commands.shutdown(false); + Wait.untilTrue(() -> !commands.getStatefulConnection().isOpen()).waitOrTimeout(); + + assertThat(commands.getStatefulConnection().isOpen()).isFalse(); + + } finally { + commands.getStatefulConnection().close(); + } + } +} diff --git a/src/test/java/io/lettuce/core/commands/ScriptingCommandIntegrationTests.java b/src/test/java/io/lettuce/core/commands/ScriptingCommandIntegrationTests.java new file mode 100644 index 0000000000..a89d36072a --- /dev/null +++ b/src/test/java/io/lettuce/core/commands/ScriptingCommandIntegrationTests.java @@ -0,0 +1,179 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.commands; + +import static io.lettuce.core.ScriptOutputType.*; +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; + +import java.util.Collections; +import java.util.List; +import java.util.concurrent.TimeUnit; + +import javax.inject.Inject; + +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.TestInstance; +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.core.RedisClient; +import io.lettuce.core.RedisException; +import io.lettuce.core.RedisNoScriptException; +import io.lettuce.core.TestSupport; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.test.LettuceExtension; +import io.lettuce.test.Wait; + +/** + * @author Will Glozer + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +@TestInstance(TestInstance.Lifecycle.PER_CLASS) +public class ScriptingCommandIntegrationTests extends TestSupport { + + private final RedisClient client; + private final RedisCommands redis; + + @Inject + protected ScriptingCommandIntegrationTests(RedisClient client, RedisCommands redis) { + this.client = client; + this.redis = redis; + } + + @BeforeEach + void setUp() { + this.redis.flushall(); + } + + @AfterEach + void tearDown() { + + Wait.untilNoException(() -> { + try { + redis.scriptKill(); + } catch (RedisException e) { + // ignore + } + redis.ping(); + }).waitOrTimeout(); + + } + + @Test + void eval() { + assertThat((Boolean) redis.eval("return 1 + 1 == 4", BOOLEAN)).isEqualTo(false); + assertThat((Number) redis.eval("return 1 + 1", INTEGER)).isEqualTo(2L); + assertThat((String) redis.eval("return {ok='status'}", STATUS)).isEqualTo("status"); + assertThat((String) redis.eval("return 'one'", VALUE)).isEqualTo("one"); + assertThat((List) redis.eval("return {1, 'one', {2}}", MULTI)).isEqualTo(list(1L, "one", list(2L))); + + assertThatThrownBy(() -> redis.eval("return {err='Oops!'}", 
STATUS)).hasMessageContaining("Oops!"); + } + + @Test + void evalWithSingleKey() { + assertThat((List) redis.eval("return KEYS[1]", MULTI, "one")).isEqualTo(list("one")); + } + + @Test + void evalWithNonAsciiChar() { + assertThat((Object) redis.eval("return 'füö'", VALUE, "one")).isEqualTo("füö"); + } + + @Test + void evalReturningNullInMulti() { + assertThat((List) redis.eval("return nil", MULTI, "one")).isEqualTo(Collections.singletonList(null)); + } + + @Test + void evalWithKeys() { + assertThat((List) redis.eval("return {KEYS[1], KEYS[2]}", MULTI, "one", "two")).isEqualTo(list("one", "two")); + } + + @Test + void evalWithArgs() { + String[] keys = new String[0]; + assertThat((List) redis.eval("return {ARGV[1], ARGV[2]}", MULTI, keys, "a", "b")).isEqualTo(list("a", "b")); + } + + @Test + void evalsha() { + redis.scriptFlush(); + String script = "return 1 + 1"; + String digest = redis.digest(script); + assertThat((Number) redis.eval(script, INTEGER)).isEqualTo(2L); + assertThat((Number) redis.evalsha(digest, INTEGER)).isEqualTo(2L); + + assertThatThrownBy(() -> redis.evalsha(redis.digest("return 1 + 1 == 4"), INTEGER)).isInstanceOf( + RedisNoScriptException.class).hasMessageContaining("NOSCRIPT No matching script. Please use EVAL."); + } + + @Test + void evalshaWithMulti() { + redis.scriptFlush(); + String digest = redis.digest("return {1234, 5678}"); + + assertThatThrownBy(() -> redis.evalsha(digest, MULTI)).isInstanceOf(RedisNoScriptException.class).hasMessageContaining( + "NOSCRIPT No matching script. Please use EVAL."); + } + + @Test + void evalshaWithKeys() { + redis.scriptFlush(); + String digest = redis.scriptLoad("return {KEYS[1], KEYS[2]}"); + assertThat((Object) redis.evalsha(digest, MULTI, "one", "two")).isEqualTo(list("one", "two")); + } + + @Test + void evalshaWithArgs() { + redis.scriptFlush(); + String digest = redis.scriptLoad("return {ARGV[1], ARGV[2]}"); + String[] keys = new String[0]; + assertThat((Object) redis.evalsha(digest, MULTI, keys, "a", "b")).isEqualTo(list("a", "b")); + } + + @Test + void script() throws InterruptedException { + assertThat(redis.scriptFlush()).isEqualTo("OK"); + + String script1 = "return 1 + 1"; + String digest1 = redis.digest(script1); + String script2 = "return 1 + 1 == 4"; + String digest2 = redis.digest(script2); + + assertThat(redis.scriptExists(digest1, digest2)).isEqualTo(list(false, false)); + assertThat(redis.scriptLoad(script1)).isEqualTo(digest1); + assertThat((Object) redis.evalsha(digest1, INTEGER)).isEqualTo(2L); + assertThat(redis.scriptExists(digest1, digest2)).isEqualTo(list(true, false)); + + assertThat(redis.scriptFlush()).isEqualTo("OK"); + assertThat(redis.scriptExists(digest1, digest2)).isEqualTo(list(false, false)); + + redis.configSet("lua-time-limit", "10"); + StatefulRedisConnection connection = client.connect(); + try { + connection.async().eval("while true do end", STATUS, new String[0]).await(100, TimeUnit.MILLISECONDS); + + assertThat(redis.scriptKill()).isEqualTo("OK"); + } finally { + connection.close(); + } + } +} diff --git a/src/test/java/io/lettuce/core/commands/ServerCommandIntegrationTests.java b/src/test/java/io/lettuce/core/commands/ServerCommandIntegrationTests.java new file mode 100644 index 0000000000..e02f2cc9a1 --- /dev/null +++ b/src/test/java/io/lettuce/core/commands/ServerCommandIntegrationTests.java @@ -0,0 +1,434 @@ +/* + * Copyright 2011-2020 the original author or authors. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.commands; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; +import static org.hamcrest.CoreMatchers.containsString; +import static org.junit.Assert.assertThat; +import static org.junit.jupiter.api.Assumptions.assumeFalse; +import static org.junit.jupiter.api.Assumptions.assumeTrue; + +import java.util.Date; +import java.util.List; +import java.util.concurrent.TimeUnit; +import java.util.regex.Matcher; +import java.util.regex.Pattern; + +import javax.inject.Inject; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Disabled; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.TestInstance; +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.core.*; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.models.command.CommandDetail; +import io.lettuce.core.models.command.CommandDetailParser; +import io.lettuce.core.models.role.RedisInstance; +import io.lettuce.core.models.role.RoleParser; +import io.lettuce.core.protocol.CommandType; +import io.lettuce.test.LettuceExtension; +import io.lettuce.test.Wait; +import io.lettuce.test.condition.EnabledOnCommand; +import io.lettuce.test.condition.RedisConditions; +import io.lettuce.test.settings.TestSettings; + +/** + * @author Will Glozer + * @author Mark Paluch + * @author Zhang Jessey + */ +@ExtendWith(LettuceExtension.class) +@TestInstance(TestInstance.Lifecycle.PER_CLASS) +public class ServerCommandIntegrationTests extends TestSupport { + + private final RedisClient client; + private final RedisCommands redis; + + @Inject + protected ServerCommandIntegrationTests(RedisClient client, RedisCommands redis) { + this.client = client; + this.redis = redis; + } + + @BeforeEach + void setUp() { + this.redis.flushall(); + } + + @Test + void bgrewriteaof() { + String msg = "Background append only file rewriting"; + assertThat(redis.bgrewriteaof(), containsString(msg)); + } + + @Test + void bgsave() { + + Wait.untilTrue(this::noSaveInProgress).waitOrTimeout(); + + String msg = "Background saving started"; + assertThat(redis.bgsave()).isEqualTo(msg); + } + + @Test + void clientGetSetname() { + assertThat(redis.clientGetname()).isNull(); + assertThat(redis.clientSetname("test")).isEqualTo("OK"); + assertThat(redis.clientGetname()).isEqualTo("test"); + assertThat(redis.clientSetname("")).isEqualTo("OK"); + assertThat(redis.clientGetname()).isNull(); + } + + @Test + void clientPause() { + assertThat(redis.clientPause(10)).isEqualTo("OK"); + } + + @Test + void clientKill() { + Pattern p = Pattern.compile(".*addr=([^ ]+).*"); + String clients = redis.clientList(); + Matcher m = p.matcher(clients); + + assertThat(m.lookingAt()).isTrue(); + assertThat(redis.clientKill(m.group(1))).isEqualTo("OK"); + } + + @Test + void clientKillExtended() { + + RedisCommands connection2 = 
client.connect().sync(); + connection2.clientSetname("killme"); + + Pattern p = Pattern.compile("^.*addr=([^ ]+).*name=killme.*$", Pattern.MULTILINE | Pattern.DOTALL); + String clients = redis.clientList(); + Matcher m = p.matcher(clients); + + assertThat(m.matches()).isTrue(); + String addr = m.group(1); + assertThat(redis.clientKill(KillArgs.Builder.addr(addr).skipme())).isGreaterThan(0); + + assertThat(redis.clientKill(KillArgs.Builder.id(4234))).isEqualTo(0); + assertThat(redis.clientKill(KillArgs.Builder.typeSlave().id(4234))).isEqualTo(0); + assertThat(redis.clientKill(KillArgs.Builder.typeNormal().id(4234))).isEqualTo(0); + assertThat(redis.clientKill(KillArgs.Builder.typePubsub().id(4234))).isEqualTo(0); + + connection2.getStatefulConnection().close(); + } + + @Test + void clientList() { + assertThat(redis.clientList().contains("addr=")).isTrue(); + } + + @Test + void clientUnblock() throws InterruptedException { + + try { + redis.clientUnblock(0, UnblockType.ERROR); + } catch (Exception e) { + assumeFalse(true, e.getMessage()); + } + + StatefulRedisConnection connection2 = client.connect(); + connection2.sync().clientSetname("blocked"); + + RedisFuture> blocked = connection2.async().brpop(100000, "foo"); + + Pattern p = Pattern.compile("^.*id=([^ ]+).*name=blocked.*$", Pattern.MULTILINE | Pattern.DOTALL); + String clients = redis.clientList(); + Matcher m = p.matcher(clients); + + assertThat(m.matches()).isTrue(); + String id = m.group(1); + + Long unblocked = redis.clientUnblock(Long.parseLong(id), UnblockType.ERROR); + assertThat(unblocked).isEqualTo(1); + + blocked.await(1, TimeUnit.SECONDS); + assertThat(blocked.getError()).contains("UNBLOCKED client unblocked"); + } + + @Test + void clientId() { + assertThat(redis.clientId()).isNotNull(); + } + + @Test + void commandCount() { + assertThat(redis.commandCount()).isGreaterThan(100); + } + + @Test + void command() { + + List result = redis.command(); + + assertThat(result.size()).isGreaterThan(100); + + List commands = CommandDetailParser.parse(result); + assertThat(commands).hasSameSizeAs(result); + } + + @Test + void commandInfo() { + + List result = redis.commandInfo(CommandType.GETRANGE, CommandType.SET); + + assertThat(result.size()).isEqualTo(2); + + List commands = CommandDetailParser.parse(result); + assertThat(commands).hasSameSizeAs(result); + + result = redis.commandInfo("a missing command"); + + assertThat(result.size()).isEqualTo(0); + } + + @Test + void configGet() { + assertThat(redis.configGet("maxmemory")).containsEntry("maxmemory", "0"); + } + + @Test + void configResetstat() { + redis.get(key); + redis.get(key); + assertThat(redis.configResetstat()).isEqualTo("OK"); + assertThat(redis.info().contains("keyspace_misses:0")).isTrue(); + } + + @Test + void configSet() { + String maxmemory = redis.configGet("maxmemory").get("maxmemory"); + assertThat(redis.configSet("maxmemory", "1024")).isEqualTo("OK"); + assertThat(redis.configGet("maxmemory")).containsEntry("maxmemory", "1024"); + redis.configSet("maxmemory", maxmemory); + } + + @Test + void configRewrite() { + + String result = redis.configRewrite(); + assertThat(result).isEqualTo("OK"); + } + + @Test + void dbsize() { + assertThat(redis.dbsize()).isEqualTo(0); + redis.set(key, value); + assertThat(redis.dbsize()).isEqualTo(1); + } + + @Test + @Disabled("Causes instabilities") + void debugCrashAndRecover() { + try { + assertThat(redis.debugCrashAndRecover(1L)).isNotNull(); + } catch (Exception e) { + assertThat(e).hasMessageContaining("ERR failed to restart the 
server"); + } + } + + @Test + void debugHtstats() { + redis.set(key, value); + String result = redis.debugHtstats(0); + assertThat(result).contains("Dictionary"); + } + + @Test + void debugObject() { + redis.set(key, value); + redis.debugObject(key); + } + + @Test + void debugReload() { + assertThat(redis.debugReload()).isEqualTo("OK"); + } + + @Test + @Disabled("Causes instabilities") + void debugRestart() { + try { + assertThat(redis.debugRestart(1L)).isNotNull(); + } catch (Exception e) { + assertThat(e).hasMessageContaining("ERR failed to restart the server"); + } + } + + @Test + void debugSdslen() { + redis.set(key, value); + String result = redis.debugSdslen(key); + assertThat(result).contains("key_sds_len"); + } + + @Test + void flushall() { + redis.set(key, value); + assertThat(redis.flushall()).isEqualTo("OK"); + assertThat(redis.get(key)).isNull(); + } + + @Test + void flushallAsync() { + + assumeTrue(RedisConditions.of(redis).hasVersionGreaterOrEqualsTo("3.4")); + + redis.set(key, value); + assertThat(redis.flushallAsync()).isEqualTo("OK"); + assertThat(redis.get(key)).isNull(); + } + + @Test + void flushdb() { + redis.set(key, value); + assertThat(redis.flushdb()).isEqualTo("OK"); + assertThat(redis.get(key)).isNull(); + } + + @Test + void flushdbAsync() { + + assumeTrue(RedisConditions.of(redis).hasVersionGreaterOrEqualsTo("3.4")); + + redis.set(key, value); + redis.select(1); + redis.set(key, value + "X"); + assertThat(redis.flushdbAsync()).isEqualTo("OK"); + assertThat(redis.get(key)).isNull(); + redis.select(0); + assertThat(redis.get(key)).isEqualTo(value); + } + + @Test + void info() { + assertThat(redis.info().contains("redis_version")).isTrue(); + assertThat(redis.info("server").contains("redis_version")).isTrue(); + } + + @Test + void lastsave() { + Date start = new Date(System.currentTimeMillis() / 1000); + assertThat(start.compareTo(redis.lastsave()) <= 0).isTrue(); + } + + @Test + @EnabledOnCommand("MEMORY") + void memoryUsage() { + + redis.set("foo", "bar"); + Long usedMemory = redis.memoryUsage("foo"); + + assertThat(usedMemory).isGreaterThanOrEqualTo(3); + } + + @Test + void save() { + + Wait.untilTrue(this::noSaveInProgress).waitOrTimeout(); + + assertThat(redis.save()).isEqualTo("OK"); + } + + @Test + void slaveof() { + + assertThat(redis.slaveof(TestSettings.host(), 0)).isEqualTo("OK"); + redis.slaveofNoOne(); + } + + @Test + void slaveofEmptyHost() { + assertThatThrownBy(() -> redis.slaveof("", 0)).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void role() { + + List objects = redis.role(); + + assertThat(objects.get(0)).isEqualTo("master"); + assertThat(objects.get(1).getClass()).isEqualTo(Long.class); + + RedisInstance redisInstance = RoleParser.parse(objects); + assertThat(redisInstance.getRole()).isEqualTo(RedisInstance.Role.MASTER); + } + + @Test + void slaveofNoOne() { + assertThat(redis.slaveofNoOne()).isEqualTo("OK"); + } + + @Test + @SuppressWarnings("unchecked") + void slowlog() { + + long start = System.currentTimeMillis() / 1000; + + assertThat(redis.configSet("slowlog-log-slower-than", "0")).isEqualTo("OK"); + assertThat(redis.slowlogReset()).isEqualTo("OK"); + redis.set(key, value); + + List log = redis.slowlogGet(); + assumeTrue(!log.isEmpty()); + + List entry = (List) log.get(0); + assertThat(entry.size()).isGreaterThanOrEqualTo(4); + assertThat(entry.get(0) instanceof Long).isTrue(); + assertThat((Long) entry.get(1) >= start).isTrue(); + assertThat(entry.get(2) instanceof Long).isTrue(); + 
assertThat(entry.get(3)).isEqualTo(list("SET", key, value)); + + assertThat(redis.slowlogGet(1)).hasSize(1); + assertThat((long) redis.slowlogLen()).isGreaterThanOrEqualTo(1); + + redis.configSet("slowlog-log-slower-than", "10000"); + } + + @Test + @EnabledOnCommand("SWAPDB") + void swapdb() { + + redis.select(1); + redis.set(key, "value1"); + + redis.select(2); + redis.set(key, "value2"); + assertThat(redis.get(key)).isEqualTo("value2"); + + redis.swapdb(1, 2); + redis.select(1); + assertThat(redis.get(key)).isEqualTo("value2"); + + redis.select(2); + assertThat(redis.get(key)).isEqualTo("value1"); + } + + private boolean noSaveInProgress() { + + String info = redis.info(); + + return !info.contains("aof_rewrite_in_progress:1") && !info.contains("rdb_bgsave_in_progress:1"); + } +} diff --git a/src/test/java/io/lettuce/core/commands/SetCommandIntegrationTests.java b/src/test/java/io/lettuce/core/commands/SetCommandIntegrationTests.java new file mode 100644 index 0000000000..2a20bf2cae --- /dev/null +++ b/src/test/java/io/lettuce/core/commands/SetCommandIntegrationTests.java @@ -0,0 +1,387 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.commands; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; +import static org.junit.jupiter.api.Assumptions.assumeTrue; + +import java.util.HashSet; +import java.util.List; +import java.util.Set; +import java.util.TreeSet; + +import javax.inject.Inject; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.TestInstance; +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.core.*; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.test.LettuceExtension; +import io.lettuce.test.ListStreamingAdapter; +import io.lettuce.test.condition.RedisConditions; + +/** + * @author Will Glozer + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +@TestInstance(TestInstance.Lifecycle.PER_CLASS) +public class SetCommandIntegrationTests extends TestSupport { + + private final RedisCommands redis; + + @Inject + protected SetCommandIntegrationTests(RedisCommands redis) { + this.redis = redis; + } + + @BeforeEach + void setUp() { + this.redis.flushall(); + } + + @Test + void sadd() { + assertThat(redis.sadd(key, "a")).isEqualTo(1L); + assertThat(redis.sadd(key, "a")).isEqualTo(0); + assertThat(redis.smembers(key)).isEqualTo(set("a")); + assertThat(redis.sadd(key, "b", "c")).isEqualTo(2); + assertThat(redis.smembers(key)).isEqualTo(set("a", "b", "c")); + } + + @Test + void scard() { + assertThat((long) redis.scard(key)).isEqualTo(0); + redis.sadd(key, "a"); + assertThat((long) redis.scard(key)).isEqualTo(1); + } + + @Test + void sdiff() { + setupSet(); + assertThat(redis.sdiff("key1", "key2", "key3")).isEqualTo(set("b", "d")); + } + + @Test + void sdiffStreaming() { + setupSet(); + + 
ListStreamingAdapter streamingAdapter = new ListStreamingAdapter<>(); + + Long count = redis.sdiff(streamingAdapter, "key1", "key2", "key3"); + assertThat(count.intValue()).isEqualTo(2); + assertThat(streamingAdapter.getList()).containsOnly("b", "d"); + } + + @Test + void sdiffstore() { + setupSet(); + assertThat(redis.sdiffstore("newset", "key1", "key2", "key3")).isEqualTo(2); + assertThat(redis.smembers("newset")).containsOnly("b", "d"); + } + + @Test + void sinter() { + setupSet(); + assertThat(redis.sinter("key1", "key2", "key3")).isEqualTo(set("c")); + } + + @Test + void sinterStreaming() { + setupSet(); + + ListStreamingAdapter streamingAdapter = new ListStreamingAdapter<>(); + Long count = redis.sinter(streamingAdapter, "key1", "key2", "key3"); + + assertThat(count.intValue()).isEqualTo(1); + assertThat(streamingAdapter.getList()).containsExactly("c"); + } + + @Test + void sinterstore() { + setupSet(); + assertThat(redis.sinterstore("newset", "key1", "key2", "key3")).isEqualTo(1); + assertThat(redis.smembers("newset")).containsExactly("c"); + } + + @Test + void sismember() { + assertThat(redis.sismember(key, "a")).isFalse(); + redis.sadd(key, "a"); + assertThat(redis.sismember(key, "a")).isTrue(); + } + + @Test + void smove() { + redis.sadd(key, "a", "b", "c"); + assertThat(redis.smove(key, "key1", "d")).isFalse(); + assertThat(redis.smove(key, "key1", "a")).isTrue(); + assertThat(redis.smembers(key)).isEqualTo(set("b", "c")); + assertThat(redis.smembers("key1")).isEqualTo(set("a")); + } + + @Test + void smembers() { + setupSet(); + assertThat(redis.smembers(key)).isEqualTo(set("a", "b", "c")); + } + + @Test + void smembersStreaming() { + setupSet(); + ListStreamingAdapter streamingAdapter = new ListStreamingAdapter<>(); + Long count = redis.smembers(streamingAdapter, key); + assertThat(count.longValue()).isEqualTo(3); + assertThat(streamingAdapter.getList()).containsOnly("a", "b", "c"); + } + + @Test + void spop() { + assertThat(redis.spop(key)).isNull(); + redis.sadd(key, "a", "b", "c"); + String rand = redis.spop(key); + assertThat(set("a", "b", "c").contains(rand)).isTrue(); + assertThat(redis.smembers(key).contains(rand)).isFalse(); + } + + @Test + void spopMultiple() { + + assumeTrue(RedisConditions.of(redis).hasCommandArity("SPOP", -2)); + + assertThat(redis.spop(key)).isNull(); + redis.sadd(key, "a", "b", "c"); + Set rand = redis.spop(key, 2); + assertThat(rand).hasSize(2); + assertThat(set("a", "b", "c").containsAll(rand)).isTrue(); + } + + @Test + void srandmember() { + assertThat(redis.spop(key)).isNull(); + redis.sadd(key, "a", "b", "c", "d"); + assertThat(set("a", "b", "c", "d").contains(redis.srandmember(key))).isTrue(); + assertThat(redis.smembers(key)).isEqualTo(set("a", "b", "c", "d")); + List rand = redis.srandmember(key, 3); + assertThat(rand).hasSize(3); + assertThat(set("a", "b", "c", "d").containsAll(rand)).isTrue(); + List randWithDuplicates = redis.srandmember(key, -10); + assertThat(randWithDuplicates).hasSize(10); + } + + @Test + void srandmemberStreaming() { + assertThat(redis.spop(key)).isNull(); + redis.sadd(key, "a", "b", "c", "d"); + + ListStreamingAdapter streamingAdapter = new ListStreamingAdapter<>(); + + Long count = redis.srandmember(streamingAdapter, key, 2); + + assertThat(count.longValue()).isEqualTo(2); + + assertThat(set("a", "b", "c", "d").containsAll(streamingAdapter.getList())).isTrue(); + + } + + @Test + void srem() { + redis.sadd(key, "a", "b", "c"); + assertThat(redis.srem(key, "d")).isEqualTo(0); + assertThat(redis.srem(key, 
"b")).isEqualTo(1); + assertThat(redis.smembers(key)).isEqualTo(set("a", "c")); + assertThat(redis.srem(key, "a", "c")).isEqualTo(2); + assertThat(redis.smembers(key)).isEqualTo(set()); + } + + @Test + void sremEmpty() { + assertThatThrownBy(() -> redis.srem(key)).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void sremNulls() { + assertThatThrownBy(() -> redis.srem(key, new String[0])).isInstanceOf(IllegalArgumentException. class); + } + + @Test + void sunion() { + setupSet(); + assertThat(redis.sunion("key1", "key2", "key3")).isEqualTo(set("a", "b", "c", "d", "e")); + } + + @Test + void sunionEmpty() { + assertThatThrownBy(() -> redis.sunion()).isInstanceOf(IllegalArgumentException. class); + } + + @Test + void sunionStreaming() { + setupSet(); + + ListStreamingAdapter adapter = new ListStreamingAdapter<>(); + + Long count = redis.sunion(adapter, "key1", "key2", "key3"); + + assertThat(count.longValue()).isEqualTo(5); + + assertThat(new TreeSet<>(adapter.getList())).isEqualTo(new TreeSet<>(list("c", "a", "b", "e", "d"))); + } + + @Test + void sunionstore() { + setupSet(); + assertThat(redis.sunionstore("newset", "key1", "key2", "key3")).isEqualTo(5); + assertThat(redis.smembers("newset")).isEqualTo(set("a", "b", "c", "d", "e")); + } + + @Test + void sscan() { + redis.sadd(key, value); + ValueScanCursor cursor = redis.sscan(key); + + assertThat(cursor.getCursor()).isEqualTo("0"); + assertThat(cursor.isFinished()).isTrue(); + assertThat(cursor.getValues()).isEqualTo(list(value)); + } + + @Test + void sscanWithCursor() { + redis.sadd(key, value); + ValueScanCursor cursor = redis.sscan(key, ScanCursor.INITIAL); + + assertThat(cursor.getValues()).hasSize(1); + assertThat(cursor.getCursor()).isEqualTo("0"); + assertThat(cursor.isFinished()).isTrue(); + } + + @Test + void sscanWithCursorAndArgs() { + redis.sadd(key, value); + + ValueScanCursor cursor = redis.sscan(key, ScanCursor.INITIAL, ScanArgs.Builder.limit(5)); + + assertThat(cursor.getValues()).hasSize(1); + assertThat(cursor.getCursor()).isEqualTo("0"); + assertThat(cursor.isFinished()).isTrue(); + + } + + @Test + void sscanStreaming() { + redis.sadd(key, value); + ListStreamingAdapter adapter = new ListStreamingAdapter<>(); + + StreamScanCursor cursor = redis.sscan(adapter, key); + + assertThat(cursor.getCount()).isEqualTo(1); + assertThat(cursor.getCursor()).isEqualTo("0"); + assertThat(cursor.isFinished()).isTrue(); + assertThat(adapter.getList()).isEqualTo(list(value)); + } + + @Test + void sscanStreamingWithCursor() { + redis.sadd(key, value); + ListStreamingAdapter adapter = new ListStreamingAdapter<>(); + + StreamScanCursor cursor = redis.sscan(adapter, key, ScanCursor.INITIAL); + + assertThat(cursor.getCount()).isEqualTo(1); + assertThat(cursor.getCursor()).isEqualTo("0"); + assertThat(cursor.isFinished()).isTrue(); + } + + @Test + void sscanStreamingWithCursorAndArgs() { + redis.sadd(key, value); + ListStreamingAdapter adapter = new ListStreamingAdapter<>(); + + StreamScanCursor cursor = redis.sscan(adapter, key, ScanCursor.INITIAL, ScanArgs.Builder.limit(5)); + + assertThat(cursor.getCount()).isEqualTo(1); + assertThat(cursor.getCursor()).isEqualTo("0"); + assertThat(cursor.isFinished()).isTrue(); + } + + @Test + void sscanStreamingArgs() { + redis.sadd(key, value); + ListStreamingAdapter adapter = new ListStreamingAdapter<>(); + + StreamScanCursor cursor = redis.sscan(adapter, key, ScanArgs.Builder.limit(100).match("*")); + + assertThat(cursor.getCount()).isEqualTo(1); + 
assertThat(cursor.getCursor()).isEqualTo("0"); + assertThat(cursor.isFinished()).isTrue(); + assertThat(adapter.getList()).isEqualTo(list(value)); + } + + @Test + void sscanMultiple() { + + Set expect = new HashSet<>(); + Set check = new HashSet<>(); + setup100KeyValues(expect); + + ValueScanCursor cursor = redis.sscan(key, ScanArgs.Builder.limit(5)); + + assertThat(cursor.getCursor()).isNotNull().isNotEqualTo("0"); + assertThat(cursor.isFinished()).isFalse(); + + check.addAll(cursor.getValues()); + + while (!cursor.isFinished()) { + cursor = redis.sscan(key, cursor); + check.addAll(cursor.getValues()); + } + + assertThat(new TreeSet<>(check)).isEqualTo(new TreeSet<>(expect)); + } + + @Test + void scanMatch() { + + Set expect = new HashSet<>(); + setup100KeyValues(expect); + + ValueScanCursor cursor = redis.sscan(key, ScanArgs.Builder.limit(200).match("value1*")); + + assertThat(cursor.getCursor()).isEqualTo("0"); + assertThat(cursor.isFinished()).isTrue(); + + assertThat(cursor.getValues()).hasSize(11); + } + + void setup100KeyValues(Set expect) { + for (int i = 0; i < 100; i++) { + redis.sadd(key, value + i); + expect.add(value + i); + } + } + + private void setupSet() { + redis.sadd(key, "a", "b", "c"); + redis.sadd("key1", "a", "b", "c", "d"); + redis.sadd("key2", "c"); + redis.sadd("key3", "a", "c", "e"); + } + +} diff --git a/src/test/java/io/lettuce/core/commands/SortCommandIntegrationTests.java b/src/test/java/io/lettuce/core/commands/SortCommandIntegrationTests.java new file mode 100644 index 0000000000..9e5aaaae9a --- /dev/null +++ b/src/test/java/io/lettuce/core/commands/SortCommandIntegrationTests.java @@ -0,0 +1,117 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.commands; + +import static io.lettuce.core.SortArgs.Builder.*; +import static org.assertj.core.api.Assertions.assertThat; + +import javax.inject.Inject; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.TestInstance; +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.core.TestSupport; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.test.LettuceExtension; +import io.lettuce.test.ListStreamingAdapter; + +/** + * @author Will Glozer + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +@TestInstance(TestInstance.Lifecycle.PER_CLASS) +public class SortCommandIntegrationTests extends TestSupport { + + private final RedisCommands redis; + + @Inject + protected SortCommandIntegrationTests(RedisCommands redis) { + this.redis = redis; + } + + @BeforeEach + void setUp() { + this.redis.flushall(); + } + + @Test + void sort() { + redis.rpush(key, "3", "2", "1"); + assertThat(redis.sort(key)).isEqualTo(list("1", "2", "3")); + assertThat(redis.sort(key, asc())).isEqualTo(list("1", "2", "3")); + } + + @Test + void sortStreaming() { + redis.rpush(key, "3", "2", "1"); + + ListStreamingAdapter streamingAdapter = new ListStreamingAdapter<>(); + Long count = redis.sort(streamingAdapter, key); + + assertThat(count.longValue()).isEqualTo(3); + assertThat(streamingAdapter.getList()).isEqualTo(list("1", "2", "3")); + streamingAdapter.getList().clear(); + + count = redis.sort(streamingAdapter, key, desc()); + assertThat(count.longValue()).isEqualTo(3); + assertThat(streamingAdapter.getList()).isEqualTo(list("3", "2", "1")); + } + + @Test + void sortAlpha() { + redis.rpush(key, "A", "B", "C"); + assertThat(redis.sort(key, alpha().desc())).isEqualTo(list("C", "B", "A")); + } + + @Test + void sortBy() { + redis.rpush(key, "foo", "bar", "baz"); + redis.set("weight_foo", "8"); + redis.set("weight_bar", "4"); + redis.set("weight_baz", "2"); + assertThat(redis.sort(key, by("weight_*"))).isEqualTo(list("baz", "bar", "foo")); + } + + @Test + void sortDesc() { + redis.rpush(key, "1", "2", "3"); + assertThat(redis.sort(key, desc())).isEqualTo(list("3", "2", "1")); + } + + @Test + void sortGet() { + redis.rpush(key, "1", "2"); + redis.set("obj_1", "foo"); + redis.set("obj_2", "bar"); + assertThat(redis.sort(key, get("obj_*"))).isEqualTo(list("foo", "bar")); + } + + @Test + void sortLimit() { + redis.rpush(key, "3", "2", "1"); + assertThat(redis.sort(key, limit(1, 2))).isEqualTo(list("2", "3")); + } + + @Test + void sortStore() { + redis.rpush("one", "1", "2", "3"); + assertThat(redis.sortStore("one", desc(), "two")).isEqualTo(3); + assertThat(redis.lrange("two", 0, -1)).isEqualTo(list("3", "2", "1")); + } +} diff --git a/src/test/java/io/lettuce/core/commands/SortedSetCommandIntegrationTests.java b/src/test/java/io/lettuce/core/commands/SortedSetCommandIntegrationTests.java new file mode 100644 index 0000000000..eac688946a --- /dev/null +++ b/src/test/java/io/lettuce/core/commands/SortedSetCommandIntegrationTests.java @@ -0,0 +1,798 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.commands; + +import static io.lettuce.core.Range.Boundary.including; +import static io.lettuce.core.ZStoreArgs.Builder.max; +import static io.lettuce.core.ZStoreArgs.Builder.min; +import static io.lettuce.core.ZStoreArgs.Builder.sum; +import static io.lettuce.core.ZStoreArgs.Builder.weights; +import static java.lang.Double.NEGATIVE_INFINITY; +import static java.lang.Double.POSITIVE_INFINITY; +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; +import static org.assertj.core.data.Offset.offset; + +import java.util.HashSet; +import java.util.Set; + +import javax.inject.Inject; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.TestInstance; +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.core.*; +import io.lettuce.core.Range.Boundary; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.test.LettuceExtension; +import io.lettuce.test.ListStreamingAdapter; +import io.lettuce.test.condition.EnabledOnCommand; + +/** + * @author Will Glozer + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +@TestInstance(TestInstance.Lifecycle.PER_CLASS) +public class SortedSetCommandIntegrationTests extends TestSupport { + + private final RedisCommands redis; + + @Inject + protected SortedSetCommandIntegrationTests(RedisCommands redis) { + this.redis = redis; + } + + @BeforeEach + void setUp() { + this.redis.flushall(); + } + + @Test + void zadd() { + assertThat(redis.zadd(key, 1.0, "a")).isEqualTo(1); + assertThat(redis.zadd(key, 1.0, "a")).isEqualTo(0); + + assertThat(redis.zrange(key, 0, -1)).isEqualTo(list("a")); + assertThat(redis.zadd(key, 2.0, "b", 3.0, "c")).isEqualTo(2); + assertThat(redis.zrange(key, 0, -1)).isEqualTo(list("a", "b", "c")); + } + + @Test + void zaddScoredValue() { + assertThat(redis.zadd(key, ScoredValue.fromNullable(1.0, "a"))).isEqualTo(1); + assertThat(redis.zadd(key, ScoredValue.fromNullable(1.0, "a"))).isEqualTo(0); + + assertThat(redis.zrange(key, 0, -1)).isEqualTo(list("a")); + assertThat(redis.zadd(key, ScoredValue.fromNullable(2.0, "b"), ScoredValue.fromNullable(3.0, "c"))).isEqualTo(2); + assertThat(redis.zrange(key, 0, -1)).isEqualTo(list("a", "b", "c")); + } + + @Test + void zaddnx() { + assertThat(redis.zadd(key, 1.0, "a")).isEqualTo(1); + assertThat(redis.zadd(key, ZAddArgs.Builder.nx(), ScoredValue.fromNullable(2.0, "a"))).isEqualTo(0); + + assertThat(redis.zadd(key, ZAddArgs.Builder.nx(), ScoredValue.fromNullable(2.0, "b"))).isEqualTo(1); + + assertThat(redis.zadd(key, ZAddArgs.Builder.nx(), new Object[] { 2.0, "b", 3.0, "c" })).isEqualTo(1); + + assertThat(redis.zrangeWithScores(key, 0, -1)).isEqualTo(svlist(sv(1.0, "a"), sv(2.0, "b"), sv(3.0, "c"))); + } + + @Test + void zaddWrongArguments() { + assertThatThrownBy(() -> redis.zadd(key, 2.0, "b", 3.0)).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void zaddnxWrongArguments() { + assertThatThrownBy(() -> redis.zadd(key, ZAddArgs.Builder.nx(), new Object[] { 2.0, "b", 3.0 
})) + .isInstanceOf(IllegalArgumentException.class); + } + + @Test + void zaddxx() { + assertThat(redis.zadd(key, 1.0, "a")).isEqualTo(1); + assertThat(redis.zadd(key, ZAddArgs.Builder.xx(), 2.0, "a")).isEqualTo(0); + + assertThat(redis.zadd(key, ZAddArgs.Builder.xx(), 2.0, "b")).isEqualTo(0); + + assertThat(redis.zrangeWithScores(key, 0, -1)).isEqualTo(svlist(sv(2.0, "a"))); + } + + @Test + void zaddch() { + assertThat(redis.zadd(key, 1.0, "a")).isEqualTo(1); + assertThat(redis.zadd(key, ZAddArgs.Builder.ch().xx(), 2.0, "a")).isEqualTo(1); + assertThat(redis.zadd(key, ZAddArgs.Builder.ch(), 2.0, "b")).isEqualTo(1); + + assertThat(redis.zrangeWithScores(key, 0, -1)).isEqualTo(svlist(sv(2.0, "a"), sv(2.0, "b"))); + } + + @Test + void zaddincr() { + assertThat(redis.zadd(key, 1.0, "a").longValue()).isEqualTo(1); + assertThat(redis.zaddincr(key, 2.0, "a").longValue()).isEqualTo(3); + + assertThat(redis.zaddincr(key, 2.0, "b").longValue()).isEqualTo(2); + + assertThat(redis.zrangeWithScores(key, 0, -1)).isEqualTo(svlist(sv(2.0, "b"), sv(3.0, "a"))); + } + + @Test + void zaddincrnx() { + assertThat(redis.zaddincr(key, ZAddArgs.Builder.nx(), 2.0, "a").longValue()).isEqualTo(2); + assertThat(redis.zaddincr(key, ZAddArgs.Builder.nx(), 2.0, "a")).isNull(); + } + + @Test + void zaddincrxx() { + assertThat(redis.zaddincr(key, ZAddArgs.Builder.xx(), 2.0, "a")).isNull(); + assertThat(redis.zaddincr(key, ZAddArgs.Builder.nx(), 2.0, "a").longValue()).isEqualTo(2); + assertThat(redis.zaddincr(key, ZAddArgs.Builder.xx(), 2.0, "a").longValue()).isEqualTo(4); + } + + @Test + void zcard() { + assertThat(redis.zcard(key)).isEqualTo(0); + redis.zadd(key, 1.0, "a"); + assertThat(redis.zcard(key)).isEqualTo(1); + } + + @Test + void zcount() { + assertThat(redis.zcount(key, 0, 0)).isEqualTo(0); + + redis.zadd(key, 1.0, "a", 2.0, "b", 2.1, "c"); + + assertThat(redis.zcount(key, 1.0, 3.0)).isEqualTo(3); + assertThat(redis.zcount(key, 1.0, 2.0)).isEqualTo(2); + assertThat(redis.zcount(key, NEGATIVE_INFINITY, POSITIVE_INFINITY)).isEqualTo(3); + + assertThat(redis.zcount(key, "(1.0", "3.0")).isEqualTo(2); + assertThat(redis.zcount(key, "-inf", "+inf")).isEqualTo(3); + + assertThat(redis.zcount(key, Range.create(1.0, 3.0))).isEqualTo(3); + assertThat(redis.zcount(key, Range.create(1.0, 2.0))).isEqualTo(2); + assertThat(redis.zcount(key, Range.create(NEGATIVE_INFINITY, POSITIVE_INFINITY))).isEqualTo(3); + + assertThat(redis.zcount(key, Range.from(Boundary.excluding(1.0), including(3.0)))).isEqualTo(2); + assertThat(redis.zcount(key, Range.unbounded())).isEqualTo(3); + } + + @Test + void zincrby() { + assertThat(redis.zincrby(key, 0.0, "a")).isEqualTo(0, offset(0.1)); + assertThat(redis.zincrby(key, 1.1, "a")).isEqualTo(1.1, offset(0.1)); + assertThat(redis.zscore(key, "a")).isEqualTo(1.1, offset(0.1)); + assertThat(redis.zincrby(key, -1.2, "a")).isEqualTo(-0.1, offset(0.1)); + } + + @Test + @SuppressWarnings({ "unchecked" }) + void zinterstore() { + redis.zadd("zset1", 1.0, "a", 2.0, "b"); + redis.zadd("zset2", 2.0, "a", 3.0, "b", 4.0, "c"); + assertThat(redis.zinterstore(key, "zset1", "zset2")).isEqualTo(2); + assertThat(redis.zrange(key, 0, -1)).isEqualTo(list("a", "b")); + assertThat(redis.zrangeWithScores(key, 0, -1)).isEqualTo(svlist(sv(3.0, "a"), sv(5.0, "b"))); + } + + @Test + @EnabledOnCommand("BZPOPMIN") + void bzpopmin() { + + redis.zadd("zset", 2.0, "a", 3.0, "b", 4.0, "c"); + + assertThat(redis.bzpopmin(1, "zset")).isEqualTo(KeyValue.just("zset", ScoredValue.just(2.0, "a"))); + assertThat(redis.bzpopmin(1, 
"zset2")).isNull(); + } + + @Test + @EnabledOnCommand("BZPOPMAX") + void bzpopmax() { + + redis.zadd("zset", 2.0, "a", 3.0, "b", 4.0, "c"); + + assertThat(redis.bzpopmax(1, "zset")).isEqualTo(KeyValue.just("zset", ScoredValue.just(4.0, "c"))); + assertThat(redis.bzpopmax(1, "zset2")).isNull(); + } + + @Test + @EnabledOnCommand("ZPOPMIN") + void zpopmin() { + + redis.zadd("zset", 2.0, "a", 3.0, "b", 4.0, "c"); + + assertThat(redis.zpopmin("zset")).isEqualTo(ScoredValue.just(2.0, "a")); + assertThat(redis.zpopmin("zset", 2)).contains(ScoredValue.just(3.0, "b"), ScoredValue.just(4.0, "c")); + assertThat(redis.zpopmin("foo")).isEqualTo(ScoredValue.empty()); + } + + @Test + @EnabledOnCommand("ZPOPMAX") + void zpopmax() { + + redis.zadd("zset", 2.0, "a", 3.0, "b", 4.0, "c"); + + assertThat(redis.zpopmax("zset")).isEqualTo(ScoredValue.just(4.0, "c")); + assertThat(redis.zpopmax("zset", 2)).contains(ScoredValue.just(2.0, "a"), ScoredValue.just(3.0, "b")); + assertThat(redis.zpopmax("foo")).isEqualTo(ScoredValue.empty()); + } + + @Test + void zrange() { + setup(); + assertThat(redis.zrange(key, 0, -1)).isEqualTo(list("a", "b", "c")); + } + + @Test + void zrangeStreaming() { + setup(); + + ListStreamingAdapter streamingAdapter = new ListStreamingAdapter<>(); + Long count = redis.zrange(streamingAdapter, key, 0, -1); + assertThat(count.longValue()).isEqualTo(3); + + assertThat(streamingAdapter.getList()).isEqualTo(list("a", "b", "c")); + } + + private void setup() { + redis.zadd(key, 1.0, "a", 2.0, "b", 3.0, "c"); + } + + @Test + @SuppressWarnings({ "unchecked" }) + void zrangeWithScores() { + setup(); + assertThat(redis.zrangeWithScores(key, 0, -1)).isEqualTo(svlist(sv(1.0, "a"), sv(2.0, "b"), sv(3.0, "c"))); + } + + @Test + @SuppressWarnings({ "unchecked" }) + void zrangeWithScoresStreaming() { + setup(); + ScoredValueStreamingAdapter streamingAdapter = new ScoredValueStreamingAdapter<>(); + Long count = redis.zrangeWithScores(streamingAdapter, key, 0, -1); + assertThat(count.longValue()).isEqualTo(3); + assertThat(streamingAdapter.getList()).isEqualTo(svlist(sv(1.0, "a"), sv(2.0, "b"), sv(3.0, "c"))); + } + + @Test + void zrangebyscore() { + + redis.zadd(key, 1.0, "a", 2.0, "b", 3.0, "c", 4.0, "d"); + + assertThat(redis.zrangebyscore(key, 2.0, 3.0)).isEqualTo(list("b", "c")); + assertThat(redis.zrangebyscore(key, "(1.0", "(4.0")).isEqualTo(list("b", "c")); + assertThat(redis.zrangebyscore(key, NEGATIVE_INFINITY, POSITIVE_INFINITY)).isEqualTo(list("a", "b", "c", "d")); + assertThat(redis.zrangebyscore(key, "-inf", "+inf")).isEqualTo(list("a", "b", "c", "d")); + assertThat(redis.zrangebyscore(key, 0.0, 4.0, 1, 3)).isEqualTo(list("b", "c", "d")); + assertThat(redis.zrangebyscore(key, "-inf", "+inf", 2, 2)).isEqualTo(list("c", "d")); + + assertThat(redis.zrangebyscore(key, Range.create(2.0, 3.0))).isEqualTo(list("b", "c")); + assertThat(redis.zrangebyscore(key, Range.from(Boundary.excluding(1.0), Boundary.excluding(4.0)))) + .isEqualTo(list("b", "c")); + assertThat(redis.zrangebyscore(key, Range.unbounded())).isEqualTo(list("a", "b", "c", "d")); + assertThat(redis.zrangebyscore(key, Range.create(0.0, 4.0), Limit.create(1, 3))).isEqualTo(list("b", "c", "d")); + assertThat(redis.zrangebyscore(key, Range.unbounded(), Limit.create(2, 2))).isEqualTo(list("c", "d")); + } + + @Test + void zrangebyscoreStreaming() { + redis.zadd(key, 1.0, "a", 2.0, "b", 3.0, "c", 4.0, "d"); + ListStreamingAdapter streamingAdapter = new ListStreamingAdapter<>(); + + assertThat(redis.zrangebyscore(streamingAdapter, key, 2.0, 
3.0)).isEqualTo(2); + assertThat(redis.zrangebyscore(streamingAdapter, key, "(1.0", "(4.0")).isEqualTo(2); + assertThat(redis.zrangebyscore(streamingAdapter, key, NEGATIVE_INFINITY, POSITIVE_INFINITY)).isEqualTo(4); + assertThat(redis.zrangebyscore(streamingAdapter, key, "-inf", "+inf")).isEqualTo(4); + assertThat(redis.zrangebyscore(streamingAdapter, key, "-inf", "+inf")).isEqualTo(4); + assertThat(redis.zrangebyscore(streamingAdapter, key, 0.0, 4.0, 1, 3)).isEqualTo(3); + assertThat(redis.zrangebyscore(streamingAdapter, key, "-inf", "+inf", 2, 2)).isEqualTo(2); + + assertThat(redis.zrangebyscore(streamingAdapter, key, Range.create(2.0, 3.0))).isEqualTo(2); + assertThat(redis.zrangebyscore(streamingAdapter, key, Range.from(Boundary.excluding(1.0), Boundary.excluding(4.0)))) + .isEqualTo(2); + assertThat(redis.zrangebyscore(streamingAdapter, key, Range.unbounded())).isEqualTo(4); + assertThat(redis.zrangebyscore(streamingAdapter, key, Range.create(0.0, 4.0), Limit.create(1, 3))).isEqualTo(3); + assertThat(redis.zrangebyscore(streamingAdapter, key, Range.unbounded(), Limit.create(2, 2))).isEqualTo(2); + } + + @Test + @SuppressWarnings({ "unchecked" }) + void zrangebyscoreWithScores() { + + redis.zadd(key, 1.0, "a", 2.0, "b", 3.0, "c", 4.0, "d"); + + assertThat(redis.zrangebyscoreWithScores(key, 2.0, 3.0)).isEqualTo(svlist(sv(2.0, "b"), sv(3.0, "c"))); + assertThat(redis.zrangebyscoreWithScores(key, "(1.0", "(4.0")).isEqualTo(svlist(sv(2.0, "b"), sv(3.0, "c"))); + assertThat(redis.zrangebyscoreWithScores(key, NEGATIVE_INFINITY, POSITIVE_INFINITY)) + .isEqualTo(svlist(sv(1.0, "a"), sv(2.0, "b"), sv(3.0, "c"), sv(4.0, "d"))); + assertThat(redis.zrangebyscoreWithScores(key, "-inf", "+inf")) + .isEqualTo(svlist(sv(1.0, "a"), sv(2.0, "b"), sv(3.0, "c"), sv(4.0, "d"))); + assertThat(redis.zrangebyscoreWithScores(key, 0.0, 4.0, 1, 3)) + .isEqualTo(svlist(sv(2.0, "b"), sv(3.0, "c"), sv(4.0, "d"))); + assertThat(redis.zrangebyscoreWithScores(key, "-inf", "+inf", 2, 2)).isEqualTo(svlist(sv(3.0, "c"), sv(4.0, "d"))); + + assertThat(redis.zrangebyscoreWithScores(key, Range.create(2.0, 3.0))).isEqualTo(svlist(sv(2.0, "b"), sv(3.0, "c"))); + assertThat(redis.zrangebyscoreWithScores(key, Range.from(Boundary.excluding(1.0), Boundary.excluding(4.0)))) + .isEqualTo(svlist(sv(2.0, "b"), sv(3.0, "c"))); + assertThat(redis.zrangebyscoreWithScores(key, Range.unbounded())) + .isEqualTo(svlist(sv(1.0, "a"), sv(2.0, "b"), sv(3.0, "c"), sv(4.0, "d"))); + assertThat(redis.zrangebyscoreWithScores(key, Range.create(0.0, 4.0), Limit.create(1, 3))) + .isEqualTo(svlist(sv(2.0, "b"), sv(3.0, "c"), sv(4.0, "d"))); + assertThat(redis.zrangebyscoreWithScores(key, Range.unbounded(), Limit.create(2, 2))) + .isEqualTo(svlist(sv(3.0, "c"), sv(4.0, "d"))); + } + + @Test + @SuppressWarnings({ "unchecked" }) + void zrangebyscoreWithScoresInfinity() { + + redis.zadd(key, Double.POSITIVE_INFINITY, "a", Double.NEGATIVE_INFINITY, "b"); + + assertThat(redis.zrangebyscoreWithScores(key, "-inf", "+inf")).hasSize(2); + + ListStreamingAdapter streamingAdapter = new ListStreamingAdapter<>(); + + Range range = Range.from(including(Double.NEGATIVE_INFINITY), including(Double.POSITIVE_INFINITY)); + redis.zrangebyscoreWithScores(streamingAdapter, key, range); + + assertThat(streamingAdapter.getList()).hasSize(2); + } + + @Test + void zrangebyscoreWithScoresStreaming() { + redis.zadd(key, 1.0, "a", 2.0, "b", 3.0, "c", 4.0, "d"); + ListStreamingAdapter streamingAdapter = new ListStreamingAdapter<>(); + + 
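+        // Streaming variants return the number of emitted elements; the values themselves go to the adapter.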
assertThat(redis.zrangebyscoreWithScores(streamingAdapter, key, 2.0, 3.0).longValue()).isEqualTo(2); + assertThat(redis.zrangebyscoreWithScores(streamingAdapter, key, "(1.0", "(4.0").longValue()).isEqualTo(2); + assertThat(redis.zrangebyscoreWithScores(streamingAdapter, key, NEGATIVE_INFINITY, POSITIVE_INFINITY).longValue()) + .isEqualTo(4); + assertThat(redis.zrangebyscoreWithScores(streamingAdapter, key, "-inf", "+inf").longValue()).isEqualTo(4); + assertThat(redis.zrangebyscoreWithScores(streamingAdapter, key, "-inf", "+inf").longValue()).isEqualTo(4); + assertThat(redis.zrangebyscoreWithScores(streamingAdapter, key, 0.0, 4.0, 1, 3).longValue()).isEqualTo(3); + assertThat(redis.zrangebyscoreWithScores(streamingAdapter, key, "-inf", "+inf", 2, 2).longValue()).isEqualTo(2); + + assertThat(redis.zrangebyscoreWithScores(streamingAdapter, key, Range.create(2.0, 3.0))).isEqualTo(2); + assertThat(redis.zrangebyscoreWithScores(streamingAdapter, key, + Range.from(Boundary.excluding(1.0), Boundary.excluding(4.0)))).isEqualTo(2); + assertThat(redis.zrangebyscoreWithScores(streamingAdapter, key, Range.unbounded())).isEqualTo(4); + assertThat(redis.zrangebyscoreWithScores(streamingAdapter, key, Range.create(0.0, 4.0), Limit.create(1, 3))) + .isEqualTo(3); + assertThat(redis.zrangebyscoreWithScores(streamingAdapter, key, Range.unbounded(), Limit.create(2, 2))).isEqualTo(2); + + } + + @Test + void zrank() { + assertThat(redis.zrank(key, "a")).isNull(); + setup(); + assertThat(redis.zrank(key, "a")).isEqualTo(0); + assertThat(redis.zrank(key, "c")).isEqualTo(2); + } + + @Test + void zrem() { + assertThat(redis.zrem(key, "a")).isEqualTo(0); + setup(); + assertThat(redis.zrem(key, "b")).isEqualTo(1); + assertThat(redis.zrange(key, 0, -1)).isEqualTo(list("a", "c")); + assertThat(redis.zrem(key, "a", "c")).isEqualTo(2); + assertThat(redis.zrange(key, 0, -1)).isEqualTo(list()); + } + + @Test + void zremrangebyscore() { + + setup(); + assertThat(redis.zremrangebyscore(key, 1.0, 2.0)).isEqualTo(2); + assertThat(redis.zrange(key, 0, -1)).isEqualTo(list("c")); + + setup(); + assertThat(redis.zremrangebyscore(key, Range.create(1.0, 2.0))).isEqualTo(2); + assertThat(redis.zrange(key, 0, -1)).isEqualTo(list("c")); + + setup(); + assertThat(redis.zremrangebyscore(key, "(1.0", "(3.0")).isEqualTo(1); + assertThat(redis.zrange(key, 0, -1)).isEqualTo(list("a", "c")); + + setup(); + assertThat(redis.zremrangebyscore(key, Range.from(Boundary.excluding(1.0), Boundary.excluding(3.0)))).isEqualTo(1); + assertThat(redis.zrange(key, 0, -1)).isEqualTo(list("a", "c")); + } + + @Test + void zremrangebyrank() { + redis.zadd(key, 1.0, "a", 2.0, "b", 3.0, "c", 4.0, "d"); + assertThat(redis.zremrangebyrank(key, 1, 2)).isEqualTo(2); + assertThat(redis.zrange(key, 0, -1)).isEqualTo(list("a", "d")); + + redis.zadd(key, 1.0, "a", 2.0, "b", 3.0, "c", 4.0, "d"); + assertThat(redis.zremrangebyrank(key, 0, -1)).isEqualTo(4); + assertThat(redis.zcard(key)).isEqualTo(0); + } + + @Test + void zrevrange() { + setup(); + assertThat(redis.zrevrange(key, 0, -1)).isEqualTo(list("c", "b", "a")); + } + + @Test + @SuppressWarnings({ "unchecked" }) + void zrevrangeWithScores() { + setup(); + assertThat(redis.zrevrangeWithScores(key, 0, -1)).isEqualTo(svlist(sv(3.0, "c"), sv(2.0, "b"), sv(1.0, "a"))); + } + + @Test + void zrevrangeStreaming() { + setup(); + ListStreamingAdapter streamingAdapter = new ListStreamingAdapter<>(); + Long count = redis.zrevrange(streamingAdapter, key, 0, -1); + assertThat(count).isEqualTo(3); + 
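+        // The adapter receives the values in descending score order: c, b, a.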
assertThat(streamingAdapter.getList()).isEqualTo(list("c", "b", "a")); + } + + @Test + @SuppressWarnings({ "unchecked" }) + void zrevrangeWithScoresStreaming() { + setup(); + ScoredValueStreamingAdapter streamingAdapter = new ScoredValueStreamingAdapter<>(); + Long count = redis.zrevrangeWithScores(streamingAdapter, key, 0, -1); + assertThat(count).isEqualTo(3); + assertThat(streamingAdapter.getList()).isEqualTo(svlist(sv(3.0, "c"), sv(2.0, "b"), sv(1.0, "a"))); + } + + @Test + void zrevrangebylex() { + + setup100KeyValues(new HashSet<>()); + + assertThat(redis.zrevrangebylex(key, Range.unbounded())).hasSize(100); + assertThat(redis.zrevrangebylex(key, Range.create("value", "zzz"))).hasSize(100); + assertThat(redis.zrevrangebylex(key, Range.from(including("value98"), including("value99")))) + .containsSequence("value99", "value98"); + assertThat(redis.zrevrangebylex(key, Range.from(including("value99"), Boundary.unbounded()))).hasSize(1); + assertThat(redis.zrevrangebylex(key, Range.from(Boundary.excluding("value99"), Boundary.unbounded()))).hasSize(0); + } + + @Test + void zrevrangebyscore() { + + redis.zadd(key, 1.0, "a", 2.0, "b", 3.0, "c", 4.0, "d"); + + assertThat(redis.zrevrangebyscore(key, 3.0, 2.0)).isEqualTo(list("c", "b")); + assertThat(redis.zrevrangebyscore(key, "(4.0", "(1.0")).isEqualTo(list("c", "b")); + assertThat(redis.zrevrangebyscore(key, POSITIVE_INFINITY, NEGATIVE_INFINITY)).isEqualTo(list("d", "c", "b", "a")); + assertThat(redis.zrevrangebyscore(key, "+inf", "-inf")).isEqualTo(list("d", "c", "b", "a")); + assertThat(redis.zrevrangebyscore(key, 4.0, 0.0, 1, 3)).isEqualTo(list("c", "b", "a")); + assertThat(redis.zrevrangebyscore(key, "+inf", "-inf", 2, 2)).isEqualTo(list("b", "a")); + + assertThat(redis.zrevrangebyscore(key, Range.create(2.0, 3.0))).isEqualTo(list("c", "b")); + assertThat(redis.zrevrangebyscore(key, Range.from(Boundary.excluding(1.0), Boundary.excluding(4.0)))) + .isEqualTo(list("c", "b")); + assertThat(redis.zrevrangebyscore(key, Range.unbounded())).isEqualTo(list("d", "c", "b", "a")); + assertThat(redis.zrevrangebyscore(key, Range.create(0.0, 4.0), Limit.create(1, 3))).isEqualTo(list("c", "b", "a")); + assertThat(redis.zrevrangebyscore(key, Range.unbounded(), Limit.create(2, 2))).isEqualTo(list("b", "a")); + } + + @Test + @SuppressWarnings({ "unchecked" }) + void zrevrangebyscoreWithScores() { + redis.zadd(key, 1.0, "a", 2.0, "b", 3.0, "c", 4.0, "d"); + + assertThat(redis.zrevrangebyscoreWithScores(key, 3.0, 2.0)).isEqualTo(svlist(sv(3.0, "c"), sv(2.0, "b"))); + assertThat(redis.zrevrangebyscoreWithScores(key, "(4.0", "(1.0")).isEqualTo(svlist(sv(3.0, "c"), sv(2.0, "b"))); + assertThat(redis.zrevrangebyscoreWithScores(key, POSITIVE_INFINITY, NEGATIVE_INFINITY)) + .isEqualTo(svlist(sv(4.0, "d"), sv(3.0, "c"), sv(2.0, "b"), sv(1.0, "a"))); + assertThat(redis.zrevrangebyscoreWithScores(key, "+inf", "-inf")) + .isEqualTo(svlist(sv(4.0, "d"), sv(3.0, "c"), sv(2.0, "b"), sv(1.0, "a"))); + assertThat(redis.zrevrangebyscoreWithScores(key, 4.0, 0.0, 1, 3)) + .isEqualTo(svlist(sv(3.0, "c"), sv(2.0, "b"), sv(1.0, "a"))); + assertThat(redis.zrevrangebyscoreWithScores(key, "+inf", "-inf", 2, 2)).isEqualTo(svlist(sv(2.0, "b"), sv(1.0, "a"))); + + assertThat(redis.zrevrangebyscoreWithScores(key, Range.create(2.0, 3.0))).isEqualTo(svlist(sv(3.0, "c"), sv(2.0, "b"))); + assertThat(redis.zrevrangebyscoreWithScores(key, Range.from(Boundary.excluding(1.0), Boundary.excluding(4.0)))) + .isEqualTo(svlist(sv(3.0, "c"), sv(2.0, "b"))); + 
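+        // Range.unbounded() is equivalent to the (-inf, +inf) score range.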
assertThat(redis.zrevrangebyscoreWithScores(key, Range.unbounded())) + .isEqualTo(svlist(sv(4.0, "d"), sv(3.0, "c"), sv(2.0, "b"), sv(1.0, "a"))); + assertThat(redis.zrevrangebyscoreWithScores(key, Range.create(0.0, 4.0), Limit.create(1, 3))) + .isEqualTo(svlist(sv(3.0, "c"), sv(2.0, "b"), sv(1.0, "a"))); + assertThat(redis.zrevrangebyscoreWithScores(key, Range.unbounded(), Limit.create(2, 2))) + .isEqualTo(svlist(sv(2.0, "b"), sv(1.0, "a"))); + } + + @Test + void zrevrangebyscoreStreaming() { + redis.zadd(key, 1.0, "a", 2.0, "b", 3.0, "c", 4.0, "d"); + ListStreamingAdapter streamingAdapter = new ListStreamingAdapter<>(); + + assertThat(redis.zrevrangebyscore(streamingAdapter, key, 3.0, 2.0).longValue()).isEqualTo(2); + assertThat(redis.zrevrangebyscore(streamingAdapter, key, "(4.0", "(1.0").longValue()).isEqualTo(2); + assertThat(redis.zrevrangebyscore(streamingAdapter, key, POSITIVE_INFINITY, NEGATIVE_INFINITY).longValue()) + .isEqualTo(4); + assertThat(redis.zrevrangebyscore(streamingAdapter, key, "+inf", "-inf").longValue()).isEqualTo(4); + assertThat(redis.zrevrangebyscore(streamingAdapter, key, 4.0, 0.0, 1, 3).longValue()).isEqualTo(3); + assertThat(redis.zrevrangebyscore(streamingAdapter, key, "+inf", "-inf", 2, 2).longValue()).isEqualTo(2); + + assertThat(redis.zrevrangebyscore(streamingAdapter, key, Range.create(2.0, 3.0)).longValue()).isEqualTo(2); + assertThat(redis.zrevrangebyscore(streamingAdapter, key, Range.from(Boundary.excluding(1.0), Boundary.excluding(4.0))) + .longValue()).isEqualTo(2); + assertThat(redis.zrevrangebyscore(streamingAdapter, key, Range.unbounded()).longValue()).isEqualTo(4); + assertThat(redis.zrevrangebyscore(streamingAdapter, key, Range.create(0.0, 4.0), Limit.create(1, 3)).longValue()) + .isEqualTo(3); + assertThat(redis.zrevrangebyscore(streamingAdapter, key, Range.unbounded(), Limit.create(2, 2)).longValue()) + .isEqualTo(2); + } + + @Test + @SuppressWarnings({ "unchecked" }) + void zrevrangebyscoreWithScoresStreaming() { + redis.zadd(key, 1.0, "a", 2.0, "b", 3.0, "c", 4.0, "d"); + + ScoredValueStreamingAdapter streamingAdapter = new ScoredValueStreamingAdapter<>(); + + assertThat(redis.zrevrangebyscoreWithScores(streamingAdapter, key, 3.0, 2.0)).isEqualTo(2); + assertThat(redis.zrevrangebyscoreWithScores(streamingAdapter, key, "(4.0", "(1.0")).isEqualTo(2); + assertThat(redis.zrevrangebyscoreWithScores(streamingAdapter, key, POSITIVE_INFINITY, NEGATIVE_INFINITY)).isEqualTo(4); + assertThat(redis.zrevrangebyscoreWithScores(streamingAdapter, key, "+inf", "-inf")).isEqualTo(4); + assertThat(redis.zrevrangebyscoreWithScores(streamingAdapter, key, 4.0, 0.0, 1, 3)).isEqualTo(3); + assertThat(redis.zrevrangebyscoreWithScores(streamingAdapter, key, "+inf", "-inf", 2, 2)).isEqualTo(2); + + assertThat(redis.zrevrangebyscoreWithScores(streamingAdapter, key, Range.create(2.0, 3.0)).longValue()).isEqualTo(2); + assertThat(redis + .zrevrangebyscoreWithScores(streamingAdapter, key, Range.from(Boundary.excluding(1.0), Boundary.excluding(4.0))) + .longValue()).isEqualTo(2); + assertThat(redis.zrevrangebyscoreWithScores(streamingAdapter, key, Range.unbounded()).longValue()).isEqualTo(4); + assertThat( + redis.zrevrangebyscoreWithScores(streamingAdapter, key, Range.create(0.0, 4.0), Limit.create(1, 3)).longValue()) + .isEqualTo(3); + assertThat(redis.zrevrangebyscoreWithScores(streamingAdapter, key, Range.unbounded(), Limit.create(2, 2)).longValue()) + .isEqualTo(2); + } + + @Test + void zrevrank() { + assertThat(redis.zrevrank(key, "a")).isNull(); + setup(); + 
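+        // setup() adds a=1.0, b=2.0, c=3.0, so "c" holds the highest score and therefore reverse rank 0.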
assertThat(redis.zrevrank(key, "c")).isEqualTo(0); + assertThat(redis.zrevrank(key, "a")).isEqualTo(2); + } + + @Test + void zscore() { + assertThat(redis.zscore(key, "a")).isNull(); + redis.zadd(key, 1.0, "a"); + assertThat(redis.zscore(key, "a")).isEqualTo(1.0); + } + + @Test + @SuppressWarnings({ "unchecked" }) + void zunionstore() { + redis.zadd("zset1", 1.0, "a", 2.0, "b"); + redis.zadd("zset2", 2.0, "a", 3.0, "b", 4.0, "c"); + assertThat(redis.zunionstore(key, "zset1", "zset2")).isEqualTo(3); + assertThat(redis.zrange(key, 0, -1)).isEqualTo(list("a", "c", "b")); + assertThat(redis.zrangeWithScores(key, 0, -1)).isEqualTo(svlist(sv(3.0, "a"), sv(4.0, "c"), sv(5.0, "b"))); + + assertThat(redis.zunionstore(key, weights(new long[] { 2, 3 }), "zset1", "zset2")).isEqualTo(3); + assertThat(redis.zrangeWithScores(key, 0, -1)).isEqualTo(svlist(sv(8.0, "a"), sv(12.0, "c"), sv(13.0, "b"))); + + assertThat(redis.zunionstore(key, weights(2, 3).sum(), "zset1", "zset2")).isEqualTo(3); + assertThat(redis.zrangeWithScores(key, 0, -1)).isEqualTo(svlist(sv(8.0, "a"), sv(12.0, "c"), sv(13.0, "b"))); + + assertThat(redis.zunionstore(key, weights(2, 3).min(), "zset1", "zset2")).isEqualTo(3); + assertThat(redis.zrangeWithScores(key, 0, -1)).isEqualTo(svlist(sv(2.0, "a"), sv(4.0, "b"), sv(12.0, "c"))); + + assertThat(redis.zunionstore(key, weights(2, 3).max(), "zset1", "zset2")).isEqualTo(3); + assertThat(redis.zrangeWithScores(key, 0, -1)).isEqualTo(svlist(sv(6.0, "a"), sv(9.0, "b"), sv(12.0, "c"))); + } + + @Test + @SuppressWarnings({ "unchecked" }) + void zStoreArgs() { + redis.zadd("zset1", 1.0, "a", 2.0, "b"); + redis.zadd("zset2", 2.0, "a", 3.0, "b", 4.0, "c"); + + assertThat(redis.zinterstore(key, sum(), "zset1", "zset2")).isEqualTo(2); + assertThat(redis.zrangeWithScores(key, 0, -1)).isEqualTo(svlist(sv(3.0, "a"), sv(5.0, "b"))); + + assertThat(redis.zinterstore(key, min(), "zset1", "zset2")).isEqualTo(2); + assertThat(redis.zrangeWithScores(key, 0, -1)).isEqualTo(svlist(sv(1.0, "a"), sv(2.0, "b"))); + + assertThat(redis.zinterstore(key, max(), "zset1", "zset2")).isEqualTo(2); + assertThat(redis.zrangeWithScores(key, 0, -1)).isEqualTo(svlist(sv(2.0, "a"), sv(3.0, "b"))); + + assertThat(redis.zinterstore(key, weights(new long[] { 2, 3 }), "zset1", "zset2")).isEqualTo(2); + assertThat(redis.zrangeWithScores(key, 0, -1)).isEqualTo(svlist(sv(8.0, "a"), sv(13.0, "b"))); + + assertThat(redis.zinterstore(key, weights(2, 3).sum(), "zset1", "zset2")).isEqualTo(2); + assertThat(redis.zrangeWithScores(key, 0, -1)).isEqualTo(svlist(sv(8.0, "a"), sv(13.0, "b"))); + + assertThat(redis.zinterstore(key, weights(2, 3).min(), "zset1", "zset2")).isEqualTo(2); + assertThat(redis.zrangeWithScores(key, 0, -1)).isEqualTo(svlist(sv(2.0, "a"), sv(4.0, "b"))); + + assertThat(redis.zinterstore(key, weights(2, 3).max(), "zset1", "zset2")).isEqualTo(2); + assertThat(redis.zrangeWithScores(key, 0, -1)).isEqualTo(svlist(sv(6.0, "a"), sv(9.0, "b"))); + } + + @Test + void zsscan() { + redis.zadd(key, 1, value); + ScoredValueScanCursor cursor = redis.zscan(key); + + assertThat(cursor.getCursor()).isEqualTo("0"); + assertThat(cursor.isFinished()).isTrue(); + assertThat(cursor.getValues().get(0)).isEqualTo(sv(1, value)); + } + + @Test + void zsscanWithCursor() { + redis.zadd(key, 1, value); + + ScoredValueScanCursor cursor = redis.zscan(key, ScanCursor.INITIAL); + + assertThat(cursor.getCursor()).isEqualTo("0"); + assertThat(cursor.isFinished()).isTrue(); + assertThat(cursor.getValues().get(0)).isEqualTo(sv(1, value)); + } + + @Test + 
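+    // ZSCAN variant that passes both an explicit ScanCursor and ScanArgs.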
void zsscanWithCursorAndArgs() { + redis.zadd(key, 1, value); + + ScoredValueScanCursor cursor = redis.zscan(key, ScanCursor.INITIAL, ScanArgs.Builder.limit(5)); + + assertThat(cursor.getCursor()).isEqualTo("0"); + assertThat(cursor.isFinished()).isTrue(); + assertThat(cursor.getValues().get(0)).isEqualTo(sv(1, value)); + } + + @Test + void zscanStreaming() { + redis.zadd(key, 1, value); + ListStreamingAdapter adapter = new ListStreamingAdapter<>(); + + StreamScanCursor cursor = redis.zscan(adapter, key); + + assertThat(cursor.getCount()).isEqualTo(1); + assertThat(cursor.getCursor()).isEqualTo("0"); + assertThat(cursor.isFinished()).isTrue(); + assertThat(adapter.getList().get(0)).isEqualTo(value); + } + + @Test + void zscanStreamingWithCursor() { + redis.zadd(key, 1, value); + ListStreamingAdapter adapter = new ListStreamingAdapter<>(); + + StreamScanCursor cursor = redis.zscan(adapter, key, ScanCursor.INITIAL); + + assertThat(cursor.getCount()).isEqualTo(1); + assertThat(cursor.getCursor()).isEqualTo("0"); + assertThat(cursor.isFinished()).isTrue(); + } + + @Test + void zscanStreamingWithCursorAndArgs() { + redis.zadd(key, 1, value); + ListStreamingAdapter adapter = new ListStreamingAdapter<>(); + + StreamScanCursor cursor = redis.zscan(adapter, key, ScanCursor.INITIAL, ScanArgs.Builder.matches("*").limit(100)); + + assertThat(cursor.getCount()).isEqualTo(1); + assertThat(cursor.getCursor()).isEqualTo("0"); + assertThat(cursor.isFinished()).isTrue(); + } + + @Test + void zscanStreamingWithArgs() { + redis.zadd(key, 1, value); + ListStreamingAdapter adapter = new ListStreamingAdapter<>(); + + StreamScanCursor cursor = redis.zscan(adapter, key, ScanArgs.Builder.limit(100).match("*")); + + assertThat(cursor.getCount()).isEqualTo(1); + assertThat(cursor.getCursor()).isEqualTo("0"); + assertThat(cursor.isFinished()).isTrue(); + + } + + @Test + void zscanMultiple() { + + Set expect = new HashSet<>(); + setup100KeyValues(expect); + + ScoredValueScanCursor cursor = redis.zscan(key, ScanArgs.Builder.limit(5)); + + assertThat(cursor.getCursor()).isNotNull(); + assertThat(cursor.getCursor()).isEqualTo("0"); + assertThat(cursor.isFinished()).isTrue(); + + assertThat(cursor.getValues()).hasSize(100); + + } + + @Test + void zscanMatch() { + + Set expect = new HashSet<>(); + setup100KeyValues(expect); + + ScoredValueScanCursor cursor = redis.zscan(key, ScanArgs.Builder.limit(10).match("val*")); + + assertThat(cursor.getCursor()).isEqualTo("0"); + assertThat(cursor.isFinished()).isTrue(); + + assertThat(cursor.getValues()).hasSize(100); + } + + @Test + void zlexcount() { + + setup100KeyValues(new HashSet<>()); + + assertThat(redis.zlexcount(key, "-", "+")).isEqualTo(100); + assertThat(redis.zlexcount(key, "[value", "[zzz")).isEqualTo(100); + + assertThat(redis.zlexcount(key, Range.unbounded())).isEqualTo(100); + assertThat(redis.zlexcount(key, Range.create("value", "zzz"))).isEqualTo(100); + assertThat(redis.zlexcount(key, Range.from(including("value99"), Boundary.unbounded()))).isEqualTo(1); + assertThat(redis.zlexcount(key, Range.from(Boundary.excluding("value99"), Boundary.unbounded()))).isEqualTo(0); + } + + @Test + void zrangebylex() { + setup100KeyValues(new HashSet<>()); + + assertThat(redis.zrangebylex(key, "-", "+")).hasSize(100); + assertThat(redis.zrangebylex(key, "-", "+", 10, 10)).hasSize(10); + + assertThat(redis.zrangebylex(key, Range.unbounded())).hasSize(100); + assertThat(redis.zrangebylex(key, Range.create("value", "zzz"))).hasSize(100); + assertThat(redis.zrangebylex(key, 
Range.from(including("value98"), including("value99")))).containsSequence("value98", + "value99"); + assertThat(redis.zrangebylex(key, Range.from(including("value99"), Boundary.unbounded()))).hasSize(1); + assertThat(redis.zrangebylex(key, Range.from(Boundary.excluding("value99"), Boundary.unbounded()))).hasSize(0); + } + + @Test + void zremrangebylex() { + + setup100KeyValues(new HashSet<>()); + assertThat(redis.zremrangebylex(key, "(aaa", "[zzz")).isEqualTo(100); + + setup100KeyValues(new HashSet<>()); + assertThat(redis.zremrangebylex(key, Range.create("value", "zzz"))).isEqualTo(100); + + } + + void setup100KeyValues(Set expect) { + for (int i = 0; i < 100; i++) { + redis.zadd(key + 1, i, value + i); + redis.zadd(key, i, value + i); + expect.add(value + i); + } + } +} diff --git a/src/test/java/io/lettuce/core/commands/StreamCommandIntegrationTests.java b/src/test/java/io/lettuce/core/commands/StreamCommandIntegrationTests.java new file mode 100644 index 0000000000..f02e7db2e8 --- /dev/null +++ b/src/test/java/io/lettuce/core/commands/StreamCommandIntegrationTests.java @@ -0,0 +1,479 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.commands; + +import static io.lettuce.core.protocol.CommandType.XINFO; +import static org.assertj.core.api.Assertions.assertThat; + +import java.time.Instant; +import java.util.*; + +import javax.inject.Inject; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.TestInstance; +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.core.*; +import io.lettuce.core.XReadArgs.StreamOffset; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.models.stream.PendingMessage; +import io.lettuce.core.models.stream.PendingMessages; +import io.lettuce.core.models.stream.PendingParser; +import io.lettuce.core.output.NestedMultiOutput; +import io.lettuce.core.protocol.CommandArgs; +import io.lettuce.test.LettuceExtension; +import io.lettuce.test.condition.EnabledOnCommand; + +/** + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +@TestInstance(TestInstance.Lifecycle.PER_CLASS) +@EnabledOnCommand("XADD") +public class StreamCommandIntegrationTests extends TestSupport { + + private final RedisCommands redis; + + @Inject + protected StreamCommandIntegrationTests(RedisCommands redis) { + this.redis = redis; + } + + @BeforeEach + void setUp() { + this.redis.flushall(); + } + + @Test + void xadd() { + + assertThat(redis.xadd(key, Collections.singletonMap("key", "value"))).endsWith("-0"); + assertThat(redis.xadd(key, "foo", "bar")).isNotEmpty(); + } + + @Test + void xaddMaxLen() { + + String id = redis.xadd(key, XAddArgs.Builder.maxlen(5), "foo", "bar"); + + for (int i = 0; i < 5; i++) { + redis.xadd(key, XAddArgs.Builder.maxlen(5), "foo", "bar"); + } + + List> messages = redis.xrange(key, + 
Range.from(Range.Boundary.including(id), Range.Boundary.unbounded())); + + assertThat(messages).hasSize(5); + } + + @Test + void xaddMaxLenEfficientTrimming() { + + String id = redis.xadd(key, XAddArgs.Builder.maxlen(5).approximateTrimming(), "foo", "bar"); + + assertThat(id).isNotNull(); + } + + @Test + void xdel() { + + List ids = new ArrayList<>(); + for (int i = 0; i < 2; i++) { + ids.add(redis.xadd(key, Collections.singletonMap("key", "value"))); + } + + Long deleted = redis.xdel(key, ids.get(0), "123456-0"); + + assertThat(deleted).isEqualTo(1); + + List> messages = redis.xrange(key, Range.unbounded()); + assertThat(messages).hasSize(1); + } + + @Test + void xtrim() { + + List ids = new ArrayList<>(); + for (int i = 0; i < 10; i++) { + ids.add(redis.xadd(key, Collections.singletonMap("key", "value"))); + } + redis.xdel(key, ids.get(0), ids.get(2)); + assertThat(redis.xlen(key)).isBetween(8L, 10L); + + redis.xtrim(key, true, 8); + + assertThat(redis.xlen(key)).isLessThanOrEqualTo(10); + } + + @Test + void xrange() { + + List ids = new ArrayList<>(); + for (int i = 0; i < 5; i++) { + + Map body = new HashMap<>(); + body.put("key-1", "value-1-" + i); + body.put("key-2", "value-2-" + i); + + ids.add(redis.xadd(key, body)); + } + + List> messages = redis.xrange(key, Range.unbounded()); + assertThat(messages).hasSize(5); + + StreamMessage message = messages.get(0); + + Map expectedBody = new HashMap<>(); + expectedBody.put("key-1", "value-1-0"); + expectedBody.put("key-2", "value-2-0"); + + assertThat(message.getId()).contains("-"); + assertThat(message.getStream()).isEqualTo(key); + assertThat(message.getBody()).isEqualTo(expectedBody); + + assertThat(redis.xrange(key, Range.unbounded(), Limit.from(2))).hasSize(2); + + List> range = redis.xrange(key, Range.create(ids.get(0), ids.get(1))); + + assertThat(range).hasSize(2); + assertThat(range.get(0).getBody()).isEqualTo(expectedBody); + } + + @Test + void xrevrange() { + + for (int i = 0; i < 5; i++) { + + Map body = new HashMap<>(); + body.put("key-1", "value-1-" + i); + body.put("key-2", "value-2-" + i); + + redis.xadd(key, body); + } + + List> messages = redis.xrevrange(key, Range.unbounded()); + assertThat(messages).hasSize(5); + + StreamMessage message = messages.get(0); + + Map expectedBody = new HashMap<>(); + expectedBody.put("key-1", "value-1-4"); + expectedBody.put("key-2", "value-2-4"); + + assertThat(message.getId()).contains("-"); + assertThat(message.getStream()).isEqualTo(key); + assertThat(message.getBody()).isEqualTo(expectedBody); + } + + @Test + void xreadSingleStream() { + + redis.xadd("stream-1", Collections.singletonMap("key1", "value1")); + redis.xadd("stream-1", Collections.singletonMap("key2", "value2")); + + List> messages = redis.xread(XReadArgs.Builder.count(2), + StreamOffset.from("stream-1", "0-0")); + + assertThat(messages).hasSize(2); + StreamMessage firstMessage = messages.get(0); + + assertThat(firstMessage.getStream()).isEqualTo("stream-1"); + assertThat(firstMessage.getBody()).hasSize(1).containsEntry("key1", "value1"); + + StreamMessage nextMessage = messages.get(1); + + assertThat(nextMessage.getStream()).isEqualTo("stream-1"); + assertThat(nextMessage.getBody()).hasSize(1).containsEntry("key2", "value2"); + } + + @Test + void xreadMultipleStreams() { + + Map biggerBody = new LinkedHashMap<>(); + biggerBody.put("key4", "value4"); + biggerBody.put("key5", "value5"); + + String initial1 = redis.xadd("stream-1", Collections.singletonMap("key1", "value1")); + String initial2 = redis.xadd("stream-2", 
Collections.singletonMap("key2", "value2")); + String message1 = redis.xadd("stream-1", Collections.singletonMap("key3", "value3")); + String message2 = redis.xadd("stream-2", biggerBody); + + List> messages = redis.xread(StreamOffset.from("stream-1", "0-0"), + StreamOffset.from("stream-2", "0-0")); + + assertThat(messages).hasSize(4); + + StreamMessage firstMessage = messages.get(0); + + assertThat(firstMessage.getId()).isEqualTo(initial1); + assertThat(firstMessage.getStream()).isEqualTo("stream-1"); + assertThat(firstMessage.getBody()).hasSize(1).containsEntry("key1", "value1"); + + StreamMessage secondMessage = messages.get(3); + + assertThat(secondMessage.getId()).isEqualTo(message2); + assertThat(secondMessage.getStream()).isEqualTo("stream-2"); + assertThat(secondMessage.getBody()).hasSize(2).containsEntry("key4", "value4"); + } + + @Test + void xreadTransactional() { + + String initial1 = redis.xadd("stream-1", Collections.singletonMap("key1", "value1")); + String initial2 = redis.xadd("stream-2", Collections.singletonMap("key2", "value2")); + + redis.multi(); + redis.xadd("stream-1", Collections.singletonMap("key3", "value3")); + redis.xadd("stream-2", Collections.singletonMap("key4", "value4")); + redis.xread(StreamOffset.from("stream-1", initial1), XReadArgs.StreamOffset.from("stream-2", initial2)); + + TransactionResult exec = redis.exec(); + + String message1 = exec.get(0); + String message2 = exec.get(1); + List> messages = exec.get(2); + + StreamMessage firstMessage = messages.get(0); + + assertThat(firstMessage.getId()).isEqualTo(message1); + assertThat(firstMessage.getStream()).isEqualTo("stream-1"); + assertThat(firstMessage.getBody()).containsEntry("key3", "value3"); + + StreamMessage secondMessage = messages.get(1); + + assertThat(secondMessage.getId()).isEqualTo(message2); + assertThat(secondMessage.getStream()).isEqualTo("stream-2"); + assertThat(secondMessage.getBody()).containsEntry("key4", "value4"); + } + + @Test + void xinfoStream() { + + redis.xadd(key, Collections.singletonMap("key1", "value1")); + + List objects = redis.xinfoStream(key); + + assertThat(objects).containsSequence("length", 1L); + } + + @Test + void xinfoGroups() { + + assertThat(redis.xgroupCreate(StreamOffset.latest(key), "group", XGroupCreateArgs.Builder.mkstream())).isEqualTo("OK"); + + List objects = redis.xinfoGroups(key); + assertThat((List) objects.get(0)).containsSequence("name", "group"); + } + + @Test + void xinfoConsumers() { + + assertThat(redis.xgroupCreate(StreamOffset.from(key, "0-0"), "group", XGroupCreateArgs.Builder.mkstream())) + .isEqualTo("OK"); + redis.xadd(key, Collections.singletonMap("key1", "value1")); + + redis.xreadgroup(Consumer.from("group", "consumer1"), StreamOffset.lastConsumed(key)); + + List objects = redis.xinfoConsumers(key, "group"); + assertThat((List) objects.get(0)).containsSequence("name", "consumer1"); + } + + @Test + void xgroupCreate() { + + assertThat(redis.xgroupCreate(StreamOffset.latest(key), "group", XGroupCreateArgs.Builder.mkstream())).isEqualTo("OK"); + + List groups = redis.dispatch(XINFO, new NestedMultiOutput<>(StringCodec.UTF8), + new CommandArgs<>(StringCodec.UTF8).add("GROUPS").add(key)); + + assertThat(groups).isNotEmpty(); + assertThat(redis.type(key)).isEqualTo("stream"); + } + + @Test + void xgroupread() { + + redis.xadd(key, Collections.singletonMap("key", "value")); + redis.xgroupCreate(StreamOffset.latest(key), "group"); + redis.xadd(key, Collections.singletonMap("key", "value")); + + List> read1 = 
redis.xreadgroup(Consumer.from("group", "consumer1"), + StreamOffset.lastConsumed(key)); + + assertThat(read1).hasSize(1); + } + + @Test + void xpendingWithGroup() { + + redis.xgroupCreate(StreamOffset.latest(key), "group", XGroupCreateArgs.Builder.mkstream()); + String id = redis.xadd(key, Collections.singletonMap("key", "value")); + + redis.xreadgroup(Consumer.from("group", "consumer1"), StreamOffset.lastConsumed(key)); + + List pendingEntries = redis.xpending(key, "group"); + assertThat(pendingEntries).hasSize(4).containsSequence(1L, id, id); + } + + @Test + void xpending() { + + redis.xgroupCreate(StreamOffset.latest(key), "group", XGroupCreateArgs.Builder.mkstream()); + String id = redis.xadd(key, Collections.singletonMap("key", "value")); + + redis.xreadgroup(Consumer.from("group", "consumer1"), StreamOffset.lastConsumed(key)); + + List pendingEntries = redis.xpending(key, "group", Range.unbounded(), Limit.from(10)); + + List pendingMessages = PendingParser.parseRange(pendingEntries); + assertThat(pendingMessages).hasSize(1); + + PendingMessage message = pendingMessages.get(0); + assertThat(message.getId()).isEqualTo(id); + assertThat(message.getConsumer()).isEqualTo("consumer1"); + assertThat(message.getRedeliveryCount()).isEqualTo(1); + } + + @Test + void xpendingUnlimited() { + + redis.xgroupCreate(StreamOffset.latest(key), "group", XGroupCreateArgs.Builder.mkstream()); + String id = redis.xadd(key, Collections.singletonMap("key", "value")); + + redis.xreadgroup(Consumer.from("group", "consumer1"), StreamOffset.lastConsumed(key)); + + List pendingEntries = redis.xpending(key, Consumer.from("group", "consumer1"), Range.unbounded(), + Limit.unlimited()); + + List pendingMessages = PendingParser.parseRange(pendingEntries); + assertThat(pendingMessages).hasSize(1); + + PendingMessage message = pendingMessages.get(0); + assertThat(message.getId()).isEqualTo(id); + assertThat(message.getConsumer()).isEqualTo("consumer1"); + assertThat(message.getRedeliveryCount()).isEqualTo(1); + } + + @Test + void xack() { + + redis.xgroupCreate(StreamOffset.latest(key), "group", XGroupCreateArgs.Builder.mkstream()); + redis.xadd(key, Collections.singletonMap("key", "value")); + + List> messages = redis.xreadgroup(Consumer.from("group", "consumer1"), + StreamOffset.lastConsumed(key)); + + Long ackd = redis.xack(key, "group", messages.get(0).getId()); + assertThat(ackd).isEqualTo(1); + + List pendingEntries = redis.xpending(key, "group", Range.unbounded(), Limit.from(10)); + assertThat(pendingEntries).isEmpty(); + } + + @Test + void xclaim() { + + redis.xgroupCreate(StreamOffset.latest(key), "group", XGroupCreateArgs.Builder.mkstream()); + redis.xadd(key, Collections.singletonMap("key", "value")); + + List> messages = redis.xreadgroup(Consumer.from("group", "consumer1"), + StreamOffset.lastConsumed(key)); + + List> claimedMessages = redis.xclaim(key, Consumer.from("group", "consumer2"), 0, + messages.get(0).getId()); + + assertThat(claimedMessages).hasSize(1).contains(messages.get(0)); + + assertThat(redis.xpending(key, Consumer.from("group", "consumer1"), Range.unbounded(), Limit.from(10))).isEmpty(); + assertThat(redis.xpending(key, Consumer.from("group", "consumer2"), Range.unbounded(), Limit.from(10))).hasSize(1); + } + + @Test + void xclaimWithArgs() { + + String id1 = redis.xadd(key, Collections.singletonMap("key", "value")); + redis.xgroupCreate(StreamOffset.latest(key), "group"); + String id2 = redis.xadd(key, Collections.singletonMap("key", "value")); + + List> messages = 
redis.xreadgroup(Consumer.from("group", "consumer1"), + StreamOffset.lastConsumed(key)); + + List> claimedMessages = redis.xclaim(key, Consumer.from("group", "consumer2"), + XClaimArgs.Builder.minIdleTime(0).time(Instant.now().minusSeconds(60)), id1, id2); + + assertThat(claimedMessages).hasSize(1).contains(messages.get(0)); + + List xpending = redis.xpending(key, Consumer.from("group", "consumer2"), Range.unbounded(), Limit.from(10)); + + List pendingMessages = PendingParser.parseRange(xpending); + PendingMessage message = pendingMessages.get(0); + + assertThat(message.getMsSinceLastDelivery()).isBetween(50000L, 80000L); + } + + @Test + void xclaimJustId() { + + String id1 = redis.xadd(key, Collections.singletonMap("key", "value")); + redis.xgroupCreate(StreamOffset.latest(key), "group"); + String id2 = redis.xadd(key, Collections.singletonMap("key", "value")); + String id3 = redis.xadd(key, Collections.singletonMap("key", "value")); + + redis.xreadgroup(Consumer.from("group", "consumer1"), StreamOffset.lastConsumed(key)); + + List> claimedMessages = redis.xclaim(key, Consumer.from("group", "consumer2"), + XClaimArgs.Builder.justid(), id1, id2, id3); + + assertThat(claimedMessages).hasSize(2); + + StreamMessage message = claimedMessages.get(0); + + assertThat(message.getBody()).isNull(); + assertThat(message.getStream()).isEqualTo("key"); + assertThat(message.getId()).isEqualTo(id2); + } + + @Test + void xgroupDestroy() { + + redis.xgroupCreate(StreamOffset.latest(key), "group", XGroupCreateArgs.Builder.mkstream()); + + assertThat(redis.xgroupDestroy(key, "group")).isTrue(); + assertThat(redis.xgroupDestroy(key, "group")).isFalse(); + } + + @Test + void xgroupDelconsumer() { + + redis.xgroupCreate(StreamOffset.latest(key), "group", XGroupCreateArgs.Builder.mkstream()); + redis.xadd(key, Collections.singletonMap("key", "value")); + redis.xreadgroup(Consumer.from("group", "consumer1"), StreamOffset.lastConsumed(key)); + + assertThat(redis.xgroupDelconsumer(key, Consumer.from("group", "consumer1"))).isTrue(); + assertThat(redis.xgroupDelconsumer(key, Consumer.from("group", "consumer1"))).isFalse(); + } + + @Test + void xgroupSetid() { + + redis.xgroupCreate(StreamOffset.latest(key), "group", XGroupCreateArgs.Builder.mkstream()); + + assertThat(redis.xgroupSetid(StreamOffset.latest(key), "group")).isEqualTo("OK"); + } +} diff --git a/src/test/java/io/lettuce/core/commands/StringCommandIntegrationTests.java b/src/test/java/io/lettuce/core/commands/StringCommandIntegrationTests.java new file mode 100644 index 0000000000..fd2020c67e --- /dev/null +++ b/src/test/java/io/lettuce/core/commands/StringCommandIntegrationTests.java @@ -0,0 +1,241 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.commands; + +import static io.lettuce.core.SetArgs.Builder.*; +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; + +import java.util.LinkedHashMap; +import java.util.List; +import java.util.Map; + +import javax.inject.Inject; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.TestInstance; +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.core.KeyValue; +import io.lettuce.core.RedisException; +import io.lettuce.core.TestSupport; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.test.KeyValueStreamingAdapter; +import io.lettuce.test.LettuceExtension; +import io.lettuce.test.condition.EnabledOnCommand; + +/** + * @author Will Glozer + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +@TestInstance(TestInstance.Lifecycle.PER_CLASS) +public class StringCommandIntegrationTests extends TestSupport { + + private final RedisCommands redis; + + @Inject + protected StringCommandIntegrationTests(RedisCommands redis) { + this.redis = redis; + } + + @BeforeEach + void setUp() { + this.redis.flushall(); + } + + @Test + void append() { + assertThat(redis.append(key, value)).isEqualTo(value.length()); + assertThat(redis.append(key, "X")).isEqualTo(value.length() + 1); + } + + @Test + void get() { + assertThat(redis.get(key)).isNull(); + redis.set(key, value); + assertThat(redis.get(key)).isEqualTo(value); + } + + @Test + void getbit() { + assertThat(redis.getbit(key, 0)).isEqualTo(0); + redis.setbit(key, 0, 1); + assertThat(redis.getbit(key, 0)).isEqualTo(1); + } + + @Test + void getrange() { + assertThat(redis.getrange(key, 0, -1)).isEqualTo(""); + redis.set(key, "foobar"); + assertThat(redis.getrange(key, 2, 4)).isEqualTo("oba"); + assertThat(redis.getrange(key, 3, -1)).isEqualTo("bar"); + } + + @Test + void getset() { + assertThat(redis.getset(key, value)).isNull(); + assertThat(redis.getset(key, "two")).isEqualTo(value); + assertThat(redis.get(key)).isEqualTo("two"); + } + + @Test + void mget() { + setupMget(); + assertThat(redis.mget("one", "two")).isEqualTo(list(kv("one", "1"), kv("two", "2"))); + } + + protected void setupMget() { + assertThat(redis.mget(key)).isEqualTo(list(KeyValue.empty("key"))); + redis.set("one", "1"); + redis.set("two", "2"); + } + + @Test + void mgetStreaming() { + setupMget(); + + KeyValueStreamingAdapter streamingAdapter = new KeyValueStreamingAdapter<>(); + Long count = redis.mget(streamingAdapter, "one", "two"); + assertThat(count.intValue()).isEqualTo(2); + + assertThat(streamingAdapter.getMap()).containsEntry("one", "1").containsEntry("two", "2"); + } + + @Test + void mset() { + assertThat(redis.mget("one", "two")).isEqualTo(list(KeyValue.empty("one"), KeyValue.empty("two"))); + Map map = new LinkedHashMap<>(); + map.put("one", "1"); + map.put("two", "2"); + assertThat(redis.mset(map)).isEqualTo("OK"); + assertThat(redis.mget("one", "two")).isEqualTo(list(kv("one", "1"), kv("two", "2"))); + } + + @Test + void msetnx() { + redis.set("one", "1"); + Map map = new LinkedHashMap<>(); + map.put("one", "1"); + map.put("two", "2"); + assertThat(redis.msetnx(map)).isFalse(); + redis.del("one"); + assertThat(redis.msetnx(map)).isTrue(); + assertThat(redis.get("two")).isEqualTo("2"); + } + + @Test + void set() { + assertThat(redis.get(key)).isNull(); + assertThat(redis.set(key, value)).isEqualTo("OK"); + assertThat(redis.get(key)).isEqualTo(value); + + 
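+        // SET with EX/PX should leave a TTL close to the requested expiry.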
assertThat(redis.set(key, value, px(20000))).isEqualTo("OK"); + assertThat(redis.set(key, value, ex(10))).isEqualTo("OK"); + assertThat(redis.get(key)).isEqualTo(value); + assertThat(redis.ttl(key)).isGreaterThanOrEqualTo(9); + + assertThat(redis.set(key, value, px(10000))).isEqualTo("OK"); + assertThat(redis.get(key)).isEqualTo(value); + assertThat(redis.ttl(key)).isGreaterThanOrEqualTo(9); + + assertThat(redis.set(key, value, nx())).isNull(); + assertThat(redis.set(key, value, xx())).isEqualTo("OK"); + assertThat(redis.get(key)).isEqualTo(value); + + redis.del(key); + assertThat(redis.set(key, value, nx())).isEqualTo("OK"); + assertThat(redis.get(key)).isEqualTo(value); + + redis.del(key); + + assertThat(redis.set(key, value, px(20000).nx())).isEqualTo("OK"); + assertThat(redis.get(key)).isEqualTo(value); + assertThat(redis.ttl(key) >= 19).isTrue(); + } + + @Test + @EnabledOnCommand("ACL") // Redis 6.0 guard + void setKeepTTL() { + + redis.set(key, value, ex(10)); + + assertThat(redis.set(key, "value2", keepttl())).isEqualTo("OK"); + assertThat(redis.get(key)).isEqualTo("value2"); + assertThat(redis.ttl(key) >= 1).isTrue(); + } + + @Test + void setNegativeEX() { + assertThatThrownBy(() -> redis.set(key, value, ex(-10))).isInstanceOf(RedisException. class); + } + + @Test + void setNegativePX() { + assertThatThrownBy(() -> redis.set(key, value, px(-1000))).isInstanceOf(RedisException. class); + } + + @Test + void setbit() { + assertThat(redis.setbit(key, 0, 1)).isEqualTo(0); + assertThat(redis.setbit(key, 0, 0)).isEqualTo(1); + } + + @Test + void setex() { + assertThat(redis.setex(key, 10, value)).isEqualTo("OK"); + assertThat(redis.get(key)).isEqualTo(value); + assertThat(redis.ttl(key) >= 9).isTrue(); + } + + @Test + void psetex() { + assertThat(redis.psetex(key, 20000, value)).isEqualTo("OK"); + assertThat(redis.get(key)).isEqualTo(value); + assertThat(redis.pttl(key) >= 19000).isTrue(); + } + + @Test + void setnx() { + assertThat(redis.setnx(key, value)).isTrue(); + assertThat(redis.setnx(key, value)).isFalse(); + } + + @Test + void setrange() { + assertThat(redis.setrange(key, 0, "foo")).isEqualTo("foo".length()); + assertThat(redis.setrange(key, 3, "bar")).isEqualTo(6); + assertThat(redis.get(key)).isEqualTo("foobar"); + } + + @Test + void strlen() { + assertThat((long) redis.strlen(key)).isEqualTo(0); + redis.set(key, value); + assertThat((long) redis.strlen(key)).isEqualTo(value.length()); + } + + @Test + void time() { + + List time = redis.time(); + assertThat(time).hasSize(2); + + Long.parseLong(time.get(0)); + Long.parseLong(time.get(1)); + } +} diff --git a/src/test/java/io/lettuce/core/commands/TransactionCommandIntegrationTests.java b/src/test/java/io/lettuce/core/commands/TransactionCommandIntegrationTests.java new file mode 100644 index 0000000000..939fa02246 --- /dev/null +++ b/src/test/java/io/lettuce/core/commands/TransactionCommandIntegrationTests.java @@ -0,0 +1,151 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.commands; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; + +import javax.inject.Inject; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.TestInstance; +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.core.*; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.test.LettuceExtension; + +/** + * @author Will Glozer + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +@TestInstance(TestInstance.Lifecycle.PER_CLASS) +public class TransactionCommandIntegrationTests extends TestSupport { + + private final RedisClient client; + private final RedisCommands redis; + + @Inject + protected TransactionCommandIntegrationTests(RedisClient client, RedisCommands redis) { + this.client = client; + this.redis = redis; + } + + @BeforeEach + void setUp() { + this.redis.flushall(); + } + + @Test + void discard() { + assertThat(redis.multi()).isEqualTo("OK"); + redis.set(key, value); + assertThat(redis.discard()).isEqualTo("OK"); + assertThat(redis.get(key)).isNull(); + } + + @Test + void exec() { + assertThat(redis.multi()).isEqualTo("OK"); + redis.set(key, value); + assertThat(redis.exec()).contains("OK"); + assertThat(redis.get(key)).isEqualTo(value); + } + + @Test + void watch() { + assertThat(redis.watch(key)).isEqualTo("OK"); + + RedisCommands redis2 = client.connect().sync(); + redis2.set(key, value + "X"); + redis2.getStatefulConnection().close(); + + redis.multi(); + redis.append(key, "foo"); + + TransactionResult transactionResult = redis.exec(); + + assertThat(transactionResult.wasDiscarded()).isTrue(); + assertThat(transactionResult).isEmpty(); + } + + @Test + void unwatch() { + assertThat(redis.unwatch()).isEqualTo("OK"); + } + + @Test + void commandsReturnNullInMulti() { + + assertThat(redis.multi()).isEqualTo("OK"); + assertThat(redis.set(key, value)).isNull(); + assertThat(redis.get(key)).isNull(); + + TransactionResult exec = redis.exec(); + assertThat(exec.wasDiscarded()).isFalse(); + assertThat(exec).contains("OK", value); + + assertThat(redis.get(key)).isEqualTo(value); + } + + @Test + void execmulti() { + redis.multi(); + redis.set("one", "1"); + redis.set("two", "2"); + redis.mget("one", "two"); + redis.llen(key); + assertThat(redis.exec()).contains("OK", "OK", list(kv("one", "1"), kv("two", "2")), 0L); + } + + @Test + void emptyMulti() { + redis.multi(); + TransactionResult exec = redis.exec(); + assertThat(exec.wasDiscarded()).isFalse(); + assertThat(exec).isEmpty(); + } + + @Test + void errorInMulti() { + redis.multi(); + redis.set(key, value); + redis.lpop(key); + redis.get(key); + TransactionResult values = redis.exec(); + assertThat(values.wasDiscarded()).isFalse(); + assertThat((String) values.get(0)).isEqualTo("OK"); + assertThat(values.get(1) instanceof RedisException).isTrue(); + assertThat((String) values.get(2)).isEqualTo(value); + } + + @Test + void execWithoutMulti() { + assertThatThrownBy(redis::exec).isInstanceOf(RedisCommandExecutionException.class).hasMessageContaining( + "ERR EXEC without MULTI"); + } + + @Test + void multiCalledTwiceShouldFail() { + + redis.multi(); + assertThatThrownBy(redis::multi).isInstanceOf(RedisCommandExecutionException.class).hasMessageContaining( + "ERR MULTI calls can not be nested"); + redis.discard(); + } +} diff 
--git a/src/test/java/io/lettuce/core/commands/reactive/BitReactiveCommandIntegrationTests.java b/src/test/java/io/lettuce/core/commands/reactive/BitReactiveCommandIntegrationTests.java new file mode 100644 index 0000000000..cf32e7d1d4 --- /dev/null +++ b/src/test/java/io/lettuce/core/commands/reactive/BitReactiveCommandIntegrationTests.java @@ -0,0 +1,114 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.commands.reactive; + +import static io.lettuce.core.BitFieldArgs.offset; +import static io.lettuce.core.BitFieldArgs.signed; +import static io.lettuce.core.BitFieldArgs.typeWidthBasedOffset; +import static io.lettuce.core.BitFieldArgs.OverflowType.FAIL; +import static org.assertj.core.api.Assertions.assertThat; + +import javax.inject.Inject; + +import org.junit.jupiter.api.Test; + +import reactor.test.StepVerifier; +import io.lettuce.core.BitFieldArgs; +import io.lettuce.core.RedisClient; +import io.lettuce.core.Value; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.reactive.RedisStringReactiveCommands; +import io.lettuce.core.commands.BitCommandIntegrationTests; +import io.lettuce.test.ReactiveSyncInvocationHandler; + +/** + * @author Mark Paluch + */ +class BitReactiveCommandIntegrationTests extends BitCommandIntegrationTests { + + private RedisStringReactiveCommands reactive; + + @Inject + BitReactiveCommandIntegrationTests(RedisClient client, StatefulRedisConnection connection) { + super(client, ReactiveSyncInvocationHandler.sync(connection)); + this.reactive = connection.reactive(); + } + + @Test + void bitfield() { + + BitFieldArgs bitFieldArgs = BitFieldArgs.Builder.set(signed(8), 0, 1).set(5, 1).incrBy(2, 3).get().get(2); + + StepVerifier.create(reactive.bitfield(key, bitFieldArgs)) + .expectNext(Value.just(0L), Value.just(32L), Value.just(3L), Value.just(0L), Value.just(3L)).verifyComplete(); + + assertThat(bitstring.get(key)).isEqualTo("0000000000010011"); + } + + @Test + void bitfieldGetWithOffset() { + + BitFieldArgs bitFieldArgs = BitFieldArgs.Builder.set(signed(8), 0, 1).get(signed(2), typeWidthBasedOffset(1)); + + StepVerifier.create(reactive.bitfield(key, bitFieldArgs)).expectNext(Value.just(0L), Value.just(0L)).verifyComplete(); + + assertThat(bitstring.get(key)).isEqualTo("10000000"); + } + + @Test + void bitfieldSet() { + + BitFieldArgs bitFieldArgs = BitFieldArgs.Builder.set(signed(8), 0, 5).set(5); + + StepVerifier.create(reactive.bitfield(key, bitFieldArgs)).expectNext(Value.just(0L), Value.just(5L)).verifyComplete(); + + assertThat(bitstring.get(key)).isEqualTo("10100000"); + } + + @Test + void bitfieldWithOffsetSet() { + + StepVerifier.create(reactive.bitfield(key, BitFieldArgs.Builder.set(signed(8), typeWidthBasedOffset(2), 5))) + .expectNextCount(1).verifyComplete(); + + assertThat(bitstring.get(key)).isEqualTo("000000000000000010100000"); + + bitstring.del(key); + StepVerifier.create(reactive.bitfield(key, 
BitFieldArgs.Builder.set(signed(8), offset(2), 5))).expectNextCount(1) + .verifyComplete(); + assertThat(bitstring.get(key)).isEqualTo("1000000000000010"); + } + + @Test + void bitfieldIncrBy() { + + BitFieldArgs bitFieldArgs = BitFieldArgs.Builder.set(signed(8), 0, 5).incrBy(1); + + StepVerifier.create(reactive.bitfield(key, bitFieldArgs)).expectNext(Value.just(0L), Value.just(6L)).verifyComplete(); + + assertThat(bitstring.get(key)).isEqualTo("01100000"); + } + + @Test + void bitfieldOverflow() { + + BitFieldArgs bitFieldArgs = BitFieldArgs.Builder.overflow(FAIL).set(signed(8), 9, 5) + .incrBy(signed(8), Integer.MAX_VALUE); + + StepVerifier.create(reactive.bitfield(key, bitFieldArgs)).expectNext(Value.just(0L)).expectNext(Value.empty()) + .verifyComplete(); + } +} diff --git a/src/test/java/io/lettuce/core/commands/reactive/CustomReactiveCommandIntegrationTests.java b/src/test/java/io/lettuce/core/commands/reactive/CustomReactiveCommandIntegrationTests.java new file mode 100644 index 0000000000..b554489f3f --- /dev/null +++ b/src/test/java/io/lettuce/core/commands/reactive/CustomReactiveCommandIntegrationTests.java @@ -0,0 +1,73 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.commands.reactive; + +import javax.inject.Inject; + +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; + +import reactor.core.publisher.Flux; +import reactor.test.StepVerifier; +import io.lettuce.core.TestSupport; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.reactive.RedisReactiveCommands; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.output.ValueListOutput; +import io.lettuce.core.output.ValueOutput; +import io.lettuce.core.protocol.CommandArgs; +import io.lettuce.core.protocol.CommandType; +import io.lettuce.test.LettuceExtension; + +/** + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +class CustomReactiveCommandIntegrationTests extends TestSupport { + + private final RedisCommands redis; + + @Inject + CustomReactiveCommandIntegrationTests(StatefulRedisConnection connection) { + this.redis = connection.sync(); + this.redis.flushdb(); + } + + @Test + void dispatchGetAndSet() { + + redis.set(key, value); + RedisReactiveCommands reactive = redis.getStatefulConnection().reactive(); + + Flux flux = reactive.dispatch(CommandType.GET, new ValueOutput<>(StringCodec.UTF8), new CommandArgs<>( + StringCodec.UTF8).addKey(key)); + + StepVerifier.create(flux).expectNext(value).verifyComplete(); + } + + @Test + void dispatchList() { + + redis.rpush(key, "a", "b", "c"); + RedisReactiveCommands reactive = redis.getStatefulConnection().reactive(); + + Flux flux = reactive.dispatch(CommandType.LRANGE, new ValueListOutput<>(StringCodec.UTF8), new CommandArgs<>( + StringCodec.UTF8).addKey(key).add(0).add(-1)); + + StepVerifier.create(flux).expectNext("a", "b", "c").verifyComplete(); + } +} diff --git a/src/test/java/io/lettuce/core/commands/reactive/GeoReactiveCommandIntegrationTests.java b/src/test/java/io/lettuce/core/commands/reactive/GeoReactiveCommandIntegrationTests.java new file mode 100644 index 0000000000..8d4a0d6e55 --- /dev/null +++ b/src/test/java/io/lettuce/core/commands/reactive/GeoReactiveCommandIntegrationTests.java @@ -0,0 +1,68 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.commands.reactive; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.AssertionsForClassTypes.offset; + +import javax.inject.Inject; + +import org.junit.jupiter.api.Disabled; +import org.junit.jupiter.api.Test; + +import reactor.test.StepVerifier; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.reactive.RedisReactiveCommands; +import io.lettuce.core.commands.GeoCommandIntegrationTests; +import io.lettuce.test.ReactiveSyncInvocationHandler; + +/** + * @author Mark Paluch + */ +class GeoReactiveCommandIntegrationTests extends GeoCommandIntegrationTests { + + private final StatefulRedisConnection connection; + + @Inject + GeoReactiveCommandIntegrationTests(StatefulRedisConnection connection) { + super(ReactiveSyncInvocationHandler.sync(connection)); + this.connection = connection; + } + + @Test + @Override + public void geopos() { + + RedisReactiveCommands reactive = connection.reactive(); + + prepareGeo(); + + StepVerifier.create(reactive.geopos(key, "Weinheim", "foobar", "Bahn")).consumeNextWith(actual -> { + assertThat(actual.getValue().getX().doubleValue()).isEqualTo(8.6638, offset(0.001)); + + }).consumeNextWith(actual -> { + assertThat(actual.hasValue()).isFalse(); + }).consumeNextWith(actual -> { + assertThat(actual.hasValue()).isTrue(); + }).verifyComplete(); + } + + @Test + @Disabled("API differences") + @Override + public void geoposInTransaction() { + } +} diff --git a/src/test/java/io/lettuce/core/commands/reactive/HLLReactiveCommandIntegrationTests.java b/src/test/java/io/lettuce/core/commands/reactive/HLLReactiveCommandIntegrationTests.java new file mode 100644 index 0000000000..614d1e6075 --- /dev/null +++ b/src/test/java/io/lettuce/core/commands/reactive/HLLReactiveCommandIntegrationTests.java @@ -0,0 +1,33 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.commands.reactive; + +import javax.inject.Inject; + +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.commands.HLLCommandIntegrationTests; +import io.lettuce.test.ReactiveSyncInvocationHandler; + +/** + * @author Mark Paluch + */ +class HLLReactiveCommandIntegrationTests extends HLLCommandIntegrationTests { + + @Inject + HLLReactiveCommandIntegrationTests(StatefulRedisConnection connection) { + super(ReactiveSyncInvocationHandler.sync(connection)); + } +} diff --git a/src/test/java/io/lettuce/core/commands/reactive/HashReactiveCommandIntegrationTests.java b/src/test/java/io/lettuce/core/commands/reactive/HashReactiveCommandIntegrationTests.java new file mode 100644 index 0000000000..4c81cea008 --- /dev/null +++ b/src/test/java/io/lettuce/core/commands/reactive/HashReactiveCommandIntegrationTests.java @@ -0,0 +1,33 @@ +/* + * Copyright 2011-2020 the original author or authors. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.commands.reactive; + +import javax.inject.Inject; + +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.commands.HashCommandIntegrationTests; +import io.lettuce.test.ReactiveSyncInvocationHandler; + +/** + * @author Mark Paluch + */ +class HashReactiveCommandIntegrationTests extends HashCommandIntegrationTests { + + @Inject + HashReactiveCommandIntegrationTests(StatefulRedisConnection connection) { + super(ReactiveSyncInvocationHandler.sync(connection)); + } +} diff --git a/src/test/java/io/lettuce/core/commands/reactive/KeyReactiveCommandIntegrationTests.java b/src/test/java/io/lettuce/core/commands/reactive/KeyReactiveCommandIntegrationTests.java new file mode 100644 index 0000000000..613fe16fdc --- /dev/null +++ b/src/test/java/io/lettuce/core/commands/reactive/KeyReactiveCommandIntegrationTests.java @@ -0,0 +1,33 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.commands.reactive; + +import javax.inject.Inject; + +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.commands.KeyCommandIntegrationTests; +import io.lettuce.test.ReactiveSyncInvocationHandler; + +/** + * @author Mark Paluch + */ +class KeyReactiveCommandIntegrationTests extends KeyCommandIntegrationTests { + + @Inject + KeyReactiveCommandIntegrationTests(StatefulRedisConnection connection) { + super(ReactiveSyncInvocationHandler.sync(connection)); + } +} diff --git a/src/test/java/io/lettuce/core/commands/reactive/ListReactiveCommandIntegrationTests.java b/src/test/java/io/lettuce/core/commands/reactive/ListReactiveCommandIntegrationTests.java new file mode 100644 index 0000000000..5357208c27 --- /dev/null +++ b/src/test/java/io/lettuce/core/commands/reactive/ListReactiveCommandIntegrationTests.java @@ -0,0 +1,33 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.commands.reactive; + +import javax.inject.Inject; + +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.commands.ListCommandIntegrationTests; +import io.lettuce.test.ReactiveSyncInvocationHandler; + +/** + * @author Mark Paluch + */ +class ListReactiveCommandIntegrationTests extends ListCommandIntegrationTests { + + @Inject + ListReactiveCommandIntegrationTests(StatefulRedisConnection connection) { + super(ReactiveSyncInvocationHandler.sync(connection)); + } +} diff --git a/src/test/java/io/lettuce/core/commands/reactive/NumericReactiveCommandIntegrationTests.java b/src/test/java/io/lettuce/core/commands/reactive/NumericReactiveCommandIntegrationTests.java new file mode 100644 index 0000000000..cfb8f6ffe5 --- /dev/null +++ b/src/test/java/io/lettuce/core/commands/reactive/NumericReactiveCommandIntegrationTests.java @@ -0,0 +1,33 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.commands.reactive; + +import javax.inject.Inject; + +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.commands.NumericCommandIntegrationTests; +import io.lettuce.test.ReactiveSyncInvocationHandler; + +/** + * @author Mark Paluch + */ +class NumericReactiveCommandIntegrationTests extends NumericCommandIntegrationTests { + + @Inject + NumericReactiveCommandIntegrationTests(StatefulRedisConnection connection) { + super(ReactiveSyncInvocationHandler.sync(connection)); + } +} diff --git a/src/test/java/io/lettuce/core/commands/reactive/ScriptingReactiveCommandIntegrationTests.java b/src/test/java/io/lettuce/core/commands/reactive/ScriptingReactiveCommandIntegrationTests.java new file mode 100644 index 0000000000..a824ea3064 --- /dev/null +++ b/src/test/java/io/lettuce/core/commands/reactive/ScriptingReactiveCommandIntegrationTests.java @@ -0,0 +1,34 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.commands.reactive; + +import javax.inject.Inject; + +import io.lettuce.core.RedisClient; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.commands.ScriptingCommandIntegrationTests; +import io.lettuce.test.ReactiveSyncInvocationHandler; + +/** + * @author Mark Paluch + */ +class ScriptingReactiveCommandIntegrationTests extends ScriptingCommandIntegrationTests { + + @Inject + ScriptingReactiveCommandIntegrationTests(RedisClient client, StatefulRedisConnection connection) { + super(client, ReactiveSyncInvocationHandler.sync(connection)); + } +} diff --git a/src/test/java/io/lettuce/core/commands/reactive/ServerReactiveCommandIntegrationTests.java b/src/test/java/io/lettuce/core/commands/reactive/ServerReactiveCommandIntegrationTests.java new file mode 100644 index 0000000000..32b2f1a6a4 --- /dev/null +++ b/src/test/java/io/lettuce/core/commands/reactive/ServerReactiveCommandIntegrationTests.java @@ -0,0 +1,75 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.commands.reactive; + +import static org.assertj.core.api.Assertions.assertThat; + +import javax.inject.Inject; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.RedisClient; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.reactive.RedisReactiveCommands; +import io.lettuce.core.commands.ServerCommandIntegrationTests; +import io.lettuce.test.ReactiveSyncInvocationHandler; + +/** + * @author Mark Paluch + */ +class ServerReactiveCommandIntegrationTests extends ServerCommandIntegrationTests { + + private RedisReactiveCommands reactive; + + @Inject + ServerReactiveCommandIntegrationTests(RedisClient client, StatefulRedisConnection connection) { + super(client, ReactiveSyncInvocationHandler.sync(connection)); + this.reactive = connection.reactive(); + } + + /** + * Luckily these commands do not destroy anything in contrast to sync/async. 
+ */ + @Test + void shutdown() { + reactive.shutdown(true); + assertThat(reactive.getStatefulConnection().isOpen()).isTrue(); + } + + @Test + void debugOom() { + reactive.debugOom(); + assertThat(reactive.getStatefulConnection().isOpen()).isTrue(); + } + + @Test + void debugSegfault() { + reactive.debugSegfault(); + assertThat(reactive.getStatefulConnection().isOpen()).isTrue(); + } + + @Test + void debugRestart() { + reactive.debugRestart(1L); + assertThat(reactive.getStatefulConnection().isOpen()).isTrue(); + } + + @Test + void migrate() { + reactive.migrate("host", 1234, "key", 1, 10); + assertThat(reactive.getStatefulConnection().isOpen()).isTrue(); + } +} diff --git a/src/test/java/io/lettuce/core/commands/reactive/SetReactiveCommandIntegrationTests.java b/src/test/java/io/lettuce/core/commands/reactive/SetReactiveCommandIntegrationTests.java new file mode 100644 index 0000000000..34ace19ed4 --- /dev/null +++ b/src/test/java/io/lettuce/core/commands/reactive/SetReactiveCommandIntegrationTests.java @@ -0,0 +1,33 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.commands.reactive; + +import javax.inject.Inject; + +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.commands.SetCommandIntegrationTests; +import io.lettuce.test.ReactiveSyncInvocationHandler; + +/** + * @author Mark Paluch + */ +class SetReactiveCommandIntegrationTests extends SetCommandIntegrationTests { + + @Inject + SetReactiveCommandIntegrationTests(StatefulRedisConnection connection) { + super(ReactiveSyncInvocationHandler.sync(connection)); + } +} diff --git a/src/test/java/io/lettuce/core/commands/reactive/SortReactiveCommandIntegrationTests.java b/src/test/java/io/lettuce/core/commands/reactive/SortReactiveCommandIntegrationTests.java new file mode 100644 index 0000000000..c64a9c164e --- /dev/null +++ b/src/test/java/io/lettuce/core/commands/reactive/SortReactiveCommandIntegrationTests.java @@ -0,0 +1,33 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.commands.reactive; + +import javax.inject.Inject; + +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.commands.SortCommandIntegrationTests; +import io.lettuce.test.ReactiveSyncInvocationHandler; + +/** + * @author Mark Paluch + */ +class SortReactiveCommandIntegrationTests extends SortCommandIntegrationTests { + + @Inject + SortReactiveCommandIntegrationTests(StatefulRedisConnection connection) { + super(ReactiveSyncInvocationHandler.sync(connection)); + } +} diff --git a/src/test/java/io/lettuce/core/commands/reactive/SortedSetReactiveCommandIntegrationTests.java b/src/test/java/io/lettuce/core/commands/reactive/SortedSetReactiveCommandIntegrationTests.java new file mode 100644 index 0000000000..87a997895b --- /dev/null +++ b/src/test/java/io/lettuce/core/commands/reactive/SortedSetReactiveCommandIntegrationTests.java @@ -0,0 +1,33 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.commands.reactive; + +import javax.inject.Inject; + +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.commands.SortedSetCommandIntegrationTests; +import io.lettuce.test.ReactiveSyncInvocationHandler; + +/** + * @author Mark Paluch + */ +class SortedSetReactiveCommandIntegrationTests extends SortedSetCommandIntegrationTests { + + @Inject + SortedSetReactiveCommandIntegrationTests(StatefulRedisConnection connection) { + super(ReactiveSyncInvocationHandler.sync(connection)); + } +} diff --git a/src/test/java/io/lettuce/core/commands/reactive/StreamReactiveCommandIntegrationTests.java b/src/test/java/io/lettuce/core/commands/reactive/StreamReactiveCommandIntegrationTests.java new file mode 100644 index 0000000000..37964a579b --- /dev/null +++ b/src/test/java/io/lettuce/core/commands/reactive/StreamReactiveCommandIntegrationTests.java @@ -0,0 +1,33 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.commands.reactive; + +import javax.inject.Inject; + +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.commands.StreamCommandIntegrationTests; +import io.lettuce.test.ReactiveSyncInvocationHandler; + +/** + * @author Mark Paluch + */ +class StreamReactiveCommandIntegrationTests extends StreamCommandIntegrationTests { + + @Inject + StreamReactiveCommandIntegrationTests(StatefulRedisConnection connection) { + super(ReactiveSyncInvocationHandler.sync(connection)); + } +} diff --git a/src/test/java/io/lettuce/core/commands/reactive/StringReactiveCommandIntegrationTests.java b/src/test/java/io/lettuce/core/commands/reactive/StringReactiveCommandIntegrationTests.java new file mode 100644 index 0000000000..7522608390 --- /dev/null +++ b/src/test/java/io/lettuce/core/commands/reactive/StringReactiveCommandIntegrationTests.java @@ -0,0 +1,65 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.commands.reactive; + +import javax.inject.Inject; + +import org.junit.jupiter.api.Test; + +import reactor.core.publisher.Flux; +import reactor.test.StepVerifier; +import io.lettuce.core.KeyValue; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.reactive.RedisReactiveCommands; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.commands.StringCommandIntegrationTests; +import io.lettuce.test.ReactiveSyncInvocationHandler; + +/** + * @author Mark Paluch + */ +class StringReactiveCommandIntegrationTests extends StringCommandIntegrationTests { + + private final RedisCommands redis; + private final RedisReactiveCommands reactive; + + @Inject + StringReactiveCommandIntegrationTests(StatefulRedisConnection connection) { + super(ReactiveSyncInvocationHandler.sync(connection)); + this.redis = connection.sync(); + this.reactive = connection.reactive(); + } + + @Test + void mget() { + + redis.set(key, value); + redis.set("key1", value); + redis.set("key2", value); + + Flux> mget = reactive.mget(key, "key1", "key2"); + StepVerifier.create(mget.next()).expectNext(KeyValue.just(key, value)).verifyComplete(); + } + + @Test + void mgetEmpty() { + + redis.set(key, value); + + Flux> mget = reactive.mget("unknown"); + StepVerifier.create(mget.next()).expectNext(KeyValue.empty("unknown")).verifyComplete(); + } +} diff --git a/src/test/java/io/lettuce/core/commands/reactive/TransactionReactiveCommandIntegrationTests.java b/src/test/java/io/lettuce/core/commands/reactive/TransactionReactiveCommandIntegrationTests.java new file mode 100644 index 0000000000..ce56d87ee6 --- /dev/null +++ b/src/test/java/io/lettuce/core/commands/reactive/TransactionReactiveCommandIntegrationTests.java @@ -0,0 +1,146 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.commands.reactive; + +import static org.assertj.core.api.Assertions.assertThat; + +import javax.inject.Inject; + +import org.junit.jupiter.api.Test; + +import reactor.test.StepVerifier; +import io.lettuce.core.KeyValue; +import io.lettuce.core.RedisClient; +import io.lettuce.core.RedisException; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.reactive.RedisReactiveCommands; +import io.lettuce.core.commands.TransactionCommandIntegrationTests; +import io.lettuce.test.ReactiveSyncInvocationHandler; + +/** + * @author Mark Paluch + */ +public class TransactionReactiveCommandIntegrationTests extends TransactionCommandIntegrationTests { + + private final RedisClient client; + private final RedisReactiveCommands commands; + private final StatefulRedisConnection connection; + + @Inject + public TransactionReactiveCommandIntegrationTests(RedisClient client, StatefulRedisConnection connection) { + super(client, ReactiveSyncInvocationHandler.sync(connection)); + this.client = client; + this.commands = connection.reactive(); + this.connection = connection; + } + + @Test + void discard() { + + StepVerifier.create(commands.multi()).expectNext("OK").verifyComplete(); + + commands.set(key, value).toProcessor(); + + StepVerifier.create(commands.discard()).expectNext("OK").verifyComplete(); + StepVerifier.create(commands.get(key)).verifyComplete(); + } + + @Test + void watchRollback() { + + StatefulRedisConnection otherConnection = client.connect(); + + otherConnection.sync().set(key, value); + + StepVerifier.create(commands.watch(key)).expectNext("OK").verifyComplete(); + StepVerifier.create(commands.multi()).expectNext("OK").verifyComplete(); + + commands.set(key, value).toProcessor(); + + otherConnection.sync().del(key); + + StepVerifier.create(commands.exec()).consumeNextWith(actual -> { + assertThat(actual).isNotNull(); + assertThat(actual.wasDiscarded()).isTrue(); + }); + + otherConnection.close(); + } + + @Test + void execSingular() { + + StepVerifier.create(commands.multi()).expectNext("OK").verifyComplete(); + + connection.sync().set(key, value); + + StepVerifier.create(commands.exec()).consumeNextWith(actual -> assertThat(actual).contains("OK")).verifyComplete(); + StepVerifier.create(commands.get(key)).expectNext(value).verifyComplete(); + } + + @Test + void errorInMulti() { + + commands.multi().toProcessor(); + commands.set(key, value).toProcessor(); + commands.lpop(key).toProcessor(); + commands.get(key).toProcessor(); + + StepVerifier.create(commands.exec()).consumeNextWith(actual -> { + + assertThat((String) actual.get(0)).isEqualTo("OK"); + assertThat((Object) actual.get(1)).isInstanceOf(RedisException.class); + assertThat((String) actual.get(2)).isEqualTo(value); + }).verifyComplete(); + } + + @Test + void resultOfMultiIsContainedInCommandFlux() { + + commands.multi().toProcessor(); + + StepVerifier.Step set1 = StepVerifier.create(commands.set("key1", "value1")).expectNext("OK").thenAwait(); + StepVerifier.Step set2 = StepVerifier.create(commands.set("key2", "value2")).expectNext("OK").thenAwait(); + 
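// all commands in this test are queued inside the MULTI; their publishers complete only once EXEC below runs +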
StepVerifier.Step> mget = StepVerifier.create(commands.mget("key1", "key2")) + .expectNext(KeyValue.just("key1", "value1"), KeyValue.just("key2", "value2")).thenAwait(); + StepVerifier.Step llen = StepVerifier.create(commands.llen("something")).expectNext(0L).thenAwait(); + + StepVerifier.create(commands.exec()).then(() -> { + + set1.verifyComplete(); + set2.verifyComplete(); + mget.verifyComplete(); + llen.verifyComplete(); + + }).expectNextCount(1).verifyComplete(); + } + + @Test + void resultOfMultiIsContainedInExecObservable() { + + commands.multi().toProcessor(); + commands.set("key1", "value1").toProcessor(); + commands.set("key2", "value2").toProcessor(); + commands.mget("key1", "key2").collectList().toProcessor(); + commands.llen("something").toProcessor(); + + StepVerifier.create(commands.exec()).consumeNextWith(actual -> { + + assertThat(actual).contains("OK", "OK", list(kv("key1", "value1"), kv("key2", "value2")), 0L); + + }).verifyComplete(); + } +} diff --git a/src/test/java/io/lettuce/core/commands/transactional/BitTxCommandIntegrationTests.java b/src/test/java/io/lettuce/core/commands/transactional/BitTxCommandIntegrationTests.java new file mode 100644 index 0000000000..a1ffaf9dd1 --- /dev/null +++ b/src/test/java/io/lettuce/core/commands/transactional/BitTxCommandIntegrationTests.java @@ -0,0 +1,33 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.commands.transactional; + +import javax.inject.Inject; + +import io.lettuce.core.RedisClient; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.commands.BitCommandIntegrationTests; + +/** + * @author Mark Paluch + */ +class BitTxCommandIntegrationTests extends BitCommandIntegrationTests { + + @Inject + BitTxCommandIntegrationTests(RedisClient client, StatefulRedisConnection connection) { + super(client, TxSyncInvocationHandler.sync(connection)); + } +} diff --git a/src/test/java/io/lettuce/core/commands/transactional/GeoTxCommandIntegrationTests.java b/src/test/java/io/lettuce/core/commands/transactional/GeoTxCommandIntegrationTests.java new file mode 100644 index 0000000000..40ee77a961 --- /dev/null +++ b/src/test/java/io/lettuce/core/commands/transactional/GeoTxCommandIntegrationTests.java @@ -0,0 +1,74 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.commands.transactional; + +import javax.inject.Inject; + +import org.junit.jupiter.api.Disabled; + +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.commands.GeoCommandIntegrationTests; + +/** + * @author Mark Paluch + */ +class GeoTxCommandIntegrationTests extends GeoCommandIntegrationTests { + + @Inject + GeoTxCommandIntegrationTests(StatefulRedisConnection connection) { + super(TxSyncInvocationHandler.sync(connection)); + } + + @Disabled + @Override + public void georadiusbymemberWithArgsInTransaction() { + } + + @Disabled + @Override + public void geoaddInTransaction() { + } + + @Disabled + @Override + public void geoaddMultiInTransaction() { + } + + @Disabled + @Override + public void geoposInTransaction() { + } + + @Disabled + @Override + public void georadiusWithArgsAndTransaction() { + } + + @Disabled + @Override + public void georadiusInTransaction() { + } + + @Disabled + @Override + public void geodistInTransaction() { + } + + @Disabled + @Override + public void geohashInTransaction() { + } +} diff --git a/src/test/java/io/lettuce/core/commands/transactional/HLLTxCommandIntegrationTests.java b/src/test/java/io/lettuce/core/commands/transactional/HLLTxCommandIntegrationTests.java new file mode 100644 index 0000000000..87c19af4f0 --- /dev/null +++ b/src/test/java/io/lettuce/core/commands/transactional/HLLTxCommandIntegrationTests.java @@ -0,0 +1,32 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.commands.transactional; + +import javax.inject.Inject; + +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.commands.HLLCommandIntegrationTests; + +/** + * @author Mark Paluch + */ +class HLLTxCommandIntegrationTests extends HLLCommandIntegrationTests { + + @Inject + HLLTxCommandIntegrationTests(StatefulRedisConnection connection) { + super(TxSyncInvocationHandler.sync(connection)); + } +} diff --git a/src/test/java/io/lettuce/core/commands/transactional/HashTxCommandIntegrationTests.java b/src/test/java/io/lettuce/core/commands/transactional/HashTxCommandIntegrationTests.java new file mode 100644 index 0000000000..e20a9d030d --- /dev/null +++ b/src/test/java/io/lettuce/core/commands/transactional/HashTxCommandIntegrationTests.java @@ -0,0 +1,32 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.commands.transactional; + +import javax.inject.Inject; + +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.commands.HashCommandIntegrationTests; + +/** + * @author Mark Paluch + */ +class HashTxCommandIntegrationTests extends HashCommandIntegrationTests { + + @Inject + HashTxCommandIntegrationTests(StatefulRedisConnection connection) { + super(TxSyncInvocationHandler.sync(connection)); + } +} diff --git a/src/test/java/io/lettuce/core/commands/transactional/KeyTxCommandIntegrationTests.java b/src/test/java/io/lettuce/core/commands/transactional/KeyTxCommandIntegrationTests.java new file mode 100644 index 0000000000..9859da767f --- /dev/null +++ b/src/test/java/io/lettuce/core/commands/transactional/KeyTxCommandIntegrationTests.java @@ -0,0 +1,39 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.commands.transactional; + +import javax.inject.Inject; + +import org.junit.jupiter.api.Disabled; + +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.commands.KeyCommandIntegrationTests; + +/** + * @author Mark Paluch + */ +public class KeyTxCommandIntegrationTests extends KeyCommandIntegrationTests { + + @Inject + KeyTxCommandIntegrationTests(StatefulRedisConnection connection) { + super(TxSyncInvocationHandler.sync(connection)); + } + + @Disabled + @Override + public void move() { + } +} diff --git a/src/test/java/io/lettuce/core/commands/transactional/ListTxCommandIntegrationTests.java b/src/test/java/io/lettuce/core/commands/transactional/ListTxCommandIntegrationTests.java new file mode 100644 index 0000000000..7c4d2201b9 --- /dev/null +++ b/src/test/java/io/lettuce/core/commands/transactional/ListTxCommandIntegrationTests.java @@ -0,0 +1,32 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.commands.transactional; + +import javax.inject.Inject; + +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.commands.ListCommandIntegrationTests; + +/** + * @author Mark Paluch + */ +class ListTxCommandIntegrationTests extends ListCommandIntegrationTests { + + @Inject + ListTxCommandIntegrationTests(StatefulRedisConnection connection) { + super(TxSyncInvocationHandler.sync(connection)); + } +} diff --git a/src/test/java/io/lettuce/core/commands/transactional/SetTxCommandIntegrationTests.java b/src/test/java/io/lettuce/core/commands/transactional/SetTxCommandIntegrationTests.java new file mode 100644 index 0000000000..7a138dfbe3 --- /dev/null +++ b/src/test/java/io/lettuce/core/commands/transactional/SetTxCommandIntegrationTests.java @@ -0,0 +1,32 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.commands.transactional; + +import javax.inject.Inject; + +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.commands.SetCommandIntegrationTests; + +/** + * @author Mark Paluch + */ +class SetTxCommandIntegrationTests extends SetCommandIntegrationTests { + + @Inject + SetTxCommandIntegrationTests(StatefulRedisConnection connection) { + super(TxSyncInvocationHandler.sync(connection)); + } +} diff --git a/src/test/java/io/lettuce/core/commands/transactional/SortTxCommandIntegrationTests.java b/src/test/java/io/lettuce/core/commands/transactional/SortTxCommandIntegrationTests.java new file mode 100644 index 0000000000..bbd7b2ff58 --- /dev/null +++ b/src/test/java/io/lettuce/core/commands/transactional/SortTxCommandIntegrationTests.java @@ -0,0 +1,32 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.commands.transactional; + +import javax.inject.Inject; + +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.commands.SortCommandIntegrationTests; + +/** + * @author Mark Paluch + */ +class SortTxCommandIntegrationTests extends SortCommandIntegrationTests { + + @Inject + SortTxCommandIntegrationTests(StatefulRedisConnection connection) { + super(TxSyncInvocationHandler.sync(connection)); + } +} diff --git a/src/test/java/io/lettuce/core/commands/transactional/SortedSetTxCommandIntegrationTests.java b/src/test/java/io/lettuce/core/commands/transactional/SortedSetTxCommandIntegrationTests.java new file mode 100644 index 0000000000..f2d5d7a102 --- /dev/null +++ b/src/test/java/io/lettuce/core/commands/transactional/SortedSetTxCommandIntegrationTests.java @@ -0,0 +1,32 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.commands.transactional; + +import javax.inject.Inject; + +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.commands.SortedSetCommandIntegrationTests; + +/** + * @author Mark Paluch + */ +class SortedSetTxCommandIntegrationTests extends SortedSetCommandIntegrationTests { + + @Inject + SortedSetTxCommandIntegrationTests(StatefulRedisConnection connection) { + super(TxSyncInvocationHandler.sync(connection)); + } +} diff --git a/src/test/java/io/lettuce/core/commands/transactional/StringTxCommandIntegrationTests.java b/src/test/java/io/lettuce/core/commands/transactional/StringTxCommandIntegrationTests.java new file mode 100644 index 0000000000..b9eb6f0dc4 --- /dev/null +++ b/src/test/java/io/lettuce/core/commands/transactional/StringTxCommandIntegrationTests.java @@ -0,0 +1,32 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.commands.transactional; + +import javax.inject.Inject; + +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.commands.StringCommandIntegrationTests; + +/** + * @author Mark Paluch + */ +class StringTxCommandIntegrationTests extends StringCommandIntegrationTests { + + @Inject + StringTxCommandIntegrationTests(StatefulRedisConnection connection) { + super(TxSyncInvocationHandler.sync(connection)); + } +} diff --git a/src/test/java/io/lettuce/core/commands/transactional/TxSyncInvocationHandler.java b/src/test/java/io/lettuce/core/commands/transactional/TxSyncInvocationHandler.java new file mode 100644 index 0000000000..318a1b685e --- /dev/null +++ b/src/test/java/io/lettuce/core/commands/transactional/TxSyncInvocationHandler.java @@ -0,0 +1,118 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.commands.transactional; + +import java.lang.reflect.InvocationTargetException; +import java.lang.reflect.Method; +import java.lang.reflect.Proxy; + +import io.lettuce.core.TransactionResult; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.internal.AbstractInvocationHandler; + +/** + * Invocation handler for testing purposes that wraps each call into a transaction. 
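+ * Each proxied method call issues {@code MULTI}, a {@code PING} filler command, the target command and {@code EXEC},
+ * then unwraps the command's own result (index 1 of the {@link TransactionResult}), rethrowing it if it is an exception.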
+ * + * @param + * @param + */ +class TxSyncInvocationHandler extends AbstractInvocationHandler { + + private final Object api; + private final Method multi; + private final Method discard; + private final Method exec; + private final Method ping; + + private TxSyncInvocationHandler(Object api) throws Exception { + + this.api = api; + this.multi = api.getClass().getMethod("multi"); + this.exec = api.getClass().getMethod("exec"); + this.discard = api.getClass().getMethod("discard"); + this.ping = api.getClass().getMethod("ping"); + } + + @Override + @SuppressWarnings("unchecked") + protected Object handleInvocation(Object proxy, Method method, Object[] args) throws Throwable { + + try { + + if (method.getName().equals("exec") || method.getName().equals("multi")) { + throw new IllegalStateException("Cannot execute transaction commands over this transactional wrapper"); + } + + Method targetMethod = api.getClass().getMethod(method.getName(), method.getParameterTypes()); + + if (!method.getName().equals("close") && !method.getName().equals("getStatefulConnection")) { + + multi.invoke(api); + ping.invoke(api); + + targetMethod.invoke(api, args); + + Object result = exec.invoke(api); + + if (result == null || !(result instanceof TransactionResult)) { + return result; + } + + TransactionResult txResult = (TransactionResult) result; + + if (txResult.size() > 1) { + + result = txResult.get(1); + if (result instanceof Exception) { + throw (Exception) result; + } + + return result; + } + + return null; + } + + return targetMethod.invoke(api, args); + + } catch (InvocationTargetException e) { + try { + discard.invoke(api); + } catch (Exception e1) { + } + throw e.getTargetException(); + } + } + + /** + * Create a transactional wrapper proxy for {@link RedisCommands}. + * + * @param connection the connection + * @return the wrapper proxy. + */ + @SuppressWarnings("unchecked") + public static RedisCommands sync(StatefulRedisConnection connection) { + + try { + TxSyncInvocationHandler handler = new TxSyncInvocationHandler<>(connection.sync()); + return (RedisCommands) Proxy.newProxyInstance(handler.getClass().getClassLoader(), + new Class[] { RedisCommands.class }, handler); + } catch (Exception e) { + throw new IllegalStateException(e); + } + } +} diff --git a/src/test/java/io/lettuce/core/dynamic/BatchExecutableCommandLookupStrategyUnitTests.java b/src/test/java/io/lettuce/core/dynamic/BatchExecutableCommandLookupStrategyUnitTests.java new file mode 100644 index 0000000000..530b409b32 --- /dev/null +++ b/src/test/java/io/lettuce/core/dynamic/BatchExecutableCommandLookupStrategyUnitTests.java @@ -0,0 +1,110 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.dynamic; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; +import static org.mockito.Matchers.any; +import static org.mockito.Mockito.when; + +import java.util.Collections; +import java.util.concurrent.Future; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; +import org.mockito.Mock; +import org.mockito.junit.jupiter.MockitoExtension; +import org.mockito.junit.jupiter.MockitoSettings; +import org.mockito.quality.Strictness; + +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.dynamic.domain.Timeout; +import io.lettuce.core.dynamic.output.CommandOutputFactory; +import io.lettuce.core.dynamic.output.CommandOutputFactoryResolver; + +/** + * @author Mark Paluch + */ +@ExtendWith(MockitoExtension.class) +@MockitoSettings(strictness = Strictness.LENIENT) +class BatchExecutableCommandLookupStrategyUnitTests { + + @Mock + private RedisCommandsMetadata metadata; + @Mock + private StatefulRedisConnection connection; + + @Mock + private CommandOutputFactoryResolver outputFactoryResolver; + + @Mock + private CommandOutputFactory outputFactory; + + private BatchExecutableCommandLookupStrategy sut; + + @BeforeEach + void before() { + sut = new BatchExecutableCommandLookupStrategy(Collections.singletonList(StringCodec.UTF8), outputFactoryResolver, + CommandMethodVerifier.NONE, Batcher.NONE, connection); + + when(outputFactoryResolver.resolveCommandOutput(any())).thenReturn(outputFactory); + } + + @Test + void shouldCreateAsyncBatchCommand() throws Exception { + + ExecutableCommand result = sut.resolveCommandMethod(getMethod("async"), metadata); + + assertThat(result).isInstanceOf(BatchExecutableCommand.class); + } + + @Test + void shouldCreateSyncBatchCommand() throws Exception { + + ExecutableCommand result = sut.resolveCommandMethod(getMethod("justVoid"), metadata); + + assertThat(result).isInstanceOf(BatchExecutableCommand.class); + } + + @Test + void shouldNotAllowTimeoutParameter() { + assertThatThrownBy(() -> sut.resolveCommandMethod(getMethod("withTimeout", String.class, Timeout.class), metadata)) + .isInstanceOf(IllegalArgumentException.class); + } + + @Test + void shouldNotAllowSynchronousReturnTypes() { + assertThatThrownBy(() -> sut.resolveCommandMethod(getMethod("withReturnType"), metadata)).isInstanceOf( + IllegalArgumentException.class); + } + + private CommandMethod getMethod(String name, Class... parameterTypes) throws NoSuchMethodException { + return DeclaredCommandMethod.create(BatchingCommands.class.getDeclaredMethod(name, parameterTypes)); + } + + private static interface BatchingCommands { + + Future async(); + + String withTimeout(String key, Timeout timeout); + + String withReturnType(); + + void justVoid(); + } +} diff --git a/src/test/java/io/lettuce/core/dynamic/CommandSegmentCommandFactoryUnitTests.java b/src/test/java/io/lettuce/core/dynamic/CommandSegmentCommandFactoryUnitTests.java new file mode 100644 index 0000000000..27512b950c --- /dev/null +++ b/src/test/java/io/lettuce/core/dynamic/CommandSegmentCommandFactoryUnitTests.java @@ -0,0 +1,214 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.junit.Assert.fail; + +import java.util.concurrent.Future; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.ScanArgs; +import io.lettuce.core.SetArgs; +import io.lettuce.core.codec.ByteArrayCodec; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.dynamic.annotation.Command; +import io.lettuce.core.dynamic.annotation.Param; +import io.lettuce.core.dynamic.annotation.Value; +import io.lettuce.core.dynamic.domain.Timeout; +import io.lettuce.core.dynamic.output.CodecAwareOutputFactoryResolver; +import io.lettuce.core.dynamic.output.OutputRegistry; +import io.lettuce.core.dynamic.output.OutputRegistryCommandOutputFactoryResolver; +import io.lettuce.core.dynamic.segment.AnnotationCommandSegmentFactory; +import io.lettuce.core.dynamic.segment.CommandSegmentFactory; +import io.lettuce.core.dynamic.support.ReflectionUtils; +import io.lettuce.core.protocol.CommandType; +import io.lettuce.core.protocol.RedisCommand; + +/** + * @author Mark Paluch + */ +class CommandSegmentCommandFactoryUnitTests { + + @Test + void setKeyValue() { + + RedisCommand command = createCommand(methodOf(Commands.class, "set", String.class, String.class), + new StringCodec(), "key", "value"); + + assertThat(toString(command)).isEqualTo("SET key key"); + assertThat(command.getType()).isSameAs(CommandType.SET); + } + + @Test + void setKeyValueWithByteArrayCodec() { + + RedisCommand command = createCommand(methodOf(Commands.class, "set", String.class, String.class), + new ByteArrayCodec(), "key", "value"); + + assertThat(toString(command)).isEqualTo("SET key value"); + } + + @Test + void setKeyValueWithHintedValue() { + + RedisCommand command = createCommand(methodOf(Commands.class, "set2", String.class, String.class), + new StringCodec(), "key", "value"); + + assertThat(toString(command)).isEqualTo("SET key value"); + assertThat(command.getType()).isSameAs(CommandType.SET); + } + + @Test + void lowercaseCommandResolvesToStringCommand() { + + RedisCommand command = createCommand(methodOf(Commands.class, "set3", String.class, String.class), + new StringCodec(), "key", "value"); + + assertThat(toString(command)).isEqualTo("set key value"); + assertThat(command.getType()).isNotInstanceOf(CommandType.class); + } + + @Test + void setWithArgs() { + + RedisCommand command = createCommand( + methodOf(Commands.class, "set", String.class, String.class, SetArgs.class), new StringCodec(), "key", "value", + SetArgs.Builder.ex(123).nx()); + + assertThat(toString(command)).isEqualTo("SET key key EX 123 NX"); + } + + @Test + void varargsMethodWithParameterIndexAccess() { + + RedisCommand command = createCommand( + methodOf(Commands.class, "varargsWithParamIndexes", ScanArgs.class, String[].class), new StringCodec(), + ScanArgs.Builder.limit(1), new String[] { "a", "b" }); + + assertThat(toString(command)).isEqualTo("MGET a b COUNT 1"); + } + + @Test + void clientSetname() { + + RedisCommand command = createCommand(methodOf(Commands.class, 
"clientSetname", String.class), + new ByteArrayCodec(), "name"); + + assertThat(toString(command)).isEqualTo("CLIENT SETNAME name"); + } + + @Test + void annotatedClientSetname() { + + RedisCommand command = createCommand(methodOf(Commands.class, "methodWithNamedParameters", String.class), + new StringCodec(), "name"); + + assertThat(toString(command)).isEqualTo("CLIENT SETNAME key"); + } + + @Test + void asyncWithTimeout() { + + try { + createCommand(methodOf(MethodsWithTimeout.class, "async", String.class, Timeout.class), new StringCodec()); + fail("Missing CommandCreationException"); + } catch (CommandCreationException e) { + assertThat(e).hasMessageContaining("Asynchronous command methods do not support Timeout parameters"); + } + } + + @Test + void syncWithTimeout() { + + createCommand(methodOf(MethodsWithTimeout.class, "sync", String.class, Timeout.class), new StringCodec(), "hello", + null); + } + + @Test + void resolvesUnknownCommandToStringBackedCommandType() { + + RedisCommand command = createCommand(methodOf(Commands.class, "unknownCommand"), new StringCodec()); + + assertThat(toString(command)).isEqualTo("XYZ"); + assertThat(command.getType()).isNotInstanceOf(CommandType.class); + } + + private CommandMethod methodOf(Class commandInterface, String methodName, Class... args) { + return DeclaredCommandMethod.create(ReflectionUtils.findMethod(commandInterface, methodName, args)); + } + + @SuppressWarnings("unchecked") + private RedisCommand createCommand(CommandMethod commandMethod, RedisCodec codec, Object... args) { + + CommandSegmentFactory segmentFactory = new AnnotationCommandSegmentFactory(); + CodecAwareOutputFactoryResolver outputFactoryResolver = new CodecAwareOutputFactoryResolver( + new OutputRegistryCommandOutputFactoryResolver(new OutputRegistry()), codec); + CommandSegmentCommandFactory factory = new CommandSegmentCommandFactory( + segmentFactory.createCommandSegments(commandMethod), commandMethod, codec, outputFactoryResolver); + + return factory.createCommand(args); + } + + @SuppressWarnings("unchecked") + private String toString(RedisCommand command) { + + StringBuilder builder = new StringBuilder(); + + builder.append(command.getType().name()); + + String commandString = command.getArgs().toCommandString(); + + if (!commandString.isEmpty()) { + builder.append(' ').append(commandString); + } + + return builder.toString(); + } + + private interface Commands { + + boolean set(String key, String value); + + @Command("SET") + boolean set2(String key, @Value String value); + + @Command("set") + boolean set3(String key, @Value String value); + + boolean set(String key, String value, SetArgs setArgs); + + boolean clientSetname(String connectionName); + + @Command("CLIENT SETNAME :connectionName") + boolean methodWithNamedParameters(@Param("connectionName") String connectionName); + + @Command("MGET ?1 ?0") + String varargsWithParamIndexes(ScanArgs scanArgs, String... keys); + + @Command("XYZ") + boolean unknownCommand(); + } + + private static interface MethodsWithTimeout { + + Future async(String key, Timeout timeout); + + String sync(String key, Timeout timeout); + } +} diff --git a/src/test/java/io/lettuce/core/dynamic/ConversionServiceUnitTests.java b/src/test/java/io/lettuce/core/dynamic/ConversionServiceUnitTests.java new file mode 100644 index 0000000000..330582d39d --- /dev/null +++ b/src/test/java/io/lettuce/core/dynamic/ConversionServiceUnitTests.java @@ -0,0 +1,87 @@ +/* + * Copyright 2016-2020 the original author or authors. 
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * https://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package io.lettuce.core.dynamic;
+
+import static org.assertj.core.api.Assertions.assertThat;
+import static org.junit.Assert.fail;
+
+import java.util.function.Function;
+
+import org.junit.jupiter.api.Test;
+
+import reactor.core.publisher.Flux;
+import reactor.core.publisher.Mono;
+import io.reactivex.Observable;
+
+/**
+ * @author Mark Paluch
+ */
+class ConversionServiceUnitTests {
+
+    private ConversionService sut = new ConversionService();
+
+    @Test
+    void getConverter() {
+
+        sut.addConverter(new FluxToObservableConverter());
+        sut.addConverter(new MonoToObservableConverter());
+
+        assertThat(sut.getConverter(Flux.just("").getClass(), Observable.class)).isNotNull()
+                .isInstanceOf(FluxToObservableConverter.class);
+
+        try {
+            sut.getConverter(Flux.just("").getClass(), String.class);
+            fail("Missing IllegalArgumentException");
+        } catch (IllegalArgumentException e) {
+            assertThat(e).hasMessageContaining("No converter found for reactor.core.publisher.FluxJust to java.lang.String");
+        }
+    }
+
+    @Test
+    void canConvert() {
+
+        sut.addConverter(new FluxToObservableConverter());
+        sut.addConverter(new MonoToObservableConverter());
+
+        assertThat(sut.canConvert(Flux.class, Observable.class)).isTrue();
+        assertThat(sut.canConvert(Observable.class, Flux.class)).isFalse();
+    }
+
+    @Test
+    void convert() {
+
+        sut.addConverter(new FluxToObservableConverter());
+        sut.addConverter(new MonoToObservableConverter());
+
+        Observable<String> observable = sut.convert(Mono.just("hello"), Observable.class);
+        observable.test().assertValue("world").assertComplete();
+    }
+
+    private class FluxToObservableConverter implements Function<Flux<String>, Observable<String>> {
+        @Override
+        public Observable<String> apply(Flux<String> source) {
+            return null;
+        }
+    }
+
+    private class MonoToObservableConverter implements Function<Mono<String>, Observable<String>> {
+        @Override
+        public Observable<String> apply(Mono<String> source) {
+            return Observable.just("world");
+        }
+    }
+
+}
diff --git a/src/test/java/io/lettuce/core/dynamic/DeclaredCommandMethodUnitTests.java b/src/test/java/io/lettuce/core/dynamic/DeclaredCommandMethodUnitTests.java
new file mode 100644
index 0000000000..2b65c57f37
--- /dev/null
+++ b/src/test/java/io/lettuce/core/dynamic/DeclaredCommandMethodUnitTests.java
@@ -0,0 +1,71 @@
+/*
+ * Copyright 2011-2020 the original author or authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * https://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */ +package io.lettuce.core.dynamic; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.lang.reflect.Method; +import java.util.concurrent.Future; + +import org.junit.jupiter.api.Test; + +import reactor.core.publisher.Flux; + +/** + * @author Mark Paluch + */ +class DeclaredCommandMethodUnitTests { + + @Test + void shouldResolveConcreteType() throws Exception { + + CommandMethod commandMethod = DeclaredCommandMethod.create(getMethod("getString")); + + assertThat(commandMethod.getActualReturnType().getType()).isEqualTo(String.class); + assertThat(commandMethod.getReturnType().getType()).isEqualTo(String.class); + } + + @Test + void shouldResolveFutureComponentType() throws Exception { + + CommandMethod commandMethod = DeclaredCommandMethod.create(getMethod("getFuture")); + + assertThat(commandMethod.getActualReturnType().getRawClass()).isEqualTo(String.class); + assertThat(commandMethod.getReturnType().getRawClass()).isEqualTo(Future.class); + } + + @Test + void shouldResolveFluxComponentType() throws Exception { + + CommandMethod commandMethod = DeclaredCommandMethod.create(getMethod("getFlux")); + + assertThat(commandMethod.getActualReturnType().getRawClass()).isEqualTo(Flux.class); + assertThat(commandMethod.getReturnType().getRawClass()).isEqualTo(Flux.class); + } + + private Method getMethod(String name) throws NoSuchMethodException { + return MyInterface.class.getDeclaredMethod(name); + } + + private interface MyInterface { + + String getString(); + + Future getFuture(); + + Flux getFlux(); + } +} diff --git a/src/test/java/io/lettuce/core/dynamic/DefaultCommandMethodVerifierUnitTests.java b/src/test/java/io/lettuce/core/dynamic/DefaultCommandMethodVerifierUnitTests.java new file mode 100644 index 0000000000..79c7eda34a --- /dev/null +++ b/src/test/java/io/lettuce/core/dynamic/DefaultCommandMethodVerifierUnitTests.java @@ -0,0 +1,149 @@ +/* + * Copyright 2016-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.dynamic; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Fail.fail; + +import java.lang.reflect.Method; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; + +import io.lettuce.core.GeoCoordinates; +import io.lettuce.core.KeyValue; +import io.lettuce.core.dynamic.annotation.Command; +import io.lettuce.core.dynamic.annotation.Param; +import io.lettuce.core.dynamic.segment.AnnotationCommandSegmentFactory; +import io.lettuce.core.dynamic.segment.CommandSegmentFactory; +import io.lettuce.core.dynamic.segment.CommandSegments; +import io.lettuce.core.dynamic.support.ReflectionUtils; +import io.lettuce.core.internal.LettuceLists; +import io.lettuce.core.models.command.CommandDetail; + +/** + * @author Mark Paluch + */ +class DefaultCommandMethodVerifierUnitTests { + + private DefaultCommandMethodVerifier sut; + + @BeforeEach + void before() { + + CommandDetail mget = new CommandDetail("mget", -2, null, 0, 0, 0); + CommandDetail randomkey = new CommandDetail("randomkey", 1, null, 0, 0, 0); + CommandDetail rpop = new CommandDetail("rpop", 2, null, 0, 0, 0); + CommandDetail lpop = new CommandDetail("lpop", 2, null, 0, 0, 0); + CommandDetail set = new CommandDetail("set", 3, null, 0, 0, 0); + CommandDetail geoadd = new CommandDetail("geoadd", -4, null, 0, 0, 0); + + sut = new DefaultCommandMethodVerifier(LettuceLists.newList(mget, randomkey, rpop, lpop, set, geoadd)); + } + + @Test + void misspelledName() { + + try { + validateMethod("megt"); + fail("Missing CommandMethodSyntaxException"); + } catch (CommandMethodSyntaxException e) { + assertThat(e).hasMessageContaining("Command MEGT does not exist. Did you mean: MGET, SET?"); + } + } + + @Test + void tooFewAtLeastParameters() { + + try { + validateMethod("mget"); + fail("Missing CommandMethodSyntaxException"); + } catch (CommandMethodSyntaxException e) { + assertThat(e) + .hasMessageContaining("Command MGET requires at least 1 parameters but method declares 0 parameter(s)"); + } + } + + @Test + void shouldPassWithCorrectParameterCount() { + + validateMethod("lpop", String.class); + validateMethod("rpop", String.class); + validateMethod("mget", String.class); + validateMethod("randomkey"); + validateMethod("set", KeyValue.class); + validateMethod("geoadd", String.class, String.class, GeoCoordinates.class); + } + + @Test + void tooManyParameters() { + + try { + validateMethod("rpop", String.class, String.class); + fail("Missing CommandMethodSyntaxException"); + } catch (CommandMethodSyntaxException e) { + assertThat(e).hasMessageContaining("Command RPOP accepts 1 parameters but method declares 2 parameter(s)"); + } + } + + @Test + void methodDoesNotAcceptParameters() { + + try { + validateMethod("randomkey", String.class); + fail("Missing CommandMethodSyntaxException"); + } catch (CommandMethodSyntaxException e) { + assertThat(e).hasMessageContaining("Command RANDOMKEY accepts no parameters"); + } + } + + private void validateMethod(String methodName, Class... 
parameterTypes) { + + Method method = ReflectionUtils.findMethod(MyInterface.class, methodName, parameterTypes); + CommandSegmentFactory commandSegmentFactory = new AnnotationCommandSegmentFactory(); + CommandMethod commandMethod = DeclaredCommandMethod.create(method); + CommandSegments commandSegments = commandSegmentFactory.createCommandSegments(commandMethod); + + sut.validate(commandSegments, commandMethod); + } + + private static interface MyInterface { + + void megt(); + + void mget(); + + void mget(String key); + + void mget(String key1, String key2); + + void set(KeyValue keyValue); + + void geoadd(String key, String member, GeoCoordinates geoCoordinates); + + void randomkey(); + + void randomkey(String key); + + @Command("RPOP ?0") + void rpop(String key); + + @Command("LPOP :key") + void lpop(@Param("key") String key); + + void rpop(String key1, String key2); + } +} diff --git a/src/test/java/io/lettuce/core/dynamic/ParameterBinderUnitTests.java b/src/test/java/io/lettuce/core/dynamic/ParameterBinderUnitTests.java new file mode 100644 index 0000000000..cb3978445c --- /dev/null +++ b/src/test/java/io/lettuce/core/dynamic/ParameterBinderUnitTests.java @@ -0,0 +1,216 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.dynamic; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; + +import java.util.Collections; + +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; +import org.mockito.junit.jupiter.MockitoExtension; +import org.springframework.util.Base64Utils; +import org.springframework.util.ReflectionUtils; + +import io.lettuce.core.*; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.dynamic.segment.CommandSegment; +import io.lettuce.core.dynamic.segment.CommandSegments; +import io.lettuce.core.protocol.CommandArgs; +import io.lettuce.core.protocol.CommandType; + +/** + * @author Mark Paluch + */ +@ExtendWith(MockitoExtension.class) +class ParameterBinderUnitTests { + + private ParameterBinder binder = new ParameterBinder(); + private CommandSegments segments = new CommandSegments(Collections.singletonList(CommandSegment.constant("set"))); + + @Test + void bindsNullValueAsEmptyByteArray() { + + CommandArgs args = bind(null); + + assertThat(args.toCommandString()).isEqualTo(""); + } + + @Test + void bindsStringCorrectly() { + + CommandArgs args = bind("string"); + + assertThat(args.toCommandString()).isEqualTo("string"); + } + + @Test + void bindsStringArrayCorrectly() { + + CommandArgs args = bind(new String[] { "arg1", "arg2" }); + + assertThat(args.toCommandString()).isEqualTo("arg1 arg2"); + } + + @Test + void bindsIntArrayCorrectly() { + + CommandArgs args = bind(new int[] { 1, 2, 3 }); + + assertThat(args.toCommandString()).isEqualTo("1 2 3"); + } + + @Test + void bindsValueCorrectly() { + + CommandArgs args = bind(Value.just("string")); + + assertThat(args.toCommandString()).isEqualTo("value"); + } + + @Test + void rejectsEmptyValue() { + assertThatThrownBy(() -> bind(Value.empty())).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void bindsKeyValueCorrectly() { + + CommandArgs args = bind(KeyValue.just("mykey", "string")); + + assertThat(args.toCommandString()).isEqualTo("key value"); + } + + @Test + void rejectsEmptyKeyValue() { + assertThatThrownBy(() -> bind(KeyValue.empty())).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void bindsScoredValueCorrectly() { + + CommandArgs args = bind(ScoredValue.just(20, "string")); + + assertThat(args.toCommandString()).isEqualTo("20.0 value"); + } + + @Test + void rejectsEmptyScoredValue() { + assertThatThrownBy(() -> bind(ScoredValue.empty())).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void bindsLimitCorrectly() { + + CommandArgs args = bind(Limit.create(10, 100)); + + assertThat(args.toCommandString()).isEqualTo("LIMIT 10 100"); + } + + @Test + void bindsRangeCorrectly() { + + CommandArgs args = bind(Range.from(Range.Boundary.including(10), Range.Boundary.excluding(15))); + + assertThat(args.toCommandString()).isEqualTo("10 (15"); + } + + @Test + void bindsUnboundedRangeCorrectly() { + + CommandArgs args = bind(Range.unbounded()); + + assertThat(args.toCommandString()).isEqualTo("-inf +inf"); + } + + @Test + void rejectsStringLowerValue() { + assertThatThrownBy(() -> bind(Range.from(Range.Boundary.including("hello"), Range.Boundary.excluding(15)))) + .isInstanceOf(IllegalArgumentException.class); + } + + @Test + void rejectsStringUpperValue() { + assertThatThrownBy(() -> bind(Range.from(Range.Boundary.including(11), Range.Boundary.excluding("hello")))) + .isInstanceOf(IllegalArgumentException.class); + } + + @Test + void bindsValueRangeCorrectly() { 
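+        // The @Value-annotated Range parameter is encoded through the codec, so the boundary
+        // bytes ("[lower", "(upper") appear base64-encoded in the command-string representation below.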
+ + CommandMethod commandMethod = DeclaredCommandMethod.create(ReflectionUtils.findMethod(MyCommands.class, "valueRange", + Range.class)); + + CommandArgs args = bind(commandMethod, + Range.from(Range.Boundary.including("lower"), Range.Boundary.excluding("upper"))); + + assertThat(args.toCommandString()).isEqualTo( + String.format("%s %s", Base64Utils.encodeToString("[lower".getBytes()), + Base64Utils.encodeToString("(upper".getBytes()))); + } + + @Test + void bindsUnboundedValueRangeCorrectly() { + + CommandMethod commandMethod = DeclaredCommandMethod.create(ReflectionUtils.findMethod(MyCommands.class, "valueRange", + Range.class)); + + CommandArgs args = bind(commandMethod, Range.unbounded()); + + assertThat(args.toCommandString()).isEqualTo( + String.format("%s %s", Base64Utils.encodeToString("-".getBytes()), Base64Utils.encodeToString("+".getBytes()))); + } + + @Test + void bindsGeoCoordinatesCorrectly() { + + CommandArgs args = bind(new GeoCoordinates(100, 200)); + + assertThat(args.toCommandString()).isEqualTo("100.0 200.0"); + } + + @Test + void bindsProtocolKeywordCorrectly() { + + CommandArgs args = bind(CommandType.LINDEX); + + assertThat(args.toCommandString()).isEqualTo("LINDEX"); + } + + private CommandArgs bind(Object object) { + CommandMethod commandMethod = DeclaredCommandMethod.create(ReflectionUtils.findMethod(MyCommands.class, "justObject", + Object.class)); + return bind(commandMethod, object); + } + + private CommandArgs bind(CommandMethod commandMethod, Object object) { + DefaultMethodParametersAccessor parametersAccessor = new DefaultMethodParametersAccessor(commandMethod.getParameters(), + object); + + CommandArgs args = new CommandArgs<>(new StringCodec()); + binder.bind(args, StringCodec.UTF8, segments, parametersAccessor); + + return args; + } + + private interface MyCommands { + + void justObject(Object object); + + void valueRange(@io.lettuce.core.dynamic.annotation.Value Range value); + } +} diff --git a/src/test/java/io/lettuce/core/dynamic/ReactiveCommandSegmentCommandFactoryUnitTests.java b/src/test/java/io/lettuce/core/dynamic/ReactiveCommandSegmentCommandFactoryUnitTests.java new file mode 100644 index 0000000000..3fb30a6c50 --- /dev/null +++ b/src/test/java/io/lettuce/core/dynamic/ReactiveCommandSegmentCommandFactoryUnitTests.java @@ -0,0 +1,100 @@ +/* + * Copyright 2016-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.dynamic; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.fail; + +import java.lang.reflect.Method; + +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; +import org.mockito.junit.jupiter.MockitoExtension; +import org.reactivestreams.Publisher; + +import reactor.core.publisher.Flux; +import reactor.core.publisher.Mono; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.dynamic.domain.Timeout; +import io.lettuce.core.dynamic.output.CodecAwareOutputFactoryResolver; +import io.lettuce.core.dynamic.output.OutputRegistry; +import io.lettuce.core.dynamic.output.OutputRegistryCommandOutputFactoryResolver; +import io.lettuce.core.dynamic.segment.AnnotationCommandSegmentFactory; +import io.lettuce.core.dynamic.segment.CommandSegments; +import io.lettuce.core.dynamic.support.ReflectionUtils; +import io.lettuce.core.output.StreamingOutput; +import io.lettuce.core.protocol.RedisCommand; + +/** + * @author Mark Paluch + */ +@ExtendWith(MockitoExtension.class) +class ReactiveCommandSegmentCommandFactoryUnitTests { + + private CodecAwareOutputFactoryResolver outputFactoryResolver = new CodecAwareOutputFactoryResolver( + new OutputRegistryCommandOutputFactoryResolver(new OutputRegistry()), StringCodec.UTF8); + + @Test + void commandCreationWithTimeoutShouldFail() { + + try { + createCommand("get", ReactiveWithTimeout.class, String.class, Timeout.class); + fail("Missing CommandCreationException"); + } catch (CommandCreationException e) { + assertThat(e).hasMessageContaining("Reactive command methods do not support Timeout parameters"); + } + } + + @Test + void shouldResolveNonStreamingOutput() { + + RedisCommand command = createCommand("getOne", ReactiveWithTimeout.class, String.class); + + assertThat(command.getOutput()).isNotInstanceOf(StreamingOutput.class); + } + + @Test + void shouldResolveStreamingOutput() { + + RedisCommand command = createCommand("getMany", ReactiveWithTimeout.class, String.class); + + assertThat(command.getOutput()).isInstanceOf(StreamingOutput.class); + } + + RedisCommand createCommand(String methodName, Class interfaceClass, Class... parameterTypes) { + + Method method = ReflectionUtils.findMethod(interfaceClass, methodName, parameterTypes); + + CommandMethod commandMethod = DeclaredCommandMethod.create(method); + + AnnotationCommandSegmentFactory segmentFactory = new AnnotationCommandSegmentFactory(); + CommandSegments commandSegments = segmentFactory.createCommandSegments(commandMethod); + + ReactiveCommandSegmentCommandFactory factory = new ReactiveCommandSegmentCommandFactory(commandSegments, commandMethod, + new StringCodec(), outputFactoryResolver); + + return factory.createCommand(new Object[] { "foo" }); + } + + private static interface ReactiveWithTimeout { + + Publisher get(String key, Timeout timeout); + + Mono getOne(String key); + + Flux getMany(String key); + } +} diff --git a/src/test/java/io/lettuce/core/dynamic/ReactiveTypeAdaptersUnitTests.java b/src/test/java/io/lettuce/core/dynamic/ReactiveTypeAdaptersUnitTests.java new file mode 100644 index 0000000000..21c1953264 --- /dev/null +++ b/src/test/java/io/lettuce/core/dynamic/ReactiveTypeAdaptersUnitTests.java @@ -0,0 +1,262 @@ +/* + * Copyright 2016-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic; + +import static org.assertj.core.api.Assertions.assertThat; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.reactivestreams.Publisher; + +import reactor.core.publisher.Flux; +import reactor.core.publisher.Mono; +import rx.Single; +import io.reactivex.Maybe; + +/** + * Unit tests for {@link ReactiveTypeAdapters} via {@link ConversionService}. + * + * @author Mark Paluch + */ +class ReactiveTypeAdaptersUnitTests { + + private ConversionService conversionService = new ConversionService(); + + @BeforeEach + void before() { + ReactiveTypeAdapters.registerIn(conversionService); + } + + @Test + void toWrapperShouldCastMonoToMono() { + + Mono foo = Mono.just("foo"); + assertThat(conversionService.convert(foo, Mono.class)).isSameAs(foo); + } + + @Test + void toWrapperShouldConvertMonoToRxJava1Single() { + + Mono foo = Mono.just("foo"); + assertThat(conversionService.convert(foo, Single.class)).isInstanceOf(Single.class); + } + + @Test + void toWrapperShouldConvertMonoToRxJava2Single() { + + Mono foo = Mono.just("foo"); + assertThat(conversionService.convert(foo, io.reactivex.Single.class)).isInstanceOf(io.reactivex.Single.class); + } + + @Test + void toWrapperShouldConvertRxJava2SingleToMono() { + + io.reactivex.Single foo = io.reactivex.Single.just("foo"); + assertThat(conversionService.convert(foo, Mono.class)).isInstanceOf(Mono.class); + } + + @Test + void toWrapperShouldConvertRxJava2SingleToPublisher() { + + io.reactivex.Single foo = io.reactivex.Single.just("foo"); + assertThat(conversionService.convert(foo, Publisher.class)).isInstanceOf(Publisher.class); + } + + @Test + void toWrapperShouldConvertRxJava2MaybeToMono() { + + io.reactivex.Maybe foo = io.reactivex.Maybe.just("foo"); + assertThat(conversionService.convert(foo, Mono.class)).isInstanceOf(Mono.class); + } + + @Test + void toWrapperShouldConvertRxJava2MaybeToFlux() { + + io.reactivex.Maybe foo = io.reactivex.Maybe.just("foo"); + assertThat(conversionService.convert(foo, Flux.class)).isInstanceOf(Flux.class); + } + + @Test + void toWrapperShouldConvertRxJava2MaybeToPublisher() { + + io.reactivex.Maybe foo = io.reactivex.Maybe.just("foo"); + assertThat(conversionService.convert(foo, Publisher.class)).isInstanceOf(Publisher.class); + } + + @Test + void toWrapperShouldConvertRxJava2FlowableToMono() { + + io.reactivex.Flowable foo = io.reactivex.Flowable.just("foo"); + assertThat(conversionService.convert(foo, Mono.class)).isInstanceOf(Mono.class); + } + + @Test + void toWrapperShouldConvertRxJava2FlowableToFlux() { + + io.reactivex.Flowable foo = io.reactivex.Flowable.just("foo"); + assertThat(conversionService.convert(foo, Flux.class)).isInstanceOf(Flux.class); + } + + @Test + void toWrapperShouldCastRxJava2FlowableToPublisher() { + + io.reactivex.Flowable foo = io.reactivex.Flowable.just("foo"); + assertThat(conversionService.convert(foo, Publisher.class)).isInstanceOf(Publisher.class); + } + + @Test + void toWrapperShouldConvertRxJava2ObservableToMono() { + + io.reactivex.Observable foo = io.reactivex.Observable.just("foo"); + 
assertThat(conversionService.convert(foo, Mono.class)).isInstanceOf(Mono.class); + } + + @Test + void toWrapperShouldConvertRxJava2ObservableToFlux() { + + io.reactivex.Observable foo = io.reactivex.Observable.just("foo"); + assertThat(conversionService.convert(foo, Flux.class)).isInstanceOf(Flux.class); + } + + @Test + void toWrapperShouldConvertRxJava2ObservableToSingle() { + + io.reactivex.Observable foo = io.reactivex.Observable.just("foo"); + assertThat(conversionService.convert(foo, io.reactivex.Single.class)).isInstanceOf(io.reactivex.Single.class); + } + + @Test + void toWrapperShouldConvertRxJava2ObservableToMaybe() { + + io.reactivex.Observable foo = io.reactivex.Observable.empty(); + assertThat(conversionService.convert(foo, Maybe.class)).isInstanceOf(Maybe.class); + } + + @Test + void toWrapperShouldConvertRxJava2ObservableToPublisher() { + + io.reactivex.Observable foo = io.reactivex.Observable.just("foo"); + assertThat(conversionService.convert(foo, Publisher.class)).isInstanceOf(Publisher.class); + } + + @Test + void toWrapperShouldConvertMonoToRxJava3Single() { + + Mono foo = Mono.just("foo"); + assertThat(conversionService.convert(foo, io.reactivex.rxjava3.core.Single.class)) + .isInstanceOf(io.reactivex.rxjava3.core.Single.class); + } + + @Test + void toWrapperShouldConvertRxJava3SingleToMono() { + + io.reactivex.rxjava3.core.Single foo = io.reactivex.rxjava3.core.Single.just("foo"); + assertThat(conversionService.convert(foo, Mono.class)).isInstanceOf(Mono.class); + } + + @Test + void toWrapperShouldConvertRxJava3SingleToPublisher() { + + io.reactivex.rxjava3.core.Single foo = io.reactivex.rxjava3.core.Single.just("foo"); + assertThat(conversionService.convert(foo, Publisher.class)).isInstanceOf(Publisher.class); + } + + @Test + void toWrapperShouldConvertRxJava3MaybeToMono() { + + io.reactivex.rxjava3.core.Maybe foo = io.reactivex.rxjava3.core.Maybe.just("foo"); + assertThat(conversionService.convert(foo, Mono.class)).isInstanceOf(Mono.class); + } + + @Test + void toWrapperShouldConvertRxJava3MaybeToFlux() { + + io.reactivex.rxjava3.core.Maybe foo = io.reactivex.rxjava3.core.Maybe.just("foo"); + assertThat(conversionService.convert(foo, Flux.class)).isInstanceOf(Flux.class); + } + + @Test + void toWrapperShouldConvertRxJava3MaybeToPublisher() { + + io.reactivex.rxjava3.core.Maybe foo = io.reactivex.rxjava3.core.Maybe.just("foo"); + assertThat(conversionService.convert(foo, Publisher.class)).isInstanceOf(Publisher.class); + } + + @Test + void toWrapperShouldConvertRxJava3FlowableToMono() { + + io.reactivex.rxjava3.core.Flowable foo = io.reactivex.rxjava3.core.Flowable.just("foo"); + assertThat(conversionService.convert(foo, Mono.class)).isInstanceOf(Mono.class); + } + + @Test + void toWrapperShouldConvertRxJava3FlowableToFlux() { + + io.reactivex.rxjava3.core.Flowable foo = io.reactivex.rxjava3.core.Flowable.just("foo"); + assertThat(conversionService.convert(foo, Flux.class)).isInstanceOf(Flux.class); + } + + @Test + void toWrapperShouldCastRxJava3FlowableToPublisher() { + + io.reactivex.rxjava3.core.Flowable foo = io.reactivex.rxjava3.core.Flowable.just("foo"); + assertThat(conversionService.convert(foo, Publisher.class)).isInstanceOf(Publisher.class); + } + + @Test + void toWrapperShouldConvertRxJava3ObservableToMono() { + + io.reactivex.rxjava3.core.Observable foo = io.reactivex.rxjava3.core.Observable.just("foo"); + assertThat(conversionService.convert(foo, Mono.class)).isInstanceOf(Mono.class); + } + + @Test + void toWrapperShouldConvertRxJava3ObservableToFlux() { 
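+        // RxJava 3 types live in the io.reactivex.rxjava3.core package, so they are registered
+        // with their own adapters, separate from the RxJava 1 and 2 conversions exercised above.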
+ + io.reactivex.rxjava3.core.Observable foo = io.reactivex.rxjava3.core.Observable.just("foo"); + assertThat(conversionService.convert(foo, Flux.class)).isInstanceOf(Flux.class); + } + + @Test + void toWrapperShouldConvertRxJava3ObservableToSingle() { + + io.reactivex.rxjava3.core.Observable foo = io.reactivex.rxjava3.core.Observable.just("foo"); + assertThat(conversionService.convert(foo, io.reactivex.rxjava3.core.Single.class)) + .isInstanceOf(io.reactivex.rxjava3.core.Single.class); + } + + @Test + void toWrapperShouldConvertRxJava3ObservableToMaybe() { + + io.reactivex.rxjava3.core.Observable foo = io.reactivex.rxjava3.core.Observable.empty(); + assertThat(conversionService.convert(foo, io.reactivex.rxjava3.core.Maybe.class)) + .isInstanceOf(io.reactivex.rxjava3.core.Maybe.class); + } + + @Test + void toWrapperShouldConvertRxJava3ObservableToPublisher() { + + io.reactivex.rxjava3.core.Observable foo = io.reactivex.rxjava3.core.Observable.just("foo"); + assertThat(conversionService.convert(foo, Publisher.class)).isInstanceOf(Publisher.class); + } + + @Test + void toWrapperShouldConvertMonoToFlux() { + + Mono foo = Mono.just("foo"); + assertThat(conversionService.convert(foo, Flux.class)).isInstanceOf(Flux.class); + } +} diff --git a/src/test/java/io/lettuce/core/dynamic/ReactiveTypeAdaptionIntegrationTests.java b/src/test/java/io/lettuce/core/dynamic/ReactiveTypeAdaptionIntegrationTests.java new file mode 100644 index 0000000000..48039614be --- /dev/null +++ b/src/test/java/io/lettuce/core/dynamic/ReactiveTypeAdaptionIntegrationTests.java @@ -0,0 +1,170 @@ +/* + * Copyright 2016-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.dynamic; + +import static org.assertj.core.api.Assertions.assertThat; + +import javax.inject.Inject; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; + +import rx.Observable; +import rx.Single; +import io.lettuce.core.TestSupport; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.dynamic.annotation.Command; +import io.lettuce.test.LettuceExtension; + +/** + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +class ReactiveTypeAdaptionIntegrationTests extends TestSupport { + + private final RedisCommands redis; + + private final RxJava1Types rxjava1; + private final RxJava2Types rxjava2; + private final RxJava3Types rxjava3; + + @Inject + ReactiveTypeAdaptionIntegrationTests(StatefulRedisConnection connection) { + + this.redis = connection.sync(); + + RedisCommandFactory factory = new RedisCommandFactory(redis.getStatefulConnection()); + this.rxjava1 = factory.getCommands(RxJava1Types.class); + this.rxjava2 = factory.getCommands(RxJava2Types.class); + this.rxjava3 = factory.getCommands(RxJava3Types.class); + } + + @BeforeEach + void setUp() { + redis.set(key, value); + } + + @Test + void rxJava1Single() { + + Single single = rxjava1.getRxJava1Single(key); + assertThat(single.toBlocking().value()).isEqualTo(value); + } + + @Test + void rxJava1Observable() { + + Observable observable = rxjava1.getRxJava1Observable(key); + assertThat(observable.toBlocking().last()).isEqualTo(value); + } + + @Test + void rxJava2Single() throws InterruptedException { + + io.reactivex.Single single = rxjava2.getRxJava2Single(key); + single.test().await().assertResult(value).assertComplete(); + } + + @Test + void rxJava2Maybe() throws InterruptedException { + + io.reactivex.Maybe maybe = rxjava2.getRxJava2Maybe(key); + maybe.test().await().assertResult(value).assertComplete(); + } + + @Test + void rxJava2Observable() throws InterruptedException { + + io.reactivex.Observable observable = rxjava2.getRxJava2Observable(key); + observable.test().await().assertResult(value).assertComplete(); + } + + @Test + void rxJava2Flowable() throws InterruptedException { + + io.reactivex.Flowable flowable = rxjava2.getRxJava2Flowable(key); + flowable.test().await().assertResult(value).assertComplete(); + } + + @Test + void rxJava3Single() throws InterruptedException { + + io.reactivex.rxjava3.core.Single single = rxjava3.getRxJava3Single(key); + single.test().await().assertResult(value).assertComplete(); + } + + @Test + void rxJava3Maybe() throws InterruptedException { + + io.reactivex.rxjava3.core.Maybe maybe = rxjava3.getRxJava3Maybe(key); + maybe.test().await().assertResult(value).assertComplete(); + } + + @Test + void rxJava3Observable() throws InterruptedException { + + io.reactivex.rxjava3.core.Observable observable = rxjava3.getRxJava3Observable(key); + observable.test().await().assertResult(value).assertComplete(); + } + + @Test + void rxJava3Flowable() throws InterruptedException { + + io.reactivex.rxjava3.core.Flowable flowable = rxjava3.getRxJava3Flowable(key); + flowable.test().await().assertResult(value).assertComplete(); + } + + static interface RxJava1Types extends Commands { + + @Command("GET") + Single getRxJava1Single(String key); + + @Command("GET") + Observable getRxJava1Observable(String key); + } + + static interface RxJava2Types extends Commands { + + @Command("GET") + io.reactivex.Single getRxJava2Single(String 
key); + + @Command("GET") + io.reactivex.Maybe getRxJava2Maybe(String key); + + @Command("GET") + io.reactivex.Observable getRxJava2Observable(String key); + + @Command("GET") + io.reactivex.Flowable getRxJava2Flowable(String key); + } + + static interface RxJava3Types extends Commands { + + @Command("GET") + io.reactivex.rxjava3.core.Single getRxJava3Single(String key); + + @Command("GET") + io.reactivex.rxjava3.core.Maybe getRxJava3Maybe(String key); + + @Command("GET") + io.reactivex.rxjava3.core.Observable getRxJava3Observable(String key); + + @Command("GET") + io.reactivex.rxjava3.core.Flowable getRxJava3Flowable(String key); + } +} diff --git a/src/test/java/io/lettuce/core/dynamic/RedisCommandsAsyncIntegrationTests.java b/src/test/java/io/lettuce/core/dynamic/RedisCommandsAsyncIntegrationTests.java new file mode 100644 index 0000000000..ca7ddcb63f --- /dev/null +++ b/src/test/java/io/lettuce/core/dynamic/RedisCommandsAsyncIntegrationTests.java @@ -0,0 +1,60 @@ +/* + * Copyright 2016-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.Future; + +import javax.inject.Inject; + +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.core.TestSupport; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.test.LettuceExtension; + +/** + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +class RedisCommandsAsyncIntegrationTests extends TestSupport { + + private final RedisCommands redis; + + @Inject + RedisCommandsAsyncIntegrationTests(StatefulRedisConnection connection) { + this.redis = connection.sync(); + } + + @Test + void async() { + + RedisCommandFactory factory = new RedisCommandFactory(redis.getStatefulConnection()); + + MultipleExecutionModels api = factory.getCommands(MultipleExecutionModels.class); + + Future set = api.set(key, value); + assertThat(set).isInstanceOf(CompletableFuture.class); + } + + static interface MultipleExecutionModels extends Commands { + Future set(String key, String value); + } +} diff --git a/src/test/java/io/lettuce/core/dynamic/RedisCommandsBatchingIntegrationTests.java b/src/test/java/io/lettuce/core/dynamic/RedisCommandsBatchingIntegrationTests.java new file mode 100644 index 0000000000..b0e6cbaf59 --- /dev/null +++ b/src/test/java/io/lettuce/core/dynamic/RedisCommandsBatchingIntegrationTests.java @@ -0,0 +1,218 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Fail.fail; + +import java.util.concurrent.TimeUnit; + +import javax.inject.Inject; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.core.RedisCommandExecutionException; +import io.lettuce.core.RedisFuture; +import io.lettuce.core.TestSupport; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.dynamic.annotation.Command; +import io.lettuce.core.dynamic.batch.BatchException; +import io.lettuce.core.dynamic.batch.BatchExecutor; +import io.lettuce.core.dynamic.batch.BatchSize; +import io.lettuce.core.dynamic.batch.CommandBatching; +import io.lettuce.core.internal.Futures; +import io.lettuce.test.LettuceExtension; +import io.lettuce.test.TestFutures; + +/** + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +class RedisCommandsBatchingIntegrationTests extends TestSupport { + + private final RedisCommands redis; + + @Inject + RedisCommandsBatchingIntegrationTests(StatefulRedisConnection connection) { + this.redis = connection.sync(); + } + + @BeforeEach + void setUp() { + this.redis.flushall(); + } + + @Test + void selectiveBatching() { + + RedisCommandFactory factory = new RedisCommandFactory(redis.getStatefulConnection()); + + SelectiveBatching api = factory.getCommands(SelectiveBatching.class); + + api.set("k1", value); + assertThat(redis.get("k1")).isEqualTo(value); + + api.set("k2", value, CommandBatching.queue()); + api.set("k3", value, CommandBatching.queue()); + assertThat(redis.get("k2")).isNull(); + assertThat(redis.get("k3")).isNull(); + + api.set("k4", value, CommandBatching.flush()); + + assertThat(redis.get("k2")).isEqualTo(value); + assertThat(redis.get("k3")).isEqualTo(value); + assertThat(redis.get("k4")).isEqualTo(value); + } + + @Test + void selectiveBatchingShouldHandleErrors() { + + RedisCommandFactory factory = new RedisCommandFactory(redis.getStatefulConnection()); + + SelectiveBatching api = factory.getCommands(SelectiveBatching.class); + + api.set("k1", value, CommandBatching.queue()); + api.llen("k1", CommandBatching.queue()); + + try { + api.flush(); + fail("Missing BatchException"); + } catch (BatchException e) { + assertThat(redis.get("k1")).isEqualTo(value); + assertThat(e.getFailedCommands()).hasSize(1); + } + } + + @Test + void shouldExecuteBatchingSynchronously() { + + RedisCommandFactory factory = new RedisCommandFactory(redis.getStatefulConnection()); + + Batching api = factory.getCommands(Batching.class); + + api.set("k1", value); + api.set("k2", value); + api.set("k3", value); + api.set("k4", value); + + assertThat(redis.get("k1")).isNull(); + api.set("k5", value); + + assertThat(redis.get("k1")).isEqualTo(value); + } + + @Test + void shouldHandleSynchronousBatchErrors() { + + RedisCommandFactory factory = new RedisCommandFactory(redis.getStatefulConnection()); + + Batching api = factory.getCommands(Batching.class); + 
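+        // Batching is declared with @BatchSize(5): the first four invocations below are queued locally
+        // and only hit Redis once a fifth call flushes the batch, surfacing the failed LLENs as a BatchException.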
+ api.set("k1", value); + api.set("k2", value); + api.llen("k2"); + api.set("k4", value); + + assertThat(redis.get("k1")).isNull(); + + try { + api.llen("k4"); + fail("Missing BatchException"); + } catch (BatchException e) { + + assertThat(redis.get("k1")).isEqualTo(value); + assertThat(redis.get("k4")).isEqualTo(value); + + assertThat(e).isInstanceOf(BatchException.class); + assertThat(e.getSuppressed()).hasSize(2); + assertThat(e.getFailedCommands()).hasSize(2); + } + } + + @Test + void shouldExecuteBatchingAynchronously() { + + RedisCommandFactory factory = new RedisCommandFactory(redis.getStatefulConnection()); + + Batching api = factory.getCommands(Batching.class); + + api.setAsync("k1", value); + api.setAsync("k2", value); + api.setAsync("k3", value); + api.setAsync("k4", value); + + assertThat(redis.get("k1")).isNull(); + assertThat(TestFutures.getOrTimeout(api.setAsync("k5", value))).isEqualTo("OK"); + + assertThat(redis.get("k1")).isEqualTo(value); + } + + @Test + void shouldHandleAsynchronousBatchErrors() throws Exception { + + RedisCommandFactory factory = new RedisCommandFactory(redis.getStatefulConnection()); + + Batching api = factory.getCommands(Batching.class); + + api.setAsync("k1", value); + api.setAsync("k2", value); + api.llen("k2"); + api.setAsync("k4", value); + + assertThat(redis.get("k1")).isNull(); + + RedisFuture llen = api.llenAsync("k4"); + llen.await(1, TimeUnit.SECONDS); + + assertThat(redis.get("k1")).isEqualTo(value); + assertThat(redis.get("k4")).isEqualTo(value); + + try { + Futures.await(1, TimeUnit.SECONDS, llen); + fail("Missing RedisCommandExecutionException"); + } catch (Exception e) { + assertThat(e).isInstanceOf(RedisCommandExecutionException.class); + } + } + + @BatchSize(5) + static interface Batching extends Commands { + + void set(String key, String value); + + void llen(String key); + + @Command("SET") + RedisFuture setAsync(String key, String value); + + @Command("LLEN") + RedisFuture llenAsync(String key); + } + + static interface SelectiveBatching extends Commands, BatchExecutor { + + void set(String key, String value); + + void set(String key, String value, CommandBatching commandBatching); + + void llen(String key, CommandBatching commandBatching); + + } + +} diff --git a/src/test/java/io/lettuce/core/dynamic/RedisCommandsClusterIntegrationTests.java b/src/test/java/io/lettuce/core/dynamic/RedisCommandsClusterIntegrationTests.java new file mode 100644 index 0000000000..3d7b81074c --- /dev/null +++ b/src/test/java/io/lettuce/core/dynamic/RedisCommandsClusterIntegrationTests.java @@ -0,0 +1,106 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.dynamic; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.time.Duration; +import java.util.List; +import java.util.concurrent.ExecutionException; + +import javax.inject.Inject; + +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.core.RedisFuture; +import io.lettuce.core.TestSupport; +import io.lettuce.core.Value; +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; +import io.lettuce.core.dynamic.annotation.Command; +import io.lettuce.core.dynamic.domain.Timeout; +import io.lettuce.test.LettuceExtension; + +/** + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +class RedisCommandsClusterIntegrationTests extends TestSupport { + + private final StatefulRedisClusterConnection connection; + + @Inject + RedisCommandsClusterIntegrationTests(StatefulRedisClusterConnection connection) { + this.connection = connection; + this.connection.sync().flushall(); + } + + @Test + void future() throws ExecutionException, InterruptedException { + + RedisCommandFactory factory = new RedisCommandFactory(connection); + + SynchronousCommands api = factory.getCommands(SynchronousCommands.class); + + api.setSync(key, value, Timeout.create(Duration.ofSeconds(10))); + + assertThat(api.get("key").get()).isEqualTo("value"); + assertThat(api.getAsBytes("key")).isEqualTo("value".getBytes()); + } + + @Test + void shouldRouteBinaryKey() { + + connection.sync().set(key, value); + + RedisCommandFactory factory = new RedisCommandFactory(connection); + + SynchronousCommands api = factory.getCommands(SynchronousCommands.class); + + assertThat(api.get(key.getBytes())).isEqualTo(value.getBytes()); + } + + @Test + void mgetAsValues() { + + connection.sync().set(key, value); + + RedisCommandFactory factory = new RedisCommandFactory(connection); + + SynchronousCommands api = factory.getCommands(SynchronousCommands.class); + + List> values = api.mgetAsValues(key); + assertThat(values).hasSize(1); + assertThat(values.get(0)).isEqualTo(Value.just(value)); + } + + interface SynchronousCommands extends Commands { + + byte[] get(byte[] key); + + RedisFuture get(String key); + + @Command("GET") + byte[] getAsBytes(String key); + + @Command("SET") + String setSync(String key, String value, Timeout timeout); + + @Command("MGET") + List> mgetAsValues(String... keys); + } + +} diff --git a/src/test/java/io/lettuce/core/dynamic/RedisCommandsIntegrationTests.java b/src/test/java/io/lettuce/core/dynamic/RedisCommandsIntegrationTests.java new file mode 100644 index 0000000000..9ff09e7e07 --- /dev/null +++ b/src/test/java/io/lettuce/core/dynamic/RedisCommandsIntegrationTests.java @@ -0,0 +1,154 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */
+package io.lettuce.core.dynamic;
+
+import static org.assertj.core.api.Assertions.assertThat;
+import static org.assertj.core.api.Assertions.fail;
+import static org.mockito.Mockito.doThrow;
+import static org.mockito.Mockito.when;
+
+import javax.inject.Inject;
+
+import org.apache.commons.pool2.impl.GenericObjectPool;
+import org.apache.commons.pool2.impl.GenericObjectPoolConfig;
+import org.junit.jupiter.api.Test;
+import org.junit.jupiter.api.extension.ExtendWith;
+import org.mockito.Mockito;
+
+import io.lettuce.core.RedisClient;
+import io.lettuce.core.RedisCommandExecutionException;
+import io.lettuce.core.RedisURI;
+import io.lettuce.core.TestSupport;
+import io.lettuce.core.api.StatefulRedisConnection;
+import io.lettuce.core.api.sync.RedisCommands;
+import io.lettuce.core.codec.StringCodec;
+import io.lettuce.core.support.AsyncConnectionPoolSupport;
+import io.lettuce.core.support.BoundedAsyncPool;
+import io.lettuce.core.support.BoundedPoolConfig;
+import io.lettuce.core.support.ConnectionPoolSupport;
+import io.lettuce.test.LettuceExtension;
+import io.lettuce.test.settings.TestSettings;
+
+/**
+ * @author Mark Paluch
+ */
+@ExtendWith(LettuceExtension.class)
+class RedisCommandsIntegrationTests extends TestSupport {
+
+    private final RedisClient client;
+    private final RedisCommands<String, String> redis;
+
+    @Inject
+    RedisCommandsIntegrationTests(RedisClient client, StatefulRedisConnection<String, String> connection) {
+        this.client = client;
+        this.redis = connection.sync();
+    }
+
+    @Test
+    void verifierShouldCatchMisspelledDeclarations() {
+
+        RedisCommandFactory factory = new RedisCommandFactory(redis.getStatefulConnection());
+
+        assertThat(factory).hasFieldOrPropertyWithValue("verifyCommandMethods", true);
+        try {
+            factory.getCommands(WithTypo.class);
+            fail("Missing CommandCreationException");
+        } catch (CommandCreationException e) {
+            assertThat(e).hasMessageContaining("Command GAT does not exist.");
+        }
+    }
+
+    @Test
+    void disabledVerifierDoesNotReportTypo() {
+
+        RedisCommandFactory factory = new RedisCommandFactory(redis.getStatefulConnection());
+        factory.setVerifyCommandMethods(false);
+
+        assertThat(factory.getCommands(WithTypo.class)).isNotNull();
+    }
+
+    @Test
+    void doesNotFailIfCommandRetrievalFails() {
+
+        StatefulRedisConnection<String, String> connectionMock = Mockito.mock(StatefulRedisConnection.class);
+        RedisCommands<String, String> commandsMock = Mockito.mock(RedisCommands.class);
+
+        when(connectionMock.sync()).thenReturn(commandsMock);
+        doThrow(new RedisCommandExecutionException("ERR unknown command 'COMMAND'")).when(commandsMock).command();
+
+        RedisCommandFactory factory = new RedisCommandFactory(connectionMock);
+
+        assertThat(factory).hasFieldOrPropertyWithValue("verifyCommandMethods", false);
+    }
+
+    @Test
+    void verifierShouldCatchTooFewParametersDeclarations() {
+
+        RedisCommandFactory factory = new RedisCommandFactory(redis.getStatefulConnection());
+
+        try {
+            factory.getCommands(TooFewParameters.class);
+            fail("Missing CommandCreationException");
+        } catch (CommandCreationException e) {
+            assertThat(e).hasMessageContaining("Command GET accepts 1 parameters but method declares 0 parameter");
+        }
+    }
+
+    @Test
+    void shouldWorkWithPooledConnection() throws Exception {
+
+        GenericObjectPool<StatefulRedisConnection<String, String>> pool = ConnectionPoolSupport.createGenericObjectPool(
+                client::connect, new GenericObjectPoolConfig<>());
+
+        try (StatefulRedisConnection<String, String> connection = pool.borrowObject()) {
+
+            RedisCommandFactory factory = new RedisCommandFactory(connection);
+            SimpleCommands commands = factory.getCommands(SimpleCommands.class);
+            commands.get("foo");
+        }
+
+        pool.close();
+    }
+
+    @Test
+    void shouldWorkWithAsyncPooledConnection() {
+
+        BoundedAsyncPool<StatefulRedisConnection<String, String>> pool = AsyncConnectionPoolSupport.createBoundedObjectPool(
+                () -> client.connectAsync(StringCodec.ASCII, RedisURI.create(TestSettings.host(), TestSettings.port())),
+                BoundedPoolConfig.create());
+
+        try (StatefulRedisConnection<String, String> connection = pool.acquire().join()) {
+
+            RedisCommandFactory factory = new RedisCommandFactory(connection);
+            SimpleCommands commands = factory.getCommands(SimpleCommands.class);
+            commands.get("foo");
+        }
+
+        pool.close();
+    }
+
+    private interface SimpleCommands extends Commands {
+        String get(String key);
+    }
+
+    private interface TooFewParameters extends Commands {
+        String get();
+    }
+
+    private interface WithTypo extends Commands {
+        String gat(String key);
+    }
+}
diff --git a/src/test/java/io/lettuce/core/dynamic/RedisCommandsReactiveIntegrationTests.java b/src/test/java/io/lettuce/core/dynamic/RedisCommandsReactiveIntegrationTests.java
new file mode 100644
index 0000000000..5539999ae8
--- /dev/null
+++ b/src/test/java/io/lettuce/core/dynamic/RedisCommandsReactiveIntegrationTests.java
@@ -0,0 +1,113 @@
+/*
+ * Copyright 2016-2020 the original author or authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * https://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package io.lettuce.core.dynamic;
+
+import javax.inject.Inject;
+
+import org.junit.jupiter.api.BeforeEach;
+import org.junit.jupiter.api.Test;
+import org.junit.jupiter.api.extension.ExtendWith;
+
+import reactor.core.publisher.Mono;
+import reactor.test.StepVerifier;
+import io.lettuce.core.TestSupport;
+import io.lettuce.core.api.StatefulRedisConnection;
+import io.lettuce.core.api.sync.RedisCommands;
+import io.lettuce.core.dynamic.annotation.Command;
+import io.lettuce.test.LettuceExtension;
+import io.reactivex.Maybe;
+
+/**
+ * @author Mark Paluch
+ */
+@ExtendWith(LettuceExtension.class)
+class RedisCommandsReactiveIntegrationTests extends TestSupport {
+
+    private final RedisCommands<String, String> redis;
+
+    @Inject
+    RedisCommandsReactiveIntegrationTests(StatefulRedisConnection<String, String> connection) {
+        this.redis = connection.sync();
+    }
+
+    @BeforeEach
+    void setUp() {
+        this.redis.flushall();
+    }
+
+    @Test
+    void reactive() {
+
+        RedisCommandFactory factory = new RedisCommandFactory(redis.getStatefulConnection());
+
+        MultipleExecutionModels api = factory.getCommands(MultipleExecutionModels.class);
+
+        StepVerifier.create(api.setReactive(key, value)).expectNext("OK").verifyComplete();
+    }
+
+    @Test
+    void shouldHandlePresentValue() {
+
+        RedisCommandFactory factory = new RedisCommandFactory(redis.getStatefulConnection());
+
+        MultipleExecutionModels api = factory.getCommands(MultipleExecutionModels.class);
+
+        StepVerifier.create(api.setReactive(key, value)).expectNext("OK").verifyComplete();
+        StepVerifier.create(api.get(key)).expectNext(value).verifyComplete();
+    }
+
+    @Test
+    void shouldHandleAbsentValue() {
+
+        RedisCommandFactory factory = new RedisCommandFactory(redis.getStatefulConnection());
+
+        MultipleExecutionModels api = factory.getCommands(MultipleExecutionModels.class);
+
+        StepVerifier.create(api.get("unknown")).verifyComplete();
+    }
+
+    @Test
+    void shouldHandlePresentValueRxJava() throws InterruptedException {
+
+        RedisCommandFactory factory = new RedisCommandFactory(redis.getStatefulConnection());
+
+        MultipleExecutionModels api = factory.getCommands(MultipleExecutionModels.class);
+
+        StepVerifier.create(api.setReactive(key, value)).expectNext("OK").verifyComplete();
+        api.getRxJava(key).test().await().onSuccess(value);
+    }
+
+    @Test
+    void shouldHandleAbsentValueRxJava() throws InterruptedException {
+
+        RedisCommandFactory factory = new RedisCommandFactory(redis.getStatefulConnection());
+
+        MultipleExecutionModels api = factory.getCommands(MultipleExecutionModels.class);
+
+        api.getRxJava(key).test().await().onSuccess(null);
+    }
+
+    interface MultipleExecutionModels extends Commands {
+
+        @Command("SET")
+        Mono<String> setReactive(String key, String value);
+
+        Mono<String> get(String key);
+
+        @Command("GET")
+        Maybe<String> getRxJava(String key);
+    }
+}
diff --git a/src/test/java/io/lettuce/core/dynamic/RedisCommandsSyncIntegrationTests.java b/src/test/java/io/lettuce/core/dynamic/RedisCommandsSyncIntegrationTests.java
new file mode 100644
index 0000000000..7c7e7d1e2a
--- /dev/null
+++ b/src/test/java/io/lettuce/core/dynamic/RedisCommandsSyncIntegrationTests.java
@@ -0,0 +1,132 @@
+/*
+ * Copyright 2016-2020 the original author or authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * https://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package io.lettuce.core.dynamic;
+
+import static org.assertj.core.api.Assertions.assertThat;
+
+import java.util.Collections;
+import java.util.List;
+import java.util.concurrent.TimeUnit;
+
+import javax.inject.Inject;
+
+import org.junit.jupiter.api.Test;
+import org.junit.jupiter.api.extension.ExtendWith;
+
+import io.lettuce.core.RedisClient;
+import io.lettuce.core.TestSupport;
+import io.lettuce.core.Value;
+import io.lettuce.core.api.StatefulRedisConnection;
+import io.lettuce.core.api.sync.RedisCommands;
+import io.lettuce.core.codec.ByteArrayCodec;
+import io.lettuce.core.dynamic.annotation.Command;
+import io.lettuce.core.dynamic.domain.Timeout;
+import io.lettuce.test.LettuceExtension;
+
+/**
+ * @author Mark Paluch
+ */
+@ExtendWith(LettuceExtension.class)
+class RedisCommandsSyncIntegrationTests extends TestSupport {
+
+    private final RedisClient client;
+    private final RedisCommands<String, String> redis;
+
+    @Inject
+    RedisCommandsSyncIntegrationTests(RedisClient client, StatefulRedisConnection<String, String> connection) {
+        this.client = client;
+        this.redis = connection.sync();
+    }
+
+    @Test
+    void sync() {
+
+        StatefulRedisConnection<byte[], byte[]> connection = client.connect(ByteArrayCodec.INSTANCE);
+        RedisCommandFactory factory = new RedisCommandFactory(connection);
+
+        MultipleExecutionModels api = factory.getCommands(MultipleExecutionModels.class);
+
+        api.setSync(key, value, Timeout.create(10, TimeUnit.SECONDS));
+        assertThat(api.get("key")).isEqualTo("value");
+        assertThat(api.getAsBytes("key")).isEqualTo("value".getBytes());
+
+        connection.close();
+    }
+
+    @Test
+    void defaultMethod() {
+
+        StatefulRedisConnection<byte[], byte[]> connection = client.connect(ByteArrayCodec.INSTANCE);
+        RedisCommandFactory factory = new RedisCommandFactory(connection);
+
+        MultipleExecutionModels api = factory.getCommands(MultipleExecutionModels.class);
+
+        api.setSync(key, value, Timeout.create(10, TimeUnit.SECONDS));
+
+        assertThat(api.getAsBytes()).isEqualTo("value".getBytes());
+
+        connection.close();
+    }
+
+    @Test
+    void mgetAsValues() {
+
+        redis.set(key, value);
+
+        RedisCommandFactory factory = new RedisCommandFactory(redis.getStatefulConnection());
+
+        MultipleExecutionModels api = factory.getCommands(MultipleExecutionModels.class);
+
+        List<Value<String>> values = api.mgetAsValues(key, "key2");
+        assertThat(values).hasSize(2);
+        assertThat(values.get(0)).isEqualTo(Value.just(value));
+        assertThat(values.get(1)).isEqualTo(Value.empty());
+    }
+
+    @Test
+    void mgetByteArray() {
+
+        redis.set(key, value);
+
+        RedisCommandFactory factory = new RedisCommandFactory(redis.getStatefulConnection());
+
+        MultipleExecutionModels api = factory.getCommands(MultipleExecutionModels.class);
+
+        List<byte[]> values = api.mget(Collections.singleton(key.getBytes()));
+        assertThat(values).hasSize(1).contains(value.getBytes());
+    }
+
+    interface MultipleExecutionModels extends Commands {
+
+        List<byte[]> mget(Iterable<byte[]> keys);
+
+        String get(String key);
+
+        default byte[] getAsBytes() {
+            return getAsBytes("key");
+        }
+
+        @Command("GET ?0")
+        byte[] getAsBytes(String key);
+
+        @Command("SET")
+        String setSync(String key, String value, Timeout timeout);
+
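+        // Per the mgetAsValues() test above, MGET yields Value.just(...) for present keys and Value.empty() for absent keys.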
@Command("MGET") + List> mgetAsValues(String... keys); + } + +} diff --git a/src/test/java/io/lettuce/core/dynamic/SimpleBatcherUnitTests.java b/src/test/java/io/lettuce/core/dynamic/SimpleBatcherUnitTests.java new file mode 100644 index 0000000000..838c2916eb --- /dev/null +++ b/src/test/java/io/lettuce/core/dynamic/SimpleBatcherUnitTests.java @@ -0,0 +1,146 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.mockito.Mockito.verify; +import static org.mockito.Mockito.verifyZeroInteractions; + +import java.util.Arrays; + +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; +import org.mockito.Mock; +import org.mockito.junit.jupiter.MockitoExtension; + +import io.lettuce.core.api.StatefulConnection; +import io.lettuce.core.dynamic.batch.CommandBatching; +import io.lettuce.core.protocol.AsyncCommand; +import io.lettuce.core.protocol.Command; +import io.lettuce.core.protocol.CommandType; +import io.lettuce.core.protocol.RedisCommand; + +/** + * @author Mark Paluch + */ +@ExtendWith(MockitoExtension.class) +class SimpleBatcherUnitTests { + + @Mock + private StatefulConnection connection; + + @Test + void shouldBatchWithDefaultSize() { + + RedisCommand c1 = createCommand(); + RedisCommand c2 = createCommand(); + RedisCommand c3 = createCommand(); + RedisCommand c4 = createCommand(); + + SimpleBatcher batcher = new SimpleBatcher(connection, 2); + + assertThat(batcher.batch(c1, null)).isEqualTo(BatchTasks.EMPTY); + verifyZeroInteractions(connection); + + BatchTasks batch = batcher.batch(c2, null); + verify(connection).dispatch(Arrays.asList(c1, c2)); + assertThat(batch).contains(c1, c2); + + batcher.batch(c3, null); + verifyZeroInteractions(connection); + + batcher.batch(c4, null); + verify(connection).dispatch(Arrays.asList(c3, c4)); + } + + @Test + void shouldBatchWithoutSize() { + + RedisCommand c1 = createCommand(); + RedisCommand c2 = createCommand(); + + SimpleBatcher batcher = new SimpleBatcher(connection, -1); + + batcher.batch(c1, null); + + verify(connection).dispatch(c1); + + batcher.batch(c2, null); + + verify(connection).dispatch(c2); + } + + @Test + void shouldBatchWithBatchControlQueue() { + + RedisCommand c1 = createCommand(); + RedisCommand c2 = createCommand(); + RedisCommand c3 = createCommand(); + RedisCommand c4 = createCommand(); + + SimpleBatcher batcher = new SimpleBatcher(connection, 2); + + batcher.batch(c1, CommandBatching.queue()); + batcher.batch(c2, CommandBatching.queue()); + verifyZeroInteractions(connection); + + batcher.batch(c3, null); + + verify(connection).dispatch(Arrays.asList(c1, c2)); + } + + @Test + void shouldBatchWithBatchControlQueueOverqueue() { + + RedisCommand c1 = createCommand(); + RedisCommand c2 = createCommand(); + RedisCommand c3 = createCommand(); + RedisCommand c4 = createCommand(); + RedisCommand c5 = createCommand(); + + 
SimpleBatcher batcher = new SimpleBatcher(connection, 2); + + batcher.batch(c1, CommandBatching.queue()); + batcher.batch(c2, CommandBatching.queue()); + batcher.batch(c3, CommandBatching.queue()); + batcher.batch(c4, CommandBatching.queue()); + verifyZeroInteractions(connection); + + batcher.batch(c5, null); + + verify(connection).dispatch(Arrays.asList(c1, c2)); + verify(connection).dispatch(Arrays.asList(c3, c4)); + } + + @Test + void shouldBatchWithBatchControlFlush() { + + RedisCommand c1 = createCommand(); + RedisCommand c2 = createCommand(); + RedisCommand c3 = createCommand(); + + SimpleBatcher batcher = new SimpleBatcher(connection, 4); + + batcher.batch(c1, null); + batcher.batch(c2, CommandBatching.flush()); + + verify(connection).dispatch(Arrays.asList(c1, c2)); + } + + private static RedisCommand createCommand() { + return new AsyncCommand<>(new Command<>(CommandType.COMMAND, null, null)); + } +} diff --git a/src/test/java/io/lettuce/core/dynamic/codec/AnnotationRedisCodecResolverUnitTests.java b/src/test/java/io/lettuce/core/dynamic/codec/AnnotationRedisCodecResolverUnitTests.java new file mode 100644 index 0000000000..87e5f2d1e1 --- /dev/null +++ b/src/test/java/io/lettuce/core/dynamic/codec/AnnotationRedisCodecResolverUnitTests.java @@ -0,0 +1,148 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.codec; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; + +import java.lang.reflect.Method; +import java.util.Arrays; +import java.util.List; +import java.util.Map; +import java.util.Set; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.Range; +import io.lettuce.core.codec.ByteArrayCodec; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.dynamic.CommandMethod; +import io.lettuce.core.dynamic.DeclaredCommandMethod; +import io.lettuce.core.dynamic.annotation.Key; +import io.lettuce.core.dynamic.annotation.Value; +import io.lettuce.core.dynamic.support.ReflectionUtils; + +/** + * Unit tests for {@link AnnotationRedisCodecResolver}. 
+ * + * @author Mark Paluch + * @author Manyanda Chitimbo + */ +class AnnotationRedisCodecResolverUnitTests { + + private List> codecs = Arrays.asList(new StringCodec(), new ByteArrayCodec()); + + @Test + void shouldResolveFullyHinted() { + + Method method = ReflectionUtils.findMethod(CommandMethods.class, "stringOnly", String.class, String.class); + RedisCodec codec = resolve(method); + + assertThat(codec).isInstanceOf(StringCodec.class); + } + + @Test + void shouldResolveHintedKey() { + + Method method = ReflectionUtils.findMethod(CommandMethods.class, "annotatedKey", String.class, String.class); + RedisCodec codec = resolve(method); + + assertThat(codec).isInstanceOf(StringCodec.class); + } + + @Test + void shouldResolveHintedValue() { + + Method method = ReflectionUtils.findMethod(CommandMethods.class, "annotatedValue", String.class, String.class); + RedisCodec codec = resolve(method); + + assertThat(codec).isInstanceOf(StringCodec.class); + } + + @Test + void shouldResolveWithoutHints() { + + Method method = ReflectionUtils.findMethod(CommandMethods.class, "nothingAnnotated", String.class, String.class); + RedisCodec codec = resolve(method); + + assertThat(codec).isInstanceOf(StringCodec.class); + } + + @Test + void shouldResolveHintedByteArrayValue() { + + Method method = ReflectionUtils.findMethod(CommandMethods.class, "annotatedByteArrayValue", String.class, byte[].class); + RedisCodec codec = resolve(method); + + assertThat(codec).isInstanceOf(ByteArrayCodec.class); + } + + @Test + void resolutionOfMethodWithMixedTypesShouldFail() { + Method method = ReflectionUtils.findMethod(CommandMethods.class, "mixedTypes", String.class, byte[].class); + assertThatThrownBy(() -> resolve(method)).isInstanceOf(IllegalStateException.class); + } + + @Test + void resolutionOfMethodWithMixedCodecsShouldFail() { + Method method = ReflectionUtils.findMethod(CommandMethods.class, "mixedCodecs", String.class, byte[].class, + String.class); + assertThatThrownBy(() -> resolve(method)).isInstanceOf(IllegalStateException.class); + } + + @Test + void shouldDiscoverCodecTypesFromWrappers() { + + Method method = ReflectionUtils.findMethod(CommandMethods.class, "withWrappers", Range.class, + io.lettuce.core.Value.class); + + Set> types = new AnnotationRedisCodecResolver(codecs).findTypes(DeclaredCommandMethod.create(method), + Value.class); + + assertThat(types).contains(String.class, Number.class); + } + + RedisCodec resolve(Method method) { + + CommandMethod commandMethod = DeclaredCommandMethod.create(method); + AnnotationRedisCodecResolver resolver = new AnnotationRedisCodecResolver(codecs); + + return resolver.resolve(commandMethod); + } + + private static interface CommandMethods { + + String stringOnly(@Key String key, @Value String value); + + String annotatedKey(@Key String key, String value); + + String annotatedValue(String key, @Value String value); + + String annotatedByteArrayValue(String key, @Value byte[] value); + + String nothingAnnotated(String key, String value); + + String mixedTypes(@Key String key, @Value byte[] value); + + String mixedCodecs(@Key String key1, @Key byte[] key2, @Value String value); + + String withWrappers(@Value Range range, @Value io.lettuce.core.Value value); + + String withMap(Map map); + } + +} diff --git a/src/test/java/io/lettuce/core/dynamic/codec/ParameterWrappersUnitTests.java b/src/test/java/io/lettuce/core/dynamic/codec/ParameterWrappersUnitTests.java new file mode 100644 index 0000000000..85f61d9a7a --- /dev/null +++ 
b/src/test/java/io/lettuce/core/dynamic/codec/ParameterWrappersUnitTests.java @@ -0,0 +1,147 @@ +/* + * Copyright 2016-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.codec; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.lang.reflect.Method; +import java.util.List; +import java.util.Map; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.KeyValue; +import io.lettuce.core.Range; +import io.lettuce.core.Value; +import io.lettuce.core.dynamic.codec.AnnotationRedisCodecResolver.ParameterWrappers; +import io.lettuce.core.dynamic.parameter.Parameter; +import io.lettuce.core.dynamic.support.ReflectionUtils; +import io.lettuce.core.dynamic.support.TypeInformation; + +/** + * @author Mark Paluch + */ +class ParameterWrappersUnitTests { + + @Test + void shouldReturnValueTypeForRange() { + + Method method = ReflectionUtils.findMethod(CommandMethods.class, "range", Range.class); + + TypeInformation typeInformation = new Parameter(method, 0).getTypeInformation(); + + assertThat(ParameterWrappers.hasKeyType(typeInformation)).isFalse(); + assertThat(ParameterWrappers.hasValueType(typeInformation)).isFalse(); + assertThat(ParameterWrappers.supports(typeInformation)).isTrue(); + assertThat(ParameterWrappers.getValueType(typeInformation).getType()).isEqualTo(String.class); + } + + @Test + void shouldReturnValueTypeForValue() { + + Method method = ReflectionUtils.findMethod(CommandMethods.class, "value", Value.class); + + TypeInformation typeInformation = new Parameter(method, 0).getTypeInformation(); + + assertThat(ParameterWrappers.hasKeyType(typeInformation)).isFalse(); + assertThat(ParameterWrappers.hasValueType(typeInformation)).isTrue(); + assertThat(ParameterWrappers.getValueType(typeInformation).getType()).isEqualTo(String.class); + } + + @Test + void shouldReturnValueTypeForKeyValue() { + + Method method = ReflectionUtils.findMethod(CommandMethods.class, "keyValue", KeyValue.class); + + TypeInformation typeInformation = new Parameter(method, 0).getTypeInformation(); + + assertThat(ParameterWrappers.hasKeyType(typeInformation)).isTrue(); + assertThat(ParameterWrappers.getKeyType(typeInformation).getType()).isEqualTo(Integer.class); + + assertThat(ParameterWrappers.hasValueType(typeInformation)).isTrue(); + assertThat(ParameterWrappers.getValueType(typeInformation).getType()).isEqualTo(String.class); + } + + @Test + void shouldReturnValueTypeForArray() { + + Method method = ReflectionUtils.findMethod(CommandMethods.class, "array", String[].class); + + TypeInformation typeInformation = new Parameter(method, 0).getTypeInformation(); + + assertThat(ParameterWrappers.hasKeyType(typeInformation)).isFalse(); + assertThat(ParameterWrappers.supports(typeInformation)).isTrue(); + assertThat(ParameterWrappers.getValueType(typeInformation).getType()).isEqualTo(String.class); + } + + @Test + void shouldNotSupportByteArray() { + + Method method = 
ReflectionUtils.findMethod(CommandMethods.class, "byteArray", byte[].class); + + TypeInformation typeInformation = new Parameter(method, 0).getTypeInformation(); + + assertThat(ParameterWrappers.supports(typeInformation)).isFalse(); + } + + @Test + void shouldReturnValueTypeForList() { + + Method method = ReflectionUtils.findMethod(CommandMethods.class, "withList", List.class); + + TypeInformation typeInformation = new Parameter(method, 0).getTypeInformation(); + + assertThat(ParameterWrappers.hasKeyType(typeInformation)).isFalse(); + + assertThat(ParameterWrappers.supports(typeInformation)).isTrue(); + assertThat(ParameterWrappers.getValueType(typeInformation).getType()).isEqualTo(String.class); + } + + @Test + void shouldReturnValueTypeForMap() { + + Method method = ReflectionUtils.findMethod(CommandMethods.class, "withMap", Map.class); + + TypeInformation typeInformation = new Parameter(method, 0).getTypeInformation(); + + assertThat(ParameterWrappers.hasKeyType(typeInformation)).isTrue(); + assertThat(ParameterWrappers.getKeyType(typeInformation).getType()).isEqualTo(Integer.class); + + assertThat(ParameterWrappers.hasValueType(typeInformation)).isTrue(); + assertThat(ParameterWrappers.getValueType(typeInformation).getType()).isEqualTo(String.class); + } + + private static interface CommandMethods { + + String range(Range range); + + String value(Value range); + + String keyValue(KeyValue range); + + String array(String[] values); + + String byteArray(byte[] values); + + String withWrappers(Range range, io.lettuce.core.Value value, + io.lettuce.core.KeyValue keyValue); + + String withList(List map); + + String withMap(Map map); + } + +} diff --git a/src/test/java/io/lettuce/core/dynamic/intercept/InvocationProxyFactoryUnitTests.java b/src/test/java/io/lettuce/core/dynamic/intercept/InvocationProxyFactoryUnitTests.java new file mode 100644 index 0000000000..077b771c3b --- /dev/null +++ b/src/test/java/io/lettuce/core/dynamic/intercept/InvocationProxyFactoryUnitTests.java @@ -0,0 +1,101 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.dynamic.intercept; + +import static org.assertj.core.api.Assertions.assertThat; + +import org.junit.jupiter.api.Test; + +/** + * @author Mark Paluch + */ +class InvocationProxyFactoryUnitTests { + + @Test + void shouldDelegateCallsToInterceptor() { + + InvocationProxyFactory factory = new InvocationProxyFactory(); + factory.addInterface(TargetWithBooleanMethod.class); + factory.addInterceptor(new ReturnValue(Boolean.TRUE)); + + TargetWithBooleanMethod target = factory.createProxy(getClass().getClassLoader()); + + assertThat(target.someMethod()).isTrue(); + } + + @Test + void shouldNotFailWithoutFurtherInterceptors() { + + InvocationProxyFactory factory = new InvocationProxyFactory(); + factory.addInterface(TargetWithBooleanMethod.class); + + TargetWithBooleanMethod target = factory.createProxy(getClass().getClassLoader()); + + assertThat(target.someMethod()).isNull(); + } + + @Test + void shouldCallInterceptorsInOrder() { + + InvocationProxyFactory factory = new InvocationProxyFactory(); + factory.addInterface(TargetWithStringMethod.class); + factory.addInterceptor(new StringAppendingMethodInterceptor("-foo")); + factory.addInterceptor(new StringAppendingMethodInterceptor("-bar")); + factory.addInterceptor(new ReturnValue("actual")); + + TargetWithStringMethod target = factory.createProxy(getClass().getClassLoader()); + + assertThat(target.run()).isEqualTo("actual-bar-foo"); + } + + private interface TargetWithBooleanMethod { + + Boolean someMethod(); + } + + private static class ReturnValue implements MethodInterceptor { + + private final Object value; + + ReturnValue(Object value) { + this.value = value; + } + + @Override + public Object invoke(MethodInvocation invocation) { + return value; + } + } + + private interface TargetWithStringMethod { + + String run(); + } + + private static class StringAppendingMethodInterceptor implements MethodInterceptor { + + private final String toAppend; + + StringAppendingMethodInterceptor(String toAppend) { + this.toAppend = toAppend; + } + + @Override + public Object invoke(MethodInvocation invocation) throws Throwable { + return invocation.proceed().toString() + toAppend; + } + } +} diff --git a/src/test/java/io/lettuce/core/dynamic/output/CodecAwareOutputResolverUnitTests.java b/src/test/java/io/lettuce/core/dynamic/output/CodecAwareOutputResolverUnitTests.java new file mode 100644 index 0000000000..5f39045d9a --- /dev/null +++ b/src/test/java/io/lettuce/core/dynamic/output/CodecAwareOutputResolverUnitTests.java @@ -0,0 +1,138 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */
+package io.lettuce.core.dynamic.output;
+
+import static org.assertj.core.api.Assertions.assertThat;
+
+import java.lang.reflect.Method;
+import java.nio.ByteBuffer;
+import java.util.List;
+import java.util.Map;
+
+import org.junit.jupiter.api.Test;
+
+import io.lettuce.core.codec.RedisCodec;
+import io.lettuce.core.dynamic.CommandMethod;
+import io.lettuce.core.dynamic.DeclaredCommandMethod;
+import io.lettuce.core.dynamic.support.ReflectionUtils;
+import io.lettuce.core.output.*;
+
+/**
+ * @author Mark Paluch
+ */
+class CodecAwareOutputResolverUnitTests {
+
+    private CodecAwareOutputFactoryResolver resolver = new CodecAwareOutputFactoryResolver(
+            new OutputRegistryCommandOutputFactoryResolver(new OutputRegistry()), new ByteBufferAndStringCodec());
+
+    @Test
+    void shouldResolveValueOutput() {
+
+        CommandOutput<?, ?, ?> commandOutput = getCommandOutput("string");
+
+        assertThat(commandOutput).isInstanceOf(ValueOutput.class);
+    }
+
+    @Test
+    void shouldResolveValueListOutput() {
+
+        assertThat(getCommandOutput("stringList")).isOfAnyClassIn(ValueListOutput.class, StringListOutput.class);
+        assertThat(getCommandOutput("charSequenceList")).isOfAnyClassIn(ValueListOutput.class, StringListOutput.class);
+    }
+
+    @Test
+    void shouldResolveKeyOutput() {
+
+        CommandOutput<?, ?, ?> commandOutput = getCommandOutput("byteBuffer");
+
+        assertThat(commandOutput).isInstanceOf(KeyOutput.class);
+    }
+
+    @Test
+    void shouldResolveKeyListOutput() {
+
+        CommandOutput<?, ?, ?> commandOutput = getCommandOutput("byteBufferList");
+
+        assertThat(commandOutput).isInstanceOf(KeyListOutput.class);
+    }
+
+    @Test
+    void shouldResolveListOfMapsOutput() {
+
+        CommandOutput<?, ?, ?> commandOutput = getCommandOutput("listOfMapsOutput");
+
+        assertThat(commandOutput).isInstanceOf(ListOfMapsOutput.class);
+    }
+
+    @Test
+    void shouldResolveMapsOutput() {
+
+        CommandOutput<?, ?, ?> commandOutput = getCommandOutput("mapOutput");
+
+        assertThat(commandOutput).isInstanceOf(MapOutput.class);
+    }
+
+    CommandOutput<?, ?, ?> getCommandOutput(String methodName) {
+
+        Method method = ReflectionUtils.findMethod(CommandMethods.class, methodName);
+        CommandMethod commandMethod = DeclaredCommandMethod.create(method);
+
+        CommandOutputFactory factory = resolver
+                .resolveCommandOutput(new OutputSelector(commandMethod.getReturnType(), new ByteBufferAndStringCodec()));
+
+        return factory.create(new ByteBufferAndStringCodec());
+    }
+
+    private static interface CommandMethods {
+
+        List<String> stringList();
+
+        List<CharSequence> charSequenceList();
+
+        List<ByteBuffer> byteBufferList();
+
+        List<Map<ByteBuffer, String>> listOfMapsOutput();
+
+        Map<ByteBuffer, String> mapOutput();
+
+        String string();
+
+        ByteBuffer byteBuffer();
+    }
+
+    private static class ByteBufferAndStringCodec implements RedisCodec<ByteBuffer, String> {
+
+        @Override
+        public ByteBuffer decodeKey(ByteBuffer bytes) {
+            return null;
+        }
+
+        @Override
+        public String decodeValue(ByteBuffer bytes) {
+            return null;
+        }
+
+        @Override
+        public ByteBuffer encodeKey(ByteBuffer key) {
+            return null;
+        }
+
+        @Override
+        public ByteBuffer encodeValue(String value) {
+            return null;
+        }
+    }
+}
diff --git a/src/test/java/io/lettuce/core/dynamic/output/OutputRegistryCommandOutputFactoryResolverUnitTests.java b/src/test/java/io/lettuce/core/dynamic/output/OutputRegistryCommandOutputFactoryResolverUnitTests.java
new file mode 100644
index 0000000000..f83be70103
--- /dev/null
+++ b/src/test/java/io/lettuce/core/dynamic/output/OutputRegistryCommandOutputFactoryResolverUnitTests.java
@@ -0,0 +1,215 @@
+/*
+ * Copyright 2011-2020 the original author or authors.
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.output; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.lang.reflect.Method; +import java.util.Collection; +import java.util.List; + +import org.junit.jupiter.api.Test; +import org.reactivestreams.Publisher; + +import reactor.core.publisher.Flux; +import reactor.core.publisher.Mono; +import io.lettuce.core.GeoCoordinates; +import io.lettuce.core.ScoredValue; +import io.lettuce.core.Value; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.dynamic.DeclaredCommandMethod; +import io.lettuce.core.dynamic.support.ReflectionUtils; +import io.lettuce.core.output.*; + +/** + * @author Mark Paluch + */ +class OutputRegistryCommandOutputFactoryResolverUnitTests { + + private OutputRegistryCommandOutputFactoryResolver resolver = new OutputRegistryCommandOutputFactoryResolver( + new OutputRegistry()); + + @Test + void shouldResolveStringListOutput() { + + assertThat(getCommandOutput("stringList")).isInstanceOf(KeyListOutput.class); + assertThat(getCommandOutput("stringIterable")).isInstanceOf(KeyListOutput.class); + } + + @Test + void shouldResolveStreamingStringListOutput() { + assertThat(getStreamingCommandOutput("stringFlux")).isInstanceOf(KeyListOutput.class); + } + + @Test + void shouldResolveVoidOutput() { + + assertThat(getCommandOutput("voidMethod")).isInstanceOf(VoidOutput.class); + assertThat(getCommandOutput("voidWrapper")).isInstanceOf(VoidOutput.class); + } + + @Test + void shouldResolveKeyOutput() { + assertThat(getCommandOutput("stringMono")).isInstanceOf(KeyOutput.class); + } + + @Test + void shouldResolveStringValueListOutput() { + + CommandOutput commandOutput = getCommandOutput("stringValueCollection"); + + assertThat(commandOutput).isInstanceOf(ValueValueListOutput.class); + } + + @Test + void shouldResolveStringScoredValueListOutput() { + + CommandOutput commandOutput = getCommandOutput("stringScoredValueList"); + + assertThat(commandOutput).isInstanceOf(ScoredValueListOutput.class); + } + + @Test + void shouldResolveGeoCoordinatesValueOutput() { + + CommandOutput commandOutput = getCommandOutput("geoCoordinatesValueList"); + + assertThat(commandOutput).isInstanceOf(GeoCoordinatesValueListOutput.class); + } + + @Test + void shouldResolveByteArrayOutput() { + + CommandOutput commandOutput = getCommandOutput("bytes"); + + assertThat(commandOutput).isInstanceOf(ByteArrayOutput.class); + } + + @Test + void shouldResolveBooleanOutput() { + + CommandOutput commandOutput = getCommandOutput("bool"); + + assertThat(commandOutput).isInstanceOf(BooleanOutput.class); + } + + @Test + void shouldResolveBooleanWrappedOutput() { + + CommandOutput commandOutput = getCommandOutput("boolWrapper"); + + assertThat(commandOutput).isInstanceOf(BooleanOutput.class); + } + + @Test + void shouldResolveBooleanListOutput() { + + CommandOutput commandOutput = getCommandOutput("boolList"); + + assertThat(commandOutput).isInstanceOf(BooleanListOutput.class); + } + + @Test + 
void shouldResolveListOfMapsOutput() { + + CommandOutput commandOutput = getCommandOutput("listOfMapsOutput"); + + assertThat(commandOutput).isInstanceOf(ListOfMapsOutput.class); + } + + @Test + void stringValueCollectionIsAssignableFromStringValueListOutput() { + + OutputSelector selector = getOutputSelector("stringValueCollection"); + + boolean assignable = resolver.isAssignableFrom(selector, + OutputRegistry.getOutputComponentType(StringValueListOutput.class)); + assertThat(assignable).isTrue(); + } + + @Test + void stringWildcardValueCollectionIsAssignableFromOutputs() { + + OutputSelector selector = getOutputSelector("stringValueCollection"); + + assertThat(resolver.isAssignableFrom(selector, OutputRegistry.getOutputComponentType(ScoredValueListOutput.class))) + .isFalse(); + + assertThat(resolver.isAssignableFrom(selector, OutputRegistry.getOutputComponentType(StringValueListOutput.class))) + .isTrue(); + + } + + CommandOutput getCommandOutput(String methodName) { + + OutputSelector outputSelector = getOutputSelector(methodName); + CommandOutputFactory factory = resolver.resolveCommandOutput(Publisher.class.isAssignableFrom(outputSelector + .getOutputType().getRawClass()) ? unwrapReactiveType(outputSelector) : outputSelector); + + return factory.create(new StringCodec()); + } + + CommandOutput getStreamingCommandOutput(String methodName) { + + OutputSelector outputSelector = getOutputSelector(methodName); + CommandOutputFactory factory = resolver.resolveStreamingCommandOutput(unwrapReactiveType(outputSelector)); + + return factory.create(new StringCodec()); + } + + private OutputSelector unwrapReactiveType(OutputSelector outputSelector) { + return new OutputSelector(outputSelector.getOutputType().getGeneric(0), outputSelector.getRedisCodec()); + } + + private OutputSelector getOutputSelector(String methodName) { + + Method method = ReflectionUtils.findMethod(CommandMethods.class, methodName); + return new OutputSelector(DeclaredCommandMethod.create(method).getActualReturnType(), StringCodec.UTF8); + } + + private interface CommandMethods { + + List stringList(); + + Iterable stringIterable(); + + Mono stringMono(); + + Flux stringFlux(); + + Collection> stringValueCollection(); + + Collection> stringWildcardValueCollection(); + + List> stringScoredValueList(); + + List> geoCoordinatesValueList(); + + byte[] bytes(); + + boolean bool(); + + Boolean boolWrapper(); + + void voidMethod(); + + Void voidWrapper(); + + List boolList(); + + ListOfMapsOutput listOfMapsOutput(); + } +} diff --git a/src/test/java/io/lettuce/core/dynamic/output/OutputRegistryUnitTests.java b/src/test/java/io/lettuce/core/dynamic/output/OutputRegistryUnitTests.java new file mode 100644 index 0000000000..23b8768e72 --- /dev/null +++ b/src/test/java/io/lettuce/core/dynamic/output/OutputRegistryUnitTests.java @@ -0,0 +1,119 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.dynamic.output; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.nio.ByteBuffer; +import java.util.List; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.ScoredValue; +import io.lettuce.core.codec.ByteArrayCodec; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.dynamic.support.ClassTypeInformation; +import io.lettuce.core.dynamic.support.ResolvableType; +import io.lettuce.core.output.*; + +/** + * @author Mark Paluch + */ +class OutputRegistryUnitTests { + + @Test + void getKeyOutputType() { + + OutputType outputComponentType = OutputRegistry.getOutputComponentType(KeyListOutput.class); + + assertThat(outputComponentType.getTypeInformation().getComponentType().getType()).isEqualTo(Object.class); + } + + @Test + void getStringListOutputType() { + + OutputType outputComponentType = OutputRegistry.getOutputComponentType(StringListOutput.class); + + assertThat(outputComponentType.getTypeInformation().getComponentType()) + .isEqualTo(ClassTypeInformation.from(String.class)); + } + + @Test + void componentTypeOfKeyOuputWithCodecIsAssignableFromString() { + + OutputType outputComponentType = OutputRegistry.getOutputComponentType(KeyOutput.class); + + ResolvableType resolvableType = outputComponentType.withCodec(new StringCodec()); + + assertThat(resolvableType.isAssignableFrom(String.class)).isTrue(); + } + + @Test + void componentTypeOfKeyListOuputWithCodecIsAssignableFromListOfString() { + + OutputType outputComponentType = OutputRegistry.getOutputComponentType(KeyListOutput.class); + + ResolvableType resolvableType = outputComponentType.withCodec(new StringCodec()); + + assertThat(resolvableType.isAssignableFrom(ResolvableType.forClassWithGenerics(List.class, String.class))).isTrue(); + } + + @Test + void streamingTypeOfKeyOuputWithCodecIsAssignableFromString() { + + OutputType outputComponentType = OutputRegistry.getStreamingType(KeyListOutput.class); + + ResolvableType resolvableType = outputComponentType.withCodec(new StringCodec()); + + assertThat(resolvableType.isAssignableFrom(ResolvableType.forClass(String.class))).isTrue(); + } + + @Test + void streamingTypeOfKeyListOuputWithCodecIsAssignableFromListOfString() { + + OutputType outputComponentType = OutputRegistry.getStreamingType(ScoredValueListOutput.class); + + ResolvableType resolvableType = outputComponentType.withCodec(new StringCodec()); + + assertThat(resolvableType.isAssignableFrom(ResolvableType.forClassWithGenerics(ScoredValue.class, String.class))) + .isTrue(); + } + + @Test + void customizedValueOutput() { + + OutputType outputComponentType = OutputRegistry.getOutputComponentType(KeyTypedOutput.class); + + ResolvableType resolvableType = outputComponentType.withCodec(ByteArrayCodec.INSTANCE); + + assertThat(resolvableType.isAssignableFrom(ResolvableType.forClass(byte[].class))).isTrue(); + } + + private static abstract class IntermediateOutput extends CommandOutput { + + IntermediateOutput(RedisCodec codec, V1 output) { + super(codec, null); + } + } + + private static class KeyTypedOutput extends IntermediateOutput { + + KeyTypedOutput(RedisCodec codec) { + super(codec, null); + } + } +} diff --git a/src/test/java/io/lettuce/core/dynamic/segment/AnnotationCommandSegmentFactoryUnitTests.java b/src/test/java/io/lettuce/core/dynamic/segment/AnnotationCommandSegmentFactoryUnitTests.java new file mode 100644 index 0000000000..c999b8d840 --- /dev/null +++ 
b/src/test/java/io/lettuce/core/dynamic/segment/AnnotationCommandSegmentFactoryUnitTests.java @@ -0,0 +1,131 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.segment; + +import static org.assertj.core.api.Assertions.assertThat; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.dynamic.CommandMethod; +import io.lettuce.core.dynamic.DeclaredCommandMethod; +import io.lettuce.core.dynamic.annotation.Command; +import io.lettuce.core.dynamic.annotation.CommandNaming; +import io.lettuce.core.dynamic.annotation.CommandNaming.LetterCase; +import io.lettuce.core.dynamic.annotation.CommandNaming.Strategy; +import io.lettuce.core.dynamic.support.ReflectionUtils; + +/** + * @author Mark Paluch + */ +class AnnotationCommandSegmentFactoryUnitTests { + + private AnnotationCommandSegmentFactory factory = new AnnotationCommandSegmentFactory(); + + @Test + void notAnnotatedDotAsIs() { + + CommandMethod commandMethod = DeclaredCommandMethod.create(ReflectionUtils.findMethod(CommandMethods.class, + "notAnnotated")); + + CommandSegments commandSegments = factory.createCommandSegments(commandMethod); + + assertThat(commandSegments).isEmpty(); + assertThat(commandSegments.getCommandType().name()).isEqualTo("not.Annotated"); + } + + @Test + void uppercaseDot() { + + CommandMethod commandMethod = DeclaredCommandMethod.create(ReflectionUtils + .findMethod(CommandMethods.class, "upperCase")); + + CommandSegments commandSegments = factory.createCommandSegments(commandMethod); + + assertThat(commandSegments).isEmpty(); + assertThat(commandSegments.getCommandType().name()).isEqualTo("UPPER.CASE"); + } + + @Test + void methodNameAsIs() { + + CommandMethod commandMethod = DeclaredCommandMethod.create(ReflectionUtils.findMethod(CommandMethods.class, + "methodName")); + + CommandSegments commandSegments = factory.createCommandSegments(commandMethod); + + assertThat(commandSegments).isEmpty(); + assertThat(commandSegments.getCommandType().name()).isEqualTo("methodName"); + } + + @Test + void splitAsIs() { + + CommandMethod commandMethod = DeclaredCommandMethod.create(ReflectionUtils.findMethod(CommandMethods.class, + "clientSetname")); + + CommandSegments commandSegments = factory.createCommandSegments(commandMethod); + + assertThat(commandSegments).hasSize(1).extracting(CommandSegment::asString).contains("Setname"); + assertThat(commandSegments.getCommandType().name()).isEqualTo("client"); + } + + @Test + void commandAnnotation() { + + CommandMethod commandMethod = DeclaredCommandMethod.create(ReflectionUtils + .findMethod(CommandMethods.class, "atCommand")); + + CommandSegments commandSegments = factory.createCommandSegments(commandMethod); + + assertThat(commandSegments).hasSize(1).extracting(CommandSegment::asString).contains("WORLD"); + assertThat(commandSegments.getCommandType().name()).isEqualTo("HELLO"); + } + + @Test + void splitDefault() { + + CommandMethod commandMethod = DeclaredCommandMethod 
+ .create(ReflectionUtils.findMethod(Defaulted.class, "clientSetname")); + + CommandSegments commandSegments = factory.createCommandSegments(commandMethod); + + assertThat(commandSegments).hasSize(1).extracting(CommandSegment::asString).contains("SETNAME"); + assertThat(commandSegments.getCommandType().name()).isEqualTo("CLIENT"); + } + + @CommandNaming(strategy = Strategy.DOT, letterCase = LetterCase.AS_IS) + private static interface CommandMethods { + + void notAnnotated(); + + @CommandNaming(letterCase = LetterCase.UPPERCASE) + void upperCase(); + + @CommandNaming(strategy = Strategy.METHOD_NAME) + void methodName(); + + @CommandNaming(strategy = Strategy.SPLIT) + void clientSetname(); + + @Command("HELLO WORLD") + void atCommand(); + } + + private static interface Defaulted { + + void clientSetname(); + } +} diff --git a/src/test/java/io/lettuce/core/dynamic/support/ParametrizedTypeInformationUnitTests.java b/src/test/java/io/lettuce/core/dynamic/support/ParametrizedTypeInformationUnitTests.java new file mode 100644 index 0000000000..19509f3828 --- /dev/null +++ b/src/test/java/io/lettuce/core/dynamic/support/ParametrizedTypeInformationUnitTests.java @@ -0,0 +1,140 @@ +/* + * Copyright 2016-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.dynamic.support; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.util.Collection; +import java.util.List; +import java.util.Set; + +import org.junit.jupiter.api.Test; + +/** + * @author Mark Paluch + */ +class ParametrizedTypeInformationUnitTests { + + @Test + void isAssignableShouldConsiderExactType() { + + TypeInformation target = ClassTypeInformation + .fromReturnTypeOf(ReflectionUtils.findMethod(TestType.class, "exactNumber")); + + assertThat(target.isAssignableFrom(ClassTypeInformation.from(ListOfNumber.class))).isTrue(); + assertThat(target.isAssignableFrom(ClassTypeInformation.from(ListOfInteger.class))).isFalse(); + assertThat(target.isAssignableFrom(ClassTypeInformation.from(ListOfString.class))).isFalse(); + } + + @Test + void isAssignableShouldConsiderCompatibleType() { + + TypeInformation target = ClassTypeInformation + .fromReturnTypeOf(ReflectionUtils.findMethod(TestType.class, "collectionOfNumber")); + + assertThat(target.isAssignableFrom(ClassTypeInformation.from(ListOfNumber.class))).isTrue(); + assertThat(target.isAssignableFrom(ClassTypeInformation.from(ListOfInteger.class))).isFalse(); + assertThat(target.isAssignableFrom(ClassTypeInformation.from(ListOfString.class))).isFalse(); + } + + @Test + void isAssignableShouldConsiderWildcardOfNumberType() { + + TypeInformation target = ClassTypeInformation + .fromReturnTypeOf(ReflectionUtils.findMethod(TestType.class, "numberOrSubtype")); + + assertThat(target.isAssignableFrom(ClassTypeInformation.from(ListOfNumber.class))).isTrue(); + assertThat(target.isAssignableFrom(ClassTypeInformation.from(ListOfInteger.class))).isTrue(); + assertThat(target.isAssignableFrom(ClassTypeInformation.from(ListOfString.class))).isFalse(); + } + + @Test + void isAssignableShouldConsiderWildcard() { + + TypeInformation target = ClassTypeInformation + .fromReturnTypeOf(ReflectionUtils.findMethod(TestType.class, "anything")); + + assertThat(target.isAssignableFrom(ClassTypeInformation.from(ListOfNumber.class))).isTrue(); + assertThat(target.isAssignableFrom(ClassTypeInformation.from(ListOfInteger.class))).isTrue(); + assertThat(target.isAssignableFrom(ClassTypeInformation.from(ListOfString.class))).isTrue(); + } + + @Test + void returnsNullMapValueTypeForNonMapProperties() { + + TypeInformation valueType = ClassTypeInformation.from(Bar.class).getSuperTypeInformation(List.class); + TypeInformation mapValueType = valueType.getMapValueType(); + + assertThat(valueType).isInstanceOf(ParametrizedTypeInformation.class); + assertThat(mapValueType).isNull(); + } + + @Test + void isAssignableShouldConsiderNestedParameterTypes() { + + TypeInformation target = ClassTypeInformation + .fromReturnTypeOf(ReflectionUtils.findMethod(TestType.class, "collectionOfIterableOfNumber")); + + assertThat(target.isAssignableFrom(ClassTypeInformation.from(ListOfIterableOfInteger.class))).isFalse(); + assertThat(target.isAssignableFrom(ClassTypeInformation.from(ListOfListOfNumber.class))).isFalse(); + assertThat(target.isAssignableFrom(ClassTypeInformation.from(ListOfSetOfNumber.class))).isFalse(); + } + + private interface Bar extends List { + + } + + private static interface TestType { + + Collection collectionOfNumber(); + + Collection> collectionOfIterableOfNumber(); + + List exactNumber(); + + List anything(); + + List numberOrSubtype(); + } + + private static interface ListOfNumber extends List { + + } + + static interface ListOfIterableOfNumber extends List> { + + } + + private static interface ListOfSetOfNumber 
extends List> { + + } + + private static interface ListOfIterableOfInteger extends List> { + + } + + private static interface ListOfListOfNumber extends List> { + + } + + private static interface ListOfString extends List { + + } + + private static interface ListOfInteger extends List { + + } +} diff --git a/src/test/java/io/lettuce/core/dynamic/support/WildcardTypeInformationUnitTests.java b/src/test/java/io/lettuce/core/dynamic/support/WildcardTypeInformationUnitTests.java new file mode 100644 index 0000000000..82901e2413 --- /dev/null +++ b/src/test/java/io/lettuce/core/dynamic/support/WildcardTypeInformationUnitTests.java @@ -0,0 +1,114 @@ +/* + * Copyright 2016-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic.support; + +import static io.lettuce.core.dynamic.support.ClassTypeInformation.from; +import static org.assertj.core.api.Assertions.assertThat; + +import java.lang.reflect.Method; +import java.util.Collection; +import java.util.List; + +import org.junit.jupiter.api.Test; + +/** + * @author Mark Paluch + */ +class WildcardTypeInformationUnitTests { + + @Test + void shouldResolveWildcardType() { + + TypeInformation information = ClassTypeInformation.fromReturnTypeOf(methodOf("listOfAnything")); + + assertThat(information.getComponentType()).isInstanceOf(WildcardTypeInformation.class); + } + + @Test + void isAssignableFromExactType() { + + TypeInformation information = ClassTypeInformation.fromReturnTypeOf(methodOf("listOfAnything")); + TypeInformation compatible = ClassTypeInformation.fromReturnTypeOf(methodOf("anotherListOfAnything")); + + assertThat(information.isAssignableFrom(compatible)).isTrue(); + } + + @Test + void isAssignableFromCompatibleFirstLevelType() { + + TypeInformation target = ClassTypeInformation.fromReturnTypeOf(methodOf("collectionOfAnything")); + TypeInformation source = ClassTypeInformation.fromReturnTypeOf(methodOf("anotherListOfAnything")); + + assertThat(target.isAssignableFrom(source)).isTrue(); + } + + @Test + void isAssignableFromCompatibleComponentType() { + + TypeInformation target = ClassTypeInformation.fromReturnTypeOf(methodOf("listOfAnything")); + TypeInformation source = ClassTypeInformation.fromReturnTypeOf(methodOf("exactNumber")); + + assertThat(target.isAssignableFrom(source)).isTrue(); + assertThat(target.isAssignableFrom(ClassTypeInformation.SET)).isFalse(); + } + + @Test + void isAssignableFromUpperBoundComponentType() { + + TypeInformation target = componentTypeOf("atMostInteger"); + + assertThat(target.isAssignableFrom(from(Integer.class))).isTrue(); + assertThat(target.isAssignableFrom(from(Number.class))).isTrue(); + assertThat(target.isAssignableFrom(from(Float.class))).isFalse(); + } + + @Test + void isAssignableFromLowerBoundComponentType() { + + TypeInformation target = componentTypeOf("atLeastNumber"); + + assertThat(target.isAssignableFrom(from(Integer.class))).isTrue(); + assertThat(target.isAssignableFrom(from(Number.class))).isTrue(); + 
assertThat(target.isAssignableFrom(from(Float.class))).isTrue(); + assertThat(target.isAssignableFrom(from(String.class))).isFalse(); + assertThat(target.isAssignableFrom(from(Object.class))).isFalse(); + } + + TypeInformation componentTypeOf(String name) { + return ClassTypeInformation.fromReturnTypeOf(methodOf(name)).getComponentType(); + } + + Method methodOf(String name) { + return ReflectionUtils.findMethod(GenericReturnTypes.class, name); + } + + private static interface GenericReturnTypes { + + List exactNumber(); + + List listOfAnything(); + + List anotherListOfAnything(); + + Collection collectionOfAnything(); + + List atMostInteger(); + + List exactFloat(); + + List atLeastNumber(); + } +} diff --git a/src/test/java/io/lettuce/core/event/ConnectionEventsTriggeredIntegrationTests.java b/src/test/java/io/lettuce/core/event/ConnectionEventsTriggeredIntegrationTests.java new file mode 100644 index 0000000000..20309e878b --- /dev/null +++ b/src/test/java/io/lettuce/core/event/ConnectionEventsTriggeredIntegrationTests.java @@ -0,0 +1,55 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.event; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.time.Duration; +import java.time.temporal.ChronoUnit; + +import org.junit.jupiter.api.Test; + +import reactor.core.publisher.Flux; +import reactor.test.StepVerifier; +import io.lettuce.core.RedisClient; +import io.lettuce.core.RedisURI; +import io.lettuce.core.TestSupport; +import io.lettuce.core.event.connection.ConnectionEvent; +import io.lettuce.test.resource.FastShutdown; +import io.lettuce.test.resource.TestClientResources; + +/** + * @author Mark Paluch + */ +class ConnectionEventsTriggeredIntegrationTests extends TestSupport { + + @Test + void testConnectionEvents() { + + RedisClient client = RedisClient.create(TestClientResources.get(), RedisURI.Builder.redis(host, port).build()); + + Flux publisher = client.getResources().eventBus().get() + .filter(event -> event instanceof ConnectionEvent).cast(ConnectionEvent.class); + + StepVerifier.create(publisher).then(() -> client.connect().close()).assertNext(event -> { + assertThat(event.remoteAddress()).isNotNull(); + assertThat(event.localAddress()).isNotNull(); + assertThat(event.toString()).contains("->"); + }).expectNextCount(3).thenCancel().verify(Duration.of(5, ChronoUnit.SECONDS)); + + FastShutdown.shutdown(client); + } +} diff --git a/src/test/java/io/lettuce/core/event/DefaultEventBusUnitTests.java b/src/test/java/io/lettuce/core/event/DefaultEventBusUnitTests.java new file mode 100644 index 0000000000..fac1710de1 --- /dev/null +++ b/src/test/java/io/lettuce/core/event/DefaultEventBusUnitTests.java @@ -0,0 +1,63 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.event; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.util.concurrent.ArrayBlockingQueue; + +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; +import org.mockito.Mock; +import org.mockito.junit.jupiter.MockitoExtension; + +import reactor.core.Disposable; +import reactor.core.scheduler.Schedulers; +import reactor.test.StepVerifier; + +/** + * @author Mark Paluch + */ +@ExtendWith(MockitoExtension.class) +class DefaultEventBusUnitTests { + + @Mock + private Event event; + + @Test + void publishToSubscriber() { + + EventBus sut = new DefaultEventBus(Schedulers.immediate()); + + StepVerifier.create(sut.get()).then(() -> sut.publish(event)).expectNext(event).thenCancel().verify(); + } + + @Test + void publishToMultipleSubscribers() throws Exception { + + EventBus sut = new DefaultEventBus(Schedulers.parallel()); + + ArrayBlockingQueue arrayQueue = new ArrayBlockingQueue<>(5); + + Disposable disposable1 = sut.get().doOnNext(arrayQueue::add).subscribe(); + StepVerifier.create(sut.get().doOnNext(arrayQueue::add)).then(() -> sut.publish(event)).expectNext(event).thenCancel() + .verify(); + + assertThat(arrayQueue.take()).isEqualTo(event); + assertThat(arrayQueue.take()).isEqualTo(event); + disposable1.dispose(); + } +} diff --git a/src/test/java/io/lettuce/core/event/DefaultEventPublisherOptionsUnitTests.java b/src/test/java/io/lettuce/core/event/DefaultEventPublisherOptionsUnitTests.java new file mode 100644 index 0000000000..2cff90154c --- /dev/null +++ b/src/test/java/io/lettuce/core/event/DefaultEventPublisherOptionsUnitTests.java @@ -0,0 +1,54 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.event; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.time.Duration; +import java.util.concurrent.TimeUnit; + +import org.junit.jupiter.api.Test; + +/** + * @author Mark Paluch + */ +class DefaultEventPublisherOptionsUnitTests { + + @Test + void testDefault() { + + DefaultEventPublisherOptions sut = DefaultEventPublisherOptions.create(); + + assertThat(sut.eventEmitInterval()).isEqualTo(Duration.ofMinutes(10)); + } + + @Test + void testDisabled() { + + DefaultEventPublisherOptions sut = DefaultEventPublisherOptions.disabled(); + + assertThat(sut.eventEmitInterval()).isEqualTo(Duration.ZERO); + } + + @Test + void testBuilder() { + + DefaultEventPublisherOptions sut = DefaultEventPublisherOptions.builder().eventEmitInterval(1, TimeUnit.SECONDS) + .build(); + + assertThat(sut.eventEmitInterval()).isEqualTo(Duration.ofSeconds(1)); + } +} diff --git a/src/test/java/io/lettuce/core/internal/AbstractInvocationHandlerUnitTests.java b/src/test/java/io/lettuce/core/internal/AbstractInvocationHandlerUnitTests.java new file mode 100644 index 0000000000..d1ffc78813 --- /dev/null +++ b/src/test/java/io/lettuce/core/internal/AbstractInvocationHandlerUnitTests.java @@ -0,0 +1,81 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
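// Illustrative sketch (not part of this change set): DefaultEventPublisherOptions, exercised by
// the tests above, controls how often metrics events are emitted. The wiring below assumes the
// ClientResources builder method commandLatencyPublisherOptions(...) is available; verify the
// method name against the Lettuce version in use.
import java.util.concurrent.TimeUnit;

import io.lettuce.core.RedisClient;
import io.lettuce.core.event.DefaultEventPublisherOptions;
import io.lettuce.core.resource.ClientResources;
import io.lettuce.core.resource.DefaultClientResources;

class EventPublisherOptionsSketch {

    static RedisClient createClient() {
        ClientResources resources = DefaultClientResources.builder()
                .commandLatencyPublisherOptions(
                        DefaultEventPublisherOptions.builder().eventEmitInterval(1, TimeUnit.MINUTES).build())
                .build();

        return RedisClient.create(resources);
    }
}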
+ */ +package io.lettuce.core.internal; + +import static org.assertj.core.api.AssertionsForInterfaceTypes.assertThat; + +import java.lang.reflect.Method; +import java.lang.reflect.Proxy; +import java.util.Collection; + +import org.junit.jupiter.api.Test; + +/** + * @author Mark Paluch + */ +class AbstractInvocationHandlerUnitTests { + + @Test + void shouldHandleInterfaceMethod() { + + ReturnOne proxy = createProxy(); + assertThat(proxy.returnOne()).isEqualTo(1); + } + + @Test + void shouldBeEqualToSelf() { + + ReturnOne proxy1 = createProxy(); + ReturnOne proxy2 = createProxy(); + + assertThat(proxy1).isEqualTo(proxy1); + assertThat(proxy1.hashCode()).isEqualTo(proxy1.hashCode()); + + assertThat(proxy1).isNotEqualTo(proxy2); + assertThat(proxy1.hashCode()).isNotEqualTo(proxy2.hashCode()); + } + + @Test + void shouldBeNotEqualToProxiesWithDifferentInterfaces() { + + ReturnOne proxy1 = createProxy(); + Object proxy2 = Proxy.newProxyInstance(getClass().getClassLoader(), new Class[] { ReturnOne.class, Collection.class }, + new InvocationHandler()); + + assertThat(proxy1).isNotEqualTo(proxy2); + assertThat(proxy1.hashCode()).isNotEqualTo(proxy2.hashCode()); + } + + private ReturnOne createProxy() { + + return (ReturnOne) Proxy.newProxyInstance(getClass().getClassLoader(), new Class[] { ReturnOne.class }, + new InvocationHandler()); + + } + + static class InvocationHandler extends AbstractInvocationHandler { + + @Override + protected Object handleInvocation(Object proxy, Method method, Object[] args) { + return 1; + } + } + + static interface ReturnOne { + int returnOne(); + } + +} diff --git a/src/test/java/io/lettuce/core/internal/HostAndPortUnitTests.java b/src/test/java/io/lettuce/core/internal/HostAndPortUnitTests.java new file mode 100644 index 0000000000..9e6f1a8260 --- /dev/null +++ b/src/test/java/io/lettuce/core/internal/HostAndPortUnitTests.java @@ -0,0 +1,186 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.internal; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.fail; + +import org.junit.jupiter.api.Test; + +/** + * @author Mark Paluch + */ +class HostAndPortUnitTests { + + @Test + void testFromStringWellFormed() { + // Well-formed inputs. + checkFromStringCase("google.com", 80, "google.com", 80, false); + checkFromStringCase("google.com", 80, "google.com", 80, false); + checkFromStringCase("192.0.2.1", 82, "192.0.2.1", 82, false); + checkFromStringCase("[2001::1]", 84, "2001::1", 84, false); + checkFromStringCase("2001::3", 86, "2001::3", 86, false); + checkFromStringCase("host:", 80, "host", 80, false); + } + + @Test + void testFromStringBadDefaultPort() { + // Well-formed strings with bad default ports. 
+ checkFromStringCase("gmail.com:81", -1, "gmail.com", 81, true); + checkFromStringCase("192.0.2.2:83", -1, "192.0.2.2", 83, true); + checkFromStringCase("[2001::2]:85", -1, "2001::2", 85, true); + checkFromStringCase("goo.gl:65535", 65536, "goo.gl", 65535, true); + // No port, bad default. + checkFromStringCase("google.com", -1, "google.com", -1, false); + checkFromStringCase("192.0.2.1", 65536, "192.0.2.1", -1, false); + checkFromStringCase("[2001::1]", -1, "2001::1", -1, false); + checkFromStringCase("2001::3", 65536, "2001::3", -1, false); + } + + @Test + void testFromStringUnusedDefaultPort() { + // Default port, but unused. + checkFromStringCase("gmail.com:81", 77, "gmail.com", 81, true); + checkFromStringCase("192.0.2.2:83", 77, "192.0.2.2", 83, true); + checkFromStringCase("[2001::2]:85", 77, "2001::2", 85, true); + } + + @Test + void testFromStringBadPort() { + // Out-of-range ports. + checkFromStringCase("google.com:65536", 1, null, 99, false); + checkFromStringCase("google.com:9999999999", 1, null, 99, false); + // Invalid port parts. + checkFromStringCase("google.com:port", 1, null, 99, false); + checkFromStringCase("google.com:-25", 1, null, 99, false); + checkFromStringCase("google.com:+25", 1, null, 99, false); + checkFromStringCase("google.com:25 ", 1, null, 99, false); + checkFromStringCase("google.com:25\t", 1, null, 99, false); + checkFromStringCase("google.com:0x25 ", 1, null, 99, false); + } + + @Test + void testFromStringUnparseableNonsense() { + // Some nonsense that causes parse failures. + checkFromStringCase("[goo.gl]", 1, null, 99, false); + checkFromStringCase("[goo.gl]:80", 1, null, 99, false); + checkFromStringCase("[", 1, null, 99, false); + checkFromStringCase("[]:", 1, null, 99, false); + checkFromStringCase("[]:80", 1, null, 99, false); + checkFromStringCase("[]bad", 1, null, 99, false); + } + + @Test + void testFromStringParseableNonsense() { + // Examples of nonsense that gets through. 
+ checkFromStringCase("[[:]]", 86, "[:]", 86, false); + checkFromStringCase("x:y:z", 87, "x:y:z", 87, false); + checkFromStringCase("", 88, "", 88, false); + checkFromStringCase(":", 99, "", 99, false); + checkFromStringCase(":123", -1, "", 123, true); + checkFromStringCase("\nOMG\t", 89, "\nOMG\t", 89, false); + } + + @Test + void shouldCreateHostAndPortFromParts() { + HostAndPort hp = HostAndPort.of("gmail.com", 81); + assertThat(hp.getHostText()).isEqualTo("gmail.com"); + assertThat(hp.hasPort()).isTrue(); + assertThat(hp.getPort()).isEqualTo(81); + + try { + HostAndPort.of("gmail.com:80", 81); + fail("Expected IllegalArgumentException"); + } catch (IllegalArgumentException expected) { + } + + try { + HostAndPort.of("gmail.com", -1); + fail("Expected IllegalArgumentException"); + } catch (IllegalArgumentException expected) { + } + } + + @Test + void shouldCompare() { + HostAndPort hp1 = HostAndPort.parse("foo::123"); + HostAndPort hp2 = HostAndPort.parse("foo::123"); + HostAndPort hp3 = HostAndPort.parse("[foo::124]"); + HostAndPort hp4 = HostAndPort.of("[foo::123]", 80); + HostAndPort hp5 = HostAndPort.parse("[foo::123]:80"); + assertThat(hp1.hashCode()).isEqualTo(hp1.hashCode()); + assertThat(hp2.hashCode()).isEqualTo(hp1.hashCode()); + assertThat(hp3.hashCode()).isNotEqualTo(hp1.hashCode()); + assertThat(hp3.hashCode()).isNotEqualTo(hp4.hashCode()); + assertThat(hp5.hashCode()).isNotEqualTo(hp4.hashCode()); + + assertThat(hp1.equals(hp1)).isTrue(); + assertThat(hp1).isEqualTo(hp1); + assertThat(hp1.equals(hp2)).isTrue(); + assertThat(hp1.equals(hp3)).isFalse(); + assertThat(hp1).isNotEqualTo(hp3); + assertThat(hp3.equals(hp4)).isFalse(); + assertThat(hp4.equals(hp5)).isFalse(); + assertThat(hp1.equals(null)).isFalse(); + } + + @Test + void shouldApplyCompatibilityParsing() { + + checkFromCompatCase("affe::123:6379", "affe::123", 6379); + checkFromCompatCase("1:2:3:4:5:6:7:8:6379", "1:2:3:4:5:6:7:8", 6379); + checkFromCompatCase("[affe::123]:6379", "affe::123", 6379); + checkFromCompatCase("127.0.0.1:6379", "127.0.0.1", 6379); + } + + private static void checkFromStringCase(String hpString, int defaultPort, String expectHost, int expectPort, + boolean expectHasExplicitPort) { + HostAndPort hp; + try { + hp = HostAndPort.parse(hpString); + } catch (IllegalArgumentException e) { + // Make sure we expected this. + assertThat(expectHost).isNull(); + return; + } + assertThat(expectHost).isNotNull(); + + // Apply withDefaultPort(), yielding hp2. + final boolean badDefaultPort = (defaultPort < 0 || defaultPort > 65535); + + // Check the pre-withDefaultPort() instance. 
+ if (expectHasExplicitPort) { + assertThat(hp.hasPort()).isTrue(); + assertThat(hp.getPort()).isEqualTo(expectPort); + } else { + assertThat(hp.hasPort()).isFalse(); + try { + hp.getPort(); + fail("Expected IllegalStateException"); + } catch (IllegalStateException expected) { + } + } + assertThat(hp.getHostText()).isEqualTo(expectHost); + } + + private static void checkFromCompatCase(String hpString, String expectHost, int expectPort) { + + HostAndPort hostAndPort = HostAndPort.parseCompat(hpString); + assertThat(hostAndPort.getHostText()).isEqualTo(expectHost); + assertThat(hostAndPort.getPort()).isEqualTo(expectPort); + + } +} diff --git a/src/test/java/io/lettuce/core/internal/TimeoutProviderUnitTests.java b/src/test/java/io/lettuce/core/internal/TimeoutProviderUnitTests.java new file mode 100644 index 0000000000..7e6fc5ba03 --- /dev/null +++ b/src/test/java/io/lettuce/core/internal/TimeoutProviderUnitTests.java @@ -0,0 +1,68 @@ +/* + * Copyright 2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.internal; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.mockito.Mockito.mock; + +import java.time.Duration; +import java.util.concurrent.TimeUnit; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.TimeoutOptions; +import io.lettuce.core.protocol.RedisCommand; + +/** + * Unit tests for {@link TimeoutProvider}. + * + * @author Mark Paluch + */ +class TimeoutProviderUnitTests { + + @Test + void shouldReturnConfiguredTimeout() { + + TimeoutProvider provider = new TimeoutProvider(() -> TimeoutOptions.enabled(Duration.ofSeconds(10)), + () -> TimeUnit.SECONDS.toNanos(100)); + + long timeout = provider.getTimeoutNs(mock(RedisCommand.class)); + + assertThat(timeout).isEqualTo(Duration.ofSeconds(10).toNanos()); + } + + @Test + void shouldReturnDefaultTimeout() { + + TimeoutProvider provider = new TimeoutProvider(() -> TimeoutOptions.enabled(Duration.ofSeconds(-1)), + () -> TimeUnit.SECONDS.toNanos(100)); + + long timeout = provider.getTimeoutNs(mock(RedisCommand.class)); + + assertThat(timeout).isEqualTo(Duration.ofSeconds(100).toNanos()); + } + + @Test + void shouldReturnNoTimeout() { + + TimeoutProvider provider = new TimeoutProvider(() -> TimeoutOptions.enabled(Duration.ZERO), + () -> TimeUnit.SECONDS.toNanos(100)); + + long timeout = provider.getTimeoutNs(mock(RedisCommand.class)); + + assertThat(timeout).isEqualTo(0); + } +} diff --git a/src/test/java/io/lettuce/core/masterreplica/ConnectionsUnitTests.java b/src/test/java/io/lettuce/core/masterreplica/ConnectionsUnitTests.java new file mode 100644 index 0000000000..e257c55d75 --- /dev/null +++ b/src/test/java/io/lettuce/core/masterreplica/ConnectionsUnitTests.java @@ -0,0 +1,65 @@ +/* + * Copyright 2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
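// Illustrative sketch (not part of this change set): as the TimeoutProvider tests above show,
// per-command timeouts come from TimeoutOptions, and non-positive values fall back to the
// default timeout supplier. Application-level wiring assumes ClientOptions.Builder#timeoutOptions(...);
// verify against the targeted Lettuce version.
import java.time.Duration;

import io.lettuce.core.ClientOptions;
import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisURI;
import io.lettuce.core.TimeoutOptions;

class TimeoutOptionsSketch {

    static RedisClient createClient() {
        RedisClient client = RedisClient.create(RedisURI.create("redis://localhost"));

        client.setOptions(ClientOptions.builder()
                .timeoutOptions(TimeoutOptions.enabled(Duration.ofSeconds(10))) // per-command timeout
                .build());

        return client;
    }
}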
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.masterreplica; + +import static org.mockito.Mockito.verify; +import static org.mockito.Mockito.verifyZeroInteractions; +import static org.mockito.Mockito.when; + +import java.util.Collections; +import java.util.concurrent.CompletableFuture; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; +import org.mockito.Mock; +import org.mockito.junit.jupiter.MockitoExtension; +import org.mockito.junit.jupiter.MockitoSettings; +import org.mockito.quality.Strictness; + +import io.lettuce.core.masterreplica.Connections; +import reactor.util.function.Tuples; +import io.lettuce.core.RedisURI; +import io.lettuce.core.api.StatefulRedisConnection; + +/** + * @author Mark Paluch + */ +@ExtendWith(MockitoExtension.class) +@MockitoSettings(strictness = Strictness.LENIENT) +class ConnectionsUnitTests { + + @Mock + private StatefulRedisConnection connection1; + + @BeforeEach + void before() { + when(connection1.closeAsync()).thenReturn(CompletableFuture.completedFuture(null)); + } + + @Test + void shouldCloseConnectionCompletingAfterCloseSignal() { + + Connections connections = new Connections(5, Collections.emptyList()); + connections.closeAsync(); + + verifyZeroInteractions(connection1); + + connections.onAccept(Tuples.of(RedisURI.create("localhost", 6379), connection1)); + + verify(connection1).closeAsync(); + } +} diff --git a/src/test/java/io/lettuce/core/masterreplica/CustomCommandIntegrationTests.java b/src/test/java/io/lettuce/core/masterreplica/CustomCommandIntegrationTests.java new file mode 100644 index 0000000000..9f3eb462d7 --- /dev/null +++ b/src/test/java/io/lettuce/core/masterreplica/CustomCommandIntegrationTests.java @@ -0,0 +1,175 @@ +/* + * Copyright 2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.masterreplica; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; +import static org.junit.jupiter.api.Assumptions.assumeTrue; + +import java.util.Arrays; + +import javax.inject.Inject; + +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.RedisBug; +import io.lettuce.core.*; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.masterslave.MasterSlave; +import io.lettuce.core.output.StatusOutput; +import io.lettuce.core.protocol.*; +import io.lettuce.test.LettuceExtension; +import io.lettuce.test.TestFutures; + +/** + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +class CustomCommandIntegrationTests extends TestSupport { + + private final RedisClient redisClient; + + private StatefulRedisConnection connection; + private RedisCommands redis; + + @Inject + CustomCommandIntegrationTests(RedisClient redisClient) { + this.redisClient = redisClient; + } + + @BeforeEach + void before() { + + RedisURI uri = RedisURI.create("redis-sentinel://127.0.0.1:26379?sentinelMasterId=mymaster&timeout=5s"); + connection = MasterReplica.connect(redisClient, StringCodec.UTF8, uri); + redis = connection.sync(); + redis.flushall(); + } + + @AfterEach + void after() { + connection.close(); + } + + @Test + void dispatchSet() { + + String response = redis.dispatch(MyCommands.SET, new StatusOutput<>(StringCodec.UTF8), + new CommandArgs<>(StringCodec.UTF8).addKey(key).addValue(value)); + + assertThat(response).isEqualTo("OK"); + } + + @Test + void dispatchWithoutArgs() { + + String response = redis.dispatch(MyCommands.INFO, new StatusOutput<>(StringCodec.UTF8)); + + assertThat(response).contains("connected_clients"); + } + + @Test + void dispatchShouldFailForWrongDataType() { + + redis.hset(key, key, value); + assertThatThrownBy(() -> redis.dispatch(CommandType.GET, new StatusOutput<>(StringCodec.UTF8), + new CommandArgs<>(StringCodec.UTF8).addKey(key))).isInstanceOf(RedisCommandExecutionException.class); + } + + @Test + void dispatchTransactions() { + + redis.multi(); + String response = redis.dispatch(CommandType.SET, new StatusOutput<>(StringCodec.UTF8), + new CommandArgs<>(StringCodec.UTF8).addKey(key).addValue(value)); + + TransactionResult exec = redis.exec(); + + assertThat(response).isNull(); + assertThat(exec).hasSize(1).contains("OK"); + } + + @Test + void masterReplicaAsyncPing() { + + RedisCommand command = new Command<>(MyCommands.PING, new StatusOutput<>(StringCodec.UTF8), + null); + + AsyncCommand async = new AsyncCommand<>(command); + getStandaloneConnection().dispatch(async); + + assertThat(TestFutures.getOrTimeout(async.toCompletableFuture())).isEqualTo("PONG"); + } + + @Test + void masterReplicaAsyncBatchPing() { + + RedisCommand command1 = new Command<>(CommandType.SET, new StatusOutput<>(StringCodec.UTF8), + new CommandArgs<>(StringCodec.UTF8).addKey("key1").addValue("value")); + + RedisCommand command2 = new Command<>(CommandType.GET, new StatusOutput<>(StringCodec.UTF8), + new CommandArgs<>(StringCodec.UTF8).addKey("key1")); + + RedisCommand command3 = new Command<>(CommandType.SET, new StatusOutput<>(StringCodec.UTF8), + new CommandArgs<>(StringCodec.UTF8).addKey("other-key1").addValue("value")); + + AsyncCommand async1 = 
new AsyncCommand<>(command1); + AsyncCommand async2 = new AsyncCommand<>(command2); + AsyncCommand async3 = new AsyncCommand<>(command3); + getStandaloneConnection().dispatch(Arrays.asList(async1, async2, async3)); + + assertThat(TestFutures.getOrTimeout(async1.toCompletableFuture())).isEqualTo("OK"); + assertThat(TestFutures.getOrTimeout(async2.toCompletableFuture())).isEqualTo("value"); + assertThat(TestFutures.getOrTimeout(async3.toCompletableFuture())).isEqualTo("OK"); + } + + @Test + void masterReplicaFireAndForget() { + + RedisCommand command = new Command<>(MyCommands.PING, new StatusOutput<>(StringCodec.UTF8), + null); + getStandaloneConnection().dispatch(command); + assertThat(command.isCancelled()).isFalse(); + + } + + private StatefulRedisConnection getStandaloneConnection() { + + assumeTrue(redis.getStatefulConnection() instanceof StatefulRedisConnection); + return redis.getStatefulConnection(); + } + + public enum MyCommands implements ProtocolKeyword { + PING, SET, INFO; + + private final byte name[]; + + MyCommands() { + // cache the bytes for the command name. Reduces memory and cpu pressure when using commands. + name = name().getBytes(); + } + + @Override + public byte[] getBytes() { + return name; + } + } +} diff --git a/src/test/java/io/lettuce/core/masterreplica/MasterReplicaChannelWriterUnitTests.java b/src/test/java/io/lettuce/core/masterreplica/MasterReplicaChannelWriterUnitTests.java new file mode 100644 index 0000000000..dc20eea32c --- /dev/null +++ b/src/test/java/io/lettuce/core/masterreplica/MasterReplicaChannelWriterUnitTests.java @@ -0,0 +1,190 @@ +/* + * Copyright 2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.masterreplica; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.mockito.ArgumentMatchers.any; +import static org.mockito.Mockito.times; +import static org.mockito.Mockito.verify; +import static org.mockito.Mockito.when; + +import java.util.Arrays; +import java.util.Collections; +import java.util.List; +import java.util.concurrent.CompletableFuture; + +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; +import org.mockito.Mock; +import org.mockito.junit.jupiter.MockitoExtension; +import org.mockito.junit.jupiter.MockitoSettings; +import org.mockito.quality.Strictness; + +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.output.StatusOutput; +import io.lettuce.core.protocol.Command; +import io.lettuce.core.protocol.CommandType; +import io.lettuce.core.protocol.RedisCommand; +import io.lettuce.core.resource.ClientResources; + +/** + * @author Mark Paluch + */ +@ExtendWith(MockitoExtension.class) +@MockitoSettings(strictness = Strictness.LENIENT) +class MasterReplicaChannelWriterUnitTests { + + @Mock + private MasterReplicaConnectionProvider connectionProvider; + + @Mock + private ClientResources clientResources; + + @Mock + private StatefulRedisConnection connection; + + @Test + void shouldReturnIntentForWriteCommand() { + + RedisCommand set = new Command<>(CommandType.SET, null); + RedisCommand mset = new Command<>(CommandType.MSET, null); + + assertThat(MasterReplicaChannelWriter.getIntent(Arrays.asList(set, mset))) + .isEqualTo(MasterReplicaConnectionProvider.Intent.WRITE); + + assertThat(MasterReplicaChannelWriter.getIntent(Collections.singletonList(set))) + .isEqualTo(MasterReplicaConnectionProvider.Intent.WRITE); + } + + @Test + void shouldReturnDefaultIntentForNoCommands() { + + assertThat(MasterReplicaChannelWriter.getIntent(Collections.emptyList())) + .isEqualTo(MasterReplicaConnectionProvider.Intent.WRITE); + } + + @Test + void shouldReturnIntentForReadCommand() { + + RedisCommand get = new Command<>(CommandType.GET, null); + RedisCommand mget = new Command<>(CommandType.MGET, null); + + assertThat(MasterReplicaChannelWriter.getIntent(Arrays.asList(get, mget))) + .isEqualTo(MasterReplicaConnectionProvider.Intent.READ); + + assertThat(MasterReplicaChannelWriter.getIntent(Collections.singletonList(get))) + .isEqualTo(MasterReplicaConnectionProvider.Intent.READ); + } + + @Test + void shouldReturnIntentForMixedCommands() { + + RedisCommand set = new Command<>(CommandType.SET, null); + RedisCommand mget = new Command<>(CommandType.MGET, null); + + assertThat(MasterReplicaChannelWriter.getIntent(Arrays.asList(set, mget))) + .isEqualTo(MasterReplicaConnectionProvider.Intent.WRITE); + + assertThat(MasterReplicaChannelWriter.getIntent(Collections.singletonList(set))) + .isEqualTo(MasterReplicaConnectionProvider.Intent.WRITE); + } + + @Test + void shouldBindTransactionsToMaster() { + + MasterReplicaChannelWriter writer = new MasterReplicaChannelWriter(connectionProvider, clientResources); + + when(connectionProvider.getConnectionAsync(any(MasterReplicaConnectionProvider.Intent.class))) + .thenReturn(CompletableFuture.completedFuture(connection)); + + writer.write(mockCommand(CommandType.MULTI)); + writer.write(mockCommand(CommandType.GET)); + writer.write(mockCommand(CommandType.EXEC)); + + verify(connectionProvider, times(3)).getConnectionAsync(MasterReplicaConnectionProvider.Intent.WRITE); + } + + @Test + void 
shouldBindTransactionsToMasterInBatch() { + + MasterReplicaChannelWriter writer = new MasterReplicaChannelWriter(connectionProvider, clientResources); + + when(connectionProvider.getConnectionAsync(any(MasterReplicaConnectionProvider.Intent.class))) + .thenReturn(CompletableFuture.completedFuture(connection)); + + List> commands = Arrays.asList(mockCommand(CommandType.MULTI), + mockCommand(CommandType.GET), mockCommand(CommandType.EXEC)); + + writer.write(commands); + + verify(connectionProvider).getConnectionAsync(MasterReplicaConnectionProvider.Intent.WRITE); + } + + @Test + void shouldDeriveIntentFromCommandTypeAfterTransaction() { + + MasterReplicaChannelWriter writer = new MasterReplicaChannelWriter(connectionProvider, clientResources); + + when(connectionProvider.getConnectionAsync(any(MasterReplicaConnectionProvider.Intent.class))) + .thenReturn(CompletableFuture.completedFuture(connection)); + + writer.write(mockCommand(CommandType.MULTI)); + writer.write(mockCommand(CommandType.EXEC)); + writer.write(mockCommand(CommandType.GET)); + + verify(connectionProvider, times(2)).getConnectionAsync(MasterReplicaConnectionProvider.Intent.WRITE); + verify(connectionProvider).getConnectionAsync(MasterReplicaConnectionProvider.Intent.READ); + } + + @Test + void shouldDeriveIntentFromCommandTypeAfterDiscardedTransaction() { + + MasterReplicaChannelWriter writer = new MasterReplicaChannelWriter(connectionProvider, clientResources); + + when(connectionProvider.getConnectionAsync(any(MasterReplicaConnectionProvider.Intent.class))) + .thenReturn(CompletableFuture.completedFuture(connection)); + + writer.write(mockCommand(CommandType.MULTI)); + writer.write(mockCommand(CommandType.DISCARD)); + writer.write(mockCommand(CommandType.GET)); + + verify(connectionProvider, times(2)).getConnectionAsync(MasterReplicaConnectionProvider.Intent.WRITE); + verify(connectionProvider).getConnectionAsync(MasterReplicaConnectionProvider.Intent.READ); + } + + @Test + void shouldDeriveIntentFromCommandBatchTypeAfterDiscardedTransaction() { + + MasterReplicaChannelWriter writer = new MasterReplicaChannelWriter(connectionProvider, clientResources); + + when(connectionProvider.getConnectionAsync(any(MasterReplicaConnectionProvider.Intent.class))) + .thenReturn(CompletableFuture.completedFuture(connection)); + + List> commands = Arrays.asList(mockCommand(CommandType.MULTI), + mockCommand(CommandType.EXEC)); + + writer.write(commands); + writer.write(Collections.singletonList(mockCommand(CommandType.GET))); + + verify(connectionProvider).getConnectionAsync(MasterReplicaConnectionProvider.Intent.WRITE); + verify(connectionProvider).getConnectionAsync(MasterReplicaConnectionProvider.Intent.READ); + } + + private static Command mockCommand(CommandType multi) { + return new Command<>(multi, new StatusOutput<>(StringCodec.UTF8)); + } +} diff --git a/src/test/java/io/lettuce/core/masterreplica/MasterReplicaConnectionProviderUnitTests.java b/src/test/java/io/lettuce/core/masterreplica/MasterReplicaConnectionProviderUnitTests.java new file mode 100644 index 0000000000..84e9651c44 --- /dev/null +++ b/src/test/java/io/lettuce/core/masterreplica/MasterReplicaConnectionProviderUnitTests.java @@ -0,0 +1,91 @@ +/* + * Copyright 2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
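// Illustrative sketch (not part of this change set): the channel-writer tests above verify that
// read-only commands are routed according to ReadFrom while writes and MULTI/EXEC blocks stick
// to the master. From an application perspective the routing is driven by setReadFrom(...).
// The URI below is a placeholder.
import io.lettuce.core.ReadFrom;
import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisURI;
import io.lettuce.core.codec.StringCodec;
import io.lettuce.core.masterreplica.MasterReplica;
import io.lettuce.core.masterreplica.StatefulRedisMasterReplicaConnection;

class ReadFromRoutingSketch {

    static void demo(RedisClient client) {
        StatefulRedisMasterReplicaConnection<String, String> connection = MasterReplica.connect(client,
                StringCodec.UTF8, RedisURI.create("redis://localhost"));

        connection.setReadFrom(ReadFrom.REPLICA); // read commands go to a replica

        connection.sync().set("key", "value");    // writes always go to the master
        connection.sync().get("key");             // served by a replica under ReadFrom.REPLICA

        connection.close();
    }
}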
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.masterreplica; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.mockito.Matchers.any; +import static org.mockito.Matchers.eq; +import static org.mockito.Mockito.verify; +import static org.mockito.Mockito.when; + +import java.util.Arrays; +import java.util.Collections; +import java.util.concurrent.CompletableFuture; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; +import org.mockito.Mock; +import org.mockito.junit.jupiter.MockitoExtension; +import org.mockito.junit.jupiter.MockitoSettings; +import org.mockito.quality.Strictness; + +import io.lettuce.core.ConnectionFuture; +import io.lettuce.core.RedisChannelHandler; +import io.lettuce.core.RedisClient; +import io.lettuce.core.RedisURI; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.models.role.RedisInstance; + +/** + * @author Mark Paluch + */ +@ExtendWith(MockitoExtension.class) +@MockitoSettings(strictness = Strictness.LENIENT) +class MasterReplicaConnectionProviderUnitTests { + + private MasterReplicaConnectionProvider sut; + + @Mock + RedisClient clientMock; + + @Mock(extraInterfaces = StatefulRedisConnection.class) + RedisChannelHandler channelHandlerMock; + + private StatefulRedisConnection nodeConnectionMock; + + @Mock + RedisCommands commandsMock; + + @BeforeEach + void before() { + + nodeConnectionMock = (StatefulRedisConnection) channelHandlerMock; + sut = new MasterReplicaConnectionProvider<>(clientMock, StringCodec.UTF8, RedisURI.create("localhost", 1), + Collections.emptyMap()); + sut.setKnownNodes(Arrays.asList( + new RedisMasterReplicaNode("localhost", 1, RedisURI.create("localhost", 1), + RedisInstance.Role.MASTER))); + } + + @Test + void shouldCloseConnections() { + + when(channelHandlerMock.closeAsync()).thenReturn(CompletableFuture.completedFuture(null)); + + when(clientMock.connectAsync(eq(StringCodec.UTF8), any())) + .thenReturn(ConnectionFuture.completed(null, nodeConnectionMock)); + + StatefulRedisConnection connection = sut.getConnection(MasterReplicaConnectionProvider.Intent.READ); + assertThat(connection).isNotNull(); + + sut.close(); + + verify(channelHandlerMock).closeAsync(); + } +} diff --git a/src/test/java/io/lettuce/core/masterreplica/MasterReplicaSentinelSslIntegrationTests.java b/src/test/java/io/lettuce/core/masterreplica/MasterReplicaSentinelSslIntegrationTests.java new file mode 100644 index 0000000000..c739db6d3e --- /dev/null +++ b/src/test/java/io/lettuce/core/masterreplica/MasterReplicaSentinelSslIntegrationTests.java @@ -0,0 +1,75 @@ +/* + * Copyright 2019-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.masterreplica; + +import javax.inject.Inject; + +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.RedisBug; +import io.lettuce.core.ReadFrom; +import io.lettuce.core.RedisClient; +import io.lettuce.core.RedisURI; +import io.lettuce.core.TestSupport; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.internal.HostAndPort; +import io.lettuce.core.resource.ClientResources; +import io.lettuce.core.resource.DnsResolver; +import io.lettuce.core.resource.MappingSocketAddressResolver; +import io.lettuce.test.LettuceExtension; +import io.lettuce.test.resource.FastShutdown; +import io.lettuce.test.settings.TestSettings; + +/** + * Integration test for Master/Replica using Redis Sentinel over SSL. + * + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +class MasterReplicaSentinelSslIntegrationTests extends TestSupport { + + private final ClientResources clientResources; + + @Inject + MasterReplicaSentinelSslIntegrationTests(ClientResources clientResources) { + this.clientResources = clientResources.mutate() + .socketAddressResolver(MappingSocketAddressResolver.create(DnsResolver.jvmDefault(), hostAndPort -> { + + return HostAndPort.of(hostAndPort.getHostText(), hostAndPort.getPort() + 443); + })).build(); + } + + @Test + void testMasterReplicaSentinelBasic() { + + RedisClient client = RedisClient.create(clientResources); + RedisURI redisURI = RedisURI.create("rediss-sentinel://" + TestSettings.host() + ":26379?sentinelMasterId=mymaster"); + redisURI.setVerifyPeer(false); + StatefulRedisMasterReplicaConnection connection = MasterReplica.connect(client, StringCodec.UTF8, + redisURI); + + connection.setReadFrom(ReadFrom.REPLICA); + + connection.sync().set(key, value); + connection.sync().set(key, value); + connection.sync().get(key); + + connection.close(); + + FastShutdown.shutdown(client); + } +} diff --git a/src/test/java/io/lettuce/core/masterreplica/MasterReplicaTest.java b/src/test/java/io/lettuce/core/masterreplica/MasterReplicaTest.java new file mode 100644 index 0000000000..8bc24dab43 --- /dev/null +++ b/src/test/java/io/lettuce/core/masterreplica/MasterReplicaTest.java @@ -0,0 +1,190 @@ +/* + * Copyright 2019-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.masterreplica; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; +import static org.junit.jupiter.api.Assumptions.assumeTrue; + +import java.util.Collections; +import java.util.List; +import java.util.regex.Matcher; +import java.util.regex.Pattern; + +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; + +import io.lettuce.core.AbstractRedisClientTest; +import io.lettuce.core.ReadFrom; +import io.lettuce.core.RedisException; +import io.lettuce.core.RedisURI; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.models.role.RedisInstance; +import io.lettuce.core.models.role.RedisNodeDescription; +import io.lettuce.core.models.role.RoleParser; +import io.lettuce.test.WithPassword; +import io.lettuce.test.settings.TestSettings; + +/** + * @author Mark Paluch + */ +class MasterReplicaTest extends AbstractRedisClientTest { + + private RedisURI masterURI = RedisURI.Builder.redis(host, TestSettings.port(3)).withPassword(passwd) + .withClientName("my-client").withDatabase(5).build(); + + private StatefulRedisMasterReplicaConnection connection; + + private RedisURI master; + private RedisURI replica; + + private RedisCommands connection1; + private RedisCommands connection2; + + @BeforeEach + void before() throws Exception { + + RedisURI node1 = RedisURI.Builder.redis(host, TestSettings.port(3)).withDatabase(2).build(); + RedisURI node2 = RedisURI.Builder.redis(host, TestSettings.port(4)).withDatabase(2).build(); + + connection1 = client.connect(node1).sync(); + connection2 = client.connect(node2).sync(); + + RedisInstance node1Instance = RoleParser.parse(this.connection1.role()); + RedisInstance node2Instance = RoleParser.parse(this.connection2.role()); + + if (node1Instance.getRole() == RedisInstance.Role.MASTER && node2Instance.getRole() == RedisInstance.Role.SLAVE) { + master = node1; + replica = node2; + } else if (node2Instance.getRole() == RedisInstance.Role.MASTER + && node1Instance.getRole() == RedisInstance.Role.SLAVE) { + master = node2; + replica = node1; + } else { + assumeTrue(false, + String.format("Cannot run the test because I don't have a distinct master and replica but %s and %s", + node1Instance, node2Instance)); + } + + WithPassword.enableAuthentication(this.connection1); + this.connection1.auth(passwd); + this.connection1.configSet("masterauth", passwd); + + WithPassword.enableAuthentication(this.connection2); + this.connection2.auth(passwd); + this.connection2.configSet("masterauth", passwd); + + connection = MasterReplica.connect(client, StringCodec.UTF8, masterURI); + connection.setReadFrom(ReadFrom.REPLICA); + } + + @AfterEach + void after() { + + if (connection1 != null) { + WithPassword.disableAuthentication(connection1); + connection1.configRewrite(); + connection1.getStatefulConnection().close(); + } + + if (connection2 != null) { + WithPassword.disableAuthentication(connection2); + connection2.configRewrite(); + connection2.getStatefulConnection().close(); + } + + if (connection != null) { + connection.close(); + } + } + + @Test + void testMasterReplicaReadFromMaster() { + + connection.setReadFrom(ReadFrom.MASTER); + String server = connection.sync().info("server"); + + Pattern pattern = Pattern.compile("tcp_port:(\\d+)"); + Matcher matcher = pattern.matcher(server); + + assertThat(matcher.find()).isTrue(); + 
assertThat(matcher.group(1)).isEqualTo("" + master.getPort()); + } + + @Test + void testMasterReplicaReadFromReplica() { + + String server = connection.sync().info("server"); + + Pattern pattern = Pattern.compile("tcp_port:(\\d+)"); + Matcher matcher = pattern.matcher(server); + + assertThat(matcher.find()).isTrue(); + assertThat(matcher.group(1)).isEqualTo("" + replica.getPort()); + assertThat(connection.getReadFrom()).isEqualTo(ReadFrom.REPLICA); + } + + @Test + void testMasterReplicaReadWrite() { + + RedisCommands redisCommands = connection.sync(); + redisCommands.set(key, value); + redisCommands.waitForReplication(1, 100); + + assertThat(redisCommands.get(key)).isEqualTo(value); + } + + @Test + void testConnectToReplica() { + + connection.close(); + + RedisURI replicaUri = RedisURI.Builder.redis(host, TestSettings.port(4)).withPassword(passwd).build(); + connection = MasterReplica.connect(client, StringCodec.UTF8, replicaUri); + + RedisCommands sync = connection.sync(); + sync.set(key, value); + } + + @Test + void noReplicaForRead() { + + connection.setReadFrom(new ReadFrom() { + @Override + public List select(Nodes nodes) { + return Collections.emptyList(); + } + }); + + assertThatThrownBy(() -> replicaCall(connection)).isInstanceOf(RedisException.class); + } + + @Test + void masterReplicaConnectionShouldSetClientName() { + + assertThat(connection.sync().clientGetname()).isEqualTo(masterURI.getClientName()); + connection.sync().quit(); + assertThat(connection.sync().clientGetname()).isEqualTo(masterURI.getClientName()); + + connection.close(); + } + + static String replicaCall(StatefulRedisMasterReplicaConnection connection) { + return connection.sync().info("replication"); + } +} diff --git a/src/test/java/io/lettuce/core/masterreplica/MasterReplicaTopologyProviderUnitTests.java b/src/test/java/io/lettuce/core/masterreplica/MasterReplicaTopologyProviderUnitTests.java new file mode 100644 index 0000000000..860c6b6f36 --- /dev/null +++ b/src/test/java/io/lettuce/core/masterreplica/MasterReplicaTopologyProviderUnitTests.java @@ -0,0 +1,169 @@ +/* + * Copyright 2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.masterreplica; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; +import static org.mockito.Mockito.mock; + +import java.util.List; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.RedisURI; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.masterreplica.MasterReplicaTopologyProvider; +import io.lettuce.core.models.role.RedisInstance; +import io.lettuce.core.models.role.RedisNodeDescription; +import io.lettuce.test.settings.TestSettings; + +/** + * @author Mark Paluch + */ +class MasterReplicaTopologyProviderUnitTests { + + private StatefulRedisConnection connectionMock = mock(StatefulRedisConnection.class); + + private MasterReplicaTopologyProvider sut = new MasterReplicaTopologyProvider(connectionMock, + RedisURI.Builder.redis(TestSettings.host(), TestSettings.port()).build()); + + @Test + void shouldParseMaster() { + + String info = "# Replication\r\n" + "role:master\r\n" + "connected_slaves:1\r\n" + "master_repl_offset:56276\r\n" + + "repl_backlog_active:1\r\n"; + + List result = sut.getNodesFromInfo(info); + assertThat(result).hasSize(1); + + RedisNodeDescription redisNodeDescription = result.get(0); + + assertThat(redisNodeDescription.getRole()).isEqualTo(RedisInstance.Role.MASTER); + assertThat(redisNodeDescription.getUri().getHost()).isEqualTo(TestSettings.host()); + assertThat(redisNodeDescription.getUri().getPort()).isEqualTo(TestSettings.port()); + } + + @Test + void shouldParseMasterAndSlave() { + + String info = "# Replication\r\n" + "role:slave\r\n" + "connected_slaves:1\r\n" + "master_host:127.0.0.1\r\n" + + "master_port:1234\r\n" + "master_repl_offset:56276\r\n" + "repl_backlog_active:1\r\n"; + + List result = sut.getNodesFromInfo(info); + assertThat(result).hasSize(2); + + RedisNodeDescription replica = result.get(0); + assertThat(replica.getRole()).isEqualTo(RedisInstance.Role.SLAVE); + + RedisNodeDescription master = result.get(1); + assertThat(master.getRole()).isEqualTo(RedisInstance.Role.MASTER); + assertThat(master.getUri().getHost()).isEqualTo("127.0.0.1"); + } + + @Test + void shouldParseMasterHostname() { + + String info = "# Replication\r\n" + "role:slave\r\n" + "connected_slaves:1\r\n" + "master_host:my.Host-name.COM\r\n" + + "master_port:1234\r\n" + "master_repl_offset:56276\r\n" + "repl_backlog_active:1\r\n"; + + List result = sut.getNodesFromInfo(info); + assertThat(result).hasSize(2); + + RedisNodeDescription replica = result.get(0); + assertThat(replica.getRole()).isEqualTo(RedisInstance.Role.SLAVE); + + RedisNodeDescription master = result.get(1); + assertThat(master.getRole()).isEqualTo(RedisInstance.Role.MASTER); + assertThat(master.getUri().getHost()).isEqualTo("my.Host-name.COM"); + } + + @Test + void shouldParseIPv6MasterAddress() { + + String info = "# Replication\r\n" + "role:slave\r\n" + "connected_slaves:1\r\n" + "master_host:::20f8:1400:0:0\r\n" + + "master_port:1234\r\n" + "master_repl_offset:56276\r\n" + "repl_backlog_active:1\r\n"; + + List result = sut.getNodesFromInfo(info); + assertThat(result).hasSize(2); + + + RedisNodeDescription replica = result.get(0); + assertThat(replica.getRole()).isEqualTo(RedisInstance.Role.SLAVE); + + RedisNodeDescription master = result.get(1); + assertThat(master.getRole()).isEqualTo(RedisInstance.Role.MASTER); + assertThat(master.getUri().getHost()).isEqualTo("::20f8:1400:0:0"); + } + + @Test + void shouldFailWithoutRole() { + + String info = "# Replication\r\n" 
+ "connected_slaves:1\r\n" + "master_repl_offset:56276\r\n" + + "repl_backlog_active:1\r\n"; + + assertThatThrownBy(() -> sut.getNodesFromInfo(info)).isInstanceOf(IllegalStateException.class); + } + + @Test + void shouldFailWithInvalidRole() { + + String info = "# Replication\r\n" + "role:abc\r\n" + "master_repl_offset:56276\r\n" + "repl_backlog_active:1\r\n"; + + assertThatThrownBy(() -> sut.getNodesFromInfo(info)).isInstanceOf(IllegalStateException.class); + } + + @Test + void shouldParseSlaves() { + + String info = "# Replication\r\n" + "role:master\r\n" + + "slave0:ip=127.0.0.1,port=6483,state=online,offset=56276,lag=0\r\n" + + "slave1:ip=127.0.0.1,port=6484,state=online,offset=56276,lag=0\r\n" + "master_repl_offset:56276\r\n" + + "repl_backlog_active:1\r\n"; + + List result = sut.getNodesFromInfo(info); + assertThat(result).hasSize(3); + + RedisNodeDescription replica1 = result.get(1); + + assertThat(replica1.getRole()).isEqualTo(RedisInstance.Role.SLAVE); + assertThat(replica1.getUri().getHost()).isEqualTo("127.0.0.1"); + assertThat(replica1.getUri().getPort()).isEqualTo(6483); + + RedisNodeDescription replica2 = result.get(2); + + assertThat(replica2.getRole()).isEqualTo(RedisInstance.Role.SLAVE); + assertThat(replica2.getUri().getHost()).isEqualTo("127.0.0.1"); + assertThat(replica2.getUri().getPort()).isEqualTo(6484); + } + + @Test + void shouldParseIPv6SlaveAddress() { + + String info = "# Replication\r\n" + "role:master\r\n" + + "slave0:ip=::20f8:1400:0:0,port=6483,state=online,offset=56276,lag=0\r\n" + + "master_repl_offset:56276\r\n" + + "repl_backlog_active:1\r\n"; + + List result = sut.getNodesFromInfo(info); + assertThat(result).hasSize(2); + + RedisNodeDescription replica1 = result.get(1); + + assertThat(replica1.getRole()).isEqualTo(RedisInstance.Role.SLAVE); + assertThat(replica1.getUri().getHost()).isEqualTo("::20f8:1400:0:0"); + assertThat(replica1.getUri().getPort()).isEqualTo(6483); + } +} diff --git a/src/test/java/io/lettuce/core/masterreplica/MasterReplicaTopologyRefreshUnitTests.java b/src/test/java/io/lettuce/core/masterreplica/MasterReplicaTopologyRefreshUnitTests.java new file mode 100644 index 0000000000..78f0b2fa38 --- /dev/null +++ b/src/test/java/io/lettuce/core/masterreplica/MasterReplicaTopologyRefreshUnitTests.java @@ -0,0 +1,128 @@ +/* + * Copyright 2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.masterreplica; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.mockito.ArgumentMatchers.any; +import static org.mockito.Mockito.when; + +import java.time.Duration; +import java.util.Arrays; +import java.util.List; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.ScheduledThreadPoolExecutor; + +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; +import org.mockito.Mock; +import org.mockito.junit.jupiter.MockitoExtension; +import org.mockito.junit.jupiter.MockitoSettings; +import org.mockito.quality.Strictness; + +import io.lettuce.core.RedisURI; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.async.RedisAsyncCommands; +import io.lettuce.core.models.role.RedisInstance; +import io.lettuce.core.models.role.RedisNodeDescription; +import io.lettuce.core.protocol.RedisCommand; +import io.netty.util.concurrent.DefaultThreadFactory; + +/** + * @author Mark Paluch + */ +@ExtendWith(MockitoExtension.class) +@MockitoSettings(strictness = Strictness.LENIENT) +class MasterReplicaTopologyRefreshUnitTests { + + private static final RedisMasterReplicaNode MASTER = new RedisMasterReplicaNode("localhost", 1, new RedisURI(), + RedisInstance.Role.MASTER); + + private static final RedisMasterReplicaNode SLAVE = new RedisMasterReplicaNode("localhost", 2, new RedisURI(), + RedisInstance.Role.SLAVE); + + @Mock + NodeConnectionFactory connectionFactory; + + @Mock + StatefulRedisConnection connection; + + @Mock + RedisAsyncCommands async; + + private ScheduledThreadPoolExecutor executorService; + + private TopologyProvider provider; + + @BeforeEach + void before() { + + executorService = new ScheduledThreadPoolExecutor(1, new DefaultThreadFactory(getClass().getSimpleName(), true)); + when(connection.closeAsync()).thenReturn(CompletableFuture.completedFuture(null)); + when(connection.async()).thenReturn(async); + when(connection.dispatch(any(RedisCommand.class))).then(invocation -> { + + RedisCommand command = invocation.getArgument(0); + command.complete(); + + return null; + }); + + provider = () -> Arrays.asList(MASTER, SLAVE); + } + + @AfterEach + void tearDown() { + executorService.shutdown(); + } + + @Test + void shouldRetrieveTopology() { + + MasterReplicaTopologyRefresh refresh = new MasterReplicaTopologyRefresh(connectionFactory, executorService, provider); + + CompletableFuture> master = CompletableFuture.completedFuture(connection); + CompletableFuture> replica = CompletableFuture.completedFuture(connection); + when(connectionFactory.connectToNodeAsync(any(), any())).thenReturn((CompletableFuture) master, + (CompletableFuture) replica); + + RedisURI redisURI = new RedisURI(); + redisURI.setTimeout(Duration.ofMillis(1)); + + List nodes = refresh.getNodes(redisURI).block(); + + assertThat(nodes).hasSize(2); + } + + @Test + void shouldRetrieveTopologyWithFailedNode() { + + MasterReplicaTopologyRefresh refresh = new MasterReplicaTopologyRefresh(connectionFactory, executorService, provider); + + CompletableFuture> connected = CompletableFuture.completedFuture(connection); + CompletableFuture> pending = new CompletableFuture<>(); + when(connectionFactory.connectToNodeAsync(any(), any())).thenReturn((CompletableFuture) connected, + (CompletableFuture) pending); + + RedisURI redisURI = new RedisURI(); + redisURI.setTimeout(Duration.ofMillis(1)); + + List nodes = 
refresh.getNodes(redisURI).block(); + + assertThat(nodes).hasSize(1).containsOnly(MASTER); + } +} diff --git a/src/test/java/io/lettuce/core/masterreplica/MasterReplicaUtilsUnitTests.java b/src/test/java/io/lettuce/core/masterreplica/MasterReplicaUtilsUnitTests.java new file mode 100644 index 0000000000..5374181c09 --- /dev/null +++ b/src/test/java/io/lettuce/core/masterreplica/MasterReplicaUtilsUnitTests.java @@ -0,0 +1,101 @@ +/* + * Copyright 2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.masterreplica; + +import static org.assertj.core.api.AssertionsForInterfaceTypes.assertThat; + +import java.util.Arrays; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.RedisURI; +import io.lettuce.core.models.role.RedisInstance; + +/** + * @author Mark Paluch + */ +class MasterReplicaUtilsUnitTests { + + @Test + void isChangedShouldReturnFalse() { + + RedisMasterReplicaNode master = new RedisMasterReplicaNode("host", 1234, RedisURI.create("host", 111), + RedisInstance.Role.MASTER); + RedisMasterReplicaNode replica = new RedisMasterReplicaNode("host", 234, RedisURI.create("host", 234), + RedisInstance.Role.SLAVE); + + RedisMasterReplicaNode newmaster = new RedisMasterReplicaNode("host", 1234, RedisURI.create("host", 555), + RedisInstance.Role.MASTER); + RedisMasterReplicaNode newslave = new RedisMasterReplicaNode("host", 234, RedisURI.create("host", 666), + RedisInstance.Role.SLAVE); + + assertThat(MasterReplicaUtils.isChanged(Arrays.asList(master, replica), Arrays.asList(newmaster, newslave))).isFalse(); + assertThat(MasterReplicaUtils.isChanged(Arrays.asList(replica, master), Arrays.asList(newmaster, newslave))).isFalse(); + + assertThat(MasterReplicaUtils.isChanged(Arrays.asList(newmaster, newslave), Arrays.asList(master, replica))).isFalse(); + assertThat(MasterReplicaUtils.isChanged(Arrays.asList(newmaster, newslave), Arrays.asList(replica, master))).isFalse(); + } + + @Test + void isChangedShouldReturnTrueBecauseSlaveIsGone() { + + RedisMasterReplicaNode master = new RedisMasterReplicaNode("host", 1234, RedisURI.create("host", 111), + RedisInstance.Role.MASTER); + RedisMasterReplicaNode replica = new RedisMasterReplicaNode("host", 234, RedisURI.create("host", 234), + RedisInstance.Role.MASTER); + + RedisMasterReplicaNode newmaster = new RedisMasterReplicaNode("host", 1234, RedisURI.create("host", 111), + RedisInstance.Role.MASTER); + + assertThat(MasterReplicaUtils.isChanged(Arrays.asList(master, replica), Arrays.asList(newmaster))).isTrue(); + } + + @Test + void isChangedShouldReturnTrueBecauseHostWasMigrated() { + + RedisMasterReplicaNode master = new RedisMasterReplicaNode("host", 1234, RedisURI.create("host", 111), + RedisInstance.Role.MASTER); + RedisMasterReplicaNode replica = new RedisMasterReplicaNode("host", 234, RedisURI.create("host", 234), + RedisInstance.Role.SLAVE); + + RedisMasterReplicaNode newmaster = new RedisMasterReplicaNode("host", 1234, RedisURI.create("host", 555), + 
RedisInstance.Role.MASTER); + RedisMasterReplicaNode newslave = new RedisMasterReplicaNode("newhost", 234, RedisURI.create("newhost", 666), + RedisInstance.Role.SLAVE); + + assertThat(MasterReplicaUtils.isChanged(Arrays.asList(master, replica), Arrays.asList(newmaster, newslave))).isTrue(); + assertThat(MasterReplicaUtils.isChanged(Arrays.asList(replica, master), Arrays.asList(newmaster, newslave))).isTrue(); + assertThat(MasterReplicaUtils.isChanged(Arrays.asList(newmaster, newslave), Arrays.asList(master, replica))).isTrue(); + assertThat(MasterReplicaUtils.isChanged(Arrays.asList(newslave, newmaster), Arrays.asList(master, replica))).isTrue(); + } + + @Test + void isChangedShouldReturnTrueBecauseRolesSwitched() { + + RedisMasterReplicaNode master = new RedisMasterReplicaNode("host", 1234, RedisURI.create("host", 111), + RedisInstance.Role.MASTER); + RedisMasterReplicaNode replica = new RedisMasterReplicaNode("host", 234, RedisURI.create("host", 234), + RedisInstance.Role.MASTER); + + RedisMasterReplicaNode newslave = new RedisMasterReplicaNode("host", 1234, RedisURI.create("host", 111), + RedisInstance.Role.SLAVE); + RedisMasterReplicaNode newmaster = new RedisMasterReplicaNode("host", 234, RedisURI.create("host", 234), + RedisInstance.Role.MASTER); + + assertThat(MasterReplicaUtils.isChanged(Arrays.asList(master, replica), Arrays.asList(newmaster, newslave))).isTrue(); + assertThat(MasterReplicaUtils.isChanged(Arrays.asList(master, replica), Arrays.asList(newslave, newmaster))).isTrue(); + } +} diff --git a/src/test/java/io/lettuce/core/masterreplica/SentinelTopologyRefreshUnitTests.java b/src/test/java/io/lettuce/core/masterreplica/SentinelTopologyRefreshUnitTests.java new file mode 100644 index 0000000000..01851fc49a --- /dev/null +++ b/src/test/java/io/lettuce/core/masterreplica/SentinelTopologyRefreshUnitTests.java @@ -0,0 +1,401 @@ +/* + * Copyright 2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.masterreplica; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.mockito.Matchers.any; +import static org.mockito.Matchers.anyLong; +import static org.mockito.Matchers.eq; +import static org.mockito.Mockito.*; + +import java.util.Arrays; +import java.util.Collections; +import java.util.Map; +import java.util.concurrent.CompletableFuture; + +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; +import org.mockito.ArgumentCaptor; +import org.mockito.Captor; +import org.mockito.Mock; +import org.mockito.junit.jupiter.MockitoExtension; +import org.mockito.junit.jupiter.MockitoSettings; +import org.mockito.quality.Strictness; +import org.springframework.test.util.ReflectionTestUtils; + +import io.lettuce.core.ConnectionFuture; +import io.lettuce.core.RedisClient; +import io.lettuce.core.RedisConnectionException; +import io.lettuce.core.RedisURI; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.internal.Futures; +import io.lettuce.core.masterreplica.SentinelTopologyRefresh; +import io.lettuce.core.protocol.AsyncCommand; +import io.lettuce.core.protocol.Command; +import io.lettuce.core.protocol.CommandType; +import io.lettuce.core.pubsub.RedisPubSubAdapter; +import io.lettuce.core.pubsub.StatefulRedisPubSubConnection; +import io.lettuce.core.pubsub.api.async.RedisPubSubAsyncCommands; +import io.lettuce.core.resource.ClientResources; +import io.netty.util.concurrent.EventExecutorGroup; + +/** + * @author Mark Paluch + */ +@SuppressWarnings("unchecked") +@ExtendWith(MockitoExtension.class) +@MockitoSettings(strictness = Strictness.LENIENT) +class SentinelTopologyRefreshUnitTests { + + private static final RedisURI host1 = RedisURI.create("localhost", 1234); + private static final RedisURI host2 = RedisURI.create("localhost", 3456); + + @Mock + private RedisClient redisClient; + + @Mock + private StatefulRedisPubSubConnection connection; + + @Mock + private RedisPubSubAsyncCommands pubSubAsyncCommands; + + @Mock + private ClientResources clientResources; + + @Mock + private EventExecutorGroup eventExecutors; + + @Mock + private Runnable refreshRunnable; + + @Captor + private ArgumentCaptor captor; + + private SentinelTopologyRefresh sut; + + @BeforeEach + void before() { + + when(redisClient.connectPubSubAsync(any(StringCodec.class), eq(host1))).thenReturn( + ConnectionFuture.completed(null, connection)); + when(clientResources.eventExecutorGroup()).thenReturn(eventExecutors); + when(redisClient.getResources()).thenReturn(clientResources); + when(connection.async()).thenReturn(pubSubAsyncCommands); + + AsyncCommand command = new AsyncCommand<>(new Command<>(CommandType.PSUBSCRIBE, null)); + command.complete(); + + when(connection.async().psubscribe(anyString())).thenReturn(command); + + sut = new SentinelTopologyRefresh(redisClient, "mymaster", Collections.singletonList(host1)); + } + + @AfterEach + void tearDown() { + + verify(redisClient, never()).connect(any(), any()); + verify(redisClient, never()).connectPubSub(any(), any()); + } + + @Test + void bind() { + + sut.bind(refreshRunnable); + + verify(redisClient).connectPubSubAsync(any(), any()); + verify(pubSubAsyncCommands).psubscribe("*"); + } + + @Test + void bindWithSecondSentinelFails() { + + sut = new SentinelTopologyRefresh(redisClient, "mymaster", Arrays.asList(host1, host2)); + + when(redisClient.connectPubSubAsync(any(StringCodec.class), 
eq(host2))).thenReturn( + ConnectionFuture.from(null, Futures.failed(new RedisConnectionException("err")))); + + sut.bind(refreshRunnable); + + Map<RedisURI, StatefulRedisPubSubConnection<String, String>> connections = (Map) ReflectionTestUtils.getField(sut, + "pubSubConnections"); + + assertThat(connections).containsKey(host1).hasSize(1); + } + + @Test + void bindWithSentinelRecovery() { + + StatefulRedisPubSubConnection connection2 = mock(StatefulRedisPubSubConnection.class); + RedisPubSubAsyncCommands async2 = mock(RedisPubSubAsyncCommands.class); + when(connection2.async()).thenReturn(async2); + + AsyncCommand command = new AsyncCommand<>(new Command<>(CommandType.PSUBSCRIBE, null)); + command.complete(); + + when(async2.psubscribe(anyString())).thenReturn(command); + + sut = new SentinelTopologyRefresh(redisClient, "mymaster", Arrays.asList(host1, host2)); + + when(redisClient.connectPubSubAsync(any(StringCodec.class), eq(host2))).thenReturn( + ConnectionFuture.from(null, Futures.failed(new RedisConnectionException("err")))).thenReturn( + ConnectionFuture.completed(null, connection2)); + + sut.bind(refreshRunnable); + + verify(redisClient).connectPubSubAsync(any(), eq(host1)); + verify(redisClient).connectPubSubAsync(any(), eq(host2)); + + Map<RedisURI, StatefulRedisPubSubConnection<String, String>> connections = (Map) ReflectionTestUtils.getField(sut, + "pubSubConnections"); + + RedisPubSubAdapter adapter = getAdapter(); + + adapter.message("*", "+sentinel", + "sentinel c14cc895bb0479c91312cee0e0440b7d99ad367b 127.0.0.1 26380 @ mymaster 127.0.0.1 6483"); + + verify(eventExecutors, times(1)).schedule(captor.capture(), anyLong(), any()); + captor.getValue().run(); + + verify(redisClient, times(2)).connectPubSubAsync(any(), eq(host2)); + assertThat(connections).containsKey(host1).containsKey(host2).hasSize(2); + verify(refreshRunnable, never()).run(); + } + + @Test + void bindDuringClose() { + + sut = new SentinelTopologyRefresh(redisClient, "mymaster", Arrays.asList(host1, host2)); + + StatefulRedisPubSubConnection connection2 = mock(StatefulRedisPubSubConnection.class); + when(connection.closeAsync()).thenReturn(CompletableFuture.completedFuture(null)); + when(connection2.closeAsync()).thenReturn(CompletableFuture.completedFuture(null)); + + when(redisClient.connectPubSubAsync(any(StringCodec.class), eq(host2))).thenAnswer(invocation -> { + + sut.closeAsync(); + return ConnectionFuture.completed(null, connection2); + }); + + sut.bind(refreshRunnable); + + verify(redisClient).connectPubSubAsync(any(), eq(host2)); + verify(connection).closeAsync(); + verify(connection2).closeAsync(); + + Map<RedisURI, StatefulRedisPubSubConnection<String, String>> connections = (Map) ReflectionTestUtils.getField(sut, + "pubSubConnections"); + + assertThat(connections).isEmpty(); + } + + @Test + void close() { + + when(connection.closeAsync()).thenReturn(CompletableFuture.completedFuture(null)); + + sut.bind(refreshRunnable); + sut.close(); + + verify(connection).removeListener(any()); + verify(connection).closeAsync(); + } + + @Test + void bindAfterClose() { + + sut.close(); + sut.bind(refreshRunnable); + + verify(redisClient, times(2)).getResources(); + verifyNoMoreInteractions(redisClient); + } + + @Test + void shouldNotProcessOtherEvents() { + + RedisPubSubAdapter adapter = getAdapter(); + sut.bind(refreshRunnable); + + adapter.message("*", "*", "irrelevant"); + + verify(redisClient, times(3)).getResources(); + verify(redisClient).connectPubSubAsync(any(), any()); + verifyNoMoreInteractions(redisClient); + } + + @Test + void shouldProcessSlaveDown() { + + RedisPubSubAdapter adapter = getAdapter(); + sut.bind(refreshRunnable); + + adapter.message("*", "+sdown", 
"replica 127.0.0.1:6483 127.0.0.1 6483-2020 @ mymaster 127.0.0.1 6482"); + + verify(eventExecutors, times(1)).schedule(captor.capture(), anyLong(), any()); + captor.getValue().run(); + verify(refreshRunnable, times(1)).run(); + } + + @Test + void shouldProcessSlaveAdded() { + + RedisPubSubAdapter adapter = getAdapter(); + sut.bind(refreshRunnable); + + adapter.message("*", "+slave", "replica 127.0.0.1:8483 127.0.0.1 8483-2020 @ mymaster 127.0.0.1 6482"); + + verify(eventExecutors, times(1)).schedule(captor.capture(), anyLong(), any()); + captor.getValue().run(); + verify(refreshRunnable, times(1)).run(); + } + + @Test + void shouldProcessSlaveBackUp() { + + RedisPubSubAdapter adapter = getAdapter(); + sut.bind(refreshRunnable); + + adapter.message("*", "-sdown", "replica 127.0.0.1:6483 127.0.0.1 6483-2020 @ mymaster 127.0.0.1 6482"); + + verify(eventExecutors, times(1)).schedule(captor.capture(), anyLong(), any()); + captor.getValue().run(); + verify(refreshRunnable, times(1)).run(); + } + + @Test + void shouldProcessElectedLeader() { + + RedisPubSubAdapter adapter = getAdapter(); + sut.bind(refreshRunnable); + + adapter.message("*", "+elected-leader", "master mymaster 127.0.0.1"); + + verify(eventExecutors, times(1)).schedule(captor.capture(), anyLong(), any()); + captor.getValue().run(); + verify(refreshRunnable, times(1)).run(); + } + + @Test + void shouldProcessSwitchMaster() { + + RedisPubSubAdapter adapter = getAdapter(); + sut.bind(refreshRunnable); + + adapter.message("*", "+switch-master", "mymaster 127.0.0.1"); + + verify(eventExecutors, times(1)).schedule(captor.capture(), anyLong(), any()); + captor.getValue().run(); + verify(refreshRunnable, times(1)).run(); + } + + @Test + void shouldProcessFixSlaveConfig() { + + RedisPubSubAdapter adapter = getAdapter(); + sut.bind(refreshRunnable); + + adapter.message("*", "fix-slave-config", "@ mymaster 127.0.0.1"); + + verify(eventExecutors, times(1)).schedule(captor.capture(), anyLong(), any()); + captor.getValue().run(); + verify(refreshRunnable, times(1)).run(); + } + + @Test + void shouldProcessConvertToSlave() { + + RedisPubSubAdapter adapter = getAdapter(); + sut.bind(refreshRunnable); + + adapter.message("*", "+convert-to-slave", "@ mymaster 127.0.0.1"); + + verify(eventExecutors, times(1)).schedule(captor.capture(), anyLong(), any()); + captor.getValue().run(); + verify(refreshRunnable, times(1)).run(); + } + + @Test + void shouldProcessRoleChange() { + + RedisPubSubAdapter adapter = getAdapter(); + sut.bind(refreshRunnable); + + adapter.message("*", "+role-change", "@ mymaster 127.0.0.1"); + + verify(eventExecutors, times(1)).schedule(captor.capture(), anyLong(), any()); + captor.getValue().run(); + verify(refreshRunnable, times(1)).run(); + } + + @Test + void shouldProcessFailoverEnd() { + + RedisPubSubAdapter adapter = getAdapter(); + sut.bind(refreshRunnable); + + adapter.message("*", "failover-end", ""); + + verify(eventExecutors, times(1)).schedule(captor.capture(), anyLong(), any()); + captor.getValue().run(); + verify(refreshRunnable, times(1)).run(); + } + + @Test + void shouldProcessFailoverTimeout() { + + RedisPubSubAdapter adapter = getAdapter(); + sut.bind(refreshRunnable); + + adapter.message("*", "failover-end-for-timeout", ""); + + verify(eventExecutors, times(1)).schedule(captor.capture(), anyLong(), any()); + captor.getValue().run(); + verify(refreshRunnable, times(1)).run(); + } + + @Test + void shouldExecuteOnceWithinATimeout() { + + RedisPubSubAdapter adapter = getAdapter(); + sut.bind(refreshRunnable); + + 
adapter.message("*", "failover-end-for-timeout", ""); + adapter.message("*", "failover-end-for-timeout", ""); + + verify(eventExecutors, times(1)).schedule(captor.capture(), anyLong(), any()); + captor.getValue().run(); + verify(refreshRunnable, times(1)).run(); + } + + @Test + void shouldNotProcessIfExecutorIsShuttingDown() { + + RedisPubSubAdapter adapter = getAdapter(); + sut.bind(refreshRunnable); + when(eventExecutors.isShuttingDown()).thenReturn(true); + + adapter.message("*", "failover-end-for-timeout", ""); + + verify(redisClient).connectPubSubAsync(any(), any()); + verify(eventExecutors, never()).schedule(any(Runnable.class), anyLong(), any()); + } + + private RedisPubSubAdapter getAdapter() { + return (RedisPubSubAdapter) ReflectionTestUtils.getField(sut, "adapter"); + } +} diff --git a/src/test/java/io/lettuce/core/masterreplica/StaticMasterReplicaTest.java b/src/test/java/io/lettuce/core/masterreplica/StaticMasterReplicaTest.java new file mode 100644 index 0000000000..a741c9cefc --- /dev/null +++ b/src/test/java/io/lettuce/core/masterreplica/StaticMasterReplicaTest.java @@ -0,0 +1,226 @@ +/* + * Copyright 2019-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.masterreplica; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; +import static org.junit.jupiter.api.Assumptions.assumeTrue; + +import java.util.Arrays; +import java.util.Collections; +import java.util.regex.Matcher; +import java.util.regex.Pattern; + +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; + +import io.lettuce.core.AbstractRedisClientTest; +import io.lettuce.core.ReadFrom; +import io.lettuce.core.RedisException; +import io.lettuce.core.RedisURI; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.models.role.RedisInstance; +import io.lettuce.core.models.role.RoleParser; +import io.lettuce.test.WithPassword; +import io.lettuce.test.settings.TestSettings; + +/** + * @author Mark Paluch + */ +class StaticMasterReplicaTest extends AbstractRedisClientTest { + + private StatefulRedisMasterReplicaConnection connection; + + private RedisURI master; + private RedisURI replica; + + private RedisCommands connection1; + private RedisCommands connection2; + + @BeforeEach + void before() throws Exception { + + RedisURI node1 = RedisURI.Builder.redis(host, TestSettings.port(3)).withClientName("my-client").withDatabase(2).build(); + RedisURI node2 = RedisURI.Builder.redis(host, TestSettings.port(4)).withClientName("my-client").withDatabase(2).build(); + + connection1 = client.connect(node1).sync(); + connection2 = client.connect(node2).sync(); + + RedisInstance node1Instance = RoleParser.parse(this.connection1.role()); + RedisInstance node2Instance = RoleParser.parse(this.connection2.role()); + + if (node1Instance.getRole() == 
RedisInstance.Role.MASTER && node2Instance.getRole() == RedisInstance.Role.SLAVE) { + master = node1; + replica = node2; + } else if (node2Instance.getRole() == RedisInstance.Role.MASTER + && node1Instance.getRole() == RedisInstance.Role.SLAVE) { + master = node2; + replica = node1; + } else { + assumeTrue(false, + String.format("Cannot run the test because I don't have a distinct master and replica but %s and %s", + node1Instance, node2Instance)); + } + + WithPassword.enableAuthentication(this.connection1); + this.connection1.auth(passwd); + this.connection1.configSet("masterauth", passwd); + + WithPassword.enableAuthentication(this.connection2); + this.connection2.auth(passwd); + this.connection2.configSet("masterauth", passwd); + + node1.setPassword(passwd); + node2.setPassword(passwd); + + connection = MasterReplica.connect(client, StringCodec.UTF8, Arrays.asList(master, replica)); + connection.setReadFrom(ReadFrom.REPLICA); + } + + @AfterEach + void after() throws Exception { + + if (connection1 != null) { + WithPassword.disableAuthentication(connection1); + connection1.configSet("masterauth", ""); + connection1.configRewrite(); + connection1.getStatefulConnection().close(); + } + + if (connection2 != null) { + WithPassword.disableAuthentication(connection2); + connection2.configSet("masterauth", ""); + connection2.configRewrite(); + connection2.getStatefulConnection().close(); + } + + if (connection != null) { + connection.close(); + } + } + + @Test + void testMasterReplicaStandaloneBasic() { + + String server = connection.sync().info("server"); + + Pattern pattern = Pattern.compile("tcp_port:(\\d+)"); + Matcher matcher = pattern.matcher(server); + + assertThat(matcher.find()).isTrue(); + assertThat(matcher.group(1)).isEqualTo("6483"); + assertThat(connection.getReadFrom()).isEqualTo(ReadFrom.REPLICA); + } + + @Test + void testMasterReplicaReadWrite() { + + RedisCommands redisCommands = connection.sync(); + redisCommands.set(key, value); + redisCommands.waitForReplication(1, 100); + + assertThat(redisCommands.get(key)).isEqualTo(value); + } + + @Test + void noReplicaForRead() { + + connection.close(); + + connection = MasterReplica.connect(client, StringCodec.UTF8, Collections.singletonList(master)); + connection.setReadFrom(ReadFrom.REPLICA); + + assertThatThrownBy(() -> replicaCall(connection)).isInstanceOf(RedisException.class); + } + + @Test + void shouldWorkWithMasterOnly() { + + connection.close(); + + connection = MasterReplica.connect(client, StringCodec.UTF8, Collections.singletonList(master)); + + connection.sync().set(key, value); + assertThat(connection.sync().get(key)).isEqualTo("value"); + } + + @Test + void shouldWorkWithReplicaOnly() { + + connection.close(); + + connection = MasterReplica.connect(client, StringCodec.UTF8, Collections.singletonList(replica)); + connection.setReadFrom(ReadFrom.MASTER_PREFERRED); + + assertThat(connection.sync().info()).isNotEmpty(); + } + + @Test + void noMasterForWrite() { + + connection.close(); + + connection = MasterReplica.connect(client, StringCodec.UTF8, Collections.singletonList(replica)); + + assertThatThrownBy(() -> connection.sync().set(key, value)).isInstanceOf(RedisException.class); + } + + @Test + void masterReplicaConnectionShouldSetClientName() { + + assertThat(connection.sync().clientGetname()).isEqualTo("my-client"); + connection.sync().quit(); + assertThat(connection.sync().clientGetname()).isEqualTo("my-client"); + + connection.close(); + } + + static String replicaCall(StatefulRedisMasterReplicaConnection 
connection) { + return connection.sync().info("replication"); + } + + @Test + void testConnectionCount() { + + MasterReplicaConnectionProvider connectionProvider = getConnectionProvider(); + + assertThat(connectionProvider.getConnectionCount()).isEqualTo(0); + replicaCall(connection); + + assertThat(connectionProvider.getConnectionCount()).isEqualTo(1); + + connection.sync().set(key, value); + assertThat(connectionProvider.getConnectionCount()).isEqualTo(2); + } + + @Test + void testReconfigureTopology() { + MasterReplicaConnectionProvider connectionProvider = getConnectionProvider(); + + replicaCall(connection); + + connectionProvider.setKnownNodes(Collections.emptyList()); + + assertThat(connectionProvider.getConnectionCount()).isEqualTo(0); + } + + MasterReplicaConnectionProvider getConnectionProvider() { + MasterReplicaChannelWriter writer = ((StatefulRedisMasterReplicaConnectionImpl) connection).getChannelWriter(); + return writer.getMasterReplicaConnectionProvider(); + } +} diff --git a/src/test/java/io/lettuce/core/masterslave/MasterSlaveSentinelIntegrationTests.java b/src/test/java/io/lettuce/core/masterslave/MasterSlaveSentinelIntegrationTests.java new file mode 100644 index 0000000000..8503120cf4 --- /dev/null +++ b/src/test/java/io/lettuce/core/masterslave/MasterSlaveSentinelIntegrationTests.java @@ -0,0 +1,154 @@ +/* + * Copyright 2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.masterslave; + +import static io.lettuce.core.masterslave.MasterSlaveTest.slaveCall; +import static org.assertj.core.api.Assertions.assertThat; +import static org.junit.Assert.fail; + +import java.io.IOException; +import java.util.regex.Matcher; +import java.util.regex.Pattern; + +import javax.inject.Inject; + +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; +import org.springframework.test.util.ReflectionTestUtils; + +import io.lettuce.RedisBug; +import io.lettuce.core.*; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.masterslave.MasterSlave; +import io.lettuce.core.masterslave.StatefulRedisMasterSlaveConnection; +import io.lettuce.core.sentinel.SentinelTestSettings; +import io.lettuce.test.LettuceExtension; +import io.lettuce.test.settings.TestSettings; +import io.netty.channel.group.ChannelGroup; + +/** + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +class MasterSlaveSentinelIntegrationTests extends TestSupport { + + private final Pattern pattern = Pattern.compile("role:(\\w+)"); + private final RedisClient redisClient; + + @Inject + MasterSlaveSentinelIntegrationTests(RedisClient redisClient) { + this.redisClient = redisClient; + } + + @Test + void testMasterSlaveSentinelBasic() { + + RedisURI uri = RedisURI.create( + "redis-sentinel://127.0.0.1:21379,127.0.0.1:22379,127.0.0.1:26379?sentinelMasterId=mymaster&timeout=5s"); + StatefulRedisMasterSlaveConnection connection = MasterSlave.connect(redisClient, StringCodec.UTF8, uri); + + connection.setReadFrom(ReadFrom.MASTER); + String server = slaveCall(connection); + assertThatServerIs(server, "master"); + + connection.close(); + } + + @Test + void masterSlaveConnectionShouldSetClientName() { + + RedisURI redisURI = RedisURI.Builder.sentinel(TestSettings.host(), SentinelTestSettings.MASTER_ID) + .withClientName("my-client").build(); + + StatefulRedisMasterSlaveConnection connection = MasterSlave.connect(redisClient, StringCodec.UTF8, + redisURI); + + assertThat(connection.sync().clientGetname()).isEqualTo(redisURI.getClientName()); + connection.sync().quit(); + assertThat(connection.sync().clientGetname()).isEqualTo(redisURI.getClientName()); + + connection.close(); + } + + @Test + void testMasterSlaveSentinelWithTwoUnavailableSentinels() { + + RedisURI uri = RedisURI.create( + "redis-sentinel://127.0.0.1:21379,127.0.0.1:22379,127.0.0.1:26379?sentinelMasterId=mymaster&timeout=5s"); + StatefulRedisMasterSlaveConnection connection = MasterSlave.connect(redisClient, StringCodec.UTF8, uri); + + connection.setReadFrom(ReadFrom.MASTER); + String server = connection.sync().info("replication"); + assertThatServerIs(server, "master"); + + connection.close(); + } + + @Test + void testMasterSlaveSentinelWithUnavailableSentinels() { + + RedisURI uri = RedisURI.create("redis-sentinel://127.0.0.1:21379,127.0.0.1:21379?sentinelMasterId=mymaster&timeout=5s"); + + try { + MasterSlave.connect(redisClient, StringCodec.UTF8, uri); + fail("Missing RedisConnectionException"); + } catch (RedisConnectionException e) { + assertThat(e.getCause()).hasRootCauseInstanceOf(IOException.class); + } + } + + @Test + void testMasterSlaveSentinelConnectionCount() { + + ChannelGroup channels = (ChannelGroup) ReflectionTestUtils.getField(redisClient, "channels"); + int count = channels.size(); + + StatefulRedisMasterSlaveConnection connection = MasterSlave.connect(redisClient, StringCodec.UTF8, + SentinelTestSettings.SENTINEL_URI); + + connection.sync().ping(); + 
connection.setReadFrom(ReadFrom.REPLICA); + slaveCall(connection); + + assertThat(channels.size()).isEqualTo(count + 2 /* connections */ + 1 /* sentinel connections */); + + connection.close(); + } + + @Test + void testMasterSlaveSentinelClosesSentinelConnections() { + + ChannelGroup channels = (ChannelGroup) ReflectionTestUtils.getField(redisClient, "channels"); + int count = channels.size(); + + StatefulRedisMasterSlaveConnection connection = MasterSlave.connect(redisClient, StringCodec.UTF8, + SentinelTestSettings.SENTINEL_URI); + + connection.sync().ping(); + connection.setReadFrom(ReadFrom.REPLICA); + slaveCall(connection); + connection.close(); + + assertThat(channels.size()).isEqualTo(count); + } + + private void assertThatServerIs(String server, String expectation) { + Matcher matcher = pattern.matcher(server); + + assertThat(matcher.find()).isTrue(); + assertThat(matcher.group(1)).isEqualTo(expectation); + } +} diff --git a/src/test/java/io/lettuce/core/masterslave/MasterSlaveTest.java b/src/test/java/io/lettuce/core/masterslave/MasterSlaveTest.java new file mode 100644 index 0000000000..9a7e22118f --- /dev/null +++ b/src/test/java/io/lettuce/core/masterslave/MasterSlaveTest.java @@ -0,0 +1,191 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.masterslave; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; +import static org.junit.jupiter.api.Assumptions.assumeTrue; + +import java.util.Collections; +import java.util.List; +import java.util.regex.Matcher; +import java.util.regex.Pattern; + +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; + +import io.lettuce.core.AbstractRedisClientTest; +import io.lettuce.core.ReadFrom; +import io.lettuce.core.RedisException; +import io.lettuce.core.RedisURI; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.models.role.RedisInstance; +import io.lettuce.core.models.role.RedisNodeDescription; +import io.lettuce.core.models.role.RoleParser; +import io.lettuce.test.WithPassword; +import io.lettuce.test.settings.TestSettings; + +/** + * @author Mark Paluch + */ +class MasterSlaveTest extends AbstractRedisClientTest { + + private RedisURI masterURI = RedisURI.Builder.redis(host, TestSettings.port(3)).withPassword(passwd) + .withClientName("my-client").withDatabase(5).build(); + + private StatefulRedisMasterSlaveConnection connection; + + private RedisURI master; + private RedisURI replica; + + private RedisCommands connection1; + private RedisCommands connection2; + + @BeforeEach + void before() throws Exception { + + RedisURI node1 = RedisURI.Builder.redis(host, TestSettings.port(3)).withDatabase(2).build(); + RedisURI node2 = RedisURI.Builder.redis(host, TestSettings.port(4)).withDatabase(2).build(); + + this.connection1 = client.connect(node1).sync(); + this.connection2 = client.connect(node2).sync(); + + RedisInstance node1Instance = RoleParser.parse(this.connection1.role()); + RedisInstance node2Instance = RoleParser.parse(this.connection2.role()); + + if (node1Instance.getRole() == RedisInstance.Role.MASTER && node2Instance.getRole() == RedisInstance.Role.SLAVE) { + master = node1; + replica = node2; + } else if (node2Instance.getRole() == RedisInstance.Role.MASTER + && node1Instance.getRole() == RedisInstance.Role.SLAVE) { + master = node2; + replica = node1; + } else { + assumeTrue(false, + String.format("Cannot run the test because I don't have a distinct master and replica but %s and %s", + node1Instance, node2Instance)); + } + + WithPassword.enableAuthentication(this.connection1); + this.connection1.auth(passwd); + this.connection1.configSet("masterauth", passwd); + + WithPassword.enableAuthentication(this.connection2); + this.connection2.auth(passwd); + this.connection2.configSet("masterauth", passwd); + + connection = MasterSlave.connect(client, StringCodec.UTF8, masterURI); + connection.setReadFrom(ReadFrom.REPLICA); + } + + @AfterEach + void after() { + + if (connection1 != null) { + WithPassword.disableAuthentication(connection1); + connection1.configRewrite(); + connection1.getStatefulConnection().close(); + } + + if (connection2 != null) { + WithPassword.disableAuthentication(connection2); + connection2.configRewrite(); + connection2.getStatefulConnection().close(); + } + + if (connection != null) { + connection.close(); + } + } + + @Test + void testMasterSlaveReadFromMaster() { + + connection.setReadFrom(ReadFrom.MASTER); + String server = connection.sync().info("server"); + + Pattern pattern = Pattern.compile("tcp_port:(\\d+)"); + Matcher matcher = pattern.matcher(server); + + assertThat(matcher.find()).isTrue(); + 
assertThat(matcher.group(1)).isEqualTo("" + master.getPort()); + } + + @Test + void testMasterSlaveReadFromSlave() { + + String server = connection.sync().info("server"); + + Pattern pattern = Pattern.compile("tcp_port:(\\d+)"); + Matcher matcher = pattern.matcher(server); + + assertThat(matcher.find()).isTrue(); + assertThat(matcher.group(1)).isEqualTo("" + replica.getPort()); + assertThat(connection.getReadFrom()).isEqualTo(ReadFrom.REPLICA); + } + + @Test + void testMasterSlaveReadWrite() { + + RedisCommands redisCommands = connection.sync(); + redisCommands.set(key, value); + redisCommands.waitForReplication(1, 100); + + assertThat(redisCommands.get(key)).isEqualTo(value); + } + + @Test + void testConnectToSlave() { + + connection.close(); + + RedisURI slaveUri = RedisURI.Builder.redis(host, TestSettings.port(4)).withPassword(passwd).build(); + connection = MasterSlave.connect(client, StringCodec.UTF8, slaveUri); + + RedisCommands sync = connection.sync(); + sync.set(key, value); + } + + @Test + void noSlaveForRead() { + + connection.setReadFrom(new ReadFrom() { + @Override + public List select(Nodes nodes) { + return Collections.emptyList(); + } + }); + + assertThatThrownBy(() -> slaveCall(connection)).isInstanceOf(RedisException.class); + } + + @Test + void masterSlaveConnectionShouldSetClientName() { + + assertThat(connection.sync().clientGetname()).isEqualTo(masterURI.getClientName()); + connection.sync().quit(); + assertThat(connection.sync().clientGetname()).isEqualTo(masterURI.getClientName()); + + connection.close(); + } + + static String slaveCall(StatefulRedisMasterSlaveConnection connection) { + return connection.sync().info("replication"); + } + +} diff --git a/src/test/java/io/lettuce/core/masterslave/StaticMasterSlaveTest.java b/src/test/java/io/lettuce/core/masterslave/StaticMasterSlaveTest.java new file mode 100644 index 0000000000..8a9f83bcd4 --- /dev/null +++ b/src/test/java/io/lettuce/core/masterslave/StaticMasterSlaveTest.java @@ -0,0 +1,196 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.masterslave; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; +import static org.junit.jupiter.api.Assumptions.assumeTrue; + +import java.util.Arrays; +import java.util.regex.Matcher; +import java.util.regex.Pattern; + +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; + +import io.lettuce.core.AbstractRedisClientTest; +import io.lettuce.core.ReadFrom; +import io.lettuce.core.RedisException; +import io.lettuce.core.RedisURI; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.models.role.RedisInstance; +import io.lettuce.core.models.role.RoleParser; +import io.lettuce.test.WithPassword; +import io.lettuce.test.settings.TestSettings; + +/** + * @author Mark Paluch + */ +class StaticMasterSlaveTest extends AbstractRedisClientTest { + + private StatefulRedisMasterSlaveConnection connection; + + private RedisURI master; + private RedisURI replica; + + private RedisCommands connection1; + private RedisCommands connection2; + + @BeforeEach + void before() { + + RedisURI node1 = RedisURI.Builder.redis(host, TestSettings.port(3)).withClientName("my-client").withDatabase(2).build(); + RedisURI node2 = RedisURI.Builder.redis(host, TestSettings.port(4)).withClientName("my-client").withDatabase(2).build(); + + this.connection1 = client.connect(node1).sync(); + this.connection2 = client.connect(node2).sync(); + + RedisInstance node1Instance = RoleParser.parse(this.connection1.role()); + RedisInstance node2Instance = RoleParser.parse(this.connection2.role()); + + if (node1Instance.getRole() == RedisInstance.Role.MASTER && node2Instance.getRole() == RedisInstance.Role.SLAVE) { + master = node1; + replica = node2; + } else if (node2Instance.getRole() == RedisInstance.Role.MASTER + && node1Instance.getRole() == RedisInstance.Role.SLAVE) { + master = node2; + replica = node1; + } else { + assumeTrue(false, + String.format("Cannot run the test because I don't have a distinct master and replica but %s and %s", + node1Instance, node2Instance)); + } + + WithPassword.enableAuthentication(this.connection1); + this.connection1.auth(passwd); + this.connection1.configSet("masterauth", passwd); + + WithPassword.enableAuthentication(this.connection2); + this.connection2.auth(passwd); + this.connection2.configSet("masterauth", passwd); + + master.setPassword(passwd); + replica.setPassword(passwd); + + connection = MasterSlave.connect(client, StringCodec.UTF8, Arrays.asList(master, replica)); + connection.setReadFrom(ReadFrom.REPLICA); + } + + @AfterEach + void after() throws Exception { + + if (connection1 != null) { + WithPassword.disableAuthentication(connection1); + connection1.configSet("masterauth", ""); + connection1.configRewrite(); + connection1.getStatefulConnection().close(); + } + + if (connection2 != null) { + WithPassword.disableAuthentication(connection2); + connection2.configSet("masterauth", ""); + connection2.configRewrite(); + connection2.getStatefulConnection().close(); + } + + if (connection != null) { + connection.close(); + } + } + + @Test + void testMasterSlaveStandaloneBasic() { + + String server = connection.sync().info("server"); + + Pattern pattern = Pattern.compile("tcp_port:(\\d+)"); + Matcher matcher = pattern.matcher(server); + + assertThat(matcher.find()).isTrue(); + assertThat(matcher.group(1)).isEqualTo("6483"); + 
assertThat(connection.getReadFrom()).isEqualTo(ReadFrom.REPLICA); + } + + @Test + void testMasterSlaveReadWrite() { + + RedisCommands redisCommands = connection.sync(); + redisCommands.set(key, value); + redisCommands.waitForReplication(1, 100); + + assertThat(redisCommands.get(key)).isEqualTo(value); + } + + @Test + void noSlaveForRead() { + + connection.close(); + + connection = MasterSlave.connect(client, StringCodec.UTF8, Arrays.asList(master)); + connection.setReadFrom(ReadFrom.REPLICA); + + assertThatThrownBy(() -> slaveCall(connection)).isInstanceOf(RedisException.class); + } + + @Test + void shouldWorkWithMasterOnly() { + + connection.close(); + + connection = MasterSlave.connect(client, StringCodec.UTF8, Arrays.asList(master)); + + connection.sync().set(key, value); + assertThat(connection.sync().get(key)).isEqualTo("value"); + } + + @Test + void shouldWorkWithSlaveOnly() { + + connection.close(); + + connection = MasterSlave.connect(client, StringCodec.UTF8, Arrays.asList(replica)); + connection.setReadFrom(ReadFrom.MASTER_PREFERRED); + + assertThat(connection.sync().info()).isNotEmpty(); + } + + @Test + void noMasterForWrite() { + + connection.close(); + + connection = MasterSlave.connect(client, StringCodec.UTF8, Arrays.asList(replica)); + + assertThatThrownBy(() -> connection.sync().set(key, value)).isInstanceOf(RedisException.class); + } + + @Test + void masterSlaveConnectionShouldSetClientName() { + + assertThat(connection.sync().clientGetname()).isEqualTo("my-client"); + connection.sync().quit(); + assertThat(connection.sync().clientGetname()).isEqualTo("my-client"); + + connection.close(); + } + + static String slaveCall(StatefulRedisMasterSlaveConnection connection) { + return connection.sync().info("replication"); + } + +} diff --git a/src/test/java/io/lettuce/core/metrics/CommandLatencyCollectorOptionsUnitTests.java b/src/test/java/io/lettuce/core/metrics/CommandLatencyCollectorOptionsUnitTests.java new file mode 100644 index 0000000000..d5a660e81a --- /dev/null +++ b/src/test/java/io/lettuce/core/metrics/CommandLatencyCollectorOptionsUnitTests.java @@ -0,0 +1,38 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.metrics; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.util.concurrent.TimeUnit; + +import org.junit.jupiter.api.Test; + +/** + * @author Larry Battle + */ +class CommandLatencyCollectorOptionsUnitTests { + + @Test + void testBuilder() { + + CommandLatencyCollectorOptions sut = CommandLatencyCollectorOptions.builder() + .targetUnit(TimeUnit.HOURS).targetPercentiles(new double[] { 1, 2, 3 }).build(); + + assertThat(sut.targetPercentiles()).hasSize(3); + assertThat(sut.targetUnit()).isEqualTo(TimeUnit.HOURS); + } +} diff --git a/src/test/java/io/lettuce/core/metrics/CommandLatencyIdUnitTests.java b/src/test/java/io/lettuce/core/metrics/CommandLatencyIdUnitTests.java new file mode 100644 index 0000000000..d6305a7fe7 --- /dev/null +++ b/src/test/java/io/lettuce/core/metrics/CommandLatencyIdUnitTests.java @@ -0,0 +1,76 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.metrics; + +import static org.assertj.core.api.Assertions.assertThat; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.protocol.CommandKeyword; +import io.lettuce.core.protocol.ProtocolKeyword; +import io.netty.channel.local.LocalAddress; + +/** + * Unit tests for {@link CommandLatencyId}. + * + * @author Mark Paluch + */ +class CommandLatencyIdUnitTests { + + private CommandLatencyId sut = CommandLatencyId.create(LocalAddress.ANY, new LocalAddress("me"), CommandKeyword.ADDR); + + @Test + void testToString() { + assertThat(sut.toString()).contains("local:any -> local:me"); + } + + @Test + void testValues() { + assertThat(sut.localAddress()).isEqualTo(LocalAddress.ANY); + assertThat(sut.remoteAddress()).isEqualTo(new LocalAddress("me")); + } + + @Test + void testEquality() { + assertThat(sut).isEqualTo(CommandLatencyId.create(LocalAddress.ANY, new LocalAddress("me"), new MyCommand("ADDR"))); + assertThat(sut).isNotEqualTo(CommandLatencyId.create(LocalAddress.ANY, new LocalAddress("me"), new MyCommand("FOO"))); + } + + @Test + void testHashCode() { + assertThat(sut) + .hasSameHashCodeAs(CommandLatencyId.create(LocalAddress.ANY, new LocalAddress("me"), new MyCommand("ADDR"))); + } + + static class MyCommand implements ProtocolKeyword { + + final String name; + + public MyCommand(String name) { + this.name = name; + } + + @Override + public byte[] getBytes() { + return name.getBytes(); + } + + @Override + public String name() { + return name; + } + } +} diff --git a/src/test/java/io/lettuce/core/metrics/DefaultCommandLatencyCollectorOptionsUnitTests.java b/src/test/java/io/lettuce/core/metrics/DefaultCommandLatencyCollectorOptionsUnitTests.java new file mode 100644 index 0000000000..55c5414c47 --- /dev/null +++ b/src/test/java/io/lettuce/core/metrics/DefaultCommandLatencyCollectorOptionsUnitTests.java @@ -0,0 +1,55 @@ +/* + * Copyright 2011-2020 the original author or authors. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.metrics; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.util.concurrent.TimeUnit; + +import org.junit.jupiter.api.Test; + +/** + * @author Mark Paluch + */ +class DefaultCommandLatencyCollectorOptionsUnitTests { + + @Test + void testDefault() { + + DefaultCommandLatencyCollectorOptions sut = DefaultCommandLatencyCollectorOptions.create(); + + assertThat(sut.targetPercentiles()).hasSize(5); + assertThat(sut.targetUnit()).isEqualTo(TimeUnit.MICROSECONDS); + } + + @Test + void testDisabled() { + + DefaultCommandLatencyCollectorOptions sut = DefaultCommandLatencyCollectorOptions.disabled(); + + assertThat(sut.isEnabled()).isEqualTo(false); + } + + @Test + void testBuilder() { + + DefaultCommandLatencyCollectorOptions sut = DefaultCommandLatencyCollectorOptions.builder() + .targetUnit(TimeUnit.HOURS).targetPercentiles(new double[] { 1, 2, 3 }).build(); + + assertThat(sut.targetPercentiles()).hasSize(3); + assertThat(sut.targetUnit()).isEqualTo(TimeUnit.HOURS); + } +} diff --git a/src/test/java/io/lettuce/core/metrics/DefaultCommandLatencyCollectorUnitTests.java b/src/test/java/io/lettuce/core/metrics/DefaultCommandLatencyCollectorUnitTests.java new file mode 100644 index 0000000000..0cf0fdaa8b --- /dev/null +++ b/src/test/java/io/lettuce/core/metrics/DefaultCommandLatencyCollectorUnitTests.java @@ -0,0 +1,149 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.metrics; + +import static java.util.concurrent.TimeUnit.MICROSECONDS; +import static java.util.concurrent.TimeUnit.MILLISECONDS; +import static org.assertj.core.api.Assertions.assertThat; + +import java.util.Map; + +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; +import org.mockito.junit.jupiter.MockitoExtension; +import org.springframework.test.util.ReflectionTestUtils; + +import io.lettuce.core.metrics.DefaultCommandLatencyCollector.PauseDetectorWrapper; +import io.lettuce.core.protocol.CommandType; +import io.netty.channel.local.LocalAddress; + +/** + * @author Mark Paluch + */ +@ExtendWith(MockitoExtension.class) +class DefaultCommandLatencyCollectorUnitTests { + + private DefaultCommandLatencyCollector sut; + + @Test + void shutdown() { + + sut = new DefaultCommandLatencyCollector(DefaultCommandLatencyCollectorOptions.create()); + + sut.shutdown(); + + assertThat(sut.isEnabled()).isFalse(); + } + + @Test + void simpleCreateShouldNotInitializePauseDetector() { + + sut = new DefaultCommandLatencyCollector(DefaultCommandLatencyCollectorOptions.create()); + PauseDetectorWrapper wrapper = (PauseDetectorWrapper) ReflectionTestUtils.getField(sut, "pauseDetectorWrapper"); + + assertThat(wrapper).isNull(); + } + + @Test + void latencyRecordShouldInitializePauseDetectorWrapper() { + + sut = new DefaultCommandLatencyCollector(DefaultCommandLatencyCollectorOptions.create()); + + setupData(); + + PauseDetectorWrapper wrapper = (PauseDetectorWrapper) ReflectionTestUtils.getField(sut, "pauseDetectorWrapper"); + assertThat(wrapper).isNotNull(); + + sut.shutdown(); + + wrapper = (PauseDetectorWrapper) ReflectionTestUtils.getField(sut, "pauseDetectorWrapper"); + assertThat(wrapper).isNull(); + } + + @Test + void shutdownShouldReleasePauseDetector() { + + sut = new DefaultCommandLatencyCollector(DefaultCommandLatencyCollectorOptions.create()); + PauseDetectorWrapper wrapper = (PauseDetectorWrapper) ReflectionTestUtils.getField(sut, "pauseDetectorWrapper"); + + assertThat(wrapper).isNull(); + + setupData(); + + wrapper = (PauseDetectorWrapper) ReflectionTestUtils.getField(sut, "pauseDetectorWrapper"); + + assertThat(wrapper).isNotNull(); + + sut.shutdown(); + } + + @Test + void verifyMetrics() { + + sut = new DefaultCommandLatencyCollector(DefaultCommandLatencyCollectorOptions.create()); + + setupData(); + + Map<CommandLatencyId, CommandMetrics> latencies = sut.retrieveMetrics(); + assertThat(latencies).hasSize(1); + + Map.Entry<CommandLatencyId, CommandMetrics> entry = latencies.entrySet().iterator().next(); + + assertThat(entry.getKey().commandType()).isSameAs(CommandType.BGSAVE); + + CommandMetrics metrics = entry.getValue(); + + assertThat(metrics.getCount()).isEqualTo(3); + assertThat(metrics.getCompletion().getMin()).isBetween(990000L, 1100000L); + assertThat(metrics.getCompletion().getPercentiles()).hasSize(5); + + assertThat(metrics.getFirstResponse().getMin()).isBetween(90000L, 110000L); + assertThat(metrics.getFirstResponse().getMax()).isBetween(290000L, 310000L); + assertThat(metrics.getCompletion().getPercentiles()).containsKey(50.0d); + + assertThat(metrics.getFirstResponse().getPercentiles().get(50d)).isLessThanOrEqualTo( + metrics.getCompletion().getPercentiles().get(50d)); + + assertThat(metrics.getTimeUnit()).isEqualTo(MICROSECONDS); + + assertThat(sut.retrieveMetrics()).isEmpty(); + + sut.shutdown(); + } + + @Test + void verifyCummulativeMetrics() { + + sut = new DefaultCommandLatencyCollector(DefaultCommandLatencyCollectorOptions.builder() + .resetLatenciesAfterEvent(false).build()); 
+ + setupData(); + + assertThat(sut.retrieveMetrics()).hasSize(1); + assertThat(sut.retrieveMetrics()).hasSize(1); + + sut.shutdown(); + } + + private void setupData() { + sut.recordCommandLatency(LocalAddress.ANY, LocalAddress.ANY, CommandType.BGSAVE, MILLISECONDS.toNanos(100), + MILLISECONDS.toNanos(1000)); + sut.recordCommandLatency(LocalAddress.ANY, LocalAddress.ANY, CommandType.BGSAVE, MILLISECONDS.toNanos(200), + MILLISECONDS.toNanos(1000)); + sut.recordCommandLatency(LocalAddress.ANY, LocalAddress.ANY, CommandType.BGSAVE, MILLISECONDS.toNanos(300), + MILLISECONDS.toNanos(1000)); + } +} diff --git a/src/test/java/io/lettuce/core/models/command/CommandDetailParserUnitTests.java b/src/test/java/io/lettuce/core/models/command/CommandDetailParserUnitTests.java new file mode 100644 index 0000000000..69724668c1 --- /dev/null +++ b/src/test/java/io/lettuce/core/models/command/CommandDetailParserUnitTests.java @@ -0,0 +1,79 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.models.command; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.util.ArrayList; +import java.util.HashSet; +import java.util.List; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.internal.LettuceLists; + +/** + * @author Mark Paluch + */ +class CommandDetailParserUnitTests { + + @Test + void testMappings() { + assertThat(CommandDetailParser.FLAG_MAPPING).hasSameSizeAs(CommandDetail.Flag.values()); + } + + @Test + void testEmptyList() { + + List<CommandDetail> result = CommandDetailParser.parse(new ArrayList<>()); + assertThat(result).isEmpty(); + } + + @Test + void testMalformedList() { + Object o = LettuceLists.newList("", "", ""); + List<CommandDetail> result = CommandDetailParser.parse(LettuceLists.newList(o)); + assertThat(result).isEmpty(); + } + + @Test + void testParse() { + Object o = LettuceLists.newList("get", "1", LettuceLists.newList("fast", "loading"), 1L, 2L, 3L); + List<CommandDetail> result = CommandDetailParser.parse(LettuceLists.newList(o)); + assertThat(result).hasSize(1); + + CommandDetail commandDetail = result.get(0); + assertThat(commandDetail.getName()).isEqualTo("get"); + assertThat(commandDetail.getArity()).isEqualTo(1); + assertThat(commandDetail.getFlags()).hasSize(2); + assertThat(commandDetail.getFirstKeyPosition()).isEqualTo(1); + assertThat(commandDetail.getLastKeyPosition()).isEqualTo(2); + assertThat(commandDetail.getKeyStepCount()).isEqualTo(3); + } + + @Test + void testModel() { + CommandDetail commandDetail = new CommandDetail(); + commandDetail.setArity(1); + commandDetail.setFirstKeyPosition(2); + commandDetail.setLastKeyPosition(3); + commandDetail.setKeyStepCount(4); + commandDetail.setName("theName"); + commandDetail.setFlags(new HashSet<>()); + + assertThat(commandDetail.toString()).contains(CommandDetail.class.getSimpleName()); + } +} diff --git a/src/test/java/io/lettuce/core/models/role/RoleParserUnitTests.java b/src/test/java/io/lettuce/core/models/role/RoleParserUnitTests.java 
new file mode 100644 index 0000000000..4cafc26ce3 --- /dev/null +++ b/src/test/java/io/lettuce/core/models/role/RoleParserUnitTests.java @@ -0,0 +1,172 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.models.role; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; + +import java.util.ArrayList; +import java.util.List; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.internal.HostAndPort; +import io.lettuce.core.internal.LettuceLists; + +/** + * @author Mark Paluch + */ +class RoleParserUnitTests { + + private static final long REPLICATION_OFFSET_1 = 3167038L; + private static final long REPLICATION_OFFSET_2 = 3167039L; + private static final String LOCALHOST = "127.0.0.1"; + + @Test + void testMappings() { + assertThat(RoleParser.ROLE_MAPPING).hasSameSizeAs(RedisInstance.Role.values()); + assertThat(RoleParser.SLAVE_STATE_MAPPING).hasSameSizeAs(RedisSlaveInstance.State.values()); + } + + @Test + void emptyList() { + assertThatThrownBy(() -> RoleParser.parse(new ArrayList<>())).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void invalidFirstElement() { + assertThatThrownBy(() -> RoleParser.parse(LettuceLists.newList(new Object()))).isInstanceOf( + IllegalArgumentException.class); + } + + @Test + void invalidRole() { + assertThatThrownBy(() -> RoleParser.parse(LettuceLists.newList("blubb"))).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void master() { + + List<List<String>> slaves = LettuceLists.newList(LettuceLists.newList(LOCALHOST, "9001", "" + REPLICATION_OFFSET_2), + LettuceLists.newList(LOCALHOST, "9002", "3129543")); + + List input = LettuceLists.newList("master", REPLICATION_OFFSET_1, slaves); + + RedisInstance result = RoleParser.parse(input); + + assertThat(result.getRole()).isEqualTo(RedisInstance.Role.MASTER); + assertThat(result instanceof RedisMasterInstance).isTrue(); + + RedisMasterInstance instance = (RedisMasterInstance) result; + + assertThat(instance.getReplicationOffset()).isEqualTo(REPLICATION_OFFSET_1); + assertThat(instance.getSlaves()).hasSize(2); + + ReplicationPartner slave1 = instance.getSlaves().get(0); + assertThat(slave1.getHost().getHostText()).isEqualTo(LOCALHOST); + assertThat(slave1.getHost().getPort()).isEqualTo(9001); + assertThat(slave1.getReplicationOffset()).isEqualTo(REPLICATION_OFFSET_2); + + assertThat(instance.toString()).startsWith(RedisMasterInstance.class.getSimpleName()); + assertThat(slave1.toString()).startsWith(ReplicationPartner.class.getSimpleName()); + + } + + @Test + void slave() { + + List input = LettuceLists.newList("slave", LOCALHOST, 9000L, "connected", REPLICATION_OFFSET_1); + + RedisInstance result = RoleParser.parse(input); + + assertThat(result.getRole()).isEqualTo(RedisInstance.Role.SLAVE); + assertThat(result instanceof RedisSlaveInstance).isTrue(); + + RedisSlaveInstance instance = (RedisSlaveInstance) result; + 
assertThat(instance.getMaster().getReplicationOffset()).isEqualTo(REPLICATION_OFFSET_1); + assertThat(instance.getState()).isEqualTo(RedisSlaveInstance.State.CONNECTED); + + assertThat(instance.toString()).startsWith(RedisSlaveInstance.class.getSimpleName()); + + } + + @Test + void sentinel() { + + List input = LettuceLists.newList("sentinel", LettuceLists.newList("resque-master", "html-fragments-master", "stats-master")); + + RedisInstance result = RoleParser.parse(input); + + assertThat(result.getRole()).isEqualTo(RedisInstance.Role.SENTINEL); + assertThat(result instanceof RedisSentinelInstance).isTrue(); + + RedisSentinelInstance instance = (RedisSentinelInstance) result; + + assertThat(instance.getMonitoredMasters()).hasSize(3); + + assertThat(instance.toString()).startsWith(RedisSentinelInstance.class.getSimpleName()); + + } + + @Test + void sentinelWithoutMasters() { + + List input = LettuceLists.newList("sentinel"); + + RedisInstance result = RoleParser.parse(input); + RedisSentinelInstance instance = (RedisSentinelInstance) result; + + assertThat(instance.getMonitoredMasters()).hasSize(0); + + } + + @Test + void sentinelMastersIsNotAList() { + + List input = LettuceLists.newList("sentinel", ""); + + RedisInstance result = RoleParser.parse(input); + RedisSentinelInstance instance = (RedisSentinelInstance) result; + + assertThat(instance.getMonitoredMasters()).hasSize(0); + + } + + @Test + void testModelTest() { + + RedisMasterInstance master = new RedisMasterInstance(); + master.setReplicationOffset(1); + master.setSlaves(new ArrayList<>()); + assertThat(master.toString()).contains(RedisMasterInstance.class.getSimpleName()); + + RedisSlaveInstance slave = new RedisSlaveInstance(); + slave.setMaster(new ReplicationPartner()); + slave.setState(RedisSlaveInstance.State.CONNECT); + assertThat(slave.toString()).contains(RedisSlaveInstance.class.getSimpleName()); + + RedisSentinelInstance sentinel = new RedisSentinelInstance(); + sentinel.setMonitoredMasters(new ArrayList<>()); + assertThat(sentinel.toString()).contains(RedisSentinelInstance.class.getSimpleName()); + + ReplicationPartner partner = new ReplicationPartner(); + partner.setHost(HostAndPort.parse("localhost")); + partner.setReplicationOffset(12); + + assertThat(partner.toString()).contains(ReplicationPartner.class.getSimpleName()); + } +} diff --git a/src/test/java/io/lettuce/core/models/stream/PendingParserUnitTests.java b/src/test/java/io/lettuce/core/models/stream/PendingParserUnitTests.java new file mode 100644 index 0000000000..f3a9cdc70b --- /dev/null +++ b/src/test/java/io/lettuce/core/models/stream/PendingParserUnitTests.java @@ -0,0 +1,62 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.models.stream; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.time.Duration; +import java.util.Arrays; +import java.util.Collections; +import java.util.List; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.Range; + +/** + * @author Mark Paluch + */ +class PendingParserUnitTests { + + @Test + void shouldParseXpendingWithRangeOutput() { + + List result = PendingParser + .parseRange(Collections.singletonList(Arrays.asList("foo", "consumer", 1L, + 2L))); + + assertThat(result).hasSize(1); + + PendingMessage message = result.get(0); + + assertThat(message.getId()).isEqualTo("foo"); + assertThat(message.getConsumer()).isEqualTo("consumer"); + assertThat(message.getMsSinceLastDelivery()).isEqualTo(1); + assertThat(message.getSinceLastDelivery()).isEqualTo(Duration.ofMillis(1)); + assertThat(message.getRedeliveryCount()).isEqualTo(2); + } + + @Test + void shouldParseXpendingOutput() { + + PendingMessages result = PendingParser.parse(Arrays.asList(16L, "from", "to", + Collections.singletonList(Arrays.asList("consumer", 17L)))); + + assertThat(result.getCount()).isEqualTo(16); + assertThat(result.getMessageIds()).isEqualTo(Range.create("from", "to")); + assertThat(result.getConsumerMessageCount()).containsEntry("consumer", 17L); + } +} diff --git a/src/test/java/io/lettuce/core/output/BooleanListOutputUnitTests.java b/src/test/java/io/lettuce/core/output/BooleanListOutputUnitTests.java new file mode 100644 index 0000000000..8a038d54cc --- /dev/null +++ b/src/test/java/io/lettuce/core/output/BooleanListOutputUnitTests.java @@ -0,0 +1,54 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; + +import java.nio.ByteBuffer; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.codec.StringCodec; + +/** + * @author Mark Paluch + */ +class BooleanListOutputUnitTests { + + private BooleanListOutput sut = new BooleanListOutput<>(StringCodec.UTF8); + + @Test + void defaultSubscriberIsSet() { + assertThat(sut.getSubscriber()).isNotNull().isInstanceOf(ListSubscriber.class); + } + + @Test + void commandOutputCorrectlyDecoded() { + + sut.multi(3); + sut.set(1L); + sut.set(0L); + sut.set(2L); + + assertThat(sut.get()).contains(true, false, false); + } + + @Test + void setByteNotImplemented() { + assertThatThrownBy(() -> sut.set(ByteBuffer.wrap("4.567".getBytes()))).isInstanceOf(IllegalStateException.class); + } +} diff --git a/src/test/java/io/lettuce/core/output/GeoCoordinatesListOutputUnitTests.java b/src/test/java/io/lettuce/core/output/GeoCoordinatesListOutputUnitTests.java new file mode 100644 index 0000000000..7910d2f696 --- /dev/null +++ b/src/test/java/io/lettuce/core/output/GeoCoordinatesListOutputUnitTests.java @@ -0,0 +1,50 @@ +/* + * Copyright 2011-2020 the original author or authors. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; + +import java.nio.ByteBuffer; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.GeoCoordinates; +import io.lettuce.core.codec.StringCodec; + +/** + * @author Mark Paluch + */ +class GeoCoordinatesListOutputUnitTests { + + private GeoCoordinatesListOutput sut = new GeoCoordinatesListOutput<>(StringCodec.UTF8); + + @Test + void setIntegerShouldFail() { + assertThatThrownBy(() -> sut.set(123L)).isInstanceOf(IllegalStateException.class); + } + + @Test + void commandOutputCorrectlyDecoded() { + + sut.multi(2); + sut.set(ByteBuffer.wrap("1.234".getBytes())); + sut.set(ByteBuffer.wrap("4.567".getBytes())); + sut.multi(-1); + + assertThat(sut.get()).contains(new GeoCoordinates(1.234, 4.567)); + } +} diff --git a/src/test/java/io/lettuce/core/output/GeoCoordinatesValueListOutputUnitTests.java b/src/test/java/io/lettuce/core/output/GeoCoordinatesValueListOutputUnitTests.java new file mode 100644 index 0000000000..17c9f50c68 --- /dev/null +++ b/src/test/java/io/lettuce/core/output/GeoCoordinatesValueListOutputUnitTests.java @@ -0,0 +1,56 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.output; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; + +import java.nio.ByteBuffer; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.GeoCoordinates; +import io.lettuce.core.Value; +import io.lettuce.core.codec.StringCodec; + +/** + * @author Mark Paluch + */ +class GeoCoordinatesValueListOutputUnitTests { + + private GeoCoordinatesValueListOutput sut = new GeoCoordinatesValueListOutput<>(StringCodec.UTF8); + + @Test + void defaultSubscriberIsSet() { + assertThat(sut.getSubscriber()).isNotNull().isInstanceOf(ListSubscriber.class); + } + + @Test + void setIntegerShouldFail() { + assertThatThrownBy(() -> sut.set(123L)).isInstanceOf(IllegalStateException.class); + } + + @Test + void commandOutputCorrectlyDecoded() { + + sut.multi(2); + sut.set(ByteBuffer.wrap("1.234".getBytes())); + sut.set(ByteBuffer.wrap("4.567".getBytes())); + sut.multi(-1); + + assertThat(sut.get()).contains(Value.just(new GeoCoordinates(1.234, 4.567))); + } +} diff --git a/src/test/java/io/lettuce/core/output/GeoWithinListOutputUnitTests.java b/src/test/java/io/lettuce/core/output/GeoWithinListOutputUnitTests.java new file mode 100644 index 0000000000..fc13573c9f --- /dev/null +++ b/src/test/java/io/lettuce/core/output/GeoWithinListOutputUnitTests.java @@ -0,0 +1,106 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.output; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.nio.ByteBuffer; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.GeoCoordinates; +import io.lettuce.core.GeoWithin; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.codec.StringCodec; + +/** + * @author Mark Paluch + */ +class GeoWithinListOutputUnitTests { + + private GeoWithinListOutput sut = new GeoWithinListOutput<>(StringCodec.UTF8, false, false, false); + + @Test + void defaultSubscriberIsSet() { + + sut.multi(1); + assertThat(sut.getSubscriber()).isNotNull().isInstanceOf(ListSubscriber.class); + } + + @Test + void commandOutputKeyOnlyDecoded() { + + sut.multi(1); + sut.set(ByteBuffer.wrap("key".getBytes())); + sut.set(ByteBuffer.wrap("4.567".getBytes())); + sut.complete(1); + + assertThat(sut.get()).contains(new GeoWithin<>("key", null, null, null)); + } + + @Test + void commandOutputKeyAndDistanceDecoded() { + + sut = new GeoWithinListOutput<>(StringCodec.UTF8, true, false, false); + + sut.multi(1); + sut.set(ByteBuffer.wrap("key".getBytes())); + sut.set(ByteBuffer.wrap("4.567".getBytes())); + sut.complete(1); + + assertThat(sut.get()).contains(new GeoWithin<>("key", 4.567, null, null)); + } + + @Test + void commandOutputKeyAndHashDecoded() { + + sut = new GeoWithinListOutput<>(StringCodec.UTF8, false, true, false); + + sut.multi(1); + sut.set(ByteBuffer.wrap("key".getBytes())); + sut.set(4567); + sut.complete(1); + + assertThat(sut.get()).contains(new GeoWithin<>("key", null, 4567L, null)); + } + + @Test + void commandOutputLongKeyAndHashDecoded() { + + GeoWithinListOutput sut = new GeoWithinListOutput<>((RedisCodec) StringCodec.UTF8, false, true, false); + + sut.multi(1); + sut.set(1234); + sut.set(4567); + sut.complete(1); + + assertThat(sut.get()).contains(new GeoWithin<>(1234L, null, 4567L, null)); + } + + @Test + void commandOutputKeyAndCoordinatesDecoded() { + + sut = new GeoWithinListOutput<>(StringCodec.UTF8, false, false, true); + + sut.multi(1); + sut.set(ByteBuffer.wrap("key".getBytes())); + sut.set(ByteBuffer.wrap("1.234".getBytes())); + sut.set(ByteBuffer.wrap("4.567".getBytes())); + sut.complete(1); + + assertThat(sut.get()).contains(new GeoWithin<>("key", null, null, new GeoCoordinates(1.234, 4.567))); + } +} diff --git a/src/test/java/io/lettuce/core/output/ListOutputUnitTests.java b/src/test/java/io/lettuce/core/output/ListOutputUnitTests.java new file mode 100644 index 0000000000..4d56cd2ac0 --- /dev/null +++ b/src/test/java/io/lettuce/core/output/ListOutputUnitTests.java @@ -0,0 +1,99 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */
+package io.lettuce.core.output;
+
+import static org.assertj.core.api.Assertions.assertThat;
+import static org.assertj.core.api.Assertions.assertThatThrownBy;
+
+import java.nio.ByteBuffer;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.List;
+
+import org.junit.jupiter.params.ParameterizedTest;
+import org.junit.jupiter.params.provider.MethodSource;
+
+import io.lettuce.core.codec.StringCodec;
+
+/**
+ * @author Mark Paluch
+ */
+class ListOutputUnitTests {
+
+    static Collection<Fixture> parameters() {
+
+        KeyListOutput<String, String> keyListOutput = new KeyListOutput<>(StringCodec.UTF8);
+        Fixture keyList = new Fixture(keyListOutput, keyListOutput, "hello world".getBytes(), "hello world");
+
+        ValueListOutput<String, String> valueListOutput = new ValueListOutput<>(StringCodec.UTF8);
+        Fixture valueList = new Fixture(valueListOutput, valueListOutput, "hello world".getBytes(), "hello world");
+
+        StringListOutput<String, String> stringListOutput = new StringListOutput<>(StringCodec.UTF8);
+        Fixture stringList = new Fixture(stringListOutput, stringListOutput, "hello world".getBytes(), "hello world");
+
+        return Arrays.asList(keyList, valueList, stringList);
+    }
+
+    @ParameterizedTest
+    @MethodSource("parameters")
+    void settingEmptySubscriberShouldFail(Fixture fixture) {
+        assertThatThrownBy(() -> fixture.streamingOutput.setSubscriber(null)).isInstanceOf(IllegalArgumentException.class);
+    }
+
+    @ParameterizedTest
+    @MethodSource("parameters")
+    void defaultSubscriberIsSet(Fixture fixture) {
+        fixture.commandOutput.multi(1);
+        assertThat(fixture.streamingOutput.getSubscriber()).isNotNull().isInstanceOf(ListSubscriber.class);
+    }
+
+    @ParameterizedTest
+    @MethodSource("parameters")
+    void setIntegerShouldFail(Fixture fixture) {
+        assertThatThrownBy(() -> fixture.commandOutput.set(123L)).isInstanceOf(IllegalStateException.class);
+    }
+
+    @ParameterizedTest
+    @MethodSource("parameters")
+    void setValueShouldConvert(Fixture fixture) {
+
+        fixture.commandOutput.multi(1);
+        fixture.commandOutput.set(ByteBuffer.wrap(fixture.valueBytes));
+
+        assertThat(fixture.commandOutput.get()).contains(fixture.value);
+    }
+
+    static class Fixture {
+
+        final CommandOutput<Object, Object, List<Object>> commandOutput;
+        final StreamingOutput<?> streamingOutput;
+        final byte[] valueBytes;
+        final Object value;
+
+        Fixture(CommandOutput<?, ?, ?> commandOutput, StreamingOutput<?> streamingOutput, byte[] valueBytes, Object value) {
+
+            this.commandOutput = (CommandOutput) commandOutput;
+            this.streamingOutput = streamingOutput;
+            this.valueBytes = valueBytes;
+            this.value = value;
+        }
+
+        @Override
+        public String toString() {
+            return commandOutput.getClass().getSimpleName() + "/" + value;
+        }
+    }
+}
diff --git a/src/test/java/io/lettuce/core/output/MultiOutputUnitTests.java b/src/test/java/io/lettuce/core/output/MultiOutputUnitTests.java
new file mode 100644
index 0000000000..c261e89d28
--- /dev/null
+++ b/src/test/java/io/lettuce/core/output/MultiOutputUnitTests.java
@@ -0,0 +1,78 @@
+/*
+ * Copyright 2018-2020 the original author or authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * https://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.nio.ByteBuffer; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.protocol.Command; +import io.lettuce.core.protocol.CommandType; + +/** + * @author Mark Paluch + */ +class MultiOutputUnitTests { + + @Test + void shouldCompleteCommand() { + + MultiOutput output = new MultiOutput<>(StringCodec.UTF8); + Command command = new Command<>(CommandType.APPEND, new StatusOutput<>(StringCodec.UTF8)); + + output.add(command); + + output.multi(1); + output.set(ByteBuffer.wrap("OK".getBytes())); + output.complete(1); + + assertThat(command.getOutput().get()).isEqualTo("OK"); + } + + @Test + void shouldReportErrorForCommand() { + + MultiOutput output = new MultiOutput<>(StringCodec.UTF8); + Command command = new Command<>(CommandType.APPEND, new StatusOutput<>(StringCodec.UTF8)); + + output.add(command); + + output.multi(1); + output.setError(ByteBuffer.wrap("Fail".getBytes())); + output.complete(1); + + assertThat(command.getOutput().getError()).isEqualTo("Fail"); + assertThat(output.getError()).isNull(); + } + + @Test + void shouldFailMulti() { + + MultiOutput output = new MultiOutput<>(StringCodec.UTF8); + Command command = new Command<>(CommandType.APPEND, new StatusOutput<>(StringCodec.UTF8)); + + output.add(command); + + output.setError(ByteBuffer.wrap("Fail".getBytes())); + output.complete(0); + + assertThat(command.getOutput().getError()).isNull(); + assertThat(output.getError()).isEqualTo("Fail"); + } +} diff --git a/src/test/java/io/lettuce/core/output/NestedMultiOutputUnitTests.java b/src/test/java/io/lettuce/core/output/NestedMultiOutputUnitTests.java new file mode 100644 index 0000000000..18fdc5eed9 --- /dev/null +++ b/src/test/java/io/lettuce/core/output/NestedMultiOutputUnitTests.java @@ -0,0 +1,38 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.nio.charset.StandardCharsets; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.codec.StringCodec; + +/** + * @author Mark Paluch + */ +class NestedMultiOutputUnitTests { + + @Test + void nestedMultiError() { + + NestedMultiOutput output = new NestedMultiOutput<>(StringCodec.UTF8); + output.setError(StandardCharsets.US_ASCII.encode("Oops!")); + assertThat(output.getError()).isNotNull(); + } +} diff --git a/src/test/java/io/lettuce/core/output/ReplayOutputUnitTests.java b/src/test/java/io/lettuce/core/output/ReplayOutputUnitTests.java new file mode 100644 index 0000000000..a820b9d39c --- /dev/null +++ b/src/test/java/io/lettuce/core/output/ReplayOutputUnitTests.java @@ -0,0 +1,82 @@ +/* + * Copyright 2018-2020 the original author or authors. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.nio.ByteBuffer; +import java.util.Collections; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.codec.StringCodec; + +/** + * @author Mark Paluch + */ +class ReplayOutputUnitTests { + + @Test + void shouldReplaySimpleCompletion() { + + ReplayOutput replay = new ReplayOutput<>(); + ValueOutput target = new ValueOutput<>(StringCodec.ASCII); + + replay.multi(1); + replay.set(ByteBuffer.wrap("foo".getBytes())); + replay.complete(1); + + replay.replay(target); + + assertThat(target.get()).isEqualTo("foo"); + } + + @Test + void shouldReplayNestedCompletion() { + + ReplayOutput replay = new ReplayOutput<>(); + ArrayOutput target = new ArrayOutput<>(StringCodec.ASCII); + + replay.multi(1); + replay.multi(1); + replay.set(ByteBuffer.wrap("foo".getBytes())); + replay.complete(2); + + replay.multi(1); + replay.set(ByteBuffer.wrap("bar".getBytes())); + replay.complete(2); + replay.complete(1); + + replay.replay(target); + + assertThat(target.get().get(0)).isEqualTo(Collections.singletonList("foo")); + assertThat(target.get().get(1)).isEqualTo(Collections.singletonList("bar")); + } + + @Test + void shouldDecodeErrorResponse() { + + ReplayOutput replay = new ReplayOutput<>(); + ValueOutput target = new ValueOutput<>(StringCodec.ASCII); + + replay.setError(ByteBuffer.wrap("foo".getBytes())); + + replay.replay(target); + + assertThat(replay.getError()).isEqualTo("foo"); + assertThat(target.getError()).isEqualTo("foo"); + } +} diff --git a/src/test/java/io/lettuce/core/output/ScoredValueListOutputUnitTests.java b/src/test/java/io/lettuce/core/output/ScoredValueListOutputUnitTests.java new file mode 100644 index 0000000000..def2cef1ad --- /dev/null +++ b/src/test/java/io/lettuce/core/output/ScoredValueListOutputUnitTests.java @@ -0,0 +1,57 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.output; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; + +import java.nio.ByteBuffer; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.ScoredValue; +import io.lettuce.core.codec.StringCodec; + +/** + * @author Mark Paluch + */ +class ScoredValueListOutputUnitTests { + + private ScoredValueListOutput sut = new ScoredValueListOutput<>(StringCodec.UTF8); + + @Test + void defaultSubscriberIsSet() { + + sut.multi(1); + assertThat(sut.getSubscriber()).isNotNull().isInstanceOf(ListSubscriber.class); + } + + @Test + void setIntegerShouldFail() { + assertThatThrownBy(() -> sut.set(123L)).isInstanceOf(IllegalStateException. class); + } + + @Test + void commandOutputCorrectlyDecoded() { + + sut.multi(1); + sut.set(ByteBuffer.wrap("key".getBytes())); + sut.set(ByteBuffer.wrap("4.567".getBytes())); + sut.multi(-1); + + assertThat(sut.get()).contains(ScoredValue.fromNullable(4.567, "key")); + } +} diff --git a/src/test/java/io/lettuce/core/output/SocketAddressOutputUnitTests.java b/src/test/java/io/lettuce/core/output/SocketAddressOutputUnitTests.java new file mode 100644 index 0000000000..20ae58ad01 --- /dev/null +++ b/src/test/java/io/lettuce/core/output/SocketAddressOutputUnitTests.java @@ -0,0 +1,44 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.net.InetSocketAddress; +import java.nio.ByteBuffer; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.codec.StringCodec; + +/** + * @author Mark Paluch + */ +class SocketAddressOutputUnitTests { + + @Test + void shouldReportSocketAddress() { + + SocketAddressOutput output = new SocketAddressOutput<>(StringCodec.ASCII); + + output.set(ByteBuffer.wrap("localhost".getBytes())); + output.set(ByteBuffer.wrap("6379".getBytes())); + + output.complete(0); + + assertThat(output.get()).isNotNull().isInstanceOf(InetSocketAddress.class); + } +} diff --git a/src/test/java/io/lettuce/core/output/StreamReadOutputUnitTests.java b/src/test/java/io/lettuce/core/output/StreamReadOutputUnitTests.java new file mode 100644 index 0000000000..d476daae13 --- /dev/null +++ b/src/test/java/io/lettuce/core/output/StreamReadOutputUnitTests.java @@ -0,0 +1,193 @@ +/* + * Copyright 2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.nio.ByteBuffer; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.StreamMessage; +import io.lettuce.core.codec.StringCodec; + +/** + * Unit tests for {@link StreamReadOutput}. + * + * @author Mark Paluch + */ +class StreamReadOutputUnitTests { + + private StreamReadOutput sut = new StreamReadOutput<>(StringCodec.UTF8); + + @Test + void shouldDecodeSingleEntryMessage() { + + sut.multi(2); + sut.set(ByteBuffer.wrap("stream-key".getBytes())); + sut.complete(1); + sut.multi(1); + sut.multi(2); + sut.set(ByteBuffer.wrap("1234-12".getBytes())); + sut.complete(3); + sut.multi(2); + sut.set(ByteBuffer.wrap("key".getBytes())); + sut.complete(4); + sut.set(ByteBuffer.wrap("value".getBytes())); + sut.complete(4); + sut.complete(3); + sut.complete(2); + sut.complete(1); + sut.complete(0); + + assertThat(sut.get()).hasSize(1); + StreamMessage streamMessage = sut.get().get(0); + + assertThat(streamMessage.getId()).isEqualTo("1234-12"); + assertThat(streamMessage.getStream()).isEqualTo("stream-key"); + assertThat(streamMessage.getBody()).hasSize(1).containsEntry("key", "value"); + } + + @Test + void shouldDecodeMultiEntryMessage() { + + sut.multi(2); + sut.set(ByteBuffer.wrap("stream-key".getBytes())); + sut.complete(1); + sut.multi(1); + sut.multi(2); + sut.set(ByteBuffer.wrap("1234-12".getBytes())); + sut.complete(3); + sut.multi(4); + sut.set(ByteBuffer.wrap("key1".getBytes())); + sut.complete(4); + sut.set(ByteBuffer.wrap("value1".getBytes())); + sut.complete(4); + sut.set(ByteBuffer.wrap("key2".getBytes())); + sut.complete(4); + sut.set(ByteBuffer.wrap("value2".getBytes())); + sut.complete(4); + sut.complete(3); + sut.complete(2); + sut.complete(1); + sut.complete(0); + + assertThat(sut.get()).hasSize(1); + StreamMessage streamMessage = sut.get().get(0); + + assertThat(streamMessage.getId()).isEqualTo("1234-12"); + assertThat(streamMessage.getStream()).isEqualTo("stream-key"); + assertThat(streamMessage.getBody()).hasSize(2).containsEntry("key1", "value1").containsEntry("key2", "value2"); + } + + @Test + void shouldDecodeTwoSingleEntryMessage() { + + sut.multi(2); + sut.set(ByteBuffer.wrap("stream-key".getBytes())); + sut.complete(1); + sut.multi(2); + + sut.multi(2); + sut.set(ByteBuffer.wrap("1234-11".getBytes())); + sut.complete(3); + sut.multi(2); + sut.set(ByteBuffer.wrap("key1".getBytes())); + sut.complete(4); + sut.set(ByteBuffer.wrap("value1".getBytes())); + sut.complete(4); + sut.complete(3); + sut.complete(2); + + sut.multi(2); + sut.set(ByteBuffer.wrap("1234-22".getBytes())); + sut.complete(3); + sut.multi(2); + sut.set(ByteBuffer.wrap("key2".getBytes())); + sut.complete(4); + sut.set(ByteBuffer.wrap("value2".getBytes())); + sut.complete(4); + sut.complete(3); + sut.complete(2); + + sut.complete(1); + sut.complete(0); + + assertThat(sut.get()).hasSize(2); + StreamMessage streamMessage1 = sut.get().get(0); + + assertThat(streamMessage1.getId()).isEqualTo("1234-11"); + assertThat(streamMessage1.getStream()).isEqualTo("stream-key"); + assertThat(streamMessage1.getBody()).hasSize(1).containsEntry("key1", "value1"); + + StreamMessage streamMessage2 = sut.get().get(1); + + assertThat(streamMessage2.getId()).isEqualTo("1234-22"); + assertThat(streamMessage2.getStream()).isEqualTo("stream-key"); + assertThat(streamMessage2.getBody()).hasSize(1).containsEntry("key2", 
"value2"); + } + + @Test + void shouldDecodeFromTwoStreams() { + + sut.multi(4); + + sut.set(ByteBuffer.wrap("stream1".getBytes())); + sut.complete(1); + sut.multi(1); + sut.multi(2); + sut.set(ByteBuffer.wrap("1234-11".getBytes())); + sut.complete(3); + sut.multi(2); + sut.set(ByteBuffer.wrap("key1".getBytes())); + sut.complete(4); + sut.set(ByteBuffer.wrap("value1".getBytes())); + sut.complete(4); + sut.complete(3); + sut.complete(2); + sut.complete(1); + + sut.set(ByteBuffer.wrap("stream2".getBytes())); + sut.complete(1); + sut.multi(1); + sut.multi(2); + sut.set(ByteBuffer.wrap("1234-22".getBytes())); + sut.complete(3); + sut.multi(2); + sut.set(ByteBuffer.wrap("key2".getBytes())); + sut.complete(4); + sut.set(ByteBuffer.wrap("value2".getBytes())); + sut.complete(4); + sut.complete(3); + sut.complete(2); + sut.complete(1); + + sut.complete(0); + + assertThat(sut.get()).hasSize(2); + StreamMessage streamMessage1 = sut.get().get(0); + + assertThat(streamMessage1.getId()).isEqualTo("1234-11"); + assertThat(streamMessage1.getStream()).isEqualTo("stream1"); + assertThat(streamMessage1.getBody()).hasSize(1).containsEntry("key1", "value1"); + + StreamMessage streamMessage2 = sut.get().get(1); + + assertThat(streamMessage2.getId()).isEqualTo("1234-22"); + assertThat(streamMessage2.getStream()).isEqualTo("stream2"); + assertThat(streamMessage2.getBody()).hasSize(1).containsEntry("key2", "value2"); + } +} diff --git a/src/test/java/io/lettuce/core/protocol/AsyncCommandUnitTests.java b/src/test/java/io/lettuce/core/protocol/AsyncCommandUnitTests.java new file mode 100644 index 0000000000..20b665cab3 --- /dev/null +++ b/src/test/java/io/lettuce/core/protocol/AsyncCommandUnitTests.java @@ -0,0 +1,230 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.protocol; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; + +import java.nio.charset.StandardCharsets; +import java.util.concurrent.CancellationException; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.TimeoutException; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; + +import io.lettuce.core.LettuceFutures; +import io.lettuce.core.RedisCommandExecutionException; +import io.lettuce.core.RedisCommandInterruptedException; +import io.lettuce.core.RedisException; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.internal.Futures; +import io.lettuce.core.output.CommandOutput; +import io.lettuce.core.output.StatusOutput; +import io.lettuce.test.TestFutures; + +/** + * Unit tests for {@link AsyncCommand}. 
+ * + * @author Mark Paluch + */ +public class AsyncCommandUnitTests { + + private RedisCodec codec = StringCodec.UTF8; + private Command internal; + private AsyncCommand sut; + + @BeforeEach + final void createCommand() { + CommandOutput output = new StatusOutput<>(codec); + internal = new Command<>(CommandType.INFO, output, null); + sut = new AsyncCommand<>(internal); + } + + @Test + void isCancelled() { + assertThat(sut.isCancelled()).isFalse(); + assertThat(sut.cancel(true)).isTrue(); + assertThat(sut.isCancelled()).isTrue(); + assertThat(sut.cancel(true)).isTrue(); + } + + @Test + void isDone() { + assertThat(sut.isDone()).isFalse(); + sut.complete(); + assertThat(sut.isDone()).isTrue(); + } + + @Test + void awaitAllCompleted() { + sut.complete(); + assertThat(LettuceFutures.awaitAll(-1, TimeUnit.MILLISECONDS, sut)).isTrue(); + assertThat(LettuceFutures.awaitAll(0, TimeUnit.MILLISECONDS, sut)).isTrue(); + assertThat(Futures.await(5, TimeUnit.MILLISECONDS, sut)).isTrue(); + } + + @Test + void awaitAll() { + assertThat(Futures.awaitAll(1, TimeUnit.NANOSECONDS, sut)).isFalse(); + } + + @Test + void awaitReturnsCompleted() { + sut.getOutput().set(StandardCharsets.US_ASCII.encode("one")); + sut.complete(); + assertThat(LettuceFutures.awaitOrCancel(sut, -1, TimeUnit.NANOSECONDS)).isEqualTo("one"); + assertThat(LettuceFutures.awaitOrCancel(sut, 0, TimeUnit.NANOSECONDS)).isEqualTo("one"); + assertThat(LettuceFutures.awaitOrCancel(sut, 1, TimeUnit.NANOSECONDS)).isEqualTo("one"); + } + + @Test + void awaitWithExecutionException() { + sut.completeExceptionally(new RedisException("error")); + assertThatThrownBy(() -> LettuceFutures.awaitOrCancel(sut, 1, TimeUnit.SECONDS)).isInstanceOf(RedisException.class); + } + + @Test + void awaitWithCancelledCommand() { + sut.cancel(); + assertThatThrownBy(() -> LettuceFutures.awaitOrCancel(sut, 5, TimeUnit.SECONDS)) + .isInstanceOf(CancellationException.class); + } + + @Test + void awaitAllWithExecutionException() { + sut.completeExceptionally(new RedisCommandExecutionException("error")); + + assertThatThrownBy(() -> Futures.await(0, TimeUnit.SECONDS, sut)).isInstanceOf(RedisException.class); + } + + @Test + void getError() { + sut.getOutput().setError("error"); + assertThat(internal.getError()).isEqualTo("error"); + } + + @Test + void getErrorAsync() { + sut.getOutput().setError("error"); + sut.complete(); + assertThat(sut).isCompletedExceptionally(); + } + + @Test + void completeExceptionally() { + sut.completeExceptionally(new RuntimeException("test")); + assertThat(internal.getError()).isEqualTo("test"); + + assertThat(sut).isCompletedExceptionally(); + } + + @Test + void asyncGet() { + sut.getOutput().set(StandardCharsets.US_ASCII.encode("one")); + sut.complete(); + assertThat(TestFutures.getOrTimeout(sut.toCompletableFuture())).isEqualTo("one"); + sut.getOutput().toString(); + } + + @Test + void customKeyword() { + sut = new AsyncCommand<>(new Command<>(MyKeywords.DUMMY, new StatusOutput<>(codec), null)); + + assertThat(sut.toString()).contains(MyKeywords.DUMMY.name()); + } + + @Test + void customKeywordWithArgs() { + sut = new AsyncCommand<>(new Command<>(MyKeywords.DUMMY, null, new CommandArgs<>(codec))); + sut.getArgs().add(MyKeywords.DUMMY); + assertThat(sut.getArgs().toString()).contains(MyKeywords.DUMMY.name()); + } + + @Test + void getWithTimeout() throws Exception { + sut.getOutput().set(StandardCharsets.US_ASCII.encode("one")); + sut.complete(); + + assertThat(sut.get(0, TimeUnit.MILLISECONDS)).isEqualTo("one"); + } + + @Test + void 
getTimeout() { + assertThatThrownBy(() -> sut.get(2, TimeUnit.MILLISECONDS)).isInstanceOf(TimeoutException.class); + } + + @Test + void awaitTimeout() { + assertThat(sut.await(2, TimeUnit.MILLISECONDS)).isFalse(); + } + + @Test + void getInterrupted() { + Thread.currentThread().interrupt(); + assertThatThrownBy(() -> sut.get()).isInstanceOf(InterruptedException.class); + } + + @Test + void getInterrupted2() { + Thread.currentThread().interrupt(); + assertThatThrownBy(() -> sut.get(5, TimeUnit.MILLISECONDS)).isInstanceOf(InterruptedException.class); + } + + @Test + void awaitInterrupted2() { + Thread.currentThread().interrupt(); + assertThatThrownBy(() -> sut.await(5, TimeUnit.MILLISECONDS)).isInstanceOf(RedisCommandInterruptedException.class); + } + + @Test + void outputSubclassOverride1() { + CommandOutput output = new CommandOutput(codec, null) { + @Override + public String get() throws RedisException { + return null; + } + }; + assertThatThrownBy(() -> output.set(null)).isInstanceOf(IllegalStateException.class); + } + + @Test + void outputSubclassOverride2() { + CommandOutput output = new CommandOutput(codec, null) { + @Override + public String get() throws RedisException { + return null; + } + }; + assertThatThrownBy(() -> output.set(0)).isInstanceOf(IllegalStateException.class); + } + + @Test + void sillyTestsForEmmaCoverage() { + assertThat(CommandType.valueOf("APPEND")).isEqualTo(CommandType.APPEND); + assertThat(CommandKeyword.valueOf("AFTER")).isEqualTo(CommandKeyword.AFTER); + } + + private enum MyKeywords implements ProtocolKeyword { + DUMMY; + + @Override + public byte[] getBytes() { + return name().getBytes(); + } + } +} diff --git a/src/test/java/io/lettuce/core/protocol/CommandArgsUnitTests.java b/src/test/java/io/lettuce/core/protocol/CommandArgsUnitTests.java new file mode 100644 index 0000000000..1733b2115a --- /dev/null +++ b/src/test/java/io/lettuce/core/protocol/CommandArgsUnitTests.java @@ -0,0 +1,178 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.protocol; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.nio.ByteBuffer; +import java.nio.charset.StandardCharsets; +import java.util.Arrays; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.codec.ByteArrayCodec; +import io.lettuce.core.codec.StringCodec; +import io.netty.buffer.ByteBuf; +import io.netty.buffer.Unpooled; + +/** + * @author Mark Paluch + */ +class CommandArgsUnitTests { + + @Test + void getFirstIntegerShouldReturnNull() { + + CommandArgs args = new CommandArgs<>(StringCodec.UTF8).add("foo"); + + assertThat(CommandArgsAccessor.getFirstInteger(args)).isNull(); + } + + @Test + void getFirstIntegerShouldReturnFirstInteger() { + + CommandArgs args = new CommandArgs<>(StringCodec.UTF8).add(1L).add(127).add(128).add(129).add(0) + .add(-1); + + assertThat(CommandArgsAccessor.getFirstInteger(args)).isEqualTo(1L); + } + + @Test + void getFirstIntegerShouldReturnFirstNegativeInteger() { + + CommandArgs args = new CommandArgs<>(StringCodec.UTF8).add(-1L).add(-127).add(-128).add(-129); + + assertThat(CommandArgsAccessor.getFirstInteger(args)).isEqualTo(-1L); + } + + @Test + void getFirstStringShouldReturnNull() { + + CommandArgs args = new CommandArgs<>(StringCodec.UTF8).add(1); + + assertThat(CommandArgsAccessor.getFirstString(args)).isNull(); + } + + @Test + void getFirstStringShouldReturnFirstString() { + + CommandArgs args = new CommandArgs<>(StringCodec.UTF8).add("one").add("two"); + + assertThat(CommandArgsAccessor.getFirstString(args)).isEqualTo("one"); + } + + @Test + void getFirstCharArrayShouldReturnCharArray() { + + CommandArgs args = new CommandArgs<>(StringCodec.UTF8).add(1L).add("two".toCharArray()); + + assertThat(CommandArgsAccessor.getFirstCharArray(args)).isEqualTo("two".toCharArray()); + } + + @Test + void getFirstCharArrayShouldReturnNull() { + + CommandArgs args = new CommandArgs<>(StringCodec.UTF8).add(1L); + + assertThat(CommandArgsAccessor.getFirstCharArray(args)).isNull(); + } + + @Test + void getFirstEncodedKeyShouldReturnNull() { + + CommandArgs args = new CommandArgs<>(StringCodec.UTF8).add(1L); + + assertThat(CommandArgsAccessor.getFirstString(args)).isNull(); + } + + @Test + void getFirstEncodedKeyShouldReturnFirstKey() { + + CommandArgs args = new CommandArgs<>(StringCodec.UTF8).addKey("one").addKey("two"); + + assertThat(CommandArgsAccessor.encodeFirstKey(args)).isEqualTo(ByteBuffer.wrap("one".getBytes())); + } + + @Test + void addValues() { + + CommandArgs args = new CommandArgs<>(StringCodec.UTF8).addValues(Arrays.asList("1", "2")); + + ByteBuf buffer = Unpooled.buffer(); + args.encode(buffer); + + ByteBuf expected = Unpooled.buffer(); + expected.writeBytes(("$1\r\n" + "1\r\n" + "$1\r\n" + "2\r\n").getBytes()); + + assertThat(buffer.toString(StandardCharsets.US_ASCII)).isEqualTo(expected.toString(StandardCharsets.US_ASCII)); + } + + @Test + void addByte() { + + CommandArgs args = new CommandArgs<>(StringCodec.UTF8).add("one".getBytes()); + + ByteBuf buffer = Unpooled.buffer(); + args.encode(buffer); + + ByteBuf expected = Unpooled.buffer(); + expected.writeBytes(("$3\r\n" + "one\r\n").getBytes()); + + assertThat(buffer.toString(StandardCharsets.US_ASCII)).isEqualTo(expected.toString(StandardCharsets.US_ASCII)); + } + + @Test + void addByteUsingByteCodec() { + + CommandArgs args = new CommandArgs<>(ByteArrayCodec.INSTANCE).add("one".getBytes()); + + ByteBuf buffer = Unpooled.buffer(); + args.encode(buffer); + + ByteBuf expected = Unpooled.buffer(); + expected.writeBytes(("$3\r\n" + 
"one\r\n").getBytes()); + + assertThat(buffer.toString(StandardCharsets.US_ASCII)).isEqualTo(expected.toString(StandardCharsets.US_ASCII)); + } + + @Test + void addValueUsingByteCodec() { + + CommandArgs args = new CommandArgs<>(ByteArrayCodec.INSTANCE).addValue("one".getBytes()); + + ByteBuf buffer = Unpooled.buffer(); + args.encode(buffer); + + ByteBuf expected = Unpooled.buffer(); + expected.writeBytes(("$3\r\n" + "one\r\n").getBytes()); + + assertThat(buffer.toString(StandardCharsets.US_ASCII)).isEqualTo(expected.toString(StandardCharsets.US_ASCII)); + } + + @Test + void addKeyUsingByteCodec() { + + CommandArgs args = new CommandArgs<>(ByteArrayCodec.INSTANCE).addValue("one".getBytes()); + + ByteBuf buffer = Unpooled.buffer(); + args.encode(buffer); + + ByteBuf expected = Unpooled.buffer(); + expected.writeBytes(("$3\r\n" + "one\r\n").getBytes()); + + assertThat(buffer.toString(StandardCharsets.US_ASCII)).isEqualTo(expected.toString(StandardCharsets.US_ASCII)); + } +} diff --git a/src/test/java/io/lettuce/core/protocol/CommandHandlerUnitTests.java b/src/test/java/io/lettuce/core/protocol/CommandHandlerUnitTests.java new file mode 100644 index 0000000000..c8cd685c0d --- /dev/null +++ b/src/test/java/io/lettuce/core/protocol/CommandHandlerUnitTests.java @@ -0,0 +1,482 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */
+package io.lettuce.core.protocol;
+
+import static org.assertj.core.api.Assertions.assertThat;
+import static org.assertj.core.api.Fail.fail;
+import static org.mockito.AdditionalMatchers.gt;
+import static org.mockito.Matchers.any;
+import static org.mockito.Mockito.*;
+
+import java.io.IOException;
+import java.net.Inet4Address;
+import java.net.InetSocketAddress;
+import java.time.Duration;
+import java.util.*;
+
+import org.apache.logging.log4j.Level;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.core.LoggerContext;
+import org.apache.logging.log4j.core.config.Configuration;
+import org.apache.logging.log4j.core.config.LoggerConfig;
+import org.junit.jupiter.api.AfterAll;
+import org.junit.jupiter.api.BeforeAll;
+import org.junit.jupiter.api.BeforeEach;
+import org.junit.jupiter.api.Test;
+import org.junit.jupiter.api.extension.ExtendWith;
+import org.mockito.ArgumentCaptor;
+import org.mockito.Mock;
+import org.mockito.junit.jupiter.MockitoExtension;
+import org.mockito.junit.jupiter.MockitoSettings;
+import org.mockito.quality.Strictness;
+import org.springframework.test.util.ReflectionTestUtils;
+
+import io.lettuce.core.ClientOptions;
+import io.lettuce.core.RedisException;
+import io.lettuce.core.codec.StringCodec;
+import io.lettuce.core.metrics.CommandLatencyCollector;
+import io.lettuce.core.output.StatusOutput;
+import io.lettuce.core.resource.ClientResources;
+import io.lettuce.core.tracing.Tracing;
+import io.lettuce.test.Delay;
+import io.netty.buffer.ByteBuf;
+import io.netty.buffer.ByteBufAllocator;
+import io.netty.buffer.Unpooled;
+import io.netty.channel.*;
+import io.netty.util.concurrent.ImmediateEventExecutor;
+
+/**
+ * @author Mark Paluch
+ * @author Jongyeol Choi
+ * @author Gavin Cook
+ */
+@ExtendWith(MockitoExtension.class)
+@MockitoSettings(strictness = Strictness.LENIENT)
+class CommandHandlerUnitTests {
+
+    private Queue<RedisCommand<String, String, ?>> stack;
+
+    private CommandHandler sut;
+
+    private final Command<String, String, String> command = new Command<>(CommandType.APPEND, new StatusOutput<>(
+            StringCodec.UTF8), null);
+
+    @Mock
+    private ChannelHandlerContext context;
+
+    @Mock
+    private Channel channel;
+
+    @Mock
+    private ChannelConfig config;
+
+    @Mock
+    private ChannelPipeline pipeline;
+
+    @Mock
+    private EventLoop eventLoop;
+
+    @Mock
+    private ClientResources clientResources;
+
+    @Mock
+    private Endpoint endpoint;
+
+    @Mock
+    private ChannelPromise promise;
+
+    @Mock
+    private CommandLatencyCollector latencyCollector;
+
+    @BeforeAll
+    static void beforeClass() {
+        LoggerContext ctx = (LoggerContext) LogManager.getContext();
+        Configuration config = ctx.getConfiguration();
+        LoggerConfig loggerConfig = config.getLoggerConfig(CommandHandler.class.getName());
+        loggerConfig.setLevel(Level.ALL);
+    }
+
+    @AfterAll
+    static void afterClass() {
+        LoggerContext ctx = (LoggerContext) LogManager.getContext();
+        Configuration config = ctx.getConfiguration();
+        LoggerConfig loggerConfig = config.getLoggerConfig(CommandHandler.class.getName());
+        loggerConfig.setLevel(null);
+    }
+
+    @BeforeEach
+    void before() throws Exception {
+
+        when(context.channel()).thenReturn(channel);
+        when(context.alloc()).thenReturn(ByteBufAllocator.DEFAULT);
+        when(channel.pipeline()).thenReturn(pipeline);
+        when(channel.eventLoop()).thenReturn(eventLoop);
+        when(channel.remoteAddress()).thenReturn(new InetSocketAddress(Inet4Address.getLocalHost(), 1234));
+        when(channel.localAddress()).thenReturn(new InetSocketAddress(Inet4Address.getLocalHost(), 1234));
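+        // descriptive note (added): the remaining stubs run event-loop submissions inline and expose the latency collector and disabled tracing via ClientResources
+        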
when(channel.config()).thenReturn(config); + when(eventLoop.submit(any(Runnable.class))).thenAnswer(invocation -> { + Runnable r = (Runnable) invocation.getArguments()[0]; + r.run(); + return null; + }); + + when(latencyCollector.isEnabled()).thenReturn(true); + when(clientResources.commandLatencyCollector()).thenReturn(latencyCollector); + when(clientResources.tracing()).thenReturn(Tracing.disabled()); + + sut = new CommandHandler(ClientOptions.create(), clientResources, endpoint); + stack = (Queue) ReflectionTestUtils.getField(sut, "stack"); + } + + @Test + void testExceptionChannelActive() throws Exception { + sut.setState(CommandHandler.LifecycleState.ACTIVE); + + sut.channelActive(context); + sut.exceptionCaught(context, new Exception()); + } + + @Test + void testIOExceptionChannelActive() throws Exception { + sut.setState(CommandHandler.LifecycleState.ACTIVE); + + sut.channelActive(context); + sut.exceptionCaught(context, new IOException("Connection timed out")); + } + + @Test + void testExceptionChannelInactive() throws Exception { + sut.setState(CommandHandler.LifecycleState.DISCONNECTED); + sut.exceptionCaught(context, new Exception()); + verify(context, never()).fireExceptionCaught(any(Exception.class)); + } + + @Test + void testExceptionWithQueue() throws Exception { + sut.setState(CommandHandler.LifecycleState.ACTIVE); + stack.clear(); + + sut.channelActive(context); + + stack.add(command); + sut.exceptionCaught(context, new Exception()); + + assertThat(stack).isEmpty(); + command.get(); + + assertThat(ReflectionTestUtils.getField(command, "exception")).isNotNull(); + } + + @Test + void testExceptionWhenClosed() throws Exception { + + sut.setState(CommandHandler.LifecycleState.CLOSED); + + sut.exceptionCaught(context, new Exception()); + verifyZeroInteractions(context); + } + + @Test + void isConnectedShouldReportFalseForNOT_CONNECTED() { + + sut.setState(CommandHandler.LifecycleState.NOT_CONNECTED); + assertThat(sut.isConnected()).isFalse(); + } + + @Test + void isConnectedShouldReportFalseForREGISTERED() { + + sut.setState(CommandHandler.LifecycleState.REGISTERED); + assertThat(sut.isConnected()).isFalse(); + } + + @Test + void isConnectedShouldReportTrueForCONNECTED() { + + sut.setState(CommandHandler.LifecycleState.CONNECTED); + assertThat(sut.isConnected()).isTrue(); + } + + @Test + void isConnectedShouldReportTrueForACTIVATING() { + + sut.setState(CommandHandler.LifecycleState.ACTIVATING); + assertThat(sut.isConnected()).isTrue(); + } + + @Test + void isConnectedShouldReportTrueForACTIVE() { + + sut.setState(CommandHandler.LifecycleState.ACTIVE); + assertThat(sut.isConnected()).isTrue(); + } + + @Test + void isConnectedShouldReportFalseForDISCONNECTED() { + + sut.setState(CommandHandler.LifecycleState.DISCONNECTED); + assertThat(sut.isConnected()).isFalse(); + } + + @Test + void isConnectedShouldReportFalseForDEACTIVATING() { + + sut.setState(CommandHandler.LifecycleState.DEACTIVATING); + assertThat(sut.isConnected()).isFalse(); + } + + @Test + void isConnectedShouldReportFalseForDEACTIVATED() { + + sut.setState(CommandHandler.LifecycleState.DEACTIVATED); + assertThat(sut.isConnected()).isFalse(); + } + + @Test + void isConnectedShouldReportFalseForCLOSED() { + + sut.setState(CommandHandler.LifecycleState.CLOSED); + assertThat(sut.isConnected()).isFalse(); + } + + @Test + void shouldNotWriteCancelledCommand() throws Exception { + + command.cancel(); + sut.write(context, command, promise); + + verifyZeroInteractions(context); + assertThat(stack).isEmpty(); + + 
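+        // descriptive note (added): a cancelled command is never written to the channel, but its promise still completes so callers are not left waiting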
verify(promise).trySuccess(); + } + + @Test + void shouldNotWriteCancelledCommands() throws Exception { + + command.cancel(); + sut.write(context, Collections.singleton(command), promise); + + verifyZeroInteractions(context); + assertThat(stack).isEmpty(); + + verify(promise).trySuccess(); + } + + @Test + void shouldCancelCommandOnQueueSingleFailure() throws Exception { + + Command commandMock = mock(Command.class); + + RuntimeException exception = new RuntimeException(); + when(commandMock.getOutput()).thenThrow(exception); + + ChannelPromise channelPromise = new DefaultChannelPromise(channel, ImmediateEventExecutor.INSTANCE); + try { + sut.write(context, commandMock, channelPromise); + fail("Missing RuntimeException"); + } catch (RuntimeException e) { + assertThat(e).isSameAs(exception); + } + + assertThat(stack).isEmpty(); + verify(commandMock).completeExceptionally(exception); + } + + @Test + void shouldCancelCommandOnQueueBatchFailure() throws Exception { + + Command commandMock = mock(Command.class); + + RuntimeException exception = new RuntimeException(); + when(commandMock.getOutput()).thenThrow(exception); + + ChannelPromise channelPromise = new DefaultChannelPromise(channel, ImmediateEventExecutor.INSTANCE); + try { + sut.write(context, Arrays.asList(commandMock), channelPromise); + fail("Missing RuntimeException"); + } catch (RuntimeException e) { + assertThat(e).isSameAs(exception); + } + + assertThat(stack).isEmpty(); + verify(commandMock).completeExceptionally(exception); + } + + @Test + void shouldFailOnDuplicateCommands() throws Exception { + + Command commandMock = mock(Command.class); + + ChannelPromise channelPromise = new DefaultChannelPromise(channel, ImmediateEventExecutor.INSTANCE); + sut.write(context, Arrays.asList(commandMock, commandMock), channelPromise); + + assertThat(stack).isEmpty(); + verify(commandMock).completeExceptionally(any(RedisException.class)); + } + + @Test + void shouldWriteActiveCommands() throws Exception { + + when(promise.isVoid()).thenReturn(true); + + sut.write(context, command, promise); + + verify(context).write(command, promise); + assertThat(stack).hasSize(1).allMatch(o -> o instanceof LatencyMeteredCommand); + } + + @Test + void shouldNotWriteCancelledCommandBatch() throws Exception { + + command.cancel(); + sut.write(context, Arrays.asList(command), promise); + + verifyZeroInteractions(context); + assertThat((Collection) ReflectionTestUtils.getField(sut, "stack")).isEmpty(); + } + + @Test + void shouldWriteSingleActiveCommandsInBatch() throws Exception { + + List> commands = Arrays.asList(command); + when(promise.isVoid()).thenReturn(true); + sut.write(context, commands, promise); + + verify(context).write(command, promise); + assertThat(stack).hasSize(1); + } + + @Test + void shouldWriteActiveCommandsInBatch() throws Exception { + + Command anotherCommand = new Command<>(CommandType.APPEND, + new StatusOutput<>(StringCodec.UTF8), null); + + List> commands = Arrays.asList(command, anotherCommand); + when(promise.isVoid()).thenReturn(true); + sut.write(context, commands, promise); + + verify(context).write(any(Set.class), eq(promise)); + assertThat(stack).hasSize(2); + } + + @Test + @SuppressWarnings("unchecked") + void shouldWriteActiveCommandsInMixedBatch() throws Exception { + + Command command2 = new Command<>(CommandType.APPEND, new StatusOutput<>(StringCodec.UTF8), null); + command.cancel(); + when(promise.isVoid()).thenReturn(true); + + sut.write(context, Arrays.asList(command, command2), promise); + + ArgumentCaptor captor = 
ArgumentCaptor.forClass(Collection.class); + verify(context).write(captor.capture(), any()); + + assertThat(captor.getValue()).containsOnly(command2); + assertThat(stack).hasSize(1).allMatch(o -> o instanceof LatencyMeteredCommand) + .allMatch(o -> CommandWrapper.unwrap((RedisCommand) o) == command2); + } + + @Test + void shouldRecordCorrectFirstResponseLatency() throws Exception { + + ChannelPromise channelPromise = new DefaultChannelPromise(channel, ImmediateEventExecutor.INSTANCE); + channelPromise.setSuccess(); + + sut.channelRegistered(context); + sut.channelActive(context); + + sut.write(context, command, channelPromise); + Delay.delay(Duration.ofMillis(10)); + + sut.channelRead(context, Unpooled.wrappedBuffer("*1\r\n+OK\r\n".getBytes())); + + verify(latencyCollector).recordCommandLatency(any(), any(), eq(CommandType.APPEND), gt(0L), gt(0L)); + + sut.channelUnregistered(context); + } + + @Test + void shouldIgnoreNonReadableBuffers() throws Exception { + + ByteBuf byteBufMock = mock(ByteBuf.class); + when(byteBufMock.isReadable()).thenReturn(false); + + sut.channelRead(context, byteBufMock); + + verify(byteBufMock, never()).release(); + } + + @Test + void shouldNotDiscardReadBytes() throws Exception { + + ChannelPromise channelPromise = new DefaultChannelPromise(channel, ImmediateEventExecutor.INSTANCE); + channelPromise.setSuccess(); + + sut.channelRegistered(context); + sut.channelActive(context); + + sut.getStack().add(new Command<>(CommandType.PING, new StatusOutput<>(StringCodec.UTF8))); + + // set the command handler buffer capacity to 30, make it easy to test + ByteBuf internalBuffer = context.alloc().buffer(30); + ReflectionTestUtils.setField(sut, "buffer", internalBuffer); + + // mock a multi reply, which will reach the buffer usage ratio + ByteBuf msg = context.alloc().buffer(100); + + msg.writeBytes("*1\r\n+OK\r\n".getBytes()); + + sut.channelRead(context, msg); + + assertThat(internalBuffer.readerIndex()).isEqualTo(9); + assertThat(internalBuffer.writerIndex()).isEqualTo(9); + sut.channelUnregistered(context); + } + + @Test + void shouldDiscardReadBytes() throws Exception { + + ChannelPromise channelPromise = new DefaultChannelPromise(channel, ImmediateEventExecutor.INSTANCE); + channelPromise.setSuccess(); + + sut.channelRegistered(context); + sut.channelActive(context); + + sut.getStack().add(new Command<>(CommandType.PING, new StatusOutput<>(StringCodec.UTF8))); + sut.getStack().add(new Command<>(CommandType.PING, new StatusOutput<>(StringCodec.UTF8))); + sut.getStack().add(new Command<>(CommandType.PING, new StatusOutput<>(StringCodec.UTF8))); + + // set the command handler buffer capacity to 30, make it easy to test + ByteBuf internalBuffer = context.alloc().buffer(30); + ReflectionTestUtils.setField(sut, "buffer", internalBuffer); + + // mock a multi reply, which will reach the buffer usage ratio + ByteBuf msg = context.alloc().buffer(100); + + msg.writeBytes("*1\r\n+OK\r\n".getBytes()); + msg.writeBytes("*1\r\n+OK\r\n".getBytes()); + msg.writeBytes("*1\r\n+OK\r\n".getBytes()); + + sut.channelRead(context, msg); + + assertThat(internalBuffer.readerIndex()).isEqualTo(0); + assertThat(internalBuffer.writerIndex()).isEqualTo(0); + sut.channelUnregistered(context); + } +} diff --git a/src/test/java/io/lettuce/core/protocol/CommandUnitTests.java b/src/test/java/io/lettuce/core/protocol/CommandUnitTests.java new file mode 100644 index 0000000000..b90f7e13cd --- /dev/null +++ b/src/test/java/io/lettuce/core/protocol/CommandUnitTests.java @@ -0,0 +1,154 @@ +/* + * Copyright 
2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.protocol; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; + +import java.nio.charset.StandardCharsets; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; + +import io.lettuce.core.RedisException; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.output.CommandOutput; +import io.lettuce.core.output.StatusOutput; + +/** + * @author Will Glozer + * @author Mark Paluch + */ +public class CommandUnitTests { + + private Command sut; + + @BeforeEach + void createCommand() { + + CommandOutput output = new StatusOutput<>(StringCodec.UTF8); + sut = new Command<>(CommandType.INFO, output, null); + } + + @Test + void isCancelled() { + assertThat(sut.isCancelled()).isFalse(); + assertThat(sut.isDone()).isFalse(); + + sut.cancel(); + + assertThat(sut.isCancelled()).isTrue(); + assertThat(sut.isDone()).isTrue(); + + sut.cancel(); + } + + @Test + void isDone() { + assertThat(sut.isCancelled()).isFalse(); + assertThat(sut.isDone()).isFalse(); + + sut.complete(); + + assertThat(sut.isCancelled()).isFalse(); + assertThat(sut.isDone()).isTrue(); + } + + @Test + void get() { + assertThat(sut.get()).isNull(); + sut.getOutput().set(StandardCharsets.US_ASCII.encode("one")); + assertThat(sut.get()).isEqualTo("one"); + } + + @Test + void getError() { + sut.getOutput().setError("error"); + assertThat(sut.getError()).isEqualTo("error"); + } + + @Test + void setOutputAfterCompleted() { + sut.complete(); + assertThatThrownBy(() -> sut.setOutput(new StatusOutput<>(StringCodec.UTF8))).isInstanceOf(IllegalStateException.class); + } + + @Test + void testToString() { + assertThat(sut.toString()).contains("Command"); + } + + @Test + void customKeyword() { + + sut = new Command<>(MyKeywords.DUMMY, null, null); + sut.setOutput(new StatusOutput<>(StringCodec.UTF8)); + + assertThat(sut.toString()).contains(MyKeywords.DUMMY.name()); + } + + @Test + void customKeywordWithArgs() { + sut = new Command<>(MyKeywords.DUMMY, null, new CommandArgs<>(StringCodec.UTF8)); + sut.getArgs().add(MyKeywords.DUMMY); + assertThat(sut.getArgs().toString()).contains(MyKeywords.DUMMY.name()); + } + + @Test + void getWithTimeout() { + sut.getOutput().set(StandardCharsets.US_ASCII.encode("one")); + sut.complete(); + + assertThat(sut.get()).isEqualTo("one"); + } + + @Test + void outputSubclassOverride1() { + CommandOutput output = new CommandOutput(StringCodec.UTF8, null) { + @Override + public String get() throws RedisException { + return null; + } + }; + assertThatThrownBy(() -> output.set(null)).isInstanceOf(IllegalStateException.class); + } + + @Test + void outputSubclassOverride2() { + CommandOutput output = new CommandOutput(StringCodec.UTF8, null) { + @Override + public String get() throws RedisException { + return null; + } + }; + assertThatThrownBy(() -> 
output.set(0)).isInstanceOf(IllegalStateException.class); + } + + @Test + void sillyTestsForEmmaCoverage() { + assertThat(CommandType.valueOf("APPEND")).isEqualTo(CommandType.APPEND); + assertThat(CommandKeyword.valueOf("AFTER")).isEqualTo(CommandKeyword.AFTER); + } + + private enum MyKeywords implements ProtocolKeyword { + DUMMY; + + @Override + public byte[] getBytes() { + return name().getBytes(); + } + } +} diff --git a/src/test/java/io/lettuce/core/protocol/CommandWrapperUnitTests.java b/src/test/java/io/lettuce/core/protocol/CommandWrapperUnitTests.java new file mode 100644 index 0000000000..cf32b42489 --- /dev/null +++ b/src/test/java/io/lettuce/core/protocol/CommandWrapperUnitTests.java @@ -0,0 +1,61 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.protocol; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.util.concurrent.atomic.AtomicReference; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; + +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.output.CommandOutput; +import io.lettuce.core.output.StatusOutput; + +/** + * @author Mark Paluch + */ +class CommandWrapperUnitTests { + + private RedisCodec codec = StringCodec.UTF8; + private Command sut; + + @BeforeEach + final void createCommand() { + + CommandOutput output = new StatusOutput<>(codec); + sut = new Command<>(CommandType.INFO, output, null); + } + + @Test + void shouldAppendOnComplete() { + + AtomicReference v1 = new AtomicReference<>(); + AtomicReference v2 = new AtomicReference<>(); + + CommandWrapper commandWrapper = new CommandWrapper<>(sut); + + commandWrapper.onComplete(s -> v1.set(true)); + commandWrapper.onComplete(s -> v2.set(true)); + + commandWrapper.complete(); + + assertThat(v1.get()).isEqualTo(true); + assertThat(v2.get()).isEqualTo(true); + } +} diff --git a/src/test/java/io/lettuce/core/protocol/ConnectionFailureIntegrationTests.java b/src/test/java/io/lettuce/core/protocol/ConnectionFailureIntegrationTests.java new file mode 100644 index 0000000000..ff6da796f5 --- /dev/null +++ b/src/test/java/io/lettuce/core/protocol/ConnectionFailureIntegrationTests.java @@ -0,0 +1,404 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.protocol; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; +import static org.assertj.core.api.Assertions.fail; + +import java.net.InetSocketAddress; +import java.time.Duration; +import java.util.Comparator; +import java.util.List; +import java.util.Queue; +import java.util.concurrent.*; +import java.util.concurrent.atomic.AtomicReference; +import java.util.stream.Collectors; + +import javax.inject.Inject; + +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.TestInstance; +import org.junit.jupiter.api.extension.ExtendWith; +import org.springframework.test.util.ReflectionTestUtils; + +import io.lettuce.core.*; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.async.RedisAsyncCommands; +import io.lettuce.core.event.Event; +import io.lettuce.core.event.connection.ReconnectFailedEvent; +import io.lettuce.core.resource.ClientResources; +import io.lettuce.core.resource.NettyCustomizer; +import io.lettuce.test.*; +import io.lettuce.test.resource.FastShutdown; +import io.lettuce.test.server.RandomResponseServer; +import io.lettuce.test.settings.TestSettings; +import io.netty.channel.Channel; +import io.netty.channel.local.LocalAddress; + +/** + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +@TestInstance(TestInstance.Lifecycle.PER_CLASS) +class ConnectionFailureIntegrationTests extends TestSupport { + + private final RedisClient client; + private final RedisURI defaultRedisUri = RedisURI.Builder.redis(TestSettings.host(), TestSettings.port()).build(); + + @Inject + ConnectionFailureIntegrationTests(RedisClient client) { + this.client = client; + } + + /** + * Expect to run into Invalid first byte exception instead of timeout. + * + * @throws Exception + */ + @Test + void invalidFirstByte() throws Exception { + + client.setOptions(ClientOptions.builder().build()); + + RandomResponseServer ts = getRandomResponseServer(); + + RedisURI redisUri = RedisURI.Builder.redis(TestSettings.host(), TestSettings.nonexistentPort()) + .withTimeout(Duration.ofMinutes(10)).build(); + + try { + client.connect(redisUri); + } catch (Exception e) { + assertThat(e).isExactlyInstanceOf(RedisConnectionException.class); + assertThat(e.getCause()).hasMessageContaining("Invalid first byte:"); + } finally { + ts.shutdown(); + } + } + + /** + * Simulates a failure on reconnect by changing the port to a invalid server and triggering a reconnect. Meanwhile a command + * is fired to the connection and the watchdog is triggered afterwards to reconnect. + * + * Expectation: Command after failed reconnect contains the reconnect exception. 
+ * + * @throws Exception + */ + @Test + void failOnReconnect() throws Exception { + + ClientOptions clientOptions = ClientOptions.builder().suspendReconnectOnProtocolFailure(true).build(); + client.setOptions(clientOptions); + + RandomResponseServer ts = getRandomResponseServer(); + + RedisURI redisUri = RedisURI.Builder.redis(TestSettings.host(), TestSettings.port()).build(); + redisUri.setTimeout(Duration.ofSeconds(5)); + + try { + RedisAsyncCommands connection = client.connect(redisUri).async(); + ConnectionWatchdog connectionWatchdog = ConnectionTestUtil + .getConnectionWatchdog(connection.getStatefulConnection()); + + assertThat(connectionWatchdog.isListenOnChannelInactive()).isTrue(); + assertThat(connectionWatchdog.isReconnectSuspended()).isFalse(); + assertThat(clientOptions.isSuspendReconnectOnProtocolFailure()).isTrue(); + assertThat(connectionWatchdog.getReconnectionHandler().getClientOptions()).isSameAs(clientOptions); + + redisUri.setPort(TestSettings.nonexistentPort()); + + connection.quit(); + Wait.untilTrue(() -> connectionWatchdog.isReconnectSuspended()).waitOrTimeout(); + + assertThat(connectionWatchdog.isListenOnChannelInactive()).isTrue(); + + assertThatThrownBy(() -> TestFutures.awaitOrTimeout(connection.info())).hasRootCauseInstanceOf(RedisException.class) + .hasMessageContaining("Invalid first byte"); + + connection.getStatefulConnection().close(); + } finally { + ts.shutdown(); + } + } + + /** + * Simulates a failure on reconnect by changing the port to a invalid server and triggering a reconnect. + * + * Expectation: {@link io.lettuce.core.ConnectionEvents.Reconnect} events are sent. + * + * @throws Exception + */ + @Test + void failOnReconnectShouldSendEvents() throws Exception { + + client.setOptions( + ClientOptions.builder().suspendReconnectOnProtocolFailure(false).build()); + + RandomResponseServer ts = getRandomResponseServer(); + + RedisURI redisUri = RedisURI.create(defaultRedisUri.toURI()); + redisUri.setTimeout(Duration.ofSeconds(5)); + + try { + final BlockingQueue events = new LinkedBlockingDeque<>(); + + RedisAsyncCommands connection = client.connect(redisUri).async(); + ConnectionWatchdog connectionWatchdog = ConnectionTestUtil + .getConnectionWatchdog(connection.getStatefulConnection()); + + ReconnectionListener reconnectionListener = events::offer; + + ReflectionTestUtils.setField(connectionWatchdog, "reconnectionListener", reconnectionListener); + + redisUri.setPort(TestSettings.nonexistentPort()); + + connection.quit(); + Wait.untilTrue(() -> events.size() > 1).waitOrTimeout(); + connection.getStatefulConnection().close(); + + ConnectionEvents.Reconnect event1 = events.take(); + assertThat(event1.getAttempt()).isEqualTo(1); + + ConnectionEvents.Reconnect event2 = events.take(); + assertThat(event2.getAttempt()).isEqualTo(2); + + } finally { + ts.shutdown(); + } + } + + /** + * Simulates a failure on reconnect by changing the port to a invalid server and triggering a reconnect. Meanwhile a command + * is fired to the connection and the watchdog is triggered afterwards to reconnect. + * + * Expectation: Queued commands are canceled (reset), subsequent commands contain the connection exception. 
+ * + * @throws Exception + */ + @Test + void cancelCommandsOnReconnectFailure() throws Exception { + + client.setOptions( + ClientOptions.builder().cancelCommandsOnReconnectFailure(true).build()); + + RandomResponseServer ts = getRandomResponseServer(); + + RedisURI redisUri = RedisURI.create(defaultRedisUri.toURI()); + + try { + RedisAsyncCommandsImpl connection = (RedisAsyncCommandsImpl) client + .connect(redisUri).async(); + ConnectionWatchdog connectionWatchdog = ConnectionTestUtil + .getConnectionWatchdog(connection.getStatefulConnection()); + + assertThat(connectionWatchdog.isListenOnChannelInactive()).isTrue(); + + connectionWatchdog.setReconnectSuspended(true); + redisUri.setPort(TestSettings.nonexistentPort()); + + connection.quit(); + Wait.untilTrue(() -> !connection.getStatefulConnection().isOpen()).waitOrTimeout(); + + RedisFuture set1 = connection.set(key, value); + RedisFuture set2 = connection.set(key, value); + + assertThat(set1.isDone()).isFalse(); + assertThat(set1.isCancelled()).isFalse(); + + assertThat(connection.getStatefulConnection().isOpen()).isFalse(); + connectionWatchdog.setReconnectSuspended(false); + connectionWatchdog.run(0); + Delay.delay(Duration.ofMillis(500)); + assertThat(connection.getStatefulConnection().isOpen()).isFalse(); + + assertThatThrownBy(set1::get).isInstanceOf(CancellationException.class).hasNoCause(); + assertThatThrownBy(set2::get).isInstanceOf(CancellationException.class).hasNoCause(); + + assertThatThrownBy(() -> TestFutures.awaitOrTimeout(connection.info())).isInstanceOf(RedisException.class) + .hasMessageContaining("Invalid first byte"); + + connection.getStatefulConnection().close(); + } finally { + ts.shutdown(); + } + } + + @Test + void emitEventOnReconnectFailure() throws Exception { + + RandomResponseServer ts = getRandomResponseServer(); + Queue queue = new ConcurrentLinkedQueue<>(); + ClientResources clientResources = ClientResources.create(); + + RedisURI redisUri = RedisURI.create(defaultRedisUri.toURI()); + RedisClient client = RedisClient.create(clientResources); + + client.setOptions(ClientOptions.builder().build()); + + try { + RedisAsyncCommandsImpl connection = (RedisAsyncCommandsImpl) client + .connect(redisUri).async(); + ConnectionWatchdog connectionWatchdog = ConnectionTestUtil + .getConnectionWatchdog(connection.getStatefulConnection()); + + redisUri.setPort(TestSettings.nonexistentPort()); + + client.getResources().eventBus().get().subscribe(queue::add); + + connection.quit(); + Wait.untilTrue(() -> !connection.getStatefulConnection().isOpen()).waitOrTimeout(); + + connectionWatchdog.run(0); + Delay.delay(Duration.ofMillis(500)); + + connection.getStatefulConnection().close(); + + assertThat(queue).isNotEmpty(); + + List failures = queue.stream().filter(ReconnectFailedEvent.class::isInstance) + .map(ReconnectFailedEvent.class::cast).sorted(Comparator.comparingInt(ReconnectFailedEvent::getAttempt)) + .collect(Collectors.toList()); + + assertThat(failures.size()).isGreaterThanOrEqualTo(2); + + ReconnectFailedEvent failure1 = failures.get(0); + assertThat(failure1.localAddress()).isEqualTo(LocalAddress.ANY); + assertThat(failure1.remoteAddress()).isInstanceOf(InetSocketAddress.class); + assertThat(failure1.getCause()).hasMessageContaining("Invalid first byte"); + assertThat(failure1.getAttempt()).isZero(); + + ReconnectFailedEvent failure2 = failures.get(1); + assertThat(failure2.localAddress()).isEqualTo(LocalAddress.ANY); + assertThat(failure2.remoteAddress()).isInstanceOf(InetSocketAddress.class); + 
assertThat(failure2.getCause()).hasMessageContaining("Invalid first byte"); + assertThat(failure2.getAttempt()).isOne(); + + } finally { + ts.shutdown(); + FastShutdown.shutdown(client); + FastShutdown.shutdown(clientResources); + } + } + + @Test + void pingOnConnectFailureShouldCloseConnection() throws Exception { + + AtomicReference ref = new AtomicReference<>(); + ClientResources clientResources = ClientResources.builder().nettyCustomizer(new NettyCustomizer() { + @Override + public void afterChannelInitialized(Channel channel) { + ref.set(channel); + } + }).build(); + + // Cluster node with auth + RedisURI redisUri = RedisURI.create(TestSettings.host(), 7385); + RedisClient client = RedisClient.create(clientResources); + + client.setOptions(ClientOptions.builder().pingBeforeActivateConnection(true).build()); + + try { + client.connect(redisUri); + fail("Missing Exception"); + } catch (Exception e) { + assertThat(ref.get().isOpen()).isFalse(); + assertThat(ref.get().isRegistered()).isFalse(); + } finally { + FastShutdown.shutdown(client); + FastShutdown.shutdown(clientResources); + } + } + + @Test + void pingOnConnectFailureShouldCloseConnectionOnReconnect() throws Exception { + + BlockingQueue ref = new LinkedBlockingQueue<>(); + ClientResources clientResources = ClientResources.builder().nettyCustomizer(new NettyCustomizer() { + @Override + public void afterChannelInitialized(Channel channel) { + ref.add(channel); + } + }).build(); + + RedisURI redisUri = RedisURI.create(TestSettings.host(), TestSettings.port()); + RedisClient client = RedisClient.create(clientResources, redisUri); + client.setOptions(ClientOptions.builder().pingBeforeActivateConnection(true).build()); + + StatefulRedisConnection connection = client.connect(); + + ConnectionWatchdog connectionWatchdog = ConnectionTestUtil.getConnectionWatchdog(connection); + connectionWatchdog.setListenOnChannelInactive(false); + connection.async().quit(); + + // Cluster node with auth + redisUri.setPort(7385); + + connectionWatchdog.setListenOnChannelInactive(true); + connectionWatchdog.scheduleReconnect(); + + Wait.untilTrue(() -> ref.size() > 1).waitOrTimeout(); + + redisUri.setPort(TestSettings.port()); + + Channel initial = ref.take(); + assertThat(initial.isOpen()).isFalse(); + + Channel reconnect = ref.take(); + Wait.untilTrue(() -> !reconnect.isOpen()).waitOrTimeout(); + assertThat(reconnect.isOpen()).isFalse(); + + FastShutdown.shutdown(client); + FastShutdown.shutdown(clientResources); + } + + /** + * Expect to disable {@link ConnectionWatchdog} when closing a broken connection. 
+ */ + @Test + void closingDisconnectedConnectionShouldDisableConnectionWatchdog() { + + client.setOptions(ClientOptions.create()); + + RedisURI redisUri = RedisURI.Builder.redis(TestSettings.host(), TestSettings.port()).withTimeout(Duration.ofMinutes(10)) + .build(); + + StatefulRedisConnection connection = client.connect(redisUri); + + ConnectionWatchdog connectionWatchdog = ConnectionTestUtil.getConnectionWatchdog(connection); + + assertThat(connectionWatchdog.isReconnectSuspended()).isFalse(); + assertThat(connectionWatchdog.isListenOnChannelInactive()).isTrue(); + + connection.sync().ping(); + + redisUri.setPort(TestSettings.nonexistentPort() + 5); + + connection.async().quit(); + Wait.untilTrue(() -> !connection.isOpen()).waitOrTimeout(); + + connection.close(); + Delay.delay(Duration.ofMillis(100)); + + assertThat(connectionWatchdog.isReconnectSuspended()).isTrue(); + assertThat(connectionWatchdog.isListenOnChannelInactive()).isFalse(); + } + + RandomResponseServer getRandomResponseServer() throws InterruptedException { + RandomResponseServer ts = new RandomResponseServer(); + ts.initialize(TestSettings.nonexistentPort()); + return ts; + } +} diff --git a/src/test/java/io/lettuce/core/protocol/DefaultEndpointUnitTests.java b/src/test/java/io/lettuce/core/protocol/DefaultEndpointUnitTests.java new file mode 100644 index 0000000000..f817906a0a --- /dev/null +++ b/src/test/java/io/lettuce/core/protocol/DefaultEndpointUnitTests.java @@ -0,0 +1,478 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.protocol; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.mockito.Matchers.any; +import static org.mockito.Mockito.*; + +import java.nio.channels.ClosedChannelException; +import java.util.Collection; +import java.util.Collections; +import java.util.Queue; +import java.util.concurrent.atomic.AtomicLong; + +import org.apache.logging.log4j.Level; +import org.apache.logging.log4j.LogManager; +import org.apache.logging.log4j.core.LoggerContext; +import org.apache.logging.log4j.core.config.Configuration; +import org.apache.logging.log4j.core.config.LoggerConfig; +import org.junit.jupiter.api.AfterAll; +import org.junit.jupiter.api.BeforeAll; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; +import org.mockito.ArgumentCaptor; +import org.mockito.Mock; +import org.mockito.junit.jupiter.MockitoExtension; +import org.mockito.junit.jupiter.MockitoSettings; +import org.mockito.quality.Strictness; +import org.springframework.test.util.ReflectionTestUtils; + +import edu.umd.cs.mtc.MultithreadedTestCase; +import edu.umd.cs.mtc.TestFramework; +import io.lettuce.core.ClientOptions; +import io.lettuce.core.RedisException; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.internal.LettuceFactories; +import io.lettuce.core.output.StatusOutput; +import io.lettuce.core.resource.ClientResources; +import io.lettuce.test.ConnectionTestUtil; +import io.netty.channel.*; +import io.netty.handler.codec.EncoderException; +import io.netty.util.concurrent.ImmediateEventExecutor; + +/** + * @author Mark Paluch + */ +@ExtendWith(MockitoExtension.class) +@MockitoSettings(strictness = Strictness.LENIENT) +class DefaultEndpointUnitTests { + + private Queue> queue = LettuceFactories.newConcurrentQueue(1000); + + private DefaultEndpoint sut; + + private final Command command = new Command<>(CommandType.APPEND, + new StatusOutput<>(StringCodec.UTF8), null); + + @Mock + private Channel channel; + + @Mock + private ConnectionFacade connectionFacade; + + @Mock + private ConnectionWatchdog connectionWatchdog; + + @Mock + private ClientResources clientResources; + + private ChannelPromise promise; + + @BeforeAll + static void beforeClass() { + LoggerContext ctx = (LoggerContext) LogManager.getContext(); + Configuration config = ctx.getConfiguration(); + LoggerConfig loggerConfig = config.getLoggerConfig(CommandHandler.class.getName()); + loggerConfig.setLevel(Level.ALL); + } + + @AfterAll + static void afterClass() { + LoggerContext ctx = (LoggerContext) LogManager.getContext(); + Configuration config = ctx.getConfiguration(); + LoggerConfig loggerConfig = config.getLoggerConfig(CommandHandler.class.getName()); + loggerConfig.setLevel(null); + } + + @BeforeEach + void before() { + + promise = new DefaultChannelPromise(channel); + when(channel.writeAndFlush(any())).thenAnswer(invocation -> { + if (invocation.getArguments()[0] instanceof RedisCommand) { + queue.add((RedisCommand) invocation.getArguments()[0]); + } + + if (invocation.getArguments()[0] instanceof Collection) { + queue.addAll((Collection) invocation.getArguments()[0]); + } + return promise; + }); + + when(channel.write(any())).thenAnswer(invocation -> { + if (invocation.getArguments()[0] instanceof RedisCommand) { + queue.add((RedisCommand) invocation.getArguments()[0]); + } + + if (invocation.getArguments()[0] instanceof Collection) { + queue.addAll((Collection) invocation.getArguments()[0]); + } + return promise; 
+ }); + + sut = new DefaultEndpoint(ClientOptions.create(), clientResources); + sut.setConnectionFacade(connectionFacade); + } + + @Test + void writeConnectedShouldWriteCommandToChannel() { + + when(channel.isActive()).thenReturn(true); + + sut.notifyChannelActive(channel); + sut.write(command); + + assertThat(ConnectionTestUtil.getQueueSize(sut)).isEqualTo(1); + verify(channel).writeAndFlush(command); + } + + @Test + void writeDisconnectedShouldBufferCommands() { + + sut.write(command); + + assertThat(ConnectionTestUtil.getDisconnectedBuffer(sut)).contains(command); + + verify(channel, never()).writeAndFlush(any()); + } + + @Test + void notifyChannelActiveActivatesFacade() { + + sut.notifyChannelActive(channel); + + verify(connectionFacade).activated(); + } + + @Test + void notifyChannelActiveArmsConnectionWatchdog() { + + sut.registerConnectionWatchdog(connectionWatchdog); + + sut.notifyChannelActive(channel); + + verify(connectionWatchdog).arm(); + } + + @Test + void notifyChannelInactiveDeactivatesFacade() { + + sut.notifyChannelInactive(channel); + + verify(connectionFacade).deactivated(); + } + + @Test + void notifyExceptionShouldStoreException() { + + sut.notifyException(new IllegalStateException()); + sut.write(command); + + assertThat(command.exception).isInstanceOf(IllegalStateException.class); + } + + @Test + void notifyChannelActiveClearsStoredException() { + + sut.notifyException(new IllegalStateException()); + sut.notifyChannelActive(channel); + sut.write(command); + + assertThat(command.exception).isNull(); + } + + @Test + void notifyDrainQueuedCommandsShouldBufferCommands() { + + Queue> q = LettuceFactories.newConcurrentQueue(100); + q.add(command); + + sut.notifyDrainQueuedCommands(() -> q); + + assertThat(ConnectionTestUtil.getDisconnectedBuffer(sut)).contains(command); + verify(channel, never()).write(any()); + } + + @Test + void notifyDrainQueuedCommandsShouldWriteCommands() { + + when(channel.isActive()).thenReturn(true); + + Queue> q = LettuceFactories.newConcurrentQueue(100); + q.add(command); + + sut.notifyChannelActive(channel); + sut.notifyDrainQueuedCommands(() -> q); + + verify(channel).write(command); + verify(channel).flush(); + } + + @Test + void shouldCancelCommandsOnEncoderException() { + + when(channel.isActive()).thenReturn(true); + sut.notifyChannelActive(channel); + + DefaultChannelPromise promise = new DefaultChannelPromise(channel, ImmediateEventExecutor.INSTANCE); + + when(channel.writeAndFlush(any())).thenAnswer(invocation -> { + if (invocation.getArguments()[0] instanceof RedisCommand) { + queue.add((RedisCommand) invocation.getArguments()[0]); + } + + if (invocation.getArguments()[0] instanceof Collection) { + queue.addAll((Collection) invocation.getArguments()[0]); + } + return promise; + }); + + promise.setFailure(new EncoderException("foo")); + + sut.write(command); + + assertThat(command.exception).isInstanceOf(EncoderException.class); + } + + @Test + void writeShouldRejectCommandsInDisconnectedState() { + + sut = new DefaultEndpoint(ClientOptions.builder() // + .disconnectedBehavior(ClientOptions.DisconnectedBehavior.REJECT_COMMANDS) // + .build(), clientResources); + + sut.write(command); + assertThat(command.exception).hasMessageContaining("Commands are rejected"); + } + + @Test + void writeShouldRejectCommandsInClosedState() { + + sut.close(); + + sut.write(command); + assertThat(command.exception).hasMessageContaining("Connection is closed"); + } + + @Test + void writeWithoutAutoReconnectShouldRejectCommandsInDisconnectedState() { + + sut 
= new DefaultEndpoint(ClientOptions.builder() // + .autoReconnect(false) // + .disconnectedBehavior(ClientOptions.DisconnectedBehavior.DEFAULT) // + .build(), clientResources); + + sut.write(command); + assertThat(command.exception).hasMessageContaining("Commands are rejected"); + } + + @Test + void closeCleansUpResources() { + + ChannelFuture future = mock(ChannelFuture.class); + when(future.isSuccess()).thenReturn(true); + when(channel.close()).thenReturn(future); + + sut.notifyChannelActive(channel); + sut.registerConnectionWatchdog(connectionWatchdog); + + sut.close(); + + verify(channel).close(); + verify(connectionWatchdog).prepareClose(); + } + + @Test + void closeAllowsOnlyOneCall() { + + ChannelFuture future = mock(ChannelFuture.class); + when(future.isSuccess()).thenReturn(true); + when(channel.close()).thenReturn(future); + + sut.notifyChannelActive(channel); + sut.registerConnectionWatchdog(connectionWatchdog); + + sut.close(); + sut.close(); + + verify(channel).close(); + verify(connectionWatchdog).prepareClose(); + } + + @Test + void retryListenerCompletesSuccessfullyAfterDeferredRequeue() { + + DefaultEndpoint.RetryListener listener = DefaultEndpoint.RetryListener.newInstance(sut, command); + + ChannelFuture future = mock(ChannelFuture.class); + EventLoop eventLoopGroup = mock(EventLoop.class); + + when(future.isSuccess()).thenReturn(false); + when(future.cause()).thenReturn(new ClosedChannelException()); + when(channel.eventLoop()).thenReturn(eventLoopGroup); + when(channel.close()).thenReturn(mock(ChannelFuture.class)); + + sut.notifyChannelActive(channel); + sut.closeAsync(); + + listener.operationComplete(future); + + ArgumentCaptor runnableCaptor = ArgumentCaptor.forClass(Runnable.class); + verify(eventLoopGroup).submit(runnableCaptor.capture()); + + runnableCaptor.getValue().run(); + + assertThat(command.exception).isInstanceOf(RedisException.class); + } + + @Test + void retryListenerDoesNotRetryCompletedCommands() { + + DefaultEndpoint.RetryListener listener = DefaultEndpoint.RetryListener.newInstance(sut, command); + + when(channel.eventLoop()).thenReturn(mock(EventLoop.class)); + + command.complete(); + promise.tryFailure(new Exception()); + + listener.operationComplete(promise); + + verify(channel, never()).writeAndFlush(command); + } + + @Test + void shouldWrapActivationCommands() { + + when(channel.isActive()).thenReturn(true); + doAnswer(i -> { + + sut.write(new Command<>(CommandType.AUTH, new StatusOutput<>(StringCodec.UTF8))); + sut.write(Collections.singletonList(new Command<>(CommandType.SELECT, new StatusOutput<>(StringCodec.UTF8)))); + return null; + }).when(connectionFacade).activated(); + + sut.notifyChannelActive(channel); + + DefaultChannelPromise promise = new DefaultChannelPromise(channel, ImmediateEventExecutor.INSTANCE); + + when(channel.writeAndFlush(any())).thenAnswer(invocation -> { + if (invocation.getArguments()[0] instanceof RedisCommand) { + queue.add((RedisCommand) invocation.getArguments()[0]); + } + + if (invocation.getArguments()[0] instanceof Collection) { + queue.addAll((Collection) invocation.getArguments()[0]); + } + return promise; + }); + + assertThat(queue).hasSize(2).hasOnlyElementsOfTypes(DefaultEndpoint.ActivationCommand.class); + } + + @Test + void shouldNotReplayActivationCommands() { + + when(channel.isActive()).thenReturn(true); + ConnectionTestUtil.getDisconnectedBuffer(sut).add(new DefaultEndpoint.ActivationCommand<>( + new Command<>(CommandType.SELECT, new StatusOutput<>(StringCodec.UTF8)))); + 
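// a wrapped ActivationCommand left in the disconnected buffer must not be replayed either +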
ConnectionTestUtil.getDisconnectedBuffer(sut).add(new LatencyMeteredCommand<>(new DefaultEndpoint.ActivationCommand<>( + new Command<>(CommandType.SUBSCRIBE, new StatusOutput<>(StringCodec.UTF8))))); + + doAnswer(i -> { + + sut.write(new Command<>(CommandType.AUTH, new StatusOutput<>(StringCodec.UTF8))); + return null; + }).when(connectionFacade).activated(); + + sut.notifyChannelActive(channel); + + DefaultChannelPromise promise = new DefaultChannelPromise(channel, ImmediateEventExecutor.INSTANCE); + + when(channel.writeAndFlush(any())).thenAnswer(invocation -> { + if (invocation.getArguments()[0] instanceof RedisCommand) { + queue.add((RedisCommand) invocation.getArguments()[0]); + } + + if (invocation.getArguments()[0] instanceof Collection) { + queue.addAll((Collection) invocation.getArguments()[0]); + } + return promise; + }); + + assertThat(queue).hasSize(1).extracting(RedisCommand::getType).containsOnly(CommandType.AUTH); + } + + @Test + void testMTCConcurrentConcurrentWrite() throws Throwable { + TestFramework.runOnce(new MTCConcurrentConcurrentWrite(command, clientResources)); + } + + /** + * Test of concurrent access to locks. Two concurrent writes. + */ + static class MTCConcurrentConcurrentWrite extends MultithreadedTestCase { + + private final Command command; + private TestableEndpoint handler; + + MTCConcurrentConcurrentWrite(Command command, ClientResources clientResources) { + + this.command = command; + + handler = new TestableEndpoint(ClientOptions.create(), clientResources) { + + @Override + protected , T> void writeToBuffer(C command) { + + waitForTick(2); + + Object sharedLock = ReflectionTestUtils.getField(this, "sharedLock"); + AtomicLong writers = (AtomicLong) ReflectionTestUtils.getField(sharedLock, "writers"); + assertThat(writers.get()).isEqualTo(2); + waitForTick(3); + super.writeToBuffer(command); + } + }; + } + + public void thread1() { + + waitForTick(1); + handler.write(command); + } + + public void thread2() { + + waitForTick(1); + handler.write(command); + } + } + + static class TestableEndpoint extends DefaultEndpoint { + + /** + * Create a new {@link DefaultEndpoint}. + * + * @param clientOptions client options for this connection, must not be {@literal null}. + * @param clientResources client resources for this connection, must not be {@literal null}. + */ + TestableEndpoint(ClientOptions clientOptions, ClientResources clientResources) { + super(clientOptions, clientResources); + } + } +} diff --git a/src/test/java/io/lettuce/core/protocol/RedisStateMachineResp2UnitTests.java b/src/test/java/io/lettuce/core/protocol/RedisStateMachineResp2UnitTests.java new file mode 100644 index 0000000000..ec6f48170b --- /dev/null +++ b/src/test/java/io/lettuce/core/protocol/RedisStateMachineResp2UnitTests.java @@ -0,0 +1,195 @@ +/* + * Copyright 2019-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.protocol; + +import static io.lettuce.core.protocol.RedisStateMachine.State; +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; + +import java.nio.charset.Charset; +import java.nio.charset.StandardCharsets; +import java.util.Arrays; +import java.util.Collections; +import java.util.List; + +import org.apache.logging.log4j.Level; +import org.apache.logging.log4j.LogManager; +import org.apache.logging.log4j.core.LoggerContext; +import org.apache.logging.log4j.core.config.Configuration; +import org.apache.logging.log4j.core.config.LoggerConfig; +import org.junit.jupiter.api.*; + +import io.lettuce.core.RedisException; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.output.*; +import io.netty.buffer.ByteBuf; +import io.netty.buffer.ByteBufAllocator; +import io.netty.buffer.Unpooled; + +/** + * Unit tests for {@link RedisStateMachine} using RESP2. + * + * @author Will Glozer + * @author Mark Paluch + */ +class RedisStateMachineResp2UnitTests { + + private RedisCodec codec = StringCodec.UTF8; + private Charset charset = StandardCharsets.UTF_8; + private CommandOutput output; + private RedisStateMachine rsm; + + @BeforeAll + static void beforeClass() { + + LoggerContext ctx = (LoggerContext) LogManager.getContext(); + Configuration config = ctx.getConfiguration(); + LoggerConfig loggerConfig = config.getLoggerConfig(RedisStateMachine.class.getName()); + loggerConfig.setLevel(Level.ALL); + } + + @AfterAll + static void afterClass() { + + LoggerContext ctx = (LoggerContext) LogManager.getContext(); + Configuration config = ctx.getConfiguration(); + LoggerConfig loggerConfig = config.getLoggerConfig(RedisStateMachine.class.getName()); + loggerConfig.setLevel(null); + } + + @BeforeEach + final void createStateMachine() { + output = new StatusOutput<>(codec); + rsm = new RedisStateMachine(ByteBufAllocator.DEFAULT); + } + + @AfterEach + void tearDown() { + rsm.close(); + } + + @Test + void helloShouldSwitchToResp3() { + assertThat(rsm.decode(buffer("@0\r\n"), output)).isTrue(); + assertThat(rsm.isDiscoverProtocol()).isFalse(); + assertThat(rsm.getProtocolVersion()).isEqualTo(ProtocolVersion.RESP3); + } + + @Test + void single() { + assertThat(rsm.decode(buffer("+OK\r\n"), output)).isTrue(); + assertThat(output.get()).isEqualTo("OK"); + } + + @Test + void error() { + ByteBuf buffer = buffer("-ERR\r\n"); + assertThat(rsm.decode(buffer, output)).isTrue(); + assertThat(output.getError()).isEqualTo("ERR"); + assertThat(buffer.readerIndex()).isEqualTo(6); + } + + @Test + void errorWithoutLineBreak() { + assertThat(rsm.decode(buffer("-ERR"), output)).isFalse(); + assertThat(rsm.decode(buffer("\r\n"), output)).isTrue(); + assertThat(output.getError()).isEqualTo(""); + } + + @Test + void integer() { + CommandOutput output = new IntegerOutput<>(codec); + ByteBuf buffer = buffer(":1\r\n"); + assertThat(rsm.decode(buffer, output)).isTrue(); + assertThat((long) output.get()).isEqualTo(1); + assertThat(buffer.readerIndex()).isEqualTo(4); + } + + @Test + void bulk() { + CommandOutput output = new ValueOutput<>(codec); + ByteBuf buffer = buffer("$-1\r\n"); + assertThat(rsm.decode(buffer, output)).isTrue(); + assertThat(buffer.readerIndex()).isEqualTo(5); + assertThat(output.get()).isNull(); + buffer = buffer("$3\r\nfoo\r\n"); + assertThat(rsm.decode(buffer, output)).isTrue(); + assertThat(output.get()).isEqualTo("foo"); + 
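// $3\r\nfoo\r\n occupies nine bytes, so decoding advances the reader index to 9 +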
assertThat(buffer.readerIndex()).isEqualTo(9); + } + + @Test + void multi() { + CommandOutput> output = new ValueListOutput<>(codec); + ByteBuf buffer = buffer("*2\r\n$-1\r\n$2\r\nok\r\n"); + assertThat(rsm.decode(buffer, output)).isTrue(); + assertThat(output.get()).isEqualTo(Arrays.asList(null, "ok")); + } + + @Test + void multiEmptyArray1() { + CommandOutput> output = new NestedMultiOutput<>(codec); + ByteBuf buffer = buffer("*2\r\n$3\r\nABC\r\n*0\r\n"); + assertThat(rsm.decode(buffer, output)).isTrue(); + assertThat(output.get().get(0)).isEqualTo("ABC"); + assertThat(output.get().get(1)).isEqualTo(Arrays.asList()); + assertThat(output.get().size()).isEqualTo(2); + } + + @Test + void multiEmptyArray2() { + CommandOutput> output = new NestedMultiOutput<>(codec); + ByteBuf buffer = buffer("*2\r\n*0\r\n$3\r\nABC\r\n"); + assertThat(rsm.decode(buffer, output)).isTrue(); + assertThat(output.get().get(0)).isEqualTo(Arrays.asList()); + assertThat(output.get().get(1)).isEqualTo("ABC"); + assertThat(output.get().size()).isEqualTo(2); + } + + @Test + void multiEmptyArray3() { + CommandOutput> output = new NestedMultiOutput<>(codec); + ByteBuf buffer = buffer("*2\r\n*2\r\n$2\r\nAB\r\n$2\r\nXY\r\n*0\r\n"); + assertThat(rsm.decode(buffer, output)).isTrue(); + assertThat(output.get().get(0)).isEqualTo(Arrays.asList("AB", "XY")); + assertThat(output.get().get(1)).isEqualTo(Collections.emptyList()); + assertThat(output.get().size()).isEqualTo(2); + } + + @Test + void partialFirstLine() { + assertThat(rsm.decode(buffer("+"), output)).isFalse(); + assertThat(rsm.decode(buffer("-"), output)).isFalse(); + assertThat(rsm.decode(buffer(":"), output)).isFalse(); + assertThat(rsm.decode(buffer("$"), output)).isFalse(); + assertThat(rsm.decode(buffer("*"), output)).isFalse(); + } + + @Test + void invalidReplyType() { + assertThatThrownBy(() -> rsm.decode(buffer("?"), output)).isInstanceOf(RedisException.class); + } + + @Test + void sillyTestsForEmmaCoverage() { + assertThat(State.Type.valueOf("SINGLE")).isEqualTo(State.Type.SINGLE); + } + + ByteBuf buffer(String content) { + return Unpooled.copiedBuffer(content, charset); + } +} diff --git a/src/test/java/io/lettuce/core/protocol/RedisStateMachineResp3UnitTests.java b/src/test/java/io/lettuce/core/protocol/RedisStateMachineResp3UnitTests.java new file mode 100644 index 0000000000..ed27cc9892 --- /dev/null +++ b/src/test/java/io/lettuce/core/protocol/RedisStateMachineResp3UnitTests.java @@ -0,0 +1,233 @@ +/* + * Copyright 2019-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.protocol; + +import static io.lettuce.core.protocol.RedisStateMachine.State; +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; + +import java.nio.charset.Charset; +import java.nio.charset.StandardCharsets; +import java.util.Arrays; +import java.util.Collections; +import java.util.List; +import java.util.Map; + +import org.apache.logging.log4j.Level; +import org.apache.logging.log4j.LogManager; +import org.apache.logging.log4j.core.LoggerContext; +import org.apache.logging.log4j.core.config.Configuration; +import org.apache.logging.log4j.core.config.LoggerConfig; +import org.junit.jupiter.api.*; + +import io.lettuce.core.RedisException; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.codec.Utf8StringCodec; +import io.lettuce.core.output.*; +import io.netty.buffer.ByteBuf; +import io.netty.buffer.ByteBufAllocator; +import io.netty.buffer.Unpooled; + +/** + * Unit tests for {@link RedisStateMachine} using RESP3. + * + * @author Mark Paluch + */ +class RedisStateMachineResp3UnitTests { + + private RedisCodec codec = StringCodec.UTF8; + private Charset charset = StandardCharsets.UTF_8; + private CommandOutput output; + private RedisStateMachine rsm; + + @BeforeAll + static void beforeClass() { + + LoggerContext ctx = (LoggerContext) LogManager.getContext(); + Configuration config = ctx.getConfiguration(); + LoggerConfig loggerConfig = config.getLoggerConfig(RedisStateMachine.class.getName()); + loggerConfig.setLevel(Level.ALL); + } + + @AfterAll + static void afterClass() { + LoggerContext ctx = (LoggerContext) LogManager.getContext(); + Configuration config = ctx.getConfiguration(); + LoggerConfig loggerConfig = config.getLoggerConfig(RedisStateMachine.class.getName()); + loggerConfig.setLevel(null); + } + + @BeforeEach + final void createStateMachine() { + output = new StatusOutput<>(codec); + rsm = new RedisStateMachine(ByteBufAllocator.DEFAULT); + rsm.setProtocolVersion(ProtocolVersion.RESP3); + } + + @AfterEach + void tearDown() { + rsm.close(); + } + + @Test + void single() { + assertThat(rsm.decode(buffer("+OK\r\n"), output)).isTrue(); + assertThat(output.get()).isEqualTo("OK"); + } + + @Test + void error() { + assertThat(rsm.decode(buffer("-ERR\r\n"), output)).isTrue(); + assertThat(output.getError()).isEqualTo("ERR"); + } + + @Test + void errorWithoutLineBreak() { + assertThat(rsm.decode(buffer("-ERR"), output)).isFalse(); + assertThat(rsm.decode(buffer("\r\n"), output)).isTrue(); + assertThat(output.getError()).isEqualTo(""); + } + + @Test + void integer() { + CommandOutput output = new IntegerOutput<>(codec); + assertThat(rsm.decode(buffer(":1\r\n"), output)).isTrue(); + assertThat((long) output.get()).isEqualTo(1); + } + + @Test + void floatNumber() { + CommandOutput output = new DoubleOutput<>(codec); + assertThat(rsm.decode(buffer(",12.345\r\n"), output)).isTrue(); + assertThat(output.get()).isEqualTo(12.345); + } + + @Test + void bigNumber() { + CommandOutput output = new StatusOutput<>(codec); + assertThat(rsm.decode(buffer("(3492890328409238509324850943850943825024385\r\n"), output)).isTrue(); + assertThat(output.get()).isEqualTo("3492890328409238509324850943850943825024385"); + } + + @Test + void booleanValue() { + CommandOutput output = new BooleanOutput<>(codec); + assertThat(rsm.decode(buffer("#t\r\n"), output)).isTrue(); + assertThat(output.get()).isTrue(); + + output = new BooleanOutput<>(codec); + 
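// RESP3 encodes booleans as #t and #f; decode the false case with a fresh output +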
assertThat(rsm.decode(buffer("#f\r\n"), output)).isTrue(); + assertThat(output.get()).isFalse(); + } + + @Test + void hello() { + CommandOutput> output = new GenericMapOutput<>(codec); + assertThat( + rsm.decode(buffer("%7\r\n" + "$6\r\nserver\r\n$5\r\nredis\r\n" + "$7\r\nversion\r\n$11\r\n999.999.999\r\n" + + "$5\r\nproto\r\n:3\r\n" + "$2\r\nid\r\n:184\r\n" + "$4\r\nmode\r\n$10\r\nstandalone\r\n" + + "$4\r\nrole\r\n$6\r\nmaster\r\n" + "$7\r\nmodules\r\n*0\r\n"), + output)).isTrue(); + assertThat(output.get()).containsEntry("mode", "standalone"); + } + + @Test + void bulk() { + CommandOutput output = new ValueOutput<>(codec); + assertThat(rsm.decode(buffer("$-1\r\n"), output)).isTrue(); + assertThat(output.get()).isNull(); + assertThat(rsm.decode(buffer("$3\r\nfoo\r\n"), output)).isTrue(); + assertThat(output.get()).isEqualTo("foo"); + } + + @Test + void multi() { + CommandOutput> output = new ValueListOutput<>(codec); + ByteBuf buffer = buffer("*2\r\n$-1\r\n$2\r\nok\r\n"); + assertThat(rsm.decode(buffer, output)).isTrue(); + assertThat(output.get()).isEqualTo(Arrays.asList(null, "ok")); + } + + @Test + void multiSet() { + CommandOutput> output = new ValueListOutput<>(codec); + ByteBuf buffer = buffer("~2\r\n$-1\r\n$2\r\nok\r\n"); + assertThat(rsm.decode(buffer, output)).isTrue(); + assertThat(output.get()).isEqualTo(Arrays.asList(null, "ok")); + } + + @Test + void multiMap() { + CommandOutput> output = new GenericMapOutput<>(codec); + ByteBuf buffer = buffer("%1\r\n$3\r\nfoo\r\n$2\r\nok\r\n"); + assertThat(rsm.decode(buffer, output)).isTrue(); + assertThat(output.get()).containsEntry("foo", "ok"); + } + + @Test + void multiEmptyArray1() { + CommandOutput> output = new NestedMultiOutput<>(codec); + ByteBuf buffer = buffer("*2\r\n$3\r\nABC\r\n*0\r\n"); + assertThat(rsm.decode(buffer, output)).isTrue(); + assertThat(output.get().get(0)).isEqualTo("ABC"); + assertThat(output.get().get(1)).isEqualTo(Arrays.asList()); + assertThat(output.get().size()).isEqualTo(2); + } + + @Test + void multiEmptyArray2() { + CommandOutput> output = new NestedMultiOutput<>(codec); + ByteBuf buffer = buffer("*2\r\n*0\r\n$3\r\nABC\r\n"); + assertThat(rsm.decode(buffer, output)).isTrue(); + assertThat(output.get().get(0)).isEqualTo(Arrays.asList()); + assertThat(output.get().get(1)).isEqualTo("ABC"); + assertThat(output.get().size()).isEqualTo(2); + } + + @Test + void multiEmptyArray3() { + CommandOutput> output = new NestedMultiOutput<>(codec); + ByteBuf buffer = buffer("*2\r\n*2\r\n$2\r\nAB\r\n$2\r\nXY\r\n*0\r\n"); + assertThat(rsm.decode(buffer, output)).isTrue(); + assertThat(output.get().get(0)).isEqualTo(Arrays.asList("AB", "XY")); + assertThat(output.get().get(1)).isEqualTo(Collections.emptyList()); + assertThat(output.get().size()).isEqualTo(2); + } + + @Test + void partialFirstLine() { + assertThat(rsm.decode(buffer("+"), output)).isFalse(); + assertThat(rsm.decode(buffer("-"), output)).isFalse(); + assertThat(rsm.decode(buffer(":"), output)).isFalse(); + assertThat(rsm.decode(buffer("$"), output)).isFalse(); + assertThat(rsm.decode(buffer("*"), output)).isFalse(); + } + + @Test + void invalidReplyType() { + assertThatThrownBy(() -> rsm.decode(buffer("?"), output)).isInstanceOf(RedisException.class); + } + + @Test + void sillyTestsForEmmaCoverage() { + assertThat(State.Type.valueOf("SINGLE")).isEqualTo(State.Type.SINGLE); + } + + ByteBuf buffer(String content) { + return Unpooled.copiedBuffer(content, charset); + } +} diff --git a/src/test/java/io/lettuce/core/protocol/StateMachineUnitTests.java 
b/src/test/java/io/lettuce/core/protocol/StateMachineUnitTests.java new file mode 100644 index 0000000000..0aff679355 --- /dev/null +++ b/src/test/java/io/lettuce/core/protocol/StateMachineUnitTests.java @@ -0,0 +1,188 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.protocol; + +import static io.lettuce.core.protocol.RedisStateMachine.State; +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; + +import java.nio.charset.StandardCharsets; +import java.util.Arrays; +import java.util.List; + +import org.apache.logging.log4j.Level; +import org.apache.logging.log4j.LogManager; +import org.apache.logging.log4j.core.LoggerContext; +import org.apache.logging.log4j.core.config.Configuration; +import org.apache.logging.log4j.core.config.LoggerConfig; +import org.junit.jupiter.api.*; + +import io.lettuce.core.RedisException; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.output.*; +import io.netty.buffer.ByteBuf; +import io.netty.buffer.ByteBufAllocator; +import io.netty.buffer.Unpooled; + +/** + * @author Will Glozer + * @author Mark Paluch + */ +class StateMachineUnitTests { + private RedisCodec codec = StringCodec.UTF8; + private CommandOutput output; + private RedisStateMachine rsm; + + @BeforeAll + static void beforeClass() { + + LoggerContext ctx = (LoggerContext) LogManager.getContext(); + Configuration config = ctx.getConfiguration(); + LoggerConfig loggerConfig = config.getLoggerConfig(RedisStateMachine.class.getName()); + loggerConfig.setLevel(Level.ALL); + } + + @AfterAll + static void afterClass() { + LoggerContext ctx = (LoggerContext) LogManager.getContext(); + Configuration config = ctx.getConfiguration(); + LoggerConfig loggerConfig = config.getLoggerConfig(RedisStateMachine.class.getName()); + loggerConfig.setLevel(null); + } + + @BeforeEach + final void createStateMachine() { + output = new StatusOutput<>(codec); + rsm = new RedisStateMachine(ByteBufAllocator.DEFAULT); + } + + @AfterEach + void tearDown() { + rsm.close(); + } + + @Test + void errorShouldSwitchToResp2Protocol() { + assertThat(rsm.decode(buffer("-ERR\r\n"), output)).isTrue(); + assertThat(output.getError()).isEqualTo("ERR"); + assertThat(rsm.isDiscoverProtocol()).isFalse(); + assertThat(rsm.getProtocolVersion()).isEqualTo(ProtocolVersion.RESP2); + } + + @Test + void helloShouldSwitchToResp3() { + assertThat(rsm.decode(buffer("@0\r\n"), output)).isTrue(); + assertThat(rsm.isDiscoverProtocol()).isFalse(); + assertThat(rsm.getProtocolVersion()).isEqualTo(ProtocolVersion.RESP3); + } + + @Test + void single() { + assertThat(rsm.decode(buffer("+OK\r\n"), output)).isTrue(); + assertThat(output.get()).isEqualTo("OK"); + } + + @Test + void error() { + assertThat(rsm.decode(buffer("-ERR\r\n"), output)).isTrue(); + assertThat(output.getError()).isEqualTo("ERR"); + } + + @Test + void 
errorWithoutLineBreak() { + assertThat(rsm.decode(buffer("-ERR"), output)).isFalse(); + assertThat(rsm.decode(buffer("\r\n"), output)).isTrue(); + assertThat(output.getError()).isEqualTo(""); + } + + @Test + void integer() { + CommandOutput output = new IntegerOutput<>(codec); + assertThat(rsm.decode(buffer(":1\r\n"), output)).isTrue(); + assertThat((long) output.get()).isEqualTo(1); + } + + @Test + void bulk() { + CommandOutput output = new ValueOutput<>(codec); + assertThat(rsm.decode(buffer("$-1\r\n"), output)).isTrue(); + assertThat(output.get()).isNull(); + assertThat(rsm.decode(buffer("$3\r\nfoo\r\n"), output)).isTrue(); + assertThat(output.get()).isEqualTo("foo"); + } + + @Test + void multi() { + CommandOutput> output = new ValueListOutput<>(codec); + ByteBuf buffer = buffer("*2\r\n$-1\r\n$2\r\nok\r\n"); + assertThat(rsm.decode(buffer, output)).isTrue(); + assertThat(output.get()).isEqualTo(Arrays.asList(null, "ok")); + } + + @Test + void multiEmptyArray1() { + CommandOutput> output = new NestedMultiOutput<>(codec); + ByteBuf buffer = buffer("*2\r\n$3\r\nABC\r\n*0\r\n"); + assertThat(rsm.decode(buffer, output)).isTrue(); + assertThat(output.get().get(0)).isEqualTo("ABC"); + assertThat(output.get().get(1)).isEqualTo(Arrays.asList()); + assertThat(output.get().size()).isEqualTo(2); + } + + @Test + void multiEmptyArray2() { + CommandOutput> output = new NestedMultiOutput<>(codec); + ByteBuf buffer = buffer("*2\r\n*0\r\n$3\r\nABC\r\n"); + assertThat(rsm.decode(buffer, output)).isTrue(); + assertThat(output.get().get(0)).isEqualTo(Arrays.asList()); + assertThat(output.get().get(1)).isEqualTo("ABC"); + assertThat(output.get().size()).isEqualTo(2); + } + + @Test + void multiEmptyArray3() { + CommandOutput> output = new NestedMultiOutput<>(codec); + ByteBuf buffer = buffer("*2\r\n*2\r\n$2\r\nAB\r\n$2\r\nXY\r\n*0\r\n"); + assertThat(rsm.decode(buffer, output)).isTrue(); + assertThat(output.get().get(0)).isEqualTo(Arrays.asList("AB", "XY")); + assertThat(output.get().get(1)).isEqualTo(Arrays.asList()); + assertThat(output.get().size()).isEqualTo(2); + } + + @Test + void partialFirstLine() { + assertThat(rsm.decode(buffer("+"), output)).isFalse(); + assertThat(rsm.decode(buffer("-"), output)).isFalse(); + assertThat(rsm.decode(buffer(":"), output)).isFalse(); + assertThat(rsm.decode(buffer("$"), output)).isFalse(); + assertThat(rsm.decode(buffer("*"), output)).isFalse(); + } + + @Test + void invalidReplyType() { + assertThatThrownBy(() -> rsm.decode(buffer("?"), output)).isInstanceOf(RedisException.class); + } + + @Test + void sillyTestsForEmmaCoverage() { + assertThat(State.Type.valueOf("SINGLE")).isEqualTo(State.Type.SINGLE); + } + + ByteBuf buffer(String content) { + return Unpooled.copiedBuffer(content, StandardCharsets.UTF_8); + } +} diff --git a/src/test/java/io/lettuce/core/protocol/TransactionalCommandUnitTests.java b/src/test/java/io/lettuce/core/protocol/TransactionalCommandUnitTests.java new file mode 100644 index 0000000000..7915d3e249 --- /dev/null +++ b/src/test/java/io/lettuce/core/protocol/TransactionalCommandUnitTests.java @@ -0,0 +1,42 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.protocol; + +import static org.assertj.core.api.Assertions.assertThat; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.RedisException; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.output.StatusOutput; + +/** + * @author Mark Paluch + */ +class TransactionalCommandUnitTests { + + @Test + void shouldCompleteOnException() { + + RedisCommand inner = new Command<>(CommandType.SET, new StatusOutput<>(StringCodec.UTF8)); + + TransactionalCommand command = new TransactionalCommand<>(new AsyncCommand<>(inner)); + + command.completeExceptionally(new RedisException("foo")); + + assertThat(command).isCompletedExceptionally(); + } +} diff --git a/src/test/java/io/lettuce/core/pubsub/PubSubCommandHandlerUnitTests.java b/src/test/java/io/lettuce/core/pubsub/PubSubCommandHandlerUnitTests.java new file mode 100644 index 0000000000..85738c4567 --- /dev/null +++ b/src/test/java/io/lettuce/core/pubsub/PubSubCommandHandlerUnitTests.java @@ -0,0 +1,332 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.pubsub; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.mockito.Matchers.any; +import static org.mockito.Mockito.doAnswer; +import static org.mockito.Mockito.times; +import static org.mockito.Mockito.verify; +import static org.mockito.Mockito.when; + +import java.util.Queue; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; +import org.mockito.ArgumentCaptor; +import org.mockito.Mock; +import org.mockito.junit.jupiter.MockitoExtension; +import org.mockito.junit.jupiter.MockitoSettings; +import org.mockito.quality.Strictness; +import org.mockito.stubbing.Answer; +import org.springframework.test.util.ReflectionTestUtils; + +import io.lettuce.core.ClientOptions; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.metrics.DefaultCommandLatencyCollector; +import io.lettuce.core.metrics.DefaultCommandLatencyCollectorOptions; +import io.lettuce.core.output.StatusOutput; +import io.lettuce.core.protocol.Command; +import io.lettuce.core.protocol.CommandType; +import io.lettuce.core.protocol.RedisCommand; +import io.lettuce.core.resource.ClientResources; +import io.lettuce.core.tracing.Tracing; +import io.netty.buffer.ByteBuf; +import io.netty.buffer.ByteBufAllocator; +import io.netty.buffer.Unpooled; +import io.netty.channel.*; + +/** + * @author Mark Paluch + * @author Giridhar Kannan + */ +@ExtendWith(MockitoExtension.class) +@MockitoSettings(strictness = Strictness.LENIENT) +class PubSubCommandHandlerUnitTests { + + private Queue> stack; + + private PubSubCommandHandler sut; + + private final Command command = new Command<>(CommandType.APPEND, + new StatusOutput<>(StringCodec.UTF8), null); + + @Mock + private ChannelHandlerContext context; + + @Mock + private Channel channel; + + @Mock + private ChannelConfig channelConfig; + + @Mock + private ChannelPipeline pipeline; + + @Mock + private EventLoop eventLoop; + + @Mock + private ClientResources clientResources; + + @Mock + private PubSubEndpoint endpoint; + + @SuppressWarnings("unchecked") + @BeforeEach + void before() { + + when(channel.config()).thenReturn(channelConfig); + when(context.alloc()).thenReturn(ByteBufAllocator.DEFAULT); + when(context.channel()).thenReturn(channel); + when(channel.pipeline()).thenReturn(pipeline); + when(channel.eventLoop()).thenReturn(eventLoop); + when(eventLoop.submit(any(Runnable.class))).thenAnswer(invocation -> { + Runnable r = (Runnable) invocation.getArguments()[0]; + r.run(); + return null; + }); + + when(clientResources.commandLatencyCollector()) + .thenReturn(new DefaultCommandLatencyCollector(DefaultCommandLatencyCollectorOptions.create())); + when(clientResources.tracing()).thenReturn(Tracing.disabled()); + + sut = new PubSubCommandHandler<>(ClientOptions.create(), clientResources, StringCodec.UTF8, endpoint); + stack = (Queue) ReflectionTestUtils.getField(sut, "stack"); + } + + @Test + void shouldCompleteCommandExceptionallyOnOutputFailure() throws Exception { + + sut.channelRegistered(context); + sut.channelActive(context); + stack.add(command); + + sut.channelRead(context, responseBytes(":1000\r\n")); + + assertThat(ReflectionTestUtils.getField(command, "exception")).isInstanceOf(IllegalStateException.class); + } + + @Test + void shouldDecodeRegularCommand() throws Exception { + + sut.channelRegistered(context); + sut.channelActive(context); + stack.add(command); + + sut.channelRead(context, responseBytes("+OK\r\n")); + + 
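+ // decoding the "+OK" simple-string reply completes the stacked command with status "OK"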
assertThat(command.get()).isEqualTo("OK"); + } + + @Test + void shouldDecodeTwoCommands() throws Exception { + + Command command1 = new Command<>(CommandType.APPEND, new StatusOutput<>(StringCodec.UTF8), + null); + Command command2 = new Command<>(CommandType.APPEND, new StatusOutput<>(StringCodec.UTF8), + null); + + sut.channelRegistered(context); + sut.channelActive(context); + stack.add(command1); + stack.add(command2); + + sut.channelRead(context, responseBytes("+OK\r\n+YEAH\r\n")); + + assertThat(command1.get()).isEqualTo("OK"); + assertThat(command2.get()).isEqualTo("YEAH"); + } + + @Test + void shouldPropagatePubSubResponseToOutput() throws Exception { + + Command command1 = new Command<>(CommandType.APPEND, new StatusOutput<>(StringCodec.UTF8), + null); + + sut.channelRegistered(context); + sut.channelActive(context); + stack.add(command1); + + sut.channelRead(context, responseBytes("*3\r\n$7\r\nmessage\r\n$3\r\nfoo\r\n$3\r\nbar\r\n")); + + assertThat(command1.isDone()).isFalse(); + + verify(endpoint).notifyMessage(any()); + } + + @Test + void shouldPropagateInterleavedPubSubResponseToOutput() throws Exception { + + Command command1 = new Command<>(CommandType.APPEND, new StatusOutput<>(StringCodec.UTF8), + null); + Command command2 = new Command<>(CommandType.APPEND, new StatusOutput<>(StringCodec.UTF8), + null); + + sut.channelRegistered(context); + sut.channelActive(context); + stack.add(command1); + stack.add(command2); + + sut.channelRead(context, + responseBytes("+OK\r\n*4\r\n$8\r\npmessage\r\n$1\r\n*\r\n$3\r\nfoo\r\n$3\r\nbar\r\n+YEAH\r\n")); + + assertThat(command1.get()).isEqualTo("OK"); + assertThat(command2.get()).isEqualTo("YEAH"); + + ArgumentCaptor captor = ArgumentCaptor.forClass(PubSubOutput.class); + verify(endpoint).notifyMessage(captor.capture()); + + assertThat(captor.getValue().pattern()).isEqualTo("*"); + assertThat(captor.getValue().channel()).isEqualTo("foo"); + assertThat(captor.getValue().get()).isEqualTo("bar"); + } + + @Test + void shouldNotPropagatePartialPubSubResponseToOutput() throws Exception { + + Command command1 = new Command<>(CommandType.SUBSCRIBE, new PubSubOutput<>(StringCodec.UTF8), + null); + Command command2 = new Command<>(CommandType.SUBSCRIBE, new PubSubOutput<>(StringCodec.UTF8), + null); + + sut.channelRegistered(context); + sut.channelActive(context); + stack.add(command1); + stack.add(command2); + + sut.channelRead(context, responseBytes("*3\r\n$9\r\nsubscribe\r\n$1\r\na\r\n:2\r\n*3\r\n$9\r\nsubscribe\r\n")); + + assertThat(command1.isDone()).isTrue(); + assertThat(command2.isDone()).isFalse(); + + assertThat(stack).hasSize(1); + + ArgumentCaptor captor = ArgumentCaptor.forClass(PubSubOutput.class); + verify(endpoint).notifyMessage(captor.capture()); + + assertThat(captor.getValue().channel()).isEqualTo("a"); + assertThat(captor.getValue().count()).isEqualTo(2); + } + + @Test + void shouldCompleteWithChunkedResponseOnStack() throws Exception { + + Command command1 = new Command<>(CommandType.SUBSCRIBE, new PubSubOutput<>(StringCodec.UTF8), + null); + Command command2 = new Command<>(CommandType.SUBSCRIBE, new PubSubOutput<>(StringCodec.UTF8), + null); + + sut.channelRegistered(context); + sut.channelActive(context); + stack.add(command1); + stack.add(command2); + + sut.channelRead(context, responseBytes("*3\r\n$9\r\nsubscribe\r\n$1\r\na\r\n:2\r\n*3\r\n$9\r\nsubscribe\r\n")); + sut.channelRead(context, responseBytes("$1\r\nb\r\n:2\r\n")); + + assertThat(command1.isDone()).isTrue(); + assertThat(command2.isDone()).isTrue(); + + 
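+ // the second chunk completes both pending subscribe replies, leaving the stack empty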
assertThat(stack).isEmpty(); + + ArgumentCaptor captor = ArgumentCaptor.forClass(PubSubOutput.class); + verify(endpoint, times(2)).notifyMessage(captor.capture()); + + assertThat(captor.getAllValues().get(0).channel()).isEqualTo("a"); + assertThat(captor.getAllValues().get(1).channel()).isEqualTo("b"); + } + + @Test + void shouldCompleteWithChunkedResponseOutOfBand() throws Exception { + + sut.channelRegistered(context); + sut.channelActive(context); + + sut.channelRead(context, responseBytes("*3\r\n$9\r\nsubscribe\r\n$1\r\na\r\n:2\r\n*3\r\n$9\r\nsubscribe\r\n")); + sut.channelRead(context, responseBytes("$1\r\nb\r\n:2\r\n")); + + ArgumentCaptor captor = ArgumentCaptor.forClass(PubSubOutput.class); + verify(endpoint, times(2)).notifyMessage(captor.capture()); + + assertThat(captor.getAllValues().get(0).channel()).isEqualTo("a"); + assertThat(captor.getAllValues().get(1).channel()).isEqualTo("b"); + } + + @Test + void shouldCompleteUnsubscribe() throws Exception { + + Command subCmd = new Command<>(CommandType.SUBSCRIBE, new PubSubOutput<>(StringCodec.UTF8), + null); + Command unSubCmd = new Command<>(CommandType.UNSUBSCRIBE, new PubSubOutput<>(StringCodec.UTF8), + null); + + doAnswer((Answer>) inv -> { + PubSubOutput out = inv.getArgument(0); + if (out.type() == PubSubOutput.Type.message) { + throw new NullPointerException("Expected exception"); + } + return endpoint; + }).when(endpoint).notifyMessage(any()); + + sut.channelRegistered(context); + sut.channelActive(context); + + stack.add(subCmd); + stack.add(unSubCmd); + ByteBuf buf = responseBytes("*3\r\n$9\r\nsubscribe\r\n$10\r\ntest_sub_0\r\n:1\r\n" + + "*3\r\n$7\r\nmessage\r\n$10\r\ntest_sub_0\r\n$3\r\nabc\r\n" + + "*3\r\n$11\r\nunsubscribe\r\n$10\r\ntest_sub_0\r\n:0\r\n"); + sut.channelRead(context, buf); + sut.channelRead(context, responseBytes("*3\r\n$7\r\nmessage\r\n$10\r\ntest_sub_1\r\n$3\r\nabc\r\n")); + + assertThat(unSubCmd.isDone()).isTrue(); + } + + @Test + void shouldCompleteWithChunkedResponseInterleavedSending() throws Exception { + + Command command1 = new Command<>(CommandType.SUBSCRIBE, new PubSubOutput<>(StringCodec.UTF8), + null); + + sut.channelRegistered(context); + sut.channelActive(context); + + sut.channelRegistered(context); + sut.channelActive(context); + + sut.channelRead(context, responseBytes("*3\r\n$7\r\nmessage\r\n$3")); + stack.add(command1); + sut.channelRead(context, responseBytes("\r\nfoo\r\n$3\r\nbar\r\n")); + sut.channelRead(context, responseBytes("*3\r\n$9\r\nsubscribe\r\n$1\r\na\r\n:2")); + sut.channelRead(context, responseBytes("\r\n")); + + assertThat(command1.isDone()).isTrue(); + assertThat(stack).isEmpty(); + + ArgumentCaptor captor = ArgumentCaptor.forClass(PubSubOutput.class); + verify(endpoint, times(2)).notifyMessage(captor.capture()); + + assertThat(captor.getAllValues().get(0).channel()).isEqualTo("foo"); + assertThat(captor.getAllValues().get(0).get()).isEqualTo("bar"); + assertThat(captor.getAllValues().get(1).channel()).isEqualTo("a"); + } + + private static ByteBuf responseBytes(String s) { + return Unpooled.wrappedBuffer(s.getBytes()); + } +} diff --git a/src/test/java/io/lettuce/core/pubsub/PubSubCommandTest.java b/src/test/java/io/lettuce/core/pubsub/PubSubCommandTest.java new file mode 100644 index 0000000000..5c8b163cb9 --- /dev/null +++ b/src/test/java/io/lettuce/core/pubsub/PubSubCommandTest.java @@ -0,0 +1,512 @@ +/* + * Copyright 2011-2020 the original author or authors. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.pubsub; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; +import static org.hamcrest.CoreMatchers.hasItem; +import static org.junit.Assert.assertThat; + +import java.time.Duration; +import java.util.ArrayList; +import java.util.List; +import java.util.Map; +import java.util.concurrent.BlockingQueue; +import java.util.concurrent.TimeUnit; +import java.util.regex.Matcher; +import java.util.regex.Pattern; + +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; + +import io.lettuce.core.*; +import io.lettuce.core.api.async.RedisAsyncCommands; +import io.lettuce.core.internal.LettuceFactories; +import io.lettuce.core.protocol.ProtocolVersion; +import io.lettuce.core.pubsub.api.async.RedisPubSubAsyncCommands; +import io.lettuce.test.Delay; +import io.lettuce.test.TestFutures; +import io.lettuce.test.Wait; +import io.lettuce.test.WithPassword; +import io.lettuce.test.condition.EnabledOnCommand; +import io.lettuce.test.resource.FastShutdown; +import io.lettuce.test.resource.TestClientResources; + +/** + * @author Will Glozer + * @author Mark Paluch + * @author Tugdual Grall + */ +class PubSubCommandTest extends AbstractRedisClientTest implements RedisPubSubListener { + + private RedisPubSubAsyncCommands pubsub; + + private BlockingQueue channels; + private BlockingQueue patterns; + private BlockingQueue messages; + private BlockingQueue counts; + + private String channel = "channel0"; + private String pattern = "channel*"; + private String message = "msg!"; + + @BeforeEach + void openPubSubConnection() { + try { + pubsub = client.connectPubSub().async(); + pubsub.getStatefulConnection().addListener(this); + } finally { + channels = LettuceFactories.newBlockingQueue(); + patterns = LettuceFactories.newBlockingQueue(); + messages = LettuceFactories.newBlockingQueue(); + counts = LettuceFactories.newBlockingQueue(); + } + } + + @AfterEach + void closePubSubConnection() { + if (pubsub != null) { + pubsub.getStatefulConnection().close(); + } + } + + @Test + void auth() { + WithPassword.run(client, () -> { + + client.setOptions( + ClientOptions.builder().protocolVersion(ProtocolVersion.RESP2).pingBeforeActivateConnection(false).build()); + RedisPubSubAsyncCommands connection = client.connectPubSub().async(); + connection.getStatefulConnection().addListener(PubSubCommandTest.this); + connection.auth(passwd); + + connection.subscribe(channel); + assertThat(channels.take()).isEqualTo(channel); + }); + } + + @Test + @EnabledOnCommand("ACL") + void authWithUsername() { + WithPassword.run(client, () -> { + + client.setOptions( + ClientOptions.builder().protocolVersion(ProtocolVersion.RESP2).pingBeforeActivateConnection(false).build()); + RedisPubSubAsyncCommands connection = client.connectPubSub().async(); + connection.getStatefulConnection().addListener(PubSubCommandTest.this); + 
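+ // authenticate with ACL username and password before subscribing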
connection.auth(username, passwd); + + connection.subscribe(channel); + assertThat(channels.take()).isEqualTo(channel); + }); + } + + @Test + void authWithReconnect() { + + WithPassword.run(client, () -> { + + client.setOptions( + ClientOptions.builder().protocolVersion(ProtocolVersion.RESP2).pingBeforeActivateConnection(false).build()); + + RedisPubSubAsyncCommands connection = client.connectPubSub().async(); + connection.getStatefulConnection().addListener(PubSubCommandTest.this); + connection.auth(passwd); + + connection.clientSetname("authWithReconnect"); + connection.subscribe(channel).get(); + + assertThat(channels.take()).isEqualTo(channel); + + redis.auth(passwd); + long id = findNamedClient("authWithReconnect"); + redis.clientKill(KillArgs.Builder.id(id)); + + Delay.delay(Duration.ofMillis(100)); + Wait.untilTrue(connection::isOpen).waitOrTimeout(); + + assertThat(channels.take()).isEqualTo(channel); + }); + } + + @Test + @EnabledOnCommand("ACL") + void authWithUsernameAndReconnect() { + + WithPassword.run(client, () -> { + + client.setOptions( + ClientOptions.builder().protocolVersion(ProtocolVersion.RESP2).pingBeforeActivateConnection(false).build()); + + RedisPubSubAsyncCommands connection = client.connectPubSub().async(); + connection.getStatefulConnection().addListener(PubSubCommandTest.this); + connection.auth(username, passwd); + connection.clientSetname("authWithReconnect"); + connection.subscribe(channel).get(); + + assertThat(channels.take()).isEqualTo(channel); + + long id = findNamedClient("authWithReconnect"); + redis.auth(username, passwd); + redis.clientKill(KillArgs.Builder.id(id)); + + Delay.delay(Duration.ofMillis(100)); + Wait.untilTrue(connection::isOpen).waitOrTimeout(); + + assertThat(channels.take()).isEqualTo(channel); + }); + } + + private long findNamedClient(String name) { + + Pattern pattern = Pattern.compile(".*id=(\\d+).*name=" + name + ".*", Pattern.MULTILINE); + String clients = redis.clientList(); + Matcher matcher = pattern.matcher(clients); + + if (!matcher.find()) { + throw new IllegalStateException("Cannot find PubSub client in: " + clients); + } + + return Long.parseLong(matcher.group(1)); + } + + @Test + void message() throws Exception { + pubsub.subscribe(channel); + assertThat(channels.take()).isEqualTo(channel); + + redis.publish(channel, message); + assertThat(channels.take()).isEqualTo(channel); + assertThat(messages.take()).isEqualTo(message); + } + + @Test + void pipelinedMessage() throws Exception { + pubsub.subscribe(channel); + assertThat(channels.take()).isEqualTo(channel); + RedisAsyncCommands connection = client.connect().async(); + + connection.setAutoFlushCommands(false); + connection.publish(channel, message); + Delay.delay(Duration.ofMillis(100)); + + assertThat(channels).isEmpty(); + connection.flushCommands(); + + assertThat(channels.take()).isEqualTo(channel); + assertThat(messages.take()).isEqualTo(message); + + connection.getStatefulConnection().close(); + } + + @Test + void pmessage() throws Exception { + pubsub.psubscribe(pattern).await(1, TimeUnit.MINUTES); + assertThat(patterns.take()).isEqualTo(pattern); + + redis.publish(channel, message); + assertThat(patterns.take()).isEqualTo(pattern); + assertThat(channels.take()).isEqualTo(channel); + assertThat(messages.take()).isEqualTo(message); + + redis.publish("channel2", "msg 2!"); + assertThat(patterns.take()).isEqualTo(pattern); + assertThat(channels.take()).isEqualTo("channel2"); + assertThat(messages.take()).isEqualTo("msg 2!"); + } + + @Test + void 
pipelinedSubscribe() throws Exception { + + pubsub.setAutoFlushCommands(false); + pubsub.subscribe(channel); + Delay.delay(Duration.ofMillis(100)); + assertThat(channels).isEmpty(); + pubsub.flushCommands(); + + assertThat(channels.take()).isEqualTo(channel); + + redis.publish(channel, message); + + assertThat(channels.take()).isEqualTo(channel); + assertThat(messages.take()).isEqualTo(message); + + } + + @Test + void psubscribe() throws Exception { + RedisFuture psubscribe = pubsub.psubscribe(pattern); + assertThat(TestFutures.getOrTimeout(psubscribe)).isNull(); + assertThat(psubscribe.getError()).isNull(); + assertThat(psubscribe.isCancelled()).isFalse(); + assertThat(psubscribe.isDone()).isTrue(); + + assertThat(patterns.take()).isEqualTo(pattern); + assertThat((long) counts.take()).isEqualTo(1); + } + + @Test + void psubscribeWithListener() throws Exception { + RedisFuture psubscribe = pubsub.psubscribe(pattern); + final List listener = new ArrayList<>(); + + psubscribe.thenAccept(aVoid -> listener.add("done")); + psubscribe.await(1, TimeUnit.MINUTES); + + assertThat(patterns.take()).isEqualTo(pattern); + assertThat((long) counts.take()).isEqualTo(1); + assertThat(listener).hasSize(1); + } + + @Test + void pubsubEmptyChannels() { + assertThatThrownBy(() -> pubsub.subscribe()).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void pubsubChannels() { + TestFutures.awaitOrTimeout(pubsub.subscribe(channel)); + List result = redis.pubsubChannels(); + assertThat(result).contains(channel); + + } + + @Test + void pubsubMultipleChannels() { + TestFutures.awaitOrTimeout(pubsub.subscribe(channel, "channel1", "channel3")); + + List result = redis.pubsubChannels(); + assertThat(result).contains(channel, "channel1", "channel3"); + + } + + @Test + void pubsubChannelsWithArg() { + TestFutures.awaitOrTimeout(pubsub.subscribe(channel)); + List result = redis.pubsubChannels(pattern); + assertThat(result, hasItem(channel)); + } + + @Test + void pubsubNumsub() { + + TestFutures.awaitOrTimeout(pubsub.subscribe(channel)); + + Map result = redis.pubsubNumsub(channel); + assertThat(result.size()).isGreaterThan(0); + assertThat(result).containsKeys(channel); + } + + @Test + void pubsubNumpat() { + + TestFutures.awaitOrTimeout(pubsub.psubscribe(pattern)); + Long result = redis.pubsubNumpat(); + assertThat(result.longValue()).isGreaterThan(0); // Redis sometimes keeps old references + } + + @Test + void punsubscribe() throws Exception { + TestFutures.awaitOrTimeout(pubsub.punsubscribe(pattern)); + assertThat(patterns.take()).isEqualTo(pattern); + assertThat((long) counts.take()).isEqualTo(0); + + } + + @Test + void subscribe() throws Exception { + pubsub.subscribe(channel); + assertThat(channels.take()).isEqualTo(channel); + assertThat((long) counts.take()).isEqualTo(1); + } + + @Test + void unsubscribe() throws Exception { + TestFutures.awaitOrTimeout(pubsub.unsubscribe(channel)); + assertThat(channels.take()).isEqualTo(channel); + assertThat((long) counts.take()).isEqualTo(0); + + RedisFuture future = pubsub.unsubscribe(); + + assertThat(TestFutures.getOrTimeout(future)).isNull(); + assertThat(future.getError()).isNull(); + + assertThat(channels).isEmpty(); + assertThat(patterns).isEmpty(); + } + + @Test + void pubsubCloseOnClientShutdown() { + + RedisClient redisClient = RedisClient.create(TestClientResources.get(), RedisURI.Builder.redis(host, port).build()); + + RedisPubSubAsyncCommands connection = redisClient.connectPubSub().async(); + + FastShutdown.shutdown(redisClient); + + 
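+ // shutting down the client should also close its pub/sub connection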
assertThat(connection.isOpen()).isFalse(); + } + + @Test + void utf8Channel() throws Exception { + String channel = "channelλ"; + String message = "αβγ"; + + pubsub.subscribe(channel); + assertThat(channels.take()).isEqualTo(channel); + + redis.publish(channel, message); + assertThat(channels.take()).isEqualTo(channel); + assertThat(messages.take()).isEqualTo(message); + } + + @Test + void resubscribeChannelsOnReconnect() throws Exception { + pubsub.subscribe(channel); + assertThat(channels.take()).isEqualTo(channel); + assertThat((long) counts.take()).isEqualTo(1); + + pubsub.quit(); + + assertThat(channels.take()).isEqualTo(channel); + assertThat((long) counts.take()).isEqualTo(1); + + Wait.untilTrue(pubsub::isOpen).waitOrTimeout(); + + redis.publish(channel, message); + assertThat(channels.take()).isEqualTo(channel); + assertThat(messages.take()).isEqualTo(message); + } + + @Test + void resubscribePatternsOnReconnect() throws Exception { + pubsub.psubscribe(pattern); + assertThat(patterns.take()).isEqualTo(pattern); + assertThat((long) counts.take()).isEqualTo(1); + + pubsub.quit(); + + assertThat(patterns.take()).isEqualTo(pattern); + assertThat((long) counts.take()).isEqualTo(1); + + Wait.untilTrue(pubsub::isOpen).waitOrTimeout(); + + redis.publish(channel, message); + assertThat(channels.take()).isEqualTo(channel); + assertThat(messages.take()).isEqualTo(message); + } + + @Test + void adapter() throws Exception { + final BlockingQueue localCounts = LettuceFactories.newBlockingQueue(); + + RedisPubSubAdapter adapter = new RedisPubSubAdapter() { + @Override + public void subscribed(String channel, long count) { + super.subscribed(channel, count); + localCounts.add(count); + } + + @Override + public void unsubscribed(String channel, long count) { + super.unsubscribed(channel, count); + localCounts.add(count); + } + }; + + pubsub.getStatefulConnection().addListener(adapter); + pubsub.subscribe(channel); + pubsub.psubscribe(pattern); + + assertThat((long) localCounts.take()).isEqualTo(1L); + + redis.publish(channel, message); + pubsub.punsubscribe(pattern); + pubsub.unsubscribe(channel); + + assertThat((long) localCounts.take()).isEqualTo(0L); + } + + @Test + void removeListener() throws Exception { + pubsub.subscribe(channel); + assertThat(channels.take()).isEqualTo(channel); + + redis.publish(channel, message); + assertThat(channels.take()).isEqualTo(channel); + assertThat(messages.take()).isEqualTo(message); + + pubsub.getStatefulConnection().removeListener(this); + + redis.publish(channel, message); + assertThat(channels.poll(10, TimeUnit.MILLISECONDS)).isNull(); + assertThat(messages.poll(10, TimeUnit.MILLISECONDS)).isNull(); + } + + @Test + void pingNotAllowedInSubscriptionState() { + + TestFutures.awaitOrTimeout(pubsub.subscribe(channel)); + + assertThatThrownBy(() -> TestFutures.getOrTimeout(pubsub.ping())).isInstanceOf(RedisException.class) + .hasMessageContaining("not allowed"); + pubsub.unsubscribe(channel); + + Wait.untilTrue(() -> channels.size() == 2).waitOrTimeout(); + + assertThat(TestFutures.getOrTimeout(pubsub.ping())).isEqualTo("PONG"); + } + + // RedisPubSubListener implementation + + @Override + public void message(String channel, String message) { + channels.add(channel); + messages.add(message); + } + + @Override + public void message(String pattern, String channel, String message) { + patterns.add(pattern); + channels.add(channel); + messages.add(message); + } + + @Override + public void subscribed(String channel, long count) { + channels.add(channel); + 
counts.add(count); + } + + @Override + public void psubscribed(String pattern, long count) { + patterns.add(pattern); + counts.add(count); + } + + @Override + public void unsubscribed(String channel, long count) { + channels.add(channel); + counts.add(count); + } + + @Override + public void punsubscribed(String pattern, long count) { + patterns.add(pattern); + counts.add(count); + } +} diff --git a/src/test/java/io/lettuce/core/pubsub/PubSubEndpointUnitTests.java b/src/test/java/io/lettuce/core/pubsub/PubSubEndpointUnitTests.java new file mode 100644 index 0000000000..6e3539b2be --- /dev/null +++ b/src/test/java/io/lettuce/core/pubsub/PubSubEndpointUnitTests.java @@ -0,0 +1,127 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.pubsub; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.nio.ByteBuffer; +import java.util.concurrent.atomic.AtomicInteger; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.ByteBufferCodec; +import io.lettuce.core.ClientOptions; +import io.lettuce.core.codec.ByteArrayCodec; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.test.resource.TestClientResources; + +/** + * Unit tests for {@link PubSubEndpoint}. 
+ * + * @author Mark Paluch + */ +class PubSubEndpointUnitTests { + + @Test + void shouldRetainUniqueChannelNames() { + + PubSubEndpoint sut = new PubSubEndpoint<>(ClientOptions.create(), TestClientResources.get()); + + sut.notifyMessage(createMessage("subscribe", "channel1", StringCodec.UTF8)); + sut.notifyMessage(createMessage("subscribe", "channel1", StringCodec.UTF8)); + sut.notifyMessage(createMessage("subscribe", "channel1", StringCodec.UTF8)); + sut.notifyMessage(createMessage("subscribe", "channel2", StringCodec.UTF8)); + + assertThat(sut.getChannels()).hasSize(2).containsOnly("channel1", "channel2"); + } + + @Test + void shouldRetainUniqueBinaryChannelNames() { + + PubSubEndpoint sut = new PubSubEndpoint<>(ClientOptions.create(), TestClientResources.get()); + + sut.notifyMessage(createMessage("subscribe", "channel1", ByteArrayCodec.INSTANCE)); + sut.notifyMessage(createMessage("subscribe", "channel1", ByteArrayCodec.INSTANCE)); + sut.notifyMessage(createMessage("subscribe", "channel1", ByteArrayCodec.INSTANCE)); + sut.notifyMessage(createMessage("subscribe", "channel2", ByteArrayCodec.INSTANCE)); + + assertThat(sut.getChannels()).hasSize(2); + } + + @Test + void shouldRetainUniqueByteBufferChannelNames() { + + PubSubEndpoint sut = new PubSubEndpoint<>(ClientOptions.create(), TestClientResources.get()); + + sut.notifyMessage(createMessage("subscribe", "channel1", new ByteBufferCodec())); + sut.notifyMessage(createMessage("subscribe", "channel1", new ByteBufferCodec())); + sut.notifyMessage(createMessage("subscribe", "channel1", new ByteBufferCodec())); + sut.notifyMessage(createMessage("subscribe", "channel2", new ByteBufferCodec())); + + assertThat(sut.getChannels()).hasSize(2).containsOnly(ByteBuffer.wrap("channel1".getBytes()), + ByteBuffer.wrap("channel2".getBytes())); + } + + @Test + void addsAndRemovesChannels() { + + PubSubEndpoint sut = new PubSubEndpoint<>(ClientOptions.create(), TestClientResources.get()); + + sut.notifyMessage(createMessage("subscribe", "channel1", ByteArrayCodec.INSTANCE)); + sut.notifyMessage(createMessage("unsubscribe", "channel1", ByteArrayCodec.INSTANCE)); + + assertThat(sut.getChannels()).isEmpty(); + } + + @Test + void listenerNotificationShouldFailGracefully() { + + PubSubEndpoint sut = new PubSubEndpoint<>(ClientOptions.create(), TestClientResources.get()); + + AtomicInteger notified = new AtomicInteger(); + + sut.addListener(new RedisPubSubAdapter() { + @Override + public void message(byte[] channel, byte[] message) { + + notified.incrementAndGet(); + throw new UnsupportedOperationException(); + } + }); + + sut.addListener(new RedisPubSubAdapter() { + @Override + public void message(byte[] channel, byte[] message) { + notified.incrementAndGet(); + } + }); + + sut.notifyMessage(createMessage("message", "channel1", ByteArrayCodec.INSTANCE)); + + assertThat(notified).hasValue(1); + } + + private static PubSubOutput createMessage(String action, String channel, RedisCodec codec) { + + PubSubOutput output = new PubSubOutput<>(codec); + + output.set(ByteBuffer.wrap(action.getBytes())); + output.set(ByteBuffer.wrap(channel.getBytes())); + + return output; + } +} diff --git a/src/test/java/io/lettuce/core/pubsub/PubSubReactiveTest.java b/src/test/java/io/lettuce/core/pubsub/PubSubReactiveTest.java new file mode 100644 index 0000000000..e5f766f971 --- /dev/null +++ b/src/test/java/io/lettuce/core/pubsub/PubSubReactiveTest.java @@ -0,0 +1,458 @@ +/* + * Copyright 2011-2020 the original author or authors. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.pubsub; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; + +import java.time.Duration; +import java.util.List; +import java.util.Map; +import java.util.concurrent.BlockingQueue; +import java.util.concurrent.TimeUnit; + +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; + +import reactor.core.Disposable; +import reactor.core.publisher.Flux; +import reactor.core.publisher.Mono; +import reactor.test.StepVerifier; +import io.lettuce.core.AbstractRedisClientTest; +import io.lettuce.core.RedisClient; +import io.lettuce.core.RedisURI; +import io.lettuce.core.internal.LettuceFactories; +import io.lettuce.core.pubsub.api.reactive.ChannelMessage; +import io.lettuce.core.pubsub.api.reactive.PatternMessage; +import io.lettuce.core.pubsub.api.reactive.RedisPubSubReactiveCommands; +import io.lettuce.core.pubsub.api.sync.RedisPubSubCommands; +import io.lettuce.test.Delay; +import io.lettuce.test.Wait; +import io.lettuce.test.resource.FastShutdown; +import io.lettuce.test.resource.TestClientResources; + +/** + * @author Mark Paluch + */ +class PubSubReactiveTest extends AbstractRedisClientTest implements RedisPubSubListener { + + private RedisPubSubReactiveCommands pubsub; + private RedisPubSubReactiveCommands pubsub2; + + private BlockingQueue channels; + private BlockingQueue patterns; + private BlockingQueue messages; + private BlockingQueue counts; + + private String channel = "channel0"; + private String pattern = "channel*"; + private String message = "msg!"; + + @BeforeEach + void openPubSubConnection() { + + pubsub = client.connectPubSub().reactive(); + pubsub2 = client.connectPubSub().reactive(); + pubsub.getStatefulConnection().addListener(this); + channels = LettuceFactories.newBlockingQueue(); + patterns = LettuceFactories.newBlockingQueue(); + messages = LettuceFactories.newBlockingQueue(); + counts = LettuceFactories.newBlockingQueue(); + } + + @AfterEach + void closePubSubConnection() { + pubsub.getStatefulConnection().close(); + pubsub2.getStatefulConnection().close(); + } + + @Test + void observeChannels() throws Exception { + + block(pubsub.subscribe(channel)); + + BlockingQueue> channelMessages = LettuceFactories.newBlockingQueue(); + + Disposable disposable = pubsub.observeChannels().doOnNext(channelMessages::add).subscribe(); + + redis.publish(channel, message); + redis.publish(channel, message); + redis.publish(channel, message); + + Wait.untilEquals(3, channelMessages::size).waitOrTimeout(); + assertThat(channelMessages).hasSize(3); + + disposable.dispose(); + redis.publish(channel, message); + Delay.delay(Duration.ofMillis(500)); + assertThat(channelMessages).hasSize(3); + + ChannelMessage channelMessage = channelMessages.take(); + assertThat(channelMessage.getChannel()).isEqualTo(channel); + assertThat(channelMessage.getMessage()).isEqualTo(message); 
+ } + + @Test + void observeChannelsUnsubscribe() { + + block(pubsub.subscribe(channel)); + + BlockingQueue> channelMessages = LettuceFactories.newBlockingQueue(); + + pubsub.observeChannels().doOnNext(channelMessages::add).subscribe().dispose(); + + block(redis.getStatefulConnection().reactive().publish(channel, message)); + block(redis.getStatefulConnection().reactive().publish(channel, message)); + + Delay.delay(Duration.ofMillis(500)); + assertThat(channelMessages).isEmpty(); + } + + @Test + void observePatterns() throws Exception { + + block(pubsub.psubscribe(pattern)); + + BlockingQueue> patternMessages = LettuceFactories.newBlockingQueue(); + + pubsub.observePatterns().doOnNext(patternMessages::add).subscribe(); + + redis.publish(channel, message); + redis.publish(channel, message); + redis.publish(channel, message); + + Wait.untilTrue(() -> patternMessages.size() == 3).waitOrTimeout(); + assertThat(patternMessages).hasSize(3); + + PatternMessage patternMessage = patternMessages.take(); + assertThat(patternMessage.getChannel()).isEqualTo(channel); + assertThat(patternMessage.getMessage()).isEqualTo(message); + assertThat(patternMessage.getPattern()).isEqualTo(pattern); + } + + @Test + void observePatternsWithUnsubscribe() { + + block(pubsub.psubscribe(pattern)); + + BlockingQueue> patternMessages = LettuceFactories.newBlockingQueue(); + + Disposable subscription = pubsub.observePatterns().doOnNext(patternMessages::add).subscribe(); + + redis.publish(channel, message); + redis.publish(channel, message); + redis.publish(channel, message); + + Wait.untilTrue(() -> patternMessages.size() == 3).waitOrTimeout(); + assertThat(patternMessages).hasSize(3); + subscription.dispose(); + + redis.publish(channel, message); + redis.publish(channel, message); + redis.publish(channel, message); + + Delay.delay(Duration.ofMillis(500)); + + assertThat(patternMessages).hasSize(3); + } + + @Test + void message() throws Exception { + + block(pubsub.subscribe(channel)); + assertThat(channels.take()).isEqualTo(channel); + + redis.publish(channel, message); + assertThat(channels.take()).isEqualTo(channel); + assertThat(messages.take()).isEqualTo(message); + } + + @Test + void pmessage() throws Exception { + + block(pubsub.psubscribe(pattern)); + assertThat(patterns.take()).isEqualTo(pattern); + + redis.publish(channel, message); + assertThat(patterns.take()).isEqualTo(pattern); + assertThat(channels.take()).isEqualTo(channel); + assertThat(messages.take()).isEqualTo(message); + + redis.publish("channel2", "msg 2!"); + assertThat(patterns.take()).isEqualTo(pattern); + assertThat(channels.take()).isEqualTo("channel2"); + assertThat(messages.take()).isEqualTo("msg 2!"); + } + + @Test + void psubscribe() throws Exception { + + block(pubsub.psubscribe(pattern)); + + assertThat(patterns.take()).isEqualTo(pattern); + assertThat((long) counts.take()).isEqualTo(1); + } + + @Test + void pubsubEmptyChannels() { + assertThatThrownBy(() -> pubsub.subscribe()).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void pubsubChannels() { + + block(pubsub.subscribe(channel)); + List result = block(pubsub2.pubsubChannels().collectList()); + assertThat(result).contains(channel); + } + + @Test + void pubsubMultipleChannels() { + + StepVerifier.create(pubsub.subscribe(channel, "channel1", "channel3")).verifyComplete(); + + StepVerifier.create(pubsub2.pubsubChannels().collectList()) + .consumeNextWith(actual -> assertThat(actual).contains(channel, "channel1", "channel3")).verifyComplete(); + } + + @Test + void 
pubsubChannelsWithArg() { + + StepVerifier.create(pubsub.subscribe(channel)).verifyComplete(); + Wait.untilTrue(() -> mono(pubsub2.pubsubChannels(pattern).filter(s -> channel.equals(s))) != null).waitOrTimeout(); + + String result = mono(pubsub2.pubsubChannels(pattern).filter(s -> channel.equals(s))); + assertThat(result).isEqualToIgnoringCase(channel); + } + + @Test + void pubsubNumsub() { + + StepVerifier.create(pubsub.subscribe(channel)).verifyComplete(); + + Wait.untilEquals(1, () -> block(pubsub2.pubsubNumsub(channel)).size()).waitOrTimeout(); + + Map result = block(pubsub2.pubsubNumsub(channel)); + assertThat(result).hasSize(1); + assertThat(result).containsKeys(channel); + } + + @Test + void pubsubNumpat() { + + Wait.untilEquals(0L, () -> block(pubsub2.pubsubNumpat())).waitOrTimeout(); + + StepVerifier.create(pubsub.psubscribe(pattern)).verifyComplete(); + Wait.untilEquals(1L, () -> redis.pubsubNumpat()).waitOrTimeout(); + + Long result = block(pubsub2.pubsubNumpat()); + assertThat(result.longValue()).isGreaterThan(0); + } + + @Test + void punsubscribe() throws Exception { + + StepVerifier.create(pubsub.punsubscribe(pattern)).verifyComplete(); + assertThat(patterns.take()).isEqualTo(pattern); + assertThat((long) counts.take()).isEqualTo(0); + + } + + @Test + void subscribe() throws Exception { + + StepVerifier.create(pubsub.subscribe(channel)).verifyComplete(); + assertThat(channels.take()).isEqualTo(channel); + assertThat((long) counts.take()).isGreaterThan(0); + } + + @Test + void unsubscribe() throws Exception { + + StepVerifier.create(pubsub.unsubscribe(channel)).verifyComplete(); + assertThat(channels.take()).isEqualTo(channel); + assertThat((long) counts.take()).isEqualTo(0); + + block(pubsub.unsubscribe()); + + assertThat(channels).isEmpty(); + assertThat(patterns).isEmpty(); + + } + + @Test + void pubsubCloseOnClientShutdown() { + + RedisClient redisClient = RedisClient.create(TestClientResources.get(), RedisURI.Builder.redis(host, port).build()); + + RedisPubSubCommands connection = redisClient.connectPubSub().sync(); + FastShutdown.shutdown(redisClient); + + assertThat(connection.isOpen()).isFalse(); + } + + @Test + void utf8Channel() throws Exception { + + String channel = "channelλ"; + String message = "αβγ"; + + block(pubsub.subscribe(channel)); + assertThat(channels.take()).isEqualTo(channel); + + StepVerifier.create(pubsub2.publish(channel, message)).expectNextCount(1).verifyComplete(); + assertThat(channels.take()).isEqualTo(channel); + assertThat(messages.take()).isEqualTo(message); + } + + @Test + void resubscribeChannelsOnReconnect() throws Exception { + + StepVerifier.create(pubsub.subscribe(channel)).verifyComplete(); + assertThat(channels.take()).isEqualTo(channel); + assertThat((long) counts.take()).isEqualTo(1); + + block(pubsub.quit()); + assertThat(channels.take()).isEqualTo(channel); + assertThat((long) counts.take()).isEqualTo(1); + + Wait.untilTrue(pubsub::isOpen).waitOrTimeout(); + + redis.publish(channel, message); + assertThat(channels.take()).isEqualTo(channel); + assertThat(messages.take()).isEqualTo(message); + } + + @Test + void resubscribePatternsOnReconnect() throws Exception { + + StepVerifier.create(pubsub.psubscribe(pattern)).verifyComplete(); + assertThat(patterns.take()).isEqualTo(pattern); + assertThat((long) counts.take()).isEqualTo(1); + + block(pubsub.quit()); + + assertThat(patterns.take()).isEqualTo(pattern); + assertThat((long) counts.take()).isEqualTo(1); + + Wait.untilTrue(pubsub::isOpen).waitOrTimeout(); + + 
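+ // after the reconnect, the pattern subscription is restored and the published message arrives again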
StepVerifier.create(pubsub2.publish(channel, message)).expectNextCount(1).verifyComplete(); + assertThat(channels.take()).isEqualTo(channel); + assertThat(messages.take()).isEqualTo(message); + } + + @Test + void adapter() throws Exception { + + final BlockingQueue localCounts = LettuceFactories.newBlockingQueue(); + + RedisPubSubAdapter adapter = new RedisPubSubAdapter() { + @Override + public void subscribed(String channel, long count) { + super.subscribed(channel, count); + localCounts.add(count); + } + + @Override + public void unsubscribed(String channel, long count) { + super.unsubscribed(channel, count); + localCounts.add(count); + } + }; + + pubsub.getStatefulConnection().addListener(adapter); + StepVerifier.create(pubsub.subscribe(channel)).verifyComplete(); + StepVerifier.create(pubsub.psubscribe(pattern)).verifyComplete(); + + assertThat((long) localCounts.take()).isEqualTo(1L); + + StepVerifier.create(pubsub2.publish(channel, message)).expectNextCount(1).verifyComplete(); + StepVerifier.create(pubsub.punsubscribe(pattern)).verifyComplete(); + StepVerifier.create(pubsub.unsubscribe(channel)).verifyComplete(); + + assertThat((long) localCounts.take()).isEqualTo(0L); + } + + @Test + void removeListener() throws Exception { + + StepVerifier.create(pubsub.subscribe(channel)).verifyComplete(); + assertThat(channels.take()).isEqualTo(channel); + + StepVerifier.create(pubsub2.publish(channel, message)).expectNextCount(1).verifyComplete(); + assertThat(channels.take()).isEqualTo(channel); + assertThat(messages.take()).isEqualTo(message); + + pubsub.getStatefulConnection().removeListener(this); + + StepVerifier.create(pubsub2.publish(channel, message)).expectNextCount(1).verifyComplete(); + assertThat(channels.poll(10, TimeUnit.MILLISECONDS)).isNull(); + assertThat(messages.poll(10, TimeUnit.MILLISECONDS)).isNull(); + } + + // RedisPubSubListener implementation + @Override + public void message(String channel, String message) { + + channels.add(channel); + messages.add(message); + } + + @Override + public void message(String pattern, String channel, String message) { + patterns.add(pattern); + channels.add(channel); + messages.add(message); + } + + @Override + public void subscribed(String channel, long count) { + channels.add(channel); + counts.add(count); + } + + @Override + public void psubscribed(String pattern, long count) { + patterns.add(pattern); + counts.add(count); + } + + @Override + public void unsubscribed(String channel, long count) { + channels.add(channel); + counts.add(count); + } + + @Override + public void punsubscribed(String pattern, long count) { + patterns.add(pattern); + counts.add(count); + } + + T block(Mono mono) { + return mono.block(); + } + + T mono(Flux flux) { + return flux.next().block(); + } + + List all(Flux flux) { + return flux.collectList().block(); + } +} diff --git a/src/test/java/io/lettuce/core/reactive/RedisPublisherVerification.java b/src/test/java/io/lettuce/core/reactive/RedisPublisherVerification.java new file mode 100644 index 0000000000..bd0381b2cf --- /dev/null +++ b/src/test/java/io/lettuce/core/reactive/RedisPublisherVerification.java @@ -0,0 +1,104 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.reactive; + +import static io.lettuce.core.protocol.CommandType.LRANGE; + +import java.util.List; +import java.util.UUID; +import java.util.function.Supplier; + +import org.reactivestreams.Publisher; +import org.reactivestreams.tck.PublisherVerification; +import org.reactivestreams.tck.TestEnvironment; +import org.testng.annotations.AfterClass; +import org.testng.annotations.BeforeClass; + +import io.lettuce.core.RedisClient; +import io.lettuce.core.RedisURI; +import io.lettuce.core.TestRedisPublisher; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.output.ValueListOutput; +import io.lettuce.core.protocol.Command; +import io.lettuce.core.protocol.CommandArgs; +import io.lettuce.test.resource.FastShutdown; +import io.lettuce.test.resource.TestClientResources; +import io.lettuce.test.settings.TestSettings; + +/** + * Reactive Streams TCK for {@link io.lettuce.core.RedisPublisher}. + * + * @author Mark Paluch + */ +public class RedisPublisherVerification extends PublisherVerification { + + private static RedisClient client; + private static StatefulRedisConnection connection; + + public RedisPublisherVerification() { + super(new TestEnvironment(1000)); + } + + @BeforeClass + private static void beforeClass() { + client = RedisClient.create(TestClientResources.get(), RedisURI.create(TestSettings.host(), TestSettings.port())); + connection = client.connect(); + connection.sync().flushall(); + } + + @AfterClass + private static void afterClass() { + connection.close(); + FastShutdown.shutdown(client); + } + + @Override + public Publisher createPublisher(long elements) { + + RedisCommands sync = connection.sync(); + + if (elements == Long.MAX_VALUE) { + return null; + } + + String id = UUID.randomUUID().toString(); + String key = "PublisherVerification-" + id; + + for (int i = 0; i < elements; i++) { + sync.lpush(key, "element-" + i); + } + + Supplier>> supplier = () -> { + CommandArgs args = new CommandArgs<>(StringCodec.UTF8).addKey(key).add(0).add(-1); + return new Command<>(LRANGE, new ValueListOutput<>(StringCodec.UTF8), args); + }; + + return new TestRedisPublisher(supplier, connection, true); + } + + @Override + public long maxElementsFromPublisher() { + return 100; + } + + @Override + public Publisher createFailedPublisher() { + return null; + } + +} diff --git a/src/test/java/io/lettuce/core/reactive/ScanStreamVerification.java b/src/test/java/io/lettuce/core/reactive/ScanStreamVerification.java new file mode 100644 index 0000000000..2d680f9f4f --- /dev/null +++ b/src/test/java/io/lettuce/core/reactive/ScanStreamVerification.java @@ -0,0 +1,106 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at
+ *
+ * https://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package io.lettuce.core.reactive;
+
+import java.util.HashMap;
+import java.util.Map;
+
+import org.reactivestreams.Publisher;
+import org.reactivestreams.tck.PublisherVerification;
+import org.reactivestreams.tck.TestEnvironment;
+import org.testng.annotations.AfterClass;
+import org.testng.annotations.BeforeClass;
+
+import io.lettuce.core.RedisClient;
+import io.lettuce.core.RedisURI;
+import io.lettuce.core.ScanStream;
+import io.lettuce.core.api.StatefulRedisConnection;
+import io.lettuce.core.api.sync.RedisCommands;
+import io.lettuce.test.resource.FastShutdown;
+import io.lettuce.test.resource.TestClientResources;
+import io.lettuce.test.settings.TestSettings;
+
+/**
+ * Reactive Streams TCK for {@link ScanStream}.
+ *
+ * @author Mark Paluch
+ */
+public class ScanStreamVerification extends PublisherVerification<String> {
+
+    private static final int ELEMENT_COUNT = 10000;
+
+    private static RedisClient client;
+    private static StatefulRedisConnection<String, String> connection;
+
+    public ScanStreamVerification() {
+        super(new TestEnvironment(1000));
+    }
+
+    @BeforeClass
+    private static void beforeClass() {
+        client = RedisClient.create(TestClientResources.get(), RedisURI.create(TestSettings.host(), TestSettings.port()));
+        connection = client.connect();
+        connection.sync().flushall();
+    }
+
+    @AfterClass
+    private static void afterClass() {
+        connection.close();
+        FastShutdown.shutdown(client);
+    }
+
+    @Override
+    public Publisher<String> createPublisher(long elements) {
+
+        RedisCommands<String, String> sync = connection.sync();
+        sync.flushall();
+
+        if (elements == Long.MAX_VALUE) {
+            return null;
+        }
+
+        Map<String, String> map = new HashMap<>();
+
+        for (int i = 0; i < elements; i++) {
+
+            String element = "ScanStreamVerification-" + i;
+            map.put(element, element);
+
+            // flush the accumulated entries in chunks of 1000 keys
+            if (i % 1000 == 0 && !map.isEmpty()) {
+                sync.mset(map);
+                map.clear();
+            }
+        }
+
+        if (!map.isEmpty()) {
+            sync.mset(map);
+            map.clear();
+        }
+
+        return ScanStream.scan(connection.reactive());
+    }
+
+    @Override
+    public long maxElementsFromPublisher() {
+        return ELEMENT_COUNT;
+    }
+
+    @Override
+    public Publisher<String> createFailedPublisher() {
+        return null;
+    }
+
+}
diff --git a/src/test/java/io/lettuce/core/reliability/AtLeastOnceTest.java b/src/test/java/io/lettuce/core/reliability/AtLeastOnceTest.java
new file mode 100644
index 0000000000..9059483ca5
--- /dev/null
+++ b/src/test/java/io/lettuce/core/reliability/AtLeastOnceTest.java
@@ -0,0 +1,385 @@
+/*
+ * Copyright 2011-2020 the original author or authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * https://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */ +package io.lettuce.core.reliability; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.junit.jupiter.api.Assumptions.assumeTrue; + +import java.time.Duration; +import java.util.concurrent.CountDownLatch; +import java.util.concurrent.ExecutionException; +import java.util.concurrent.TimeUnit; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; + +import io.lettuce.core.*; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.output.IntegerOutput; +import io.lettuce.core.output.StatusOutput; +import io.lettuce.core.protocol.*; +import io.lettuce.test.ConnectionTestUtil; +import io.lettuce.test.Delay; +import io.lettuce.test.Wait; +import io.netty.buffer.ByteBuf; +import io.netty.channel.Channel; +import io.netty.handler.codec.EncoderException; +import io.netty.util.Version; + +/** + * @author Mark Paluch + */ +class AtLeastOnceTest extends AbstractRedisClientTest { + + private String key = "key"; + + @BeforeEach + void before() { + client.setOptions(ClientOptions.builder().autoReconnect(true).build()); + + // needs to be increased on slow systems...perhaps... + client.setDefaultTimeout(3, TimeUnit.SECONDS); + + RedisCommands connection = client.connect().sync(); + connection.flushall(); + connection.flushdb(); + connection.getStatefulConnection().close(); + } + + @Test + void connectionIsConnectedAfterConnect() { + + StatefulRedisConnection connection = client.connect(); + + assertThat(ConnectionTestUtil.getConnectionState(connection)).isEqualTo("CONNECTED"); + + connection.close(); + } + + @Test + void reconnectIsActiveHandler() { + + RedisCommands connection = client.connect().sync(); + + ConnectionWatchdog connectionWatchdog = ConnectionTestUtil.getConnectionWatchdog(connection.getStatefulConnection()); + assertThat(connectionWatchdog).isNotNull(); + assertThat(connectionWatchdog.isListenOnChannelInactive()).isTrue(); + assertThat(connectionWatchdog.isReconnectSuspended()).isFalse(); + + connection.getStatefulConnection().close(); + } + + @Test + void basicOperations() { + + RedisCommands connection = client.connect().sync(); + + connection.set(key, "1"); + assertThat(connection.get("key")).isEqualTo("1"); + + connection.getStatefulConnection().close(); + } + + @Test + void noBufferedCommandsAfterExecute() { + + RedisCommands connection = client.connect().sync(); + + connection.set(key, "1"); + + assertThat(ConnectionTestUtil.getStack(connection.getStatefulConnection())).isEmpty(); + assertThat(ConnectionTestUtil.getCommandBuffer(connection.getStatefulConnection())).isEmpty(); + + connection.getStatefulConnection().close(); + } + + @Test + void commandIsExecutedOnce() { + + RedisCommands connection = client.connect().sync(); + + connection.set(key, "1"); + connection.incr(key); + assertThat(connection.get(key)).isEqualTo("2"); + + connection.incr(key); + assertThat(connection.get(key)).isEqualTo("3"); + + connection.incr(key); + assertThat(connection.get(key)).isEqualTo("4"); + + connection.getStatefulConnection().close(); + } + + @Test + void commandFailsWhenFailOnEncode() { + + RedisCommands connection = client.connect().sync(); + RedisChannelWriter channelWriter = ConnectionTestUtil.getChannelWriter(connection.getStatefulConnection()); + RedisCommands verificationConnection = client.connect().sync(); + + connection.set(key, "1"); + AsyncCommand working = new AsyncCommand<>(new Command<>(CommandType.INCR, new 
IntegerOutput( + StringCodec.UTF8), new CommandArgs<>(StringCodec.UTF8).addKey(key))); + channelWriter.write(working); + assertThat(working.await(2, TimeUnit.SECONDS)).isTrue(); + assertThat(connection.get(key)).isEqualTo("2"); + + AsyncCommand command = new AsyncCommand(new Command<>(CommandType.INCR, + new IntegerOutput(StringCodec.UTF8), new CommandArgs<>(StringCodec.UTF8).addKey(key))) { + + @Override + public void encode(ByteBuf buf) { + throw new IllegalStateException("I want to break free"); + } + }; + + channelWriter.write(command); + + assertThat(command.await(2, TimeUnit.SECONDS)).isTrue(); + assertThat(command.isCancelled()).isFalse(); + assertThat(getException(command)).isInstanceOf(EncoderException.class); + + assertThat(verificationConnection.get(key)).isEqualTo("2"); + + assertThat(ConnectionTestUtil.getStack(connection.getStatefulConnection())).isNotEmpty(); + + connection.getStatefulConnection().close(); + } + + @Test + void commandNotFailedChannelClosesWhileFlush() { + + assumeTrue(Version.identify().get("netty-transport").artifactVersion().startsWith("4.0.2")); + + StatefulRedisConnection connection = client.connect(); + RedisCommands verificationConnection = client.connect().sync(); + RedisChannelWriter channelWriter = ConnectionTestUtil.getChannelWriter(connection); + + RedisCommands sync = connection.sync(); + sync.set(key, "1"); + assertThat(verificationConnection.get(key)).isEqualTo("1"); + + final CountDownLatch block = new CountDownLatch(1); + + ConnectionWatchdog connectionWatchdog = ConnectionTestUtil.getConnectionWatchdog(connection); + + AsyncCommand command = getBlockOnEncodeCommand(block); + + channelWriter.write(command); + + connectionWatchdog.setReconnectSuspended(true); + + Channel channel = ConnectionTestUtil.getChannel(connection); + channel.unsafe().disconnect(channel.newPromise()); + + assertThat(channel.isOpen()).isFalse(); + assertThat(command.isCancelled()).isFalse(); + assertThat(command.isDone()).isFalse(); + block.countDown(); + assertThat(command.await(2, TimeUnit.SECONDS)).isFalse(); + assertThat(command.isCancelled()).isFalse(); + assertThat(command.isDone()).isFalse(); + + assertThat(verificationConnection.get(key)).isEqualTo("1"); + + assertThat(ConnectionTestUtil.getStack(connection)).isEmpty(); + assertThat(ConnectionTestUtil.getCommandBuffer(connection)).isNotEmpty().contains(command); + + connection.close(); + } + + @Test + void commandRetriedChannelClosesWhileFlush() { + + assumeTrue(Version.identify().get("netty-transport").artifactVersion().startsWith("4.0.2")); + + StatefulRedisConnection connection = client.connect(); + RedisCommands sync = connection.sync(); + RedisCommands verificationConnection = client.connect().sync(); + RedisChannelWriter channelWriter = ConnectionTestUtil.getChannelWriter(connection); + + sync.set(key, "1"); + assertThat(verificationConnection.get(key)).isEqualTo("1"); + + final CountDownLatch block = new CountDownLatch(1); + + ConnectionWatchdog connectionWatchdog = ConnectionTestUtil.getConnectionWatchdog(sync.getStatefulConnection()); + + AsyncCommand command = getBlockOnEncodeCommand(block); + + channelWriter.write(command); + + connectionWatchdog.setReconnectSuspended(true); + + Channel channel = ConnectionTestUtil.getChannel(sync.getStatefulConnection()); + channel.unsafe().disconnect(channel.newPromise()); + + assertThat(channel.isOpen()).isFalse(); + assertThat(command.isCancelled()).isFalse(); + assertThat(command.isDone()).isFalse(); + block.countDown(); + assertThat(command.await(2, 
TimeUnit.SECONDS)).isFalse(); + + connectionWatchdog.setReconnectSuspended(false); + connectionWatchdog.scheduleReconnect(); + + assertThat(command.await(2, TimeUnit.SECONDS)).isTrue(); + assertThat(command.isCancelled()).isFalse(); + assertThat(command.isDone()).isTrue(); + + assertThat(verificationConnection.get(key)).isEqualTo("2"); + + assertThat(ConnectionTestUtil.getStack(sync.getStatefulConnection())).isEmpty(); + assertThat(ConnectionTestUtil.getCommandBuffer(sync.getStatefulConnection())).isEmpty(); + + sync.getStatefulConnection().close(); + verificationConnection.getStatefulConnection().close(); + } + + AsyncCommand getBlockOnEncodeCommand(final CountDownLatch block) { + return new AsyncCommand(new Command<>(CommandType.INCR, new IntegerOutput(StringCodec.UTF8), + new CommandArgs<>(StringCodec.UTF8).addKey(key))) { + + @Override + public void encode(ByteBuf buf) { + try { + block.await(); + } catch (InterruptedException e) { + } + super.encode(buf); + } + }; + } + + @Test + void commandFailsDuringDecode() { + + RedisCommands connection = client.connect().sync(); + RedisChannelWriter channelWriter = ConnectionTestUtil.getChannelWriter(connection.getStatefulConnection()); + RedisCommands verificationConnection = client.connect().sync(); + + connection.set(key, "1"); + + AsyncCommand command = new AsyncCommand(new Command<>(CommandType.INCR, new StatusOutput<>( + StringCodec.UTF8), new CommandArgs<>(StringCodec.UTF8).addKey(key))); + + channelWriter.write(command); + + assertThat(command.await(2, TimeUnit.SECONDS)).isTrue(); + assertThat(command.isCancelled()).isFalse(); + assertThat(command.isDone()).isTrue(); + assertThat(getException(command)).isInstanceOf(IllegalStateException.class); + + assertThat(verificationConnection.get(key)).isEqualTo("2"); + assertThat(connection.get(key)).isEqualTo("2"); + + connection.getStatefulConnection().close(); + verificationConnection.getStatefulConnection().close(); + } + + @Test + void commandCancelledOverSyncAPIAfterConnectionIsDisconnected() { + + StatefulRedisConnection connection = client.connect(); + RedisCommands sync = connection.sync(); + RedisCommands verificationConnection = client.connect().sync(); + + sync.set(key, "1"); + + ConnectionWatchdog connectionWatchdog = ConnectionTestUtil.getConnectionWatchdog(sync.getStatefulConnection()); + connectionWatchdog.setListenOnChannelInactive(false); + + sync.quit(); + Wait.untilTrue(() -> !sync.getStatefulConnection().isOpen()).waitOrTimeout(); + + try { + sync.incr(key); + } catch (RedisException e) { + assertThat(e).isExactlyInstanceOf(RedisCommandTimeoutException.class); + } + + assertThat(verificationConnection.get("key")).isEqualTo("1"); + + assertThat(ConnectionTestUtil.getDisconnectedBuffer(connection).size()).isGreaterThan(0); + assertThat(ConnectionTestUtil.getCommandBuffer(connection)).isEmpty(); + + connectionWatchdog.setListenOnChannelInactive(true); + connectionWatchdog.scheduleReconnect(); + + while (!ConnectionTestUtil.getCommandBuffer(connection).isEmpty() + || !ConnectionTestUtil.getDisconnectedBuffer(connection).isEmpty()) { + Delay.delay(Duration.ofMillis(10)); + } + + assertThat(sync.get(key)).isEqualTo("1"); + + sync.getStatefulConnection().close(); + verificationConnection.getStatefulConnection().close(); + } + + @Test + void retryAfterConnectionIsDisconnected() throws Exception { + + StatefulRedisConnection connection = client.connect(); + RedisCommands verificationConnection = client.connect().sync(); + + connection.sync().set(key, "1"); + + ConnectionWatchdog 
connectionWatchdog = ConnectionTestUtil.getConnectionWatchdog(connection); + connectionWatchdog.setListenOnChannelInactive(false); + + connection.async().quit(); + while (connection.isOpen()) { + Delay.delay(Duration.ofMillis(100)); + } + + assertThat(connection.async().incr(key).await(1, TimeUnit.SECONDS)).isFalse(); + + assertThat(verificationConnection.get("key")).isEqualTo("1"); + + assertThat(ConnectionTestUtil.getDisconnectedBuffer(connection).size()).isGreaterThan(0); + assertThat(ConnectionTestUtil.getCommandBuffer(connection)).isEmpty(); + + connectionWatchdog.setListenOnChannelInactive(true); + connectionWatchdog.scheduleReconnect(); + + while (!ConnectionTestUtil.getCommandBuffer(connection).isEmpty() + || !ConnectionTestUtil.getDisconnectedBuffer(connection).isEmpty()) { + Delay.delay(Duration.ofMillis(10)); + } + + assertThat(connection.sync().get(key)).isEqualTo("2"); + assertThat(verificationConnection.get(key)).isEqualTo("2"); + + connection.close(); + verificationConnection.getStatefulConnection().close(); + } + + private Throwable getException(RedisFuture command) { + try { + command.get(); + } catch (InterruptedException e) { + return e; + } catch (ExecutionException e) { + return e.getCause(); + } + return null; + } + +} diff --git a/src/test/java/io/lettuce/core/reliability/AtMostOnceTest.java b/src/test/java/io/lettuce/core/reliability/AtMostOnceTest.java new file mode 100644 index 0000000000..50c97f6c2a --- /dev/null +++ b/src/test/java/io/lettuce/core/reliability/AtMostOnceTest.java @@ -0,0 +1,319 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.reliability; + +import static io.lettuce.test.ConnectionTestUtil.getCommandBuffer; +import static io.lettuce.test.ConnectionTestUtil.getStack; +import static org.assertj.core.api.Assertions.assertThat; +import static org.junit.jupiter.api.Assumptions.assumeTrue; + +import java.time.Duration; +import java.util.concurrent.CountDownLatch; +import java.util.concurrent.ExecutionException; +import java.util.concurrent.TimeUnit; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; + +import io.lettuce.core.*; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.async.RedisAsyncCommands; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.output.IntegerOutput; +import io.lettuce.core.output.StatusOutput; +import io.lettuce.core.protocol.AsyncCommand; +import io.lettuce.core.protocol.Command; +import io.lettuce.core.protocol.CommandArgs; +import io.lettuce.core.protocol.CommandType; +import io.lettuce.test.ConnectionTestUtil; +import io.lettuce.test.Delay; +import io.lettuce.test.TestFutures; +import io.lettuce.test.Wait; +import io.netty.buffer.ByteBuf; +import io.netty.channel.Channel; +import io.netty.handler.codec.EncoderException; +import io.netty.util.Version; + +/** + * @author Mark Paluch + */ +@SuppressWarnings("rawtypes") +class AtMostOnceTest extends AbstractRedisClientTest { + + private String key = "key"; + + @BeforeEach + void before() { + client.setOptions(ClientOptions.builder().autoReconnect(false).build()); + + // needs to be increased on slow systems...perhaps... + client.setDefaultTimeout(3, TimeUnit.SECONDS); + + RedisCommands connection = client.connect().sync(); + connection.flushall(); + connection.flushdb(); + connection.getStatefulConnection().close(); + } + + @Test + void connectionIsConnectedAfterConnect() { + + StatefulRedisConnection connection = client.connect(); + + assertThat(ConnectionTestUtil.getConnectionState(connection)); + + connection.close(); + } + + @Test + void noReconnectHandler() { + + StatefulRedisConnection connection = client.connect(); + + assertThat(ConnectionTestUtil.getConnectionWatchdog(connection)).isNull(); + + connection.close(); + } + + @Test + void basicOperations() { + + RedisCommands connection = client.connect().sync(); + + connection.set(key, "1"); + assertThat(connection.get("key")).isEqualTo("1"); + + connection.getStatefulConnection().close(); + } + + @Test + void noBufferedCommandsAfterExecute() { + + StatefulRedisConnection connection = client.connect(); + RedisCommands sync = connection.sync(); + + sync.set(key, "1"); + + assertThat(getStack(connection)).isEmpty(); + assertThat(getCommandBuffer(connection)).isEmpty(); + + connection.close(); + } + + @Test + void commandIsExecutedOnce() { + + StatefulRedisConnection connection = client.connect(); + RedisCommands sync = connection.sync(); + + sync.set(key, "1"); + sync.incr(key); + assertThat(sync.get(key)).isEqualTo("2"); + + sync.incr(key); + assertThat(sync.get(key)).isEqualTo("3"); + + sync.incr(key); + assertThat(sync.get(key)).isEqualTo("4"); + + connection.close(); + } + + @Test + void commandNotExecutedFailsOnEncode() { + + StatefulRedisConnection connection = client.connect(); + RedisCommands sync = client.connect().sync(); + RedisChannelWriter channelWriter = ConnectionTestUtil.getChannelWriter(connection); + + sync.set(key, "1"); + AsyncCommand working = new AsyncCommand<>(new Command(CommandType.INCR, + new 
IntegerOutput(StringCodec.UTF8), new CommandArgs<>(StringCodec.UTF8).addKey(key))); + channelWriter.write(working); + assertThat(working.await(2, TimeUnit.SECONDS)).isTrue(); + assertThat(sync.get(key)).isEqualTo("2"); + + AsyncCommand command = new AsyncCommand( + new Command(CommandType.INCR, new IntegerOutput(StringCodec.UTF8), + new CommandArgs<>(StringCodec.UTF8).addKey(key))) { + + @Override + public void encode(ByteBuf buf) { + throw new IllegalStateException("I want to break free"); + } + }; + + channelWriter.write(command); + + assertThat(command.await(2, TimeUnit.SECONDS)).isTrue(); + assertThat(command.isCancelled()).isFalse(); + assertThat(getException(command)).isInstanceOf(EncoderException.class); + + Wait.untilTrue(() -> !ConnectionTestUtil.getStack(connection).isEmpty()).waitOrTimeout(); + + assertThat(ConnectionTestUtil.getStack(connection)).isNotEmpty(); + ConnectionTestUtil.getStack(connection).clear(); + + assertThat(sync.get(key)).isEqualTo("2"); + + assertThat(ConnectionTestUtil.getStack(connection)).isEmpty(); + assertThat(ConnectionTestUtil.getCommandBuffer(connection)).isEmpty(); + + connection.close(); + } + + @Test + void commandNotExecutedChannelClosesWhileFlush() { + + assumeTrue(Version.identify().get("netty-transport").artifactVersion().startsWith("4.0.2")); + + StatefulRedisConnection connection = client.connect(); + RedisCommands sync = connection.sync(); + RedisCommands verificationConnection = client.connect().sync(); + RedisChannelWriter channelWriter = ConnectionTestUtil.getChannelWriter(connection); + + sync.set(key, "1"); + assertThat(verificationConnection.get(key)).isEqualTo("1"); + + final CountDownLatch block = new CountDownLatch(1); + + AsyncCommand command = new AsyncCommand(new Command<>(CommandType.INCR, + new IntegerOutput(StringCodec.UTF8), new CommandArgs<>(StringCodec.UTF8).addKey(key))) { + + @Override + public void encode(ByteBuf buf) { + try { + block.await(); + } catch (InterruptedException e) { + } + super.encode(buf); + } + }; + + channelWriter.write(command); + + Channel channel = ConnectionTestUtil.getChannel(connection); + channel.unsafe().disconnect(channel.newPromise()); + + assertThat(channel.isOpen()).isFalse(); + assertThat(command.isCancelled()).isFalse(); + assertThat(command.isDone()).isFalse(); + block.countDown(); + assertThat(command.await(2, TimeUnit.SECONDS)).isTrue(); + assertThat(command.isCancelled()).isFalse(); + assertThat(command.isDone()).isTrue(); + + assertThat(verificationConnection.get(key)).isEqualTo("1"); + + assertThat(getStack(connection)).isEmpty(); + assertThat(getCommandBuffer(connection)).isEmpty(); + + connection.close(); + } + + @Test + void commandFailsDuringDecode() { + + StatefulRedisConnection connection = client.connect(); + RedisCommands sync = connection.sync(); + RedisChannelWriter channelWriter = ConnectionTestUtil.getChannelWriter(connection); + RedisCommands verificationConnection = client.connect().sync(); + + sync.set(key, "1"); + + AsyncCommand command = new AsyncCommand<>(new Command<>(CommandType.INCR, + new StatusOutput<>(StringCodec.UTF8), new CommandArgs<>(StringCodec.UTF8).addKey(key))); + + channelWriter.write(command); + + assertThat(command.await(2, TimeUnit.SECONDS)).isTrue(); + assertThat(command.isCancelled()).isFalse(); + assertThat(getException(command)).isInstanceOf(IllegalStateException.class); + + assertThat(verificationConnection.get(key)).isEqualTo("2"); + assertThat(sync.get(key)).isEqualTo("2"); + + connection.close(); + } + + @Test + void 
noCommandsExecutedAfterConnectionIsDisconnected() { + + StatefulRedisConnection connection = client.connect(); + connection.sync().quit(); + + Wait.untilTrue(() -> !connection.isOpen()).waitOrTimeout(); + + try { + connection.sync().incr(key); + } catch (RedisException e) { + assertThat(e).isInstanceOf(RedisException.class); + } + + connection.close(); + + StatefulRedisConnection connection2 = client.connect(); + connection2.async().quit(); + Delay.delay(Duration.ofMillis(100)); + + try { + + Wait.untilTrue(() -> !connection.isOpen()).waitOrTimeout(); + + connection2.sync().incr(key); + } catch (Exception e) { + assertThat(e).isExactlyInstanceOf(RedisException.class).hasMessageContaining("not connected"); + } + + connection2.close(); + } + + @Test + void commandsCancelledOnDisconnect() { + + StatefulRedisConnection connection = client.connect(); + + try { + + RedisAsyncCommands async = connection.async(); + async.setAutoFlushCommands(false); + async.quit(); + + RedisFuture incr = async.incr(key); + + connection.flushCommands(); + + TestFutures.awaitOrTimeout(incr); + + } catch (Exception e) { + assertThat(e).hasRootCauseInstanceOf(RedisException.class).hasMessageContaining("Connection disconnected"); + } + + connection.close(); + } + + private Throwable getException(RedisFuture command) { + try { + command.get(); + } catch (InterruptedException e) { + return e; + } catch (ExecutionException e) { + return e.getCause(); + } + return null; + } +} diff --git a/src/test/java/io/lettuce/core/resource/ConstantDelayUnitTests.java b/src/test/java/io/lettuce/core/resource/ConstantDelayUnitTests.java new file mode 100644 index 0000000000..90402c5c04 --- /dev/null +++ b/src/test/java/io/lettuce/core/resource/ConstantDelayUnitTests.java @@ -0,0 +1,53 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.resource; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; + +import java.time.Duration; +import java.util.concurrent.TimeUnit; + +import org.junit.jupiter.api.Test; + +/** + * @author Mark Paluch + */ +class ConstantDelayUnitTests { + + @Test + void shouldNotCreateIfDelayIsNegative() { + assertThatThrownBy(() -> Delay.constant(-1, TimeUnit.MILLISECONDS)).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void shouldCreateZeroDelay() { + + Delay delay = Delay.constant(0, TimeUnit.MILLISECONDS); + + assertThat(delay.createDelay(0)).isEqualTo(Duration.ZERO); + assertThat(delay.createDelay(5)).isEqualTo(Duration.ZERO); + } + + @Test + void shouldCreateConstantDelay() { + + Delay delay = Delay.constant(100, TimeUnit.MILLISECONDS); + + assertThat(delay.createDelay(0)).isEqualTo(Duration.ofMillis(100)); + assertThat(delay.createDelay(5)).isEqualTo(Duration.ofMillis(100)); + } +} diff --git a/src/test/java/io/lettuce/core/resource/DecorrelatedJitterDelayUnitTests.java b/src/test/java/io/lettuce/core/resource/DecorrelatedJitterDelayUnitTests.java new file mode 100644 index 0000000000..b063c33453 --- /dev/null +++ b/src/test/java/io/lettuce/core/resource/DecorrelatedJitterDelayUnitTests.java @@ -0,0 +1,80 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.resource; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; + +import java.time.Duration; +import java.util.concurrent.TimeUnit; + +import org.junit.jupiter.api.Test; + +/** + * @author Jongyeol Choi + * @author Mark Paluch + */ +class DecorrelatedJitterDelayUnitTests { + + @Test + void shouldNotCreateIfLowerBoundIsNegative() { + assertThatThrownBy(() -> Delay.decorrelatedJitter(-1, 100, 0, TimeUnit.MILLISECONDS)).isInstanceOf( + IllegalArgumentException.class); + } + + @Test + void shouldNotCreateIfLowerBoundIsSameAsUpperBound() { + assertThatThrownBy(() -> Delay.decorrelatedJitter(100, 100, 1, TimeUnit.MILLISECONDS)).isInstanceOf( + IllegalArgumentException.class); + } + + @Test + void negativeAttemptShouldReturnZero() { + + Delay delay = Delay.decorrelatedJitter().get(); + + assertThat(delay.createDelay(-1)).isEqualTo(Duration.ZERO); + } + + @Test + void zeroShouldReturnZero() { + + Delay delay = Delay.decorrelatedJitter().get(); + + assertThat(delay.createDelay(0)).isEqualTo(Duration.ZERO); + } + + @Test + void testDefaultDelays() { + + Delay delay = Delay.decorrelatedJitter().get(); + + for (int i = 0; i < 1000; i++) { + assertThat(delay.createDelay(1).toMillis()).isBetween(0L, 1L); + assertThat(delay.createDelay(2).toMillis()).isBetween(0L, 3L); + assertThat(delay.createDelay(3).toMillis()).isBetween(0L, 9L); + assertThat(delay.createDelay(4).toMillis()).isBetween(0L, 27L); + assertThat(delay.createDelay(5).toMillis()).isBetween(0L, 81L); + assertThat(delay.createDelay(6).toMillis()).isBetween(0L, 243L); + assertThat(delay.createDelay(7).toMillis()).isBetween(0L, 729L); + assertThat(delay.createDelay(8).toMillis()).isBetween(0L, 2187L); + assertThat(delay.createDelay(9).toMillis()).isBetween(0L, 6561L); + assertThat(delay.createDelay(10).toMillis()).isBetween(0L, 19683L); + assertThat(delay.createDelay(11).toMillis()).isBetween(0L, 30000L); + assertThat(delay.createDelay(Integer.MAX_VALUE).toMillis()).isBetween(0L, 30000L); + } + } +} diff --git a/src/test/java/io/lettuce/core/resource/DefaultClientResourcesUnitTests.java b/src/test/java/io/lettuce/core/resource/DefaultClientResourcesUnitTests.java new file mode 100644 index 0000000000..3fc5a71b48 --- /dev/null +++ b/src/test/java/io/lettuce/core/resource/DefaultClientResourcesUnitTests.java @@ -0,0 +1,256 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.resource; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.verify; +import static org.mockito.Mockito.verifyNoMoreInteractions; +import static org.mockito.Mockito.verifyZeroInteractions; + +import java.util.concurrent.TimeUnit; + +import org.junit.jupiter.api.Test; +import org.springframework.test.util.ReflectionTestUtils; + +import reactor.test.StepVerifier; +import io.lettuce.core.event.Event; +import io.lettuce.core.event.EventBus; +import io.lettuce.core.metrics.CommandLatencyCollector; +import io.lettuce.core.metrics.DefaultCommandLatencyCollectorOptions; +import io.lettuce.test.TestFutures; +import io.lettuce.test.resource.FastShutdown; +import io.netty.channel.nio.NioEventLoopGroup; +import io.netty.util.HashedWheelTimer; +import io.netty.util.Timer; +import io.netty.util.concurrent.EventExecutorGroup; +import io.netty.util.concurrent.Future; + +/** + * @author Mark Paluch + */ +class DefaultClientResourcesUnitTests { + + @Test + void testDefaults() throws Exception { + + DefaultClientResources sut = DefaultClientResources.create(); + + assertThat(sut.commandLatencyCollector()).isNotNull(); + assertThat(sut.commandLatencyCollector().isEnabled()).isTrue(); + + EventExecutorGroup eventExecutors = sut.eventExecutorGroup(); + NioEventLoopGroup eventLoopGroup = sut.eventLoopGroupProvider().allocate(NioEventLoopGroup.class); + + eventExecutors.next().submit(mock(Runnable.class)); + eventLoopGroup.next().submit(mock(Runnable.class)); + + assertThat(sut.shutdown(0, 0, TimeUnit.SECONDS).get()).isTrue(); + + assertThat(eventExecutors.isTerminated()).isTrue(); + assertThat(eventLoopGroup.isTerminated()).isTrue(); + + Future shutdown = sut.eventLoopGroupProvider().shutdown(0, 0, TimeUnit.SECONDS); + assertThat(shutdown.get()).isTrue(); + + assertThat(sut.commandLatencyCollector().isEnabled()).isFalse(); + } + + @Test + void testBuilder() throws Exception { + + DefaultClientResources sut = DefaultClientResources.builder().ioThreadPoolSize(4).computationThreadPoolSize(4) + .commandLatencyCollectorOptions(DefaultCommandLatencyCollectorOptions.disabled()).build(); + + EventExecutorGroup eventExecutors = sut.eventExecutorGroup(); + NioEventLoopGroup eventLoopGroup = sut.eventLoopGroupProvider().allocate(NioEventLoopGroup.class); + + assertThat(eventExecutors).hasSize(4); + assertThat(eventLoopGroup.executorCount()).isEqualTo(4); + assertThat(sut.ioThreadPoolSize()).isEqualTo(4); + assertThat(sut.commandLatencyCollector()).isNotNull(); + assertThat(sut.commandLatencyCollector().isEnabled()).isFalse(); + + assertThat(sut.shutdown(0, 0, TimeUnit.MILLISECONDS).get()).isTrue(); + } + + @Test + void testDnsResolver() { + + DirContextDnsResolver dirContextDnsResolver = new DirContextDnsResolver("8.8.8.8"); + + DefaultClientResources sut = DefaultClientResources.builder().dnsResolver(dirContextDnsResolver).build(); + + assertThat(sut.dnsResolver()).isEqualTo(dirContextDnsResolver); + } + + @Test + void testProvidedResources() { + + EventExecutorGroup executorMock = mock(EventExecutorGroup.class); + EventLoopGroupProvider groupProviderMock = mock(EventLoopGroupProvider.class); + Timer timerMock = mock(Timer.class); + EventBus eventBusMock = mock(EventBus.class); + CommandLatencyCollector latencyCollectorMock = mock(CommandLatencyCollector.class); + NettyCustomizer nettyCustomizer = mock(NettyCustomizer.class); + + 
DefaultClientResources sut = DefaultClientResources.builder().eventExecutorGroup(executorMock) + .eventLoopGroupProvider(groupProviderMock).timer(timerMock).eventBus(eventBusMock) + .commandLatencyCollector(latencyCollectorMock).nettyCustomizer(nettyCustomizer).build(); + + assertThat(sut.eventExecutorGroup()).isSameAs(executorMock); + assertThat(sut.eventLoopGroupProvider()).isSameAs(groupProviderMock); + assertThat(sut.timer()).isSameAs(timerMock); + assertThat(sut.eventBus()).isSameAs(eventBusMock); + assertThat(sut.nettyCustomizer()).isSameAs(nettyCustomizer); + + assertThat(TestFutures.getOrTimeout(sut.shutdown())).isTrue(); + + verifyZeroInteractions(executorMock); + verifyZeroInteractions(groupProviderMock); + verifyZeroInteractions(timerMock); + verify(latencyCollectorMock).isEnabled(); + verifyNoMoreInteractions(latencyCollectorMock); + } + + @Test + void mutateResources() { + + EventExecutorGroup executorMock = mock(EventExecutorGroup.class); + EventLoopGroupProvider groupProviderMock = mock(EventLoopGroupProvider.class); + Timer timerMock = mock(Timer.class); + Timer timerMock2 = mock(Timer.class); + EventBus eventBusMock = mock(EventBus.class); + CommandLatencyCollector latencyCollectorMock = mock(CommandLatencyCollector.class); + + ClientResources sut = ClientResources.builder().eventExecutorGroup(executorMock) + .eventLoopGroupProvider(groupProviderMock).timer(timerMock).eventBus(eventBusMock) + .commandLatencyCollector(latencyCollectorMock).build(); + + ClientResources copy = sut.mutate().timer(timerMock2).build(); + + assertThat(sut.eventExecutorGroup()).isSameAs(executorMock); + assertThat(sut.eventLoopGroupProvider()).isSameAs(groupProviderMock); + assertThat(sut.timer()).isSameAs(timerMock); + assertThat(copy.timer()).isSameAs(timerMock2).isNotSameAs(timerMock); + assertThat(sut.eventBus()).isSameAs(eventBusMock); + + assertThat(TestFutures.getOrTimeout(sut.shutdown())).isTrue(); + + verifyZeroInteractions(executorMock); + verifyZeroInteractions(groupProviderMock); + verifyZeroInteractions(timerMock); + } + + @Test + void testSmallPoolSize() { + + DefaultClientResources sut = DefaultClientResources.builder().ioThreadPoolSize(1).computationThreadPoolSize(1).build(); + + EventExecutorGroup eventExecutors = sut.eventExecutorGroup(); + NioEventLoopGroup eventLoopGroup = sut.eventLoopGroupProvider().allocate(NioEventLoopGroup.class); + + assertThat(eventExecutors).hasSize(2); + assertThat(eventLoopGroup.executorCount()).isEqualTo(2); + assertThat(sut.ioThreadPoolSize()).isEqualTo(2); + + assertThat(TestFutures.getOrTimeout(sut.shutdown(0, 0, TimeUnit.MILLISECONDS))).isTrue(); + } + + @Test + void testEventBus() { + + DefaultClientResources sut = DefaultClientResources.create(); + + EventBus eventBus = sut.eventBus(); + Event event = mock(Event.class); + + StepVerifier.create(eventBus.get()).then(() -> eventBus.publish(event)).expectNext(event).thenCancel().verify(); + + assertThat(TestFutures.getOrTimeout(sut.shutdown(0, 0, TimeUnit.MILLISECONDS))).isTrue(); + } + + @Test + void delayInstanceShouldRejectStatefulDelay() { + + assertThatThrownBy(() -> DefaultClientResources.builder().reconnectDelay(Delay.decorrelatedJitter().get())) + .isInstanceOf(IllegalArgumentException.class); + } + + @Test + void reconnectDelayCreatesNewForStatefulDelays() { + + DefaultClientResources resources = DefaultClientResources.builder().reconnectDelay(Delay.decorrelatedJitter()).build(); + + Delay delay1 = resources.reconnectDelay(); + Delay delay2 = resources.reconnectDelay(); + + 
assertThat(delay1).isNotSameAs(delay2); + + FastShutdown.shutdown(resources); + } + + @Test + void reconnectDelayReturnsSameInstanceForStatelessDelays() { + + DefaultClientResources resources = DefaultClientResources.builder().reconnectDelay(Delay.exponential()).build(); + + Delay delay1 = resources.reconnectDelay(); + Delay delay2 = resources.reconnectDelay(); + + assertThat(delay1).isSameAs(delay2); + + FastShutdown.shutdown(resources); + } + + @Test + void considersSharedStateFromMutation() { + + ClientResources clientResources = ClientResources.create(); + HashedWheelTimer timer = (HashedWheelTimer) clientResources.timer(); + + assertThat(ReflectionTestUtils.getField(timer, "workerState")).isEqualTo(0); + + ClientResources copy = clientResources.mutate().build(); + assertThat(copy.timer()).isSameAs(timer); + + copy.shutdown().awaitUninterruptibly(); + + assertThat(ReflectionTestUtils.getField(timer, "workerState")).isEqualTo(2); + } + + @Test + void considersDecoupledSharedStateFromMutation() { + + ClientResources clientResources = ClientResources.create(); + HashedWheelTimer timer = (HashedWheelTimer) clientResources.timer(); + + assertThat(ReflectionTestUtils.getField(timer, "workerState")).isEqualTo(0); + + ClientResources copy = clientResources.mutate().timer(new HashedWheelTimer()).build(); + HashedWheelTimer copyTimer = (HashedWheelTimer) copy.timer(); + assertThat(copy.timer()).isNotSameAs(timer); + + copy.shutdown().awaitUninterruptibly(); + + assertThat(ReflectionTestUtils.getField(timer, "workerState")).isEqualTo(0); + assertThat(ReflectionTestUtils.getField(copyTimer, "workerState")).isEqualTo(0); + + copyTimer.stop(); + timer.stop(); + } +} diff --git a/src/test/java/io/lettuce/core/resource/DefaultEventLoopGroupProviderUnitTests.java b/src/test/java/io/lettuce/core/resource/DefaultEventLoopGroupProviderUnitTests.java new file mode 100644 index 0000000000..8835272d2c --- /dev/null +++ b/src/test/java/io/lettuce/core/resource/DefaultEventLoopGroupProviderUnitTests.java @@ -0,0 +1,54 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.resource; + +import static org.assertj.core.api.Assertions.assertThatThrownBy; + +import java.util.concurrent.TimeUnit; + +import org.junit.jupiter.api.Test; + +import io.lettuce.test.TestFutures; +import io.netty.channel.nio.NioEventLoopGroup; +import io.netty.util.concurrent.Future; + +/** + * @author Mark Paluch + */ +class DefaultEventLoopGroupProviderUnitTests { + + @Test + void shutdownTerminatedEventLoopGroup() { + DefaultEventLoopGroupProvider sut = new DefaultEventLoopGroupProvider(1); + + NioEventLoopGroup eventLoopGroup = sut.allocate(NioEventLoopGroup.class); + + Future shutdown = sut.release(eventLoopGroup, 10, 10, TimeUnit.MILLISECONDS); + TestFutures.awaitOrTimeout(shutdown); + + Future shutdown2 = sut.release(eventLoopGroup, 10, 10, TimeUnit.MILLISECONDS); + TestFutures.awaitOrTimeout(shutdown2); + } + + @Test + void getAfterShutdown() { + + DefaultEventLoopGroupProvider sut = new DefaultEventLoopGroupProvider(1); + + TestFutures.awaitOrTimeout(sut.shutdown(10, 10, TimeUnit.MILLISECONDS)); + assertThatThrownBy(() -> sut.allocate(NioEventLoopGroup.class)).isInstanceOf(IllegalStateException.class); + } +} diff --git a/src/test/java/io/lettuce/core/resource/DirContextDnsResolverTests.java b/src/test/java/io/lettuce/core/resource/DirContextDnsResolverTests.java new file mode 100644 index 0000000000..4698d9151a --- /dev/null +++ b/src/test/java/io/lettuce/core/resource/DirContextDnsResolverTests.java @@ -0,0 +1,195 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.resource; + +import static org.assertj.core.api.Assertions.assertThatThrownBy; +import static org.assertj.core.api.AssertionsForClassTypes.assertThat; + +import java.net.Inet4Address; +import java.net.Inet6Address; +import java.net.InetAddress; +import java.net.UnknownHostException; +import java.util.Arrays; +import java.util.Properties; + +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Disabled; +import org.junit.jupiter.api.Test; + +/** + * Tests for {@link DirContextDnsResolver}. 
+ * + * @author Mark Paluch + */ +@Disabled("Tests require an internet connection") +class DirContextDnsResolverTests { + + private DirContextDnsResolver resolver; + + @BeforeEach + void before() { + + System.getProperties().remove(DirContextDnsResolver.PREFER_IPV4_KEY); + System.getProperties().remove(DirContextDnsResolver.PREFER_IPV6_KEY); + } + + @AfterEach + void tearDown() throws Exception { + + if (resolver != null) { + resolver.close(); + } + } + + @Test + @Disabled("Requires guarding against IPv6 absence") + void shouldResolveDefault() throws Exception { + + resolver = new DirContextDnsResolver(); + InetAddress[] resolved = resolver.resolve("google.com"); + + assertThat(resolved.length).isGreaterThanOrEqualTo(2); + assertThat(resolved[0]).isInstanceOf(Inet6Address.class); + assertThat(resolved[0].getHostName()).isEqualTo("google.com"); + assertThat(resolved[resolved.length - 1]).isInstanceOf(Inet4Address.class); + } + + @Test + void shouldResolvePreferIpv4WithProperties() throws Exception { + + resolver = new DirContextDnsResolver(true, false, new Properties()); + + InetAddress[] resolved = resolver.resolve("google.com"); + + assertThat(resolved.length).isGreaterThanOrEqualTo(1); + assertThat(resolved[0]).isInstanceOf(Inet4Address.class); + } + + @Test + @Disabled("Requires guarding against IPv6 absence") + void shouldResolveWithDnsServer() throws Exception { + + resolver = new DirContextDnsResolver(Arrays.asList("[2001:4860:4860::8888]", "8.8.8.8")); + + InetAddress[] resolved = resolver.resolve("google.com"); + + assertThat(resolved.length).isGreaterThan(1); + } + + @Test + void shouldPreferIpv4() throws Exception { + + System.setProperty(DirContextDnsResolver.PREFER_IPV4_KEY, "true"); + + resolver = new DirContextDnsResolver(); + InetAddress[] resolved = resolver.resolve("google.com"); + + assertThat(resolved.length).isGreaterThan(0); + assertThat(resolved[0]).isInstanceOf(Inet4Address.class); + } + + @Test + void shouldPreferIpv4AndNotIpv6() throws Exception { + + System.setProperty(DirContextDnsResolver.PREFER_IPV4_KEY, "true"); + System.setProperty(DirContextDnsResolver.PREFER_IPV6_KEY, "false"); + + resolver = new DirContextDnsResolver(); + InetAddress[] resolved = resolver.resolve("google.com"); + + assertThat(resolved.length).isGreaterThan(0); + assertThat(resolved[0]).isInstanceOf(Inet4Address.class); + } + + @Test + @Disabled("Requires guarding against IPv6 absence") + void shouldPreferIpv6AndNotIpv4() throws Exception { + + System.setProperty(DirContextDnsResolver.PREFER_IPV4_KEY, "false"); + System.setProperty(DirContextDnsResolver.PREFER_IPV6_KEY, "true"); + + resolver = new DirContextDnsResolver(); + InetAddress[] resolved = resolver.resolve("google.com"); + + assertThat(resolved.length).isGreaterThan(0); + assertThat(resolved[0]).isInstanceOf(Inet6Address.class); + } + + @Test + void shouldFailWithUnknownHost() { + + resolver = new DirContextDnsResolver("8.8.8.8"); + + assertThatThrownBy(() -> resolver.resolve("unknown-domain-name")).isInstanceOf(UnknownHostException.class); + } + + @Test + void shouldResolveCname() throws Exception { + + resolver = new DirContextDnsResolver(); + InetAddress[] resolved = resolver.resolve("www.github.io"); + + assertThat(resolved.length).isGreaterThan(0); + assertThat(resolved[0]).isInstanceOf(InetAddress.class); + assertThat(resolved[0].getHostName()).isEqualTo("www.github.io"); + } + + @Test + void shouldResolveWithoutSubdomain() throws Exception { + + resolver = new DirContextDnsResolver(); + InetAddress[] resolved = 
resolver.resolve("paluch.biz"); + + assertThat(resolved.length).isGreaterThan(0); + assertThat(resolved[0]).isInstanceOf(InetAddress.class); + assertThat(resolved[0].getHostName()).isEqualTo("paluch.biz"); + + resolved = resolver.resolve("gmail.com"); + + assertThat(resolved.length).isGreaterThan(0); + assertThat(resolved[0]).isInstanceOf(InetAddress.class); + assertThat(resolved[0].getHostName()).isEqualTo("gmail.com"); + } + + @Test + void shouldWorkWithIpv4Address() throws Exception { + + resolver = new DirContextDnsResolver(); + InetAddress[] resolved = resolver.resolve("127.0.0.1"); + + assertThat(resolved.length).isGreaterThan(0); + assertThat(resolved[0]).isInstanceOf(Inet4Address.class); + assertThat(resolved[0].getHostAddress()).isEqualTo("127.0.0.1"); + } + + @Test + void shouldWorkWithIpv6Addresses() throws Exception { + + resolver = new DirContextDnsResolver(); + InetAddress[] resolved = resolver.resolve("::1"); + + assertThat(resolved.length).isGreaterThan(0); + assertThat(resolved[0]).isInstanceOf(Inet6Address.class); + assertThat(resolved[0].getHostAddress()).isEqualTo("0:0:0:0:0:0:0:1"); + + resolved = resolver.resolve("2a00:1450:4001:816::200e"); + + assertThat(resolved.length).isGreaterThan(0); + assertThat(resolved[0]).isInstanceOf(Inet6Address.class); + assertThat(resolved[0].getHostAddress()).isEqualTo("2a00:1450:4001:816:0:0:0:200e"); + } +} diff --git a/src/test/java/io/lettuce/core/resource/EqualJitterDelayUnitTests.java b/src/test/java/io/lettuce/core/resource/EqualJitterDelayUnitTests.java new file mode 100644 index 0000000000..a30f001d96 --- /dev/null +++ b/src/test/java/io/lettuce/core/resource/EqualJitterDelayUnitTests.java @@ -0,0 +1,84 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.resource; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; + +import java.time.Duration; +import java.util.concurrent.TimeUnit; + +import org.junit.jupiter.api.Test; + +/** + * @author Jongyeol Choi + * @author Mark Paluch + */ +class EqualJitterDelayUnitTests { + + @Test + void shouldNotCreateIfLowerBoundIsNegative() { + assertThatThrownBy(() -> Delay.equalJitter(-1, 100, 1, TimeUnit.MILLISECONDS)).isInstanceOf( + IllegalArgumentException.class); + } + + @Test + void shouldNotCreateIfLowerBoundIsSameAsUpperBound() { + assertThatThrownBy(() -> Delay.equalJitter(100, 100, 1, TimeUnit.MILLISECONDS)).isInstanceOf( + IllegalArgumentException.class); + } + + @Test + void negativeAttemptShouldReturnZero() { + + Delay delay = Delay.equalJitter(); + + assertThat(delay.createDelay(-1)).isEqualTo(Duration.ZERO); + } + + @Test + void zeroShouldReturnZero() { + + Delay delay = Delay.equalJitter(); + + assertThat(delay.createDelay(0)).isEqualTo(Duration.ZERO); + } + + @Test + void testDefaultDelays() { + + Delay delay = Delay.equalJitter(); + + assertThat(delay.createDelay(1).toMillis()).isBetween(0L, 1L); + assertThat(delay.createDelay(2).toMillis()).isBetween(0L, 2L); + assertThat(delay.createDelay(3).toMillis()).isBetween(0L, 4L); + assertThat(delay.createDelay(4).toMillis()).isBetween(0L, 8L); + assertThat(delay.createDelay(5).toMillis()).isBetween(0L, 16L); + assertThat(delay.createDelay(6).toMillis()).isBetween(0L, 32L); + assertThat(delay.createDelay(7).toMillis()).isBetween(0L, 64L); + assertThat(delay.createDelay(8).toMillis()).isBetween(0L, 128L); + assertThat(delay.createDelay(9).toMillis()).isBetween(0L, 256L); + assertThat(delay.createDelay(10).toMillis()).isBetween(0L, 512L); + assertThat(delay.createDelay(11).toMillis()).isBetween(0L, 1024L); + assertThat(delay.createDelay(12).toMillis()).isBetween(0L, 2048L); + assertThat(delay.createDelay(13).toMillis()).isBetween(0L, 4096L); + assertThat(delay.createDelay(14).toMillis()).isBetween(0L, 8192L); + assertThat(delay.createDelay(15).toMillis()).isBetween(0L, 16384L); + assertThat(delay.createDelay(16).toMillis()).isBetween(0L, 30000L); + assertThat(delay.createDelay(17).toMillis()).isBetween(0L, 30000L); + assertThat(delay.createDelay(Integer.MAX_VALUE).toMillis()).isBetween(0L, 30000L); + } +} diff --git a/src/test/java/io/lettuce/core/resource/ExponentialDelayUnitTests.java b/src/test/java/io/lettuce/core/resource/ExponentialDelayUnitTests.java new file mode 100644 index 0000000000..f61dec2142 --- /dev/null +++ b/src/test/java/io/lettuce/core/resource/ExponentialDelayUnitTests.java @@ -0,0 +1,101 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.resource; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; + +import java.util.concurrent.TimeUnit; + +import org.junit.jupiter.api.Test; + +/** + * @author Mark Paluch + */ +class ExponentialDelayUnitTests { + + @Test + void shouldNotCreateIfLowerBoundIsNegative() { + assertThatThrownBy(() -> Delay.exponential(-1, 100, TimeUnit.MILLISECONDS, 10)).isInstanceOf( + IllegalArgumentException.class); + } + + @Test + void shouldNotCreateIfLowerBoundIsSameAsUpperBound() { + assertThatThrownBy(() -> Delay.exponential(100, 100, TimeUnit.MILLISECONDS, 10)).isInstanceOf( + IllegalArgumentException.class); + } + + @Test + void shouldNotCreateIfPowerIsOne() { + assertThatThrownBy(() -> Delay.exponential(100, 1000, TimeUnit.MILLISECONDS, 1)).isInstanceOf( + IllegalArgumentException.class); + } + + @Test + void negativeAttemptShouldReturnZero() { + + Delay delay = Delay.exponential(); + + assertThat(delay.createDelay(-1).toMillis()).isEqualTo(0); + } + + @Test + void zeroShouldReturnZero() { + + Delay delay = Delay.exponential(); + + assertThat(delay.createDelay(0).toMillis()).isEqualTo(0); + } + + @Test + void testDefaultDelays() { + + Delay delay = Delay.exponential(); + + assertThat(delay.createDelay(1).toMillis()).isEqualTo(1); + assertThat(delay.createDelay(2).toMillis()).isEqualTo(2); + assertThat(delay.createDelay(3).toMillis()).isEqualTo(4); + assertThat(delay.createDelay(4).toMillis()).isEqualTo(8); + assertThat(delay.createDelay(5).toMillis()).isEqualTo(16); + assertThat(delay.createDelay(6).toMillis()).isEqualTo(32); + assertThat(delay.createDelay(7).toMillis()).isEqualTo(64); + assertThat(delay.createDelay(8).toMillis()).isEqualTo(128); + assertThat(delay.createDelay(9).toMillis()).isEqualTo(256); + assertThat(delay.createDelay(10).toMillis()).isEqualTo(512); + assertThat(delay.createDelay(11).toMillis()).isEqualTo(1024); + assertThat(delay.createDelay(12).toMillis()).isEqualTo(2048); + assertThat(delay.createDelay(13).toMillis()).isEqualTo(4096); + assertThat(delay.createDelay(14).toMillis()).isEqualTo(8192); + assertThat(delay.createDelay(15).toMillis()).isEqualTo(16384); + assertThat(delay.createDelay(16).toMillis()).isEqualTo(30000); + assertThat(delay.createDelay(17).toMillis()).isEqualTo(30000); + assertThat(delay.createDelay(Integer.MAX_VALUE).toMillis()).isEqualTo(30000); + } + + @Test + void testPow10Delays() { + + Delay delay = Delay.exponential(100, 10000, TimeUnit.MILLISECONDS, 10); + + assertThat(delay.createDelay(1).toMillis()).isEqualTo(100); + assertThat(delay.createDelay(2).toMillis()).isEqualTo(100); + assertThat(delay.createDelay(3).toMillis()).isEqualTo(100); + assertThat(delay.createDelay(4).toMillis()).isEqualTo(1000); + assertThat(delay.createDelay(5).toMillis()).isEqualTo(10000); + assertThat(delay.createDelay(Integer.MAX_VALUE).toMillis()).isEqualTo(10000); + } +} diff --git a/src/test/java/io/lettuce/core/resource/FullJitterDelayUnitTests.java b/src/test/java/io/lettuce/core/resource/FullJitterDelayUnitTests.java new file mode 100644 index 0000000000..a55929961d --- /dev/null +++ b/src/test/java/io/lettuce/core/resource/FullJitterDelayUnitTests.java @@ -0,0 +1,84 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.resource; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; + +import java.time.Duration; +import java.util.concurrent.TimeUnit; + +import org.junit.jupiter.api.Test; + +/** + * @author Jongyeol Choi + * @author Mark Paluch + */ +class FullJitterDelayUnitTests { + + @Test + void shouldNotCreateIfLowerBoundIsNegative() { + assertThatThrownBy(() -> Delay.fullJitter(-1, 100, 1, TimeUnit.MILLISECONDS)).isInstanceOf( + IllegalArgumentException.class); + } + + @Test + void shouldNotCreateIfLowerBoundIsSameAsUpperBound() { + assertThatThrownBy(() -> Delay.fullJitter(100, 100, 1, TimeUnit.MILLISECONDS)).isInstanceOf( + IllegalArgumentException.class); + } + + @Test + void negativeAttemptShouldReturnZero() { + + Delay delay = Delay.fullJitter(); + + assertThat(delay.createDelay(-1)).isEqualTo(Duration.ZERO); + } + + @Test + void zeroShouldReturnZero() { + + Delay delay = Delay.fullJitter(); + + assertThat(delay.createDelay(0)).isEqualTo(Duration.ZERO); + } + + @Test + void testDefaultDelays() { + + Delay delay = Delay.fullJitter(); + + assertThat(delay.createDelay(1).toMillis()).isBetween(0L, 1L); + assertThat(delay.createDelay(2).toMillis()).isBetween(1L, 2L); + assertThat(delay.createDelay(3).toMillis()).isBetween(2L, 4L); + assertThat(delay.createDelay(4).toMillis()).isBetween(4L, 8L); + assertThat(delay.createDelay(5).toMillis()).isBetween(8L, 16L); + assertThat(delay.createDelay(6).toMillis()).isBetween(16L, 32L); + assertThat(delay.createDelay(7).toMillis()).isBetween(32L, 64L); + assertThat(delay.createDelay(8).toMillis()).isBetween(64L, 128L); + assertThat(delay.createDelay(9).toMillis()).isBetween(128L, 256L); + assertThat(delay.createDelay(10).toMillis()).isBetween(256L, 512L); + assertThat(delay.createDelay(11).toMillis()).isBetween(512L, 1024L); + assertThat(delay.createDelay(12).toMillis()).isBetween(1024L, 2048L); + assertThat(delay.createDelay(13).toMillis()).isBetween(2048L, 4096L); + assertThat(delay.createDelay(14).toMillis()).isBetween(4096L, 8192L); + assertThat(delay.createDelay(15).toMillis()).isBetween(8192L, 16384L); + assertThat(delay.createDelay(16).toMillis()).isBetween(15000L, 30000L); + assertThat(delay.createDelay(17).toMillis()).isBetween(15000L, 30000L); + assertThat(delay.createDelay(Integer.MAX_VALUE).toMillis()).isBetween(15000L, 30000L); + } +} diff --git a/src/test/java/io/lettuce/core/resource/MappingSocketAddressResolverUnitTests.java b/src/test/java/io/lettuce/core/resource/MappingSocketAddressResolverUnitTests.java new file mode 100644 index 0000000000..2fc003068b --- /dev/null +++ b/src/test/java/io/lettuce/core/resource/MappingSocketAddressResolverUnitTests.java @@ -0,0 +1,77 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.resource; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.mockito.ArgumentMatchers.anyString; +import static org.mockito.Mockito.when; + +import java.net.InetAddress; +import java.net.InetSocketAddress; +import java.net.UnknownHostException; +import java.util.function.Function; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; +import org.mockito.Mock; +import org.mockito.junit.jupiter.MockitoExtension; +import org.mockito.junit.jupiter.MockitoSettings; +import org.mockito.quality.Strictness; + +import io.lettuce.core.RedisURI; +import io.lettuce.core.internal.HostAndPort; + +/** + * @author Mark Paluch + */ +@ExtendWith(MockitoExtension.class) +@MockitoSettings(strictness = Strictness.LENIENT) +class MappingSocketAddressResolverUnitTests { + + @Mock + DnsResolver dnsResolver; + + @BeforeEach + void before() throws UnknownHostException { + when(dnsResolver.resolve(anyString())).thenReturn(new InetAddress[0]); + } + + @Test + void shouldPassThruHostAndPort() { + + RedisURI localhost = RedisURI.create("localhost", RedisURI.DEFAULT_REDIS_PORT); + MappingSocketAddressResolver resolver = MappingSocketAddressResolver.create(dnsResolver, Function.identity()); + + InetSocketAddress resolve = (InetSocketAddress) resolver.resolve(localhost); + + assertThat(resolve.getPort()).isEqualTo(RedisURI.DEFAULT_REDIS_PORT); + assertThat(resolve.getHostString()).isEqualTo("localhost"); + } + + @Test + void shouldMapHostAndPort() { + + RedisURI localhost = RedisURI.create("localhost", RedisURI.DEFAULT_REDIS_PORT); + MappingSocketAddressResolver resolver = MappingSocketAddressResolver.create(dnsResolver, + it -> HostAndPort.of(it.getHostText() + "-foo", it.getPort() + 100)); + + InetSocketAddress resolve = (InetSocketAddress) resolver.resolve(localhost); + + assertThat(resolve.getPort()).isEqualTo(RedisURI.DEFAULT_REDIS_PORT + 100); + assertThat(resolve.getHostString()).isEqualTo("localhost-foo"); + } +} diff --git a/src/test/java/io/lettuce/core/sentinel/SentinelCommandIntegrationTests.java b/src/test/java/io/lettuce/core/sentinel/SentinelCommandIntegrationTests.java new file mode 100644 index 0000000000..c8ef77a33d --- /dev/null +++ b/src/test/java/io/lettuce/core/sentinel/SentinelCommandIntegrationTests.java @@ -0,0 +1,214 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */
+package io.lettuce.core.sentinel;
+
+import static io.lettuce.test.settings.TestSettings.hostAddr;
+import static org.assertj.core.api.Assertions.assertThat;
+import static org.assertj.core.api.Assertions.assertThatThrownBy;
+
+import java.net.InetSocketAddress;
+import java.net.SocketAddress;
+import java.util.List;
+import java.util.Map;
+
+import javax.inject.Inject;
+
+import org.junit.jupiter.api.AfterEach;
+import org.junit.jupiter.api.BeforeEach;
+import org.junit.jupiter.api.Test;
+import org.junit.jupiter.api.extension.ExtendWith;
+
+import io.lettuce.RedisBug;
+import io.lettuce.core.RedisClient;
+import io.lettuce.core.RedisConnectionException;
+import io.lettuce.core.RedisURI;
+import io.lettuce.core.TestSupport;
+import io.lettuce.core.api.sync.RedisCommands;
+import io.lettuce.core.sentinel.api.StatefulRedisSentinelConnection;
+import io.lettuce.core.sentinel.api.sync.RedisSentinelCommands;
+import io.lettuce.test.LettuceExtension;
+import io.lettuce.test.Wait;
+import io.lettuce.test.settings.TestSettings;
+
+/**
+ * @author Mark Paluch
+ */
+@ExtendWith(LettuceExtension.class)
+public class SentinelCommandIntegrationTests extends TestSupport {
+
+    private final RedisClient redisClient;
+    private StatefulRedisSentinelConnection<String, String> connection;
+    private RedisSentinelCommands<String, String> sentinel;
+
+    @Inject
+    public SentinelCommandIntegrationTests(RedisClient redisClient) {
+        this.redisClient = redisClient;
+    }
+
+    @BeforeEach
+    void before() {
+
+        this.connection = this.redisClient.connectSentinel(SentinelTestSettings.SENTINEL_URI);
+        this.sentinel = getSyncConnection(this.connection);
+    }
+
+    protected RedisSentinelCommands<String, String> getSyncConnection(StatefulRedisSentinelConnection<String, String> connection) {
+        return connection.sync();
+    }
+
+    @AfterEach
+    void after() {
+        this.connection.close();
+    }
+
+    @Test
+    void getMasterAddr() {
+        SocketAddress result = sentinel.getMasterAddrByName(SentinelTestSettings.MASTER_ID);
+        InetSocketAddress socketAddress = (InetSocketAddress) result;
+        assertThat(socketAddress.getHostName()).contains(TestSettings.hostAddr());
+    }
+
+    @Test
+    void getMasterAddrButNoMasterPresent() {
+        InetSocketAddress socketAddress = (InetSocketAddress) sentinel.getMasterAddrByName("unknown");
+        assertThat(socketAddress).isNull();
+    }
+
+    @Test
+    void getMasterAddrByName() {
+        InetSocketAddress socketAddress = (InetSocketAddress) sentinel.getMasterAddrByName(SentinelTestSettings.MASTER_ID);
+        assertThat(socketAddress.getPort()).isBetween(6479, 6485);
+    }
+
+    @Test
+    void masters() {
+
+        List<Map<String, String>> result = sentinel.masters();
+
+        assertThat(result.size()).isGreaterThan(0);
+
+        Map<String, String> map = result.get(0);
+        assertThat(map.get("flags")).isNotNull();
+        assertThat(map.get("config-epoch")).isNotNull();
+        assertThat(map.get("port")).isNotNull();
+    }
+
+    @Test
+    void sentinelConnectWith() {
+
+        RedisURI uri = RedisURI.Builder.sentinel(TestSettings.host(), 1234, SentinelTestSettings.MASTER_ID)
+                .withSentinel(TestSettings.host()).build();
+
+        RedisSentinelCommands<String, String> sentinelConnection = this.redisClient.connectSentinel(uri).sync();
+        assertThat(sentinelConnection.ping()).isEqualTo("PONG");
+
+        sentinelConnection.getStatefulConnection().close();
+
+        RedisCommands<String, String> connection2 = this.redisClient.connect(uri).sync();
+        assertThat(connection2.ping()).isEqualTo("PONG");
+        connection2.quit();
+
+        Wait.untilTrue(() -> connection2.getStatefulConnection().isOpen()).waitOrTimeout();
+
+        assertThat(connection2.ping()).isEqualTo("PONG");
+        connection2.getStatefulConnection().close();
+    }
+
+    @Test
+    void sentinelConnectWrongMaster() {
+
+        RedisURI nonexistent = RedisURI.Builder.sentinel(TestSettings.host(), 1234, "nonexistent")
+                .withSentinel(TestSettings.host()).build();
+
+        assertThatThrownBy(() -> redisClient.connect(nonexistent)).isInstanceOf(RedisConnectionException.class);
+    }
+
+    @Test
+    void getMaster() {
+
+        Map<String, String> result = sentinel.master(SentinelTestSettings.MASTER_ID);
+        assertThat(result.get("ip")).isEqualTo(hostAddr()); // may be an IPv4 or an IPv6 address, depending on the environment
+        assertThat(result).containsKey("role-reported");
+    }
+
+    @Test
+    void role() {
+
+        RedisCommands<String, String> connection = redisClient.connect(RedisURI.Builder.redis(host, 26380).build()).sync();
+        try {
+
+            List<Object> objects = connection.role();
+
+            assertThat(objects).hasSize(2);
+
+            assertThat(objects.get(0)).isEqualTo("sentinel");
+            assertThat(objects.get(1).toString()).isEqualTo("[" + SentinelTestSettings.MASTER_ID + "]");
+        } finally {
+            connection.getStatefulConnection().close();
+        }
+    }
+
+    @Test
+    void getSlaves() {
+
+        List<Map<String, String>> result = sentinel.slaves(SentinelTestSettings.MASTER_ID);
+        assertThat(result).hasSize(1);
+        assertThat(result.get(0)).containsKey("port");
+    }
+
+    @Test
+    void reset() {
+
+        Long result = sentinel.reset("other");
+        assertThat(result.intValue()).isEqualTo(0);
+    }
+
+    @Test
+    void failover() {
+
+        try {
+            sentinel.failover("other");
+        } catch (Exception e) {
+            assertThat(e).hasMessageContaining("ERR No such master with that name");
+        }
+    }
+
+    @Test
+    void monitor() {
+
+        try {
+            sentinel.remove("mymaster2");
+        } catch (Exception e) {
+        }
+
+        String result = sentinel.monitor("mymaster2", hostAddr(), 8989, 2);
+        assertThat(result).isEqualTo("OK");
+    }
+
+    @Test
+    void ping() {
+
+        String result = sentinel.ping();
+        assertThat(result).isEqualTo("PONG");
+    }
+
+    @Test
+    void set() {
+
+        String result = sentinel.set(SentinelTestSettings.MASTER_ID, "down-after-milliseconds", "1000");
+        assertThat(result).isEqualTo("OK");
+    }
+}
diff --git a/src/test/java/io/lettuce/core/sentinel/SentinelConnectionIntegrationTests.java b/src/test/java/io/lettuce/core/sentinel/SentinelConnectionIntegrationTests.java
new file mode 100644
index 0000000000..21936437b7
--- /dev/null
+++ b/src/test/java/io/lettuce/core/sentinel/SentinelConnectionIntegrationTests.java
@@ -0,0 +1,209 @@
+/*
+ * Copyright 2011-2020 the original author or authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * https://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package io.lettuce.core.sentinel;
+
+import static org.assertj.core.api.Assertions.assertThat;
+
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
+
+import javax.inject.Inject;
+
+import org.junit.jupiter.api.AfterEach;
+import org.junit.jupiter.api.BeforeEach;
+import org.junit.jupiter.api.Test;
+import org.junit.jupiter.api.extension.ExtendWith;
+
+import io.lettuce.RedisBug;
+import io.lettuce.core.*;
+import io.lettuce.core.api.StatefulRedisConnection;
+import io.lettuce.core.codec.ByteArrayCodec;
+import io.lettuce.core.sentinel.api.StatefulRedisSentinelConnection;
+import io.lettuce.core.sentinel.api.async.RedisSentinelAsyncCommands;
+import io.lettuce.core.sentinel.api.sync.RedisSentinelCommands;
+import io.lettuce.test.LettuceExtension;
+import io.lettuce.test.TestFutures;
+import io.lettuce.test.Wait;
+import io.lettuce.test.settings.TestSettings;
+
+/**
+ * @author Mark Paluch
+ */
+@ExtendWith(LettuceExtension.class)
+public class SentinelConnectionIntegrationTests extends TestSupport {
+
+    private final RedisClient redisClient;
+    private StatefulRedisSentinelConnection<String, String> connection;
+    private RedisSentinelCommands<String, String> sentinel;
+    private RedisSentinelAsyncCommands<String, String> sentinelAsync;
+
+    @Inject
+    public SentinelConnectionIntegrationTests(RedisClient redisClient) {
+        this.redisClient = redisClient;
+    }
+
+    @BeforeEach
+    void before() {
+
+        this.connection = this.redisClient.connectSentinel(SentinelTestSettings.SENTINEL_URI);
+        this.sentinel = getSyncConnection(this.connection);
+        this.sentinelAsync = this.connection.async();
+    }
+
+    protected RedisSentinelCommands<String, String> getSyncConnection(StatefulRedisSentinelConnection<String, String> connection) {
+        return connection.sync();
+    }
+
+    @AfterEach
+    void after() {
+        this.connection.close();
+    }
+
+    @Test
+    void testAsync() {
+
+        RedisFuture<List<Map<String, String>>> future = sentinelAsync.masters();
+
+        assertThat(TestFutures.getOrTimeout(future)).isNotNull();
+        assertThat(future.isDone()).isTrue();
+        assertThat(future.isCancelled()).isFalse();
+    }
+
+    @Test
+    void testFuture() throws Exception {
+
+        RedisFuture<Map<String, String>> future = sentinelAsync.master("unknown master");
+
+        AtomicBoolean state = new AtomicBoolean();
+
+        future.exceptionally(throwable -> {
+            state.set(true);
+            return null;
+        });
+
+        assertThat(future.await(5, TimeUnit.SECONDS)).isTrue();
+        assertThat(state.get()).isTrue();
+    }
+
+    @Test
+    void testStatefulConnection() {
+
+        StatefulRedisSentinelConnection<String, String> statefulConnection = sentinel.getStatefulConnection();
+        assertThat(statefulConnection).isSameAs(statefulConnection.async().getStatefulConnection());
+    }
+
+    @Test
+    void testSyncConnection() {
+
+        StatefulRedisSentinelConnection<String, String> statefulConnection = sentinel.getStatefulConnection();
+        RedisSentinelCommands<String, String> sync = statefulConnection.sync();
+        assertThat(sync.ping()).isEqualTo("PONG");
+    }
+
+    @Test
+    void testSyncAsyncConversion() {
+
+        StatefulRedisSentinelConnection<String, String> statefulConnection = sentinel.getStatefulConnection();
+        assertThat(statefulConnection.sync().getStatefulConnection()).isSameAs(statefulConnection);
+        assertThat(statefulConnection.sync().getStatefulConnection().sync()).isSameAs(statefulConnection.sync());
+    }
+
+    @Test
+    void testSyncClose() {
+
+        StatefulRedisSentinelConnection<String, String> statefulConnection = sentinel.getStatefulConnection();
+        statefulConnection.sync().getStatefulConnection().close();
+
+        Wait.untilTrue(() -> !sentinel.isOpen()).waitOrTimeout();
+
+        assertThat(sentinel.isOpen()).isFalse();
assertThat(statefulConnection.isOpen()).isFalse(); + } + + @Test + void testAsyncClose() { + StatefulRedisSentinelConnection statefulConnection = sentinel.getStatefulConnection(); + statefulConnection.async().getStatefulConnection().close(); + + Wait.untilTrue(() -> !sentinel.isOpen()).waitOrTimeout(); + + assertThat(sentinel.isOpen()).isFalse(); + assertThat(statefulConnection.isOpen()).isFalse(); + } + + @Test + void connectToOneNode() { + RedisSentinelCommands connection = redisClient.connectSentinel(SentinelTestSettings.SENTINEL_URI) + .sync(); + assertThat(connection.ping()).isEqualTo("PONG"); + connection.getStatefulConnection().close(); + } + + @Test + void connectWithByteCodec() { + RedisSentinelCommands connection = redisClient.connectSentinel(new ByteArrayCodec(), + SentinelTestSettings.SENTINEL_URI).sync(); + assertThat(connection.master(SentinelTestSettings.MASTER_ID.getBytes())).isNotNull(); + connection.getStatefulConnection().close(); + } + + @Test + void sentinelConnectionShouldDiscardPassword() { + + RedisURI redisURI = RedisURI.Builder.sentinel(TestSettings.host(), SentinelTestSettings.MASTER_ID) + .withPassword("hello-world").build(); + + redisClient.setOptions(ClientOptions.builder().build()); + StatefulRedisSentinelConnection connection = redisClient.connectSentinel(redisURI); + + assertThat(connection.sync().ping()).isEqualTo("PONG"); + + connection.close(); + + redisClient.setOptions(ClientOptions.create()); + } + + @Test + void sentinelConnectionShouldSetClientName() { + + RedisURI redisURI = RedisURI.Builder.sentinel(TestSettings.host(), SentinelTestSettings.MASTER_ID) + .withClientName("my-client").build(); + + StatefulRedisSentinelConnection connection = redisClient.connectSentinel(redisURI); + + assertThat(connection.sync().clientGetname()).isEqualTo(redisURI.getClientName()); + + connection.close(); + } + + @Test + void sentinelManagedConnectionShouldSetClientName() { + + RedisURI redisURI = RedisURI.Builder.sentinel(TestSettings.host(), SentinelTestSettings.MASTER_ID) + .withClientName("my-client").build(); + + StatefulRedisConnection connection = redisClient.connect(redisURI); + + assertThat(connection.sync().clientGetname()).isEqualTo(redisURI.getClientName()); + + connection.sync().quit(); + assertThat(connection.sync().clientGetname()).isEqualTo(redisURI.getClientName()); + + connection.close(); + } +} diff --git a/src/test/java/io/lettuce/core/sentinel/SentinelServerCommandIntegrationTests.java b/src/test/java/io/lettuce/core/sentinel/SentinelServerCommandIntegrationTests.java new file mode 100644 index 0000000000..206d03c094 --- /dev/null +++ b/src/test/java/io/lettuce/core/sentinel/SentinelServerCommandIntegrationTests.java @@ -0,0 +1,129 @@ +/* + * Copyright 2016-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.sentinel; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.util.regex.Matcher; +import java.util.regex.Pattern; + +import javax.inject.Inject; + +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.RedisBug; +import io.lettuce.core.KillArgs; +import io.lettuce.core.RedisClient; +import io.lettuce.core.RedisURI; +import io.lettuce.core.TestSupport; +import io.lettuce.core.sentinel.api.StatefulRedisSentinelConnection; +import io.lettuce.core.sentinel.api.sync.RedisSentinelCommands; +import io.lettuce.test.LettuceExtension; +import io.lettuce.test.settings.TestSettings; + +/** + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +public class SentinelServerCommandIntegrationTests extends TestSupport { + + private final RedisClient redisClient; + private StatefulRedisSentinelConnection connection; + private RedisSentinelCommands sentinel; + + @Inject + public SentinelServerCommandIntegrationTests(RedisClient redisClient) { + this.redisClient = redisClient; + } + + @BeforeEach + void before() { + + this.connection = this.redisClient.connectSentinel(SentinelTestSettings.SENTINEL_URI); + this.sentinel = getSyncConnection(this.connection); + } + + protected RedisSentinelCommands getSyncConnection( + StatefulRedisSentinelConnection connection) { + return connection.sync(); + } + + @AfterEach + void after() { + this.connection.close(); + } + + @Test + public void clientGetSetname() { + assertThat(sentinel.clientGetname()).isNull(); + assertThat(sentinel.clientSetname("test")).isEqualTo("OK"); + assertThat(sentinel.clientGetname()).isEqualTo("test"); + assertThat(sentinel.clientSetname("")).isEqualTo("OK"); + assertThat(sentinel.clientGetname()).isNull(); + } + + @Test + public void clientPause() { + assertThat(sentinel.clientPause(10)).isEqualTo("OK"); + } + + @Test + public void clientKill() { + Pattern p = Pattern.compile(".*addr=([^ ]+).*"); + String clients = sentinel.clientList(); + Matcher m = p.matcher(clients); + + assertThat(m.lookingAt()).isTrue(); + assertThat(sentinel.clientKill(m.group(1))).isEqualTo("OK"); + } + + @Test + public void clientKillExtended() { + + RedisURI redisURI = RedisURI.Builder.sentinel(TestSettings.host(), SentinelTestSettings.MASTER_ID).build(); + RedisSentinelCommands connection2 = redisClient.connectSentinel(redisURI).sync(); + connection2.clientSetname("killme"); + + Pattern p = Pattern.compile("^.*addr=([^ ]+).*name=killme.*$", Pattern.MULTILINE | Pattern.DOTALL); + String clients = sentinel.clientList(); + Matcher m = p.matcher(clients); + + assertThat(m.matches()).isTrue(); + String addr = m.group(1); + assertThat(sentinel.clientKill(KillArgs.Builder.addr(addr).skipme())).isGreaterThan(0); + + assertThat(sentinel.clientKill(KillArgs.Builder.id(4234))).isEqualTo(0); + assertThat(sentinel.clientKill(KillArgs.Builder.typeSlave().id(4234))).isEqualTo(0); + assertThat(sentinel.clientKill(KillArgs.Builder.typeNormal().id(4234))).isEqualTo(0); + assertThat(sentinel.clientKill(KillArgs.Builder.typePubsub().id(4234))).isEqualTo(0); + + connection2.getStatefulConnection().close(); + } + + @Test + public void clientList() { + assertThat(sentinel.clientList().contains("addr=")).isTrue(); + } + + @Test + public void info() { + assertThat(sentinel.info().contains("redis_version")).isTrue(); + assertThat(sentinel.info("server").contains("redis_version")).isTrue(); + } 
+} diff --git a/src/test/java/io/lettuce/core/sentinel/SentinelSslIntegrationTests.java b/src/test/java/io/lettuce/core/sentinel/SentinelSslIntegrationTests.java new file mode 100644 index 0000000000..4ac52592d0 --- /dev/null +++ b/src/test/java/io/lettuce/core/sentinel/SentinelSslIntegrationTests.java @@ -0,0 +1,101 @@ +/* + * Copyright 2019-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.sentinel; + +import static io.lettuce.test.settings.TestSettings.sslPort; +import static org.assertj.core.api.Assertions.assertThat; +import static org.junit.jupiter.api.Assumptions.assumeTrue; + +import java.io.File; + +import javax.inject.Inject; + +import org.junit.jupiter.api.BeforeAll; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; + +import io.lettuce.RedisBug; +import io.lettuce.core.*; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.internal.HostAndPort; +import io.lettuce.core.resource.ClientResources; +import io.lettuce.core.resource.DnsResolver; +import io.lettuce.core.resource.MappingSocketAddressResolver; +import io.lettuce.core.sentinel.api.StatefulRedisSentinelConnection; +import io.lettuce.test.CanConnect; +import io.lettuce.test.LettuceExtension; +import io.lettuce.test.resource.FastShutdown; +import io.lettuce.test.settings.TestSettings; + +/** + * Integration tests for Sentinel usage. 
+ * + * @author Mark Paluch + */ +@ExtendWith(LettuceExtension.class) +class SentinelSslIntegrationTests extends TestSupport { + + private static final File TRUSTSTORE_FILE = new File("work/truststore.jks"); + + private final ClientResources clientResources; + + @Inject + SentinelSslIntegrationTests(ClientResources clientResources) { + this.clientResources = clientResources.mutate() + .socketAddressResolver(MappingSocketAddressResolver.create(DnsResolver.jvmDefault(), hostAndPort -> { + + return HostAndPort.of(hostAndPort.getHostText(), hostAndPort.getPort() + 443); + })).build(); + } + + @BeforeAll + static void beforeAll() { + assumeTrue(CanConnect.to(TestSettings.host(), sslPort()), "Assume that stunnel runs on port 6443"); + assertThat(TRUSTSTORE_FILE).exists(); + } + + @Test + void shouldConnectSentinelDirectly() { + + RedisURI redisURI = RedisURI.create("rediss://" + TestSettings.host() + ":" + RedisURI.DEFAULT_SENTINEL_PORT); + redisURI.setVerifyPeer(false); + + RedisClient client = RedisClient.create(clientResources); + StatefulRedisSentinelConnection connection = client.connectSentinel(redisURI); + + assertThat(connection.sync().getMasterAddrByName("mymaster")).isNotNull(); + + connection.close(); + FastShutdown.shutdown(client); + } + + @Test + void shouldConnectToMasterUsingSentinel() { + + RedisURI redisURI = RedisURI.create("rediss-sentinel://" + TestSettings.host() + ":" + RedisURI.DEFAULT_SENTINEL_PORT + + "?sentinelMasterId=mymaster"); + SslOptions options = SslOptions.builder().truststore(TRUSTSTORE_FILE).build(); + + RedisClient client = RedisClient.create(clientResources); + client.setOptions(ClientOptions.builder().sslOptions(options).build()); + StatefulRedisConnection connection = client.connect(redisURI); + + assertThat(connection.sync().ping()).isNotNull(); + + connection.close(); + FastShutdown.shutdown(client); + } +} diff --git a/src/test/java/io/lettuce/core/sentinel/SentinelTestSettings.java b/src/test/java/io/lettuce/core/sentinel/SentinelTestSettings.java new file mode 100644 index 0000000000..9e59ce93a3 --- /dev/null +++ b/src/test/java/io/lettuce/core/sentinel/SentinelTestSettings.java @@ -0,0 +1,33 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.sentinel; + +import io.lettuce.core.RedisURI; +import io.lettuce.core.TestSupport; +import io.lettuce.test.settings.TestSettings; + +/** + * @author Mark Paluch + */ +public abstract class SentinelTestSettings extends TestSupport { + + public static final RedisURI SENTINEL_URI = RedisURI.Builder.sentinel(TestSettings.host(), SentinelTestSettings.MASTER_ID) + .build(); + public static final String MASTER_ID = "mymaster"; + + private SentinelTestSettings() { + } +} diff --git a/src/test/java/io/lettuce/core/sentinel/reactive/SentinelReactiveCommandTest.java b/src/test/java/io/lettuce/core/sentinel/reactive/SentinelReactiveCommandTest.java new file mode 100644 index 0000000000..5566974f15 --- /dev/null +++ b/src/test/java/io/lettuce/core/sentinel/reactive/SentinelReactiveCommandTest.java @@ -0,0 +1,41 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.sentinel.reactive; + +import javax.inject.Inject; + +import io.lettuce.RedisBug; +import io.lettuce.core.RedisClient; +import io.lettuce.core.sentinel.SentinelCommandIntegrationTests; +import io.lettuce.core.sentinel.api.StatefulRedisSentinelConnection; +import io.lettuce.core.sentinel.api.sync.RedisSentinelCommands; +import io.lettuce.test.ReactiveSyncInvocationHandler; + +/** + * @author Mark Paluch + */ +public class SentinelReactiveCommandTest extends SentinelCommandIntegrationTests { + + @Inject + public SentinelReactiveCommandTest(RedisClient redisClient) { + super(redisClient); + } + + @Override + protected RedisSentinelCommands getSyncConnection(StatefulRedisSentinelConnection connection) { + return ReactiveSyncInvocationHandler.sync(connection); + } +} diff --git a/src/test/java/io/lettuce/core/sentinel/reactive/SentinelServerReactiveCommandTest.java b/src/test/java/io/lettuce/core/sentinel/reactive/SentinelServerReactiveCommandTest.java new file mode 100644 index 0000000000..804c61e091 --- /dev/null +++ b/src/test/java/io/lettuce/core/sentinel/reactive/SentinelServerReactiveCommandTest.java @@ -0,0 +1,41 @@ +/* + * Copyright 2016-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.sentinel.reactive; + +import javax.inject.Inject; + +import io.lettuce.core.RedisClient; +import io.lettuce.core.sentinel.SentinelServerCommandIntegrationTests; +import io.lettuce.core.sentinel.api.StatefulRedisSentinelConnection; +import io.lettuce.core.sentinel.api.sync.RedisSentinelCommands; +import io.lettuce.test.ReactiveSyncInvocationHandler; + +/** + * @author Mark Paluch + */ +public class SentinelServerReactiveCommandTest extends SentinelServerCommandIntegrationTests { + + @Inject + public SentinelServerReactiveCommandTest(RedisClient redisClient) { + super(redisClient); + } + + @Override + protected RedisSentinelCommands getSyncConnection( + StatefulRedisSentinelConnection connection) { + return ReactiveSyncInvocationHandler.sync(connection); + } +} diff --git a/src/test/java/io/lettuce/core/support/AsyncConnectionPoolSupportIntegrationTests.java b/src/test/java/io/lettuce/core/support/AsyncConnectionPoolSupportIntegrationTests.java new file mode 100644 index 0000000000..64777e10a3 --- /dev/null +++ b/src/test/java/io/lettuce/core/support/AsyncConnectionPoolSupportIntegrationTests.java @@ -0,0 +1,240 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */
+package io.lettuce.core.support;
+
+import static org.assertj.core.api.Assertions.assertThat;
+import static org.junit.Assert.fail;
+
+import java.lang.reflect.Proxy;
+import java.util.Set;
+import java.util.concurrent.CompletableFuture;
+
+import org.apache.commons.pool2.impl.GenericObjectPoolConfig;
+import org.junit.jupiter.api.AfterAll;
+import org.junit.jupiter.api.BeforeAll;
+import org.junit.jupiter.api.Test;
+import org.springframework.test.util.ReflectionTestUtils;
+
+import io.lettuce.core.*;
+import io.lettuce.core.api.StatefulRedisConnection;
+import io.lettuce.core.api.async.RedisAsyncCommands;
+import io.lettuce.core.api.reactive.RedisReactiveCommands;
+import io.lettuce.core.api.sync.RedisCommands;
+import io.lettuce.core.cluster.StatefulRedisClusterConnectionImpl;
+import io.lettuce.core.codec.StringCodec;
+import io.lettuce.test.TestFutures;
+import io.lettuce.test.resource.FastShutdown;
+import io.lettuce.test.resource.TestClientResources;
+import io.netty.channel.group.ChannelGroup;
+
+/**
+ * @author Mark Paluch
+ */
+class AsyncConnectionPoolSupportIntegrationTests extends TestSupport {
+
+    private static RedisClient client;
+    private static Set<?> channels;
+    private static RedisURI uri = RedisURI.Builder.redis(host, port).build();
+
+    @BeforeAll
+    static void setupClient() {
+
+        client = RedisClient.create(TestClientResources.create(), uri);
+        client.setOptions(ClientOptions.create());
+        channels = (ChannelGroup) ReflectionTestUtils.getField(client, "channels");
+    }
+
+    @AfterAll
+    static void afterClass() {
+        FastShutdown.shutdown(client);
+        FastShutdown.shutdown(client.getResources());
+    }
+
+    @Test
+    void asyncPoolShouldWorkWithWrappedConnections() {
+
+        BoundedAsyncPool<StatefulRedisConnection<String, String>> pool = AsyncConnectionPoolSupport.createBoundedObjectPool(
+                () -> client.connectAsync(StringCodec.ASCII, uri), BoundedPoolConfig.create());
+
+        borrowAndReturn(pool);
+        borrowAndClose(pool);
+        borrowAndCloseAsync(pool);
+
+        TestFutures.awaitOrTimeout(pool.release(TestFutures.getOrTimeout(pool.acquire()).sync().getStatefulConnection()));
+        TestFutures.awaitOrTimeout(pool.release(TestFutures.getOrTimeout(pool.acquire()).async().getStatefulConnection()));
+
+        assertThat(channels).hasSize(1);
+
+        pool.close();
+
+        assertThat(channels).isEmpty();
+    }
+
+    @Test
+    void asyncPoolShouldCloseConnectionsAboveMaxIdleSize() {
+
+        GenericObjectPoolConfig poolConfig = new GenericObjectPoolConfig();
+        poolConfig.setMaxIdle(2);
+
+        BoundedAsyncPool<StatefulRedisConnection<String, String>> pool = AsyncConnectionPoolSupport.createBoundedObjectPool(
+                () -> client.connectAsync(StringCodec.ASCII, uri), CommonsPool2ConfigConverter.bounded(poolConfig));
+
+        borrowAndReturn(pool);
+        borrowAndClose(pool);
+
+        StatefulRedisConnection<String, String> c1 = TestFutures.getOrTimeout(pool.acquire());
+        StatefulRedisConnection<String, String> c2 = TestFutures.getOrTimeout(pool.acquire());
+        StatefulRedisConnection<String, String> c3 = TestFutures.getOrTimeout(pool.acquire());
+
+        assertThat(channels).hasSize(3);
+
+        CompletableFuture.allOf(pool.release(c1), pool.release(c2), pool.release(c3)).join();
+
+        assertThat(channels).hasSize(2);
+
+        pool.close();
+
+        assertThat(channels).isEmpty();
+    }
+
+    @Test
+    void asyncPoolShouldWorkWithPlainConnections() {
+
+        AsyncPool<StatefulRedisConnection<String, String>> pool = AsyncConnectionPoolSupport.createBoundedObjectPool(
+                () -> client.connectAsync(StringCodec.ASCII, uri), BoundedPoolConfig.create(), false);
+
+        borrowAndReturn(pool);
+
+        StatefulRedisConnection<String, String> connection = TestFutures.getOrTimeout(pool.acquire());
+        assertThat(Proxy.isProxyClass(connection.getClass())).isFalse();
+        pool.release(connection);
+
+        pool.close();
+    }
+
+    @Test
+    void asyncPoolUsingWrappingShouldPropagateExceptionsCorrectly() {
+
+        AsyncPool<StatefulRedisConnection<String, String>> pool = AsyncConnectionPoolSupport.createBoundedObjectPool(
+                () -> client.connectAsync(StringCodec.ASCII, uri), BoundedPoolConfig.create());
+
+        StatefulRedisConnection<String, String> connection = TestFutures.getOrTimeout(pool.acquire());
+        RedisCommands<String, String> sync = connection.sync();
+        sync.set(key, value);
+
+        try {
+            sync.hgetall(key);
+            fail("Missing RedisCommandExecutionException");
+        } catch (RedisCommandExecutionException e) {
+            assertThat(e).hasMessageContaining("WRONGTYPE");
+        }
+
+        connection.close();
+        pool.close();
+    }
+
+    @Test
+    void wrappedConnectionShouldUseWrappers() {
+
+        AsyncPool<StatefulRedisConnection<String, String>> pool = AsyncConnectionPoolSupport.createBoundedObjectPool(
+                () -> client.connectAsync(StringCodec.ASCII, uri), BoundedPoolConfig.create());
+
+        StatefulRedisConnection<String, String> connection = TestFutures.getOrTimeout(pool.acquire());
+        RedisCommands<String, String> sync = connection.sync();
+
+        assertThat(connection).isInstanceOf(StatefulRedisConnection.class).isNotInstanceOf(
+                StatefulRedisClusterConnectionImpl.class);
+        assertThat(Proxy.isProxyClass(connection.getClass())).isTrue();
+
+        assertThat(sync).isInstanceOf(RedisCommands.class);
+        assertThat(connection.async()).isInstanceOf(RedisAsyncCommands.class).isNotInstanceOf(RedisAsyncCommandsImpl.class);
+        assertThat(connection.reactive()).isInstanceOf(RedisReactiveCommands.class).isNotInstanceOf(
+                RedisReactiveCommandsImpl.class);
+        assertThat(sync.getStatefulConnection()).isInstanceOf(StatefulRedisConnection.class)
+                .isNotInstanceOf(StatefulRedisConnectionImpl.class).isSameAs(connection);
+
+        connection.close();
+        pool.close();
+    }
+
+    @Test
+    void wrappedObjectClosedAfterReturn() {
+
+        AsyncPool<StatefulRedisConnection<String, String>> pool = AsyncConnectionPoolSupport.createBoundedObjectPool(
+                () -> client.connectAsync(StringCodec.ASCII, uri), BoundedPoolConfig.create(), true);
+
+        StatefulRedisConnection<String, String> connection = TestFutures.getOrTimeout(pool.acquire());
+        RedisCommands<String, String> sync = connection.sync();
+        sync.ping();
+
+        connection.close();
+
+        try {
+            connection.isMulti();
+            fail("Missing RedisException");
+        } catch (RedisException e) {
+            assertThat(e).hasMessageContaining("deallocated");
+        }
+
+        pool.close();
+    }
+
+    @Test
+    void shouldPropagateAsyncFlow() {
+
+        AsyncPool<StatefulRedisConnection<String, String>> pool = AsyncConnectionPoolSupport.createBoundedObjectPool(
+                () -> client.connectAsync(StringCodec.ASCII, uri), BoundedPoolConfig.create());
+
+        CompletableFuture<String> pingResponse = pool.acquire().thenCompose(c -> {
+            return c.async().ping().whenComplete((s, throwable) -> pool.release(c));
+        });
+
+        TestFutures.awaitOrTimeout(pingResponse);
+        assertThat(pingResponse).isCompletedWithValue("PONG");
+
+        pool.close();
+    }
+
+    private void borrowAndReturn(AsyncPool<StatefulRedisConnection<String, String>> pool) {
+
+        for (int i = 0; i < 10; i++) {
+            StatefulRedisConnection<String, String> connection = TestFutures.getOrTimeout(pool.acquire());
+            RedisCommands<String, String> sync = connection.sync();
+            sync.ping();
+            TestFutures.awaitOrTimeout(pool.release(connection));
+        }
+    }
+
+    private void borrowAndClose(AsyncPool<StatefulRedisConnection<String, String>> pool) {
+
+        for (int i = 0; i < 10; i++) {
+            StatefulRedisConnection<String, String> connection = TestFutures.getOrTimeout(pool.acquire());
+            RedisCommands<String, String> sync = connection.sync();
+            sync.ping();
+            connection.close();
+        }
+    }
+
+    private void borrowAndCloseAsync(AsyncPool<StatefulRedisConnection<String, String>> pool) {
+
+        for (int i = 0; i < 10; i++) {
+            StatefulRedisConnection<String, String> connection = TestFutures.getOrTimeout(pool.acquire());
+            RedisCommands<String, String> sync = connection.sync();
+            sync.ping();
TestFutures.getOrTimeout(connection.closeAsync()); + } + } +} diff --git a/src/test/java/io/lettuce/core/support/AsyncPoolWithValidationUnitTests.java b/src/test/java/io/lettuce/core/support/AsyncPoolWithValidationUnitTests.java new file mode 100644 index 0000000000..52d3707658 --- /dev/null +++ b/src/test/java/io/lettuce/core/support/AsyncPoolWithValidationUnitTests.java @@ -0,0 +1,352 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.support; + +import static io.lettuce.core.internal.Futures.failed; +import static org.assertj.core.api.Assertions.assertThat; +import static org.mockito.ArgumentMatchers.any; +import static org.mockito.ArgumentMatchers.anyString; +import static org.mockito.Mockito.never; +import static org.mockito.Mockito.times; +import static org.mockito.Mockito.verify; +import static org.mockito.Mockito.when; + +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.atomic.AtomicInteger; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; +import org.mockito.Mock; +import org.mockito.junit.jupiter.MockitoExtension; +import org.mockito.junit.jupiter.MockitoSettings; +import org.mockito.quality.Strictness; + +import io.lettuce.core.RedisException; +import io.lettuce.test.TestFutures; + +/** + * @author Mark Paluch + */ +@ExtendWith(MockitoExtension.class) +@MockitoSettings(strictness = Strictness.LENIENT) +class AsyncPoolWithValidationUnitTests { + + @Mock + AsyncObjectFactory factory; + + @BeforeEach + void before() { + when(factory.destroy(any())).thenReturn(CompletableFuture.completedFuture(null)); + } + + private void mockCreation() { + + AtomicInteger counter = new AtomicInteger(); + when(factory.create()).then(invocation -> CompletableFuture.completedFuture("" + counter.incrementAndGet())); + } + + @Test + void objectCreationShouldFail() { + + when(factory.create()).thenReturn(failed(new RedisException("foo"))); + + BoundedAsyncPool pool = new BoundedAsyncPool<>(factory, BoundedPoolConfig.create()); + + CompletableFuture acquire = pool.acquire(); + + assertThat(pool.getIdle()).isZero(); + assertThat(pool.getObjectCount()).isZero(); + assertThat(pool.getCreationInProgress()).isZero(); + + assertThat(acquire).isCompletedExceptionally(); + } + + @Test + void objectCreationFinishesAfterShutdown() { + + CompletableFuture progress = new CompletableFuture<>(); + + when(factory.create()).thenReturn(progress); + when(factory.destroy(any())).thenReturn(CompletableFuture.completedFuture(null)); + + BoundedAsyncPool pool = new BoundedAsyncPool<>(factory, BoundedPoolConfig.create()); + + CompletableFuture acquire = pool.acquire(); + + assertThat(pool.getIdle()).isZero(); + assertThat(pool.getObjectCount()).isZero(); + assertThat(pool.getCreationInProgress()).isEqualTo(1); + + pool.close(); + + assertThat(acquire.isDone()).isFalse(); + assertThat(pool.getIdle()).isZero(); + 
assertThat(pool.getObjectCount()).isZero(); + assertThat(pool.getCreationInProgress()).isEqualTo(1); + verify(factory, never()).destroy("foo"); + + progress.complete("foo"); + + assertThat(pool.getIdle()).isZero(); + assertThat(pool.getObjectCount()).isZero(); + assertThat(pool.getCreationInProgress()).isZero(); + + verify(factory).destroy("foo"); + } + + @Test + void objectCreationCanceled() { + + CompletableFuture progress = new CompletableFuture<>(); + + when(factory.create()).thenReturn(progress); + + BoundedAsyncPool pool = new BoundedAsyncPool<>(factory, BoundedPoolConfig.create()); + + CompletableFuture acquire = pool.acquire(); + + acquire.cancel(true); + progress.complete("foo"); + + assertThat(pool.getIdle()).isEqualTo(1); + assertThat(pool.getObjectCount()).isEqualTo(1); + assertThat(pool.getCreationInProgress()).isZero(); + + verify(factory, never()).destroy(anyString()); + } + + @Test + void shouldCreateObjectWithTestOnBorrowFailExceptionally() { + + mockCreation(); + when(factory.validate(any())).thenReturn(failed(new RedisException("foo"))); + + BoundedAsyncPool pool = new BoundedAsyncPool<>(factory, BoundedPoolConfig.builder().testOnCreate().build()); + + CompletableFuture acquire = pool.acquire(); + + assertThat(pool.getIdle()).isZero(); + assertThat(pool.getObjectCount()).isZero(); + assertThat(pool.getCreationInProgress()).isZero(); + + assertThat(acquire).isCompletedExceptionally(); + } + + @Test + void shouldCreateObjectWithTestOnBorrowSuccess() { + + mockCreation(); + when(factory.validate(any())).thenReturn(CompletableFuture.completedFuture(true)); + + BoundedAsyncPool pool = new BoundedAsyncPool<>(factory, BoundedPoolConfig.builder().testOnCreate().build()); + + CompletableFuture acquire = pool.acquire(); + + assertThat(pool.getIdle()).isZero(); + assertThat(pool.getObjectCount()).isEqualTo(1); + assertThat(pool.getCreationInProgress()).isZero(); + + assertThat(acquire).isCompletedWithValue("1"); + } + + @Test + void shouldCreateObjectWithTestOnBorrowFailState() { + + mockCreation(); + when(factory.validate(any())).thenReturn(CompletableFuture.completedFuture(false)); + + BoundedAsyncPool pool = new BoundedAsyncPool<>(factory, BoundedPoolConfig.builder().testOnCreate().build()); + + CompletableFuture acquire = pool.acquire(); + + assertThat(pool.getIdle()).isZero(); + assertThat(pool.getObjectCount()).isZero(); + assertThat(pool.getCreationInProgress()).isZero(); + + assertThat(acquire).isCompletedExceptionally(); + } + + @Test + void shouldCreateFailedObjectWithTestOnBorrowFail() { + + when(factory.create()).thenReturn(failed(new RedisException("foo"))); + + BoundedAsyncPool pool = new BoundedAsyncPool<>(factory, BoundedPoolConfig.builder().testOnCreate().build()); + + CompletableFuture acquire = pool.acquire(); + + assertThat(pool.getIdle()).isZero(); + assertThat(pool.getObjectCount()).isZero(); + assertThat(pool.getCreationInProgress()).isZero(); + + assertThat(acquire).isCompletedExceptionally(); + } + + @Test + void shouldTestObjectOnBorrowSuccessfully() { + + mockCreation(); + when(factory.validate(any())).thenReturn(CompletableFuture.completedFuture(true)); + + BoundedAsyncPool pool = new BoundedAsyncPool<>(factory, BoundedPoolConfig.builder().testOnAcquire().build()); + + pool.release(TestFutures.getOrTimeout(pool.acquire())); + + assertThat(pool.getIdle()).isEqualTo(1); + assertThat(pool.getObjectCount()).isEqualTo(1); + assertThat(pool.getCreationInProgress()).isZero(); + + CompletableFuture acquire = pool.acquire(); + + 
assertThat(acquire).isCompletedWithValue("1"); + } + + @Test + void shouldTestObjectOnBorrowFailState() { + + mockCreation(); + when(factory.validate(any())).thenReturn(failed(new RedisException("foo"))); + + BoundedAsyncPool pool = new BoundedAsyncPool<>(factory, BoundedPoolConfig.builder().testOnAcquire().build()); + + pool.release(TestFutures.getOrTimeout(pool.acquire())); + + assertThat(pool.getIdle()).isEqualTo(1); + assertThat(pool.getObjectCount()).isEqualTo(1); + assertThat(pool.getCreationInProgress()).isZero(); + + CompletableFuture acquire = pool.acquire(); + + assertThat(acquire).isCompletedWithValue("2"); + + assertThat(pool.getIdle()).isEqualTo(0); + assertThat(pool.getObjectCount()).isEqualTo(1); + assertThat(pool.getCreationInProgress()).isZero(); + } + + @Test + void shouldTestObjectOnBorrowFailExceptionally() { + + mockCreation(); + when(factory.validate(any())).thenReturn(failed(new RedisException("foo"))); + + BoundedAsyncPool pool = new BoundedAsyncPool<>(factory, BoundedPoolConfig.builder().testOnAcquire().build()); + + pool.release(TestFutures.getOrTimeout(pool.acquire())); + + assertThat(pool.getIdle()).isEqualTo(1); + assertThat(pool.getObjectCount()).isEqualTo(1); + assertThat(pool.getCreationInProgress()).isZero(); + + CompletableFuture acquire = pool.acquire(); + + assertThat(acquire).isCompletedWithValue("2"); + + assertThat(pool.getIdle()).isEqualTo(0); + assertThat(pool.getObjectCount()).isEqualTo(1); + assertThat(pool.getCreationInProgress()).isZero(); + } + + @Test + void shouldTestObjectOnReturnSuccessfully() { + + mockCreation(); + when(factory.validate(any())).thenReturn(CompletableFuture.completedFuture(true)); + + BoundedAsyncPool pool = new BoundedAsyncPool<>(factory, BoundedPoolConfig.builder().testOnRelease().build()); + + TestFutures.awaitOrTimeout(pool.release(TestFutures.getOrTimeout(pool.acquire()))); + + assertThat(pool.getIdle()).isEqualTo(1); + assertThat(pool.getObjectCount()).isEqualTo(1); + assertThat(pool.getCreationInProgress()).isZero(); + + CompletableFuture acquire = pool.acquire(); + + assertThat(acquire).isCompletedWithValue("1"); + } + + @Test + void shouldTestObjectOnReturnFailState() { + + mockCreation(); + when(factory.validate(any())).thenReturn(failed(new RedisException("foo"))); + + BoundedAsyncPool pool = new BoundedAsyncPool<>(factory, BoundedPoolConfig.builder().testOnRelease().build()); + + CompletableFuture release = pool.release(TestFutures.getOrTimeout(pool.acquire())); + + assertThat(pool.getIdle()).isZero(); + assertThat(pool.getObjectCount()).isZero(); + assertThat(pool.getCreationInProgress()).isZero(); + + assertThat(release).isCompletedWithValue(null); + } + + @Test + void shouldTestObjectOnReturnFailExceptionally() { + + mockCreation(); + when(factory.validate(any())).thenReturn(failed(new RedisException("foo"))); + + BoundedAsyncPool pool = new BoundedAsyncPool<>(factory, BoundedPoolConfig.builder().testOnRelease().build()); + + CompletableFuture release = pool.release(TestFutures.getOrTimeout(pool.acquire())); + + assertThat(pool.getIdle()).isZero(); + assertThat(pool.getObjectCount()).isZero(); + assertThat(pool.getCreationInProgress()).isZero(); + + assertThat(release).isCompletedWithValue(null); + } + + @Test + void shouldRefillIdleObjects() { + + mockCreation(); + + BoundedAsyncPool pool = new BoundedAsyncPool<>(factory, BoundedPoolConfig.builder().maxTotal(20).minIdle(5) + .build()); + + assertThat(pool.getIdle()).isEqualTo(5); + + pool.acquire(); + + assertThat(pool.getIdle()).isEqualTo(5); + 
assertThat(pool.getObjectCount()).isEqualTo(6); + + verify(factory, times(6)).create(); + } + + @Test + void shouldDisposeIdleObjects() { + + mockCreation(); + + BoundedAsyncPool pool = new BoundedAsyncPool<>(factory, BoundedPoolConfig.builder().maxTotal(20).maxIdle(5) + .minIdle(5).build()); + + assertThat(pool.getIdle()).isEqualTo(5); + + String object = TestFutures.getOrTimeout(pool.acquire()); + pool.release(object); + + assertThat(pool.getIdle()).isEqualTo(5); + + verify(factory).destroy(object); + } +} diff --git a/src/test/java/io/lettuce/core/support/BoundedAsyncPoolUnitTests.java b/src/test/java/io/lettuce/core/support/BoundedAsyncPoolUnitTests.java new file mode 100644 index 0000000000..7c6d91b163 --- /dev/null +++ b/src/test/java/io/lettuce/core/support/BoundedAsyncPoolUnitTests.java @@ -0,0 +1,293 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.support; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.util.ArrayList; +import java.util.List; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.atomic.AtomicInteger; + +import org.junit.jupiter.api.Test; + +import io.lettuce.test.TestFutures; + +/** + * @author Mark Paluch + */ +class BoundedAsyncPoolUnitTests { + + private AtomicInteger counter = new AtomicInteger(); + private List destroyed = new ArrayList<>(); + + private AsyncObjectFactory STRING_OBJECT_FACTORY = new AsyncObjectFactory() { + @Override + public CompletableFuture create() { + return CompletableFuture.completedFuture(counter.incrementAndGet() + ""); + } + + @Override + public CompletableFuture destroy(String object) { + destroyed.add(object); + return CompletableFuture.completedFuture(null); + } + + @Override + public CompletableFuture validate(String object) { + return CompletableFuture.completedFuture(true); + } + }; + + @Test + void shouldCreateObject() { + + BoundedAsyncPool pool = new BoundedAsyncPool<>(STRING_OBJECT_FACTORY, BoundedPoolConfig.create()); + + String object = TestFutures.getOrTimeout(pool.acquire()); + + assertThat(pool.getIdle()).isEqualTo(0); + assertThat(object).isEqualTo("1"); + } + + @Test + void shouldCreateMinIdleObject() { + + BoundedAsyncPool pool = new BoundedAsyncPool<>(STRING_OBJECT_FACTORY, BoundedPoolConfig.builder().minIdle(2) + .build()); + + assertThat(pool.getIdle()).isEqualTo(2); + assertThat(pool.getObjectCount()).isEqualTo(2); + } + + @Test + void shouldCreateMaintainMinIdleObject() { + + BoundedAsyncPool pool = new BoundedAsyncPool<>(STRING_OBJECT_FACTORY, BoundedPoolConfig.builder().minIdle(2) + .build()); + + TestFutures.awaitOrTimeout(pool.acquire()); + + assertThat(pool.getIdle()).isEqualTo(2); + assertThat(pool.getObjectCount()).isEqualTo(3); + } + + @Test + void shouldCreateMaintainMinMaxIdleObject() { + + BoundedAsyncPool pool = new BoundedAsyncPool<>(STRING_OBJECT_FACTORY, BoundedPoolConfig.builder().minIdle(2) + .maxTotal(2).build()); + + 
TestFutures.awaitOrTimeout(pool.acquire()); + + assertThat(pool.getIdle()).isEqualTo(1); + assertThat(pool.getObjectCount()).isEqualTo(2); + } + + @Test + void shouldReturnObject() { + + BoundedAsyncPool pool = new BoundedAsyncPool<>(STRING_OBJECT_FACTORY, BoundedPoolConfig.create()); + + String object = TestFutures.getOrTimeout(pool.acquire()); + assertThat(pool.getObjectCount()).isEqualTo(1); + pool.release(object); + + assertThat(pool.getIdle()).isEqualTo(1); + } + + @Test + void shouldReuseObjects() { + + BoundedAsyncPool pool = new BoundedAsyncPool<>(STRING_OBJECT_FACTORY, BoundedPoolConfig.create()); + + pool.release(TestFutures.getOrTimeout(pool.acquire())); + + assertThat(TestFutures.getOrTimeout(pool.acquire())).isEqualTo("1"); + assertThat(pool.getIdle()).isEqualTo(0); + } + + @Test + void shouldDestroyIdle() { + + BoundedAsyncPool pool = new BoundedAsyncPool<>(STRING_OBJECT_FACTORY, BoundedPoolConfig.builder().maxIdle(2) + .maxTotal(5).build()); + + List objects = new ArrayList<>(); + for (int i = 0; i < 3; i++) { + objects.add(TestFutures.getOrTimeout(pool.acquire())); + } + + for (int i = 0; i < 2; i++) { + pool.release(objects.get(i)); + } + + assertThat(pool.getIdle()).isEqualTo(2); + + pool.release(objects.get(2)); + + assertThat(pool.getIdle()).isEqualTo(2); + assertThat(pool.getObjectCount()).isEqualTo(2); + assertThat(destroyed).containsOnly("3"); + } + + @Test + void shouldExhaustPool() { + + BoundedAsyncPool pool = new BoundedAsyncPool<>(STRING_OBJECT_FACTORY, BoundedPoolConfig.builder().maxTotal(4) + .build()); + + String object1 = TestFutures.getOrTimeout(pool.acquire()); + String object2 = TestFutures.getOrTimeout(pool.acquire()); + String object3 = TestFutures.getOrTimeout(pool.acquire()); + String object4 = TestFutures.getOrTimeout(pool.acquire()); + + assertThat(pool.getIdle()).isZero(); + assertThat(pool.getObjectCount()).isEqualTo(4); + + assertThat(pool.acquire()).isCompletedExceptionally(); + + assertThat(pool.getIdle()).isZero(); + assertThat(pool.getObjectCount()).isEqualTo(4); + + pool.release(object1); + pool.release(object2); + pool.release(object3); + pool.release(object4); + + assertThat(pool.getIdle()).isEqualTo(4); + assertThat(pool.getObjectCount()).isEqualTo(4); + } + + @Test + void shouldClearPool() { + + BoundedAsyncPool pool = new BoundedAsyncPool<>(STRING_OBJECT_FACTORY, BoundedPoolConfig.builder().maxTotal(4) + .build()); + + for (int i = 0; i < 20; i++) { + + String object1 = TestFutures.getOrTimeout(pool.acquire()); + String object2 = TestFutures.getOrTimeout(pool.acquire()); + String object3 = TestFutures.getOrTimeout(pool.acquire()); + String object4 = TestFutures.getOrTimeout(pool.acquire()); + + assertThat(pool.acquire()).isCompletedExceptionally(); + + pool.release(object1); + pool.release(object2); + pool.release(object3); + pool.release(object4); + + pool.clear(); + + assertThat(pool.getObjectCount()).isZero(); + assertThat(pool.getIdle()).isZero(); + } + } + + @Test + void shouldExhaustPoolConcurrent() { + + List> progress = new ArrayList<>(); + AsyncObjectFactory IN_PROGRESS = new AsyncObjectFactory() { + @Override + public CompletableFuture create() { + + CompletableFuture future = new CompletableFuture<>(); + progress.add(future); + + return future; + } + + @Override + public CompletableFuture destroy(String object) { + destroyed.add(object); + return CompletableFuture.completedFuture(null); + } + + @Override + public CompletableFuture validate(String object) { + return CompletableFuture.completedFuture(true); + } + }; + + 
BoundedAsyncPool pool = new BoundedAsyncPool<>(IN_PROGRESS, BoundedPoolConfig.builder().maxTotal(4).build()); + + CompletableFuture object1 = pool.acquire(); + CompletableFuture object2 = pool.acquire(); + CompletableFuture object3 = pool.acquire(); + CompletableFuture object4 = pool.acquire(); + CompletableFuture object5 = pool.acquire(); + + assertThat(pool.getIdle()).isZero(); + assertThat(pool.getObjectCount()).isZero(); + assertThat(pool.getCreationInProgress()).isEqualTo(4); + + assertThat(object5).isCompletedExceptionally(); + + progress.forEach(it -> it.complete("foo")); + + assertThat(pool.getIdle()).isZero(); + assertThat(pool.getObjectCount()).isEqualTo(4); + assertThat(pool.getCreationInProgress()).isZero(); + } + + @Test + void shouldConcurrentlyFail() { + + List> progress = new ArrayList<>(); + AsyncObjectFactory IN_PROGRESS = new AsyncObjectFactory() { + @Override + public CompletableFuture create() { + + CompletableFuture future = new CompletableFuture<>(); + progress.add(future); + + return future; + } + + @Override + public CompletableFuture destroy(String object) { + destroyed.add(object); + return CompletableFuture.completedFuture(null); + } + + @Override + public CompletableFuture validate(String object) { + return CompletableFuture.completedFuture(true); + } + }; + + BoundedAsyncPool pool = new BoundedAsyncPool<>(IN_PROGRESS, BoundedPoolConfig.builder().maxTotal(4).build()); + + CompletableFuture object1 = pool.acquire(); + CompletableFuture object2 = pool.acquire(); + CompletableFuture object3 = pool.acquire(); + CompletableFuture object4 = pool.acquire(); + + progress.forEach(it -> it.completeExceptionally(new IllegalStateException())); + + assertThat(object1).isCompletedExceptionally(); + assertThat(object2).isCompletedExceptionally(); + assertThat(object3).isCompletedExceptionally(); + assertThat(object4).isCompletedExceptionally(); + + assertThat(pool.getIdle()).isZero(); + assertThat(pool.getObjectCount()).isZero(); + assertThat(pool.getCreationInProgress()).isZero(); + } +} diff --git a/src/test/java/io/lettuce/core/support/CdiIntegrationTests.java b/src/test/java/io/lettuce/core/support/CdiIntegrationTests.java new file mode 100644 index 0000000000..1ec9d0b22d --- /dev/null +++ b/src/test/java/io/lettuce/core/support/CdiIntegrationTests.java @@ -0,0 +1,101 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.support; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.mockito.Mockito.mock; + +import javax.enterprise.inject.Produces; +import javax.enterprise.inject.se.SeContainer; +import javax.enterprise.inject.se.SeContainerInitializer; + +import org.junit.jupiter.api.AfterAll; +import org.junit.jupiter.api.BeforeAll; +import org.junit.jupiter.api.Test; + +import io.lettuce.core.AbstractRedisClientTest; +import io.lettuce.core.RedisConnectionStateListener; +import io.lettuce.core.RedisURI; +import io.lettuce.core.resource.ClientResources; +import io.lettuce.core.resource.DefaultClientResources; +import io.lettuce.test.resource.TestClientResources; + +/** + * @author Mark Paluch + * @since 3.0 + */ +class CdiIntegrationTests { + + private static SeContainer container; + + @BeforeAll + static void setUp() { + + container = SeContainerInitializer.newInstance() // + .disableDiscovery() // + .addPackages(CdiIntegrationTests.class) // + .initialize(); + } + + @AfterAll + static void afterClass() { + container.close(); + } + + @Produces + RedisURI redisURI() { + return RedisURI.Builder.redis(AbstractRedisClientTest.host, AbstractRedisClientTest.port).build(); + } + + @Produces + ClientResources clientResources() { + return TestClientResources.get(); + } + + @Produces + @PersonDB + ClientResources personClientResources() { + return DefaultClientResources.create(); + } + + @PersonDB + @Produces + RedisURI redisURIQualified() { + return RedisURI.Builder.redis(AbstractRedisClientTest.host, AbstractRedisClientTest.port + 1).build(); + } + + @Test + void testInjection() { + + InjectedClient injectedClient = container.select(InjectedClient.class).get(); + assertThat(injectedClient.redisClient).isNotNull(); + assertThat(injectedClient.redisClusterClient).isNotNull(); + + assertThat(injectedClient.qualifiedRedisClient).isNotNull(); + assertThat(injectedClient.qualifiedRedisClusterClient).isNotNull(); + + RedisConnectionStateListener mock = mock(RedisConnectionStateListener.class); + + // do some interaction to force the container a creation of the repositories. + injectedClient.redisClient.addListener(mock); + injectedClient.redisClusterClient.addListener(mock); + + injectedClient.qualifiedRedisClient.addListener(mock); + injectedClient.qualifiedRedisClusterClient.addListener(mock); + + injectedClient.pingRedis(); + } +} diff --git a/src/test/java/io/lettuce/core/support/CommonsPool2ConfigConverterUnitTests.java b/src/test/java/io/lettuce/core/support/CommonsPool2ConfigConverterUnitTests.java new file mode 100644 index 0000000000..cc91c9e28a --- /dev/null +++ b/src/test/java/io/lettuce/core/support/CommonsPool2ConfigConverterUnitTests.java @@ -0,0 +1,99 @@ +/* + * Copyright 2019-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */
+package io.lettuce.core.support;
+
+import static org.assertj.core.api.Assertions.assertThat;
+
+import java.util.function.BiConsumer;
+import java.util.function.Function;
+
+import org.apache.commons.pool2.impl.BaseObjectPoolConfig;
+import org.apache.commons.pool2.impl.GenericObjectPoolConfig;
+import org.junit.jupiter.api.Test;
+
+/**
+ * Unit tests for {@link CommonsPool2ConfigConverter}.
+ *
+ * @author Mark Paluch
+ */
+class CommonsPool2ConfigConverterUnitTests {
+
+    @Test
+    void shouldAdaptConfiguration() {
+
+        GenericObjectPoolConfig config = new GenericObjectPoolConfig<>();
+        config.setMinIdle(2);
+        config.setMaxIdle(12);
+        config.setMaxTotal(13);
+        config.setTestOnBorrow(true);
+        config.setTestOnReturn(true);
+        config.setTestOnCreate(true);
+
+        BoundedPoolConfig result = CommonsPool2ConfigConverter.bounded(config);
+
+        assertThat(result.getMinIdle()).isEqualTo(2);
+        assertThat(result.getMaxIdle()).isEqualTo(12);
+        assertThat(result.getMaxTotal()).isEqualTo(13);
+        assertThat(result.isTestOnAcquire()).isTrue();
+        assertThat(result.isTestOnCreate()).isTrue();
+        assertThat(result.isTestOnRelease()).isTrue();
+    }
+
+    @Test
+    void shouldConvertNegativeValuesToMaxSize() {
+
+        GenericObjectPoolConfig config = new GenericObjectPoolConfig<>();
+        config.setMaxIdle(-1);
+        config.setMaxTotal(-1);
+
+        BoundedPoolConfig result = CommonsPool2ConfigConverter.bounded(config);
+
+        assertThat(result.getMaxIdle()).isEqualTo(Integer.MAX_VALUE);
+        assertThat(result.getMaxTotal()).isEqualTo(Integer.MAX_VALUE);
+    }
+
+    @Test
+    void shouldAdaptTestOnAcquire() {
+
+        booleanTester(true, BaseObjectPoolConfig::setTestOnBorrow, BasePoolConfig::isTestOnAcquire);
+        booleanTester(false, BaseObjectPoolConfig::setTestOnBorrow, BasePoolConfig::isTestOnAcquire);
+    }
+
+    @Test
+    void shouldAdaptTestOnCreate() {
+
+        booleanTester(true, BaseObjectPoolConfig::setTestOnCreate, BasePoolConfig::isTestOnCreate);
+        booleanTester(false, BaseObjectPoolConfig::setTestOnCreate, BasePoolConfig::isTestOnCreate);
+    }
+
+    @Test
+    void shouldAdaptTestOnRelease() {
+
+        booleanTester(true, BaseObjectPoolConfig::setTestOnReturn, BasePoolConfig::isTestOnRelease);
+        booleanTester(false, BaseObjectPoolConfig::setTestOnReturn, BasePoolConfig::isTestOnRelease);
+    }
+
+    static void booleanTester(boolean value, BiConsumer<GenericObjectPoolConfig<?>, Boolean> commonsConfigurer,
+            Function<BoundedPoolConfig, Boolean> targetExtractor) {
+
+        GenericObjectPoolConfig config = new GenericObjectPoolConfig<>();
+
+        commonsConfigurer.accept(config, value);
+        BoundedPoolConfig result = CommonsPool2ConfigConverter.bounded(config);
+
+        assertThat(targetExtractor.apply(result)).isEqualTo(value);
+    }
+}
diff --git a/src/test/java/io/lettuce/core/support/ConnectionPoolSupportIntegrationTests.java b/src/test/java/io/lettuce/core/support/ConnectionPoolSupportIntegrationTests.java
new file mode 100644
index 0000000000..20cf339b50
--- /dev/null
+++ b/src/test/java/io/lettuce/core/support/ConnectionPoolSupportIntegrationTests.java
@@ -0,0 +1,406 @@
+/*
+ * Copyright 2011-2020 the original author or authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * https://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.support; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.junit.Assert.fail; + +import java.lang.reflect.Proxy; +import java.util.Set; + +import org.apache.commons.pool2.ObjectPool; +import org.apache.commons.pool2.impl.GenericObjectPool; +import org.apache.commons.pool2.impl.GenericObjectPoolConfig; +import org.apache.commons.pool2.impl.SoftReferenceObjectPool; +import org.junit.jupiter.api.AfterAll; +import org.junit.jupiter.api.BeforeAll; +import org.junit.jupiter.api.Test; +import org.springframework.test.util.ReflectionTestUtils; + +import io.lettuce.core.*; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.async.RedisAsyncCommands; +import io.lettuce.core.api.reactive.RedisReactiveCommands; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.cluster.RedisAdvancedClusterAsyncCommandsImpl; +import io.lettuce.core.cluster.RedisAdvancedClusterReactiveCommandsImpl; +import io.lettuce.core.cluster.RedisClusterClient; +import io.lettuce.core.cluster.StatefulRedisClusterConnectionImpl; +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; +import io.lettuce.core.cluster.api.async.RedisAdvancedClusterAsyncCommands; +import io.lettuce.core.cluster.api.reactive.RedisAdvancedClusterReactiveCommands; +import io.lettuce.core.cluster.api.sync.RedisAdvancedClusterCommands; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.masterslave.MasterSlave; +import io.lettuce.core.masterslave.StatefulRedisMasterSlaveConnection; +import io.lettuce.test.Wait; +import io.lettuce.test.resource.FastShutdown; +import io.lettuce.test.resource.TestClientResources; +import io.lettuce.test.settings.TestSettings; +import io.netty.channel.group.ChannelGroup; + +/** + * @author Mark Paluch + */ +class ConnectionPoolSupportIntegrationTests extends TestSupport { + + private static RedisClient client; + private static Set channels; + + @BeforeAll + static void setupClient() { + client = RedisClient.create(TestClientResources.create(), RedisURI.Builder.redis(host, port).build()); + client.setOptions(ClientOptions.create()); + channels = (ChannelGroup) ReflectionTestUtils.getField(client, "channels"); + } + + @AfterAll + static void afterClass() { + FastShutdown.shutdown(client); + FastShutdown.shutdown(client.getResources()); + } + + @Test + void genericPoolShouldWorkWithWrappedConnections() throws Exception { + + GenericObjectPool> pool = ConnectionPoolSupport.createGenericObjectPool( + () -> client.connect(), new GenericObjectPoolConfig<>()); + + borrowAndReturn(pool); + borrowAndClose(pool); + borrowAndCloseTryWithResources(pool); + + pool.returnObject(pool.borrowObject().sync().getStatefulConnection()); + pool.returnObject(pool.borrowObject().async().getStatefulConnection()); + + assertThat(channels).hasSize(1); + + pool.close(); + + Wait.untilTrue(channels::isEmpty).waitOrTimeout(); + + assertThat(channels).isEmpty(); + } + + @Test + void genericPoolShouldCloseConnectionsAboveMaxIdleSize() throws Exception { + + GenericObjectPoolConfig> poolConfig = new GenericObjectPoolConfig<>(); + poolConfig.setMaxIdle(2); + + GenericObjectPool> pool = ConnectionPoolSupport.createGenericObjectPool( + () -> client.connect(), poolConfig); + + borrowAndReturn(pool); + borrowAndClose(pool); + borrowAndCloseTryWithResources(pool); + + StatefulRedisConnection c1 = pool.borrowObject(); + 
StatefulRedisConnection c2 = pool.borrowObject(); + StatefulRedisConnection c3 = pool.borrowObject(); + + assertThat(channels).hasSize(3); + + pool.returnObject(c1); + pool.returnObject(c2); + pool.returnObject(c3); + + assertThat(channels).hasSize(2); + + pool.close(); + + Wait.untilTrue(channels::isEmpty).waitOrTimeout(); + + assertThat(channels).isEmpty(); + } + + @Test + void genericPoolShouldWorkWithPlainConnections() throws Exception { + + GenericObjectPool> pool = ConnectionPoolSupport.createGenericObjectPool( + () -> client.connect(), new GenericObjectPoolConfig<>(), false); + + borrowAndReturn(pool); + + StatefulRedisConnection connection = pool.borrowObject(); + assertThat(Proxy.isProxyClass(connection.getClass())).isFalse(); + pool.returnObject(connection); + + pool.close(); + } + + @Test + void softReferencePoolShouldWorkWithPlainConnections() throws Exception { + + SoftReferenceObjectPool> pool = ConnectionPoolSupport + .createSoftReferenceObjectPool(() -> client.connect(), false); + + borrowAndReturn(pool); + + StatefulRedisConnection connection = pool.borrowObject(); + assertThat(Proxy.isProxyClass(connection.getClass())).isFalse(); + pool.returnObject(connection); + + connection.close(); + pool.close(); + } + + @Test + void genericPoolUsingWrappingShouldPropagateExceptionsCorrectly() throws Exception { + + GenericObjectPool> pool = ConnectionPoolSupport.createGenericObjectPool( + () -> client.connect(), new GenericObjectPoolConfig<>()); + + StatefulRedisConnection connection = pool.borrowObject(); + RedisCommands sync = connection.sync(); + sync.set(key, value); + + try { + sync.hgetall(key); + fail("Missing RedisCommandExecutionException"); + } catch (RedisCommandExecutionException e) { + assertThat(e).hasMessageContaining("WRONGTYPE"); + } + + connection.close(); + pool.close(); + } + + @Test + void wrappedConnectionShouldUseWrappers() throws Exception { + + GenericObjectPool> pool = ConnectionPoolSupport.createGenericObjectPool( + () -> client.connect(), new GenericObjectPoolConfig<>()); + + StatefulRedisConnection connection = pool.borrowObject(); + RedisCommands sync = connection.sync(); + + assertThat(connection).isInstanceOf(StatefulRedisConnection.class).isNotInstanceOf( + StatefulRedisClusterConnectionImpl.class); + assertThat(Proxy.isProxyClass(connection.getClass())).isTrue(); + + assertThat(sync).isInstanceOf(RedisCommands.class); + assertThat(connection.async()).isInstanceOf(RedisAsyncCommands.class).isNotInstanceOf(RedisAsyncCommandsImpl.class); + assertThat(connection.reactive()).isInstanceOf(RedisReactiveCommands.class).isNotInstanceOf( + RedisReactiveCommandsImpl.class); + assertThat(sync.getStatefulConnection()).isInstanceOf(StatefulRedisConnection.class) + .isNotInstanceOf(StatefulRedisConnectionImpl.class).isSameAs(connection); + + connection.close(); + pool.close(); + } + + @Test + void wrappedMasterSlaveConnectionShouldUseWrappers() throws Exception { + + GenericObjectPool> pool = ConnectionPoolSupport + .createGenericObjectPool(() -> MasterSlave.connect(client, new StringCodec(), RedisURI.create(host, port)), + new GenericObjectPoolConfig<>()); + + StatefulRedisMasterSlaveConnection connection = pool.borrowObject(); + RedisCommands sync = connection.sync(); + + assertThat(connection).isInstanceOf(StatefulRedisMasterSlaveConnection.class); + assertThat(Proxy.isProxyClass(connection.getClass())).isTrue(); + + assertThat(sync).isInstanceOf(RedisCommands.class); + 
assertThat(connection.async()).isInstanceOf(RedisAsyncCommands.class).isNotInstanceOf(RedisAsyncCommandsImpl.class); + assertThat(connection.reactive()).isInstanceOf(RedisReactiveCommands.class).isNotInstanceOf( + RedisReactiveCommandsImpl.class); + assertThat(sync.getStatefulConnection()).isInstanceOf(StatefulRedisConnection.class) + .isNotInstanceOf(StatefulRedisConnectionImpl.class).isSameAs(connection); + + connection.close(); + pool.close(); + } + + @Test + void wrappedClusterConnectionShouldUseWrappers() throws Exception { + + RedisClusterClient redisClusterClient = RedisClusterClient.create(TestClientResources.get(), + RedisURI.create(TestSettings.host(), 7379)); + + GenericObjectPool> pool = ConnectionPoolSupport.createGenericObjectPool( + redisClusterClient::connect, new GenericObjectPoolConfig<>()); + + StatefulRedisClusterConnection connection = pool.borrowObject(); + RedisAdvancedClusterCommands sync = connection.sync(); + + assertThat(connection).isInstanceOf(StatefulRedisClusterConnection.class).isNotInstanceOf( + StatefulRedisClusterConnectionImpl.class); + assertThat(Proxy.isProxyClass(connection.getClass())).isTrue(); + + assertThat(sync).isInstanceOf(RedisAdvancedClusterCommands.class); + assertThat(connection.async()).isInstanceOf(RedisAdvancedClusterAsyncCommands.class).isNotInstanceOf( + RedisAdvancedClusterAsyncCommandsImpl.class); + assertThat(connection.reactive()).isInstanceOf(RedisAdvancedClusterReactiveCommands.class).isNotInstanceOf( + RedisAdvancedClusterReactiveCommandsImpl.class); + assertThat(sync.getStatefulConnection()).isInstanceOf(StatefulRedisClusterConnection.class) + .isNotInstanceOf(StatefulRedisClusterConnectionImpl.class).isSameAs(connection); + + connection.close(); + pool.close(); + + FastShutdown.shutdown(redisClusterClient); + } + + @Test + void plainConnectionShouldNotUseWrappers() throws Exception { + + GenericObjectPool> pool = ConnectionPoolSupport.createGenericObjectPool( + () -> client.connect(), new GenericObjectPoolConfig<>(), false); + + StatefulRedisConnection connection = pool.borrowObject(); + RedisCommands sync = connection.sync(); + + assertThat(connection).isInstanceOf(StatefulRedisConnection.class).isNotInstanceOf( + StatefulRedisClusterConnectionImpl.class); + assertThat(Proxy.isProxyClass(connection.getClass())).isFalse(); + + assertThat(sync).isInstanceOf(RedisCommands.class); + assertThat(connection.async()).isInstanceOf(RedisAsyncCommands.class).isInstanceOf(RedisAsyncCommandsImpl.class); + assertThat(connection.reactive()).isInstanceOf(RedisReactiveCommands.class).isInstanceOf( + RedisReactiveCommandsImpl.class); + assertThat(sync.getStatefulConnection()).isInstanceOf(StatefulRedisConnection.class).isInstanceOf( + StatefulRedisConnectionImpl.class); + + pool.returnObject(connection); + pool.close(); + } + + @Test + void softRefPoolShouldWorkWithWrappedConnections() throws Exception { + + SoftReferenceObjectPool> pool = ConnectionPoolSupport + .createSoftReferenceObjectPool(() -> client.connect()); + + StatefulRedisConnection connection = pool.borrowObject(); + + assertThat(channels).hasSize(1); + + RedisCommands sync = connection.sync(); + sync.ping(); + + connection.close(); + pool.close(); + + Wait.untilTrue(channels::isEmpty).waitOrTimeout(); + + assertThat(channels).isEmpty(); + } + + @Test + void wrappedObjectClosedAfterReturn() throws Exception { + + GenericObjectPool> pool = ConnectionPoolSupport.createGenericObjectPool( + () -> client.connect(), new GenericObjectPoolConfig<>(), true); + + StatefulRedisConnection 
connection = pool.borrowObject(); + RedisCommands sync = connection.sync(); + sync.ping(); + + connection.close(); + + try { + connection.isMulti(); + fail("Missing RedisException"); + } catch (RedisException e) { + assertThat(e).hasMessageContaining("deallocated"); + } + + pool.close(); + } + + @Test + void tryWithResourcesReturnsConnectionToPool() throws Exception { + + GenericObjectPool> pool = ConnectionPoolSupport.createGenericObjectPool( + () -> client.connect(), new GenericObjectPoolConfig<>()); + + StatefulRedisConnection usedConnection = null; + try (StatefulRedisConnection connection = pool.borrowObject()) { + + RedisCommands sync = connection.sync(); + sync.ping(); + + usedConnection = connection; + } + + try { + usedConnection.isMulti(); + fail("Missing RedisException"); + } catch (RedisException e) { + assertThat(e).hasMessageContaining("deallocated"); + } + + pool.close(); + } + + @Test + void tryWithResourcesReturnsSoftRefConnectionToPool() throws Exception { + + SoftReferenceObjectPool> pool = ConnectionPoolSupport + .createSoftReferenceObjectPool(() -> client.connect()); + + StatefulRedisConnection usedConnection = null; + try (StatefulRedisConnection connection = pool.borrowObject()) { + + RedisCommands sync = connection.sync(); + sync.ping(); + + usedConnection = connection; + } + + try { + usedConnection.isMulti(); + fail("Missing RedisException"); + } catch (RedisException e) { + assertThat(e).hasMessageContaining("deallocated"); + } + + pool.close(); + } + + private void borrowAndReturn(ObjectPool> pool) throws Exception { + + for (int i = 0; i < 10; i++) { + StatefulRedisConnection connection = pool.borrowObject(); + RedisCommands sync = connection.sync(); + sync.ping(); + pool.returnObject(connection); + } + } + + private void borrowAndCloseTryWithResources(ObjectPool> pool) throws Exception { + + for (int i = 0; i < 10; i++) { + try (StatefulRedisConnection connection = pool.borrowObject()) { + RedisCommands sync = connection.sync(); + sync.ping(); + } + } + } + + private void borrowAndClose(ObjectPool> pool) throws Exception { + + for (int i = 0; i < 10; i++) { + StatefulRedisConnection connection = pool.borrowObject(); + RedisCommands sync = connection.sync(); + sync.ping(); + connection.close(); + } + } +} diff --git a/src/test/java/io/lettuce/core/support/InjectedClient.java b/src/test/java/io/lettuce/core/support/InjectedClient.java new file mode 100644 index 0000000000..84c2ba7e4f --- /dev/null +++ b/src/test/java/io/lettuce/core/support/InjectedClient.java @@ -0,0 +1,63 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */
+package io.lettuce.core.support;
+
+import javax.annotation.PostConstruct;
+import javax.annotation.PreDestroy;
+import javax.inject.Inject;
+
+import io.lettuce.core.RedisClient;
+import io.lettuce.core.api.sync.RedisCommands;
+import io.lettuce.core.cluster.RedisClusterClient;
+
+/**
+ * @author Mark Paluch
+ * @since 3.0
+ */
+public class InjectedClient {
+
+    @Inject
+    public RedisClient redisClient;
+
+    @Inject
+    public RedisClusterClient redisClusterClient;
+
+    @Inject
+    @PersonDB
+    public RedisClient qualifiedRedisClient;
+
+    @Inject
+    @PersonDB
+    public RedisClusterClient qualifiedRedisClusterClient;
+
+    private RedisCommands<String, String> connection;
+
+    @PostConstruct
+    public void postConstruct() {
+        connection = redisClient.connect().sync();
+    }
+
+    public void pingRedis() {
+        connection.ping();
+    }
+
+    @PreDestroy
+    public void preDestroy() {
+        if (connection != null) {
+            connection.getStatefulConnection().close();
+        }
+    }
+}
diff --git a/src/test/java/io/lettuce/core/support/PersonDB.java b/src/test/java/io/lettuce/core/support/PersonDB.java
new file mode 100644
index 0000000000..d00d3f38df
--- /dev/null
+++ b/src/test/java/io/lettuce/core/support/PersonDB.java
@@ -0,0 +1,31 @@
+/*
+ * Copyright 2011-2020 the original author or authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * https://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package io.lettuce.core.support;
+
+import java.lang.annotation.Retention;
+import java.lang.annotation.RetentionPolicy;
+
+import javax.inject.Qualifier;
+
+/**
+ * @author Mark Paluch
+ * @since 3.0
+ */
+@Retention(RetentionPolicy.RUNTIME)
+@Qualifier
+@interface PersonDB {
+
+}
diff --git a/src/test/java/io/lettuce/core/support/PubSubTestListener.java b/src/test/java/io/lettuce/core/support/PubSubTestListener.java
new file mode 100644
index 0000000000..f7b03ed550
--- /dev/null
+++ b/src/test/java/io/lettuce/core/support/PubSubTestListener.java
@@ -0,0 +1,87 @@
+/*
+ * Copyright 2016-2020 the original author or authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * https://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package io.lettuce.core.support;
+
+import java.util.concurrent.BlockingQueue;
+
+import io.lettuce.core.internal.LettuceFactories;
+import io.lettuce.core.pubsub.RedisPubSubListener;
+
+/**
+ * @author Mark Paluch
+ */
+public class PubSubTestListener implements RedisPubSubListener<String, String> {
+
+    private BlockingQueue<String> channels = LettuceFactories.newBlockingQueue();
+    private BlockingQueue<String> patterns = LettuceFactories.newBlockingQueue();
+    private BlockingQueue<String> messages = LettuceFactories.newBlockingQueue();
+    private BlockingQueue<Long> counts = LettuceFactories.newBlockingQueue();
+
+    // RedisPubSubListener implementation
+
+    @Override
+    public void message(String channel, String message) {
+        channels.add(channel);
+        messages.add(message);
+    }
+
+    @Override
+    public void message(String pattern, String channel, String message) {
+        patterns.add(pattern);
+        channels.add(channel);
+        messages.add(message);
+    }
+
+    @Override
+    public void subscribed(String channel, long count) {
+        channels.add(channel);
+        counts.add(count);
+    }
+
+    @Override
+    public void psubscribed(String pattern, long count) {
+        patterns.add(pattern);
+        counts.add(count);
+    }
+
+    @Override
+    public void unsubscribed(String channel, long count) {
+        channels.add(channel);
+        counts.add(count);
+    }
+
+    @Override
+    public void punsubscribed(String pattern, long count) {
+        patterns.add(pattern);
+        counts.add(count);
+    }
+
+    public BlockingQueue<String> getChannels() {
+        return channels;
+    }
+
+    public BlockingQueue<String> getPatterns() {
+        return patterns;
+    }
+
+    public BlockingQueue<String> getMessages() {
+        return messages;
+    }
+
+    public BlockingQueue<Long> getCounts() {
+        return counts;
+    }
+}
diff --git a/src/test/java/io/lettuce/core/support/RedisClusterClientFactoryBeanUnitTests.java b/src/test/java/io/lettuce/core/support/RedisClusterClientFactoryBeanUnitTests.java
new file mode 100644
index 0000000000..91183094f2
--- /dev/null
+++ b/src/test/java/io/lettuce/core/support/RedisClusterClientFactoryBeanUnitTests.java
@@ -0,0 +1,137 @@
+/*
+ * Copyright 2011-2020 the original author or authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * https://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ */ +package io.lettuce.core.support; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; + +import java.net.URI; +import java.util.Collection; +import java.util.Iterator; + +import org.junit.jupiter.api.Test; + +import io.lettuce.core.RedisURI; + +/** + * @author Mark Paluch + */ +class RedisClusterClientFactoryBeanUnitTests { + + private RedisClusterClientFactoryBean sut = new RedisClusterClientFactoryBean(); + + @Test + void invalidUri() { + + sut.setUri(URI.create("http://www.web.de")); + assertThatThrownBy(() -> sut.afterPropertiesSet()).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void sentinelUri() { + + sut.setUri(URI.create(RedisURI.URI_SCHEME_REDIS_SENTINEL + "://www.web.de")); + assertThatThrownBy(() -> sut.afterPropertiesSet()).isInstanceOf(IllegalArgumentException.class); + } + + @Test + void validUri() throws Exception { + + sut.setUri(URI.create(RedisURI.URI_SCHEME_REDIS + "://password@host")); + sut.afterPropertiesSet(); + assertThat(getRedisURI().getHost()).isEqualTo("host"); + assertThat(getRedisURI().getPassword()).isEqualTo("password".toCharArray()); + + sut.destroy(); + } + + @Test + void validUriPasswordOverride() throws Exception { + + sut.setUri(URI.create(RedisURI.URI_SCHEME_REDIS + "://password@host")); + sut.setPassword("thepassword"); + + sut.afterPropertiesSet(); + assertThat(getRedisURI().getHost()).isEqualTo("host"); + assertThat(getRedisURI().getPassword()).isEqualTo("thepassword".toCharArray()); + + sut.destroy(); + } + + @Test + void multiNodeUri() throws Exception { + + sut.setUri(URI.create(RedisURI.URI_SCHEME_REDIS + "://password@host1,host2")); + sut.afterPropertiesSet(); + + Collection redisUris = sut.getRedisURIs(); + assertThat(redisUris).hasSize(2); + + Iterator iterator = redisUris.iterator(); + RedisURI host1 = iterator.next(); + RedisURI host2 = iterator.next(); + + assertThat(host1.getHost()).isEqualTo("host1"); + assertThat(host1.getPassword()).isEqualTo("password".toCharArray()); + + assertThat(host2.getHost()).isEqualTo("host2"); + assertThat(host2.getPassword()).isEqualTo("password".toCharArray()); + + sut.destroy(); + } + + @Test + void multiNodeUriPasswordOverride() throws Exception { + + sut.setUri(URI.create(RedisURI.URI_SCHEME_REDIS + "://password@host1,host2")); + sut.setPassword("thepassword"); + + sut.afterPropertiesSet(); + + Collection redisUris = sut.getRedisURIs(); + assertThat(redisUris).hasSize(2); + + Iterator iterator = redisUris.iterator(); + RedisURI host1 = iterator.next(); + RedisURI host2 = iterator.next(); + + assertThat(host1.getHost()).isEqualTo("host1"); + assertThat(host1.getPassword()).isEqualTo("thepassword".toCharArray()); + + assertThat(host2.getHost()).isEqualTo("host2"); + assertThat(host2.getPassword()).isEqualTo("thepassword".toCharArray()); + + sut.destroy(); + } + + @Test + void supportsSsl() throws Exception { + + sut.setUri(URI.create(RedisURI.URI_SCHEME_REDIS_SECURE + "://password@host")); + sut.afterPropertiesSet(); + + assertThat(getRedisURI().getHost()).isEqualTo("host"); + assertThat(getRedisURI().getPassword()).isEqualTo("password".toCharArray()); + assertThat(getRedisURI().isVerifyPeer()).isFalse(); + assertThat(getRedisURI().isSsl()).isTrue(); + + sut.destroy(); + } + + private RedisURI getRedisURI() { + return sut.getRedisURIs().iterator().next(); + } +} diff --git a/src/test/java/io/lettuce/core/support/SpringIntegrationTests.java b/src/test/java/io/lettuce/core/support/SpringIntegrationTests.java new 
file mode 100644 index 0000000000..c8477f1908 --- /dev/null +++ b/src/test/java/io/lettuce/core/support/SpringIntegrationTests.java @@ -0,0 +1,67 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.support; + +import static org.assertj.core.api.Assertions.assertThat; + +import org.junit.Test; +import org.junit.runner.RunWith; +import org.springframework.beans.factory.annotation.Autowired; +import org.springframework.beans.factory.annotation.Qualifier; +import org.springframework.test.context.ContextConfiguration; +import org.springframework.test.context.junit4.SpringRunner; + +import io.lettuce.core.RedisClient; +import io.lettuce.core.cluster.RedisClusterClient; + +/** + * @author Mark Paluch + * @since 3.0 + */ +@RunWith(SpringRunner.class) +@ContextConfiguration +public class SpringIntegrationTests { + + @Autowired + @Qualifier("RedisClient1") + private RedisClient redisClient1; + + @Autowired + @Qualifier("RedisClient2") + private RedisClient redisClient2; + + @Autowired + @Qualifier("RedisClient3") + private RedisClient redisClient3; + + @Autowired + @Qualifier("RedisClusterClient1") + private RedisClusterClient redisClusterClient1; + + @Autowired + @Qualifier("RedisClusterClient2") + private RedisClusterClient redisClusterClient2; + + @Test + public void testSpring() { + + assertThat(redisClient1).isNotNull(); + assertThat(redisClient2).isNotNull(); + assertThat(redisClient3).isNotNull(); + assertThat(redisClusterClient1).isNotNull(); + assertThat(redisClusterClient2).isNotNull(); + } +} diff --git a/src/test/java/io/lettuce/core/tracing/BraveTracingIntegrationTests.java b/src/test/java/io/lettuce/core/tracing/BraveTracingIntegrationTests.java new file mode 100644 index 0000000000..135a859c0a --- /dev/null +++ b/src/test/java/io/lettuce/core/tracing/BraveTracingIntegrationTests.java @@ -0,0 +1,238 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.tracing; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.util.ArrayList; +import java.util.List; +import java.util.Queue; +import java.util.concurrent.LinkedBlockingQueue; +import java.util.concurrent.TimeUnit; + +import org.junit.jupiter.api.AfterAll; +import org.junit.jupiter.api.BeforeAll; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; + +import reactor.test.StepVerifier; +import zipkin2.Span; +import brave.ScopedSpan; +import brave.Tracer; +import brave.Tracing; +import brave.propagation.CurrentTraceContext; +import brave.propagation.TraceContext; +import io.lettuce.core.RedisClient; +import io.lettuce.core.RedisURI; +import io.lettuce.core.TestSupport; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.resource.ClientResources; +import io.lettuce.core.resource.DefaultClientResources; +import io.lettuce.test.Wait; +import io.lettuce.test.condition.EnabledOnCommand; +import io.lettuce.test.resource.FastShutdown; + +/** + * Integration tests for {@link BraveTracing}. + * + * @author Mark Paluch + * @author Daniel Albuquerque + */ +@EnabledOnCommand("HELLO") +class BraveTracingIntegrationTests extends TestSupport { + + private static ClientResources clientResources; + private static RedisClient client; + private static Tracing clientTracing; + private static Queue spans = new LinkedBlockingQueue<>(); + + @BeforeAll + static void beforeClass() { + + clientTracing = Tracing.newBuilder().localServiceName("client") + .currentTraceContext(CurrentTraceContext.Default.create()).spanReporter(spans::add).build(); + + clientResources = DefaultClientResources.builder().tracing(BraveTracing.create(clientTracing)).build(); + client = RedisClient.create(clientResources, RedisURI.Builder.redis(host, port).build()); + } + + @BeforeEach + void before() { + + Tracer tracer = clientTracing.tracer(); + if (tracer.currentSpan() != null) { + clientTracing.tracer().currentSpan().abandon(); + } + + spans.clear(); + } + + @AfterAll + static void afterClass() { + + clientTracing.close(); + clientResources.shutdown(0, 0, TimeUnit.MILLISECONDS); + } + + @Test + void pingWithTrace() { + + ScopedSpan foo = clientTracing.tracer().startScopedSpan("foo"); + + StatefulRedisConnection connect = client.connect(); + connect.sync().ping(); + Wait.untilNotEquals(true, spans::isEmpty).waitOrTimeout(); + + foo.finish(); + + List spans = new ArrayList<>(BraveTracingIntegrationTests.spans); + + assertThat(spans.get(0).name()).isEqualTo("hello"); + assertThat(spans.get(1).name()).isEqualTo("ping"); + } + + @Test + void pingWithTraceShouldCatchErrors() { + + ScopedSpan foo = clientTracing.tracer().startScopedSpan("foo"); + + StatefulRedisConnection connect = client.connect(); + connect.sync().set("foo", "bar"); + try { + connect.sync().hgetall("foo"); + } catch (Exception e) { + } + + Wait.untilTrue(() -> spans.size() > 2).waitOrTimeout(); + + foo.finish(); + + List spans = new ArrayList<>(BraveTracingIntegrationTests.spans); + + assertThat(spans.get(1).name()).isEqualTo("set"); + assertThat(spans.get(2).name()).isEqualTo("hgetall"); + assertThat(spans.get(2).tags()).containsEntry("error", + "WRONGTYPE Operation against a key holding the wrong kind of value"); + assertThat(spans.get(3).name()).isEqualTo("foo"); + } + + @Test + void getAndSetWithTraceWithCommandArgsExcludedFromTags() { + + ClientResources clientResources = ClientResources.builder() + 
.tracing(BraveTracing.builder().tracing(clientTracing).excludeCommandArgsFromSpanTags().build()).build(); + RedisClient client = RedisClient.create(clientResources, RedisURI.Builder.redis(host, port).build()); + + ScopedSpan trace = clientTracing.tracer().startScopedSpan("foo"); + + StatefulRedisConnection connect = client.connect(); + connect.sync().set("foo", "bar"); + connect.sync().get("foo"); + + Wait.untilTrue(() -> spans.size() > 2).waitOrTimeout(); + + trace.finish(); + + List spans = new ArrayList<>(BraveTracingIntegrationTests.spans); + + assertThat(spans.get(1).name()).isEqualTo("set"); + assertThat(spans.get(1).tags()).doesNotContainKey("redis.args"); + assertThat(spans.get(2).name()).isEqualTo("get"); + assertThat(spans.get(2).tags()).doesNotContainKey("redis.args"); + assertThat(spans.get(3).name()).isEqualTo("foo"); + + FastShutdown.shutdown(client); + FastShutdown.shutdown(clientResources); + } + + @Test + void reactivePing() { + + StatefulRedisConnection connect = client.connect(); + connect.reactive().ping().as(StepVerifier::create).expectNext("PONG").verifyComplete(); + + Wait.untilNotEquals(true, spans::isEmpty).waitOrTimeout(); + assertThat(spans).isNotEmpty(); + } + + @Test + void reactivePingWithTrace() { + + ScopedSpan trace = clientTracing.tracer().startScopedSpan("foo"); + + StatefulRedisConnection connect = client.connect(); + connect.reactive().ping() // + .subscriberContext(it -> it.put(TraceContext.class, trace.context())) // + .as(StepVerifier::create) // + .expectNext("PONG").verifyComplete(); + + Wait.untilNotEquals(true, spans::isEmpty).waitOrTimeout(); + + trace.finish(); + + List spans = new ArrayList<>(BraveTracingIntegrationTests.spans); + + assertThat(spans.get(1).name()).isEqualTo("ping"); + assertThat(spans.get(2).name()).isEqualTo("foo"); + } + + @Test + void reactiveGetAndSetWithTrace() { + + ScopedSpan trace = clientTracing.tracer().startScopedSpan("foo"); + + StatefulRedisConnection connect = client.connect(); + connect.reactive().set("foo", "bar") // + .then(connect.reactive().get("foo")) // + .subscriberContext(it -> it.put(TraceContext.class, trace.context())) // + .as(StepVerifier::create) // + .expectNext("bar").verifyComplete(); + + Wait.untilTrue(() -> spans.size() > 2).waitOrTimeout(); + + trace.finish(); + + List spans = new ArrayList<>(BraveTracingIntegrationTests.spans); + + assertThat(spans.get(1).name()).isEqualTo("set"); + assertThat(spans.get(1).tags()).containsEntry("redis.args", "key value"); + assertThat(spans.get(2).name()).isEqualTo("get"); + assertThat(spans.get(2).tags()).containsEntry("redis.args", "key"); + assertThat(spans.get(3).name()).isEqualTo("foo"); + } + + @Test + void reactiveGetAndSetWithTraceProvider() { + + brave.Span trace = clientTracing.tracer().newTrace(); + + StatefulRedisConnection connect = client.connect(); + connect.reactive().set("foo", "bar").then(connect.reactive().get("foo")) + .subscriberContext(io.lettuce.core.tracing.Tracing + .withTraceContextProvider(() -> BraveTracing.BraveTraceContext.create(trace.context()))) // + .as(StepVerifier::create) // + .expectNext("bar").verifyComplete(); + + Wait.untilTrue(() -> spans.size() > 2).waitOrTimeout(); + + trace.finish(); + + List spans = new ArrayList<>(BraveTracingIntegrationTests.spans); + + assertThat(spans.get(1).name()).isEqualTo("set"); + assertThat(spans.get(2).name()).isEqualTo("get"); + } +} diff --git a/src/test/java/io/lettuce/core/tracing/BraveTracingUnitTests.java b/src/test/java/io/lettuce/core/tracing/BraveTracingUnitTests.java new file 
mode 100644 index 0000000000..ae2e6448d4 --- /dev/null +++ b/src/test/java/io/lettuce/core/tracing/BraveTracingUnitTests.java @@ -0,0 +1,121 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.tracing; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.util.Queue; +import java.util.concurrent.LinkedBlockingQueue; + +import org.junit.jupiter.api.AfterAll; +import org.junit.jupiter.api.BeforeAll; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.springframework.test.util.ReflectionTestUtils; + +import zipkin2.Span; +import brave.Tracer; +import brave.Tracing; +import brave.handler.MutableSpan; +import brave.propagation.CurrentTraceContext; +import io.lettuce.core.TestSupport; +import io.netty.channel.unix.DomainSocketAddress; + +/** + * @author Mark Paluch + * @author Daniel Albuquerque + */ +class BraveTracingUnitTests extends TestSupport { + + private static Tracing clientTracing; + private static Queue spans = new LinkedBlockingQueue<>(); + + @BeforeAll + static void beforeClass() { + + clientTracing = Tracing.newBuilder().localServiceName("client") + .currentTraceContext(CurrentTraceContext.Default.create()).spanReporter(spans::add).build(); + } + + @BeforeEach + void before() { + + Tracer tracer = clientTracing.tracer(); + if (tracer.currentSpan() != null) { + clientTracing.tracer().currentSpan().abandon(); + } + + spans.clear(); + } + + @AfterAll + static void afterClass() { + + clientTracing.close(); + } + + @Test + void shouldReportSimpleServiceName() { + + BraveTracing tracing = BraveTracing.create(clientTracing); + BraveTracing.BraveEndpoint endpoint = (BraveTracing.BraveEndpoint) tracing + .createEndpoint(new DomainSocketAddress("foo")); + + assertThat(endpoint.endpoint.serviceName()).isEqualTo("redis"); + assertThat(endpoint.endpoint.port()).isNull(); + assertThat(endpoint.endpoint.ipv4()).isNull(); + assertThat(endpoint.endpoint.ipv6()).isNull(); + } + + @Test + void shouldReportCustomServiceName() { + + BraveTracing tracing = BraveTracing.builder().tracing(clientTracing).serviceName("custom-name-goes-here").build(); + + BraveTracing.BraveEndpoint endpoint = (BraveTracing.BraveEndpoint) tracing + .createEndpoint(new DomainSocketAddress("foo")); + + assertThat(endpoint.endpoint.serviceName()).isEqualTo("custom-name-goes-here"); + assertThat(endpoint.endpoint.port()).isNull(); + assertThat(endpoint.endpoint.ipv4()).isNull(); + assertThat(endpoint.endpoint.ipv6()).isNull(); + } + + @Test + void shouldCustomizeEndpoint() { + + BraveTracing tracing = BraveTracing.builder().tracing(clientTracing) + .endpointCustomizer(it -> it.serviceName("foo-bar")).build(); + BraveTracing.BraveEndpoint endpoint = (BraveTracing.BraveEndpoint) tracing + .createEndpoint(new DomainSocketAddress("foo")); + + assertThat(endpoint.endpoint.serviceName()).isEqualTo("foo-bar"); + } + + @Test + void shouldCustomizeSpan() { + + BraveTracing tracing = 
BraveTracing.builder().tracing(clientTracing) + .spanCustomizer(it -> it.remoteServiceName("remote")).build(); + + BraveTracing.BraveSpan span = (BraveTracing.BraveSpan) tracing.getTracerProvider().getTracer().nextSpan(); + span.finish(); + + MutableSpan braveSpan = (MutableSpan) ReflectionTestUtils.getField(span.getSpan(), "state"); + + assertThat(braveSpan.remoteServiceName()).isEqualTo("remote"); + } +} diff --git a/src/test/java/io/lettuce/examples/ConnectToElastiCacheMaster.java b/src/test/java/io/lettuce/examples/ConnectToElastiCacheMaster.java new file mode 100644 index 0000000000..447670206c --- /dev/null +++ b/src/test/java/io/lettuce/examples/ConnectToElastiCacheMaster.java @@ -0,0 +1,44 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.examples; + +import io.lettuce.core.RedisClient; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.resource.DefaultClientResources; +import io.lettuce.core.resource.DirContextDnsResolver; + +/** + * @author Mark Paluch + */ +public class ConnectToElastiCacheMaster { + + public static void main(String[] args) { + + // Syntax: redis://[password@]host[:port][/databaseNumber] + + DefaultClientResources clientResources = DefaultClientResources.builder() // + .dnsResolver(new DirContextDnsResolver()) // Does not cache DNS lookups + .build(); + + RedisClient redisClient = RedisClient.create(clientResources, "redis://password@localhost:6379/0"); + StatefulRedisConnection connection = redisClient.connect(); + + System.out.println("Connected to Redis"); + + connection.close(); + redisClient.shutdown(); + } +} diff --git a/src/test/java/io/lettuce/examples/ConnectToMasterSlaveUsingElastiCacheCluster.java b/src/test/java/io/lettuce/examples/ConnectToMasterSlaveUsingElastiCacheCluster.java new file mode 100644 index 0000000000..b620646b43 --- /dev/null +++ b/src/test/java/io/lettuce/examples/ConnectToMasterSlaveUsingElastiCacheCluster.java @@ -0,0 +1,50 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.examples; + +import java.util.Arrays; +import java.util.List; + +import io.lettuce.core.ReadFrom; +import io.lettuce.core.RedisClient; +import io.lettuce.core.RedisURI; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.masterslave.MasterSlave; +import io.lettuce.core.masterslave.StatefulRedisMasterSlaveConnection; + +/** + * @author Mark Paluch + */ +public class ConnectToMasterSlaveUsingElastiCacheCluster { + + public static void main(String[] args) { + + // Syntax: redis://[password@]host[:port][/databaseNumber] + RedisClient redisClient = RedisClient.create(); + + List nodes = Arrays.asList(RedisURI.create("redis://host1"), RedisURI.create("redis://host2"), + RedisURI.create("redis://host3")); + + StatefulRedisMasterSlaveConnection connection = MasterSlave.connect(redisClient, StringCodec.UTF8, + nodes); + connection.setReadFrom(ReadFrom.MASTER_PREFERRED); + + System.out.println("Connected to Redis"); + + connection.close(); + redisClient.shutdown(); + } +} diff --git a/src/test/java/io/lettuce/examples/ConnectToMasterSlaveUsingRedisSentinel.java b/src/test/java/io/lettuce/examples/ConnectToMasterSlaveUsingRedisSentinel.java new file mode 100644 index 0000000000..a52d68112d --- /dev/null +++ b/src/test/java/io/lettuce/examples/ConnectToMasterSlaveUsingRedisSentinel.java @@ -0,0 +1,43 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.examples; + +import io.lettuce.core.ReadFrom; +import io.lettuce.core.RedisClient; +import io.lettuce.core.RedisURI; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.masterslave.MasterSlave; +import io.lettuce.core.masterslave.StatefulRedisMasterSlaveConnection; + +/** + * @author Mark Paluch + */ +public class ConnectToMasterSlaveUsingRedisSentinel { + + public static void main(String[] args) { + // Syntax: redis-sentinel://[password@]host[:port][,host2[:port2]][/databaseNumber]#sentinelMasterId + RedisClient redisClient = RedisClient.create(); + + StatefulRedisMasterSlaveConnection connection = MasterSlave.connect(redisClient, StringCodec.UTF8, + RedisURI.create("redis-sentinel://localhost:26379,localhost:26380/0#mymaster")); + connection.setReadFrom(ReadFrom.MASTER_PREFERRED); + + System.out.println("Connected to Redis"); + + connection.close(); + redisClient.shutdown(); + } +} diff --git a/src/test/java/io/lettuce/examples/ConnectToRedis.java b/src/test/java/io/lettuce/examples/ConnectToRedis.java new file mode 100644 index 0000000000..744fcaf466 --- /dev/null +++ b/src/test/java/io/lettuce/examples/ConnectToRedis.java @@ -0,0 +1,39 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.examples; + +import io.lettuce.core.RedisClient; +import io.lettuce.core.api.StatefulRedisConnection; + +/** + * @author Mark Paluch + * @author Tugdual Grall + */ +public class ConnectToRedis { + + public static void main(String[] args) { + + // Syntax: redis://[password@]host[:port][/databaseNumber] + // Syntax: redis://[username:password@]host[:port][/databaseNumber] + RedisClient redisClient = RedisClient.create("redis://password@localhost:6379/0"); + StatefulRedisConnection connection = redisClient.connect(); + + System.out.println("Connected to Redis"); + + connection.close(); + redisClient.shutdown(); + } +} diff --git a/src/test/java/io/lettuce/examples/ConnectToRedisCluster.java b/src/test/java/io/lettuce/examples/ConnectToRedisCluster.java new file mode 100644 index 0000000000..41cec7d01d --- /dev/null +++ b/src/test/java/io/lettuce/examples/ConnectToRedisCluster.java @@ -0,0 +1,40 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.examples; + +import io.lettuce.core.cluster.RedisClusterClient; +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; + +/** + * @author Mark Paluch + * @author Tugdual Grall + */ +public class ConnectToRedisCluster { + + public static void main(String[] args) { + + // Syntax: redis://[password@]host[:port] + // Syntax: redis://[username:password@]host[:port] + RedisClusterClient redisClient = RedisClusterClient.create("redis://password@localhost:7379"); + + StatefulRedisClusterConnection connection = redisClient.connect(); + + System.out.println("Connected to Redis"); + + connection.close(); + redisClient.shutdown(); + } +} diff --git a/src/test/java/io/lettuce/examples/ConnectToRedisClusterSSL.java b/src/test/java/io/lettuce/examples/ConnectToRedisClusterSSL.java new file mode 100644 index 0000000000..c9176eabc0 --- /dev/null +++ b/src/test/java/io/lettuce/examples/ConnectToRedisClusterSSL.java @@ -0,0 +1,41 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.examples; + +import io.lettuce.core.RedisURI; +import io.lettuce.core.cluster.RedisClusterClient; +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; + +/** + * @author Mark Paluch + */ +public class ConnectToRedisClusterSSL { + + public static void main(String[] args) { + + // Syntax: rediss://[password@]host[:port] + RedisURI redisURI = RedisURI.create("rediss://password@localhost:7379"); + redisURI.setVerifyPeer(false); // depending on your setup, you might want to disable peer verification + + RedisClusterClient redisClient = RedisClusterClient.create(redisURI); + StatefulRedisClusterConnection connection = redisClient.connect(); + + System.out.println("Connected to Redis"); + + connection.close(); + redisClient.shutdown(); + } +} diff --git a/src/test/java/io/lettuce/examples/ConnectToRedisClusterWithTopologyRefreshing.java b/src/test/java/io/lettuce/examples/ConnectToRedisClusterWithTopologyRefreshing.java new file mode 100644 index 0000000000..0f560057f4 --- /dev/null +++ b/src/test/java/io/lettuce/examples/ConnectToRedisClusterWithTopologyRefreshing.java @@ -0,0 +1,53 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.examples; + +import java.util.concurrent.TimeUnit; + +import io.lettuce.core.cluster.ClusterClientOptions; +import io.lettuce.core.cluster.ClusterTopologyRefreshOptions; +import io.lettuce.core.cluster.RedisClusterClient; +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; + +/** + * @author Mark Paluch + */ +public class ConnectToRedisClusterWithTopologyRefreshing { + + public static void main(String[] args) { + + // Syntax: redis://[password@]host[:port] + RedisClusterClient redisClient = RedisClusterClient.create("redis://password@localhost:7379"); + + ClusterTopologyRefreshOptions clusterTopologyRefreshOptions = ClusterTopologyRefreshOptions.builder()// + .enablePeriodicRefresh(30, TimeUnit.MINUTES)// + .enableAllAdaptiveRefreshTriggers()// + .build(); + + ClusterClientOptions clusterClientOptions = ClusterClientOptions.builder()// + .topologyRefreshOptions(clusterTopologyRefreshOptions)// + .build(); + + redisClient.setOptions(clusterClientOptions); + + StatefulRedisClusterConnection connection = redisClient.connect(); + + System.out.println("Connected to Redis"); + + connection.close(); + redisClient.shutdown(); + } +} diff --git a/src/test/java/io/lettuce/examples/ConnectToRedisSSL.java b/src/test/java/io/lettuce/examples/ConnectToRedisSSL.java new file mode 100644 index 0000000000..2383b66802 --- /dev/null +++ b/src/test/java/io/lettuce/examples/ConnectToRedisSSL.java @@ -0,0 +1,39 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.examples; + +import io.lettuce.core.RedisClient; +import io.lettuce.core.api.StatefulRedisConnection; + +/** + * @author Mark Paluch + */ +public class ConnectToRedisSSL { + + public static void main(String[] args) { + + // Syntax: rediss://[password@]host[:port][/databaseNumber] + // Adopt the port to the stunnel port in front of your Redis instance + RedisClient redisClient = RedisClient.create("rediss://password@localhost:6443/0"); + + StatefulRedisConnection connection = redisClient.connect(); + + System.out.println("Connected to Redis using SSL"); + + connection.close(); + redisClient.shutdown(); + } +} diff --git a/src/test/java/io/lettuce/examples/ConnectToRedisUsingRedisSentinel.java b/src/test/java/io/lettuce/examples/ConnectToRedisUsingRedisSentinel.java new file mode 100644 index 0000000000..555f480cb4 --- /dev/null +++ b/src/test/java/io/lettuce/examples/ConnectToRedisUsingRedisSentinel.java @@ -0,0 +1,38 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.examples; + +import io.lettuce.core.RedisClient; +import io.lettuce.core.api.StatefulRedisConnection; + +/** + * @author Mark Paluch + */ +public class ConnectToRedisUsingRedisSentinel { + + public static void main(String[] args) { + + // Syntax: redis-sentinel://[password@]host[:port][,host2[:port2]][/databaseNumber]#sentinelMasterId + RedisClient redisClient = RedisClient.create("redis-sentinel://localhost:26379,localhost:26380/0#mymaster"); + + StatefulRedisConnection connection = redisClient.connect(); + + System.out.println("Connected to Redis using Redis Sentinel"); + + connection.close(); + redisClient.shutdown(); + } +} diff --git a/src/test/java/io/lettuce/examples/MySpringBean.java b/src/test/java/io/lettuce/examples/MySpringBean.java new file mode 100644 index 0000000000..b7ad7f2dc2 --- /dev/null +++ b/src/test/java/io/lettuce/examples/MySpringBean.java @@ -0,0 +1,45 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.examples; + +import org.springframework.beans.factory.annotation.Autowired; + +import io.lettuce.core.RedisClient; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.sync.RedisCommands; + +/** + * @author Mark Paluch + */ +public class MySpringBean { + + private RedisClient redisClient; + + @Autowired + public void setRedisClient(RedisClient redisClient) { + this.redisClient = redisClient; + } + + public String ping() { + + StatefulRedisConnection connection = redisClient.connect(); + + RedisCommands sync = connection.sync(); + String result = sync.ping(); + connection.close(); + return result; + } +} diff --git a/src/test/java/io/lettuce/examples/ReadWriteExample.java b/src/test/java/io/lettuce/examples/ReadWriteExample.java new file mode 100644 index 0000000000..b502bcf58b --- /dev/null +++ b/src/test/java/io/lettuce/examples/ReadWriteExample.java @@ -0,0 +1,46 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.examples; + +import io.lettuce.core.RedisClient; +import io.lettuce.core.RedisURI; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.sync.RedisCommands; + +/** + * @author Mark Paluch + */ +public class ReadWriteExample { + + public static void main(String[] args) { + + // Syntax: redis://[password@]host[:port][/databaseNumber] + RedisClient redisClient = RedisClient.create(RedisURI.create("redis://password@localhost:6379/0")); + + StatefulRedisConnection connection = redisClient.connect(); + + System.out.println("Connected to Redis"); + + RedisCommands sync = connection.sync(); + + sync.set("foo", "bar"); + String value = sync.get("foo"); + System.out.println(value); + + connection.close(); + redisClient.shutdown(); + } +} diff --git a/src/test/java/io/lettuce/examples/SpringExample.java b/src/test/java/io/lettuce/examples/SpringExample.java new file mode 100644 index 0000000000..599d1bf017 --- /dev/null +++ b/src/test/java/io/lettuce/examples/SpringExample.java @@ -0,0 +1,47 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.examples; + +import org.springframework.context.support.ClassPathXmlApplicationContext; + +import io.lettuce.core.RedisClient; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.sync.RedisCommands; + +/** + * @author Mark Paluch + */ +public class SpringExample { + + public static void main(String[] args) { + + ClassPathXmlApplicationContext context = new ClassPathXmlApplicationContext( + "com/lambdaworks/examples/SpringTest-context.xml"); + + RedisClient client = context.getBean(RedisClient.class); + + StatefulRedisConnection connection = client.connect(); + + RedisCommands sync = connection.sync(); + System.out.println("PING: " + sync.ping()); + connection.close(); + + MySpringBean mySpringBean = context.getBean(MySpringBean.class); + System.out.println("PING: " + mySpringBean.ping()); + + context.close(); + } +} diff --git a/src/test/java/io/lettuce/test/CanConnect.java b/src/test/java/io/lettuce/test/CanConnect.java new file mode 100644 index 0000000000..34fecbcfce --- /dev/null +++ b/src/test/java/io/lettuce/test/CanConnect.java @@ -0,0 +1,58 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.test; + +import java.io.IOException; +import java.net.InetSocketAddress; +import java.net.Socket; +import java.net.SocketAddress; +import java.util.concurrent.TimeUnit; + +/** + * @author Mark Paluch + * @soundtrack Ronski Speed - Maracaido Sessions, formerly Tool Sessions (May 2016) + */ +public class CanConnect { + + /** + * Check whether a TCP connection can be established to the given {@link SocketAddress}. + * + * @param host + * @param port + * @return + */ + public static boolean to(String host, int port) { + return to(new InetSocketAddress(host, port)); + } + + /** + * Check whether a TCP connection can be established to the given {@link SocketAddress}. + * + * @param socketAddress + * @return + */ + private static boolean to(SocketAddress socketAddress) { + + try { + Socket socket = new Socket(); + socket.connect(socketAddress, (int) TimeUnit.SECONDS.toMillis(5)); + socket.close(); + return true; + } catch (IOException e) { + return false; + } + } +} diff --git a/src/test/java/io/lettuce/test/CliParser.java b/src/test/java/io/lettuce/test/CliParser.java new file mode 100644 index 0000000000..0c6ae3cb0f --- /dev/null +++ b/src/test/java/io/lettuce/test/CliParser.java @@ -0,0 +1,88 @@ +/* + * Copyright 2019-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.test; + +import java.nio.charset.StandardCharsets; +import java.util.List; + +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.output.ArrayOutput; +import io.lettuce.core.protocol.Command; +import io.lettuce.core.protocol.CommandArgs; +import io.lettuce.core.protocol.ProtocolKeyword; + +/** + * Utility to parse a CLI command string such as {@code ACL SETUSER foo} into a {@link Command}. + * + * @author Mark Paluch + */ +public class CliParser { + + /** + * Parse a CLI command string into a {@link Command}. + * + * @param command + * @return + */ + public static Command> parse(String command) { + + String[] parts = command.split(" "); + boolean quoted = false; + + ProtocolKeyword type = null; + CommandArgs args = new CommandArgs<>(StringCodec.UTF8); + + StringBuilder buffer = new StringBuilder(); + for (int i = 0; i < parts.length; i++) { + + String part = parts[i]; + + if (quoted && part.endsWith("\"")) { + buffer.append(part, 0, part.length() - 1); + } else if (part.startsWith("\"")) { + quoted = true; + buffer.append(buffer.append(part.substring(1))); + } else { + buffer.append(part); + } + + if (quoted) { + continue; + } + + if (type == null) { + String typeName = buffer.toString(); + type = new ProtocolKeyword() { + @Override + public byte[] getBytes() { + return name().getBytes(StandardCharsets.UTF_8); + } + + @Override + public String name() { + return typeName; + } + }; + } else { + args.addKey(buffer.toString()); + } + + buffer.setLength(0); + } + + return new Command<>(type, new ArrayOutput<>(StringCodec.UTF8), args); + } +} diff --git a/src/test/java/io/lettuce/test/ConnectionDecoratingInvocationHandler.java b/src/test/java/io/lettuce/test/ConnectionDecoratingInvocationHandler.java new file mode 100644 index 0000000000..3b04197d8e --- /dev/null +++ b/src/test/java/io/lettuce/test/ConnectionDecoratingInvocationHandler.java @@ -0,0 +1,65 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.test; + +import java.lang.reflect.Method; +import java.lang.reflect.Proxy; + +import io.lettuce.core.api.StatefulConnection; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; +import io.lettuce.core.internal.AbstractInvocationHandler; +import io.lettuce.core.sentinel.api.StatefulRedisSentinelConnection; + +/** + * @author Mark Paluch + */ +class ConnectionDecoratingInvocationHandler extends AbstractInvocationHandler { + + private final Object target; + + ConnectionDecoratingInvocationHandler(Object target) { + this.target = target; + } + + @Override + protected Object handleInvocation(Object proxy, Method method, Object[] args) throws Throwable { + + Method targetMethod = target.getClass().getMethod(method.getName(), method.getParameterTypes()); + Method proxyMethod = proxy.getClass().getMethod(method.getName(), method.getParameterTypes()); + + Object result = targetMethod.invoke(target, args); + + if (result instanceof StatefulConnection) { + + Class[] interfaces; + if (result instanceof StatefulRedisClusterConnection + && proxyMethod.getReturnType().isAssignableFrom(StatefulRedisClusterConnection.class)) { + interfaces = new Class[] { StatefulConnection.class, StatefulRedisClusterConnection.class }; + } else if (result instanceof StatefulRedisSentinelConnection + && proxyMethod.getReturnType().isAssignableFrom(StatefulRedisSentinelConnection.class)) { + interfaces = new Class[] { StatefulConnection.class, StatefulRedisSentinelConnection.class }; + } else { + interfaces = new Class[] { StatefulConnection.class, StatefulRedisConnection.class }; + } + + return Proxy.newProxyInstance(getClass().getClassLoader(), interfaces, + new ConnectionDecoratingInvocationHandler(result)); + } + + return result; + } +} diff --git a/src/test/java/io/lettuce/test/ConnectionTestUtil.java b/src/test/java/io/lettuce/test/ConnectionTestUtil.java new file mode 100644 index 0000000000..eac6fba457 --- /dev/null +++ b/src/test/java/io/lettuce/test/ConnectionTestUtil.java @@ -0,0 +1,166 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.test; + +import java.lang.reflect.UndeclaredThrowableException; +import java.util.Queue; + +import org.springframework.test.util.ReflectionTestUtils; + +import io.lettuce.core.RedisChannelHandler; +import io.lettuce.core.RedisChannelWriter; +import io.lettuce.core.RedisClient; +import io.lettuce.core.api.StatefulConnection; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.protocol.CommandHandler; +import io.lettuce.core.protocol.ConnectionWatchdog; +import io.lettuce.core.protocol.DefaultEndpoint; +import io.lettuce.test.settings.TestSettings; +import io.netty.channel.Channel; + +/** + * @author Mark Paluch + */ +@SuppressWarnings("unchecked") +public class ConnectionTestUtil { + + /** + * Extract the {@link Channel} from a {@link StatefulConnection}. + * + * @param connection the connection + * @return the {@link Channel} + */ + public static Channel getChannel(StatefulConnection connection) { + + RedisChannelHandler channelHandler = (RedisChannelHandler) connection; + return (Channel) ReflectionTestUtils.getField(channelHandler.getChannelWriter(), "channel"); + } + + /** + * Extract the {@link ConnectionWatchdog} from a {@link StatefulConnection}. + * + * @param connection the connection + * @return the {@link ConnectionWatchdog} + */ + public static ConnectionWatchdog getConnectionWatchdog(StatefulConnection connection) { + + RedisChannelWriter channelWriter = ConnectionTestUtil.getChannelWriter(connection); + if (channelWriter instanceof DefaultEndpoint) { + return (ConnectionWatchdog) ReflectionTestUtils.getField(channelWriter, "connectionWatchdog"); + } + + return null; + } + + /** + * Extract the {@link RedisChannelWriter} from a {@link StatefulConnection}. + * + * @param connection the connection + * @return the {@link RedisChannelWriter} + */ + public static RedisChannelWriter getChannelWriter(StatefulConnection connection) { + return ((RedisChannelHandler) connection).getChannelWriter(); + } + + /** + * Extract the stack from a from a {@link StatefulConnection}. + * + * @param connection the connection + * @return the stack + */ + public static Queue getStack(StatefulConnection connection) { + + Channel channel = getChannel(connection); + + if (channel != null) { + CommandHandler commandHandler = channel.pipeline().get(CommandHandler.class); + return (Queue) commandHandler.getStack(); + } + + throw new IllegalArgumentException("Cannot obtain stack from " + connection); + } + + /** + * Extract the disconnected buffer from a from a {@link StatefulConnection}. + * + * @param connection the connection + * @return the queue + */ + public static Queue getDisconnectedBuffer(StatefulConnection connection) { + + RedisChannelWriter channelWriter = ConnectionTestUtil.getChannelWriter(connection); + if (channelWriter instanceof DefaultEndpoint) { + return getDisconnectedBuffer((DefaultEndpoint) channelWriter); + } + + throw new IllegalArgumentException("Cannot disconnected command buffer from " + connection); + } + + /** + * Extract the disconnected buffer from a {@link DefaultEndpoint}. + * + * @param endpoint the endpoint + * @return the queue + */ + public static Queue getDisconnectedBuffer(DefaultEndpoint endpoint) { + return (Queue) ReflectionTestUtils.getField(endpoint, "disconnectedBuffer"); + } + + /** + * Extract the active command queue size a {@link DefaultEndpoint}. 
+ * + * @param endpoint the endpoint + * @return the queue + */ + public static int getQueueSize(DefaultEndpoint endpoint) { + return (Integer) ReflectionTestUtils.getField(endpoint, "queueSize"); + } + + /** + * Extract the command buffer from a {@link StatefulConnection}. + * + * @param connection the connection + * @return the command buffer + */ + public static Queue getCommandBuffer(StatefulConnection connection) { + + RedisChannelWriter channelWriter = ConnectionTestUtil.getChannelWriter(connection); + if (channelWriter instanceof DefaultEndpoint) { + return (Queue) ReflectionTestUtils.getField(channelWriter, "commandBuffer"); + } + + throw new IllegalArgumentException("Cannot obtain command buffer from " + channelWriter); + } + + /** + * Extract the connection state from a from a {@link StatefulConnection}. + * + * @param connection the connection + * @return the connection state as {@link String} + */ + public static String getConnectionState(StatefulConnection connection) { + + Channel channel = getChannel(connection); + + if (channel != null) { + CommandHandler commandHandler = channel.pipeline().get(CommandHandler.class); + return ReflectionTestUtils.getField(commandHandler, "lifecycleState").toString(); + } + + return ""; + } +} diff --git a/src/test/java/io/lettuce/test/Delay.java b/src/test/java/io/lettuce/test/Delay.java new file mode 100644 index 0000000000..54512b0f9d --- /dev/null +++ b/src/test/java/io/lettuce/test/Delay.java @@ -0,0 +1,42 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.test; + +import java.time.Duration; + +/** + * @author Mark Paluch + */ +public class Delay { + + private Delay() { + } + + /** + * Sleep for the given {@link Duration}. + * + * @param duration + */ + public static void delay(Duration duration) { + + try { + Thread.sleep(duration.toMillis()); + } catch (InterruptedException e) { + Thread.currentThread().interrupt(); + throw new IllegalStateException(e); + } + } +} diff --git a/src/test/java/io/lettuce/test/KeyValueStreamingAdapter.java b/src/test/java/io/lettuce/test/KeyValueStreamingAdapter.java new file mode 100644 index 0000000000..8873073c83 --- /dev/null +++ b/src/test/java/io/lettuce/test/KeyValueStreamingAdapter.java @@ -0,0 +1,47 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.test; + +import java.util.LinkedHashMap; +import java.util.Map; + +import io.lettuce.core.output.KeyStreamingChannel; +import io.lettuce.core.output.KeyValueStreamingChannel; + +/** + * Adapter for a {@link KeyStreamingChannel}. Stores the output in a map. + * + * @param Key type. + * @param Value type. + * @author Mark Paluch + * @since 3.0 + */ +public class KeyValueStreamingAdapter implements KeyValueStreamingChannel { + + private final Map map = new LinkedHashMap<>(); + + @Override + public void onKeyValue(K key, V value) { + + synchronized (map) { + map.put(key, value); + } + } + + public Map getMap() { + return map; + } +} diff --git a/src/test/java/io/lettuce/test/KeysAndValues.java b/src/test/java/io/lettuce/test/KeysAndValues.java new file mode 100644 index 0000000000..71f2779af1 --- /dev/null +++ b/src/test/java/io/lettuce/test/KeysAndValues.java @@ -0,0 +1,67 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.test; + +import java.util.*; + +/** + * Keys for testing slot-hashes. + * + * @author Mark Paluch + */ +public class KeysAndValues { + + /** + * Ordered list of keys. The order corresponds with the list of {@code VALUES}. + */ + public static final List KEYS; + + /** + * Ordered list of values. The order corresponds with the list of {@code KEYS}. + */ + public static final List VALUES; + + /** + * Mapping between {@code KEYS} and {@code VALUES} + */ + public static final Map MAP; + + /** + * Number of entries. + */ + public static final int COUNT = 500; + + static { + + List keys = new ArrayList<>(); + List values = new ArrayList<>(); + Map map = new HashMap<>(); + + for (int i = 0; i < COUNT; i++) { + + String key = "key-" + i; + String value = "value-" + i; + + keys.add(key); + values.add(value); + map.put(key, value); + } + + KEYS = Collections.unmodifiableList(keys); + VALUES = Collections.unmodifiableList(values); + MAP = Collections.unmodifiableMap(map); + } +} diff --git a/src/test/java/io/lettuce/test/LettuceExtension.java b/src/test/java/io/lettuce/test/LettuceExtension.java new file mode 100644 index 0000000000..8dc6198d2c --- /dev/null +++ b/src/test/java/io/lettuce/test/LettuceExtension.java @@ -0,0 +1,355 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.test; + +import java.io.Closeable; +import java.lang.annotation.ElementType; +import java.lang.annotation.Retention; +import java.lang.annotation.RetentionPolicy; +import java.lang.annotation.Target; +import java.lang.reflect.Parameter; +import java.lang.reflect.Type; +import java.time.Duration; +import java.util.*; +import java.util.function.Function; +import java.util.function.Supplier; + +import javax.enterprise.inject.New; +import javax.inject.Inject; + +import org.junit.jupiter.api.extension.*; + +import io.lettuce.core.ClientOptions; +import io.lettuce.core.RedisClient; +import io.lettuce.core.api.StatefulConnection; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.cluster.RedisClusterClient; +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; +import io.lettuce.core.dynamic.support.ResolvableType; +import io.lettuce.core.pubsub.StatefulRedisPubSubConnection; +import io.lettuce.core.resource.ClientResources; +import io.lettuce.test.resource.DefaultRedisClient; +import io.lettuce.test.resource.DefaultRedisClusterClient; +import io.lettuce.test.resource.TestClientResources; +import io.netty.util.internal.logging.InternalLogger; +import io.netty.util.internal.logging.InternalLoggerFactory; + +/** + * JUnit 5 {@link Extension} providing parameter resolution for connection resources and that reacts to callbacks. + * + * The following resource types are supported by this extension: + *
+ * <ul>
+ * <li>{@link ClientResources} (singleton)</li>
+ * <li>{@link RedisClient} (singleton)</li>
+ * <li>{@link RedisClusterClient} (singleton)</li>
+ * <li>{@link StatefulRedisConnection} (singleton and dedicated instances via {@code @New})</li>
+ * <li>{@link StatefulRedisPubSubConnection} (singleton and dedicated instances via {@code @New})</li>
+ * <li>{@link StatefulRedisClusterConnection} (singleton and dedicated instances via {@code @New})</li>
+ * </ul>
+ *
+ * Tests that want to use this extension need to annotate injection points with {@code @Inject}:
+ *
+ * <pre class="code">
+ * &#064;ExtendWith(LettuceExtension.class)
+ * public class CustomCommandTest {
+ *
+ *     private final RedisCommands&lt;String, String&gt; redis;
+ *
+ *     &#064;Inject
+ *     public CustomCommandTest(StatefulRedisConnection&lt;String, String&gt; connection) {
+ *         this.redis = connection.sync();
+ *     }
+ * }
+ * </pre>
+ *
+ * <h3>Resource lifecycle</h3>
+ *
+ * This extension allocates resources lazily and stores them in its {@link ExtensionContext}
+ * {@link org.junit.jupiter.api.extension.ExtensionContext.Store} for reuse across multiple tests. Clients and
+ * {@link ClientResources} are allocated through {@link DefaultRedisClient} and {@link TestClientResources}, respectively,
+ * so shutdown is managed by the actual suppliers. Singleton connection resources are closed after the test class (test
+ * container) is finished. Newable connection resources are closed after the actual test is finished.
+ *
+ * <h3>Newable resources</h3>
+ *
    Some tests require a dedicated connection. These can be obtained by annotating the parameter with + * {@code @New}. + * + * @author Mark Paluch + * @since 5.1.1 + * @see ParameterResolver + * @see Inject + * @see New + * @see BeforeEachCallback + * @see AfterEachCallback + * @see AfterAllCallback + */ +public class LettuceExtension implements ParameterResolver, AfterAllCallback, AfterEachCallback { + + private static final InternalLogger LOGGER = InternalLoggerFactory.getInstance(LettuceExtension.class); + + private final ExtensionContext.Namespace LETTUCE = ExtensionContext.Namespace.create("lettuce.parameters"); + + private static final Set> SUPPORTED_INJECTABLE_TYPES = new HashSet<>(Arrays.asList(StatefulRedisConnection.class, + StatefulRedisPubSubConnection.class, RedisCommands.class, RedisClient.class, ClientResources.class, + StatefulRedisClusterConnection.class, RedisClusterClient.class)); + + private static final Set> CLOSE_AFTER_EACH = new HashSet<>(Arrays.asList(StatefulRedisConnection.class, + StatefulRedisPubSubConnection.class, StatefulRedisClusterConnection.class)); + + private static final List> SUPPLIERS = Arrays.asList(ClientResourcesSupplier.INSTANCE, + RedisClusterClientSupplier.INSTANCE, RedisClientSupplier.INSTANCE, StatefulRedisConnectionSupplier.INSTANCE, + StatefulRedisPubSubConnectionSupplier.INSTANCE, StatefulRedisClusterConnectionSupplier.INSTANCE); + + private static final List> RESOURCE_FUNCTIONS = Arrays.asList(RedisCommandsFunction.INSTANCE); + + @Override + public boolean supportsParameter(ParameterContext parameterContext, ExtensionContext extensionContext) + throws ParameterResolutionException { + + if (SUPPORTED_INJECTABLE_TYPES.contains(parameterContext.getParameter().getType())) { + + if (parameterContext.isAnnotated(Inject.class) + || parameterContext.getDeclaringExecutable().isAnnotationPresent(Inject.class)) { + return true; + } + + LOGGER.warn("Parameter type " + parameterContext.getParameter().getType() + + " supported but injection target is not annotated with @Inject"); + } + + return false; + } + + @Override + public Object resolveParameter(ParameterContext parameterContext, ExtensionContext extensionContext) + throws ParameterResolutionException { + + ExtensionContext.Store store = getStore(extensionContext); + + Parameter parameter = parameterContext.getParameter(); + + Type parameterizedType = parameter.getParameterizedType(); + if (parameterContext.isAnnotated(New.class)) { + + Object instance = doGetInstance(parameterizedType); + + if (instance instanceof Closeable || instance instanceof AutoCloseable) { + + CloseAfterTest closeables = store.getOrComputeIfAbsent(CloseAfterTest.class, it -> new CloseAfterTest(), + CloseAfterTest.class); + + closeables.add(closeables); + } + + return instance; + } + + return store.getOrComputeIfAbsent(parameter.getType(), it -> doGetInstance(parameterizedType)); + } + + private Object doGetInstance(Type parameterizedType) { + + Optional resourceFunction = findFunction(parameterizedType); + return resourceFunction.map(it -> it.function.apply(findSupplier(it.dependsOn.getType()).get())).orElseGet( + () -> findSupplier(parameterizedType).get()); + } + + /** + * Attempt to resolve the {@code requestedResourceType}. 
+ * + * @param extensionContext + * @param requestedResourceType + * @param + * @return + */ + public T resolve(ExtensionContext extensionContext, Class requestedResourceType) { + + ExtensionContext.Store store = getStore(extensionContext); + + return (T) store.getOrComputeIfAbsent(requestedResourceType, it -> findSupplier(requestedResourceType).get()); + } + + private ExtensionContext.Store getStore(ExtensionContext extensionContext) { + return extensionContext.getStore(LETTUCE); + } + + @Override + public void afterAll(ExtensionContext context) { + + ExtensionContext.Store store = getStore(context); + + CLOSE_AFTER_EACH.forEach(it -> { + + StatefulConnection connection = store.get(it, StatefulConnection.class); + + if (connection != null) { + connection.close(); + store.remove(StatefulRedisConnection.class); + } + }); + } + + @Override + public void afterEach(ExtensionContext context) { + + DefaultRedisClient.get().setOptions(ClientOptions.builder().build()); + DefaultRedisClient.get().setDefaultTimeout(Duration.ofSeconds(60)); + + ExtensionContext.Store store = getStore(context); + CloseAfterTest closeables = store.get(CloseAfterTest.class, CloseAfterTest.class); + + if (closeables != null) { + + List copy = new ArrayList<>(closeables); + + closeables.clear(); + + copy.forEach(it -> { + try { + if (it instanceof Closeable) { + ((Closeable) it).close(); + } else if (it instanceof AutoCloseable) { + ((AutoCloseable) it).close(); + } + } catch (Exception e) { + throw new IllegalStateException(e); + } + }); + } + } + + @SuppressWarnings("unchecked") + private static Supplier findSupplier(Type type) { + + ResolvableType requested = ResolvableType.forType(type); + + Supplier supplier = SUPPLIERS.stream().filter(it -> { + + ResolvableType providedType = ResolvableType.forType(it.getClass()).as(Supplier.class).getGeneric(0); + + if (requested.isAssignableFrom(providedType)) { + return true; + } + return false; + }).findFirst().orElseThrow(() -> new NoSuchElementException("Cannot find a factory for " + type)); + + return (Supplier) supplier; + } + + private static Optional findFunction(Type type) { + + ResolvableType requested = ResolvableType.forType(type); + + return RESOURCE_FUNCTIONS.stream().map(it -> { + + ResolvableType dependsOn = ResolvableType.forType(it.getClass()).as(Function.class).getGeneric(0); + ResolvableType providedType = ResolvableType.forType(it.getClass()).as(Function.class).getGeneric(1); + + return new ResourceFunction(dependsOn, providedType, it); + }).filter(it -> requested.isAssignableFrom(it.provides)).findFirst(); + } + + @Target(ElementType.PARAMETER) + @Retention(RetentionPolicy.RUNTIME) + public @interface Connection { + boolean requiresNew() default false; + } + + static class CloseAfterTest extends ArrayList { + } + + static class ResourceFunction { + + final ResolvableType dependsOn; + final ResolvableType provides; + final Function function; + + public ResourceFunction(ResolvableType dependsOn, ResolvableType provides, Function function) { + this.dependsOn = dependsOn; + this.provides = provides; + this.function = (Function) function; + } + } + + enum ClientResourcesSupplier implements Supplier { + + INSTANCE; + + @Override + public ClientResources get() { + return TestClientResources.get(); + } + } + + enum RedisClientSupplier implements Supplier { + + INSTANCE; + + @Override + public RedisClient get() { + return DefaultRedisClient.get(); + } + } + + enum RedisClusterClientSupplier implements Supplier { + + INSTANCE; + + @Override + public RedisClusterClient 
get() { + return DefaultRedisClusterClient.get(); + } + } + + enum StatefulRedisConnectionSupplier implements Supplier> { + + INSTANCE; + + @Override + public StatefulRedisConnection get() { + return RedisClientSupplier.INSTANCE.get().connect(); + } + } + + enum StatefulRedisPubSubConnectionSupplier implements Supplier> { + + INSTANCE; + + @Override + public StatefulRedisPubSubConnection get() { + return RedisClientSupplier.INSTANCE.get().connectPubSub(); + } + } + + enum StatefulRedisClusterConnectionSupplier implements Supplier> { + + INSTANCE; + + @Override + public StatefulRedisClusterConnection get() { + return RedisClusterClientSupplier.INSTANCE.get().connect(); + } + } + + enum RedisCommandsFunction implements Function, RedisCommands> { + INSTANCE; + + @Override + public RedisCommands apply(StatefulRedisConnection connection) { + return connection.sync(); + } + } +} diff --git a/src/test/java/io/lettuce/test/ListStreamingAdapter.java b/src/test/java/io/lettuce/test/ListStreamingAdapter.java new file mode 100644 index 0000000000..09490bd0fd --- /dev/null +++ b/src/test/java/io/lettuce/test/ListStreamingAdapter.java @@ -0,0 +1,57 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.test; + +import java.util.List; +import java.util.Vector; + +import io.lettuce.core.ScoredValue; +import io.lettuce.core.output.KeyStreamingChannel; +import io.lettuce.core.output.ScoredValueStreamingChannel; +import io.lettuce.core.output.ValueStreamingChannel; + +/** + * Streaming adapter which stores every key or/and value in a list. This adapter can be used in KeyStreamingChannels and + * ValueStreamingChannels. + * + * @author Mark Paluch + * @param Value-Type. + * @since 3.0 + */ +public class ListStreamingAdapter implements KeyStreamingChannel, ValueStreamingChannel, + ScoredValueStreamingChannel { + private final List list = new Vector<>(); + + @Override + public void onKey(T key) { + list.add(key); + + } + + @Override + public void onValue(T value) { + list.add(value); + } + + public List getList() { + return list; + } + + @Override + public void onValue(ScoredValue value) { + list.add(value.getValue()); + } +} diff --git a/src/test/java/io/lettuce/test/ReactiveSyncInvocationHandler.java b/src/test/java/io/lettuce/test/ReactiveSyncInvocationHandler.java new file mode 100644 index 0000000000..481fc73a00 --- /dev/null +++ b/src/test/java/io/lettuce/test/ReactiveSyncInvocationHandler.java @@ -0,0 +1,131 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.test; + +import java.lang.reflect.InvocationTargetException; +import java.lang.reflect.Method; +import java.lang.reflect.Proxy; +import java.util.List; +import java.util.Set; + +import reactor.core.publisher.Flux; +import reactor.core.publisher.Mono; +import io.lettuce.core.api.StatefulConnection; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; +import io.lettuce.core.internal.LettuceSets; +import io.lettuce.core.sentinel.api.StatefulRedisSentinelConnection; +import io.lettuce.core.sentinel.api.sync.RedisSentinelCommands; + +/** + * Invocation handler for testing purposes. + * + * @param + * @param + */ +public class ReactiveSyncInvocationHandler extends ConnectionDecoratingInvocationHandler { + + private final StatefulConnection connection; + + private ReactiveSyncInvocationHandler(StatefulConnection connection, Object rxApi) { + super(rxApi); + this.connection = connection; + } + + @Override + @SuppressWarnings("unchecked") + protected Object handleInvocation(Object proxy, Method method, Object[] args) throws Throwable { + + try { + + Object result = super.handleInvocation(proxy, method, args); + + if (result == null) { + return result; + } + + if (result instanceof StatefulConnection) { + return result; + } + + if (result instanceof Flux) { + Flux flux = (Flux) result; + + if (!method.getName().equals("exec") && !method.getName().equals("multi")) { + if (connection instanceof StatefulRedisConnection && ((StatefulRedisConnection) connection) + .isMulti()) { + flux.subscribe(); + return null; + } + } + + List value = flux.collectList().block(); + + if (method.getReturnType().equals(List.class)) { + return value; + } + + if (method.getReturnType().equals(Set.class)) { + return LettuceSets.newHashSet(value); + } + + if (!value.isEmpty()) { + return value.get(0); + } + } + + if (result instanceof Mono) { + Mono mono = (Mono) result; + + if (!method.getName().equals("exec") && !method.getName().equals("multi")) { + if (connection instanceof StatefulRedisConnection && ((StatefulRedisConnection) connection).isMulti()) { + mono.subscribe(); + return null; + } + } + + return mono.block(); + } + + return result; + + } catch (InvocationTargetException e) { + throw e.getTargetException(); + } + } + + public static RedisCommands sync(StatefulRedisConnection connection) { + + ReactiveSyncInvocationHandler handler = new ReactiveSyncInvocationHandler<>(connection, connection.reactive()); + return (RedisCommands) Proxy.newProxyInstance(handler.getClass().getClassLoader(), + new Class[] { RedisCommands.class }, handler); + } + + public static RedisCommands sync(StatefulRedisClusterConnection connection) { + + ReactiveSyncInvocationHandler handler = new ReactiveSyncInvocationHandler<>(connection, connection.reactive()); + return (RedisCommands) Proxy.newProxyInstance(handler.getClass().getClassLoader(), + new Class[] { RedisCommands.class }, handler); + } + + public static RedisSentinelCommands sync(StatefulRedisSentinelConnection connection) { 
+ + ReactiveSyncInvocationHandler handler = new ReactiveSyncInvocationHandler<>(connection, connection.reactive()); + return (RedisSentinelCommands) Proxy.newProxyInstance(handler.getClass().getClassLoader(), + new Class[] { RedisSentinelCommands.class }, handler); + } +} diff --git a/src/test/java/io/lettuce/test/RoutingInvocationHandler.java b/src/test/java/io/lettuce/test/RoutingInvocationHandler.java new file mode 100644 index 0000000000..1f9e370a78 --- /dev/null +++ b/src/test/java/io/lettuce/test/RoutingInvocationHandler.java @@ -0,0 +1,42 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.test; + +import java.lang.reflect.InvocationHandler; +import java.lang.reflect.Method; + +/** + * @author Mark Paluch + */ +public class RoutingInvocationHandler extends ConnectionDecoratingInvocationHandler { + + private final InvocationHandler delegate; + + public RoutingInvocationHandler(Object target, InvocationHandler delegate) { + super(target); + this.delegate = delegate; + } + + @Override + protected Object handleInvocation(Object proxy, Method method, Object[] args) throws Throwable { + + if (method.getName().equals("getStatefulConnection")) { + return super.handleInvocation(proxy, method, args); + } + + return delegate.invoke(proxy, method, args); + } +} diff --git a/src/test/java/io/lettuce/test/TestFutures.java b/src/test/java/io/lettuce/test/TestFutures.java new file mode 100644 index 0000000000..ddf7fe06cb --- /dev/null +++ b/src/test/java/io/lettuce/test/TestFutures.java @@ -0,0 +1,139 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.test; + +import java.lang.reflect.UndeclaredThrowableException; +import java.time.Duration; +import java.util.Arrays; +import java.util.Collection; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.CompletionStage; +import java.util.concurrent.Future; + +import io.lettuce.core.RedisFuture; +import io.lettuce.core.cluster.api.async.AsyncExecutions; +import io.lettuce.core.internal.Futures; + +/** + * Utility methods to synchronize and create futures. + * + * @author Mark Paluch + */ +public class TestFutures { + + private static final Duration TIMEOUT = Duration.ofSeconds(5); + + /** + * Check if all {@code futures} are {@link Future#isDone() completed}. 
+ * + * @param futures + * @return {@literal true} if all {@code futures} are {@link Future#isDone() completed} + */ + public static boolean areAllDone(Collection> futures) { + + for (Future future : futures) { + if (!future.isDone()) { + return false; + } + } + return true; + } + + /** + * Await completion for all {@link Future} guarded by the global {@link #TIMEOUT}. + */ + public static boolean awaitOrTimeout(Future future) { + + if (!Futures.awaitAll(TIMEOUT, future)) { + throw new IllegalStateException("Future timeout"); + } + + return true; + } + + /** + * Await completion for all {@link AsyncExecutions}s guarded by the global {@link #TIMEOUT}. + * + * @param executions + */ + public static boolean awaitOrTimeout(AsyncExecutions executions) { + return awaitOrTimeout(Arrays.asList(executions.futures())); + } + + /** + * Await completion for all {@link Future}s guarded by the global {@link #TIMEOUT}. + * + * @param futures + */ + public static boolean awaitOrTimeout(Collection> futures) { + + if (!io.lettuce.core.internal.Futures.awaitAll(TIMEOUT, futures.toArray(new Future[0]))) { + throw new IllegalStateException("Future timeout"); + } + + return true; + } + + /** + * Retrieve the value from the {@link Future} guarded by the global {@link #TIMEOUT}. + * + * @param future + * @param + */ + public static T getOrTimeout(Future future) { + + if (!Futures.await(TIMEOUT, future)) { + throw new IllegalStateException("Future timeout"); + } + + try { + return future.get(); + } catch (Exception e) { + throw new UndeclaredThrowableException(e); + } + } + + /** + * Retrieve the value from the {@link CompletableFuture} guarded by the global {@link #TIMEOUT}. + * + * @param future + * @param + */ + public static T getOrTimeout(CompletableFuture future) { + return getOrTimeout((Future) future); + } + + /** + * Retrieve the value from the {@link CompletionStage} guarded by the global {@link #TIMEOUT}. + * + * @param completionStage + * @param + */ + public static T getOrTimeout(CompletionStage completionStage) { + return getOrTimeout(completionStage.toCompletableFuture()); + } + + /** + * Retrieve the value from the {@link RedisFuture} guarded by the global {@link #TIMEOUT}. + * + * @param future + * @param + */ + public static T getOrTimeout(RedisFuture future) { + return getOrTimeout(future.toCompletableFuture()); + } + +} diff --git a/src/test/java/io/lettuce/test/Wait.java b/src/test/java/io/lettuce/test/Wait.java new file mode 100644 index 0000000000..e584dc5720 --- /dev/null +++ b/src/test/java/io/lettuce/test/Wait.java @@ -0,0 +1,314 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.test; + +import java.time.Clock; +import java.time.Duration; +import java.time.Instant; +import java.util.concurrent.ExecutionException; +import java.util.concurrent.TimeoutException; +import java.util.function.Function; +import java.util.function.Predicate; +import java.util.function.Supplier; + +/** + * Wait-Until helper. + * + * @author Mark Paluch + */ +public class Wait { + + /** + * Initialize a {@link Wait.WaitBuilder} to wait until the {@code supplier} supplies {@literal true} + * + * @param supplier + * @return + */ + public static WaitBuilder untilTrue(Supplier supplier) { + + WaitBuilder wb = new WaitBuilder<>(); + + wb.supplier = supplier; + wb.check = o -> o; + + return wb; + } + + /** + * Initialize a {@link Wait.WaitBuilder} to wait until the {@code condition} does not throw exceptions + * + * @param condition + * @return + */ + public static WaitBuilder untilNoException(VoidWaitCondition condition) { + + WaitBuilder wb = new WaitBuilder<>(); + wb.waitCondition = () -> { + try { + condition.test(); + return true; + } catch (Exception e) { + return false; + } + }; + + wb.supplier = () -> { + condition.test(); + return null; + }; + + return wb; + } + + /** + * Initialize a {@link Wait.WaitBuilder} to wait until the {@code actualSupplier} provides an object that is not equal to + * {@code expectation} + * + * @param expectation + * @param actualSupplier + * @param + * @return + */ + public static WaitBuilder untilNotEquals(T expectation, Supplier actualSupplier) { + + WaitBuilder wb = new WaitBuilder<>(); + + wb.supplier = actualSupplier; + wb.check = o -> { + if (o == expectation) { + return false; + } + + if ((o == null && expectation != null) || (o != null && expectation == null)) { + return true; + } + + if (o instanceof Number && expectation instanceof Number) { + Number actualNumber = (Number) o; + Number expectedNumber = (Number) expectation; + + if (actualNumber.doubleValue() == expectedNumber.doubleValue()) { + return false; + } + + if (actualNumber.longValue() == expectedNumber.longValue()) { + return false; + } + } + + return !o.equals(expectation); + }; + wb.messageFunction = o -> "Objects are equal: " + expectation + " and " + o; + + return wb; + } + + /** + * Initialize a {@link Wait.WaitBuilder} to wait until the {@code actualSupplier} provides an object that is not equal to + * {@code expectation} + * + * @param expectation + * @param actualSupplier + * @param + * @return + */ + public static WaitBuilder untilEquals(T expectation, Supplier actualSupplier) { + + WaitBuilder wb = new WaitBuilder<>(); + + wb.supplier = actualSupplier; + wb.check = o -> { + if (o == expectation) { + return true; + } + + if ((o == null && expectation != null) || (o != null && expectation == null)) { + return false; + } + + if (o instanceof Number && expectation instanceof Number) { + Number actualNumber = (Number) o; + Number expectedNumber = (Number) expectation; + + if (actualNumber.doubleValue() == expectedNumber.doubleValue()) { + return true; + } + + if (actualNumber.longValue() == expectedNumber.longValue()) { + return true; + } + } + + return o.equals(expectation); + }; + + wb.messageFunction = o -> "Objects are not equal: " + expectation + " and " + o; + + return wb; + } + + @FunctionalInterface + interface WaitCondition { + boolean isSatisfied() throws Exception; + } + + @FunctionalInterface + public interface VoidWaitCondition { + void test(); + } + + @FunctionalInterface + public interface Sleeper { + void sleep() throws InterruptedException; + } + 
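+ /*
+  * Minimal usage sketch (assumes a StatefulConnection named "connection" is in scope):
+  *
+  * Wait.untilTrue(connection::isOpen).during(Duration.ofSeconds(10)).message("Connection did not open").waitOrTimeout();
+  */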
+ static class ThreadSleep implements Sleeper { + + private final Duration period; + + ThreadSleep(Duration period) { + this.period = period; + } + + public void sleep() throws InterruptedException { + Thread.sleep(period.toMillis()); + } + } + + /** + * Builder to build a waiter/sleeper with a timeout. Make sure to call {@link #waitOrTimeout()} to block execution until the + * {@link WaitCondition} is met. + * + * @param + */ + public static class WaitBuilder { + + private Duration duration = Duration.ofSeconds(10); + private Sleeper sleeper = new ThreadSleep(Duration.ofMillis(10)); + private Function messageFunction; + private Supplier supplier; + private Predicate check; + private WaitCondition waitCondition; + + public WaitBuilder during(Duration duration) { + this.duration = duration; + return this; + } + + public WaitBuilder message(String message) { + this.messageFunction = o -> message; + return this; + } + + @SuppressWarnings("unchecked") + public void waitOrTimeout() { + + Waiter waiter = new Waiter(); + waiter.duration = duration; + waiter.sleeper = sleeper; + waiter.messageFunction = (Function) messageFunction; + + if (waitCondition != null) { + waiter.waitOrTimeout(waitCondition, supplier); + } else { + waiter.waitOrTimeout(supplier, check); + } + } + } + + /** + * Utility to await until a {@link WaitCondition} yields {@literal true}. + */ + private static class Waiter { + + private Duration duration; + private Sleeper sleeper; + private Function messageFunction; + + private void waitOrTimeout(Supplier supplier, Predicate check) { + + try { + if (!success(() -> check.test(supplier.get()), Timeout.create(duration))) { + if (messageFunction != null) { + throw new TimeoutException(messageFunction.apply(supplier.get())); + } + throw new TimeoutException("Condition not satisfied for: " + supplier.get()); + } + } catch (Exception e) { + throw new IllegalStateException(e); + } + } + + private void waitOrTimeout(WaitCondition waitCondition, Supplier supplier) { + + try { + if (!success(waitCondition, Timeout.create(duration))) { + try { + if (messageFunction != null) { + throw new TimeoutException(messageFunction.apply(supplier.get())); + } + throw new TimeoutException("Condition not satisfied for: " + supplier.get()); + } catch (TimeoutException e) { + throw e; + } catch (Exception e) { + if (messageFunction != null) { + throw new ExecutionException(messageFunction.apply(null), e); + } + throw new ExecutionException("Condition not satisfied", e); + } + } + } catch (Exception e) { + throw new IllegalStateException(e); + } + } + + private boolean success(WaitCondition condition, Timeout timeout) throws Exception { + + while (!timeout.hasExpired()) { + if (condition.isSatisfied()) { + return true; + } + sleeper.sleep(); + } + + return false; + } + } + + static class Timeout { + + private static final Clock clock = Clock.systemDefaultZone(); + private final Instant timeout; + + private Timeout(Instant timeout) { + this.timeout = timeout; + } + + public static Timeout create(Duration duration) { + + if (duration.isZero() || duration.isNegative()) { + throw new IllegalArgumentException("Duration must be positive"); + } + + Instant now = clock.instant(); + return new Timeout(now.plus(duration)); + } + + boolean hasExpired() { + return clock.instant().isAfter(timeout); + } + } +} diff --git a/src/test/java/io/lettuce/test/WithPassword.java b/src/test/java/io/lettuce/test/WithPassword.java new file mode 100644 index 0000000000..d51b986735 --- /dev/null +++ 
b/src/test/java/io/lettuce/test/WithPassword.java @@ -0,0 +1,106 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.test; + +import java.lang.reflect.UndeclaredThrowableException; +import java.util.List; + +import io.lettuce.core.RedisClient; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.protocol.Command; +import io.lettuce.test.condition.RedisConditions; +import io.lettuce.test.settings.TestSettings; + +/** + * Utility to run a {@link ThrowingCallable callback function} while Redis is configured with a password. + * + * @author Mark Paluch + * @author Tugdual Grall + */ +public class WithPassword { + + /** + * Run a {@link ThrowingCallable callback function} while Redis is configured with a password. + * + * @param client + * @param callable + */ + public static void run(RedisClient client, ThrowingCallable callable) { + + StatefulRedisConnection connection = client.connect(); + RedisCommands commands = connection.sync(); + try { + enableAuthentication(commands); + commands.auth(TestSettings.password()); + + try { + callable.call(); + } catch (RuntimeException e) { + throw e; + } catch (Throwable e) { + throw new UndeclaredThrowableException(e); + } + } finally { + disableAuthentication(commands); + connection.close(); + } + } + + /** + * Enable password authentication via {@code requirepass}. + * + * @param commands + */ + public static void enableAuthentication(RedisCommands commands) { + + RedisConditions conditions = RedisConditions.of(commands); + + commands.configSet("requirepass", TestSettings.password()); + + // If ACL is supported let's create a test user + if (conditions.hasCommand("ACL")) { + Command> command = CliParser.parse( + "ACL SETUSER " + TestSettings.aclUsername() + " on >" + TestSettings.aclPassword() + " ~cached:* +@all"); + commands.dispatch(command.getType(), command.getOutput(), command.getArgs()); + } + } + + /** + * Disable password authentication via {@code requirepass} and optionally the {@code ACL} command. 
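+ * If the server supports {@code ACL}, the test user created by {@link #enableAuthentication(RedisCommands)} is removed
+ * and the default user is reset to {@code nopass}.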
+ * + * @param commands + */ + public static void disableAuthentication(RedisCommands commands) { + + commands.auth(TestSettings.password()); // reauthenticate as default user before disabling it + + RedisConditions conditions = RedisConditions.of(commands); + commands.configSet("requirepass", ""); + + if (conditions.hasCommand("ACL")) { + Command> command = CliParser.parse("ACL DELUSER " + TestSettings.aclUsername()); + commands.dispatch(command.getType(), command.getOutput(), command.getArgs()); + + command = CliParser.parse("acl setuser default nopass"); + commands.dispatch(command.getType(), command.getOutput(), command.getArgs()); + } + } + + public interface ThrowingCallable { + void call() throws Throwable; + } +} diff --git a/src/test/java/io/lettuce/test/condition/EnabledOnCommand.java b/src/test/java/io/lettuce/test/condition/EnabledOnCommand.java new file mode 100644 index 0000000000..b4271eeaff --- /dev/null +++ b/src/test/java/io/lettuce/test/condition/EnabledOnCommand.java @@ -0,0 +1,42 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.test.condition; + +import java.lang.annotation.*; + +import org.junit.jupiter.api.extension.ExtendWith; + +/** + * {@code @EnabledOnCommand} is used to signal that the annotated test class or test method is only enabledif the + * specified command is available. + * + *
+ * <p>
    + * When applied at the class level, all test methods within that class will be enabled . + */ +@Target({ ElementType.TYPE, ElementType.METHOD }) +@Retention(RetentionPolicy.RUNTIME) +@Inherited +@Documented +@ExtendWith(EnabledOnCommandCondition.class) +public @interface EnabledOnCommand { + + /** + * Name of the Redis command to be available. + * + * @return + */ + String value(); +} diff --git a/src/test/java/io/lettuce/test/condition/EnabledOnCommandCondition.java b/src/test/java/io/lettuce/test/condition/EnabledOnCommandCondition.java new file mode 100644 index 0000000000..46b3911bbd --- /dev/null +++ b/src/test/java/io/lettuce/test/condition/EnabledOnCommandCondition.java @@ -0,0 +1,59 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.test.condition; + +import static org.junit.jupiter.api.extension.ConditionEvaluationResult.disabled; +import static org.junit.jupiter.api.extension.ConditionEvaluationResult.enabled; + +import java.util.Optional; + +import org.junit.jupiter.api.extension.ConditionEvaluationResult; +import org.junit.jupiter.api.extension.ExecutionCondition; +import org.junit.jupiter.api.extension.ExtensionContext; +import org.junit.platform.commons.util.AnnotationUtils; + +import io.lettuce.test.LettuceExtension; +import io.lettuce.core.api.StatefulRedisConnection; + +/** + * {@link ExecutionCondition} for {@link EnabledOnCommandCondition @EnabledOnCommand}. + * + * @see EnabledOnCommandCondition + */ +class EnabledOnCommandCondition implements ExecutionCondition { + + private static final ConditionEvaluationResult ENABLED_BY_DEFAULT = enabled("@EnabledOnCommand is not present"); + + @Override + public ConditionEvaluationResult evaluateExecutionCondition(ExtensionContext context) { + + Optional optional = AnnotationUtils.findAnnotation(context.getElement(), EnabledOnCommand.class); + + if (optional.isPresent()) { + + String command = optional.get().value(); + + StatefulRedisConnection connection = new LettuceExtension().resolve(context, StatefulRedisConnection.class); + + RedisConditions conditions = RedisConditions.of(connection); + boolean hasCommand = conditions.hasCommand(command); + return hasCommand ? enabled("Enabled on command " + command) : disabled("Disabled, command " + command + + " not available on Redis version " + conditions.getRedisVersion()); + } + + return ENABLED_BY_DEFAULT; + } +} diff --git a/src/test/java/io/lettuce/test/condition/RedisConditions.java b/src/test/java/io/lettuce/test/condition/RedisConditions.java new file mode 100644 index 0000000000..106e46aa3c --- /dev/null +++ b/src/test/java/io/lettuce/test/condition/RedisConditions.java @@ -0,0 +1,336 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.test.condition; + +import java.io.ByteArrayInputStream; +import java.io.IOException; +import java.util.ArrayList; +import java.util.List; +import java.util.Map; +import java.util.Properties; +import java.util.stream.Collectors; + +import org.springframework.util.Assert; +import org.springframework.util.StringUtils; + +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; +import io.lettuce.core.cluster.api.sync.RedisClusterCommands; +import io.lettuce.core.models.command.CommandDetail; +import io.lettuce.core.models.command.CommandDetailParser; + +/** + * Collection of utility methods to test conditions during test execution. + * + * @author Mark Paluch + */ +public class RedisConditions { + + private final Map commands; + private final Version version; + + private RedisConditions(RedisClusterCommands commands) { + + List result = CommandDetailParser.parse(commands.command()); + + this.commands = result.stream().collect( + Collectors.toMap(commandDetail -> commandDetail.getName().toUpperCase(), CommandDetail::getArity)); + + String info = commands.info("server"); + + try { + ByteArrayInputStream inputStream = new ByteArrayInputStream(info.getBytes()); + Properties p = new Properties(); + p.load(inputStream); + + version = Version.parse(p.getProperty("redis_version")); + } catch (IOException e) { + throw new IllegalStateException(e); + } + } + + /** + * Create {@link RedisCommands} given {@link StatefulRedisConnection}. + * + * @param connection must not be {@literal null}. + * @return + */ + public static RedisConditions of(StatefulRedisConnection connection) { + return new RedisConditions(connection.sync()); + } + + /** + * Create {@link RedisCommands} given {@link StatefulRedisClusterConnection}. + * + * @param connection must not be {@literal null}. + * @return + */ + public static RedisConditions of(StatefulRedisClusterConnection connection) { + return new RedisConditions(connection.sync()); + } + + /** + * Create {@link RedisConditions} given {@link RedisCommands}. + * + * @param commands must not be {@literal null}. + * @return + */ + public static RedisConditions of(RedisClusterCommands commands) { + return new RedisConditions(commands); + } + + /** + * @return the Redis {@link Version}. + */ + public Version getRedisVersion() { + return version; + } + + /** + * @param command + * @return {@literal true} if the command is present. + */ + public boolean hasCommand(String command) { + return commands.containsKey(command.toUpperCase()); + } + + /** + * @param command command name. + * @param arity expected arity. + * @return {@literal true} if the command is present with the given arity. + */ + public boolean hasCommandArity(String command, int arity) { + + if (!hasCommand(command)) { + throw new IllegalStateException("Unknown command: " + command + " in " + commands); + } + + return commands.get(command.toUpperCase()) == arity; + } + + /** + * @param versionNumber + * @return {@literal true} if the version number is met. 
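+ * For example, {@code hasVersionGreaterOrEqualsTo("5.0.0")} yields {@literal true} against a Redis 6 server.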
+ */ + public boolean hasVersionGreaterOrEqualsTo(String versionNumber) { + return version.isGreaterThanOrEqualTo(Version.parse(versionNumber)); + } + + /** + * Value object to represent a Version consisting of major, minor and bugfix part. + */ + public static class Version implements Comparable { + + private static final String VERSION_PARSE_ERROR = "Invalid version string! Could not parse segment %s within %s."; + + private final int major; + private final int minor; + private final int bugfix; + private final int build; + + /** + * Creates a new {@link Version} from the given integer values. At least one value has to be given but a maximum of 4. + * + * @param parts must not be {@literal null} or empty. + */ + Version(int... parts) { + + Assert.notNull(parts, "Parts must not be null!"); + Assert.isTrue(parts.length > 0 && parts.length < 5, String.format("Invalid parts length. 0 < %s < 5", parts.length)); + + this.major = parts[0]; + this.minor = parts.length > 1 ? parts[1] : 0; + this.bugfix = parts.length > 2 ? parts[2] : 0; + this.build = parts.length > 3 ? parts[3] : 0; + + Assert.isTrue(major >= 0, "Major version must be greater or equal zero!"); + Assert.isTrue(minor >= 0, "Minor version must be greater or equal zero!"); + Assert.isTrue(bugfix >= 0, "Bugfix version must be greater or equal zero!"); + Assert.isTrue(build >= 0, "Build version must be greater or equal zero!"); + } + + /** + * Parses the given string representation of a version into a {@link Version} object. + * + * @param version must not be {@literal null} or empty. + * @return + */ + public static Version parse(String version) { + + Assert.hasText(version, "Version must not be null o empty!"); + + String[] parts = version.trim().split("\\."); + int[] intParts = new int[parts.length]; + + for (int i = 0; i < parts.length; i++) { + + String input = i == parts.length - 1 ? parts[i].replaceAll("\\D.*", "") : parts[i]; + + if (StringUtils.hasText(input)) { + try { + intParts[i] = Integer.parseInt(input); + } catch (IllegalArgumentException o_O) { + throw new IllegalArgumentException(String.format(VERSION_PARSE_ERROR, input, version), o_O); + } + } + } + + return new Version(intParts); + } + + /** + * Returns whether the current {@link Version} is greater (newer) than the given one. + * + * @param version + * @return + */ + public boolean isGreaterThan(Version version) { + return compareTo(version) > 0; + } + + /** + * Returns whether the current {@link Version} is greater (newer) or the same as the given one. + * + * @param version + * @return + */ + boolean isGreaterThanOrEqualTo(Version version) { + return compareTo(version) >= 0; + } + + /** + * Returns whether the current {@link Version} is the same as the given one. + * + * @param version + * @return + */ + public boolean is(Version version) { + return equals(version); + } + + /** + * Returns whether the current {@link Version} is less (older) than the given one. + * + * @param version + * @return + */ + public boolean isLessThan(Version version) { + return compareTo(version) < 0; + } + + /** + * Returns whether the current {@link Version} is less (older) or equal to the current one. 
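As a usage sketch for RedisConditions and its Version value object (a hypothetical helper; a real test would normally obtain its connection from the test extension rather than creating a client inline, and imports are omitted):

// Hypothetical helper method inside a test class.
static void checkServerCapabilities() {
    RedisClient client = RedisClient.create(RedisURI.create("localhost", 6379));
    StatefulRedisConnection<String, String> connection = client.connect();

    RedisConditions conditions = RedisConditions.of(connection);

    // The same checks EnabledOnCommandCondition performs, usable directly in test code.
    boolean hasXadd = conditions.hasCommand("XADD");
    boolean isAtLeastRedis5 = conditions.hasVersionGreaterOrEqualsTo("5.0");

    // Version is a small value object and can also be compared explicitly.
    RedisConditions.Version required = RedisConditions.Version.parse("5.0.7");
    boolean newEnough = conditions.getRedisVersion().isGreaterThan(required);

    connection.close();
    client.shutdown();
}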
+ * + * @param version + * @return + */ + public boolean isLessThanOrEqualTo(Version version) { + return compareTo(version) <= 0; + } + + /* + * (non-Javadoc) + * + * @see java.lang.Comparable#compareTo(java.lang.Object) + */ + public int compareTo(Version that) { + + if (that == null) { + return 1; + } + + if (major != that.major) { + return major - that.major; + } + + if (minor != that.minor) { + return minor - that.minor; + } + + if (bugfix != that.bugfix) { + return bugfix - that.bugfix; + } + + if (build != that.build) { + return build - that.build; + } + + return 0; + } + + /* + * (non-Javadoc) + * + * @see java.lang.Object#equals(java.lang.Object) + */ + @Override + public boolean equals(Object obj) { + + if (this == obj) { + return true; + } + + if (!(obj instanceof Version)) { + return false; + } + + Version that = (Version) obj; + + return this.major == that.major && this.minor == that.minor && this.bugfix == that.bugfix + && this.build == that.build; + } + + /* + * (non-Javadoc) + * + * @see java.lang.Object#hashCode() + */ + @Override + public int hashCode() { + + int result = 17; + result += 31 * major; + result += 31 * minor; + result += 31 * bugfix; + result += 31 * build; + return result; + } + + /* + * (non-Javadoc) + * + * @see java.lang.Object#toString() + */ + @Override + public String toString() { + + List digits = new ArrayList<>(); + digits.add(major); + digits.add(minor); + + if (build != 0 || bugfix != 0) { + digits.add(bugfix); + } + + if (build != 0) { + digits.add(build); + } + + return StringUtils.collectionToDelimitedString(digits, "."); + } + } +} diff --git a/src/test/java/io/lettuce/test/resource/DefaultRedisClient.java b/src/test/java/io/lettuce/test/resource/DefaultRedisClient.java new file mode 100644 index 0000000000..8feefc245e --- /dev/null +++ b/src/test/java/io/lettuce/test/resource/DefaultRedisClient.java @@ -0,0 +1,52 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.test.resource; + +import java.util.concurrent.TimeUnit; + +import io.lettuce.core.RedisClient; +import io.lettuce.core.RedisURI; +import io.lettuce.test.settings.TestSettings; + +/** + * @author Mark Paluch + */ +public class DefaultRedisClient { + + private static final DefaultRedisClient instance = new DefaultRedisClient(); + + private RedisClient redisClient; + + private DefaultRedisClient() { + redisClient = RedisClient.create(RedisURI.Builder.redis(TestSettings.host(), TestSettings.port()).build()); + Runtime.getRuntime().addShutdownHook(new Thread() { + @Override + public void run() { + FastShutdown.shutdown(redisClient); + } + }); + } + + /** + * Do not close the client. + * + * @return the default redis client for the tests. 
+ */ + public static RedisClient get() { + instance.redisClient.setDefaultTimeout(60, TimeUnit.SECONDS); + return instance.redisClient; + } +} diff --git a/src/test/java/io/lettuce/test/resource/DefaultRedisClusterClient.java b/src/test/java/io/lettuce/test/resource/DefaultRedisClusterClient.java new file mode 100644 index 0000000000..ec2f65bd9d --- /dev/null +++ b/src/test/java/io/lettuce/test/resource/DefaultRedisClusterClient.java @@ -0,0 +1,52 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.test.resource; + +import io.lettuce.core.RedisURI; +import io.lettuce.core.cluster.ClusterClientOptions; +import io.lettuce.core.cluster.RedisClusterClient; +import io.lettuce.test.settings.TestSettings; + +/** + * @author Mark Paluch + */ +public class DefaultRedisClusterClient { + + private static final DefaultRedisClusterClient instance = new DefaultRedisClusterClient(); + + private RedisClusterClient redisClient; + + private DefaultRedisClusterClient() { + redisClient = RedisClusterClient.create(RedisURI.Builder.redis(TestSettings.host(), TestSettings.port(900)) + .withClientName("my-client").build()); + Runtime.getRuntime().addShutdownHook(new Thread() { + @Override + public void run() { + FastShutdown.shutdown(redisClient); + } + }); + } + + /** + * Do not close the client. + * + * @return the default redis client for the tests. + */ + public static RedisClusterClient get() { + instance.redisClient.setOptions(ClusterClientOptions.create()); + return instance.redisClient; + } +} diff --git a/src/test/java/io/lettuce/test/resource/FastShutdown.java b/src/test/java/io/lettuce/test/resource/FastShutdown.java new file mode 100644 index 0000000000..6fa8fd5b24 --- /dev/null +++ b/src/test/java/io/lettuce/test/resource/FastShutdown.java @@ -0,0 +1,45 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.test.resource; + +import java.util.concurrent.TimeUnit; + +import io.lettuce.core.AbstractRedisClient; +import io.lettuce.core.resource.ClientResources; + +/** + * @author Mark Paluch + */ +public class FastShutdown { + + /** + * Shut down a {@link AbstractRedisClient} with a timeout of 10ms. 
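A short, hypothetical sketch of how tests are expected to use these shared clients: the shared instance is never closed by a test, while clients created locally for a single test are disposed through FastShutdown (shown next). TestSettings is introduced later in this patch.

// Hypothetical test method body.
RedisClient shared = DefaultRedisClient.get();            // JVM-wide client, do not close
try (StatefulRedisConnection<String, String> connection = shared.connect()) {
    connection.sync().ping();
}

// A client created just for one test is discarded with a very short shutdown timeout.
RedisClient local = RedisClient.create(RedisURI.create(TestSettings.host(), TestSettings.port()));
FastShutdown.shutdown(local);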
+ * + * @param redisClient + */ + public static void shutdown(AbstractRedisClient redisClient) { + redisClient.shutdown(0, 10, TimeUnit.MILLISECONDS); + } + + /** + * Shut down a {@link ClientResources} client with a timeout of 10ms. + * + * @param clientResources + */ + public static void shutdown(ClientResources clientResources) { + clientResources.shutdown(0, 10, TimeUnit.MILLISECONDS); + } +} diff --git a/src/test/java/io/lettuce/test/resource/TestClientResources.java b/src/test/java/io/lettuce/test/resource/TestClientResources.java new file mode 100644 index 0000000000..fd79341384 --- /dev/null +++ b/src/test/java/io/lettuce/test/resource/TestClientResources.java @@ -0,0 +1,68 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.test.resource; + +import java.util.concurrent.TimeUnit; + +import io.lettuce.core.resource.ClientResources; +import io.lettuce.core.resource.DefaultClientResources; + +/** + * Client-Resources suitable for testing. Uses {@link TestEventLoopGroupProvider} to preserve the event loop + * groups between tests. Every time a new {@link TestClientResources} instance is created, shutdown hook is added + * {@link Runtime#addShutdownHook(Thread)}. + * + * @author Mark Paluch + */ +public class TestClientResources { + + private static final TestClientResources instance = new TestClientResources(); + private ClientResources clientResources = create(); + + /** + * @return the default {@link ClientResources} instance used across multiple tests. The returned instance must not be shut + * down. + */ + public static ClientResources get() { + return instance.clientResources; + } + + /** + * Creates a new {@link ClientResources} instance and registers a shutdown hook to de-allocate the instance upon JVM + * shutdown. + * + * @return a new {@link ClientResources} instance. + */ + public static ClientResources create() { + + final DefaultClientResources resources = DefaultClientResources.builder() + .eventLoopGroupProvider(new TestEventLoopGroupProvider()).build(); + + Runtime.getRuntime().addShutdownHook(new Thread() { + @Override + public void run() { + try { + resources.shutdown(100, 100, TimeUnit.MILLISECONDS).get(10, TimeUnit.SECONDS); + } catch (Exception e) { + e.printStackTrace(); + } + } + }); + + return resources; + } + +} diff --git a/src/test/java/io/lettuce/test/resource/TestEventLoopGroupProvider.java b/src/test/java/io/lettuce/test/resource/TestEventLoopGroupProvider.java new file mode 100644 index 0000000000..d0e510edad --- /dev/null +++ b/src/test/java/io/lettuce/test/resource/TestEventLoopGroupProvider.java @@ -0,0 +1,56 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
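To illustrate the intended use of TestClientResources (a sketch only; the PING round trip is a placeholder workload): the shared ClientResources instance is handed to client instances so that event loop groups survive across tests, and only the client itself is shut down afterwards.

ClientResources resources = TestClientResources.get();    // shared, must not be shut down
RedisClient client = RedisClient.create(resources, RedisURI.create(TestSettings.host(), TestSettings.port()));

try (StatefulRedisConnection<String, String> connection = client.connect()) {
    connection.sync().ping();
}

FastShutdown.shutdown(client);                             // disposes the client, not the resources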
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.test.resource; + +import java.util.concurrent.TimeUnit; + +import io.lettuce.core.resource.DefaultEventLoopGroupProvider; +import io.netty.util.concurrent.DefaultPromise; +import io.netty.util.concurrent.EventExecutorGroup; +import io.netty.util.concurrent.ImmediateEventExecutor; +import io.netty.util.concurrent.Promise; + +/** + * A {@link io.lettuce.core.resource.EventLoopGroupProvider} suitable for testing. Preserves the event loop groups between + * tests. Every time a new {@link TestEventLoopGroupProvider} instance is created, shutdown hook is added + * {@link Runtime#addShutdownHook(Thread)}. + * + * @author Mark Paluch + */ +class TestEventLoopGroupProvider extends DefaultEventLoopGroupProvider { + + public TestEventLoopGroupProvider() { + super(10); + Runtime.getRuntime().addShutdownHook(new Thread() { + @Override + public void run() { + try { + TestEventLoopGroupProvider.this.shutdown(100, 100, TimeUnit.MILLISECONDS).get(10, TimeUnit.SECONDS); + } catch (Exception e) { + e.printStackTrace(); + } + } + }); + } + + @Override + public Promise release(EventExecutorGroup eventLoopGroup, long quietPeriod, long timeout, TimeUnit unit) { + DefaultPromise result = new DefaultPromise<>(ImmediateEventExecutor.INSTANCE); + result.setSuccess(true); + + return result; + } +} diff --git a/src/test/java/io/lettuce/test/server/MockTcpServer.java b/src/test/java/io/lettuce/test/server/MockTcpServer.java new file mode 100644 index 0000000000..1d4374eaba --- /dev/null +++ b/src/test/java/io/lettuce/test/server/MockTcpServer.java @@ -0,0 +1,92 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.test.server; + +import java.util.ArrayList; +import java.util.List; +import java.util.concurrent.TimeUnit; +import java.util.function.Supplier; + +import io.netty.bootstrap.ServerBootstrap; +import io.netty.channel.*; +import io.netty.channel.nio.NioEventLoopGroup; +import io.netty.channel.socket.SocketChannel; +import io.netty.channel.socket.nio.NioServerSocketChannel; +import io.netty.util.concurrent.DefaultThreadFactory; + +/** + * Tiny netty server to generate a response. 
+ * + * @author Mark Paluch + */ +public class MockTcpServer { + + private EventLoopGroup bossGroup; + private EventLoopGroup workerGroup; + private Channel channel; + private List> handlers = new ArrayList<>(); + + public void addHandler(Supplier supplier) { + handlers.add(supplier); + } + + public void initialize(int port) throws InterruptedException { + + bossGroup = Resources.bossGroup; + workerGroup = Resources.workerGroup; + + ServerBootstrap b = new ServerBootstrap(); + b.group(bossGroup, workerGroup).channel(NioServerSocketChannel.class).option(ChannelOption.SO_BACKLOG, 100) + .childHandler(new ChannelInitializer() { + @Override + public void initChannel(SocketChannel ch) { + ChannelPipeline p = ch.pipeline(); + // p.addLast(new LoggingHandler(LogLevel.INFO)); + + for (Supplier handler : handlers) { + p.addLast(handler.get()); + } + } + }); + + // Start the server. + ChannelFuture f = b.bind(port).sync(); + + channel = f.channel(); + } + + public void shutdown() { + channel.close(); + } + + private static class Resources { + + private static final EventLoopGroup bossGroup; + private static final EventLoopGroup workerGroup; + + static { + bossGroup = new NioEventLoopGroup(1, new DefaultThreadFactory(NioEventLoopGroup.class, true)); + workerGroup = new NioEventLoopGroup(5, new DefaultThreadFactory(NioEventLoopGroup.class, true)); + + Runtime.getRuntime().addShutdownHook(new Thread(() -> { + bossGroup.shutdownGracefully(0, 0, TimeUnit.MILLISECONDS); + workerGroup.shutdownGracefully(0, 0, TimeUnit.MILLISECONDS); + + }, "MockRedisServer-shutdown")); + } + + } +} diff --git a/src/test/java/io/lettuce/test/server/RandomResponseServer.java b/src/test/java/io/lettuce/test/server/RandomResponseServer.java new file mode 100644 index 0000000000..cc2f3f673d --- /dev/null +++ b/src/test/java/io/lettuce/test/server/RandomResponseServer.java @@ -0,0 +1,28 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.test.server; + +/** + * Tiny netty server to generate random base64 data on message reception. + * + * @author Mark Paluch + */ +public class RandomResponseServer extends MockTcpServer { + + public RandomResponseServer() { + addHandler(RandomServerHandler::new); + } +} diff --git a/src/test/java/io/lettuce/test/server/RandomServerHandler.java b/src/test/java/io/lettuce/test/server/RandomServerHandler.java new file mode 100644 index 0000000000..260840ee70 --- /dev/null +++ b/src/test/java/io/lettuce/test/server/RandomServerHandler.java @@ -0,0 +1,67 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
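A minimal, hypothetical sketch of a test using RandomResponseServer (which builds on MockTcpServer above); the port number and the assertion step are placeholders:

@Test
void clientCopesWithGarbageResponses() throws Exception {

    RandomResponseServer server = new RandomResponseServer();
    server.initialize(16379); // arbitrary free port for the mock server

    try {
        // point a client at localhost:16379 and assert how it handles
        // responses that are not valid RESP
    } finally {
        server.shutdown();
    }
}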
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.test.server; + +import java.util.Arrays; +import java.util.Random; + +import io.netty.buffer.ByteBuf; +import io.netty.channel.ChannelHandlerContext; +import io.netty.channel.ChannelInboundHandlerAdapter; + +/** + * Handler to generate random base64 data. + */ +class RandomServerHandler extends ChannelInboundHandlerAdapter { + + private final int count; + + public RandomServerHandler() { + + int count; + do { + count = new Random().nextInt(50); + } while (count < 1); + + this.count = count; + + } + + @Override + public void channelRead(ChannelHandlerContext ctx, Object msg) { + + byte[] response = new byte[count]; + + Arrays.fill(response, "A".getBytes()[0]); + + ByteBuf buf = ctx.alloc().heapBuffer(response.length); + + ByteBuf encoded = buf.writeBytes(response); + ctx.writeAndFlush(encoded); + } + + @Override + public void channelReadComplete(ChannelHandlerContext ctx) { + ctx.flush(); + } + + @Override + public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) { + // Close the connection when an exception is raised. + cause.printStackTrace(); + ctx.close(); + } +} diff --git a/src/test/java/io/lettuce/test/settings/TestSettings.java b/src/test/java/io/lettuce/test/settings/TestSettings.java new file mode 100644 index 0000000000..e240a97528 --- /dev/null +++ b/src/test/java/io/lettuce/test/settings/TestSettings.java @@ -0,0 +1,155 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.test.settings; + +import java.net.Inet4Address; +import java.net.InetAddress; +import java.net.UnknownHostException; + +/** + * This class provides settings used while testing. You can override these using system properties. + * + * @author Mark Paluch + * @author Tugdual Grall + */ +public class TestSettings { + + private TestSettings() { + } + + /** + * + * @return hostname of your redis instance. Defaults to {@literal localhost}. Can be overriden with + * {@code -Dhost=YourHostName} + */ + public static String host() { + return System.getProperty("host", "localhost"); + } + + /** + * + * @return unix domain socket name of your redis instance. Defaults to {@literal work/socket-6479}. Can be overriden with + * {@code -Ddomainsocket=YourSocket} + */ + public static String socket() { + return System.getProperty("domainsocket", "work/socket-6479"); + } + + /** + * + * @return unix domain socket name of your redis sentinel instance. Defaults to {@literal work/socket-26379}. 
Can be + * overriden with {@code -Dsentineldomainsocket=YourSocket} + */ + public static String sentinelSocket() { + return System.getProperty("sentineldomainsocket", "work/socket-26379"); + } + + /** + * + * @return resolved address of {@link #host()} + * @throws IllegalStateException when hostname cannot be resolved + */ + public static String hostAddr() { + try { + InetAddress[] allByName = InetAddress.getAllByName(host()); + for (InetAddress inetAddress : allByName) { + if (inetAddress instanceof Inet4Address) { + return inetAddress.getHostAddress(); + } + } + return InetAddress.getByName(host()).getHostAddress(); + } catch (UnknownHostException e) { + throw new IllegalStateException(e); + } + } + + /** + * + * @return default username of your redis instance. + */ + public static String username() { + return "default"; + } + + /** + * + * @return password of your redis instance. Defaults to {@literal passwd}. Can be overridden with + * {@code -Dpassword=YourPassword} + */ + public static String password() { + return System.getProperty("password", "passwd"); + } + + /** + * + * @return password of a second user your redis instance. Defaults to {@literal lettuceTest}. Can be overridden with + * {@code -Dacl.username=SampleUsername} + */ + public static String aclUsername() { + return System.getProperty("acl.username", "lettuceTest"); + } + + /** + * + * @return password of a second user of your redis instance. Defaults to {@literal lettuceTestPasswd}. Can be overridden + * with {@code -Dacl.password=SamplePassword} + */ + public static String aclPassword() { + return System.getProperty("acl.password", "lettuceTestPasswd"); + } + + /** + * + * @return port of your redis instance. Defaults to {@literal 6479}. Can be overriden with {@code -Dport=1234} + */ + public static int port() { + return Integer.parseInt(System.getProperty("port", "6479")); + } + + /** + * + * @return sslport of your redis instance. Defaults to {@literal 6443}. 
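As a usage sketch for TestSettings (a hypothetical helper; the resolved values depend entirely on the system properties):

// Builds the URI for the test instance, so running against another environment only
// requires e.g. -Dhost=redis.example -Dport=7000 -Dpassword=secret on the command line.
static RedisURI testUri() {
    return RedisURI.Builder.redis(TestSettings.host(), TestSettings.port())
            .withPassword(TestSettings.password())
            .build();
}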
Can be overriden with {@code -Dsslport=1234} + */ + public static int sslPort() { + return Integer.parseInt(System.getProperty("sslport", "6443")); + } + + /** + * + * @return {@link #port()} with added {@literal 500} + */ + public static int nonexistentPort() { + return port() + 500; + } + + /** + * + * @param offset + * @return {@link #port()} with added {@literal offset} + */ + public static int port(int offset) { + return port() + offset; + } + + /** + * + * @param offset + * @return {@link #sslPort()} with added {@literal offset} + */ + public static int sslPort(int offset) { + return sslPort() + offset; + } +} diff --git a/src/test/jmh/com/lambdaworks/redis/cluster/EmptyRedisChannelWriter.java b/src/test/jmh/com/lambdaworks/redis/cluster/EmptyRedisChannelWriter.java deleted file mode 100644 index 18b5a66aad..0000000000 --- a/src/test/jmh/com/lambdaworks/redis/cluster/EmptyRedisChannelWriter.java +++ /dev/null @@ -1,40 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import com.lambdaworks.redis.RedisChannelHandler; -import com.lambdaworks.redis.RedisChannelWriter; -import com.lambdaworks.redis.protocol.RedisCommand; - -/** - * @author Mark Paluch - */ -public class EmptyRedisChannelWriter implements RedisChannelWriter { - @Override - public RedisCommand write(RedisCommand command) { - return null; - } - - @Override - public void close() { - - } - - @Override - public void reset() { - - } - - @Override - public void setRedisChannelHandler(RedisChannelHandler redisChannelHandler) { - - } - - @Override - public void setAutoFlushCommands(boolean autoFlush) { - - } - - @Override - public void flushCommands() { - - } -} diff --git a/src/test/jmh/com/lambdaworks/redis/cluster/EmptyRedisClusterClient.java b/src/test/jmh/com/lambdaworks/redis/cluster/EmptyRedisClusterClient.java deleted file mode 100644 index cd08ee3a3b..0000000000 --- a/src/test/jmh/com/lambdaworks/redis/cluster/EmptyRedisClusterClient.java +++ /dev/null @@ -1,25 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import java.net.SocketAddress; -import java.util.function.Supplier; - -import com.lambdaworks.redis.RedisChannelWriter; -import com.lambdaworks.redis.RedisURI; -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.codec.RedisCodec; - -/** - * @author Mark Paluch - */ -public class EmptyRedisClusterClient extends RedisClusterClient { - private final static EmptyStatefulRedisConnection CONNECTION = new EmptyStatefulRedisConnection(); - - public EmptyRedisClusterClient(RedisURI initialUri) { - super(initialUri); - } - - StatefulRedisConnection connectToNode(RedisCodec codec, String nodeId, - RedisChannelWriter clusterWriter, final Supplier socketAddressSupplier) { - return CONNECTION; - } -} diff --git a/src/test/jmh/com/lambdaworks/redis/cluster/EmptyStatefulRedisConnection.java b/src/test/jmh/com/lambdaworks/redis/cluster/EmptyStatefulRedisConnection.java deleted file mode 100644 index 2a67c5c455..0000000000 --- a/src/test/jmh/com/lambdaworks/redis/cluster/EmptyStatefulRedisConnection.java +++ /dev/null @@ -1,85 +0,0 @@ -package com.lambdaworks.redis.cluster; - -import com.lambdaworks.redis.ClientOptions; -import com.lambdaworks.redis.api.StatefulRedisConnection; -import com.lambdaworks.redis.api.async.RedisAsyncCommands; -import com.lambdaworks.redis.api.rx.RedisReactiveCommands; -import com.lambdaworks.redis.api.sync.RedisCommands; -import com.lambdaworks.redis.protocol.RedisCommand; - -import java.util.concurrent.TimeUnit; - -/** - * @author Mark Paluch - */ -public class 
EmptyStatefulRedisConnection implements StatefulRedisConnection { - @Override - public boolean isMulti() { - return false; - } - - @Override - public RedisCommands sync() { - return null; - } - - @Override - public RedisAsyncCommands async() { - return null; - } - - @Override - public RedisReactiveCommands reactive() { - return null; - } - - @Override - public void setTimeout(long timeout, TimeUnit unit) { - - } - - @Override - public TimeUnit getTimeoutUnit() { - return null; - } - - @Override - public long getTimeout() { - return 0; - } - - @Override - public void close() { - - } - - @Override - public boolean isOpen() { - return false; - } - - @Override - public ClientOptions getOptions() { - return null; - } - - @Override - public void reset() { - - } - - @Override - public void setAutoFlushCommands(boolean autoFlush) { - - } - - @Override - public void flushCommands() { - - } - - @Override - public RedisCommand dispatch(RedisCommand command) { - return null; - } -} diff --git a/src/test/jmh/com/lambdaworks/redis/codec/JmhMain.java b/src/test/jmh/com/lambdaworks/redis/codec/JmhMain.java deleted file mode 100644 index 8819aa6c97..0000000000 --- a/src/test/jmh/com/lambdaworks/redis/codec/JmhMain.java +++ /dev/null @@ -1,41 +0,0 @@ -package com.lambdaworks.redis.codec; - -import java.io.IOException; -import java.util.concurrent.TimeUnit; - -import org.openjdk.jmh.annotations.Mode; -import org.openjdk.jmh.runner.Runner; -import org.openjdk.jmh.runner.RunnerException; -import org.openjdk.jmh.runner.options.ChainedOptionsBuilder; -import org.openjdk.jmh.runner.options.OptionsBuilder; -import org.openjdk.jmh.runner.options.TimeValue; - -/** - * Manual JMH Test Launcher. - * - * @author Mark Paluch - */ -public class JmhMain { - - public static void main(String... 
args) throws IOException, RunnerException { - - runCommandBenchmark(); - } - - private static void runCommandBenchmark() throws RunnerException { - - new Runner(prepareOptions().mode(Mode.AverageTime) // - .timeUnit(TimeUnit.NANOSECONDS) // - .include(".*CodecBenchmark.*") // - .build()).run(); - } - - private static ChainedOptionsBuilder prepareOptions() { - return new OptionsBuilder()// - .forks(1) // - .warmupIterations(5)// - .threads(1) // - .measurementIterations(5) // - .timeout(TimeValue.seconds(2)); - } -} diff --git a/src/test/jmh/com/lambdaworks/redis/codec/StringCodecBenchmark.java b/src/test/jmh/com/lambdaworks/redis/codec/StringCodecBenchmark.java deleted file mode 100644 index 24c617ea98..0000000000 --- a/src/test/jmh/com/lambdaworks/redis/codec/StringCodecBenchmark.java +++ /dev/null @@ -1,77 +0,0 @@ -package com.lambdaworks.redis.codec; - -import java.nio.ByteBuffer; -import java.nio.charset.StandardCharsets; - -import org.openjdk.jmh.annotations.Benchmark; -import org.openjdk.jmh.annotations.Scope; -import org.openjdk.jmh.annotations.Setup; -import org.openjdk.jmh.annotations.State; -import org.openjdk.jmh.infra.Blackhole; - -import com.lambdaworks.redis.protocol.LettuceCharsets; - -import io.netty.buffer.ByteBuf; -import io.netty.buffer.Unpooled; - -/** - * @author Mark Paluch - */ -public class StringCodecBenchmark { - - @Benchmark - public void encodeUtf8Unpooled(Input input) { - input.blackhole.consume(input.utf8Codec.encodeKey(input.teststring)); - } - - @Benchmark - public void encodeUtf8ToBuf(Input input) { - input.byteBuf.clear(); - input.utf8Codec.encode(input.teststring, input.byteBuf); - } - - @Benchmark - public void encodeUtf8PlainStringToBuf(Input input) { - input.byteBuf.clear(); - input.utf8Codec.encode(input.teststringPlain, input.byteBuf); - } - - @Benchmark - public void encodeAsciiToBuf(Input input) { - input.byteBuf.clear(); - input.asciiCodec.encode(input.teststringPlain, input.byteBuf); - } - - @Benchmark - public void encodeIsoToBuf(Input input) { - input.byteBuf.clear(); - input.isoCodec.encode(input.teststringPlain, input.byteBuf); - } - - @Benchmark - public void decodeUtf8Unpooled(Input input) { - input.input.rewind(); - input.blackhole.consume(input.utf8Codec.decodeKey(input.input)); - } - - @State(Scope.Thread) - public static class Input { - - Blackhole blackhole; - StringCodec asciiCodec = new StringCodec(LettuceCharsets.ASCII); - StringCodec utf8Codec = new StringCodec(LettuceCharsets.UTF8); - StringCodec isoCodec = new StringCodec(StandardCharsets.ISO_8859_1); - - String teststring = "hello üäü~∑†®†ª€∂‚¶¢ Wørld"; - String teststringPlain = "hello uufadsfasdfadssdfadfs"; - ByteBuffer input = ByteBuffer.wrap(teststring.getBytes(LettuceCharsets.UTF8)); - - ByteBuf byteBuf = Unpooled.buffer(512); - - @Setup - public void setup(Blackhole bh) { - blackhole = bh; - input.flip(); - } - } -} diff --git a/src/test/jmh/com/lambdaworks/redis/codec/Utf8StringCodecBenchmark.java b/src/test/jmh/com/lambdaworks/redis/codec/Utf8StringCodecBenchmark.java deleted file mode 100644 index abc8de1802..0000000000 --- a/src/test/jmh/com/lambdaworks/redis/codec/Utf8StringCodecBenchmark.java +++ /dev/null @@ -1,44 +0,0 @@ -package com.lambdaworks.redis.codec; - -import java.nio.ByteBuffer; - -import org.openjdk.jmh.annotations.Benchmark; -import org.openjdk.jmh.annotations.Scope; -import org.openjdk.jmh.annotations.Setup; -import org.openjdk.jmh.annotations.State; -import org.openjdk.jmh.infra.Blackhole; - -import com.lambdaworks.redis.protocol.LettuceCharsets; - 
-/** - * @author Mark Paluch - */ -public class Utf8StringCodecBenchmark { - - @Benchmark - public void encodeUnpooled(Input input) { - input.blackhole.consume(input.codec.encodeKey(input.teststring)); - } - - @Benchmark - public void decodeUnpooled(Input input) { - input.input.rewind(); - input.blackhole.consume(input.codec.decodeKey(input.input)); - } - - @State(Scope.Thread) - public static class Input { - - Blackhole blackhole; - Utf8StringCodec codec = new Utf8StringCodec(); - - String teststring = "hello üäü~∑†®†ª€∂‚¶¢ Wørld"; - ByteBuffer input = ByteBuffer.wrap(teststring.getBytes(LettuceCharsets.UTF8)); - - @Setup - public void setup(Blackhole bh) { - blackhole = bh; - input.flip(); - } - } -} diff --git a/src/test/jmh/com/lambdaworks/redis/protocol/CommandBenchmark.java b/src/test/jmh/com/lambdaworks/redis/protocol/CommandBenchmark.java deleted file mode 100644 index 03a489848d..0000000000 --- a/src/test/jmh/com/lambdaworks/redis/protocol/CommandBenchmark.java +++ /dev/null @@ -1,75 +0,0 @@ -package com.lambdaworks.redis.protocol; - -import java.io.IOException; -import java.io.InputStream; -import java.io.OutputStream; -import java.nio.ByteBuffer; -import java.nio.ByteOrder; -import java.nio.channels.GatheringByteChannel; -import java.nio.channels.ScatteringByteChannel; -import java.nio.charset.Charset; - -import com.lambdaworks.redis.protocol.CommandArgs.ExperimentalByteArrayCodec; -import org.openjdk.jmh.annotations.*; - -import com.lambdaworks.redis.codec.ByteArrayCodec; -import com.lambdaworks.redis.codec.RedisCodec; -import com.lambdaworks.redis.codec.Utf8StringCodec; -import com.lambdaworks.redis.output.ValueOutput; - -import io.netty.buffer.ByteBuf; -import io.netty.buffer.ByteBufAllocator; -import io.netty.buffer.ByteBufProcessor; - -/** - * Benchmark for {@link Command}. Test cases: - *
- * <ul>
- * <li>Create commands using String and ByteArray codecs</li>
- * <li>Encode commands using String and ByteArray codecs</li>
- * </ul>
    - * - * @author Mark Paluch - */ -@State(Scope.Benchmark) -public class CommandBenchmark { - - private final static ByteArrayCodec BYTE_ARRAY_CODEC = new ByteArrayCodec(); - private final static ExperimentalByteArrayCodec BYTE_ARRAY_CODEC2 = ExperimentalByteArrayCodec.INSTANCE; - private final static Utf8StringCodec STRING_CODEC = new Utf8StringCodec(); - private final static EmptyByteBuf DUMMY_BYTE_BUF = new EmptyByteBuf(); - - private final static String KEY = "key"; - private final static byte[] BYTE_KEY = "key".getBytes(); - - @Benchmark - public void createCommandUsingByteArrayCodec() { - createCommand(BYTE_KEY, BYTE_ARRAY_CODEC); - } - - @Benchmark - public void createCommandUsingStringCodec() { - createCommand(KEY, STRING_CODEC); - } - - @Benchmark - public void encodeCommandUsingByteArrayCodec() { - createCommand(BYTE_KEY, BYTE_ARRAY_CODEC).encode(DUMMY_BYTE_BUF); - } - - @Benchmark - public void encodeCommandUsingByteArrayCodec2() { - createCommand(BYTE_KEY, BYTE_ARRAY_CODEC2).encode(DUMMY_BYTE_BUF); - } - - @Benchmark - public void encodeCommandUsingStringCodec() { - createCommand(KEY, STRING_CODEC).encode(DUMMY_BYTE_BUF); - } - - private Command createCommand(K key, RedisCodec codec) { - Command command = new Command(CommandType.GET, new ValueOutput<>(codec), new CommandArgs(codec).addKey(key)); - return command; - } - - -} diff --git a/src/test/jmh/com/lambdaworks/redis/protocol/CommandHandlerBenchmark.java b/src/test/jmh/com/lambdaworks/redis/protocol/CommandHandlerBenchmark.java deleted file mode 100644 index cef5462fd1..0000000000 --- a/src/test/jmh/com/lambdaworks/redis/protocol/CommandHandlerBenchmark.java +++ /dev/null @@ -1,94 +0,0 @@ -package com.lambdaworks.redis.protocol; - -import java.util.ArrayDeque; - -import org.openjdk.jmh.annotations.*; - -import com.lambdaworks.redis.ClientOptions; -import com.lambdaworks.redis.codec.ByteArrayCodec; -import com.lambdaworks.redis.output.ValueOutput; - -import io.netty.channel.ChannelFuture; -import io.netty.channel.ChannelPromise; -import io.netty.channel.embedded.EmbeddedChannel; - -/** - * Benchmark for {@link CommandHandler}. Test cases: - *
- * <ul>
- * <li>user command writes</li>
- * <li>netty (in-eventloop) writes</li>
- * </ul>
    - * - * @author Mark Paluch - */ -@State(Scope.Benchmark) -public class CommandHandlerBenchmark { - - private final static ByteArrayCodec CODEC = new ByteArrayCodec(); - private final static ClientOptions CLIENT_OPTIONS = ClientOptions.create(); - private final static EmptyContext CHANNEL_HANDLER_CONTEXT = new EmptyContext(); - private final static byte[] KEY = "key".getBytes(); - private final static ChannelFuture EMPTY = new EmptyFuture(); - - private CommandHandler commandHandler; - private Command command; - - @Setup - public void setup() { - - commandHandler = new CommandHandler(CLIENT_OPTIONS, EmptyClientResources.INSTANCE, new ArrayDeque<>(512)); - command = new Command(CommandType.GET, new ValueOutput<>(CODEC), new CommandArgs(CODEC).addKey(KEY)); - - commandHandler.setState(CommandHandler.LifecycleState.CONNECTED); - - commandHandler.channel = new MyLocalChannel(); - } - - @TearDown(Level.Iteration) - public void tearDown() { - commandHandler.reset(); - } - - @Benchmark - public void measureUserWrite() { - commandHandler.write(command); - } - - @Benchmark - public void measureNettyWrite() throws Exception { - commandHandler.write(CHANNEL_HANDLER_CONTEXT, command, null); - } - - private final static class MyLocalChannel extends EmbeddedChannel { - @Override - public boolean isActive() { - return true; - } - - @Override - public boolean isOpen() { - return true; - } - - @Override - public ChannelFuture write(Object msg) { - return EMPTY; - } - - @Override - public ChannelFuture write(Object msg, ChannelPromise promise) { - return promise; - } - - @Override - public ChannelFuture writeAndFlush(Object msg) { - return EMPTY; - } - - @Override - public ChannelFuture writeAndFlush(Object msg, ChannelPromise promise) { - return promise; - } - } - -} diff --git a/src/test/jmh/com/lambdaworks/redis/protocol/EmptyClientResources.java b/src/test/jmh/com/lambdaworks/redis/protocol/EmptyClientResources.java deleted file mode 100644 index 4d4d5415e6..0000000000 --- a/src/test/jmh/com/lambdaworks/redis/protocol/EmptyClientResources.java +++ /dev/null @@ -1,77 +0,0 @@ -package com.lambdaworks.redis.protocol; - -import com.lambdaworks.redis.event.DefaultEventPublisherOptions; -import com.lambdaworks.redis.event.EventBus; -import com.lambdaworks.redis.event.EventPublisherOptions; -import com.lambdaworks.redis.metrics.CommandLatencyCollector; -import com.lambdaworks.redis.resource.ClientResources; -import com.lambdaworks.redis.resource.Delay; -import com.lambdaworks.redis.resource.DnsResolver; -import com.lambdaworks.redis.resource.EventLoopGroupProvider; -import io.netty.util.concurrent.*; - -import java.util.concurrent.TimeUnit; - -/** - * @author Mark Paluch - */ -public class EmptyClientResources implements ClientResources { - - public static final DefaultEventPublisherOptions PUBLISHER_OPTIONS = DefaultEventPublisherOptions.disabled(); - public static final EmptyClientResources INSTANCE = new EmptyClientResources(); - - @Override - public Future shutdown() { - return new SucceededFuture<>(GlobalEventExecutor.INSTANCE, true); - } - - @Override - public Future shutdown(long quietPeriod, long timeout, TimeUnit timeUnit) { - return new SucceededFuture<>(GlobalEventExecutor.INSTANCE, true); - } - - @Override - public EventLoopGroupProvider eventLoopGroupProvider() { - return null; - } - - @Override - public EventExecutorGroup eventExecutorGroup() { - return null; - } - - @Override - public int ioThreadPoolSize() { - return 0; - } - - @Override - public int computationThreadPoolSize() { - return 
0; - } - - @Override - public EventBus eventBus() { - return null; - } - - @Override - public EventPublisherOptions commandLatencyPublisherOptions() { - return PUBLISHER_OPTIONS; - } - - @Override - public CommandLatencyCollector commandLatencyCollector() { - return null; - } - - @Override - public DnsResolver dnsResolver() { - return null; - } - - @Override - public Delay reconnectDelay() { - return null; - } -} diff --git a/src/test/jmh/com/lambdaworks/redis/protocol/EmptyFuture.java b/src/test/jmh/com/lambdaworks/redis/protocol/EmptyFuture.java deleted file mode 100644 index c827eeeae6..0000000000 --- a/src/test/jmh/com/lambdaworks/redis/protocol/EmptyFuture.java +++ /dev/null @@ -1,131 +0,0 @@ -package com.lambdaworks.redis.protocol; - -import io.netty.channel.Channel; -import io.netty.channel.ChannelFuture; -import io.netty.util.concurrent.Future; -import io.netty.util.concurrent.GenericFutureListener; - -import java.util.concurrent.ExecutionException; -import java.util.concurrent.TimeUnit; -import java.util.concurrent.TimeoutException; - -/** - * @author Mark Paluch - */ -class EmptyFuture implements ChannelFuture { - - @Override - public Channel channel() { - return null; - } - - @Override - public ChannelFuture addListener(GenericFutureListener> listener) { - return null; - } - - @Override - public ChannelFuture addListeners(GenericFutureListener>... listeners) { - return null; - } - - @Override - public ChannelFuture removeListener(GenericFutureListener> listener) { - return null; - } - - @Override - public ChannelFuture removeListeners(GenericFutureListener>... listeners) { - return null; - } - - @Override - public ChannelFuture sync() throws InterruptedException { - return null; - } - - @Override - public ChannelFuture syncUninterruptibly() { - return null; - } - - @Override - public ChannelFuture await() throws InterruptedException { - return null; - } - - @Override - public ChannelFuture awaitUninterruptibly() { - return null; - } - - @Override - public boolean isVoid() { - return false; - } - - @Override - public boolean isSuccess() { - return false; - } - - @Override - public boolean isCancellable() { - return false; - } - - @Override - public Throwable cause() { - return null; - } - - @Override - public boolean await(long timeout, TimeUnit unit) throws InterruptedException { - return false; - } - - @Override - public boolean await(long timeoutMillis) throws InterruptedException { - return false; - } - - @Override - public boolean awaitUninterruptibly(long timeout, TimeUnit unit) { - return false; - } - - @Override - public boolean awaitUninterruptibly(long timeoutMillis) { - return false; - } - - @Override - public Void getNow() { - return null; - } - - @Override - public boolean cancel(boolean mayInterruptIfRunning) { - return false; - } - - @Override - public boolean isCancelled() { - return false; - } - - @Override - public boolean isDone() { - return false; - } - - @Override - public Void get() throws InterruptedException, ExecutionException { - return null; - } - - @Override - public Void get(long timeout, TimeUnit unit) throws InterruptedException, ExecutionException, TimeoutException { - return null; - } -} diff --git a/src/test/jmh/com/lambdaworks/redis/protocol/EmptyPromise.java b/src/test/jmh/com/lambdaworks/redis/protocol/EmptyPromise.java deleted file mode 100644 index baba2c0daa..0000000000 --- a/src/test/jmh/com/lambdaworks/redis/protocol/EmptyPromise.java +++ /dev/null @@ -1,171 +0,0 @@ -package com.lambdaworks.redis.protocol; - -import 
java.util.concurrent.ExecutionException; -import java.util.concurrent.TimeUnit; -import java.util.concurrent.TimeoutException; - -import io.netty.channel.Channel; -import io.netty.channel.ChannelPromise; -import io.netty.util.concurrent.Future; -import io.netty.util.concurrent.GenericFutureListener; - -/** - * @author Mark Paluch - */ -class EmptyPromise implements ChannelPromise{ - - @Override - public Channel channel() { - return null; - } - - @Override - public ChannelPromise setSuccess(Void result) { - return null; - } - - @Override - public boolean trySuccess(Void result) { - return false; - } - - @Override - public ChannelPromise setSuccess() { - return null; - } - - @Override - public boolean trySuccess() { - return false; - } - - @Override - public ChannelPromise setFailure(Throwable cause) { - return null; - } - - @Override - public boolean tryFailure(Throwable cause) { - return false; - } - - @Override - public boolean setUncancellable() { - return false; - } - - @Override - public boolean isSuccess() { - return false; - } - - @Override - public boolean isCancellable() { - return false; - } - - @Override - public Throwable cause() { - return null; - } - - @Override - public ChannelPromise addListener(GenericFutureListener> listener) { - return null; - } - - @Override - public ChannelPromise addListeners(GenericFutureListener>... listeners) { - return null; - } - - @Override - public ChannelPromise removeListener(GenericFutureListener> listener) { - return null; - } - - @Override - public ChannelPromise removeListeners(GenericFutureListener>... listeners) { - return null; - } - - @Override - public ChannelPromise sync() throws InterruptedException { - return null; - } - - @Override - public ChannelPromise syncUninterruptibly() { - return null; - } - - @Override - public ChannelPromise await() throws InterruptedException { - return null; - } - - @Override - public ChannelPromise awaitUninterruptibly() { - return null; - } - - @Override - public boolean isVoid() { - return false; - } - - @Override - public ChannelPromise unvoid() { - return null; - } - - @Override - public boolean await(long timeout, TimeUnit unit) throws InterruptedException { - return false; - } - - @Override - public boolean await(long timeoutMillis) throws InterruptedException { - return false; - } - - @Override - public boolean awaitUninterruptibly(long timeout, TimeUnit unit) { - return false; - } - - @Override - public boolean awaitUninterruptibly(long timeoutMillis) { - return false; - } - - @Override - public Void getNow() { - return null; - } - - @Override - public boolean cancel(boolean mayInterruptIfRunning) { - return false; - } - - @Override - public boolean isCancelled() { - return false; - } - - @Override - public boolean isDone() { - return false; - } - - @Override - public Void get() throws InterruptedException, ExecutionException { - return null; - } - - @Override - public Void get(long timeout, TimeUnit unit) throws InterruptedException, ExecutionException, TimeoutException { - return null; - } -} diff --git a/src/test/jmh/com/lambdaworks/redis/protocol/JmhMain.java b/src/test/jmh/com/lambdaworks/redis/protocol/JmhMain.java deleted file mode 100644 index 5f405f4d8b..0000000000 --- a/src/test/jmh/com/lambdaworks/redis/protocol/JmhMain.java +++ /dev/null @@ -1,75 +0,0 @@ -package com.lambdaworks.redis.protocol; - -import java.io.IOException; -import java.util.concurrent.TimeUnit; - -import org.openjdk.jmh.annotations.Mode; -import org.openjdk.jmh.runner.Runner; -import 
org.openjdk.jmh.runner.RunnerException; -import org.openjdk.jmh.runner.options.ChainedOptionsBuilder; -import org.openjdk.jmh.runner.options.OptionsBuilder; -import org.openjdk.jmh.runner.options.TimeValue; - -/** - * Manual JMH Test Launcher. - * - * @author Mark Paluch - */ -public class JmhMain { - - public static void main(String... args) throws IOException, RunnerException { - - // run selectively - // runCommandBenchmark(); - runCommandHandlerBenchmark(); - // runRedisStateMachineBenchmark(); - - // or all - // runBenchmarks(); - } - - private static void runBenchmarks() throws RunnerException { - - new Runner(prepareOptions().mode(Mode.AverageTime).timeUnit(TimeUnit.NANOSECONDS).build()).run(); - - } - - private static void runCommandBenchmark() throws RunnerException { - - new Runner( - prepareOptions().mode(Mode.AverageTime).timeUnit(TimeUnit.NANOSECONDS).include(".*CommandBenchmark.*").build()) - .run(); - - new Runner(prepareOptions().mode(Mode.Throughput).timeUnit(TimeUnit.SECONDS).include(".*CommandBenchmark.*").build()) - .run(); - } - - private static void runCommandHandlerBenchmark() throws RunnerException { - - new Runner(prepareOptions().mode(Mode.AverageTime).timeUnit(TimeUnit.NANOSECONDS).include(".*CommandHandlerBenchmark.*") - .build()).run(); - // new - // Runner(prepareOptions().mode(Mode.Throughput).timeUnit(TimeUnit.SECONDS).include(".*CommandHandlerBenchmark.*").build()).run(); - } - - private static void runCommandEncoderBenchmark() throws RunnerException { - - new Runner(prepareOptions().mode(Mode.AverageTime).timeUnit(TimeUnit.NANOSECONDS).include(".*CommandEncoderBenchmark.*") - .build()).run(); - // new - // Runner(prepareOptions().mode(Mode.Throughput).timeUnit(TimeUnit.SECONDS).include(".*CommandHandlerBenchmark.*").build()).run(); - } - - private static void runRedisStateMachineBenchmark() throws RunnerException { - - new Runner(prepareOptions().mode(Mode.AverageTime).timeUnit(TimeUnit.NANOSECONDS) - .include(".*RedisStateMachineBenchmark.*").build()).run(); - // new - // Runner(prepareOptions().mode(Mode.Throughput).timeUnit(TimeUnit.SECONDS).include(".*CommandHandlerBenchmark.*").build()).run(); - } - - private static ChainedOptionsBuilder prepareOptions() { - return new OptionsBuilder().forks(1).warmupIterations(5).threads(1).measurementIterations(5) - .timeout(TimeValue.seconds(2)); - } -} diff --git a/src/test/jmh/com/lambdaworks/redis/protocol/RedisStateMachineBenchmark.java b/src/test/jmh/com/lambdaworks/redis/protocol/RedisStateMachineBenchmark.java deleted file mode 100644 index 83f58c9def..0000000000 --- a/src/test/jmh/com/lambdaworks/redis/protocol/RedisStateMachineBenchmark.java +++ /dev/null @@ -1,79 +0,0 @@ -package com.lambdaworks.redis.protocol; - -import java.nio.ByteBuffer; -import java.util.List; - -import org.openjdk.jmh.annotations.*; - -import com.lambdaworks.redis.codec.ByteArrayCodec; -import com.lambdaworks.redis.output.ArrayOutput; - -import io.netty.buffer.ByteBuf; -import io.netty.buffer.PooledByteBufAllocator; - -/** - * @author Mark Paluch - */ -@State(Scope.Benchmark) -public class RedisStateMachineBenchmark { - - private final static ByteArrayCodec BYTE_ARRAY_CODEC = new ByteArrayCodec(); - - private final static Command> byteArrayCommand = new Command<>(CommandType.GET, - new ArrayOutput(BYTE_ARRAY_CODEC) { - - @Override - public void set(ByteBuffer bytes) { - - } - - @Override - public void multi(int count) { - - } - - @Override - public void complete(int depth) { - } - - @Override - public void set(long integer) { - } - 
}, new CommandArgs(BYTE_ARRAY_CODEC).addKey(new byte[] { 1, 2, 3, 4 })); - - private ByteBuf masterBuffer; - - private final RedisStateMachine stateMachine = new RedisStateMachine<>(); - private final byte[] payload = ("*3\r\n" + // - "$4\r\n" + // - "LLEN\r\n" + // - "$6\r\n" + // - "mylist\r\n" + // - "+QUEUED\r\n" + // - ":12\r\n").getBytes(); - - @Setup(Level.Trial) - public void setup() { - masterBuffer = PooledByteBufAllocator.DEFAULT.ioBuffer(32); - masterBuffer.writeBytes(payload); - } - - @TearDown - public void tearDown() { - masterBuffer.release(); - } - - @Benchmark - public void measureDecode() { - stateMachine.decode(masterBuffer.duplicate(), byteArrayCommand, byteArrayCommand.getOutput()); - } - - public static void main(String[] args) { - - RedisStateMachineBenchmark b = new RedisStateMachineBenchmark(); - b.setup(); - while (true) { - b.measureDecode(); - } - } -} diff --git a/src/test/jmh/io/lettuce/core/EmptyRedisChannelWriter.java b/src/test/jmh/io/lettuce/core/EmptyRedisChannelWriter.java new file mode 100644 index 0000000000..03d769ab81 --- /dev/null +++ b/src/test/jmh/io/lettuce/core/EmptyRedisChannelWriter.java @@ -0,0 +1,75 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.util.Collection; +import java.util.concurrent.CompletableFuture; + +import io.lettuce.core.protocol.ConnectionFacade; +import io.lettuce.core.protocol.EmptyClientResources; +import io.lettuce.core.protocol.RedisCommand; +import io.lettuce.core.resource.ClientResources; + +/** + * @author Mark Paluch + */ +public class EmptyRedisChannelWriter implements RedisChannelWriter { + + public static final EmptyRedisChannelWriter INSTANCE = new EmptyRedisChannelWriter(); + private static final CompletableFuture CLOSE_FUTURE = CompletableFuture.completedFuture(null); + + @Override + public RedisCommand write(RedisCommand command) { + return null; + } + + @Override + public Collection> write(Collection> redisCommands) { + return (Collection) redisCommands; + } + + @Override + public void close() { + + } + + @Override + public CompletableFuture closeAsync() { + return CLOSE_FUTURE; + } + + @Override + public void reset() { + + } + + @Override + public void setConnectionFacade(ConnectionFacade connection) { + } + + @Override + public ClientResources getClientResources() { + return EmptyClientResources.INSTANCE; + } + + @Override + public void setAutoFlushCommands(boolean autoFlush) { + } + + @Override + public void flushCommands() { + } +} diff --git a/src/test/jmh/io/lettuce/core/EmptyStatefulRedisConnection.java b/src/test/jmh/io/lettuce/core/EmptyStatefulRedisConnection.java new file mode 100644 index 0000000000..9bb64fc630 --- /dev/null +++ b/src/test/jmh/io/lettuce/core/EmptyStatefulRedisConnection.java @@ -0,0 +1,104 @@ +/* + * Copyright 2017-2020 the original author or authors. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.time.Duration; +import java.util.Collection; +import java.util.concurrent.TimeUnit; + +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.api.async.RedisAsyncCommands; +import io.lettuce.core.api.reactive.RedisReactiveCommands; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.protocol.ConnectionFacade; +import io.lettuce.core.protocol.RedisCommand; + +/** + * @author Mark Paluch + */ +public class EmptyStatefulRedisConnection extends RedisChannelHandler implements StatefulRedisConnection, ConnectionFacade { + + public static final EmptyStatefulRedisConnection INSTANCE = new EmptyStatefulRedisConnection( + EmptyRedisChannelWriter.INSTANCE); + + public EmptyStatefulRedisConnection(RedisChannelWriter writer) { + super(writer, Duration.ZERO); + } + + @Override + public boolean isMulti() { + return false; + } + + @Override + public RedisCommands sync() { + return null; + } + + @Override + public RedisAsyncCommands async() { + return null; + } + + @Override + public RedisReactiveCommands reactive() { + return null; + } + + @Override + public void close() { + } + + @Override + public boolean isOpen() { + return false; + } + + @Override + public ClientOptions getOptions() { + return null; + } + + @Override + public void activated() { + } + + @Override + public void deactivated() { + } + + @Override + public void reset() { + } + + @Override + public void setAutoFlushCommands(boolean autoFlush) { + } + + @Override + public void flushCommands() { + } + + @Override + public RedisCommand dispatch(RedisCommand command) { + return null; + } + + @Override + public Collection dispatch(Collection commands) { + return commands; + } +} diff --git a/src/test/jmh/io/lettuce/core/JmhMain.java b/src/test/jmh/io/lettuce/core/JmhMain.java new file mode 100644 index 0000000000..88e2021f72 --- /dev/null +++ b/src/test/jmh/io/lettuce/core/JmhMain.java @@ -0,0 +1,52 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
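The Empty* stubs above exist so that JMH benchmarks can drive the write path without any networking. A rough sketch of the pattern, with an illustrative PING command that is not taken from the benchmarks themselves:

RedisChannelWriter writer = EmptyRedisChannelWriter.INSTANCE;

// Writes are swallowed, so a benchmark measures only command creation and dispatch overhead.
Command<String, String, String> ping =
        new Command<>(CommandType.PING, new StatusOutput<>(StringCodec.UTF8));
writer.write(ping);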
+ */ +package io.lettuce.core; + +import java.util.concurrent.TimeUnit; + +import org.openjdk.jmh.annotations.Mode; +import org.openjdk.jmh.runner.Runner; +import org.openjdk.jmh.runner.RunnerException; +import org.openjdk.jmh.runner.options.ChainedOptionsBuilder; +import org.openjdk.jmh.runner.options.OptionsBuilder; +import org.openjdk.jmh.runner.options.TimeValue; + +/** + * Manual JMH Test Launcher. + * + * @author Mark Paluch + */ +public class JmhMain { + + public static void main(String... args) throws RunnerException { + runRedisClientBenchmark(); + } + + private static void runBenchmarks() throws RunnerException { + new Runner(prepareOptions().mode(Mode.AverageTime).timeUnit(TimeUnit.NANOSECONDS).build()).run(); + } + + private static void runRedisClientBenchmark() throws RunnerException { + + new Runner(prepareOptions().mode(Mode.AverageTime).timeUnit(TimeUnit.NANOSECONDS).include(".RedisClientBenchmark.*") + .build()).run(); + } + + private static ChainedOptionsBuilder prepareOptions() { + return new OptionsBuilder().forks(1).warmupIterations(5).threads(1).measurementIterations(5) + .timeout(TimeValue.seconds(2)); + } +} diff --git a/src/test/jmh/io/lettuce/core/RedisClientBenchmark.java b/src/test/jmh/io/lettuce/core/RedisClientBenchmark.java new file mode 100644 index 0000000000..3954d6c6b1 --- /dev/null +++ b/src/test/jmh/io/lettuce/core/RedisClientBenchmark.java @@ -0,0 +1,169 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core; + +import java.time.Duration; +import java.util.concurrent.TimeUnit; + +import org.openjdk.jmh.annotations.*; + +import reactor.core.publisher.Flux; +import reactor.core.publisher.Mono; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.codec.ByteArrayCodec; +import io.lettuce.test.Delay; +import io.lettuce.test.settings.TestSettings; + +/** + * Benchmark for {@link RedisClient}. + *

+ * Test cases:
+ * <ul>
+ * <li>synchronous command execution</li>
+ * <li>asynchronous command execution</li>
+ * <li>asynchronous command execution with batching</li>
+ * <li>asynchronous command execution with delayed flushing</li>
+ * <li>reactive command execution</li>
+ * <li>reactive command execution with batching</li>
+ * <li>reactive command execution with delayed flushing</li>
+ * </ul>
    + * + * @author Mark Paluch + */ +@State(Scope.Benchmark) +public class RedisClientBenchmark { + + private static final int BATCH_SIZE = 20; + private static final byte[] KEY = "benchmark".getBytes(); + private static final byte[] FOO = "foo".getBytes(); + + private RedisClient redisClient; + private StatefulRedisConnection connection; + private RedisFuture commands[]; + private Mono monos[]; + + @Setup + public void setup() { + + redisClient = RedisClient.create(RedisURI.create(TestSettings.host(), TestSettings.port())); + redisClient.setOptions(ClientOptions.builder() + .timeoutOptions(TimeoutOptions.builder().fixedTimeout(Duration.ofSeconds(10)).build()).build()); + connection = redisClient.connect(ByteArrayCodec.INSTANCE); + commands = new RedisFuture[BATCH_SIZE]; + monos = new Mono[BATCH_SIZE]; + } + + @TearDown + public void tearDown() { + + connection.close(); + redisClient.shutdown(0, 0, TimeUnit.SECONDS); + } + + @Benchmark + public void asyncSet() { + connection.async().set(KEY, KEY).toCompletableFuture().join(); + } + + @Benchmark + @OperationsPerInvocation(BATCH_SIZE) + public void asyncSetBatch() throws Exception { + + for (int i = 0; i < BATCH_SIZE; i++) { + commands[i] = connection.async().set(KEY, KEY); + } + + for (int i = 0; i < BATCH_SIZE; i++) { + commands[i].get(); + } + } + + @Benchmark + @OperationsPerInvocation(BATCH_SIZE) + public void asyncSetBatchFlush() throws Exception { + + connection.setAutoFlushCommands(false); + + for (int i = 0; i < BATCH_SIZE; i++) { + commands[i] = connection.async().set(KEY, KEY); + } + + connection.flushCommands(); + connection.setAutoFlushCommands(true); + + for (int i = 0; i < BATCH_SIZE; i++) { + commands[i].get(); + } + } + + @Benchmark + public void syncSet() { + connection.sync().set(KEY, KEY); + } + + @Benchmark + public void syncList() { + connection.async().del(FOO); + connection.sync().lpush(FOO, FOO, FOO, FOO, FOO, FOO, FOO, FOO, FOO, FOO, FOO, FOO, FOO, FOO, FOO, FOO, FOO, FOO, FOO, + FOO); + connection.sync().lrange(FOO, 0, -1); + } + + @Benchmark + public void reactiveSet() { + connection.reactive().set(KEY, KEY).block(); + } + + @Benchmark + @OperationsPerInvocation(BATCH_SIZE) + public void reactiveSetBatch() { + + for (int i = 0; i < BATCH_SIZE; i++) { + monos[i] = connection.reactive().set(KEY, KEY); + } + + Flux.merge(monos).blockLast(); + } + + @Benchmark + @OperationsPerInvocation(BATCH_SIZE) + public void reactiveSetBatchFlush() { + + connection.setAutoFlushCommands(false); + + for (int i = 0; i < BATCH_SIZE; i++) { + monos[i] = connection.reactive().set(KEY, KEY); + } + + Flux.merge(monos).doOnSubscribe(subscription -> { + + connection.flushCommands(); + connection.setAutoFlushCommands(true); + + }).blockLast(); + } + + public static void main(String[] args) { + + RedisClientBenchmark b = new RedisClientBenchmark(); + b.setup(); + + Delay.delay(Duration.ofMillis(10000)); + while (true) { + b.syncList(); + } + } +} diff --git a/src/test/jmh/io/lettuce/core/cluster/ClusterDistributionChannelWriterBenchmark.java b/src/test/jmh/io/lettuce/core/cluster/ClusterDistributionChannelWriterBenchmark.java new file mode 100644 index 0000000000..0afc8a5d87 --- /dev/null +++ b/src/test/jmh/io/lettuce/core/cluster/ClusterDistributionChannelWriterBenchmark.java @@ -0,0 +1,117 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
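As a point of reference for the asyncSetBatchFlush case above, the following standalone sketch shows the same manual-flush batching pattern as application code would use it. The endpoint localhost:6379 and the key names are assumptions for illustration; this snippet is not part of the patch.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisFuture;
import io.lettuce.core.RedisURI;
import io.lettuce.core.api.StatefulRedisConnection;

public class ManualFlushSketch {

    public static void main(String[] args) {

        // Assumed endpoint for illustration.
        RedisClient client = RedisClient.create(RedisURI.create("localhost", 6379));
        StatefulRedisConnection<String, String> connection = client.connect();

        // Disable auto-flush so queued commands go out in a single network write.
        connection.setAutoFlushCommands(false);

        List<RedisFuture<String>> futures = new ArrayList<>();
        for (int i = 0; i < 20; i++) {
            futures.add(connection.async().set("key-" + i, "value-" + i));
        }

        // Flush the whole batch at once, then restore the default behavior.
        connection.flushCommands();
        connection.setAutoFlushCommands(true);

        // Wait for all replies before shutting down.
        CompletableFuture.allOf(futures.stream()
                .map(RedisFuture::toCompletableFuture)
                .toArray(CompletableFuture[]::new)).join();

        connection.close();
        client.shutdown();
    }
}
```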
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import java.util.Arrays; +import java.util.HashSet; +import java.util.List; +import java.util.concurrent.CompletableFuture; +import java.util.stream.Collectors; +import java.util.stream.IntStream; + +import org.openjdk.jmh.annotations.Benchmark; +import org.openjdk.jmh.annotations.Scope; +import org.openjdk.jmh.annotations.Setup; +import org.openjdk.jmh.annotations.State; + +import io.lettuce.core.*; +import io.lettuce.core.cluster.models.partitions.Partitions; +import io.lettuce.core.cluster.models.partitions.RedisClusterNode; +import io.lettuce.core.codec.ByteArrayCodec; +import io.lettuce.core.output.ValueOutput; +import io.lettuce.core.protocol.Command; +import io.lettuce.core.protocol.CommandArgs; +import io.lettuce.core.protocol.CommandType; + +/** + * Benchmark for {@link ClusterDistributionChannelWriter}. + * + * @author Mark Paluch + */ +@State(Scope.Benchmark) +public class ClusterDistributionChannelWriterBenchmark { + + private static final ClientOptions CLIENT_OPTIONS = ClientOptions.create(); + private static final RedisChannelWriter EMPTY_WRITER = EmptyRedisChannelWriter.INSTANCE; + private static final EmptyStatefulRedisConnection CONNECTION = EmptyStatefulRedisConnection.INSTANCE; + private static final ValueOutput VALUE_OUTPUT = new ValueOutput<>(ByteArrayCodec.INSTANCE); + + private static final Command KEYED_COMMAND1 = new Command<>(CommandType.GET, VALUE_OUTPUT, + new CommandArgs<>(ByteArrayCodec.INSTANCE).addKey("benchmark1".getBytes())); + + private static final Command KEYED_COMMAND2 = new Command<>(CommandType.GET, VALUE_OUTPUT, + new CommandArgs<>(ByteArrayCodec.INSTANCE).addKey("benchmark2".getBytes())); + + private static final Command KEYED_COMMAND3 = new Command<>(CommandType.GET, VALUE_OUTPUT, + new CommandArgs<>(ByteArrayCodec.INSTANCE).addKey("benchmark3".getBytes())); + + private static final Command PLAIN_COMMAND = new Command<>(CommandType.GET, VALUE_OUTPUT, + new CommandArgs<>(ByteArrayCodec.INSTANCE)); + + private static final List> COMMANDS = Arrays.asList(KEYED_COMMAND1, KEYED_COMMAND2, + KEYED_COMMAND3); + + private ClusterDistributionChannelWriter writer; + + @Setup + public void setup() { + + writer = new ClusterDistributionChannelWriter(CLIENT_OPTIONS, EMPTY_WRITER, ClusterEventListener.NO_OP); + + Partitions partitions = new Partitions(); + + partitions.add(new RedisClusterNode(RedisURI.create("localhost", 1), "1", true, null, 0, 0, 0, IntStream.range(0, 8191) + .boxed().collect(Collectors.toList()), new HashSet<>())); + + partitions.add(new RedisClusterNode(RedisURI.create("localhost", 2), "2", true, null, 0, 0, 0, IntStream + .range(8192, SlotHash.SLOT_COUNT).boxed().collect(Collectors.toList()), new HashSet<>())); + + partitions.updateCache(); + + CompletableFuture connectionFuture = CompletableFuture.completedFuture(CONNECTION); + + writer.setPartitions(partitions); + writer.setClusterConnectionProvider(new PooledClusterConnectionProvider(new EmptyRedisClusterClient(RedisURI.create( + "localhost", 7379)), EMPTY_WRITER, ByteArrayCodec.INSTANCE, ClusterEventListener.NO_OP) { + public 
CompletableFuture getConnectionAsync(Intent intent, int slot) { + return connectionFuture; + } + }); + writer.setPartitions(partitions); + } + + @Benchmark + public void writeKeyedCommand() { + writer.write(KEYED_COMMAND1); + } + + @Benchmark + public void write3KeyedCommands() { + writer.write(KEYED_COMMAND1); + writer.write(KEYED_COMMAND2); + writer.write(KEYED_COMMAND3); + } + + @Benchmark + public void write3KeyedCommandsAsBatch() { + writer.write(COMMANDS); + } + + @Benchmark + public void writePlainCommand() { + writer.write(PLAIN_COMMAND); + } +} diff --git a/src/test/jmh/io/lettuce/core/cluster/EmptyRedisClusterClient.java b/src/test/jmh/io/lettuce/core/cluster/EmptyRedisClusterClient.java new file mode 100644 index 0000000000..bcc8971fb3 --- /dev/null +++ b/src/test/jmh/io/lettuce/core/cluster/EmptyRedisClusterClient.java @@ -0,0 +1,41 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import java.net.SocketAddress; +import java.util.Collections; +import java.util.function.Supplier; + +import io.lettuce.core.EmptyStatefulRedisConnection; +import io.lettuce.core.RedisChannelWriter; +import io.lettuce.core.RedisURI; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.codec.RedisCodec; + +/** + * @author Mark Paluch + */ +class EmptyRedisClusterClient extends RedisClusterClient { + + public EmptyRedisClusterClient(RedisURI initialUri) { + super(null, Collections.singleton(initialUri)); + } + + StatefulRedisConnection connectToNode(RedisCodec codec, String nodeId, RedisChannelWriter clusterWriter, + final Supplier socketAddressSupplier) { + return EmptyStatefulRedisConnection.INSTANCE; + } +} diff --git a/src/test/jmh/io/lettuce/core/cluster/JmhMain.java b/src/test/jmh/io/lettuce/core/cluster/JmhMain.java new file mode 100644 index 0000000000..dc865a6135 --- /dev/null +++ b/src/test/jmh/io/lettuce/core/cluster/JmhMain.java @@ -0,0 +1,64 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.cluster; + +import java.util.concurrent.TimeUnit; + +import org.openjdk.jmh.annotations.Mode; +import org.openjdk.jmh.runner.Runner; +import org.openjdk.jmh.runner.RunnerException; +import org.openjdk.jmh.runner.options.ChainedOptionsBuilder; +import org.openjdk.jmh.runner.options.OptionsBuilder; +import org.openjdk.jmh.runner.options.TimeValue; + +/** + * Manual JMH Test Launcher. + * + * @author Mark Paluch + */ +public class JmhMain { + + public static void main(String... args) throws RunnerException { + + // runClusterDistributionChannelWriterBenchmark(); + runSlotHashBenchmark(); + } + + private static void runClusterDistributionChannelWriterBenchmark() throws RunnerException { + + new Runner(prepareOptions().mode(Mode.AverageTime) // + .timeUnit(TimeUnit.NANOSECONDS) // + .include(".*ClusterDistributionChannelWriterBenchmark.*") // + .build()).run(); + } + + private static void runSlotHashBenchmark() throws RunnerException { + + new Runner(prepareOptions().mode(Mode.AverageTime) // + .timeUnit(TimeUnit.NANOSECONDS) // + .include(".*SlotHashBenchmark.*") // + .build()).run(); + } + + private static ChainedOptionsBuilder prepareOptions() { + return new OptionsBuilder()// + .forks(1) // + .warmupIterations(5)// + .threads(1) // + .measurementIterations(5) // + .timeout(TimeValue.seconds(2)); + } +} diff --git a/src/test/jmh/io/lettuce/core/cluster/RedisClusterClientBenchmark.java b/src/test/jmh/io/lettuce/core/cluster/RedisClusterClientBenchmark.java new file mode 100644 index 0000000000..263d2da62e --- /dev/null +++ b/src/test/jmh/io/lettuce/core/cluster/RedisClusterClientBenchmark.java @@ -0,0 +1,143 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster; + +import org.openjdk.jmh.annotations.*; + +import reactor.core.publisher.Flux; +import reactor.core.publisher.Mono; +import io.lettuce.core.RedisFuture; +import io.lettuce.core.RedisURI; +import io.lettuce.core.cluster.api.StatefulRedisClusterConnection; +import io.lettuce.core.codec.ByteArrayCodec; +import io.lettuce.test.settings.TestSettings; + +/** + * Benchmark for {@link RedisClusterClient}. + *
+ * <ul>
+ * <li>synchronous command execution</li>
+ * <li>asynchronous command execution</li>
+ * <li>asynchronous command execution with batching</li>
+ * <li>asynchronous command execution with delayed flushing</li>
+ * <li>reactive command execution</li>
+ * <li>reactive command execution with batching</li>
+ * <li>reactive command execution with delayed flushing</li>
+ * </ul>
    + * + * @author Mark Paluch + */ +@State(Scope.Benchmark) +public class RedisClusterClientBenchmark { + + private static final int BATCH_SIZE = 20; + private static final byte[] KEY = "benchmark".getBytes(); + + private RedisClusterClient redisClusterClient; + private StatefulRedisClusterConnection connection; + private RedisFuture commands[]; + private Mono monos[]; + + @Setup + public void setup() { + + redisClusterClient = RedisClusterClient.create(RedisURI.create(TestSettings.host(), TestSettings.port(900))); + connection = redisClusterClient.connect(ByteArrayCodec.INSTANCE); + commands = new RedisFuture[BATCH_SIZE]; + monos = new Mono[BATCH_SIZE]; + } + + @TearDown + public void tearDown() { + + connection.close(); + redisClusterClient.shutdown(); + } + + @Benchmark + public void asyncSet() { + connection.async().set(KEY, KEY).toCompletableFuture().join(); + } + + @Benchmark + @OperationsPerInvocation(BATCH_SIZE) + public void asyncSetBatch() throws Exception { + + for (int i = 0; i < BATCH_SIZE; i++) { + commands[i] = connection.async().set(KEY, KEY); + } + + for (int i = 0; i < BATCH_SIZE; i++) { + commands[i].get(); + } + } + + @Benchmark + @OperationsPerInvocation(BATCH_SIZE) + public void asyncSetBatchFlush() throws Exception { + + connection.setAutoFlushCommands(false); + + for (int i = 0; i < BATCH_SIZE; i++) { + commands[i] = connection.async().set(KEY, KEY); + } + + connection.flushCommands(); + connection.setAutoFlushCommands(true); + + for (int i = 0; i < BATCH_SIZE; i++) { + commands[i].get(); + } + } + + @Benchmark + public void syncSet() { + connection.sync().set(KEY, KEY); + } + + @Benchmark + public void reactiveSet() { + connection.reactive().set(KEY, KEY).block(); + } + + @Benchmark + @OperationsPerInvocation(BATCH_SIZE) + public void reactiveSetBatch() { + + for (int i = 0; i < BATCH_SIZE; i++) { + monos[i] = connection.reactive().set(KEY, KEY); + } + + Flux.merge(monos).blockLast(); + } + + @Benchmark + @OperationsPerInvocation(BATCH_SIZE) + public void reactiveSetBatchFlush() { + + connection.setAutoFlushCommands(false); + + for (int i = 0; i < BATCH_SIZE; i++) { + monos[i] = connection.reactive().set(KEY, KEY); + } + + Flux.merge(monos).doOnSubscribe(subscription -> { + + connection.flushCommands(); + connection.setAutoFlushCommands(true); + + }).blockLast(); + } +} diff --git a/src/test/jmh/io/lettuce/core/cluster/SlotHashBenchmark.java b/src/test/jmh/io/lettuce/core/cluster/SlotHashBenchmark.java new file mode 100644 index 0000000000..b312a1475a --- /dev/null +++ b/src/test/jmh/io/lettuce/core/cluster/SlotHashBenchmark.java @@ -0,0 +1,58 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.cluster; + +import java.nio.ByteBuffer; + +import org.openjdk.jmh.annotations.Benchmark; +import org.openjdk.jmh.annotations.Scope; +import org.openjdk.jmh.annotations.State; +import org.openjdk.jmh.infra.Blackhole; + +/** + * @author Mark Paluch + */ +@State(Scope.Benchmark) +public class SlotHashBenchmark { + + private static final byte[] data = "this is my buffer".getBytes(); + private static final byte[] tagged = "this is{my buffer}".getBytes(); + private static final ByteBuffer heap = (ByteBuffer) ByteBuffer.allocate(data.length).put(data).flip(); + private static final ByteBuffer direct = (ByteBuffer) ByteBuffer.allocateDirect(data.length).put(data).flip(); + + private static final ByteBuffer heapTagged = (ByteBuffer) ByteBuffer.allocate(tagged.length).put(tagged).flip(); + private static final ByteBuffer directTagged = (ByteBuffer) ByteBuffer.allocateDirect(tagged.length).put(tagged).flip(); + + @Benchmark + public void measureSlotHashHeap(Blackhole blackhole) { + blackhole.consume(SlotHash.getSlot(heap)); + } + + @Benchmark + public void measureSlotHashDirect(Blackhole blackhole) { + blackhole.consume(SlotHash.getSlot(direct)); + } + + @Benchmark + public void measureSlotHashTaggedHeap(Blackhole blackhole) { + blackhole.consume(SlotHash.getSlot(heapTagged)); + } + + @Benchmark + public void measureSlotHashTaggedDirect(Blackhole blackhole) { + blackhole.consume(SlotHash.getSlot(directTagged)); + } +} diff --git a/src/test/jmh/io/lettuce/core/cluster/models/partitions/JmhMain.java b/src/test/jmh/io/lettuce/core/cluster/models/partitions/JmhMain.java new file mode 100644 index 0000000000..52f98e54e8 --- /dev/null +++ b/src/test/jmh/io/lettuce/core/cluster/models/partitions/JmhMain.java @@ -0,0 +1,54 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.models.partitions; + +import java.util.concurrent.TimeUnit; + +import org.openjdk.jmh.annotations.Mode; +import org.openjdk.jmh.runner.Runner; +import org.openjdk.jmh.runner.RunnerException; +import org.openjdk.jmh.runner.options.ChainedOptionsBuilder; +import org.openjdk.jmh.runner.options.OptionsBuilder; +import org.openjdk.jmh.runner.options.TimeValue; + +/** + * Manual JMH Test Launcher. + * + * @author Mark Paluch + */ +public class JmhMain { + + public static void main(String... 
args) throws Exception { + runClusterNodeBenchmark(); + } + + private static void runClusterNodeBenchmark() throws RunnerException { + + new Runner(prepareOptions().mode(Mode.AverageTime) // + .timeUnit(TimeUnit.NANOSECONDS) // + .include(".*RedisClusterNodeBenchmark.*") // + .build()).run(); + } + + private static ChainedOptionsBuilder prepareOptions() { + return new OptionsBuilder()// + .forks(1) // + .warmupIterations(5)// + .threads(1) // + .measurementIterations(5) // + .timeout(TimeValue.seconds(2)); + } +} diff --git a/src/test/jmh/io/lettuce/core/cluster/models/partitions/RedisClusterNodeBenchmark.java b/src/test/jmh/io/lettuce/core/cluster/models/partitions/RedisClusterNodeBenchmark.java new file mode 100644 index 0000000000..96898c6928 --- /dev/null +++ b/src/test/jmh/io/lettuce/core/cluster/models/partitions/RedisClusterNodeBenchmark.java @@ -0,0 +1,60 @@ +/* + * Copyright 2018-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.cluster.models.partitions; + +import java.util.Collections; +import java.util.List; +import java.util.stream.Collectors; +import java.util.stream.IntStream; + +import org.openjdk.jmh.annotations.Benchmark; +import org.openjdk.jmh.annotations.Scope; +import org.openjdk.jmh.annotations.State; + +import io.lettuce.core.cluster.SlotHash; + +/** + * @author Mark Paluch + */ +@State(Scope.Benchmark) +public class RedisClusterNodeBenchmark { + + private static final List ALL_SLOTS = IntStream.range(0, SlotHash.SLOT_COUNT).boxed().collect(Collectors.toList()); + private static final List LOWER_SLOTS = IntStream.range(0, 8192).boxed().collect(Collectors.toList()); + + private static final RedisClusterNode NODE = new RedisClusterNode(null, null, true, null, 0, 0, 0, ALL_SLOTS, + Collections.emptySet()); + + @Benchmark + public RedisClusterNode createClusterNodeAllSlots() { + return new RedisClusterNode(null, null, true, null, 0, 0, 0, ALL_SLOTS, Collections.emptySet()); + } + + @Benchmark + public RedisClusterNode createClusterNodeLowerSlots() { + return new RedisClusterNode(null, null, true, null, 0, 0, 0, LOWER_SLOTS, Collections.emptySet()); + } + + @Benchmark + public void querySlotStatusPresent() { + NODE.hasSlot(1234); + } + + @Benchmark + public void querySlotStatusAbsent() { + NODE.hasSlot(8193); + } +} diff --git a/src/test/jmh/io/lettuce/core/codec/JmhMain.java b/src/test/jmh/io/lettuce/core/codec/JmhMain.java new file mode 100644 index 0000000000..bba4ac44e7 --- /dev/null +++ b/src/test/jmh/io/lettuce/core/codec/JmhMain.java @@ -0,0 +1,55 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
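SlotHashBenchmark and RedisClusterNodeBenchmark above both revolve around Redis Cluster key slots. For orientation, here is a self-contained sketch of the published Redis Cluster key-slot rule: CRC16 (XMODEM, polynomial 0x1021) of the key, or of its {hash tag} if one is present, modulo 16384. It only illustrates what SlotHash.getSlot computes; it is not Lettuce's implementation and is not part of the patch.

```java
import java.nio.charset.StandardCharsets;

/**
 * Illustrative sketch of the Redis Cluster key-slot algorithm, not Lettuce's SlotHash.
 */
public class KeySlotSketch {

    public static int keySlot(byte[] key) {

        // Hash-tag rule: if the key contains "{...}" with a non-empty body,
        // only the bytes between the first '{' and the following '}' are hashed.
        int open = indexOf(key, (byte) '{', 0);
        if (open != -1) {
            int close = indexOf(key, (byte) '}', open + 1);
            if (close > open + 1) {
                byte[] tag = new byte[close - open - 1];
                System.arraycopy(key, open + 1, tag, 0, tag.length);
                key = tag;
            }
        }
        return crc16(key) % 16384;
    }

    // CRC16/XMODEM: initial value 0, polynomial 0x1021, no reflection.
    private static int crc16(byte[] bytes) {
        int crc = 0;
        for (byte b : bytes) {
            crc ^= (b & 0xFF) << 8;
            for (int i = 0; i < 8; i++) {
                crc = (crc & 0x8000) != 0 ? ((crc << 1) ^ 0x1021) & 0xFFFF : (crc << 1) & 0xFFFF;
            }
        }
        return crc;
    }

    private static int indexOf(byte[] haystack, byte needle, int from) {
        for (int i = from; i < haystack.length; i++) {
            if (haystack[i] == needle) {
                return i;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        // Keys sharing the same {tag} map to the same slot; the first key has no tag.
        System.out.println(keySlot("this is my buffer".getBytes(StandardCharsets.US_ASCII)));
        System.out.println(keySlot("this is{my buffer}".getBytes(StandardCharsets.US_ASCII)));
        System.out.println(keySlot("{my buffer}.suffix".getBytes(StandardCharsets.US_ASCII)));
    }
}
```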
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.codec; + +import java.util.concurrent.TimeUnit; + +import org.openjdk.jmh.annotations.Mode; +import org.openjdk.jmh.runner.Runner; +import org.openjdk.jmh.runner.RunnerException; +import org.openjdk.jmh.runner.options.ChainedOptionsBuilder; +import org.openjdk.jmh.runner.options.OptionsBuilder; +import org.openjdk.jmh.runner.options.TimeValue; + +/** + * Manual JMH Test Launcher. + * + * @author Mark Paluch + */ +public class JmhMain { + + public static void main(String... args) throws RunnerException { + + runCommandBenchmark(); + } + + private static void runCommandBenchmark() throws RunnerException { + + new Runner(prepareOptions().mode(Mode.AverageTime) // + .timeUnit(TimeUnit.NANOSECONDS) // + .include(".*CodecBenchmark.*") // + .build()).run(); + } + + private static ChainedOptionsBuilder prepareOptions() { + return new OptionsBuilder()// + .forks(1) // + .warmupIterations(5)// + .threads(1) // + .measurementIterations(5) // + .timeout(TimeValue.seconds(2)); + } +} diff --git a/src/test/jmh/io/lettuce/core/codec/StringCodecBenchmark.java b/src/test/jmh/io/lettuce/core/codec/StringCodecBenchmark.java new file mode 100644 index 0000000000..eac6791298 --- /dev/null +++ b/src/test/jmh/io/lettuce/core/codec/StringCodecBenchmark.java @@ -0,0 +1,92 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.codec; + +import java.nio.ByteBuffer; +import java.nio.charset.StandardCharsets; + +import org.openjdk.jmh.annotations.Benchmark; +import org.openjdk.jmh.annotations.Scope; +import org.openjdk.jmh.annotations.Setup; +import org.openjdk.jmh.annotations.State; +import org.openjdk.jmh.infra.Blackhole; + +import io.netty.buffer.ByteBuf; +import io.netty.buffer.Unpooled; + +/** + * Benchmark for {@link StringCodec}. 
+ * + * @author Mark Paluch + */ +public class StringCodecBenchmark { + + @Benchmark + public void encodeUtf8Unpooled(Input input) { + input.blackhole.consume(input.utf8Codec.encodeKey(input.teststring)); + } + + @Benchmark + public void encodeUtf8ToBuf(Input input) { + input.byteBuf.clear(); + input.utf8Codec.encode(input.teststring, input.byteBuf); + } + + @Benchmark + public void encodeUtf8PlainStringToBuf(Input input) { + input.byteBuf.clear(); + input.utf8Codec.encode(input.teststringPlain, input.byteBuf); + } + + @Benchmark + public void encodeAsciiToBuf(Input input) { + input.byteBuf.clear(); + input.asciiCodec.encode(input.teststringPlain, input.byteBuf); + } + + @Benchmark + public void encodeIsoToBuf(Input input) { + input.byteBuf.clear(); + input.isoCodec.encode(input.teststringPlain, input.byteBuf); + } + + @Benchmark + public void decodeUtf8Unpooled(Input input) { + input.input.rewind(); + input.blackhole.consume(input.utf8Codec.decodeKey(input.input)); + } + + @State(Scope.Thread) + public static class Input { + + Blackhole blackhole; + StringCodec asciiCodec = new StringCodec(StandardCharsets.US_ASCII); + StringCodec utf8Codec = new StringCodec(StandardCharsets.UTF_8); + StringCodec isoCodec = new StringCodec(StandardCharsets.ISO_8859_1); + + String teststring = "hello üäü~∑†®†ª€∂‚¶¢ Wørld"; + String teststringPlain = "hello uufadsfasdfadssdfadfs"; + ByteBuffer input = ByteBuffer.wrap(teststring.getBytes(StandardCharsets.UTF_8)); + + ByteBuf byteBuf = Unpooled.buffer(512); + + @Setup + public void setup(Blackhole bh) { + blackhole = bh; + input.flip(); + } + } +} diff --git a/src/test/jmh/io/lettuce/core/codec/Utf8StringCodecBenchmark.java b/src/test/jmh/io/lettuce/core/codec/Utf8StringCodecBenchmark.java new file mode 100644 index 0000000000..ed64784097 --- /dev/null +++ b/src/test/jmh/io/lettuce/core/codec/Utf8StringCodecBenchmark.java @@ -0,0 +1,60 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.codec; + +import java.nio.ByteBuffer; +import java.nio.charset.StandardCharsets; + +import org.openjdk.jmh.annotations.Benchmark; +import org.openjdk.jmh.annotations.Scope; +import org.openjdk.jmh.annotations.Setup; +import org.openjdk.jmh.annotations.State; +import org.openjdk.jmh.infra.Blackhole; + +/** + * Benchmark for {@link Utf8StringCodec}. 
+ * + * @author Mark Paluch + */ +public class Utf8StringCodecBenchmark { + + @Benchmark + public void encodeUnpooled(Input input) { + input.blackhole.consume(input.codec.encodeKey(input.teststring)); + } + + @Benchmark + public void decodeUnpooled(Input input) { + input.input.rewind(); + input.blackhole.consume(input.codec.decodeKey(input.input)); + } + + @State(Scope.Thread) + public static class Input { + + Blackhole blackhole; + Utf8StringCodec codec = new Utf8StringCodec(); + + String teststring = "hello üäü~∑†®†ª€∂‚¶¢ Wørld"; + ByteBuffer input = ByteBuffer.wrap(teststring.getBytes(StandardCharsets.UTF_8)); + + @Setup + public void setup(Blackhole bh) { + blackhole = bh; + input.flip(); + } + } +} diff --git a/src/test/jmh/io/lettuce/core/dynamic/RedisCommandFactoryBenchmark.java b/src/test/jmh/io/lettuce/core/dynamic/RedisCommandFactoryBenchmark.java new file mode 100644 index 0000000000..9e1e21bfc3 --- /dev/null +++ b/src/test/jmh/io/lettuce/core/dynamic/RedisCommandFactoryBenchmark.java @@ -0,0 +1,106 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic; + +import org.mockito.Mockito; +import org.openjdk.jmh.annotations.Benchmark; +import org.openjdk.jmh.annotations.Scope; +import org.openjdk.jmh.annotations.Setup; +import org.openjdk.jmh.annotations.State; + +import io.lettuce.core.*; +import io.lettuce.core.api.reactive.RedisReactiveCommands; +import io.lettuce.core.api.sync.RedisCommands; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.dynamic.batch.BatchSize; + +/** + * Benchmark for commands executed through {@link RedisCommandFactory}. 
+ * + * @author Mark Paluch + */ +@State(Scope.Benchmark) +public class RedisCommandFactoryBenchmark { + + private RedisCommandFactory redisCommandFactory; + private RegularCommands regularCommands; + private RedisAsyncCommandsImpl asyncCommands; + + @Setup + public void setup() { + + redisCommandFactory = new RedisCommandFactory(new MockStatefulConnection(EmptyRedisChannelWriter.INSTANCE)); + regularCommands = redisCommandFactory.getCommands(RegularCommands.class); + + asyncCommands = new RedisAsyncCommandsImpl<>(EmptyStatefulRedisConnection.INSTANCE, StringCodec.UTF8); + } + + @Benchmark + public void createRegularCommands() { + redisCommandFactory.getCommands(RegularCommands.class); + } + + @Benchmark + public void createBatchCommands() { + redisCommandFactory.getCommands(BatchCommands.class); + } + + @Benchmark + public void executeCommandInterfaceCommand() { + regularCommands.set("key", "value"); + } + + @Benchmark + public void executeAsyncCommand() { + asyncCommands.set("key", "value"); + } + + interface RegularCommands extends Commands { + + RedisFuture set(String key, String value); + } + + @BatchSize(10) + private + interface BatchCommands extends Commands { + + void set(String key, String value); + } + + static class MockStatefulConnection extends EmptyStatefulRedisConnection { + + RedisCommands sync; + RedisReactiveCommands reactive; + + MockStatefulConnection(RedisChannelWriter writer) { + super(writer); + + sync = Mockito.mock(RedisCommands.class); + reactive = (RedisReactiveCommands) Mockito.mock(AbstractRedisReactiveCommands.class, Mockito.withSettings() + .extraInterfaces(RedisReactiveCommands.class)); + } + + @Override + public RedisCommands sync() { + return sync; + } + + @Override + public RedisReactiveCommands reactive() { + return reactive; + } + } +} diff --git a/src/test/jmh/io/lettuce/core/dynamic/RedisCommandsBenchmark.java b/src/test/jmh/io/lettuce/core/dynamic/RedisCommandsBenchmark.java new file mode 100644 index 0000000000..a565bc9aed --- /dev/null +++ b/src/test/jmh/io/lettuce/core/dynamic/RedisCommandsBenchmark.java @@ -0,0 +1,101 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.dynamic; + +import java.util.concurrent.CompletableFuture; + +import org.openjdk.jmh.annotations.*; + +import io.lettuce.core.RedisClient; +import io.lettuce.core.RedisFuture; +import io.lettuce.core.RedisURI; +import io.lettuce.core.api.StatefulRedisConnection; +import io.lettuce.core.codec.ByteArrayCodec; +import io.lettuce.core.dynamic.batch.BatchSize; +import io.lettuce.test.settings.TestSettings; + +/** + * Benchmark for commands executed through {@link StatefulRedisConnection}. 
+ * + * @author Mark Paluch + */ +@State(Scope.Benchmark) +public class RedisCommandsBenchmark { + + private static final int BATCH_SIZE = 20; + + private RedisClient redisClient; + private StatefulRedisConnection connection; + private CompletableFuture commands[]; + private RegularCommands regularCommands; + private BatchCommands batchCommands; + + @Setup + public void setup() { + + redisClient = RedisClient.create(RedisURI.create(TestSettings.host(), TestSettings.port())); + connection = redisClient.connect(ByteArrayCodec.INSTANCE); + + RedisCommandFactory redisCommandFactory = new RedisCommandFactory(connection); + regularCommands = redisCommandFactory.getCommands(RegularCommands.class); + batchCommands = redisCommandFactory.getCommands(BatchCommands.class); + commands = new CompletableFuture[BATCH_SIZE]; + } + + @TearDown + public void tearDown() { + + connection.close(); + redisClient.shutdown(); + } + + @Benchmark + @OperationsPerInvocation(BATCH_SIZE) + public void asyncSet() { + + for (int i = 0; i < BATCH_SIZE; i++) { + commands[i] = regularCommands.set("key", "value").toCompletableFuture(); + } + + CompletableFuture.allOf(commands).join(); + } + + @Benchmark + @OperationsPerInvocation(BATCH_SIZE) + public void batchSet() { + + for (int i = 0; i < BATCH_SIZE; i++) { + batchCommands.set("key", "value"); + } + } + + interface RegularCommands extends Commands { + + RedisFuture set(String key, String value); + } + + @BatchSize(BATCH_SIZE) + interface BatchCommands extends Commands { + + void set(String key, String value); + } + + public static void main(String[] args) { + RedisCommandsBenchmark b = new RedisCommandsBenchmark(); + b.setup(); + b.asyncSet(); + } +} diff --git a/src/test/jmh/io/lettuce/core/dynamic/intercept/InvocationProxyFactoryBenchmark.java b/src/test/jmh/io/lettuce/core/dynamic/intercept/InvocationProxyFactoryBenchmark.java new file mode 100644 index 0000000000..7ff8ce7673 --- /dev/null +++ b/src/test/jmh/io/lettuce/core/dynamic/intercept/InvocationProxyFactoryBenchmark.java @@ -0,0 +1,83 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
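RedisCommandFactoryBenchmark and RedisCommandsBenchmark above exercise Lettuce's declarative command interfaces. The sketch below shows roughly how the same API is used outside JMH; the interface, key names, and the redis://localhost:6379 endpoint are illustrative assumptions, not part of the patch.

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisFuture;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.dynamic.Commands;
import io.lettuce.core.dynamic.RedisCommandFactory;
import io.lettuce.core.dynamic.batch.BatchSize;

public class CommandInterfaceSketch {

    // Declarative command interface: method names map to Redis commands.
    interface KeyValueCommands extends Commands {

        RedisFuture<String> set(String key, String value); // asynchronous execution

        String get(String key); // synchronous execution
    }

    // Batched variant: invocations are queued and written once 10 calls accumulate.
    @BatchSize(10)
    interface BatchedKeyValueCommands extends Commands {

        void set(String key, String value);
    }

    public static void main(String[] args) {

        RedisClient client = RedisClient.create("redis://localhost:6379"); // assumed endpoint
        StatefulRedisConnection<String, String> connection = client.connect();

        RedisCommandFactory factory = new RedisCommandFactory(connection);

        KeyValueCommands commands = factory.getCommands(KeyValueCommands.class);
        commands.set("key", "value").toCompletableFuture().join();
        System.out.println(commands.get("key"));

        BatchedKeyValueCommands batched = factory.getCommands(BatchedKeyValueCommands.class);
        for (int i = 0; i < 10; i++) {
            batched.set("key-" + i, "value-" + i); // flushed when the 10th call arrives
        }

        connection.close();
        client.shutdown();
    }
}
```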
+ */ +package io.lettuce.core.dynamic.intercept; + +import org.openjdk.jmh.annotations.Benchmark; +import org.openjdk.jmh.annotations.Scope; +import org.openjdk.jmh.annotations.Setup; +import org.openjdk.jmh.annotations.State; +import org.openjdk.jmh.infra.Blackhole; + +/** + * @author Mark Paluch + */ +@State(Scope.Benchmark) +public class InvocationProxyFactoryBenchmark { + + private final InvocationProxyFactory factory = new InvocationProxyFactory(); + private BenchmarkInterface proxy; + + @Setup + public void setup() { + + factory.addInterface(BenchmarkInterface.class); + factory.addInterceptor(new StringAppendingMethodInterceptor("-foo")); + factory.addInterceptor(new StringAppendingMethodInterceptor("-bar")); + factory.addInterceptor(new ReturnValue("actual")); + + proxy = factory.createProxy(getClass().getClassLoader()); + } + + @Benchmark + public void run(Blackhole blackhole) { + blackhole.consume(proxy.run()); + } + + private interface BenchmarkInterface { + + String run(); + } + + private static class ReturnValue implements MethodInterceptor { + + private final Object value; + + ReturnValue(Object value) { + this.value = value; + } + + @Override + public Object invoke(MethodInvocation invocation) { + return value; + } + + } + + private static class StringAppendingMethodInterceptor implements MethodInterceptor { + + private final String toAppend; + + StringAppendingMethodInterceptor(String toAppend) { + this.toAppend = toAppend; + } + + @Override + public Object invoke(MethodInvocation invocation) throws Throwable { + return invocation.proceed().toString() + toAppend; + } + } + +} diff --git a/src/test/jmh/io/lettuce/core/output/JmhMain.java b/src/test/jmh/io/lettuce/core/output/JmhMain.java new file mode 100644 index 0000000000..7355c16064 --- /dev/null +++ b/src/test/jmh/io/lettuce/core/output/JmhMain.java @@ -0,0 +1,55 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import java.util.concurrent.TimeUnit; + +import org.openjdk.jmh.annotations.Mode; +import org.openjdk.jmh.runner.Runner; +import org.openjdk.jmh.runner.RunnerException; +import org.openjdk.jmh.runner.options.ChainedOptionsBuilder; +import org.openjdk.jmh.runner.options.OptionsBuilder; +import org.openjdk.jmh.runner.options.TimeValue; + +/** + * Manual JMH Test Launcher. + * + * @author Mark Paluch + */ +public class JmhMain { + + public static void main(String... 
args) throws RunnerException { + runValueListOutputBenchmark(); + } + + private static void runValueListOutputBenchmark() throws RunnerException { + + new Runner(prepareOptions().mode(Mode.AverageTime) // + .timeUnit(TimeUnit.NANOSECONDS) // + .include(".*ValueListOutputBenchmark.*") // + .build()).run(); + } + + private static ChainedOptionsBuilder prepareOptions() { + + return new OptionsBuilder()// + .forks(1) // + .warmupIterations(5)// + .threads(1) // + .measurementIterations(5) // + .timeout(TimeValue.seconds(2)); + } +} diff --git a/src/test/jmh/io/lettuce/core/output/ValueListOutputBenchmark.java b/src/test/jmh/io/lettuce/core/output/ValueListOutputBenchmark.java new file mode 100644 index 0000000000..86eae7e3f1 --- /dev/null +++ b/src/test/jmh/io/lettuce/core/output/ValueListOutputBenchmark.java @@ -0,0 +1,95 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.output; + +import java.nio.ByteBuffer; + +import org.openjdk.jmh.annotations.Benchmark; +import org.openjdk.jmh.annotations.Scope; +import org.openjdk.jmh.annotations.State; + +import io.lettuce.core.codec.ByteArrayCodec; + +/** + * @author Mark Paluch + */ +@State(Scope.Benchmark) +public class ValueListOutputBenchmark { + + private static final ByteArrayCodec CODEC = ByteArrayCodec.INSTANCE; + private final ByteBuffer BUFFER = ByteBuffer.wrap(new byte[0]); + + @Benchmark + public void measureZeroElement() { + + ValueListOutput output = new ValueListOutput<>(CODEC); + output.multi(0); + output.complete(1); + } + + @Benchmark + public void measureSingleElement() { + + ValueListOutput output = new ValueListOutput<>(CODEC); + output.multi(1); + output.set(BUFFER); + output.complete(1); + } + + @Benchmark + public void measure16Elements() { + + ValueListOutput output = new ValueListOutput<>(CODEC); + output.multi(16); + for (int i = 0; i < 16; i++) { + output.set(BUFFER); + } + output.complete(1); + } + + @Benchmark + public void measure16ElementsWithResizeElement() { + + ValueListOutput output = new ValueListOutput<>(CODEC); + output.multi(10); + for (int i = 0; i < 16; i++) { + output.set(BUFFER); + } + output.complete(1); + } + + @Benchmark + public void measure100Elements() { + + ValueListOutput output = new ValueListOutput<>(CODEC); + output.multi(100); + for (int i = 0; i < 100; i++) { + output.set(BUFFER); + } + output.complete(1); + } + + @Benchmark + public void measure100ElementsWithResizeElement() { + + ValueListOutput output = new ValueListOutput<>(CODEC); + output.multi(10); + for (int i = 0; i < 100; i++) { + output.set(BUFFER); + } + output.complete(1); + } +} diff --git a/src/test/jmh/io/lettuce/core/protocol/CommandBenchmark.java b/src/test/jmh/io/lettuce/core/protocol/CommandBenchmark.java new file mode 100644 index 0000000000..d078d33ae5 --- /dev/null +++ b/src/test/jmh/io/lettuce/core/protocol/CommandBenchmark.java @@ -0,0 +1,86 @@ +/* + * Copyright 2011-2020 the original author or authors. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.protocol; + +import java.nio.charset.StandardCharsets; + +import org.openjdk.jmh.annotations.Benchmark; +import org.openjdk.jmh.annotations.Scope; +import org.openjdk.jmh.annotations.State; +import org.openjdk.jmh.infra.Blackhole; + +import io.lettuce.core.codec.ByteArrayCodec; +import io.lettuce.core.codec.RedisCodec; +import io.lettuce.core.codec.StringCodec; +import io.lettuce.core.codec.Utf8StringCodec; +import io.lettuce.core.output.ValueOutput; + +/** + * Benchmark for {@link Command}. Test cases: + *
+ * <ul>
+ * <li>Create commands using String and ByteArray codecs</li>
+ * <li>Encode commands using String and ByteArray codecs</li>
+ * </ul>
    + * + * @author Mark Paluch + */ +@State(Scope.Benchmark) +public class CommandBenchmark { + + private static final ByteArrayCodec BYTE_ARRAY_CODEC = new ByteArrayCodec(); + private static final Utf8StringCodec OLD_STRING_CODEC = new Utf8StringCodec(); + private static final StringCodec NEW_STRING_CODEC = new StringCodec(StandardCharsets.UTF_8); + private static final EmptyByteBuf DUMMY_BYTE_BUF = new EmptyByteBuf(); + + private static final String KEY = "key"; + private static final byte[] BYTE_KEY = "key".getBytes(); + + @Benchmark + public void createCommandUsingByteArrayCodec(Blackhole blackhole) { + blackhole.consume(createCommand(BYTE_KEY, BYTE_ARRAY_CODEC)); + } + + @Benchmark + public void createAsyncCommandUsingByteArrayCodec(Blackhole blackhole) { + blackhole.consume(new AsyncCommand<>(createCommand(BYTE_KEY, BYTE_ARRAY_CODEC))); + } + + @Benchmark + public void createCommandUsingStringCodec() { + createCommand(KEY, OLD_STRING_CODEC); + } + + @Benchmark + public void encodeCommandUsingByteArrayCodec() { + createCommand(BYTE_KEY, BYTE_ARRAY_CODEC).encode(DUMMY_BYTE_BUF); + } + + @Benchmark + public void encodeCommandUsingOldStringCodec() { + createCommand(KEY, OLD_STRING_CODEC).encode(DUMMY_BYTE_BUF); + } + + @Benchmark + public void encodeCommandUsingNewStringCodec() { + createCommand(KEY, NEW_STRING_CODEC).encode(DUMMY_BYTE_BUF); + } + + private Command createCommand(K key, RedisCodec codec) { + Command command = new Command(CommandType.GET, new ValueOutput<>(codec), new CommandArgs(codec).addKey(key)); + return command; + } + +} diff --git a/src/test/jmh/io/lettuce/core/protocol/CommandHandlerBenchmark.java b/src/test/jmh/io/lettuce/core/protocol/CommandHandlerBenchmark.java new file mode 100644 index 0000000000..5df0d810df --- /dev/null +++ b/src/test/jmh/io/lettuce/core/protocol/CommandHandlerBenchmark.java @@ -0,0 +1,166 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.protocol; + +import java.util.Arrays; +import java.util.Collections; +import java.util.List; +import java.util.stream.Collectors; +import java.util.stream.IntStream; + +import org.openjdk.jmh.annotations.*; + +import io.lettuce.core.ClientOptions; +import io.lettuce.core.codec.ByteArrayCodec; +import io.lettuce.core.output.ValueOutput; +import io.netty.buffer.ByteBuf; + +/** + * Benchmark for {@link CommandHandler}. + *

+ * Test cases:
+ * <ul>
+ * <li>user command writes</li>
+ * <li>netty (in-eventloop) writes</li>
+ * <li>netty (in-eventloop) reads</li>
+ * </ul>
    + * + * @author Mark Paluch + * @author Grzegorz Szpak + */ +@State(Scope.Benchmark) +public class CommandHandlerBenchmark { + + private static final ByteArrayCodec CODEC = new ByteArrayCodec(); + private static final ClientOptions CLIENT_OPTIONS = ClientOptions.create(); + private static final EmptyContext CHANNEL_HANDLER_CONTEXT = new EmptyContext(); + private static final byte[] KEY = "key".getBytes(); + private static final String VALUE = "value\r\n"; + private final EmptyPromise PROMISE = new EmptyPromise(); + + private CommandHandler commandHandler; + private ByteBuf reply1; + private ByteBuf reply10; + private ByteBuf reply100; + private ByteBuf reply1000; + private List commands1; + private List commands10; + private List commands100; + private List commands1000; + + @Setup + public void setup() throws Exception { + + commandHandler = new CommandHandler(CLIENT_OPTIONS, EmptyClientResources.INSTANCE, new DefaultEndpoint(CLIENT_OPTIONS, + EmptyClientResources.INSTANCE)); + commandHandler.channelRegistered(CHANNEL_HANDLER_CONTEXT); + commandHandler.setState(CommandHandler.LifecycleState.CONNECTED); + + reply1 = createByteBuf(String.format("+%s", VALUE)); + reply10 = createByteBuf(createBulkReply(10)); + reply100 = createByteBuf(createBulkReply(100)); + reply1000 = createByteBuf(createBulkReply(1000)); + + commands1 = createCommands(1); + commands10 = createCommands(10); + commands100 = createCommands(100); + commands1000 = createCommands(1000); + } + + @TearDown + public void tearDown() throws Exception { + + commandHandler.channelUnregistered(CHANNEL_HANDLER_CONTEXT); + + Arrays.asList(reply1, reply10, reply100, reply1000).forEach(ByteBuf::release); + } + + private static List createCommands(int count) { + return IntStream.range(0, count).mapToObj(i -> createCommand()).collect(Collectors.toList()); + } + + private static ByteBuf createByteBuf(String str) { + + ByteBuf buf = CHANNEL_HANDLER_CONTEXT.alloc().directBuffer(); + buf.writeBytes(str.getBytes()); + return buf; + } + + private static String createBulkReply(int numOfReplies) { + + String baseReply = String.format("$%d\r\n%s\r\n", VALUE.length(), VALUE); + + return String.join("", Collections.nCopies(numOfReplies, baseReply)); + } + + @SuppressWarnings("unchecked") + private static Command createCommand() { + return new Command(CommandType.GET, new ValueOutput<>(CODEC), new CommandArgs(CODEC).addKey(KEY)) { + @Override + public boolean isDone() { + return false; + } + }; + } + + @Benchmark + public void measureNettyWriteAndRead() throws Exception { + + Command command = createCommand(); + + commandHandler.write(CHANNEL_HANDLER_CONTEXT, command, PROMISE); + int index = reply1.readerIndex(); + reply1.retain(); + + commandHandler.channelRead(CHANNEL_HANDLER_CONTEXT, reply1); + + // cleanup + reply1.readerIndex(index); + } + + @Benchmark + public void measureNettyWriteAndReadBatch1() throws Exception { + doBenchmark(commands1, reply1); + } + + @Benchmark + public void measureNettyWriteAndReadBatch10() throws Exception { + doBenchmark(commands10, reply10); + } + + @Benchmark + public void measureNettyWriteAndReadBatch100() throws Exception { + doBenchmark(commands100, reply100); + } + + @Benchmark + public void measureNettyWriteAndReadBatch1000() throws Exception { + doBenchmark(commands1000, reply1000); + } + + private void doBenchmark(List commandStack, ByteBuf response) throws Exception { + + commandHandler.write(CHANNEL_HANDLER_CONTEXT, commandStack, PROMISE); + + int index = response.readerIndex(); + response.retain(); + + 
commandHandler.channelRead(CHANNEL_HANDLER_CONTEXT, response); + + // cleanup + response.readerIndex(index); + } +} diff --git a/src/test/jmh/com/lambdaworks/redis/protocol/EmptyByteBuf.java b/src/test/jmh/io/lettuce/core/protocol/EmptyByteBuf.java similarity index 82% rename from src/test/jmh/com/lambdaworks/redis/protocol/EmptyByteBuf.java rename to src/test/jmh/io/lettuce/core/protocol/EmptyByteBuf.java index b7d0eff3ea..a905893dbd 100644 --- a/src/test/jmh/com/lambdaworks/redis/protocol/EmptyByteBuf.java +++ b/src/test/jmh/io/lettuce/core/protocol/EmptyByteBuf.java @@ -1,12 +1,20 @@ -package com.lambdaworks.redis.protocol; - -import io.netty.buffer.AbstractByteBuf; -import io.netty.buffer.ByteBuf; -import io.netty.buffer.ByteBufAllocator; -import io.netty.buffer.ByteBufProcessor; -import io.netty.util.ByteProcessor; +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.protocol; -import java.io.IOException; import java.io.InputStream; import java.io.OutputStream; import java.nio.ByteBuffer; @@ -16,11 +24,18 @@ import java.nio.channels.ScatteringByteChannel; import java.nio.charset.Charset; +import io.netty.buffer.ByteBuf; +import io.netty.buffer.ByteBufAllocator; +import io.netty.buffer.CompositeByteBuf; +import io.netty.util.ByteProcessor; + /** * @author Mark Paluch */ class EmptyByteBuf extends ByteBuf { + private static final EmptyByteBuf INSTANCE = new EmptyByteBuf(); + @Override public int capacity() { return 0; @@ -38,7 +53,7 @@ public int maxCapacity() { @Override public ByteBufAllocator alloc() { - return null; + return EmptyByteBufAllocator.INSTANCE; } @Override @@ -307,17 +322,17 @@ public ByteBuf getBytes(int i, ByteBuffer byteBuffer) { } @Override - public ByteBuf getBytes(int i, OutputStream outputStream, int i1) throws IOException { + public ByteBuf getBytes(int i, OutputStream outputStream, int i1) { return null; } @Override - public int getBytes(int i, GatheringByteChannel gatheringByteChannel, int i1) throws IOException { + public int getBytes(int i, GatheringByteChannel gatheringByteChannel, int i1) { return 0; } @Override - public int getBytes(int i, FileChannel fileChannel, long l, int i1) throws IOException { + public int getBytes(int i, FileChannel fileChannel, long l, int i1) { return 0; } @@ -422,17 +437,17 @@ public ByteBuf setBytes(int i, ByteBuffer byteBuffer) { } @Override - public int setBytes(int i, InputStream inputStream, int i1) throws IOException { + public int setBytes(int i, InputStream inputStream, int i1) { return 0; } @Override - public int setBytes(int i, ScatteringByteChannel scatteringByteChannel, int i1) throws IOException { + public int setBytes(int i, ScatteringByteChannel scatteringByteChannel, int i1) { return 0; } @Override - public int setBytes(int i, FileChannel fileChannel, long l, int i1) throws IOException { + public int setBytes(int i, FileChannel fileChannel, long l, int i1) { return 0; } @@ -592,12 +607,12 @@ public ByteBuf 
readBytes(ByteBuffer byteBuffer) { } @Override - public ByteBuf readBytes(OutputStream outputStream, int i) throws IOException { + public ByteBuf readBytes(OutputStream outputStream, int i) { return null; } @Override - public int readBytes(GatheringByteChannel gatheringByteChannel, int i) throws IOException { + public int readBytes(GatheringByteChannel gatheringByteChannel, int i) { return 0; } @@ -607,7 +622,7 @@ public CharSequence readCharSequence(int i, Charset charset) { } @Override - public int readBytes(FileChannel fileChannel, long l, int i) throws IOException { + public int readBytes(FileChannel fileChannel, long l, int i) { return 0; } @@ -712,17 +727,17 @@ public ByteBuf writeBytes(ByteBuffer byteBuffer) { } @Override - public int writeBytes(InputStream inputStream, int i) throws IOException { + public int writeBytes(InputStream inputStream, int i) { return 0; } @Override - public int writeBytes(ScatteringByteChannel scatteringByteChannel, int i) throws IOException { + public int writeBytes(ScatteringByteChannel scatteringByteChannel, int i) { return 0; } @Override - public int writeBytes(FileChannel fileChannel, long l, int i) throws IOException { + public int writeBytes(FileChannel fileChannel, long l, int i) { return 0; } @@ -935,4 +950,108 @@ public boolean release() { public boolean release(int i) { return false; } + + private enum EmptyByteBufAllocator implements ByteBufAllocator { + INSTANCE; + + @Override + public ByteBuf buffer() { + return EmptyByteBuf.INSTANCE; + } + + @Override + public ByteBuf buffer(int i) { + return EmptyByteBuf.INSTANCE; + } + + @Override + public ByteBuf buffer(int i, int i1) { + return EmptyByteBuf.INSTANCE; + } + + @Override + public ByteBuf ioBuffer() { + return EmptyByteBuf.INSTANCE; + } + + @Override + public ByteBuf ioBuffer(int i) { + return EmptyByteBuf.INSTANCE; + } + + @Override + public ByteBuf ioBuffer(int i, int i1) { + return EmptyByteBuf.INSTANCE; + } + + @Override + public ByteBuf heapBuffer() { + return EmptyByteBuf.INSTANCE; + } + + @Override + public ByteBuf heapBuffer(int i) { + return EmptyByteBuf.INSTANCE; + } + + @Override + public ByteBuf heapBuffer(int i, int i1) { + return EmptyByteBuf.INSTANCE; + } + + @Override + public ByteBuf directBuffer() { + return EmptyByteBuf.INSTANCE; + } + + @Override + public ByteBuf directBuffer(int i) { + return EmptyByteBuf.INSTANCE; + } + + @Override + public ByteBuf directBuffer(int i, int i1) { + return EmptyByteBuf.INSTANCE; + } + + @Override + public CompositeByteBuf compositeBuffer() { + return null; + } + + @Override + public CompositeByteBuf compositeBuffer(int i) { + return null; + } + + @Override + public CompositeByteBuf compositeHeapBuffer() { + return null; + } + + @Override + public CompositeByteBuf compositeHeapBuffer(int i) { + return null; + } + + @Override + public CompositeByteBuf compositeDirectBuffer() { + return null; + } + + @Override + public CompositeByteBuf compositeDirectBuffer(int i) { + return null; + } + + @Override + public boolean isDirectBufferPooled() { + return false; + } + + @Override + public int calculateNewCapacity(int i, int i1) { + return 0; + } + } } diff --git a/src/test/jmh/io/lettuce/core/protocol/EmptyChannel.java b/src/test/jmh/io/lettuce/core/protocol/EmptyChannel.java new file mode 100644 index 0000000000..8497ab25d0 --- /dev/null +++ b/src/test/jmh/io/lettuce/core/protocol/EmptyChannel.java @@ -0,0 +1,232 @@ +package io.lettuce.core.protocol; + +import java.net.SocketAddress; + +import io.netty.buffer.ByteBufAllocator; +import 
io.netty.channel.*; +import io.netty.util.Attribute; +import io.netty.util.AttributeKey; + +/** + * @author Grzegorz Szpak + */ +class EmptyChannel implements Channel { + + private static final ChannelConfig CONFIG = new EmptyConfig(); + + @Override + public ChannelId id() { + return null; + } + + @Override + public EventLoop eventLoop() { + return null; + } + + @Override + public Channel parent() { + return null; + } + + @Override + public ChannelConfig config() { + return CONFIG; + } + + @Override + public boolean isOpen() { + return false; + } + + @Override + public boolean isRegistered() { + return false; + } + + @Override + public boolean isActive() { + return false; + } + + @Override + public ChannelMetadata metadata() { + return null; + } + + @Override + public SocketAddress localAddress() { + return null; + } + + @Override + public SocketAddress remoteAddress() { + return null; + } + + @Override + public ChannelFuture closeFuture() { + return null; + } + + @Override + public boolean isWritable() { + return false; + } + + @Override + public long bytesBeforeUnwritable() { + return 0; + } + + @Override + public long bytesBeforeWritable() { + return 0; + } + + @Override + public Unsafe unsafe() { + return null; + } + + @Override + public ChannelPipeline pipeline() { + return null; + } + + @Override + public ByteBufAllocator alloc() { + return null; + } + + @Override + public ChannelFuture bind(SocketAddress socketAddress) { + return null; + } + + @Override + public ChannelFuture connect(SocketAddress socketAddress) { + return null; + } + + @Override + public ChannelFuture connect(SocketAddress socketAddress, SocketAddress socketAddress1) { + return null; + } + + @Override + public ChannelFuture disconnect() { + return null; + } + + @Override + public ChannelFuture close() { + return null; + } + + @Override + public ChannelFuture deregister() { + return null; + } + + @Override + public ChannelFuture bind(SocketAddress socketAddress, ChannelPromise channelPromise) { + return null; + } + + @Override + public ChannelFuture connect(SocketAddress socketAddress, ChannelPromise channelPromise) { + return null; + } + + @Override + public ChannelFuture connect(SocketAddress socketAddress, SocketAddress socketAddress1, + ChannelPromise channelPromise) { + return null; + } + + @Override + public ChannelFuture disconnect(ChannelPromise channelPromise) { + return null; + } + + @Override + public ChannelFuture close(ChannelPromise channelPromise) { + return null; + } + + @Override + public ChannelFuture deregister(ChannelPromise channelPromise) { + return null; + } + + @Override + public Channel read() { + return null; + } + + @Override + public ChannelFuture write(Object o) { + return null; + } + + @Override + public ChannelFuture write(Object o, ChannelPromise channelPromise) { + return null; + } + + @Override + public Channel flush() { + return null; + } + + @Override + public ChannelFuture writeAndFlush(Object o, ChannelPromise channelPromise) { + return null; + } + + @Override + public ChannelFuture writeAndFlush(Object o) { + return null; + } + + @Override + public ChannelPromise newPromise() { + return null; + } + + @Override + public ChannelProgressivePromise newProgressivePromise() { + return null; + } + + @Override + public ChannelFuture newSucceededFuture() { + return null; + } + + @Override + public ChannelFuture newFailedFuture(Throwable throwable) { + return null; + } + + @Override + public ChannelPromise voidPromise() { + return null; + } + + @Override + public Attribute 
attr(AttributeKey attributeKey) { + return null; + } + + @Override + public boolean hasAttr(AttributeKey attributeKey) { + return false; + } + + @Override + public int compareTo(Channel o) { + return 0; + } +} diff --git a/src/test/jmh/io/lettuce/core/protocol/EmptyClientResources.java b/src/test/jmh/io/lettuce/core/protocol/EmptyClientResources.java new file mode 100644 index 0000000000..833a0aa8a1 --- /dev/null +++ b/src/test/jmh/io/lettuce/core/protocol/EmptyClientResources.java @@ -0,0 +1,148 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.protocol; + +import java.net.SocketAddress; +import java.util.Map; +import java.util.concurrent.TimeUnit; + +import io.lettuce.core.event.DefaultEventPublisherOptions; +import io.lettuce.core.event.EventBus; +import io.lettuce.core.event.EventPublisherOptions; +import io.lettuce.core.metrics.CommandLatencyCollector; +import io.lettuce.core.metrics.CommandLatencyId; +import io.lettuce.core.metrics.CommandMetrics; +import io.lettuce.core.resource.*; +import io.lettuce.core.tracing.Tracing; +import io.netty.util.Timer; +import io.netty.util.concurrent.EventExecutorGroup; +import io.netty.util.concurrent.Future; +import io.netty.util.concurrent.GlobalEventExecutor; +import io.netty.util.concurrent.SucceededFuture; + +/** + * @author Mark Paluch + */ +public class EmptyClientResources implements ClientResources { + + private static final DefaultEventPublisherOptions PUBLISHER_OPTIONS = DefaultEventPublisherOptions.disabled(); + private static final EmptyCommandLatencyCollector LATENCY_COLLECTOR = new EmptyCommandLatencyCollector(); + public static final EmptyClientResources INSTANCE = new EmptyClientResources(); + + @Override + public Future shutdown() { + return new SucceededFuture<>(GlobalEventExecutor.INSTANCE, true); + } + + @Override + public Future shutdown(long quietPeriod, long timeout, TimeUnit timeUnit) { + return new SucceededFuture<>(GlobalEventExecutor.INSTANCE, true); + } + + @Override + public Builder mutate() { + return null; + } + + @Override + public EventLoopGroupProvider eventLoopGroupProvider() { + return null; + } + + @Override + public EventExecutorGroup eventExecutorGroup() { + return null; + } + + @Override + public int ioThreadPoolSize() { + return 0; + } + + @Override + public int computationThreadPoolSize() { + return 0; + } + + @Override + public Timer timer() { + return null; + } + + @Override + public EventBus eventBus() { + return null; + } + + @Override + public EventPublisherOptions commandLatencyPublisherOptions() { + return PUBLISHER_OPTIONS; + } + + @Override + public CommandLatencyCollector commandLatencyCollector() { + return LATENCY_COLLECTOR; + } + + @Override + public DnsResolver dnsResolver() { + return null; + } + + @Override + public SocketAddressResolver socketAddressResolver() { + return null; + } + + @Override + public Delay reconnectDelay() { + return null; + } + + @Override + public NettyCustomizer 
nettyCustomizer() { + return null; + } + + @Override + public Tracing tracing() { + return Tracing.disabled(); + } + + public static class EmptyCommandLatencyCollector implements CommandLatencyCollector { + + @Override + public void shutdown() { + + } + + @Override + public Map retrieveMetrics() { + return null; + } + + @Override + public boolean isEnabled() { + return false; + } + + @Override + public void recordCommandLatency(SocketAddress local, SocketAddress remote, ProtocolKeyword commandType, + long firstResponseLatency, long completionLatency) { + + } + } +} diff --git a/src/test/jmh/io/lettuce/core/protocol/EmptyConfig.java b/src/test/jmh/io/lettuce/core/protocol/EmptyConfig.java new file mode 100644 index 0000000000..abdc1c2b45 --- /dev/null +++ b/src/test/jmh/io/lettuce/core/protocol/EmptyConfig.java @@ -0,0 +1,142 @@ +package io.lettuce.core.protocol; + +import java.util.Map; + +import io.netty.buffer.ByteBufAllocator; +import io.netty.channel.*; + +/** + * @author Grzegorz Szpak + */ +class EmptyConfig implements ChannelConfig { + + @Override + public Map, Object> getOptions() { + return null; + } + + @Override + public boolean setOptions(Map, ?> map) { + return false; + } + + @Override + public T getOption(ChannelOption channelOption) { + return null; + } + + @Override + public boolean setOption(ChannelOption channelOption, T t) { + return false; + } + + @Override + public int getConnectTimeoutMillis() { + return 0; + } + + @Override + public ChannelConfig setConnectTimeoutMillis(int i) { + return null; + } + + @Override + public int getMaxMessagesPerRead() { + return 0; + } + + @Override + public ChannelConfig setMaxMessagesPerRead(int i) { + return null; + } + + @Override + public int getWriteSpinCount() { + return 0; + } + + @Override + public ChannelConfig setWriteSpinCount(int i) { + return null; + } + + @Override + public ByteBufAllocator getAllocator() { + return null; + } + + @Override + public ChannelConfig setAllocator(ByteBufAllocator byteBufAllocator) { + return null; + } + + @Override + public T getRecvByteBufAllocator() { + return null; + } + + @Override + public ChannelConfig setRecvByteBufAllocator(RecvByteBufAllocator recvByteBufAllocator) { + return null; + } + + @Override + public boolean isAutoRead() { + return false; + } + + @Override + public ChannelConfig setAutoRead(boolean b) { + return null; + } + + @Override + public boolean isAutoClose() { + return false; + } + + @Override + public ChannelConfig setAutoClose(boolean b) { + return null; + } + + @Override + public int getWriteBufferHighWaterMark() { + return 0; + } + + @Override + public ChannelConfig setWriteBufferHighWaterMark(int i) { + return null; + } + + @Override + public int getWriteBufferLowWaterMark() { + return 0; + } + + @Override + public ChannelConfig setWriteBufferLowWaterMark(int i) { + return null; + } + + @Override + public MessageSizeEstimator getMessageSizeEstimator() { + return null; + } + + @Override + public ChannelConfig setMessageSizeEstimator(MessageSizeEstimator messageSizeEstimator) { + return null; + } + + @Override + public WriteBufferWaterMark getWriteBufferWaterMark() { + return null; + } + + @Override + public ChannelConfig setWriteBufferWaterMark(WriteBufferWaterMark writeBufferWaterMark) { + return null; + } +} diff --git a/src/test/jmh/com/lambdaworks/redis/protocol/EmptyContext.java b/src/test/jmh/io/lettuce/core/protocol/EmptyContext.java similarity index 84% rename from src/test/jmh/com/lambdaworks/redis/protocol/EmptyContext.java rename to 
src/test/jmh/io/lettuce/core/protocol/EmptyContext.java index 51131a8fe2..142604fdcf 100644 --- a/src/test/jmh/com/lambdaworks/redis/protocol/EmptyContext.java +++ b/src/test/jmh/io/lettuce/core/protocol/EmptyContext.java @@ -1,22 +1,40 @@ -package com.lambdaworks.redis.protocol; +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.protocol; + +import java.net.SocketAddress; import io.netty.buffer.ByteBuf; import io.netty.buffer.ByteBufAllocator; +import io.netty.buffer.PooledByteBufAllocator; import io.netty.channel.*; import io.netty.util.Attribute; import io.netty.util.AttributeKey; import io.netty.util.concurrent.EventExecutor; -import java.net.SocketAddress; - /** * @author Mark Paluch */ class EmptyContext implements ChannelHandlerContext { + private static final Channel CHANNEL = new EmptyChannel(); + @Override public Channel channel() { - return null; + return CHANNEL; } @Override @@ -193,7 +211,7 @@ public ChannelPipeline pipeline() { @Override public ByteBufAllocator alloc() { - return null; + return PooledByteBufAllocator.DEFAULT; } @Override diff --git a/src/test/jmh/io/lettuce/core/protocol/EmptyFuture.java b/src/test/jmh/io/lettuce/core/protocol/EmptyFuture.java new file mode 100644 index 0000000000..3587f2177e --- /dev/null +++ b/src/test/jmh/io/lettuce/core/protocol/EmptyFuture.java @@ -0,0 +1,144 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.protocol; + +import java.util.concurrent.TimeUnit; + +import io.netty.channel.Channel; +import io.netty.channel.ChannelFuture; +import io.netty.util.concurrent.Future; +import io.netty.util.concurrent.GenericFutureListener; + +/** + * @author Mark Paluch + */ +class EmptyFuture implements ChannelFuture { + + @Override + public Channel channel() { + return null; + } + + @Override + public ChannelFuture addListener(GenericFutureListener> listener) { + return null; + } + + @Override + public ChannelFuture addListeners(GenericFutureListener>... listeners) { + return null; + } + + @Override + public ChannelFuture removeListener(GenericFutureListener> listener) { + return null; + } + + @Override + public ChannelFuture removeListeners(GenericFutureListener>... 
listeners) { + return null; + } + + @Override + public ChannelFuture sync() { + return null; + } + + @Override + public ChannelFuture syncUninterruptibly() { + return null; + } + + @Override + public ChannelFuture await() { + return null; + } + + @Override + public ChannelFuture awaitUninterruptibly() { + return null; + } + + @Override + public boolean isVoid() { + return false; + } + + @Override + public boolean isSuccess() { + return false; + } + + @Override + public boolean isCancellable() { + return false; + } + + @Override + public Throwable cause() { + return null; + } + + @Override + public boolean await(long timeout, TimeUnit unit) { + return false; + } + + @Override + public boolean await(long timeoutMillis) { + return false; + } + + @Override + public boolean awaitUninterruptibly(long timeout, TimeUnit unit) { + return false; + } + + @Override + public boolean awaitUninterruptibly(long timeoutMillis) { + return false; + } + + @Override + public Void getNow() { + return null; + } + + @Override + public boolean cancel(boolean mayInterruptIfRunning) { + return false; + } + + @Override + public boolean isCancelled() { + return false; + } + + @Override + public boolean isDone() { + return false; + } + + @Override + public Void get() { + return null; + } + + @Override + public Void get(long timeout, TimeUnit unit) { + return null; + } +} diff --git a/src/test/jmh/io/lettuce/core/protocol/EmptyPromise.java b/src/test/jmh/io/lettuce/core/protocol/EmptyPromise.java new file mode 100644 index 0000000000..5435ef84ae --- /dev/null +++ b/src/test/jmh/io/lettuce/core/protocol/EmptyPromise.java @@ -0,0 +1,192 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.protocol; + +import java.util.concurrent.TimeUnit; + +import io.netty.channel.Channel; +import io.netty.channel.ChannelPromise; +import io.netty.util.concurrent.Future; +import io.netty.util.concurrent.GenericFutureListener; + +/** + * @author Mark Paluch + */ +class EmptyPromise implements ChannelPromise { + + @Override + public Channel channel() { + return null; + } + + @Override + public ChannelPromise setSuccess(Void result) { + return null; + } + + @Override + public boolean trySuccess(Void result) { + return false; + } + + @Override + public ChannelPromise setSuccess() { + return null; + } + + @Override + public boolean trySuccess() { + return false; + } + + @Override + public ChannelPromise setFailure(Throwable cause) { + return null; + } + + @Override + public boolean tryFailure(Throwable cause) { + return false; + } + + @Override + public boolean setUncancellable() { + return false; + } + + @Override + public boolean isSuccess() { + return true; + } + + @Override + public boolean isCancellable() { + return false; + } + + @Override + public Throwable cause() { + return null; + } + + @Override + public ChannelPromise addListener(GenericFutureListener> listener) { + + try { + ((GenericFutureListener) listener).operationComplete(this); + } catch (Exception e) { + throw new RuntimeException(e); + } + return null; + } + + @Override + public ChannelPromise addListeners(GenericFutureListener>... listeners) { +for (GenericFutureListener> listener : listeners) { + addListener(listener); + } return this; + } + + @Override + public ChannelPromise removeListener(GenericFutureListener> listener) { + return null; + } + + @Override + public ChannelPromise removeListeners(GenericFutureListener>... listeners) { + return null; + } + + @Override + public ChannelPromise sync() { + return null; + } + + @Override + public ChannelPromise syncUninterruptibly() { + return null; + } + + @Override + public ChannelPromise await() { + return null; + } + + @Override + public ChannelPromise awaitUninterruptibly() { + return null; + } + + @Override + public boolean isVoid() { + return false; + } + + @Override + public ChannelPromise unvoid() { + return null; + } + + @Override + public boolean await(long timeout, TimeUnit unit) { + return false; + } + + @Override + public boolean await(long timeoutMillis) { + return false; + } + + @Override + public boolean awaitUninterruptibly(long timeout, TimeUnit unit) { + return false; + } + + @Override + public boolean awaitUninterruptibly(long timeoutMillis) { + return false; + } + + @Override + public Void getNow() { + return null; + } + + @Override + public boolean cancel(boolean mayInterruptIfRunning) { + return false; + } + + @Override + public boolean isCancelled() { + return false; + } + + @Override + public boolean isDone() { + return false; + } + + @Override + public Void get() { + return null; + } + + @Override + public Void get(long timeout, TimeUnit unit) { + return null; + } +} diff --git a/src/test/jmh/io/lettuce/core/protocol/JmhMain.java b/src/test/jmh/io/lettuce/core/protocol/JmhMain.java new file mode 100644 index 0000000000..66d4104142 --- /dev/null +++ b/src/test/jmh/io/lettuce/core/protocol/JmhMain.java @@ -0,0 +1,96 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.protocol; + +import java.util.concurrent.TimeUnit; + +import org.openjdk.jmh.annotations.Mode; +import org.openjdk.jmh.runner.Runner; +import org.openjdk.jmh.runner.RunnerException; +import org.openjdk.jmh.runner.options.ChainedOptionsBuilder; +import org.openjdk.jmh.runner.options.OptionsBuilder; +import org.openjdk.jmh.runner.options.TimeValue; + +/** + * Manual JMH Test Launcher. + * + * @author Mark Paluch + */ +public class JmhMain { + + public static void main(String... args) throws RunnerException { + + // run selectively + // runCommandBenchmark(); + runCommandHandlerBenchmark(); + // runRedisEndpointBenchmark(); + // runRedisStateMachineBenchmark(); + // runCommandEncoderBenchmark(); + + // or all + // runBenchmarks(); + } + + private static void runBenchmarks() throws RunnerException { + new Runner(prepareOptions().mode(Mode.AverageTime).timeUnit(TimeUnit.NANOSECONDS).build()).run(); + } + + private static void runCommandBenchmark() throws RunnerException { + + new Runner(prepareOptions().mode(Mode.AverageTime).timeUnit(TimeUnit.NANOSECONDS).include(".*CommandBenchmark.*") + .build()).run(); + + new Runner(prepareOptions().mode(Mode.Throughput).timeUnit(TimeUnit.SECONDS).include(".*CommandBenchmark.*").build()) + .run(); + } + + private static void runCommandHandlerBenchmark() throws RunnerException { + + new Runner(prepareOptions().mode(Mode.AverageTime).timeUnit(TimeUnit.NANOSECONDS) + .include(".*CommandHandlerBenchmark.*").build()).run(); + // new + // Runner(prepareOptions().mode(Mode.Throughput).timeUnit(TimeUnit.SECONDS).include(".*CommandHandlerBenchmark.*").build()).run(); + } + + private static void runRedisEndpointBenchmark() throws RunnerException { + + new Runner(prepareOptions().mode(Mode.AverageTime).timeUnit(TimeUnit.NANOSECONDS).include(".*RedisEndpointBenchmark.*") + .build()).run(); + // new + // Runner(prepareOptions().mode(Mode.Throughput).timeUnit(TimeUnit.SECONDS).include(".*CommandHandlerBenchmark.*").build()).run(); + } + + private static void runCommandEncoderBenchmark() throws RunnerException { + + new Runner(prepareOptions().mode(Mode.AverageTime).timeUnit(TimeUnit.NANOSECONDS) + .include(".*CommandEncoderBenchmark.*").build()).run(); + // new + // Runner(prepareOptions().mode(Mode.Throughput).timeUnit(TimeUnit.SECONDS).include(".*CommandHandlerBenchmark.*").build()).run(); + } + + private static void runRedisStateMachineBenchmark() throws RunnerException { + + new Runner(prepareOptions().mode(Mode.AverageTime).timeUnit(TimeUnit.NANOSECONDS) + .include(".*RedisStateMachineBenchmark.*").build()).run(); + // new + // Runner(prepareOptions().mode(Mode.Throughput).timeUnit(TimeUnit.SECONDS).include(".*CommandHandlerBenchmark.*").build()).run(); + } + + private static ChainedOptionsBuilder prepareOptions() { + return new OptionsBuilder().forks(1).warmupIterations(5).threads(1).measurementIterations(5) + .timeout(TimeValue.seconds(2)); + } +} diff --git a/src/test/jmh/io/lettuce/core/protocol/RedisEndpointBenchmark.java b/src/test/jmh/io/lettuce/core/protocol/RedisEndpointBenchmark.java new file mode 100644 
index 0000000000..c1a3e4a0cc --- /dev/null +++ b/src/test/jmh/io/lettuce/core/protocol/RedisEndpointBenchmark.java @@ -0,0 +1,101 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.protocol; + +import org.openjdk.jmh.annotations.*; + +import io.lettuce.core.ClientOptions; +import io.lettuce.core.EmptyStatefulRedisConnection; +import io.lettuce.core.codec.ByteArrayCodec; +import io.lettuce.core.output.ValueOutput; +import io.netty.channel.ChannelFuture; +import io.netty.channel.ChannelPromise; +import io.netty.channel.embedded.EmbeddedChannel; + +/** + * Benchmark for {@link DefaultEndpoint}. + *

<p> + * Test cases: + * <ul> + * <li>user command writes</li> + * </ul>
+ * + * @author Mark Paluch + */ +@State(Scope.Benchmark) +public class RedisEndpointBenchmark { + + private static final ByteArrayCodec CODEC = new ByteArrayCodec(); + private static final ClientOptions CLIENT_OPTIONS = ClientOptions.create(); + private static final byte[] KEY = "key".getBytes(); + private static final ChannelFuture EMPTY = new EmptyFuture(); + + private DefaultEndpoint defaultEndpoint; + private Command command; + + @Setup + public void setup() { + + defaultEndpoint = new DefaultEndpoint(CLIENT_OPTIONS, EmptyClientResources.INSTANCE); + command = new Command(CommandType.GET, new ValueOutput<>(CODEC), new CommandArgs(CODEC).addKey(KEY)); + + defaultEndpoint.setConnectionFacade(EmptyStatefulRedisConnection.INSTANCE); + defaultEndpoint.notifyChannelActive(new MyLocalChannel()); + } + + @TearDown(Level.Iteration) + public void tearDown() { + defaultEndpoint.reset(); + } + + @Benchmark + public void measureUserWrite() { + defaultEndpoint.write(command); + } + + private static final class MyLocalChannel extends EmbeddedChannel { + @Override + public boolean isActive() { + return true; + } + + @Override + public boolean isOpen() { + return true; + } + + @Override + public ChannelFuture write(Object msg) { + return EMPTY; + } + + @Override + public ChannelFuture write(Object msg, ChannelPromise promise) { + return promise; + } + + @Override + public ChannelFuture writeAndFlush(Object msg) { + return EMPTY; + } + + @Override + public ChannelFuture writeAndFlush(Object msg, ChannelPromise promise) { + return promise; + } + } + +} diff --git a/src/test/jmh/io/lettuce/core/protocol/RedisStateMachineBenchmark.java b/src/test/jmh/io/lettuce/core/protocol/RedisStateMachineBenchmark.java new file mode 100644 index 0000000000..ce24710201 --- /dev/null +++ b/src/test/jmh/io/lettuce/core/protocol/RedisStateMachineBenchmark.java @@ -0,0 +1,97 @@ +/* + * Copyright 2011-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.protocol; + +import java.nio.ByteBuffer; +import java.util.List; + +import org.openjdk.jmh.annotations.*; + +import io.lettuce.core.codec.ByteArrayCodec; +import io.lettuce.core.output.ArrayOutput; +import io.netty.buffer.ByteBuf; +import io.netty.buffer.ByteBufAllocator; +import io.netty.buffer.PooledByteBufAllocator; + +/** + * Benchmark for {@link RedisStateMachine}.
+ * + * @author Mark Paluch + */ +@State(Scope.Benchmark) +public class RedisStateMachineBenchmark { + + private static final ByteArrayCodec BYTE_ARRAY_CODEC = new ByteArrayCodec(); + + private static final Command> byteArrayCommand = new Command<>(CommandType.GET, + new ArrayOutput(BYTE_ARRAY_CODEC) { + + @Override + public void set(ByteBuffer bytes) { + + } + + @Override + public void multi(int count) { + + } + + @Override + public void complete(int depth) { + } + + @Override + public void set(long integer) { + } + }, new CommandArgs(BYTE_ARRAY_CODEC).addKey(new byte[] { 1, 2, 3, 4 })); + + private ByteBuf masterBuffer; + + private final RedisStateMachine stateMachine = new RedisStateMachine(ByteBufAllocator.DEFAULT); + private final byte[] payload = ("*3\r\n" + // + "$4\r\n" + // + "LLEN\r\n" + // + "$6\r\n" + // + "mylist\r\n" + // + "+QUEUED\r\n" + // + ":12\r\n").getBytes(); + + @Setup(Level.Trial) + public void setup() { + masterBuffer = PooledByteBufAllocator.DEFAULT.ioBuffer(32); + masterBuffer.writeBytes(payload); + } + + @TearDown + public void tearDown() { + masterBuffer.release(); + } + + @Benchmark + public void measureDecode() { + stateMachine.decode(masterBuffer, byteArrayCommand, byteArrayCommand.getOutput()); + masterBuffer.readerIndex(0); + } + + public static void main(String[] args) { + + RedisStateMachineBenchmark b = new RedisStateMachineBenchmark(); + b.setup(); + while (true) { + b.measureDecode(); + } + } +} diff --git a/src/test/jmh/io/lettuce/core/support/AsyncConnectionPoolBenchmark.java b/src/test/jmh/io/lettuce/core/support/AsyncConnectionPoolBenchmark.java new file mode 100644 index 0000000000..6c53abddbb --- /dev/null +++ b/src/test/jmh/io/lettuce/core/support/AsyncConnectionPoolBenchmark.java @@ -0,0 +1,66 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.support; + +import java.util.concurrent.CompletableFuture; + +import org.openjdk.jmh.annotations.*; + +import io.lettuce.core.EmptyRedisChannelWriter; +import io.lettuce.core.EmptyStatefulRedisConnection; +import io.lettuce.core.api.StatefulRedisConnection; + +/** + * @author Mark Paluch + */ +@State(Scope.Benchmark) +public class AsyncConnectionPoolBenchmark { + + private AsyncPool> pool; + private StatefulRedisConnection[] holder = new StatefulRedisConnection[20]; + + @Setup + public void setup() { + + BoundedPoolConfig config = BoundedPoolConfig.builder().minIdle(0).maxIdle(20).maxTotal(20).build(); + + pool = AsyncConnectionPoolSupport.createBoundedObjectPool( + () -> CompletableFuture.completedFuture(new EmptyStatefulRedisConnection(EmptyRedisChannelWriter.INSTANCE)), + config); + } + + @TearDown(Level.Iteration) + public void tearDown() { + pool.clear(); + } + + @Benchmark + public void singleConnection() { + pool.release(pool.acquire().join()).join(); + } + + @Benchmark + public void twentyConnections() { + + for (int i = 0; i < holder.length; i++) { + holder[i] = pool.acquire().join(); + } + + for (int i = 0; i < holder.length; i++) { + pool.release(holder[i]).join(); + } + } +} diff --git a/src/test/jmh/io/lettuce/core/support/GenericConnectionPoolBenchmark.java b/src/test/jmh/io/lettuce/core/support/GenericConnectionPoolBenchmark.java new file mode 100644 index 0000000000..56c31c8fde --- /dev/null +++ b/src/test/jmh/io/lettuce/core/support/GenericConnectionPoolBenchmark.java @@ -0,0 +1,68 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.lettuce.core.support; + +import org.apache.commons.pool2.impl.GenericObjectPool; +import org.apache.commons.pool2.impl.GenericObjectPoolConfig; +import org.openjdk.jmh.annotations.*; + +import io.lettuce.core.EmptyRedisChannelWriter; +import io.lettuce.core.EmptyStatefulRedisConnection; +import io.lettuce.core.api.StatefulRedisConnection; + +/** + * @author Mark Paluch + */ +@State(Scope.Benchmark) +public class GenericConnectionPoolBenchmark { + + private GenericObjectPool> pool; + private StatefulRedisConnection[] holder = new StatefulRedisConnection[20]; + + @Setup + public void setup() { + + GenericObjectPoolConfig config = new GenericObjectPoolConfig(); + config.setMinIdle(0); + config.setMaxIdle(20); + config.setMaxTotal(20); + + pool = ConnectionPoolSupport.createGenericObjectPool(() -> new EmptyStatefulRedisConnection( + EmptyRedisChannelWriter.INSTANCE), config); + } + + @TearDown(Level.Iteration) + public void tearDown() { + pool.clear(); + } + + @Benchmark + public void singleConnection() throws Exception { + pool.returnObject(pool.borrowObject()); + } + + @Benchmark + public void twentyConnections() throws Exception { + + for (int i = 0; i < holder.length; i++) { + holder[i] = pool.borrowObject(); + } + + for (int i = 0; i < holder.length; i++) { + pool.returnObject(holder[i]); + } + } +} diff --git a/src/test/jmh/io/lettuce/core/support/JmhMain.java b/src/test/jmh/io/lettuce/core/support/JmhMain.java new file mode 100644 index 0000000000..4b80bd8a50 --- /dev/null +++ b/src/test/jmh/io/lettuce/core/support/JmhMain.java @@ -0,0 +1,64 @@ +/* + * Copyright 2017-2020 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.lettuce.core.support; + +import java.util.concurrent.TimeUnit; + +import org.openjdk.jmh.annotations.Mode; +import org.openjdk.jmh.runner.Runner; +import org.openjdk.jmh.runner.RunnerException; +import org.openjdk.jmh.runner.options.ChainedOptionsBuilder; +import org.openjdk.jmh.runner.options.OptionsBuilder; +import org.openjdk.jmh.runner.options.TimeValue; + +/** + * Manual JMH Test Launcher. + * + * @author Mark Paluch + */ +public class JmhMain { + + public static void main(String... 
args) throws RunnerException { + + // run selectively + // runCommandBenchmark(); + // runGenericConnectionPoolBenchmark(); + runAsyncConnectionPoolBenchmark(); + } + + private static void runGenericConnectionPoolBenchmark() throws RunnerException { + + new Runner(prepareOptions().mode(Mode.AverageTime).timeUnit(TimeUnit.NANOSECONDS) + .include(".*GenericConnectionPoolBenchmark.*").build()).run(); + + new Runner(prepareOptions().mode(Mode.Throughput).timeUnit(TimeUnit.SECONDS) + .include(".*GenericConnectionPoolBenchmark.*").build()).run(); + } + + private static void runAsyncConnectionPoolBenchmark() throws RunnerException { + + new Runner(prepareOptions().mode(Mode.AverageTime).timeUnit(TimeUnit.NANOSECONDS) + .include(".*AsyncConnectionPoolBenchmark.*").build()).run(); + + new Runner(prepareOptions().mode(Mode.Throughput).timeUnit(TimeUnit.SECONDS) + .include(".*AsyncConnectionPoolBenchmark.*").build()).run(); + } + + private static ChainedOptionsBuilder prepareOptions() { + return new OptionsBuilder().forks(1).warmupIterations(5).threads(1).measurementIterations(5) + .timeout(TimeValue.seconds(2)); + } +} diff --git a/src/test/resources/com/lambdaworks/examples/SpringTest-context.xml b/src/test/resources/com/lambdaworks/examples/SpringTest-context.xml deleted file mode 100644 index 24e5b3f1ea..0000000000 --- a/src/test/resources/com/lambdaworks/examples/SpringTest-context.xml +++ /dev/null @@ -1,12 +0,0 @@ - - - - - - - - - - diff --git a/src/test/resources/com/lambdaworks/redis/support/SpringTest-context.xml b/src/test/resources/com/lambdaworks/redis/support/SpringTest-context.xml deleted file mode 100644 index f5e6fa68c4..0000000000 --- a/src/test/resources/com/lambdaworks/redis/support/SpringTest-context.xml +++ /dev/null @@ -1,37 +0,0 @@ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - diff --git a/src/test/resources/io/lettuce/core/support/SpringIntegrationTests-context.xml b/src/test/resources/io/lettuce/core/support/SpringIntegrationTests-context.xml new file mode 100644 index 0000000000..9264b08065 --- /dev/null +++ b/src/test/resources/io/lettuce/core/support/SpringIntegrationTests-context.xml @@ -0,0 +1,37 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/src/test/resources/io/lettuce/examples/SpringTest-context.xml b/src/test/resources/io/lettuce/examples/SpringTest-context.xml new file mode 100644 index 0000000000..eab55942bf --- /dev/null +++ b/src/test/resources/io/lettuce/examples/SpringTest-context.xml @@ -0,0 +1,12 @@ + + + + + + + + + + diff --git a/src/test/resources/log4j2-test.xml b/src/test/resources/log4j2-test.xml index 672acaaa90..7969c32138 100644 --- a/src/test/resources/log4j2-test.xml +++ b/src/test/resources/log4j2-test.xml @@ -3,7 +3,6 @@ - @@ -12,12 +11,8 @@ - - - - - - + + diff --git a/src/test/resources/spring-test.xml b/src/test/resources/spring-test.xml index 4804d43392..9e2f82bf67 100644 --- a/src/test/resources/spring-test.xml +++ b/src/test/resources/spring-test.xml @@ -2,7 +2,7 @@ xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd"> - + @@ -12,4 +12,4 @@ - \ No newline at end of file +